Complexity Reduction, Explainability, and Interpretability (KEI)
In Search of Explainable and Interpretable Machine Learning Through Philosophy and Physics
Machine learning (ML) algorithms increasingly permeate our daily lives and the public sphere. They make predictions, but why they arrive at certain decisions rather than others often remains difficult to understand; in a sense, they are “opaque.” In our project, we aim to understand how this opacity arises and how it might be retroactively resolved. To this end, we intend to interpret the nature of the (implicit) abstractions generated by ML itself, drawing on insights from physics and other theories of complexity. Our working hypothesis is that the complexity of ML and the difficulty of understanding certain components of the learning process together give rise to the problem of opacity. In this sense, a solution requires not simply “more understanding” or “less complexity,” but a meaningful reduction of complexity: adequate abstractions and non-trivial simplifications that ensure a well-founded approach to understanding. In our project, we will develop tools to analyze the complexity of ML algorithms in new ways and to identify meaningful reductions from the perspective of many-body physics and philosophy.