AI will only be accurate if the investigated system is similar to systems that have already been simulated. Therefore, it does not constitute a first-principles method.
The emergence of AI models capable of predicting quantum chemical properties without solving the fundamental equations of quantum mechanics presents a profound challenge to our traditional understanding of "first-principles" computation. This development forces us to re-examine three core philosophical questions in theoretical chemistry:
The Nature of First-Principles Knowledge: Traditional density functional theory derives its authority from being grounded in the fundamental laws of quantum mechanics. The Kohn-Sham equations, while approximate, represent a principled approach to solving the many-body Schrödinger equation. When AI bypasses these equations entirely and achieves comparable results through pattern recognition in existing data, it calls into question whether we should privilege equation-solving approaches as more "first-principles" than data-driven ones.
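For concreteness, the "equations" referred to here are the Kohn-Sham equations, which in atomic units take the standard textbook form

\left[-\tfrac{1}{2}\nabla^2 + v_\mathrm{ext}(\mathbf{r}) + \int \frac{n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}\mathbf{r}' + v_\mathrm{xc}[n](\mathbf{r})\right]\phi_i(\mathbf{r}) = \varepsilon_i\,\phi_i(\mathbf{r}), \qquad n(\mathbf{r}) = \sum_i^{\mathrm{occ}} |\phi_i(\mathbf{r})|^2,

where the exchange-correlation potential v_xc is the approximate ingredient; everything else follows from the fundamental laws.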
The Relationship Between Prediction and Understanding: There exists a crucial distinction between predicting molecular properties and understanding why those properties emerge. Conventional DFT provides interpretable intermediate results - electron densities, orbital interactions, and energy decompositions - that offer physical insight. AI models typically lack this transparency, raising important questions about whether predictive accuracy alone constitutes scientific understanding, or if explanatory power remains an essential component of theoretical chemistry.
The Changing Meaning of Theoretical Rigor: The theoretical chemistry community has historically valued methods that can trace their lineage back to fundamental physical laws. AI's success demonstrates that equally accurate results can be obtained through very different epistemological pathways. This suggests we may need to expand our definition of theoretical rigor to include approaches that are demonstrably reliable, even if their connection to first principles is mediated through data rather than direct equation-solving.
These philosophical considerations have immediate practical consequences for how we conduct and evaluate computational research. The field appears to be evolving toward a hybrid paradigm (a code sketch follows the list below) where:
- AI handles rapid property prediction and preliminary screening
- Traditional methods provide validation and mechanistic insight
- The combination pushes forward both predictive accuracy and fundamental understanding
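A minimal sketch of such a hybrid workflow is given below. The functions `ml_predict_energy` and `run_dft` are hypothetical placeholders, not references to any specific package: in practice they would wrap a trained surrogate model and a call to a DFT code, respectively.

```python
# Sketch of a hybrid screening pipeline: cheap ML ranking, expensive DFT validation.
# ml_predict_energy and run_dft are illustrative stubs, not real library calls.

from typing import List, Tuple


def ml_predict_energy(structure: str) -> float:
    """Fast, approximate prediction from a trained surrogate (stub)."""
    # Placeholder heuristic instead of a real ML inference.
    return float(len(structure))


def run_dft(structure: str) -> float:
    """Expensive first-principles validation (stub for a real DFT calculation)."""
    # Placeholder: a real pipeline would launch a Kohn-Sham calculation here.
    return float(len(structure)) * 1.01


def hybrid_screen(candidates: List[str], top_k: int = 3) -> List[Tuple[str, float]]:
    """Rank all candidates with the ML surrogate, then validate only the best with DFT."""
    ranked = sorted(candidates, key=ml_predict_energy)
    shortlist = ranked[:top_k]
    return [(s, run_dft(s)) for s in shortlist]


if __name__ == "__main__":
    pool = ["H2O", "NH3", "CH4", "C6H6", "CO2"]
    for structure, energy in hybrid_screen(pool, top_k=2):
        print(structure, energy)
```

The design point is simply that the ML step narrows the search space, while the first-principles step retains the final word on accuracy and mechanism.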
This transition mirrors broader shifts in scientific methodology, where machine learning is supplementing (though not yet replacing) conventional theoretical approaches across multiple disciplines. The ultimate philosophical resolution may lie in recognizing that "first-principles" need not refer exclusively to equation-solving approaches, but rather to any method whose reliability is systematically verifiable and whose limitations are well-understood - whether that verification comes from mathematical derivation or rigorous empirical validation.
AI does not challenge "the nature of first-principles knowledge". DFT remains a first-principles method, while AI, as an aggregator of first-principles results, will never be one. In this respect it is no different from a classical fit with parametrization.
Nor does it change "the relationship between prediction and understanding". We have had parametrized models for years; an AI may produce more accurate results than a given parametrization, but in the end it adds nothing fundamentally new here either.
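To make the analogy concrete, the sketch below fits the same synthetic reference data (a stand-in for, e.g., DFT reference energies; the data and models are purely illustrative) once with a classical polynomial parametrization and once with a kernel-ridge regressor. Both are interpolators trained on reference calculations, and neither is reliable far outside the region covered by that data.

```python
# Illustrative comparison: classical parametrized fit vs. an ML-style regressor,
# both trained on the same synthetic "reference" data.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference calculations: property y as a function of descriptor x.
x_ref = np.linspace(0.0, 1.0, 20)
y_ref = np.sin(2.0 * np.pi * x_ref) + 0.05 * rng.normal(size=x_ref.size)

# Classical parametrization: a low-order polynomial with fitted coefficients.
poly_coeffs = np.polyfit(x_ref, y_ref, deg=5)

# "ML" counterpart: kernel ridge regression with a Gaussian kernel --
# many more effective parameters, but still a fit to the same data.
def gaussian_kernel(a, b, width=0.1):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width ** 2))

K = gaussian_kernel(x_ref, x_ref)
alpha = np.linalg.solve(K + 1e-6 * np.eye(x_ref.size), y_ref)

# Both models predict well near the reference data; neither is first-principles.
x_new = np.array([0.33])
print(np.polyval(poly_coeffs, x_new))
print(gaussian_kernel(x_new, x_ref) @ alpha)
```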
Regarding "theoretical rigor": the requirement of reproducibility for a method have not changed and if an AI can't deliver that, it's simply bad science (or not science at all). In this context an AI indeed deviates from a parametrization because the parametrization is reproducible while that is often not the case for AI.