Development of a theoretical (i.e., mathematical) model from experimental/simulation results: relevant papers, links, and textbooks would be appreciated.
In full agreement with all the others who answered the question:
"How can one develop a theoretical (mathematical) model from experimental / simulation results?"
I also say that when we study a new phenomenon (and not only then, but also when we want to obtain new information about a known phenomenon, possibly with higher-level applications than existing ones), it is essential to start from the experimental data. In addition, any existing theoretical models should be taken into account (if only to avoid reinventing the wheel...).
The experimental data can be harnessed through statistical modeling (as Bharat Soni shows) or dimensional analysis (the Buckingham theorem, as Sanjiv Sharma shows). After validating these primary models (which are often very useful in important applications such as driving and optimizing dynamic processes), one can move on to building theoretical models based on the laws of classical physics. These models should be able to use all the input and control data of the process and to predict the output parameters of the system (the modeled process). In addition, such models should give additional information about the phenomenon: new explanations, and new (possibly optimal) parametric combinations that improve the process qualitatively and quantitatively. I also do not exclude theoretical models that show that obtaining superior performance is impossible, or that the modeled process, over a long period of operation, leads to irreversible negative effects for certain entities.
For example, in the strength of materials, the calculation formulas were for centuries purely empirical. Yet with these elementary models, engineers (more art than calculation?) built gigantic bridges, cathedrals, temples, etc. When classical mechanics and continuum mechanics were established as theories, the first improvements and optimizations appeared. Spectacular improvements and optimizations emerged with the use of computers and numerical analysis (combined with advanced statistics, stochastic models, etc.). But at the same time there were spectacular accidents (broken suspension bridges, collapsed domes, railway and aviation accidents).
However, experience still dominates the world of high engineering performance. After hundreds of years of theoretical development, high-performance environments use Reverse Structural Analysis: loads are measured (not given) and introduced into the systems of equations, the displacements are calculated, and then the problem is solved again by loading the system with the resulting displacements, in order to determine physical characteristics of interest at inaccessible points, etc. Isn't it great art to have estimated loads in engineering for centuries, even millennia?
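The idea can be sketched on a toy problem. The following is a minimal illustration, with made-up stiffness and load values, of measuring loads, solving for displacements, and then recovering quantities (here, spring forces) at points that cannot be measured directly:

```python
import numpy as np

# Toy two-spring chain, fixed at one end; k1, k2 are illustrative stiffnesses
k1, k2 = 2.0e3, 1.0e3   # N/m
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# "Measured" loads at the two free nodes (hypothetical values)
f = np.array([10.0, 5.0])   # N

# Solve K u = f for displacements, then recover the internal spring forces
u = np.linalg.solve(K, f)
spring_forces = np.array([k1 * u[0], k2 * (u[1] - u[0])])
# spring 1 carries the total load (15 N), spring 2 carries 5 N
```

Real reverse structural analysis works on far larger finite-element systems, but the loop is the same: measured loads in, displacements out, then back-substitution to reach inaccessible quantities.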
And, to make this concrete, I add a small example I have been dealing with for the last few months: the compression and compaction of granular materials and powders. The theoretical stage (with experimental origins) at the time we started:
Cardei P., Gageanu I., (2017), A Critical Analysis of Empirical Formulas Describing the Phenomenon of Compaction of The Powders, J. Modern Technology & Engineering, vol. 2, No. 1, pp. 1-20;
Gageanu I., Cardei P., Matache M., Voicu Gh., (2019), Description of the experimental data of the pelleting process using elementary statistics, Proceedings of the Sixth International Conference "Research People and Actual Tasks on Multidisciplinary Sciences", June 12-15, 2019, Lozenec, Bulgaria, pp. 437-445;
and higher-order models:
Cardei P., A mathematical model for a process of compacting granular materials
- How much more complicated can theory and science become, while the components from which they are built remain united and do not yield, so that science can still be mastered by the human mind?
- How long will humanity still consider it rational to support research in which each result opens another 3-5 or more necessary research directions (the more we know, the less we know...)? Some people might take us for dishonest, especially as scientific texts become increasingly inaccessible to a growing part of humanity...!
- How many people can understand the efforts required to obtain top results?
- The inflation of the specialized literature reaches rates that mimic the growth of the Earth's population... much more is written than is read!
For the first type, you need insight into the governing equations of the phenomenon, and sometimes, for simplicity, you lump a complex issue into a single parameter. You have to know continuum mechanics and statistical mechanics; these models are based on PDEs or integral equations...
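As a minimal sketch of what a governing-equation (PDE-based) model looks like in practice, here is an explicit finite-difference step for the 1D diffusion equation, with made-up parameter values chosen only to satisfy the stability condition:

```python
import numpy as np

# Explicit finite-difference sketch of the 1D diffusion equation
#   du/dt = D * d2u/dx2   (a typical physics-based PDE model)
D, dx, dt = 1.0, 0.1, 0.004   # dt chosen so D*dt/dx**2 <= 0.5 (stability)
x = np.arange(0.0, 1.0 + dx, dx)
u = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial bump centered in the domain

for _ in range(100):
    # central second difference in space, forward step in time
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                 # fixed boundary values
```

Unlike a data-driven fit, every quantity here (the diffusivity `D`, the boundary conditions) has a direct physical meaning and can be changed on physical grounds.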
For the second type, you use regression analysis, time series analysis, AI, neural networks, machine learning. These models are data-dependent, and you cannot change their parameters based on physics. Any "design of experiments" book can teach you the basics of these types of models.
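The simplest instance of this second type is an ordinary least-squares fit. A minimal sketch, using invented measurements and an empirical logarithmic form chosen purely for illustration:

```python
import numpy as np

# Hypothetical measurements: compaction pressure vs. relative density
pressure = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # MPa
density  = np.array([0.55, 0.63, 0.71, 0.79, 0.86])    # dimensionless

# Fit the empirical form  density = a * ln(pressure) + b  by least squares
a, b = np.polyfit(np.log(pressure), density, deg=1)
# for these illustrative numbers, a is roughly 0.11
predicted = a * np.log(pressure) + b
```

The coefficients `a` and `b` describe these data and nothing more; extrapolating them, or reinterpreting them physically, is exactly what such data-driven models cannot justify.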
There are tons of books on mathematical modeling; it depends on your discipline: "Mathematical Biology", "Traffic Engineering", "Fracture Mechanics", "Water Quality"... each field has its specific books. One very general but extremely insightful book you may love to read for the big picture of mathematical modeling is:
The Nature of Mathematical Modeling by Neil A. Gershenfeld
If you narrow down what you want to study, I may suggest a book that is more useful for you.
Based on observations/results from experiments or simulations, modeling can be done with statistical tools and related methods such as AI. But these methods are not very effective, because they do not filter out errors and do not properly respect the physical phenomena behind the data.
The most suitable approach is to combine the mathematics with the physical phenomena. This links the dependent parameters to each other. Although it is not that easy, and it still needs some help from experimental observations. The exact methodology depends on the area of interest.
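One common way to make this combination concrete is to fit a physically motivated functional form to measurements, rather than an arbitrary curve. A minimal sketch, with invented cooling data, assuming Newton's law of cooling as the physical model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cooling measurements (time in minutes, temperature in degrees C)
t = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
T = np.array([90.0, 74.0, 62.0, 47.0, 33.0])

# Physics-based form from Newton's law of cooling:
#   T(t) = T_amb + (T0 - T_amb) * exp(-k t)
def cooling(t, T0, T_amb, k):
    return T_amb + (T0 - T_amb) * np.exp(-k * t)

params, _ = curve_fit(cooling, t, T, p0=[90.0, 20.0, 0.1])
T0, T_amb, k = params
```

The fitted parameters now have physical meaning (initial temperature, ambient temperature, cooling rate), so they can be checked and reused, which is exactly what a purely statistical fit does not offer.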
Another approach is to use Buckingham's Pi Theorem. For a beginner's guide, with some examples, please have a look at: http://www.astro.yale.edu/coppi/astro520/buckingham_pi/Buckinghamforlect1.pdf
Especially slide 16 onwards - it may be closer to the case you are studying.
The theorem is based on finding physically compatible non-dimensional relationships amongst variables. The common difficulty is ensuring that the set of variables describing the phenomenon is complete.
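Mechanically, the Pi groups are the null space of the dimension matrix. A minimal sketch for a swinging pendulum (variables chosen here only as an example):

```python
from sympy import Matrix

# Variables: T (period), L (length), g (gravity), m (mass)
# Dimension matrix: one column per variable, one row per base dimension
D = Matrix([
    [0, 0, 0, 1],    # exponent of M in each variable
    [0, 1, 1, 0],    # exponent of L
    [1, 0, -2, 0],   # exponent of T
])

# Each null-space vector is an exponent set for one dimensionless Pi group
pis = D.nullspace()
# One group here, with exponents [2, -1, 1, 0], i.e. Pi = g * T**2 / L
```

Four variables minus the rank of the matrix (3) gives one Pi group, as the theorem predicts; the "completeness" difficulty mentioned above corresponds to having left a column out of this matrix.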
Another good source is: https://user.engineering.uiowa.edu/~fluids/Posting/Schedule/Example/Dimensional%20Analysis_11-03-2014.pdf
The System Identification Method developed by J. T. Katsikadelis answers this question. It produces theory from experimental data (numerical measurements).
The method is illustrated in the papers:
1. Katsikadelis, J. T. (1995). System Identification by the Analog Equation Method. Transactions on Modelling and Simulation, Vol. 10, WIT, www.witpress.com, ISSN 1743-355X.
2. Katsikadelis, J. T. (2015). Derivation of Newton's law of motion using Galileo's experimental data. Acta Mech, 226, 3195-3204, DOI 10.1007/s00707-015-1354.
3. Katsikadelis, J. T. (2017). Derivation of Newton's law of motion from Kepler's laws of planetary motion. Archive of Applied Mechanics, DOI 10.1007/s00419-017-1245-x.
4. Katsikadelis, J. T. (2019). Is Newton's Law of Motion Really of Integer Order? Archive of Applied Mechanics, 89:639-647, https://doi.org/10.1007/s00419-018-1486-3.
I think the best approach is to start with the theory based on the best idea of the time (or your idea if you have a new idea), make predictions from it, and see if the predictions agree with data. If you start with the data and try to find theories that work, you can justify all sorts of theories. Most papers published in the literature claim agreement with data, because that helps the papers get published, even when different papers disagree with each other. Data are essential for refuting or supporting (not proving, just supporting) theories, and for inspiring ideas for theories. But data cannot create a theory.
As Jaafar El Karkri said, "it depends on the context and the objective of the study".
In certain circumstances, the level of measurement of at least some of the variables is only ordinal, or even qualitative.
This can happen in Psychology, Sociology, or in certain fields of Biology.
The mathematical structure of the models in this case is quite different from what is usual in Physics, Chemistry, Engineering, and Physiology, where the level of measurement is mainly quantitative.
"The Nature of Mathematical Modeling" by Neil A. Gershenfeld, suggested by Kaveh Zamani, is a very good general introduction to the kinds of mathematical models that can be constructed with variables at a quantitative level of measurement.
A framework with a more general scope is presented in:
P. Fishwick, Handbook of Dynamic System Modeling, Chapman & Hall/CRC, 2007.