It is accepted practice to validate any model used in a study. What validation methods are available, and how would we confirm that one is superior to the others?
You can validate your analytical model by the following methods:
- running experiments under the same assumptions as the analytical model;
- using numerical simulation, where the system is solved by numerical methods; powerful simulators now exist for many disciplines of science, such as electronic circuits, electronic devices, and semiconductor fabrication processes;
- for some statistical systems, running Monte Carlo simulations (a minimal sketch follows this list).
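As an illustration of the Monte Carlo point above, here is a minimal sketch (in Python, with made-up quantities) that checks an analytical result against a Monte Carlo estimate: the analytical probability that the sum of two fair dice equals 7 is 1/6, and the simulation should reproduce it within sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Analytical model: P(sum of two fair dice == 7) = 6/36 = 1/6
p_analytical = 1.0 / 6.0

# Monte Carlo estimate of the same quantity
n_trials = 1_000_000
rolls = rng.integers(1, 7, size=(n_trials, 2))   # two dice per trial
p_mc = np.mean(rolls.sum(axis=1) == 7)

# Standard error of the Monte Carlo estimate (binomial)
se = np.sqrt(p_mc * (1.0 - p_mc) / n_trials)

print(f"analytical  = {p_analytical:.5f}")
print(f"monte carlo = {p_mc:.5f} +/- {se:.5f}")
# The analytical value should lie within a few standard errors of the estimate.
```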
I will focus on differential equations as they are of core interest in physics.
I personally value qualitative features over quantitative ones. I start with a numerical model of low order that has the proper qualitative behavior and satisfies the necessary consistency conditions. Once that is the case, there is usually something I can do to increase the order of accuracy of the model: composing numerical methods, symmetrizing the equations of the method, and so on (a sketch of such a composition follows below).
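As a minimal sketch of what such a composition can look like (assuming a simple harmonic oscillator q'' = -q as the test problem): symplectic Euler is only first-order accurate but has the right qualitative behaviour, and composing a half-step of it with a half-step of its adjoint gives the symmetric Störmer–Verlet method, which is second order. The empirical order can be checked by halving the step size.

```python
import numpy as np

def symplectic_euler(q, p, h):
    # First order, but qualitatively correct (symplectic) for q'' = -q
    p = p - h * q
    q = q + h * p
    return q, p

def stoermer_verlet(q, p, h):
    # Symmetric composition of symplectic Euler and its adjoint -> second order
    p = p - 0.5 * h * q
    q = q + h * p
    p = p - 0.5 * h * q
    return q, p

def final_error(step, h, t_end=10.0):
    q, p = 1.0, 0.0                      # exact solution: q(t) = cos(t)
    for _ in range(int(round(t_end / h))):
        q, p = step(q, p, h)
    return abs(q - np.cos(t_end))

for name, step in [("symplectic Euler", symplectic_euler),
                   ("Stoermer-Verlet", stoermer_verlet)]:
    e1, e2 = final_error(step, 0.01), final_error(step, 0.005)
    # error ratio ~ 2**order when the step size is halved
    print(f"{name}: observed order ~ {np.log2(e1 / e2):.2f}")
```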
To determine whether a model is superior to others, you can compare the models in terms of solving speed and stability (how easily does the model converge?). If the model is used for optimization, you can also compare the objective value obtained from your model against that of other models.
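A minimal sketch of such a comparison (in Python, using SciPy's Rosenbrock test function as a stand-in for "your model"): time two optimizers on the same problem and compare wall-clock time, convergence success, and the objective value reached.

```python
import time
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0, -1.2, 1.0])   # common starting point

for method in ("Nelder-Mead", "BFGS"):
    t0 = time.perf_counter()
    res = minimize(rosen, x0, method=method)
    elapsed = time.perf_counter() - t0
    # Compare solving speed, convergence ("stability") and objective value
    print(f"{method:12s} time={elapsed:.4f}s converged={res.success} f={res.fun:.3e}")
```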
How to validate your model depends on the discipline and even on the particular problem being treated. In physics, for example, you can validate your model in the following ways:
- You can compare it with an existing model, even if only in a special case. In general, a new model belongs to an established discipline where other methods already exist, and it usually optimizes or generalizes something that already exists.
- You can compare your results with those of numerical methods such as FEM, FDTD, MoM, BEM, etc. (a minimal sketch follows this list).
- You can also compare your results with experimental measurements. If you cannot perform these measurements yourself, you can collaborate with people who can.
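As a minimal sketch of the second point (comparing an analytical result with a numerical method), assume the boundary-value problem u'' = -pi^2 sin(pi x), u(0) = u(1) = 0, whose analytical solution is u = sin(pi x): a second-order finite-difference solution should agree with it to within the discretization error.

```python
import numpy as np

n = 101                                   # grid points including boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Finite-difference discretization of u'' = f on the interior points
f = -np.pi**2 * np.sin(np.pi * x[1:-1])
A = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) +
     np.diag(np.ones(n - 3), -1)) / h**2
u_num = np.zeros(n)
u_num[1:-1] = np.linalg.solve(A, f)       # boundaries u(0) = u(1) = 0

# Analytical model to validate against
u_exact = np.sin(np.pi * x)

print("max |numerical - analytical| =", np.max(np.abs(u_num - u_exact)))
# Should be on the order of h**2 for this second-order scheme.
```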
If you have verified a simulation code for your model, validation usually consists of confronting the numerical results with experimental data.
You will test many parameter sets of your model to see whether some of them give good agreement with your data. How you perturb those parameters depends on how many parameters you need to tune.
If your model fits the data, meaning that uncertainties (from the experiments or from the lack of knowledge of your model parameters) are the only thing that makes your model differ from your experimental data (which is hard to check in practice), the important point is that you can only fail to reject your model. "Validation" is too strong a word: validation is merely not being able to reject a model. This is an important point, in my opinion.
Otherwise, to compare experimental data with numerical simulations and judge whether my model is good, I use statistical hypothesis testing.
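A minimal sketch of the hypothesis-testing step (in Python, with made-up data and known measurement uncertainties sigma): compute the chi-square statistic of the residuals between data and simulation and the corresponding p-value; a small p-value lets you reject the model, otherwise you merely fail to reject it.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical experimental data, model predictions and 1-sigma uncertainties
data  = np.array([1.02, 1.98, 3.05, 3.95, 5.10])
model = np.array([1.00, 2.00, 3.00, 4.00, 5.00])
sigma = np.array([0.05, 0.05, 0.05, 0.05, 0.05])

# Chi-square of the residuals; degrees of freedom = number of points
# (minus the number of fitted parameters, zero here)
chi2_stat = np.sum(((data - model) / sigma) ** 2)
dof = len(data)
p_value = chi2.sf(chi2_stat, dof)

print(f"chi2 = {chi2_stat:.2f}, dof = {dof}, p-value = {p_value:.3f}")
# p-value below the chosen significance level -> reject the model;
# otherwise we only fail to reject it.
```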
I know this answer is somewhat frustrating, but it is nevertheless the first step in model validation: try to answer, what is the intended use of the model? Or: what should the model be good at? Depending on the answer you give, the approaches to model validation (and many excellent suggestions have been given above) can be very different.
"All models are wrong, but some are useful" (attributed to George Box) is still a good starting point. In most cases, a dynamic mathematical model is used to either predict future states or outputs or to better understand the system. The prediction error can be tested by methods similar to machine learning. It is also worth looking at the Data Assimilation literature. For the analytical side: Does your model predict an interesting behaviour, e.g. in response to a perturbation or a
a new stimulus, which could experimentally be tested? Or, does it provide insights into a phenomenon which wasn't understood so far? And, can you derive testable predictions from that? Then the model could also be useful, even if the output does not exactly fit the data, but predicts the right quantitative behaviour.
I work in Math & Stats, so the answer to your question is straightforward: statistical methods, machine-learning techniques, and combinations of the two should do the job, all in conjunction with real-life data.
1. Formally, you can use methods similar to machine learning, e.g. dividing the experimental data into fitting and testing sets, etc. (a minimal sketch follows this list).
2. The model is good if it reproduces different experimental observations obtained in different experiments and from different labs, i.e. when it can be used beyond your current study.
3. You can forget the previous two points, because the only case in which someone will pay attention to your model is when you predict something non-obvious (ideally counter-intuitive) that was not measured before, and then experimentally show that the prediction is correct. This is much more persuasive than any formal validation method.
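A minimal sketch of point 1 (with synthetic data in place of real experiments): fit a candidate model on one subset of the data and check the prediction error on a held-out subset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experimental" data: y = 2x + 1 plus noise
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)

# Split into fitting (training) and testing sets
idx = rng.permutation(x.size)
train, test = idx[:35], idx[35:]

# Fit a candidate model (straight line) on the fitting set only
coeffs = np.polyfit(x[train], y[train], deg=1)

# Evaluate on the held-out testing set
y_pred = np.polyval(coeffs, x[test])
rmse_test = np.sqrt(np.mean((y[test] - y_pred) ** 2))
print(f"fitted coefficients: {coeffs}, test RMSE: {rmse_test:.3f}")
```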
The best method is direct comparison with experimental tests or with gathered data. If the system is too large or too expensive to run experiments on, then one should request data from industrial partners that reflect the system's behavior. This is the most widely accepted way to validate a model.
Usually, validation can be performed by running further independent experiments under the same or similar conditions. Moreover, good extrapolation capability indicates good agreement between the identified model and the process.
Additionally, models can be compared using different error metrics, such as RMSE, MAE, and R² (a minimal sketch follows the reference below). Take the following article as an example; it evaluates different models with different performance metrics in a complex system.
Article Experimental Evaluation of Different Microturbojet EGT Model...
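A minimal sketch of computing such error metrics (RMSE, MAE, R²) for two hypothetical models against the same measured data; the names and numbers are made up.

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

measured = np.array([10.0, 12.1, 13.9, 16.2, 18.0])
model_a  = np.array([10.2, 12.0, 14.1, 16.0, 18.3])
model_b  = np.array([ 9.5, 12.8, 13.2, 17.0, 17.4])

for name, pred in [("model A", model_a), ("model B", model_b)]:
    print(f"{name}: RMSE={rmse(measured, pred):.3f}  "
          f"MAE={mae(measured, pred):.3f}  R2={r2(measured, pred):.3f}")
```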
Please let me recommend our framework for model evaluation: Conference Paper Data Formats and Visual Tools for Forecast Evaluation in Cyb...
One of the principles we used is as follows: the criterion used to evaluate a model must correspond to the criterion used to optimize the predictions; see page 3 of this chapter: Chapter Forecast Error Measures: Critical Review and Practical Recommendations
For example, if you use the median of a density forecast, it is useless to compare models in terms of MSE, since the median is optimal for MAE (for non-symmetric distributions); see slide 33 for more info: Conference Paper Data Formats and Visual Tools for Forecast Evaluation in Cyb...
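A minimal numerical illustration of that principle (with a made-up skewed distribution): used as constant forecasts, the sample mean does better on MSE while the sample median does better on MAE, so scoring a median forecast with MSE penalizes it unfairly.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # skewed "observations"

forecasts = {"mean forecast": np.mean(y), "median forecast": np.median(y)}

for name, f in forecasts.items():
    mse = np.mean((y - f) ** 2)
    mae = np.mean(np.abs(y - f))
    print(f"{name}: MSE={mse:.3f}  MAE={mae:.3f}")
# The mean forecast wins on MSE, the median forecast wins on MAE:
# the evaluation criterion must match the criterion the forecast optimizes.
```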