There are two requirements for the use of computational tools for simulations: verification and validation. In very broad terms, and paraphrasing Prof Patrick Roache's book on verification and validation in CFD, these can be understood as "am I solving the equations right?" (verification) and "am I solving the right equations?" (validation). Verification of a code amounts to checking that your numerical solution of the governing equations (your PDEs) is correct: a comparison with another code solving the same system of PDEs, together with a mesh convergence study of both codes, would be just this, a proof of the ability of your code to obtain the right numerical solution. Validation of a code is done by comparison with experiments and checks that the (properly discretised governing equations of the) model represents the physics of the problem that you want to simulate.
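The mesh convergence study mentioned above can be sketched in code. The example below is a minimal illustration, not any particular code from this thread: it solves an assumed 1D Poisson problem with a known exact solution using second-order central differences, then checks that the observed order of convergence matches the scheme's theoretical order of 2.

```python
import numpy as np

def solve_poisson(n):
    # Solve u'' = -pi^2 sin(pi x) on [0,1], u(0)=u(1)=0, exact u = sin(pi x),
    # with second-order central differences on n intervals.
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = -np.pi**2 * np.sin(np.pi * x[1:-1])
    # Tridiagonal system for interior nodes: (u[i-1] - 2u[i] + u[i+1]) / h^2 = f[i]
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))   # error against exact solution

errors = [solve_poisson(n) for n in (10, 20, 40)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
print(orders)   # both close to 2.0: the code converges at the scheme's order
```

Obtaining the expected order under refinement is verification only: it proves the PDE is being solved correctly, not that the PDE is the right model of the physics.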
Hi Prakasam, I don't know what kind of system you're dealing with, but the fact is that once you have a mathematical model of the physical system, you need to validate it. Two main routes are available: numerical analysis (with the finite element method, for example) and experimental analysis (building a prototype).
A well-done analysis rests on both validations, numerical and experimental, because in neither case can you perfectly reproduce the real system (by definition, a finite element simulation is an approximation of reality, since you are discretizing the continuum; likewise, a test bench cannot reproduce the system's behaviour perfectly). So my personal advice is that, before drawing firm conclusions, you should compare the mathematical, numerical and experimental results.
It depends what you want to test. If you want to test your tool and your assumptions, e.g. the effect of model reductions, it's okay to validate them against already validated numerical tools.
If you want to see if your model describes the real world correctly you need experiments.
To calibrate new tools and models you need experimental data. Calibration based exclusively on numerical tools has to be done with care, and requires first a thorough review of all the assumptions made when calibrating the numerical tool, since these may affect your own calibration.
It is always better to validate the numerical results with experimental data when a new system or material is used. This gives confidence to apply the new findings to field problems.
I agree with Christian. It is even quite difficult to validate a numerical model with experiments, because experiments always contain unidentified factors that cannot be defined properly in the numerical model!
Numerical modelling (especially FEM) has the following issues, which need to be understood clearly.
1. The discretization issue – A larger element size gives less accurate results. Adaptive meshing, etc., may help.
2. The modelling issue – Modelling with a particular type of element involves certain assumptions (even in large commercial software), except for very simple analyses. A classical example is that of shell elements, whose modelling issues are still being discussed by researchers, although nearly all commercial software offers shell elements for stress analysis of thick or thin curved plate-type structures.
3. Accuracy of the data supplied to the model – Not all properties are well documented for every material. Moreover, the model in the software may neglect certain types of variation in properties.
Thus experimental validation is required, although in certain cases it may not be feasible. Validation against a simple experimental object may be tried first, and after developing confidence in the model it may be used on the real-world problem. However, it is not necessary for each user to perform experiments; a survey of the research literature may provide very useful experimental data.
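The data-accuracy issue in point 3 can be illustrated with a quick Monte Carlo sketch. All numbers below are assumed for illustration: a bar under axial load with tip displacement u = F·L/(E·A), where the Young's modulus is sampled with 5% scatter to mimic poorly documented material data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tip displacement of an axially loaded bar: u = F*L / (E*A).
# All values are hypothetical; E is sampled to mimic uncertain material data.
F, L, A = 1.0e4, 2.0, 1.0e-4                  # N, m, m^2 (assumed)
E_nominal = 210e9                             # Pa, nominal steel modulus
E_samples = rng.normal(E_nominal, 0.05 * E_nominal, 10_000)  # 5% scatter

u = F * L / (E_samples * A)
u_nom = F * L / (E_nominal * A)
print(f"nominal: {u_nom*1e3:.3f} mm, "
      f"mean: {u.mean()*1e3:.3f} mm, std: {u.std()*1e3:.3f} mm")
```

Even for this trivially simple model, a 5% uncertainty in one material property propagates into a comparable spread in the predicted response, which is one reason a model that is numerically correct can still miss the experiment.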
Mostly, for process optimization or calibration you need to validate the model against real conditions in experiments. For this you might run a simple statistical analysis (e.g. Taguchi) to find the most important factors in your process, then monitor these factors and try to validate your model based on them.
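As a small sketch of such factor screening (a full two-level factorial rather than a true Taguchi orthogonal array, which is a fraction of this design, and with made-up response data), the main effect of each factor is the mean response at its high level minus the mean at its low level:

```python
import numpy as np

# Two-level full factorial screening of three hypothetical process factors.
# Each row holds coded levels (-1/+1) for factors A, B, C.
design = np.array([[s_a, s_b, s_c]
                   for s_a in (-1, 1) for s_b in (-1, 1) for s_c in (-1, 1)])
y = np.array([12.1, 12.4, 15.8, 16.2, 12.3, 12.6, 15.9, 16.5])  # made-up data

# Main effect = mean response at +1 minus mean response at -1.
for name, col in zip("ABC", design.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"factor {name}: main effect = {effect:+.2f}")
```

With these invented numbers, factor B dominates, so it is the one to monitor and to use when comparing the model against experiments.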
In my experience, well-tested FEM models can reproduce standard problems with high reliability, provided you have respected the limitations of the numerical scheme.
In general it is not good enough to compare theoretical results with theoretical results for FE models of real structures/machines. Model updating is the right way to improve FE models.
I agree with Christian W. A model is always built with certain idealizations and assumptions that describe your system as closely as possible, so your model will never be identical to the physical system. It all depends on your goal.
Without experimental validation, doing FEM is of little use: it can only lead to what are called colourful plots, nothing more. Generating correct geometry with solid modelling software is not sufficient; a lot of emphasis should also be placed on mesh density, mesh size variation, and the type and shape of elements. In 3D, hexahedral elements are always preferable. In general, FEM results for quasi-static problems agree with experimental results to within 5% if your FEM model is correct.
The simulation results are based on assumptions (which should be correct), on the mathematical model, and on the method chosen to solve the problem. They may predict the behaviour of the system approximately and can guide a researcher toward the right solution, but they cannot perfectly predict the real system. Therefore, experiments are required.
Experimental validation is the acid test of mathematical modelling, where practicable. I have personally come across many examples where the FE results looked fine but differed from experiment; the flaw was in the FE model, not the experiment.
It very much depends on the problem at hand. If the objective is to find a more efficient model or a faster approach, while making no changes to the fundamental principles of the model, numerical comparison is acceptable. However, if you are characterising a new material, or analysing something innovative for which reliable numerical models have not been developed, you can only use experimental readings. Bottom line: if the numerical results you are comparing against are good and you do not aim to improve on them, you can compare with numerical results; otherwise, experiments must be used.
Hello Prakasam. The answer depends on the theoretical basis of your software. If your software is built on a relatively new approach compared to FEM, then comparing your results with FE results can be meaningful. However, if your software is itself based on the FE method, such a comparison yields little useful insight. Furthermore, I think it is better to first compare the results of software based on numerical methods with analytical and experimental results, and only then, in a second stage, with another numerical method.
It depends what you need. If you want to check your numerical tool against finite element tools, that is fine. However, if you want to test whether your results and assumptions hold true for your case study, you should verify your results experimentally.
If your purpose is to interrogate your model in order to further enhance your structure, you need a degree of accuracy that is not achievable by numerical simulation of any kind. Furthermore, theoretical and empirical data are necessary to verify any numerical model.
If the question is about a method that solves a model, for example whether to use the central difference scheme or Runge-Kutta for the time-domain response of a differential equation, then no experimental validation is required.
On the other hand, if one wants the theoretical model to be predictive and used in the design of a real structure, one should do some experimental validation.
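The first case above, choosing between time-integration schemes, really is a pure verification exercise: it can be settled against a known exact solution with no experiment at all. A minimal sketch, using an assumed test equation dy/dt = -y with exact solution e^(-t), checks that forward Euler and classical RK4 show their theoretical orders of 1 and 4:

```python
import math

def euler_step(f, t, y, h):
    # One forward Euler step: first-order accurate.
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # One classical Runge-Kutta step: fourth-order accurate.
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(step, f, y0, t_end, n):
    t, y, h = 0.0, y0, t_end / n
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                 # dy/dt = -y, exact y(t) = exp(-t)
exact = math.exp(-1.0)
orders = {}
for name, step in (("Euler", euler_step), ("RK4", rk4_step)):
    e1 = abs(integrate(step, f, 1.0, 1.0, 50) - exact)
    e2 = abs(integrate(step, f, 1.0, 1.0, 100) - exact)
    orders[name] = math.log2(e1 / e2)
    print(f"{name}: observed order ~ {orders[name]:.1f}")
```

Whether the differential equation itself describes the real structure is the separate validation question, and that is where the experiment comes in.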
You want to validate one numerical computation with another one based on exactly the same basic principle but using different solution techniques. You can only compare these two techniques as solutions of the same fundamental principle: they might agree with each other and still fail in real-world experiments. There is no substitute for experiment.
All numerical simulations carry errors of different magnitudes, so comparing two numerical simulations can enlarge the final error, although it sometimes gives a relatively good answer. As a rule, a numerical simulation should not be validated against another numerical simulation.
With reference to Joaquim Peira's answer, I would like to comment as follows:
If an exact solution exists (perhaps a long series, or an integral that can be evaluated numerically), then the verification and validation processes mentioned in that answer can be performed. However, in numerical techniques (especially FEA), where real-world problems are solved, satisfying the boundary conditions exactly, and even formulating and developing the numerical scheme exactly, is often impossible, and various assumptions and approximations are made. This is also true of the formulations built into commercial software packages, especially for complex situations. Although these formulations may have been verified for certain cases, their applicability to all cases must be accepted with care, and therein lies the necessity of experimental verification (which may sometimes be extremely costly).
I think one can never be sure of the build-up of truncation error, and there is also the problem of round-off error. Even if the answer does not go haywire, these errors can still make substantial changes to the numerical solution, which you would need to verify. To be completely sure, one must verify against real-world answers.
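The round-off concern is easy to demonstrate. The sketch below (a standard illustration, not specific to any code in this thread) adds 0.1 a million times: the naive running sum drifts away from the ideal value, while Kahan compensated summation, which carries a correction term for the bits lost in each addition, stays essentially exact.

```python
# Accumulated round-off: summing 0.1 one million times in double precision.
n = 1_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1

# Kahan compensated summation keeps a running correction for the lost bits.
total, comp = 0.0, 0.0
for _ in range(n):
    y = 0.1 - comp
    t = total + y
    comp = (t - total) - y
    total = t

print(f"naive sum: {naive!r}")   # visibly off from 100000.0
print(f"kahan sum: {total!r}")   # much closer to 100000.0
```

If plain accumulation of a constant already drifts like this, the build-up of round-off inside a long nonlinear solve is far harder to bound, which supports the point about checking against real-world answers.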
I also agree with Cristian: it all depends on what you want to validate. Both mathematical models and physical experiments rely on assumptions. You must first evaluate which sets of assumptions your software tool uses, and then validate it accordingly. Even if FEA "limitations" seem evident, please be aware that measurement is not by itself a guarantee of a SOLID reference: spatial/temporal acquisition rate, less obvious physical constraints, and assumptions about instrument precision require the same degree of scrutiny as the reliability of an FEA code. In both cases, a prior validation of your reference method against known standards/problems (experimental or numerical) is in order.
It is essential, in all of the above, to know the problem you are dealing with. Elastoplastic/small-strain mechanical problems are simple to validate both numerically and experimentally, while at the other end, complex turbulent flows in nonlinear fluids, large-strain instability problems, etc. are notoriously difficult to model, but also to measure.
So, before you select your validation tool, be sure that it has in turn been thoroughly validated against well-known problems.
The answers to this question have already covered the subject thoroughly. I just want to emphasize that the experimental work done for "validation" requires a full understanding of the physical nature of the problem. This is of indispensable help in setting up a model that properly represents the behaviour of the actual object. The next very crucial step is the "design of experiments", directed towards determining the empirical components of the model that make it work. Needless to say, the "measurements" should be of proper "sensitivity" and "trueness" so that the "uncertainties" and associated "errors" remain within limits that allow a sufficiently correct picture of the physical behaviour. A series of pilot tests is always of great help here.
You can also validate the simulation results from FEM packages against experimental results already available in the literature, and there are benchmark problems for different applications; you can validate your results against these benchmark problems as well. If you have the resources to carry out experiments, you should perform them.
Sometimes you develop analytical models of certain tasks that are a simplification of FEA simulations; in this case, numerical results from FEA can be used to verify the analytical model. However, both the analytical model and the FEA simulation rest on assumptions, so if you want to check how "real" your analytical model is, an experiment is necessary.
If one has a system, component, or device that must be characterized, one may do this by mathematical analysis, producing closed-form solutions for the performance parameters of the object in question. One can also solve the problem numerically. The numerical solution may be more precise and more descriptive of the object's performance, since numerical solutions can be based on the same equations but without the simplifying assumptions required to obtain closed-form analytical solutions. So numerical solutions may be of more general applicability and greater precision.
Then comes the characterization of the real object by experiments. This gives the real behaviour, within the accuracy of the measuring instruments.
Experience shows that, if they are based on a correct model and correct model parameters, numerical solutions can approach the experimental results with very acceptable accuracy, while analytical solutions will be of limited applicability if they are subject to the simplifying assumptions needed to obtain a closed form.
Today, computer-aided analysis and design tools are developed whose numerical simulation results agree closely with experimental results. In my specialization, for example, there are electronic device simulators and design tools that give computational results in agreement with experiments. These tools are sometimes termed a virtual electronic device factory, as they can also simulate the semiconductor processing and fabrication steps.
Whatever the case, an experiment (prototype, verification series, etc.) is always the definitive verification. A real experiment automatically respects all laws and circumstances, including those we do not even know :-)
The FE method, as an approximation method for solving PDEs, can be validated on the basis of strong, valid mathematical laws. However, when you solve a problem with the FE method, the validity of the results depends heavily on the selection of proper input parameters and mesh. So, if one can guarantee the validity of the input parameters and mesh, the validity of the FE model follows automatically. In other words, when we cannot guarantee the validity of the input parameters and mesh, I recommend that the results of the analysis be verified against experimental evidence.
You always need experimental material values as input for your FEM simulation. I prefer a physical simulation without theoretical material models, because it is very close to the experiment; see my project "CC technology".
Simulation without experimental calibration is just fantasy.
In some cases you do not need specific experimental material values, for example when you simulate linear elastic behaviour: you need only Young's modulus and Poisson's ratio. In such cases we have closed-form analytical solutions, and it is not necessary to validate your FE results with experiments; the analytical results are enough. Likewise, even for nonlinear behaviour for which a closed-form solution based on classical plasticity theory is available, I think the experimental validation can be skipped.
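This kind of closed-form check can be sketched concretely. The example below (assumed values, a textbook case rather than anyone's actual model) assembles a uniform mesh of linear bar elements for an axially loaded bar fixed at one end, and compares the FE tip displacement with the analytical result u = F·L/(E·A); for this problem the linear elements reproduce the closed form exactly.

```python
import numpy as np

# Axial bar, fixed at x=0, end load F at x=L; analytical tip displacement
# u = F*L/(E*A). Values below are assumed (aluminium-like bar, SI units).
E, A, L, F = 70e9, 3e-4, 1.5, 5e3
n = 4                                   # number of linear bar elements
k = E * A / (L / n)                     # stiffness of one element
K = np.zeros((n + 1, n + 1))
for e in range(n):                      # assemble global stiffness matrix
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
f = np.zeros(n + 1)
f[-1] = F                               # point load at the free end
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # impose u(0)=0, solve reduced system
print(u[-1], F * L / (E * A))           # FE tip displacement vs closed form
```

The agreement here is to machine precision, so no experiment is needed to trust the code for this case; the experiment becomes necessary only when no such closed-form benchmark covers the behaviour you are modelling.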