I have a set of both experimental and numerical values (say for every second). What are the different ways of finding the cumulative deviation between the values? Which can be considered the best among all?
Is your experiment intended to validate a steady numerical simulation? If so, I would suggest performing time averaging of your measurements. If the oscillations are small, they can be interpreted as noise. The time-averaged values can then be compared with the model values. For the model to be successfully validated, however, the differences between the experimental and numerical values have to fall within a certain interval. Suggestions for determining this interval, the so-called validation uncertainty, are given in
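As a minimal sketch of this procedure (assuming NumPy, samples taken once per second, and a hypothetical helper name `validate_steady`; the tolerance would in practice come from a validation-uncertainty analysis such as the one in the standard cited below):

```python
import numpy as np

def validate_steady(measured, model_value, tolerance):
    """Time-average a measured signal and compare it to a steady model value.

    measured    : 1-D array of samples (e.g. one per second)
    model_value : the steady numerical prediction
    tolerance   : half-width of the validation interval (assumed given)
    """
    mean = measured.mean()           # time average smooths out noise-like oscillations
    error = abs(mean - model_value)  # comparison error |experiment - model|
    return mean, error, error <= tolerance

# Example: noisy measurements hovering around 10.0, model predicts 10.1
rng = np.random.default_rng(0)
samples = 10.0 + 0.05 * rng.standard_normal(600)
mean, error, ok = validate_steady(samples, 10.1, tolerance=0.2)
```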
ASME V&V 20-2009 - Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer
If your problem is steady I'd agree with Schmandt, but if it involves a time-dependent result you will have to compare each simulated point with the corresponding measured one, so a suitable uncertainty indicator is the root mean square of all differences between simulation and measurements.
Furthermore, if the numerical simulation depends in any way on the m measurements taken (say, it depends on n parameters you extracted from the measured points), the indicator s0 has to be corrected as follows:

s0^2 = SS / (m - n)

where SS is the sum of the squared differences between the simulated points and the measured ones.
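A short sketch of this corrected estimator (assuming NumPy; the function name is hypothetical):

```python
import numpy as np

def corrected_deviation(simulated, measured, n_fitted_params=0):
    """Deviation indicator s0 with a degrees-of-freedom correction.

    If the simulation used n parameters extracted from the m measurements,
    divide the sum of squared differences by (m - n) instead of m.
    """
    diff = np.asarray(simulated) - np.asarray(measured)
    m = diff.size
    ss = np.sum(diff**2)                # SS: sum of squared differences
    s0_sq = ss / (m - n_fitted_params)  # corrected variance-like indicator
    return np.sqrt(s0_sq)

sim = [1.0, 2.1, 2.9, 4.2]
meas = [1.1, 2.0, 3.0, 4.0]
s0 = corrected_deviation(sim, meas, n_fitted_params=1)
```

With n_fitted_params=0 this reduces to the ordinary RMS of the differences.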
The most time-tested measure of deviation is the variance, the mean of the squared differences (its square root is the standard deviation). But be careful: this is appropriate only if the values are expected to hover around a constant value. There may be trends in the data -- linear, quadratic, or some more complex function of time. First perform a regression fit to the numerical data. Then the sum of the squared deviations of the experimental data from that fit may be an appropriate measure.
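This detrend-then-compare idea can be sketched like so (assuming NumPy; a linear trend and the sample data are illustrative only):

```python
import numpy as np

def detrended_deviation(t, experimental, numerical, degree=1):
    """Fit a polynomial trend to the numerical data, then measure how far
    the experimental data deviates from that fitted trend.

    degree : 1 for a linear trend, 2 for quadratic, etc.
    """
    coeffs = np.polyfit(t, numerical, degree)  # regression fit to the model data
    trend = np.polyval(coeffs, t)
    residuals = np.asarray(experimental) - trend
    return np.sum(residuals**2)                # sum of squared deviations from the trend

t = np.arange(5, dtype=float)
numerical = 2.0 * t + 1.0  # clean linear trend
experimental = 2.0 * t + 1.0 + np.array([0.1, -0.1, 0.0, 0.1, -0.1])
ss = detrended_deviation(t, experimental, numerical, degree=1)
```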
You could also perform ensemble averaging over several experimental runs. This means that measured data from several runs, taken at comparable time instants within each run, are averaged.
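In NumPy terms, ensemble averaging is simply a sample-by-sample mean across runs (the three runs below are made-up illustrative data):

```python
import numpy as np

# Ensemble averaging: several experimental runs, each sampled at the same
# time instants, averaged sample-by-sample across runs.
runs = np.array([
    [10.1, 10.3, 10.2],  # run 1, values at t = 0, 1, 2 s
    [ 9.9, 10.1, 10.4],  # run 2
    [10.0, 10.2, 10.3],  # run 3
])
ensemble_mean = runs.mean(axis=0)  # one averaged value per time instant
```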
I assume from the question that you do not have a statistical model in mind for the deviations. In such a case, I recommend trying several things. Easy things to compute are
* RMS (root mean square)
* MAD (mean absolute difference)
and "sisters" of the above using the median instead of the mean. Other approaches might be based on ratios of values, or log ratios. There are many things that might make sense, and the best choice depends on the statistical character of your data.
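The measures above and their median-based sisters are easy to compute side by side (a sketch assuming NumPy; the function name and sample data are illustrative):

```python
import numpy as np

def deviation_measures(experimental, numerical):
    """Several simple deviation measures between two aligned series."""
    d = np.asarray(experimental) - np.asarray(numerical)
    return {
        "rms": np.sqrt(np.mean(d**2)),           # root mean square
        "mad": np.mean(np.abs(d)),               # mean absolute difference
        "median_abs": np.median(np.abs(d)),      # median-based sister of MAD
        "rms_median": np.sqrt(np.median(d**2)),  # median-based sister of RMS
    }

measures = deviation_measures([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.0, 4.3])
```

Comparing several of these on the same data is a cheap way to see how sensitive your conclusion is to the choice of measure.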
But I would start by plotting the fields to learn more about what is going on. The eye can pick up a lot of information: you might see that the deviations are skewed, which might make you think of plotting a histogram of the deviations, which might in turn suggest a statistical model, and so on.
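A quick numerical version of that histogram-and-skewness check might look like this (assuming NumPy; the skewed sample here is synthetic, standing in for your deviations):

```python
import numpy as np

# Look at the deviation distribution before choosing a measure:
# a histogram plus a skewness estimate can reveal asymmetric errors.
rng = np.random.default_rng(1)
deviations = rng.exponential(scale=0.5, size=1000) - 0.5  # deliberately skewed

counts, bin_edges = np.histogram(deviations, bins=20)
standardized = (deviations - deviations.mean()) / deviations.std()
skewness = np.mean(standardized**3)  # > 0 indicates a right-skewed distribution
```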
Simply stated -- you just need to explore your data from a series of angles before you settle on one difference measure.