I have a large amount of measured data from real instruments, and more often than not the probability distribution is neither symmetric nor Gaussian. It is disturbing how often statistical methods are wielded without considering their underlying assumptions, particularly when calculating uncertainty. Failing to examine these assumptions can lead to errors of more than 100%, underestimating the uncertainty, especially when extrapolating, which certain applications require. It is easy to demonstrate that the classical uncertainty estimate (i.e., using Student's t and sigma) does not hold even for normally-distributed random numbers. The unstated (and often unrecognized) assumption is that the samples are evenly distributed over the domain, which isn't necessarily true. For example: take a million groups of random numbers, 5 at a time (or 6 at a time, etc.), and ask what the actual uncertainty is at the 90% or 95% confidence level. Since we know the true value from the outset, the nominal and actual coverage can be compared directly. I am working on this problem with a colleague (Lindon C. Thomas) and invite open discussion.
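
A minimal sketch of the kind of Monte Carlo check described above, assuming the quantity of interest is the coverage of the classical Student's-t interval for the mean; the group size, confidence level, and parent distribution here are illustrative choices, not the actual study, and the skewed (lognormal) case is included only to show how a non-Gaussian parent can be swapped in:

```python
# Monte Carlo check of the classical t-interval's empirical coverage for small samples.
# Illustrative sketch only: group size, confidence level, and parent distribution are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_groups = 1_000_000   # number of groups of random numbers
n = 5                  # samples per group (try 6, 7, ... as well)
conf = 0.95            # nominal confidence level
true_mean = 0.0        # known population mean (we know the true answer from the outset)

# Draw all groups at once from a normal parent distribution.
samples = rng.normal(loc=true_mean, scale=1.0, size=(n_groups, n))
# For a skewed parent, e.g. lognormal, use instead:
# samples = rng.lognormal(mean=0.0, sigma=1.0, size=(n_groups, n)); true_mean = np.exp(0.5)

xbar = samples.mean(axis=1)          # group means
s = samples.std(axis=1, ddof=1)      # group sample standard deviations

# Classical half-width of the confidence interval for the mean: t * s / sqrt(n)
t_crit = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
half_width = t_crit * s / np.sqrt(n)

# Empirical coverage: fraction of intervals that actually contain the true mean.
covered = np.abs(xbar - true_mean) <= half_width
print(f"nominal coverage: {conf:.3f}, empirical coverage: {covered.mean():.4f}")
```

Running the same check with a skewed parent distribution, or with a quantity other than the mean (e.g., the span or an extrapolated value), is where the nominal and actual coverage can diverge.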
