
All measurements are uncertain, so in experimental science one should always report the uncertainty, u, attached to the measured value, y. There are at least two mainstream ways to do so:

(1) y ± u_random ± u_systematic; this notation is common in physics, keeping the random and systematic components separate.

(2) y ± u, where u combines the two components in quadrature, after corrections for systematic effects have been applied to the (initially uncorrected) result; this is standard practice in metrology.
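For concreteness, here is a minimal sketch (in Python, with illustrative names and values of my own) of how the two reporting conventions relate: the combined uncertainty in (2) is obtained from the components in (1) by summing in quadrature.

```python
import math

def combined_uncertainty(u_random: float, u_systematic: float) -> float:
    """Combine the random and systematic components in quadrature (convention 2)."""
    return math.sqrt(u_random**2 + u_systematic**2)

# Illustrative values only
y = 9.81
u = combined_uncertainty(0.02, 0.05)
print(f"{y} +/- {u:.3f}")   # 9.81 +/- 0.054
```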

The size of u also depends on the chosen confidence level of the confidence interval (under the Bayesian approach the terminology differs, e.g. a credible interval).
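Assuming a Gaussian model, this dependence is usually expressed through an expanded uncertainty U = k·u, where the coverage factor k is fixed by the chosen coverage probability (k ≈ 2 for about 95 %). A sketch, with hypothetical function names:

```python
from scipy.stats import norm

def expanded_uncertainty(u_combined: float, coverage: float = 0.95) -> float:
    """Scale a combined standard uncertainty by the Gaussian coverage factor k."""
    k = norm.ppf(0.5 + coverage / 2)   # k ~ 1.96 for 95 %, ~ 2.58 for 99 %
    return k * u_combined

print(expanded_uncertainty(0.054))   # ~0.106
```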

There are other ways as well, when non-probabilistic methods (e.g. interval methods) are used to estimate the uncertainty.

In all instances, the result of a measurement is represented by (at least) a pair of parameters (y, u).

The question is: would it be possible to formalise a definition and notation for the category 'uncertain number' that does not depend on the choice of method (e.g. probabilistic or not), and to build an arithmetic out of it?
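As an illustration of the difficulty, here is a minimal sketch (a hypothetical class of my own, not a proposed solution) of one concrete arithmetic, the familiar first-order Gaussian propagation; it already bakes in a probabilistic model and an independence assumption, which is exactly what a method-independent 'uncertain number' would need to abstract away.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class UncertainNumber:
    """A value y with standard uncertainty u (probabilistic model, independence assumed)."""
    y: float
    u: float

    def __add__(self, other: "UncertainNumber") -> "UncertainNumber":
        # First-order propagation: absolute uncertainties add in quadrature.
        return UncertainNumber(self.y + other.y, math.hypot(self.u, other.u))

    def __mul__(self, other: "UncertainNumber") -> "UncertainNumber":
        # First-order propagation: relative uncertainties add in quadrature.
        y = self.y * other.y
        u = abs(y) * math.hypot(self.u / self.y, other.u / other.y)
        return UncertainNumber(y, u)

a = UncertainNumber(10.0, 0.1)
b = UncertainNumber(4.0, 0.2)
print(a + b)   # UncertainNumber(y=14.0, u=~0.224)
print(a * b)   # UncertainNumber(y=40.0, u=~2.04)
```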

I was prompted by the existence of other categories of numbers, e.g., complex numbers.

Mathematicians should have the best tools to succeed (as happened, for example, with interval mathematics).
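Interval arithmetic is indeed one existing, non-probabilistic formalisation; a minimal sketch (hypothetical names), where an uncertain value is represented only by its bounds and y ± u maps to [y - u, y + u]:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        # The product interval is bounded by the extreme endpoint products.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

x = Interval(9.9, 10.1)    # 10.0 +/- 0.1
w = Interval(3.8, 4.2)     # 4.0  +/- 0.2
print(x + w)   # Interval(lo=13.7, hi=14.3)
print(x * w)   # Interval(lo=37.62, hi=42.42)
```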

I have discussed some of the problems that arise when using 'uncertain numbers', as opposed to 'mathematical' numbers, in some of my publications; I attach one of them here.
