I need to calculate my confidence in a procedure. I've never seen this done, so I thought I'd ask the RG community.

Scenario:

I know the correct numerical answer to a problem, but others who have been deriving answers to the same problem methodologically for decades have gotten different values. Their analytical methods entail imprecision, so their estimates form a distribution. That distribution includes the correct value, but its mean does not coincide with the correct value because the method systematically overestimates it.

Would it be valid to transform the methodological distribution so that its mean equals the correct value, then use the SD of the distribution to calculate a CI that can be compared to the untransformed CI in some way that quantifies one's degree of confidence when using the methodology?
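To make that concrete, here is a rough sketch of what I have in mind (Python, with made-up numbers; `true_value` and `estimates` are placeholders, not real data, and I'm assuming NumPy/SciPy). It just recentres the estimates on the correct value and compares the two intervals:

```python
import numpy as np
from scipy import stats

# Placeholder inputs: the known correct value and a sample of
# estimates produced by the established methodology.
true_value = 10.0
estimates = np.array([10.8, 11.2, 10.5, 11.9, 10.1, 11.4, 10.9, 11.6])

n = len(estimates)
mean_est = estimates.mean()
sd_est = estimates.std(ddof=1)           # sample SD of the method's estimates
sem = sd_est / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% critical value

# Untransformed 95% CI around the method's own mean
ci_raw = (mean_est - t_crit * sem, mean_est + t_crit * sem)

# "Recentred" distribution: shift every estimate so the mean equals the
# correct value; a shift leaves the SD (and hence the CI width) unchanged.
shifted = estimates - (mean_est - true_value)
ci_shifted = (shifted.mean() - t_crit * sem, shifted.mean() + t_crit * sem)

print(f"Method mean = {mean_est:.2f}, SD = {sd_est:.2f}")
print(f"95% CI (as published): {ci_raw[0]:.2f} to {ci_raw[1]:.2f}")
print(f"95% CI (recentred):    {ci_shifted[0]:.2f} to {ci_shifted[1]:.2f}")
print(f"Correct value inside published CI? {ci_raw[0] <= true_value <= ci_raw[1]}")
```

Since the shift changes only the location and not the spread, comparing the two intervals really just restates the bias, which is part of why I suspect this isn't quite the right approach.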

I think I could do that in practice, but it doesn't seem to capture what I'm after, which is a measure of the confidence one can have in the methodology's capacity to get the right answer, or a measure of the right answer's deviation from the method's mean, given its error distribution.

Theoretically, the right answer doesn't have a distribution. Is there a statistic that captures both dispersion and deviation from a theoretical correct value, yielding an estimate of confidence in the method's ability to arrive at the correct answer?
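To show the kinds of quantities I can compute from what I have, here is another small sketch (same placeholder numbers as above, so nothing here is real data). It calculates the bias, the gap between the correct value and the method's mean in SD units, and an RMSE-style quantity that folds deviation and dispersion together, in case one of these is already the standard answer to my question:

```python
import numpy as np
from scipy import stats

# Placeholder inputs, as in the sketch above
true_value = 10.0
estimates = np.array([10.8, 11.2, 10.5, 11.9, 10.1, 11.4, 10.9, 11.6])

bias = estimates.mean() - true_value      # systematic overestimation
sd = estimates.std(ddof=1)                # imprecision of the method
rmse = np.sqrt(bias**2 + sd**2)           # combines deviation and dispersion

# Standardised bias: how many SDs the correct value sits from the method's mean
z = bias / sd

# If the method's errors were roughly normal, the chance that a single
# application of the method lands within +/- delta of the correct value
# (delta is an arbitrary tolerance I'd have to choose)
delta = 0.5
p_within = (stats.norm.cdf((true_value + delta - estimates.mean()) / sd)
            - stats.norm.cdf((true_value - delta - estimates.mean()) / sd))

print(f"Bias = {bias:.2f}, SD = {sd:.2f}, RMSE-style measure = {rmse:.2f}")
print(f"Correct value lies {z:.2f} SDs from the method's mean")
print(f"P(single estimate within ±{delta} of the correct value) ≈ {p_within:.2f}")
```

Is one of these (or something else entirely) the accepted way to express confidence in a method relative to a known correct value?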
