The term "epistemic uncertainty" is still commonly used in risk analysis and risk literature, often in opposition of “aleatory uncertainty”. This model may be useful to evaluate the potential of reducing specific uncertainties.
Although I agree that the term may have a negative connotation, I think that focusing instead on the bright side ("confidence interval") might have the perverse effect of disengaging us from what we do not know (and from the responsibility to deal with that uncertainty). This is particularly relevant in crisis-prevention strategies.
I have always considered "confidence interval" a neutral term. For example:
Transfusion. 2016 Jul;56(7):1680-3. doi: 10.1111/trf.13635. Epub 2016 May 17.
How do I interpret a confidence interval?
O'Brien SF, Yi QL (Canadian Blood Services and School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, Ontario, Canada).
Abstract:
"A 95% confidence interval (CI) of the mean is a range with an upper and lower number calculated from a sample. Because the true population mean is unknown, this range describes possible values that the mean could be. If multiple samples were drawn from the same population and a 95% CI calculated for each sample, we would expect the population mean to be found within 95% of these CIs. CIs are sensitive to variability in the population (spread of values) and sample size. When used to compare the means of two or more treatment groups, a CI shows the magnitude of a difference between groups. This is helpful in understanding both the statistical significance and the clinical significance of a treatment. In this article we describe the basic principles of CIs and their interpretation."
Hi, because epistemic uncertainty is essentially "model" inadequacy, this is difficult. In risk, almost any linear, predetermined, quantitative model is "wrong" or oversimplified in order to get some kind of answer. Unless you use something like a Bayesian dependency model "calibrated" against real observed data, you normally don't know what you don't know (i.e. the effect of real-life inter-dependencies and interactions (resonances)). Classic Rumsfeld uncertainty?
OK David, but if we do know that deterministic models are "unknown" in the risk-analysis context, why not attempt to make our doubts explicit using epistemic techniques such as Bayesian methods, Dempster-Shafer theory, possibility theory, and so on?
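To sketch what "making our doubts explicit" could look like, here is a minimal Bayesian update for an unknown failure probability, assuming a simple Beta-Binomial model; the prior and the observed counts are purely illustrative, not from any real risk study.

```python
from scipy.stats import beta

# Epistemic uncertainty about a failure probability p, expressed as a prior
# and narrowed by (hypothetical) observed data via a conjugate Beta-Binomial update.
prior_alpha, prior_beta = 1.0, 1.0   # flat prior: we admit we do not know p
failures, trials = 3, 20             # hypothetical observations

post_alpha = prior_alpha + failures
post_beta = prior_beta + (trials - failures)

lo, hi = beta.ppf([0.025, 0.975], post_alpha, post_beta)
print(f"Posterior mean of p: {post_alpha / (post_alpha + post_beta):.3f}")
print(f"95% credible interval for p: [{lo:.3f}, {hi:.3f}]")
# The width of this interval is the epistemic part: more data narrows it.
```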
My opinion is that most such methods (Monte Carlo, for example) have high computational overhead (especially in higher dimensions), and this runs counter to the rapid analysis needed for investment decisions.
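As a rough illustration of that overhead, the sketch below times a plain Monte Carlo propagation through a placeholder model as the number of uncertain input dimensions grows; risk_model, the sample count and the exceedance threshold are invented for the example, not any particular investment model.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def risk_model(x):
    # Stand-in for an expensive risk model: a cheap nonlinear map of the inputs.
    return np.sum(x ** 2, axis=1)

n_samples = 20_000
for dim in (10, 100, 500):
    start = time.perf_counter()
    inputs = rng.normal(size=(n_samples, dim))    # uncertain inputs
    outputs = risk_model(inputs)
    threshold = dim + 3 * np.sqrt(2 * dim)        # rough tail cutoff
    p_exceed = np.mean(outputs > threshold)
    elapsed = time.perf_counter() - start
    print(f"dim={dim:4d}  P(exceed)~{p_exceed:.4f}  time={elapsed:.3f}s")
# Cost grows with dimension (and far more with a genuinely expensive model),
# while the Monte Carlo error only shrinks like 1/sqrt(n_samples).
```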
Applying mathematical procedures to epistemic uncertainty will not by itself solve the problem of decision making if the recipients of the information, i.e. the decision makers, are not ready to revise their risk hypotheses. For example, Siddhartha R. Dalal, Edward B. Fowlkes and Bruce Hoadley (1989), and Frederik Michel Dekking, Cornelis Kraaikamp, Hendrik Paul Lopuhaä and Ludolf Erwin Meester (2005), concluded that there was a high probability of 81% that the O-rings would be damaged in the conditions of the Space Shuttle accident. If the information that five out of six rings would be damaged, or that there was an 81% probability of O-ring damage at very low launch temperatures, had been shared at the pre-launch meeting, do we think that the statistical probability would have had the same effect on the NASA decision makers?
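As a sketch of where such a number could come from, here is an illustrative logistic-model calculation in the spirit of Dalal, Fowlkes and Hoadley: a per-O-ring damage probability as a function of launch temperature, extrapolated down to 31 F. The coefficients b0 and b1 below are assumed for illustration only; they are not the published fit.

```python
import numpy as np

b0, b1 = 5.0, -0.11   # hypothetical intercept and temperature slope (per deg F)

def p_damage(temp_f):
    """Illustrative per-O-ring damage probability at launch temperature temp_f."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * temp_f)))

for temp in (70, 53, 31):
    p = p_damage(temp)
    print(f"{temp:3d} F: per-ring damage prob {p:.2f}, "
          f"expected damaged rings ~{6 * p:.1f} of 6")
# With these illustrative coefficients the extrapolation to 31 F gives a damage
# probability in the 0.8 range, i.e. roughly five of the six field-joint O-rings,
# which is the kind of statement that could have been put on the table pre-launch.
```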