
A hypothetical example (hehe, hypothesis): assume we have enough observations and apply both a frequentist and a Bayesian model (e.g. a linear model with a Gaussian error distribution, and for the Bayesian model an uninformative prior to keep it rather vague). We look at the intervals, and both models produce the same intervals. Then, following [1], it would be similarly* questionable to suggest that the population value fell between those bounds, given that we know the two intervals are the same. Are both then equally "wrong"?

And do they actually quantify uncertainty, as both "want" to do (or am I wrong here?), since both really seem to want to make probabilistic statements about the population from the data, although one is about P(data|estimate) and the other about P(estimate|data). The data are certain, and the estimates are based on the data, so it seems certain that the estimate might approximate the population value (assuming a perfectly sampled population, and that this description makes sense), i.e. that it might take on a specified value (note that the confidence and credibility intervals have converged). Again, the data are certain; what is uncertain is what is not in the data. I am just curious what more statistically educated people think of this and how they would communicate it, as this seems to be hardly discussed (or that is my ignorance).
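To make the hypothetical concrete, here is a minimal sketch in Python (the sample size, true coefficients, and variable names are just illustrative choices of mine): an ordinary least-squares fit of a Gaussian linear model, compared with the Bayesian posterior under a flat, uninformative prior, where the 95% confidence interval and the 95% credible interval for the slope come out numerically identical.

# Sketch of the hypothetical: simulate data, fit a Gaussian linear model,
# and compare the frequentist 95% confidence interval with the flat-prior
# Bayesian 95% credible interval for the slope.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # true slope = 0.5 (illustrative)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS point estimate
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)                        # residual variance
cov = s2 * np.linalg.inv(X.T @ X)
se_slope = np.sqrt(cov[1, 1])

# Frequentist 95% confidence interval for the slope.
t = stats.t.ppf(0.975, df=n - 2)
ci = (beta_hat[1] - t * se_slope, beta_hat[1] + t * se_slope)

# Bayesian posterior under the flat prior p(beta, sigma^2) proportional to 1/sigma^2:
# the marginal posterior of the slope is a Student-t with the same centre and scale,
# so the equal-tailed 95% credible interval is numerically the same here.
cred = (beta_hat[1] - t * se_slope, beta_hat[1] + t * se_slope)

print("95% confidence interval:", ci)
print("95% credible interval:  ", cred)

So in this setup the two intervals coincide exactly (because of the flat prior), which is what prompts the question above: once they have converged, do the two reports of "uncertainty" still mean different things?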

Thank you in advance for your input.

*Not their exact words; I am recalling a part of the text from memory.

[1] Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E.-J. (2016). The Fallacy of Placing Confidence in Confidence Intervals. Psychonomic Bulletin & Review.
