It seems odd that the general population often pays attention to statistics, but apparently only to point estimates.  Variance is not given much attention, and bias doesn't fare well either.  Even the accuracy of official statistics reported by federal agencies - such as energy and employment statistics - often seems to get short shrift.

But there is one particular statistical application whose misuse I now see with appalling frequency: polls.  It is bad enough that many people pay attention to instant internet "polls," in which respondents simply find a place on the internet to "vote," often multiple times; that should be recognized as trash, though many may not recognize it as such.  Unfortunately, there is also a great deal of discussion by pundits who, looking at legitimate/scientific polls, debate what strategy to use when, say, there is a 4-point increase for some candidate, while a note in the corner of the graphic says "+/- 4.4%" (which is itself rather nebulous), and too often no one even seems to notice that it is there.  Differences among (legitimate) polls may be noted, which gives some indication of bias, but people often ignore that inconvenient "margin of error" in the corner of a graphic, and when they don't ignore it, they generally gloss over it, rarely spending the time needed to discuss what it means.  At least I've noticed this in US politics.  I also suspect selective use - that is, invoking the "margin of error" only when it supports someone's opinion.  Perhaps this is done better elsewhere.
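
To make the "4 points versus +/- 4.4%" point concrete, here is a minimal sketch in Python.  It assumes the 4-point figure is a change from an earlier poll, that both polls are independent simple random samples of equal size, that the reported margin of error applies to each poll, and the usual 95% normal approximation; the numbers are purely illustrative, taken from the example above.

import math

def moe_of_change(moe_poll_a, moe_poll_b):
    # Margin of error of the change between two independent polls:
    # standard errors add in quadrature, so the MOE of the difference
    # is sqrt(moe_a**2 + moe_b**2), larger than either poll's own MOE.
    return math.sqrt(moe_poll_a**2 + moe_poll_b**2)

observed_change = 4.0   # the reported "4 point increase"
moe_each_poll = 4.4     # the "+/- 4.4%" noted in the corner of the graphic

moe_change = moe_of_change(moe_each_poll, moe_each_poll)
print(f"MOE of the change: +/- {moe_change:.1f} points")                 # about +/- 6.2
print(f"Change larger than its margin of error? {observed_change > moe_change}")  # False

With +/- 4.4 points of sampling error on each poll, the change between the two polls carries roughly +/- 6.2 points of sampling error, so a 4-point movement cannot be distinguished from no movement at all.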


I saw a discussion speculating that a candidate decided to make a certain pronouncement in a certain US state because of a lead of a certain number of points in that state.  Then I noticed the relatively large "margin of error" that was shown on the graphic - and ignored - and wondered who, if anyone, had explained it to that candidate before the decision was made.  The decision was also based on judging which subdivisions (categories) of voters could be counted on, and to what extent votes lost by one candidate in one race might help another candidate of the same party in a different race.  That sounds like a risky 'calculation,' for which one should need good data, AND an idea of how good or bad the accuracy of such a statistic is.
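
As a rough illustration of why that is a risky calculation, here is another minimal sketch in Python, this time for the margin of error of a lead within a single poll.  It again assumes simple random sampling and a 95% normal approximation; the sample size of 500 and the 48%/44% shares are hypothetical, chosen only so that the per-share margin of error comes out near the +/- 4.4 points mentioned above.

import math

def moe_of_lead(p1, p2, n, z=1.96):
    # MOE of the lead (p1 - p2) when both shares come from the same sample.
    # The two shares are negatively correlated, so the variance of the
    # difference is (p1*(1 - p1) + p2*(1 - p2) + 2*p1*p2) / n.
    var_diff = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var_diff)

n = 500                 # hypothetical sample size
p1, p2 = 0.48, 0.44     # hypothetical shares: a 4-point lead

moe_per_share = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"MOE per share:   +/- {moe_per_share * 100:.1f} points")            # about +/- 4.4
print(f"MOE of the lead: +/- {moe_of_lead(p1, p2, n) * 100:.1f} points")   # about +/- 8.4

So the uncertainty on the lead is nearly double the figure printed in the corner of the graphic, and a 4-point lead is well within sampling noise.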

(Of course, with polls, you should also note that even a relatively accurate result at a given point in time could change greatly later, based on an unforeseen event.  Still, starting from an inaccurate earlier poll can only degrade good decision making.  Consistency in results, or a consistent trend, can be meaningful, as long as no new factor suddenly appears.)


In laboratory experiments, one may recognize the need for accuracy in results, but how often is this not given enough attention? 


What about other applications? 

Are most misuses of statistics due to deliberate causes, or to sloppiness and/or lack of expertise?  (In my experience at a statistical agency, and from general observation, my opinion is that both causes - often a mixture of the two - are very common.  However, I seem to have noticed more misunderstanding and wishful thinking than willful scheming; I hope that is the case.  I've even seen tremendous pressure to "dumb down" results so that, supposedly, a twelve-year-old could understand them.  How good can a decision be if the decision-maker functions at the level of a twelve-year-old?)

There must be a number of other applications of statistics where accuracy is often ignored.

Thoughts?  Examples?   -   Thank you. 
