In his book A Mathematician Reads the Newspaper (1995), John Allen Paulos maintains that “much economic and political commentary and forecast are fatuous nonsense”. Because of the inherent “interconnectedness” of the variables underlying such phenomena, he argues, “we should not expect to predict political or economic developments with any exactitude.”
The author points out that those interconnections can be modeled through the notion of a nonlinear dynamical system that is highly sensitive to minuscule variations in its initial conditions. Such systems “demonstrate a complex unpredictability” and “their trajectories in mathematical space are fractals”. We can rely on social forecasts, Paulos concludes, only if the predictions are short-term, deal with simple phenomena, and are just “hazy anticipations rather than precise assertions”.
Most of our personal decisions, however, are based on what we expect to happen in the future, expectations that we usually draw from predictions in the media. Moreover, governments, companies, banks and many other organizations use forecasts on hugely complicated political and economic matters to make decisions that often affect the lives of many millions of people.
In practice, “governments rely routinely and heavily on intuitive beliefs about high-stakes outcomes” (Mellers et al., “Psychological Strategies for Winning a Geopolitical Forecasting Tournament”, 2014). Yet those apparently intractable problems may be predictable with more accuracy than Paulos assumes.
In the recent book Superforecasting: The Art and Science of Prediction (2015), psychologist and researcher P. Tetlock and journalist D. Gardner explain the method used by a small subset of experts who excelled at predicting future events among thousands of participants in a research tournament sponsored by the US government (see Mellers et al.).
According to Tetlock and Gardner, “superforecasters” unpack the problem into components; establish a preliminary probability for a given outcome on the basis of how common it is within a broader class (the statisticians’ “base rate”); analyze the specifics of the particular case to adjust that initial probability; explore the similarities and differences between their views and those of others, paying special attention to prediction markets and other methods of extracting wisdom from crowds; express their judgment on a finely grained probability scale; and update their forecast in light of new information. A typical forecast reads: “There is a 6% chance that a country will leave the euro zone by the end of 2016” (see link below).
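The core of that workflow, starting from a base rate and then adjusting it as case-specific evidence comes in, can be sketched as a simple Bayesian update in odds form. This is only a toy illustration of the idea, not Tetlock and Gardner’s actual procedure, and the base rate and likelihood ratios below are invented numbers:

```python
def update_probability(probability, likelihood_ratio):
    """Bayesian update in odds form: convert the current probability
    to odds, multiply by the likelihood ratio of the new evidence,
    and convert back to a probability."""
    prior_odds = probability / (1 - probability)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical example: the outcome occurs in 15% of comparable
# historical cases (the base rate), then two pieces of case-specific
# evidence arrive, one favoring the outcome (LR = 3.0) and one
# weighing against it (LR = 0.8).
p = 0.15
for lr in (3.0, 0.8):
    p = update_probability(p, lr)
print(round(p, 3))  # prints 0.298
```

The odds form makes the adjustment step transparent: evidence with a likelihood ratio above 1 pushes the forecast up, below 1 pushes it down, and the order in which independent pieces of evidence arrive does not matter.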
A different but complementary view is described by L. Smith and N. Stern in “Uncertainty in science and its role in climate policy” (2011). Building on the economic ideas of F. Knight in Risk, Uncertainty and Profit (1921), they distinguish between “Knightian risk” (future outcomes for which decision-relevant probability statements can be provided) and “Knightian uncertainty” (outcomes for which probability statements are not possible, although qualitative statements about them can still be useful to policy makers). We do not have a clear scientific picture of what a 5 °C warmer world would look like, these researchers note by way of example, but science can be certain that the impacts would be huge even when it cannot quantify them. Communicating this fact may be useful if only because policy makers might otherwise erroneously conclude that adapting to the impacts of 5 °C would be straightforward.
Do you think that political and economic phenomena are predictable with some degree of accuracy?
https://www.washingtonpost.com/news/wonk/wp/2016/01/04/how-to-predict-the-future-better-than-everybody-else/