Saturday, May 28, 2011

Making Better Predictions

We are constantly listening to the predictions of others: friends, family, experts…

But the human ability to actually make accurate predictions is limited.  Within a complex system the number of variables is enormous, and it is very hard to separate the various inputs.  Small external inputs, the proverbial “flap of the butterfly’s wing,” can have exceedingly large effects on the timing and ultimate results.

Dan Gardner has a new book coming out, Future Babble, based on Philip Tetlock’s well-known book on political forecasters, Expert Political Judgment: How Good Is It? How Can We Know?.   They had a recent article in Forbes magazine that discussed how to get better results from your predictions.

Philip Tetlock and Dan Gardner, March 17, 2011
[A]s natural science has revealed, our ability to predict is limited by the nature of complex systems. Weather forecasts, for example, are quite accurate a day or two out. Three or four days out, they are less accurate. Beyond a week, we might as well flip a coin. As scientists learn more about weather, and computing power and sophistication grow, this forecasting horizon may be pushed out somewhat. But there will always be a point beyond which meteorologists cannot see, even in theory.
Prediction horizons vary, but the general idea is the same whether experts are trying to forecast the weather, economies, elections or social unrest: No matter how brilliant the analysts may be, no matter how abundant the resources at their disposal, their vision can only go so far.
A second point is even more humbling: People are really bad at predicting the future. This very much includes experts. In the largest and best-known test of the accuracy of expert predictions, a study reported in Philip Tetlock's book Expert Political Judgment: How Good Is It? How Can We Know?, the average expert was found to be only slightly more accurate than a dart-throwing chimpanzee. Many experts would have done better if they had made random guesses. And even the best forecasters were beaten by arbitrary rules such as "always predict no change" (a rule that worked very well for the first 30 years of Hosni Mubarak's regime).
It's easy to conclude that all hope is lost. But that would be wrong. In the Tetlock study, what separated those with modest but significant predictive ability from the utterly hopeless was their style of thinking. Experts who had one big idea they were certain would reveal what was to come were handily beaten by those who used diverse information and analytical models, were comfortable with complexity and uncertainty and kept their confidence in check.
What this and much other research suggests is that the right training, tools and organization can make people better forecasters. Their vision will not be perfect, so they won't see to the prediction horizon. But they will see closer to the horizon. And their vision will be as good as anyone's can reasonably be expected to be.
The people who are riding the one-trick pony do not do very well.  The world has gone through many enormous changes, yet the doom-predicting crowd usually mistimes these events or gets them completely wrong.  Often a real disaster preempts the predicted one.  Other times the conditions that created the concern work themselves out.
Being flexible is important.  It is important because there is always more noise in the data than you first suspect.

Predictability is elusive because randomness holds much more sway than most of us would like to believe. Drawing on his own research, Watts shows that messages on Twitter don't spread through a predictable set of influential hubs. Similarly, when you ask large numbers of people to relay an e-mail to a stranger through someone they know, there turn out to be no star intermediaries through whom most e-mails find their way. "When we hear about a large forest fire, we don't think that there must have been anything special about the spark that started it," Watts wrote. "Yet when we see something special happen in the social world, we are instantly drawn to the idea that whoever started it must have been special also."
Another way to separate the wheat from the chaff is to distinguish between trends and causes that have a very broad base of inputs (population growth) and those that are more transitory (which political party is in charge).  Events in which many people have an input (economic outcomes) are going to be harder to control, with differences distinguished by percentage points, than issues where only a few people can make major decisions (war).
Short-term trends may vary greatly, but long-term trends are very difficult to reverse.  The population cycles of Malthus required an industrial revolution to break out of.  Our current trend of a declining rate of technological advancement will require something equally profound to break.
Long Term Trends
1. Increases in population
2. Increase in consumption per person
3. Slowing growth in new fossil fuel discoveries
4. Slowing growth rate of technological advancement
5. Slowing productivity increases by leading-edge economies
6. Rapid increases in productivity by second-tier economies
7. Increase (from a very low point) in solar energy utilization
8. Decline in absolute terms of available water
9. Increase in the number of parties with access to nuclear warheads
10. Increase in micro-manufacturing
11. Increases in communication technology
12. Increase in surveillance technology


PioneerPreppy said...

I always look at it as a car race. Each long term trend is a car like peak oil, economics, civil rights, etc. The black swans are hurricanes, nuclear reactors melting, terrorist bombs, etc. Some cars can throw out their own black swans while others are external.

Hey I guess it's more of a "Star Wars" pod race :)

russell1200 said...

Not a bad analogy. I would only add that it is a pod race that never ends and your lead/victory is only relevant to a given moment.

Don't forget the tribesmen (or whatever they were) on the sidelines taking pot shots.