Can We Predict the Future More Accurately?

Is there a science behind predicting uncertain events in the future, or is it intuitive? Do you have the right traits to be a "superforecaster"?

Getting predictions wrong can be costly.  It's not just weather or loan defaults that get predicted.  In 2002 the intelligence community officially concluded that Iraq had weapons of mass destruction, and that conclusion was part of the rationale for going to war.  It was wrong.  How can similar misjudgments be avoided in the future?

A recent article in Harvard Business Review, by Paul Schoemaker and Philip Tetlock, describes how organizations (and you) can become better at judging the likelihood of uncertain events.  While they write about commercial companies' use of prediction tools, their insights apply to government as well.

Background.  In the wake of the Iraq intelligence failure, the Intelligence Advanced Research Projects Activity (IARPA) set out in 2011 to determine whether it was possible to improve predictions of uncertain events.  It selected five academic research teams to compete in a multi-year prediction tournament.  The initiative ran from 2011 to 2015 and "recruited more than 25,000 forecasters who made well over a million predictions."  Forecasters were challenged to respond to questions posed to them, e.g., whether Greece would exit the Eurozone, or how likely a financial panic in China was.

Some teams used computer algorithms, others relied on expert knowledge, and still others turned to the "wisdom of crowds."  They knew they were competing with one another.  They didn't know they were also competing against intelligence analysts with access to classified information.  Who was better?  What did the winners do differently?

A Consistent Winner.  One team, the Good Judgment Project, was the consistent winner, offering "a 50+% reduction in error compared to the current state-of-the-art," according to IARPA.  This was judged to be "the largest improvement in judgmental forecasting accuracy" ever observed in the public policy literature.  In fact, its forecasters were, on average, about 30 percent more accurate than intelligence analysts with access to secret data.  The results were so stark that after two years, IARPA cancelled its contracts with the other teams and focused on the approach the Good Judgment Project was using.

The Good Judgment Project was a team led by several academics from the Wharton School at the University of Pennsylvania, including Tetlock and Schoemaker.  They said: "The goal was to determine whether some people are naturally better than others at prediction and whether prediction performance could be enhanced."  Over the four-year span of the project, they demonstrated that the answer to both questions is yes, and that performance could be improved by applying several techniques.

In their article, they write that their approach focuses “on improving individuals’ forecasting ability through training; using teams to boost accuracy; and tracking prediction performance and providing rapid feedback.”

Training.  Schoemaker and Tetlock say: "training in reasoning and debiasing can reliably strengthen a firm's forecasting competence.  The Good Judgment Project demonstrated that as little as one hour of training improved forecasting accuracy by about 14% over the course of a year."  In their project, they found that training in basic probability concepts, such as regression to the mean, Bayesian revision, and statistical probability, was an important tool for forecasters.  But it also required "that forecasts include a precise definition of what is to be predicted . . . and the timeframe involved."
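
To make the Bayesian-revision idea concrete, here is a minimal sketch in Python.  The scenario and numbers are invented for illustration, not taken from the Good Judgment Project; they simply show how a forecaster might revise a probability when new evidence arrives.

```python
# A minimal sketch of Bayesian revision, one of the probability concepts the
# training covers.  The scenario and numbers below are hypothetical.

def bayesian_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(event | evidence) using Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Prior forecast: a 20% chance the event occurs this year.
prior = 0.20

# New evidence arrives that the forecaster judges 60% likely to appear if the
# event is coming, but only 20% likely if it is not.
posterior = bayesian_update(prior, p_evidence_if_true=0.60, p_evidence_if_false=0.20)

print(f"Revised forecast: {posterior:.0%}")  # prints "Revised forecast: 43%"
```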

They say that successful forecasting requires an understanding of the role of cognitive biases, such as the tendency to look for information that confirms existing views, noting: "it's a tall order to debias human judgment."  Organizations, they advise, have to "train beginners to watch out for confirmation bias that can create false confidence, and to give due weight to evidence that challenges their conclusions."  They add: ". . . don't look at problems in isolation . . . understand the psychological factors [because you] may see patterns in the data that have no statistical basis."  How can you develop this skill?  They give the example of a financial trading company that trains new employees in the basics of statistical reasoning and cognitive traps by requiring them to play a lot of poker, on the job!

Teams. Schoemaker and Tetlock found that “Assembling forecasters into teams is an effective way to improve forecasts.  In the Good Judgment Project, several hundred forecasters were randomly assigned to work alone and several hundred to work collaboratively in teams. . . the forecasters working in teams outperformed those who worked alone.”

They said that having the right composition of talent on the teams is important: natural forecasters are "cautious, humble, open-minded, analytical – and good with numbers," and teams benefit from being intellectually diverse.  Over the course of the project, they found that "when the best forecasters were culled, over successive rounds, into an elite group of superforecasters, their predictions were nearly twice as accurate as those made by untrained forecasters."

They also note that team members “must trust one another and trust that leadership will defend their work and protect their jobs and reputation.  Few things chill a forecasting team faster than a sense that its conclusions could threaten the team itself.”

Tracking.  Finally, Schoemaker and Tetlock say that measuring, reporting, and adjusting are key elements of successful forecasting.  They write: "Bridge players, internal auditors, and oil geologists . . . shine at prediction thanks in part to robust feedback and incentives for improvement."  But the statistics aren't enough.  You can calculate a Brier score (the average squared difference between the probabilities you forecast and what actually happened; lower is better) to measure the accuracy of your predictions and track it over time, but knowing your Brier score won't necessarily improve your performance.  The authors say "you have to track the process it used as well."  They recommend that organizations "should systematically collect real-time accounts of how their top teams made judgments, keeping records of assumptions made, data used, experts consulted, external events, and so on."
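
For readers who want to see the arithmetic, here is a minimal sketch of Brier scoring in Python.  It assumes the simple binary (yes/no) form of the score; some formulations, including those used in forecasting tournaments, sum squared errors across all answer categories instead.  The forecasts and outcomes are invented for illustration.

```python
# A minimal sketch of Brier scoring for binary (yes/no) questions.
# The forecasts and outcomes below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Average squared gap between forecast probabilities and what happened.

    forecasts: probabilities assigned to "yes" (between 0.0 and 1.0)
    outcomes:  1 if the event happened, 0 if it did not
    Lower is better: 0.0 is perfect; always saying 50% earns 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# One forecaster's probabilities on four resolved questions, and the outcomes.
forecasts = [0.80, 0.30, 0.95, 0.60]
outcomes = [1, 0, 1, 0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # prints 0.123
```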

However, the authors note, these three techniques won't deliver value on their own.  Organizations, they say, "will capture this advantage only if respected leaders champion the effort."  But what about improving the abilities of individuals to become forecasters?  The authors have created a public tournament, Good Judgment Open, where you can take a test to see if you have the traits shared by the best-performing forecasters.  Try it!

IBM Center for The Business of Government

Graphic Credit:  KROMKRATHOG via FreeDigitalPhotos.net
