Super Forecasting
Essay by bonnielass • July 23, 2016 • Presentation or Speech • 2,459 Words (10 Pages)
Even though most people can improve their thinking and forecasting, there has always been resistance to change based on what Tetlock and Gardner call “illusions of knowledge.” Intuition is one example. Intuition is a form of pattern recognition that works in settings with lots of “valid cues.” But intuition is notoriously unreliable in unstable or nonlinear environments. An overreliance on intuition leads to poor decisions.
Another case is insufficient self-reflection. This is in part prompted by a module in our brain that seeks to rapidly close cause and effect loops.
We show you an outcome and your mind quickly comes up with an explanation for it. As Tetlock and Gardner write, “we move too fast from confusion and uncertainty to a clear and confident conclusion without spending any time in between.” This is related to the concept that Daniel Kahneman, an eminent psychologist, calls thinking fast.
So what is the source of good forecasting? Tetlock and his colleagues found four drivers behind the success of the superforecasters:

Find the right people. You get a 10-15 percent boost from screening forecasters on fluid intelligence and active open-mindedness.

Manage interaction. You get a 10-20 percent enhancement by allowing the forecasters to work collaboratively in teams or competitively in prediction markets.

Train effectively. Cognitive debiasing exercises lift results by 10 percent.

Overweight elite forecasters or extremize estimates. Results improve by 15-30 percent if you give more weight to better forecasters and make forecasts more extreme to compensate for the conservatism of aggregated forecasts.

Tetlock and Gardner offer three reasons that forecasters underreact to new information. To start, sometimes we are so busy that novel information simply escapes our attention. We also may take our eye off the original question and dwell on a simpler or slightly different one, so the new information may not appear relevant to the question in our minds even though it is relevant to the question at hand. Finally, and probably most likely, is belief perseverance. This is typically accompanied by confirmation bias: actively seeking information that supports our view and dismissing information that counters it.

But it is also possible to overreact to new information. One reason is that we take irrelevant information into account: people may base their initial estimate on solid reasoning but subsequently place weight on additional information that has no bearing on the issue at hand. A second reason for overreaction is a lack of commitment. There is no easy way to correctly update views, but we know that superforecasters spend a lot of time thinking about how to do it well.

Superforecasters have an above-average awareness of these biases and try to manage them. Both the thinking styles and forecasting methods of superforecasters help address bias.
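The “extremize estimates” driver mentioned above pushes an aggregated probability away from 0.5 to offset the conservatism of averaged forecasts. One standard transform from the aggregation literature can sketch the idea; the function name and the exponent value here are illustrative, not the GJP’s exact method:

```python
def extremize(p, a=2.0):
    """Push probability p away from 0.5; a > 1 controls the strength.

    Illustrative transform: p^a / (p^a + (1-p)^a).
    With a = 1 the probability is unchanged; larger a is more aggressive.
    """
    return p ** a / (p ** a + (1 - p) ** a)

# A conservative aggregate of 0.7 becomes a bolder ~0.84,
# while 0.5 (maximum uncertainty) stays at 0.5.
bold = extremize(0.7)
unchanged = extremize(0.5)
```

Note that the transform is symmetric: probabilities above 0.5 are pushed up, probabilities below 0.5 are pushed down, and genuine fifty-fifty forecasts are left alone.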
The scientists measure these improvements using a Brier score (the appendix provides more detail on the calculation). A Brier score reflects the difference between a forecast and the outcome. Like golf scores, lower is better. There are a couple of ways to calculate Brier scores, but a common scale runs from zero to 2.0. Zero means that the forecast is spot on, 0.50 is equivalent to random guessing, and 2.0 means that the forecast is completely wrong.
By this scoring, a person who predicts a 55 percent probability of an outcome that happens receives a Brier score of 0.405. A subsequent forecast of a 65 percent probability of an event that occurs gets a Brier score of 0.245, nearly a 40 percent improvement.
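These figures come from the two-outcome form of the score on the zero-to-2.0 scale: the squared error is summed over both possible outcomes, “it happens” and “it doesn’t.” A minimal sketch in Python (the function name is ours, for illustration):

```python
def brier_score(p_event, event_occurred):
    """Two-outcome Brier score on the 0-2.0 scale used in the book.

    p_event: forecast probability that the event happens.
    event_occurred: True if the event actually happened.
    """
    outcome = 1.0 if event_occurred else 0.0
    # Sum the squared error over both outcomes
    return (p_event - outcome) ** 2 + ((1 - p_event) - (1 - outcome)) ** 2

# The examples from the text:
score_55 = brier_score(0.55, True)  # 0.405
score_65 = brier_score(0.65, True)  # 0.245
```

The improvement cited above is (0.405 − 0.245) / 0.405 ≈ 39.5 percent, the “nearly 40 percent” in the text.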
But Tetlock and Gardner offer a simple formula that is at the core of the whole process: “Forecast, measure, and revise: it is the surest path to seeing better.”
Philosophical Outlook.
Superforecasters tend to be comfortable with a sense of doubt. Scientists sometimes sense that they know the truth. Good thinkers can feel the same way. “But they know they must set that feeling aside and replace it with finely measured degrees of doubt,” write Tetlock and Gardner, “doubt that can be reduced (although never to zero) by better evidence from better studies.” Recall that our minds are keen to assign causality. We want the case to be closed. But as Daniel Kahneman says, “It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
Superforecasters are also humble, but not in the sense of feeling unworthy. Rather, their humility comes from the recognition that reality is profoundly complex. Indeed, it is possible to think highly of yourself and to be intellectually humble at the same time. Tetlock and Gardner note that, “Intellectual humility compels the careful reflection necessary for good judgment; confidence in one’s abilities inspires determined action.”
It is common, and often soothing, to attribute outcomes to fate. Superforecasters aren’t big believers in fate. On a one-to-nine “fate score,” where one is a total rejection of fate and nine is complete belief in it, the average adult American falls near the middle. The mean score for a student at the University of Pennsylvania is a little lower, the regular forecasters are below that, and the superforecasters are the lowest of these groups. Superforecasters don’t think that what happened had to happen.
Ability and Thinking Style.
The first point is that superforecasters are not geniuses. The researchers tested the fluid and crystallized intelligence of all the GJP volunteers. Fluid intelligence is the ability to think logically and to solve novel problems; it doesn’t rely on accumulated knowledge. Crystallized intelligence is exactly what it sounds like: your collection of skills, facts, and wisdom, and your ability to use them when you need to. Those who participated in the GJP were not a representative sample of the population; these are people who raise their hands to make lots of forecasts in return for a $250 gift certificate from Amazon.com. The regular forecasters scored higher than about 70 percent of the population on intelligence tests. That translates roughly into an average intelligence quotient (IQ) of 108-110, where the average of the population is 100. The superforecasters scored higher than about 80 percent of the population, or an average IQ range of 112-114. There is a much bigger gap between the overall population and regular forecasters than there is between those forecasters and the superforecasters.

Keith Stanovich, professor emeritus of applied psychology and human development at the University of Toronto, distinguishes between IQ and what he calls “RQ,” or rationality quotient. The correlation coefficient between the two is a relatively low .20 to .35. Those with high RQs exhibit adaptive behavioral acts, efficient behavioral regulation, sensible goal prioritization, reflectivity, and the proper treatment of evidence. These qualities are very consistent with those of the superforecasters.

Jonathan Baron, a professor of psychology at the University of Pennsylvania and a colleague of Tetlock’s, coined the term “active open-mindedness.” Those who are actively open-minded seek out views that differ from their own and consider them carefully. Tetlock and Gardner suggest that if they had to reduce superforecasting to a bumper sticker, it would read, “Beliefs are hypotheses to be tested, not treasures to be guarded.”
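The percentile-to-IQ conversions above can be checked directly, assuming the standard IQ scale (normally distributed with mean 100 and standard deviation 15). A quick sketch in Python:

```python
from statistics import NormalDist

# Standard IQ scale: normal distribution, mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

# Scoring above ~70 percent of the population (regular forecasters)
regular_iq = iq.inv_cdf(0.70)  # roughly 108

# Scoring above ~80 percent of the population (superforecasters)
super_iq = iq.inv_cdf(0.80)    # roughly 113
```

Both values land inside the 108-110 and 112-114 ranges cited in the text.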
...
...