Will the United States Olympic Committee announce a boycott of the Olympics in 2022 before January 1st?
I have 10 seconds to answer, in a virtual workshop that will test my ability to forecast the future. I rack my brain. Simone Biles? Not relevant. Olympic boycotts... yes, Moscow in 1980. The Uyghurs? Time's up; I'll say 20 percent. The next question appears: Will the U.S. regulate cryptocurrencies on the stock market by January 23, 2023? Then: Will China attempt to take Taiwan in the next five years? What is the surface area of the Mediterranean Sea in square kilometers?
Warren Hatch, who is co-leading this workshop, observes that none of us woke up this morning thinking we'd have to answer questions like these.
A guy from the Department of Defense is taking the training with me. Over the next two days, we'll try to shed our cognitive biases and see whether any of us have the chops to predict things professionally. I'm definitely out, but I think a couple of people in this group just might. They would be a step closer to an elite status: "superforecaster," membership in a global network of über-predictors who work with the company that arranged this workshop in the first place. It's called Good Judgment, and Hatch is its CEO.
While I'm sweating it out at my laptop for Good Judgment this September, experts are answering similar questions in the real world. The Federal Reserve chairman predicts that inflation will likely remain elevated in the coming months. A Yale economist predicts that home prices will decline in the coming years. Anthony Fauci says a Northeast surge of Delta is possible. Notice anything? As a culture, we've come to accept phrases like "likely," "possible," and "in the coming years." But what do they mean, concretely? Is "likely" a majority chance or a minority one? Does "the coming years" mean 2022, or 2023, or beyond?
We could use this level of specificity in business, but we don't demand it from our experts, or from ourselves. Good Judgment's work shows that vague hunches can be quantified with scalpel-like precision, using nothing more than the human brain.
Good Judgment's superforecasters have an Olympian level of skill, but we can all get better. We just need to train.
Unlike many companies that begin life in a dark bar, Good Judgment was born in the belly of the U.S. government, and its story goes back to 9/11. After analysts appeared to miss signals of the terrorist attack, the Intelligence Advanced Research Projects Activity, or IARPA, was created in 2006 with the goal of improving American intelligence. By 2010 the intelligence community was running an internal, classified prediction market in which employees with top-secret clearance could trade on whether an event would happen. But IARPA wondered: Was there a better way to use the wisdom of the crowd to anticipate what was coming?
In 2011, it launched a forecasting tournament for the public. Over the next four years, thousands of ordinary Joes and Janes answered about 500 questions, like: Will North Korea launch a new multistage missile before May 10, 2014? Will Robert Mugabe still be president of Zimbabwe on September 30, 2011? Competing teams that failed to reach certain accuracy goals were eliminated, and after the first two years only one team remained: Good Judgment, led by researchers Philip Tetlock and Barbara Mellers.
Tetlock had already been studying the science of prediction, curious why so many foreign policy experts had failed to foresee the fate of the Soviet Union. His research famously found the average expert to be roughly as accurate as a dart-throwing chimpanzee. (He doesn't mind the joke, though it's not quite how he put it.) He then developed a more systematic approach to prediction, so he could identify the kinds of people who are good at it. For the tournament, he and Mellers recruited 3,200 volunteers and identified the top 2 percent of performers, whom they called superforecasters. Hatch, a Wall Street guy who had left Morgan Stanley to set up his own small investment firm, was among that group.
By the tournament's fourth year, the Good Judgment team was 50 percent more accurate than a control group recruited from the public. Steven Rieber, an IARPA program manager, says the intelligence community began using a training guide the researchers put together for its own analysts. The results were not what he expected. "The fact that there are people who have unusual skill in making accurate forecasts came as a surprise to me," he says, as did the finding that ordinary people can be trained to become more accurate in their predictions.
The government wasn't the only one to see an opportunity. As the tournament wound down, Good Judgment was transformed into a commercial forecasting company, with plans to put its elite superforecasters to work answering clients' questions about the future. Hatch was asked to help run it. He ran his own predictions and decided it was a good bet.
How confident are you in what you know? Most people are too confident, which is why Good Judgment tests for overconfidence when evaluating whether someone has the skills to be a superforecaster.
To see what that test looks like, another member of the Entrepreneur team, Jason Feifer, submits to it with Hatch.
"What year was Gandhi born?" Hatch asks. He instructs Feifer to answer with a range: the earliest and the latest year Gandhi could have been born, a range Feifer is confident contains the answer.
Feifer has no idea. "I'm going to say between 1940 and 1955."
Gandhi, Hatch informs him, was born in 1869.
"I don't know anything about Gandhi, apparently," Feifer says, embarrassed by his lack of knowledge.
Hatch tells him that doesn't matter. The real point of the exercise, he says, is that Feifer picked a narrow range despite not knowing the answer. He could have said, "Gandhi was born between 1600 and 1980." Instead, he was overconfident: Unwilling to consider how much he didn't know, he narrowed his options and made himself less likely to be right. Overconfidence, Hatch says, leads to bad predictions.
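Hatch's exercise can be scored. Here is a minimal sketch (the sample questions and numbers are invented for illustration): collect a batch of 90 percent confidence ranges, then check how often the true answer actually lands inside them. Well-calibrated ranges should contain the truth about nine times out of ten; a much lower hit rate is the signature of overconfidence.

```python
# Score a batch of interval forecasts: each entry is (low, high, truth).
# These sample answers are hypothetical, in the spirit of Hatch's quiz.

def hit_rate(forecasts):
    """Fraction of ranges that actually contain the true answer."""
    hits = sum(1 for low, high, truth in forecasts if low <= truth <= high)
    return hits / len(forecasts)

answers = [
    (1940, 1955, 1869),                 # Gandhi's birth year (truth: 1869)
    (3_000_000, 3_500_000, 2_500_000),  # Mediterranean area in km^2 (~2.5M)
    (40, 45, 48),                       # some numeric trivia question
]

print(hit_rate(answers))  # 0.0: every "confident" range missed the truth
```

If these were meant as 90 percent ranges, a hit rate near zero says they were far too narrow; widening them until they capture the truth about 90 percent of the time is exactly the correction Hatch is teaching.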
Outside of academia, terms like "90 percent confident" and "67 percent probable" may seem arcane. But the world is filled with uncertainty, Hatch argues, and instead of dealing with it by guessing, you should hold yourself accountable with numbers. Why? Because the process forces you to pay attention to detail, cast around for good information, and make better decisions. It also requires a shift in thinking: If you say you're only 67 percent confident in your answer, you're acknowledging you might fail, and creating a window for yourself to learn more.
When Good Judgment's superforecasters take on a client's question, they push outside their own bubbles and take time to understand other people's experiences and opinions. Scattered around the world, many of them retired or doing this work on the side, they often bring in unusual bits of data from wherever they are. Paul Theron, an investment manager in South Africa, once tracked down a Muslim Brotherhood spokesman to get the scoop on a question about Egypt. JuliAnn Blam, an American superforecaster who has lived in China producing theme park attractions with her company, says not everything is in the press: "You have to read between the lines in China, because they can't really tell you."
They also flip questions, turning "Is it a good time to do a capital raise?" into "Is it a bad time to do a capital raise?" to see the full picture. And they constantly update their forecasts as new information arrives. In Tetlock's research, a person's degree of commitment to belief updating and self-improvement was the single best predictor of becoming a superforecaster, roughly three times more powerful than intelligence.
Back at the workshop I'm taking, a former U.S. diplomat asks us to imagine being at the royal wedding. He's teaching one of the core tactics of good prediction: starting with the base rate.
We imagine a fellow guest asking us: Will the happy bride and groom stay married? Of course, we think. A hundred percent! You can see it in the couple's eyes, there's little Charlotte with the flowers, we can already picture their kids. Koehler stops us there. Our minds love a good story, he says, but stories can derail a forecast. Start instead with the base rate: The divorce rate in the U.S. is reported to run as high as 50 percent. Then consider the particulars. Maybe it matters who Prince Harry is; maybe it matters that they left Buckingham Palace. All he's asking is that we think about those things second. People who start with the outside view and only then weigh the particulars of the case turn out to be about 10 percent more accurate.
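The outside-view habit can be put in plainly numeric terms. This is a toy sketch of what Koehler is prescribing, with invented adjustment values: anchor on the base rate first, then apply small, case-specific nudges, instead of starting from the vivid story and working backward.

```python
# Toy outside-view forecast: begin at the base rate, then nudge for the
# particulars of the case. All adjustment values here are invented.

def outside_in_forecast(base_rate, adjustments):
    """Anchor on the base rate, apply case-specific nudges, clamp to [0, 1]."""
    p = base_rate + sum(adjustments)
    return min(max(p, 0.0), 1.0)

# Will the royal couple stay married?
base_rate = 0.50        # divorce rates reportedly run as high as 50%
nudges = [0.10, -0.05]  # hypothetical: wealth and support (+), press scrutiny (-)
print(outside_in_forecast(base_rate, nudges))  # roughly 0.55, not 1.0
```

The point isn't the arithmetic; it's the ordering. Starting at 0.50 and adjusting makes the storyteller's runaway certainty (the 100 percent answer) impossible without explicit, inspectable reasons.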
I challenge this point with Tetlock, since he's the one who did the science. Starting with the base rate makes sense for a forecaster, I say, but few people would start a business if they thought that way, and nobody would get married. Launching ambitious projects that others expect to fail can require ignoring the odds. Tetlock concedes it's a great point. Success requires inspiring people, and it's hard to inspire people with a litany of negatives. Charisma is linked to overconfidence; so, unfortunately, is disaster. His advice: Think like a superforecaster in private, and project confidence in public.
Hatch is at his desk in Good Judgment's New York office. It has been a busy morning, but he has still made time for his daily crossword. He isn't doing it just for fun: Crosswords keep his pattern recognition sharp, an important skill for superforecasters. Predicting, after all, is about seeing what the picture might be before everyone else does.
Building this company has tested that skill.
How do you make money from the ability to find, train, and coordinate brilliant minds? Good Judgment started with workshops that teach its prediction tactics. It also runs Good Judgment Open, a free site where anyone can mingle with superforecasters and try their hand at predictions, which doubles as a recruiting ground. The bigger question has been how to leverage the actual predictions of its superforecaster network, now about 170 active members, in ways clients will actually pay for. Hatch says they've had their fair share of bloopers there.
Many potential clients in the financial, legal, and government worlds believe they already have the best experts making the best predictions. Why pay a bunch of people picked up off the internet? And superforecasters aren't always right: The group gave Clinton an 80 percent chance of winning. But the superforecasters keep winning the government's tournaments, and Good Judgment's predictions about COVID-19 proved correct.
The first hints of that came at a September 2019 workshop for a Canadian financial firm, where participants practiced a critical forecasting exercise meant to anticipate surprises: You think an event is going to go one way; before you commit to the prediction, tell the story of why it went the other way instead. Asked to imagine an event that would change their forecast on China's economy, someone in the group came up with a scenario much like a pandemic. When COVID started showing up in the headlines, Hatch says, they were better equipped to deal with it.
In January 2020, early chatter about COVID-19 on Good Judgment's platform led Blam to turn down another lucrative three-year theme park job in China. "Everyone knew it was going to be bad," she says. "I didn't want to get stuck over there." Hatch and his team, meanwhile, realized that people were desperate for the kind of insight Good Judgment could provide. The company put its elite team to work forecasting everything from caseload levels to vaccine timing, and financial firms began referencing its forecasts in their work. The pandemic, Hatch says, put them on Broadway, even if they were in a small theater.
Good Judgment then launched FutureFirst, a $20,000-a-year subscription service that lets members vote each week on the questions they want forecasts on, with premium options available. By fall, Hatch says, the product was generating a third of the company's revenue. He has plenty of other ideas too, including commercializing Delphineo, the collaboration platform built for the workshops (and named by the crowd using the tool itself). With each idea, Hatch asks his team: What would success look like? What would failure look like? Then he puts probabilities on each, so he's primed to spot signs of risk and opportunity.
A lot of this, he says, comes down to a simple rule: Let's not get surprised, good or bad.
As it grows, Good Judgment has to predict not only what will happen with its own business but also the future of the forecasting business at large, because that will change as well.
"Machines already dominate prediction in all Big Data settings but struggle as the data get sparser and require more qualitative analysis," says Tetlock, whose research launched Good Judgment in the first place and who still enjoys engaging on the more challenging client cases while continuing his work at Wharton. The problems of the next 20 years, he expects, will be handled by human-machine hybrids. And expect the status hierarchy to continue stonewalling efforts to introduce scorekeeping, especially in government but in many businesses as well.
One business, though, is bucking that trend, and it could mean good things for Hatch.
David Barrosse, founder and CEO of the global policy analysis firm Capstone, was struck when he picked up a copy of Superforecasting. "It has always stuck out to me that in the global securities research industry, which is a multibillion-dollar industry and covers every investment bank all over the globe, not one of them focuses on the accuracy of their predictions," he says. "They don't keep track of it. They don't talk about it. Ninety-nine percent of them won't put a number on it."
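The scorekeeping Barrosse describes is not exotic. One standard accuracy measure in forecasting tournaments is the Brier score; the track records below are invented for illustration. Each forecast is a probability and each outcome is 1 or 0. Lower is better: 0.0 is perfect, a perpetual 50-50 hedger earns 0.25, and confident misses are punished hardest.

```python
# Brier score: mean squared distance between probabilities and outcomes.
# forecasts is a list of (probability, outcome) pairs; outcome is 0 or 1.

def brier_score(forecasts):
    """Lower is better; 0.0 is a perfect track record."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical track records: an analyst who "won't put a number on it"
# (effectively always 50%) vs. one who commits to calibrated probabilities.
hedger = [(0.5, 1), (0.5, 0), (0.5, 1)]
scorer = [(0.8, 1), (0.3, 0), (0.9, 1)]

print(brier_score(hedger))  # 0.25
print(brier_score(scorer))  # lower score -> more accurate record
```

A firm that logged every analyst call this way would have exactly the accuracy ledger Barrosse says the securities research industry lacks.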
Barrosse passed the book around to his employees, then sent them to Good Judgment's workshops to get the ideas into the firm's bloodstream. Then he wondered: What would it look like to overhaul the prediction systems of both Capstone and its clients? Last year he hired Good Judgment to design a training for the firm's analysts. There was a lot of resistance at first, says Cordell Eddings, who supervised the project, but the training gave people the skills to do it right, and buy-in spread across the firm.
Convincing clients to change their own prediction systems has been more difficult; some think the numbers are bullshit. How can you possibly know it's 67 percent? But Eddings sees the skepticism as an opening: a chance to talk about, say, the 40 percent prediction the firm started with and why it has moved. Capstone debated internally whether updating forecasts this way would make it look like it was bending with the wind. Instead, the numbers let the firm show clients how things are changing based on information coming in real time. That is a realistic prediction, and offering a specific probability within a specific time frame beats a vague assurance.
Barrosse now sees all of this as a competitive advantage. He's more than 67 percent sure of it.