It is not hard to think of recent events that have taken experts by surprise. From Donald Trump's success in the Republican presidential primaries to Jeremy Corbyn's election as leader of Britain's Labour Party, the last year offers plenty of examples of insiders getting it wrong. Pollsters, regulators, intelligence experts and, of course, economists have suffered reputational setbacks in recent years as the future failed to conform to their theories and expectations.

The financial crisis is perhaps the most egregious example. It was, in part, a product of errors made by smart, well-informed players. It has provided a rich vein of material for the economists and psychologists seeking to understand why insiders so often make avoidable mistakes.

One of the leading academics in this field is Philip Tetlock, professor of psychology and political science at the University of Pennsylvania's Wharton School. Over the twenty years to 2003, Professor Tetlock measured the forecasting abilities of 284 experts, including government officials, academics, journalists and economists. He famously concluded that the average expert was "roughly as accurate as a dart-throwing chimpanzee" and noted that the individuals most widely featured in the media were especially poor forecasters.

In his latest book, Superforecasting: The Art and Science of Prediction, published in 2015, Tetlock sets out how forecasters can improve their accuracy. His conclusions are essential reading for anyone who wants to improve the quality of their decision-making, not just expert forecasters.

The book draws on a second forecasting tournament Tetlock ran, this time with the US intelligence agency the Intelligence Advanced Research Projects Activity (IARPA), between 2011 and 2015. (The willingness of the US intelligence community to embrace new approaches to forecasting is refreshing and, as the West's invasion of Iraq in 2003 demonstrated, desperately needed.)

Tetlock assembled a group of more than 2,000 curious, non-expert volunteers under the banner of the Good Judgement Project. Working in teams and individually, the volunteers were asked to forecast the likelihood of various events of the sort intelligence analysts try to predict every day ("Will Saudi Arabia agree to OPEC production cuts by the end of 2014?" was one example).

The results were astonishing. The Good Judgement Project's predictions were 60% more accurate in the first year than those of a control group of intelligence analysts. Results in the second year were even better, with an almost 80% improvement over the control group.

Such was the margin of accuracy of Tetlock's group over the competition that, two years into a planned five-year tournament, IARPA dropped the other teams, including those from MIT and the University of Michigan.

Tetlock found that the most accurate forecasts were made by 2% of his volunteers – a small group of so-called superforecasters. Crucially, it was the way these people made decisions and learned, rather than deep subject knowledge, which gave them the edge over specialist intelligence analysts.

In their thought process the superforecasters were almost exclusively what the philosopher Isaiah Berlin described in a famous 1953 essay as "foxes". Foxes have a wide and shallow perspective, an approach which contrasts with what Berlin described as "hedgehogs" who have depth and expertise in a narrow field.

To Tetlock, hedgehogs are confident in their deep knowledge and are often guided by one or two theories (Keynesianism, post-liberalism, communism and so on). Foxes are sceptical of such theories, open-minded, cautious in their forecasts and quick to adjust their ideas as events change.

Rather than rely on one or two simplifying ideas to explain events, Tetlock's superforecaster foxes embraced complexity and were comfortable with a sense of doubt.

Another difference is that the best forecasters continuously seek ways to make better predictions. They display a mix of determination, self-reflection and willingness to learn from their mistakes. To improve the quality of predictions it is vital to keep an honest tally of successes and failures. Bad forecasters ignore their mistakes; good ones acknowledge and learn from them.
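Keeping such a tally can be made concrete with a Brier score, the kind of measure Tetlock's tournaments used to grade probability forecasts against what actually happened. The forecasts and outcomes below are invented purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0 is a perfect score; this simple one-sided version tops out at 1.
    (Tetlock's tournaments scored both outcome branches, giving a 0-2 range.)
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative tallies: the probability each forecaster assigned to
# "the event happens", against what actually occurred (1 = yes, 0 = no).
outcomes      = [1, 1, 0, 0, 1]
overconfident = [0.95, 0.9, 0.9, 0.1, 0.2]   # bold calls, two big misses
cautious      = [0.7, 0.65, 0.4, 0.3, 0.6]   # hedged, but on the right side

print(brier_score(overconfident, outcomes))  # 0.2945
print(brier_score(cautious, outcomes))       # 0.1245
```

The overconfident forecaster's occasional spectacular misses cost more than the cautious forecaster's modest hedging, which is the arithmetic behind Tetlock's advice to track every call honestly.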

Superforecasters rarely use sophisticated mathematical models to make their forecasts, though they are uniformly highly numerate. Comfort with numbers seems a prerequisite for making good forecasts but fancy quantitative models are not.

Tetlock also improved accuracy by having forecasters work in teams. In year one the researchers randomly assigned some forecasters to work in teams, providing them with tips on how to work together effectively, while others worked alone. The results showed that teams were on average 23% more accurate than individuals.

Sharing and debating ideas can significantly improve forecast scores. But some teams did poorly: free riding by individual members was one problem; another was a tendency to groupthink, which weakened the very scepticism and openness that Tetlock finds essential to good forecasting.
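Part of the team advantage is simple aggregation: averaging several independent estimates tends to cancel individual errors. A minimal simulation, in which the true probability and the noise level are invented for illustration:

```python
import random

random.seed(0)
true_p = 0.6          # hypothetical true probability of the event
n = 100               # number of independent forecasters

# Each forecaster sees the truth through independent noise,
# clipped to the valid probability range [0, 1].
estimates = [min(1.0, max(0.0, random.gauss(true_p, 0.15))) for _ in range(n)]

avg_individual_error = sum(abs(e - true_p) for e in estimates) / n
pooled_error = abs(sum(estimates) / n - true_p)

print(f"average individual error: {avg_individual_error:.3f}")
print(f"error of pooled estimate: {pooled_error:.3f}")
```

The pooled estimate lands far closer to the truth than the typical individual. Groupthink undoes this advantage: once team members' estimates become correlated, their errors no longer cancel.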

Perhaps most encouragingly, Tetlock found that people can be trained to be better forecasters. Indeed, with just one hour of training, accuracy improved by around 10%.

For individuals, the training focused on thinking in terms of probabilities and removing thinking biases – for instance, focusing on the limitations of one's own knowledge and being open to alternative views. For groups, the training aimed to strike a balance between conflict and harmony. Too much conflict destroys the cooperation that is essential to teamwork. Too much consensus leads to groupthink. Teams need to aim for an open, positive atmosphere of exploration and criticism.

In the real world many experts' predictions are pretty poor. Curiously, the most media-friendly forecasters tend to be the best at telling a compelling story and the worst at forecasting the future. Simple stories "that grab and hold audiences" earn pundits fame.

The good news is that a different style of thinking can yield dramatic improvements in results. Scepticism, learning from mistakes, openness and hard work enabled smart amateurs using publicly available information to outperform skilled intelligence analysts.

Sadly, humility and a willingness to change your mind are not always the hallmarks of those who prosper in the world of forecasting. Strong, well-articulated views often matter more. Tetlock's work shows that for those who want to be better at forecasting, as opposed to merely being entertaining, new ways of thinking can yield significant rewards.

P.S. In last week's Monday Briefing we examined the latest UK opinion poll data on Brexit. At the time it seemed likely that President Obama's endorsement of the UK's membership of the EU would boost the remain camp. However, six of the seven polls published since the President's comments show a reduction in support for remaining in the EU. And four of the seven polls show the leave camp ahead of remain. President Obama's comments have certainly not turned the tide for the remain camp. Most followers of the polls, me included, assumed it would at least help. I need to adopt more fox-like thinking.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.