For this year-end post, however, I offer some tools of skepticism to you, dear reader. We are all terrible at predicting the future. Worse yet, confidence in our predictions and the accuracy of those predictions are inversely correlated: the more confident we are, the more wrong we are.
“I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along.”
– Bertrand Russell
Out of the thirty or forty books I read in 2010, several focused on decision making, estimation, and expert opinion. These are some of the books that have helped change my own thinking about expert opinion, our inability to predict the future, and the pitfalls we all fall prey to in our decision-making process. I won't give in-depth reviews, but I encourage you to check these books out and read the summaries and customer reviews for yourself.
Fooled by Randomness and The Black Swan
I like Nassim Nicholas Taleb. He cuts right to the chase, backs up what he says, and has a rebellious nature that I identify with and enjoy. I tend to think that if 'everyone is doing it,' it's probably wrong. Taleb can come off as a bit arrogant if you are not prepared for his tone, but if you appreciate brutal honesty I think you will enjoy his works.
In Fooled by Randomness, Taleb discusses how probability is misunderstood, illustrating his points with thought experiments and short stories. I find some of his views on the role of randomness a bit over the top, but insightful nonetheless. As human beings we are very good at misunderstanding statistics, and this book illustrates the point well.
The Black Swan focuses on predicting the future and how bad we are at it. A main focus is the large-scale outlier events we cannot predict and usually forget to even attempt to include in forecasts. The lesson I drew from the book was to be humble about my ability to predict the future in any way.
How We Decide
In How We Decide, Jonah Lehrer cites many interesting psychological and neuroscience studies demonstrating that we don't always make decisions in the manner we think we do. I wrote a bit about anchoring in project estimation as a result. He also shows that intuitive decision making is better suited to some types of decisions and situations, while in others it can lead us astray. I must say I was more interested in the cited studies, and the follow-on research I did based on this book, than in the book itself, but it was still a pleasant read.
Sleights of Mind
I bought Sleights of Mind primarily because of my fascination with how human psychology works and the ways in which we can be fooled. If you like magic and science, this is definitely a book for you. In terms of managing projects, this book offered me some additional insight into how people can be fooled. Although the focus here is sensory, the aspects of self-delusion and justification are extremely pertinent when considering how decisions are made in the real world. It shows how we can remember X happening even though Y actually happened, which is a good thing to know when trying to plan projects using past experience.
Future Babble and Expert Political Judgment
These I haven't read yet but plan to in 2011. I have, however, done some independent looking into Tetlock's research on 'expert opinion,' and it is absolutely astounding. The bottom line is that human beings, and experts in particular, are really bad at predicting the future. Gardner draws upon Tetlock's research, so I will probably go after Future Babble as a summary of that work and hold off on Tetlock's book unless I feel I need to go that route. From the reviews, Expert Political Judgment seems rather dry, more of an academic book.
In general, most experts do worse than random chance at predicting the future, and even the ones who beat chance don't beat it by much. Moreover, comparing the two groups, those slightly better than chance and those worse, reveals one correlate. Forecasters who relied on a single tool or approach when generating estimates were more confident in their predictions and less comfortable with uncertainty; they were also the "worse than random chance" group. The other group was more comfortable with uncertainty, more tentative about their own predictions, and used an array of tools and approaches rather than relying on one 'ultimate' tool.
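Forecasting research like Tetlock's typically scores predictions with calibration measures such as the Brier score: the mean squared error of probability forecasts, where lower is better and always answering 0.5 scores exactly 0.25. A made-up illustration in Python (the numbers are hypothetical, not from the research) shows why tentative, hedged predictions can beat confident ones even at the same hit rate:

```python
# Hypothetical illustration (numbers invented, not from Tetlock's data).
# Brier score: mean squared difference between forecast probabilities
# and 0/1 outcomes. Lower is better; forecasting 0.5 every time gives 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten yes/no questions; six resolved "yes" (1), four "no" (0).
outcomes = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

# A confident expert (always 0.9 or 0.1) who calls the direction right 7 of 10 times.
confident = [0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9]

# A tentative forecaster (0.6 or 0.4) right on exactly the same 7 questions.
tentative = [0.6, 0.6, 0.6, 0.6, 0.4, 0.4, 0.4, 0.4, 0.4, 0.6]

coin_flip = [0.5] * 10

print(round(brier_score(confident, outcomes), 2))  # 0.25 -- no better than chance
print(round(brier_score(tentative, outcomes), 2))  # 0.22 -- slightly better
print(round(brier_score(coin_flip, outcomes), 2))  # 0.25
```

Both forecasters get the same questions directionally right, but the extreme probabilities wipe out the confident expert's edge over a coin flip, while the hedged forecaster comes out slightly ahead, much like the "slightly better than chance" group above.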