There's a famous thinking framework by futurist, forecaster, and scenario consultant Paul Saffo called "strong opinions, weakly held". The phrase became popular in tech circles in the 2010s; I remember reading about it on Hacker News or a16z.com or one of those thinky tech blogs around that period. It's still rather popular today.
Saffo's framework, laid out in his original 2008 blog post, goes like this:
I have found that the fastest way to an effective forecast is often through a sequence of lousy forecasts. Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information. Iterate the process a few times, and it is surprising how quickly one can get to a useful forecast.

Since the mid-1980s, my mantra for this process is "strong opinions, weakly held". Allow your intuition to guide you to a conclusion, no matter how imperfect (this is the "strong opinion" part). Then (and this is the "weakly held" part) prove yourself wrong. Engage in creative doubt. Look for information that doesn't fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result.
This process is equally useful for evaluating an already-final forecast in the face of new information. It sensitizes one to the weak signals of changes coming over the horizon and keeps the hapless forecaster from becoming so attached to their model that reality intrudes too late to make a difference.
More generally, "strong opinions, weakly held" is often a useful default perspective to adopt in the face of any issue fraught with high levels of uncertainty, whether one is venturing a forecast or not. Try it at a cocktail party the next time a controversial topic comes up; it is an elegant way to discover new insights, and to duck that tedious bore who loudly knows nothing but won't change their mind!
On the face of it, it all sounds very reasonable and smart. And "strong opinions, weakly held" is such a catchy phrase, which probably explains its popularity.
The only problem with it is that it doesn't seem to work that well.
Swimming Upstream Against the Architecture of the Mind
How do I know that it doesn't work that well? I know this because I've tried. I used Saffo's framework in the years between 2013 and 2016, when I was running my previous company; I attempted it with my boss whenever we convened to discuss company strategy.
Eventually I read Philip Tetlock's Superforecasting, and then I gave up on Strong Opinions, Weakly Held.
Why does the framework not work very well? In my experience, Saffo's approach fails in two ways.
The first way is when the person hasn't read Saffo's original post. This is, to be fair, most of us: Saffo's original idea is so quotable that it has turned into a memetic phenomenon, and I've seen it cited in fields far outside tech. In such cases, the failure mode is that Strong Opinions, Weakly Held turns into Strong Opinions, Justified Loudly, Until Evidence Indicates Otherwise, At Which Point You Invoke It To Protect Your Ass.
In simpler terms, "strong opinions, weakly held" sometimes becomes a license to hold on to a bad opinion strongly, with downside protection, against the spirit and intent of Saffo's original framework.
Now, you might say that this is through no fault of Saffo's, and is instead a problem of popularity. But my response is that if an idea has certain affordances, and people always seem to grab onto those affordances and abuse the idea in the exact same ways, then perhaps you shouldn't use the idea in the first place. This is especially true if, as we're about to see, there are better ideas out there.
The second form of failure is when the person has taken the time to look up the original intention of the phrase. In this situation, the failure mode appears when you attempt to integrate new information into your judgment. Saffo's framework offers no way for us to do this.
Here's an example. Let's say that you've decided, along with your boss, to build a particular type of product for a particular subsection of the self-service checkout market. You both come to the opinion that this subsection is the best entry point to the industry: it is relatively lucrative, and you think that it is the easiest customer segment to service.
What happens to your opinion when you slowly discover that the subsegment is overcrowded? Of course, you don't find out immediately; what happens instead is that you spot little hints, spread over the course of a couple of months, that many competitors are entering the market at the same time. These are tiny things: competitor brochures lying on the corner table of a client's office, or pronouncements by industry groups that they are looking to engage vendors for large deployments, and then, much later, clearer evidence in the form of increased competition in deals.
"Well," I can hear you say, "'strong opinions, weakly held' means that you should change your opinion when you encounter these tiny hints!"
But at which point do you change your mind? At which point do you switch away from your strong opinion? At which point do you think that it's time to reconsider your approach?
The problem, of course, is that this is not how the human brain works.
Both forms of failure stem from the same tension. It's easy to have strong opinions and hold on to them strongly. It's easy to have weak opinions and hold on to them weakly. But it is quite difficult for the human mind to vacillate from one strong opinion to another.
I don't mean to say that people can't do this, only that it is very difficult. For instance, Steve Jobs was famous for arguing against one position or another, only to decide that you were right, and then come back a month later holding exactly your opinion, as if it had been his all along.
But most people aren't like Jobs. Psychologist Amos Tversky used to joke that, by default, human brains fall back to a three-dial setting when it comes to uncertainty: "yes, I believe that", "no, I don't believe that", and "maybe". People then hold on to their opinion for as long as their internal narratives allow them to. Saffo's thinking framework implies that you sit in "yes, I believe that" territory, and then rapidly switch away to "maybe" or to "no", depending on the information you receive.
Perhaps you may (like Jobs!) be able to do this. But if you are like most people, the attempt will feel a lot like whiplash.
So, you might ask, what to do instead?
Use Probability as an Expression of Confidence
The gentler answer lies in Superforecasting. In the book, Tetlock presents an analytical framework that is easier to use than Saffo's, while achieving many of the same goals.
- When forming an opinion, phrase it in a way that is very clear, and that may be verified by a particular date.
- Then state, as a probability, how confident you are that it is correct.
For instance, you may say: "I believe that Tesla will go bankrupt by 31 December 2021, and I am about 76% confident that this is the case." Or you can be slightly sloppier with the technique; with my boss, I would say: "I think this subsegment is a good market to enter, and I think we would know if this is true within four months. I believe this on the order of 70%-ish. Let's check back in September."
(My boss was an ex-investment banker, so he took to this like a duck to water.)
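If it helps to make the technique concrete, here is a minimal sketch of what a Tetlock-style opinion looks like as a record: a falsifiable claim, a date by which you will know the answer, and a stated confidence. The `Forecast` class and the example claims are my own illustration, not anything from the book.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Forecast:
    claim: str         # phrased so that it can be verified true or false
    resolve_by: date   # the date by which we will know the answer
    confidence: float  # probability (0.0 to 1.0) that the claim is true

# The two examples from the text, restated as records.
forecasts = [
    Forecast("Tesla goes bankrupt", date(2021, 12, 31), 0.76),
    Forecast("This subsegment is a good market to enter", date(2016, 9, 1), 0.70),
]

for f in forecasts:
    print(f"{f.confidence:.0%} confident: {f.claim!r} (check on {f.resolve_by})")
```

The point of the structure is that nothing here is vague: each record names exactly what would make the opinion wrong, and by when.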
Tetlock's technique was developed in the context of a geopolitical forecasting tournament called the Good Judgment Project. In 2016, when I read Superforecasting for the first time, I remember thinking that geopolitical forecasting wasn't particularly relevant to my job running an engineering office in Vietnam. But I also glommed onto the book's ideas around analysis, because they were too attractive to ignore.
The truth is that Tetlock's ideas are not unique to his research group. Annie Duke's Thinking in Bets proposes the same approach, but drawn from poker, and the rationalist community LessWrong has long-held norms around stating the confidence of one's opinions.
More importantly, Duke and LessWrong have both discovered that the fastest way to provoke such nuanced thinking is to ask: "Are you willing to bet on that? What odds would you take, and how much?"
You'd be surprised by how effective this question is.
Why is it so effective? Why does it succeed where Strong Opinions, Weakly Held does not?
The answer lies in the "strong opinion" portion of the phrase. First: by stating your opinion as a probability judgment (that is, a percentage), you are forced to calibrate the strength of your belief. This makes it easier to move away from it later. In other words, you are forced to let go of the "yes, no, maybe" dial in your head.
Second: by framing it as a bet, you suddenly have skin in the game, and are motivated to get things right.
Of course, you don't actually have to bet; you can merely propose the bet as a thinking frame. Later, as new information trickles in, you are allowed to update the % confidence you have in your belief. This allows you to see the world in shades of grey; it also allows you to communicate that confidence to those around you.
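One principled way to do that updating is Bayes' rule in odds form: convert your confidence to odds, multiply by a likelihood ratio for each new piece of evidence, and convert back. The sketch below applies this to the self-service checkout example; the likelihood ratios are made-up numbers purely for illustration.

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

# Start 70% confident that the subsegment is a good market to enter.
p = 0.70

# A competitor's brochure on a client's table: weak evidence against (LR < 1).
p = update(p, 0.5)

# Industry groups soliciting vendors for large deployments: more evidence against.
p = update(p, 0.4)

print(f"Confidence after two hints: {p:.0%}")
```

Each hint nudges the number down a little, rather than forcing an all-or-nothing flip from one strong opinion to another; that is exactly the "shades of grey" the text describes.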
Revisiting The Hierarchy of Practical Evidence
I have one final point to make about this approach.
Long-term readers of this blog would know that my shtick is to apply a technique to my career or to my life, over the period of a couple of months, and report on its efficacy. Over time, I've noticed that techniques are more likely to be effective when they come from believable practitioners. This is what led to my Hierarchy of Practical Evidence.
Saffo's and Tetlock's ideas are drawn from the domain of forecasting. But this post is about thinking, not forecasting; I'm only confident recommending one over the other because I've had enough experience with both as analytical tools.
But it's worth noting that Saffo isn't particularly believable as a forecaster either.
For much of Superforecasting, Tetlock rails against professional forecasters who make vague statements and issue long-form narratives about the future. These forecasters are always able to worm out of a bad forecast, because their pronouncements are carefully worded to provide plausible deniability.
As I was writing this piece, I skimmed through the book, and was surprised to learn that Tetlock had met up with Saffo during the Good Judgment Project, and had written up the encounter. In that account, Saffo dismisses Tetlock's research out of hand:
In the spring of 2013 I met with Paul Saffo, a Silicon Valley futurist and scenario consultant. Another unnerving crisis was brewing on the Korean peninsula, so when I sketched the forecasting tournament for Saffo, I mentioned a question IARPA had asked: Will North Korea attempt to launch a multistage rocket between 7 January 2013 and 1 September 2013? Saffo thought it was trivial. "A few colonels in the Pentagon might be interested," he said, "but it's not the question most people would ask." The more fundamental question is "How does this all turn out?" he said. "That's a much more challenging question."

So we confront a dilemma. What matters is the big question, but the big question can't be scored. The little question doesn't matter but it can be scored, so the IARPA tournament went with it. You could say we were so hell-bent on looking scientific that we counted what doesn't count.
Tetlock goes on to defend his approach:
That is unfair. The questions in the tournament had been screened by experts to be both difficult and relevant to active problems on the desks of intelligence analysts. But it is fair to say these questions are more narrowly focused than the big questions we would all love to answer, like "How does this all turn out?" Do we really have to choose between posing big and important questions that can't be scored, or small and less important questions that can be? That's unsatisfying. But there is a way out of the box.

Implicit within Paul Saffo's "How does this all turn out?" question were the recent events that had worsened the conflict on the Korean peninsula. North Korea launched a rocket, in violation of a UN Security Council resolution. It conducted a new nuclear test. It renounced the 1953 armistice with South Korea. It launched a cyber attack on South Korea, severed the hotline between the two governments, and threatened a nuclear attack on the United States. Seen that way, it's obvious that the big question is composed of many small questions. One is "Will North Korea test a rocket?" If it does, it will escalate the conflict a little. If it doesn't, it could cool things down a little. That one tiny question doesn't nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question. Will North Korea conduct another nuclear test? Will it rebuff diplomatic talks on its nuclear program? Will it fire artillery at South Korea? Will a North Korean ship fire on a South Korean ship? The answers are cumulative. The more yeses, the likelier the answer to the big question is "This is going to end badly."
I call this "Bayesian question clustering" because of its family resemblance to the Bayesian updating discussed in chapter 7. Another way to think of it is to imagine a painter using the technique called pointillism. It consists of dabbing tiny dots on the canvas, nothing more. Each dot alone adds little. But as the dots collect, patterns emerge. With enough dots, an artist can produce anything from a vivid portrait to a sweeping landscape.
There were question clusters in the IARPA tournament, but they arose more as a consequence of events than a diagnostic strategy. In future research, I want to develop the concept and see how effectively we can answer unscorable big questions with clusters of little ones.
Saffo's business is in selling stories about the future to businesses and organisations. He teaches his approach to business students, who would presumably go on to do the same thing. Tetlock's job is in pinning forecasters down on their performance, and evaluating them quantitatively using something called a Brier score. His techniques are now used in the intelligence community.
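The Brier score mentioned above is simple enough to sketch: it is the mean squared difference between the probabilities you stated and what actually happened (1 if the event occurred, 0 if it didn't). Lower is better; 0 is a perfect score, and always hedging at 50% scores 0.25. The forecasts below are invented for illustration.

```python
def brier_score(forecasts: list[tuple[float, int]]) -> float:
    """Mean squared error between stated probabilities and binary outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (stated probability, what actually happened: 1 = yes, 0 = no)
confident_and_right = [(0.9, 1), (0.8, 1), (0.1, 0)]
hedged = [(0.5, 1), (0.5, 1), (0.5, 0)]

print(brier_score(confident_and_right))  # close to 0: confident and correct
print(brier_score(hedged))               # 0.25: the perpetual fence-sitter
```

This is what makes Tetlock's world unforgiving in a way that Saffo's is not: a vague narrative cannot be scored, but a number can, and the score punishes both overconfidence and empty hedging.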
These are two different worlds, with two different standards for truth.
You decide which one is more useful.
So let's wrap up.
In my experience, "strong opinions, weakly held" is difficult to put into practice. Most people who try will either:
- Use it as downside-protection to justify their strongly-held bad opinions, or
- Struggle to shift from one strong opinion to another.
It is difficult because it works against the grain of the human mind.
So don't bother. The next time you find yourself making a judgment, don't invoke "strong opinions, weakly held". Instead, ask: "How much are you willing to bet on that?" Doing so will jolt people into the type of thinking you want to encourage.
Whether you actually put money down is beside the point; whichever way you approach it, it's still a heck of a lot easier than vacillating between multiple strong opinions.
See also: The Forecasting Series, A Personal Epistemology of Practice.