Strategy and Behavioral Economics


Dan Lovallo and Olivier Sibony are Professors of Strategy. They make an interesting observation. Behavioral economics, the application of cognitive psychology to economic decision making, seems to have gained universal acceptance…except among corporate strategists. And there might be a pretty straightforward reason why. In marketing or finance, for example, a focus on behavioral economics entails highlighting the biases of other people. With corporate strategy, however, it requires us to look at our own biases. A lot of executives might not want to.

Lovallo shares the story of a Hollywood executive bluntly telling him yes, he gets it, this could improve decisions over time, but people are only in this role for a few years and he really doesn’t want data suggesting that he’s made bad decisions. In short, executives want their company to be successful, but even more so, they want to be perceived as successful executives. Sometimes these things don’t exactly line up. Resolving this tension between process and reputation requires us to change how we think about decisions. We would need to stop focusing on decision makers and their perceived prestige and instead focus on the process itself.

As we love pointing out here on The Lateral Lens, it’s about systems and incentives, not individuals. This, however, will not be popular in cultures that love focusing on personalities. Ignoring this larger view leaves us in the trap many corporate strategists remain stuck in. They still assume that good analysis makes for good decisions, ignoring that they are making decisions within a larger system with a decision process, and that this process cannot itself offset biases it is not designed to account for. Thus, as Lovallo and Sibony highlight, a good process may weed out poor analyses, but good analyses still leave one blind to bad process.

To raise awareness about this, we need to start having collective conversations about bias: which biases are likely prevalent in our cultures, and which might be tolerated or even favored by our leadership. Over the years, Lovallo and Sibony have shared in McKinsey Quarterly which biases they have found to be the most prevalent among corporate leadership. They group these under the categories of saliency- or pattern-recognition biases, action-oriented biases, stability biases, interest biases, and social biases. Here is a quick look at each.

Saliency-Based Bias

Executives often base decisions on prior experience. What stands out to them, however, is often misleading. Similar to the confirmation bias (or “positive test strategy”) is the fallacy of “narratizing”. We tend to overweight data when arranged in a story that seemingly connects the dots, that seems to make things gel. This is true even when the narrative we’ve concocted fails to capture the most important patterns at play in the decision environment.

This may be a problem when funding decisions are based on advocacy. Those who prove the most persuasive may not have the best ideas. Lovallo and Sibony call this the “champion bias”. To fight it, they advocate manipulating “angles of vision”. What alternative hypotheses might explain the same set of facts? What are multiple, alternative ways to interpret the same situation? What pattern recognition has been triggered in the group? What experiences are influencing those present?

These should be explicitly discussed, brought to light, and explored. Consider the pros and cons of each story shared. Leverage research to increase the angles of vision. Make field visits, spend time with customers, and explore techniques that change how meetings are typically run. Explore alternative frames, experiment with assumption reversals, and always seek to surface the narratives people are running in their heads. (In other words, apply some lateral thinking!) Offset potentially misrepresentative narratives by purposefully fleshing out explicit counter-narratives.

Action-Oriented Bias

Executives often stress the need to “take action”, to “speed up”, to become more “efficient”. We typically end up incentivized to focus on output, without any meaningful check on whether that output is converting to value. Designers often find themselves in a predicament here. They’re often the sole voice asking how value will be created and what problems will be solved, digging into the why. They’re typically repaid with handwaving and lots of sturm und drang about “slowing things down”.

This need to “take action” is usually coupled with gross overconfidence. Indeed, in many organizations, exuding such brash confidence is key to getting a plan funded in the first place. And this is unfortunate. Overconfidence, after all, doesn’t mean I’m right—it just means I’m overconfident. What we’re left with is a tendency for overly-optimistic action and the ignoring of actual outcomes. Techniques for offsetting this bias include premortems, decision trees, and scenario planning.

A premortem is basically an upfront postmortem. Ask a team to imagine they’re looking into a crystal ball, “seeing” that the project has failed. For an allotted amount of time, have them individually write down what they see in the crystal ball. They KNOW the project failed. Why did it fail? Have everyone share one reason the project failed, capturing them on a whiteboard. After three rounds, start capturing ways the team could mitigate the issues discussed. Ask how these ideas could be used to strengthen the current plan. This can offset overconfidence more effectively than other risk-analysis methods.

Scenario planning involves creating a “reference set” of similar endeavors, complete with their strategies and outcomes. The aim is to elaborate viewpoints at odds with senior leadership, thereby countering optimism bias. This was the approach Pierre Wack famously took at Royal Dutch Shell, applying concepts from futurist Herman Kahn to business strategy. Lovallo likes the example of Colonel Kalev Sepp, who innovated policy in Iraq with a reference set of 53 similar counterinsurgencies (which he created by himself, in just a few days). Though the set of scenarios must be agreed on without knowing whether they’re the “right” ones, without them decisions will be overly anchored to far fewer upfront narratives.

Stability Bias

In a famous study, a group was asked if Gandhi was younger than 9 when he died. Another was asked if he was older than 140 (clearly impossible). Both were then asked how old Gandhi was when he died. The average for the first group was 50. For the second it was 67. The point? Even when the numbers used to “anchor” us are clearly nonsense, as was the case here, the bias of “anchoring and adjustment” can still have a dramatic effect. When the available anchors are actually relevant, as is the case with last year’s budget, the effect is even stronger. (This is called “endowed anchoring”.)

Say we go through a laborious budgeting process. At the end, what we’ve decided on pretty much matches the numbers we got from the BUs, which matches last year’s budget. Lovallo and Sibony argue this is probably more due to anchoring than good budgeting. Because of this, business leaders often believe their plans are changing over time more than they really are. As with offsetting narratives, here one must fight fire with fire, or “anchors with anchors”. One approach is to use regression to create a model serving as an alternative anchor. If there are areas where the model and last year’s numbers are close, that makes it easy. Where they are very different, those should be longer conversations.
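As a sketch of the “model as alternative anchor” idea: fit a simple model of budget against some driver (revenue, say), then flag the units where the model’s prediction and last year’s number diverge sharply. The business-unit figures and the divergence threshold below are invented for illustration:

```python
# Hypothetical figures (all $m): per-unit revenue and last year's budget.
revenue = [120.0, 80.0, 45.0, 200.0, 60.0]
last_year = [12.0, 9.0, 4.0, 21.0, 11.0]

# Ordinary least squares by hand: budget ~ slope * revenue + intercept.
n = len(revenue)
mean_x = sum(revenue) / n
mean_y = sum(last_year) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(revenue, last_year)) \
        / sum((x - mean_x) ** 2 for x in revenue)
intercept = mean_y - slope * mean_x

# The model's predictions serve as an alternative anchor to last year's numbers.
# Where the two anchors are close, move on; where they diverge, talk longer.
THRESHOLD = 1.5  # $m; arbitrary cutoff for illustration
for x, y in zip(revenue, last_year):
    model_anchor = slope * x + intercept
    gap = abs(model_anchor - y)
    status = "longer conversation" if gap > THRESHOLD else "close enough"
    print(f"revenue {x:>6.1f}: model {model_anchor:5.1f} vs last year {y:5.1f} -> {status}")
```

The point is not that the regression is “right”; it is that a second, independent anchor forces a real discussion wherever the two disagree.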

Or take the example of “zero-based budgeting” or “ZBBing”. Have a group of executives individually look at a set of opportunities for the coming year. For half the execs, that’s all you show them. The other half can also see how resources were allocated across units the previous year. Now compare the two groups: How different are the decisions between them? Lovallo argues that low performers keep allocations 99% the same as the prior year.

Another stability bias, loss aversion, is when we weight losses avoided greater than gains of the same amount. Say an executive decided not to recommend an investment that had a 50-50 chance of losing $2m or of returning $10m. He was worried about the damage to his reputation if the investment failed. To the extent that he was wise to do so, his organization is guilty of omission bias. The bet, after all, is “worth” $4m (0.5 × −$2m + 0.5 × $10m). The organization should be seeking as many such bets as possible.
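The expected-value arithmetic above can be spelled out in a few lines; the probabilities and payoffs come straight from the example:

```python
# 50-50 bet from the example: lose $2m or return $10m.
p_loss, loss = 0.5, -2_000_000
p_gain, gain = 0.5, 10_000_000

# Expected value = sum of (probability x payoff) over the outcomes.
expected_value = p_loss * loss + p_gain * gain
print(f"Expected value of the bet: ${expected_value:,.0f}")  # $4,000,000
```

An organization that can take many such bets should care about this expected value; the individual executive, exposed to reputational damage on the single downside outcome, often cannot afford to.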

Loss aversion often combines with the fallacy of “honoring sunk cost”. This is when we “throw good money after bad”, basing investment decisions partially on the irretrievable cost of prior decisions, which should be treated as informationally irrelevant. Another variant is not doing the right thing because of work already done. Consider project teams that don’t want to bother users and stakeholders “until they have something to show them”. As Erika Hall points out, this usually means waiting until there is enough sunk cost that the team isn’t going to change direction much regardless of what the feedback is.

Stability biases cause leaders to cling to prior investments that should be let go, keeping projects alive that should be killed. Every org is burdened by such “zombie projects”, as Janice Fraser calls them, those cash-hungry, zero-value projects that amble along like the living dead. One approach to countering them is to shift the burden of proof. Instead of regularly asking which projects should be killed, look at every live project and ask why it should continue. Roadmaps can be made “contingent” by placing forced decision forks throughout, requiring that the plan be adjusted based on the actual outcomes obtained along the way.

The default attitude and position should not be that projects just continue ipso facto. Weeds should continually be separated from fruitful seeds. Such pruning should not be a dramatic one-off. Big and rare decisions not following a smart decision process are likely themselves rife with cognitive bias. If the people displaced by such pruning, however, are not then placed on projects of higher value, if they instead get “redeployed” and move to projects of comparable value, then what was the point?

Perhaps the real challenge here is to get better at such regular pruning without losing talent. Pruning should not equal layoffs. It should be a regular, ongoing, and healthy thing. It is a mistake to devalue doers by treating them as interchangeable cogs in a machine. When we outsource doers we mistakenly assume their value is confined to a very narrow set of technical skills, ignoring all the value laden in their rich knowledge of context, process, and culture.

Interest and Social Bias

Different parties will have different and often competing interests. Sometimes the individual interests of various orgs or BUs are not in the best interest of the company, involving issues Lovallo and Sibony call “inappropriate attachments” and “misaligned incentives”. Executives should seek to shut down practices that benefit individuals at the expense of a smart general decision process. An example is when people schedule one-on-ones prior to the larger decision meeting to gain buy-in beforehand.

Executives should sometimes counter this by gathering diverse groups with conflicting perspectives and letting them debate. Discuss an opportunity by having people individually list the plusses and minuses without writing down conclusions. If vigorous debate cannot happen, if it is not a psychologically-safe environment, there will be a disagreement deficit. Groupthink will be high. If everyone is scared of challenging the Highest-Paid Person’s Opinion (the “HiPPO”), then groupthink is the natural outcome.

(By the way, for those who find the acronym “HiPPO” offensive, Lovallo and Sibony offer an interesting alternative: “Sunflower management”. This refers to our inclination to try and match the perceived views of executives, whether expressed or not, similar to how sunflowers always turn to face the sun.)


Corporate cultures sometimes have cognitive biases baked into them, amplifying their impact. An easy way to start countering this is to hold conversations on what biases might be at play in a given process. Only then can we start incorporating explicit counter-bias techniques. If we don’t and leave it up to our instincts to detect when bias may be influencing decisions, then we’re leaving it up to our instincts to tell us when our instincts need checking—not a smart move.

Encourage people to share the narratives and experiences triggered by the discussion, surfacing what stories may be influencing their decision making. Share raw data so others can try to detect alternative patterns. If a meeting will be full of people with one view, populate it with people who hold an opposing view. Push back against action-oriented bias and the sentiment that, “We just need to make a decision!” No matter how well-intended, we do not in fact need to just make a thoughtless, suboptimal decision—often decisions should be made at the last responsible moment. In closing, we must keep the focus on the decision process itself…but this means no longer focusing on individual prestige or reputation.
