Revisiting “Intelligent Failure”


It’s important to go back and read the old stuff sometimes. When ideas get popularized, parts often get left behind that are just as interesting and important as the parts that catch on. Today we’ll take a look at Sim Sitkin’s “Learning Through Failure,” from Organizational Learning (1996), which introduced the concept of “intelligent failure.”

Though talk of “risk taking” and “failing fast” is now commonplace, much of the nuance of Sitkin’s “strategy of small losses” is missing from today’s discussions. Sitkin’s original argument had more to do with incentive structures. If executives tout the importance of risk taking while rewarding what they consider success and punishing failure (even if only by not rewarding it), then they incentivize the opposite of what they verbally claim.

In The agile Manager (2019), England and Vu argue that such thinking falls prey to the “Simple System Myth”: the notion that you can plan once, execute flawlessly, and know what the outcome will be. You can’t. Instead, you must learn your way to the success hiding “under the pile of failure.” To do this, you must manage the size, cost, and speed of those “piles.” The idea is to strategically leverage the inevitability of failure in a complex system as a sort of “metal detector.”

That is partially what Sitkin means by “intelligent failure.” He also has a deeper, more counterintuitive point. He’s not just stressing that you should accept failure as feedback but that you should purposefully generate failure as a proactive learning mechanism. Consider: if a team claims consistent success, are they high performing or playing it safe? Not only is it probably the latter, Sitkin argues; there is also an opportunity cost to these claimed “successes.”

A lack of failures (i.e., unequivocal positive feedback) tends to produce suboptimal results. This is true for organizations, teams, and individuals alike. If a Scrum team keeps building whatever stakeholders request, and the stakeholders are always satisfied, success will be declared and “the business” will probably be happy…even though these solutions might be less value-adding than if the team had pushed back and done some discovery.

Now imagine 20 teams following such a process. If you reward what you see as successes (and again punish the failures by not rewarding them), then you’re essentially rewarding the teams that bet on your process and happened to win. You’re also punishing the generation of diverse information, which is the real value here.
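To make “happened to win” concrete, here is a toy simulation (my own sketch with invented numbers, not an example from Sitkin): twenty teams run the identical process, their quarterly results differ only by noise, and the top five are rewarded each quarter. If those rewards tracked anything real, last quarter’s winners would tend to stay on top; instead, the overlap between one quarter’s winners and the next’s is roughly what chance alone predicts.

```python
import random

random.seed(42)

NUM_TEAMS = 20
TOP_N = 5

# Every team follows the identical "safe" process; results differ only by noise.
def period_scores():
    return [random.gauss(mu=100, sigma=15) for _ in range(NUM_TEAMS)]

def top_teams(scores):
    return set(sorted(range(NUM_TEAMS), key=lambda t: scores[t], reverse=True)[:TOP_N])

overlaps = []
for _ in range(10_000):
    winners_q1 = top_teams(period_scores())
    winners_q2 = top_teams(period_scores())
    overlaps.append(len(winners_q1 & winners_q2))

# Chance alone predicts an overlap of TOP_N * TOP_N / NUM_TEAMS = 1.25 teams.
print(sum(overlaps) / len(overlaps))
```

Rewarding the “winners” here is rewarding luck, and the diverse information Sitkin cares about never gets generated.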

Sitkin’s argument then is twofold:


1) Failure has benefits that need to be leveraged; and conversely,

2) Success has liabilities that need to be managed.


To focus on rewarding success is to prioritize reliability over adaptability and short-term wins over long-term resilience. Success can foster efficiency in the short term, given environmental conditions don’t change. It can also, however, stifle innovation, strengthen the status quo, make managers overconfident, and create a maladaptive homogeneity of personnel, process, information, and choices in response to emerging problems.

To circumvent this, teams should experiment with methods, ideas, and approaches, and explore different paths forward. Inherent to this process is, of course, failure; only astrologers never fail. But again, if this failure isn’t rewarded, then what is incentivized isn’t learning but hiding the failure. I see this with executives who want to know what work is “green,” “yellow,” and “red,” ignoring that, as I like to say, “All the indicators are green until they aren’t.”

This is “success theater.” When it’s rewarded, you actually punish continuous improvement and prop up a fragile system centered on what has traditionally been relevant. As Sitkin stresses, failing, and the experience of failing, creates a felt need for corrective action, one that fuels search, innovation, and risk taking that would not otherwise arise and likely cannot otherwise be replicated.

Traditional factory-based thinking makes this worse. In innovation work, you’re not looking to drive variability out of the process to maximize “efficiency”; quite the opposite, as Reinertsen (2009) makes clear. The innovator seeks to add variability to the system and increase the range of outcomes. If you reduce variability, you strip innovation from the system.
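Here is a minimal sketch of that asymmetric-payoff logic (my own toy model with made-up numbers, not an example from Reinertsen): if a failed experiment can be abandoned at a small, capped cost while a success pays out in full, then widening the spread of outcomes raises the expected payoff.

```python
import random
import statistics

random.seed(7)

KILL_COST = 1.0    # hypothetical: downside is capped at the cost of abandoning the experiment
TRIALS = 100_000

def expected_payoff(sigma):
    # Raw outcomes are noise around zero; the payoff keeps the upside and caps the downside.
    payoffs = [max(random.gauss(0, sigma), -KILL_COST) for _ in range(TRIALS)]
    return statistics.mean(payoffs)

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma={sigma}: expected payoff ~ {expected_payoff(sigma):.2f}")
```

The cap on the downside is what does the work: the factory view treats every deviation as waste, while the innovator truncates the losses and keeps the tail of big wins.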

Early success limits knowledge. If you succeed before accumulating a set of intelligent failures, you are deprived of the insights those failures would have provided. You take your success and move forward without that additional information, and perhaps with fewer paths forward.

Obviously, however, not all failures are created equal.

So, what makes some failures “intelligent”?

Here are Sitkin’s key characteristics of intelligent failures:

1) They result from thoughtfully planned actions.

2) Those actions have uncertain outcomes.

3) They are modest in scale.

4) They are executed and responded to with alacrity.

5) They take place in domains familiar enough to permit effective learning.

Further, to position themselves to benefit from intelligent failures, organizations must do four things:

1. Increase focus on process. Seek informative outcome distributions that contain sufficient amounts of failure. Focus on the process of generating diverse and informative outcomes, not on whether a team “succeeds or fails.” Smaller-scale actions allow for more teams to independently experiment, more quickly generating the distribution of outcomes being sought.

Goals need to be well balanced. Unchallenging goals produce distributions of small, predictable successes, which are not informative. With modestly challenging goals, more information will be gained by purposively pursuing intelligent failures, what Sitkin calls the “strategy of small losses.”

Action and learning should be somewhat decoupled. By speeding up action and feedback while slowing down plan revisions, sample sizes are increased, which builds in a safeguard against making adjustments based on unreliable observations; a small simulation after this list illustrates the point. (This is perhaps somewhat at odds with Scrum practice.)

2. Legitimize intelligent failure. Intelligent failure must be monetarily incentivized. If people cannot point to clear evidence of the positive effect of intelligent failure on career mobility and rewards, then no one is going to take claims that the org “values risk taking” seriously—nor should they, because it doesn’t.

Organizations cannot expect to foster innovation via intelligent failures if the individuals providing them must pay a price for doing so. Premature judgment of failure should be resisted; after all, what looks like a failure today may be recognized as a critical contribution tomorrow.

Publicly recognizing individuals who intelligently fail and urging successful executives to share their own stories of intelligent failure will show commitment to strategic failure, risk taking, constructive experimentation, and innovation. Such public recognitions also legitimize failure in orgs where this very concept might feel foreign.

3. Change the culture. Employee training should include material on risk taking and the importance and value of failure and surprise. If an organization is serious about innovation, then intelligent failure must be viewed as a strategic asset.

This requires the corporate culture to shift its thinking on failure in ways that might seem ironic. For instance, teams might actually be penalized for not failing enough. Teams not producing a large enough “scrap pile” of intelligent failures may not be sufficiently taking risks, dealing with failure, and learning from it. They are likely not experimenting and learning their way forward, but rather building and claiming success regardless of real outcomes.

Such teams are likely risk averse, stagnant, and failing to continuously improve. Calling such a team “high performing” may be rewarding people for playing it safe and sticking with what’s already known to work. In some contexts, an absence of failure should signal the need to remove risk-averse routines.

4. Emphasize failure management systems rather than individual outcomes. Strategic failure must be implemented at the organizational level. Individuals on their own will not produce a sufficiently large and varied range of failures to drive optimal organizational learning. Individuals will tend to produce safe successes or predictable failures, neither of which is very informative.

It might even be necessary to purposively expose employees to small doses of failure and then reward them for handling those doses well. This inoculates employees against their hesitance to take risks and better enables them to handle and learn from their failures.
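Back to the sample-size point under item 1, here is a small sketch (my own illustration; the success rate and thresholds are invented) of why speeding up feedback while slowing down plan revisions guards against reacting to noise. A plan that is actually fine, with an 80% success rate, gets “revised” constantly if it is rethought after every single failure, and almost never if outcomes are judged in batches.

```python
import random

random.seed(3)

TRUE_SUCCESS_RATE = 0.80   # assume the current plan is actually fine
RUNS = 10_000

def revisions_per_run(batch_size, threshold, trials=40):
    """Count how often the plan gets revised over `trials` attempts."""
    revisions = 0
    outcomes = [random.random() < TRUE_SUCCESS_RATE for _ in range(trials)]
    for start in range(0, trials, batch_size):
        batch = outcomes[start:start + batch_size]
        failure_rate = 1 - sum(batch) / len(batch)
        if failure_rate > threshold:
            revisions += 1
    return revisions

knee_jerk = sum(revisions_per_run(batch_size=1, threshold=0.5) for _ in range(RUNS)) / RUNS
batched = sum(revisions_per_run(batch_size=10, threshold=0.5) for _ in range(RUNS)) / RUNS

print(f"revise after every failure:   ~{knee_jerk:.1f} spurious revisions per 40 attempts")
print(f"revise on 10-attempt batches: ~{batched:.2f} spurious revisions per 40 attempts")
```

The batching is the safeguard: more observations per decision, fewer adjustments driven by unreliable ones.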


In sum, what matters is the process, not specific outcomes in isolation. Intelligent failure, stated another way, suggests that being right the first time is a risky strategy. The number of successful innovations can only be increased by purposefully increasing both the number and diversity of failures in the overall outcome distribution.

There’s an analogy in the world of peer-review publications. If a paper’s research idea is interesting and its methodology is sound, then the result of the experiment should be irrelevant—it’s equally informative either way. And yet only papers that “find something” tend to get published, turning the literature itself into an incomplete dataset.

Does this mean you should really try to fail? The answer, unfortunately, is yes and no. I encourage you to be more “designerly” and less “factory-esque,” but the onus of leveraging strategic failure does not fall on you. It falls on executives. No amount of writing about best practice circumvents the law that whatever is incentivized is policy.

To strategically benefit from risk taking and drive innovation, organizations will need to counteract the natural inclination to value success and devalue failure by actively promoting and rewarding intelligent failure.




References

England, R., & Vu, C. (2019). The agile manager: New ways of managing. Porirua, New Zealand: Two Hills Ltd.

Reinertsen, D. G. (2009). The principles of product development flow: Second generation Lean product development. Redondo Beach, CA: Celeritas Publishing.

Sitkin, S. (1996). Learning through failure: The strategy of small losses. In M. D. Cohen & L. S. Sproull (Eds.), Organizational learning (pp. 541-577). Thousand Oaks, CA: SAGE Publications, Inc.

