Agile vs. Waterfall—it’s a discussion that never seems to progress as much as it continues to confuse. This hasn’t stopped countless organizations from spending staggering sums on “Agile transformations” which never seem to increase, well, agility. The stories of such transformations are now notorious, with Southwest Airlines being but the latest.
In this post I would like to suggest that the contrasting of Agile and Waterfall is a frame that we should abandon and that, furthermore, the very notion of a “transformation” is probably not helpful—especially if conflated with the idea of “scaling”. We will then discuss some expensive mistakes organizations tend to make when it comes to scaling.
An Unhelpful Dichotomy
In Lacanian psychoanalysis, a “quilting point” is a concept that anchors your interpretation of something, effectively “stitching down” its meaning. Consider Agile and Waterfall. Taken individually they each have a meaning largely “stitched” through the other—they practically serve as character foils. Taken together they form a contrast that often then serves as a larger “quilting point” for software product work itself. I would like to suggest this contrast is a framing we should reject. It is misleading and unhelpful.
At the product level one will rarely find a single, cross-functional team responsible for everything. More likely there will be a great many teams involving hundreds of people. The product will have a life cycle, as all products do, and there will be handoffs between the teams. Furthermore, the product must first come from somewhere, which means there will be initial work that is very different in nature from later work.
As Norman notes in The Design of Everyday Things, with almost any product there will be phases that become difficult to return to as more decisions are made—which brings one closer to “Waterfall”—and there should also be an iterative refinement of assumptions as progress unfolds—which brings one closer to “Agile” (or just good design practice). Try as you might, then, what you end up with will be some blend of the two. And Norman argues this is as it should be. As he puts it, the goal should be to have the “best of both worlds”.
Johanna Rothman, in Diving for Hidden Treasures, similarly argues that the aim is not to “become Agile” but to maximize value, which will often entail slicing work as small as possible, focusing on good acceptance criteria, and leveraging frequent milestone reviews to reassess risk and regularly reprioritize investments. So the question arises: Is this Waterfall? Maybe. Is it Agile? Who cares? Why does it even matter? There are more important things to worry about.
A Bigger Issue: Where the Hell Is Strategy?
“Really existing Agile” often ends up being little more than Scrum imposed on dev teams. As Dan Sloan notes, at this level there is typically little to no responsibility for strategy. Allen Holub has called this “Scrum at the bottom”. If left at that, this confines iteration to tactics and does not go far in helping the organization itself become nimbler. In other words, it sets a low “agility ceiling”. Teams, after all, cannot iterate past the limits of their own decision authority.
Scrum at the bottom often then leaves us with a “really existing product work” where there is no actual product vision or strategy in sight. If your dev teams interface with users and POs prioritize what to build, that’s not a product strategy—that’s just process. This abdication of strategy is, I would argue, a far bigger issue than whether teams are “Agile” or “Waterfall”. As Patricia Colley stresses, a real product strategy—and the work that would go into creating a good one—is vital to providing product work with a much-needed conceptual integrity. Tristin Hayward is correct when he says that without this the methods we so often obsess over become moot.
The solution, which SAFe at least attempted, is to stop confining Agile to dev teams. The Manifesto’s very title, relegating Agile to the realm of “software development”, should be rejected. This diminution distracts from the larger task of reckoning with the tight relationship between real agility and organizational decision authority, and thereby responsibly raising the agility ceiling in the organization itself. Any talk of “scaling Agile” apart from this is likely a façade. In High Output Management, Andy Grove seems to have recognized this when he talked about the need to push decisions to the lowest responsible level.
Scrum at the level of dev teams does not help with this. You frequently share your work with users and stakeholders and get feedback. You build what others say they need. Proceeding thus, the larger architectural work—in its proper design sense—will be missing from the picture. It is the error of focusing on a single timescale: the near-term, the tactical. Taking a page from Jabe Bloom, this ignores bigger stories with longer time horizons.
Farthest on the horizon, the overall sense of direction, or aspiration, is here provided by the product mission and vision. This is the strategic intent. Strategy itself breaks this down into longer-term objectives, themselves broken down into behavioral outcomes to be achieved in the near-term. This requires conversations about paths forward and beliefs about the future, expressed at this level as testable hypotheses. Outcomes are achieved by work at the tactical level, by output. This builds out a nested, causal model, where the difference between tactics and strategy lies in dialing up from short-term present HOWS to longer-term future WHYS.
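As a rough illustration of this nesting, the layers just described can be sketched as a simple data model. The class and field names here are my own, purely illustrative choices, not a standard schema:

```python
from dataclasses import dataclass, field

# Illustrative model of the nested, causal structure described above:
# vision -> objectives -> behavioral outcomes -> tactical output.

@dataclass
class Output:
    description: str            # tactical work item (near-term HOW)

@dataclass
class Outcome:
    behavior_change: str        # observable change in what users DO
    hypothesis: str             # testable belief linking output to the change
    outputs: list[Output] = field(default_factory=list)

@dataclass
class Objective:
    statement: str              # longer-term strategic objective
    outcomes: list[Outcome] = field(default_factory=list)

@dataclass
class Vision:
    intent: str                 # strategic intent (longer-term future WHY)
    objectives: list[Objective] = field(default_factory=list)
```

Reading down the nesting dials from WHY to HOW; reading up, every piece of output should trace back to a hypothesis about behavior and, ultimately, to the strategic intent.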
Agility as Optioning
Other frameworks can be leveraged to help raise the agility ceiling without sacrificing strategy, such as VMOST analysis (Vision, Mission, Objectives, Strategy, and Tactics) and/or the North Star Framework, which I learned from John Cutler. Consider the image below, adapted from Rob England and Cherry Vu’s excellent book, S&T Happens. At the top, conventional practice has short-, medium-, and long-term planning without accounting for VUCA (volatility, uncertainty, complexity, and ambiguity). The annual plan, for example, is a time-consuming and expensive endeavor that dictates the year to come as though a crystal ball were at hand.
The more VUCA the world, England and Vu observe, the more this drives a wedge between short-term planning for the foreseeable future and long-term visioning for the far future. Any traditional planning within this “zone of uncertainty” is therefore risky (and likely waste). Thus, the traditional project with its output-based roadmap and output-based milestones will likely be high risk. Moving down, the more you acknowledge VUCA the more you should approximate some form of scenario planning.
Here, one keeps real options alive in the form of possible futures. The focus then shifts to these options and the operational agility required to act on them at the last responsible moment, as indicated by changing conditions. At the bottom of the diagram, this is approximated by translating options or scenarios into alternative possible objectives and their respective provisional outcome-based roadmaps.
The Outcome Lever
People naturally want to focus on output—it’s easy to see, easy to measure, and easy to create a lot of distracting drama about. This drama, by the way, is often highly sought by managers—hiding in “busyness” makes for easy success theater. Unfortunately, it’s a recipe for waste. The diagram below, adapted from Josh Seiden, shows the Kellogg Foundation’s Logic Model.
Behavioral outcomes are typically missing precisely because they’re difficult. In defining what an “outcome” is, I here follow Jeff Patton, Jeff Gothelf, and Josh Seiden. Outcomes are the concrete behavior changes targeted in some group of people in order to enact your strategy. Thus, if you release a feature, that is not an “outcome of the work”. Outcomes are differences in what users DO. No amount of code is more valuable than its ability to change behavior, which is the only way to create new business value. An outcome is not “successful delivery” or “customers are happy” (sorry SAFe).
This is not just semantics. The goal is not more output faster. That’s a blinkered approach to product work. If the bets placed aren’t panning out, then the focus should be on making smarter bets (not more bets faster). As Patton puts it, the goal should be minimal output for maximal outcome. This distinction and this structure specify the pivot triggers necessary to circumvent waste: If you’re not achieving target outcomes, then a pivot in tactics is needed. If achieving outcomes isn’t driving toward objectives and your vision, then a strategy review is called for.
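The two pivot triggers just described can be made explicit. A minimal sketch, with the function name and return strings being my own illustrative choices:

```python
def next_move(outcomes_achieved: bool, objectives_advancing: bool) -> str:
    """Map the two pivot triggers to a decision.

    outcomes_achieved: are target behavioral outcomes being hit?
    objectives_advancing: are achieved outcomes moving you toward
    objectives and the product vision?
    """
    if not outcomes_achieved:
        # The bets aren't changing behavior: change the bets, not the strategy.
        return "pivot tactics"
    if not objectives_advancing:
        # Behavior is changing, but not toward the vision: question the strategy.
        return "review strategy"
    return "stay the course"
```

The point of writing it this tersely is that the triggers live at different levels: missed outcomes question tactics, while achieved-but-inert outcomes question strategy itself.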
To preserve the focus on outcomes, avoid feature-based roadmaps and avoid focusing on ROI. As Rothman calls out, any ROI estimate based on a roadmap of output that is likely to evolve as the work progresses is wasted effort. When considering whether an option is “worth doing”, try to keep the focus on the perceived value of achieving target outcomes, which keeps the conversation at the strategy level. In comparing the outcomes themselves, I would suggest focusing on cost of delay.
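One common way to operationalize this comparison is Reinertsen’s CD3 (Cost of Delay Divided by Duration): sequence options by how much value per unit of time is lost by delaying them. The figures below are invented for illustration:

```python
def cd3(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    """Cost of Delay Divided by Duration: higher means schedule sooner."""
    return cost_of_delay_per_week / duration_weeks

# Hypothetical options with made-up numbers.
options = {
    "Option A": cd3(cost_of_delay_per_week=10_000, duration_weeks=4),  # 2500.0
    "Option B": cd3(cost_of_delay_per_week=6_000, duration_weeks=1),   # 6000.0
}

# Sequence by descending CD3 rather than by estimated ROI of outputs.
ranked = sorted(options, key=options.get, reverse=True)
```

Note that Option B wins despite its lower total cost of delay: the small, quick option unblocks value faster, which is exactly the kind of conversation an output-based ROI estimate tends to obscure.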
As Pavel Samsonov notes, what you should strive for is not a single, linear roadmap but something more resembling a decision tree, with all branches but one representing mere possible roadmaps proposing alternative futures. Ignoring this is to stick with traditional “one-plan-at-a-time thinking”, as described by Finkelstein, Whitehead, and Campbell in Think Again. This multiplies resulting risk, especially considering that the original “one plan” is often chiefly driven by all the standard cognitive biases.
To see how this approach resembles a decision tree, it helps to flip the above model on its side. Scenario planning here comes into play as alternative bets to achieve outcomes, alternative outcomes to achieve objectives, alternative objectives to drive toward the product vision, and possibly alternative product mission/vision statements (strategic intent). In the example below, alternative scenarios are faded. Pivots at any level activate alternative options, thereby branching in the decision tree.
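That branching behavior can be sketched as a small tree in which exactly one child per node is “live” at a time, and a pivot simply switches the active branch. The structure and labels are illustrative assumptions, not a prescribed format:

```python
class Node:
    """A level in the decision tree (objective, outcome, or bet)."""

    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.active = 0  # index of the currently active branch

    def pivot(self, index):
        """Activate an alternative branch at this level."""
        self.active = index

    def active_path(self):
        """The single 'live' roadmap through the tree."""
        path = [self.label]
        if self.children:
            path += self.children[self.active].active_path()
        return path

vision = Node("Vision", [
    Node("Objective 1", [Node("Outcome A"), Node("Outcome B")]),
    Node("Objective 2", [Node("Outcome C")]),
])
vision.children[0].pivot(1)  # outcome-level pivot: B replaces A
# vision.active_path() -> ["Vision", "Objective 1", "Outcome B"]
```

The inactive branches are not waste; they are the kept-alive options that make the pivot cheap when conditions change.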
This is, admittedly, a somewhat dramatic example. Though you want the ability to pivot wherever needed, typically the higher you go the less frequently you will be pivoting. Iterating at the strategy level will therefore be less common than at the tactical. Iterating the mission/vision will (or should) be less common still. This does not mean, however, that a low agility ceiling is fine, such as one fixed at the backlog level. Realizing this, and explicitly tying agility to decision authority from top to bottom, helps to scale actual agility.
Don’t Scrum Your UXD
We’ve talked about “Scrum at the bottom”. Well, same goes with User Experience Design (UXD). UXD, at its core, is not about making user interfaces look pretty. As Erika Hall defines it, design is the orchestration of the exchange of value within constraints to achieve a goal. In other words, design is about discovering the best problems to solve and exploring what you should try out as you solve them. The real medium of any design then is decisions.
The more decisions are already made, the more the direction is set, and the fewer “degrees of freedom” remain. As the degrees of decision freedom are “spent”, there will be less wiggle room for agility. Agile itself then is not about being fast. (The opposite of Agile isn’t “slow”, it’s “path dependent”.) As discussed above, to raise the agility ceiling in the organization you need to move away from “one-roadmap-at-a-time” thinking and practice. You need to bake options into planning to preserve degrees of freedom for agility.
The discovery of these options and the feeding of this scenario-style planning approach will largely be driven by the work of UXDs. As Jonathan Korman argues, though Product Management (PM) should have final say on product strategy, ensuring it feeds into higher-level business strategy, it should nevertheless be primarily derived from the work of UXDs. And this cannot happen if you confine UXD work to the dev-team level.
The larger the initiative, the more PM will reside above the level of dev teams. To raise the agility ceiling, then, UXD must be similarly raised. And yes, this higher-level UXD work will largely take the form of research. There is a bizarre irony in rejecting such upfront research as being somehow anti-Agile “BDUF” (Big Design Up Front). Scenario planning, after all, both requires quite a lot of upfront research and is precisely the sort of thing that helps ensure organizational agility in a high-VUCA context.
Also ironic is all the usual obsessing over “velocity” when teams are deprived of the conceptual integrity of a good vision and strategy. (In fact, that probably feels a lot like gaslighting.) As Korman has put it, so much of this obsessing is itself a result of this underlying lack of strategic clarity. As he notes, if you have a clear strategic vision of system intent, then making the tactical decisions becomes all the easier and “done” becomes a lot easier to talk about.
To close, this post focused on Scrum because of its prevalence—that’s not a recommendation. There are other ways to build this out. An interesting alternative I learned about from Rob England is the Last Planner System®, from the world of construction. As shown below, this maps quite nicely to the structure played with above.
More to come.