Say a team’s customer wants the team to software-enable a process. Doing so will save around $20k a month. Maybe the team does it and the customer is happy. Very happy. The team is even featured in the IT newsletter and gets lots of praise.
All is well and good….
Exactly. What if it turns out that if they’d simply asked a few more questions, done a little more digging, and made a few introductions between the right stakeholders, the process could have been done away with altogether, saving $1m a month? That oversight has a “cost” of roughly $11.8m per year. This changes how you feel about the team’s accomplishments, doesn’t it?
It should—interpretation depends on context. With different comparisons you’ll see the same result differently. A result in isolation doesn’t mean much. As Groucho asks above, “Compared to what?” What’s the context? Your focus at first should be digging into the context to surface the most pertinent comparisons in order to maximize value. Without this, it’s like you have horse blinders on.
There’s an analogy in experimental design. When the team software-enabled the process without digging into the context and surfacing other options, it’s like they trapped themselves in a between-subjects design. This is where subjects are only shown one condition of a study—they don’t know what the relevant comparisons are. In a within-subjects design, where subjects are shown the other conditions, they have an expanded view. The horse blinders are off.
In a research study, if people are asked to make a judgment or evaluation about what they’re shown, their judgments will differ depending on which type of design they’re in. As an amusing example, Birnbaum (1999) famously found that in a between-subjects design people rated the number 9 as being greater than the number 221. This makes perfect sense too.
In a between-subjects design, where you aren’t shown the relevant comparison(s), you must imagine a context to even make the judgment you’re asked to make. The stimulus is therefore confounded with its context (see Lambdin & Shaffer, 2009). (Incidentally, this likely invalidates a great many between-subjects studies.)
If merely asked to rate how large the number 9 seems, you’ll likely generate a context of 1 to 10. If asked to rate how large the number 221 seems, you’ll likely generate a context of 1 to 1000. You don’t know the relevant comparison is 9 compared to 221, hence 9 is evaluated as being larger.
Whether you’re a subject in a between-subjects design or evaluating the work of a product team, your judgment—your evaluation—is dependent on the context you generate. This is in turn dependent on the comparisons you can think to make, which depends on the options you are aware of. If your only comparison is “doing X or not doing X,” then any value doing X creates will seem like a win. You’re anchoring yourself and won’t see the opportunity cost, regardless of how staggering it might be.
Back to our example, if you’re comparing saving $20k a month ($240k a year) to saving $0, then $240k a year seems like a big win. By exploring the frame and generating options, however, you are moving to more of a within-subjects situation. You open up your perspective to include other comparisons, thereby widening context. Saving $240k a year is fine compared to $0—compared to $12m it’s small potatoes.
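The arithmetic behind that comparison is worth making explicit. Here is a minimal sketch using the illustrative figures from the example (the dollar amounts are the hypothetical ones above, not real costing data):

```python
# Hedged sketch of the opportunity-cost arithmetic from the example.
# All figures are the illustrative ones from the text.
MONTHS_PER_YEAR = 12

automation_savings = 20_000 * MONTHS_PER_YEAR       # software-enable the process
elimination_savings = 1_000_000 * MONTHS_PER_YEAR   # eliminate the process entirely

# The value left behind by stopping at the first option:
opportunity_cost = elimination_savings - automation_savings

print(f"Automation:        ${automation_savings:,}/year")   # $240,000/year
print(f"Elimination:       ${elimination_savings:,}/year")  # $12,000,000/year
print(f"Value left behind: ${opportunity_cost:,}/year")     # $11,760,000/year
```

Against a baseline of $0, the $240k looks like a clear win; against the alternative that discovery would have surfaced, it hides an eight-figure opportunity cost.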
In general, if you have one option on the table and focus on deciding “whether or not” to do it—without eliciting actual alternatives—you’re going to make worse decisions than if you routinely consider at least three alternatives (Nutt, 1999). In The Secrets of Consulting (1985), Weinberg calls this his “Rule of Three” (which he got from Virginia Satir). If you need to set a rule to force you to always generate alternatives, then do it! An example might be, “I will always dovetail by pitching a potential problem with two alternative problems to solve.”
As Weinberg further notes, “cost-benefit analysis” is typically a misleading euphemism for “cost analysis.” Generally, no one is even looking at the value left behind. This echoes Andy Grove’s (1983) point that saying yes to one option means saying no to the others. When it comes to the other problems you could have solved instead, what is the potential value left on the table? What are the alternative “costs of delay” being ignored?
In summary, in what way are you in a between-subjects study of your own making? How suboptimal are the comparisons you’re left with? Agile doesn’t help with this, by the way. Just because you have demand doesn’t mean you don’t need discovery. Iteratively pursuing an option still leaves you with that option. You are still trapped in a between-subjects view of reality.
You need problem-space research and lateral thinking. You can’t compare options if you don’t do the discovery work needed to bring them to your attention in the first place. Widen your view. Make better comparisons. Create more value. Strive to break out into a more within-subjects view.
Birnbaum, M. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4, 243–249.
Grove, A. S. (1983). High output management. New York: Vintage Books.
Lambdin, C., & Shaffer, V. (2009). Are within-subjects designs transparent? Judgment and Decision Making, 4(7), 554–566.
Nutt, P. C. (1999). Surprising but true: Half the decisions in organizations fail. Academy of Management Executive, 13, 75–90.
Weinberg, G. M. (1985). The secrets of consulting: A guide to giving & getting advice successfully. New York: Dorset House Publishing.