Design is not about creating things. It’s about making decisions to solve problems. Design research is about fitting the right learning tools to the right kinds of questions, doing the smart thing to “derisk” the decisions made.
As designer Erika Hall puts it, the decisions we make and the constraints we set up front are ASSUMPTIONS, and the gap between these assumptions and reality is our risk. To close that gap we need to learn, and to learn, we need to know what questions to ask.
We typically get this backwards, Hall says. We want to be perceived as having answers, not as knowing what questions to ask. We ignore that the right activity is determined by the sort of question we need to answer, and we end up prescribing activities with no clear goals.
As she explains, this is one reason leaders often state they’ve “tried research” and didn’t get value out of it (see Hall, 2018a). Without identifying our riskiest assumptions and with no clear research goal, of course doing research isn’t going to provide value. This puts the cart before the horse…and yet we do it all the time.
By dictating learning activities in the absence of research goals and only wondering about the right questions later—if at all—we fall into a variety of traps. Here we’ll discuss four.
Trap 1: Learning = live product
This places the focus solely on delivery and again ignores the importance of the types of questions that need to be asked. Prototype testing, for instance, cannot tell us whether we’re solving the right problem. Neither, for that matter, can iteratively building a particular solution.
Perhaps we should have started with some stakeholder and user interviews, fitting the tool to the question? (As I recently heard someone put it, you COULD eat Chinese food with a hammer and a screwdriver, but it will probably take longer and be messier than necessary.)
It’s unfortunate that Agile has tended to make this trap worse. As one Agile Manifesto coauthor has repeatedly told me, you can’t deliver value if you’re not delivering working software! Notice this is also a red herring of sorts. If we’re building the wrong thing, then building more of it isn’t going to solve the problem.
This raises important questions. What are we actually focusing on? How do we see our mission? Are we achieving outcomes and solving problems or taking requests, filling orders, and delivering product? They’re not the same thing. At what level are teams allowed to pivot? If they cannot meaningfully pivot, then they cannot meaningfully take action based on what they learn. Agile agrees with this, by the way, and yet Agile teams often get stuck in the right-hand loop below.
Trap 2: Value is product, not value
This confuses a way station with the goal. In this view, anything suspected of slowing the filling of orders is seen as slowing the delivery of value. (Maybe we think we’re running a pizza joint.) Hence the peculiar (though frequent) manager and executive notion that research “adds to time and cost” (see Constable & Rimalovski, 2018, for another discussion of this point).
It’s easy to dismiss this mistake by thinking, “Well, they just don’t understand research.” That doesn’t go far enough, though. If what is being left unsaid is that the org is going to continue funding the HiPPO come what may, then, yes, doing any research would in fact be waste (on top of the many other activities that will, by the same token, also be waste).
Note, however, this also means the org cannot be “Agile” in any meaningful sense of the term. Agile is not about building as quickly as possible whatever the HiPPO decides. It’s about empowering teams to discover what they should be doing as they quest…something design research can and should help with.
Design research digs into the problem space and explores the frames used to generate value-adding options. These skills would up-level the game of most Agile teams.
When leadership reacts to discovery and agility as an affront to power, this shows they’re stuck in a bad frame. A better frame is that leaders are making BETS and should want research to help them make SMARTER bets with less time and less cost.
As Hall (2018a) notes, if an executive were going to buy a $70k car, she’d probably do some research first, right? So why wouldn’t she want some research done before betting $100k on an Agile team doing a sprint of work on an idea? Or $1m on five teams doing two sprints of work? How is that different? (Image reworked from an idea I got from Jeff Patton.)
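The bet-sizing arithmetic above is easy to make concrete. A minimal sketch, assuming a fully loaded cost of roughly $100k per team per sprint (an illustrative figure, not a number from Hall or any other source):

```python
# Hypothetical figure for illustration only: assumed fully loaded cost
# of one team working for one sprint.
COST_PER_TEAM_SPRINT = 100_000


def bet_size(teams: int, sprints: int,
             cost_per_team_sprint: int = COST_PER_TEAM_SPRINT) -> int:
    """Total amount wagered when funding `teams` teams for `sprints` sprints."""
    return teams * sprints * cost_per_team_sprint


# One team, one sprint: the $100k bet.
print(bet_size(1, 1))   # 100000

# Five teams, two sprints: the $1m bet.
print(bet_size(5, 2))   # 1000000
```

Seen this way, a few days of research that redirects even one of those sprints pays for itself many times over.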
Trap 3: Design research is “icing on the cake”
The assumption here is that research produces unnecessary refinements to the vital work that would already be happening without the research. Notice what is being telegraphed here. This statement is only true to the extent that whatever research is done will be ignored!
The very point of design research, after all, is to determine what work the team should be doing in the first place. If discovery is “icing on the cake,” then where did the cake come from?
The assumption Agile teams tend to make is that the research is happening in secret somewhere upstream, when it’s typically not. (Usually an idea was passionately advocated in a meeting and then got funded. Passion is great, but we need to create a space for derisking.)
This brings us to our final trap.
Trap 4: Activities in absence of outcomes
“Providing a good user experience” is too vague. In Hall’s words, “A good user experience is only as good as the action it enables.” What specific behavior changes are we trying to create in order to generate value for both the users and the business? This view is aided, Hall argues, by shifting from “user-centered” to “value-centered design.”
Notice I’ve been saying “design research” and not “user research.” That is intentional. User research is too narrow. As Hall argues, confining design to the user’s experience has distracted us from the importance of outcomes, which are the proper focus of design.
The organization providing a solution or service needs users to engage in certain behaviors that provide value to the organization. Users will only proactively engage in those behaviors if doing so also meets their needs, which are different from the organization’s.
It’s by centering design on these “touchpoints” (not all of which are digital by the way) that we can best drive this bidirectional creation of value. The target behaviors in the middle are concrete outcomes, and it’s only by researching the ecosystem that we can effectively design for them.
In other words, what concrete behaviors, if users engaged in them, would both solve problems they have and provide value to the organization? Solve for those.
There is, however, another reason user research is too narrow a term. To succeed we must influence, and to influence we need to learn what influences. Research stakeholders and learn how they make decisions. Learn what data they find persuasive. Before doing research FOR them, do research to learn how to make research effective.
As Hall (2018b) notes, the data that make for good design decisions typically aren’t the data business folks find persuasive. They like numbers, and often don’t care that quantitative data can tell us WHAT is happening but not WHY. To dig into the why, we must go qualitative.
Exploring ways to shepherd the business stakeholder along, even inviting her to participate in some research herself, can go a long way in making her more receptive to the qualitative data that will be necessary to derisk the decisions that need to be made.
To conclude, here are the four traps we discussed: (1) learning = live product, (2) value is product, not value, (3) design research is “icing on the cake,” and (4) activities in absence of outcomes.
Until next time.
Constable, G. & Rimalovski, F. (2018). Testing with humans: How to use experiments to drive faster, more informed decision making. Giff Constable.
Hall, E. (2018a). The nine rules of design research. Medium. Retrieved on February 8, 2019 from: https://medium.com/mule-design/the-9-rules-of-design-research-1a273fdd1d3b.
Hall, E. (2018b). Thinking in triplicate: You have to see the whole story to make it come true. Medium. Retrieved on February 8, 2019 from: https://medium.com/mule-design/a-three-part-plan-to-save-the-world-98653a20a12f.