Inferring Causality: Why You Can’t Just Ask Why




A very prominent UX Researcher, who shall remain unnamed, presented at a conference recently. It started off a little rocky and then got worse…and then I stopped listening.

When UX Researchers lecture others on research while spreading misinformation, it doesn’t help the rest of us in the profession. Here we’ll only discuss two issues with the talk. For the sake of anonymity, we’ll just refer to the presenter as “Chris.”


Issue No. 1

Chris began by poking fun at an academic researcher for running a correlation and using it to bolster a causal claim. After reminding the audience of the oft-repeated claim that “correlation isn’t causation,” Chris poked some more fun at the claim being made. Ironically, the research in question IS flawed, just not for the reasons Chris seems to think.

“Correlation isn’t causation” is a near-ubiquitous slogan. Unfortunately, it’s often misleading. It’s sort of an inferior way of expressing the classic fallacy, “With this, therefore because of this” (in Latin, cum hoc, ergo propter hoc).  

The idea is that just because A and B tend to co-occur does not necessarily mean one causes the other. True enough. If the consumption of ice cream always goes up with the crime rate, that doesn’t mean eating ice cream causes crime or vice versa. It could be a “third-variable problem”: some lurking factor, such as hot weather, could be driving both.
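To make the “third variable” concrete, here is a minimal simulation sketch in Python (numpy and scipy assumed; every number in it is invented for illustration). A lurking factor, standing in for something like hot weather, drives both quantities, so a substantial correlation shows up between two variables that never causally touch:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Hypothetical illustration: "temperature" drives BOTH ice cream
# consumption and crime, but neither causes the other.
n = 500
temperature = rng.normal(70, 15, n)              # the lurking third variable
ice_cream = 2.0 * temperature + rng.normal(0, 10, n)
crime = 0.5 * temperature + rng.normal(0, 10, n)

r, p = pearsonr(ice_cream, crime)
print(f"ice cream vs. crime: r = {r:.2f}, p = {p:.4g}")
# Reports a substantial positive correlation even though, by
# construction, ice cream and crime never touch each other causally.
```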




The issue here is this: just because association (cum hoc) isn’t causation (propter hoc) does not mean CORRELATION cannot be used to support causation. In fact, correlations are OFTEN used to make valid causal claims in science. People who ignore this are ignorant of, well, science, which is ironically the very thing they’re feigning expertise in.

As others have noted, dismissing someone by declaring “correlation isn’t causation” is almost a form of “fallacy engineering.” You’re essentially claiming to know the truth without evidential support.

This dismissal confuses experimental DESIGN with statistical ANALYSIS. One of the best ways (but not the only way) to empirically support a causal claim is to design a “true experiment” (as opposed to a “quasi-experiment” or “non-experimental design”), which allows you to control for confounds.

A true experiment requires three things (Tabachnick & Fidell, 2019).

  1. The random assignment of subjects to levels.
  2. The manipulation of the levels of at least one independent variable. (If you don’t manipulate a variable, you cannot randomly assign subjects to levels.)
  3. The control of extraneous variables.

You CAN do these three things, run a true experiment, and then analyze your results with correlations. You could also collect data without controlling for confounds and then analyze your results with an ANOVA (analysis of variance). Either way, it’s not the analysis that supports your causal claim; it’s the design of the research.
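As a sketch of that point (again in Python, with made-up effect sizes), the snippet below generates data from a properly randomized two-group design and then analyzes the SAME data twice, once with a correlation and once with a t-test. The p-values come out identical, because the causal warrant lives in the design, not in which statistic you computed:

```python
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(7)

# A true experiment in miniature: randomly assign subjects to the two
# levels of a manipulated independent variable. (Effect size and noise
# are invented for illustration.)
n = 200
group = rng.permutation(np.repeat([0, 1], n // 2))   # random assignment
outcome = 5.0 + 1.5 * group + rng.normal(0, 3, n)    # true effect = 1.5

r, p_corr = pearsonr(group, outcome)                 # point-biserial correlation
t, p_ttest = ttest_ind(outcome[group == 1], outcome[group == 0])

print(f"correlation: r = {r:.3f}, p = {p_corr:.4f}")
print(f"t-test:      t = {t:.3f}, p = {p_ttest:.4f}")
# The two p-values match exactly: same design, same inference,
# different (equivalent) statistics.
```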




When people decry “correlation,” what they probably mean is that it’s difficult to show causation with “passive observational studies,” where controlling for confounds is harder. Even this, however, isn’t quite accurate.

Just look at epidemiology. John Snow didn’t identify the cause of cholera by randomly assigning some people to drink contaminated water. He made observations and then dug into the data. Same with smoking and cancer: lung cancer death rates lagged behind smoking rates by a couple of decades, while the rates for other forms of cancer didn’t (Dallal, 2000).


Issue No. 2

So far things are…not that bad. Lots of people go around saying “correlation isn’t causation.” What followed, though, was some truly bad advice.

Chris then raked the academic researcher over the coals some more, declaring the causal claim was not justified because the people being observed were not asked WHY they behaved the way they did. Apparently that’s a no-no.




That’s just, well, wrong. Again, inferring causation is BEST done by controlling for extraneous variables.

These are factors other than the independent variable that might account for the changes observed in the DEPENDENT variable. These could be treatment confounds, as when the independent variable covaries with another factor, or measurement confounds, as when you’re actually measuring more than one thing (Whitley & Kite, 2013).

So here’s the second issue: Just how does asking subjects WHY they behaved in certain ways help control for these extraneous factors? How does it help establish statistical relevance, a prerequisite for causation? It doesn’t.
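One last sketch, again with invented numbers, shows why. Below, subjects self-select into the “treatment,” so a confound (call it motivation) inflates the naive estimate. What recovers the true effect is measuring and adjusting for the confound, not interviewing anyone about their motives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical treatment confound: the "independent variable" was not
# randomly assigned, so it covaries with another factor (motivation)
# that also drives the outcome. All numbers are made up.
n = 1000
motivation = rng.normal(0, 1, n)
treated = (motivation + rng.normal(0, 1, n) > 0).astype(float)   # self-selection
outcome = 2.0 * treated + 3.0 * motivation + rng.normal(0, 1, n) # true effect = 2.0

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Regression adjustment: include the measured confound as a covariate.
X = np.column_stack([np.ones(n), treated, motivation])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference in means:  {naive:.2f}")   # badly inflated
print(f"adjusted treatment effect:  {coef[1]:.2f}") # close to 2.0
# Asking subjects WHY they joined the treatment group would not have
# removed this bias; controlling the extraneous variable does.
```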




In Milgram’s obedience studies or Asch’s conformity studies, when subjects were later asked WHY they behaved the way they did, did that help establish causality? No. If anything, it gave SOME insight into people’s awareness (or lack thereof) of their own motivations, but that’s it.

People are really, really good at ad hoc hypothesizing. Ask some people why they did something and, even if they don’t really know, they can concoct a story. They might even believe it.

In Langer’s famous copy machine study, if you had asked subjects WHY they let certain people cut in line, they might have given you some story about how those people were more polite, which would have had little to do with the actual hypothesized mechanism. (The irony here is that the study was actually about people needing a reason to comply, with it not mattering much what the reason is…so long as there is one.)

One more example. In WWII it was noticed that bombers were more accurate when facing increased enemy fire. Can you imagine interviewing the crews and asking them WHY they were more accurate? They could easily spin you a tale about heightened nerves, adrenaline kicking in, and accuracy improving in “fight or flight.” Or maybe there’s simply more enemy resistance when the sky is clear and the bombers can see better (Dallal, 2000).

There is a corollary in user research. Let’s say you interview someone about his morning commute to work. He tells you some things, you ask why he does X, Y, and Z, and he tells you some more. Now let’s say instead you ride shotgun and just observe some morning commutes. You’re going to learn completely different things! (Beyer, 2012)

The first way, he’s going to make his morning commute sound a lot “cleaner” than it actually is. The second way, you’re going to see a lot of things he would not have shared with you, for a variety of reasons.

First, you’ll probably see some things he’d rather leave out, like how much he actually texts while driving. Second, a lot of what he’s doing will involve TACIT knowledge, which means when you ask him WHY this and WHY that, he probably doesn’t really know anyway. A lot of what you’re going to hear will be RATIONALIZATIONS.

Anyway, enough harping. When conducting user research, you’re largely not there to make scientific causal claims anyway. You’re trying to learn about what users tend to do in certain situations, what their needs are, what their underlying values and goals are.

Causal claims will be assessed more by building, deploying, and seeing what people actually DO with what’s offered.

Until next time.




References

Beyer, H. (2012). Getting started with UX inside Agile development. Presented at the UX Immersion Conference. Portland, OR.

Dallal, G. E. (2000). The little handbook of statistical practice. Tufts University. Retrieved February 26, 2020, from http://www.jerrydallal.com/LHSP/LHSP.htm

Tabachnick, B. G. & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). NY: Pearson Education, Inc.

Whitley, B. E. & Kite, M. E. (2013). Principles of research in behavioral science (3rd ed.). NY: Routledge.
