Stories as Filters





Intro

Last time we discussed the Narrative Fallacy. Today we will look at five topics that might not seem all that related at first glance. Together, however, they all serve as important lenses showing us the many and profound ways that narratizing can lead us astray. In this post we will discuss sensemaking, projection, pluralistic ignorance, hidden profiles, and unwarranted belief perseverance.

1. Sensemaking

Sensemaking means just what it says. It’s “the making of sense”. We place stimuli in a story or context to explain things, to create a shared sense of understanding, to extrapolate, and to try to control our environments. This positions narrative as a lens or filter through which we interpret perceptions. And when we each sensemake through different lenses, verbal agreement often creates the illusion of alignment while masking deeper conceptual discrepancies.




Karl Weick, in his Sensemaking in Organizations, suggests we think of “organizations” as verbs, akin to collective narratives that delineate “us” from “not us”. He distinguishes seven aspects of this:

In social and ongoing interaction, filtered through our identity and personal narratives, and triggered by salient cues, we enact environments by co-generating narratives that frame our thoughts and create a sense of shared understanding. The narratives that are the most plausible in retrospect are more likely to be repeatedly enacted.

Social narratives serve as a backdrop that is itself not all that noticeable. They are environments we swim in.

To quote McLuhan’s War and Peace in the Global Village, “One thing about which fish know exactly nothing is water….”

To detect an environment, McLuhan argued, necessitates a comparison environment, an “anti-environment”, a point which we’ll return to below.

2. Projection

In Gunther von Fritsch’s 1944 film, The Curse of the Cat People, a lonely girl invents an imaginary playmate for herself…or does she? The movie was praised by the head of the Child Psychology Clinic at UCLA. He would show the film in class and remark on the brilliant insight of having the child showcase her emotional problems by maintaining an awkward half-smile throughout the film.

One year he invited Val Lewton, the film’s producer, to attend his lecture. When the professor brought up the smile, Lewton frankly shared that the actress held her mouth the way she did because she had just lost a tooth and they didn’t want it to show in the movie. The professor, in other words, was projecting.

When we project, we attribute traits, qualities, or motivations to others that they don’t actually have. The concept originated in Freudian psychoanalysis as a defense mechanism in which we “project” our own unwanted thoughts or feelings onto others. More generally, we can take projection to mean the creation of a narrative about another person that misattributes states to them based on our own assumptions and biases.




In general, we are better at inferring the mental states that result from events than we are at positing mental states as the causes of behavior. Yet much of our narratizing is about people’s motivations and private views. This involves projection, and the resulting narrative may come more from us than from anything warranted by what the other person is actually doing.

Whether about why our manager worded something a particular way or what our team members believe about a decision, our narratives about the states causing others’ behavior are often wrong.

3. Pluralistic Ignorance

We will often place what our group wants above our own preferences or opinions. As Todd Rose notes, however, to accurately conform to a majority opinion we must first know what it is. Rose’s research, discussed in Collective Illusions, has found that when it comes to matters of social importance our beliefs about majority opinions are only about 50% accurate.

When enough people wrongly believe that most others in their group hold a belief those others don’t actually have, the result is an illusion of popular support called “pluralistic ignorance”. Combined with conformity, this produces the Abilene Paradox, wherein groups elect to do something the majority of members do not actually support.




If this seems amazingly irrational, remember, we are tribal creatures. As Boghossian and Lindsay note in their phenomenal book, How to Have Impossible Conversations, “People tend to care more about fitting in than believing what’s true.”

Both they and Todd Rose cite Timur Kuran’s classic, Private Truths, Public Lies, which coined the term “preference falsification”. This is the act of misrepresenting one’s genuine wants under perceived social pressures.

As more group members misrepresent their personal views in order to fit in, pluralistic ignorance deepens, making it easier for leaders to hide behind collective illusions and stymieing the effectiveness of group decision-making.

4. Hidden Profiles

To make matters worse, in group or team decision making we tend to focus on information that is already public knowledge and that adheres to the group’s shared narrative, even when privately held information would, if shared, lead to a better solution.

When the best alternative cannot be discovered by the group because the relevant information remains unshared, this is known in the literature as a “hidden profile”. For example, if each member privately holds a different piece of evidence favoring Option A while the evidence everyone already shares favors Option B, the group will likely choose B even though the pooled evidence favors A.

The information discussed in a meeting is more likely to consist of what is already known and supportive of pre-discussion preferences. Novel information is both less likely to be shared and less likely to be repeated if stated.




In their 1993 paper, “The Common Knowledge Effect”, Gigone and Hastie observe that the biggest determinant of a post-discussion decision is people’s pre-discussion judgments. This means that, sadly, the actual content of the discussion matters less than who supports what at the onset of the meeting (which is why the techniques discussed here are so key).

As Toma and Butera (2009) note, information sharing doesn’t really alter pre-discussion preferences. The information shared is treated in such a way as to favor existing knowledge structures. Simply put, facts don’t influence.

Interestingly, learning each other’s preferences can further reduce decision quality. This creates quite the quandary: when preferences are strong and a group seems largely in agreement, there appears to be little point in even having a discussion.

The tendency here will be to treat quickly reaching consensus and making a decision as itself a victory. And sometimes this is fine. As we just saw, however, this does not mitigate the risk posed by pluralistic ignorance.

When value is repeatedly left on the table due to unchecked cognitive biases, what is needed are interventions that systematically improve decision quality.

5. Unwarranted Belief Perseverance

If a belief is based on Data X, then when Data X are shown to be false the belief should reset to its state prior to learning Data X.

This is often not what happens.

To borrow an analogy from Robert Cialdini’s Influence, Data X can be thought of as a pillar, and the belief based on it as a slab balancing on that pillar. If narratizing further fleshes out the belief, it adds other reasons for believing, other “pillars,” such that even if the original pillar, Data X, is removed, the slab remains firmly in place.

In a classic study, Anderson, Lepper, and Ross (1980) induced participants to believe in a predictive relationship between a personality trait and a behavioral outcome. Participants were then told the relationship wasn’t real and the data were fictitious.

They found that participants still nevertheless generalized the hypothesis—which they were just told was made up—to both new people and test items. This effect was even more evident in participants who had also written causal narratives explaining “why” the relationship in question made sense.

In short, once a narrative is in place, we may continue to use it to assess the likelihood of whatever events it purported to explain, even after the data it was originally based on have been debunked. Narratives thus make such “unwarranted belief perseverance” worse.




Conclusion

Hopefully the above shows some of the ways narratizing can lead us astray. So, what to do about it?

In answering, it’s vital to also discuss what doesn’t work. After all, if our thinking is beset with cognitive biases, so too will be our thinking about what to do about cognitive biases.

For example, raising the stakes, such as by offering a financial incentive to do well, does not counter whatever bias might be at play. It may even exacerbate it. If you incentivize people to apply their current thinking with more vigor, then any bias at play may end up being amplified.

As we saw above, presenting facts is also ineffective. In one of my favorite examples, Lord, Lepper, and Preston (1984) had people either in favor of or opposed to capital punishment assess two studies. One purportedly found evidence that the death penalty lowers murder rates. The other found evidence that it doesn’t. Unsurprisingly, both groups on average rated the study that supported their a priori views as methodologically superior to the other one.

More interesting was the finding of attitude polarization. Both groups walked away feeling more confident in their initial positions, even though everyone on both sides of the issue read the same two studies (not to mention that both studies were made up). Thus, if we are already processing information in biased ways, new information is just more fuel for the bias fire.

Further, if the new information disconfirms current beliefs, it may also trigger defense mechanisms. Boghossian and Lindsay call this the “Backfire Effect”. Presenting conflicting evidence generates cognitive dissonance, which launches people into defending their beliefs, seeking out arguments to counter your “facts,” and rehearsing defenses against future challenges. In short, when new data contradicts a belief, we automatically want to argue against it just to make ourselves feel better.

Lord, Lepper, and Preston also found that simply asking people to be objective had no effect. What proved effective, however, was counterfactual reasoning: asking people to consider whether they would have made the same evaluations had the same study produced results on the other side of the issue. Interestingly, counterfactual primes seem to improve performance on many decision-making tasks.

Tversky and Kahneman (1973), for example, proposed that thoughts can come to mind in one of two ways, either by recalling past instances or by mentally creating scenarios. The latter, which is basically counterfactual reasoning, is not our usual habit. This skill, which helps us generate alternatives, can and should be developed like a muscle.

To close, once we are “in a narrative”, much of our “thinking” serves only to further flesh it out. It can be hard to step out of. We’ll often overestimate the likelihood that our current narrative is true. We’ll also then think the outcome our narrative “explains” is itself more likely to occur. (This is sometimes called the “Explanation Bias”.)

Whatever the focal hypothesis of the current narrative, its truth is assumed. To counter this, we need counter-explanations.

This brings us back to McLuhan’s “water”. Remember, to detect an environment we need an “anti-environment”, or here, “anti-narratives”.

It’s like we can’t stop swimming in the “pool” we’re in unless we see another pool to swim in instead. Most of us don’t have the Buddhist mental wherewithal to do the opposite of Dory’s advice in Finding Nemo (“Just keep swimming!”) and simply stop swimming.

