
Saturday, January 14, 2023

"Deliberation Science" readings update

There's so much yet to learn. And, even more yet to "unlearn." Some current readings.

First, a Christmas gift book from my sister.
The past several years, we’ve felt as if we’re all stuck in some weird Twilight Zone nightmare where we are constantly, relentlessly gaslit. Up is down, left is right, right is wrong. It feels as if the values of our society have changed almost overnight. We feel disoriented, frustrated, disaffected, and distrustful of each other. We ask ourselves whether we’ve gone crazy, or the world has, or both. No wonder people in the United States are waging a kind of war on trust, building elaborate castles of suspicion that imperil our personal happiness and national prosperity.

All around the world, democracy is now under strain due in part to social problems that cannot be solved through legislation or technology. In a very real sense, collective illusions do the most damage in free societies, precisely because they depend on shared reality, common values, and the willingness to engage with different viewpoints in order to function, let alone flourish. That is why I see collective illusions as an existential threat.

 The bad news is that we are all responsible for what is happening. And yet that is also the good news, because it means we have the power, individually and together, to solve the problem. The best news of all is that, as powerful as collective illusions are, they are also fragile because they are rooted in lies and can be dismantled through individual actions. With the right tools and some wise guidance, we can dismantle them…

…While our social nature is part of our biology, our reaction to our social instincts is within our control. When we’re armed with the right knowledge and skills, we don’t have to choose between being a maverick or being a lemming. This book aims to give you the tools you need to truly understand why and how we conform, how conformity leads directly to collective illusions, and how you can learn to control social influence so that it doesn’t control you…

…we often conform because we’re afraid of being embarrassed. Our stress levels rise at the thought of being mocked or viewed as incompetent, and when that happens, the fear-based part of the brain takes over. Confused and unsure of ourselves, we surrender to the crowd because doing so relieves our stress. Caving to the majority opinion also diffuses our personal responsibility for our decisions, making it easier to bear mistakes. When you find yourself making a decision on your own, it can feel isolating, and the personal responsibility can be intimidating. Indeed, whether our actions are right or wrong, they always feel better if we take them together with others.

Rose, Todd (2022). Collective Illusions. Hachette Books. Kindle Edition.
Persuasive case, eloquently made.
Think about Robert Cialdini's venerable "Influence: The Psychology of Persuasion" and how his "levers" model fits with Todd's thesis:
Reciprocity;
Commitment and Consistency;
Social Proof;
Authority;
Liking;
Scarcity.
Dr. Cialdini's work is widely known in the advertising and marketing domains for its principles and techniques of being "influential." Dr. Rose's work reveals an unhappy flip side of that—our vulnerability to malign influence, given our social anxieties about "fitting in" to sociocultural norms (which includes the political space).

How about some Justin Gregg?

I would also recommend Todd Kashdan's book apropos of this topic:
Also, more on principles of influence by our esteemed Zoe B. Chance in a prior post.

I recently subscribed to some useful Substack readings. This recent Steven Pinker offering stood out:
Reason To Believe
How and why irrationality takes hold, and what to do about it.

When I tell people that I teach and write about human rationality, they aren’t curious about the canons of reason like logic and probability, nor the classic findings from the psychology lab on how people flout them. They want to know why humanity appears to be losing its mind…

…Can anything be done? Explicit instruction in “critical thinking” is a common suggestion. These curricula try to impart an awareness of fallacies such as arguments from anecdote and authority, and of cognitive biases such as motivated reasoning. They try to inculcate habits of active open-mindedness, namely to seek disconfirmatory as well as confirmatory evidence and to change one’s mind as the evidence changes.

But jaded teachers know that lessons tend to be forgotten as soon as the ink is dry on the exam. It’s hard, but vital, to embed active open-mindedness in our norms of intellectual exchange wherever it takes place. It should be conventional wisdom that humans are fallible and misconceptions ubiquitous in human history, and that the only route to knowledge is to broach and then evaluate hypotheses. Arguing ad hominem or by anecdote should be as mortifying as arguing from horoscopes or animal entrails; repressing honest opinion should be seen as risible as the doctrines of biblical inerrancy or Papal infallibility.

But of course we can no more engineer norms of discourse than we can dictate styles of hairstyling or tattooing. The norms of rationality must be implemented as the ground rules of institutions. It’s such institutions that resolve the paradox of how humanity has mustered feats of great rationality even though every human is vulnerable to fallacies. Though each of us is blind to the flaws in our own thinking, we tend to be better at spotting the flaws in other people’s thinking, and that is a talent that institutions can put to use. An arena in which one person broaches a hypothesis and others can evaluate it makes us more rational collectively than any of us is individually.

Examples of these rationality-promoting institutions include science, with its demands for empirical testing and peer review; democratic governance, with its checks and balances and freedom of speech and the press; journalism, with its demands for editing and fact-checking; and the judiciary, with its adversarial proceedings. Wikipedia, surprisingly reliable despite its decentralization, achieves its accuracy through a community of editors that correct each other's work, all of them committed to principles of objectivity, neutrality, and sourcing. (The same cannot be said for web platforms that are driven by instant sharing and liking.)

If we are to have any hope of advancing rational beliefs against the riptide of myside bias, primitive intuitions, and mythological thinking, we must safeguard the credibility of these institutions. Experts such as public health officials should be prepared to show their work rather than deliver pronouncements ex cathedra. Fallibility should be acknowledged: we all start out ignorant about everything, and whenever changing evidence calls for changing advice, that should be touted as a readiness to learn rather than stifled as a concession of weakness.

Perhaps most important, the gratuitous politicization of our truth-seeking institutions should be halted, since it stokes the cognitively crippling myside bias. Universities, scientific societies, scholarly journals, and public-interest nonprofits have increasingly been branding themselves with woke boilerplate and left-wing shibboleths. The institutions should not be surprised when they are then blown off by the center and right which make up the majority of the population. The results have been disastrous, including resistance to climate action and vaccination.

The defense of freedom of speech and thought must not be allowed to suffer that fate. Its champions should have at their fingertips the historical examples in which free speech has been indispensable to progressive causes such as the abolition of slavery, women’s suffrage, civil rights, opposition to the war in Vietnam, and gay rights. They should go after the censors on the right as vigorously as those on the left, and should not give a pass to conservative intellectuals or firebrands who are no friends to free speech, but are merely enemies of their enemies.

The creed of universal truth-seeking is not the natural human way of believing. Submitting all of one’s beliefs to the trials of reason and evidence is cognitively unnatural. The norms and institutions that support this radical creed are constantly undermined by our backsliding into tribalism and magical thinking, and must constantly be cherished and secured…
Hmmm… “critical thinking?” Yeah. “Necessary but insufficient?” One wonders.

Dr. Pinker cites Professor Keith Stanovich:

Myside bias is displayed by people holding all sorts of belief systems, values, and convictions. It is not limited to those with a particular worldview. Any belief that is held with conviction—any distal belief, to use Robert Abelson’s (1986) term—can be the driving force behind myside thinking. In short, as an information processing tendency, myside cognition is ubiquitous.

Some might argue that something so ubiquitous and universal must be grounded in the evolution of our cognitive systems (either as an adaptation or as a by-product). Others, however, might argue that myside bias could not be grounded in evolution because evolutionary mechanisms would be truth seeking, and myside bias is not. In fact, evolution does not guarantee perfect rationality in the maximizing sense used throughout cognitive science—whether as maximizing true beliefs (epistemic rationality) or as maximizing subjective expected utility (instrumental rationality). Although organisms have evolved to increase their reproductive fitness, increases in fitness do not always entail increases in epistemic or instrumental rationality. Beliefs need not always track the world with maximum accuracy in order for fitness to increase.

Evolution might fail to select out epistemic mechanisms of high accuracy when they are costly in terms of resources, such as memory, energy, or attention. Evolution operates on the same cost-benefit logic that signal detection theory does. Some of our perceptual processes and mechanisms of belief fixation are deeply unintelligent in that they yield many false alarms, but if the lack of intelligence confers other advantages such as extreme speed of processing and the noninterruption of other cognitive activities, the belief fixation errors might be worth their cost (Fodor 1983; Friedrich 1993; Haselton and Buss 2000; Haselton, Nettle, and Murray 2016). Likewise, since myside bias might tend to increase errors of a certain type but reduce errors of another type, there would be nothing strange about such a bias from an evolutionary point of view (Haselton, Nettle, and Murray 2016; Johnson and Fowler 2011; Kurzban and Aktipis 2007; McKay and Dennett 2009; Stanovich 2004). What might be the nature of such a trade-off?

For many years in cognitive science, there has been a growing tendency to see the roots of reasoning in the social world of early humans rather than in their need to understand the natural world (Dunbar 1998, 2016). Indeed, Stephen Levinson (1995) is just one of many theorists who speculate that evolutionary pressures were focused more on negotiating cooperative mutual intersubjectivity than on understanding the natural world. The view that some of our reasoning tendencies are grounded in the evolution of communication dates back at least to the work of Nicholas Humphrey (1976), and there are many variants of this view. For example, Robert Nozick (1993) has argued that in prehistory, when mechanisms for revealing what is true about the world were few, a crude route to reliable knowledge might have been to demand reasons for assertions by conspecifics (see also Dennett 1996, 126–127). Kim Sterelny (2001) developed similar ideas in arguing that social intelligence was the basis of our early ability to simulate (see also Gibbard 1990; Mithen 1996, 2000; Nichols and Stich 2003). All of these views are, despite subtle differences between them, sketching the genetic-cultural coevolutionary history (Richerson and Boyd 2005) of the negotiation of argument with conspecifics.

The most influential synthesis of these views—and the most relevant to myside bias—was achieved by Hugo Mercier and Dan Sperber (2011, 2017), whose subtle, nuanced theory of reasoning is grounded in the logic of the evolution of communication. Mercier and Sperber’s theory posits that reasoning evolved for the social function of persuading others through arguments. If persuasion by argument is the goal, then reasoning will be characterized by myside bias. We humans are programmed to try to convince others with arguments, not to use arguments to ferret out the truth. Like Levinson (1995) and the other theorists mentioned earlier, Mercier and Sperber (2011, 2017) see our reasoning abilities as arising from our need not to solve problems in the natural world but to persuade others in the social world. As Daniel Dennett (2017, 220) puts it: “Our skills were honed for taking sides, persuading others in debate, not necessarily getting things right.”

In several steps, Mercier and Sperber’s (2011, 2017) theory takes us from the evolution of reasoning to our ubiquitous tendency, as humans, to reason with myside bias. We must have a way of exercising what Mercier and Sperber call “epistemic vigilance.” Although we could adopt the inefficient strategy of differentiating trustworthy people from untrustworthy people by simply memorizing the history of our interactions with them, such a strategy would not work with new individuals. Mercier and Sperber (2011, 2017) point out that argumentation helps us to evaluate the truth of communications based simply on content rather than on prior knowledge about particular persons. Likewise, we learn to produce coherent and convincing arguments when we wish to transmit information to others with whom we have not established a trusting relationship. These skills of producing and evaluating arguments allow members of a society to exchange information with other members without the need to establish a prior relationship of trust with them.

If, however, the origins of our reasoning abilities lie in their having as a prime function the persuasion of others through argumentation, then our reasoning abilities in all domains will be strongly colored by persuasive argumentation. If the function of producing an argument is to convince another person, it is unlikely that the arguments produced will be an unbiased selection from both sides of the issue at hand. Such arguments would be unconvincing. Instead, we can be expected to have an overwhelming tendency to produce arguments that support our own opinion (see Mercier 2016).

Mercier and Sperber (2011) argue that this myside bias carries over into situations where we are reasoning on our own about one of our opinions and that, in such situations, we are likely to anticipate a dialogue with others (see Kuhn 2019). The anticipation of a future dialogue will also cause us to think to ourselves in a mysided manner. Mercier and Sperber’s (2016, 2017) theory makes differential predictions about our ability to evaluate the arguments of others. Basically, it predicts that, though we will display a myside bias in evaluating arguments if the issue in question concerns a distal belief, we will display much less of a myside bias when the issue in question is a testable belief.

In short, Mercier and Sperber (2011, 2017) provide a model of how myside bias is inherent in the evolutionary foundations of reasoning. From their evolutionary story of origins, it is not hard to imagine how the gene-culture coevolutionary history (see Richerson and Boyd 2005) of argumentation abilities would reinforce the myside properties of our cognition (a subject of much speculation I can only allude to here). For example, in an early discussion of myside costs and benefits, Joshua Klayman (1995) suggests some of the gene-culture coevolutionary trade-offs that may have been involved. He discusses the cognitive costs of generating ideas outside the mainstream—“Just keeping an open mind can have psychic costs” (Klayman 1995, 411)—and the potential social disapproval of those who waffle. And he discusses the often-immediate benefits of myside confidence triumphing over the more long-term benefits of doubt and uncertainty. Anticipating Mercier and Sperber (2011) in some ways, Klayman (1995, 411) argues that “when other people lack good information about the accuracy of one’s judgments, they may take consistency as a sign of correctness”; he points to the many characteristics of myside argumentation (e.g., consistency, confidence) that can bootstrap social benefits to the individual and group. Dan Kahan’s discussions of his concept of identity protective cognition (Kahan 2013, 2015; see also Kahan, Jenkins-Smith, and Braman 2011; Kahan et al. 2017) likewise suggest other potential mechanisms for myside bias to confer evolutionary benefit by facilitating group cohesion. These possible social benefits must be taken into account when we assess the overall rationality of mysided thinking…

Stanovich, Keith E. (2021). The Bias That Divides Us. MIT Press. Kindle Edition.
That blew me away. I am a long-time fan of Mercier & Sperber's "Why Do Humans Reason?"
"Why do humans reason?" Uhhh... to WIN the argument at hand. If objective truth happens along the way, so much the better.
A "Pen is Mightier Than The Sword" riff.
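Stanovich's signal-detection point above—that a hasty, error-prone belief-fixation strategy can beat a slower, more accurate one when error costs are asymmetric—can be sketched with a toy expected-cost calculation. All numbers here are hypothetical, chosen only to illustrate the trade-off, not drawn from Stanovich:

```python
# Toy signal-detection cost model (illustrative only; all rates and
# costs are hypothetical). Two belief-fixation strategies face rare
# threats; misses cost far more than false alarms.

def expected_cost(p_threat, hit_rate, fa_rate, miss_cost, fa_cost):
    """Expected cost per event given a detector's hit and false-alarm rates."""
    p_safe = 1.0 - p_threat
    miss = p_threat * (1.0 - hit_rate) * miss_cost   # threat present, undetected
    false_alarm = p_safe * fa_rate * fa_cost         # no threat, alarm anyway
    return miss + false_alarm

P_THREAT = 0.05      # threats are rare
MISS_COST = 100.0    # missing a real threat is catastrophic
FA_COST = 1.0        # a needless flinch is cheap

# "Careful" detector: accurate but conservative.
careful = expected_cost(P_THREAT, hit_rate=0.80, fa_rate=0.02,
                        miss_cost=MISS_COST, fa_cost=FA_COST)
# "Jumpy" detector: hair-trigger, many false alarms, almost no misses.
jumpy = expected_cost(P_THREAT, hit_rate=0.99, fa_rate=0.30,
                      miss_cost=MISS_COST, fa_cost=FA_COST)

print(f"careful detector expected cost: {careful:.3f}")
print(f"jumpy detector expected cost:   {jumpy:.3f}")
# The "unintelligent" hair-trigger strategy carries the lower expected
# cost here—exactly the kind of trade-off under which evolution would
# not select out error-prone belief fixation.
```

With these numbers the careful detector's expected cost is about 1.019 per event versus 0.335 for the jumpy one: piling up cheap false alarms is worth it to avoid rare, expensive misses.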


I agreed to accept a comp hardcopy of this book to read, evaluate, and review. Just finished it; review pending. Though tangential to the topic of this post, it was a very enjoyable read nonetheless. The above quote sums it up succinctly and accurately. While the "pursuit of happiness" is a widely enduring yet often "aspirational" ideal, orienting toward consistently having more deliberate "fun" is easier to accomplish and helps in the achievement of rational, prosocial, stable "happiness."


Goes beyond “fun.”


You know those recent iPhone 14 Pro Max ads, “Hollywood in your pocket”? I guess we’re fixin’ to see. I just ordered the full Monty—a 14 Pro Max with a terabyte of capacity.

“Baltimore in my pocket.”
