In a word: "Gasphyxiation."
“How do we as citizens participate in a democracy when disinformation is so prevalent, and when so many seem so willing to believe the lies and ignore the reality that is right in front of us? When so many are willing to abandon all values to choose their side, every single time?”
People who feel more and more powerless have asked me a version of this question: “What can I do practically as a citizen, apart from vote and call my representative, to help preserve American democracy against Trump’s assault against our institutions and truth itself?”
I’ve struggled to offer an answer; so have those I’ve reached out to for counsel. I have yet to receive a menu of compelling options. But I am certain that what needs to inform the answers to these questions, and what needs to precede a comprehensive plan of action, is knowledge.
That means turning to experts on the history of disinformation, such as Thomas Rid, who can talk about how societies have addressed these questions in the past; political psychologists, such as Australia’s Karen Stenner, who can help develop the language for how to reach people awash in distortions and deceptions; and experts in psychology and neuroscience, such as Jay Van Bavel, whose work addresses issues of group identity, social motivation, cooperation, intergroup bias, and social media. It includes turning to cognitive scientists such as Steven Sloman and Philip Fernbach, who study how people reason, make decisions, and form attitudes and beliefs; philosophers of science such as Cailin O’Connor and James Owen Weatherall, who argue that social forces explain the persistence of false beliefs; Peter Pomerantsev, who specializes in overcoming the challenges of digital-era disinformation and polarization; and political scientists such as Brendan Nyhan, who works on subjects including misperception and conspiracy theories...
THIS WEEK'S CULMINATION OF TRUMP'S "LIBERATION DAY"
Read a review in the new issue of Science magazine. Yeah, of course I bought it.
For my PhD, I researched human and social engineering in the mid-twentieth century, spending years peering into leather-bound journals at articles that described rats running through mazes millions of times. These seemed like the least-sought-after things in the library. Making my way along the shelves, I looked into radical behaviorist experiments that were first dreamed up in laboratories before World War II and then streamed into the world (as I argued) after World War II. I titled the resulting dissertation “The Laboratory Imagination.”

Stay tuned. Lotta book here...
When I graduated in the spring of 2000, just as a new millennium began, I looked around myself. The dot-com bubble had just burst in the California Bay Area where I lived. The bubble’s “irrational exuberance,” as then Federal Reserve chairman Alan Greenspan put it around this time, had meant huge tips for my waiter boyfriend who worked at a celebrated place with end-grain wood tables where they made upscale corndogs for the employees in the afterhours. Suddenly, when the exuberance drained away, so did the outsized gratuities (not to mention the corndogs, which slipped in quality). I had no permanent job and was considering an academic post—my doctoral supervisor had arranged a one-year stint at a private college a few states away—but I felt unequipped for the job, which scared me. I decided instead to dedicate myself to a year of yoga by enrolling in a teacher training program. Studying esoteric practices and working as a (very) part-time legal secretary to support myself felt like a way to address the sterility of a world where social control procedures were proliferating—the way everything was so managed, from your Starbucks order to your health insurance (had I been lucky enough to have any).
All around me, I was seeing the experiments I had studied come to life. Some of the most sophisticated of my scientist subjects, the ones whose previously neglected work I had exhumed, called their goal “canalizing.” The point of canalizing was to channel people’s impulses into desired responses by crafting their outer surroundings and even their inner psychology. (The “transformation of duty into desire,” as one put it.) People must be taught to want what was best for them. This was human engineering through behavior modification, and it was the opposite of neglected.
Canalizing appeared to be everywhere in the 1990s. Uniqueness was just starting to be mass produced. The slogan for a cell phone company around that time captured the nascent vibe: “You’re a unique individual. This phone is just for you . . . and everyone like you.” Even the Army was recruiting based on scripted individualism rather than social duty in those days, debuting a new slogan, “An Army of One,” in 2001. “I am an Army of one,” declared a corporal running across the Mojave at dawn while carrying a thirty-five-pound pack in a widely run TV advertisement: “Even though there are 1,045,690 soldiers just like me, I am my own force.” As in the military, so in the academic world: Aspiring to be a bold free thinker around that time in Berkeley, California, led to quandaries. Aiming to be a “thinker of one” was full of pitfalls. You were unique but simultaneously part of 1,045,690 others who were “just like me.” There was an acceptable way to be “different.” Norms expanded to incorporate this quirkiness. And so was born, or reborn, the hipster.
I had spent almost half of my twenties studying the long-forgotten records of a huge experiment in inculcating unfreedom. At Yale’s Institute of Human Relations, a well-funded program of running rats through mazes, with the objective of establishing a universal science of behavior control, flourished during the 1930s. It featured many scientists announcing breakthroughs in what they described as “the maze that a human must learn” in order to live decently and capably in society. But not every experiment was a success. In a 1934 study by psychologist Neal Miller, one particular small Norwegian rat—an animal that had been “variable throughout,” according to its handlers—suddenly refused to run at all after facing myriad electric shocks in an alleyway. The animal’s recalcitrance sabotaged the data for the whole experiment, and the umbrage-filled note in the scientific publication—“Unfortunately . . . [t]his [refusal] spoiled the statistical reliability of the outcome”—piqued my interest. Even in a huge program like Yale’s, one that sought to apply its animal discoveries to humans, a lone two-ounce creature could derail an experiment. I wondered: How did the past, with its seemingly unrealized stakes, contend with the bigger stakes of the present?
To bridge the gap between past and present, I started to think about brainwashing, a topic I had not covered directly in my dissertation but one that was implied there. Brainwashing was a well-worn term, and certainly everyone could recognize it—yet I found I didn’t really know what it meant. It was over-the-top, scandalous, frightening, possibly silly. But it also struck me as possibly the most successful method in history to mold a human being into some new form. It was not just a matter of compelling someone to do something they didn’t want to do or breaking a person down. After all, brute force had a long history, a lot longer than brainwashing. Brute force did not really change people’s minds and sometimes it actually inspired resistance. Sheer physical punishment (“getting medieval on your ass” was the way the character Marsellus Wallace put it in Pulp Fiction) was not reliably successful. The Central Intelligence Agency’s MK-ULTRA program would prove this. Even an average man of the late seventeenth century, the priest Urbain Grandier of Loudun, France, managed to resist making a forced confession at the hands of his tormentors as he was burned alive.*
What concerned me were modern methods, in which physical force was involved but was not the primary driver of change. As Czesław Miłosz observed in his 1953 classic, The Captive Mind, “We are concerned here with questions more significant than mere force.” The scarier thing was people gladly volunteering for terrible outcomes—begging for the gallows or sacrificing their previous beliefs and blindly embracing new ones. Observers from Aldous Huxley to a notorious Communist interrogator agreed that whereas it was possible to resist torture, new methods of the mid-twentieth century were close to 100 percent effective in attaining compliance. Just about no one could withstand them. “If God Himself was sitting in that chair we would make him say what we wanted him to say,” claimed the interrogator. No one was exempt anymore. Said Huxley in Brave New World Revisited: “Government through terror works on the whole less well than government through the non-violent manipulation of the environment and of the thoughts and feelings of individual men, women and children.” Breaking the will, Brave New World style, was possible using behavioral technologies of modern mind control. Not only pain but a targeted mix of pleasure and pain would ensure adherence. Brainwashing is neither pure persuasion nor sheer coercion but both: coercive persuasion.
What was this method? Was it possible that, to have a rate of compliance nearing total, the subject must—on some level, in some way, even subtly, even unwittingly—agree? And what did this all have to do with me and my concerns with being a unique and free individual—like everyone else—at the dawn of the internet age? This was not just a random cell phone campaign or a bohemian style of conformity but hard evidence of an extreme process.
The natural place to begin was during the early Cold War. Any proper discussion of brainwashing, I thought, must commence with its entrance into the English language. And even this was hazy…
Lemov, Rebecca. The Instability of Truth: Brainwashing, Mind Control, and Hyper-Persuasion (pp. 11–15). (Function). Kindle Edition.
PREFACE
I WAS WORKING IN the library when I bumped into one of my former patients. I hadn’t seen her for maybe five years. We sized each other up. I was a retired professor of psychiatry lugging a pile of books. She was a bright young scholar carrying lots of baggage from her past. We chatted for a bit, surrounded by shelves of books in the stacks, and she asked what I was working on. I told her I had gotten interested in brainwashing.
“Umm,” she said. “Isn’t that kind of a stale, musty topic—Communists, bad science, and all that stuff?” As I said, she was bright and inclined to come right to the point—tact had never been her strong suit. Why was I spending so much time on this arcane topic? Granted, I am eccentric; but what made me think anybody else would be interested in this subject?
Then I came home to watch the evening news, which featured its usual dose of suicide bombers and mass shootings, followed by political leaders making preposterous statements (“Vaccination causes autism,” “Global warming is a myth,” “The COVID-19 virus is not a problem”). It is bad enough that leaders can propound such nonsense; the bigger problem is that they persuade so many other people to endorse their misunderstandings of the world. I thought about my patient again. How did she make sense of a world where people could be persuaded to believe rubbish and follow it up with self-destructive violence?
As a psychiatrist, I should be one of the last people to believe the world operates rationally. I know better. Leaders have all too often been pied pipers, but something new emerged in the twentieth century. I still don’t know what to call this phenomenon. Brainwashing, coercive persuasion, thought control, dark persuasion—all these terms refer to the fact that certain techniques render individuals shockingly vulnerable to indoctrination…
Dimsdale, Joel (2021). Dark Persuasion: A History of Brainwashing from Pavlov to Social Media. (Function). Kindle Edition.
Hmmm... From a prior post: #OnDisinformation.