
Tuesday, January 31, 2023

"You could fill up a book with what I know."

"But with all I don't know, you could fill up a library."

I found the blog post title quote in this Atlantic article by Thomas Chatterton Williams.
When I was in my 20s and writing my first book—I know, I really fucked up there—I came across a quote I can no longer find the source of that said, essentially, “You could fill a book with all I know, but with all I don’t know, you could fill a library.” It’s a helpful visualization, perhaps the most basic and pragmatic justification for deep reading. And though correlation is not causation, I submit that we’d save ourselves an enormous amount of trouble in the future if we’d agree to a simple litmus test: Immediately disregard anyone in the business of selling a vision who proudly proclaims they hate reading...
Tweeted him to tell him I'd be "stealin' it."
We have never before had access to so many perspectives, ideas, and information. Much of it is fleetingly interesting but ultimately inconsequential—not to be confused with expertise, let alone wisdom. This much is widely understood and discussed. The ease with which we can know things and communicate them to one another, as well as launder success in one realm into pseudo-authority in countless others, has combined with a traditional American tendency toward anti-intellectualism and celebrity worship. Toss in a decades-long decline in the humanities, and we get our superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought.

…In an ill-conceived profile from September, published on the Sequoia Capital website, the 30-year-old SBF rails against literature of any kind, lecturing a journalist on why he would “never” read a book. “I’m very skeptical of books,” he expands. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”…

It is one thing in practice not to read books, or not to read them as much as one might wish. But it is something else entirely to despise the act in principle. Identifying as someone who categorically rejects books suggests a much larger deficiency of character …. receiving all of your information from the SBF ideal of six-paragraph blog posts, or from the movies and random conversations that Ye prefers, is as foolish as identifying as someone who chooses to eat only fast food.

Many books should not have been published, and writing one is an excruciating process full of failure. But when a book succeeds, even partially, it represents a level of concentration and refinement—a mastery of subject and style strengthened through patience and clarified in revision—that cannot be equaled. Writing a book is an extraordinarily disproportionate act: What can be consumed in a matter of hours takes years to bring to fruition. That is its virtue. And the rare patience a book still demands of a reader—those precious slow hours of deep focus—is also a virtue. One might reasonably ask just where, after all, these men have been in such a rush to get to? One might reasonably joke that the answer is either jail or obscurity…
Great article.
I continue to read for at least 30 hours a week, averaging 2-3 books a week plus all of my periodicals. There's just too much to learn and unlearn. 'Nuther fav quote of mine: "The best place to hide a $100 bill from Donald Trump is inside a book."
Just finished this one.
Stay tuned. Not yet sure about this one. It piqued my interest in light of my 1998 Master’s in Applied Ethics ("Ethics & Policy Studies") and my ongoing Jones for so-called "Deliberation Science." 
Read this New Yorker article the other day. Led me to this book. Had to get it. Delightful thus far.
Early on, some "Taylorism 2.0" observations (I've taken my shots at Taylor across the years. I'm one of those humanistic progressive QI guys):
...Sometimes, digital enforcement happens through attempts to prevent violation, making rules more difficult to break—using code to make it more onerous (or even impossible) to deviate from an imposed rule. For example, digital rights management technology makes it (nearly) impossible to violate copyright law. If these technologies work as “perfectly” as intended, rule violation is completely impaired, and violation becomes practically impossible (or at least much more difficult). But even more common than tools of prevention are tools of detection—technologies that function not by making rule-violating behavior more difficult to execute, but by creating a comprehensive account of our behaviors. These are surveillance technologies. For example, body-worn cameras don’t make it impossible for a police officer to use unauthorized force against a civilian, but are intended to make the officer more accountable should they do so. These technologies may work by deterring sanctioned behaviors—knowing that one is being observed can incentivize rule-following—or because they enable enforcers to more swiftly detect and punish rule-breaking.

Perhaps nowhere do we see this trend more clearly than in the workplace, where surveillance over workers’ behaviors has become a favored method for compelling compliance with the aims of management. As we’ll see, this practice has deep roots—but contemporary workplace surveillance has some new features, too.

Work and the Future of Work
We often anticipate the “future of work” in either dreamy or dystopian terms. The phrase has been widely adopted by technologists and commentators, either to describe a paradisical ideal in which people have much greater autonomy and flexibility to do work in ways that suit them while affording them ample time for leisure; or, as a dark alternative, as a future in which workers have ever-diminishing social and economic power and in which their every move and thought is overseen, predicted, and optimized by management, human or algorithmic. Both visions, though, are united by the assumption that the future of work (whatever it looks like) will happen, well, in the future—that is to say, this is a vision of a time that is not now, and that is somehow different than now, or at least different enough that it deserves its own label.

It’s rather curious that we tend to talk in such future-oriented prognostications about what technological change will portend for work and the workplace. In other domains, the way we talk about technology tends to be more focused on what is occurring now or in the very near term; but when it comes to work, we maintain some temporal distance, at least in our discourse, from these changes. This is odd because the “future of work” is, of course, not some distant or discrete mode of social organization so unlike the one we have today. The management practices of tomorrow are, in many ways, not particularly different from the management practices of the past. They’re built on the same foundations—motivating efficiency, minimizing loss, optimizing processes, improving productivity. And one of the most common strategies for achieving these goals, then and now, is increased oversight over the activities of workers.

So what's new about today’s workplace surveillance? Is this not just more of the same, driven by the same organizational goals that have always motivated managerial oversight—even if the specific technologies that are used to do so have changed form in one way or another? Some workplace monitoring is old wine in a new bottle, a contemporary instantiation of the manager with a clipboard looming above the factory floor. This is not to say, of course, that these practices don’t deserve scrutiny or critique—but we should be precise about what, if anything, is new here.

In fact, there are some subtle but important dynamics that distinguish contemporary workplace surveillance from what’s come before, and that will become important in telling the truckers’ story. First, contemporary technologies facilitate surveillance in new kinds of workplaces. Geographically distributed and mobile workers, for example, have historically maintained more independence from oversight than workers centralized in nonmobile workplaces, like factories, call centers, and office buildings—but location tracking, sensor technology, and wireless networking have changed that. Porous boundaries between home and work also facilitate surveillance in new places. For example, the growth of work-from-home arrangements during the Covid-19 pandemic has led to greater use of tracking software to monitor workers’ keystrokes, locations, and web traffic—as well as video capture of the kitchen tables and living rooms in which their work now takes place.

New kinds of data also come to the fore. As sensor technologies become cheaper and easier to deploy, and workplace surveillance capability is more frequently embedded in software by default, employers are well positioned to capture more and more fine-grained data about workers’ movements and activities. Wearable technologies, like those used in Amazon’s warehouses, monitor and evaluate workers’ speed with much more precision than was previously possible—including the number and length of their bathroom breaks. Employers increasingly monitor and analyze datapoints like workers’ social media posts, phone calls, and attendance at meetings; Microsoft faced pushback in 2020 when it built “productivity scoring” into its widely used Office 365 product, which gave managers access to “73 pieces of granular data about worker behavior” like email and chat frequency. And as we’ll discuss, biometric data is also becoming more commonly collected in the workplace—from authentication mechanisms like fingerprint and retinal scans to behavioral data about workers’ attention and fatigue.

These new data streams fuel new kinds of analysis that impact how workers are managed. In some contexts, managerial decisions are implemented through opaque algorithmic systems that can create acute information asymmetries between workers and firms—like Uber’s use of algorithms to apportion rides and determine rates without making those rules transparent to its drivers. Other analyses are predictive, designed to forecast which workers are likely to be most productive, how many workers to staff at a given time to meet demand, or which worker is likely to make a sale to a particular customer.

Finally, contemporary workplace surveillance can blur boundaries between the workplace and other spheres of life, creating new kinds of entanglements across previously disparate domains. Surveillance of work-from-home environments can facilitate data collection about family, friends, and living situations. Managers often keep tabs on workers’ online activities on social media platforms. Workplace wellness programs can facilitate employers’ collection of data about worker health, stoking concerns about discrimination. And “bring-your-own-device” policies, in which an employer’s software is installed on a worker’s own personal phone or computer, can further muddle distinctions between home and work and create additional data privacy and security concerns.

Levy, Karen. Data Driven (pp. 6-9). Princeton University Press. Kindle Edition.
Scary smart. Law Degree and a Doctorate in Sociology from Princeton. What a Sheet (pdf).
Just guessing, based on her CV, she may be about 40 (no personal bio info). But, I gotta say, were I a bartender and she came in, I'd be asking for ID.

Stay tuned...

Thursday, January 26, 2023

"RUSSIA, RUSSIA, RUSSIA..." Interesting Twitter thread

  1. The person who led the relevant section, Charles McGonigal, has just been charged with taking money from the Russian oligarch Oleg Deripaska. Follow this thread to see just how this connects to the victory of Trump, the Russian war in Ukraine, and U.S. national security. 1/20

  2. The reason I was thinking about Trump & Putin in 2016 was a pattern. Russia had sought to control Ukraine, using social media, money, & a pliable head of state. Russia backed Trump the way that it had backed Ukrainian president Viktor Yanukovych, in the hopes of soft control 2/20

  3. Trump & Yanukovych were similar figures: interested in money, & in power to make or shield money. And therefore vulnerable partners for Putin. They also shared a political advisor: Paul Manafort. He worked for Yanukovych from 2005-2015, taking over Trump's campaign in 2016. 3/20

  4. You might remember Manafort's ties to Russia from 2016. He (and Jared Kushner, and Donald Trump, Jr.) met with Russians in June 2016 in Trump Tower as part of, as the broker of the meeting called it, "the Russian government's support for Trump" (#RoadToUnfreedom, p. 237). 4/20

  5.  Manafort had to resign as Trump's campaign manager in August 2016 when news broke that he had received $12.7 million in cash from Yanukovych. But these details are just minor elements of Manafort's dependence on Russia. (#RoadToUnfreedom, p. 235). 5/20

  6. Manafort worked for Deripaska, the same Russian oligarch to whom McGonigal is linked, between 2006 and 2009. Manafort's assignment was to soften up the U.S for Russian influence. He promised "a model that can greatly benefit the Putin government." (#RoadToUnfreedom, p. 234). 6/20

  7. While Manafort worked for Trump in 2016, though, Manafort's dependence on Russia was deeper. He owed Deripaska money, not a position one would want to be in. Manafort offered Deripaska "private briefings" on the campaign. He was hoping "to get whole." (#RoadToUnfreedom, 234) 7/20

  8. Reconsider how the FBI treated the Trump-Putin connection in 2016. Trump and other Republicans screamed that the FBI had overreached. In retrospect, it seems the exact opposite took place. The issue of Russian influence was framed in a way convenient for Russia and Trump. 8/20

  9. The FBI investigation, Crossfire Hurricane, focused on the narrow issue of personal connections between the Trump campaign and Russians. It missed Russia's cyber attacks and the social media campaign, which, according to Kathleen Hall Jamieson, won the election for Trump. 9/20

  10. Once the issue of Russian soft control was framed narrowly as personal contact, Obama missed the big picture, and Trump had an easy defense. Trump knew that Russia was working for him, but the standard of guilt was placed so high that he could defend himself. 10/20

  11. It is entirely inconceivable that McGonigal was unaware of Russia's 2016 cyber influence campaign on behalf of Trump. Even I was aware of it, and I had no expertise. It became one of the subjects of my book #RoadtoUnfreedom. 11/20

  12. The FBI did investigate cyber later, and came to some correct conclusions. But this was after the election, and missed the Russian influence operations entirely. That was an obvious counterintelligence issue. Why did the FBI take so long, and miss the point? 12/20

  13. I had no personal connection to this, but will just repeat what informed people said at the time: this sort of thing was supposed to go through the FBI counter-intelligence section in New York, where tips went to die. That is where McGonigal was in charge. 13/20

  14. The cyber element is what McGonigal should have been making everyone aware of in 2016. In 2016, McGonigal was chief of the FBI's Cyber-Counterintelligence Coordination Section. That October, he was put in charge of the Counterintelligence Division of the FBI's NY office. 14/20

  15. We need to understand why the FBI failed in 2016 to address the essence of an ongoing Russian influence operation. The character of that operation suggests that it would have been the responsibility of an FBI section whose head is now accused of taking Russian money. 15/20

  16. Right after the McGonigal story broke, Kevin McCarthy ejected Adam Schiff from the House intelligence committee. Schiff is expert on Russian influence operations. It exhibits carelessness about national security to exclude him. It is downright suspicious to exclude him now. 16/20

  17. Back in June 2016, Kevin McCarthy expressed his suspicion that Donald Trump was under Putin's influence. He and other Republican members concluded that the risk of an embarrassment to their party was more important than American security. #RoadToUnfreedom, p. 255. 17/20

  18. The Russian influence operation to get Trump elected was real. It serves no one to pretend otherwise. We are still learning about it. Denying that it happened makes the United States vulnerable to ongoing Russian operations. 18/20

  19. I remember a certain frivolity from 2016. Trump was a curiosity. Russia was irrelevant. Nothing to take seriously. Then Trump was elected, blocked weapon sales to Ukraine, and tried to stage a coup. Now Ukrainians are dying every day in the defining conflict of our time. 19/20

  20. The McGonigal question goes even beyond these issues. He had authority in the most sensitive possible investigations within U.S. intelligence. Sorting this out will require a concern for the United States that goes beyond party loyalty. 20/20.
I've read two of his books.

Serious scholar. 
Below, I just downloaded another of his books and have begun reading.
The year 2010 was a time of reflection. A financial crisis two years before had eliminated much of the world’s wealth, and a halting recovery was favoring the rich. An African American was president of the United States. The great adventure of Europe in the 2000s, the enlargement of the European Union to the east, seemed complete. A decade into the twenty-first century, two decades away from the end of communism in Europe, seven decades after the beginning of the Second World War, 2010 seemed like a year for reckonings.

I was working on one that year with a historian in his time of dying. I admired Tony Judt most for his history of Europe, Postwar, published in 2005. It recounted the improbable success of the European Union in assembling imperial fragments into the world’s largest economy and most important zone of democracy. The book had concluded with a meditation on the memory of the Holocaust of the Jews of Europe. In the twenty-first century, he suggested, procedures and money would not be enough: political decency would require a history of horror.

In 2008, Tony had fallen ill with amyotrophic lateral sclerosis (ALS), a degenerative neurological disorder. He was certain to die, trapped in a body that would not serve his mind. After Tony lost the use of his hands, we began recording conversations on themes from the twentieth century. We were both worried, as we spoke in 2009, by the American assumptions that capitalism was unalterable and democracy inevitable. Tony had written of the irresponsible intellectuals who aided totalitarianism in the twentieth century. He was now concerned about a new irresponsibility in the twenty-first: a total rejection of ideas that flattened discussion, disabled policy, and normalized inequality…

Snyder, Timothy. The Road to Unfreedom (p. 2). Crown. Kindle Edition.
Read his Substack piece.
We are on the edge of a spy scandal with major implications for how we understand the Trump administration, our national security, and ourselves.

On 23 January, we learned that a former FBI special agent, Charles McGonigal, was arrested on charges involving taking money to serve foreign interests.  One accusation is that in 2017 he took $225,000 from a foreign actor while in charge of counterintelligence at the FBI's New York office.  Another charge is that McGonigal took money from Oleg Deripaska, a sanctioned Russian oligarch, after McGonigal’s 2018 retirement from the FBI.  Deripaska, a hugely wealthy metals tycoon close to the Kremlin, "Putin's favorite industrialist," was a figure in a Russian influence operation that McGonigal had investigated in 2016.  Deripaska has been under American sanctions since 2018.  Deripaska is also the former employer, and the creditor, of Trump's 2016 campaign manager, Paul Manafort.

The reporting on this so far seems to miss the larger implications. One of them is that Trump’s historical position looks far cloudier. In 2016, Trump's campaign manager (Manafort) was a former employee of a Russian oligarch (Deripaska), and owed money to that same Russian oligarch.  And the FBI special agent (McGonigal) who was charged with investigating the Trump campaign's Russian connections then went to work (according to the indictment) for that very same Russian oligarch (Deripaska).  This is obviously very bad for Trump personally.  But it is also very bad for FBI New York, for the FBI generally, and for the United States of America…
This bears ongoing close scrutiny.
…Before he moved on to other positions at FBI headquarters, McGonigal's career had begun in New York, where he worked closely with James Kallstrom — the right-wing ideologue who headed the New York office for decades. A bosom buddy of Giuliani and Trump, Kallstrom is suspected of leading the pressure campaign that induced Comey to reopen the Clinton investigation. The explicit threat of leaks by agents and former agents like Kallstrom, who reportedly hated Clinton, spurred Comey's disastrous decision and his public announcement, which again violated department policy against election interference.

Damning as those facts may seem, they only get us so far. There is much more to learn before we can understand the full story of 2016. The scrupulously nonpartisan presidential historian Michael Beschloss asked this week whether McGonigal's indictment will lead us closer to the truth. Will the prosecution of McGonigal reveal the details of his relationship with Deripaska, whom he had once investigated before becoming his corrupt stooge? Will Comey provide a full and honest accounting of what happened in the New York FBI office before the election? Will the New York Times examine — and disclose — how that misleading story about Trump and Russia appeared on its front page? Who briefed the Times for that bogus story?

With Trump seeking to return to the White House, the answers to those questions do not merely reckon with the past but are critical to democracy's future. The malign conspirators who first brought that would-be tyrant to power, both foreign and domestic, are still at large


The first video only begins as an officer pulls up behind Nichols’s car and he and other officers demand, with weapons drawn, that Nichols get out of the car. “Damn, I didn’t do anything,” Nichols says as they drag him from the car and pepper-spray him. But in the scrum, the officers also spray themselves, and Nichols somehow breaks free and runs away on foot. After giving chase, two officers mill around, pouring water in their eyes and catching their breath. The radio barks that an officer has found Nichols some distance away. “I hope they stomp his ass,” one cop says.

He got his wish. The next three videos, two from body cams and one from SkyCop, don’t capture the moment when officers catch up with Nichols, but the surveillance camera shows several officers beating him on a suburban corner. He doesn’t appear to be resisting them, or even physically able to do so; he looks like a rag doll. They shout for him to do this and that, but Nichols, being yanked in every direction by the officers, could hardly have complied if he wanted to. They stomp and kick him as he repeatedly calls for his mother…
Will this stuff ever end?

Wednesday, January 25, 2023

"USA, USA, USA..."

Madness. First 25 days of 2023, at least 39 mass shootings, at least 69 dead, many additional wounded. It's obscenely crazy.

Friday, January 20, 2023

The State of The Stupid.

A bit of Photoshop irascibility.
The rampant, reason-resistant incoherence is starting to get to me as I slink through Dry January and ponder the latest "federal debt default" canard.
The debt ceiling / federal shutdown / default threat dance is of course a recurrent, transiently annoying political brinksmanship shuffle. This time, however, the congressional GOP is overpopulated with nihilist firebrands who might just Go There. The upshot remains largely unknowable at this point—though it's unlikely to be good. From NPR’s Ron Elving: 

How could this year's fight over the debt ceiling be different?

For now, the White House and the leaders of both parties in Congress continue to vow they will avoid default. But Republican leaders are saying they won't raise the limit until the White House and Democrats agree to negotiate deep cuts in the federal budget and substantial changes to the spending process. How deep? How substantial? Those would be among the questions to be answered.

But this is not a drill, and it is not just another repetition of a periodic exercise. For a variety of reasons, 2023 could be the year the dealmakers fail and we face the consequences long feared. There are pivotal figures within the Congress who seem to be working toward just this outcome as a policy goal…



Nascent "Crypto Ice Age" in the wake of FTX et al, my ASS!  You just watch—these DigiFi hucksters will be exuberantly pitching scams aromatic of my foregoing deliberately absurd Photoshop spoof in no time.

Bet on it.
BTW, where's Gabe?
"Debt Crisis" Solution? 

Why the legal scholar Rohan Grey thinks the U.S. Mint can defuse the debt-ceiling standoff

By Annie Lowrey

Later this year, for no good reason at all, the United States might enter a chaotic period of financial default. Once again, the country has hit its statutory debt limit, because Congress continues to spend more than the government receives in tax revenue. The Treasury has no more legal authority to issue new debt and is currently using a series of “extraordinary measures” to keep the government’s bills paid. Those extraordinary measures will last for only six months or so. At that point, either Congress will raise the debt ceiling or the full faith and credit of the country will be at risk.

Rohan Grey is a law professor at Willamette University, in Oregon, and a leading promoter of an arcane idea that could save the country from all that drama: The Biden administration could exercise its unilateral legal authority over U.S. currency to mint a trillion-dollar platinum coin and use it to pay the government’s bills…

The power to create money in the Constitution is literally the power to coin money…

Good article. Read all of it, including the linked citations. I smells me some tangential Modern Monetary Theory (MMT) riffs here...
@HODLcoin™ baby! LOL.

Saturday, January 14, 2023

"Deliberation Science" readings update

There's so much yet to learn. And, even more yet to "unlearn." Some current readings.

First, a Christmas gift book from my sister.
The past several years, we’ve felt as if we’re all stuck in some weird Twilight Zone nightmare where we are constantly, relentlessly gaslit. Up is down, left is right, right is wrong. It feels as if the values of our society have changed almost overnight. We feel disoriented, frustrated, disaffected, and distrustful of each other. We ask ourselves whether we’ve gone crazy, or the world has, or both. No wonder people in the United States are waging a kind of war on trust, building elaborate castles of suspicion that imperil our personal happiness and national prosperity.

All around the world, democracy is now under strain due in part to social problems that cannot be solved through legislation or technology. In a very real sense, collective illusions do the most damage in free societies, precisely because they depend on shared reality, common values, and the willingness to engage with different viewpoints in order to function, let alone flourish. That is why I see collective illusions as an existential threat.

 The bad news is that we are all responsible for what is happening. And yet that is also the good news, because it means we have the power, individually and together, to solve the problem. The best news of all is that, as powerful as collective illusions are, they are also fragile because they are rooted in lies and can be dismantled through individual actions. With the right tools and some wise guidance, we can dismantle them…

…While our social nature is part of our biology, our reaction to our social instincts is within our control. When we’re armed with the right knowledge and skills, we don’t have to choose between being a maverick or being a lemming. This book aims to give you the tools you need to truly understand why and how we conform, how conformity leads directly to collective illusions, and how you can learn to control social influence so that it doesn’t control you…

…we often conform because we’re afraid of being embarrassed. Our stress levels rise at the thought of being mocked or viewed as incompetent, and when that happens, the fear-based part of the brain takes over. Confused and unsure of ourselves, we surrender to the crowd because doing so relieves our stress. Caving to the majority opinion also diffuses our personal responsibility for our decisions, making it easier to bear mistakes. When you find yourself making a decision on your own, it can feel isolating, and the personal responsibility can be intimidating. Indeed, whether our actions are right or wrong, they always feel better if we take them together with others.

Rose, Todd (2022). Collective Illusions. Hachette Books. Kindle Edition.
Persuasive case, eloquently made.
Think about Robert Cialdini's venerable "Influence: The Psychology of Persuasion" and how his "levers" model fits with Todd's thesis:
Reciprocity;
Commitment and Consistency;
Social Proof;
Liking;
Authority;
Scarcity.
Dr. Cialdini's work is widely known in the advertising and marketing domains for its principles and techniques of being "influential." Dr. Rose's work reveals an unhappy flip side of that: our vulnerability to malign influence, given our social anxieties about "fitting in" to sociocultural norms (which include the political space).

How about some Justin Gregg?

I would also recommend Todd Kashdan's book apropos of this topic:
Also, more on principles of influence by our esteemed Zoe B. Chance in a prior post.

I recently subscribed to some useful Substack readings. This recent Steven Pinker offering stood out:
Reason To Believe
How and why irrationality takes hold, and what to do about it.

When I tell people that I teach and write about human rationality, they aren’t curious about the canons of reason like logic and probability, nor the classic findings from the psychology lab on how people flout them. They want to know why humanity appears to be losing its mind…

…Can anything be done? Explicit instruction in “critical thinking” is a common suggestion. These curricula try to impart an awareness of fallacies such as arguments from anecdote and authority, and of cognitive biases such as motivated reasoning. They try to inculcate habits of active open-mindedness, namely to seek disconfirmatory as well as confirmatory evidence and to change one’s mind as the evidence changes.

But jaded teachers know that lessons tend to be forgotten as soon as the ink is dry on the exam. It’s hard, but vital, to embed active open-mindedness in our norms of intellectual exchange wherever it takes place. It should be conventional wisdom that humans are fallible and misconceptions ubiquitous in human history, and that the only route to knowledge is to broach and then evaluate hypotheses. Arguing ad hominem or by anecdote should be as mortifying as arguing from horoscopes or animal entrails; repressing honest opinion should be seen as risible as the doctrines of biblical inerrancy or Papal infallibility.

But of course we can no more engineer norms of discourse than we can dictate styles of hairstyling or tattooing. The norms of rationality must be implemented as the ground rules of institutions. It’s such institutions that resolve the paradox of how humanity has mustered feats of great rationality even though every human is vulnerable to fallacies. Though each of us is blind to the flaws in our own thinking, we tend to be better at spotting the flaws in other people’s thinking, and that is a talent that institutions can put to use. An arena in which one person broaches a hypothesis and others can evaluate it makes us more rational collectively than any of us is individually.

Examples of these rationality-promoting institutions include science, with its demands for empirical testing and peer review; democratic governance, with its checks and balances and freedom of speech and the press; journalism, with its demands for editing and fact-checking; and the judiciary, with its adversarial proceedings. Wikipedia, surprisingly reliable despite its decentralization, achieves its accuracy through a community of editors that correct each other's work, all of them committed to principles of objectivity, neutrality, and sourcing. (The same cannot be said for web platforms that are driven by instant sharing and liking.)

If we are to have any hope of advancing rational beliefs against the riptide of myside bias, primitive intuitions, and mythological thinking, we must safeguard the credibility of these institutions. Experts such as public health officials should be prepared to show their work rather than deliver pronouncements ex cathedra. Fallibility should be acknowledged: we all start out ignorant about everything, and whenever changing evidence calls for changing advice, that should be touted as a readiness to learn rather than stifled as a concession of weakness.

Perhaps most important, the gratuitous politicization of our truth-seeking institutions should be halted, since it stokes the cognitively crippling myside bias. Universities, scientific societies, scholarly journals, and public-interest nonprofits have increasingly been branding themselves with woke boilerplate and left-wing shibboleths. The institutions should not be surprised when they are then blown off by the center and right which make up the majority of the population. The results have been disastrous, including resistance to climate action and vaccination.

The defense of freedom of speech and thought must not be allowed to suffer that fate. Its champions should have at their fingertips the historical examples in which free speech has been indispensable to progressive causes such as the abolition of slavery, women’s suffrage, civil rights, opposition to the war in Vietnam, and gay rights. They should go after the censors on the right as vigorously as those on the left, and should not give a pass to conservative intellectuals or firebrands who are no friends to free speech, but are merely enemies of their enemies.

The creed of universal truth-seeking is not the natural human way of believing. Submitting all of one’s beliefs to the trials of reason and evidence is cognitively unnatural. The norms and institutions that support this radical creed are constantly undermined by our backsliding into tribalism and magical thinking, and must constantly be cherished and secured…
Hmmm… “critical thinking?” Yeah. “Necessary but insufficient?” One wonders.

Dr. Pinker cites Professor Keith Stanovich:

Myside bias is displayed by people holding all sorts of belief systems, values, and convictions. It is not limited to those with a particular worldview. Any belief that is held with conviction—any distal belief, to use Robert Abelson’s (1986) term—can be the driving force behind myside thinking. In short, as an information processing tendency, myside cognition is ubiquitous.

Some might argue that something so ubiquitous and universal must be grounded in the evolution of our cognitive systems (either as an adaptation or as a by-product). Others, however, might argue that myside bias could not be grounded in evolution because evolutionary mechanisms would be truth seeking, and myside bias is not. In fact, evolution does not guarantee perfect rationality in the maximizing sense used throughout cognitive science—whether as maximizing true beliefs (epistemic rationality) or as maximizing subjective expected utility (instrumental rationality). Although organisms have evolved to increase their reproductive fitness, increases in fitness do not always entail increases in epistemic or instrumental rationality. Beliefs need not always track the world with maximum accuracy in order for fitness to increase.

Evolution might fail to select out epistemic mechanisms of high accuracy when they are costly in terms of resources, such as memory, energy, or attention. Evolution operates on the same cost-benefit logic that signal detection theory does. Some of our perceptual processes and mechanisms of belief fixation are deeply unintelligent in that they yield many false alarms, but if the lack of intelligence confers other advantages such as extreme speed of processing and the noninterruption of other cognitive activities, the belief fixation errors might be worth their cost (Fodor 1983; Friedrich 1993; Haselton and Buss 2000; Haselton, Nettle, and Murray 2016). Likewise, since myside bias might tend to increase errors of a certain type but reduce errors of another type, there would be nothing strange about such a bias from an evolutionary point of view (Haselton, Nettle, and Murray 2016; Johnson and Fowler 2011; Kurzban and Aktipis 2007; McKay and Dennett 2009; Stanovich 2004). What might be the nature of such a trade-off?

For many years in cognitive science, there has been a growing tendency to see the roots of reasoning in the social world of early humans rather than in their need to understand the natural world (Dunbar 1998, 2016). Indeed, Stephen Levinson (1995) is just one of many theorists who speculate that evolutionary pressures were focused more on negotiating cooperative mutual intersubjectivity than on understanding the natural world. The view that some of our reasoning tendencies are grounded in the evolution of communication dates back at least to the work of Nicholas Humphrey (1976), and there are many variants of this view. For example, Robert Nozick (1993) has argued that in prehistory, when mechanisms for revealing what is true about the world were few, a crude route to reliable knowledge might have been to demand reasons for assertions by conspecifics (see also Dennett 1996, 126–127). Kim Sterelny (2001) developed similar ideas in arguing that social intelligence was the basis of our early ability to simulate (see also Gibbard 1990; Mithen 1996, 2000; Nichols and Stich 2003). All of these views are, despite subtle differences between them, sketching the genetic-cultural coevolutionary history (Richerson and Boyd 2005) of the negotiation of argument with conspecifics.

The most influential synthesis of these views—and the most relevant to myside bias—was achieved by Hugo Mercier and Dan Sperber (2011, 2017), whose subtle, nuanced theory of reasoning is grounded in the logic of the evolution of communication. Mercier and Sperber’s theory posits that reasoning evolved for the social function of persuading others through arguments. If persuasion by argument is the goal, then reasoning will be characterized by myside bias. We humans are programmed to try to convince others with arguments, not to use arguments to ferret out the truth. Like Levinson (1995) and the other theorists mentioned earlier, Mercier and Sperber (2011, 2017) see our reasoning abilities as arising from our need not to solve problems in the natural world but to persuade others in the social world. As Daniel Dennett (2017, 220) puts it: “Our skills were honed for taking sides, persuading others in debate, not necessarily getting things right.”

In several steps, Mercier and Sperber’s (2011, 2017) theory takes us from the evolution of reasoning to our ubiquitous tendency, as humans, to reason with myside bias. We must have a way of exercising what Mercier and Sperber call “epistemic vigilance.” Although we could adopt the inefficient strategy of differentiating trustworthy people from untrustworthy people by simply memorizing the history of our interactions with them, such a strategy would not work with new individuals. Mercier and Sperber (2011, 2017) point out that argumentation helps us to evaluate the truth of communications based simply on content rather than on prior knowledge about particular persons. Likewise, we learn to produce coherent and convincing arguments when we wish to transmit information to others with whom we have not established a trusting relationship. These skills of producing and evaluating arguments allow members of a society to exchange information with other members without the need to establish a prior relationship of trust with them.

If, however, the origins of our reasoning abilities lie in their having as a prime function the persuasion of others through argumentation, then our reasoning abilities in all domains will be strongly colored by persuasive argumentation. If the function of producing an argument is to convince another person, it is unlikely that the arguments produced will be an unbiased selection from both sides of the issue at hand. Such arguments would be unconvincing. Instead, we can be expected to have an overwhelming tendency to produce arguments that support our own opinion (see Mercier 2016).

Mercier and Sperber (2011) argue that this myside bias carries over into situations where we are reasoning on our own about one of our opinions and that, in such situations, we are likely to anticipate a dialogue with others (see Kuhn 2019). The anticipation of a future dialogue will also cause us to think to ourselves in a mysided manner. Mercier and Sperber’s (2016, 2017) theory makes differential predictions about our ability to evaluate the arguments of others. Basically, it predicts that, though we will display a myside bias in evaluating arguments if the issue in question concerns a distal belief, we will display much less of a myside bias when the issue in question is a testable belief.

In short, Mercier and Sperber (2011, 2017) provide a model of how myside bias is inherent in the evolutionary foundations of reasoning. From their evolutionary story of origins, it is not hard to imagine how the gene-culture coevolutionary history (see Richerson and Boyd 2005) of argumentation abilities would reinforce the myside properties of our cognition (a subject of much speculation I can only allude to here). For example, in an early discussion of myside costs and benefits, Joshua Klayman (1995) suggests some of the gene-culture coevolutionary trade-offs that may have been involved. He discusses the cognitive costs of generating ideas outside the mainstream—“Just keeping an open mind can have psychic costs” (Klayman 1995, 411)—and the potential social disapproval of those who waffle. And he discusses the often-immediate benefits of myside confidence triumphing over the more long-term benefits of doubt and uncertainty. Anticipating Mercier and Sperber (2011) in some ways, Klayman (1995, 411) argues that “when other people lack good information about the accuracy of one’s judgments, they may take consistency as a sign of correctness”; he points to the many characteristics of myside argumentation (e.g., consistency, confidence) that can bootstrap social benefits to the individual and group. Dan Kahan’s discussions of his concept of identity protective cognition (Kahan 2013, 2015; see also Kahan, Jenkins-Smith, and Braman 2011; Kahan et al. 2017) likewise suggest other potential mechanisms for myside bias to confer evolutionary benefit by facilitating group cohesion. These possible social benefits must be taken into account when we assess the overall rationality of mysided thinking…

Stanovich, Keith E. (2021). The Bias That Divides Us. MIT Press. Kindle Edition.
That blew me away. I am a long-time fan of Mercier & Sperber's "Why Do Humans Reason?"
"Why do humans reason?" Uhhh... to WIN the argument at hand. If objective truth happens along the way, so much the better.
A "Pen is Mightier Than The Sword" riff.
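Stanovich's signal-detection point above—that a "jumpy" detector prone to false alarms can be the rational design when misses are far costlier—lends itself to a quick toy simulation. All the numbers below (threat rate, costs, noise level) are made-up illustrative assumptions, not anything from the book:

```python
import random

random.seed(42)

# Hypothetical costs: missing a real threat (a predator) is vastly more
# expensive than a false alarm (fleeing from a rustling bush).
COST_MISS = 100.0
COST_FALSE_ALARM = 1.0

def expected_cost(threshold, trials=100_000):
    """Average cost per event for a detector that cries 'threat' whenever
    noisy evidence exceeds `threshold`."""
    total = 0.0
    for _ in range(trials):
        threat = random.random() < 0.05                  # threats are rare
        evidence = (1.0 if threat else 0.0) + random.gauss(0, 0.5)
        alarmed = evidence > threshold
        if threat and not alarmed:
            total += COST_MISS                           # a miss
        elif alarmed and not threat:
            total += COST_FALSE_ALARM                    # a false alarm
    return total / trials

# A cautious, "truth-seeking" threshold vs. progressively jumpier ones.
for t in (0.9, 0.5, 0.1):
    print(f"threshold {t:.1f}: expected cost {expected_cost(t):.3f}")
```

Under these assumed costs, the lowest (most paranoid) threshold minimizes expected cost despite generating far more false alarms—the "deeply unintelligent" detector is the evolutionarily sensible one.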


I agreed to accept a comp hardcopy to read, evaluate, and review this book. Just finished it. Review pending. Tangential to the topic of this post, but a very enjoyable read nonetheless. The above quote sums it up succinctly and accurately. While the "pursuit of happiness" is an enduring yet often "aspirational" ideal, orienting toward consistently having more deliberate "fun" is easier to accomplish and helps in the achievement of rational, prosocial, stable "happiness."


Goes beyond “fun.”


You know those recent iPhone 14 Pro Max ads, “Hollywood in your pocket?“ I guess we’re fixin’ to see. I just ordered the full Monty—a 14 Pro Max with a terabyte of capacity.

“Baltimore in my pocket.”

The U.S. public debt

The U.S. public debt is not properly a GOP federal budget bargaining chip. The entirety of constitutional language devoted to it is set forth above.

Thursday, January 5, 2023

Damar Hamlin

24-year-old, 2nd-year NFL Buffalo Bills defensive player, graduate of the University of Pittsburgh. He went into cardiac arrest during the first quarter of the Monday Night Football game against Cincinnati after making an open-field tackle. I feared he would die. Tonight, there is great news that he has regained consciousness and is able to communicate with his doctors. He is still on a vent in critical condition in the ICU, and the likely extent of his recovery is still unknown, but things are looking much more favorable than they were just a day ago. Yes!

Latest report is that Damar's breathing tube has been removed, and he's communicating lucidly.