For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively—deploying claims about the world, explanations, advice, encouragement, apologies, and promises—while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM’s words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer. When corrected again, it apologizes again—sometimes reversing its position entirely. What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any. The words sound responsible, yet they are empty.
This interaction exposes the conditions that make it possible to hold one another to our words. When language that sounds intentional, personal, and binding can be produced at scale by a speaker who bears no consequence, the expectations listeners are entitled to hold of a speaker begin to erode. Promises lose force. Apologies become performative. Advice carries authority without liability. Over time, we are trained—quietly but pervasively—to accept words without ownership and meaning without accountability. When fluent speech without responsibility becomes normal, it does not merely change how language is produced; it changes what it means to be human.
This is not just a technical novelty but a shift in the moral structure of language. People have always used words to deceive, manipulate, and harm. What is new is the routine production of speech that carries the form of intention and commitment without any corresponding agent who can be held to account...
"Any corresponding agent who can be held to account?"
Language has always been more than the transmission of information. When we speak, our words commit us within an implicit social contract. They expose us to judgment, retaliation, shame, and responsibility. To mean what we say is to risk something.
You might get primaried. Perhaps doxxed. You might lose your job, your scholarship, your visa...
The AI researcher Andrej Karpathy has likened LLMs to human ghosts. They are software that can be copied, forked, merged, and deleted. They are not individuated. The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can be made worse by what they say. With LLMs, there is no such locus. No body that can be confined or restrained; no social or institutional standing to revoke; no reputation to damage. They cannot, in any meaningful sense, bear loss for their words. When the speaker is an LLM, the human stakes that ordinarily anchor speech have nowhere to attach.
Speech without enforceable consequence undermines the social contract. Trust, cooperation, and democratic deliberation all rely on the assumption that speakers are bound by what they say.
The response cannot be to abandon these tools. They are powerful and genuinely valuable when used with care. Nor can the response be to pursue ever greater machine capability alone. We need structures that reanchor responsibility: constraints that limit the use of AI in contexts such as schools and workplaces, and that preserve authorship, traceability, and clear liability. Efficiency must be constrained where it corrodes dignity.
As the idea of AI “avatars” enters the public imagination, it is often cast as a democratic advance: systems that know us well enough to speak in our voice, deliberate on our behalf, and spare us the burdens of constant participation. It is easy to imagine this hardening into what might be called an “avatar state”—a polity in which artificial representatives debate, negotiate, and decide for us, efficiently and at scale. But what such a vision forgets is that democracy is not merely the aggregation of preferences. It is a practice of speaking in the open. To speak politically is to risk being wrong, to be answerable, to live with the consequences of what one has said. An avatar state—fluent, tireless, and perfectly malleable—would simulate deliberation but without consequence. It would look, from a distance, like self-government. Up close, it would be something else entirely: responsibility rendered optional, and with it, the dignity of having to stand behind one's words made obsolete.
Wiener understood that the whirlwind would come not from malevolent machines but from human abdication. Capability displaces responsibility. Efficiency erodes dignity. If we fail to recognize that shift in time, responsibility will return to us only after the damage is done—seated, as Wiener warned, on the whirlwind.
Deb Roy is a professor of Media Arts and Sciences at MIT, where he directs the MIT Center for Constructive Communication, based at the MIT Media Lab. He is also the cofounder and chair of Cortico, a non-profit dedicated to building stronger civic networks.