via The New Republic
Conservative political activist Ivan Raiklin claims to have assembled a “Deep State target list” that includes high-ranking Democrats and Republicans, U.S. Capitol Police officers, officials at the FBI and other intelligence agencies, witnesses in Trump’s impeachment trials, and journalists at The New York Times, CNN, The Washington Post, and other news outlets. Their supposed crime? Being Trump’s political enemies. And Raiklin views himself as justice incarnate.
During one podcast appearance earlier this year, Raiklin said his nickname was the “deep state marauder, a.k.a. the mauler.” Raiklin’s plan for what to do with his list isn’t nearly as gruesome as that nickname suggests, but it is still terrifying. He intends to enlist right-wing sheriffs to carry out mass arrests, and in May, he declared his intention to arrange “livestreamed swatting raids.”
Raiklin reportedly claims that if he went public with all of the so-called evidence he collected on Trump’s enemies in the deep state, it would constitute probable cause to arrest those high-ranking officials and journalists on his list.
Those arrests would be carried out by “constitutional sheriffs,” specifically a right-wing anti-government group called the Constitutional Sheriffs and Peace Officers Association. Those sheriffs would then deputize the 75,000 veterans Raiklin claims were dismissed from the military for refusing to comply with Covid-19 vaccine mandates, forming a rogue army intent on revenge…
Former Sheriff Richard Mack, who leads the Constitutional Sheriffs group, told Raw Story that he had severed ties with Raiklin in early June and did not approve of his rhetoric. “Quite frankly, he talks about that list of 350 people—I’m sure they can afford lawyers,” said Mack. “It reeks of lawsuits, and it doesn’t follow due process.”
Raiklin has also emailed sheriffs’ offices across the country hoping to enlist them to his cause. Not one has signed on, according to Raw Story.
Recall how Donald Trump has repeatedly promised his MAGA fans "retribution." This dude is a real piece of work.
GOP CONVENTION ERRATUM
I.Just.Can't.Do.Any.More...
I'd intended to stay up and hear JD Vance's VP nomination speech. Decided it would be pointless.
OTHER NEWS...
My latest issue of Science Magazine.
JULY 18TH GOP CONVENTION PRIME TIME CONCLUSION
In the wake of a sorry parade of MAGA D-list celebs—including Hulk Hogan, Dana White, Kid Rock, Tucker Carlson, Lee Greenwood—Donald Trump came out and babbled on for 93 minutes. Same litany. The less said about it the better.
One charming erratum: Son Eric Trump spoke to introduce his Dad, and claimed that his father "built the New York skyline."
Right. Sure.
Also, one musical performer was bumped from the final evening performance schedule after the Heritage Foundation President learned of his religious beliefs.
Kid Rock was much more fitting.
I finished Frank Bruni's important book The Age of Grievance, and, in addition to several others still in play, I'm back onto the compelling book The AI Mirror.
If the image of AI as an imminent destroyer or supplanter of human dominance and supremacy were only a looking-glass fantasy—a flight of misplaced imagination, serving as a creative template adapted by generations of science fiction writers, then perhaps we could just enjoy it for its imagined possibilities. I, Robot is a thrill! The Terminator is a lot of fun. The Matrix is a gas. Ex Machina is spellbinding. Westworld is mind-blowing (often literally—minds get blown open quite a lot)!
But in fact, the looking-glass illusion these entertaining fictions reflect is being leveraged today by billionaires, lobbyists, and powerful AI companies to directly influence and reshape public policy, scientific research priorities, venture capital investment, and philanthropic giving. It is being placed in service of movements to defer action on real and imminent existential threats—from climate change and ocean acidification to global pandemics and food insecurity—in favor of lavish funding for research on long-term AGI risk, which these powerful interests now claim outstrips all other dangers. The new tech-centered movement of longtermism, which grew out of an earlier movement called effective altruism, has put forth in many powerful academic, policy, and media circles the idea that AGI presents an incalculably greater threat than even climate change. Many longtermists now argue that the most prudent use of philanthropy and public resources for long-term human benefit is to invest more heavily in AGI risk and AI safety research. Yet it is not the safety of living humans today, or even the next generation, that drives longtermists.
Longtermist arguments often go as follows: because AGI in the far future could theoretically kill or immiserate even more living humans than are alive on the planet now, preventing future AGI from doing this is a more rational use of resources than funding clean energy tech, food security, or public health today. Related claims made by some longtermists include the idea that saving the future requires directing even more of today’s resources toward the wealthy in the Global North, rather than sending aid to the most impoverished countries in the Global South, since wealthy people in already disproportionately wealthy regions are better positioned to use their resources to fight these “existential” risks from AGI.
Longtermism is rooted in utilitarian ethics, a brand of moral theory that has long drawn effective altruists to the contemporary work of philosopher Peter Singer. Utilitarianism, from its foundation in the nineteenth-century writings of philosophers Jeremy Bentham and John Stuart Mill to the AI-inflected writings of today’s longtermists, has always held that our first and highest moral duty is not to root out injustice, or cultivate virtue, or protect human rights and the planet, but rather to maximize the sum total of happiness that will be enjoyed in the future.
Peter Singer is widely admired for his vigorous defense of animal liberation on the grounds of their capacity to suffer, which subtracts from their happiness. He is less well known for his claims (still bitterly recalled by many disability activists) that it is ethical to kill so-called “defective infants” whose potential for future happiness he sees as severely compromised. What longtermists and effective altruists have in common with Singer is the utilitarian view that morality is a happiness optimization problem, a matter of running the numbers. Conveniently for AI enthusiasts, that makes morality just the sort of thing that computers and computer scientists are supposed to be good at!
For a virtue ethicist like me, there are profound dangers and moral errors in any view of morality that separates it from humane feeling, relational bonds, and social context, reducing it to a mere computation of net happiness units. As longtermist Eliezer Yudkowsky readily admits, this kind of fundamentalist utilitarianism licenses one to turn off moral feeling and just “shut up and multiply” happiness sums, even if the action dictated by the calculation feels deeply wrong or shocks the conscience. Some longtermists at Oxford’s Future of Humanity Institute (FHI) have used such license to entertain hypotheticals in which the welfare of billions of today’s suffering humans is dispassionately sacrificed for the potential to create far larger numbers of happy “digital people” in virtual worlds of the distant future. Others work out longtermist defenses of a related utilitarian view known as the Repugnant Conclusion: the proposal that given the option, it would be better to choose a world packed with people who all suffer so greatly that their lives are “barely worth living” than to choose a far more modest population of ten billion people who can all flourish.
These types of scenarios are often seen as compelling rebuttals to utilitarian logic. When taken instead as plausible moral obligations for humans, such scenarios are, as the philosopher and fierce critic of utilitarianism Bernard Williams famously said, the result of having “one thought too many.” Ideas like this did not dominate the early philanthropic thinking of effective altruists. From donating surplus personal wealth to the purchase of mosquito nets and other high-impact aid for the global poor, to investing in public health and climate resilience, much of what effective altruists initially tried to do is aligned with commonsense moral obligations to present and future generations. Even as the effective altruism movement merged with the more speculative AI fantasies of longtermism, many remained committed to fighting the existential threats already here: climate change, biohazards, and nuclear holocaust. As observed by a critic quoted in a 2022 profile on effective altruist and longtermist William MacAskill in The New Yorker, “If you read things that [effective altruists] are saying, they sound a lot crazier than what they’re actually doing.” If the more speculative AGI visions of longtermists were just intellectual side hustles for bored Oxford philosophers and Silicon Valley investors, we might think them harmless.
But today, longtermism is the language of moral thought spoken in a growing number of the wealthiest and most powerful political and industrial circles. Billionaires like Jaan Tallinn and Peter Thiel have been described as avid supporters, while Elon Musk, funder of the Future of Humanity Institute, has cited MacAskill as “a close match for my philosophy.” FHI Senior Research Fellow Toby Ord notes in his bio that he has advised “the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science.” Ord was quoted to the UN General Assembly in a 2021 speech on climate by former British PM Boris Johnson. Nor do longtermists only seek influence in conservative political circles. FTX crypto founder and longtermist Sam Bankman-Fried was celebrated as a “megadonor” in US Democratic circles before being jailed on fraud charges, while longtermists in the UK formed a group called Labour for the Long Term to influence the Labour Party. Political influence is a core part of longtermist strategy. A 2022 position paper by Oxford’s Global Priorities Institute outlined their political case for institutional longtermism, the view that far-future considerations should dominate public policy and even our choice of political institutions, in contrast with longtermism as a personal philosophy for setting one’s own philanthropic priorities….
Vallor, Shannon. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (pp. 76-80). Oxford University Press. Kindle Edition.
"Longtermist arguments often go as follows: because AGI in the far future could theoretically kill or immiserate even more living humans than are alive on the planet now, preventing future AGI from doing this is a more rational use of resources than funding clean energy tech, food security, or public health today. Related claims made by some longtermists include the idea that saving the future requires directing even more of today’s resources toward the wealthy in the Global North, rather than sending aid to the most impoverished countries in the Global South, since wealthy people in already disproportionately wealthy regions are better positioned to use their resources to fight these “existential” risks from AGI."
My TwitterX reaction.
All apropos of my abiding core concerns.
_________