
Saturday, March 9, 2024

Pentagon Valley? Moving fast and breaking “things” (including human non-combatants)?

What could possibly go wrong? Not wholly a forward-looking speculation, it turns out.
  

This jumped in my face today. Been lying there on the coffee table for a couple of days. Finally got around to it. Totally coheres with the Kara Swisher stuff of my prior post.
Three months before Hamas attacked Israel, Ronen Bar, the director of Shin Bet, Israel’s internal security service, announced that his agency had developed its own generative artificial intelligence platform—similar to ChatGPT—and that the technology had been incorporated quite naturally into the agency’s “interdiction machine,” assisting in decision-making “like a partner at the table, a co-pilot.” As the Israeli news site Tech12 explained in a preview of his speech:
The system knows everything about [the terrorist]: where he went, who his friends are, who his family is, what keeps him busy, what he said and what he published. Using artificial intelligence, the system analyzes behavior, predicts risks, raises alerts.
Nevertheless, Hamas’s devastating attack on October 7 caught Shin Bet and the rest of Israel’s multibillion-dollar defense system entirely by surprise. The intelligence disaster was even more striking considering Hamas carried out much of its preparations in plain sight, including practice assaults on mock-ups of the border fence and Israeli settlements—activities that were openly reported. Hamas-led militant groups even posted videos of their training online. Israelis living close to the border observed and publicized these exercises with mounting alarm, but were ignored in favor of intelligence bureaucracies’ analyses and, by extension, the software that had informed them. Israeli conscripts, mostly young women, monitoring developments through the ubiquitous surveillance cameras along the Gaza border, composed and presented a detailed report on Hamas’s preparations to breach the fence and take hostages, only to have their findings dismissed as “an imaginary scenario.” The Israeli intelligence apparatus had for more than a year been in possession of a Hamas document that detailed the group’s plan for an attack.

Well aware of Israel’s intelligence methods, Hamas members fed their enemy the data that they wanted to hear, using informants they knew would report to the Israelis. They signaled that the ruling group inside Gaza was concentrating on improving the local economy by gaining access to the Israeli job market, and that Hamas had been deterred from action by Israel’s overwhelming military might. Such reports confirmed the Israeli intelligence system’s rigid assumptions about Hamas behavior, overlaid with a racial arrogance that considered Palestinians incapable of such a large-scale operation. AI, it turned out, knew everything about the terrorist except what he was thinking…
Imagine our surprise. It gets worse.
…[M]isplaced confidence was evidently not confined to Israeli intelligence. The November/December issue of Foreign Affairs not only carried a risibly ill-timed boast by national security adviser Jake Sullivan that “we have de-escalated crises in Gaza,” but also a paean to AI by Michèle Flournoy. Flournoy is a seasoned denizen of the military-industrial complex. The undersecretary of defense for policy under Barack Obama, she transitioned to, among other engagements, a lucrative founding leadership position with the defense consultancy WestExec Advisors. “Building bridges between Silicon Valley and the U.S. government is really, really important,” she told The American Prospect in 2020. Headlined “AI Is Already at War,” Flournoy’s Foreign Affairs article invoked the intelligence analysts who made “better judgments” thanks to AI’s help in analyzing information. “In the future, Americans can expect AI to change how the United States and its adversaries fight on the battlefield,” she wrote. “In short, AI has sparked a security revolution—one that is just starting to unfold.” This wondrous new technology, she asserted, would enable America not only to detect enemy threats, but also to maintain complex weapons systems and help estimate the cost of strategic decisions. Only a tortuous and hidebound Pentagon bureaucracy was holding it back.

Lamenting obstructive Pentagon bureaucrats is a trope of tech pitches, one that plays well in the media. “Tech Start-Ups Try to Sell a Cautious Pentagon on A.I.” ran a headline in the New York Times last November, over a glowing report on Shield AI, a money-losing drone company for which Flournoy has been an adviser…
And, it gets worse.
 

This shit makes me ill.
The belief that software can solve problems of human conflict has a long history in U.S. war-making. Beginning in the late Sixties, the Air Force deployed a vast array of sensors across the jungles of Southeast Asia masking the Ho Chi Minh trail, along which North Vietnam supplied its forces in the south. Devised by scientists advising the Pentagon, the operation, code-named Igloo White, was designed to detect human activity by the sounds of marching feet, the smell of ammonia from urine, or the electronic sparks of engine ignitions, relaying that information to giant IBM computers housed in a secret base in Thailand. The machines, the most powerful then in existence, processed the signals to pinpoint enemy supply columns otherwise invisible under the jungle canopy. The scheme, in operation from 1967 to 1972 at an annual cost in the hundreds of millions of dollars, was a total failure. The Vietnamese swiftly devised means to counter it; just as Hamas would later short-circuit Shin Bet’s algorithms by feeding the system false information, the Vietnamese faked data, hanging buckets of urine in trees off the trail or steering herds of livestock down unused byways, which the humming computers dutifully processed as enemy movements. Meanwhile, North Vietnamese forces in the south remained well supplied. In 1972, they launched a powerful offensive using hundreds of tanks that went entirely undetected by Igloo White. The operation was abandoned shortly thereafter…
No, hardly a forward-looking speculative concern.
 

 Move fast and kill people. Armed or not.

 
We'll leave things here for now. Andrew Cockburn:
I was curious about Palantir, whose stock indeed soared amid the 2023 AI frenzy. I had been told that the Israeli security sector’s AI systems might rely on Palantir’s technology. Furthermore, Shin Bet’s humiliating failure to predict the Hamas assault had not blunted the Israel Defense Forces’ appetite for the technology; the unceasing rain of bombs upon densely packed Gaza neighborhoods, according to a well-sourced report by Israeli reporter Yuval Abraham in +972 Magazine, was in fact partly controlled by an AI target-creation platform called the Gospel. The Gospel produces automatic recommendations for where to strike based on what the technology identifies as being connected with Hamas, such as the private home of a suspected rank-and-file member of the organization. It also calculates how many civilians, including women and children, would die in the process—which, as of this writing, amounted to at least twenty-two thousand people, some 70 percent of them women and children. One of Abraham’s intelligence sources termed the technology a “mass assassination factory.” Despite the high-tech gloss on the massacre, the result has been no different from the slaughter inflicted, with far more primitive means, on Dresden and Tokyo during World War II.
My links column's ongoing list of concerns:
Armed State Conflicts + Malign Technologies ... No surprises here.
 
CODA
 
An old 2002 beef of mine with the DARPA "Total Information Awareness" tech+surveillance+military thing (uh, it has link rot). And, yeah, I actually sent this to Admiral Poindexter. I have long been outa [bleeps] to give.
 _________
 
