Search the KHIT Blog

Tuesday, December 5, 2023

336 days and counting down

I am no Republican. I am now headlong into this book. You should be as well.


Friday, December 1, 2023

OpenAI: "Show me baboons tossing a pizza alongside Jesus, rendered in the style of Matisse."

The Silicon Valley successor to YouTube cat videos?
On deck... See prior post. Related riffs continue.
An eminent political thinker uses our history with states and corporations―“artificial agents” to which we have granted immense power―to predict how AI will remake society.

Countless books, news reports, and opinion pieces have announced the impending arrival of artificial intelligence, with most claiming that it will upend our world, revolutionizing not just work but society overall. Yet according to political philosopher and historian David Runciman, we’ve actually been living with a version of AI for 300 years―because states and corporations are robots, too. In The Handover, Runciman explains our current situation through the history of these “artificial agents” we created to rescue us from our all-too-human limitations―and demonstrates what this radical new view of our recent past means for our collective future.

From the United States and the United Kingdom to the East India Company, Standard Oil, Facebook, and Alibaba, states and corporations have gradually, and then much more rapidly, taken over the planet. They have helped to conquer poverty and eliminate disease, but also unleashed global wars and environmental degradation. As Runciman demonstrates, states and corporations are the ultimate decision-making machines, defined by their ability to make their own choices and, crucially, to sustain the consequences of what has been chosen. And if the rapid spread of the modern state and corporation has already transformed the conditions of human existence, new AI technology promises the same.

But what happens when AI interacts with other kinds of artificial agents, the inhuman kind represented by states and corporations? Runciman argues that the twenty-first century will be defined by increasingly intense battles between state and corporate power for the fruits of the AI revolution. In the end, it is not our own, human relationship with AI that will determine our future. Rather, humanity’s fate will be shaped by the interactions among states, corporations, and thinking machines.

With clarity and verve, The Handover presents a brilliantly original history of the last three centuries and a new understanding of the immense challenges we now face.—Amazon blurb
Ton of good writing going on these days on the topic.
Imagine a world of superhuman machines, built in our image and designed to make our lives go better. Imagine that these machines turn out to be vastly more powerful than we are. It’s not only that we can’t do what they do; we can’t really understand how they do it either. Still, we come to rely on them. They are there to serve our interests, offering us convenience, efficiency, flexibility, security and lots of spare time. Imagine that it all works. As a result of our inventions, we become longer lived, richer, better educated, healthier, and perhaps happier too (though that remains up for debate). We enjoy lives that would be unrecognisable to people born just a couple of generations earlier. The human condition is transformed.

Yet we know – surely, we know? – that there are enormous risks in becoming so dependent on these artificial versions of ourselves. They are superhuman but they are also fundamentally inhuman. They lack the essence of what makes us who we are. Call it a conscience. Call it a heart. Call it a soul. The potential power of these machines in the service of conscienceless, heartless, soulless human beings, of whom there are still plenty, is frightening. But more frightening still is the possibility that these machines will start taking decisions for themselves. They are meant to serve us, but they also have the capacity to destroy us. What if their power were to be turned against their creators? We might have ended up building the agents of our own obsolescence.

This is a very twenty-first-century story, and perhaps the quintessential twenty-first-century nightmare. On the cusp of the AI revolution, we are now constructing machines capable of doing things that leave us exhilarated, baffled or terrified.

Runciman, David (2023). The Handover: How We Gave Control of Our Lives to Corporations, States and AIs. Liveright. Kindle Edition.

Some interesting material in David's book relevant to current U.S. events.
The January 6th, 2021 Trump DC Insurrection attempt? Three years later we're still mired in the contentious process of establishing culpabilities.
BTW, Liz Cheney's new book comes out December 5th.

Elon Musk celebrated his 44th birthday in July 2015 at a three-day party thrown by his wife at a California wine country resort dotted with cabins. It was family and friends only, with children racing around the upscale property in Napa Valley.
This was years before Twitter became X and Tesla had a profitable year. Mr. Musk and his wife, Talulah Riley — an actress who played a beautiful but dangerous robot on HBO’s science fiction series “Westworld” — were a year from throwing in the towel on their second marriage. Larry Page, a party guest, was still the chief executive of Google. And artificial intelligence had pierced the public consciousness only a few years before, when it was used to identify cats on YouTube with 16 percent accuracy.
A.I. was the big topic of conversation when Mr. Musk and Mr. Page sat down near a firepit beside a swimming pool after dinner the first night. The two billionaires had been friends for more than a decade, and Mr. Musk sometimes joked that he occasionally crashed on Mr. Page’s sofa after a night playing video games.
But the tone that clear night soon turned contentious as the two debated whether artificial intelligence would ultimately elevate humanity or destroy it...
A long-read. A good read. 
As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.
If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.

With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.

That insult, Mr. Musk said later, was “the last straw.”

Many in the crowd seemed gobsmacked, if amused, as they dispersed for the night, and considered it just another one of those esoteric debates that often break out at Silicon Valley parties.

But eight years later, the argument between the two men seems prescient. The question of whether artificial intelligence will elevate the world or destroy it — or at least inflict grave damage — has framed an ongoing debate among Silicon Valley founders, chatbot users, academics, legislators and regulators about whether the technology should be controlled or set free.

That debate has pitted some of the world’s richest men against one another: Mr. Musk, Mr. Page, Mark Zuckerberg of Meta, the tech investor Peter Thiel, Satya Nadella of Microsoft and Sam Altman of OpenAI. All have fought for a piece of the business — which one day could be worth trillions of dollars — and the power to shape it.

At the heart of this competition is a brain-stretching paradox. The people who say they are most worried about A.I. are among the most determined to create it and enjoy its riches. They have justified their ambition with their strong belief that they alone can keep A.I. from endangering Earth.

Mr. Musk and Mr. Page stopped speaking soon after the party that summer. A few weeks later, Mr. Musk dined with Mr. Altman, who was then running a tech incubator, and several researchers in a private room at the Rosewood hotel in Menlo Park, Calif., a favored deal-making spot close to the venture capital offices of Sand Hill Road.

That dinner led to the creation of a start-up called OpenAI later in the year. Backed by hundreds of millions of dollars from Mr. Musk and other funders, the lab promised to protect the world from Mr. Page’s vision.

Thanks to its ChatGPT chatbot, OpenAI has fundamentally changed the technology industry and has introduced the world to the risks and potential of artificial intelligence. OpenAI is valued at more than $80 billion, according to two people familiar with the company’s latest funding round, though Mr. Musk and Mr. Altman’s partnership didn’t make it. The two have since stopped speaking.

“There is disagreement, mistrust, egos,” Mr. Altman said. “The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”…
Yeah, paywalled. I'm a long-time subscriber to "The Failing New York Times."

Sunday, November 26, 2023

Information overload and artificial intelligence

Above: yeah, that'd be @BobbyGcyborg in the wake of the Singularity.
My more plausible impending next-life future. 🤣

Lotta good stuff in Science Magazine of late. To wit:

When Iosif Gidiotis began his doctoral studies in educational technology this year, he was intrigued by reports that new tools powered by artificial intelligence (AI) could help him digest the literature in his discipline. With the number of papers burgeoning—across all of science, close to 3 million were published last year—an AI research assistant “sounds great,” says Gidiotis, who is studying at the KTH Royal Institute of Technology. He hoped AI could find more relevant papers than other search tools and summarize their highlights.

He experienced a bit of a letdown. When he tried AI tools such as one called Elicit, he found that only some of the returned papers were relevant, and Elicit’s summaries weren’t accurate enough to win him over. “Your instinct is to read the actual paper to verify if the summary is correct, so it doesn’t save time,” he says. (Elicit says it is continuing to improve its algorithms for its 250,000 regular users, who in a survey credited it with saving them 90 minutes a week in reading and searching, on average.)

Created in 2021 by a nonprofit research organization, Elicit is part of a growing stable of AI tools aiming to help scientists navigate the literature. “There’s an explosion of these platforms,” says Andrea Chiarelli, who follows AI tools in publishing for the firm Research Consulting. But their developers face challenges. Among them: The generative systems that power these tools are prone to “hallucinating” false content, and many of the papers searched are behind paywalls. Developers are also looking for sustainable business models; for now, many offer introductory access for free. “It is very difficult to foresee which AI tools will prevail, and there is a level of hype, but they show great promise,” Chiarelli says.

Like ChatGPT and other large-language models (LLMs), the new tools are “trained” on large numbers of text samples, learning to recognize word relationships. These associations enable the algorithms to summarize search results. They also identify relevant content based on context in the paper, yielding broader results than a query that uses only keywords. Building and training an LLM from scratch is too costly for all but the wealthiest organizations, says Petr Knoth, director of CORE, the world’s largest repository of open-access papers. So Elicit and others use existing open-source LLMs trained on a wide array of texts, many nonscientific...
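The "relevant content based on context" point above can be made concrete with a toy sketch of mine (not Elicit's actual method): papers and queries are mapped to embedding vectors, and similarity of vectors, not keyword overlap, does the ranking. The titles and numbers below are invented stand-ins for what a trained model would produce.

```python
from math import sqrt

# Hand-made "embedding" vectors standing in for a real model's output;
# each dimension loosely encodes a topic (AI/biology vs. history).
papers = {
    "Deep learning for protein folding":       [0.9, 0.1, 0.0],
    "Neural networks predict molecule shapes": [0.8, 0.2, 0.1],
    "A survey of medieval trade routes":       [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embedding of the query "AI for biology" -- note it shares
# no keywords with the top-ranked title.
query_vec = [0.85, 0.15, 0.05]

ranked = sorted(papers, key=lambda t: cosine(papers[t], query_vec), reverse=True)
print(ranked[0])  # "Deep learning for protein folding"
```

A plain keyword query for "AI" or "biology" would have missed that paper entirely; that gap is what these tools are selling.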
So, of course, I hopped on over to the Elicit website to rummage around and check things out.

Okeee Dokeee, then...

  • Good reasoning is reasoning that reliably arrives at true beliefs and good decisions.
  • Good reasoning is rare but essential for government, companies, and research.
  • Advanced AI systems are an opportunity to radically scale up good reasoning.
  • Getting this right is crucial for guiding the transition to a society that runs on machine thinking.
"A society that runs on machine thinking."  Hmmm... what could possibly go wrong?
What is good reasoning?
We want people, organizations, and machines to arrive at:
  • True beliefs
  • Good decisions
Good reasoning is reasoning that reliably leads to these outcomes.
The status quo
Right now, good reasoning is rare. Bad reasoning is everywhere:
  • Governments allocate resources for pandemics and similar risks based on biases, political pressures, or vested interests
  • Courts judge cases based on selective, fallacious, or unfair use of evidence, arguments, and law
  • Companies decide their strategies based on overconfidence, underestimation, or complacency
  • Investors allocate capital based on herd mentality, hype, or fear
  • Researchers do work that is unimpactful or harmful, and manipulate or hide data
Yeah. No nits to pick with any of that.
But, "why do humans reason?" Well, predominantly to win the argument (Mercier & Sperber's "argumentative theory of reasoning"). Should verifiable truth happen along the way, so much the better.

I sent an email to my son Nick and his lifelong best friend Nate:
Yo, FYI:
As scientists face a flood of papers, AI developers aim to help | Science | AAAS

This stuff is of serious interest to me. One of my long-term questions has been, “can AI do argument analysis?”

What is “argument analysis?” Actually the full phrase is “argument analysis and evaluation.” I got hip to it in graduate school at UNLV. The “analysis” part is basically where you “flowchart” the logic in a prose piece making the case for some assertion of truth: all of the if/then/therefore/else contingency stuff. The “evaluation” part that follows is basically then an argument of YOURS assessing the relative strengths and weaknesses of the original proffer set forth by the author. The difficulty comes when you consider that everyone has a significantly different way of expressing the same ideas in prose writing: language fluidity (not to mention the thousands of different human languages around the world these days).

See my argument analysis paper here. (pdf)

Basically, I went through the JAMA paper paragraph by paragraph, flowcharting the “logic” that comprised the sub-arguments leading to the final conclusion (took me probably 100 hours). I wrote about this also on my blog, here:

Imagine trying to do this shit manually to a full length book. You’d fucking never be heard from again.

If someone could come up with an “AI/LLM” app that could parse the reasoning in a complex scientific or technical paper (or book), or, say, a complex political policy argument, it would totally kick ass. My 3GL/4GL "structured" RDBMS programming skills are two generations out of date by now. But, logic is logic, and the same basic principles apply – I assume.

Just looking for a bit of feedback. You guys are way smart.


PS- looking into this “Elicit” company cited in the article, oh, geez, they too are trying to “make the world a better place.”

Nick, our baby, is now 40. He started his current job as the Operations Manager for a custom beverage packaging & distribution company based here in Baltimore. He's transitioned of late into an IT management role. He has degrees in accounting and finance, and operations management. He and his bud Nate, a year younger, met at The Hebrew Academy in Las Vegas when Nick was 9. We're not Jewish, but we sent him there anyway. Neither are we Catholic (I'm a fully recovered Episcopalian, dilettante UU, and Secular Zen Sympathizer), but we subsequently sent Nick to Bishop Gorman High School, also in Vegas (I have a funny story about that; I'll get to it).
Nate is absurd. He's the son of Frank Sinatra's Las Vegas bassist, Seth Kimball. Nate now has a Master's in Jazz Performance, a real estate agent license, a private pilot's license, and recently transitioned to work in software engineering for Microsoft (Azure platform). We used to call him "Number Two Son." He spent more time at our house than at home.

Below: From his jazz CD. An Old Soul, our Nate.
  "Nate Kimball: Hello World"
NOTE: Earlier post of mine on AI and music.
So, back to the post topic.

Could AI do accurate real-world-relevant "argument analysis" (absent significant "hallucination")?
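The "flowcharting" step described in my email above can at least be represented in code. Here's a minimal sketch of mine: an argument as a directed graph of premises and intermediate conclusions, with hypothetical claim text invented purely for illustration.

```python
# Each node is a claim; "supports" lists the claims it feeds into.
# P* = premises, C* = (sub)conclusions. The claim wording is hypothetical.
argument = {
    "P1": {"text": "Administrative overhead consumes a large share of health spending",
           "supports": ["C1"]},
    "P2": {"text": "A single payer would cut administrative overhead",
           "supports": ["C1"]},
    "C1": {"text": "Single-payer would reduce overall costs",
           "supports": ["C2"]},
    "P3": {"text": "All citizens should have access to care",
           "supports": ["C2"]},
    "C2": {"text": "A single-payer system should be adopted",
           "supports": []},
}

def premises_for(claim):
    """Which nodes directly support a given claim?"""
    return [k for k, v in argument.items() if claim in v["supports"]]

print(premises_for("C2"))  # ['C1', 'P3']
```

The hard part, of course, isn't the data structure; it's getting from fluid prose to nodes and edges reliably, which is exactly where an LLM would have to earn its keep.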

More from Science:
Science, justice, and evidence

Courts in the United States have increasingly relied on scientific evidence and expert testimony to help resolve questions of fact. On 1 December 2023, amendments to Federal Rule of Evidence 702 will take effect, further clarifying the court’s responsibilities as a gatekeeper for expert evidence. This update comes just a few months after the 30-year anniversary of the Supreme Court’s landmark decision on how federal judges should evaluate scientific evidence. Daubert v. Merrell Dow was hailed as a victory for the use of scientific information in the legal system and certainly cast a much-needed spotlight on scientific evidence in the courtroom. But the nuanced and flexible nature of the “Daubert standard” has since led to substantial inconsistencies in its application. Most strikingly, it has had far more impact in civil cases than criminal cases. Daubert’s core tenet—that scientific evidence introduced in court should be adequately valid and reliable—needs to be taken just as seriously in the criminal justice system and for forensic science as it has been in civil cases.

Daubert instructs judges to be “gatekeepers” responsible for assessing the validity of the science brought to court. Previously, courts often asked only whether the science was “generally accepted” by the relevant scientific community. Because judges often treated an expert witness’s own assertion of general acceptance as adequate, the rule did not present much of a bar for admissibility. With Daubert, the onus is more squarely on the judge to assess validity. The opinion details numerous possible factors to consider (including testing, error rate, peer review, and general acceptance), but gives little truly concrete guidance and allows the court great flexibility in weighing these factors.

When the Daubert decision was handed down, many legal analysts and scientists agreed that the use of experts in court had long been a mess. Some critics lambasted judges for too often permitting expert testimony that wasn’t scientifically credible; others worried that juries were fundamentally incapable of making reasoned decisions when competing experts offered wildly different, contradictory testimony. Daubert shined a much-needed spotlight on expert evidence—but 30 years later, what has been its evidentiary impact?...

Jennifer Mnookin is chancellor and professor of law at the University of Wisconsin–Madison, Madison, WI, USA
F.R.E. 702 12/01/23 AMENDMENTS

In April, the Supreme Court sent a list of proposed amendments to Congress that amend the Federal Rules of Evidence. Absent action by Congress, the rules go into effect December 1, 2023.  The proposed amendments affect Rules 106, 615 and, relevant to this article, 702.

Rule 702 addresses testimony by an expert witness. The proposed rule reads as follows (new material is underlined; matters omitted are lined through):
A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent demonstrates to the court that it is more likely than not that:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied [lined through] the expert’s opinion reflects a reliable application of [underlined] the principles and methods to the facts of the case.

The proposed amendment seeks to clarify how a judge should view his or her gatekeeping role without substantively changing the rule. Assuming Congress adopts the proposed rule, the proposal amends the rule in two ways.

First, it clarifies that a court should not admit expert testimony unless the proponent demonstrates that it is “more likely than not” that the proffered testimony meets Rule 702’s admissibility requirements. The rules committee recommended the change because many courts hold that the critical question of the sufficiency of the basis for an expert’s opinions goes to the weight of the testimony, not its admissibility. Thus, to be admissible in the future, the proponent of expert testimony must demonstrate, consistent with Rule 104(a) and case law interpreting Rule 104(a), that the testimony meets admissibility requirements – i.e., meets a preponderance of the evidence standard…
Back in the mid-late 1980's I worked in a forensic-level environmental radiation lab in Oak Ridge as a systems programmer and QC analyst (we did a ton of litigation support and regulatory action analytics).
Our (now-late) founder and CEO, John A. Auxier, PhD, CHP was the nation's premier expert on radiation dose/exposure. Former Director of Industrial Health & Safety at Oak Ridge National Laboratory, he was a member of the Three Mile Island Commission, and Editor of the Health Physics Society Journal.
The governing forensic legal standard then was "the Frye Standard." One hopes that this new FRE 702 Amendment will materially improve things with respect to "Science, Justice, and Evidence" in our endless adversarial venues.

And, let us always keep in mind "Ethics."

More to come. Still studyin' up on these Elicit peeps (pdf). Smart bunch.
The world has changed since the rise of civilization some 10,000 years ago. Look around—what do you see? You may be sitting on a chair, in front of a computer, a glass of water next to it on the table, within the walls of a house, the neighbor’s dog barking outside, cars driving by on asphalt roads built next to power lines supplying electricity for the local school or hospital. Almost every part of your environment has been shaped by humans; it is there because we intend for it to be there, or at least approve of its existence.

This has arguably been a change for the better—as indicated by the existence of the notion of progress—even if not without exceptions, and not without contention. Basic human needs such as food, shelter, health, and physical safety are provided to a degree far beyond hunter-gatherer times. What is responsible for this change? While it may be difficult to pin down the relative contributions of different causes and enabling factors, it is safe to say that our capability for thought was a necessary ingredient… 

Chapter 2
Background: Probabilistic Programming

A probabilistic program is a program in a universal programming language with primitives for sampling from probability distributions, such as Bernoulli, Gaussian, and Poisson. Execution of such a program leads to a series of computations and random choices. Probabilistic programs thus describe models of the stochastic generation of results, implying a distribution on return values. Most of our examples use Church [26], a probabilistic programming language based on the stochastic lambda calculus. This calculus is universal in the sense that it can be used to define any computable discrete probability distribution [49] (and indeed, continuous distributions when encoded via rational approximation)… 
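A minimal illustration of the idea above, in Python rather than Church (my sketch, not from the thesis): a program with sampling primitives implies a distribution on its return values, which repeated execution approximates.

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

def flip(p=0.5):
    """A Bernoulli sampling primitive."""
    return random.random() < p

def noisy_sum():
    # Two biased coin flips; the return value is a random quantity.
    return int(flip(0.7)) + int(flip(0.7))

# Running the program many times approximates its implied distribution:
# in theory P(0) = 0.09, P(1) = 0.42, P(2) = 0.49.
samples = Counter(noisy_sum() for _ in range(100_000))
for value in sorted(samples):
    print(value, round(samples[value] / 100_000, 3))
```

Church programs work the same way, except the language itself supplies the primitives and the inference machinery.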
Reasoning about Reasoning as Nested Conditioning
4.1 Introduction

Reasoning about the beliefs, desires, and intentions of other agents—theory of mind—is a central part of human cognition and a critical challenge for human-like artificial intelligence. Reasoning about an opponent is critical in competitive situations, while reasoning about a compatriot is critical for cooperation, communication, and maintaining social connections. A variety of approaches have been suggested to explain humans’ theory of mind. These include informal approaches from philosophy and psychology, and formal approaches from logic, game theory, artificial intelligence, and, more recently, Bayesian cognitive science.

Many of the older approaches neglect a critical aspect of human reasoning—uncertainty—while recent probabilistic approaches tend to treat theory of mind as a special mechanism that cannot be described in a common representational framework with other aspects of mental representation. In this chapter, we discuss how probabilistic programming, a recent merger of programming languages and Bayesian statistics, makes it possible to concisely represent complex multi-agent reasoning scenarios. This formalism, by representing reasoning itself as a program, exposes an essential contiguity with more basic mental representations.

Probability theory provides tools for modeling reasoning under uncertainty: distributions formalize agents’ beliefs, conditional updating formalizes updating of beliefs based on evidence or assertions. This approach can capture a wide range of reasoning patterns, including induction and non-monotonic inference. In cognitive science, probabilistic methods have been very successful at capturing aspects of human learning and reasoning [92]. However, the fact that conditioning is an operation applied to such models and not itself represented in such models makes it difficult to accommodate full theory of mind: We would like to view reasoning as probabilistic inference and reasoning about others’ reasoning as inference about inference; however, if inference is not itself represented as a probabilistic model we cannot formulate inference about inference in probabilistic terms.

Probabilistic programming is a new, and highly expressive, approach to probabilistic modeling. A probabilistic program defines a stochastic generative process that can make use of arbitrary deterministic computation. In probabilistic programs, conditioning itself can be defined as an ordinary function within the modeling language. By expressing conditioning as a function in a probabilistic program, we represent knowledge about the reasoning processes of agents in the same terms as other knowledge. Because conditioning can be used in every way an ordinary function can, including composition with arbitrary other functions, we may easily express nested conditioning: we can condition any random variable, including random variables that are defined in terms of other conditioned random variables. Nested conditioning describes reasoning about reasoning and this makes theory of mind amenable to the kind of statistical analysis that has been applied to the study of mental representation more generally.
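A toy sketch of the chapter's central move (mine, under simplifying assumptions; real Church uses smarter inference than rejection sampling): conditioning written as an ordinary function, which can then be nested so that one agent's inference runs inside another's.

```python
import random

def condition(program, predicate, max_tries=100_000):
    """Rejection sampling: rerun a stochastic program until predicate holds."""
    for _ in range(max_tries):
        result = program()
        if predicate(result):
            return result
    raise RuntimeError("condition was never satisfied")

def alice():
    # Alice's belief: a die roll, conditioned on her evidence that it is even.
    return condition(lambda: random.randint(1, 6), lambda n: n % 2 == 0)

def bob():
    # Bob reasons about Alice's (already conditioned) belief, adding his own
    # evidence that the roll exceeded 3 -- conditioning nested inside
    # conditioning, using the very same ordinary function.
    return condition(alice, lambda n: n > 3)

print(bob())  # always 4 or 6
```

Because `condition` is just a function, "inference about inference" costs nothing to express; that is the representational point the chapter is making.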

The probabilistic program view goes beyond other probabilistic views by extending compositionality from a restricted model specification language to a Turing-complete language, which allows arbitrary composition of reasoning processes. For example, the multi-agent influence diagrams proposed by Koller and Milch [45] combine the expressive power of graphical models with the analytical tools of game theory, but their focus is not on representing knowledge that players might have about other players’ reasoning…
Hmmm... WWSS? (What Would Sapolsky Say?)
OK, then...


Good stuff.
In the wake of studying Mustafa Suleyman's new book "The Coming Wave," I've begun interacting a bit with his app. 
e.g., a question I posed yesterday:
"In 1994, the Journal of the American Medical Association published a paper by an organization known as “Physicians for a National Healthcare Plan.” In the paper, they argue for what is known as a “single payer system” for healthcare. Can you analyze and evaluate their argument for me?"

"The 1994 JAMA paper by Physicians for a National Healthcare Plan (PNHP) argued for the adoption of a single-payer healthcare system in the US. Their main argument was that such a system would provide universal coverage, eliminate the need for private insurance, and reduce administrative costs, thereby improving access to healthcare and reducing overall costs. They proposed that a national formulary would be used to determine which prescription drugs and medical supplies would be covered, and that an expert panel would oversee the program to ensure its effectiveness. Their argument was based on the premise that all Americans should have access to quality healthcare, and that a single-payer system was the most efficient and cost-effective way to achieve this. However, their proposal was met with criticism from those who argued that it would lead to a decline in the quality of care and a loss of patient choice."
Again, see my 1994 grad school Argument Analysis & Eval paper (pdf). The app's reply left me a bit less than dazzled here. The response was "accurate," but nowhere near the depth I wanted to see. Sounded like a Google search result. But, it was lightning fast, gotta give 'em that.

In AI research—and, really, in all of science—the rise and fall of ideas is not based on pure meritocracy. Usually, the scientists and companies with the most resources and the biggest loudspeakers exert the greatest influence. Consensus forms around these entities, which effectively means that they determine the direction of AI development. Within the AI industry, power is already consolidated in just a few companies—Meta, Google, OpenAI, Microsoft, and Anthropic. This imperfect process of consensus-building is the best we have, but it is becoming even more limited because the research, once largely performed in the open, now happens in secrecy.

Over the past decade, as Big Tech became aware of the massive commercialization potential of AI technologies, it offered fat compensation packages to poach academics away from universities. Many AI Ph.D. candidates no longer wait to receive their degree before joining a corporate lab; many researchers who do stay in academia receive funding, or even a dual appointment, from the same companies. A lot of AI research now happens within or connected to tech firms that are incentivized to hide away their best advancements, the better to compete with their business rivals… via @TheAtlantic
"Why Won’t OpenAI Say What the Q* Algorithm Is?"

AGI software (e.g., "Scheme" & "Church") has to run on cutting-edge hardware–e.g., GPU technology. Massively parallel computing.
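The "massively parallel" point can be made concrete with a toy matrix multiply (my sketch, not from the article): a neural-net layer is a matrix-vector product, and every output element is an independent dot product, so a GPU can compute thousands of them simultaneously.

```python
# A 2x2 "weight matrix" and an input vector -- hypothetical numbers.
A = [[1, 2],
     [3, 4]]
x = [10, 20]

# Each row's dot product depends on nothing else; this loop runs
# sequentially in Python, but on a GPU each row would be computed
# at the same time on a separate core.
y = [sum(a * b for a, b in zip(row, x)) for row in A]
print(y)  # [50, 110]
```

Scale the toy up to matrices with billions of entries, evaluated millions of times during training, and the appeal of Nvidia's hardware is obvious.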

  "AI and the Nvidia GPU" 
The revelation that ChatGPT, the astonishing artificial-intelligence chatbot, had been trained on an Nvidia supercomputer spurred one of the largest single-day gains in stock-market history. When the Nasdaq opened on May 25, 2023, Nvidia’s value increased by about two hundred billion dollars. A few months earlier, Jensen Huang, Nvidia’s C.E.O., had informed investors that Nvidia had sold similar supercomputers to fifty of America’s hundred largest companies. By the close of trading, Nvidia was the sixth most valuable corporation on earth, worth more than Walmart and ExxonMobil combined. Huang’s business position can be compared to that of Samuel Brannan, the celebrated vender of prospecting supplies in San Francisco in the late eighteen-forties. “There’s a war going on out there in A.I., and Nvidia is the only arms dealer,” one Wall Street analyst said…

Click title


Thursday, November 23, 2023

Thanksgiving 2023

  "Arguing better: Atlantic Radio"

Wednesday, November 22, 2023

November 22nd, 1963

60 years ago I was a 17-yr old high school senior in Somerville, New Jersey, walking to the football field house several blocks from the school building after classes to suit up for a last practice just before our final game of the year vs nearby Bound Brook HS. Other kids randomly started shouting that the President had been shot in Dallas. We thought it was just a joke. We learned otherwise in horror when we arrived at the practice field.

Monday, November 20, 2023

Making the World a Bitter Place

To the Board of Directors at OpenAI,

OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability.

Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

Kara Swisher is all over this CusterFluck. Are these Tech bros "Making the world a better place?"

…Altman’s firing can be seen as a stunning experiment in OpenAI’s unusual structure. It’s possible this experiment is now unraveling the company as we’ve known it, and shaking up the direction of AI along with it. If Altman had returned to the company via pressure from investors and an outcry from current employees, the move would have been a massive consolidation of power. It would have suggested that, despite its charters and lofty credos, OpenAI was just a traditional tech company after all.

Even with Altman out, this tumultuous weekend showed just how few people have a say in the progression of what might be the most consequential technology of our age. AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers, and multibillion-dollar companies. The fate of OpenAI might hang in the balance, but the company’s conceit—the openness it is named after—showed its limits. The future, it seems, will be decided behind closed doors.
Microsoft Emerges as the Winner in OpenAI Chaos
Microsoft has hired OpenAI cofounders Sam Altman and Greg Brockman to head a new advanced AI team, acquiring one of the most successful management groups in the AI industry.

JUST AFTER 2 am Pacific time on Monday morning, several OpenAI staffers—including its chief technology officer, Mira Murati—posted in unison on X: “OpenAI is nothing without its people.” Sam Altman, who was dramatically removed as the company’s chief executive on Friday, reposted many of them. By then, Altman already had a new job. Satya Nadella—CEO of Microsoft, a major investor and partner of OpenAI—announced late on Sunday night that Altman and his cofounder Greg Brockman would be joining the tech giant to head a new “advanced AI research team.” Nadella’s statement seemed to suggest that others from the startup would be joining Microsoft.

By hiring Altman and Brockman amid the chaos at the top of OpenAI, Microsoft has managed to acquire one of the most successful management teams in artificial intelligence without having to buy the company—whose pre-chaos valuation was $86 billion…
"Silicon Valley is peerless when it comes to mythologizing its ideas men (and yes, they tend to be men). In the industry’s telling, technologies and their founders succeed in a meritocratic fashion, based on the genius of the idea and the skill of its execution. OpenAI’s self-mythologizing went a step further, positioning itself almost in opposition to its own industry—a company so committed to an ideology and a purity of product that it would self-immolate to protect itself and others. Over the weekend, this ideology crashed against the rocks of a capitalist reality. As is always true in Silicon Valley, a great idea can get you only so far. It’s the money that gets you over the finish line."
Nothing more to see here, folks, time to move on... (yeah, right.)

There’s something a little absurd about the saga. It’s remarkable to see so many prominent people in A.I. acting so human—being impulsive, enraged, and confused. The scary part is that the confusion has deep roots. It’s real, and inherent to the field. How dangerous is A.I.? How close are we to inventing A.G.I.? Who should be trusted to keep it safe, and how should they go about doing that? No one really knows the answers to those questions, and, as a result, some of the most qualified people in the world are fighting among themselves.—Joshua Rothman, the New Yorker

Sunday, November 19, 2023

Misk Joudeh

"Collateral damage?"
As a father, grandfather, and great-grandfather, this shit just absolutely breaks my heart. The barbarity of Hamas, the inevitable crushing response of Israel...
Infant Misk Joudeh briefly had actual "personhood" (as did the Israeli infants butchered by Hamas). 

Saturday, November 11, 2023


Apropos of the prior post, today I ran across this article in The Atlantic.
The Supreme Court overturned Roe v. Wade more than a year ago, but in the time since, the number of abortions performed nationwide seems to have gone up, not down. And not just in blue states—even in red states where abortion has been banned, some sizable percentage of people can and do travel out of state or get abortion pills in the mail.

The anti-abortion movement is—no surprise—committed to stopping this flow of patients and abortion pills across state lines. One strategy that has recently emerged is an effort to revive and reinterpret the Comstock Act, a 19th-century anti-vice law that the movement claims makes sending or receiving any abortion drug or device in the mail a federal crime. Other approaches are proliferating. In Alabama, the attorney general has vowed to use existing state conspiracy law to prosecute residents for helping others seek abortions out of state. And in Texas, several counties passed an ordinance allowing anyone to sue a person driving on local highways who is bringing a patient to get an abortion, whether illegally in Texas or legally elsewhere…

The more that off-the-wall anti-abortion tactics proliferate, the easier it is to dismiss them as scare tactics. But that might not be the case at all. Any law can be enforced if enough people take part in rustling up offenders. Of course, the Supreme Court could be hostile to the anti-abortion activists’ new enforcement strategies, striking down laws limiting travel or rejecting conservatives’ interpretation of the Comstock Act. But this Court may not. The majority’s conservative leanings are clear, and many of the legal areas at issue are undeveloped enough to leave room for a sympathetic justice to side with abortion opponents. If that comes to pass, then the laws’ “unenforceability” can’t be counted on to protect anyone.

Mary Ziegler is the Martin Luther King Professor of Law at UC Davis.
Which led me here.

Stay tuned. Hip deep in this one at the moment, perhaps 50% through it. A nice complement to Kathleen's book. (Couldn't resist Photoshopping the Bourne movie poster graphic.)
Scholars have traced how an ascendant form of Christian nationalism—the belief that the United States was and always should be a Christian nation—was needed for Trump to edge out Hillary Clinton in 2016. But the influence of the anti-abortion movement went much further, and it had everything to do with money in U.S. politics.

Political scientists and historians of the religious right have told part of the story of the fascinating partnership between abortion foes and Republican leaders. Their studies often suggest that while pro-lifers became dependent on the GOP, the Republican Party did not fundamentally change its priorities. Some assert that the GOP co-opted the religious right, gaining its votes while offering little but speeches in return.

By focusing on the pro-life movement and its complex relationship with the broader religious right, this book shows that partnership with the movement had consequences for the GOP that went well beyond abortion. Anti-abortion operatives helped make control of the Supreme Court a deciding issue for conservative voters who had no legal background. And to gain control of the Supreme Court, abortion foes joined and ultimately helped to lead a growing fight against campaign finance laws, persuading many social conservatives and GOP leaders to oppose them as well.

 These shifts to a focus on judicial nominations and money in politics helped the GOP score electoral victories it might not otherwise have managed. But in the long term, the changes in campaign finance rules that the anti-abortion movement helped to achieve came at a terrible price for the Republican establishment—and had important consequences for the functioning of American democracy.

Of course, other factors helped to gut the establishment as well. To begin with, the GOP had long courted the kind of voters whose loyalty Trump later captured. For decades, Republicans had fueled whites’ anxieties about immigration and demographic change. Christian nationalism—and some evangelical Protestants’ investment in a masculine populism—did not begin with Trump. In the past, however, Republican politicians had effectively managed these forces. GOP voters had fallen in love with populists before, but the establishment had found a way to sideline them.

Gradually, however, the Republican establishment grew less able to marginalize insurgents. Americans increasingly disliked those in the opposing political party, and this negative partisanship triggered a surge in partisan loyalty. Whomever their own party nominated, polarized voters did not cross party lines because they viewed the opposing party’s candidate as unacceptable. Some of these changes affected both parties; negative partisanship became common among Democrats, and politicians on the left became more ideologically homogenous (and more progressive) than they once were. Democrats have relied on independent expenditures, dark money, and super PACs (political action committees) to fund their campaigns, and at times have outraised and outspent the opposition. And Democrats on occasion played a part in pushing more money into politics, passing laws like the 2014 Continuing Appropriations Act, which significantly raised the contribution limits that applied to political party committees. But partisan polarization has been asymmetric. The GOP veered sharply to the right on issues from climate change to immigration, and the party has proven more willing to play constitutional hardball than the opposition, holding up Supreme Court nominations and routinely forcing government shutdowns...

Ziegler, Mary. Dollars for Life (pp. 6-8). Yale University Press. Kindle Edition.
Whereas Policing Pregnant Bodies spans millennia, back to the ancient Greeks (with a lot of focus on the sociological and cultural factors shaping our current views on reproductive policy), Dollars For Life dwells principally on the increasingly partisan right-to-life / choice politics of the U.S. period encompassing my own nearly 78 years. I'm finding it in large measure a well-written exercise in "recall," providing a heightened awareness of the bigger picture as it exists today.

I also recommend two other relevant, excellent books already in my stash:

Quite worthy reads. Anti-choice advocates have every right to their (minority) opinions. That is not the same, however, as moral & legal license to dictate what other women must do.


Should Donald Trump manage to win the 2024 Presidential election, this hard-right group intends to hit the ground running full-speed on day one to eradicate every aspect of "liberalism" they can. Their Manifesto (pdf) runs about 900 pages. I'm several hundred pages into it thus far. A few relevant teasers:
The next conservative President must make the institutions of American civil society hard targets for woke culture warriors. This starts with deleting the terms sexual orientation and gender identity (“SOGI”), diversity, equity, and inclusion (“DEI”), gender, gender equality, gender equity, gender awareness, gender-sensitive, abortion, reproductive health, reproductive rights, and any other term used to deprive Americans of their First Amendment rights out of every federal rule, agency regulation, contract, grant, regulation, and piece of legislation that exists…

In Africa, Skinner writes, the U.S. “should focus on core security, economic, and human rights” rather than impose radical abortion and pro-LGBT initiatives. Divisive symbols such as the rainbow flag or the Black Lives Matter flag have no place next to the Stars and Stripes at our embassies…

Max Primorac asserts that the United States Agency for International Development (USAID) must be reformed, writing, “The Biden Administration has deformed the agency by treating it as a global platform to pursue overseas a divisive political and cultural agenda that promotes abortion, climate extremism, gender radicalism, and interventions against perceived systematic racism.”…

Reverse policies that allow transgender individuals to serve in the military. Gender dysphoria is incompatible with the demands of military service, and the use of public monies for transgender surgeries or to facilitate abortion for servicemembers should be ended…

Focus on core diplomatic activities, and stop promoting policies birthed in the American culture wars. African nations are particularly (and reasonably) non-receptive to the U.S. social policies such as abortion and pro-LGBT initiatives being imposed on them…

Finally, conservatives should gratefully celebrate the greatest pro-family win in a generation: overturning Roe v. Wade, a decision that for five decades made a mockery of our Constitution and facilitated the deaths of tens of millions of unborn children. But the Dobbs decision is just the beginning. Conservatives in the states and in Washington, including in the next conservative Administration, should push as hard as possible to protect the unborn in every jurisdiction in America. In particular, the next conservative President should work with Congress to enact the most robust protections for the unborn that Congress will support while deploying existing federal powers to protect innocent life and vigorously complying with statutory bans on the federal funding of abortion. Conservatives should ardently pursue these pro-life and pro-family policies while recognizing the many women who find themselves in immensely difficult and often tragic situations and the heroism of every choice to become a mother. Alternative options to abortion, especially adoption, should receive federal and state support.

In summary, the next President has a moral responsibility to lead the nation in restoring a culture of life in America again...
I fixed some of their Art for them.

While this stuff might seem nominally scary, it would not surprise me one whit for this to end up being largely one more grift Op, perhaps just a tad up the socioeconomic-cognitive distribution above the Trump Rally MAGA Marks.
And, How About Those VERMIN?
Yeah, we are drowning in grotesque, often barbaric exigencies of late. Makes it difficult to even stay on one topical thread. Humor me...
I'm late to the party, but, yikes, the Apple TV+ series "The Morning Show." I'm now on S1:E4.

This series is simply brilliant. Aaron Sorkin was not the screenwriter, but the script is every bit as good. The topical relevance to this post is spot-on.


I finished Mary Ziegler's "Dollars for Life." Very illuminating. Combined with "Policing Pregnant Bodies," you get a complete picture of the acrimonious reproductive rights issue.
Also, below, I read Jonathan Karl's new book. Quite sobering. I am SO beyond sick of Donald Trump.

More shortly...