Search the KHIT Blog

Thursday, April 30, 2026

Psychology and Artificial Intelligence

 
Ran into this podcast today on YouTube. Found it quite interesting, particularly given all the fractiousness engulfing the AI technology debate. Good use of 48 minutes of your time.
(~@3:23, interesting Dr. Keaton observation) “…For me personally, I hate when we teach our undergraduates—as you know as often is done—we basically just teach them a string of Nobel prize winning experiments and just connect the dots, and you go through the twists and turns—brought up this statement by Feynman—that the difference between knowing the name of the thing and knowing something about it is the most dangerous gap in all of science.” [More on that bit of Zen shortly.] 
Tom Griffiths' latest:
 
This book is about how the human mind comes to understand the world—and ultimately, perhaps, how we humans may come to understand ourselves. Many disciplines, ranging from neuroscience to anthropology, share this goal—but the approach that we adopt here is quite specific. We adopt the framework of cognitive science, which aims to create such an understanding through reverse-engineering: using the mathematical and computational tools from the engineering project of creating artificial intelligence (AI) systems to better understand the operation of human thought. AI generates a rich and hugely diverse stream of hypotheses about how the human mind might work. But cognitive science does not just take AI as a source of inspiration. What we have learned about the mathematical and computational underpinnings of human cognition can also help to build more human-like intelligence in machines. 

The fields of AI and cognitive science were born together in the late 1950s, and grew up together over their first decades. From the beginning, these fields’ twin goals of engineering and reverse-engineering human intelligence were understood to be distinct, yet deeply related through the lens of computation. The rise of the digital computer and the possibility of computer programming simultaneously made it plausible to think that, at least in principle, a machine could be programmed to produce the input-output behavior of the human mind. So it was a natural step to suggest that the human mind itself could be understood as having been programmed, through some mixture of evolution, development, and maybe even its own reflection, to produce the behaviors we call “intelligent.” In these early days, AI researchers and cognitive scientists shared their biggest questions: What kind of computer was the brain, and what kind of program could the mind be? What model of computation could possibly underlie human intelligence—both its inner workings and its outwardly observable effects? 

Now, almost 70 years later, these two fields have matured and (as often happens to siblings) grown apart to some extent. Cognitive science has become a thriving, occasionally hot, but still relatively small interdisciplinary field of academic study and research. AI has become a dominant societal force, intellectually, culturally, and economically. It is no exaggeration to say that we are living in the first “AI era,” in the sense that we are surrounded by genuinely useful AI technologies. We have machines that appear able to do things we used to think only humans could do––driving a car, having a conversation, or playing a game like Go or chess—yet we still have no real AI, in the sense that the founders of the field originally envisioned. We have no general-purpose machine intelligence that does everything a human being can or thinks about everything a human can, and it’s not even close. The AI technologies we have today are built by large, dedicated teams of human engineers, at great cost. They do not learn for themselves how to drive, converse, or play games, or want to do these things for themselves, the way any human does. Rather, they are trained on vast data sets, with far more data than any human being ever encounters, and those data are carefully curated by human engineers. Each system does just one thing: the machine that plays Go doesn’t also play chess or tic-tac-toe or bridge or football, let alone know how to see the stones on the Go board or pick up a piece if it accidentally falls on the floor. It doesn’t drive a car to the Go tournament, engage in a conversation about what makes Go so fascinating, make a plan for when and how it should practice to improve its game, or decide if practicing more is the best use of its time. The human mind, of course, can do all these things and more—independently learning and thinking for itself to operate in a hugely complex physical, social, technological, and intellectual world. 
And the human mind spontaneously learns to figure all this out without a team of data scientists curating the data on which it learns, but instead through growing up interacting with that complex and chaotic world, albeit with crucial help from caregivers, teachers, and textbooks. 

To be sure, recent and remarkable developments in deep learning have created AI models which, with the right prompting, can be used to perform a surprisingly diverse range of tasks, from writing computer code, academic essays, and poems to creating images. But, by contrast, humans autonomously create their own objectives and plans and are variously curious, bored, or inspired to explore, create, play, and work together in ways that are open-ended and self-directed. AI is smart; but as yet it is only a faint echo of human intelligence. 

What’s missing? Why is there such a gap between what we call AI today and the general computational model of human intelligence that the first computer scientists and cognitive psychologists envisioned? And how did AI and cognitive science lose, as has become increasingly evident, their original sense of common purpose? The pressures and opportunities arising from market forces and larger technological developments in computing, along with familiar patterns of academic fads and trends, have all surely played a role. Some of today’s AI technologies are often described as inspired by the mind or brain, most notably those based on artificial neural networks or reinforcement learning, but the analogies, although they have historically been crucial in inspiring modern AI methods, are loose at best. And most cognitive scientists would say that while their field has made real progress, its biggest questions remain open. What are the basic principles that govern how the human mind works? If pressed to answer that question honestly, many cognitive scientists would say either that we don’t know or that at least there is no scientific consensus or broadly shared paradigm for the field yet…


Griffiths, Thomas L.; Chater, Nick; Tenenbaum, Joshua B. (2024). Bayesian Models of Cognition: Reverse Engineering the Mind (Preface). Kindle Edition.
Another of his books.

Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with the concepts needed to understand the world inside us—the Laws of Thought. You have probably heard of Newton’s universal law of gravitation. But you might not have heard of Shepard’s universal law of generalization—a simple principle that describes the behavior of any intelligent organism, anywhere in the universe. While the story of how mathematics has been used to reveal the mysteries of the external world is familiar to anybody who has taken even a casual interest in science, the story of how it has been used to study our internal world is not. This book tells that story. 

A little over three hundred years ago, a small group of philosophers and mathematicians began to pull together the threads that make up modern science. They developed a keen sense of observation, a talent for conducting experiments, and a new set of tools for expressing mathematical theories. Over the following centuries, observation, experiment, and mathematics have been combined to reveal both the smallest and the largest things in the physical universe in ever-increasing detail. But those philosophers and mathematicians weren’t just interested in the physical universe. They were also interested in the mind, and they wanted to use the same mathematical tools to study it. Thomas Hobbes wrote about the possibility of understanding “ratiocination” as “calculation,” asking whether we might imagine thoughts being added and subtracted. René Descartes imagined numbers being assigned to thoughts in much the same way that they are assigned to collections of physical objects. Gottfried Wilhelm Leibniz spent his life trying, and ultimately failing, to find a way to use arithmetic to describe human reason. 

The first success in using mathematics to analyze thought wouldn’t appear until the middle of the nineteenth century, when George Boole cracked the problem that Leibniz hadn’t been able to solve by coming up with a new kind of algebra. That first success had far-reaching consequences, leading to the development of formal logic and computers. The first attempts to evaluate mathematical theories of thought by comparing them to human behavior wouldn’t appear until the middle of the twentieth century, in the Cognitive Revolution that launched the field of cognitive science—the interdisciplinary science of the mind. Cognitive scientists have since come to recognize the limits of formal logic as a model of human cognition, and have developed completely new mathematical approaches—artificial neural networks, which illustrate the power of continuous representations and statistical learning, and Bayesian models, which reveal how to capture prior knowledge and deal with uncertainty. Each has something to offer for understanding the mind. 

In the twenty-first century, knowing the Laws of Thought is just as important to scientific literacy as knowing the Laws of Nature. Artificial intelligence systems demonstrate on a daily basis that yet another aspect of thought and language can be emulated by machines, pushing us to reconsider the way we think about ourselves. Understanding human minds and how much of them can be automated becomes critical as we plan our careers and think about the world that our children will occupy. By the end of this book you will know the basic principles behind how modern artificial intelligence systems work, and know exactly where they are likely to continue to fall short of human abilities…


Griffiths, Tom (2026). The Laws of Thought: The Quest for a Mathematical Theory of the Mind (Introduction). Kindle Edition.     
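An aside from me: Shepard's universal law of generalization, name-checked in that excerpt, is simple enough to sketch in a few lines of Python. A toy illustration only—Shepard's claim is that the probability of generalizing a response from one stimulus to another falls off exponentially with "psychological distance"; the decay constant k here is an arbitrary stand-in, not anything from Griffiths' book.

```python
import math

def shepard_generalization(distance, k=1.0):
    """Shepard's universal law of generalization: the probability of
    generalizing a learned response from one stimulus to another decays
    exponentially with the psychological distance between them.
    (k is an arbitrary decay constant for illustration.)"""
    return math.exp(-k * distance)

# Identical stimuli generalize perfectly...
print(shepard_generalization(0.0))             # 1.0
# ...and generalization falls off smoothly with distance.
print(round(shepard_generalization(1.0), 3))   # 0.368
print(round(shepard_generalization(2.0), 3))   # 0.135
```

That one exponential curve reportedly fits generalization data across species and sensory modalities—hence "universal," and hence the book's framing of it as a Law of Thought on par with Newton's law of gravitation.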
QUICK UPDATE
 
I finished Episode 3 of Netflix's "3 Body Problem." Bought volume 1 of the print trilogy and have begun reading. Pretty cool.
 
UPDATE
From Science Magazine
Large language models (LLMs) are artificial intelligence (AI) algorithms that are trained on vast amounts of data to learn patterns that enable them to generate human-like responses. Reasoning models are LLMs with the added capability of working through problems step by step before responding, thus mirroring structured thinking. Such AI systems have performed well in assessing medical knowledge, but whether they can match physician-level clinical reasoning on authentic diagnostic tasks remains largely unknown. On page 524 of this issue, Brodeur et al. (1) demonstrate that AI can now seemingly match or exceed physician-level clinical diagnostic reasoning on text-based scenarios by measuring against human physician performances on clinical vignettes and real-world emergency cases. The findings indicate an urgent need to understand how these tools can be safely integrated into clinical workflows, and a readiness for prospective evaluation alongside clinicians.

AI has the potential to support a broad range of health care applications, from clinical decisions to medical education and the provision of patient-facing health information. LLMs have passed medical licensing examinations and performed well on structured clinical assessments, raising the prospect that they could help alleviate global health care workforce shortages. However, passing examinations is not the same as being a doctor, and demonstrating physician-level performance on authentic clinical tasks is a fundamentally harder challenge (2).

Brodeur et al. evaluated OpenAI’s first reasoning model, o1-preview (released in September 2024), across five experiments that assess diagnostic performance on clinical case vignettes against physician and prior-model baselines. A sixth experiment compared o1 with prior models and physicians across three diagnostic touchpoints on 76 actual emergency department cases. Across the experiments, the o1 models substantially outperformed prior-generation nonreasoning LLMs (e.g., GPT-4) and, in many cases, the physicians themselves. For example, when provided with published clinicopathological conference cases, GPT-4 achieved exact or very close diagnostic accuracy in 72.9% of cases, whereas o1-preview achieved this in 88.6% of cases. Further, in actual emergency department cases, o1 achieved 67.1% exact or very-close diagnostic accuracy at initial triage, outperforming two expert attending physicians (55.3% and 50.0%), with blinded reviewers unable to distinguish the AI output from human output. This advance sets a new evaluation benchmark—testing AI against physician performance, and ideally alongside physicians, on authentic clinical tasks...
Lengthy, detailed discussion. Seriously of interest to me, in light of my long experience working for and with physicians.
 
ALSO IN SCIENCE TODAY: BOOK REVIEW
 
Amazon blurb.
“Lively. . . . Rousing. . . . Prophecy—roving, intelligent, irreducibly idiosyncratic—can expand our sense of possibility, starting now.” —The New York Times Book Review

Tech empires are the prophets of the modern day, and like the ancient oracles and medieval astrologers that preceded them, they're not in it for the common good—they're in it for power. Award-winning University of Oxford professor Carissa Véliz brilliantly argues why we must reclaim that power, and shows us how.

“A masterpiece. . . . The most important book you will read for years.” —Roger McNamee, New York Times bestselling author of Zucked


For thousands of years, oracles, seers, and astrologers advised leaders and commoners alike about the future. But predictions are often power plays in disguise, obfuscating accountability and stripping individuals of their agency. Today we face the same threat of powerful prophets but under a new facade: tech.

Not only do modern predictions made by tech companies advise on war, industry, and marriages, but artificial intelligence also now determines whether we can get a loan, a job, an apartment, or an organ transplant. And when we cede ground to these predictions, we lose control of our own lives.

Drawing on history’s cautionary tales and modern-day tech companies’ malfeasance—from surveillance and biased algorithms to a startling lack of accountability—Carissa Véliz demonstrates that big tech’s prophecies are just as shallow, dangerous, and unjust as their ancient counterparts’. What she uncovers in the process is chilling. Artificial intelligence is increasing risk in business and society while creating a false sense of security. In this incisive, witty, and bracingly original book, Véliz contends that the main promise of prediction is not knowledge of the future but domination over others. Powerful people use predictions to determine our future. Prophecy is an invitation to defy those orders and live life on our own terms.
SCIENCE MAG
As I finished reading Prophecy, by philosopher Carissa Véliz, the soundtrack of The Matrix hummed in my mind, howled by the band Rage Against the Machine (“Wake up! Wake up!”). In fact, a weird kind of intellectual synesthesia took place throughout my perusal of the book, as I could also hear the dystopian slogan from George Orwell’s 1984 furiously sung by the same band: “Who controls the past now controls the future, who controls the present now controls the past.”

Véliz’s scholarship focuses on ethics and artificial intelligence (AI)—two realms that often refuse to meet. In 2020, she published Privacy Is Power, warning readers against the algorithmic invasion that has continued to take hold of our personal data, feeding the techno-golems of Silicon Valley and threatening our sanity and liberty. This book is her second major admonition.

Prophecy is about the power of predictions, especially when they are maliciously misleading. The structure of the book is dialectic: The first part expounds the promise of predictions and their influence throughout history, and the second articulates their manifold perils and related abuses of power, particularly in the current age of AI oracles. The third and final part of the book seeks a resolution, namely, how to rethink predictions and resist their deceptive lure.

Predictions have always been with us, from ancestral wisdom that told us when to sow and reap to mathematics used to optimize decisionmaking under uncertainty. Forecasts are ultimately guesses—educated or naïve, right or wrong, innocuous or consequential. They can also be deliberately deceptive, not anticipating the future but rather covertly shaping it. When such predictions are then turned into promises and ossified into decrees, we are in trouble.

Using predictions as prescriptions to benefit one’s agenda is not new. Rulers have always done so. But current AI empires make such a practice unprecedentedly pervasive and pernicious.

AI, argues Véliz, is the new diviner—the ultimate prediction machine. Mirrored on the fashionable idea that our brains are inference devices, such artificial prophets are dangerous. Digital technologies now rule our personal and professional lives, as well as the fate of countries and civilizations. They tell us who to date, what to watch, who to hire, when to start a war, and so forth. These simulacra are presented as deep knowledge, even truth, and then turned into self-fulfilling prophecies. Perils abound, as predictions also give us a false sense of security, increasing risks and lacking accountability...
'eh? 
 
More shortly...

Tuesday, April 28, 2026

"Enough is Enough."

And, given the past few shitstorm days, I have most certainly had enough.

 
I need a bit of a diversion.
 
Surfing Netflix tonight. Ran into a SciFi series I'd not heard of.
 

8-episode Season 1. Apparently re-upped for a Season 2. Based on a 3-book trilogy, which I will buy and read. I paused Episode 1 about halfway through to post this.


Oddly resonant. Stay tuned.

Sunday, April 26, 2026

“Digital Gold, or Fool's Gold?”

 
I've posted on Ben and his work before.
See more prior crypto-related posts here.

Saturday, April 25, 2026

SHOTS FIRED INSIDE THE DC HILTON WHITE HOUSE CORRESPONDENTS' DINNER

Trump evacuated, no one killed, lone gunman arrested.
    
 Some stuff is so predictable.

Herr Altman, der Schiedsrichter der Wahrheit.

 
The opening moments of the 1982 film Blade Runner introduce viewers to a world of artificially intelligent beings that are “virtually identical” to humans. To tell man from machine, people rely on something called the Voight-Kampff test, which is a little like a polygraph; robot irises exhibit subtle tells when prompted. If you’re dealing with a robot, you’ll know by the eyes.

If Sam Altman has his way, this could be sort of how it works in real life. Last week, he announced an expansion of the verification service World ID, created by a start-up called Tools for Humanity. Altman co-founded the company in 2019, the same year he became CEO of OpenAI. Onstage last Friday, he described the product as a way to certify personhood in a digital landscape rife with bots, deepfakes, phishers, and other sorts of impostors. Think of it as an evolution of CAPTCHA, the security program used to identify bots and prevent attacks on websites. To verify your humanness and secure a World ID, you must stare into a white, frosted orb and allow the company to take pictures of your face and eyeballs…
The foregoing is excerpted from a recent article in The Atlantic. Commenters were not amused.
__________
pchapin
This technology completely fails in the face of sock puppetry. It only proves that a human was present when the iris scan was made, not that a human is present at the time of the transaction.

Imagine a bad actor who pays a desperately poor person for their iris scan. For, say, $100, that poor person might be able to buy food for their family for weeks. They might not understand the significance of the scan or care about it. For only $10,000, the bad actor can now amass an "army" of sock puppets online using the iris scans of people who will never be online.

The system completely fails to provide the assurance that it claims to provide, and is therefore useless.

DarkHorse
not a chance I’m letting Altman’s tech scan my eyeballs

DrakX316
I honestly struggled to tell if this was satire.

JamesB
So let me get this straight. Mr. Altman creates a world where we can’t trust our own ears and eyes, and then wants me to surrender what’s left of my identity. Just to prove who I am to him?

No. You can’t have that.

Cynthiajay
A 'World ID' ... dear me. Sam Altman apparently doesn't read his history.
Americans were once fiercely opposed to any such thing as a national ID.
Whatever are we to think of this?

steve207144
The test in Blade Runner tested emotional responses to various scenarios. A tortoise is lying upside down in a desert, you won't help it, why? Since the replicants in Blade Runner were emotionally immature, they could be detected by their extreme response.

Given what I've heard about Sam Altman, I would think he would have no emotional response to others suffering, because he seems to be a sociopath.

I wouldn't trust him with my excrement. Never mind all kinds of bio-metric information and an app tracking me on my smartphone.

Knowing if someone on the internet is human, a dog, a bot, or an AI is a problem, but Sam Altman, and the other tech bros, aren't the solution.

rodeoairflow
Literally, it’s actually only a problem for people who want to extract money from the rest of us. If a terminator is bearing down on me, I’m not going to have the time to ask him to gaze into an orb. And otherwise I couldn’t care less if you’re a human, honestly. Do you.

Cynthiajay
He's just another oligarch.
Zuckerberg. Bezos. Musk. Just remember all of them clustered around the dais at trump's inauguration. 
What more do we need to know?

brian99jordan
Story left out that Altman ran a huge Worldcoin scam in Kenya. It was found to be illegal for scamming underage children to provide valuable, sensitive biometric data. Incredibly naive to let this scammer record your eye-print. Altman does not have an honest, non-hustler bone in his body.

pdstolen
For many years, in government and afterward while volunteering, I used—sometimes quite effectively—the original purpose of the federal environmental review laws. Simply, the purpose was to determine both the positive and downside effects of human efforts affecting the environment as well as human life. (That law has been deeply degraded and the original purpose essentially lost in the shuffle.)

My point is this: there is a huge amount of money and effort being spent on the supposed positive effects of various technology—while there is essentially little available for looking at the negative effects. I learned this early on, as we desperately tried to describe the negatives. Now here, you pathetically see the proponent of the positives coming out with supposed solutions to the negatives. No, no no….not at all at what is needed.

Slight
I just can't wait to see whether this will catch on and all the conspiracy theories begin surrounding it. Meanwhile, I will take a hard pass.

rodeoairflow
I’m so freaked out by the description of this technology that it’s difficult for me to understand how anyone could recognize it as anything other than malicious.

Zobi44
The article was way too blasé about it.

malloryt
Brave New World is our new reality.

renegaderosie
Anyone feel the Borg coming for us?

Spectato
Wait, so first you verify that you are human by looking into an orb. And then a token confirming that you are human is stored on your phone. And now the AI agent on your phone can prove online that it's human. Did I get that right?

bookerloo66
Who cares?

malloryt
People who care about unethical market manipulation, violation of privacy rights, and being forced to pay for things that should be optional. 

It is like a pharmaceutical company inventing a disease so they can sell an antidote.

Or a pest control company releasing a colony of termites in a newly constructed housing development, then offering to exterminate them.

Or creating a tax system so complex most Americans have to hire a CPA or pay for an online DIY tax prep service to file their tax return. 

Altman made AI that is used to create false identities that aid in completing fraudulent activities, then he developed a fee-based service that can spot AI fakes.

For now, this human-verification process is “free” to users. But I would bet that eventually that cost will wind up being paid by the consumer. Perhaps a subscription fee or service charge, or an increase in the cost of goods because the store has to pay for the services Altman is providing.

brian99jordan
Altman’s entire career is marked with scams and dishonest, unethical business practices. If he were not protected by the elites he would be in jail.

jo_snover
I have used both Zoom and DocuSign and would like to be able to continue to do so.

I don't want to provide biometric data to a for-profit company run by a man with no ethical compass (Sam Altman) in order to do it. I'm not sure who/what I would trust, but never Sam Altman. Once given, you can't get this data back or control its use.

IadmirePublius
(Edited)
Absolutely unacceptable that they would require me to allow them to ID me that way. Captcha is already bad enough the way it regularly presents me with pictures too fuzzy for me to see even with my glasses on. It discriminates against the visually handicapped. It should be illegal already.

Murray
I’m really quite surprised the author was so willing to provide a private corporation with rather dubious ethics a copy of his most private, personal biometric information.

Once you’ve given up biometrics the threat of identity theft backed by that data rises very significantly, and the value of hacking a database containing millions of peoples’ biometrics is almost inconceivable. Armed with a copy of your irises anyone could become you. Encrypted? Should anyone trust that encryption in 2026? 2030? And I’d also assume systems using this data pass tokens rather than the actual data, so having the ability to create those tokens represents the same threat.

I’d say that threat comes from malicious actors, which now includes AIs run by malicious actors, but I’m hardly convinced Altman himself is to be trusted. Much less Musk. Much less an authoritarian government. They all desperately want our biometrics and want to create situations where we are either incentivised or forced to do so. Don’t give it to them.

ForTheBirds
I was going to make a similar comment about the risks of giving biometric information to these companies. It’s likely to make it easier to use your information by AI.

Hmthinkingaboutthis
My thoughts as well. Why on earth would you give up your biometrics. It's your final "password"

LEON

UPDATE

The long-running fight to rein in the government’s power to search Americans’ phone calls, emails and text messages without a warrant has gained new urgency on Capitol Hill over concerns that AI will supercharge state surveillance.

Privacy advocates warn that if the law enabling warrantless monitoring of Americans is not meaningfully reformed, many citizens could be subject to increasingly invasive AI-powered analysis of communications swept up by foreign intelligence programs as well as commercially available location and behavioral data.

“Imagine instead of doing a query with one person that you turned AI loose on these databases,” Rep. Thomas Massie, R-Ky., said Thursday at a press conference announcing a new bill to close data-collection loopholes. “There’s virtually nothing the government can’t know about you.”

Section 702 of the Foreign Intelligence Surveillance Act (FISA) allows the government to collect the communications of foreigners abroad, but it also enables the government to collect messages, emails and other transmissions from Americans when they contact foreigners. The government can then perform warrantless searches on those emails, messages and other communications. Though the provision was originally passed in 2008, lawmakers must renew it every few years…
Total Information Awareness 3.0?

Friday, April 24, 2026

Some random Rip Currents from Jacob Ward

Jacob Ward
"Other Currents"

1. Meta logged its employees’ keystrokes, then announced their layoffs. The Model Capability Initiative — Meta’s new tool for capturing employee keystrokes and mouse clicks, framed as AI training data — was announced this week with no opt-out option. Two days later, 8,000 layoffs. The sequence is the story: document what the workers do, then eliminate them.

2. Palantir published a manifesto. Anthropic went to court. Two companies drew opposite lines in the same week. Palantir’s 22-point public document calls AI weapons inevitable, some cultures “dysfunctional and regressive,” and pluralism a failure. Bellingcat’s Eliot Higgins noted the obvious: these aren’t abstract ideas floating in space — they’re the stated ideology of a company that sells targeting software to militaries and immigration enforcement. Meanwhile, Anthropic has been in federal court since March fighting a Pentagon designation that labeled it a national security supply-chain risk — because it refused to remove safeguards against autonomous weapons and mass domestic surveillance. One company announced what it believes. The other is paying lawyers to defend it.

3. The Joint Chiefs said autonomous weapons are coming. Nobody asked whether they work. Yesterday the Chairman of the Joint Chiefs called autonomous weapons a “key and essential part of everything we do.” Anthropic’s actual position — that frontier AI is not reliable enough to kill people without a human in the loop — has not been rebutted. It has been bypassed.

4. 96,000 tech layoffs this year, all attributed to AI. Nobody has to prove it. Amazon, Meta, Microsoft, Snap, Salesforce, Block — every announcement uses the same language: efficiency, AI investment, right-sizing. No company is required to document whether AI replaced a single role. The NLRB has no jurisdiction over the framing. Nobody does.

5. Amazon’s Ring can now identify your neighbors by face. Familiar Faces rolls out facial recognition to Ring doorbells — identifying family, friends, delivery drivers — processed in Amazon’s cloud, opt-out by default. EFF and Senator Markey are pushing back. Three states have already blocked it. “Optional and disabled by default” is how every ambient surveillance feature starts.

6. OpenAI lost three senior executives in a single day and is projecting $14 billion in losses. The CPO, the head of Sora, and the enterprise CTO all departed the same Friday. Multiple cited the DOD contract and the cultural shift from research to commercial operations. The company generating $25 billion in annual revenue is simultaneously losing the people who built the products and spending $14 billion more than it earns.

7. Anthropic just passed OpenAI in revenue. The gap matters because of what each company agreed to. Anthropic hit $30 billion in annualized revenue this month, passing OpenAI’s $25 billion. Enterprise customers — not consumers — drove it. Worth noting alongside item 2: Anthropic is the company currently in court over autonomous weapons restrictions. OpenAI signed a DOD contract that removed equivalent restrictions weeks after Anthropic was blacklisted.

8. S&P 500 boards are disclosing AI risk while knowing almost nothing about AI. 83% of S&P 500 companies now list AI as a material risk in disclosures — up from 12% in 2023. Board directors with AI expertise: 2.7%. The people with the legal authority to set limits have mostly opted not to understand the thing they’re supposed to be limiting. This is the governance gap that makes every other story in this section possible.
 I added my own random @BobbyGvegas Current:
Another random “current” from a subscriber: The other day, I posted a story to my Facebook page, and was shortly thereafter notified by Facebook that, unless I opted out, henceforth all of my story postings would be given an AI-generated “headline.” Supposedly to help gain better visibility for my stories. Read “content monetizing assistance.” I replied in testy ALL CAPS. (Use your imagination.) Something along the lines of NO.MUTHA.ZUCKING.WAY…
RIP CURRENT UPDATE


Jacob is definitely worth a paid subscription.

Thursday, April 23, 2026

While we're thinking about Earth Day


14,607 days ago...
 

In late April of 1986 I was busily in the delightful pedal-to-the-metal thrall of my first-ever real day gig after finally getting my first college degree at UTK the prior June at age 39. My wife had been hired by the retiring Director of Industrial Health & Safety at ORNL (Oak Ridge National Laboratory), John Auxier, an internationally renowned PhD nuclear engineer and Certified Health Physicist (CHP; think 'radiation exposure & dose epidemiologist'). He served on the Three Mile Island Commission. Don't let the Overalls-clad Good Ol' Boy aw-shucks Kentucky Tobacco Farmer schtick fool ya.
 
Dr. Auxier had recently founded ASL (Applied Sciences Laboratory) out on Bear Creek Road in western Oak Ridge. ASL's remit was to conduct forensic trace-level environmental radioanalytics in support of consulting, site characterization, regulatory compliance, and litigation.
 
John first met my wife while she was working as the bar manager in a notorious Alcoa Highway roadhouse that ran illegal card and casino table games in a large members-only backroom (the owner, Eddie, was a colorful Vegas Junket operative). Another front bar patron, Jim, was the owner of Smoky Mt. Aero, the private aviation facility at the Knox McGhee-Tyson airport. John was an experienced pilot (going back to his military service) who kept his two aircraft and a helicopter at Smoky Mt.
 
Jim hired Cheryl away from "Fast Eddie's" to be his customer service ops manager, where she would subsequently manage John's aircraft account, along with the facility's overall aircraft rental service, repair & maintenance shop, and a flight school.
 
John smelled serious talent. After he founded ASL, he pilfered her away from Smoky Mt. Aero to be his Marketing Manager. He subsequently tossed in "QA Manager."
 
John knew of my UTK concentration in Applied Statistics ("psychometrics" focused) and asked Cheryl if I'd be willing to come out under contract and help ASL computerize the lab and begin assembling a Statistical Quality Control regimen. They had some old-school nuke lab techs still using slide rules and doing sums of squares for control limits on yellow pads. (Seriously.)
 
Done. Yeah, I'll do it. I started my largely OJT effort the first week of January.
 
I stayed 5 and a half yrs, wrote every line of bench-level and business office code, installed a Novell LAN and an SCO Xenix Oracle RDBMS running on a VAX Mini. And, "Statistical Process Control?" (pdf) Got'cha covered. See here as well (pdf). 
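For anyone who didn't live through the slide-rule era: the "sums of squares on yellow pads" bit refers to the core arithmetic of a Shewhart control chart, which is just a center line plus or minus three standard deviations. A minimal sketch of what we automated (the sample values here are invented for illustration, not actual ASL data):

```python
import statistics

def control_limits(samples):
    """Shewhart-style chart limits: center line +/- 3 standard deviations.
    Returns (lower control limit, center line, upper control limit)."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)  # the hand-cranked "sums of squares" step, automated
    return mean - 3 * sd, mean, mean + 3 * sd

# Hypothetical weekly QC measurements for one analyte:
weekly_counts = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0]
lcl, cl, ucl = control_limits(weekly_counts)
# Any future result falling outside (lcl, ucl) flags the process for review.
```

Trivial on a PC; tedious and error-prone on a yellow pad.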
 
Old washed-up guitar player in need of a career change. Go figure.
 
We eventually got bought by "International Technology Corporation." (Subsequently renamed "The IT Group.")
 
APRIL 26TH 1986
 
Among the gazillion recurrent items on my plate was managing the database & reporting on a REMP study for Perry Nuclear, a nuke plant under construction near Cleveland, OH. A "Radiological Environmental Monitoring Program," i.e., a lengthy weekly environmental baseline study via which to determine "naturally occurring" radionuclide levels across a breadth of matrices—water, soil, vegetation, fish, fowl, cow's milk, ambient air, etc.
 
The air filter samples always came back "below LLD." "Below the Lower Limit of Determination." I had a quick macro that simply entered "<0.04 pCi/cu.m" for me automatically. ("Less than 0.04 picocuries per cubic meter.")
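The original macro is long gone, but the logic was nothing fancier than censoring any result below the detection limit. A hypothetical re-creation (the function name and formatting are mine, not the actual 1986 code):

```python
def report(value_pci_per_m3, lld=0.04):
    """Mimic the old reporting macro: results below the Lower Limit
    of Determination are censored rather than reported numerically."""
    if value_pci_per_m3 < lld:
        return f"<{lld} pCi/cu.m"
    return f"{value_pci_per_m3:.2f} pCi/cu.m"

report(0.01)   # → "<0.04 pCi/cu.m"  (the usual, week after week)
report(1.234)  # → "1.23 pCi/cu.m"   (a quantifiable positive hit)
```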
 
One week after Chernobyl blew we got quantifiable elevated positive hits from every Perry air monitor station. Airborne I-131.
 

I-131 is a readily airborne beta emitter with an 8.05-day half-life, so by June the Perry air monitor results were back down below LLD. Other, heavier radionuclides with longer half-lives were tracked for years thereafter (some likely still sampled & studied).
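The back-of-the-envelope decay arithmetic checks out: activity falls off as 0.5^(t/T½), and roughly 36 days separate the April 26 release from the start of June. A quick sketch (the 36-day interval is my approximation):

```python
def fraction_remaining(days, half_life_days=8.05):
    """Fraction of the original activity left after `days` of radioactive decay."""
    return 0.5 ** (days / half_life_days)

# Chernobyl release (Apr 26) to roughly June 1 is about 36 days,
# i.e., about 4.5 half-lives for I-131:
frac = fraction_remaining(36)
print(f"{frac:.1%}")  # about 4.5% of the initial I-131 activity remains
```

At ~4.5% of an already trace-level airborne concentration, a return to "below LLD" by June is exactly what you'd expect.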
 
Earth Day.
 
 
It's always earth day.
 
UPDATE
 
Assessing radiation risk today. The regulatory politics vs the science.
When US President Trump signed an executive order in May 2025 seeking to streamline the adoption of nuclear energy by directing federal agencies to reconsider whether radiation protection standards have been unnecessarily strict, he reignited a debate that has smoldered in radiation science for decades. At the heart of the controversy is the linear nonthreshold (LNT) model—the idea that any amount of radiation, no matter how small, has damaging biological effects. Resolving whether LNT is a reasonable precaution or a costly misapplication of incomplete science will require not just better arguments, but better data.

The science underlying low-dose risk (less than 100 milligray, a measure of how much radiation is absorbed by living tissue) is incomplete. Decades of epidemiology and biology have neither confirmed LNT nor established a universally agreed threshold dose below which there is essentially no risk. Given the uncertainties, some experts advocate that doses should be as low as reasonably achievable (ALARA). There is also a hypothesis called hormesis, which holds that very low doses of radiation might be beneficial by stimulating cellular repair mechanisms. These are not purely scientific questions. How society weighs uncertain risks against economic and energy imperatives, whether in nuclear power, medical imaging, or occupational safety, involves value judgments that data alone cannot fully resolve.
 
Answering these questions will still require more research on several fronts. Here, experts do not agree on what that evidence will ultimately show. Some (including E.A.C.) hold that although LNT is not an established biological truth, it is a defensible regulatory tool—adding, however, that ALARA has been applied inconsistently, at times generating financial costs, psychological harm, and forgone societal benefits without proportionate protective benefit. Others (including B.A.U.) maintain that evidence for harm at low doses is lacking and that the real-world consequences of LNT-based regulation, from unnecessary abortions following the Chernobyl nuclear disaster in 1986 to evacuation deaths after the Fukushima nuclear accident in 2011, demonstrate that treating a contested model as settled science carries measurable human cost...
Every day is earth day...
 
Then there's the White House Facebook page.

Wednesday, April 22, 2026

EVERY Day is Earth Day


Yeah, how predictable. Drill, baby, drill. "Clean Coal." Outlaw wind turbines. "Green scams." Yadayadayada...
 
But facts are stubborn things. Facts that will outlive Donald Trump and his pillagers.
 
 
Search the word "renewables" on this blog for some prior observations. Alternatively, use "anthropocene."
 

CHRISTINA MITTERMEIER
 
"I have floated in the middle of the Pacific, far from any shore, looking down into water so blue it felt like the beginning of the universe. I have stood at the edge of the Arctic, listening to ice crack and fall into a warming sea. In these moments, I understood something that no dataset had ever been able to teach me: we are small, and the Earth is generous, and we have not yet earned that generosity.

Today the world pauses to celebrate this planet. I want to ask you to do something harder than celebrating. I want to ask you to grieve, just for a moment, for what we have already lost. The reefs we bleached. The forests we traded for cattle. The glaciers that are now water. Grief, when we allow ourselves to feel it, becomes fuel.

I grew up in Mexico, the daughter of a country where the land and the sea were never abstractions. They were food, livelihood, identity. The communities I have photographed across a hundred countries, Indigenous guardians, fishermen, divers, children who still swim in clean rivers, they have never needed Earth Day to remind them that the planet is alive. They know it, in their bones and their hands and their hunger.

The rest of us are still learning.

Here is what I have learned behind the lens: beauty is not passive. A photograph of a humpback whale does not ask you to admire her. It needs you to protect her. Every image I have ever made has been a letter: urgent, loving, written to whoever is willing to read it.

So today, I ask you to make a promise that survives until April 23rd. And April 24th. And every day after that. Protect a piece of it. Vote for it. Fund it. Teach your children its name.

The Earth does not need Earth Day. But it does need you.

Happy Earth Day. Now let's get to work."
She posted this on Facebook today. We heard her speak last week. Fabulous.
 
More shortly...

Tuesday, April 21, 2026

How long until you say "Kara" and pretty much everyone knows who you mean?

 Not very long, I would think. We can thank CNN (truly) as they skulk off into the partisan infotainment cognitive desert.
 
And, OK, I had to just riff on the hair.
 
 
OK, Jacob Collier on the left, Kara on the right. Jacob is the most stunningly brilliant musician/vocalist/composer/bandleader/music technologist & educator I have ever encountered. Kara Swisher has been a preeminent cut-to-the-chase tech journalist & clear thinker for a very long time. She is now rapidly approaching the time when she will simply, rightfully be known as “Kara.” They both truly “make the world a better place.”
 
 
Trust me...

Monday, April 20, 2026

Bohemian Grove? That is SO yesterday.

 
Very interesting Atlantic article, excerpted.
...In 2018, I was a guest at Jeff Bezos’s Campfire retreat in Santa Barbara, California. It’s an annual event in which the Amazon founder invites 80-plus guests—celebrities, artists, intellectuals, and anyone else he thinks is interesting—to spend three nights at a private resort. I had recently been approached by Amazon about moving my film-and-television business over from Disney, and although I had declined (or maybe because I had declined), Bezos’s team invited me to Campfire, perhaps keen to impress me with the power of his reach…

Each morning, we gathered in a lecture hall to hear presentations. If you’ve ever seen a TED Talk, you understand the format. The year I went, a sitting Supreme Court justice was interviewed, and a neurologist talked about technological advances in prosthetics. In the afternoons and evenings, we were encouraged to exchange ideas over drinks and four-course meals, with no set purpose—to network, in other words, with some of the most rarefied talent on Earth. The most common question I heard was “Why am I here?”

“Why am I here?” asked the 1980s hair-metal singer. “Why am I here?” asked the Pulitzer Prize–winning novelist, the famous anthropologist, the presidential historian. Only the movie stars and the billionaires didn’t ask: They had done this kind of thing before. It turns out there is a circuit of idea festivals. Many tech billionaires host one, and if you find yourself on the right list, you can spend much of the year traveling the world, eating Wagyu, and discussing how to make the world a better place with the most famous talk-show host in history...

At drinks on the second night, the head of a major talent agency asked me what I thought of the weekend. I said, “I’ve spent my whole career trying to figure out how the world works. I didn’t realize I could just come here and ask the people who ran it.” On some level I was kidding. The lead singer of an alt-country band didn’t run the world, nor did a noted author who would later be accused of impropriety. But finding myself at that resort by exclusive invitation, I now knew exactly what people meant when they talked about the elite.

Sitting in the lecture hall, pencils out, listening to a famous chef explain his humanitarian work, it was easy to feel like the solution to the world’s problems lay within our grasp. And yet, looking around at faces I had only ever seen in a magazine or on-screen, I had an unsettling revelation: This is the hubris of accomplishment. To be declared a genius at one thing is to begin to believe you are a genius at everything…

Here we were, 80 individuals with a combined net worth that was greater than a small city’s yet infinitesimal compared with the wealth and dominion of our host. How did he view this exercise—as a first step toward changing the world, or as a performative display of his reach and influence?

Bezos was everywhere that weekend—in a tight T-shirt, laughing too loudly, arms thrown around his teenage sons. He had recently become the world’s second centibillionaire, his net worth hovering somewhere around $112 billion, about half of what it is today. That number, previously unimaginable, had made him unique on a planet of 8 billion people, and you could feel it in the room. Even the richest and most famous among us were drawn to the energy of this impossible wealth...

Eight years later, Bezos and two of the world’s other richest men—Mark Zuckerberg and Elon Musk—have clearly left the world of consequences behind. They float in a sensory-deprivation tank the size of the planet, in which their actions are only ever judged by themselves.

The closer I’ve gotten to the world of wealth, the more I understand that being truly rich doesn’t mean amassing enough money to afford superyachts, private jets, or a million acres of land. It means that everything becomes effectively free. Any asset can be acquired but nothing can ever be lost, because for soon-to-be trillionaires, no level of loss could significantly change their global standing or personal power. For them, the word failure has ceased to mean anything…

This sense of invulnerability has deep psychological ramifications. If everything is free and nothing matters, then the world and other people exist only to be acted upon, if they are acknowledged at all. This is different from classic narcissism, in which a grandiose but fragile self-image can mask deep insecurity. What I’m talking about is a self-definition in which the individual grows to the size of the universe, and the universe vanishes. Asked recently if there is any check on his power, President Trump—himself a billionaire, and by far the richest president in American history—said, “Yeah, there is one thing. My own morality. My own mind. It’s the only thing that can stop me.” Not domestic or international law, not the will of the voters, not God or the centuries-old morality of civic and religious life…
Great piece. Read all of it.
 
My dear Las Vegas friend Jerry Lopez (below leading his killer band) once got to do a hang and performance at The Bohemian Grove in the SF Bay Area.
 
 
Similar kind of juiced-up heavy-hitters thing at the Bohemian, but these current-day techbiz cats like Bezos, Musk, Zuckerberg, Altman et al are orders of magnitude wealthier and more aggressive. This stuff struck me when I read the Atlantic article, given that it was published concurrently with the airing of Kara Swisher's awesome new CNN docuseries.
 
SPEAKING OF CNN

    
I'm not yet sure what to make of this. Really like Jake Ward.
 
More shortly... 

Friday, April 17, 2026

The Prof G Pod

Kara Swisher: Tech Billionaires Want to Live Forever. Should They? | Prof G Conversations 
   
 
I have long appreciated the work and insights of Dr. Scott Galloway. In particular, his collaboration with Kara Swisher on The PIVOT Podcast just totally rocks.
 
Below, on Scott's podcast they discuss her fabulous new CNN docuseries "Kara Swisher Wants To Live Forever."


NOTE: I've been watching a lot of great, relevant stuff on YouTube lately, which cuts into my always-overflowing book & periodicals reading time. A tip that works for me:
 
 
On the YouTube player screen, locate and click the little "gear" icon. I now frequently reset my viewing velocity to 1.25x. The vocals retain the same timbre, and the speed bump saves a bit of time while being only minimally distracting here and there. You just have to focus your aural attention a tad more.
I also sometimes push the audio to 2x when mocking Donald Trump.
OTHER STUFF TODAY 

Jake Ward has this author up on his podcast.
Journalist Katrina Manson spent years inside the classified and not-so-classified world of U.S. military AI — interviewing the colonels, the defense tech founders, and the ethicists watching it all unfold. Her book, Project Maven, is the definitive account of how Silicon Valley's "ship it and fix it later" culture collided with the business of war. We talk about Palantir, Claude, the Jevons Paradox applied to killing, and why the people building these systems openly admit they're worried about what they've built. A conversation you won't hear anywhere else.
A riveting discussion. Sobering.
 
BUSY NEWS WEEK, 'EH?
 

More in a bit...

Thursday, April 16, 2026