Search the KHIT Blog

Monday, August 25, 2025

Nobel Peace Prize?

 
APROPOS OF WAR & PEACE
 

60 Minutes just re-ran this segment. Palmer Luckey? Anduril Industries?

Personal views

In September 2016, Luckey stated he is a libertarian who had supported Ron Paul and Gary Johnson in past elections. Since then, he has become a prominent fundraiser for the Republican Party and Donald Trump.

In a 2024 interview, Luckey described himself as a "radical Zionist"

Fundraising for Donald Trump

In September 2016, Luckey donated $10,000 to an organization called "Nimble America" with the stated purpose of "educating the community on our ideals of America First, Smart Trade, Legal Immigration, and Ethical Behavior." Luckey offered to match further contributions from r/The_Donald users for 48 hours after the announcement. Luckey later issued an apology, stating on his Facebook page, "I am deeply sorry that my actions are negatively impacting the perception of Oculus and its partners." He stated that he acted independently, not as a representative of Oculus VR. The Wall Street Journal later reported that Luckey had been pressured into making this statement as a condition of employment.

In October 2020, Luckey hosted a fundraiser for Donald Trump at his home in Lido Isle, Newport Beach, with the president in attendance. The fundraiser had tickets ranging from $2,800 per person to $150,000 per couple, and there were gatherings both for and against President Trump in Newport Beach outside during the event.

On June 8, 2024, Luckey co-hosted another fundraiser for Trump at the home of health insurance company co-founder John Word, where donors spent up to $100,000 per person to attend.

Interesting dude. Merits a bit of drill-down. Having just an itch of Sam Bankman-Fried'ish dubiety. The "Nimble America" thingy turns up an immediate dry hole.

"educating the community on our ideals of America First, Smart Trade, Legal Immigration, and Ethical Behavior."

The website is inoperative (a "502 error"). Their Twitter/X account is following 2 and has 36 followers, after nine years. Oh, well...
 
I was hoping to get hip to their take on "Ethical Behavior." Let me guess. Ayn Randian-istas?

Thursday, August 21, 2025

On deck...

Returning from some medical tests at Kaiser today I heard this on NPR/WYPR (MP3 interview here). Way cool. Bought her book (of course). Will be buying my wife her own copy.

 
 
This book rocks. It is scientifically and historically broad and deep, while wickedly funny in repeated measure. 611 pages with copious chapter end-notes. A glorious read.
 
UPDATE
 
Another book find (via @SciAm). Comes out next week.
 
 

When we talk about carbon dioxide, the narrative is almost always that of a modern-day morality play. We hear about gigatons of CO2 emitted, about rising global temperatures and about the dire, unheeded warnings of climate scientists. In these tales, CO2 often seems less like a mute, inert molecule and more like an evil supervillain—a malevolent force that has been plotting for centuries to wreak havoc on our planet and ruin our lives.

But according to science journalist Peter Brannen, that dismal view is far too narrow. In his first book, The Ends of the World, Brannen chronicled Earth’s five major mass extinctions, charting the deep history of our planet’s greatest catastrophes. For his second, The Story of CO2 Is the Story of Everything (Ecco, 2025), he has higher ambitions, taking readers on dizzying jaunts through deep time to reframe our understanding of what may be the most vilified and misunderstood molecule on Earth.

Inspired and informed by conversations with leading planetary scientists, Brannen’s central argument is that CO2 is not merely an industrial pollutant but a key player in the four-billion-year-old drama of life on Earth. It is the molecule that built our planet, forming the global carbon cycle that has regulated climate, shaped geology and powered evolution for eons. He shows how the ebb and flow of atmospheric CO2 across Earth’s vast history has played a role in, yes, practically everything under the sun—from the primordial origins of life to the development of human civilization and our global economic system. From the ancient past to the present day, Brannen makes the case that to understand CO2 is to understand the very fabric of our world…

Wednesday, August 20, 2025

Sunday, August 17, 2025

Oval Office Monday after Alaska


Prior posts of mine on Russia's aggression against Ukraine.
 

Yeah, the internets are alive and well with exasperated multimedia snark.
 CODA 

Thursday, August 14, 2025

August 15th, 2025 Summit in Alaska

 
POST-SUMMIT UPDATE
  
Hannity: "You said before the interview that in two minutes you would know...What vibe did you get in two minutes?"

Trump: "You know, I always had a great relationship with President Putin. And we would have done great things together in terms of, you know. Their land is incredible. The rare earth, the oil gas — it's incredible. It is the largest piece of land in the world as a nation by far. I think they have 11 timezones if you can believe it, that's big stuff. But we would have done a lot of great things we had the Russia, Russia, Russia hoax which stopped us from doing that. We would have done so great. But we have the greatest — one of the great hoaxes. I mean, there were others like the election itself and as you know, as you covered better than anyone. But it was a rigged election and a horrible thing that took place in 2020. But we would have had a great relationship but we did amazingly well considering — you know, he would look and see what happened, he would think we're crazy with the made up Russia, Russia, Russia hoax. So we had something very important and we had a very good meeting today, but we'll see. I mean it's, you have to get a deal."
Is it too early to start drinking? Hmmm... a prior post.
 
UPDATES
 

 
 
ERRATUM
Reacting to a report that someone in Donald Trump’s entourage who went with him to Alaska left sensitive State Department documents on a public hotel printer, critics of the president pounced on yet another security breach since he took office.

According to a report from NPR on Saturday, three guests staying at the Hotel Captain Cook, located 20 minutes away from Joint Base Elmendorf-Richardson, where the Russian and American presidents were meeting, discovered the documents, which included names, meeting times, room locations, phone numbers and other details.

That led Jon Michaels, a professor of law at UCLA who lectures about national security, to explain to NPR, "It strikes me as further evidence of the sloppiness and the incompetence of the administration. You just don't leave things in printers. It's that simple." …
Lordy...

Sunday, August 10, 2025

AI: the Possible vs the Probable

Tristan Harris cuts to the chase

 
"Wisdom Traditions?" "Philosophy?" Define "philosophy.'"
 
So, I punted to Google's new native "AI" jus' fer grins. BTW, some prior riffs on AI.
 
 
I was pleased by that. "Knowledge" and "Wisdom" differ. The former is necessary but insufficient for the latter. Given that my 1998 grad degree is in "Ethics & Policy Studies," I know just a thing or two about the core elements of "applied philosophy."
 
Another material facet of all of this.
 
Click here.
When Jensen Huang, the chief executive of the chipmaker Nvidia, met with Donald Trump in the White House last week, he had reason to be cheerful. Most of Nvidia’s chips, which are widely used to train generative artificial-intelligence models, are manufactured in Asia. Earlier this year, it pledged to increase production in the United States, and on Wednesday Trump announced that chip companies that promise to build products in the United States would be exempt from some hefty new tariffs on semiconductors that his Administration is preparing to impose. The next day, Nvidia’s stock hit a new all-time high, and its market capitalization reached $4.4 trillion, making it the world’s most valuable company, ahead of Microsoft, which is also heavily involved in A.I.

Welcome to the A.I. boom, or should I say the A.I. bubble? It has been more than a quarter of a century since the bursting of the great dot-com bubble, during which hundreds of unprofitable internet startups issued stock on the Nasdaq, and the share prices of many tech companies rose into the stratosphere. In March and April of 2000, tech stocks plummeted; subsequently many, but by no means all, of the internet startups went out of business. There has been some discussion on Wall Street in the past few months about whether the current surge in tech is following a similar trajectory. In a research paper entitled “25 Years On; Lessons from the Bursting of the Technology Bubble,” which was published in March, a team of investment analysts from Goldman Sachs argued that it wasn’t: “While enthusiasm for technology stocks has risen sharply in recent years, this has not represented a bubble because the price appreciation has been justified by strong profit fundamentals.” The analysts pointed to the earnings power of the so-called Magnificent Seven companies: Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla. Between the first quarter of 2022 and the first quarter of this year, Nvidia’s revenues quintupled, and its after-tax profits rose more than tenfold.

The Goldman paper also provided a salutary history lesson...

MORE ON AGI CONCERNS
 

There are now dozens of these critical AGI videos on YouTube alone.
 
Briefly back to Econ stuff (pertaining to just OpenAI):
 
OpenAI astounded the tech industry for the second time this week by launching its newest flagship model, GPT-5, just days after releasing two new freely available models under an open source license.

OpenAI CEO Sam Altman went so far as to call GPT-5 “the best model in the world.” That may be pride or hyperbole, as TechCrunch’s Maxwell Zeff reports that GPT-5 only slightly outperforms other leading AI models from Anthropic, Google DeepMind, and xAI on some key benchmarks, and slightly lags on others.

Still, it’s a model that performs well for a wide variety of uses, particularly coding. And, as Altman pointed out, one area where it is undoubtedly competing well is price. “Very happy with the pricing we are able to deliver!” he tweeted.

The top-level GPT-5 API costs $1.25 per 1 million tokens of input, and $10 per 1 million tokens for output (plus $0.125 per 1 million tokens for cached input). This pricing mirrors Google’s Gemini 2.5 Pro basic subscription, which is also popular for coding-related tasks. Google, however, charges more if inputs/outputs cross a heavy threshold of 200,000 prompts, meaning its most consumption-heavy customers end up paying more…
"Tokens?"
 

 Lordy. Wafts of the Crypto bamboozlement ensue.
 
UPDATE
Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.” The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training? ...
From The New Yorker, by Cal Newport. Interesting piece. GPT-5 is getting a lot of pushback.
 
MORE CONSIDERATIONS
 
Chapter 1 
The Artificial Intelligence of the Ethics of Artificial Intelligence  
An Introductory Overview for Law and Regulation  

Joanna J. Bryson 

For many decades, artificial intelligence (AI) has been a schizophrenic field pursuing two different goals: an improved understanding of computer science through the use of the psychological sciences; and an improved understanding of the psychological sciences through the use of computer science. Although apparently orthogonal, these goals have been seen as complementary since progress on one often informs or even advances the other. Indeed, we have found two factors that have proven to unify the two pursuits. First, the costs of computation and indeed what is actually computable are facts of nature that constrain both natural and artificial intelligence. Second, given the constraints of computability and the costs of computation, greater intelligence relies on the reuse of prior computation. Therefore, to the extent that both natural and artificial intelligence are able to reuse the findings of prior computation, both pursuits can be advanced at once.

Neither of the dual pursuits of AI entirely readied researchers for the now glaringly evident ethical importance of the field. Intelligence is a key component of nearly every human social endeavor, and our social endeavors constitute most activities for which we have explicit, conscious awareness. Social endeavors are also the purview of law and, more generally, of politics and diplomacy. In short, everything humans deliberately do has been altered by the digital revolution, as well as much of what we do unthinkingly. Often this alteration is in terms of how we can do what we do—for example, how we check the spelling of a document; book travel; recall when we last contacted a particular employee, client, or politician; plan our budgets; influence voters from other countries; decide what movie to watch; earn money from performing artistically; discover sexual or life partners; and so on. But what makes the impact ubiquitous is that everything we have done, or chosen not to do, is at least in theory knowable. This awareness fundamentally alters our society because it alters not only how we can act directly, but also how and how well we can know and regulate ourselves and each other. 

A great deal has been written about AI ethics recently. But unfortunately many of these discussions have not focused either on the science of what is computable or on the social science of how ready access to more information and more (but mechanical) computational power has altered human lives and behavior. Rather, a great deal of these studies focus on AI as a thought experiment or “intuition pump” through which we can better understand the human condition or the nature of ethical obligation. In this Handbook, the focus is on the law—the day-to-day means by which we regulate our societies and defend our liberties …


Dubber, Markus D.; Pasquale, Frank; Das, Sunit, eds. (2020). The Oxford Handbook of Ethics of AI (Oxford Handbooks Series). Kindle Edition.
Just delving into this. Pretty interesting, right off. 
 
TOBY ORD INTERVIEW
 

 I've cited Toby Ord before.
 
ETHICS OF AI, ANOTHER CITE
Every task we apply our conscious minds to—and a great deal of what we do implicitly—we do using our intelligence. Artificial intelligence therefore can affect everything we are aware of doing and a great deal we have always done without intent. As mentioned earlier, even fairly trivial and ubiquitous AI has recently demonstrated that human language contains our implicit biases, and further that those biases in many cases reflect our lived realities. In reusing and reframing our previous computation, AI allows us to see truths we had not previously known about ourselves, including how we transmit stereotypes, but it does not automatically or magically improve us without effort. Caliskan, Bryson, and Narayanan discuss the outcome of the famous study showing that, given otherwise-identical resumes, individuals with stereotypically African American names were half as likely to be invited to a job interview as individuals with European American names. Smart corporations are now using carefully programmed AI to avoid implicit biases at the early stages of human resources processes so they can select diverse CVs into a short list. This demonstrates that AI can—with explicit care and intention—be used to avoid perpetuating the mistakes of the past. 

The idea of having “autonomous” AI systems “value-aligned” is therefore likely to be misguided. While it is certainly necessary to acknowledge and understand the extent to which implicit values and expectations must be embedded in any artifact, designing for such embedding is not sufficient to create a system that is autonomously moral. Indeed, if a system cannot be made accountable, it may also not in itself be held as a moral agent. The issue should not be embedding our intended (or asserted) values in our machines, but rather ensuring that our machines allow firstly the expression of the mutable intentions of their human operators, and secondly transparency for the accountability of those intentions, in order to ensure or at least govern the operators’ morality. 

Only through correctly expressing our intentions should AI incidentally telegraph our values. Individual liberty, including freedom of opinion and thought, are absolutely critical not only to human well-being but also to a robust and creative society. Allowing values to be enforced by the enfolding curtains of interconnected technology invites gross excesses by powerful actors against those they consider vulnerable, a threat, or just unimportant. Even supposing a power that is demonstrably benign, allowing it the mechanisms for technological autocracy creates a niche that may facilitate a less-benign power—whether through a change of hands, corruption of the original power, or corruption of the systems communicating its will. Finally, who or what is a powerful actor is also altered by ICT, where clandestine networks can assemble—or be assembled—out of small numbers of anonymous individuals acting in a well-coordinated way, even across borders.

Theoretical biology tells us that where there is greater communication, there is a higher probability of cooperation. Cooperation has nearly entirely positive connotations, but it is in many senses almost neutral—nearly all human endeavors involve cooperation, and while these generally benefit many humans, some are destructive to many others. Further, the essence of cooperation is moving some portion of autonomy from the individual to a group. The extent of autonomy an entity has is the extent to which it determines its own actions. Individual and group autonomy must to some extent trade off, though there are means of organizing groups that offer more or less liberty for their constituent parts.
[Dubber, et al, Ch 1.]  
A lot to consider in this book.

Friday, August 8, 2025

Of late humanity seems to be spinning chaotically out of civil and moral control.

So, perhaps I should catch a breath or two and diverge—however briefly—to speak of my Carlos in the wake of our Ranger's recent gut-wrenching demise.
 
 
The day before embarking on our long-planned trip to Portugal, we were to take the dogs for boarding. Instead, we had to have Ranger euthanized. He'd been failing for many months, and that Saturday morning he simply could not get up. We had to carry him up the basement family room stairs and into the car. It was horrible.
 
Some backstory. I got Carlos in August 2011 at the Las Vegas city animal shelter. He was then a young Terrier mix who'd been given up, and appears to have perhaps been abused. When you'd reach out to pet him, he would cringe as if anticipating being hit. I had two larger rescue dogs at the time: Lucie, a Chow/Husky mix, and Jaco, a black Lab mix. Both strays that had wandered into our lives.
 
The three dogs got on just fine. They had to have sensed that they were safe and loved with us. Cheryl and I have been stray magnets for a half century.
 
My mother was in her 3rd year of nursing home care. She loved our dogs, but Lucie and Jaco were too rowdy for nursing home visits. 
 
Carlos, on the other hand, was perfectly behaved when we visited. All the patients in Ma's wing loved little Carlos.
 
Sadly, my Ma died several months later. 
 
Carlos is now pushing 16. Lucie and Jaco are gone, and now, recently, so is Ranger.
 
Lucie
Jaco and Carlos
Ranger and Carlos
And then there's our 15-year-old cat, Miss Lizzie. She who rules.
 

Lizzie came to me in May 2010 from an abandoned litter of six found in a dumpster off downtown Las Vegas.
 
Carlos is now permitted the run of the house, and he loves to ride in the car. He now gets a lot more attention.

Thursday, August 7, 2025

Donald Trump's planned $200,000,000 White House Ballroom

Individual dinner guest reservations from $100,000 per seat.
 
UPDATE 
 

Sunday, August 3, 2025

AI update, revisited on "60 Minutes"

 
Recall our recent "Artificial Superintelligence" post? Is AGI now passé, not too many months later? Updates on deck.
 
 
Again, no substantive discussion jumps out questioning how much of the myriad Ben & Jerry'd flavors of "AI" are really just the relatively pedestrian "IA" technologies (Intelligence Augmentation).
 
UPDATE: 60 MINUTES OVERTIME
 


NEWS ITEM: FORTUNE
Sam Altman is wrong that the AI fraud crisis is coming—it’s already here
BY HAYWOOD TALCOVE

Sam Altman recently warned that AI-powered fraud is coming “very soon,” and it will break the systems we rely on to verify identity.

It is already happening and it’s not just coming for banks; it’s hitting every part of our government right now.

Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programs. Criminal networks are already using deepfakes, synthetic identities, and large language models to outpace outdated fraud defenses, including easily spoofed, single-layer tools like facial recognition, and they’re winning …

... As I testified before the U.S. House of Representatives twice this year, what we’re seeing in the field is clear. Fraud is faster, cheaper, and more scalable than ever before. Organized crime groups, both domestic and transnational, are using generative AI to mimic identities, generate synthetic documentation, and flood our systems with fraudulent claims. They’re not just stealing from the government; they’re stealing from the American people.

The Small Business Administration Inspector General now estimates that nearly $200 billion was stolen from pandemic-era unemployment insurance programs, making it one of the largest fraud losses in U.S. history. Medicaid, IRS, TANF, CHIP, and disaster relief programs face similar vulnerabilities. We have also seen this firsthand in our work alongside the U.S. Secret Service protecting the USDA SNAP program, which has become a buffet for fraudsters with billions stolen nationwide every month. In fact, in a single day using AI, one fraud ring can file tens of thousands of fake claims across multiple states, most of which will be processed automatically unless flagged.

We’ve reached a turning point. As AI continues to evolve, the scale and sophistication of these attacks will increase rapidly. Just as Moore’s Law predicted that computing power would double every two years, we’re now living through a new kind of exponential growth. Gordon Moore, Intel’s co-founder, originally described the trend in 1965, and it has guided decades of innovation. I believe we may soon recognize a similar principle for AI that I call “Altman’s Law”: every 180 days, AI capabilities double.

If we don’t modernize our defenses with the same pace as technological advancements, we’ll be permanently outmatched…

AI is a force multiplier, but it can be weaponized more easily than it can be wielded for protection. Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.
Read all of it.
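Worth noting what the quoted "Altman's Law" (capabilities doubling every 180 days) actually implies as compound growth. The doubling period is the only figure taken from the article; the "capability multiple" here is just a hypothetical quantity to make the exponential arithmetic concrete.

```python
# Compound-growth sketch of the article's claimed "Altman's Law":
# a doubling every 180 days. Only the doubling period comes from
# the quoted piece; the multiples are illustrative arithmetic.

DOUBLING_DAYS = 180

def capability_multiple(days):
    """Growth factor after `days`, given one doubling per 180 days."""
    return 2 ** (days / DOUBLING_DAYS)

for years in (1, 2, 5):
    print(f"{years} yr: ~{capability_multiple(365 * years):.0f}x")
```

That is roughly 4x per year and over a thousandfold in five years, which is the point of the article's warning: defenses that improve linearly get lapped almost immediately.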

What could possibly go wrong? We have an administration ready to extend an unknown number of probably billions of dollars to an AI industry that will pretty much be substantially “self-regulated.” And the same administration wants to unleash unfettered cryptocurrency ultimately backed by the U.S. Treasury. Need I repeat?

What could POSSIBLY go wrong?
 
NY TIMES ARTICLE
 
In a scene in HBO’s “Silicon Valley” in 2014, a character who had just sold his idea to a fictional tech company that was a thinly veiled analogue to Google encountered some of his new colleagues day drinking on the roof in folding lawn chairs. They were, they said tipsily, essentially being paid to do nothing while earning out — or “vesting” — their stock grants.

“Rest and vest,” the techies said, in between sips of beer.

The tongue-in-cheek sendup wasn’t far from Silicon Valley’s reality. At the time, young engineers at Facebook, Apple, Netflix and Google made the most of what was known as the “Web 2.0” era. Much of their work was building the consumer internet — things like streaming music services and photo-sharing sites. It was a time of mobile apps and Mark Zuckerberg, Facebook’s founder, wanting to give everyone a Facebook email address.

It was also the antithesis of corporate America’s stuffy culture. Engineers held morning meetings sitting in rainbow-colored beanbags, took lunch gratis at the corporate sushi bar and unwound in the afternoon with craft brews from the office keg (nitrogen chilled, natch). And if they got sweaty after a heated office table-tennis tournament, no matter — dry cleaning service was free.

That Silicon Valley is now mostly ancient history. Today, the tech has become harder, the perks are fewer and the mood has turned more serious. The nation’s tech capital has shifted into its artificial intelligence age — some call it the “hard tech” era — and the signs are everywhere.

In office conference rooms, hacker houses, third-wave coffee houses or over Zoom meetings, knowledge of terms like neural network, large language model and graphical processing unit has become mandatory. Stacked up against ChatGPT’s ability to instantly transform any image into a Studio Ghibli cartoon, Instagram’s photo filters are practically Paleolithic. And the chatter is about not how you built your app with the HTML5 coding language, but how many H100 graphics cards — the highly coveted hardware for running A.I. programs — you can get your hands on.

The tech epicenter has moved from the traditional cradle of Silicon Valley — the towns of San Jose, Mountain View, Menlo Park and Palo Alto — 40 miles north to San Francisco, the home of the A.I. start-ups OpenAI and Anthropic. Tech giants like Google are no longer hiring in droves as they once did. And those with jobs at those behemoths are met by the watchful eyes of managers looking to cut dead weight rather than coddle employees.

The region, long known for its capital-L Liberal politics, is no longer a political monoculture. A contingent of venture capitalists and entrepreneurs have spurred a rightward shift, leading to the rise of the “Liberaltarian” — a term coined by two Stanford political economists to describe the tech industry’s proclivity toward trumpeting liberalism in some social issues but maintaining antigovernment posturing in regulating businesses. Alongside that change, industries that were once politically incorrect among techies — like defense and weapons development — have become a chic category for investment…
 
The NY Times piece really resonates with me. I have major Bay Area history (going back to the late 60's), including, in more recent years, the SF/Silicon Valley digital Health IT startup scenes. See here as well.
 
AI 2027 UPDATE
 

Read the AI 2027 Report here. Choose your speculation... 

Friday, August 1, 2025

Donald Trump: “He stole her from me!”

“And, trust me; I know more about theft than anyone!” —DJT
"THANK YOU FOR YOUR INATTENTION TO THIS MATTER."

Wednesday, July 30, 2025

"You white people

all think that Caitlin Clark invented basketball."
 
 
I bought Christine Brennan's new book. Posted about it on BlueSky. Within mere minutes I got hammered with angry "you white people" stuff like the post headline above.
 
"deboraho" cracked on me and then blocked me.
 

Wasn't ready for that. I tersely replied "spare me." Whatever. I reciprocated the block. No point in engaging and escalating.
 
Yeah, right, "You White People." Pleeeze...
 
July 31 Update: I just finished. It is one fine book. 
 

 
More to come...

Tuesday, July 29, 2025

The Incumbent U.S. President:

“It would be nice to have at least a ‘thank you’.”