
Sunday, August 10, 2025

AI: the Possible vs the Probable

Tristan Harris cuts to the chase

 
"Wisdom Traditions?" "Philosophy?" Define "philosophy.'"
 
So, I punted to Google's new native "AI" jus' fer grins. BTW, some prior riffs on AI.
 
 
I was pleased by that. "Knowledge" and "Wisdom" differ. The former is necessary but insufficient for the latter. Given that my 1998 grad degree is in "Ethics & Policy Studies," I know just a thing or two about the core elements of "applied philosophy."
 
Another material facet of all of this.
 
Click here.
When Jensen Huang, the chief executive of the chipmaker Nvidia, met with Donald Trump in the White House last week, he had reason to be cheerful. Most of Nvidia’s chips, which are widely used to train generative artificial-intelligence models, are manufactured in Asia. Earlier this year, it pledged to increase production in the United States, and on Wednesday Trump announced that chip companies that promise to build products in the United States would be exempt from some hefty new tariffs on semiconductors that his Administration is preparing to impose. The next day, Nvidia’s stock hit a new all-time high, and its market capitalization reached $4.4 trillion, making it the world’s most valuable company, ahead of Microsoft, which is also heavily involved in A.I.

Welcome to the A.I. boom, or should I say the A.I. bubble? It has been more than a quarter of a century since the bursting of the great dot-com bubble, during which hundreds of unprofitable internet startups issued stock on the Nasdaq, and the share prices of many tech companies rose into the stratosphere. In March and April of 2000, tech stocks plummeted; subsequently many, but by no means all, of the internet startups went out of business. There has been some discussion on Wall Street in the past few months about whether the current surge in tech is following a similar trajectory. In a research paper entitled “25 Years On; Lessons from the Bursting of the Technology Bubble,” which was published in March, a team of investment analysts from Goldman Sachs argued that it wasn’t: “While enthusiasm for technology stocks has risen sharply in recent years, this has not represented a bubble because the price appreciation has been justified by strong profit fundamentals.” The analysts pointed to the earnings power of the so-called Magnificent Seven companies: Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla. Between the first quarter of 2022 and the first quarter of this year, Nvidia’s revenues quintupled, and its after-tax profits rose more than tenfold.

The Goldman paper also provided a salutary history lesson...

MORE ON AGI CONCERNS
 

There are now dozens of these critical AGI videos on YouTube alone.
 
Briefly back to Econ stuff (pertaining just to OpenAI):
 
OpenAI astounded the tech industry for the second time this week by launching its newest flagship model, GPT-5, just days after releasing two new freely available models under an open source license.

OpenAI CEO Sam Altman went so far as to call GPT-5 “the best model in the world.” That may be pride or hyperbole, as TechCrunch’s Maxwell Zeff reports that GPT-5 only slightly outperforms other leading AI models from Anthropic, Google DeepMind, and xAI on some key benchmarks, and slightly lags on others.

Still, it’s a model that performs well for a wide variety of uses, particularly coding. And, as Altman pointed out, one area where it is undoubtedly competing well is price. “Very happy with the pricing we are able to deliver!” he tweeted.

The top-level GPT-5 API costs $1.25 per 1 million tokens of input, and $10 per 1 million tokens for output (plus $0.125 per 1 million tokens for cached input). This pricing mirrors Google’s Gemini 2.5 Pro basic subscription, which is also popular for coding-related tasks. Google, however, charges more once prompts cross a heavy threshold of 200,000 tokens, meaning its most consumption-heavy customers end up paying more…
"Tokens?"
 

 Lordy. Wafts of the Crypto bamboozlement ensue.
 
UPDATE
Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.” The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training? ...
From The New Yorker, by Cal Newport. Interesting piece. GPT-5 is getting a lot of pushback.
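For the record, the "scaling laws" in that Kaplan et al. report are empirical power laws: test loss falls predictably as you grow the model, the dataset, or the training compute. A rough sketch of their form, with the paper's approximate fitted exponents (treat the numbers as ballpark, from my reading, not gospel):

```latex
% Approximate form of the Kaplan et al. (2020) scaling laws:
% loss L as a power law in parameters N, data D, and compute C.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.050
```

The seductive implication: no new ideas required — just scale N, D, and C and the loss curve keeps bending down. Whether that curve has started flattening is arguably what the GPT-5 pushback is about.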
 
MORE CONSIDERATIONS
 
Chapter 1 
The Artificial Intelligence of the Ethics of Artificial Intelligence  
An Introductory Overview for Law and Regulation  

Joanna J. Bryson 

For many decades, artificial intelligence (AI) has been a schizophrenic field pursuing two different goals: an improved understanding of computer science through the use of the psychological sciences; and an improved understanding of the psychological sciences through the use of computer science. Although apparently orthogonal, these goals have been seen as complementary since progress on one often informs or even advances the other. Indeed, we have found two factors that have proven to unify the two pursuits. First, the costs of computation and indeed what is actually computable are facts of nature that constrain both natural and artificial intelligence. Second, given the constraints of computability and the costs of computation, greater intelligence relies on the reuse of prior computation. Therefore, to the extent that both natural and artificial intelligence are able to reuse the findings of prior computation, both pursuits can be advanced at once.

Neither of the dual pursuits of AI entirely readied researchers for the now glaringly evident ethical importance of the field. Intelligence is a key component of nearly every human social endeavor, and our social endeavors constitute most activities for which we have explicit, conscious awareness. Social endeavors are also the purview of law and, more generally, of politics and diplomacy. In short, everything humans deliberately do has been altered by the digital revolution, as well as much of what we do unthinkingly. Often this alteration is in terms of how we can do what we do—for example, how we check the spelling of a document; book travel; recall when we last contacted a particular employee, client, or politician; plan our budgets; influence voters from other countries; decide what movie to watch; earn money from performing artistically; discover sexual or life partners; and so on. But what makes the impact ubiquitous is that everything we have done, or chosen not to do, is at least in theory knowable. This awareness fundamentally alters our society because it alters not only how we can act directly, but also how and how well we can know and regulate ourselves and each other. 

A great deal has been written about AI ethics recently. But unfortunately many of these discussions have not focused either on the science of what is computable or on the social science of how ready access to more information and more (but mechanical) computational power has altered human lives and behavior. Rather, a great deal of these studies focus on AI as a thought experiment or “intuition pump” through which we can better understand the human condition or the nature of ethical obligation. In this Handbook, the focus is on the law—the day-to-day means by which we regulate our societies and defend our liberties …


Dubber, Markus D.; Pasquale, Frank; Das, Sunit, eds. (2020). The Oxford Handbook of Ethics of AI (Oxford Handbooks Series). Kindle Edition.
Just delving into this. Pretty interesting, right off. 
 
TOBY ORD INTERVIEW
 

 I've cited Toby Ord before.
 
ETHICS OF AI, ANOTHER CITE
Every task we apply our conscious minds to—and a great deal of what we do implicitly—we do using our intelligence. Artificial intelligence therefore can affect everything we are aware of doing and a great deal we have always done without intent. As mentioned earlier, even fairly trivial and ubiquitous AI has recently demonstrated that human language contains our implicit biases, and further that those biases in many cases reflect our lived realities. In reusing and reframing our previous computation, AI allows us to see truths we had not previously known about ourselves, including how we transmit stereotypes, but it does not automatically or magically improve us without effort. Caliskan, Bryson, and Narayanan discuss the outcome of the famous study showing that, given otherwise-identical resumes, individuals with stereotypically African American names were half as likely to be invited to a job interview as individuals with European American names. Smart corporations are now using carefully programmed AI to avoid implicit biases at the early stages of human resources processes so they can select diverse CVs into a short list. This demonstrates that AI can—with explicit care and intention—be used to avoid perpetuating the mistakes of the past. 

The idea of having “autonomous” AI systems “value-aligned” is therefore likely to be misguided. While it is certainly necessary to acknowledge and understand the extent to which implicit values and expectations must be embedded in any artifact, designing for such embedding is not sufficient to create a system that is autonomously moral. Indeed, if a system cannot be made accountable, it may also not in itself be held as a moral agent. The issue should not be embedding our intended (or asserted) values in our machines, but rather ensuring that our machines allow firstly the expression of the mutable intentions of their human operators, and secondly transparency for the accountability of those intentions, in order to ensure or at least govern the operators’ morality. 

Only through correctly expressing our intentions should AI incidentally telegraph our values. Individual liberty, including freedom of opinion and thought, are absolutely critical not only to human well-being but also to a robust and creative society. Allowing values to be enforced by the enfolding curtains of interconnected technology invites gross excesses by powerful actors against those they consider vulnerable, a threat, or just unimportant. Even supposing a power that is demonstrably benign, allowing it the mechanisms for technological autocracy creates a niche that may facilitate a less-benign power—whether through a change of hands, corruption of the original power, or corruption of the systems communicating its will. Finally, who or what is a powerful actor is also altered by ICT, where clandestine networks can assemble—or be assembled—out of small numbers of anonymous individuals acting in a well-coordinated way, even across borders.

Theoretical biology tells us that where there is greater communication, there is a higher probability of cooperation. Cooperation has nearly entirely positive connotations, but it is in many senses almost neutral—nearly all human endeavors involve cooperation, and while these generally benefit many humans, some are destructive to many others. Further, the essence of cooperation is moving some portion of autonomy from the individual to a group. The extent of autonomy an entity has is the extent to which it determines its own actions. Individual and group autonomy must to some extent trade off, though there are means of organizing groups that offer more or less liberty for their constituent parts.
[Dubber et al., Ch. 1]
A lot to consider in this book.
