Are AI LLMs approaching true "sentience"?
The Amazon blurb:
An insider look at the Large Language Models (LLMs) that are revolutionizing our relationship to technology, exploring their surprising history, what they can and should do for us today, and where they will go in the future—from an AI pioneer and neuroscientist
In this accessible, up-to-date, and authoritative examination of the world’s most radical technology, neuroscientist and AI researcher Christopher Summerfield explores what it really takes to build a brain from scratch. We have entered a world in which disarmingly human-like chatbots, such as ChatGPT, Claude and Bard, appear to be able to talk and reason like us - and are beginning to transform everything we do. But can AI ‘think’, ‘know’ and ‘understand’? What are its values? Whose biases is it perpetuating? Can it lie and if so, could we tell? Does their arrival threaten our very existence?
These Strange New Minds charts the evolution of intelligent talking machines and provides us with the tools to understand how they work and how we can use them. Ultimately, armed with an understanding of AI’s mysterious inner workings, we can begin to grapple with the existential question of our age: have we written ourselves out of history or is a technological utopia ahead?
SCIENCE MAGAZINE REVIEW
In These Strange New Minds, cognitive neuroscientist and artificial intelligence (AI) safety specialist Christopher Summerfield presents a wide-ranging overview of AI for nonspecialists, focusing on what the technology really is, what it might do, and whether it should be feared. We no longer live in “a world where humans alone generate knowledge,” writes Summerfield. Machines possessing this potential will soon occupy custodial positions in society, he maintains (1). His book takes on six broad questions: How did we get here? What is a language model? Do language models think? What should a language model say? What could a language model do? And, are we all doomed?
Summerfield is a philosophical empiricist who argues that “the meaning of language depends on its evidentiary basis.” He is also a functionalist who believes that “it is perfectly possible for the same computational principle to be implemented in radically different physical substrates” and a materialist who sees the mind’s activity as identical to “neural computation.” But does he believe that AI machines think like humans do, or just that they appear to?...
...In the book’s final section, Summerfield turns to whether the technology will doom or deliver humankind. Here, he begins by discussing computer scientist Rich Sutton’s assertion that humankind should already be planning for the inevitable and great “succession” as AI machines “take over.” Neither AI successionists nor its antagonists have much to offer compared with those “whose core members are rooted in the AI safety community, [who] believe that there is an urgent need for AI to be tightly regulated precisely because it is so potent a tool,” argues Summerfield.
Existential risk groups have alternatively called for AI to be widely and publicly paused or for large government and private investments to design AI monitoring and countermeasures. So far, little headway has been made in either direction, but Summerfield’s book offers nonspecialists a good introduction to the issues and some hope that sound efforts in AI safety may see the light of day.
Just getting started.
I'd like to get Shannon Vallor's take on this book.
DR. SUMMERFIELD
MORE:
Whether or not we are on a pathway to building AI systems that figure out the deepest mysteries of the universe, these more mundane forms of assistance are round the corner. It also seems likely that the main medium by which most people currently seek information – an internet search engine – will soon seem as quaint as the floppy disk or the fax machine. ChatGPT is already integrated into the search engine Bing, and it surely won’t be long before Google and others follow suit, augmenting page search with conversational skills. As these changes occur, they will directly touch the lives of everyone on the planet with internet access – more than five billion people and counting – and are sure to upend the global economy in ways that nobody can quite predict. And this is all going to happen soon – on a timeframe of months or years, not decades. It’s going to happen to you and me.
The new world I’ve described might sound like quite a blast. Imagine having access to AI systems that act as a sort of personal assistant – at your digital beck and call – much more cheaply than the human equivalent, a luxury that today only CEOs and film stars can afford. We would all like an AI to handle the boring bits of life – helping us schedule meetings, switch utility provider, submit our tax returns on time. But there are serious uncertainties ahead. By allowing AI systems to become the ultimate repositories for human knowledge, we devolve to them stewardship of what is true or false, and what is right or wrong. What role will humans still play in a world where AI systems generate and share most knowledge on our behalf?
Of course, ever since humans began to exchange ideas, they have found ways to weaponize dissemination – from the first acts of deception or slander among the pre-industrial hunter-gatherer crew to the online slough of misinformation, toxicity, and polemic that the internet has become today. If they are not properly trained, machines with language risk greatly amplifying these harms, and adding new ones to boot. The perils of a world in which AI has authority over human knowledge may exceed the promise of unbounded information access. How do we know when an LLM is telling the truth? How can we be sure that they will not perpetuate the subtle biases with which much of our language is inflected, to the detriment of those who are already least powerful in society? What if they are used as a tool for persuasion, to shepherd large groups of people towards discriminatory or dangerous views? And when people disagree, whose values should LLMs represent? What happens if large volumes of AI-generated content – news, commentary, fiction, and images – come to dominate the infosphere? How will we know who said what, or what actually happened? Are we on the brink of writing ourselves out of history?
Summerfield, Christopher. These Strange New Minds: How AI Learned to Talk and What It Means (pp. 7-8). (Function). Kindle Edition.
BLASTS FROM MY BLOG PAST
I searched back in the blog for a look at what I'd posted a decade or so ago on "Artificial Intelligence."
Fairly quaint.
Stay tuned...
_________