Ran into this podcast today on YouTube. Found it quite interesting, particularly given all the fractiousness engulfing the AI technology debate. Good use of 48 minutes of your time.
Tom Griffiths' latest:
This book is about how the human mind comes to understand the world—and ultimately, perhaps, how we humans may come to understand ourselves. Many disciplines, ranging from neuroscience to anthropology, share this goal—but the approach that we adopt here is quite specific. We adopt the framework of cognitive science, which aims to create such an understanding through reverse-engineering: using the mathematical and computational tools from the engineering project of creating artificial intelligence (AI) systems to better understand the operation of human thought. AI generates a rich and hugely diverse stream of hypotheses about how the human mind might work. But cognitive science does not just take AI as a source of inspiration. What we have learned about the mathematical and computational underpinnings of human cognition can also help to build more human-like intelligence in machines.
The fields of AI and cognitive science were born together in the late 1950s, and grew up together over their first decades. From the beginning, these fields’ twin goals of engineering and reverse-engineering human intelligence were understood to be distinct, yet deeply related through the lens of computation. The rise of the digital computer and the possibility of computer programming simultaneously made it plausible to think that, at least in principle, a machine could be programmed to produce the input-output behavior of the human mind. So it was a natural step to suggest that the human mind itself could be understood as having been programmed, through some mixture of evolution, development, and maybe even its own reflection, to produce the behaviors we call “intelligent.” In these early days, AI researchers and cognitive scientists shared their biggest questions: What kind of computer was the brain, and what kind of program could the mind be? What model of computation could possibly underlie human intelligence—both its inner workings and its outwardly observable effects?
Now, almost 70 years later, these two fields have matured and (as often happens to siblings) grown apart to some extent. Cognitive science has become a thriving, occasionally hot, but still relatively small interdisciplinary field of academic study and research. AI has become a dominant societal force, intellectually, culturally, and economically. It is no exaggeration to say that we are living in the first “AI era,” in the sense that we are surrounded by genuinely useful AI technologies. We have machines that appear able to do things we used to think only humans could do—driving a car, having a conversation, or playing a game like Go or chess—yet we still have no real AI, in the sense that the founders of the field originally envisioned. We have no general-purpose machine intelligence that does everything a human being can or thinks about everything a human can, and it’s not even close. The AI technologies we have today are built by large, dedicated teams of human engineers, at great cost. They do not learn for themselves how to drive, converse, or play games, or want to do these things for themselves, the way any human does. Rather, they are trained on vast data sets, with far more data than any human being ever encounters, and those data are carefully curated by human engineers. Each system does just one thing: the machine that plays Go doesn’t also play chess or tic-tac-toe or bridge or football, let alone know how to see the stones on the Go board or pick up a piece if it accidentally falls on the floor. It doesn’t drive a car to the Go tournament, engage in a conversation about what makes Go so fascinating, make a plan for when and how it should practice to improve its game, or decide if practicing more is the best use of its time. The human mind, of course, can do all these things and more—independently learning and thinking for itself to operate in a hugely complex physical, social, technological, and intellectual world.
And the human mind spontaneously learns to figure all this out without a team of data scientists curating the data on which it learns—instead, it grows up interacting with that complex and chaotic world, albeit with crucial help from caregivers, teachers, and textbooks.
To be sure, recent and remarkable developments in deep learning have created AI models which, with the right prompting, can be used to perform a surprisingly diverse range of tasks, from writing computer code, academic essays, and poems to creating images. By contrast, humans autonomously create their own objectives and plans and are variously curious, bored, or inspired to explore, create, play, and work together in ways that are open-ended and self-directed. AI is smart; but as yet it is only a faint echo of human intelligence.
What’s missing? Why is there such a gap between what we call AI today and the general computational model of human intelligence that the first computer scientists and cognitive psychologists envisioned? And how did AI and cognitive science lose, as has become increasingly evident, their original sense of common purpose? The pressures and opportunities arising from market forces and larger technological developments in computing, along with familiar patterns of academic fads and trends, have all surely played a role. Some of today’s AI technologies are often described as inspired by the mind or brain, most notably those based on artificial neural networks or reinforcement learning, but the analogies, although they have historically been crucial in inspiring modern AI methods, are loose at best. And most cognitive scientists would say that while their field has made real progress, its biggest questions remain open. What are the basic principles that govern how the human mind works? If pressed to answer that question honestly, many cognitive scientists would say either that we don’t know or that at least there is no scientific consensus or broadly shared paradigm for the field yet…
Griffiths, Thomas L.; Chater, Nick; Tenenbaum, Joshua B. (2024). Bayesian Models of Cognition: Reverse Engineering the Mind (Preface). Kindle Edition.
Another of his books:
Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with the concepts needed to understand the world inside us—the Laws of Thought. You have probably heard of Newton’s universal law of gravitation. But you might not have heard of Shepard’s universal law of generalization—a simple principle that describes the behavior of any intelligent organism, anywhere in the universe. While the story of how mathematics has been used to reveal the mysteries of the external world is familiar to anybody who has taken even a casual interest in science, the story of how it has been used to study our internal world is not. This book tells that story.
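As a rough illustration (mine, not the book's): Shepard's universal law of generalization says that the probability of extending a response learned for one stimulus to another decays exponentially with the distance between them in psychological space. A minimal sketch in Python, with the scale parameter as an assumption for illustration:

```python
import math

def generalization(distance, scale=1.0):
    """Shepard's universal law: the probability of generalizing from one
    stimulus to another decays exponentially with their distance in
    psychological space. `scale` sets how quickly transfer falls off."""
    return math.exp(-distance / scale)

# Identical stimuli transfer perfectly; distant ones barely at all.
for d in (0.0, 0.5, 1.0, 2.0):
    print(f"distance {d:.1f} -> generalization {generalization(d):.3f}")
```

The striking claim in Shepard's original argument is that this exponential form is not an accident of human psychology but falls out of rational inference about which region of stimulus space a consequence applies to—hence "any intelligent organism, anywhere in the universe."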
A little over three hundred years ago, a small group of philosophers and mathematicians began to pull together the threads that make up modern science. They developed a keen sense of observation, a talent for conducting experiments, and a new set of tools for expressing mathematical theories. Over the following centuries, observation, experiment, and mathematics have been combined to reveal both the smallest and the largest things in the physical universe in ever-increasing detail. But those philosophers and mathematicians weren’t just interested in the physical universe. They were also interested in the mind, and they wanted to use the same mathematical tools to study it. Thomas Hobbes wrote about the possibility of understanding “ratiocination” as “calculation,” asking whether we might imagine thoughts being added and subtracted. René Descartes imagined numbers being assigned to thoughts in much the same way that they are assigned to collections of physical objects. Gottfried Wilhelm Leibniz spent his life trying, and ultimately failing, to find a way to use arithmetic to describe human reason.
The first success in using mathematics to analyze thought wouldn’t appear until the middle of the nineteenth century, when George Boole cracked the problem that Leibniz hadn’t been able to solve by coming up with a new kind of algebra. That first success had far-reaching consequences, leading to the development of formal logic and computers. The first attempts to evaluate mathematical theories of thought by comparing them to human behavior wouldn’t appear until the middle of the twentieth century, in the Cognitive Revolution that launched the field of cognitive science—the interdisciplinary science of the mind. Cognitive scientists have since come to recognize the limits of formal logic as a model of human cognition, and have developed completely new mathematical approaches—artificial neural networks, which illustrate the power of continuous representations and statistical learning, and Bayesian models, which reveal how to capture prior knowledge and deal with uncertainty. Each has something to offer for understanding the mind.
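To make the Bayesian idea concrete (my example, not the authors'): Bayes' rule shows exactly how prior knowledge combines with uncertain evidence. A minimal sketch, with the particular numbers chosen purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where the evidence
    probability P(E) is found by summing over both hypotheses."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A hypothesis with a 1% prior, given evidence that is 90% likely if it
# holds but only 5% likely otherwise:
print(posterior(0.01, 0.9, 0.05))  # ~0.154: evidence helps, prior still dominates
```

The point the excerpt gestures at is visible in the numbers: strong evidence raises a rare hypothesis from 1% to only about 15%, because the prior continues to exert its pull—precisely the interplay that formal logic, with its all-or-nothing truth values, cannot capture.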
In the twenty-first century, knowing the Laws of Thought is just as important to scientific literacy as knowing the Laws of Nature. Artificial intelligence systems demonstrate on a daily basis that yet another aspect of thought and language can be emulated by machines, pushing us to reconsider the way we think about ourselves. Understanding human minds and how much of them can be automated becomes critical as we plan our careers and think about the world that our children will occupy. By the end of this book you will know the basic principles behind how modern artificial intelligence systems work, and know exactly where they are likely to continue to fall short of human abilities…
Griffiths, Tom (2026). The Laws of Thought: The Quest for a Mathematical Theory of the Mind (Introduction). Kindle Edition.
QUICK UPDATE
I finished Episode 2 of Netflix's "3 Body Problem." Bought volume 1 of the print trilogy and have begun reading. Pretty cool.
More shortly...
