Ran into this podcast today on YouTube. Found it quite interesting, particularly given all the fractiousness engulfing the AI technology debate. Good use of 48 minutes of your time.
(~@3:23, interesting Dr. Keaton observation) “…For me personally, I hate when we teach our undergraduates—as you know as often is done—we basically just teach them a string of Nobel prize winning experiments and just connect the dots, and you go through the twists and turns—brought up this statement by Feynman—that the difference between knowing the name of the thing and knowing something about it is the most dangerous gap in all of science.” [More on that bit of Zen shortly.]
Tom Griffiths' latest:
This book is about how the human mind comes to understand the world—and ultimately, perhaps, how we humans may come to understand ourselves. Many disciplines, ranging from neuroscience to anthropology, share this goal—but the approach that we adopt here is quite specific. We adopt the framework of cognitive science, which aims to create such an understanding through reverse-engineering: using the mathematical and computational tools from the engineering project of creating artificial intelligence (AI) systems to better understand the operation of human thought. AI generates a rich and hugely diverse stream of hypotheses about how the human mind might work. But cognitive science does not just take AI as a source of inspiration. What we have learned about the mathematical and computational underpinnings of human cognition can also help to build more humanlike intelligence in machines.
The fields of AI and cognitive science were born together in the late 1950s, and grew up together over their first decades. From the beginning, these fields’ twin goals of engineering and reverse-engineering human intelligence were understood to be distinct, yet deeply related through the lens of computation. The rise of the digital computer and the possibility of computer programming simultaneously made it plausible to think that, at least in principle, a machine could be programmed to produce the input-output behavior of the human mind. So it was a natural step to suggest that the human mind itself could be understood as having been programmed, through some mixture of evolution, development, and maybe even its own reflection, to produce the behaviors we call “intelligent.” In these early days, AI researchers and cognitive scientists shared their biggest questions: What kind of computer was the brain, and what kind of program could the mind be? What model of computation could possibly underlie human intelligence—both its inner workings and its outwardly observable effects?
Now, almost 70 years later, these two fields have matured and (as often happens to siblings) grown apart to some extent. Cognitive science has become a thriving, occasionally hot, but still relatively small interdisciplinary field of academic study and research. AI has become a dominant societal force, intellectually, culturally, and economically. It is no exaggeration to say that we are living in the first “AI era,” in the sense that we are surrounded by genuinely useful AI technologies. We have machines that appear able to do things we used to think only humans could do—driving a car, having a conversation, or playing a game like Go or chess—yet we still have no real AI, in the sense that the founders of the field originally envisioned. We have no general-purpose machine intelligence that does everything a human being can or thinks about everything a human can, and it’s not even close. The AI technologies we have today are built by large, dedicated teams of human engineers, at great cost. They do not learn for themselves how to drive, converse, or play games, or want to do these things for themselves, the way any human does. Rather, they are trained on vast data sets, with far more data than any human being ever encounters, and those data are carefully curated by human engineers. Each system does just one thing: the machine that plays Go doesn’t also play chess or tic-tac-toe or bridge or football, let alone know how to see the stones on the Go board or pick up a piece if it accidentally falls on the floor. It doesn’t drive a car to the Go tournament, engage in a conversation about what makes Go so fascinating, make a plan for when and how it should practice to improve its game, or decide if practicing more is the best use of its time. The human mind, of course, can do all these things and more—independently learning and thinking for itself to operate in a hugely complex physical, social, technological, and intellectual world. And the human mind spontaneously learns to figure all this out without a team of data scientists curating the data on which it learns, but instead through growing up interacting with that complex and chaotic world, albeit with crucial help from caregivers, teachers, and textbooks.
To be sure, recent and remarkable developments in deep learning have created AI models which, with the right prompting, can be used to perform a surprisingly diverse range of tasks, from writing computer code, academic essays, and poems to creating images. Humans, by contrast, autonomously create their own objectives and plans and are variously curious, bored, or inspired to explore, create, play, and work together in ways that are open-ended and self-directed. AI is smart, but as yet it is only a faint echo of human intelligence.
What’s missing? Why is there such a gap between what we call AI today and the general computational model of human intelligence that the first computer scientists and cognitive psychologists envisioned? And how did AI and cognitive science lose, as has become increasingly evident, their original sense of common purpose? The pressures and opportunities arising from market forces and larger technological developments in computing, along with familiar patterns of academic fads and trends, have all surely played a role. Some of today’s AI technologies are often described as inspired by the mind or brain, most notably those based on artificial neural networks or reinforcement learning, but the analogies, although they have historically been crucial in inspiring modern AI methods, are loose at best. And most cognitive scientists would say that while their field has made real progress, its biggest questions remain open. What are the basic principles that govern how the human mind works? If pressed to answer that question honestly, many cognitive scientists would say either that we don’t know or that at least there is no scientific consensus or broadly shared paradigm for the field yet…
Griffiths, Thomas L.; Chater, Nick; Tenenbaum, Joshua B. (2024). Bayesian Models of Cognition: Reverse Engineering the Mind (Preface). Kindle Edition.
Another of his books:
Everyone has a basic understanding of how the physical world works. We learn about physics and chemistry in school, letting us explain the world around us in terms of concepts like force, acceleration, and gravity—the Laws of Nature. But we don’t have the same fluency with the concepts needed to understand the world inside us—the Laws of Thought. You have probably heard of Newton’s universal law of gravitation. But you might not have heard of Shepard’s universal law of generalization—a simple principle that describes the behavior of any intelligent organism, anywhere in the universe. While the story of how mathematics has been used to reveal the mysteries of the external world is familiar to anybody who has taken even a casual interest in science, the story of how it has been used to study our internal world is not. This book tells that story.
A little over three hundred years ago, a small group of philosophers and mathematicians began to pull together the threads that make up modern science. They developed a keen sense of observation, a talent for conducting experiments, and a new set of tools for expressing mathematical theories. Over the following centuries, observation, experiment, and mathematics have been combined to reveal both the smallest and the largest things in the physical universe in ever-increasing detail. But those philosophers and mathematicians weren’t just interested in the physical universe. They were also interested in the mind, and they wanted to use the same mathematical tools to study it. Thomas Hobbes wrote about the possibility of understanding “ratiocination” as “calculation,” asking whether we might imagine thoughts being added and subtracted. René Descartes imagined numbers being assigned to thoughts in much the same way that they are assigned to collections of physical objects. Gottfried Wilhelm Leibniz spent his life trying, and ultimately failing, to find a way to use arithmetic to describe human reason.
The first success in using mathematics to analyze thought wouldn’t appear until the middle of the nineteenth century, when George Boole cracked the problem that Leibniz hadn’t been able to solve by coming up with a new kind of algebra. That first success had far-reaching consequences, leading to the development of formal logic and computers. The first attempts to evaluate mathematical theories of thought by comparing them to human behavior wouldn’t appear until the middle of the twentieth century, in the Cognitive Revolution that launched the field of cognitive science—the interdisciplinary science of the mind. Cognitive scientists have since come to recognize the limits of formal logic as a model of human cognition, and have developed completely new mathematical approaches—artificial neural networks, which illustrate the power of continuous representations and statistical learning, and Bayesian models, which reveal how to capture prior knowledge and deal with uncertainty. Each has something to offer for understanding the mind.
In the twenty-first century, knowing the Laws of Thought is just as important to scientific literacy as knowing the Laws of Nature. Artificial intelligence systems demonstrate on a daily basis that yet another aspect of thought and language can be emulated by machines, pushing us to reconsider the way we think about ourselves. Understanding human minds and how much of them can be automated becomes critical as we plan our careers and think about the world that our children will occupy. By the end of this book you will know the basic principles behind how modern artificial intelligence systems work, and know exactly where they are likely to continue to fall short of human abilities…
Griffiths, Tom (2026). The Laws of Thought: The Quest for a Mathematical Theory of the Mind (Introduction). Kindle Edition.
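A quick aside of my own (not from the book), since Shepard's law gets name-checked above but not stated: as usually formulated, the probability of generalizing a response learned for one stimulus to another falls off exponentially with the distance between them in an internal "psychological space," i.e.,

g(x, y) = exp( −d(x, y) )

Stimuli that are psychologically close (small d) get treated almost interchangeably, while distant ones are quickly distinguished. Shepard found this exponential gradient across species and sensory domains, which is what licenses the word "universal."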
QUICK UPDATE
I finished Episode 3 of Netflix's "3 Body Problem." Bought volume 1 of the print trilogy and have begun reading. Pretty cool.
UPDATE
| From Science Magazine |
Large language models (LLMs) are artificial intelligence (AI) algorithms that are trained on vast amounts of data to learn patterns that enable them to generate human-like responses. Reasoning models are LLMs with the added capability of working through problems step by step before responding, thus mirroring structured thinking. Such AI systems have performed well in assessing medical knowledge, but whether they can match physician-level clinical reasoning on authentic diagnostic tasks remains largely unknown. On page 524 of this issue, Brodeur et al. (1) demonstrate that AI can now seemingly match or exceed physician-level clinical diagnostic reasoning on text-based scenarios by measuring against human physician performances on clinical vignettes and real-world emergency cases. The findings indicate an urgent need to understand how these tools can be safely integrated into clinical workflows, and a readiness for prospective evaluation alongside clinicians.
AI has the potential to support a broad range of health care applications, from clinical decisions to medical education and the provision of patient-facing health information. LLMs have passed medical licensing examinations and performed well on structured clinical assessments, raising the prospect that they could help alleviate global health care workforce shortages. However, passing examinations is not the same as being a doctor, and demonstrating physician-level performance on authentic clinical tasks is a fundamentally harder challenge (2).
Brodeur et al. evaluated OpenAI’s first reasoning model, o1-preview (released in September 2024), across five experiments that assess diagnostic performance on clinical case vignettes against physician and prior-model baselines. A sixth experiment compared o1 with prior models, and physicians across three diagnostic touchpoints on 76 actual emergency department cases. Across the experiments, the o1 models substantially outperformed prior-generation nonreasoning LLMs (e.g., GPT-4) and, in many cases, the physicians themselves. For example, when provided with published clinicopathological conference cases, GPT-4 achieved exact or very close diagnostic accuracy in 72.9% of cases, whereas o1-preview achieved this in 88.6% of cases. Further, in actual emergency department cases, o1 achieved 67.1% exact or very-close diagnostic accuracy at initial triage, outperforming two expert attending physicians (55.3% and 50.0%), with blinded reviewers unable to distinguish the AI output from human. This advance sets a new evaluation benchmark—testing AI against physician performance, and ideally alongside physicians, on authentic clinical tasks...
Lengthy, detailed discussion. Seriously of interest to me, in light of my long experience working for and with physicians.
ALSO IN SCIENCE TODAY: BOOK REVIEW
“Lively. . . . Rousing. . . . Prophecy—roving, intelligent, irreducibly idiosyncratic—can expand our sense of possibility, starting now.” —The New York Times Book Review
Tech empires are the prophets of the modern day, and like the ancient oracles and medieval astrologers that preceded them, they're not in it for the common good—they're in it for power. Award-winning University of Oxford professor Carissa Véliz brilliantly argues why we must reclaim that power, and shows us how.
“A masterpiece. . . . The most important book you will read for years.” —Roger McNamee, New York Times bestselling author of Zucked
For thousands of years, oracles, seers, and astrologers advised leaders and commoners alike about the future. But predictions are often power plays in disguise, obfuscating accountability and stripping individuals of their agency. Today we face the same threat of powerful prophets but under a new facade: tech.
Not only do modern predictions made by tech companies advise on war, industry, and marriages, but artificial intelligence also now determines whether we can get a loan, a job, an apartment, or an organ transplant. And when we cede ground to these predictions, we lose control of our own lives.
Drawing on history’s cautionary tales and modern-day tech companies’ malfeasance—from surveillance and biased algorithms to a startling lack of accountability—Carissa Véliz demonstrates that big tech’s prophecies are just as shallow, dangerous, and unjust as their ancient counterparts’. What she uncovers in the process is chilling. Artificial intelligence is increasing risk in business and society while creating a false sense of security. In this incisive, witty, and bracingly original book, Véliz contends that the main promise of prediction is not knowledge of the future but domination over others. Powerful people use predictions to determine our future. Prophecy is an invitation to defy those orders and live life on our own terms.
As I finished reading Prophecy, by philosopher Carissa Véliz, the soundtrack of The Matrix hummed in my mind, howled by the band Rage Against the Machine (“Wake up! Wake up!”). In fact, a weird kind of intellectual synesthesia took place throughout my perusal of the book, as I could also hear the dystopian slogan from George Orwell’s 1984 furiously sung by the same band: “Who controls the past now controls the future, who controls the present now controls the past.”
Véliz’s scholarship focuses on ethics and artificial intelligence (AI)—two realms that often refuse to meet. In 2020, she published Privacy Is Power, warning readers against the algorithmic invasion that has continued to take hold of our personal data, feeding the techno-golems of Silicon Valley and threatening our sanity and liberty. This book is her second major admonition.
Prophecy is about the power of predictions, especially when they are maliciously misleading. The structure of the book is dialectic: The first part expounds the promise of predictions and their influence throughout history, and the second articulates their manifold perils and related abuses of power, particularly in the current age of AI oracles. The third and final part of the book seeks a resolution, namely, how to rethink predictions and resist their deceptive lure.
Predictions have always been with us, from ancestral wisdom that told us when to sow and reap to mathematics used to optimize decision-making under uncertainty. Forecasts are ultimately guesses—educated or naïve, right or wrong, innocuous or consequential. They can also be deliberately deceptive, not anticipating the future but rather covertly shaping it. When such predictions are then turned into promises and ossified into decrees, we are in trouble.
Using predictions as prescriptions to benefit one’s agenda is not new. Rulers have always done so. But current AI empires make such a practice unprecedentedly pervasive and pernicious.
AI, argues Véliz, is the new diviner—the ultimate prediction machine. Mirrored on the fashionable idea that our brains are inference devices, such artificial prophets are dangerous. Digital technologies now rule our personal and professional lives, as well as the fate of countries and civilizations. They tell us who to date, what to watch, who to hire, when to start a war, and so forth. These simulacra are presented as deep knowledge, even truth, and then turned into self-fulfilling prophecies. Perils abound, as predictions also give us a false sense of security, increasing risks and lacking accountability...
'eh?
More shortly...