With affability, humility, and synaptic smog-clearing clarity, Dr. Erik J. Larson Takes No Prisoners.
Yikes, indeed.
In sum, we can dial back the ominous hyperbole regarding the incipient Homo sapiens-enslaving/exterminating Singularity said soon to be wrought by AI. 320 pages of Chill Pills. 320 doses of fast-acting Naloxone HCl antidote to AGI Cognitive Pearl-Clutching Disorder.
One fumbles to know where to begin.
I love it when I learn stuff (unlearn, mostly, anymore). Even when it entails humbling internal reactions of "how the [bleep] have you missed that all these decades?"
In 1980, at the age of 34, divorced w/ custody of my two girls, I found it prudent and necessary to give up my then-16 years of hardscrabble roadhouse touring musician life (or, as my wife Cheryl calls it, "working in the not-for-profit sector") and enroll in undergraduate school at UTK.
I thrived, ravenously consuming the likes of deductive logic, inductive logic, philosophy of science, various flavors of ECON, the gamut of statistics courses, and experimental psychology and psychometrics.
In addition to my awesome Linear Regression text (written by my Prof, Mary Sue Younger), I still have my UTK deductive logic textbook in my hardcopy stash.
Looking back now through the index, I find no reference to either "abductive inference" or Charles Sanders Peirce. What? Who?
Erik Larson:
...[C]ommon sense is itself mysterious, precisely because it doesn’t fit into logical frameworks like deduction or induction. Abduction captures the insight that much of our everyday reasoning is a kind of detective work, where we see facts (data) as clues to help us make sense of things. We are extraordinarily good at hypothesizing, which is, to Peirce’s mind, not explainable by mechanics but rather by an operation of mind which he calls, for lack of another explanation, instinct. We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible.
We must account for this in building an intelligence, because it is the starting point for any intelligent thinking at all. Without a prior abductive step, inductions are blind, and deductions are equally useless.
Induction requires abduction as a first step, because we need to bring into observation some framework for making sense of what philosophers call sense-datum—raw experience, uninterpreted. Even in simple induction, where we induce a general statement that All swans are white from observations of swans, a minimal conceptual framework or theory guides the acquisition of knowledge. We could induce that all swans have beaks by the same inductive strategy, but the induction would be less powerful, because all birds have beaks, and swans are a small subset of birds. Prior knowledge is used to form hypotheses. Intuition provides mathematicians with interesting problems.
When the developers of DeepMind claimed, in a much-read article in the prestigious journal Nature, that it had mastered Go “without human knowledge,” they misunderstood the nature of inference, mechanical or otherwise. The article clearly “overstated the case,” as Marcus and Davis put it. In fact, DeepMind’s scientists engineered into AlphaGo a rich model of the game of Go, and went to the trouble of finding the best algorithms to solve various aspects of the game—all before the system ever played in a real competition. As Marcus and Davis explain, “the system relied heavily on things that human researchers had discovered over the last few decades about how to get machines to play games like Go, most notably Monte Carlo Tree Search … random sampling from a tree of different game possibilities, which has nothing intrinsic to do with deep learning. DeepMind also (unlike [the Atari system]) built in rules and some other detailed knowledge about the game. The claim that human knowledge wasn’t involved simply wasn’t factually accurate.” A more succinct way of putting this is that the DeepMind team used human inferences—namely, abductive ones—to design the system to successfully accomplish its task. These inferences were supplied from outside the inductive framework.
Larson, Erik J. The Myth of Artificial Intelligence (pp. 161-162). Harvard University Press. Kindle Edition.
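A toy illustration for my fellow code-dabblers (mine, not Larson's; the "wet grass" rules and function names are invented for the sketch): enumerating candidate explanations is trivially mechanical, but Peirce's point is that judging which guess is plausible, out of effectively infinite possibilities, is the part nobody knows how to program.

# Toy contrast of Peirce's three inference modes (my sketch, not Larson's).
# Deduction: rule + case -> a certain result.
# Induction: observed cases -> a generalized rule.
# Abduction: a surprising observation + rules -> a best-guess explanation.

rules = {"it rained": "the grass is wet",
         "the sprinkler ran": "the grass is wet",
         "a pipe burst": "the grass is wet"}

def deduce(cause):
    # From "it rained" and the rule, conclude "the grass is wet" with certainty.
    return rules.get(cause)

def abduce(observation):
    # Enumerate every known cause that would explain the observation.
    # Easy for a machine; deciding which guess is *plausible* (rain in
    # Seattle, a sprinkler in Phoenix) takes open-ended background knowledge.
    return [cause for cause, effect in rules.items() if effect == observation]

print(deduce("it rained"))         # the grass is wet
print(abduce("the grass is wet"))  # ['it rained', 'the sprinkler ran', 'a pipe burst']

The listing does the easy half. Larson's (and Peirce's) claim is that the ranking step is where the mystery lives.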
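And, apropos of the Monte Carlo Tree Search point, here's a minimal sketch of the rollout idea in the same hedged spirit (my toy, not DeepMind's code; I substitute the old parlor game Nim for Go): evaluate each candidate move by playing out random continuations and counting wins. Note that there's no deep learning anywhere in it.

import random

# Minimal sketch of the Monte Carlo rollout idea behind MCTS (my toy, not
# DeepMind's code). Game: Nim -- take 1 to 3 stones per turn; whoever takes
# the last stone wins. A move is judged purely by random playouts.

def playout(stones, my_turn):
    # Finish the game with uniformly random moves; return 1 if "we" win.
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0

def monte_carlo_move(stones, rollouts=2000):
    # Pick the take whose random continuations win most often.
    scores = {}
    for take in range(1, min(3, stones) + 1):
        if take == stones:
            scores[take] = 1.0  # taking the last stone wins outright
        else:
            wins = sum(playout(stones - take, my_turn=False)
                       for _ in range(rollouts))
            scores[take] = wins / rollouts
    return max(scores, key=scores.get)

print(monte_carlo_move(10))  # usually 2: leaving a multiple of 4 is the
                             # known winning strategy in this Nim variant

The sampling does the arithmetic; the abductive leap, that random-playout statistics are a decent proxy for move quality, came from human researchers. Exactly Larson's point.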
I now humbly stand "Abducted."
(BTW: Not even in graduate school did I encounter C.S. Peirce or abductive inference. An inexplicable, significant omission, given what I now learn.)
I am not kidding about Erik J. Larson's new book. Another great place within which to hide a $100 bill from Your Favorite President Donald Trump.
This has been a compelling and fun read. I think the author has sustained his case.
In the pages of this book you will read about the myth of artificial intelligence. The myth is not that true AI is possible. As to that, the future of AI is a scientific unknown. The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time—that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations. Yet the inevitability of AI is so ingrained in popular discussion—promoted by media pundits, thought leaders like Elon Musk, and even many AI scientists (though certainly not all)—that arguing against it is often taken as a form of Luddism, or at the very least a shortsighted view of the future of technology and a dangerous failure to prepare for a world of intelligent machines.

I think I can safely continue to exclude The Singularity from BobbyG's list of priority exigencies.
As I will show, the science of AI has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Proponents of AI have huge incentives to minimize its known limitations. After all, AI is big business, and it’s increasingly dominant in culture. Yet the possibilities for future AI systems are limited by what we currently know about the nature of intelligence, whether we like it or not. And here we should say it directly: all evidence suggests that human and machine intelligence are radically different. The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. Futurists like Ray Kurzweil and philosopher Nick Bostrom, prominent purveyors of the myth, talk not only as if human-level AI were inevitable, but as if, soon after its arrival, superintelligent machines would leave us far behind.
This book explains two important aspects of the AI myth, one scientific and one cultural. The scientific part of the myth assumes that we need only keep “chipping away” at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images. This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. The inferences that systems require for general intelligence—to read a newspaper, or hold a basic conversation, or become a helpmeet like Rosie the Robot in The Jetsons—cannot be programmed, learned, or engineered with our current knowledge of AI. As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general “common sense” is completely different, and there’s no known path from the one to the other. No algorithm exists for general intelligence. And we have good reason to be skeptical that such an algorithm will emerge through further efforts on deep learning systems or any other approach popular today. Much more likely, it will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like, let alone the details of getting to it… [Erik J. Larson, pp 1-2]
Gotta love the fictional MARK 13.
Apropos of exigencies,
"AI” OR "IA" (Intelligence Augmentation)?
It remains true that technology often acts like a prosthetic to human abilities, as with the telescope and microscope. AI has this role to play, at least, but a mythology about a coming superintelligence should be placed in the category of scientific unknowns. If we wish to pursue a scientific mystery directly, we must at any rate invest in a culture that encourages intellectual ideas—we will need them, if any path to artificial general intelligence is possible at all. [Erik J. Larson, pg 280]
My initial interest in the topic went to all of the Gartner Hype Cycle swooning over AI's (overstated) clinical potential in Health IT. Beyond that, I mused about this stuff (click the robot thinker graphic):
My dubiety is hardly attenuated in the wake of reading Erik's book.
More broadly,
"Someone has to have an idea."
Yeah, and where do we get them?
“Because of livewiring, we are each a vessel of space and time. We drop into a particular spot on the world and vacuum in the details of that spot. We become, in essence, a recording device for our moment in the world.

[The fuel source of abductive inference?]
When you meet an older person and feel shocked by the opinions or worldview she holds, you can try to empathize with her as a recording device for her window of time and her set of experiences. Someday your brain will be that time-ossified snapshot that frustrates the next generation.
Here’s a nugget from my vessel: I remember a song produced in 1985 called “We Are the World.” Dozens of superstar musicians performed it to raise money for impoverished children in Africa. The theme was that each of us shares responsibility for the well-being of everyone. Looking back on the song now, I can’t help but see another interpretation through my lens as a neuroscientist.
We generally go through life thinking there’s me and there’s the world. But as we’ve seen in this book, who you are emerges from everything you’ve interacted with: your environment, all of your experiences, your friends, your enemies, your culture, your belief system, your era—all of it.
Although we value statements such as “he’s his own man” or “she’s an independent thinker,” there is in fact no way to separate yourself from the rich context in which you’re embedded. There is no you without the external. Your beliefs and dogmas and aspirations are shaped by it, inside and out, like a sculpture from a block of marble. Thanks to livewiring, each of us is the world.”
— Livewired: The Inside Story of the Ever-Changing Brain, by David Eagleman, pp 244-245
UPDATES
Mathematician Bill Dembski's lengthy review of Erik's book.
Erik Larson’s THE MYTH OF ARTIFICIAL INTELLIGENCE is far and away the best refutation of Kurzweil’s overpromises, but also of the hype pressed by those who have fallen in love with AI’s latest incarnation, which is the combination of big data with machine learning. Just to be clear, Larson is not a contrarian. He does not have a death wish for AI. He is not trying to sabotage research in the area (if anything, he is trying to extricate AI research from the fantasy land it currently inhabits). In fact, he has been a solid contributor to the field, coming to the problem of strong AI, or artificial general intelligence (AGI) as he prefers to call it, with an open mind about its possibilities...

Yep. I give Dembski's review five stars as well—one star for each 1,000 words. /s
Cool guy.
OFF-TOPIC PERSONAL ERRATUM
22 months ago I was officially dx'd with Parkinson's. Until a month ago I'd avoided the Rx, but I'm now doing the Carbidopa/Levodopa 25/100 regimen (6 tabs/day, 2 at a time). I find it problematic, though it might yet be too early to assess efficacy. My side effects thus far are pretty much just elevated standing-balance issues. I was already a fall risk. Sux. But so do the tremors, which are messing with my typing, mouse use, and guitar playing (and manual dexterity more broadly).
Mixing the film metaphor riffs, "To Make Benefit Glorious Nation of Sinemetistan."
TOTALLY FRIVOLOUS TWEET