"In a few years we’ll have artificial intelligence that can accomplish professional human tasks. There is nothing we can do to stop this. In addition our lives will be totally 100% tracked by ourselves and others. This too is inevitable. Indeed much of what will happen in the next 30 years is inevitable, driven by technological trends which are already in motion, and are impossible to halt without halting civilization. Some of what is coming may seem scary, like ubiquitous tracking, or robots replacing humans. Others innovations seem more desirable, such as an on-demand economy, and virtual reality in the home. And some that is coming like network crime and anonymous hacking will be society’s new scourges. Yet both the desirable good and the undesirable bad of these emerging technologies all obey the same formation principles."
Kevin Kelly, co-founder of Wired, is an extremely important thinker. I was in the car the other day with NPR on the radio and heard an interview featuring him. Had to look him up forthwith. Interesting man:
My educational background is minimal. I am a college drop out. Instead of going to university, I went to Asia. That was one of the best decisions I ever made. I traveled in the 1970s as a poor, solo photographer in the hinterlands and villages of Asia, between Iran and Japan. I traveled on about US$2,500 per year and came back with 36,000 slides...

His photos are breathtaking. His writing is witty and elegant. His views are at once spot-on and challenging.
"The ongoing scientific process of moving our lives away from the rule of matter and toward abstractions and intangibles can only prepare us for a better understanding of the ultimate abstraction." - Nerd Theology, 1999
OK, got the interview audio embed link. Thanks, Kevin.
At first glance, Kevin Kelly is a contradiction: a self-described old hippie and onetime editor of hippiedom’s do-it-yourself bible, The Whole Earth Catalog, who went on to co-found Wired magazine, a beacon of the digital age.
In our latest edition of FREAK-quently Asked Questions, Kelly sits down with Stephen Dubner to explain himself; the episode is called “Someone Else’s Acid Trip”...
...Images of a machine as organism and an organism as machine are as old as the first machine itself. But now those enduring metaphors are no longer poetry. They are becoming real—profitably real.
This book is about the marriage of the born and the made. By extracting the logical principle of both life and machines, and applying each to the task of building extremely complex systems, technicians are conjuring up contraptions that are at once both made and alive. This marriage between life and machines is one of convenience, because, in part, it has been forced by our current technical limitations. For the world of our own making has become so complicated that we must turn to the world of the born to understand how to manage it. That is, the more mechanical we make our fabricated environment, the more biological it will eventually have to be if it is to work at all. Our future is technological; but it will not be a world of gray steel. Rather our technological future is headed toward a neo-biological civilization.
The triumph of the bio-logic
Nature has all along yielded her flesh to humans. First, we took nature’s materials as food, fibers, and shelter. Then we learned to extract raw materials from her biosphere to create our own new synthetic materials. Now Bios is yielding us her mind—we are taking her logic.
Clockwork logic—the logic of the machines—will only build simple contraptions. Truly complex systems such as a cell, a meadow, an economy, or a brain (natural or artificial) require a rigorous nontechnological logic. We now see that no logic except bio-logic can assemble a thinking device, or even a workable system of any magnitude.
It is an astounding discovery that one can extract the logic of Bios out of biology and have something useful. Although many philosophers in the past have suspected one could abstract the laws of life and apply them elsewhere, it wasn’t until the complexity of computers and human-made systems became as complicated as living things, that it was possible to prove this. It’s eerie how much of life can be transferred. So far, some of the traits of the living that have successfully been transported to mechanical systems are: self-replication, self-governance, limited self-repair, mild evolution, and partial learning. We have reason to believe yet more can be synthesized and made into something new.
Yet at the same time that the logic of Bios is being imported into machines, the logic of Technos is being imported into life.
The root of bioengineering is the desire to control the organic long enough to improve it. Domesticated plants and animals are examples of technos-logic applied to life. The wild aromatic root of the Queen Anne’s lace weed has been fine-tuned over generations by selective herb gatherers until it has evolved into a sweet carrot of the garden; the udders of wild bovines have been selectively enlarged in an “unnatural” way to satisfy humans rather than calves. Milk cows and carrots, therefore, are human inventions as much as steam engines and gunpowder are. But milk cows and carrots are more indicative of the kind of inventions humans will make in the future: products that are grown rather than manufactured.
Genetic engineering is precisely what cattle breeders do when they select better strains of Holsteins, only bioengineers employ more precise and powerful control. While carrot and milk cow breeders had to rely on diffuse organic evolution, modern genetic engineers can use directed artificial evolution—purposeful design—which greatly accelerates improvements.
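Kelly's contrast between diffuse organic evolution and directed artificial evolution is easy to see in miniature in code. Below is a minimal sketch (my own illustration, not from the book) of directed evolution in Python: random mutation supplies the variation, while an explicit fitness function, the breeder's "purposeful design," supplies the selection pressure:

```python
import random

TARGET = "CARROT"  # the breeder's goal, known in advance
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Directed evolution: score each candidate against an explicit target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Random per-character variation, as in organic evolution.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(pop_size=50, generations=500):
    population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        # Selection pressure: the fittest half breeds the next generation.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"reached {best!r} after {gen} generations")
```

Delete the fixed `TARGET` and score candidates only by survival in some environment, and you are back to the diffuse, undirected variety; the explicit fitness function is what makes the evolution "directed" and fast.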
The overlap of the mechanical and the lifelike increases year by year. Part of this bionic convergence is a matter of words. The meanings of “mechanical” and “life” are both stretching until all complicated things can be perceived as machines, and all self-sustaining machines can be perceived as alive. Yet beyond semantics, two concrete trends are happening: (1) Human-made things are behaving more lifelike, and (2) Life is becoming more engineered. The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being. What should we call that common soul between the organic communities we know of as organisms and ecologies, and their manufactured counterparts of robots, corporations, economies, and computer circuits? I call those examples, both made and born, “vivisystems” for the lifelikeness each kind of system holds.
In the following chapters I survey this unified bionic frontier. Many of the vivisystems I report on are “artificial”—artifices of human making—but in almost every case they are also real—experimentally implemented rather than mere theory. The artificial vivisystems I survey are all complex and grand: planetary telephone systems, computer virus incubators, robot prototypes, virtual reality worlds, synthetic animated characters, diverse artificial ecologies, and computer models of the whole Earth.
But the wildness of nature is the chief source for clarifying insights into vivisystems, and probably the paramount source of more insights to come...
The vivisystems I examine in this book are nearly bottomless complications, vast in range, and gigantic in nuance. From these particular big systems I have appropriated unifying principles for all large vivisystems; I call them the laws of god, and they are the fundamentals shared by all self-sustaining, self-improving systems.
As we look at human efforts to create complex mechanical things, again and again we return to nature for directions. Nature is thus more than a diverse gene bank harboring undiscovered herbal cures for future diseases—although it is certainly this. Nature is also a “meme bank,” an idea factory. Vital, postindustrial paradigms are hidden in every jungly ant hill. The billion-footed beast of living bugs and weeds, and the aboriginal human cultures which have extracted meaning from this life, are worth protecting, if for no other reason than for the postmodern metaphors they still have not revealed. Destroying a prairie destroys not only a reservoir of genes but also a treasure of future metaphors, insight, and models for a neo-biological civilization... [pp. 6-8]

In a month, I'll be downloading and reading his forthcoming title.
I'll be triangulating a lot of Kevin Kelly with my prior posts about "AI." e.g., "AI vs IA: At the cutting edge of IT R&D."
A bonus for watching the complete Kevin Kelly SXSW talk was that it was followed in my YouTube feed with one by the equally compelling Douglas Rushkoff, who I've cited on this blog in prior posts. See here as well.
If you're too busy (or too lazy) to obtain and read the Rushkoff book, his SXSW talk will likely suffice as CliffsNotes.
I also recommend you triangulate all of this stuff with Paul Mason's excellent book "Postcapitalism." See "The future of health care, continued. Where will economics come in?"
So, coming back around to the core tech thing -- Health IT -- see my May 2nd post "'Better, Smarter, and Healthier'? Really?" wherein Robin Farmanfarmian waxes rhapsodic over the nascent wave of 24/7 ubiquitous digital sensing technologies.
"Free Beer Tomorrow?"
Interesting link on Kevin Kelly's site:
The Anecdote is the Antidote for What Ails Modern Medical Science by John R. Adler, Jr., M.D.

It’s hard to imagine anybody being more of a medical insider than Dr. John R. Adler, the founding editor of Cureus. Adler has a Harvard medical degree, served his residency at Massachusetts General Hospital, and is a Stanford professor of neurosurgery, as well as founding CEO of a leading radiation oncology company, Accuray. This makes it especially heartening that Dr. Adler is now focused on opening up medical research literature to important kinds of evidence that have often been ignored: the anecdote and the case report. Quote: “The altruism that is supposed to drive the publication of scientific research has been almost entirely co-opted by the peculiar needs of academic promotion and tenure, as well as the pecuniary demands of the scholarly publishing industry; the public good of medical knowledge has been reduced to a mere after-thought by both academia and the publishing industry.”
Apropos of the foregoing AI/IA/Robotics riff, I ran across this article this morning on Medium:
Why You Will Soon Be Sharing Your Deepest Secrets With A Robot: This will be the largest cognitive leap in human history.

Note the author byline:
I guess resumes will now be increasingly replete with how many IPO exits one has been "associated" with, and for what dollar amounts.
Another cool article I first noticed in a Medium.com cross-post: "Three Years in San Francisco."
Then there's Edge.org. Like I don't already have enough to read. To wit:
Weighing in from the cutting-edge frontiers of science, today’s most forward-thinking minds explore the rise of “machines that think.”

I find it interesting that the usual list of allusively admonitory movie citations misses this one:
Stephen Hawking recently made headlines by noting, “The development of full artificial intelligence could spell the end of the human race.” Others, conversely, have trumpeted a new age of “superintelligence” in which smart devices will exponentially extend human capacities. No longer just a matter of science-fiction fantasy (2001, Blade Runner, The Terminator, Her, etc.), it is time to seriously consider the reality of intelligent technology, many forms of which are already being integrated into our daily lives. In that spirit, John Brockman, publisher of Edge.org (“the world’s smartest website” – The Guardian), asked the world’s most influential scientists, philosophers, and artists one of today’s most consequential questions: What do you think about machines that think?
The self-repairing autonomous "Mark 13" warrior robot. A gruesome, dystopian cult flick. One of my faves.
Also up amid the never-diminishing book stash:
Additionally on deck: "Two cheers for uncertainty." What the hell does that mean? Isn't "science" (in particular medical science and its bedrock IT foundation) all about reducing uncertainty?
For now, a bit of Kevin Kelly tease on the topic:
Letting go to win

"...one must relinquish control and embrace uncertainty. Absolute control is absolutely boring."
As Moses tells the story, on the sixth day of creation, that is at the eleventh hour of a particularly frantic creative bout, the god kneaded some clayey earth and in an almost playful gesture, crafted a tiny model to dwell in his new world. This god, Yahweh, was an unspeakably mighty inventor who built his universe merely by thinking aloud. He had been able to do the rest of his creation in his head, but this part required some fiddling. The final hand-tuned model—a blinking, dazed thing, a “man” as Yahweh called him—was to be a bit more than the other creatures the almighty made that week.
This one was to be a model in imitation of the great Yahweh himself. In some cybernetic way the man was to be a simulacra of Yahweh.
As Yahweh was a creator, this model would also create in simulation of Yahweh’s creativity. As Yahweh had free will and loved, this model was to have free will and love in reflection of Yahweh. So Yahweh endowed the model with the same type of true creativity he himself possessed.
Free will and creativity meant an open-ended world with no limits. Anything could be imagined, anything could be done. This meant that the man-thing could be creatively hateful as well as creatively loving (although Yahweh attempted to encode heuristics in the model to help it decide).
Now Yahweh himself was outside of time, beyond space and form, and unlimited in scope—ultimate software. So making a model of himself that could operate in bounded material, limited in scale, and constrained by time was not a cinch. By definition, the model wasn’t perfect.
To continue where Moses left off, Yahweh’s man-thing has been around in creation for millennia, long enough to pick up the patterns of birth, being, and becoming. A few bold man-things have had a recurring dream: to do as Yahweh did and make a model of themselves—a simulacra that will spring from their own hands and in its turn create novelty freely as Yahweh and man-things can.
So by now some of Yahweh’s creatures have begun to gather minerals from the earth to build their own model creatures. Like Yahweh, they have given their created model a name. But in the cursed babel of man-things, it has many designations: automata, robot, golem, droid, homunculus, simulacra.
The simulacra they have built so far vary. Some species, such as computer viruses, are more spirit than flesh. Other species of simulacra exist on another plane of being—virtual space. And some simulacra, like the kind marching forward in SIMNET, are terrifying hybrids between the real and the hyperreal.
The rest of the man-things are perplexed by the dream of the model builders. Some of the curious bystanders cheer: how wonderful to reenact Yahweh’s incomparable creation! Others are worried; there goes our humanity. It’s a good question. Will creating our own simulacra complete Yahweh’s genesis in an act of true flattery? Or does it commence mankind’s demise in the most foolish audacity?
Is the work of the model-making-its-own-model a sacrament or a blasphemy?
One thing the man-creatures know for sure: making models of themselves is no cinch.
The other thing the man-things should know is that their models won’t be perfect, either. Nor will these imperfect creations be under godly control. To succeed at all in creating a creative creature, the creators have to turn over control to the created, just as Yahweh relinquished control to them.
To be a god, at least to be a creative one, one must relinquish control and embrace uncertainty. Absolute control is absolutely boring. To birth the new, the unexpected, the truly novel—that is, to be genuinely surprised—one must surrender the seat of power to the mob below.
The great irony of god games is that letting go is the only way to win. [Kevin Kelly, Out of Control, pp. 219-220]
From my stacks:
Two cheers for uncertainty

I once appropriated this idea for some song lyrics:
Imagine a life without uncertainty. Hope, according to Aeschylus, arises from the lack of certainty of fate; perhaps hope is inherently blind. Imagine how dull life would be if variables assessed for admission to professional school, graduate program, or executive training program really did predict with great accuracy who would succeed and who would fail. Life would be intolerable — no hope, no challenge.
Thus we have a paradox. Although we all strive to reduce the uncertainties of our existence and of the environment, ultimate success — that is, a total elimination of uncertainty — would be horrific...
[even] knowing pleasant outcomes with certainty would also detract from life's joy. An essential part of knowledge is to shrink the domain of the unpredictable. But although we pursue this goal, its ultimate attainment would not be at all desirable.
Living with uncertainty
Without uncertainty, there would be no hope, no ethics, and no freedom of choice. It is only because we do not know what the future holds for us (e.g., the exact time and manner of our own death) that we can have hope. It is only because we do not know exactly the results of our choices that our choices can be free and can compose a true ethical dilemma. Moreover, most of us are aware that there is much uncertainty in the world, and one of our most basic choices is whether we will accept that uncertainty is a fact or try to run away from it. Those who choose to deny uncertainty invent a stable world of their own. Such people's natural desire to reduce uncertainty, which may be basic to the whole cognitive enterprise of understanding the world, is taken to the extreme point where they believe that uncertainty does not exist. The statistician's definition that an optimist is "someone who believes that the future is uncertain" is not as cynical as it may at first appear to be [Hastie & Dawes, Rational Choice in an Uncertain World, 2001, pp. 326-328].
What's there to see
In the Land of the Blind?
Where they're all 20-20,
Having made up their minds.
Well, Two Cheers for the Mystery,
Otherwise, it's all History.
If it's all just so clear,
What're we still doin' here?

Maybe one day I'll finish that song.
I'm not sure I'm driving toward any clear point here. It's just about the broad implications of AI science and technology, and the uncertainty regarding where they will lead us, given that notable people like Hawking, Bostrom, Musk, and Bill Gates have voiced concerns -- concerns that Kevin Kelly has done his best to counter, e.g., "Why I Don't Worry About a Super AI."
NOTE: If you can get through Jaron Lanier's ramblingly obtuse "The Myth of AI," beginning at about halfway down the long web page is a raft of illuminating expert comments on the topic. Well worth your time.

At this point it looks to me like AI will be more "IA" in the health care domain -- "Intelligence Augmentation" (recall Larry Weed?) that helps clinicians sift through the otherwise increasingly unmanageable torrents of data they face when trying to arrive at accurate dx's and efficacious px's and tx's. To wit:
The Rise of the Chief Cognitive Officer
...Toby Cosgrove, the CEO of the Cleveland Clinic, stated “It may sound odd, but technology like Watson will make healthcare less robotic and more human.” The reasoning behind putting an AI through a version of medical school is that human physicians can’t possibly read and process the exponentially growing volumes of clinical trials, medical journals, and individual cases available in the digital domain. A computer that digests them can transform them into useful support options for care of a patient. Furthermore, humans can’t be a part of every case and learn from every physician. But by combining a human with the capacity of a computer as a physician’s assistant, physicians can focus on the many things that they are uniquely able to do in the complex domain of medicine. This includes the critical conversations with patients and their families.
In short, a more human physician interaction can be made by sharing the workload between man and machine. The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (Artificial Intelligence) thinking and human provider thinking within medicine...
The root word for cognitive in Latin is “cogn” which means “to learn, know.” It tends to be an active requirement for success in modern medicine with ideas such as ‘the learning health system’ — and the modern learning health system is typically able to learn and know things because algorithms do a chunk of the learning that people can’t scale to do.

The Robot Will See You Now?
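The claim that "algorithms do a chunk of the learning that people can't scale to do" is easiest to appreciate in miniature. Here's a toy sketch (entirely my own illustration; the three-document corpus and the scoring are hypothetical, and systems like Watson are vastly more sophisticated) of ranking a literature corpus against a clinical query by simple TF-IDF term overlap:

```python
from collections import Counter
import math

# Hypothetical mini-corpus standing in for millions of articles.
CORPUS = {
    "trial-001": "metformin glycemic control type 2 diabetes adults",
    "trial-002": "statin therapy cholesterol cardiovascular risk reduction",
    "case-774":  "insulin resistance diabetes diet exercise intervention",
}

def tf_idf_vectors(docs):
    # Document frequency: how many docs each term appears in.
    df = Counter(term for text in docs.values() for term in set(text.split()))
    n = len(docs)
    vectors = {}
    for doc_id, text in docs.items():
        tf = Counter(text.split())
        # Weight each term by count times smoothed inverse document frequency.
        vectors[doc_id] = {t: c * math.log((1 + n) / (1 + df[t]))
                           for t, c in tf.items()}
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def rank(query, docs):
    # Score every document against the query; highest similarity first.
    vecs = tf_idf_vectors({**docs, "_query_": query})
    q = vecs.pop("_query_")
    return sorted(((cosine(q, v), d) for d, v in vecs.items()), reverse=True)

ranked = rank("type 2 diabetes glycemic management", CORPUS)
```

The diabetes trial outranks the unrelated statin trial. Real cognitive systems replace term overlap with learned representations, but the scaling argument is the same: the scoring loop runs over volumes of literature no clinician could possibly read.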
At the moment I don’t work within a health system. Because I am focusing on Cognitive Computing I am considering declaring myself the Chief Cognitive Officer and taking over establishing systems of anything related to thinking and knowledge. I wonder if anyone will stop me? Maybe the job of the chief cognitive officer should be the CEO? I’d say so but ‘executive’ implies decisions. Thinking isn’t always about big decisions but about the tools that can be used to make the millions of little decisions needed for every interaction to generate value. Someone has to make this next generation stuff work.
At least that is how my AI and I think.
On another aspect of cutting edge technology,
Top scientists hold closed meeting to discuss building a human genome from scratch

'eh?
Over 130 scientists, lawyers, entrepreneurs, and government officials from five continents gathered at Harvard on Tuesday for an “exploratory” meeting to discuss the topic of creating genomes from scratch — including, but not limited to, those of humans, said George Church, Harvard geneticist and co-organizer of the meeting.
The meeting was closed to the press, which drew the ire of prominent academics...
The topic is a heavy one, touching on fundamental philosophical questions of meaning and being. If we can build a synthetic genome — and eventually, a creature — from the ground up, then what does it mean to be human?
“This idea is an enormous step for the human species, and it shouldn’t be discussed only behind closed doors,” said Laurie Zoloth, a professor of religious studies, bioethics, and medical humanities at Northwestern University...
More to come...