Can Computers Learn Like Humans?
The world of artificial intelligence has exploded in recent years. Computers armed with AI do everything from driving cars to picking movies you'll probably like. Some have warned we're putting too much trust in computers that appear to do wondrous things.
But what exactly do people mean when they talk about artificial intelligence?
It's hard to find a universally accepted definition of artificial intelligence. Basically it's about getting a computer to be smart — getting it to do something that in the past only humans could do.
One key to artificial intelligence is machine learning. Instead of telling a computer how to do something, you write a program that lets the computer figure out how to do something all on its own…
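To make that distinction concrete, here's a minimal sketch in Python (scikit-learn assumed, and the messages and labels are made up for illustration): a hand-written rule encodes the programmer's logic up front, while the learned model infers its own rule from labeled examples.

```python
# Toy contrast between "telling the computer the rule" and
# "letting it learn the rule from examples" (hypothetical data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-written rule: we supply the logic ourselves.
def rule_based_spam_filter(message: str) -> bool:
    return "free money" in message.lower() or "act now" in message.lower()

# Machine learning: we supply labeled examples and let the model
# infer which word patterns predict "spam" (1) vs. "not spam" (0).
messages = [
    "Free money, act now!",          # spam
    "Lunch meeting moved to noon",   # not spam
    "You won a free cruise",         # spam
    "Quarterly report attached",     # not spam
]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = LogisticRegression().fit(X, labels)

print(rule_based_spam_filter("Act now for free money"))             # True
print(model.predict(vectorizer.transform(["free cruise inside"])))  # likely [1]
```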
Medicine Is a Profession That Is Rapidly Losing Control of Its Tools
Over at Medium.com:
By ADRIAN GROPPER, MD
Artificial Intelligence hype and reality are everywhere. However, the last month or two has seen some thoughtful reflection. HHS/ONC announced “Hype to Reality: How Artificial Intelligence (AI) Can Transform Health and Healthcare,” referencing a major JASON report, “Artificial Intelligence for Health and Health Care [PDF, 817 KB].” From a legal and ethical perspective, we have a new multinational program: “PMAIL will provide a comparative analysis of the law and ethics of black-box personalized medicine…”. Another Harvard affiliate writes “Optimization over Explanation,” subtitled “Maximizing the benefits of machine learning without sacrificing its intelligence.” Meanwhile, an investigative journalism report from the UK, “Google DeepMind and healthcare in an age of algorithms,” “…draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age”…
Interesting 4-part (thus far) series:
Living in the Machine
Comes with audio versions as well. Nice.
Does technology change the very state of being human?
Artificial intelligence and automation outsource even more of our cognitive functions to machines. What does this mean for art, for relationships — even for our connection to a higher being? What does it mean to be human in the age of the machine?
From the first post:
From Mining to Meaning
If you use digital devices, AI is already being sicced on the grotesque bricolage that is your life to eliminate potential sources of “friction” — a tech-speak jargon term that means roughly “whatever grinds your gears.”
Good stuff.
Whether it’s by monitoring your calorie intake, presenting you with “optimal” romantic prospects, or making it easier to spend a fortune on Amazon, algorithms are even now insinuating themselves into your every existential crack and crevice like so many squirts of WD-40.
The possibility of using AI to eliminate diseases is undeniably exciting. Excising problems like cancer from society would make for a better future. But is maximizing efficiency the only way to add value to the world?
Most people don’t see the world and its inhabitants simply as a resource to be mined more or less effectively, nor do we tend to think that human value is exhausted by the efficiency or otherwise of this resource mining. Sometimes we just want to make sense of things: to look closely at the world, grasp some pattern in it, and articulate its significance, without some further goal in mind. This desire is what drives people to become scholars, but it’s also why people look at art, listen to music, or strive to build relationships with their grandchildren. If a concern for efficiency is a big part of what makes us human, our desire to grasp significance and share meaningful experiences with others is just as crucial.
Like a world without cancer, a more thoughtful, artistic, and compassionate future strikes us as an unequivocal Good Thing. But adding this kind of value to the world requires something more than maximizing efficiency. Anyone who tries to “hack” being a thoughtful scholar, or a good friend, is kind of missing the point…
UPDATE
Will robots take your job? Humans ignore the coming AI revolution at their peril.
Artificial intelligence aims to replace the human mind, not simply make industry more efficient.
by Subhash Kak
Robots have transformed industrial manufacturing, and now they are being rolled out for food production and restaurant kitchens. Already, artificial intelligence (AI) machines can do many tasks where learning and judgment is required, including self-driving cars, insurance assessment, stock trading, accounting, HR and many tasks in healthcare. So are we approaching a jobless future, or will new jobs replace the ones that are lost?
According to the optimistic view, our current phase of increasing automation will create new kinds of employment for those who have been made redundant. There is some historical precedent for this: Over a hundred years ago, people feared that the automobile revolution would be bad for workers. But while jobs related to horse-drawn carriages disappeared, the invention of the car led to a need for automobile mechanics; the internal combustion engine soon found applications in mining, airplanes and other new fields.
Yeah, this is not a new concern. I've hit on the topic a number of times before. See also here.
The difference, however, is that today’s AI technology aims to replace the human mind, not simply make industry more efficient. This will have unprecedented consequences not predicted by the advent of the car, or the automated knitting machine…
BTW, another new read. Just getting started. A lot of technical overlap between AI, IA, AR, and VR.
Strongly recommend you tour author Jeremy Bailenson's Stanford Virtual Human Interaction Lab website.
From NPR's Science Friday (March 2016):
How advances in virtual reality will change how we work and communicate.
My specific interests go to the potential utility of this technology in health care -- inclusive of clinical pedagogy. I'll reserve judgments until I've finished Jeremy's book.
BTW: Jeff and April, recall, are deploying VR in their startup, NeuroTrainer.com.
FEB 8TH UPDATE
Any tangential AI/VR connection here? From Medium this morning:
We are our own typos
Hmmm...
Everyone seems to be writing about the recently announced effort by Amazon, Berkshire Hathaway, and JP Morgan Chase to attack their employee health costs. It is certainly newsworthy, and I am generally interested in whatever Amazon may do in healthcare.
They may very well have some success with this effort, but until I read a positive story about employee working conditions at Amazon, I’m going to be skeptical that any disruption in healthcare they accomplish with it is something that I shouldn’t be worried about.
So, instead, I’m going to write about why we can’t recognize our own typos, and what that means for our health.
As Wired summarized the problem a few years ago: “The reason we don’t see our own typos is because what we see on the screen is competing with the version that exists in our heads.” They go on to explain that one of the great skills of our big brains is that we build mental maps of the world, but those maps are not always faithful to the actual world.
As psychologist Tom Stafford explained: “We don’t catch every detail, we’re not like computers or NSA databases. Rather, we take in sensory information and combine it with what we expect, and we extract meaning.”
Thus, typos.
Unfortunately, the same is often true with how we view our health. We don’t think we’re as overweight as we are. We think we get more exercise than we do. We think our nutrition is better than it is. Overall, we think we’re in better health than we probably are.
Over the past few decades, the U.S. has been suffering “epidemics” of obesity, diabetes, asthma, and allergies, to name a few. Over half of adults now have one or more chronic conditions. Yet two-thirds of us still report being in good or excellent health, virtually unchanged for at least the last twenty years.
Something doesn’t jibe…
I'm reminded of the old QA auditor's saying, "you get what you INspect, not what you EXpect."
More news. Margalit is back with a vengeance:
Ambergan Prime
LOL. Read all of it.
Dear primary care doctor, Jeff Bezos is about to devour your lunch. All of it. And then he’ll eat the table, the plates, the napkins and the utensils too, so you’ll never have lunch ever again. Oh yeah, and they’ll also finally disrupt and fix health care once and for all, because enough is enough already. Mr. Bezos, it seems, got together with two of his innovator buddies, Warren Buffett from Berkshire Hathaway and Jamie Dimon from J.P. Morgan, and they are fixing up to serve us some freshly yummy and healthy concoction.
Let’s call it Ambergan for now.
This is big. This is huge. It comes from outside the sclerotic “industry”. And it’s all about technology. The founders are no doubt well versed in the latest disruption theories and Ambergan will be a classic Christensen stealth destroyer of existing markets. When the greatest investor that ever lived combines forces with the greatest banker in recent memory and the premier markets slayer of all times, who happens to be the richest man on earth, all to bring good things to life (sorry GE), nothing but goodness will certainly ensue.
Everybody inside and outside the legacy health care industry is going to write volumes about this magnificent new venture in the coming days and months, so I will leave the big picture to my betters. But since our soon to be dead industry has been busy lately bloviating about the importance of good old fashioned, relationship based primary care, perhaps it would be useful to understand that Ambergan is likely to take the entire primary care thing off the table and stash it safely in the bottomless cash vaults of its founders. It’s not personal, dear doctor. It’s business. Ambergan will be your primary care platform and you may even like it…
The Cherry on top:
"The Amazon platform IS the network, and there will be terms, conditions, stars and promotions. There certainly are many legacy obstacles to overcome, and perhaps that is why Amazon couldn’t or wouldn’t go it alone. Throwing highly regulated markets wide open requires two strong lobbying arms, and a federal government willing to play fast and loose. The stars are indeed perfectly aligned for the first true disruption of our health care since 1965."FEB 9TH UPDATE
Happy Birthday to me (72 today). What's the joke? "If I'd known I was gonna live this long, I'd have taken better care of myself."
Apropos of the overall topic of this post, another must-read has just come to my attention, via my latest issue of Science Magazine, in a book review entitled "The fetishization of quantification."
From the Amazon blurb:
How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government
'eh? "Data, Learning, Experience, Perception, Meaning..."
Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing--and shows how we can begin to fix the problem.
Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging "gaming the stats" or "teaching to the test." That's because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to—rather than a replacement for—judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial.
Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.
From the Science Magazine (non-paywalled) summary:
Although the numbers whose "tyranny" forms the subject of Jerry Muller's timely book share some of the attributes of scientific measurement, their purposes are primarily administrative and political. They are designed to be incorporated into systems of what might be called "data-ocracy," often for the sake of public accountability: Schools, hospitals, and corporate divisions whose numbers meet or exceed their goals are to be rewarded, whereas poor numbers, taken to imply underperformance, may bring penalties or even annihilation. In The Tyranny of Metrics, Muller shows how teachers, doctors, researchers, and managers are driven to sacrifice the professional goals they value in order to improve their numbers.
Yeah. While, hey, I'm a long-time "quant guy," a "QI guy," and I've had the Brent James training ("if you can't measure it, you can't improve it"), I too have concerns that the phrase "data-driven" can often mean putting your brain in "park." One of the cautions regarding "machine learning" goes to the concern that the machines will "learn" all of our bias errors.
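That "learned bias" worry is easy to see in a toy sketch (hypothetical, fabricated data; scikit-learn and NumPy assumed): train a model on historical decisions that quietly penalized one group, and it will faithfully reproduce that penalty on new cases.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions learns the bias along with everything else.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
qualification = rng.normal(size=n)      # the thing we actually care about
group = rng.integers(0, 2, size=n)      # an attribute that should be irrelevant

# Historical "hired" labels: driven partly by qualification,
# but penalizing group == 1 -- the encoded human bias.
hired = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on `group` comes out negative: the model has
# "learned" the bias and will keep applying it to new candidates.
print(dict(zip(["qualification", "group"], model.coef_[0].round(2))))
```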
The Science Magazine book review concludes:
In 1975, the American social psychologist Donald Campbell and the British economist C. A. E. Goodhart articulated independently the principle that reliance on measurement to incentivize behaviors leads almost inevitably to a corruption of the measures. Muller explains the logic of this corruption and defends, in place of indiscriminate numbers, an ideal of professional knowledge and experience.
Add another book to the stash.
Measurement, he concludes, can contribute to better performance, but only if the measures are designed to function in alliance with professional values rather than as an alternative to them. Good metrics cannot be detached from customs and practices but must depend on a willingness to immerse oneself in the work of these institutions.
ERRATUM
Speaking of "data," heard this in the car yesterday. The new "Panopticon":
With Closed-Circuit TV, Satellites And Phones, Millions Of Cameras Are Watching
Journalist Robert Draper writes in National Geographic that the proliferation of cameras focused on the public has led "to the point where we're expecting to be voyeur and exhibitionist 24/7."
See my November post "Artificial Intelligence and Ethics." See also my "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."
UPDATE
Interesting long-read article:
The Coming Software Apocalypse
A small group of programmers wants to change how we code — before catastrophe strikes.
By James Somers
ANOTHER ERRATUM
Just heard Emily Chang interviewed on MSNBC.
Will have to read this one too. Brings to mind this prior post of mine.
CODA
The ultimate utility of AI/NLP?
BELOW, PURE MARKETING GENIUS
This came across my Twitter feed. My first reaction was "yeah, right, this is straight outa SNL."
Nope. Some "As-Seen-on-TV" vendor has been fleecing the rubes on cheesy cable channels for months now. Making Bank.
Is this a great country, or what?
_____________
More to come...