
Tuesday, April 5, 2016

The future of health care? "Flawlessly run by AI-enabled robots, and 'essentially' free?"

 "Within the next decade, [according to Singularity University's Peter Diamandis] self-driving cars will eliminate all driving fatalities. Artificial intelligence will soon surpass the skills of the best human doctors and remove all inefficiencies from health care systems. These AIs will invent new pharmaceuticals to cure previously fatal diseases and will 3D print customized medicines based on genetic analysis of individual patients. Perhaps best of all, he said, plummeting production costs and rising prosperity will make such fantastic medical care essentially free..."

From Singularity University: The Harvard of Silicon Valley Is Planning for a Robot Apocalypse.

I'll be 80 in ten years, so I guess I'd love to have "essentially free," totally efficient, and personalized health care, in lieu of the expensive and frustrating "Shards" of my recent experience. Or, will health care delivery "transformation" in 2026 still be "Free Beer Tomorrow"?

See my June 2015 post "AI vs IA: At the cutting edge of IT R&D." See my December 2015 post wherein I cite Calum Chace's book "Surviving AI" (scroll down). See my March 18th post wherein I review Michael Lynch's "The Internet of Us."

I'd love to have Evgeny Morozov's take on this "Future-of-Healthcare" stuff.

As hospitals go digital, human stories get left behind

The official reason for my patient’s visit, according to her electronic medical chart, was fatigue, though that was far from her only concern.

In the exam room, this usually upbeat woman had a sad tale to tell. Several months earlier, a close relative fell seriously ill and my patient, elderly and not in great health herself, became a caregiver. The relative’s grueling treatment proved unsuccessful and he died. Following the funeral, my patient was overwhelmed by exhaustion, grief, and guilt. To make matters worse, the relative’s death exposed some long-submerged family tensions.

An interview and physical exam confirmed what we both already knew: My patient’s fatigue was more emotional than physical. In fact, at the end of our visit, she said that just describing what she’d been through made her feel better. I left the exam room feeling better myself. Hearing my patients’ stories and using those stories to help them heal is, for me, the most gratifying part of practicing medicine.

My warm feelings vanished as I sat down to document the visit. While I’ve used an electronic medical record for several years, Epic, the system my hospital recently adopted, makes recording stories such as the one my patient shared especially difficult. Her grief and her fatigue, which are inseparable in reality, Epic treats as different problems. That she lives alone and there’s conflict in her extended family, which are also inextricable from her symptoms, must be filed under a tab marked “Social Documentation.”

Epic features lists of diagnoses and template-generated descriptions of symptoms and physical examination findings. But it provides little sense of how one event led to the next, how one symptom relates to another, the emotional context in which the symptoms or events occurred, or the thought process of the physician trying to pull together individual strands of data into a coherent narrative. Epic is not well-suited to communicating a patient’s complex experience or a physician’s interpretation of that experience as it evolves over time, which is to say: Epic is not built to tell a story...

A medical record that abandons narrative in favor of a list does more than dehumanize our patients. It also hampers a clinician’s diagnostic abilities. Take a patient I saw recently, a middle-aged woman with palpitations. She was perimenopausal, stressed out at work, having trouble sleeping, drinking lots of coffee to stay awake during the day, and had a family history of heart disease. Any one of those issues might explain her palpitations, but more likely some combination of interrelated factors was causing them. Sorting out the story is crucial to deciding which tests to order and what treatment to recommend...

Hmmm... See my December post "Are structured data the enemy of health care quality?"


The Doctor’s New Dilemma

Suzanne Koven, M.D.

The woman sits perched on the end of my exam table, leaning forward, blond curls tumbling over her eyes, her precarious posture mirroring her emotional state. Though the symptom she describes is relatively minor — some diarrhea on and off — she appears distraught. She grips the table as if doing so will hold back her tears.

A psychiatrist colleague tells me that such moments, when there’s a clear mismatch between what a patient says and the intensity of feeling with which he or she says it, are especially ripe for probing. But the psychiatrist sees patients for 45 minutes. I have 15, several of which have already passed, in which to address and document the woman’s chief symptom: loose stool. I find myself in a quandary: Do I ask the patient why she’s so upset, or do I order a culture, prescribe antidiarrheal medication, type my note, and send her on her way?...

The dilemma I face most often as a primary care doctor, however, is not one that Shaw anticipated. The commodities I struggle to ration are my own time and emotional energy. Almost every day I see a patient like the woman with diarrhea and I find myself at a crossroads: Do I ask her what’s really bothering her and risk a time-consuming interaction? Or do I accept what she’s saying at face value and risk missing a chance to truly help her?

Often, the situation is not so dramatic. Say I walk into an exam room and find a patient waiting for me, reading a book. Do I ask what book she’s reading? If it’s one I’ve recently read myself, do I ask whether she, like me, enjoyed it but found it a bit longer than it needed to be? We might debate that point, and then she might start telling me about other novels her book group has read, and pretty soon we’d be having — horrors! — a conversation. Precious minutes wasted on useless chitchat...

...Weinberg may have had meaningful conversations, but he didn’t have “meaningful use.” In 1985, free from the shackles of the computer screen, Weinberg faces only one obstacle in engaging the troubled young woman: his own willingness to do so. His leisurely conversations with her seem as quaint to us now as black bags and glass hypodermics.

Still, the moment when Weinberg takes the plunge, when he asks the woman about pastry, seems very familiar. It’s a moment we have all inhabited and, all too often, pulled back from — a threshold we fear crossing. We imagine ourselves, now, in Weinberg’s place, and we recognize a double bind, a new doctor’s dilemma: if we ask about the pastry, we fall hopelessly behind in administrative tasks and feel more burned out. If we don’t ask about the pastry, we avoid the kind of intimacy that not only helps the patient, but also nourishes us and keeps us from feeling burned out.

The woman with the blond curls can keep back her tears no longer. She gestures to her midsection and sobs, “I can’t hold on to anything!”

I am struck by her choice of words, by the metaphorical power of her cry. In the past, she’s told me of her difficulty maintaining relationships, of her loneliness. I’ve recommended psychotherapy, but she’s declined. I consider pointing that out to her, suggesting that her diarrhea might be an eloquent manifestation of her psychological pain. But 25 minutes have passed, and there just isn’t time to open that door.

I order the cultures, prescribe an antidiarrheal drug and some dietary modifications, briefly mention psychotherapy again, and leave the room. Then I sit at my workstation to document and bill for our encounter, perched at the edge of my seat, on the verge of despair.

Nortin Hadler, MD argues for fixing U.S. health care by "monetizing altruism," recall? Well, will our nascent AI/robotic clinical successors possess the emotive attributes of "empathy," "sympathy," and "altruism" in addition to their ostensibly superior adaptive "cognition"? See my January post "We need more seasoned clinicians at the front lines of digital health." Will technology render that idea moot?

Will the robot give a flip about your "story?"

apropos, eminent computer scientist David Gelernter:

Machines That Will Think and Feel
Artificial intelligence is still in its infancy—and that should scare us

Artificial intelligence is breathing down our necks: Software built by Google startled the field last week by easily defeating the world’s best player of the Asian board game Go in a five-game match. Go resembles chess in the deep, complex problems it poses but is even harder to play and has resisted AI researchers longer. It requires mastery of strategy and tactics while you conceal your own plans and try to read your opponent’s.

Mastering Go fits well into the ambitious goals of AI research. It shows us how much has been accomplished and forces us to confront, as never before, AI’s future plans. So what will artificial intelligence accomplish and when?

AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake...

Another highly recommended, fairly long read.

But, wait! There's more! From The Atlantic:

What Is a Robot?
The question is more complicated than it seems.

The year is 2016. Robots have infiltrated the human world. We built them, one by one, and now they are all around us. Soon there will be many more of them, working alone and in swarms. One is no larger than a single grain of rice, while another is larger than a prairie barn. These machines can be angular, flat, tubby, spindly, bulbous, and gangly. Not all of them have faces. Not all of them have bodies.

And yet they can do things once thought impossible for machines. They vacuum carpets, zip up winter coats, paint cars, organize warehouses, mix drinks, play beer pong, waltz across a school gymnasium, limp like wounded animals, write and publish stories, replicate abstract expressionist art, clean up nuclear waste, even dream.

Except, wait. Are these all really robots? What is a robot, anyway?

This has become an increasingly difficult question to answer. Yet it’s a crucial one. Ubiquitous computing and automation are occurring in tandem. Self-operating machines are permeating every dimension of society, so that humans find themselves interacting more frequently with robots than ever before—often without even realizing it. The human-machine relationship is rapidly evolving as a result. Humanity, and what it means to be a human, will be defined in part by the machines people design...

The tech research firm BI Intelligence estimates that the market for corporate and consumer robots will grow to $1.5 billion by 2019. The rise of the robots seems to have reached a tipping point; they’ve broken out of engineering labs and novelty stores, and moved into homes, hospitals, schools, and businesses. Their upward trajectory seems unstoppable. This isn’t necessarily a good thing. While robots are poised to help improve and even save human lives, people are left grappling with what’s at stake: A robot car might be able to safely drive you to work, but, because of robots, you no longer have a job.

This tension is likely to affect how people treat robots. Humans have long positioned themselves as adversaries to their machines, and not just in pop culture. More than 80 years ago, New York’s industrial commissioner, Frances Perkins, vowed to fulfill her duty to prevent “the rearing of a race of robots.” Thirty years ago, Nolan Bushnell, the founder of Atari, told The New York Times that he believed the ultimate role of robots in society would be, in his word, slaves...
What matters ... is who is in control—and how well humans understand that autonomy occurs along a gradient. Increasingly, people are turning over everyday tasks to machines without necessarily realizing it. “People who are between 20 and 35, basically they’re surrounded by a soup of algorithms telling them everything from where to get Korean barbecue to who to date,” Markoff told me. “That’s a very subtle form of shifting control. It’s sort of soft fascism in a way, all watched over by these machines of loving grace. Why should we trust them to work in our interest? Are they working in our interest? No one thinks about that.”

“A society-wide discussion about autonomy is essential,” he added.

In such a conversation, people would have to try to answer the question of how much control humans are willing to relinquish, and for what purposes. And that question may not be answerable until power dynamics have shifted irreversibly...

Yet another good long read. You might want to triangulate it with the essay "Four Futures."

More mundanely,


Ran into this piece:
Healthcare is Shifting, but can it get out of its own way?
By Bill Bunting
In the not-so-distant future, healthcare organizations as we know them today will change. 
Providers will send prescriptions directly to the patient, follow up by text or asynchronous video, and check vitals such as blood pressure wirelessly through Bluetooth and mobile. They will treat acute conditions — and even more — in the patient’s home, both through direct care and via real-time video. They will deliver sterile injectables to home caregivers by drone or courier, and monitor their delivery through a plethora of innovative technologies...

Interesting piece. Worth your time.

Curious, I recently tried out the authoring platform, writing a short piece on our current national tx-resistant political UTI.


A $1 million pill that extended life by 10 years would be considered cost-effective, but to provide it to every American would require an expenditure that is equivalent to more than 1000 years of US drug spending. It would be both painful and difficult to deny such a pill to patients who could not afford it. But alternative methods of rationing are perhaps even less palatable. Such are the financial, political, and cultural limits of our ability to manage spending for expensive, effective medicine.

In fact, we may already have reached the point of confronting the fact that we cannot all have it all. New, expensive drugs for hepatitis C—Viekira Pak, Sovaldi, and Harvoni—severely stress budget-constrained programs like Medicaid and the Veterans Health Administration. Even at the steep discounts those programs receive, these treatments—though cost-effective—are indicated for such large populations that their aggregate cost would overwhelm budgeted resources. The day that life-extending $1 million “miracle” pill arrives (or the precision-medicine equivalent of a collection of drugs), we may look back on the current hepatitis C treatment funding problems nostalgically. As innovation continues, drug pricing and budgeting problems will only get worse....

Austin Frakt, PhD, "JAMA Forum: We Can’t All Have It All: The Economic Limits of Pharmaceutical Innovation"
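Frakt's "more than 1000 years" claim is easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, using rough figures I'm assuming here (roughly 320 million Americans and roughly $310 billion in annual U.S. prescription drug spending, mid-2010s; neither number is from the article):

```python
# Rough sanity check of "more than 1000 years of US drug spending."
# All figures are approximate assumptions, not from Frakt's piece.

PILL_COST = 1_000_000                 # hypothetical $1 million pill, per person
US_POPULATION = 320_000_000           # ~US population, mid-2010s
ANNUAL_DRUG_SPEND = 310_000_000_000   # ~US prescription drug spending per year

total_cost = PILL_COST * US_POPULATION            # one pill for every American
years_of_spending = total_cost / ANNUAL_DRUG_SPEND

print(f"Total cost: ${total_cost:,.0f}")
print(f"Equivalent years of US drug spending: {years_of_spending:,.0f}")
```

That works out to roughly $320 trillion, or on the order of a thousand years of drug spending, consistent with the claim.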

I am again reminded of Einer Elhauge, "Allocating Health Care Morally" (pdf):
At the extreme, the moral paradigm insists that health care should be provided whenever it has any positive health benefit, denouncing as immoral any attempt to weigh health against mere monetary costs...
...Any denial of health care produced by a limit on the number of professionals or equipment would be the moral responsibility of those who imposed the limit.

Third, and relatedly, under moral absolutism society has an affirmative duty to provide the resources necessary to eliminate any scarcity that prevents the provision of beneficial health care...

...This is where our technology solution comes in. Google is dreaming of connectivity balloons while Facebook prefers drones as the means to connect billions of laborers to the mobile virtual reality we all partake in. Having Google makes you feel educated and well informed. Having Facebook makes you feel connected, important and well liked. Having virtualized health care will make you feel healthy and well cared for. And it’s all free, infinitely abundant and available equally to all, regardless of socioeconomic condition.  The Internet is your friend, your confidant, your teacher, your counsel, your entertainer, and now it will be your doctor, because the Internet knows you better than you know yourself, is there for you when no one else is, misses you terribly when you stay away, and cares for you as nobody cared for you before. The Internet is you.

...we have one last barrier that is painfully real. We don’t have the technology to hack the doctors. We are certainly talking up a big game while scrambling to put something together that at least looks at first glance like the real McCoy. We talk about tricorders and artificial intelligence. We talk about deep machine learning and veritable oceans of omniscient data. We talk a lot about robots, genomes, bloodless tests and iPhones that deliver intensive medical care. But we have no idea how to mix the doctor solute into the virtual technology solvent to generate the coveted solution we put forward as fait accompli.

Technology in its current state cannot absorb and distill, let alone replicate, highly variable processes that lack both a clear starting point and a predefined endpoint. We don’t know what we don’t know, and in spite of flowery rhetoric, computers can only perform, and can indeed improve upon, tasks we fully understand and are able to precisely codify down to the most minute detail. Simply put, without an atomic level understanding of clinical decision making, we cannot dilute the doctor over and over again, until there is no visible trace of human physicians in our high tech brew of health care. We can however abstract a coarse approximation of relatively straightforward scenarios at the low risk end of the clinical spectrum, and advertise aggressively that the Southwest Airlines or its evil younger cousin Uber of medicine has arrived.

Here is the watershed event to watch for: the first FDA approved app that will diagnose, prescribe and deliver medications to your house by secure drone. It may initially be confined to over the counter stuff, but once that is mainstreamed, simple meds like antibiotics, high end antacids, allergy pills and such, will certainly follow. Next up will be staples such as simvastatin, Lisinopril and metformin, first the renewals and then a slew of new diagnoses of pre-this and pre-that. At the high end of disease, “precision” medicine will isolate one or two rare scenarios that affect one in a million people, script them and execute them flawlessly once or twice without physician intervention. Then we declare victory and spread the gospel to every $5 mobile phone from Guizhou province to the Appalachian Mountains to the banks of the Ganges river.

Médecine sans Médecins
There is no doubt in my mind that we shall overcome the first two barriers at very short order. There is no doubt in my mind that even if we fail to hack doctors in the abstract sense, we will be hacking the medical profession to pieces in the most physical sense. And there should be no doubt in anybody’s mind that whatever these cheap hacks are doing to our health care, the effects will not be apparent for decades, and even then the results will be attributed to the inevitability of external factors such as cultural change, climate change, famine, wars, migrations, solar flares, or random disturbances in the Force. Three centuries later, it looks like John Dryden had it right after all, and “God never made his Work, for Man to mend.”

Ouch. Margalit Gur-Arie is always worth waiting for. See her eagerly awaited new post, "Hacking Doctors... to Pieces."

More to come...
