
Friday, November 3, 2017

Clinical cognition in the digital age


From the New England Journal of Medicine (open access essay):
Lost in Thought — The Limits of the Human Mind and the Future of Medicine
Ziad Obermeyer, M.D., and Thomas H. Lee, M.D.
In the good old days, clinicians thought in groups; “rounding,” whether on the wards or in the radiology reading room, was a chance for colleagues to work together on problems too difficult for any single mind to solve.

Today, thinking looks very different: we do it alone, bathed in the blue light of computer screens.

Our knee-jerk reaction is to blame the computer, but the roots of this shift run far deeper. Medical thinking has become vastly more complex, mirroring changes in our patients, our health care system, and medical science. The complexity of medicine now exceeds the capacity of the human mind.

Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research.

It’s ironic that just when clinicians feel that there’s no time in their daily routines for thinking, the need for deep thinking is more urgent than ever. Medical knowledge is expanding rapidly, with a widening array of therapies and diagnostics fueled by advances in immunology, genetics, and systems biology. Patients are older, with more coexisting illnesses and more medications. They see more specialists and undergo more diagnostic testing, which leads to exponential accumulation of electronic health record (EHR) data. Every patient is now a “big data” challenge, with vast amounts of information on past trajectories and current states.

All this information strains our collective ability to think. Medical decision making has become maddeningly complex. Patients and clinicians want simple answers, but we know little about whom to refer for BRCA testing or whom to treat with PCSK9 inhibitors. Common processes that were once straightforward — ruling out pulmonary embolism or managing new atrial fibrillation — now require numerous decisions...
"Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research."

'eh?

I am reminded of a prior contrarian post "Are structured data the enemy of health care quality?"

More recently, I've reported on the latest (excessively?) exuberant rah-rah over stuff like AI, NLP, and robotics. See also here.

More Obermeyer and Lee from NEJM:
The first step toward a solution is acknowledging the profound mismatch between the human mind’s abilities and medicine’s complexity. Long ago, we realized that our inborn sensorium was inadequate for scrutinizing the body’s inner workings — hence, we developed microscopes, stethoscopes, electrocardiograms, and radiographs. Will our inborn cognition alone solve the mysteries of health and disease in a new century? The state of our health care system offers little reason for optimism. 
But there is hope. The same computers that today torment us with never-ending checkboxes and forms will tomorrow be able to process and synthesize medical data in ways we could never do ourselves. Already, there are indications that data science can help us with critical problems...
I found it quite interesting that Lincoln Weed, JD, co-author of the excellent "Medicine in Denial" (now available free in searchable PDF format), was the first to comment under the essay.
LINCOLN WEED
Underhill VT
October 04, 2017

Medicine has long been operating in denial of complexity and its solutions
The authors correctly observe, "Algorithms that learn from human decisions will also learn human mistakes." But the authors understate the problem. "The complexity of medicine," they argue, "NOW exceeds the capacity of the human mind" (emphasis added). This is a bit like saying, "The demands of transportation NOW exceed the capacity of horse-powered vehicles." In reality, the complexity of medicine overtook the human mind many decades ago. Moreover, conventional software engineering demonstrated the potential for tools to cope with complexity and transform medicine long before algorithms driven by machine learning emerged. Medical education, licensure, and practice have been operating in denial of this reality.
 
Interested readers are referred to Weed LL, Physicians of the Future, New Eng. J. Med. 1981;304:903-907; Weed LL, Weed L, Medicine in Denial, CreateSpace, 2011 (a book available in full text at www.world3medicine.org); and a recent guest blog post, https://nlmdirector.nlm.nih.gov/2017/09/05/larry-weeds-legacy-and-clinical-decision-support/. Disclosure: I am a son of and co-author with the late Dr. Larry Weed, author of the article and lead author of the book just cited.
I could not recommend the Weeds' book more highly. I've cited it multiple times, e.g., "Down in the Weeds'," "Back down in the Weeds'," and "Back down in the Weeds': A Complex Systems Science Approach to Healthcare Costs and Quality."

Back to more Obermeyer and Lee:
...Machine learning has already spurred innovation in fields ranging from astrophysics to ecology. In these disciplines, the expert advice of computer scientists is sought when cutting-edge algorithms are needed for thorny problems, but experts in the field — astrophysicists or ecologists — set the research agenda and lead the day-to-day business of applying machine learning to relevant data.
In medicine, by contrast, clinical records are considered treasure troves of data for researchers from nonclinical disciplines. Physicians are not needed to enroll patients — so they’re consulted only occasionally, perhaps to suggest an interesting outcome to predict. They are far from the intellectual center of the work and rarely engage meaningfully in thinking about how algorithms are developed or what would happen if they were applied clinically.
But ignoring clinical thinking is dangerous. Imagine a highly accurate algorithm that uses EHR data to predict which emergency department patients are at high risk for stroke. It would learn to diagnose stroke by churning through large sets of routinely collected data. Critically, all these data are the product of human decisions: a patient’s decision to seek care, a doctor’s decision to order a test, a diagnostician’s decision to call the condition a stroke. Thus, rather than predicting the biologic phenomenon of cerebral ischemia, the algorithm would predict the chain of human decisions leading to the coding of stroke.
Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors.
Ultimately, machine learning in medicine will be a team sport, like medicine itself. But the team will need some new players: clinicians trained in statistics and computer science, who can contribute meaningfully to algorithm development and evaluation. Today’s medical education system is ill prepared to meet these needs. Undergraduate premedical requirements are absurdly outdated. Medical education does little to train doctors in the data science, statistics, or behavioral science required to develop, evaluate, and apply algorithms in clinical practice.
The integration of data science and medicine is not as far away as it may seem: cell biology and genetics, once also foreign to medicine, are now at the core of medical research, and medical education has made all doctors into informed consumers of these fields. Similar efforts in data science are urgently needed. If we lay the groundwork today, 21st-century clinicians can have the tools they need to process data, make decisions, and master the complexity of 21st-century patients.
Big "AI/IA" takeaway for me:
"Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors."
Indeed. That is a huge and perhaps underappreciated concern in light of the prevalence of errors and omissions in many, many sources of data.
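The mechanism Obermeyer and Lee describe — an algorithm learning the chain of human decisions rather than the underlying biology — is easy to see in a toy simulation. The sketch below is purely illustrative (the group names, rates, and the stroke scenario are my assumptions, not data from the essay): two patient groups have the identical true stroke rate, but one group is tested less often, so its strokes are recorded in the EHR less often. Any model trained on the recorded labels inherits that gap.

```python
import random

random.seed(0)

TRUE_RATE = 0.10                    # same underlying stroke rate in both groups
TEST_RATE = {"A": 0.9, "B": 0.5}    # group B is tested far less often

def recorded_rate(group, n=100_000):
    """Rate of *recorded* stroke diagnoses for a simulated group."""
    recorded = 0
    for _ in range(n):
        has_stroke = random.random() < TRUE_RATE
        was_tested = random.random() < TEST_RATE[group]
        # A diagnosis enters the record only if the patient was tested.
        if has_stroke and was_tested:
            recorded += 1
    return recorded / n

rate_a = recorded_rate("A")
rate_b = recorded_rate("B")
print(f"recorded stroke rate, group A: {rate_a:.3f}")
print(f"recorded stroke rate, group B: {rate_b:.3f}")
```

Group A's recorded rate lands near 0.09 (0.10 × 0.9) and group B's near 0.05 (0.10 × 0.5), even though the true risk is identical. A label-learning algorithm would dutifully score group B as lower risk — exactly the kind of "automating and even magnifying" of access disparities the essay warns about.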


UPDATE

An important new "AI Now" report is out. See "Why AI is Still Waiting for its Ethics Transplant." Much more on this shortly.
__

Below: audio interview with Dr. Obermeyer.
In the aggregate foregoing vein, you might also like my prior riffs on "The Art of Medicine." In addition, see my "Philosophia sana in ars medica sana."

CODA

Save the date.


____________

More to come...
