Acclaimed author Michael Lewis has hit another one out of the park. I recommend that you stop what you're doing ASAP and read it. Lewis's biography of Daniel Kahneman and his collaborator, the late Amos Tversky, is simply a compelling must-read. This odd couple of Israeli psychologists, two of the clearest thinkers of our time, changed the course of the psych discipline and founded the field now known as "behavioral economics."
Beyond Lewis's excellent lay-accessible translations of the core elements of the Kahneman-Tversky "Prospect Theory" (as it would come to be widely known), his riveting, poignant recounting of the personal stories of the lives of these two unique men is itself worthy of some kind of literary award.
With regard to the technical specifics as they go to clinical cognition and decision-making, the book is a gold mine. Chapter 8 in particular.
8: Going Viral
The young woman they called him to examine that summer day was still in a state of shock. As Don Redelmeier understood it, her car had smashed head-on into another car a few hours earlier, and the ambulance had rushed her straight to Sunnybrook Hospital. She’d suffered broken bones everywhere— some of which they had detected and others, it later became clear, they had not. They’d found the multiple fractures in her ankles, feet, hips, and face. (They’d missed the fractures in her ribs.) But it was only after she arrived in the Sunnybrook operating room that they realized there was something wrong with her heart.
Sunnybrook was Canada’s first and largest regional trauma center, an eruption of red-brown bricks in a quiet Toronto suburb. It had started its life as a hospital for soldiers returning from the Second World War, but as the veterans died, its purpose shifted. In the 1960s the government finished building what would become at its widest a twenty-four-lane highway across Ontario. It would also become the most heavily used road in North America, and one of its busiest stretches passed close by the hospital. The carnage from Highway 401 gave the hospital a new life. Sunnybrook rapidly acquired a reputation for treating victims of automobile accidents; its ability to cope with one sort of medical trauma inevitably attracted other sorts of trauma. “Business begets business,” explained one of Sunnybrook’s administrators. By the turn of the twenty-first century, Sunnybrook was the go-to destination not only for victims of car crashes but for attempted suicides, wounded police officers, old people who had taken a fall, pregnant women with serious complications, construction workers who had been hurt on the job, and the survivors of gruesome snowmobile crashes— who were medevaced in with surprising frequency from the northern Canadian boondocks. Along with the trauma came complexity. A lot of the damaged people who turned up at Sunnybrook had more than one thing wrong with them.
That’s where Redelmeier entered. By nature a generalist, and by training an internist, his job in the trauma center was, in part, to check the understanding of the specialists for mental errors. “It isn’t explicit but it’s acknowledged that he will serve as a check on other people’s thinking,” said Rob Fowler, an epidemiologist at Sunnybrook. “About how people do their thinking. He keeps people honest. The first time people interact with him they’ll be taken aback: Who the hell is this guy, and why is he giving me feedback? But he’s lovable, at least the second time you meet him.” That Sunnybrook’s doctors had come to appreciate the need for a person to serve as a check on their thinking, Redelmeier thought, was a sign of how much the profession had changed since he entered it in the mid-1980s. When he’d started out, doctors set themselves up as infallible experts; now there was a place in Canada’s leading regional trauma center for a connoisseur of medical error. A hospital was now viewed not just as a place to treat the unwell but also as a machine for coping with uncertainty. “Wherever there is uncertainty there has got to be judgment,” said Redelmeier, “and wherever there is judgment there is an opportunity for human fallibility.”
Across North America, more people died every year as a result of preventable accidents in hospitals than died in car crashes— which was saying something. Bad things happened to patients, Redelmeier often pointed out, when they were moved without extreme care from one place in a hospital to another. Bad things happened when patients were treated by doctors and nurses who had forgotten to wash their hands. Bad things even happened to people when they pressed hospital elevator buttons. Redelmeier had actually co-written an article about that: “Elevator Buttons as Unrecognized Sources of Bacterial Colonization in Hospitals.” For one of his studies, he had swabbed 120 elevator buttons and 96 toilet seats at three big Toronto hospitals and produced evidence that the elevator buttons were far more likely to infect you with some disease.
But of all the bad things that happened to people in hospitals, the one that most preoccupied Redelmeier was clinical misjudgment. Doctors and nurses were human, too. They sometimes failed to see that the information patients offered them was unreliable— for instance, patients often said that they were feeling better, and might indeed believe themselves to be improving, when they had experienced no real change in their condition. Doctors tended to pay attention mainly to what they were asked to pay attention to, and to miss some bigger picture. They sometimes failed to notice what they were not directly assigned to notice. “One of the things Don taught me was the value of observing the room when the patient isn’t there,” says Jon Zipursky, chief of residents at Sunnybrook. “Look at their meal tray. Did they eat? Did they pack for a long stay or a short one? Is the room messy or neat? Once we walked into the room and the patient was sleeping. I was about to wake him up and Don stops me and says, ‘There is a lot you can learn about people from just watching.’”
Doctors tended to see only what they were trained to see: That was another big reason bad things might happen to a patient inside a hospital. A patient received treatment for something that was obviously wrong with him, from a specialist oblivious to the possibility that some less obvious thing might also be wrong with him. The less obvious thing, on occasion, could kill a person.
The conditions of people mangled on the 401 were often so dire that the most obvious things wrong with them demanded the complete attention of the medical staff, and immediate treatment. But the dazed young woman who arrived in the Sunnybrook emergency room directly from her head-on car crash, with her many broken bones, presented her surgeons, as they treated her, with a disturbing problem. The rhythm of her heartbeat had become wildly irregular. It was either skipping beats or adding extra beats; in any case, she had more than one thing seriously wrong with her.
Immediately after the trauma center staff called Redelmeier to come to the operating room, they diagnosed the heart problem on their own— or thought they had. The young woman remained alert enough to tell them that she had a past history of an overactive thyroid. An overactive thyroid can cause an irregular heartbeat. And so, when Redelmeier arrived, the staff no longer needed him to investigate the source of the irregular heartbeat but to treat it. No one in the operating room would have batted an eye if Redelmeier had simply administered the drugs for hyperthyroidism. Instead, Redelmeier asked everyone to slow down. To wait. Just a moment. Just to check their thinking— and to make sure they were not trying to force the facts into an easy, coherent, but ultimately false story.
Something bothered him. As he said later, “Hyperthyroidism is a classic cause of an irregular heart rhythm, but hyperthyroidism is an infrequent cause of an irregular heart rhythm.” Hearing that the young woman had a history of excess thyroid hormone production, the emergency room medical staff had leaped, with seeming reason, to the assumption that her overactive thyroid had caused the dangerous beating of her heart. They hadn’t bothered to consider statistically far more likely causes of an irregular heartbeat. In Redelmeier’s experience, doctors did not think statistically. “Eighty percent of doctors don’t think probabilities apply to their patients,” he said. “Just like 95 percent of married couples don’t believe the 50 percent divorce rate applies to them, and 95 percent of drunk drivers don’t think the statistics that show that you are more likely to be killed if you are driving drunk than if you are driving sober applies to them.”
Redelmeier asked the emergency room staff to search for other, more statistically likely causes of the woman’s irregular heartbeat. That’s when they found her collapsed lung. Like her fractured ribs, her collapsed lung had failed to turn up on the X-ray. Unlike the fractured ribs, it could kill her. Redelmeier ignored the thyroid and treated the collapsed lung. The young woman’s heartbeat returned to normal. The next day, her formal thyroid tests came back: Her thyroid hormone production was perfectly normal. Her thyroid never had been the issue. “It was a classic case of the representativeness heuristic,” said Redelmeier. “You need to be so careful when there is one simple diagnosis that instantly pops into your mind that beautifully explains everything all at once. That’s when you need to stop and check your thinking.”
It wasn’t that what first came to mind was always wrong; it was that its existence in your mind led you to feel more certain than you should be that it was correct. “Beware of the delirious guy in the emergency unit with the long history of alcoholism,” said Redelmeier, “because you will say, ‘He’s just drunk,’ and you’ll miss the subdural hematoma.” The woman’s surgeons had leapt from her medical history to a diagnosis without considering the base rates. As Kahneman and Tversky long ago had pointed out, a person who is making a prediction— or a diagnosis— is allowed to ignore base rates only if he is completely certain he is correct. Inside a hospital, or really anyplace else, Redelmeier was never completely certain about anything, and he didn’t see why anybody else should be, either...
Lewis, Michael (2016-12-06). The Undoing Project: A Friendship That Changed Our Minds (Kindle Locations 2821-2890). W. W. Norton & Company. Kindle Edition.
That's just the opening stanza. Chapter 8 is a tour de force. Just a taste more:
Redelmeier had grown up in Toronto, in the same house in which his stockbroker father had been raised. The youngest of three boys, he often felt a little stupid; his older brothers always seemed to know more than he did and were keen to let him know it. Redelmeier also had a speech impediment— a maddening stammer he would never cease to work hard, and painfully, to compensate for. (When he called for restaurant reservations, he just told them his name was “Don Red.”) His stammer slowed him down when he spoke; his weakness as a speller slowed him down when he wrote. His body was not terribly well coordinated, and by the fifth grade he required glasses to correct his eyesight. His two great strengths were his mind and his temperament. He was always extremely good at math; he loved math. He could explain it, too, and other kids came to him when they couldn’t understand what the teacher had said. That is where his temperament entered. He was almost peculiarly considerate of others. From the time he was a small child, grown-ups had noticed that about him: His first instinct upon meeting someone else was to take care of the person.
Still, even from math class, where he often wound up helping all the other students, what he took away was a sense of his own fallibility. In math there was a right answer and a wrong answer, and you couldn’t fudge it. “And the errors are sometimes predictable,” he said. “You see them coming a mile away and you still make them.” His experience of life as an error-filled sequence of events, he later thought, might be what had made him so receptive to an obscure article, in the journal Science, that his favorite high school teacher, Mr. Fleming, had given him to read in late 1977. He took the article home with him and read it that night at his desk.
The article was called “Judgment Under Uncertainty: Heuristics and Biases.” It was in equal parts familiar and strange— what the hell was a “heuristic”? Redelmeier was seventeen years old, and some of the jargon was beyond him. But the article described three ways in which people made judgments when they didn’t know the answer for sure. The names the authors had given these— representativeness, availability, anchoring— were at once weird and seductive. They made the phenomenon they described feel like secret knowledge. And yet what they were saying struck Redelmeier as the simple truth— mainly because he was fooled by the questions they put to the reader... [ibid, Kindle Locations 2891-2908].
I first got onto "Judgment Under Uncertainty" more than a quarter century ago. My hardcopy of the book, along with the subsequent Kahneman-Tversky "Choices, Values, and Frames," sits right here on the bookshelf above my iMac in my office.
If you want to go down deep in the scholarly/technical weeds, see their 1979 paper "Prospect Theory: An Analysis of Decision under Risk" (pdf).
Also recommended: Daniel Kahneman's acclaimed recent solo book "Thinking, Fast and Slow."
BTW, apropos of current U.S. political events, I found this little passage in "The Undoing Project" of relevance:
In late 1973 or early 1974, [Daniel Kahneman] gave a talk, which he would deliver more than once, and which he called “Cognitive Limitations and Public Decision Making.” It was troubling to consider, he began, “an organism equipped with an affective and hormonal system not much different from that of the jungle rat being given the ability to destroy every living thing by pushing a few buttons.” Given the work on human judgment that he and [Amos Tversky] had just finished, he found it further troubling to think that “crucial decisions are made, today as thousands of years ago, in terms of the intuitive guesses and preferences of a few men in positions of authority.” The failure of decision makers to grapple with the inner workings of their own minds, and their desire to indulge their gut feelings, made it “quite likely that the fate of entire societies may be sealed by a series of avoidable mistakes committed by their leaders.” [op cit, Kindle Locations 3316-3322].
Are we reminded of anyone?
This guy?
With respect to the allusion to "base rates" above, I should alert readers to this cool little book. My Bayes chops are pretty tight, so I don't really need it. Nonetheless, it never hurts to review stuff from yet another perspective, so I bought it. And at the $2.99 Kindle price, you can't go wrong.
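For the uninitiated: the reason base rates matter at the bedside drops right out of Bayes' theorem. Here's a minimal worked sketch in Python -- the prevalence, sensitivity, and false-positive numbers are hypothetical round figures for illustration, not clinical data:

```python
# Bayes' rule applied to a diagnostic sign: how often is the "obvious"
# diagnosis actually the right one, once the base rate is factored in?
# All numbers below are hypothetical, for illustration only.

def posterior(prior, p_sign_given_dx, p_sign_given_not_dx):
    """P(diagnosis | sign) via Bayes' theorem."""
    p_sign = (p_sign_given_dx * prior +
              p_sign_given_not_dx * (1.0 - prior))
    return (p_sign_given_dx * prior) / p_sign

# Say hyperthyroidism affects 1 in 100 comparable patients (prior = 0.01),
# nearly always produces an irregular heartbeat when present (0.9),
# but an irregular heartbeat also shows up in 10% of patients without it.
p = posterior(prior=0.01, p_sign_given_dx=0.9, p_sign_given_not_dx=0.10)
print(f"P(hyperthyroid | irregular heartbeat) = {p:.2f}")  # ~0.08
```

Even with a sign that hyperthyroidism produces "classically," the low base rate drags the posterior down to roughly 8% -- precisely Redelmeier's point that a classic cause can still be an infrequent cause.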
Relevant to all of the foregoing is a current post over at THCB, "Lead Time Bias and Court Rooms," by regular contributor Saurabh Jha, MD.
"...Physicians are asked to be mindful of the population, not just the individual patient. The message from policy makers is that we must be sensitive of limited resources. We’re chastised for overutilization of imaging. Yet it’s hard to see how excess utilization can be curbed unless courts respect evidence-based medicine. It’s at times like this that meaningful tort reform appears painfully conspicuous by its absence from the Affordable Care Act..."
Yeah. There's some very interesting stuff going to Dr. Jha's observations in the Michael Lewis book -- i.e., the vexing tension between trying to optimally treat this patient vs. setting effective evidence-based medicine ("EBM") policy optimally impacting all patients. To wit:
As it happens, a movement was taking shape right then and there in Toronto that came to be called “evidence-based medicine.” The core idea of evidence-based medicine was to test the intuition of medical experts— to check the thinking of doctors against hard data. When subjected to scientific investigation, some of what passed for medical wisdom turned out to be shockingly wrong-headed. When Redelmeier entered medical school in 1980, for instance, the conventional wisdom held that if a heart attack victim suffered from some subsequent arrhythmia, you gave him drugs to suppress it. By the end of Redelmeier’s medical training, seven years later, researchers had shown that heart attack patients whose arrhythmia was suppressed died more often than the ones whose condition went untreated. No one explained why doctors, for years, had opted for a treatment that systematically killed patients— though proponents of evidence-based medicine were beginning to look to the work of Kahneman and Tversky for possible explanations. But it was clear that the intuitive judgments of doctors could be gravely flawed: The evidence of the medical trials now could not be ignored. And Redelmeier was alive to the evidence. “I became very aware of the buried analysis— that a lot of the probabilities were being made up by expert opinion,” said Redelmeier. “I saw error in the way people think that was being transmitted to patients. And people had no recognition of the mistakes that they were making. I had a little unhappiness, a little dissatisfaction, a sense that all was not right in the state of Denmark.”... [op cit, Kindle Locations 2961-2971].
More on this later. I gotta run off to my semi-annual radiation oncology follow-up visit. Stay tuned.
P.M. UPDATE
I'm back. Prognosis is looking good one year out of tx. My PSA is down to nil, and there are no other signs of further trouble.
Back on topic. Other works to consider when ruminating on clinical cognition?
Dr. Groopman, in "How Doctors Think," cites Kahneman and Tversky multiple times, e.g.:
As there are classic clinical maladies, there are classic cognitive errors. Alter's misdiagnosis resulted from such an error, the use of a heuristic called "availability." Amos Tversky and Daniel Kahneman, psychologists from the Hebrew University in Jerusalem, explored this shortcut in a seminal paper more than two decades ago. Kahneman won the Nobel Prize in economics in 2002 for work illuminating the way certain patterns of thinking cause irrational decisions in the marketplace; Tversky certainly would have shared the prize had he not died an untimely death in 1996.
"Availability" means the tendency to judge the likelihood of an event by the ease with which relevant examples come to mind. Alter's diagnosis of subclinical pneumonia was readily available to him because he had seen numerous cases of the infection over recent weeks. As in any environment, there is an ecology in medical clinics. For example, large numbers of patients who abuse alcohol populate inner-city hospitals like Cook County in Chicago, Highland in Oakland, or Bellevue in Manhattan; over the course of a week, an intern in one of these hospitals may evaluate ten trembling alcoholics, all of whom have DTs—delirium tremens, a violent shaking due to withdrawal. He will tend to judge it as highly likely that the eleventh jittery alcoholic has DTs because it readily comes to mind, although there is a long list of diagnostic possibilities for uncontrolled shaking. DTs is the most available hypothesis based on his most recent experience. He is familiar with DTs, and that familiarity points his thinking that way.
Alter experienced what might be called "distorted pattern recognition," caused by the background "ecology" of Begaye's case. Instead of integrating all the key information, he cherry-picked only a few features of her illness: her fever, her rapid breathing, and the shift in the acid-base balance in her blood. He rationalized the contradictory data—the absence of any streaking on the chest x-ray, the normal white blood cell count—as simply reflecting the earliest stage of an infection. In fact, these discrepancies should have signaled to him that his hypothesis was wrong.
Such cognitive cherry-picking is termed "confirmation bias." This fallacy, confirming what you expect to find by selectively accepting or ignoring information, follows what Tversky and Kahneman referred to as "anchoring." Anchoring is a shortcut in thinking where a person doesn't consider multiple possibilities but quickly and firmly latches on to a single one, sure that he has thrown his anchor down just where he needs to be. You look at your map but your mind plays tricks on you—confirmation bias—because you see only the landmarks you expect to see and neglect those that should tell you that in fact you're still at sea. Your skewed reading of the map "confirms" your mistaken assumption that you have reached your destination. Affective error resembles confirmation bias in selectively surveying the data. The former is driven by a wish for a certain outcome, the latter driven by the expectation that your initial diagnosis was correct, even if it was bad for the patient...
Groopman, Jerome (2008-03-12). How Doctors Think (pp. 64-66). Houghton Mifflin Harcourt. Kindle Edition.
How about "
Snowball in a Blizzard"?
Why do people— physician and patient alike— have such difficulties coping with concepts as probability and uncertainty? The answers can be found in the disciplines of evolution and psychology and are largely beyond the scope of this book, but the power of stories, and the influence of narratives on our thinking, is critically important. We think about ourselves, and of the universe around us, in absolute terms of cause and effect. We don’t regard our lives as being subject to mere chance; we assume that the variables are within our control and that our successes can be attributed to our strengths and our failures to our weaknesses. Medicine, too, is a story of sorts, and we resist the notion that chance plays a key role in the endeavor.
But this just isn’t so. It is a trick of the mind, and it impedes us from understanding the modern world. Daniel Kahneman, a Nobel laureate in economics, refers to this as the “narrative fallacy,” writing that it inevitably arises “from our continuous attempt to make sense of the world,” adding that “the explanatory stories that people find compelling are simple; are concrete rather than abstract . . . and focus on a few striking events that happened rather than on the countless events that failed to happen.” In medicine— both at the personal and at the policy level— succumbing to the narrative fallacy can be disastrous...
Hatch, Steven (2016-02-23). Snowball in a Blizzard: A Physician's Notes on Uncertainty in Medicine (pp. 19-20). Basic Books. Kindle Edition.
I could go on at length. Kahneman and Tversky have been cited repeatedly throughout the literature. Suffice it to cite just one more:
Meehl’s work was extended by his coworker Robyn Dawes (1996), and both of them joined with David Faust in writing an influential paper (Dawes et al. 1989). This article provoked passionate controversy and did much to secure the prestige of the statistical approach in clinical psychology, medicine, and even the law. Psychotherapy was the main casualty of their campaign, even though Meehl was a practicing psychoanalyst. But their most important contribution is that a great many physicians now take statistics seriously when making diagnoses. (Unfortunately, they tend to call this the Bayesian method, which is the totally different game of pinning numbers on facts in an intuitive fashion.)
The Nobel laureate psychologist Daniel Kahneman (2011), a lifelong admirer of Meehl, and one of the earliest users of his approach to psychological diagnosis and prognosis, asked why the “clinical” or intuitive approach is far inferior to the algorithmic (or statistical, or actuarial) one. He suggested the following reasons: the clinical predictor is a victim of the halo effect; he cannot avoid being impressed by irrelevant traits; he juggles at the same time too many variables that he has no time to weigh; he is often inconsistent; and, above all, he is “overconfident in his intuitions.” Clearly, algorithms do not have such flaws. (For the pros and cons of intuition, see Bunge 1962.)
Of course, statistical reasoning, in focusing on whole populations, deliberately ignores personal characteristics. Does it follow that those using statistical data to diagnose cannot prescribe a treatment tailored to the patient’s peculiarities? Yes, if they use only statistics; no, if they also use personal data, such as age, family antecedents, surgeries, allergies, occupation, and recent relevant incidents. And this is, precisely, what all physicians and nurses do: to them, every patient is unique in several respects. All medicines have always been tailored, even though “personalization,” that is, therapy tailored to the individual genome, is a very recent development...
Bunge, Mario (2013-05-30). Medical Philosophy: Conceptual Issues in Medicine (Kindle Locations 1850-1865). World Scientific Publishing Co Pte Ltd. Kindle Edition.
"Of
course, statistical reasoning, in focusing on whole populations,
deliberately ignores personal characteristics. Does it follow that those
using statistical data to diagnose cannot prescribe a treatment
tailored to the patient’s peculiarities? Yes, if they use only
statistics; no, if they also use personal data, such as age, family
antecedents, surgeries, allergies, occupation, and recent relevant
incidents."
OK, and that brings me back around to the Michael Lewis book (particularly in the context of issues raised by Dr. Jha over at the THCB post cited above):
“The physician is meant to be the perfect agent for the patient as well as the protector of society,” he said. “Physicians deal with patients one at a time, whereas health policy makers deal with aggregates.”
But there was a conflict between the two roles. The safest treatment for any one patient, for instance, might be a course of antibiotics; but the larger society suffers when antibiotics are overprescribed and the bacteria they were meant to treat evolved into versions of themselves that were more dangerous and difficult to treat. A doctor who did his job properly really could not just consider the interests of the individual patient; he needed to consider the aggregate of patients with that illness. The issue was even bigger than one of public health policy. Doctors saw the same illness over and again. Treating patients, they weren’t merely making a single bet; they were being asked to make that same bet over and over again. Did doctors behave differently when they were offered a single gamble and when they were offered the same gamble repeatedly?
The paper subsequently written by Amos with Redelmeier* showed that, in treating individual patients, the doctors behaved differently than they did when they designed ideal treatments for groups of patients with the same symptoms. They were likely to order additional tests to avoid raising troubling issues, and less likely to ask if patients wished to donate their organs if they died. In treating individual patients, doctors often did things they would disapprove of if they were creating a public policy to treat groups of patients with the exact same illness. Doctors all agreed that, if required by law, they should report the names of patients diagnosed with a seizure disorder, diabetes, or some other condition that might lead to loss of consciousness while driving a car. In practice, they didn’t do this— which could hardly be in the interest even of the individual patient in question. “This result is not just another manifestation of the conflict between the interests of the patient and the general interests of society,” Tversky and Redelmeier wrote, in a letter to the editor of the New England Journal of Medicine. “The discrepancy between the aggregate and the individual perspectives also exists in the mind of the physician. The discrepancy seems to call for a resolution; it is odd to endorse a treatment in every case and reject it in general, or vice versa.”
The point was not that the doctor was incorrectly or inadequately treating individual patients. The point was that he could not treat his patient one way, and groups of patients suffering from precisely the same problem in another way, and be doing his best in both cases. Both could not be right. And the point was obviously troubling— at least to the doctors who flooded the New England Journal of Medicine with letters written in response to the article. “Most physicians try to maintain this facade of being rational and scientific and logical and it’s a great lie,” said Redelmeier. “A partial lie. What leads us is hopes and dreams and emotion.”... [Lewis, op cit, Kindle Locations 3051-3073].
'eh?
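That single-bet vs. repeated-bet asymmetry is easy to make concrete with a quick simulation. A minimal sketch with a made-up gamble (the payoffs are mine, not from the book or the paper): a treatment that yields a small benefit 90% of the time and a larger harm 10% of the time.

```python
# One gamble vs. the same gamble repeated across many patients.
# Made-up payoffs: +1 unit of benefit with probability 0.9,
# -5 units of harm with probability 0.1 (expected value +0.4 per bet).

import random

random.seed(42)  # reproducibility

def one_gamble():
    return 1 if random.random() < 0.9 else -5

def net_over(n):
    return sum(one_gamble() for _ in range(n))

TRIALS = 10_000
p_single_loss = sum(one_gamble() < 0 for _ in range(TRIALS)) / TRIALS
p_aggregate_loss = sum(net_over(100) < 0 for _ in range(TRIALS)) / TRIALS

print(f"P(harm) for one patient:          {p_single_loss:.3f}")    # ~0.10
print(f"P(net harm) across 100 patients:  {p_aggregate_loss:.3f}")  # ~0.01
```

The same bet that visibly harms one patient in ten is a near-sure net win across a hundred -- one plausible root of the discrepancy, "in the mind of the physician," that Tversky and Redelmeier describe.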
From my KHIT perspective, my primary interest in all of this stuff goes to sharpening my understanding of potential problems with clinical cognition in order to better advocate for improved, dx-enhancing Health IT and workflows. See also my prior posts such as "Clinical workflow, clinical cognition, and The Distracted Mind," and "Are structured data the enemy of health care quality?" While many of the Kahneman-Tversky insights will necessarily go to reforms in medical pedagogy (i.e., teaching more effective clinically focused "critical thinking" beginning with med school), all of the components will have to cohere in ongoing practice.
Michael Lewis has done us all a great service with his new book.
BTW, you might also find this paper of interest [pdf]: "Why do humans reason?"
____________
More to come...