
Wednesday, July 12, 2017

Webinar on Natural Language Processing in health care

On my calendar today. Saw this on Facebook and signed up.

Company link
DEMYSTIFYING TEXT ANALYTICS AND NLP IN HEALTHCARE

Over the past ten years, we have seen a wave of EMR implementations and quality reporting initiatives which have sought out those discrete, reportable data elements which can be used for clinical analytics. However, many more crucial pieces of information that we would like to use for analytics are trapped in radiology reports, clinician notes and other free text fields. A few examples illustrate the point. Ejection fraction data for heart failure patients are embedded in diagnostic test reports and physician notes. A cancer diagnosis resides in the problem list, but the stage and tumor size are often found only in the pathology report. Even in quality reporting used by payers, dozens of data elements are only found in notes. As a result, most health systems employ clinical chart abstractors and nurses to manually hunt through this free text content for critical pieces of information required for reporting on clinical performance.

Today, advancements in technology have made it possible to develop accurate, faster, more scalable alternatives to a manual chart extraction process. In this webinar, we will review the core capabilities of the software used to search text, along with the key natural language processing (NLP) techniques that allow teams to effectively analyze the free text found in clinical systems.
Interesting. I recently finished a full cardiology workup consisting of EKGs, a treadmill stress test, and a lengthy cardiac ultrasound procedure. The diagnostic meat of the ultrasound in particular was all contained in the free-text narrative "impression" write-up.
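Speaking of which, here is a minimal Python sketch of how rule-based extraction might pull a discrete value (an ejection fraction, say) out of exactly that kind of free-text impression. The report text and pattern are toy inventions of mine, not anything shown in the webinar; real clinical NLP pipelines add negation detection, section segmentation, terminology normalization, and much else.

    import re

    # Toy echocardiogram narrative (invented for illustration).
    report = """
    IMPRESSION: Mild left ventricular dilation.
    Estimated left ventricular ejection fraction 35-40%.
    No significant valvular abnormality.
    """

    # Match common phrasings: "ejection fraction 35-40%", "EF of 55%", "LVEF: 60 %".
    EF_PATTERN = re.compile(
        r"\b(?:ejection\s+fraction|LVEF|EF)\b[^0-9]{0,20}"  # the concept
        r"(\d{1,2})\s*(?:-\s*(\d{1,2}))?\s*%",              # a value or a range
        re.IGNORECASE,
    )

    match = EF_PATTERN.search(report)
    if match:
        low = int(match.group(1))
        high = int(match.group(2)) if match.group(2) else low
        print(f"Extracted EF: {low}-{high}% (midpoint {(low + high) / 2:.0f}%)")
    else:
        print("No ejection fraction found.")

Run on the toy note above, this prints "Extracted EF: 35-40% (midpoint 38%)". The hard part, of course, is everything the toy omits.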

See my prior riffs on NLP here and here.


I've now completed a good bit of background study spanning some of these topics -- AI-related stuff involving Machine Learning, Deep Learning, Natural Language Generation (NLG), Natural Language Understanding (NLU, the far more difficult area), Linguistics generally, and Computational Linguistics specifically.


Based on my readings thus far, my answer to the question "might NLP/AI applications be used to accurately parse the logic of textual arguments?" is "rather unlikely at this point, at least with respect to arguments of significant heft and complexity" (see, e.g., my 1994 "Single Payer proposal" deconstruction). Perhaps some brainiac scholar in Computational Linguistics could take it on as a doctoral investigation, but I think the difficulties are too daunting for any relatively quick commercial turnaround of the concept -- given that market interest in such an application might be rather narrow. Moreover, I envision the type of random inaccuracies that dogged the earlier releases of Google Translate.

I could be wrong.
__

BTW: Some useful contextual background going to "textual analysis NLP in Health IT" is in my prior post "Are structured data the enemy of health care quality?"

POST WEBINAR UPDATE
Time well-spent (only an hour, including Q&A), though I did have a bit of an "old wine in new bottles" reaction (with a dash of "Gartner Hype Cycle"). Teasing out quantitative (or mixed "semi-quantitative") "data" from unstructured chart narratives/documents is not exactly a new idea, as I've noted before (e.g., it was a topic in the 2005-2008 DOQ-IT Meaningful Use precursor era) -- notwithstanding that the search engines/query tools (as we ought to expect) are getting much better. The presenters noted that this emerging "text analytics/NLP" methodology still requires multidisciplinary SME (Subject Matter Expertise) -- e.g., clinicians, informaticists, and "data architects." We're not at the point of just hitting 'Enter' and letting the AI do everything for us, all the way to dx and/or prognosis derivation or aggregate reportage. This type of process remains more heuristic than algorithmic. Adroit (iterative and recursive) Boolean searching is at once science and art, not yet fully AI-mechanistic. Among my takeaways was that they were describing a "computer-assisted, episodically SME-intermediated process." All well and good, but a bit of a stretch to fully call it "AI NLP."
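For flavor, here is a minimal sketch -- my own construction, not the presenters' -- of that "computer-assisted, episodically SME-intermediated" loop: run a Boolean query over a (toy, invented) note corpus, have the SME review the hits, then tighten the terms on the next pass.

    # SME-in-the-loop Boolean retrieval over a toy corpus of clinical notes.
    notes = {
        1: "Pt with CHF exacerbation. LVEF 30% on echo. Started lasix.",
        2: "Echo ordered to evaluate murmur. EF pending.",
        3: "History of heart failure. Ejection fraction preserved at 60%.",
        4: "No cardiac history. Visit for ankle sprain.",
    }

    def boolean_query(docs, all_of=(), any_of=(), none_of=()):
        """Return ids of docs whose text satisfies the AND / OR / NOT term lists."""
        hits = []
        for doc_id, text in docs.items():
            t = text.lower()
            ok = (all(term in t for term in all_of)
                  and (not any_of or any(term in t for term in any_of))
                  and not any(term in t for term in none_of))
            if ok:
                hits.append(doc_id)
        return hits

    # Pass 1: broad query. Returns [1, 2, 3] -- but note 2 has no EF value at all.
    print(boolean_query(notes, any_of=("ef", "ejection fraction", "lvef")))

    # Pass 2: refined after SME review. Returns [1, 3].
    print(boolean_query(notes, any_of=("lvef", "ejection fraction"),
                        none_of=("pending",)))

The point of the toy: neither pass is "the" answer; a clinician or informaticist has to eyeball the hits and decide how to refine -- which is exactly the heuristic, human-intermediated character noted above.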
One favorable reality, though, is that "text" is pretty much just text (e.g., mostly the ASCII collating standard). Consequently, some of the barriers that dog "interoperability" (the "interoperababble" of my "metadata heterogeneity" snark) are of lesser concern. The other favorable thing is the relatively narrow range of formatting in clinical "narrative" discourse compared with that of general language communication.
Notably, among deployable clinical experts, they alluded to the utility of "nurse abstractors" for the "data validation" phase. I found that interesting. Back during my first Medicare QIO tenure (1993-1995, called "PROs" back then), we routinely sent teams of laptop-lugging nurse abstractors out to hospitals to collect clinical data for drill-down projects indicated by our UB-82 Claims Forms data analytics (pdf).
Nice downloadable 24-slide deck available to today's webinar registrants. Marked "© 2017 Private and Confidential" on every slide, so I'll refrain from reproducing the substance here.
One slide I will show, though:


Click to enlarge. I'd love to go to that. Back during the DOQ-IT era, we'd go to the "TEPR" Conference in SLC every year ("Toward an Electronic Patient Record").
___

Also perhaps of topical relevance (albeit sometimes a bit tangential?), from a current read of mine, going to the salient elements of accurate medical diagnostics:

For all of the sophisticated diagnostic tools of modern medicine, the conversation between doctor and patient remains the primary diagnostic tool. Even in the fields that are visually based, such as dermatology, or procedurally based, such as surgery, the patient’s verbal description of the problem and the doctor’s questions about it are critical to an accurate diagnosis.

In some ways this seems almost anachronistic, given how advanced so much of our technology is now. Science-fiction movies predicted that medical diagnosis would be achieved by running a handheld machine over the patient’s body. And indeed much diagnosis is made with MRIs, PET scans, and advanced CT technology. Yet the simple verbal exchange between patient and doctor remains the cornerstone of medical diagnosis. The story the patient tells the doctor constitutes the primary data that guide diagnosis, clinical decision-making, and treatment.

However, the story the patient tells and the story the doctor hears are often not the same thing. The story Mr. Amadou was telling me and the story I was hearing were not identical. There were so many layers of emotion, frustration, logistics, and desperation, that it was almost as if we were in two different conversations entirely.

It is a common complaint of patients. They feel their doctors don’t really listen, don’t hear what they are trying to say. Many patients leave their medical encounters disappointed and frustrated. But beyond being merely dissatisfied, many patients leave misdiagnosed or improperly treated.

Doctors are equally frustrated with the difficulties of piecing together a patient’s story, especially for those with complex and inscrutable symptoms. As medicine grows more complicated, with illnesses more multifold and complex, the gap between what patients say and what doctors hear—and vice versa—grows more significant...
[Ofri, Danielle. What Patients Say, What Doctors Hear (Kindle Locations 98-112). Beacon Press. Kindle Edition.]

We typically think of communication as the words exchanged when doctors and patients are seated across from one another at a desk during the “history” part of the visit. True, this is the bulk of communication and certainly the bulk of what communications researchers focus on, but over the years I’ve come to appreciate that a good deal of communication and connection arises during the physical exam. When I mention this observation, many people—both doctors and patients—are unconvinced. Who wants to chitchat after disrobing and having your body probed by a relative stranger in a room that feels like a meat locker? Who has time, anyway, for a real physical exam when there is so much to document in that electronic medical record and so little time?

So yes, there’s less and less physical examination these days. Visits are shorter and competing issues wrench away precious minutes. The ease and temptation of CTs and MRIs, the constant fear of lawsuits, and—let’s face it—the atrophy of our skills push doctors toward ordering more tests at the expense of a true physical exam. Often, the exam boils down to a halfhearted plop of the stethoscope on the fully clothed patient. I have been equally guilty of rushing through a pro forma physical exam when the pressure is on. And, in any case, the exam primarily serves as an adjunct to confirm or rule out a diagnosis that was ascertained in the history.

Doctors typically don’t like to talk about their truncation of the physical because it stirs an awkward mix of guilt and longing within us. We recall wistfully our rounds as students, when our bow-tied and starched-coated attendings unhurriedly probed every fingernail, meticulously percussed the cardiac contours, palpated the epitrochlear lymph nodes. We feel we are remiss with our current patients, that we are skimping on what has always been the sine qua non of the doctor-patient connection.

A decade ago people were predicting the permanent demise of the physical exam. Luckily there’s been a resurgence of interest in the physical because it can obviate the need for many expensive tests.

In the past few years I’ve observed that the physical exam has taken on an important role again, though as a slightly different medical tool. Now that the computer is front and center in almost every doctor-patient encounter, doctors spend the bulk of the visit staring at a screen. Not only are our eyes yanked away from the patient, but our attention is fragmented by the disjointed and niggling nature of the typical computer interface. It’s no wonder patients feel ignored by their doctors.

But then the doctor and patient move to the exam table and everything changes. This is often the first moment that we can talk directly, without the impediment of technology...
[ibid, Kindle Locations 446-466]

There were only small tangential studies about how much doctors recall of the information they read in medical journals (embarrassingly little) or how well they remember clinical information from a fictionalized case study (full-fledged doctors do better than medical students), but nothing with real patients.

There is one real-life experiment regarding physician memory that happens, unfortunately, a little too regularly. Electronic medical records have been revolutionary in many respects—a patient’s chart can no longer be adrift in the cardiology clinic and a crucial X-ray can’t be languishing in a surgeon’s back pocket. However, by dint of being computerized, such information is susceptible to the same glitches as every other bit of computerized material. In the middle of writing your Tolstoy-worthy note about a patient with sixteen illnesses, the computer freezes, or the program crashes, or you inadvertently hit “escape” or “delete” at an inopportune moment, and all of your carefully wrought observations evaporate into the ether.

At least once a day, it seems, a medical student or intern will turn up, ashen-faced, stammering with incomprehensibility about the note they just lost, about all their efforts that just went up in smoke. Even the old hands at the hospital, who’ve learned the electronic landmines in trial-by-fire experience, are not immune. Recently I’d been writing a particularly complicated note about a patient with multiple chronic illnesses who was on more than a dozen medications and had numerous lab values out of whack, when I was interrupted by a phone call about an abnormal X-ray for a different patient. I had to open that patient’s chart to untangle that issue. After sorting through that second patient’s medical history and what to do about the X-ray, I went to close the second chart so I wouldn’t commit the cardinal sin of mixing up two charts.

It took only a fraction of a second. Before I’d even released my finger from the mouse, I realized I’d closed the wrong tab. I kept my finger depressed on the mouse as long as I could, hoping that I could will that brief gesture into reverse, that I could telepathically conjure the information back onto the screen. When my irrational hopes could be sustained no longer and the boulder of despair had fully dropped anchor into my deepest bowels, I released my finger in agonizing slow motion.

I remained in vigorous denial for as long as I could, but finally my brain was forced to articulate what I already knew in my heart: I’d just lost everything. (And if you thought our vaunted electronic medical-record system would have something practical like auto save to prevent such a mess, dream on!)

I’d lost all the information I’d taken down while the patient was in the room. I’d lost all the analysis I’d been writing after she’d left the room. (She was a new patient, so I’d done an extensive background history.) I’d lost all of the details of her prior medical evaluations. I’d lost my entire train of thought about her because I’d been forced to delve into another patient’s medical history...
[ibid, Kindle Locations 2004-2027]


When I talk to the students and interns whom I’ve coached through similar electronic meltdowns, they have comparable experiences. The HPI and social history are the quickest to reformulate; other details can be sketchier. When I think about this from a literary perspective, the reason is obvious. The HPI is a story—there is a plot with twists and turns, challenges and conflicts. Stories are always easier to remember than lists of facts. And the social history is what writing teachers refer to as “fleshing out the character.” Without the social history, the patient is just a stock character. A thirty-two-year-old woman with abdominal pain is as much a stock character in medicine as the tragic hero or the Southern belle or the wise old man are stock characters in fiction. These are stick figures until the writer fleshes them out to become Orpheus, Blanche DuBois, or Albus Dumbledore. They are now three-dimensional and realistic human beings who lodge themselves in our memories. And while the social history in the medical interview doesn’t allow us the hundreds of pages that Tennessee Williams or J. K. Rowling can luxuriate in, it does permit us to get a fuller sense of our patients and some context of their lives. I may have forgotten what this patient’s diastolic blood pressure was, but I could never forget the pained expression on her face when she spoke of how her job made her miss reading bedtime stories to her daughter each night and how she wasn’t confident that the babysitter was reliably reading those stories to her daughter, who so needed the extra enrichment... [ibid, Kindle Locations 2054-2065]
Stay tuned.

UPDATE

The company's pitch video:


Nicely done. As is this one, below:

__

JULY 14TH UPDATE

They've published the webinar to YouTube. No warning notice of it being "private," so here it is.

__

OF NOTE

My July issue of Science Magazine just arrived in the snailmail.


Numerous articles on AI applications across a breadth of scientific disciplines.
EDITORIAL
AI, people, and society
Eric Horvitz


In an essay about his science fiction, Isaac Asimov reflected that “it became very common…to picture robots as dangerous devices that invariably destroyed their creators.” He rejected this view and formulated the “laws of robotics,” aimed at ensuring the safety and benevolence of robotic systems. Asimov's stories about the relationship between people and robots were only a few years old when the phrase “artificial intelligence” (AI) was used for the first time in a 1955 proposal for a study on using computers to “…solve kinds of problems now reserved for humans.” Over the half-century since that study, AI has matured into subdisciplines that have yielded a constellation of methods that enable perception, learning, reasoning, and natural language understanding.

Growing exuberance about AI has come in the wake of surprising jumps in the accuracy of machine pattern recognition using methods referred to as “deep learning.” The advances have put new capabilities in the hands of consumers, including speech-to-speech translation and semi-autonomous driving. Yet, many hard challenges persist—and AI scientists remain mystified by numerous capabilities of human intellect.

Excitement about AI has been tempered by concerns about potential downsides. Some fear the rise of superintelligences and the loss of control of AI systems, echoing themes from age-old stories. Others have focused on nearer-term issues, highlighting potential adverse outcomes. For example, data-fueled classifiers used to guide high-stakes decisions in health care and criminal justice may be influenced by biases buried deep in data sets, leading to unfair and inaccurate inferences. Other imminent concerns include legal and ethical issues regarding decisions made by autonomous systems, difficulties with explaining inferences, threats to civil liberties through new forms of surveillance, precision manipulation aimed at persuasion, criminal uses of AI, destabilizing influences in military applications, and the potential to displace workers from jobs and to amplify inequities in wealth…
Yeah. I've had recurrent runs at some of these topics of concern. See, e.g., here, here, and here.

Nice, brief core AI glossary in the issue.
Defining the terms of artificial intelligence 
Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it was taken broadly to mean making a machine behave in ways that would be called intelligent if seen in a human. An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple (if it's 3 p.m., send a reminder) or complex (identify pedestrians).

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a human's expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.

NATURAL LANGUAGE PROCESSING A computer's attempt to “understand” spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as “earn a high video game score” or “manage a factory efficiently.” During training, each effort is evaluated based on its contribution toward the goal.

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is weak, or narrow. It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.
Notable to me is that the word "HEURISTIC" is not included. Human (brain "wetware") perception/cognition is way more inductive/heuristic than deductive/algorithmic.
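For readers who want to connect a few of those glossary entries -- "neural network," "supervised learning," "backpropagation" -- here is a minimal NumPy sketch (my own illustration, not from the Science piece) of a one-hidden-layer network learning the XOR function from labeled examples. It omits everything a production system needs: batching, regularization, validation, sane initialization schemes, and so on.

    import numpy as np

    # Toy supervised-learning problem: learn XOR from four labeled examples.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
    b1 = np.zeros((1, 4))
    W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # Forward pass, layer by layer.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backpropagation: output error, pushed back in reverse order of execution.
        d_out = (out - y) * out * (1 - out)   # squared-error loss gradient
        d_h = (d_out @ W2.T) * h * (1 - h)

        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(3))   # should approach [[0], [1], [1], [0]]

"Deep learning" is, roughly, this same recipe with many more layers and vastly more data -- hence the glossary's emphasis on progressively more abstract patterns.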

Apropos, another new read in my stash:

__
JULY 13TH OFF-TOPIC ERRATUM

One of my Facebook friends, a physician, is pumping this every day.

WHY MEDICARE FOR ALL?

The United States is the only country in the developed world that does not guarantee access to basic health care for residents. Countries that guarantee health care as a human right do so through a “single-payer” system, which replaces the thousands of for-profit health insurance companies with a public, universal plan.


Does that sound impossible to win in the United States? It already exists – for seniors! Medicare is a public, universal plan that provides basic health coverage to those age 65 and older. Medicare costs less than private health insurance, provides better financial security, and is preferred by patients (Davis, 2012). Single-payer health care is often referred to as “Expanded & Improved Medicare for All.”

Under the single-payer legislation in Congress (H.R. 676):

  • Everyone would receive comprehensive healthcare coverage;
  • Care would be based on need, not on ability to pay;
  • Employers would no longer be responsible for health care costs and coverage decisions;
  • Single-payer would reduce costs by 24%, saving $829 billion in the first year by cutting administrative waste and allowing negotiation of prescription drug prices (Friedman, 2013); and
  • Single-payer would create savings for 95% of the population; only the top 5% would pay slightly more (Friedman, 2013).
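A quick back-of-the-envelope check on those Friedman figures (my arithmetic, not the flyer's): if $829 billion represents a 24% reduction, the implied baseline is about $829B ÷ 0.24 ≈ $3.45 trillion per year -- which is indeed on the order of total U.S. national health expenditures.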
 Recall I did my first grad school paper on the Single Payer proposal in 1994 (pdf).

See also PNHP.org, "Physicians for a National Health Program."

Much of my prior post is of relevance here.

Also apropos, a great post now up at THCB:
The Most Important Questions About the GOP’s Health Plan Go Beyond Insurance and Deficits
By ROSS KOPPEL and JASMINE MARTINEZ


Ending healthcare for those who need it will not make them or their problems disappear. On the contrary, the GOP plan will shatter American families and the economy. Nothing magical happens if we stop caring for the elderly, the ones who need vaccinations, the small infections that can be treated for $2 worth of antibiotics, the uncontrolled diabetics, and those with contagious diseases who clean our schools’ offices and homes. They don’t just get healthy.

As George Orwell said in Down and Out in Paris and London, “the more one pays for food, the more sweat and spittle one is obliged to eat with it.” Cutting care only exacerbates illnesses, infection, disability, the effects of age and the costs to society. The burdens continue or increase but the cost is shifted to American families, businesses, and states.

Fifteen years ago, one of the authors showed that lost productivity from workers caring for Alzheimer’s patients cost US businesses over $60 billion a year. Employee-caregivers, usually at the peak of their responsibilities and corporate experience, quit, prematurely retired, were constantly distracted, or engaged in presenteeism (e.g., at work but focused on mom burning down the house). Business costs were incurred by the need to replace workers, extra training of replacement workers, and increased pressure on other workers to cover for caregivers. The more expensive the employee, the longer and more costly the search and the longer the time to get them up to speed. But that study examined just a minuscule number of patients and workers compared to the tens of millions of people affected by the proposed GOP bill. As noted, it’s not only those needing care, but our society and our families that must deal with the elderly, ill, disabled, under- and uninsured, children not receiving even ordinary care, people not being screened for preventable illness, and countless others.

Extrapolating from Koppel’s tiny study to the US population and businesses reveals the GOP bill will cost the nation trillions of dollars in losses and extra costs. It will devastate state budgets, which explains why GOP governors are among those leading the resistance…
Read all of it.

CODA

Access to broadband “is, or soon will become, a social determinant of health.”

What? From "How AI could exacerbate existing health disparities."
____________

More to come...
