
Monday, August 31, 2015

Health IT: process mining and analytics for healthcare QI


So, I saw this tweet by UberGeek Chuck Webster, whom I've known for a while (mostly online, though we met at HIMSS one year)...


Went and looked it up on both Springer and Amazon.


The Amazon blurb copy:
What are the possibilities for process mining in hospitals?  In this book the authors provide an answer to this question by presenting a healthcare reference model that outlines all the different classes of data that are potentially available for process mining in healthcare and the relationships between them. Subsequently, based on this reference model, they explain the application opportunities for process mining in this domain and discuss the various kinds of analyses that can be performed.

They focus on organizational healthcare processes rather than medical treatment processes. The combination of event data and process mining techniques allows them to analyze the operational processes within a hospital based on facts, thus providing a solid basis for managing and improving processes within hospitals. To this end, they also explicitly elaborate on data quality issues that are relevant for the data aspects of the healthcare reference model.

This book mainly targets advanced professionals involved in areas related to business process management, business intelligence, data mining, and business process redesign for healthcare systems as well as graduate students specializing in healthcare information systems and process analysis.
Being as yet too cheapskate to pay $52.24 for the Kindle version, I fired up Dragon and read in the following from the "Look Inside" preview.
Chapter 1
Introduction


Abstract: Health care costs have increased dramatically and the demand for high quality care will only grow in our aging society. At the same time, more event data are being collected about care processes. Healthcare information systems (HIS) have hundreds of tables with patient related event data. Therefore, it is quite natural to exploit these data to improve care processes while reducing costs. Data science techniques will play a crucial role in this endeavor. Process mining can be used to improve compliance and performance while reducing costs.

Process mining has been applied successfully in a variety of domains, e.g., banking, insurance, logistics, production, and e-government, customer relationship management, remote monitoring, and smart diagnostics. Through process mining one can relate the actual behavior of people, machines, and organizations with modeled behavior. This often leads to surprising insights showing that reality is very different from the perceptions, opinions, and beliefs stakeholders have. This is particularly relevant for healthcare processes. These processes are often only partly structured with many exceptional behaviors and different stakeholders. Healthcare requires flexibility and ad hoc decision-making. These characteristics make it impossible to apply rigorous business process management (BPM), workflow management (WFM), and business process reengineering (BPR) techniques. Clearly, a hospital is not a factory and patients cannot be cured using a conveyor belt system. However, the abundance of data collected in today’s hospitals can be used to improve care processes dramatically. Unlike many other domains, there is still room for dramatic improvements in healthcare processes. Process mining can be used to improve compliance and performance while reducing costs…

One approach... is to focus on the many complex, time-consuming, and nontrivial processes that are undertaken within these organizations. In order to give suggestions for improving and redesigning these processes, they need to be analyzed. Such an analysis is typically done by conducting interviews. Unfortunately, this is time-consuming and costly. Furthermore, typically a subjective view is provided of how a process is executed. That is, people involved in the performance of these healthcare processes (e.g., physicians, managers) tend to have an ideal scenario in mind, which in reality is only one of the many scenarios possible. Moreover, in many hospitals “political battles” take place due to organizational issues. Different stakeholders may have different views, e.g., some parties may not be interested in reducing the overall costs and improving transparency. Therefore, in order to give objective suggestions for improving and redesigning processes, one needs to exploit the event data readily available. Such an analysis is possible using process mining.

1.2 Process Mining: Data Science in Action

Although our capabilities to store and process data have been increasing exponentially since the 1960s, suddenly many organizations realized that survival is not possible without exploiting available data intelligently. This of course  also holds for healthcare organizations. Society, organizations, and people are “always on”. Data are collected about anything, at any time, and at any place. Gartner uses the phrase “the nexus of forces” to refer to the convergence and mutual reinforcement of four interdependent trends: social, mobile, cloud, and information. The term  “big data” is often used to refer to the incredible growth of data in recent years. For hospitals of course the goal is not to collect more data, but to exploit data to realize more efficient and effective care processes.

Obviously, the term “big data” has been hyped in recent years. However, there is a growing demand for data scientists who can turn data into value. Just like computer science emerged as a new discipline from mathematics when computers became abundantly available, we now see the birth of data science as a new discipline driven by the huge amounts of data available today. Data science aims to use the different data sources to answer questions that can be grouped into the following four categories:

Reporting: what happened?
Diagnosis: why did it happen?
Prediction: what will happen?
Recommendation: what is the best that can happen?

Figure 1.2 deliberately emphasizes the process aspect. The goal is not to analyze data, but to improve care processes. Process mining aims to discover, monitor and improve real processes by extracting knowledge from event logs readily available in today’s information systems. Starting point for process mining is an event log. Each event in such a log refers to an activity (i.e., a well-defined step in some process) and is related to a particular case (i.e., a process instance)...
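
An aside from me, not the book: to make the "event log" notion concrete, here's a toy sketch of my own (the cases, activities, and timestamps are invented, nothing here is taken from the text). Group events by case, tally the "directly-follows" relation between activities, and compute per-case throughput time. In Python:

from collections import Counter, defaultdict
from datetime import datetime

# Toy event log: each event = (case id, activity, timestamp).
# Data are invented for illustration only.
event_log = [
    ("case-1", "Registration", "2015-08-01 08:02"),
    ("case-1", "Triage",       "2015-08-01 08:20"),
    ("case-1", "Lab test",     "2015-08-01 09:05"),
    ("case-1", "Discharge",    "2015-08-01 11:40"),
    ("case-2", "Registration", "2015-08-01 08:10"),
    ("case-2", "Triage",       "2015-08-01 08:55"),
    ("case-2", "Discharge",    "2015-08-01 10:15"),
]

# Group events into per-case traces, ordered by time.
traces = defaultdict(list)
for case_id, activity, ts in event_log:
    traces[case_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

directly_follows = Counter()
throughput_hours = []
for events in traces.values():
    events.sort()
    throughput_hours.append((events[-1][0] - events[0][0]).total_seconds() / 3600)
    for (_, a), (_, b) in zip(events, events[1:]):
        directly_follows[(a, b)] += 1

print("Directly-follows counts:", dict(directly_follows))
print("Mean case throughput: %.1f hours" % (sum(throughput_hours) / len(throughput_hours)))

Real process-mining tools go far beyond this (model discovery, conformance checking, bottleneck analysis), but they all start from exactly that case / activity / timestamp triple.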

Yeah. That's enough. I get the drift. Better late than never, I guess. My experience in this area of "operations analytics" antedates my Health IT involvement by quite some time. As I posted a year ago:
@BobbyGvegas says...

A comment I just made over at THCB.
@BobbyGvegas says: August 20, 2014 at 12:39 pm
There are 3 fundamental aspects of workflow in the digital era: physical tasks, IT (EHR) tasks, and cognitive tasks. Every certified EHR has to have an audit trail to comply with HIPAA, given that every time ePHI is created, viewed, updated, transmitted, or deleted the transaction must be “date-time/who/what/about whom” captured in the audit trail log. 
The ePHI audit log, to me, is a workflow record component. It can’t tell me WHY front desk Susie or Dr. Simmons took so long to get from one transaction element to the next — i.e., physical movements or cognitive efforts — but it can tell me a lot, adroitly analyzed. 
I worked for a number of years as a credit risk and portfolio management analyst in a credit card bank. We had an in-house collections department that took up an entire football-field-sized building, housing about 1,000 call center employees. I had free run of the internal network and data warehouse. One day I just happened upon the call center database and the source code modules (written by an IT employee in FoxPro, which I already knew at an expert level). I could open up the collections call log and watch calls get completed in real time. We were doing maybe a million outbound calls a month (a small Visa/MC bank).

(My fav in the Comments field was “CH [cardholder] used fowl language,” LOL)

It was, in essence, an ongoing workflow record of collections activity.
I pulled these data over into SAS and ground them up. I could track and analyze all activity sorted by any criteria I wished, all the way down to the individual collector level. I could see what you did all day, and what we got (or didn’t) for your trouble.

I was [able to] rather quickly show upper management “Seriously? You dudes are spending $1,000 to collect $50, every day, every hour” etc. The misalignment was stunning. I started issuing a snarky monthly summary called “The Don Quixote Report” with a monthly “winner.” …Yeah, we called this hapless deadbeat 143 times this month trying to get 15 bucks out of him…

Well, it didn’t take long to squelch all that. We saved the bank 6 million dollars in Collections Department Ops costs that year via call center reforms. Didn’t exactly endear me to the VP of Collections, whose bonus was tied to his budget.

Gimme a SAS or Stata install and SQL access to the HIT audit logs, and I will tell you some pretty interesting (Wafts-of-Taylorism 2.0) workflow stories.
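
I no longer have any of that FoxPro or SAS code, but the audit-log idea sketches easily. Here's a hypothetical miniature in Python, with SQLite standing in for the production database; the table, columns, and rows are invented for illustration, and any real EHR audit schema will differ:

import sqlite3
from datetime import datetime

# Hypothetical HIPAA-style audit trail: date-time / who / what / about whom.
# Schema and data invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE audit_log (
    event_time TEXT, user_id TEXT, action TEXT, patient_id TEXT)""")
conn.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?, ?)",
    [("2015-08-31 09:01:10", "frontdesk_susie", "create", "pt-001"),
     ("2015-08-31 09:04:55", "frontdesk_susie", "update", "pt-001"),
     ("2015-08-31 09:18:02", "dr_simmons",      "view",   "pt-001"),
     ("2015-08-31 09:19:40", "dr_simmons",      "update", "pt-001")])

rows = conn.execute(
    "SELECT user_id, event_time FROM audit_log ORDER BY user_id, event_time").fetchall()

# Minutes elapsed between each user's consecutive audit-trail transactions.
last_seen = {}
for user, ts in rows:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    if user in last_seen:
        gap = (t - last_seen[user]).total_seconds() / 60
        print("%s: %.1f min between transactions" % (user, gap))
    last_seen[user] = t
conn.close()

That inter-transaction gap is precisely the "can't tell me WHY, but can tell me a lot" measure I was describing; join it to staffing, scheduling, and outcomes data and you have the beginnings of a workflow story.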
Again, better late than never in the healthcare space.

Also, apropos of the inextricability of "process" and digital IT, from Dr. Jerome Carter's latest:
We are in the earliest stages of determining how current software design principles, precepts, and methods apply to clinical care systems. The paper chart is the main stumbling block preventing a critical (re)assessment of clinical software design principles. A static information archive has been used as the basis for EHR systems, leaving clinical processes out in the cold. As we are learning, processes are just as important as data. Unfortunately, legacy clinical software has made workflows implicit, and current UCD processes and usability research are mainly focused on legacy EHR systems. Legacy systems are poor at process support because they are designed to offer information, not support processes. Fixing them will require revisiting all three design aspects along with the recognition of a fourth aspect – explicit process representation. Next-generation clinical software must allow explicit process representation and execution. Likewise, clinical software development practices must be able to unambiguously represent clinical processes from requirements to deployment.

Clearly, the HIT community has embraced the idea that processes are important. But, in doing so, it seems to be doubling down on trying to shoehorn process support into legacy systems whose designs are based on paper charts. Bad usability and poor interfaces are frequently cited as the reasons EHR systems are so disruptive to clinicians’ workflows. However, usability and interface issues are often symptoms of deeper problems. Design teams must address each of the four aspects of clinical software, and these teams must have a shared conceptualization of what the term “design” encompasses. Explicit representation of clinical processes and workflow technology must become part of the design discussion when addressing usability concerns, care coordination features, and CDS needs. Otherwise, we will continue to bump into the “workflow” elephant in the room. Design has four aspects—all must be addressed…
The UX of the IT systems themselves is part of the processes to be studied. As I noted in my comment under Dr. Carter's post:
“…The AI community began by trying to model isolated human intelligence while the emerging community of human-computer interaction designers followed in Engelbart’s augmentation tradition. He had begun by designing a computer system that enhanced the capabilities of small groups of people who collaborated. Now Gruber had firmly aligned himself with the IA community. At the Stanford Knowledge Systems Laboratory, he had interviewed avionics designers and took their insights to heart. There had been an entire era of industrial design during which designers assumed that people would adapt to the machine. Designers originally believed that the machine was the center of the universe and the people who used the machines were peripheral actors. Aircraft designers had learned the hard way that until they considered the human-machine interaction as a single system, they built control systems that led to aircraft crashes. It simply wasn’t possible to account for all accidents by blaming pilot error. Aircraft cockpit design changed, however, when designers realized that the pilot was part of the system. Variables like attention span and cognitive load, which had been pioneered and popularized by psychologists, became an integral part first in avionics and, more recently, computer system design…”

Markoff, John (2015-08-25). Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (Kindle Locations 4720-4730). HarperCollins. Kindle Edition.
___

More to come...

A sad passing: Oliver Sacks

Oliver Sacks, the Doctor
BY JEROME GROOPMAN
Oliver Sacks, a dear colleague of mine at The New Yorker and in the world of medicine, was an inspiration to me and to countless physicians. A great deal will be said in the coming days about Oliver’s unique literary output—masterful books including “An Anthropologist on Mars,” “Awakenings,” and “The Man Who Mistook His Wife for a Hat.” But we should remember that he also embodied in his medical practice a kind of ideal approach—creative, sensitive, and large-hearted—to his many patients. He was an extraordinary and exemplary doctor.

Neurology is often depicted as a discipline of great detachment. Sacks, who was eighty-two when he died, trained in the field before the advent of the CT scan and the MRI. He learned to observe his patients in extreme detail, calling on his professional training and uncanny perception to make meticulous analyses of motor strength, reflexes, sensation, and mental status; in doing so, he arrived at a diagnosis that might locate a lesion within the anatomy of the brain or spinal cord. And yet, because medical technology had only gone so far in those days, once this intellectual exercise was completed, there was often very little that could be done to ameliorate most neurological maladies.

Sacks showed that it was possible to overcome this limited perspective. He questioned absolutist categories of normal and abnormal, healthy and debilitated. He did not ignore or romanticize the suffering of the individual. He sought to locate not just the affliction but a core of creative possibility and a reservoir of potential that was untapped in the patient...
Sacks was a contrarian who refused to compromise this approach to the sick and the suffering. He resisted the powerful current of modern practice that seeks the generic. He rejected a monolithic mindset, and retrieved the individual from the obscuring blanket of statistics. This put him outside of the academy, exiled to chronic-care institutions. Through his writing, Sacks ultimately received recognition for advancing a unique form of clinical scholarship that was largely abandoned: the study of the single person within the context of his own life. Ever the acute observer, his case histories confirmed that under a single diagnostic term was a spectrum of human biology. No two patients are ever the same, he emphasized...

Sacks made house calls, not only in California and New York where he practiced, but globally, visiting Dr. “Bennet,” a surgeon with Tourette’s syndrome in rural Canada or the autistic artist Stephen Wiltshire on a tour of Europe. In these visits, he practiced what might be called the medicine of friendship, showing genuine interest and respect to people who are often shunned. This was the therapeutic intervention when neurology lacked effective pills or procedures.

This did not mean Sacks was a Luddite. He was an avid reader of scientific journals, fascinated by scientific advancements in imaging the nervous system at work. He engaged in dialogue with Nobel laureates and lab scientists about the nature of consciousness, providing what they lacked—the insights of a naturalist, a field worker.

Sacks also embodied an attribute that can be lost after people become famous: a boundless generosity of spirit. He encouraged young doctors and scientists to record their experiences and communicate them in prose, celebrating their endeavors rather than seeing them as a form of competition or threat. I believe his intense curiosity and boundless energy moved him to want to learn from the succeeding generation, as great teachers do...
Rest in peace.
"No two patients are ever the same," he emphasized.
Yes, and it goes well beyond our infatuation with the scientific / technological aspects of the nascent "Personalized Medicine" industry.
___

Wednesday, August 26, 2015

Docs vs Glocks







Above, a WDBJ morning news anchor at the precise moment of horrified, bewildered, dawning disbelief at what she's just witnessed from a remote live interview feed.

I'd not planned on posting today. Got two new books to read. But I awoke to news of a live on-camera shooting in Virginia, where a disgruntled former employee of CBS affiliate WDBJ walked up and murdered a reporter and her cameraman while they were interviewing a local Chamber of Commerce official at a lakeside shopping locale.

Most grotesquely of all, he'd mounted a GoPro camera (or smartphone in video mode) on himself and video'd and then uploaded all of it. The YouTube was quickly taken down, but I saw the entire thing before it was deleted. He fired two volleys of about 8 shots each, pausing, I assume, to reload a second clip in his semi-automatic pistol. He also hit the woman being interviewed. Reports are that she survived and is now out of surgery.

As I write, the gunman, Vester Lee Flanagan II, is reported to have shot himself as he was about to be apprehended. Not clear at this point whether he remains alive. (Update: he died after being airlifted to a hospital.)

I am just aghast. And, depressingly, it's unlikely that anything will ever change. The NRA will continue to thwart all rational gun control efforts. e.g., Google "Docs vs Glocks." The Manly Moron Militias will continue to delusionally roam. I've gotten death threats for mocking them.


I will continue to mock the insanity.
___

Friday, August 21, 2015

Medical Progress: Looking back, looking ahead.

As I have noted before, this blog began in 2010 principally to serve as a chronicle of my involvement with the then-just-deployed federal Meaningful Use program. But, given my ineradicable contextual big-picture interests, the posts have ranged far and wide topically, spanning the breadth of more or less overlapping subjects surrounding "mere" Health IT (itself inclusive of a number of areas such as "usability" and the vexing misnomer "interoperability"): ePHI security and privacy (including the dense requirements of HIPAA), intellectual property, the Byzantium of our healthcare economics (including the durably-contentious PPACA and the recent hot area of venture capital-fueled health apps "innovation"), clinical pedagogy, medical science, the so-called "Art of Medicine," organizational culture (most notably, though not exclusively, as it impacts patient safety), process QI initiatives such as Lean, Six Sigma, and Agile, and the confounding, multifaceted "Upstream."

I thought it a good time to reflect on some history, particularly in light of my recent post on the incipient, ostensibly "transformative" boom in applied "Omics" science.

Fifty years ago this month, when I was an exuberant living-the-dream 19 yr old guitar player performing with the veteran "Hollywood Argyles" at The Beachcomber nightclub on the boardwalk in Seaside Heights, NJ, this opinion piece appeared in JAMA.

August 16, 1965

Of Science, Humanism, and Medicine

The school of Hippocrates established the course of modern medicine when it discarded the magic of previous centuries to base its teaching on observation of the patient at the bedside. Essential unit of medical practice was what later became known as the consultation, described by English clinician Sir James Calvert Spence as “...the occasion when, in the intimacy of the sick room, a person who is ill, or believes himself to be ill, seeks the advice of a doctor whom he trusts. This is a consultation and all else in the practice of medicine derives from it.”

After many intervening changes, a further revolution developed in the 19th century when science and technology effectively invaded clinical medicine. In the 20th century, the Flexner report revolutionized medical education and stimulated the establishment of full-time clinical departments. Medical thinking became permeated by new, rapidly developing technology, and medicine moved into the Age of Science. Today, the laboratory test and the clinical experiment have to a large degree supplanted the consultation as the essential unit of medical practice. Family doctoring and even consulting practice have come to be regarded as comparatively inferior pursuits in which the truths of science have not yet come to full bloom.

The scientific explosion of the century has brought incalculable benefit to the progress of medicine. Research institutions under full-time staffs give promise of being the instruments of even greater scientific discovery in the future. Yet, there is cause for concern in this new climate of opinion (as authoritative as it is naïve) that has arisen from the astonishing success of science in our lifetime; there is danger in the optimistic view that science holds the answers to all of the problems of medicine, if we can but find them. Other values are made to seem less important. Obscured is the realization that sickness and death are inevitable sequelae of life, that science can never bring an end to human suffering.

Essentially, medicine is at least as close to humanism as it is to science; it is concerned with all that touches on human life and feeling. Medicine must employ science; its horizons are too broad to be encompassed by any division of knowledge. The practice of medicine — although it must be buttressed by hospitals and laboratories — is basically an affair between highly educated, highly trained human beings and those who seek their counsel in the privacy of the consultation room. The quality of medical practice depends on the intensity with which physicians, motivated by humane feelings toward their patients, apply their knowledge and experience to the particular problem at hand. The proper concern of medicine is people, not things; human life and values, not anatomy and physiology. Practitioners must be guided by the cold light of science; and if they are to do their best, they must feel warmth and fulfillment in applying their knowledge to the problems of individuals.

It is unfortunate that a schism exists between academic medicine and medical practice at a time in history when social and economic forces portend a further medical revolution which, although primarily concerned with the distribution of medical services, has serious implications for medical progress. “The Obsolescence of the Practitioner–Teacher”… attempts to stimulate a reunification of the ranks of physicians. The profession must guide its way into a future that will not have sacrificed the best of its heritage.
JAMA. 1965; 193 (7):610
Interesting, no? In some ways, the more things change, the more they remain the same. See my December 2014 post "The art of medicine consists of amusing the patient while nature cures the disease."
__


UPDATE: I've just finally had my Calypso Beacon prostate implants done (after several weeks' delay), for my upcoming IMRT tx. That did not go particularly well. The urologist had difficulty getting a clear prostate image with the rectal ultrasound probe, and the px took a long time to complete. I don't think he does a lot of these. He even alluded to not having "done this lately." Maybe that's my Bad; I probably should have asked. I had been told by his MA that "it won't be as bad as the biopsy."

Wrong. The biopsy px was a piece of cake compared to this transient bit of torture. I was wishing I'd brought a change of shirts. I soaked the one I was wearing.

I joke that "I feel like I have Toby Keith's boot up my ass."

Whatever. It should be relatively easy from here on.
Side note: Monday I called the urology clinic to verify my Tuesday Calypso implants px appointment and their Oakland address. "No, we don't show any upcoming appointments for you ... wait, let me check ... OK, we're changing our computer system, and your appointment is still in the old one."
They're migrating from NextGen to Epic. Thirty minutes later I got an email notifying me of a new message in my Muir portal inbox. I logged in. It was advising me of my next day's urology clinic appointment.

Shards.
On the continuing BCBSRI EoB follies. This latest is a real head-scratcher.





Click to enlarge. So, ZERO dollars billed to the insuror by anyone, yet BCBSRI nonetheless paid $10.64 to someone, for something, and my coinsurance amount for this phantom remittance is $1.17? The EoB page 2 detail tabulation lists a bunch of my November 2014 orthopedic PT encounters at the Brentwood facility (long since settled, and regarding which I apparently overpaid back many months ago by $123.16). None of the subtotals and totals add up. Not even close.

Another EoB dated the next day (07/23/15) also showed up in the mail along with this one, stating that my OOP met (Out Of Pocket) for 2015 is in fact not zero, but I have a max OOP balance of $986.92 (which, no doubt, my IMRT tx will zero out).

In the words of President-elect Donald Trump®, "these people are stupid!"

This is one reason why we in the U.S. pay double. This kind of stuff is pure Steve Brill.

BUT, WAIT! THERE'S MORE!

Sometimes I get to thinking that the folks at BCBSRI should be designated a Protected Class under the ADA. I just got a bill from "John Muir Magnetic Imaging." $2,925.00 for my July 9th endo-rectal coil MRI. Full retail balance. "Due upon receipt."

I logged into the BCBSRI subscriber portal.

At least this time it shows up.


That's it. No additional information detailing the rationale for the claim denial. No idea what "UM" refers to. "Utilization Management"?

I responded in their "secure messaging system."



Click to enlarge if necessary for reading clarity.

Let's recap: first, they denied my post-biopsy sepsis hospitalization claim, on the negligently erroneous grounds that I was "no longer insured" (they'd used my expired 2014 subscriber ID, still floating around somewhere in Muir's database -- notwithstanding that I'd handed over my active 2015 card at the hospital when I went in for the sepsis tx). Then they flubbed the ER doc group's separate "independent contractor" claim for $623. It finally got processed and paid some 90 days out, after a bunch of emails and phone calls. I was on the hook for about $45 of that in the end. The reason for that bumbling has never really been clarified. In part it was some stupid, dissembling beg-off crap about a mismatch between my wife's Walnut Creek P.O. box (where her BCBS mail goes) and our Antioch street address.

Look: you have my full name, my DoB, my Social, and my BCBSRI Subscriber ID; don't try to play me with this "no-match" baloney. I've been around IT too long.

With respect to this latest CusterFluck, recall, if you've followed my "shards" posts, that BCBSRI, via their "EviCore" auth review vendor, initially denied pre-auth for the endo-rectal coil pelvic/prostate MRI, a denial that was subsequently overturned on appeal. Muir refused to even schedule me absent an auth. It was eventually approved, and the px was performed.

But, now, they refuse to pay for any of it.

It almost smacks of fraud. More charitably, though, chalk it up to bureaucratic incompetence? 'eh?

Shards. Sand in the gears.

So, come Monday morning I will surely again be at length on the phone in "please hold; your call is important to us" mode, wasting my (unpaid) time trying to rectify this so that the MRI provider gets paid, without my being backed up into the coercive threat of a "past due collections" action.

UPDATE: Monday morning I called the Muir Imaging payment center to apprise them of the issue, and also had some "secure messaging" interaction on the BCBSRI portal. The latest reply:
Dear Robert Gladd, Thank you for the additional information. I have spoken with Dr. Xxxxx's office, and I am waiting for a return call from them, as they needed to access some records that the person that I spoke with could not access. They have promised me a return call by the end of day on Wednesday. I will reply via secure message after I have spoken with them. Thank you for your patience.
Dr. Xxxxx is the Radiation Oncologist at Stanford who ordered the pelvic/prostate endo-rectal coil MRI, sent electronically to Muir. Precisely why BCBSRI needs to contact him again escapes me. His px order is on record. The initial denial by their px/tx review subcontractor EviCore is on record. The appeal and subsequent denial reversal by EviCore is on record. The pre-auth is on record. The MRI was scheduled and performed. All a matter of record. A record that should be right there in the BCBSRI data.

I don't get it. There's no un-ringing this bell. What are we gonna have now? A "post-authorization" denial?

Refrain: This is why we pay double.

FRIDAY UPDATE

My query is on the bottom. The CS Rep response is atop that.


So, Muir Imaging used the wrong code in their billing submission. And, the data processing amnesiacs at BCBSRI looked no further -- which, I guess, plays to their advantage. Work the float for at least another billing cycle.
__

Next up for me? A CT imaging "targeting / tx planning" encounter at Rad Onco on Tuesday, after which I will commence 9 weeks of Calypso M-F IMRT, probably 2 weeks following that session. Should BCBSRI remain true to form thus far, I expect there will be more bozo stuff to report.
__

IN OTHER NEWS

Recall my recent citing of this book?


It will be released on Tuesday, August 25th. Can't wait to get it and study it. It was just the topic of an NPR "Fresh Air" segment.

MONDAY MORNING UPDATE

While I await tomorrow's release of "Machines of Loving Grace," yet another book on the topic has hit my radar. Just downloaded it.


Have yet to read it, but here's a snip from a quick keyword search:
The days of the “country doc” are long gone, but information technology is also transforming the character of medical practitioners in surprising ways.

The main shift is a growing recognition that the medical arts are not arts at all but a science that is better driven by statistics and data than intuition and judgment. In bygone eras, it was at least plausible that someone could absorb a reasonable proportion of the world’s medical knowledge and apply it to cases as they are presented. But over the past half century or so, as it became clear that the avalanche of research, clinical trials, and increased understanding of how our bodies (and minds) work was beyond the comprehension of a single individual, the field fractured into a myriad of specialties and practices. Today, your “primary care physician” is more of a travel agent to the land of specialists than a caregiver, except for the simplest of ailments.

But the hidden costs of this divide-and-conquer approach to medical care are about to become painstakingly clear. Coordinating the activities of multiple practitioners into a coherent plan of action is becoming increasingly difficult, for two reasons. First, no one has the complete picture, and, even if they do, they often lack the detailed knowledge required to formulate the best plan of action. Second, specialists tend to treat the specific conditions or body parts that they are trained for, with inadequate regard for the side effects or interactions with other treatments the patient may be receiving. For me, the practice of medicine today conjures the image of a Hieronymus Bosch painting, with tiny, pitchfork-wielding devils inflicting their own unique forms of pain.

As a patient, you would ideally prefer to be treated by a superdoc who is expert in all the specialties and is up to date on all of the latest medical information and best practices. But of course no such human exists.

Enter IBM’s Watson program. Fresh off its Jeopardy! victory over champions Brad Rutter and Ken Jennings, Watson was immediately redeployed to tackle this new challenge. In 2011, IBM and WellPoint, the nation’s largest healthcare benefits manager, entered into a collaboration to apply Watson technology to help improve patient care. The announcement says, “Watson can sift through an equivalent of about one million books or roughly 200 million pages of data, and analyze this information and provide precise responses in less than three seconds. Using this extraordinary capability WellPoint is expected to enable Watson to allow physicians to easily coordinate medical data programmed into Watson with specified patient factors, to help identify the most likely diagnosis and treatment options in complex cases. Watson is expected to serve as a powerful tool in the physician’s decision making process.” As with its original foray into AI fifty years ago, IBM is still cautious not to ruffle the feathers of the people whose rice bowls they are breaking, but one person’s decision process support tool is another’s ticket to the unemployment line.

No one likes the idea that his or her field is simply too big and fast moving to master. And doctors in particular aren’t likely to graciously concede control of their patients’ treatment to synthetic intellects. But eventually, when outcomes demonstrate that this is the better option, patients will demand to see the attentive robot, not the overworked doctor, for a fraction of the fee, just as many people would now rather have an ATM than a human teller count out their cash.

Kaplan, Jerry (2015-08-04). Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (Kindle Locations 1738-1764). Yale University Press. Kindle Edition.
See my July 20th post "AI vs IA: At the cutting edge of IT R&D."

Also apropos, Margalit Gur-Arie has a fine new post up.
Measuring the Doctor-Patient Relationship
...Although there is ample rhetoric about the doctor-patient relationship and patient-centered everything, much of what we do in health care today is in stark contradiction ... Patient choice is being curtailed by a bewildering array of narrow network health plans and wholesale clinical decisions made by corporate CEOs. Competence is being redefined to include care provided by non-physicians, non-clinicians, and algorithmic software. Continuity of care is being discouraged in favor of cheapness, convenience and continuity of medical records, while conflict of interest is inherent in all so called value-based arrangements. Compassion has been scripted by marketers, and communication, precisely codified for the eclectic, self-managing, highly educated, financially secure, and largely healthy, patient segment, has become the second most important factor defining the interaction between patients and the health system. The premier factor is of course, access to all of the above...
And then there's this, from NEJM:
The Paternalism Preference — Choosing Unshared Decision Making
Lisa Rosenbaum, M.D.


...Clearly, patients should have access to all available information, from their medical records to anticipated costs of care. But that it's wrong to deny anyone information doesn't make it right to always provide as much as possible. Might there, in fact, be such a thing in medicine as Too Much Information?...

Last March, my friend Paul Kalanithi, a 37-year-old neurosurgeon, died of lung cancer. Writing after his diagnosis, he contrasted his newfound obsession with cancer survival statistics with his struggle to communicate such information to his own patients without destroying their hope. As he struggled to extract from his oncologist precise information about his life expectancy, he realized, “What patients seek is not scientific knowledge doctors hide, but existential authenticity each must find on her own.”

Perhaps we can't provide existential meaning, but the way we share information may exacerbate patients' sense of vulnerability and alienation. When we rattle off a litany of possible risks, say “Please sign here,” and check our watches when the patient says, “Hold on, I need to put on my glasses to read this,” we have neither succeeded in the spirit of patient engagement nor honored anyone's values. But is more information the answer?


In an essay entitled “Arrogance,” published posthumously in 1980, former Journal editor Franz Ingelfinger describes his experience as a patient with adenocarcinoma of the gastroesophageal junction — the area he'd studied for much of his career. As he considered the trade-offs of chemotherapy and radiation, receiving contradictory expert opinions, he and his physician family members became “increasingly confused and emotionally distraught.” Finally, one physician friend told him, “What you need is a doctor.” Ingelfinger notes, “He was telling me to forget the information . . . and to seek instead a person who would . . . in a paternalistic manner assume responsibility for my care. When that excellent advice was followed, my family and I sensed immediate and immense relief.”


The doctors I admire most are characterized not by how much they know but by a sophisticated intuition about how best to share it. Sometimes they tell their patients what to do; sometimes they give them a choice. Sometimes, when discussing treatment options, they cover all seven tenets of informed consent. Sometimes, instead, seeing the terror of uncertainty in a patient's face, they make their best recommendation and say, “I don't know how things are going to turn out, but I promise I'll be there with you the whole way.”
___

More to come...

Sunday, August 16, 2015

"Personalized Medicine" - will Health IT be up to the task?

Bought this book yesterday morning, and read it straight through, it was so compelling.
...“[A]dverse drug reactions” are common: every year more than 2 million North Americans are hospitalized because of adverse reactions to prescription drugs. The reason why drugs work in some people and cause bad reactions in others can usually be traced back to differences in genetic makeup. Medicines that work for most people may not work for you. They may, indeed, harm you. So we need two things: first, we need ways of predicting and detecting disease well before it becomes life threatening; and second, we need medicines that work for you and your unique body.

Medicine has been trying to do these two things for millennia. While enormous progress has been made, it is still not good enough. And so it is that we are nearing the biggest revolution of our time — perhaps of all time.

This revolution has many names and guises. It is sometimes called personalized medicine, sometimes precision medicine, sometimes stratified medicine. It is a cousin of “evidence-based” medicine, a relatively new concept in medical practice. (Whoever came up with that name was clearly trying to make a point.) Whatever the name, what we will call personalized medicine — medicine based on the unique molecular makeup of our individual selves and a molecular-level understanding of whatever disorder we may have — is on our doorsteps...

Medical progress to this point has been mainly based on advances that benefit the population as a whole rather than you as an individual...

[A] family practice physician, on being asked how he selects the best drug for patients suffering from depression, answered, “Well, I have a dartboard hanging behind my door. It depends what number I hit.” There is no way for him to know in advance which drug will work best for which patient and which patient will suffer a nasty side effect. And so the patient and the doctor embark on a risky trial-and-error journey to find the best, most effective drug for him or her...

Your doctor sees the macroscopic version of you and, on the basis of a physical examination and your symptoms, can often diagnose what is wrong quite efficiently. But your doctor does not know much about the microscopic version of you, where disease and your responses to treatment are first manifested.

Your doctor does not know details of your genetic code and therefore cannot know how you will respond to a drug he or she may prescribe. Your doctor does not know the composition of molecules in your blood, which contain a huge amount of diagnostic information regarding diseases that you may have or be trending towards, whether the drugs you are taking are working to cure whatever disorder you are suffering from, or whether your diet is appropriate. Your doctor does not know the types and amounts of micro-organisms that are living in you and on you, which influence how well your immune system is working and play important roles in inflammatory diseases. In short, your doctor does not have access to a lot of important molecular-level information about you to guide many of his or her decisions. This can lead to wrong or delayed diagnoses and inappropriate therapeutic interventions.

The medicine of the future will be more personalized — and much more effective — because detailed molecular-level information about you and whatever disorder you may have is increasingly available...

This future is not very far away. [Kindle locations 44-147]
A great read. At once informed, erudite, accessible, ethically introspective, and genial. Dr. Cullis has considerable cred.

A nice follow-on perspective to this one I'd first cited in a prior post back in May.


I then cited "Biocode" again in my July 16th post "Personalized Medicine" and "Omics" -- HIT and QA considerations.

Let me return to my core concerns aired a month ago:
I have a couple of concerns. Docs often don’t have enough time TODAY to get through an electronic SOAP note effectively, given workflow constraints. Adding in torrents of “omics” data may be problematic, both in terms of the sheer number of additional potential dx/tx variables to be considered in a short amount of time, and questions of “omics” analytic competency. To that latter point, what will even constitute dx "competency" in the individual patient dx context, given the relative infancy of the research domain? (Not to mention issues of genomic lab QC/QA -- a particular focus that I will have, in light of my 80's lab QC background).
President Obama’s current infatuation with “Precision Medicine” notwithstanding, just dumping a bunch of “omics” data into EHRs (insufficiently vetted for accuracy and utility, and inadequately understood by the diagnosing clinician) is likely to set us up for our latest HIT disappointment -- and perhaps injure patients in the process.
Apropos, Dr. Cullis, Chapter 5:
THE TIPPING POINT, where medical practice will suddenly adopt the principles of personalized medicine, will be reached within the next five years. You will experience a memorable personal tipping point when you first get your genome sequenced, your microbiome analyzed, your metabolome assayed, and your proteome measured and then sit down with your doctor or wellness coach to discuss the implications of this very definitive data that is all about you. The data will be so precise and all-encompassing that it will show not only what may be wrong with you but also what you ate for breakfast yesterday and what type of dog you own. The impact of this information on you and the way you live your life will be greater than any other technological advance you have ever experienced. Remember when you started using Google and suddenly had access to all the information in the world and couldn’t imagine how you operated before? Or, for those who are old enough, remember the feeling you had when you started using email and suddenly had immediate communication with all parts of the world, for free? Or, for those of you who are really old, the first time you realized that having your own computer could actually be useful? Or, for those of you who, like the author, are positively ancient, the time you picked up your first hand-held calculator that could multiply and divide and take square roots? Well the personalized medicine revolution will trump them all.

The harbingers of the revolution are all around us.

Early versions of the digital version of you are starting to appear, although progress has been slow because of the enormous institutional, technical, and societal issues involved. The deeply conservative instincts of the medical profession have not helped either. The first manifestation of the digital you is (or will be) your electronic medical record (EMR) sometimes known as your electronic health record (EHR). The first attempts to introduce EMRs date back to the late 1960s; in the 1970s, the Department of Veterans Affairs had a working Computerized Patient Record System that established the ability of EMRs to reduce medical errors. Problems ranging from lack of standards, security concerns, and aversion to change prevented general adoption of EMRs until, with some frustration, President Obama pushed the Health Information Technology for Economic and Clinical Health Act into law in 2009. The legislation mandated a transition to EMRs for physicians and hospitals that treat patients covered by government insurance.

Still, it is surprising that in the U.S. in 2012 (the latest year for which records are available), only 72 percent of physicians used any form of electronic health record, ranging from just 54 percent in New Jersey to 89 percent in Massachusetts. In 2009, only 48 percent of physicians used EMRs. And right now, EMRs are not all that complicated. They consist of a digital store of your complete medical history, including medications and allergies, immunization status, laboratory test results, radiology images, vital signs, and personal statistics like age and weight. The lack of EMRs has meant untold duplication, errors due to incomprehensible handwriting, lack of knowledge about pre-existing conditions, and ongoing patient frustration with doctors who refuse to enter the digital age that the rest of us embraced twenty years ago. How many times have you been referred to a doctor, only to be asked the same questions all over again or be required to do a test that you’ve already done, simply because the doctor has no access to an electronic version of your medical history?

Some of the reasons for delay, it has to be admitted, are not the fault of the medical profession. Privacy is an enormous issue. Your EMR, because it is in electronic form, is susceptible to the same sort of hacking as any other personal data stored on your computer or by your credit card company or by your bank. Clearly you don’t want an insurer or employer to get hold of your medical record without your authorization. But if your bank can achieve a secure online system for you to conduct your financial transactions, why can’t one be created for your medical information? Regardless, the digitization dam that’s been holding back universal EMRs has clearly burst, and we are finally entering the digital age of medicine.

Assuming you have one, can you get hold of your own electronic medical record? You should be able to — after all, it’s all about you. But ownership can be complicated, and doctors who have purchased an EMR system may feel that information in it about you belongs to them. Often, a strange distinction exists: the doctor or hospital owns your medical record, but you own the data in your medical record. In any event, if your medical record is digitized, you should be able to get hold of a digital copy.

The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR to achieve a more complete digital version of yourself. Early signs of development of such personalized data clouds and their utility are being seen for a small number of individuals who have access to the sophisticated resources currently required in order to study themselves in detail at the molecular level... [op cit, Kindle Locations 882-920].
"The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR to achieve a more complete digital version of yourself." 
Yeah, OK, but we can't even get our wailing incumbent EHR vendors to agree to comply en masse with the functionally feeble Meaningful Use Stage 3 specs.

Then there's the issue of "omics" analytic competency.
So, you say, “That’s good so far, very impressive. I can see how we’ll generate all this information and store it, but you still haven’t told me how I’m going to use this damn data to prevent or cure my disease or make myself feel better.” Ah, yes — slight problem there. That’s where the bottleneck is, and any readers who want a well-paid, highly secure occupation for the next twenty years should become experts in bioinformatics, particularly as it pertains to interpreting the large datasets surrounding genomic, proteomic, and other “omic” information... [ibid, Kindle Locations 793-797].
Beyond the requisite Omics dx acumen (in short supply), I remain skeptical regarding the notion of adding "your genomic, proteomic, microbiomic, and all the other data to your EMR."

For one thing, we think we're having "interoperability" problems now? Lordy.

UPDATE: AN ADMONITION FROM ANOTHER SOURCE

The use of genomic sequence data in a clinical setting is truly a new phenomenon in this 21st century. In the year 1999, no human had ever had their genome sequenced; by 2009 there were seven individuals who had had their genomes sequenced, and it has been predicted by some that the millionth person will have their genome sequenced in calendar year 2014. Although it is not clear if or when the remarkable uptake of this technology might level off, it is clear that this technology will increasingly affect the way medicine is practiced. All healthcare practitioners will be increasingly asked to put this [sic] data into an appropriate clinical context.

Genomics has the potential to improve our approach to care in almost every aspect of medicine from cancer to infectious diseases.

We are moving into an increasingly DNA-first world of clinical medicine, where DNA sequence data will be available for decision-making prior to the patient’s visit to your office or hospital. In this DNA-first world, we will all need point-of-care decision support and we hope in these early years of genomic medicine you will find this book to be useful in your decision support needs.

Importantly, enthusiasm about genomic medicine should not lead to an abandonment of the classic tools of clinical medicine that are needed to inform care. First and foremost among these tools are the history and physical examination. The importance of environmental influences on health must not be overlooked. While this text gives great attention to the “nature” side of the nature versus nurture equation, there are many other sources of important information on the role of environment in all diseases including that which is considered as “genetic”.

The evolving role for family health history

Family health history has been used in clinical medicine for generations as a proxy for genetic information in efforts to predict disease risk in patients. In contrast to its previous application, in the era of DNA-first genomic medicine, family health history will increasingly be used to add context to the DNA. Since every patient will be revealed to have rare genetic changes, so rare that they may only exist within their immediate family, the health history of others in the family who share these changes will be necessary in order to gain insight into how these changes will affect health. It will only be through knowing how these changes played out for the patient’s immediate relatives that we will be able to interpret some sequence variations in the patient who is being seen. It is because of this “interpretive need” that we will ultimately require detailed family histories that are annotated with DNA sequence variant information...
I haven't bought this book. Yet. I may. I used Dragon to talk the foregoing excerpt in.

It's rather expensive. But, if you surf through the Amazon Preview chapter topic listings, you can see how a clinician would find it useful. A principal takeaway for me is that a truly effective "genetic counselor" might also have to be an MD. And, we might ask, will such "Omics" curricula become a requisite component of physician training -- or, will it become the domain of yet another specialty subset (or, more troubling, some sub-MD pay-to-play "online certificate" or otherwise for-profit diploma-mill holder)? I seriously doubt that my urologist did anything other than take my OncoType dx genetic assay prostate cancer report at face value.

Just like my RadOnco docs likely (necessarily) did little to nothing beyond reading the radiologists' "impression" narrative summaries for my CT, MRI, and bone scans.
__

"The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR"

Yeah, but, beyond workforce capacity and dx acumen, what about the chronic, persistent data silo/opacity issue? A recent THCB post asks
What’s the Definition of Interoperability?
Seriously? We already have it. The IEEE definition. As I commented at THCB:
We already HAVE a concise definition of “interoperability," via the IEEE, “interoperability: Ability of a system or a product to work with other systems or products without special effort on the part of the customer. Interoperability is made possible by the implementation of standards.”

This other stuff is merely about “data exchange.” What is happening is that we’re “defining interoperability down,” removing the “without special effort” part. Were there a Data Dictionary Standard, then we could talk about interoperability. Data are called “the lifeblood of health care.” Fine. Think Type-O, the universal blood type, by way of precise analogy.

Maybe the API will comprise data exchange salvation. Maybe.
Responding to the interviewee in the THCB post, I quoted him and offered a response.
“One of the key ways we build good support systems is by having good data. It’s a “garbage in – garbage out” problem. One can’t make good decisions without good data. One of the problems is that a lot of the time data exists but it isn’t in the computer or isn’t in MY computer. Maybe someone has had a test somewhere else and I might not have any info, or if I’m lucky I might have a scanned PDF of the results. But it’s rarer still that I’ll have good, structured data that I’ve been able to pull in from outside sources without a lot of transcription or effort. So I became very interested in this problem of interoperability and have been doing a range of different kinds of work. Some of it actually focuses on how you do decision support, in the cloud or across systems. So the question becomes: how can you build a decision support system that spans several electronic health records and integrated data from multiple sources to make more accurate suggestions for patient care?”
__

By having a comprehensive standard data dictionary. Absent that, you have to have “n” variables x n(n-1) translative “interfaces” (if you’re after computable “structured data” rather than document-centric reports).
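
Back-of-the-envelope, and assuming (my assumption, for illustration only) that absent a shared dictionary every ordered pair of systems needs its own translation interface, while a shared standard needs just one mapping per system:

# Point-to-point "translative interfaces" vs. mappings to one shared standard.
# Purely illustrative arithmetic; real integration effort varies per interface.
def pairwise_interfaces(n_systems):
    return n_systems * (n_systems - 1)   # one translator per ordered pair of systems

def standard_mappings(n_systems):
    return n_systems                     # one mapping per system to the shared standard

for n in (5, 50, 500):
    print("%4d systems: %7d point-to-point vs %4d via a shared dictionary"
          % (n, pairwise_interfaces(n), standard_mappings(n)))

Which is the point of the Type-O analogy: one agreed-upon mapping per system instead of a combinatorial pile of bespoke interfaces.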
It's an excellent post, albeit replete with the usual obtuse "interoperababble" we have all come to know and love. On the upshot of Meaningful Use:
[Adam Wright] We were going to create these strong incentives for people to adopt EHR’s, knowing that EHR’s were not yet perfectly interoperable or even always perfectly usable and didn’t have all the functionality that we wanted. And now we’re trying to go back and patch that. The thing is we now have had a lot of opportunity to learn how, with these EHR’s that were developed with large hospitals or academic systems in mind, how do they really work in a critical access hospital or in a single doctor practice. So we’ve learned, and I think the key is going to be to translate what we’ve learned into concrete improvements. But I think that’s been hard. I talk to some of my friends who are vendors and they’ve said “A lot of people are giving us feedback and we’re working on it as fast as we can but at the same time we’re getting a lot of pressure from Meaningful Use. So we can’t even use our best developers to build the stuff our customers are asking for.” So I think some way of fixing how we do innovation in health IT is going to be important and I don’t know exactly how we’ll do it, given how many competing priorities there are.
Indeed. Nicely stated. More Adam Wright:
I absolutely think that seeing a complete picture of a patient’s information is key for safety and I do think that lack of interoperability is 100% a safety issue. It’s something that we need to work on. But we need to get beyond the “unconscious patient in Wyoming.” I think that there’s so many more complex, subtle and insidious issues. The thing is that it’s often hard to measure. It’s hard to say “this is the one piece of information that changed my mind”. But I do think complete information like that is going to be very important. I’ll also offer you a flip side that I don’t really have an answer for yet: How much information am I responsible for viewing about a patient? Somehow I now have every piece of information about a patient from the moment of birth to the present. That might be more information than I can review before my brief visit with a patient. Part of the solution to that is going to be in technology or tools that will help me summarize the information, spot key information, spot trends etc. I think it’s going to be really exciting when we have that problem and have to build those tools. I look forward to the day when we have so much information that we really need sophisticated tools to organize and sort through it. Right now we’re a long way from that. But I think you’re 100% right that it’s a safety issue.
Interesting. Some of that maps right back to the "omics" concerns set forth above.

At the conclusion of the THCB post:
LK: So the last question is something Dan Monro brought up. Dan did a three-part series on interoperability on the site I write for called HL7 Standards.com. Essentially if you don’t have a patient identifier, then interoperability is a waste of time. He alluded to the idea of patient identifiers being something like a social security number in that they’re kind of old-school. There’s better ways to do it with cryptography and being able to ID people biometrically. So what can you tell us about patient identifiers? 

AW: I think it’s awfully important. It’s certainly the case that when I have a database and you have a database and we want to link them together, it matters that we have a key so that we can tell who is the same person. The approach right now with using a social security number has problems. Not everyone always has a SSN, not everyone remembers them, there are errors, they lack any way to validate them, etc. Using something like your Blue Cross member number is no good either because you can get another job. So I think the solutions we have now are rotten. We need some way to identify patients across systems. The most commonly proposed, and probably simplest or most parsimonious solution, is a national patient identifier. A numerator who sits in the government and assigns everyone a number shortly after birth and that’s their number. That is just not going to fly palatably. I just can’t see us creating the political will and I also am not sure that it is that desirable. Travelers could come to the country and not have numbers or Americans can go elsewhere and how will this all work? It seems problematic. But I think there are smart ways we could use technology to approximate that. For a long time we had probabilistic record linking approaches where we look at your name, date of birth, address, age, sex, etc. and try to figure out what is the probability that these two people are the same, and we’ve had some pretty good results. The Indianapolis Network for Patient Care has had some pretty good results there, using probabilistic linking rather than an identifier. The reality is that if we can put the patient at the center of this and creates a credential and authentication that they control, I think that would be a lot more palatable than if we put some sort of central government number assigned to people...
Well, yeah, a "national patient identifier" seems to be a non-starter, and multi-field proxy keys (probabilistic matching; a toy sketch follows below) seem to be the only practical work-arounds. Adam sums up:
I think there’s probably a lot we can learn from internet authentication about how to create reliable patient identifiers: better identifiers, with more security around them. The patient could see more of their information and how it was shared. I just think that there are better solutions than a single government-based national patient identifier. And even if it’s the best solution, I just don’t think it’s politically possible. So I think we ought to be focusing our efforts on something else.
For more on this line of thought, and the "EXTREME" interop model, see "Defining Our Terms: Does Anyone Know What an "Open EHR" Really Is?"
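As an aside on the probabilistic record linking Wright mentions, here is a deliberately toy sketch of the idea: score candidate record pairs on agreement across demographic fields and flag pairs above a threshold. The weights, exact-match comparator, and cutoff below are illustrative assumptions; real linkers (Fellegi-Sunter style) estimate their weights from data and use fuzzy name matching:

# Toy probabilistic record linkage: score agreement across demographic fields.
# Weights, comparator, and threshold are illustrative, not estimated from real data.

RECORD_A = {"last": "smith", "first": "jon",  "dob": "1970-03-02", "sex": "m", "zip": "37830"}
RECORD_B = {"last": "smith", "first": "john", "dob": "1970-03-02", "sex": "m", "zip": "37831"}

WEIGHTS = {"last": 4.0, "first": 2.5, "dob": 5.0, "sex": 1.0, "zip": 2.0}  # assumed weights
THRESHOLD = 7.0  # assumed cutoff; in practice tuned against labeled record pairs

def field_agrees(a: str, b: str) -> bool:
    """Crude exact-match comparator; real linkers use fuzzy/phonetic matching."""
    return a.strip().lower() == b.strip().lower()

def match_score(rec1: dict, rec2: dict) -> float:
    """Add a field's weight when it agrees; subtract half its weight when it doesn't."""
    score = 0.0
    for field, weight in WEIGHTS.items():
        score += weight if field_agrees(rec1[field], rec2[field]) else -weight / 2
    return score

score = match_score(RECORD_A, RECORD_B)
print(f"score={score:.2f} -> {'probable match' if score >= THRESHOLD else 'non-match'}")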

SO, WHAT MIGHT COMPRISE THE "SOMETHING ELSE"?

Enter the "cryptocurrency" model?


WHAT!? you say? Stay with me.

To wit, "YouBase."


For those who really want to get their geek on.
Abstract
YouBase enables individuals to create and maintain a personal data store on a distributed public network, allowing the unprecedented ability to easily gather, analyze, and share private data for any purpose imaginable. Data is structured hierarchically so that increasingly identifiable data can be placed at levels closer to the root, allowing arbitrarily anonymized data to be shared with whomever is requesting access to it. The data format is flexible, enabling easy integration with third parties. In addition, read-only or read/write access can be granted at any node in the tree, allowing the user to tightly control access to every subtree in the data store. YouBase thus provides the building blocks for the ultimate peer-to-peer central repository for private data, enabling individuals, organizations, and the world to make smarter decisions.

Introduction

Cryptography combined with distributed applications and databases in peer-to-peer networks provide the fundamental building blocks required for securing stores of individual-centered digital property in an open standards-based manner. By using encryption, digital signatures, digital wallets, and distributed data, ownership of digital information can be managed in a decentralized store. Such a store will be simultaneously secure and private, with strong identity services, while also available anywhere.

Information and rights to that information will ideally follow an individual as she moves through various contexts in her daily life, enabled with the ability to provide trusted, verified identity within those contexts. A longitudinal record could be created, including consumer-generated application data, with the individual as the primary controller of access - all independent of a third party.

YouBase provides an individual-centric security structure that separates personal data from identity while allowing for secure and structured read and/or write access to trusted parties on a peer-to-peer data store. This structure provides several benefits, including:

  1. a way to securely input, access and share any kind of file or record
  2. a way to organize authorized access to information into a structured hierarchy
  3. improved anonymous information sharing that could be used as a public or shared private asset
  4. information sharing transactions can be tied to financial transactions
With these tools in place, we imagine a world where, rather than storing personal data, third parties could simply subscribe to data owned and controlled by the individual.
ePHI "controlled by the individual"? Yeah, Bushkin's "Medkaz" riff comes to mind.

Cutting to the chase of the YouBase paper:
Conclusion
In summary, using an HD Wallet to access a personal data store provides multiple benefits:

  • Data agnostic. Securely input, access and share any kind of file or record by providing keys and digital signatures.
  • Structured access. A way to organize authorized access to information into a structured hierarchy.
  • Authenticated data and proven identity without storing third party personal information.
  • Blockchain available, but not dependent. Information sharing transactions can be tied directly to a universal ledger as a public proof and/or as a financial transaction, but are not required to be. Wallets can be used solely as structured access to a data store, even though wallets are valid bitcoin addresses.
  • Encapsulation. Because each data element has a unique address, any one breach can be insulated from an attack on another.
  • Privacy. Without a high-level key, no one element of data can be tied to any other. With this level of data anonymity, more data can be donated by users for use in various forms of data commons.
  • A universal set of addresses. Using a public-private key hierarchy for data access provides for a universal address to read or write information based on an individual’s HD Wallet. A physician's office could send secure patient information to fulfill Meaningful Use requirements, for example.
  • Longitudinal tracking. All transactions are time-stamped, so all records can become a longitudinal record.
  • Security. Encryption of data by default.
  • Personal data as a service. Opens the door to have an API to share what you want with whom you want.
Sounds great. What's not to love, if you take all this gauzy "secure, seamless interop" stuff uncritically? The "HD Wallet," though, is unvarnished Bitcoin crypto.
Structured data
The core of the YouBase solution is a new kind of BIP32 hierarchical deterministic wallet or "HD Wallet" tree for the control of access to personal data stores. YouBase's wallet implementation follows the recommendations in BIP43 for extending BIP32, and can thus be thought of as BIP46, following the examples set in BIP44 and BIP45.

The HD Wallet contains a tree structure with extended keys such that each parent key can derive the children keys, children keys can derive the grandchildren keys, etc. An extended key consists of a private or public key and a chain code. Sharing an extended key gives (private or public) access to the entire branch. A useful application is that a user can provide an extended private key to a trusted source that can then write (deposit) information in that branch of the tree without having access to information in other branches.

Providing public/private key pairings in such a structure offers a number of benefits. First, specific branches can be used as data stores for specific types of information. Information secured in wallets can follow a pre-specified structure where different kinds of information are stored in different branches of the tree.

Secondly, along with that data structure, different rights to the tree have permissions structured such that different parties can have different read/write permissions to single nodes or to entire branches. Data can be partitioned into separate branches, allowing users to grant access on a granular level, down to the individual record, or even changes to a record. Specifically, a private key has read/write access to the data while a public key has read access only.

Third, HD Wallets are flexible in that they can create sequences of public keys without having access to the private keys, so that read-only or receive-only permission can be granted in less secure environments without risking access to the private keys. From the outside (with access to a public key), there is no indication that the key is part of any larger structure. It becomes a bitcoin address like any other...
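To make the "parent key derives the children keys" idea concrete, here is a deliberately simplified sketch of hierarchical deterministic derivation using only Python's standard library. It is not BIP32-compliant (real HD wallets derive secp256k1 keypairs per the spec); it only illustrates why handing someone a branch's extended key grants access to that branch and nothing above or beside it:

# Toy hierarchical deterministic key derivation (NOT BIP32-compliant; illustration only).
# Real HD wallets derive secp256k1 keypairs per BIP32; here a "key" is just 32 bytes.

import hashlib
import hmac

def derive_child(parent_key: bytes, chain_code: bytes, index: int) -> tuple:
    """Derive a (child_key, child_chain_code) pair from a parent extended key."""
    data = parent_key + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # left half -> key material, right half -> chain code

def derive_path(key: bytes, chain: bytes, path) -> tuple:
    """Walk a derivation path such as [0, 3, 7] down the tree."""
    for index in path:
        key, chain = derive_child(key, chain, index)
    return key, chain

# The master extended key can reproduce every descendant...
master_key, master_chain = b"\x01" * 32, b"\x02" * 32
labs_key, labs_chain = derive_path(master_key, master_chain, [0, 3])  # e.g. a "labs" branch
result_key, _ = derive_child(labs_key, labs_chain, 7)                 # one lab-result node

# ...while sharing only (labs_key, labs_chain) grants that branch alone:
# the holder cannot reach a sibling such as a hypothetical [0, 4] "imaging" branch.
print("lab result node key:", result_key.hex())

That is the property behind the quoted "deposit" use case: give a trusted source one branch's extended key and it can write there without touching the rest of the tree.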
Architecturally, I am reminded a bit here of my 80's lab days in Oak Ridge. Once we'd progressed to having a Netware LAN, we bought a LIMS package (Laboratory Information Management System) that was written in the "System J" language (basically a grab-bag mishmash of purloined elements of C, Fortran, Basic, Cobol, APL, Pascal, etc). It was ostensibly totally end-user modifiable and customizable. The database was not RDBMS, but rather a "hierarchical" node/stem/leaf "forms-based" construct.

We eventually ditched it and went full-bore Oracle RDBMS, first on a client/server LAN with PCs acting as dumb terminals against a Unix (SCO Xenix) host, then, having scrapped that as well, on a DEC MicroVAX 3100. Most of my bench-level apps work was xBase, with specialty library calls; all this other stuff just gave me whiplash. I moved on in 1991. Others toiled on with the endless LIMS project. I don't think it ever bore the fruit we'd all envisioned. Part of the problem is one that endures today: lab managers and their staffs wanted to more effectively and efficiently manage lab throughput -- i.e., workflows -- while the Corporate Suits wanted to surveil "productivity," in terms of money.

Sound familiar?
 
"YouBase is designed to provide a substrate on which any individual-centric service can be built."
Healthcare
There are a number of widely-known problems in health IT that an individual-centric data store could help solve.

  • Security. Breaches in health care have become all-too-common because of the high value to identity thieves. A premium is paid on the black market for such records, estimated at $50 per record. In the US, in the first quarter of 2015 alone, nearly 90 million medical records have been compromised, including identity, clinical and financial information. At least part of the problem is misaligned incentives between third parties and patients. Providing a repository of data independent of a third party could provide a framework where personal data is provided on a subscription basis, rather than stored in multiple locations by multiple third parties.

  • Access. At the same time, a person's ability to access and manage personal information currently resting with third parties is difficult, recently leading to the #NoMUWithoutME and getmyhealthdata.org campaigns for personal health information access. Ideally, each person will have a private, universal and secure container of their own medical data that serves as a reference for health care stakeholders.
  • Interoperability. YouBase is data agnostic and could act as a single source for data, shared as needed and controlled by the patient, independent of data types. Data within the store could be translated between the various profiles. Any kind of data profile can be accepted and defined in the schema, so there's no need to pre-define the data structure before receiving the payload.
  • Research. Consumers can anonymously donate their validated data with minimal metadata or personal information. Consumers can have a personal container to directly connect their validated clinical and claims data, or provide this validated information to an external party.
  • Sending records from provider to patient. A set of universal addresses will allow for secure transmission of a patient record, simply by scanning a public address for which the health care provider has a private key.
  • Identification at point of care. YouBase can provide validated identity at the point of care through IDs, unique addresses and digital signatures, while maintaining privacy. Digital signatures will improve data quality.

    For example, a user could enter a lab and show that his identity matches with a traditional ID, or simply present his YouBase token that has been signed. The public key token is then verified as belonging to the user and documented in the system. The phlebotomist takes a blood sample, notes the date/time, and the sample is then permanently associated with the user's token/private key. The sample is processed using the lab's existing system. The results (data) are sent to the user's YouBase wallet and can only be opened/viewed by the token/secure-key.

    This could be applied to many health care transactions, creating validated identity and a universal set of addresses to which patient information could be signed. Using this kind of system will also improve data quality as each data entry will require a digital signature linked to the person who entered the information.
These are just a few examples. Our goal here is not to identify every use for the YouBase platform but to provide a starting point around which new ways of using trusted identity and privacy to manage health information can be imagined.
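For the point-of-care identification example, the sign-and-verify step might look something like this sketch, which uses a generic Ed25519 keypair via the third-party cryptography package. Nothing here is YouBase-specific; the token format is made up, and key management and the encryption of results back to the patient's wallet are glossed over:

# Sketch: patient presents a signed token; the lab verifies it before linking a specimen.
# Requires the third-party "cryptography" package; token format and workflow are assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Patient side: a private key held in the patient's wallet signs a visit-specific token.
patient_private = Ed25519PrivateKey.generate()
patient_public = patient_private.public_key()
token = b"visit:2015-08-31T09:30|specimen:blood"  # made-up token format
signature = patient_private.sign(token)

# Lab side: verify the signature against the presented public key before associating
# the blood sample (and, later, the results) with this identity.
try:
    patient_public.verify(signature, token)
    print("Token verified; specimen linked to the presenting key.")
except InvalidSignature:
    print("Verification failed; do not link the specimen.")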
Be interesting to see whether this gets any traction in the healthcare space. Color me skeptical at this point. Securely, accurately, and efficiently moving around the hundreds to thousands of ePHI variables (many of them multi-encounter/longitudinal) connected to a given patient record differs materially from paying for CDs (or dope) with Bitcoins.

We'll see.

IN CLOSING, BACK TO PIETER CULLIS


Chapter 7:
THE GENIE IS OUT OF THE BOTTLE

SO, WHERE ARE WE? In the previous six chapters, you have seen how the development and application of modern science over the last 400 years have led to an understanding of everything from planetary motion to the innermost workings of the cells in your body. You have learned about many of the bits and pieces that you’re made of, what they do, and how you can measure them, resulting in the “molecular you.” We have seen how the advent of the digital age allows us to store all this information electronically, and how analysis of the “digital you” embodied by this massive data cloud can identify biomarkers that provide an incredibly accurate picture of your state of health and disease. Remote-sensing devices can now analyze every breath you take and every beat of your heart and alert you well before you pass your best before, or rest in peace, date. Through social media, you will soon be able to share these intimate details with sympathetic listeners suffering from disorders just like yours, compare your digital selves to find the most effective therapies that should work for you, and locate where these are available. Taken together, these advances are driving massive changes in the practice of medicine as we know it today. But that’s only the beginning of the potential disruptions molecular medicine may cause...

The rapid advances in accurate diagnostics that can be anticipated in the near future, and their availability to you, will challenge the medical establishment’s role as the gatekeeper of medical advances, because knowledge and power will be passed to you, the consumer. For example, the current fifteen-year wait time for a new medical advance to reach the doctor’s office — to reach you — will not be tenable when you know with certainty what disorder you have, and have done your research as to which therapy is most appropriate to you and where it is available.

Certainly, personalized medicine will be creating havoc within the medical profession. The role of doctors in making diagnoses will be increasingly supplanted by computer analyses of the digital you. Accurate diagnoses combined with advanced imaging techniques and analyses of genomic and other data in the digital you will mean that safe and effective treatments will be readily identified. Thus the role of doctors will be in transition, as it has been for some time. Fifty years ago, doctors cared for people who were really unwell: 80 percent of their job was looking after the dying or seriously ill. Today, the treatment of chronic disease has become the norm. Care of type-2 diabetes, high blood pressure, arthritis, and cancer survivors takes up the majority of time. As these chronic disorders become increasingly controlled through a molecularly based, personalized approach, and as diagnosis and treatment are largely decided by analysis of the digital you, only complex and severe problems will need the doctor. So what will doctors be doing?


Two scenarios are possible. Those who do not have access to a doctor — or advanced health care — will find the playing field dramatically leveled. Relatively inexpensive omic data — potentially in the range of $100 for a complete analysis — and free online analyses available through the Internet will enable patients around the world to access state-of-the-art diagnostic resources. This information, combined with Internet searches and social media such as PatientsLikeMe, CureTogether, and other disease-specific websites, will also make it possible for you to discover what the most appropriate treatment is and where it is available... [op cit, Kindle Locations 1732-1810].
As I asked in the title:
"Personalized Medicine" - will Health IT be up to the task?
Not a lot of time to be wasting, with tsunamis of Omics data approaching.


UPDATE

Apropos of the increasing need for data exchange/"interoperability" and the YouBase proffer, some interesting thoughts in Dr. Carter's latest post at EHR Science:
Future-Proofing Clinical Care Systems—Because, Well, Change is Constant
by JEROME CARTER on AUGUST 17, 2015


Everything changes. The workflows from last year may not exist next year. There will be new reporting requirements, new data elements, new forms, and new coding systems. Creating software that grows gracefully with the times takes serious architectural thinking from the outset...

It is no accident that current systems tend to be monolithic and hard to change without additional programming. As with most things in life, creating modular clinical systems is easier said than done. More importantly, modularity and configurability MUST be design goals from the very beginning, as they are not easily added after the fact...

Consider something as basic to EHR systems as the problem list. Problem list functionality could be provided in any number of ways. The simplest would be to treat the list as an array/collection of terms with start/stop dates. In a relational database, such a list would be stored in a single table. At the user interface, rendering of the list could be done using a simple grid, a multi-line text box, or be printed plainly in a window. Here we have the list in three modalities: in memory, in a data store, and presented on screen. In each instance, the problem list is simply treated as data—a series of values to be stored or displayed.

However, what if we decide to make the problem list (PL) a component? We know what the information role of the problem list is for users, but what is the functionality of a PL computationally? Here the question becomes what role the PL plays as a component of the EHR beyond its role as an information source for the user.

Now, consider this scenario. Chronic renal disease is added to the PL. Should there be a function in the PL component that automatically checks meds to suggest dosage adjustments? Should a PL component have a function that scans new labs to look for undiagnosed problems?

Treating a PL as a component that controls all diagnosis/problem functions requires a completely different software design than one that acts as a list and simply records/presents what the user has entered. Now we are considering functions that happen independently of any user interaction with the PL. Going a step further, let’s make the PL swappable (i.e., third-party vendors could sell a PL component that snapped into the EHR). Providing users the ability to swap out components would require yet another architectural adjustment...


The second version of modularity allows one to add a PL module to an EHR system in a way similar to adding a new app to a smartphone. This second version would allow updates to be installed after the EHR system is up and running. Imagine how owning an EHR would be when one could buy the PL from one company and the medication list from another. Don’t like the labs display? Swap it! Both versions of modularity are helpful, but the latter also prevents users from being tied to the innovativeness, or lack thereof, of any particular vendor...
'eh? Modular EHR components as, in effect, plug & play "apps" that play nice together? Could such comprise the mature evolution of the still-tentative "API"? Read the entire post. Jerome Carter never disappoints.
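To make the "swappable PL component" notion a bit more concrete, here is a minimal sketch of what such a contract might look like, in Python for brevity. The interface, method names, and the renal-dosing hook are my own illustrative assumptions, not anything from Dr. Carter's post or any actual EHR API:

# Sketch: a problem list as a swappable component rather than a bare data table.
# Class and method names are illustrative assumptions, not an actual EHR API.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Problem:
    code: str                      # e.g., an ICD-10 or SNOMED code
    label: str
    onset: date
    resolved: Optional[date] = None

class ProblemListComponent(ABC):
    """The contract any third-party problem-list module would have to satisfy."""

    @abstractmethod
    def add(self, problem: Problem) -> None: ...

    @abstractmethod
    def active_problems(self) -> List[Problem]: ...

    @abstractmethod
    def on_problem_added(self, problem: Problem) -> List[str]:
        """Hook for component-level behavior beyond storage, e.g. dosing suggestions."""

class SimpleProblemList(ProblemListComponent):
    """One vendor's drop-in implementation; the EHR shell sees only the interface above."""

    def __init__(self) -> None:
        self._problems: List[Problem] = []

    def add(self, problem: Problem) -> None:
        self._problems.append(problem)

    def active_problems(self) -> List[Problem]:
        return [p for p in self._problems if p.resolved is None]

    def on_problem_added(self, problem: Problem) -> List[str]:
        # Toy rule: chronic kidney disease triggers a renal-dosing review suggestion.
        if problem.code == "N18.9":
            return ["Review renally cleared medications for dose adjustment."]
        return []

pl: ProblemListComponent = SimpleProblemList()  # swappable: any conforming module fits here
ckd = Problem(code="N18.9", label="Chronic kidney disease, unspecified", onset=date(2015, 8, 1))
pl.add(ckd)
print(pl.on_problem_added(ckd))

Swapping in another vendor's problem list then just means supplying a different class that satisfies the same interface, which is essentially the "add a module the way you add a smartphone app" version of modularity Carter describes.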
__

CODA

Is our "interoperability" obsession a case of Tail-Wags-Dog?  See my post "Post-HIMSS15 Interoperababble Update: Margalit Gur-Arie hits one out of the park."
___

More to come...