
Thursday, August 6, 2015

Are EHRs obsolete?

Hardly, but,
"The majority of EHR systems in use today were designed more than five years ago, and the largest systems used by hospitals systems date back 15 years or earlier. They were designed to fit the constraints that existed at the time."
- Jerome Carter, MD, EHR Science, "Clinical Care Systems: The Next Generation"

I've been working with computer systems for nearly 30 years -- not counting undergrad school at UTK, where I learned to write SAS and SPSS code and associated mainframe JCL (Job Control Language). From January 1986 thru mid-1991, using 3GL and 4GL compilers and RDBMS platforms (Oracle, xBase), I developed laboratory bench "apps" for computing environmental radiation exposure/dose/contamination and concomitant SPC code (pdf), and business office reporting programs in Oak Ridge.

Ten years ago I left the bank where I'd been a credit risk analyst, modeler, and writer (pdf) for 5 years, and returned to HealthInsight to participate in the federal DOQ-IT program, which was funded by CMS to stimulate wider adoption of "EMRs" (Electronic Medical Records). "EMRs" gradually came to be alternatively/synonymously called "EHRs" (Electronic Health Records). Eventually, an inferential distinction came to pass. Again, Dr. Carter:
The terminology shift from computer-based patient record to electronic health record occurred in 2003 when the report, Key Capabilities of an Electronic Health Record, was released.   As part of that terminology change, EHR systems were defined to be different from EMR systems, principally based on expected interoperability features. Here is how the ONC described EHR systems in 2011.
Electronic health records (EHRs) do all those things—and more. EHRs focus on the total health of the patient—going beyond standard clinical data collected in the provider’s office and inclusive of a broader view on a patient’s care. EHRs are designed to reach out beyond the health organization that originally collects and compiles the information. They are built to share information with other health care providers, such as laboratories and specialists, so they contain information from all the clinicians involved in the patient’s care. The National Alliance for Health Information Technology stated that EHR data “can be created, managed, and consulted by authorized clinicians and staff across more than one healthcare organization.”
Like the computer-based patient record before it, the EHR as described by the ONC still does not exist – interoperability as envisioned by the ONC has not yet made its debut.
Yeah. We remain largely mired in "Interoperababble" and fragmented "shards of health care." (To be fair, the bulk of the chronic -- and perhaps worsening -- fragmentation goes to dysfunctional economic imperatives and policy inadequacy, not technology.)

 More Jerome Carter:
In the early 2000s, LAN-based client/server was big and broadband Internet access, a luxury. The cloud and mobile devices with 64-bit processors, touch-based interfaces, voice-recognition, and an array of sensors were unknown. Workflow technology suitable for use in clinical systems did not exist a decade and a half ago; now many good systems are available.

Unfortunately, much of the current thinking about clinical care systems fails to acknowledge that many past constraints have been removed. Workflow issues and workarounds are widely discussed, and AHRQ and NIST have provided detailed reports that critically assess workflow issues. Yet, for some reason, workflow technology is rarely mentioned. We have new tools, good research on clinical work needs, and a new computing environment. It is time to try a new approach; it is time to start from scratch...
In 2010 I returned yet again to HealthInsight (I would introduce myself to outsiders as "the Bad Penny of HealthInsight") to work in the Meaningful Use initiative via their Nevada/Utah REC -- the work that launched this blog on May 10th, 2010.

So, where are we today, and where are we headed? The Meaningful Use effort is essentially over (the looming, much-unloved "Stage 3" notwithstanding), the overwhelming bulk of the incentive money having been distributed. Meaningful Use has been quite good to my Clinic Monkey EHR, I have to say.

More Jerome:
Looking back over the last 50 years, the effects of computer generational changes quickly rippled throughout society.   Personal computers moved computing power from businesses to individuals, and the market for word processors and other productivity software exploded. LANs allowed small businesses to access software previously only affordable to larger enterprises.   The Internet has reshaped entire domains. Amazon is in; Borders is out. Netflix is in; Blockbuster is out. The last three cars bought by my family were located online and picked up at CarMax.

Workflow technology, powerful mobile devices, and cloud data storage represent the newest generation of information technology. Provisioning cloud servers is significantly easier than just two years ago.   Databases have changed as well – there are now ACID-compliant NoSQL systems.

Computing technology has moved forward much faster than I expected...
"ACID-compliant NoSQL"? Lordy.

ACID
ACID is a set of properties that apply specifically to database transactions, defined as follows:

  • Atomicity - Everything in a transaction must happen successfully or none of the changes are committed. This prevents a transaction that changes multiple pieces of data from failing halfway through and applying only some of the changes.
  • Consistency - The data will only be committed if it passes all the rules in place in the database (i.e., data types, triggers, constraints, etc.).
  • Isolation - Transactions won't affect other transactions by changing data that another operation is counting on; and other users won't see partial results of a transaction in progress (depending on isolation mode).
  • Durability - Once data is committed, it is durably stored and safe against errors, crashes or any other (software) malfunctions within the database.
SQL / Relational DB
ACID is commonly provided by most classic relational databases like MySQL, Microsoft SQL Server, Oracle and others. These databases are known for storing data in spreadsheet-like tables that have their columns and data types strictly defined. The tables can have relationships between each other and the data is queried with SQL (Structured Query Language), which is a standardized language for working with databases.

NoSQL
With the massive amounts of data being created by modern companies, alternative databases have been developed to deal with the scaling and performance issues of existing systems as well as be a better fit for the kind of data created. NoSQL databases are what these alternatives are called because many do not support SQL as a way to query the data...
From "What is the relation between SQL, NoSQL, the CAP theorem and ACID?"

Maybe I'm just an out-to-pasture Old Coot by now, 5 months away from being able to call myself a "70 year old washed-up ex rocker," but all I see here is the continuing necessity for code logic exactitude (irrespective of the development languages) and data matching, collation, merging, and synthesizing, all this geek-speak ACID-dropping aside.

More on ACID and NoSQL:
DataStax CEO: Let's clear the air about NoSQL and ACID
Many still believe NoSQL databases can't play at the same level as their relational forebears. DataStax CEO Billy Bosworth puts the notion to rest

Misconception No. 1: You can't build an online application without ACID compliance

This misconception is flat out wrong and largely stems from built-in biases that we RDBMS folks have developed over the past two decades. Fortunately, you can easily find hundreds of companies such as eBay, Instagram, and Netflix building mission-critical online applications without full ACID compliance.

Misconception No. 2: ACID is an all-or-nothing proposition

Many people forget that ACID is an acronym representing four distinct characteristics: atomicity, consistency, isolation, and durability. In today's world of online applications, developers and architects make trade-offs to serve the greatest need.

Modern NoSQL databases offer various pieces of ACID that serve the needs of a given application just fine. Postrelational technologies often sacrifice consistency for performance reasons, while following (at least partially) the "AID" aspects of ACID. That's what I mean when I say it's not an all-or-nothing proposition. Parts of ACID may still remain relevant for your application, so you optimize for those accordingly.

Misconception No. 3: Eventual consistency violates the "C" in "ACID"

A few years ago, I wrote a blog post to address this misconception more thoroughly, but the gist is that for many DBAs, the word "consistency" has two very different meanings. "Consistency" in ACID refers to the enforcement of constraints or rules for a given entry in a database. When we talk about eventual consistency in a database such as Cassandra, we mean something entirely different -- namely, the temporal accuracy of the data itself.

In a distributed system, the same piece of data is usually replicated to multiple machines. When you update that piece of information on one of the machines, it may take some time (usually milliseconds) to reach every machine that holds the replicated data. This creates the possibility that you might get information that hasn't yet updated on the replica. In the old relational world, this conundrum comes very close to what we call a "dirty read."

This approach can present its own set of challenges at times, but the point is that eventual consistency is a different issue than the consistency definition you find in ACID. In order for developers to appropriately manage this new dynamic, it's important that they not confuse the two definitions.
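His point about the two meanings of "consistency" is easy to demonstrate. Here's a toy Python sketch -- no real distributed database, just two in-memory dictionaries standing in for replicas, with a simulated replication lag -- showing how a read can return stale data until an update propagates:

    # Toy illustration of eventual consistency: two in-memory "replicas"
    # with simulated replication lag. Names and delay values are hypothetical.
    import time
    import threading

    replica_a = {}  # the node that accepts the write
    replica_b = {}  # a peer that receives the update slightly later

    def write(key, value, lag_seconds=0.05):
        """Write to replica A now; propagate to replica B after a delay."""
        replica_a[key] = value
        threading.Timer(lag_seconds, lambda: replica_b.update({key: value})).start()

    write("patient_101_allergy", "penicillin")

    print("read from A:", replica_a.get("patient_101_allergy"))  # 'penicillin'
    print("read from B:", replica_b.get("patient_101_allergy"))  # None -- a stale read
    time.sleep(0.1)                                              # wait out the lag
    print("read from B, later:", replica_b.get("patient_101_allergy"))  # 'penicillin'

Nothing here violates any rule or constraint on the data itself; replica B is simply a few milliseconds behind. That's the "eventual" part, and it's a different animal from the "C" in ACID.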

Misconception No. 4: Databases and applications have a 1:1 relationship, so it's either/or between relational and NoSQL technologies

Years ago when we wrote applications at much smaller scale, life was easy: We picked a database, wrote our application, and were done. In fact, a single database system such as an Oracle instance would often house multiple schemas that represented different applications.

Today's applications are much more sophisticated. The idea of running a single database technology for a datastore is passé. A few years ago, Martin Fowler highlighted a trend called "polyglot persistence" that is now the normative architecture for modern applications. Virtually every customer I talk to houses a services layer between multiple database technologies and the app they power. This trend has established itself firmly in the real world. Polyglot persistence allows us to use the right technology for the right workload inside an application, which can pay huge dividends and enable functionalities not possible in a 1:1 schema.

Misconception No. 5: NoSQL databases are for "Web scale" applications only; everything else uses ACID-compliant technology

Some people believe that only a niche subset of applications require scalable, distributed databases. If you stop and think about it, how many developers today build online applications that don't keep "Web scale" in mind? It's like saying, "Only a small number of cars require the speeds necessary for highway travel."

That may have been true when only a few highways existed, but now they are part of everyday travel. The same is true for Web-scale applications. Given the always-connected nature of endpoints -- whether they are users, devices, or sensors -- what other kind of online application would you write? When are performance and availability not a high priority?

Why it matters

Over a decade has passed since Eric Brewer presented his now famous Brewer's Conjecture, which laypeople ultimately learned as the CAP theorem. It got us all thinking that the ability to scale out -- represented by "Partition tolerance" (the "P" in "CAP") -- might be important enough to abandon something as sacrosanct as data consistency within an application. He was right. Today, two attributes reign supreme for online applications: availability and performance. Without both, it's nearly impossible to satisfy the insatiable flow of data and unyielding demands from users...
Yeah. Interesting.

Meanwhile, back down out of the stratosphere, what do individual clinicians continue to mundanely see on a daily basis?

[Screenshot: NextGen HPI data-entry template]

This is a NextGen HPI screen (History of Present Illness) for rheumatology. It is nothing more than an RDBMS data input/update/recall template, one comprising about 120 variables (depending on how you count them). It is one of the numerous data input screens clinicians and their support staffs must quickly traverse every day -- Admin/Insurance/Demographics, Active Problems, Active Meds (and Rx ordering), PMH, CC, HPI, ROS, Progress Note, etc. -- all of the myriad elements comprising the SOAPe. See my July 2012 post "Analytics - SAS, R, SQL, EHR database schema. An old school data miner's ramble involving the intersections of workflow, audit logging, and CER, etc."

As I've noted before, a typical full-featured EMR/EHR has to house about 4,000 variables, typically distributed among perhaps hundreds of RDBMS tables in the application's schema. Much of the workflow travail that so irritates clinicians has little to do with Health IT per se, but rather is principally a function of the prevailing "productivity treadmill" economic imperatives.
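To make the scale point concrete, here's a toy sketch -- hypothetical table and column names of my own invention, emphatically not NextGen's or any other vendor's actual schema -- of how even a sliver of those variables gets normalized across relational tables:

    # Toy sketch of a tiny slice of an EHR-style relational schema.
    # Table and column names are hypothetical, not any vendor's actual design.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE patient (
            patient_id INTEGER PRIMARY KEY,
            last_name  TEXT,
            dob        TEXT
        );
        CREATE TABLE encounter (
            encounter_id INTEGER PRIMARY KEY,
            patient_id   INTEGER REFERENCES patient(patient_id),
            enc_date     TEXT
        );
        CREATE TABLE hpi_rheumatology (
            encounter_id          INTEGER REFERENCES encounter(encounter_id),
            joint_pain_site       TEXT,
            pain_severity_0_10    INTEGER,
            morning_stiffness_min INTEGER
            -- ...plus a hundred-odd more columns in a real HPI template
        );
        INSERT INTO patient   VALUES (101, 'Doe', '1950-01-01');
        INSERT INTO encounter VALUES (5001, 101, '2015-08-06');
        INSERT INTO hpi_rheumatology VALUES (5001, 'right knee', 6, 45);
    """)

    # Rendering the HPI section of a note is a join back across the tables.
    print(conn.execute("""
        SELECT p.last_name, e.enc_date, h.joint_pain_site, h.pain_severity_0_10
        FROM hpi_rheumatology h
        JOIN encounter e ON e.encounter_id = h.encounter_id
        JOIN patient   p ON p.patient_id   = e.patient_id
    """).fetchone())

Multiply that by a few thousand variables and a few hundred tables and you get a sense of what the clinician's click-path is sitting on top of.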

One reason we remain stymied with respect to (the misnomer) "interoperability" is that we have failed to require a "Data Dictionary Standard."
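By way of illustration only, here's a minimal, purely hypothetical sketch of what a single standardized data-dictionary entry might look like -- the field names and structure are mine, not any actual or proposed standard:

    # Purely hypothetical sketch of a standardized data-dictionary entry.
    # Illustrative only -- not an actual or proposed standard.
    blood_pressure_systolic = {
        "canonical_name": "blood_pressure_systolic",
        "display_label":  "Systolic blood pressure",
        "data_type":      "integer",
        "units":          "mmHg",
        "valid_range":    (40, 300),
        "code_mappings":  {"LOINC": "8480-6"},  # standard vocabulary cross-reference
        "definition":     "Peak arterial pressure during ventricular contraction",
    }

    def conforms(entry, value):
        """Check a submitted value against the dictionary entry's constraints."""
        low, high = entry["valid_range"]
        return isinstance(value, int) and low <= value <= high

    print(conforms(blood_pressure_systolic, 128))  # True
    print(conforms(blood_pressure_systolic, 930))  # False

The point of a standard is that every certified system would map its internal fields to the same canonical entries, rather than leaving the matching, collation, and merging to downstream guesswork.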

No biggie? Just drop some ACID, take 2 APIs, and call me in the morning? Are the MU-compliant incumbent EHRs effectively obsolete? Or verging on it?

The pace of Health IT R&D ferment is difficult to keep a good grasp on. Much of it remains "silo'ed," "proprietary," and inscrutable to the harried end-users. More bamboozlement inevitably awaits.

"Will Silicon Valley's digerati "solve" healthcare?"

I will continue to count on people like Dr. Carter at EHR Science for Health IT clarity.

QUOTE OF THE DAY
Health care nowadays is like the ticker tape of a hyperactive stock market gone mad. Everything is huge, disruptive and transformative for a few days until the next seismic shock rolls in. Since nothing means anything in particular, everything means exactly what each expert wishes it would mean.

- Margalit Gur-Arie
ERRATUM

Apropos of HIPAA, this is worrisome.

"...we will access, disclose and preserve personal data, including your content (such as the content of your emails, other private communications or files in private folders), when we have a good faith belief that doing so is necessary."
Seriously?
__

Off-topic. As I prepare for tonight's GOP "debate" and drinking game, Your Daily Donald

CODA

Talk about "futurists" -


My latest New Yorker just arrived.


___

More to come...
