Wednesday, February 17, 2016

Syntactic and Semantic Interoperababble 2016


I recently read a THCB interview post by Leonard Kish on that hardy perennial misnomer "interoperability."
Interoperability Form and Function: Interview with Doug Fridsma

Leonard Kish talks to Douglas Fridsma, President and CEO at American Medical Informatics Association, about his work in the Office of the National Coordinator for Health Information Technology, or ONC, and the barriers to implementing MIPS in the most useful and transparent way. In order to communicate the data, of course, we’ll need informatics; but how will that work? And which comes first, policy or technology?...

DF: I’ve tried to maintain for the last six years a consistent definition of what interoperability is. The first thing I can describe is what interoperability is not. It is not a state of utopia in which there is this information liquidity. You will hear this all the time: we want ubiquitous information, and free-flow, data liquidity and all those things.

But interoperability is not that state of the world. Interoperability is defined operationally. I use one of the definitions by the IEEE folks. The best version is that interoperability has two parts: the first part is the ability of two or more systems to exchange information, and the second one – the one we usually overlook – is the ability of the systems to use the information that has been exchanged. It’s about exchange and use.
I posted a comment.
I see “interoperababble” is alive and well, inclusive of leaving out a key phrase in the IEEE definition of interoperability: “…without special effort on the part of the customer.” No amount of calling n-dimensionally interfaced “data exchange” “interoperability” will make it so...
Let me return briefly to my original 2014 rant on "interoperability."


“We should not prescribe specific functionality for the EHR other than interoperability and security.”
 - John Halamka
__

Updated, annotated: on the (misnomer) “interoperability” side, from my recurring blog rant.
One.Single.Core.Comprehensive.Data.Dictionary.Standard
One. That’s what the word “Standard” means -- er, should mean. To the extent that you have a plethora of contending “standards” around a single topic, you effectively have none. You have simply a no-value-add “standards promulgation” blindered busywork industry frenetically shoveling sand in the Health IT gears under the illusory guise of doing something goalworthy.

One. Then stand back and watch the private HIT market work its creative, innovative, utilitarian magic in terms of features, functionality, and usability. Let a Thousand RDBMS Schema and Workflow Logic Paths Bloom. Let a Thousand Certified Health IT Systems compete to survive on customer value (including, most importantly, seamless patient data interchange for that most important customer). You need not specify by federal regulation (other than regs pertaining to ePHI security and privacy) any additional substantive “regulation” of the “means” for achieving the ends that we all agree are necessary and desirable. There are, after all, only three fundamental data types at issue: text (structured, e.g., ICD-9 codes and those within other normative vocabulary code sets, and unstructured, e.g., open-ended free-form SOAP note narratives), numbers (integer and floating-point decimal), and images. All things above that are mere “representations” of the basic data (e.g., text lengths, datetime formats, Boolean/logical, .png, .bmp, .tiff, .jpeg, etc.).
Actually, all digital data are simply collections of “representations” of values coded in the binary ASCII (or the legacy EBCDIC) collating sequence (“under the hood” of all this stuff at the bit/byte level). Yeah, I’m givin’ away my age.
You can’t tell me that a world that can live with, e.g., 10,000 ICD-9 codes (going up soon by a factor of 5 or so with the 2015 migration to ICD-10) would melt into a distraught puddle on the floor at the prospect of a requisite standard data dictionary comprised of perhaps a similar number of metadata-standardized, “strongly typed” data elements spanning the gamut of administrative and clinical data definitions cutting across ambulatory and inpatient settings and the numerous medical specialties. We’re probably already a good bit of the way there given the certain overlap across systems, just not in any organized fashion.
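For illustration only, a minimal sketch (in Python) of what one metadata-standardized, "strongly typed" entry in such a dictionary might look like. Every field name and identifier here is hypothetical -- my invention, not anyone's proposal -- though LOINC 8480-6 is the real code for systolic blood pressure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataElement:
    """One hypothetical entry in a standard clinical data dictionary."""
    element_id: str             # canonical, permanent identifier (invented scheme)
    name: str                   # human-readable name
    data_type: str              # "text", "number", or "image" -- the three fundamentals
    value_set: Optional[str]    # normative code-set binding for structured text
    units: Optional[str]        # UCUM units for numeric elements
    definition: str             # unambiguous prose definition

# A made-up dictionary entry for systolic blood pressure.
SYSTOLIC_BP = DataElement(
    element_id="CDD-000123",          # hypothetical identifier
    name="Systolic blood pressure",
    data_type="number",
    value_set="LOINC 8480-6",         # real LOINC code
    units="mm[Hg]",                   # UCUM notation
    definition="Peak arterial pressure during ventricular contraction.",
)
```

Ten thousand or so of those, vetted and versioned in one place, and every vendor maps to them once.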

Think about it.

Why don’t we do this? Well, no vendors want to have to “re-map” their myriad proprietary RDBMS schema to link back to a single data hub dictionary standard. And, apparently the IT industry doesn’t come equipped with any lessons-learned rear view mirrors.

That’s pretty understandable, I have to admit. In the parlance, it goes to opaque data silos, profitable “vendor lock,” etc. But, such is fundamentally anathema to efficient and accurate reciprocal data interchange (the “interoperability” misnomer) that patients ultimately need and deserve.

Yet, the alternatives to a data dictionary standard are our old-news, status quo, frustratingly entrenched, Clunkiness-on-Steroids, Nibble-Endlessly-Around-the-Edges Outside-In workarounds — albeit quixotic efforts that keep armies of Health IT geeks employed starting and putting out the fires they themselves started.

Resources better devoted to actual clinical care.

Visualize going to Lowe’s or Home Depot and having to choose among 800+ ONC Stage 2 CHPL Certified sizes and shapes of 120VAC 15 amp grounded 3-prong wall outlets.

Imagine ASCII v3.14.2.a.7. Which, uhh…, no longer supports ASCII v2.05.1 or earlier…

Ya with me here, Vern?

NIST/ANSI/ISO Health IT ICDDS – Interoperability Core Data Dictionary Standard.
In response to the repeated allusions to data as "the lifeblood of medicine," I've subsequently begun to characterize "standard data" (via a metadata data dictionary standard) as the "Type-O Blood" of health care.


Leonard alludes to the "vendor lock" issue:
LK: It seems like the lack of interoperability is in some ways used as a strategic advantage.  So how do we get from this insular or institution-based perspective of interoperability to a global perspective? Does it have to be a legislative solution? And how do we communicate and bridge to the consumer (which may actually be key to decentralized thinking)?

DF: There are three fundamental things we need to turn the ship in a better direction. The first step, I think, is that we need to focus on those fundamental building blocks: how we represent meaning, how we structure information, how we transport it, and how we secure it. Those four things are really just an API.

But unless we think about what those fundamental building blocks are, even if we develop APIs, we’re still going to be in the situation with the switchboard and the cords. The building blocks are the first goal, and one of the pieces that’s missing is “how do we represent granular data” because most of our exchange right now is document centric.

We need to move from document-centric to data-centric exchange. We need to have a way to represent data at a granular level because that’s how we’re going to be able to calculate quality measures, that’s how we’re going to do decision support, and a lot of the other sophisticated computable things that we need to do...
"We need to move from document-centric to data-centric exchange. We need to have a way to represent data at a granular level..."

Well, yes. Maybe we're making progress here ("document-centric" obviously refers to things like "CCD" and "CDA"). Maybe. But, it's critical to keep in mind that -- particularly in patient care -- context matters. More on that point momentarily. First, you might want to Google "semantic interoperability."

"Syntactic and Semantic Interoperability"
Semantics concerns the study of meanings. Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems.

Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to an ontology, which provides the foundation and capability of machine interpretation, inferencing, and logic.

Syntactic interoperability is a prerequisite for semantic interoperability. Syntactic interoperability refers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (which predates the internet and web technology), and uses the unix pipe (|) as a data delimiter. The current internet standard for document markup is XML, which uses "< >" as a data delimiter. The data delimiters convey no meaning to the data other than to structure the data. Without a data dictionary to translate the contents of the delimiters, the data remains meaningless. While there are many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none have been practical to implement. This has only perpetuated the ongoing "babelization" of data and the inability to exchange data with meaning...


Semantic as a function of syntactic interoperability
Syntactic interoperability, provided by for instance XML or the SQL standards, is a pre-requisite to semantic. It involves a common data format and common protocol to structure any data so that the manner of processing the information will be interpretable from the structure. It also allows detection of syntactic errors, thus allowing receiving systems to request resending of any message that appears to be garbled or incomplete. No semantic communication is possible if the syntax is garbled or unable to represent the data. However, information represented in one syntax may in some cases be accurately translated into a different syntax. Where accurate translation of syntaxes is possible, systems using different syntaxes may also interoperate accurately. In some cases the ability to accurately translate information among systems using different syntaxes may be limited to one direction, when the formalisms used have different levels of expressivity (ability to express information). The idea of semantic proximity to specify degree of semantic similarity to achieve interoperability of objects (in the context of database systems) was first discussed in a paper titled "So Far (Schematically) yet So Near (Semantically)" published in 1993...
"the ongoing "babelization" of data and inability to exchange of data with meaning"

Love it. Nice to see others make the analogy.
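To make the quoted delimiter point concrete, here's a toy illustration in Python: the same systolic blood pressure reading as a simplified HL7 v2-style pipe-delimited segment and as XML. Neither snippet is a conformant message, and the moral is identical in both syntaxes: the parser recovers structure, but nothing in the wire format says what the fields mean. That takes an external, shared dictionary.

```python
import xml.etree.ElementTree as ET

# Simplified, non-conformant HL7 v2-style observation segment.
hl7_segment = "OBX|1|NM|8480-6^Systolic BP^LN||120|mm[Hg]"
fields = hl7_segment.split("|")   # the pipe conveys structure, not meaning
print(fields[5], fields[6])       # 120 mm[Hg] -- but *what is it*? The syntax can't say.

# The same datum wrapped in XML. Different delimiters, same semantic silence.
obs = ET.fromstring(
    '<observation><code system="LN">8480-6</code>'
    '<value units="mm[Hg]">120</value></observation>'
)
print(obs.find("code").text, obs.find("value").text)

# Only an agreed-upon dictionary turns the code into meaning.
# (Toy lookup; the shared binding is the hard, unfinished part.)
dictionary = {"8480-6": "Systolic blood pressure"}
print(dictionary[obs.find("code").text])
```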

Consider any one datum contained in an EHR, whether residing in Demographics, Family History, Social History, Vitals, Active Problems, Active Meds, Past Medical History, Chief Complaint, History of Present Illness, Review of Systems, Labs (including Imaging), Specialist notes, etc.

An accurate and effective clinical "SOAP" process (Subjective, Objective, Assessment, and Plan) typically derives from a multidimensional contextual synthesis of dozens to hundreds of data points (many of them trended over time via multiple patient encounters -- the "flow sheet" thing). Any individual datum in isolation provides little to no actionable clinical information.

"Type-O" data cannot but grease those skids.

More on "SOAP." Doug Fridsma:
LK: Teaching the actual care delivery, of which health IT is a part, hasn’t until recently been a part of the medical curriculum, and still only at a few medical schools so far. Now that we’re focusing economic attention on value and outcomes, you see a little more of it.

DF: I think we have to change the way we document care delivery. We use something called the SOAP Note (Subjective, Objective, Assessment, Plan) as an outline of how medical information is captured by physicians and others. The problem is that none of those talk about outcomes...
My THCB comment:
...with respect to “SOAP.” it would properly be “SOAPe” (kudos to my former Sup Keith Parker at my QIO/REC/HIE for the observation), wherein the “e” refers to “evaluation” — i.e., “outcome” eval of the assessment and plan. In PDSA terms, the “e” would be the “S,” the “Study” component of science-based QI.
Yeah. Back to the broader point, I am also reminded of one of my earlier posts, "Personalized Medicine" - will Health IT be up to the task?

Scrolling down,

"The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR"

Yeah, but, beyond workforce capacity and dx acumen, what about the chronic, persistent data silo/opacity issue? A recent THCB post asks
What’s the Definition of Interoperability?
Seriously? We already have it. The IEEE definition. As I commented at THCB:
We already HAVE a concise definition of “interoperability," via the IEEE, “interoperability: Ability of a system or a product to work with other systems or products without special effort on the part of the customer. Interoperability is made possible by the implementation of standards.”

This other stuff is merely about “data exchange.” What is happening is that we’re “defining interoperability down,” removing the “without special effort” part. Were there a Data Dictionary Standard, then we could talk about interoperability. Data are called “the lifeblood of health care.” Fine. Think Type-O, the universal blood type, by way of precise analogy.

Maybe the API will comprise data exchange salvation. Maybe.
Responding to the interviewee in the THCB post, I quoted him and offered a response.
“One of the key ways we build good support systems is by having good data. It’s a “garbage in – garbage out” problem. One can’t make good decisions without good data. One of the problems is that a lot of the time data exists but it isn’t in the computer or isn’t in MY computer. Maybe someone has had a test somewhere else and I might not have any info, or if I’m lucky I might have a scanned PDF of the results. But it’s rarer still that I’ll have good, structured data that I’ve been able to pull in from outside sources without a lot of transcription or effort. So I became very interested in this problem of interoperability and have been doing a range of different kinds of work. Some of it actually focuses on how you do decision support, in the cloud or across systems. So the question becomes: how can you build a decision support system that spans several electronic health records and integrated data from multiple sources to make more accurate suggestions for patient care?”
__

By having a comprehensive standard data dictionary. Absent that, with “n” proprietary systems you end up needing on the order of n(n-1) translative “interfaces” (if you’re after computable “structured data” rather than document-centric reports).
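The arithmetic behind that claim, in a trivial sketch: point-to-point translation among n systems needs an interface in each direction, n(n-1) in all, whereas mapping each system once to a shared dictionary needs only n mappings.

```python
def point_to_point(n: int) -> int:
    """Unidirectional translative interfaces, no shared standard."""
    return n * (n - 1)

def via_standard_dictionary(n: int) -> int:
    """Each vendor maps its proprietary schema to the shared dictionary once."""
    return n

for n in (5, 50, 500):
    print(n, point_to_point(n), via_standard_dictionary(n))
# 5 -> 20 vs. 5; 50 -> 2450 vs. 50; 500 -> 249500 vs. 500
```

Quadratic versus linear. That's the whole argument in two functions.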
It's an excellent post, albeit replete with the usual obtuse "interoperababble" we have all come to know and love. On the upshot of Meaningful Use:
[Adam Wright] We were going to create these strong incentives for people to adopt EHRs, knowing that EHRs were not yet perfectly interoperable or even always perfectly usable and didn’t have all the functionality that we wanted. And now we’re trying to go back and patch that. The thing is we now have had a lot of opportunity to learn how, with these EHRs that were developed with large hospitals or academic systems in mind, how do they really work in a critical access hospital or in a single doctor practice. So we’ve learned, and I think the key is going to be to translate what we’ve learned into concrete improvements. But I think that’s been hard. I talk to some of my friends who are vendors and they’ve said “A lot of people are giving us feedback and we’re working on it as fast as we can but at the same time we’re getting a lot of pressure from Meaningful Use. So we can’t even use our best developers to build the stuff our customers are asking for.” So I think some way of fixing how we do innovation in health IT is going to be important and I don’t know exactly how we’ll do it, given how many competing priorities there are.
Indeed. Nicely stated. More Adam Wright:
I absolutely think that seeing a complete picture of a patient’s information is key for safety and I do think that lack of interoperability is 100% a safety issue. It’s something that we need to work on. But we need to get beyond the “unconscious patient in Wyoming.” I think that there’s so many more complex, subtle and insidious issues. The thing is that it’s often hard to measure. It’s hard to say “this is the one piece of information that changed my mind”. But I do think complete information like that is going to be very important. I’ll also offer you a flip side that I don’t really have an answer for yet: How much information am I responsible for viewing about a patient? Somehow I now have every piece of information about a patient from the moment of birth to the present. That might be more information than I can review before my brief visit with a patient. Part of the solution to that is going to be in technology or tools that will help me summarize the information, spot key information, spot trends etc. I think it’s going to be really exciting when we have that problem and have to build those tools. I look forward to the day when we have so much information that we really need sophisticated tools to organize and sort through it. Right now we’re a long way from that. But I think you’re 100% right that it’s a safety issue.
Interesting. Some of that maps right back to the "omics" concerns set forth above.

At the conclusion of the THCB post:
LK: So the last question is something Dan Munro brought up. Dan did a three-part series on interoperability on the site I write for called HL7 Standards.com. Essentially if you don’t have a patient identifier, then interoperability is a waste of time. He alluded to the idea of patient identifiers being something like a social security number in that they’re kind of old-school. There’s better ways to do it with cryptography and being able to ID people biometrically. So what can you tell us about patient identifiers?

AW: I think it’s awfully important. It’s certainly the case that when I have a database and you have a database and we want to link them together, it matters that we have a key so that we can tell who is the same person. The approach right now with using a social security number has problems. Not everyone always has a SSN, not everyone remembers them, there are errors, they lack any way to validate them, etc. Using something like your Blue Cross member number is no good either because you can get another job. So I think the solutions we have now are rotten. We need some way to identify patients across systems. The most commonly proposed, and probably simplest or most parsimonious solution, is a national patient identifier. A numerator who sits in the government and assigns everyone a number shortly after birth and that’s their number. That is just not going to fly palatably. I just can’t see us creating the political will and I also am not sure that it is that desirable. Travelers could come to the country and not have numbers or Americans can go elsewhere and how will this all work? It seems problematic. But I think there are smart ways we could use technology to approximate that. For a long time we had probabilistic record linking approaches where we look at your name, date of birth, address, age, sex, etc. and try to figure out what is the probability that these two people are the same, and we’ve had some pretty good results. The Indianapolis Network for Patient Care has had some pretty good results there, using probabilistic linking rather than an identifier. The reality is that if we can put the patient at the center of this and create a credential and authentication that they control, I think that would be a lot more palatable than if we put some sort of central government number assigned to people...
Well, yeah, a "national patient identifier" seems to be a non-starter, and multi-field proxy keys seem to be the only practical work-arounds. Adam sums up:
I think there’s probably a lot we can learn from internet authentication about how to create reliable patient identifiers: better identifiers, with more security around them. The patient could see more of their information and how it was shared. I just think that there are better solutions than a single government-based national patient identifier. And even if it’s the best solution, I just don’t think it’s politically possible. So I think we ought to be focusing our efforts on something else.
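For the curious, a bare-bones sketch in Python of the probabilistic record linking Wright describes. The fields, weights, and threshold here are invented for illustration; real implementations (Fellegi-Sunter and its descendants) estimate match weights statistically and cope with typos, nicknames, and transposed dates.

```python
# Toy probabilistic record linkage: score agreement on identifying fields,
# then apply a threshold. All weights are illustrative guesses.
FIELD_WEIGHTS = {"last_name": 4.0, "first_name": 2.5, "dob": 5.0,
                 "sex": 1.0, "zip": 1.5}
MATCH_THRESHOLD = 9.0  # hypothetical cutoff

def link_score(rec_a: dict, rec_b: dict) -> float:
    """Sum the weights of fields on which the two records agree."""
    return sum(w for f, w in FIELD_WEIGHTS.items()
               if rec_a.get(f) and rec_a.get(f) == rec_b.get(f))

a = {"last_name": "Smith", "first_name": "Pat", "dob": "1956-07-04",
     "sex": "F", "zip": "89117"}
b = {"last_name": "Smith", "first_name": "Patricia", "dob": "1956-07-04",
     "sex": "F", "zip": "89117"}

score = link_score(a, b)                 # 4.0 + 5.0 + 1.0 + 1.5 = 11.5
print(score >= MATCH_THRESHOLD)          # True: probably the same patient
```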
For more on this line of thought, and the "EXTREME" interop model, see "Defining Our Terms: Does Anyone Know What an "Open EHR" Really Is?"
___

See also my 2015 post "Interoperability? We don't need no steenkin' definition." And, my 2014 "Interoperability solution? HL7® FHIR® -- We ® Family."

I will be keenly interested in and attentive to interop/data exchange topics at the looming HIMSS16 conference in Vegas. Maybe the notion of "standard data" is an old-hat concept, maybe APIs will suffice.

Maybe.

I'm not the only one with concerns. My pal Jerome Carter, MD at EHR Science:
Is it time for a standard patient data set?
Determining the computational aspects of patient data requires research. At present, this is being done mostly at individual companies. As a result, every EHR system has its own database design with proprietary names, formats, properties, and groupings for data elements.   The same freedom exists for representing higher concepts such as the problem list, medication list, or family history.  Currently, it is difficult for clinical software designers to build on the work of others because so little is discussed publicly concerning clinical system design and construction.  What if clinical software designers setting out to build a clinical system did not have to reinvent the patient-data-model wheel?


Wouldn’t life be easier for everyone if a standard data set specification existed that provided a data model that included the metadata, context, and provenance information suggested by JASON?   The patient data set specification could be managed at a central location (e.g., the National Library of Medicine), and protocols would be in place for adding, naming, and revising data elements and their extended properties.  The latest version of the data set definition and a working implementation example would be available for download.

Such an approach might actually stimulate innovation because modeling for patient data is a huge headache for anyone designing clinical software.  I know this from personal experience… When designing the EHR at UAB, I spent over eight months tweaking the data model. I would have welcomed a “drop in” patient data specification—especially one that had been vetted by a community of researchers and software companies...
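A hypothetical taste of what such a "drop in" specification might carry per data element: the value traveling with its metadata, context, and provenance, per the JASON suggestion Carter cites. Every field name below is my invention, not part of any published spec.

```python
# Hypothetical self-describing data element instance: the value rides along
# with its dictionary binding, clinical context, and provenance.
observation = {
    "element_id": "CDD-000123",               # invented standard-dictionary key
    "value": 120,
    "units": "mm[Hg]",                        # UCUM notation
    "context": {
        "encounter_type": "office visit",
        "patient_position": "seated",         # context that changes interpretation
    },
    "provenance": {
        "recorded_by": "RN, Example Clinic",  # hypothetical source
        "device": "manual cuff",
        "recorded_at": "2016-02-17T09:30:00-08:00",
        "source_system": "ExampleEHR v4",
    },
}
```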
Finally, it's worth noting again the contrarian views on "structured data" and "interoperability" ("tail wags dog?") voiced by the ever-pithy Margalit Gur-Arie. See "On Health Care Technology: EHR Call-Outs" and "Are structured data the enemy of health care quality?"

ERRATUM

Big data: Death by mozzarella cheese

...Some have suggested that big data will rapidly improve healthcare delivery. […] The strongest proponents of such big data applications believe that with enough information, causal relationships reveal themselves without an RCT.

Are they right? For clinical applications, this is a vital question. For instance, for every 5 million packages of x-ray contrast media distributed to healthcare facilities, about 6 individuals die from adverse effects. With big data, we learn that such deaths are highly correlated with electrical engineering doctorates awarded, precipitation in Nebraska, and per capita mozzarella cheese consumption (correlations 0.75, 0.85, and 0.74, respectively)...
LOL.
__

UPDATE: DISTURBING HIT NEWS
A Hospital Paralyzed by Hackers
A cyberattack in Los Angeles has left doctors locked out of patient records for more than a week. Unless the medical facility pays a ransom, it’s unclear that they'll get that information back.


A hospital in Los Angeles has been operating without access to email or electronic health records for more than a week, after hackers took over its computer systems and demanded millions of dollars in ransom to return it.

The hackers that broke into the Hollywood Presbyterian Medical Center’s servers are asking for $3.6 million in Bitcoin, a local Fox News affiliate reported. Hospital staff are working with investigators from the Los Angeles Police Department and the FBI to find the intruders’ identities.

Meanwhile, without access to the hospital’s computer systems, doctors and nurses are communicating by fax or in person, according to an NBC affiliate. Medical records that show patients’ treatment history are inaccessible, and the results of X-rays, CT scans, and other medical tests can’t easily be shared. New records and patient-registration information are being recorded on paper, and some patients have been transferred to other hospitals...
Yikes.
___________

More to come...
