JASON Task Force pushes for narrowed MU Stage 3 focusing on interoperability
October 2, 2014 | By Susan D. Hall
Meaningful Use Stage 3 requirements should be narrowed to more closely focus on interoperability, the Office of the National Coordinator for Health IT's JASON Task Force recommended in a meeting Wednesday.
At its September meeting, the task force pointed out that the JASON report found that Meaningful Use's Stage 1 and 2 had not achieved "meaningful interoperability" and stressed a need to create a unifying software architecture using application programming interfaces...
"Learning the lessons from Meaningful Use Stage 2, the more we expand the complexity of Meaningful Use requirements, both on providers and vendors, the less capacity they have to do novel things. This is proposing that there be sort of a bargain here, which is to say if we really care about this, we should narrow the scope of Meaningful Use Stage 3 and associated certifications to focus on interoperability … and in return set a higher bar for interoperability, specifically related to the development of public APIs," explained co-chair Micky Tripathi.
The draft report also recommends that ONC develop a public-private vision and roadmap for a nationwide coordinated architecture for health IT. It calls for a coordinated architecture tapping into the dynamism of the market to foster innovation rather than a top-down directive. That architecture should be based on the use of a public API that enables data- and document-level access to EHR-based information.
Its recommendations include prohibiting vendors and providers from blocking access to the public API for competitive or proprietary reasons...
What is an "API"?

In computer programming, an application programming interface (API) specifies a software component in terms of its operations, their inputs and outputs, and underlying types. Its main purpose is to define a set of functionalities that are independent of their respective implementations, allowing both definition and implementation to vary without compromising each other.
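That definition/implementation separation is the whole point. A minimal sketch in Python (all names here are hypothetical, invented for illustration): the interface fixes the operations and their inputs and outputs, while each vendor's implementation can vary freely behind it.

```python
from abc import ABC, abstractmethod

class PatientRecordAPI(ABC):
    """The API: operations, inputs, and outputs -- no implementation."""
    @abstractmethod
    def get_allergies(self, patient_id: str) -> list[str]: ...

class VendorAImpl(PatientRecordAPI):
    def get_allergies(self, patient_id: str) -> list[str]:
        # Vendor A might store allergies in relational tables.
        return ["penicillin"]

class VendorBImpl(PatientRecordAPI):
    def get_allergies(self, patient_id: str) -> list[str]:
        # Vendor B might pull them from documents; the caller can't tell.
        return ["penicillin"]

def show_allergies(ehr: PatientRecordAPI, patient_id: str) -> list[str]:
    # Written against the interface only; either implementation may change
    # internally without breaking this caller.
    return ehr.get_allergies(patient_id)
```

A caller coded against `PatientRecordAPI` works identically with either vendor object, which is the "vary without compromising each other" property the definition describes.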
Below, JASON REPORT Appendix E: Possible Implementation Pathway for Public API. The following diagram maps current standards and approaches to JASON-specified gaps.
Addressing problems through better EHR designs cannot be done piecemeal. It requires a holistic view of clinical work that can be modeled and then used as a basis for EHR design.
- Jerome Carter, MD

To the extent that a "holistic view of clinical work" necessarily has to include structured data and narrative data transparency and seamless data exchange (the "interoperability" misnomer), a comprehensive (non-piecemeal) data dictionary standard would comprise the simplest and sturdiest interop foundation, IMO (and, no, it would not be a panacea).
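What might one entry in such a standard look like? A minimal sketch (the element name, fields, and ranges below are hypothetical, invented purely for illustration): every sender and receiver validates against the same shared definition instead of each vendor maintaining its own.

```python
# Hypothetical entry in a shared data-dictionary standard. Any system that
# sends or receives this element agrees on one label, type, unit, and range.
DATA_DICTIONARY = {
    "systolic_bp": {
        "label": "Systolic blood pressure",
        "type": "integer",
        "units": "mmHg",
        "valid_range": (40, 300),
    },
}

def validate(element: str, value: int) -> bool:
    """Sender and receiver run the identical check against one definition."""
    spec = DATA_DICTIONARY[element]
    lo, hi = spec["valid_range"]
    return isinstance(value, int) and lo <= value <= hi
```

The point is not this particular schema but that the definition lives in one standard, not in n vendors' private data models.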
Now, for the sake of simplicity, assume n(n-1) "API sets" (each defining the complete data interchange specs for all data variables comprising the elements of the respective point-to-point EHRs). As of today (Oct. 2nd), ONC-CHPL lists 456 "Complete ambulatory and inpatient 2014 Certified systems" alone (in contrast to 1,848 total systems, modular and complete).
Let's make it simpler. List, say, 10 popular EHRs at random:
Never mind "versions / releases." Just 10 EHR products. Each of them would have to interface via unidirectional APIs with the 9 others: n(n-1) = 10(9) = 90.
- Amazing Charts
- Practice Fusion
You might well counter "screw point-to-point; do a hub and spoke translation architecture." OK, I'd buy that, nominally. Nonetheless I fail to see the material difference.
The point-to-point data element exchange translation would still have to happen somewhere.
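The arithmetic above can be sketched directly. A hub does cut the interface count from n(n-1) to 2n, but each of the n spokes still needs a complete data-element mapping to the hub's model, so the element-level translation work relocates rather than disappears.

```python
def point_to_point(n: int) -> int:
    """Unidirectional interfaces when every system talks to every other: n(n-1)."""
    return n * (n - 1)

def hub_and_spoke(n: int) -> int:
    """Interfaces when every system talks only to a central hub: 2n (one each way)."""
    return 2 * n

# The 10-EHR list above: 10 * 9 = 90 point-to-point interfaces, vs. 20 via a hub.
# All 456 complete Certified systems: 456 * 455 = 207,480, vs. 912 via a hub.
# Either way, every system still needs a full data-element map to *something*.
```

Plug in n = 456 and the point-to-point number (207,480 unidirectional interface specs) makes the case for why "just write APIs" is not a plan by itself.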
FROM INTEROPERABABBLE TO HADOOP-IGOOP
Does Hadoop Mean the End of the Data Model?

Lordy. Read all of it. Fetch some Advil and Dramamine first.
by Richard Gerbrandt and Steve Dine, OCT 2, 2014
The development of Hadoop and the Hadoop Distributed File System has made it possible to load and process large files of data in a highly scalable, fault tolerant environment. The data loaded into the HDFS can be queried using a batch process provided by MapReduce and other cluster computing frameworks, which will parallelize jobs for developers by distributing processing to the data located on a pool of servers that can be easily scaled.
The Hadoop environment makes it easy to load data into the HDFS without needing to define the structure of the data beforehand. The usage of the Hadoop environment naturally raises the question:
Does Hadoop mean that data models are no longer required?

The answer to this question lies in the purpose of a data model and the needs of this type of environment.
The Hadoop Method
Assuming data is processed using MapReduce, the definition, or layout, of the data structure that was loaded into HDFS is defined within the MapReduce program. This is referred to as "late binding" of the data layout to the data content. This approach frees the developer from the tyranny of the early-binding method, which requires that a data layout be defined first, often in the form of a data model. This is the approach used by relational databases, which also support constraints for enforcing business rules and data content validation.
The natural result of separating the data content from the data structure is that the MapReduce program becomes the place where the two are linked. Depending on the data processing needs, this may or may not be a complete data structure definition. In addition, each developer will define this mapping in slightly differing ways, which results in partial views that make a unified definition hard to assemble.
The late-binding of data content to the data structure essentially places the developer as the middleman between the data and the data consumer since most data consumers are not MapReduce trained. Hadoop requires that the developer know how the data is laid out in the file, its format, whether it is compressed or not, and the name of the file(s), every time a new MapReduce program is developed. This late-binding approach requires the same work be repeated over and over again...
...The use of a Data Structure Model provides a means to capture data source definitions, capture the appropriate metadata and define any relationships that exist between the data sources. The Data Structure Model can provide data source definitions to both the big data and enterprise data architecture worlds.

Do I really have to spell out the potential Health IT problems here? Yeah, let's "repeat the same work over and over again," via legions of "middlemen." Kinda like, well, point-to-point or hub-and-spoke APIs? At some point, you have to have data map / metadata consistency, early- vs. late-binding theory notwithstanding (scroll down in the link). What might suffice tolerably for aggregated population health Big Data studies is a far cry from what Dr. Carter refers to above as the need for a patient-encounter-level "holistic view of clinical work" aided by seamless, accurate data exchange ("interoperability").
The use of Hadoop has not brought an end to the need for data models; rather, it requires them in order to provide a connection to the enterprise data architecture environment.
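The "late binding" the article describes can be sketched in a few lines (the file layout and job names below are hypothetical, standing in for MapReduce jobs): nothing outside the jobs declares the layout, so every new job must rediscover and re-declare it, which is exactly the repeated middleman work at issue.

```python
import csv
import io

# A raw extract loaded "schema-less": last name, DOB, allergy -- but nothing
# in the data itself says so. The layout lives only in developers' heads.
RAW = "doe,1962-03-01,penicillin\nroe,1975-11-20,latex\n"

def job_count_rows(raw: str) -> int:
    # Job 1: this developer binds the layout (comma-separated, one record
    # per line) inside the job itself -- "late binding."
    return sum(1 for _row in csv.reader(io.StringIO(raw)))

def job_list_patients(raw: str) -> list[str]:
    # Job 2: a second developer must re-declare the very same layout
    # (column 0 = last name); nothing shared enforces that both agree.
    return [row[0] for row in csv.reader(io.StringIO(raw))]
```

Both jobs run fine, but if a fourth column is ever added to the extract, every job that guessed at the layout has to be found and fixed independently -- the problem a shared data model (or data dictionary) exists to prevent.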
From Health Data Management:
Fridsma: Health IT Requires Different Types of Interoperability

Yeah, of course, take some blinding-glimpses-of-the-obvious concepts and glom them up with obtuse ivory tower nitpicky jargon. At root, we need a data/metadata dictionary standard -- as Dr. Carter would say, "standard data." Perhaps Dr. Fridsma's ONC geek-speak is a necessary and good thing at some level, but absent a solid foundation, it's difficult to see how we erect a durable and useful complex "interoperable" health IT/HIE structure based on myriad proprietary elements.
At a minimum, there are three types of interoperability required to achieve an interoperable health IT ecosystem, according to Doug Fridsma, M.D., ONC’s outgoing chief science officer.
Speaking this week at AHIMA’s 2014 conference in San Diego, Fridsma made the case that health IT requires all three types of interoperability--semantic, syntactic, and information exchange. “If you exchange the information and the codes don’t match or it’s a proprietary set of codes, you’ve got the information but you have no idea what those codes mean,” he argued. “Semantic interoperability is about the vocabularies and syntactic interoperability is about the structure.”
The end result, Fridsma said, is to have the ability of systems to exchange information and to use the information that has been exchanged, while taking advantage of both the structuring of the data exchange and the codification of the data including vocabulary—with the receiving systems able to interpret the data.
According to Fridsma’s definition, interoperability is the “ability to exchange information and the second part is to use the information that’s been exchanged,” and “being able to use it is all about semantic interoperability and understanding the information that’s there.”...
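Fridsma's syntactic/semantic distinction fits in a few lines. In this sketch, two messages share an identical, cleanly parseable structure (syntactic interoperability holds), but their codes come from different vocabularies; both are real codes for diabetes mellitus, used here illustratively, and the comparison function is hypothetical.

```python
import json

# Same structure, different code systems: both parse without error.
msg_a = json.loads('{"code": "73211009", "system": "SNOMED-CT"}')
msg_b = json.loads('{"code": "250.00", "system": "ICD-9-CM"}')

def same_meaning(a: dict, b: dict) -> bool:
    # Without a shared vocabulary or a cross-map, a receiver can only compare
    # codes literally -- it has no way to know both denote the same condition.
    return a["system"] == b["system"] and a["code"] == b["code"]
```

`same_meaning(msg_a, msg_b)` is False even though a clinician would call the two messages equivalent: the exchange succeeded syntactically and failed semantically, which is precisely Fridsma's point about proprietary or mismatched code sets.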
Visualize going to Lowe’s or Home Depot to have to choose among 1,848 ONC Stage 2 CHPL Certified sizes and shapes of 120VAC 15 amp grounded 3-prong wall outlets.

Yeah, I know, it's not a perfect analogy (analogical perfection is what's known as "redundancy" or "tautology"). Still, the "data" that comprise electricity are free electrons on the move. They have two fundamental attributes, volume (amperes) and pressure (voltage; recall "W=VA"?). Health IT data are ASCII collated binary voltage state representations (ignoring the legacy EBCDIC) comprising numeric and textual symbols (images being specific encodings of the former).
120VAC electrons are rather effortlessly "interoperable." Only proprietary opacity keeps health IT from achieving similar efficient, utilitarian "patient-centered" convenience.
Comment below the post:
Those of us using EHRs in clinical practice are painfully aware of the fact that they are not designed to ensure that the most pertinent and important information is readily available, let alone hard to miss. If avionics and commercial jet cockpits were engineered the way the EHR is, the carnage would be catastrophic. Three simple examples. (1) The absence of a search function, requiring that one scroll through hundreds of entry titles to learn if/when the patient had Test X or saw specialist Z; (2) Alerts where the infrequent important alert about a serious risk is hidden amidst a veritable flood of noise; (3) checklists and drop downs that turn a coherent clinical narrative into a meaningless collection of unrelated factoids, resembling a parts list without assembly instructions.

Thank You, Sir! May I have another?
The EHR does the job for which it is designed and purchased: documentation for audit and billing. It is not designed as a clinical care tool...
EHR design is terrible and obstructs communication. The medical record is absolutely polluted with useless information. Using the EHR is distracting and masquerades as effective communication, care collaboration, and all those buzz words. It is all bull. Physicians have been wailing about this. Would someone like to pay attention now? If the nurse and physician had not been lost in endless button clicking and metric meeting, they may have had a chance to talk to each other.

From Healthcare IT News:
In an Oct. 2 statement, Texas Health officials said that in the "interest of transparency," they confirmed the chain of events leading up to Duncan's travel history not being initially known by or communicated to physicians.

Another "Epic" situation? This story will likely have no legs unless additional Ebola cases emerge, documentably traced back to this particular Liberian vector dude. Nonetheless, the oversight is noteworthy, and the EHR haters will no doubt crawl out from under every rock.
"We have identified a flaw in the way the physician and nursing portions of our electronic health records (EHR) interacted in this specific case. In our electronic health records, there are separate physician and nursing workflows," Texas Health officials explained in the Oct. 1 media statement. "As designed, the travel history would not automatically appear in the physician's standard workflow."
More to come.