
Friday, July 4, 2014

Interoperababble update

From JAMIA: (pdf)

EHR adoption and Meaningful Use
EHR use in the USA has risen rapidly since 2009, with certified EHRs now used by 78% of office-based physicians and 85% of hospitals. Meaningful Use (MU), a staged federal incentive program enacted as part of the American Recovery and Reinvestment Act of 2009, has paid incentives of US$21 billion to hospitals and physicians for installing and using certified EHRs pursuant to specific objectives. Stage 1 of the program (MU1) commenced in 2011, Stage 2 (MU2) in 2014, and Stage 3 is expected by 2017.


While the term interoperability can refer to messages, documents, and services, MU provides several objectives that prioritize document interoperability. Although multiple document standards existed prior to MU1, providers with installed EHRs rarely had the capability to send structured patient care summaries to external providers or patients, as noted by the President’s Council of Advisors on Science and Technology and the Institute of Medicine. MU1 advanced document interoperability by requiring Continuity of Care Document (CCD) or Continuity of Care Record (CCR) implementation as part of EHR certification. Many vendors chose the CCD, which was created to harmonize the CCR with more widely implemented standards. In MU2, the C-CDA, an HL7 consolidation of the MU1 CCD with other clinical document types, became the primary standard for document-based exchange...
Putting aside the "interoperability" misnomer (it's really just about data exchange among 2014 CEHRT systems; no one's remotely operating another's EHR from across town or from the other side of the nation), the unsurprising answer to the title question is "no."
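To make the "document exchange" point concrete, here's a minimal sketch, assuming nothing beyond the standard CDA R2 structure (a ClinicalDocument in the urn:hl7-org:v3 namespace with LOINC-coded sections): the receiving system gets an XML document and has to parse its sections and map them into its own data model. The fragment below is an illustrative stand-in, not a complete or conformant C-CDA.

```python
# Receiving-side view of C-CDA document exchange: parse the XML and list the
# coded sections. The document fragment is illustrative, not conformant C-CDA.
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}  # HL7 CDA R2 namespace

sample_ccda = """<?xml version="1.0"?>
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <code code="34133-9" codeSystem="2.16.840.1.113883.6.1"
        displayName="Summarization of Episode Note"/>
  <component>
    <structuredBody>
      <component>
        <section>
          <code code="48765-2" codeSystem="2.16.840.1.113883.6.1"
                displayName="Allergies and adverse reactions"/>
          <title>Allergies</title>
        </section>
      </component>
      <component>
        <section>
          <code code="10160-0" codeSystem="2.16.840.1.113883.6.1"
                displayName="History of medication use"/>
          <title>Medications</title>
        </section>
      </component>
    </structuredBody>
  </component>
</ClinicalDocument>"""

root = ET.fromstring(sample_ccda)
for section in root.findall(".//cda:section", CDA_NS):
    code = section.find("cda:code", CDA_NS)
    title = section.find("cda:title", CDA_NS)
    print(title.text, "->", code.get("code"), code.get("displayName"))
```

The "semantic" part of semantic interoperability lives in those code and codeSystem attributes: two systems can exchange this file perfectly and still disagree about what's in it.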
ABSTRACT
Background and objective
Upgrades to electronic health record (EHR) systems scheduled to be introduced in the USA in 2014 will advance document interoperability between care providers. Specifically, the second stage of the federal incentive program for EHR adoption, known as Meaningful Use, requires use of the Consolidated Clinical Document Architecture (C-CDA) for document exchange. In an effort to examine and improve C-CDA based exchange, the SMART (Substitutable Medical Applications and Reusable Technology) C-CDA Collaborative brought together a group of certified EHR and other health information technology vendors.
Materials and methods
We examined the machine-readable content of collected samples for semantic correctness and consistency. This included parsing with the open-source BlueButton.js tool, testing with a validator used in EHR certification, scoring with an automated open-source tool, and manual inspection. We also conducted group and individual review sessions with participating vendors to understand their interpretation of C-CDA specifications and requirements.
Results
We contacted 107 health information technology organizations and collected 91 C-CDA sample documents from 21 distinct technologies. Manual and automated document inspection led to 615 observations of errors and data expression variation across represented technologies. Based upon our analysis and vendor discussions, we identified 11 specific areas that represent relevant barriers to the interoperability of C-CDA documents.
Conclusions
We identified errors and permissible heterogeneity in C-CDA documents that will limit semantic interoperability. Our findings also point to several practical opportunities to improve C-CDA document quality and exchange in the coming years.
"615 observations of errors and data expression variation"? In one small study? Now, perhaps some of the data expression "variability" is nominally trivial (the errors are certainly not), but this is pretty disturbing.

I refer you to my February 23rd post, "The Interoperability Conundrum: Arguing for a std data dictionary."


I've pretty much lost that fight, I know. Nonetheless...

__

apropos,


Interesting paper. Free, but registration required for access.
While Big Data enjoys widespread media coverage, not enough attention has been paid to what practitioners think — data scientists who manage and analyze massive volumes of data.
We wanted to know, so Paradigm4 teamed up with Innovation Enterprise to ask over 100 data scientists for their help separating Big Data hype from reality. What we learned is that data scientists face multiple challenges achieving their company’s analytical aspirations. The upshot is that businesses are leaving data — and money — on the table...
  • We’ve all heard how hard it is to analyze massive and rapidly growing data volumes. But data scientists say variety presents a bigger challenge. They are at times leaving data out of their analyses as they wrestle with how to integrate and analyze more types of data such as time-stamped sensor, location, image and behavioral data as well as network data.
  • For complex analytics, data scientists are forced to move large volumes of data from existing data stores to dedicated mathematical and statistical computing software. This time-consuming and coding-intensive step adds no analytical value and impedes productivity.
  • While Hadoop has garnered widespread media coverage, 76 percent of data scientists have encountered serious limitations using it. Hadoop is well suited for embarrassingly-parallel problems but falls short for large-scale complex analytics.
  •  Incorporating the diverse data types into analytical workflows is a major pain point for data scientists using traditional relational database software.
The overwhelming volume of corporate and organizational data continues to generate headlines but it’s the diverse types of data that pose a bigger challenge. Nearly three-quarters of data scientists — 71 percent — said Big Data had made their analytics more difficult and data variety, not just volume, was the challenge.
Many new analytical uses require significantly more powerful algorithms and computational approaches than what’s possible in Hadoop or relational databases. Data scientists increasingly need to leverage all data sources in novel ways, using tools and analytical infrastructures suitable for the task. As we have already seen in this survey, organizations are moving from simple SQL aggregates and summary statistics to next-generation analytics such as machine learning, clustering, correlation, and principal components analysis on moderately sized data sets. The move from simple to complex analytics on Big Data presages an emerging need for analytics that scale beyond single server memory limits and handle sparsity, missing values and mixed sampling frequencies appropriately. These complex analytics methods can also provide data scientists with unsupervised and assumption-free approaches, letting all the data speak for itself [sic]...
Well, at the end of the day, BI and clinical users of data are still only interested in three types of data: [1] text (both "structured" and unstructured narrative), [2] numbers (integers and floating-point), and [3] images (both static and dynamic). How they originally get defined, and the interop problems of such heterogeneity, point directly back to the issue of dictionary standardization. It is indeed less a problem of data magnitude and more one of the n-dimensional translation issues inherent in the heterogeneity.
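To put a shape on the "standard data dictionary" idea, a minimal sketch: one canonical definition per element (name, data type, units, code), with vendors mapping their local field names onto it. The element names, the vendor synonyms, and the two LOINC codes below are illustrative assumptions, not a proposed standard.

```python
# Minimal sketch of a standard data dictionary: canonical element definitions
# plus a mapping from vendor-local field names. Entries are illustrative only.
STANDARD_DICTIONARY = {
    "body_weight": {
        "type": "number",            # floating-point
        "units": "kg",
        "loinc": "29463-7",          # Body weight
    },
    "discharge_summary_note": {
        "type": "text",              # unstructured narrative
        "loinc": "18842-5",          # Discharge summary
    },
    "chest_xray_image": {
        "type": "image",             # static image
    },
}

# Hypothetical vendor-specific synonyms that a shared dictionary would retire.
LOCAL_SYNONYMS = {
    "WT_KG": "body_weight",
    "wtlbs": "body_weight",          # same concept, different units upstream
    "DCSummary": "discharge_summary_note",
    "CXR_IMG": "chest_xray_image",
}

def normalize(local_field):
    """Resolve a vendor field name to its canonical dictionary entry."""
    canonical = LOCAL_SYNONYMS.get(local_field)
    if canonical is None:
        raise KeyError(f"no canonical mapping for {local_field!r}")
    return canonical, STANDARD_DICTIONARY[canonical]

print(normalize("WT_KG"))
```

The translation work doesn't disappear, but it happens once, at the dictionary, instead of n-squared times between every pair of systems.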

Interoperababble.
___

More to come...
