
Wednesday, August 29, 2012

Single Source of Truth

The Consolidated CDA implementation guide defines nine different types of commonly used CDA documents, including:
  • Continuity of Care Document
  • Consultation Notes
  • Discharge Summary
  • Imaging Integration, and DICOM Diagnostic Imaging Reports
  • History and Physical
  • Operative Note
  • Progress Note
  • Procedure Note
  • Unstructured Documents
Each of these nine documents has a document template defined in the Consolidated CDA implementation guide, which will now be the single source of truth for implementing these CDA documents.
___

Dr. Farzad Mostashari / National Coordinator for Health Information Technology
Common Standards and Implementation Specifications for Electronic Exchange of Information: The Meaningful Use Stage 2 final rules define a common dataset for all summary of care records, including an impressive array of structured and coded data to be formatted uniformly and sent securely during transitions of care, upon discharge, and to be shared with the patient themselves. These include:
  • Patient name and demographic information including preferred language (ISO 639-2 alpha-3), sex, race/ethnicity (OMB Ethnicity) and date of birth
  • Vital signs including height, weight, blood pressure, and smoking status (SNOMED CT)
  • Encounter diagnosis (SNOMED CT or ICD-10-CM)
  • Procedures (SNOMED CT)
  • Medications (RxNorm) and medication allergies (RxNorm)
  • Laboratory test results (LOINC)
  • Immunizations (CVX)
  • Functional status including activities of daily living, cognitive and disability status
  • Care plan field including goals and instructions
  • Care team including primary care provider of record
  • Reason for referral and referring provider’s name and office contact information (for providers)
  • Discharge instructions (for hospitals)
In addition, there are a host of detailed standards and implementation specifications for a number of other transactions including quality reporting, laboratory results, electronic prescribing, immunizations, cancer registries, and syndromic surveillance (see below for a detailed list).

What does this mean? It means that we are able to break down barriers to the electronic exchange of information and decrease the cost and complexity of building interfaces between different systems while ensuring providers with certified electronic health record (EHR) technology have the tools in place to share, understand, and incorporate critical patient information. It also means that providers can improve workflow and dig deeper into the data. Certified EHR technology must be able to support identity reconciliation—matching the right record to the right person—and will give doctors the tools to reconcile a new document with the information already on file, for instance by incorporating medications and problems identified by another provider into a patient’s record, thus creating a single source of truth. [emphases mine - BG]
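Purely by way of illustration, here is one way that common dataset might be modeled as a data structure. This is a sketch of my own, not anything prescribed by the final rules or the C-CDA spec; all the class and field names are my assumptions, with the required vocabularies noted in comments.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CodedEntry:
    """A single coded data element, bound to a standard vocabulary."""
    code: str          # e.g., "44054006"
    code_system: str   # e.g., "SNOMED CT"
    display: str       # e.g., "Diabetes mellitus type 2"

@dataclass
class SummaryOfCareRecord:
    """Rough approximation of the MU Stage 2 common dataset (field names are mine)."""
    patient_name: str
    date_of_birth: str                # ISO 8601 date
    sex: str
    preferred_language: str           # ISO 639-2 alpha-3, e.g., "eng"
    race_ethnicity: str               # OMB categories
    smoking_status: CodedEntry        # SNOMED CT
    encounter_diagnoses: List[CodedEntry] = field(default_factory=list)   # SNOMED CT / ICD-10-CM
    procedures: List[CodedEntry] = field(default_factory=list)            # SNOMED CT
    medications: List[CodedEntry] = field(default_factory=list)           # RxNorm
    medication_allergies: List[CodedEntry] = field(default_factory=list)  # RxNorm
    lab_results: List[CodedEntry] = field(default_factory=list)           # LOINC
    immunizations: List[CodedEntry] = field(default_factory=list)         # CVX
    care_plan: Optional[str] = None   # goals and instructions

# Illustrative instantiation (fabricated patient):
record = SummaryOfCareRecord(
    patient_name="Doe, John Jr.",
    date_of_birth="1950-01-15",
    sex="M",
    preferred_language="eng",
    race_ethnicity="Not Hispanic or Latino",
    smoking_status=CodedEntry("8517006", "SNOMED CT", "Former smoker"),
)
```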
___

"In Information Systems design and theory, as instantiated at the Enterprise Level, Single Source Of Truth (SSOT) refers to the practice of structuring information models and associated schemata such that every data element is stored exactly once (e.g., in no more than a single row of a single table). Any possible linkages to this data element (possibly in other areas of the relational schema or even in distant federated databases) are by reference only. Thus, when any such data element is updated, this update propagates to the enterprise at large, without the possibility of a duplicate value somewhere in the distant enterprise not being updated (because there would be no duplicate values that needed updating)."

"ADT 44"

From an email I found online today, embedded in a RHIO PDF document (it's an HL7 messaging error thing):

ADT A44 (Move account information — patient account number)
The intention of these messages is to move an account number, which may or may not have associated documents, from one patient to another.
These types of messages could result from inadvertently selecting "John Doe SR", as opposed to "John Doe JR", for example. Whereas "John Doe JR" is the correct patient. Once the patient is changed/corrected, the resulting A44 message initiates a move of all the documents associated with this account to the corrected patient ID.
In this example, the correct Patient ID = "John Doe JR", and the Prior Patient ID = "John Doe SR".
Investigation
Historic data analysis for all A44 messages received for CFRHIO:
  • Total A44 messages received thru 7/31 = 6,417
  • Total "Prior Patient IDs" with associated documents = 368
  • FHO = 213
  • OH = 155
  • Total associated documents accessed = 0
Action Plan
A44 message processing moving forward:
  • Short-Term: (Complete) Manually disable the viewing of documents for the "Prior Patient ID" or MRG-1, as specified in the A44 messages.
  • Medium-Term: (In Progress) Manually process the account move for all A44 messages, both historic and new inbound.
  • Long-Term: (Investigating) Implement non-manual process to properly process new A44 messages.
Thank you for your understanding.
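As an aside, the "Long-Term" non-manual process they're investigating would reduce to something like the following toy handler. This is entirely my own sketch against a simplified, fabricated A44 payload (real HL7 v2 processing has many more wrinkles): read the prior ID from MRG-1 and the surviving ID from PID-3, then re-point the documents.

```python
def parse_segments(message: str) -> dict:
    """Split a pipe-delimited HL7 v2 message into {segment_name: fields}."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

def handle_a44(message: str, documents: dict) -> None:
    """Move documents from the prior (wrong) patient ID to the correct one."""
    seg = parse_segments(message)
    correct_id = seg["PID"][3].split("^")[0]   # PID-3: patient identifier list
    prior_id   = seg["MRG"][1].split("^")[0]   # MRG-1: prior patient identifier
    moved = documents.pop(prior_id, [])
    documents.setdefault(correct_id, []).extend(moved)
    print(f"Moved {len(moved)} document(s) from {prior_id} to {correct_id}")

# Fabricated example: "John Doe SR" was selected instead of "John Doe JR".
msg = ("MSH|^~\\&|ADT|HOSP|||20120829||ADT^A44|123|P|2.3\r"
       "PID|1||JRDOE123^^^HOSP||DOE^JOHN^JR\r"
       "MRG|SRDOE456^^^HOSP")
docs = {"SRDOE456": ["discharge_summary.pdf", "lab_panel.pdf"]}
handle_a44(msg, docs)   # Moved 2 document(s) from SRDOE456 to JRDOE123
```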
Interesting. Maybe it should have been subjected to the RDBMS equivalent of "rigorability." I'm not at liberty to discuss what specifically brought this issue to my attention, but it points up the widespread, chronic data-linkage problem posed by the continuing lack of a unique (no-dupes, no-nulls), secure national patient identifier (like, you know, a HIC number writ large).

NO DUPES, NO NULLS, 
AUTHENTICATED PRIMARY KEY IDENTIFIER


What shall be the reliable identifier? To wit, the much-unloved SSN:
What happens to the money assigned to people using false identities, or names matched with the wrong Social Security Number (SSN), or newlyweds who forgot to register their name changes with the Social Security administration?

The answer lies in a little-known aspect of the Social Security behemoth known as the Earnings Suspense File (ESF).
A necessarily dynamic probabilistic combination of Last/First/DoB/Gender/SSN?
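Something like the following back-of-the-napkin scorer is what I have in mind. The weights are illustrative guesses of mine, not a validated linkage model; production matchers estimate such agreement weights from data (Fellegi-Sunter and kin).

```python
def match_score(a: dict, b: dict) -> float:
    """Crude weighted agreement score across the candidate identifier fields.
    The weights are illustrative only; a real probabilistic linkage model
    estimates them from the data itself."""
    weights = {"ssn": 0.40, "dob": 0.25, "last": 0.15, "first": 0.12, "gender": 0.08}
    score = 0.0
    for fld, w in weights.items():
        if a.get(fld) and a.get(fld) == b.get(fld):
            score += w
    return score

rec1 = {"last": "DOE", "first": "JOHN", "dob": "1950-01-15", "gender": "M", "ssn": "012-34-5678"}
rec2 = {"last": "DOE", "first": "JOHN", "dob": "1950-01-15", "gender": "M", "ssn": None}  # SSN mis-keyed
print(match_score(rec1, rec2))  # 0.6 -- high agreement, but is it Senior or Junior?
```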

I worked for a number of years in the Risk Management Department of a credit card bank. Among my duties was ongoing portfolio modeling, management, and fraud monitoring, all of which entailed a good bit of data mining, repeatedly running SAS code against millions of customer account records.

Wherein, as I wrote in 2002,
...our department has the endless and difficult task of trying to statistically separate the “goods” from the “bads” using data mining technology and modeling methods such as factor analysis, cluster analysis, general linear and logistic regression, CART analysis (Classification and Regression Tree) and related techniques.

Curiously, our youngest cardholder is 3.7 years of age (notwithstanding that the minimum contractual age is 18), the oldest 147. We have customers ostensibly earning $100,000 per month—odd, given that the median monthly (unverified self-reported) income is approximately $1,700 in our active portfolio.
Yeah. Mistakes. We spend a ton of time trying to clean up such exasperating and seemingly intractable errors. Beyond that, for example, we undertake a new in-house credit score modeling study and immediately find that roughly 4% of the account IDs we send to the credit bureau cannot be merged with their data (via Social Security numbers or name/address/phone links).

I guess we’re supposed to be comfortable with the remaining data because they matched up -- and for the most part look plausible. Notwithstanding that nearly everyone has their pet stories about credit bureau errors that gave them heartburn or worse...

In addition to credit risk modeling, an ongoing portion of my work involves cardholder transaction analysis and fraud detection. Here again the data quality problems are legion, often going beyond the usual keystroke data processing errors that plague all businesses. Individual point-of-sale events are sometimes posted multiple times, given the holes in the various external and internal data processing systems that fail to block exact dupes. Additionally, all customer purchase and cash advance transactions are tagged by the merchant processing vendor with a 4-digit “SIC code” (Standard Industrial Classification) categorizing the type of sale. These are routinely and persistently miscoded, often laughably. A car rental event might come back to us with a SIC code for “3532- Mining Machinery and Equipment”; booze purchases at state-run liquor stores are sometimes tagged “9311- Taxation and Monetary Policy”; a mundane convenience store purchase in the U.K. is seen as “9711- National Security”, and so forth.

Interestingly, we recently underwent training regarding our responsibilities pursuant to the Treasury Department’s FinCEN (Financial Crimes Enforcement Network) SAR program (Suspicious Activity Reports). The trainer made repeated soothing references to our blanket indemnification under this system, noting approvingly that we are not even required to substantiate a “good faith effort” in filing a SAR. In other words, we could file egregiously incorrect information that could cause an innocent customer a lot of grief, and we can’t be sued.

He accepted uncritically that this was a necessary and good idea.
I spent inordinate episodic FoxPro and SAS coding time (re) "cleaning the data," cross-referencing and correcting thousands of bad entries (Last, First, DOB, Gender, SSN) -- chief among them the bad "Socials" (the result of legion input keystroke errors).
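The screening itself was mostly pattern-level sanity checking. A Python re-creation of the idea (not my original SAS/FoxPro code) might look like this; the specific exclusions reflect SSA issuance rules (no area 000, 666, or 900-999, no all-zero group or serial).

```python
import re

def ssn_looks_valid(ssn: str) -> bool:
    """Flag structurally impossible SSNs. The SSA never issues area numbers
    000, 666, or 900-999, nor all-zero group or serial fields."""
    m = re.fullmatch(r"(\d{3})-(\d{2})-(\d{4})", ssn)
    if not m:
        return False
    area, group, serial = m.groups()
    if area in ("000", "666") or area >= "900":
        return False
    if group == "00" or serial == "0000":
        return False
    return True

for s in ["012-34-5678", "666-12-3456", "123-00-4567", "12345678"]:
    print(s, ssn_looks_valid(s))
```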

Then we hired an enthusiastic but in some ways hapless H-1B crew to establish an Oracle data warehouse, and they thereupon had the brilliant idea of storing SSNs as integers in lieu of character strings. So, were a cardholder's Social something like "012-34-5678," you now gotta write logic that takes the integer 12345678 and converts/parses/re-concatenates it back to char(11) "012-34-5678" for proper ASCII collation and ease of viewing.
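In code, the repair itself is trivial, which is what makes the schema choice so galling: zero-pad back to nine digits, then slice and hyphenate. A quick sketch of the idea:

```python
def ssn_from_int(n: int) -> str:
    """Recover the char(11) form from an integer column that ate the leading zeros."""
    digits = f"{n:09d}"               # 12345678 -> "012345678"
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

print(ssn_from_int(12345678))   # "012-34-5678"
```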

Now, in those circumstances of crapped-up but unremediated primary keys maybe the worst case upshot (in addition to my recurrent cube rants) would have been someone being denied a credit line increase or APR decrease.

In the case of a HL7 "ADT 44," on the other hand, someone could get the wrong meds dose and die from it.**

Close Only Counts in Horseshoes and Hand Grenades.
** and, yes, to be fair, I know that "Last+First+DoB+Gender+Social" would itself be imperfect (and highly variable; see Latanya Sweeney's work). Maybe someday we'll have Genes+Retinal Scan ID proxies. Maybe. But we can do way better than the kludgy "Master Patient Index" databases currently in use.
apropos,
MY LIGHT BEDTIME READING FOR TONIGHT


CMMI® for Development, Version 1.3
CMMI-DEV, V1.3 CMMI Product Team
Improving processes for developing better products and services
November 2010 TECHNICAL REPORT
CMU/SEI-2010-TR-033
ESC-TR-2010-033
Software Engineering Process Management Program
Unlimited distribution subject to the copyright.
http://www.sei.cmu.edu
and


Close to another 1,200 pages of reading. Now, with respect to the former, my interest has been piqued by this "Agile" thing. The Fad of The Year?


From CMMI V1.3:
All of the notes begin with the words, “In Agile environments” and are in example boxes to help you to easily recognize them and remind you that these notes are examples of how to interpret practices and therefore are neither necessary nor sufficient for implementing the process area.

Multiple Agile approaches exist. The phrases “Agile environment” and “Agile method” are shorthand for any development or management approach that adheres to the Manifesto for Agile Development [Beck 2001].

Such approaches are characterized by the following:
  • Direct involvement of the customer in product development
  • Use of multiple development iterations to learn about and evolve the product
  • Customer willingness to share in the responsibility for decisions and risk
Many development and management approaches can share one or more of these characteristics and yet not be called “Agile.” For example, some teams are arguably “Agile” even though the term Agile is not used. Even if you are not using an Agile approach, you might still find value in these notes. PDF pg 70
___
  • In Agile environments, configuration management (CM) is important because of the need to support frequent change, frequent builds (typically daily), multiple baselines, and multiple CM supported workspaces (e.g., for individuals, teams, and even for pair-programming). Agile teams may get bogged down if the organization doesn’t: 1) automate CM (e.g., build scripts, status accounting, integrity checking) and 2) implement CM as a single set of standard services. At its start, an Agile team should identify the individual who will be responsible to ensure CM is implemented correctly. At the start of each iteration, CM support needs are re-confirmed. CM is carefully integrated into the rhythms of each team with a focus on minimizing team distraction to get the job done. PDF pg 151
  • In Agile environments, product integration is a frequent, often daily, activity. For example, for software, working code is continuously added to the code base in a process called “continuous integration.” In addition to addressing continuous integration, the product integration strategy can address how supplier supplied components will be incorporated, how functionality will be built (in layers vs. “vertical slices”), and when to “refactor.” The strategy should be established early in the project and be revised to reflect evolving and emerging component interfaces, external feeds, data exchange, and application program interfaces. PDF pg 270
  • In Agile environments, the sustained involvement of customer and potential end users in the project’s product development activities can be crucial to project success; thus, customer and end-user involvement in project activities should be monitored. PDF pg 287
  • For product lines, there are multiple sets of work activities that would benefit from the practices of this process area. These work activities include the creation and maintenance of the core assets, developing products to be built using the core assets, and orchestrating the overall product line effort to support and coordinate the operations of the inter-related work groups and their activities. In Agile environments, performing incremental development involves planning, monitoring, controlling, and re-planning more frequently than in more traditional development environments. While a high-level plan for the overall project or work effort is typically established, teams will estimate, plan, and carry out the actual work an increment or iteration at a time. Teams typically do not forecast beyond what is known about the project or iteration, except for anticipating risks, major events, and large-scale influences and constraints. Estimates reflect iteration and team specific factors that influence the time, effort, resources, and risks to accomplish the iteration. Teams plan, monitor, and adjust plans during each iteration as often as it takes (e.g., daily). Commitments to plans are demonstrated when tasks are assigned and accepted during iteration planning, user stories are elaborated or estimated, and iterations are populated with tasks from a maintained backlog of work. PDF pg 294
  • In Agile environments, teams tend to focus on immediate needs of the iteration rather than on longer term and broader organizational needs. To ensure that objective evaluations are perceived to have value and are efficient, discuss the following early: (1) how objective evaluations are to be done, (2) which processes and work products will be evaluated, (3) how results of evaluations will be integrated into the team’s rhythms (e.g., as part of daily meetings, checklists, peer reviews, tools, continuous integration, retrospectives). PDF pg 315
  • In Agile environments, customer needs and ideas are iteratively elicited, elaborated, analyzed, and validated. Requirements are documented in forms such as user stories, scenarios, use cases, product backlogs, and the results of iterations (working code in the case of software). Which requirements will be addressed in a given iteration is driven by an assessment of risk and by the priorities associated with what is left on the product backlog. What details of requirements (and other artifacts) to document is driven by the need for coordination (among team members, teams, and later iterations) and the risk of losing what was learned. When the customer is on the team, there can still be a need for separate customer and product documentation to allow multiple solutions to be explored. As the solution emerges, responsibilities for derived requirements are allocated to the appropriate teams. PDF pg 339
  • In Agile environments, requirements are communicated and tracked through mechanisms such as product backlogs, story cards, and screen mock-ups. Commitments to requirements are either made collectively by the team or an empowered team leader. Work assignments are regularly (e.g., daily, weekly) adjusted based on progress made and as an improved understanding of the requirements and solution emerge. Traceability and consistency across requirements and work products is addressed through the mechanisms already mentioned as well as during start-of-iteration or end-of-iteration activities such as “retrospectives” and “demo days.” PDF pg 354
  • In Agile environments, some risk management activities are inherently embedded in the Agile method used. For example, some technical risks can be addressed by encouraging experimentation (early “failures”) or by executing a “spike” outside of the routine iteration. However, the Risk Management process area encourages a more systematic approach to managing risks, both technical and non-technical. Such an approach can be integrated into Agile’s typical iteration and meeting rhythms; more specifically, during iteration planning, task estimating, and acceptance of tasks. PDF pg 362
  • In Agile environments, the focus is on early solution exploration. By making the selection and tradeoff decisions more explicit, the Technical Solution process area helps improve the quality of those decisions, both individually and over time. Solutions can be defined in terms of functions, feature sets, releases, or any other components that facilitate product development. When someone other than the team will be working on the product in the future, release information, maintenance logs, and other data are typically included with the installed product. To support future product updates, rationale (for trade-offs, interfaces, and purchased parts) is captured so that why the product exists can be better understood. If there is low risk in the selected solution, the need to formally capture decisions is significantly reduced. PDF pg 386
  • In Agile environments, because of customer involvement and frequent releases, verification and validation mutually support each other. For example, a defect can cause a prototype or early release to fail validation prematurely. Conversely, early and continuous validation helps ensure verification is applied to the right product. The Verification and Validation process areas help ensure a systematic approach to selecting the work products to be reviewed and tested, the methods and environments to be used, and the interfaces to be managed, which help ensure that defects are identified and addressed early. The more complex the product, the more systematic the approach needs to be to ensure compatibility among requirements and solutions, and consistency with how the product will be used. PDF pg 414

I have much yet to learn -- in particular, a simple explanation of the foregoing graphic as it pertains to an effective methodology for HIT development. Dubiety endures, and suspicion of bamboozlement reeks more broadly.

A concern, to wit:
Because the companies using QFD [Quality Function Deployment] are already fairly sophisticated in their approaches to quality control, the apparent success of QFD as a software quality approach may be misleading. 

QFD is a very formal, structured group activity involving clients and product development personnel. QFD is sometimes called “the house of quality” because one of the main kinds of planning matrices resembles the peaked roof of a house. 

In the course of the QFD sessions, the users’ quality criteria are exhaustively enumerated and defined. Then the product’s quality response to those requirements is carefully planned so that all of the quality criteria are implemented or accommodated. 

For the kinds of software where client quality concerns can be enumerated and where developers can meet and have serious discussions about quality, QFD appears to work very well: embedded applications, medical devices, switching systems, manufacturing support systems, fuel-injection software controls, weapons systems, and the like. 

Also, QFD requires development and QA personnel who know a lot about quality and its implications. QFD is not a “quick and dirty” approach that works well using short-cuts and a careless manner. This is why QFD and Agile are cited as being “antagonistic.” 

The QFD software projects that we have examined have significantly lower rates of creeping requirements and also much lower than average volumes of both requirements and design defects than U.S. norms. However, the kinds of software projects that use QFD typically do not have highly volatile requirements.

Jones, Capers; Bonsignour, Olivier (2011-07-19). The Economics of Software Quality (Kindle Locations 3299-3311). Pearson Education (USA). Kindle Edition.
All very interesting.

UPDATE

Ran across an interesting website and blog post:

Leadership Skills: Building Collaborative Teams
Work teams can be very effective. They can also be a disaster, as anyone with even a passing knowledge of organizational dynamics understands. In today’s world of instant information, many of these impediments to real team performance are being overcome, while new challenges are emerging. New teams must be highly communicative, collaborative, mutually supportive, multitalented, and quick to respond, often without having a complete picture of the “facts.” New teams must be able to act with relative autonomy, demanding higher levels of accountability, unparalleled access to information, and commensurate authority. Team leadership can shift as demands for expertise change, although accountability remains with the titular team leader. The new team leader, therefore, must be both highly talented and politically savvy to survive and thrive as organizations adapt to new models. He or she must either have the stature or authority to withstand great pressure to avoid producing the “same old stuff,” which is tantamount to team failure.

In organizations with traditional structures and loyalties, teams are easily compromised by the often divergent pull from multiple constituencies that provide lip service to team success while providing minimum support or even actively working to sabotage team efforts. Teams that cannot pull themselves loose through the efforts of a strong, grounded leader or who have a patron high up in the organization often are teams in name only...

I forwarded this around our shop. We suffer from having too many "teams," all frequently busily doing "Work About The Work About the Work." A rather common affliction, unfortunately (including ASQ Divisions).

ERRATUM...


OK...
AGILE LEAN SIX SIGMA QFD RAPID-CYCLE PDSA CQI TQM!



One of my long favorite philosophers is the late Alan Watts, who once wryly observed something to the effect that "a problem with Christianity is that people have replaced the religion of Jesus with a religion about Jesus."

"Six Sigma" accords us a couple of lovely metaphors: [1] the boundary within plus or minus six standard deviations around a process average, assuming a perfectly Gaussian ("bell curve") dispersion, and [2] all those cool martial-arts green and black "Belts."

A "religion" "about."

In the software realm, "Agile" ups the allusive ante.


Will this be a hot new business line of professional certifications? Lordy. Agile Grasshoppers to Agile Samurai? And, to further jumble the metaphors, will they be donning rugby attire for "scrums" and track gear for "sprints"?

Back to Jones and Bonsignour:
The phrase Six Sigma, which originated at Motorola, refers to defect densities of 3.4 “opportunities” for defects per million chances. The approach was originally used for complex manufactured devices such as cell phones but has expanded to scores of manufactured products and software, too. 

The original Six Sigma approach was formal and somewhat expensive to deploy. In recent years subsets and alternatives have been developed such as “lean Six Sigma” and “Six Sigma for software” and “design for Six Sigma.” 

Because the Six Sigma definition is hard to visualize for software, an alternative approach would be to achieve a cumulative defect removal efficiency rate of 99.999999% prior to delivery of a product. This method is not what Motorola uses, but it helps to clarify what would have to be done to achieve Six Sigma results for software projects. 

Given that the current U.S. average for software defect removal efficiency is only about 85%, and quite a few software products are even below 70%, it could be interpreted that software producers have some serious catching up to do. Even top-ranked software projects in the best companies do not go beyond about 98% in cumulative defect removal efficiency.

Jones, Capers; Bonsignour, Olivier. The Economics of Software Quality (Kindle Locations 3381-3384).
"[D]efect densities of 3.4 “opportunities” for defects per million chances." Yeah, as a process average, one assuming a perfectly Gaussian distribution,


which, of course, exists only on college chalkboards and in textbooks (and, of course, on Wall Street -- and, we see where that got them).


Color me presumptively Chebyshev-ista.


"Chance is lumpy." - Abelson's Laws

Consequently, your outer bound is 2.78% defect rate at 6 sigma (percent, not per million) under Chebyshev.
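A quick check of the arithmetic (standard math, nothing exotic): Chebyshev's inequality says that for any finite-variance distribution, P(|X - mu| >= k*sigma) <= 1/k^2. At k = 6 that distribution-free worst case is 1/36, about 2.78%, versus roughly two per billion for a true Gaussian:

```python
import math

k = 6.0
chebyshev_bound = 1 / k**2                    # distribution-free worst case: 1/36
gaussian_tail = math.erfc(k / math.sqrt(2))   # two-sided Gaussian tail beyond 6 sigma

print(f"Chebyshev bound at 6 sigma: {chebyshev_bound:.4%}")  # ~2.7778%
print(f"Gaussian tail at 6 sigma:   {gaussian_tail:.2e}")    # ~1.97e-09
```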

It gets worse when you go all 3+ dimensional. Think about it.


I guess here's the cut-to-the-chase point (in addition to, and beyond, the Watts analogy): I can pretty quickly convey the essentials of "Lean" to the average assemblage of high school-educated clinic front office staff. Priesthoods of belt-laden QI In-Tongue speakers, on the other hand, make for a nice market in books and webinars.
___

REMINDER


Money on the table.
___

INTERESTING NEWS


Demand for meaningful use (MU) assistance has exploded, increasing competition between third-party consulting firms--most of whom are excelling in MU-related work.
Not one word about RECs.

We just can't get any love these days. In my email inbox this morning:


MORE NEWS

Which components of health IT will drive financial value?
A framework that describes the ability of specific health information exchange (HIE) and EHR functionalities to drive financial savings could help efforts to develop meaningful use measures and measure the financial impact of health IT, according to research published in the August issue of the American Journal of Managed Care.

“Previous work in this area has largely modeled the financial effect of whole health IT applications, assuming that the effects of those applications were similar across different contexts,” wrote lead researcher Lisa M. Kern, MD, MPH, of Weill Cornell Medical College in New York City, and colleagues. “This assumption may not be true because health IT is an inherently heterogeneous intervention. EHRs and HIE are themselves applications composed of functionalities that are variably implemented, configured and used.”...  
Courtesy of Cardiovascular Business. Full Journal article here.

___

BTW - 
"We support technology enhancements for medical health records and data systems while affirming patient privacy and ownership of health information."

- GOP 2012 Platform. That's it. The only reference to Health IT.
___

AUGUST 31 UPDATE:
YET ANOTHER "HEALTHCARE TRANSFORMATION" INSTITUTE


I got a LinkedIn email heads-up about these people today. Pretty interesting report, actually (in light of what I've perused thus far).

Like a person suffering from a debilitating disease, healthcare delivery in the United States is ailing. The U.S. spends significantly more per capita and a higher percentage of GDP on healthcare than other developed nations, yet our patient outcomes (e.g., mortality, safety, access to medical care) are disparate and inconsistent. Moreover, the rapidly rising costs of healthcare delivery are making medical care increasingly unaffordable to the average citizen and threaten our national financial viability.

How did we get here? Although unhealthy lifestyles and the growing and aging population are undoubtedly contributing to the rise in healthcare costs, two key factors must not be underestimated: a) advances in medical technology and b) powerful system incentives that inadvertently advance unchecked utilization throughout the healthcare delivery system.

So what can we do?

Where do we start? Read some Dr. John Toussaint (e.g., see my August 4th post), along with this report.

Read on.
___

TROUBLEMAKER in the TWITTERVERSE


Details shortly.

SEPT 2nd REC ASS'N AMBER ALERT

 ___

More to come...
