
Tuesday, September 25, 2012

The 2014 CEHRT 3rd Wave

So, after work tonight I downloaded and combined the draft NIST 2014 CEHRT stds to date in Adobe Acrobat, "waves 1 through 3" (nearly 300 pgs).

After which I keyword-searched the following:
  • usability;
  • ease of use;
  • efficiency; and
  • workflow.
"Usability"? Nada. Zilch.
"ease of use"? Nada. Zilch.

"efficiency"? Maybe 8 references, all unrelated to "usability."
"workflow"? 13 or so, all also unrelated to "usability."

2014 Edition Test Procedure for §170.314(a)(16)
Electronic medication administration record (inpatient setting only), For Public Comment, September 21, 2012

The Tester shall use and apply the provided test data during the test, without exception, unless one of the following conditions exists:
  • The tester determines that the Vendor-selected message format requires some modification to the test data.
  • The Tester determines that the Vendor product is sufficiently specialized that the provided test data needs to be modified in order to conduct an adequate test. Having made the determination that some modification to the provided test data is necessary, the Tester shall record the modifications made as part of the test documentation.
  • The Tester determines that changes to the test data will improve the efficiency of the testing process; primarily through using consistent demographic data throughout the testing workflow. The Tester shall ensure that the functional and interoperable requirements identified in the criterion can be adequately evaluated for conformance and that the test data provides a comparable level of robustness.
CMS/ONC/NIST have effectively just punted on the "usability" thing this time around.

"Stop it! This is Hard!"

BTW: I found this repetitive statement equally revealing:
Any departure from the provided test data shall strictly focus on meeting the basic capabilities required of EHR technology relative to the certification criterion rather than exercising the full breadth/depth of capability that installed EHR technology might be expected to support.
Emphasis mine. I keep having that Clinic Monkey Moment.

So, here's the logic: EHR "usability" will flow naturally as a byproduct of vendors having a "QMS" in place (Quality Management System). I guess they'd intended to promulgate one, but, well...

From the 2014 CEHRT Final Rule (pdf):
We have adopted a certification criterion that accounts for the fact that we did not publish the quality management document as we had proposed. The certification criterion we have adopted is more general and provides more flexibility. The certification criterion expresses that for each capability an EHR technology includes and for which that capability's certification is sought, the use of a QMS in the development, testing, implementation and maintenance of that capability must be identified. Unlike our proposal, any QMS may be used to meet this certification criterion and even an indication that no QMS was used for particular capabilities for which certification is requested is permitted. The commenter who stated that they are implementing the FDA’s Quality System (QS) regulations (for example, under the MDDS rule) would – by definition – be meeting this certification criterion so long as they cite their compliance with FDA’s QS regulations for certification. Given this flexibility, we cannot foresee any reason why this certification criterion cannot be satisfied nor do we believe that it will be a significant burden to indicate the QMS used (or not used) in the development of capabilities for which certification is sought.

We understand that some EHR technology developers have several teams who work on different functional components of EHR technology. In the case where the whole development organization uses the same QMS (or not at all) across all teams, then this certification criterion may be met with one report. Where there is variability across teams, the EHR technology developer will need to indicate the individual QMS’ followed for the applicable certification criteria for which the EHR technology is submitted for certification.

We encourage EHR technology developers to choose an established QMS, but developers are not required to do so, and may use either a modified version of an established QMS, or an entirely “home grown” QMS. We also clarify that we have no expectation that there will be detailed documentation of historical QMS or their absence. As specified above, we believe that the documentation of the current status of QMS in an EHR technology development organization is sufficient.
[pp 106-107]
Emphases mine.

I have searched my combined Waves 1 through 3 NIST stds PDF repeatedly. Neither the acronym "QMS" nor the phrase "quality management" appears anywhere therein. I guess NIST didn't get The Memo.

Maybe I'm missing something. I again reviewed ARRA/HITECH itself (pdf):

Subtitle A—Promotion of Health Information Technology


The Public Health Service Act (42 U.S.C. 201 et seq.) is amended by adding at the end the following:

42 USC 300jj.    "SEC. 3000. DEFINITIONS.

"In this title: "(1) CERTIFIED EHR TECHNOLOGY.—The term 'certified EHR technology' means a qualified electronic health record that is certified pursuant to section 3001(c)(5) as meeting standards adopted under section 3004 that are applicable to the type of record involved (as determined by the Secretary, such as an ambulatory electronic health record for office-based physicians or an inpatient hospital electronic health record for hospitals)...
Again, emphasis mine. So, it's within the purview of HHS to set forth the specs of certification -- inclusive of requiring vendor documentation (or not) of an operative QMS. Well, I guess they're not exceeding their regulatory Brief.

Quite the contrary.

BTW, I'm not the only crank here.

EHR Certification 2014—Darwinian Implications?

The final EHR certification criteria for 2014 were released a few weeks ago, and I am surprised by how many of the more forward-thinking proposals made it into the final set.  The proposed criteria, released in March, contained suggestions that I thought were good ideas (e.g., usability testing, price transparency, and data portability requirements), but which seemed unlikely to survive the comment period.   I was shocked to see they made the cut!  Since these criteria affect how products are developed and sold, they may very well change the dynamics of the EHR market by helping some vendors and hastening the demise of others.  Their impact on the EHR market could be huge!  Let’s take a closer look at the three with greatest potential impact.

Usability Testing
This was the biggest surprise.  I thought usability testing would become part of certification, but not for the 2014 cycle. According to comments received by the ONC, EHR buyers liked the idea–vendors, not so much...

EHR usability rises to the forefront
September 27, 2012 | Bernie Monegain - Editor

Interview excerpt, Robert Tennant, senior policy advisor for the Medical Group Management Association:
Q. Is it a good idea to have usability as one of the measures for EHR certification?

A. I really think it’s a good step for ONC [the Office of the National Coordinator for Health IT] to start pushing the vendors toward more user-friendly systems, because if they’re not easy to use, it slows the clinician down, which we certainly don’t want to happen. It can frustrate them. It could lead to errors, and not taking full advantage of what these systems offer the clinician. User friendliness is especially important for what I would call the next wave of clinicians to adopt EHRs. The first wave, many of them had the systems in place, [and] a lot of them were technology-savvy. Now we’re trying to get at those physicians that aren’t necessarily technology-savvy. It’s absolutely critical that the interface be very friendly. Again, that’s going to increase the adoption rate among physician practices.

Q. Why has usability not been a consideration from the get-go?

A. We've been pushing the idea of usability for a long time. I was involved in the formation of CCHIT, the Certification Commission for Health Information Technology, which is one of the six ATCBs now authorized to certify these products. I remember in the early days of CCHIT, I kept saying, "You know it’s not good enough just to test the functionality. If it takes a thousand keystrokes to get to something, well, it’s not very user friendly." The pushback came from two sides. One was from CCHIT, saying, "Well I don't know if we can test that. It’s difficult; it’s subjective." And the vendors themselves were leery about including usability in the testing criteria. We continued to harp on it over the years, and finally CCHIT added it to their test script. They have a whole protocol now in place to test it. My understanding is the product is assigned a rating from 1 to 5 in terms of its usability.


Implementation of a health IT product is one of the most challenging and time-consuming stages of the health IT life cycle. A database of user experience, including identification of new safety risks, would provide organizations with suggestions for ways to improve the efficiency, effectiveness, and safety of implementation. These organizations will benefit from lessons learned by similar health care organizations about how to improve the performance and safety of their existing systems. The information may include ideas on effective product upgrade practices or applications, safety risks associated with a specific vendor’s product, warnings about data entry combinations that could result in erroneous clinical orders, or suggestions for ways to enhance user training.

Health professionals responsible for setting up and managing clinical information exchange will benefit from data on user experiences. The integration of health IT products with other clinical IT products (e.g., picture archiving and communications systems, pharmacy systems) and devices is complex and requires a significant amount of upfront configuration to ensure that data are not lost or corrupted along any phase of the information exchange. A user experience database will help identify integration-related problems to avoid.

Health care organizations should also find comparative user experiences useful for workflow redesigns and new personnel infrastructures needed in a health IT–enabled health care system. In addition, organizations need to plan for the appropriate time and approaches to decommission an existing system. This can be facilitated through review of reports submitted by other health care organizations at similar stages of their health IT products’ life cycles. Many of these organizations will likely be working on the process of replacing their products with new products and should have information to share about what has worked well for them and what problems they encountered during the transition.

Ideally, this information could be parsed so that consumers of the data could easily find information relevant to their particular questions or contexts. In addition, an open and transparent approach to sharing user experience could create organic learning communities among different categories of professionals within the health care industry.
What's not to love there?
Multi-Modal Approach to Characterizing and Comparing User Experiences

No single measure can meet all needs for comparative user experiences. Different audiences—clinicians, vendors, implementers (e.g., chief medical informatics officer [CMIO]), organizational decision makers (e.g., board of directors)—will have different needs. Furthermore, no single measure can fully capture the strengths and weaknesses of a particular health IT product. Thus, multiple modalities of acquiring and reporting user experiences are recommended, including

  • in vitro “flight simulator” laboratory evaluation of test scenarios; 
  • in vivo point-of-use reporting; 
  • data mining of use patterns; 
  • third party–administered user surveys;
  • direct user-to-public reporting; and 
  • a formalized system of hazards reporting (see Table 1).
Some modalities will have more scientific rigor, such as formal surveys and “flight simulator” lab testing, while other modalities will be less scientific, but will provide an opportunity for input from a large number of users and hold the potential for discovery of unanticipated user experiences...
Stipulated. Nice report (PDF). Read the whole thing.
At present, some vendors prohibit users from sharing screenshots and otherwise effectively communicating with others about a problem with an EHR. There is currently no place for health IT users to share publicly the experiences they have had with their health IT products. However, even if a place were designated and developed a following, its use would be limited because of contractual prohibitions on sharing screenshots.
A voluntary multi-modal, multi-stakeholder approach to health IT safety reporting and communication may deter a more heavy-handed approach to regulating health IT vendors. Regulation by the Food and Drug Administration is a serious possibility on the horizon if improvement in health IT safety and usability is not achieved through a voluntary process...
The goal of collecting and publicly reporting user experiences is to improve products across the industry and promote safety. After a decade of development and experience, EHRs and other health IT products have not advanced sufficiently; nor have they been adopted widely and enthusiastically, in step with other consumer products such as smartphones and iPads. Some have referred to this as a market failure (Mandl and Kohane, 2012). With EHRs, unlike other consumer product areas, there has been little opportunity for cross-vendor comparison, which has stifled the evolution of this technology.

We believe that the development of meaningful metrics of comparative user experiences in domains such as cognitive workload, accuracy of decision making, time required to perform tasks, and implementation experience will support the purchaser in making wise decisions when choosing a health IT product, and will simultaneously provide the vendor community with incentives to improve products.
Finally, we believe that public reporting of user experiences, in a variety of forums, is essential to leveraging the power of the user and purchaser to effect change.
More thoughts on this shortly.


My Fetch FTP upgrade.

One, two, done, use...


Lucie wandered into my life in 1999, a red chow-huskie mix stray (one of my 3 rescue dogs and 1 rescue cat). She died slowly and peacefully in my arms last night. I shot that pic in the back yard earlier this year. I am heartbroken. It was a long night.

U Cal Davis HIE developments

California Health eQuality (CHeQ)

On May 16, 2012, the California Health and Human Services Agency announced that IPHI had been selected to implement California's Health Information Exchange (HIE) programs under the State’s Cooperative Grant Agreement with the federal Office of the National Coordinator for Health Information Technology (ONC).

The administrative transition from Cal eConnect to IPHI is now complete; we appreciate your patience as we build out the California Health eQuality (CHeQ) website. In the interim, information on the status of funding opportunities begun during the transition is now available; new opportunities will be posted in the near future...


I relentlessly search out HIT-related stuff every day and night. I should legally change my name to "Boolean Substring()".

I've increasingly noted, with a bit of frustration and irritation, that when I search out items via Google, some of the top results are links to my own REC blog!

Sept 28th update:

The comments section of this post just in oughta be interesting.


More to come...
