
Tuesday, June 28, 2011

Meaningful Manual Labor

A few EHR vendors have begun to distribute detailed Meaningful Use upgrade instruction manuals (via PDFs -- I'm becoming rather adept with the comb binder). A couple are very good, essentially A to Z workflow instructions (replete with "money field" screen shots) for hitting all of the Stage One measures.

Now all we need are about 30 more of these, given that we at the HealthInsight REC are "vendor neutral" and have to support providers on close to three dozen ONC certified platforms.
___

E-prescribing is a really big deal at HHS. We have the Meaningful Use Core Measure piece, PQRS, and the MIPPA piece. e-Rx purports to bring a host of improvement and cost-reduction benefits.

Well...

Cautionary reporting from Bloomberg:
“...Providers appear to be rapidly adopting electronic health records and computerized prescribing, and one of the major anticipated benefits is expected to be through medication-error reduction,” the researchers wrote. “Many of these benefits will not be realized if the electronic prescribing applications are not mature and either do not catch or even cause new medication errors.”

Wrong Dose
The most common error was the omission of key information, such as the dose of medicine and how long or how many times a day it should be taken, the researchers said. Other issues included improper abbreviations, conflicting information about how or when to take the drug and clinical errors in the choice or use of the treatment, the researchers said.

“With more than 3 billion prescriptions written annually in the U.S. alone, this could amount to 385 million errors each year, with 128 million of them having the potential to cause patient harm,” said researcher Jeffrey Rothschild, from the Center for Patient Safety Research and Practice at Brigham and Women’s Hospital in Boston...
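The study's extrapolation is easy to sanity-check. A quick back-of-the-envelope sketch, using only the figures quoted above (the rates are implied, not stated in the article):

```python
# Back-of-the-envelope check of the extrapolation quoted above.
TOTAL_RX = 3_000_000_000   # ~3 billion prescriptions written annually in the U.S.
ERRORS = 385_000_000       # projected prescribing errors per year
HARMFUL = 128_000_000      # errors with the potential to cause patient harm

error_rate = ERRORS / TOTAL_RX   # implied per-prescription error rate
harm_share = HARMFUL / ERRORS    # share of errors that could cause harm

print(f"error rate: {error_rate:.1%}")             # error rate: 12.8%
print(f"harmful share of errors: {harm_share:.1%}")  # harmful share of errors: 33.2%
```

In other words, the projection assumes roughly one error per eight prescriptions, with about a third of those errors potentially harmful.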

No shortage of improvement work to tend to. "Usability," anyone? And that's beyond workflow improvement more generally.
___

A QUICK DIVERSION:
FEDERAL CONTRACT GS23F8127H_HHSP233201100252G

My last couple of posts have dwelled at some length on ePHI privacy and security issues. Check this out: Beltway veteran KPMG was just awarded a $9,179,011 contract to administer ARRA-HIPAA audits for the HHS OCR.
The protocol and audit program performance requested under this contract shall assist OCR in operating an audit program that effectively implements the statutory requirement to audit covered entity and business associate compliance with the HIPAA privacy and security standards as amended by ARRA...

...The government anticipates completing 150 audits of entities varying in size and scope...

$9,179,011 divided by 150 = $61,193.41 per audit.
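For the record, the per-audit arithmetic checks out:

```python
# Per-audit cost implied by the KPMG contract figures above.
contract_total = 9_179_011   # contract award, USD
audits = 150                 # anticipated number of audits

per_audit = contract_total / audits
print(f"${per_audit:,.2f} per audit")   # $61,193.41 per audit
```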

I'm in the wrong business.
___

YET MORE ON THAT WHOLE "USABILITY" THING
NIST, ONC plan measures, testing to improve health IT usability
Posted on April 25, 2011 by Theo Mandel, Ph.D.

I’ve worked in healthcare usability for a long time and with the impetus to move all of healthcare to electronic platforms, there have been many, many unusable implementations of EHRs and EMRs.

The National Institute for Standards and Technology (NIST) and Office of the National Coordinator for Health IT (ONC) are working to provide guidelines for healthcare software usability...

au contraire

The following was submitted to ONC during their public review and comment period for their Usability certification proposal.
Usability Testing for EHRs is a Bad Idea
May 6, 2011 – by Richard M. Low, MD, CEO of Praxis® Electronic Medical Records

As a physician and as the developer of Praxis EMR, a product that has consistently ranked #1 in user satisfaction, I am concerned about the government's plans for EHR usability testing.

The EHR Usability Testing Plan you have submitted for public comment contains a fundamental flaw. The problem is that you have equated what usability means to doctors with what ease-of-use means to software developers. You begin with an observation that is on target (usability has been the #1 barrier to EHR adoption) and proceed with a well-intentioned but unwise goal (to use the power of government to improve the usability of EHRs). From there, you propose a testing approach that is based on several misconceptions.

Up till now, you deserve praise for your agency's excellent work and for your transparent decision-making process, especially considering the extreme pressures under which the agency is working. Indeed, you have done precisely what government should do: help create order out of industry chaos by setting standards; obtain critical information for public health purposes; and protect the public in terms of information security.

Equally impressive is what you decided not to do, which is to delve into product workability. This decision demonstrated an understanding on your part that the industry is in full innovation and development mode, and that rulings of this sort would harm rather than help the development of excellence in Electronic Medical Records.

It would be a shame to tarnish your exceptional record thus far by venturing into the dubious area of usability testing.

No doubt the move in this direction is prompted by the large number of unusable EHRs in the market and by the great physician dissatisfaction with many of the solutions out there. Further, computer experts have assured you that usability testing is not only possible, but an effective solution to the problem. And so the government is now seriously considering a program that will rate EHR usability.

Those of us who have been in the industry long enough remember CCHIT's attempt to do the same thing. The ratings that it issued were highly suspect, with doctors giving damning reviews to EHRs that received CCHIT's seals of approval.

To give credit where credit is due, you begin by defining usability in terms that we can all agree with:

The effectiveness, efficiency, and satisfaction with which the intended users can achieve their tasks in the intended context of product use.

From there, however, you create a testing approach that is based on several misconceptions.

First, you assume that an EHR is a "computer thing." It is not. It is a medical tool. To think of an EHR as a "computer thing" is akin to the early days of the printing press—500 years ago—when a book buyer would ask a printer for his opinion on a tome. The printer would then examine the cover, the parchment, and the binding, and pass judgment on the book's worth. Today, we all know not to judge a book by its cover. Nonetheless, by advocating usability testing, the government is creating an analogous situation; when it comes to EHRs, usability experts are the "printers" of the Information Age.

And because of the assumption of EHRs as computer things, you arrive at your second misconception, which is that EHRs should reduce their learning-curves by following the lead of commonly-used software programs. In other words, since most people who have used a computer are familiar with how pulldown menus and templates work, EHRs should go down this well-travelled road. Doing so would make the use of EHRs intuitive, or so the thinking goes.

In effect, you are applying computer thinking to a medical tool. This is a bad idea.

In this short paper, I would like to share with you my experience with EHR usability and dispel the assumptions that underlie the proposed usability testing of EHRs. My views are based on what I have learned from the development of Praxis for over 20 years, and from studying hundreds of clients who use our EHR. In addition, these views are informed by my 20 years as a Board Certified Internist and by my experiences in the areas of Emergency Medicine and Primary Care.


Templates Do Not Work
I am rather fond of Robert Frost, and to paraphrase a line from one of his most famous poems, at Praxis, we decided to go down the road less travelled by.

The first EMRs came out when I was working as a physician in Los Angeles. As an amateur computer programmer, I have always been intrigued by computers and their potential to dramatically improve the practice of medicine. I immediately came to the conclusion, however, that these first EMRs were headed down the wrong road. They were all based on templates, replete with drop-down menus and pick lists.

It has now been widely reported in medical journals that the use of templates in EMRs poses several challenges. First, they slow down the doctor by requiring him or her to click through menus and find the right template for the case at hand. Second, they lead to increased medical and coding errors; this has been amply documented and you may refer to articles such as Hidden Malpractice Dangers in EMRs in the Medscape Business of Medicine, The Problems with EHRs and Coding in Modern Medicine, and Compliance Risks Grow with Electronic Medical Record Systems in AIS Compliance, to list just a few from the growing body of literature on just this one issue. Third and perhaps most important, templates restrict the way doctors think through a medical case; see, for instance, Dr. Jerome Groopman's book, How Doctors Think and two articles from the New England Journal of Medicine, Off the Record—Avoiding the Pitfalls of Going Electronic and Can Electronic Clinical Documentation Help Prevent Diagnostic Errors?

My disappointment with the early EMRs led me to found Praxis Electronic Medical Records. I assembled a team of developers, explained what an EMR should do, and gave them one prohibition: templates were not to be used.

I also gave them two guiding principles: 1) the noise to information ratio must be optimal, letting the user teach the program which is which, not the other way around; and 2) the program should never make any attempt to "practice medicine," but rather strive to follow the doctor's lead.

The Road Less Travelled By
Praxis EMR has been on the market for over 20 years now. It has consistently ranked #1 in user satisfaction surveys conducted by the American Academy of Family Physicians and by KLAS.

However, I suspect we would do poorly in usability under your proposed testing approach.

Praxis EMR does not look like other EMRs. It is not template-based and it does not utilize the logic of computer science. Rather, we use the logic of medical thinking as our guiding principle.**

Doctors do not all think through a medical case the same way. In fact, there is a certain art to the practice of medicine. Doctors, therefore, must be given full liberty to express themselves when they document a medical case. An EHR, then, should give them the flexibility to use their own words and to input information in the order they prefer.

Praxis gives them this freedom. The heart of Praxis is a neural network engine that learns directly from the user. As a doctor inputs his or her medical concepts, Praxis remembers them and presents them to the doctor the next time a similar case comes up. As Praxis presents these concepts, the doctor can edit them if the situation calls for it. Praxis then remembers this revised concept, as well. Over time, the need for editing is drastically reduced and soon a doctor is documenting cases in mere seconds.

In the proposed testing environment, however, Praxis would be just beginning to learn from the tester and its full potential would not be realized.

The Proper Measure
I submit to you that the most reliable measure of an EHR's usability is user satisfaction. Of course, user satisfaction is not the same as usability, but it cannot be denied that the two are directly linked. Usability leads to user satisfaction. Therefore, instead of wrestling with the proper way to measure usability, the government would better serve the medical profession by measuring user satisfaction based on the real-world experiences of doctors using different EMR products. This has already been done successfully by medical associations such as the American Academy of Family Physicians and by research firms like KLAS.

I understand the pressures you must be under to mitigate the terrible press that EMRs have received in general. But usability testing is not the solution. If you go down this path, you will pave the road to mediocrity. Fantastic, innovative ideas presently percolating in the EMR community would likely never see the light of day. The industry would kowtow to your lead, shelving innovative ideas to focus solely on scoring the highest on your usability tests. We have seen this happen before with CCHIT.

It is not the government's role to determine EHR usability. Presently, there are innovative EHRs in the market receiving good press and high marks on usability. Do not burden them with processes and requirements that will lead them to mediocrity. Rather, let the real experts, the medical professionals who use EHRs as part of their important everyday work, voice their preferences in the market.

Please trust the market!

A lot to ponder. Including, tangentially, apropos of ** "the logic of medical thinking as our guiding principle..."

See also my prior "cognitive traps."

___

JULY 17th UPDATE
July 16, 2011
Seeing Promise and Peril in Digital Records
By STEVE LOHR (NY TIMES)

TECHNICAL standards may seem arcane, but they are often powerful tools of economic development and social welfare. They can be essential building blocks for innovation and new industries. The basic software standards for the Web are striking proof.

Safety is also a potent argument for standards. History abounds with telling examples, like the Baltimore fire of 1904. That inferno blazed for 30 hours, destroying more than 1,500 buildings across 70 city blocks. Fire engines from other cities came to help, but could not. Their hose couplings — each a different size — did not fit the Baltimore fire hydrants. Until then, cities saw little reason to adopt a standard size coupling, and local equipment manufacturers did not want competition. So competing interests undermined the usefulness of, and investment in, the technology of the day.

Today, the matter of standards for electronic health records is raising similar concerns, prompting heated debate in recent meetings of representatives from medicine, industry, academia and government. The stakes, they say, could scarcely be higher. They agree that, when well designed and wisely used, digital records can deliver the power of better information to medicine, improving care and curbing costs. But computer forms, they add, can also be difficult to use, cluttered and distracting, causing more harm than good in health care.

“This is an issue that potentially affects the health and safety of every American,” says Ben Shneiderman, a computer scientist at the University of Maryland.

The controversy points to the delicate balancing of interests involved when creating technical standards that inherently limit some design choices yet try to keep the door open to innovation. It also raises the question of the appropriate role for government in devising such technology requirements.

At issue is the Obama administration’s plan to develop standards to measure how effective and easy digital patient records are to use — applying a research discipline known as human-computer interaction or human factors. (The International Organization for Standardization, which is based in Geneva, defines the usability of a product by three attributes: “effectiveness, efficiency and satisfaction.”)

The need to improve the usability of computerized records is clearly evident — and has been for some time. A 2009 study by the National Research Council, an arm of the National Academy of Sciences, found that electronic health record systems were often poorly designed, and so could “increase the chance of error, add to rather than reduce work flow, and compound the frustrations of doing the required tasks.”...

Interesting. See also
So Many EHRs. So Expensive.
By MARGALIT GUR-ARIE

There are currently 386 software packages certified by an ONC approved certification body as ambulatory Complete EHRs, which means that the software should allow the user to fulfill all Meaningful Use requirements and possibly qualify the proud owner for all sorts of CMS incentives. There are 204 more software packages which are certified as ambulatory EHR Modules, and a proper combination of these packages could result in a Complete product, which if used appropriately could lead to the same fortuitous results...

More to come...
