
Wednesday, April 30, 2014

$22.9 billion in MU payments through March 2014

Data just released. Billions with a "B":

Notice the aggregated registrant counts for 2011 and 2012, after which they give only monthly registrations for 2013. Hmmm... 123,648 Medicare EPs in 2011, and 113,658 in 2012, for example. Maybe they just don't want you to easily see the dramatic fall-off in 2013 registrants. Only 54,062 Medicare docs registered for MU in 2013, less than half the 2012 total.

In fairness, there's been an uptick thus far in 2014: 19,237 through Q1. Last Chance for Romance, I guess.
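For what it's worth, the fall-off arithmetic is easy to check. A quick Python sketch, using the CMS counts quoted above:

```python
# CMS Medicare EP Meaningful Use registration counts quoted above
registrations = {2011: 123_648, 2012: 113_658, 2013: 54_062, "2014_Q1": 19_237}

drop = 1 - registrations[2013] / registrations[2012]
print(f"2013 vs. 2012 registrations: {drop:.1%} decline")  # → 52.4% decline
print(registrations[2013] < registrations[2012] / 2)       # → True: "less than half"
```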

Below, another interesting graphic, this one from a HITPC presentation.

Early adopter attrition. Pretty severe, I would think. Twenty-five percent of 2011 attesters bailed in 2012 after picking up their Year 1 Stage 1 money? What cannot be clear at this point is how many of the 2011 cohort who made it through 2013 will stick around for the relative chump change of Stage 2 Year 1.

Also unclear from the above is the Medicare vs. Medicaid mix of the dropouts. Are a significant proportion of them the Free Money "A/I/U" registrants? On the Medicaid side, recall, an EP or EH can take "a year off."

More to come...

Tuesday, April 29, 2014

You want some cheese with that whine?

Continuing with a recent theme  (re: my April 25th, April 22nd, and April 15th posts).

Is it time to “damn the mandates” and forget meaningful use?
Jennifer Bresnick, April 29, 2014
Even as the healthcare industry marches dutifully into Stage 2 of Meaningful Use, there are still plenty of physicians that have not yet accepted the requirements put forth by CMS in the EHR Incentive Programs.  Dr.  Daniel F. Craviotto Jr., an orthopedic surgeon in Santa Barbara, California, took to the Wall Street Journal this week to protest the restrictive chains of EHR adoption, quality penalties, shrinking Medicare reimbursements, and bureaucratic red tape that prevent a physician from focusing on what’s really important: engaging with and treating patients.
“In my 23 years as a practicing physician, I’ve learned that the only thing that matters is the doctor-patient relationship,” Craviotto writes. “I acknowledge that there is a problem with the rising cost of health care, but there is also a problem when the individual physician in the trenches does not have a voice in the debate and is being told what to do and how to do it. When do we say damn the mandates and requirements from bureaucrats who are not in the healing profession? When do we stand up and say we are not going to take it anymore?”
Aaron Carroll has a beaut of a response in The Incidental Economist.

Once more unto the breach...
April 29, 2014 at 10:15 am  Aaron Carroll
Austin has been working on his trolling skills. He’s alerted me to another op-ed in the WSJ written by an orthopedic surgeon threatening to walk away from it all.

Look, I’m not suggesting that we limit anyone’s free speech in any way. I’m not suggesting that we shouldn’t hear from unhappy doctors. But I’m going to offer them a bit of (unsolicited) advice. You’re starting to be the docs who cried wolf.

In the interest of providing some media strategy, I’m going to go through this bit by bit. Let’s begin:

In my 23 years as a practicing physician, I’ve learned that the only thing that matters is the doctor-patient relationship. How we interact and treat our patients is the practice of medicine. I acknowledge that there is a problem with the rising cost of health care, but there is also a problem when the individual physician in the trenches does not have a voice in the debate and is being told what to do and how to do it.
OK, right off the bat, you’re claiming that you have no voice in the debate as you are being published in one of the most read op-ed pages in the country. You know who doesn’t have a voice? The 300-plus million people who don’t get to have their thoughts heard in the WSJ...
Ouch. Read the whole thing. An utter smackdown.
Across the country, doctors waste precious time filling in unnecessary electronic-record fields just to satisfy a regulatory measure. I personally spend two hours a day dictating and documenting electronic health records just so I can be paid and not face a government audit. Is that the best use of time for a highly trained surgical specialist?
I’m totally with you here. I think this kind of thing sucks. But do you really think that the average American doesn’t spend a whole lot of time doing things at work that they don’t enjoy? Do you really think lawyers don’t hate billing? Do you really think educators don’t hate teaching to tests and grading essays? Do you really think that small businessmen don’t hate regulations? I think many, if not most Americans, will read this and say, “Wait a minute. You only have to do two hours of crap a day? Lucky ducky!”
Again, read all of it.


Just in. Relating to the environmental factors of health:
Supreme Court Upholds Air Pollution Regulation
The Supreme Court has given the Environmental Protection Agency an important victory in its effort to reduce power plant pollution that contributes to unhealthy air in neighboring states.

The court's 6-2 decision Tuesday means that a rule adopted by EPA in 2011 to limit emissions from plants in more than two-dozen Midwestern and Southern states can take effect. The pollution drifts into the air above states along the Atlantic Coast and the EPA has struggled to devise a way to control it.
Power companies and several states sued to block the rule from taking effect, and a federal appeals court in Washington agreed with them in 2012.

Justice Ruth Bader Ginsburg wrote the court's majority opinion. Justices Antonin Scalia and Clarence Thomas dissented.
Wow. 6-2. That rarely happens at SCOTUS these days. The smell in the air today is that of climate-denier wingnut hair on fire.




"Interactive." Move your mouse, stylus, or finger around the map for state-by-state data.


I use the free open source FileZilla FTP upload utility. Very handy. But, twice now I have slipped with my mouse and inadvertently wiped out all of my logins and passwords. Those "clear" menu items should either be moved elsewhere or guarded by a yes/no "are you sure?" confirmation pop-up.

More to come...

Monday, April 28, 2014


(Not a real logo, just an allusive BobbyG Photoshop quickie)
In light of my last couple of posts dealing with the continuing issues of EHR dissatisfaction, there's an interesting post on THCB.
What an EMR Built on Twitter Would Look Like
WILD PREDICTION: It won’t be long before every patient has a Twitter feed, and doctors subscribe to them for real-time updates.

This is a time when the demands of being a physician are changing, and we need to leverage technology to maintain awareness of a huge number of patients. There is also increasing need for handoffs and communication between providers.

Here’s the bottom line: how can we improve technology when doctors seem so resistant? They are not happy with their EMRs, and rightly so, because they were built to do too much for too many.

Current system is inefficient
The EMR has become essential for documentation, billing, medical reasoning, and communication, among other things. Currently, documentation is built on a system of daily progress notes. If I consult a cardiologist about a case, he needs to go through each note, containing narratives, laboratory values, vital signs, and physical exams.

A patient with a seven-day hospital stay may have twenty notes that need synthesis to put together the story–this can take hours per patient!

In an age where more providers are involved in a patient’s care (whether due to duty hour restrictions, or the increasing presence of specialists for every problem), this inefficiency is not acceptable...

One THCB commenter noted a company proposing to do this very thing.

From their FAQs:
  1. Is Medyear secure?
    Yes, very. Medyear is HIPAA-compliant and would not be able to accept your clinical records in the first place if it did not meet strict government regulations (HIPAA) and standards (Blue Button) for handling healthcare data. Moreover, we apply very powerful database technologies that allow us to secure information at a granular level. No system is perfectly secure, but we obsess over security so you don't have to.
  2. Is my information private?
    It depends on you. You can share your information however you want. You might share with a friend, a doctor, an insurer, a stranger, or with science. You might share just one small part of your health record, or the entire thing. You might share for one hour, or one year. Your privacy is your prerogative, and it's up to you to decide.
  3. Why is the logo a snowflake?
    We are all unique as individuals, like snowflakes. So we know our body and our lives best. It is up to us to take charge of our healthcare destiny. Medyear is a people's movement to claim our uniqueness and let the healthcare system evolve around us, to suit our unique needs.
  4. How is Medyear related to health reform?
    Many of the key changes in the law and regulations that now make Medyear possible did not exist a few years ago. The government has played an important role by implementing policies like HIPAA-HITECH, Blue Button, Meaningful Use, and Affordable Care Act. As such, we embrace government leadership.
  5. What's in it for patients?
    Typically people will share health information for one of two reasons: to give help, or to get help. Perhaps your sharing can help a family member manage a chronic condition because you have had it yourself. Or perhaps you are the one who needs to share information frequently, in order to get the best medical care possible. Whatever the reason, empathy powers when we share to help, and empathy powers the help we might also someday receive.
  6. What's in it for doctors?
    Medyear allows patients and healthcare professionals to collaborate directly and privately through Consults. The Consult works a lot like other types of encounters (phone calls, office visits, emails). But instead of recapping your health issues, fumbling with paper records, or expressing concerns in one sitting, you can simply share the information you've already collected. This makes everyone's life easier.
  7. What's in it for scientists?
    Scientists working at the very bleeding edge of research need LOTS of data to make important discoveries that lead to the life-saving drugs or procedures of the future. But getting this valuable patient information is not easy, and often when data is obtained it is without patients knowing. With many people perhaps we should just let people share their data voluntarily.
  8. Where can I get my records?
    In late February, the US Government published a directory, known as the Blue Button Connector, of the healthcare organizations that currently adopt Blue Button. Over time, it is estimated that over 500 organizations - from small clinics, to lab companies, to large hospitals, to the largest insurers - will adopt Blue Button and be listed in the registry. If your healthcare provider is listed, you simply provide them with your Medyear address and your records are securely transmitted into your private Medyear account.
Intriguing stuff. You have to applaud people for attempting viable, disruptive, value-adding things. This is not materially different from the "Health Record Bank" idea I've written of before (scroll down in the linked page).

I just have a couple of concerns. As I noted in a comment (fixed my typos):
For both “Covered Entities” and (now with the Omnibus Rule) their “Business Associates,” EVERY time “protected health information” (PHI – specific legal definition) is created, viewed, updated, transmitted, or deleted, there must be a date-time-stamped transaction log of the event identifying the authorized person who “created, viewed, updated, transmitted, or deleted” the PHI.

Moreover, once an episode of care note is finished and “locked” for billing, it becomes a legal record, “updates” to which can only be done via appended addenda (and those too must be HIPAA-logged).

No small undertaking to make an app such as the one proffered here fully HIPAA-compliant. Yes, some of the PHI “tweets” are their own “transaction log entries,” but that is likely to not be the entire story. Anyone developing or using such an app had better have done their legal due diligence.
Medyear may claim to be "HIPAA compliant," but I'd want to review the gamut of their Business Associate documentation (45 CFR § 164.308 et seq). The audit log requirements here would span multiple individuals and organizations. Nor would I sanguinely assume that clinicians using a service like this are uniformly in compliance with their own Covered Entities' HIPAA policies. And, we must recall, "privacy" (§ 164.500 et seq) is separate from "security" (§ 164.302 et seq); ePHI privacy is subject to the potentially HIPAA-trumping laws and regulations of every state in the U.S. (not to mention the international privacy ramifications).
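To make the audit-log requirement concrete, here's a minimal sketch (hypothetical names, nothing like a real compliance implementation) of the kind of append-only, date-time-stamped transaction log HIPAA contemplates: every create/view/update/transmit/delete of PHI gets an entry identifying the authorized actor.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The five event types HIPAA logging must capture
ACTIONS = {"create", "view", "update", "transmit", "delete"}

@dataclass(frozen=True)  # frozen: log entries are immutable once written
class AuditEntry:
    user_id: str        # the authorized person acting on the PHI
    patient_id: str     # whose record was touched
    action: str         # one of ACTIONS
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.action not in ACTIONS:
            raise ValueError(f"unloggable action type: {self.action!r}")

audit_log: list[AuditEntry] = []

def log_phi_event(user_id: str, patient_id: str, action: str) -> AuditEntry:
    """Record one create/view/update/transmit/delete event on PHI."""
    entry = AuditEntry(user_id, patient_id, action)
    audit_log.append(entry)  # append-only; entries are never edited in place
    return entry
```

The point of the sketch is the shape of the obligation, not the mechanics: immutable, timestamped, actor-attributed entries for every touch of the data, across every individual and organization in the chain.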

Moreover, being able to connect and merge various individual time-sequential, subject-disparate medical "tweets" into efficient, necessary, synthesized clinical "views" is likely to be a significant RDBMS challenge, no different, really, from those at the heart of traditional EHRs. Anyone routinely using Twitter knows that tweets fly by at warp speed. Medyear clients (in particular, clinicians) will need near-instant access to reassembled "old" pertinent information "100-500 screen-scrolls down" at the point of care. And might there not be the equivalent of random "inbox overload," with its attendant medico-legal liability concerns? I'm just sure my Primary wants to be hammered 24/7 with "medical tweets" ranging from the consequential to the trivial.
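To illustrate the reassembly problem: even a toy version of turning a time-sequential feed into per-problem clinical "views" looks something like the following (field names and entries entirely invented; a real system would face this at scale, across sources).

```python
from collections import defaultdict
from operator import itemgetter

# A made-up "medical tweet" feed: time-sequential, subject-disparate
feed = [
    {"ts": "2014-04-01T09:00", "topic": "bp",  "text": "BP 142/90"},
    {"ts": "2014-04-22T08:30", "topic": "a1c", "text": "A1c 7.9%"},
    {"ts": "2014-04-25T10:15", "topic": "bp",  "text": "BP 128/82"},
]

def clinical_view(feed):
    """Group feed items by clinical topic, newest first within each topic."""
    by_topic = defaultdict(list)
    for item in sorted(feed, key=itemgetter("ts"), reverse=True):
        by_topic[item["topic"]].append(item["text"])
    return dict(by_topic)

view = clinical_view(feed)
# view["bp"] == ["BP 128/82", "BP 142/90"]
```

Trivial here; not trivial when the "old" pertinent item is hundreds of screen-scrolls down and the clinician needs it at the point of care.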

Nonetheless, let's wish these folks well. Whatever helps.

I couldn't resist one other THCB comment:

They appear to be privately held. Time for a VC-assisted IPO?

More to come...

Friday, April 25, 2014

"Maybe we should cancel Stage 2 and Stage 3"

The critics are again piling on.
In the early days of EMRs, the pioneers like Intermountain, Vanderbilt, Duke, and Partners differentiated themselves by developing their own proprietary EMRs and then using them in a meaningful way, without any financial incentive except their own to do so. Meaningful Use Stage 1 served a valuable purpose; it jump-started the adoption of commercially supported EMRs in an industry that needed jump-starting. Maybe we should cancel Stage 2 and Stage 3, spend some of that money to seed true innovation (think DARPA for healthcare IT), and let survival of the fittest play a role in deciding which organizations will utilize their EMRs, and subsequent data, most effectively to improve healthcare.
From "Is It Time To Eliminate Meaningful Use?"

"Spend some of that money to seed true innovation"?

Seriously? What money? In case you've not been paying attention, the bulk of the MU money has already been dispensed, with the largest proportion of the relatively little that remains earmarked for late-adopting Stage 1 participants. You're not going to "seed" anything of substance on a national scale with remaining MU funds.

Do people even listen to what they're saying?


Process measures like Meaningful Use, CQMs, PQRS, etc, as I've noted before, are tangential proxies for effectiveness in health care. It's assumed that if you are doing and reporting on X, Y, Z, A, B, C, D, E, and F, improved outcomes will eventually follow.

How about if we lay on a concerted effort to measure actual outcomes directly? And, in that regard, a1c (the gamut of lab results, actually), BP, BMI, etc are themselves still proxies, not "outcomes" in terms of end-results health.

Virtually every dx of a suboptimal medical/health condition has an associated prognosis and tx plan (the "P" component of the "SOAP") aimed at improved outcome/resolution (even if it's sadly limited to the palliative for the fatal dx's). We should be mapping realistic interim and end-state "outcomes" goals to every dx. Some will be simple, others maddeningly complex and often problematic. But, without them, we will simply continue to argue endlessly and fruitlessly about health care effectiveness and "value."
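A sketch of what "mapping outcomes goals to every dx" might look like as a simple data structure (the ICD-10 codes and targets below are illustrative only, not clinical guidance):

```python
# Hypothetical dx-to-outcome-goal map: each active diagnosis carries
# explicit interim and end-state targets
outcome_goals = {
    "E11.9": {  # type 2 diabetes
        "interim": "A1c < 8.0% within 6 months",
        "end_state": "A1c < 7.0% sustained for 12 months",
    },
    "I10": {    # essential hypertension
        "interim": "BP < 140/90 within 3 months",
        "end_state": "BP < 130/80 sustained",
    },
}

def goals_for(active_dx: list[str]) -> dict[str, dict]:
    """Return the outcome goals mapped to a patient's Active dx list."""
    return {dx: outcome_goals[dx] for dx in active_dx if dx in outcome_goals}
```

The hard part, of course, is not the lookup table; it's agreeing on the operational definitions that go in it, dx by dx.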

Time to seriously get off the dime.

AHRQ website, on "outcomes" -
What is outcomes research?
Outcomes research seeks to understand the end results of particular health care practices and interventions. End results include effects that people experience and care about, such as change in the ability to function. In particular, for individuals with chronic conditions—where cure is not always possible—end results include quality of life as well as mortality. By linking the care people get to the outcomes they experience, outcomes research has become the key to developing better ways to monitor and improve the quality of care. Supporting improvements in health outcomes is a strategic goal of the Agency for Healthcare Research and Quality (AHRQ, formerly the Agency for Health Care Policy and Research).

The urgent need for outcomes research was highlighted in the early 1980s, when researchers discovered that "geography is destiny." Time and again, studies documented that medical practices as commonplace as hysterectomy and hernia repair were performed much more frequently in some areas than in others, even when there were no differences in the underlying rates of disease. Furthermore, there was often no information about the end results for the patients who received a particular procedure, and few comparative studies to show which interventions were most effective. These findings challenged researchers, clinicians, and health systems leaders to develop new tools to assess the impact of health care services...

Measuring Outcomes
Historically, clinicians have relied primarily on traditional biomedical measures, such as the results of laboratory tests, to determine whether a health intervention is necessary and whether it is successful. Researchers have discovered, however, that when they use only these measures, they miss many of the outcomes that matter most to patients. Hence, outcomes research also measures how people function and their experiences with care...

Future Directions
No longer just the domain of a small cadre of researchers, outcomes research has altered the culture of clinical practice and health care research by changing how we assess the end results of health care services. In doing so, it has provided the foundation for measuring the quality of care. The results of AHRQ outcomes research are becoming part of the "report cards" that purchasers and consumers can use to assess the quality of care in health plans. For public programs such as Medicaid and Medicare, outcomes research provides policymakers with the tools to monitor and improve quality both in traditional settings and under managed care. Outcomes research is the key to knowing not only what quality of care we can achieve, but how we can achieve it.
OK. In the same vein, how about this, from Academy Health, HEALTH OUTCOMES RESEARCH: A PRIMER (pdf):
What is outcomes research?
Outcomes research studies the end results of medical care – the effect of the health care process on the health and well-being of patients and populations. It spans a broad spectrum of issues from studies evaluating the effectiveness of a particular medical or surgical procedure to examinations of the impact of insurance status or reimbursement policies on the outcomes of care. It also ranges from the development and use of tools to measure health status to analyses of the best way to disseminate the results of outcomes research to physicians or consumers to encourage behavior change.
The field of outcomes research emerged from a growing concern about which medical treatments work best and for whom. In large part because of its potential to address the interrelated issues of cost and quality of health care, public and private sector interest in outcomes research has grown dramatically in the past several years...
The Setting It Studies
Outcomes research evaluates the results of the health care process in the real-life world of the doctor’s office, hospital, health clinic and even the home. This contrasts with traditional randomized controlled studies, funded mainly through the National Institutes of Health, which test the success of treatments in controlled environments. These are called efficacy studies. Research in real-life settings is called effectiveness research.

The Health Status Measures It Uses
Traditionally, studies have measured health status, or health outcomes, in terms of physiological measurements – through laboratory test results, complication rates (e.g. infections) or death. These measures alone do not adequately capture health status. A patient’s functional status, well-being, and satisfaction with care must complement the traditional measures...
That was published in 1994, twenty years ago. What are we waiting for? While I don't underestimate the difficulties involved with establishing standardized "operational definitions" of outcome measures, it is not impossible. Surely adding uniform, basic quantitative progress/outcomes metrics to the "Active dx" lists that are now a requisite staple of certified EHRs is doable.

"Innovation," anyone?


We refer to the "SOAP" process, documented in the "SOAP note" now firmly in the center of the EHR.
  • Subjective;
  • Objective;
  • Assessment;
  • Plan.
The subjective and objective data (including those comprising the relevant aspects of FH, SH, PMH, Active Rx, Active dx, HPI, ROS, Vitals, Labs, PE) converge to underpin the physician's "assessment" (dx) and resultant "plan" (Rx/tx/px) for mitigating (or curing) the patient's current problem(s) and arriving at better health (the desired "outcome" from the patient POV).

As my HealthInsight supervisor Keith Parker (an astute, Harley-riding former Special Forces medic) always liked to admonish, there should be an explicit "E" (evaluation) at the end of the traditional SOAP model. My quickie Photoshop visualization of the process cycle:

In terms of the PDSA improvement model,
  • Plan;
  • Do;
  • Study;
  • Act,
the S, O, and A of "SOAPE" comprise the PDSA "Plan" phase (you plan based on the analytic aggregation and synthesis of your current data), the "P" (Plan) of SOAPE is the PDSA "Do" phase, the "E" of SOAPE is the PDSA "Study," and -- most often -- we then recursively revisit the "A" (assessment) phase of SOAPE. Did we hit the mark or not? If not, what next? That is the "Act" of PDSA.
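The correspondence, as a simple lookup table (my reading of the mapping above, nothing standard):

```python
# SOAPE-to-PDSA correspondence, as described in the post
soape_to_pdsa = {
    "S+O+A": "Plan",   # synthesis of current data drives the plan
    "P":     "Do",     # the treatment plan is what gets executed
    "E":     "Study",  # evaluation of the result against the goal
    "A'":    "Act",    # recursive revisit of the assessment
}
```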

Fundamental to experimental science broadly is the explicit statement of the empirical (quantitated) goal in the planning phase, answering the question "what will constitute a 'significant' improvement vis a vis the status quo?" You don't get to run an experiment and then arbitrarily decide whether you've "improved" things or not.

Maybe that's "the 'art' of medicine," but it's not science.

Doctors Should Be Paid for Outcomes. But Which Outcomes?

Should we be paid for outcomes?
This is often proposed, but I have trouble understanding it. Real outcomes are not blood pressure or blood sugar numbers; they are deaths, strokes, heart attacks, amputations, hospital-acquired infections and the like.

In today’s medicine-as-manufacturing paradigm, such events are seen as preventable and punishable.

Ironically, the U.S. insurance industry has no trouble recognizing “Acts of God” or “force majeure” as events beyond human control in spheres other than healthcare.

There is too little discussion about patients’ free choice or responsibility. Both in medical malpractice cases and in the healthcare debate, it appears that it is the doctor’s fault if the patient doesn’t get well.

If my diabetic patient doesn’t follow my advice, I must not have tried hard enough, the logic goes, so I should be penalized with a smaller paycheck.

The dark side of such a system is that doctors might cull such patients from their practices in self defense and not accept new ones...
Vik Khanna responds.


More to come...

Tuesday, April 22, 2014

"Electronic medical charts have become ground zero for deteriorating patient care"

Dr. Val Jones
For the past couple of years I’ve been working as a traveling physician in 13 states across the U.S. I chose to adopt the “locum tenens lifestyle” because I enjoy the challenge of working with diverse teams of peers and patient populations. I believe that this kind of work makes me a better doctor, as I am exposed to the widest possible array of technology, specialist experience, and diagnostic (and logistical) conundrums. During my down times I like to think about what I’ve learned so that I can try to make things better for my next group of patients.

This week I’ve been considering how in-patient doctoring has changed since I was in medical school. Unfortunately, my experience is that most of the changes have been for the worse. While we may have a larger variety of treatment options and better diagnostic capabilities, it seems that we have pursued them at the expense of the fundamentals of good patient care. What use is a radio-isotope-tagged red blood cell nuclear scan if we forget to stop giving aspirin to someone with a gastrointestinal bleed?

At the risk of infecting my readers with a feeling of helplessness and depressed mood, I’d like to discuss my findings in a series of blog posts. Today’s post is about why electronic medical charts have become ground zero for deteriorating patient care.

1. Medical notes are no longer used for effective communication, but for billing purposes. When I look back at the months of training I received at my alma mater regarding the proper structure of intelligent medical notes, I recall with nostalgia how beautiful they were. Each note was designed to present all the observed and collected data in a cohesive and logical format, justifying the physician’s assessment and treatment plan. Our impressions of the patient’s physical and mental condition, reasons for further testing, and our current thought processes regarding optimal treatments and follow up (including citation of scientific literature to justify the chosen course) were all crisply presented.

Nowadays, medical notes consist of randomly pre-populated check box data lifted from multiple author sources and vomited into a nonsensical monstrosity of a run-on sentence. It’s almost impossible to figure out what the physician makes of the patient or what she is planning to do. Occasional “free text” boxes can provide clues, when the provider has bothered to clarify. One needs to be a medical detective to piece together an assessment and plan these days. It’s both embarrassing and tragic… if you believe that the purpose of medical notes is effective communication. If their purpose is justifying third-party payer requirements, then maybe they are working just fine?

My own notes have been co-opted by the EMRs, so that when I get the chance to free-text some sensible content, it still forces gobbledygook in between. I can see why many of my peers have eventually “given up” on charting properly. No one (except coders and payers interested in denying billing claims) reads the notes anymore. The vicious cycle of unintelligible presentation drives people away from reading notes, and then those who write notes don’t bother to make them intelligent anymore. There is a “learned helplessness” that takes over medical charting. All of this could (I suppose) be forgiven if physicians reverted back to verbal handoffs and updates to other staff/peers caring for patients to solve this grave communication gap. Unfortunately, creating gobbledygook takes so much time that there is less old fashioned verbal communication than ever.

2. No one talks to each other anymore. I’m not sure if this is because of a general cultural shift away from oral communication to text-based, digital intermediaries (think zombie-like teens texting one another incessantly) or if it’s related to sheer time constraints. However, I am continually astonished by the lack of face-to-face or verbal communication going on in hospitals these days. When I first observed this phenomenon, I attributed it to the facility where I was working. However, experience has shown that this is an endemic problem in the entire healthcare system.

When you are overworked, it’s natural to take the path of least resistance – checking boxes and ordering consults in the EMR is easier than picking up a phone and constructing a coherent patient presentation to provide context for the specialist who is about to weigh in on disease management. Nursing orders are easier to enter into a computer system than actually walking over and explaining to him/her what you intend for the patient and why.

But these shortcuts do not save time in the long run. When a consultant is unfamiliar with the partial workup you’ve already completed, he will start from the beginning, with duplicate testing and all its associated expenses, risks, and rabbit trails. When a nurse doesn’t know that you’ve just changed the patient to “NPO” status (or for what reason) she may give him/her scheduled medications before noticing the change. When you haven’t explained to the physical therapists why it could be dangerous to get a patient out of bed due to a suspected DVT, the patient could die of a sudden pulmonary embolism. Depending upon computer screen updates for rapid changes in patient care plans is risky business. EMRs are poor substitutes for face-to-face communication...

3. It’s easy to be mindless with electronic orders. There’s something about the brain that can easily slip into “idle” mode when presented with pages of check boxes rather than a blank field requiring original input. I cannot count the number of times that I’ve received patients (from outside hospitals) with orders to continue medications that should have been stopped (or forgotten medications that were not on the list to be continued). In one case, for example, a patient with a very recent gastrointestinal bleed had aspirin listed in his current medication list. In another, the discharging physician forgot to list the antibiotic orders, and the patient had a partially-treated, life-threatening infection.

As I was copying the orders on these patients, I almost made the same mistakes. I was clicking through boxes in the pharmacy’s medication reconciliation records and accidentally approved continuation of aspirin (which I fortunately caught in time to cancel). It’s extremely unlikely that I would have hand-written an order for aspirin if I were handling the admission in the “old fashioned” paper-based manner. My brain had slipped into idle… my vigilance was compromised by the process.

In my view, the only communication problem that EMRs have solved is illegible handwriting. But trading poor handwriting for nonsensical digital vomit isn’t much of an advance. As far as streamlining orders and documentation is concerned, yes – ordering medications, tests, and procedures is much faster. But this speed doesn’t improve patient care any more than increasing the driving speed limit from 60 mph to 90 mph would reduce car accidents.  Rapid ordering leads to more errors as physicians no longer need to think carefully about everything. EMRs have sped up processes that need to be slow, and slowed down processes that need to be fast. From a clinical utility perspective, they are doing more harm than good...
Props to for this article. Read all of it.

Pretty stark indictment of EHRs. Her admonitions ought to be taken seriously. "Nonsensical digital vomit." LOL, I am so stealin' that.

I'm still trying to dispositively ID just who this "Dr. Val Jones" is. No bio is associated with the article. The link cited for the doc's alma mater (Columbia), though, squares with her being this Dr. Val Jones, affiliated with Science-Based Medicine (SBM), a daily priority surf-by and hang for me.

Dr. Jones also produces audio podcasts.

Nicely done.

Interestingly, in the article cited above, she does tout one EHR product, "MD-HQ."
*Note: there is at least one excellent, private practice EMR (for use in the outpatient setting) that is designed for communication (not billing). It is in use by direct primary care practices and was designed by physicians for supporting actual thinking and relevant information capture. I highly recommend it!
Never heard of them. How many times have we heard that "designed by physicians for physicians" thing? They're apparently not ONC 2014 CEHRT listed. So, however good the platform might be on its own usability merits, if you're in the Meaningful Use program, this is not your product.

Nice "look and feel" aesthetics.


UPDATE: I cited this post on LinkedIn. One reply:

I tried repeatedly, to no avail, to get SBM to take up the contentious issues (mainly re patient safety) related to Health IT. But, they're too busy fretting loud and long over the otherwise target-rich "woo" / "SCAM" ("So-Called Alternative Medicine") environment. Necessary priorities, one supposes, in a world of so much B.S. at every turn.

SBM turned me on to Mario Bunge. For that alone I have to always be quite grateful. See my December post "Philosophia sana in ars medica sana."


Renowned writer James Fallows has published an excellent Atlantic Monthly series on Health IT:
From the opening article:
The health-care system is one of the most technology-dependent parts of the American economy, and one of the most primitive. Every patient knows, and dreads, the first stage of any doctor visit: sitting down with a clipboard and filling out forms by hand.

David Blumenthal, a physician and former Harvard Medical School professor, was from 2009 to 2011 the national coordinator for health information technology, in charge of modernizing the nation’s medical-records systems. He now directs The Commonwealth Fund, a foundation that conducts health-policy research. Here, he talks about why progress has been so slow, and when and how that might change...
The series is replete with observations by physicians and others. Required reading. Glad to see this, coming from someone of major publishing stature outside the healthcare space.

ECRI's 2014 top 10 patient safety concerns:
  1. Data integrity failures with health information technology systems*
  2. Poor care coordination with patient’s next level of care
  3. Test results reporting errors
  4. Drug shortages
  5. Failure to adequately manage behavioral health patients in acute care settings
  6. Mislabeled specimens
  7. Retained devices and unretrieved fragments*
  8. Patient falls while toileting
  9. Inadequate monitoring for respiratory depression in patients taking opioids
  10. Inadequate reprocessing of endoscopes and surgical instruments*
*These items were also included in ECRI's top 10 health hazards list.
Courtesy of Healthcare IT News.




Gotta love it.

More to come...

Saturday, April 19, 2014

Interoperababble update

Report: Lack of Interoperability Limits Meaningful Use Program
April 17, 2014,

Meaningful use stages 1 and 2 fall short of implementing the interoperability among electronic health records that is necessary to facilitate information exchange and develop a robust health data infrastructure, according to a new report from a task force assembled by the MITRE Corporation, Health Data Management reports...

HHS released the report, which was developed by JASON, an independent task force that advises the federal government on issues pertaining to science and technology (DeSalvo, "Health IT Buzz," 4/16). The report was funded by the Agency for Healthcare Research and Quality.

In the report, the task force concluded that the criteria for meaningful use stages 1 and 2 "fall short of achieving meaningful use in any practical sense," adding that "large-scale interoperability amounts to little more than replacing fax machines with the electronic delivery of page-formatted medical records."

According to the task force, "most patients still cannot gain electronic access to their health information," and "rational access to EHRs for clinical care and biomedical research does not exist outside the boundaries of individual organizations."...
Link to the contractor's report here: "A Robust Health Data Infrastructure" (pdf)
1.4 Facing the Major Challenges
A meaningful exchange of information, electronic or otherwise, can take place between two parties only when the data are expressed in a mutually comprehensible format and include the information that both parties deem important. While these requirements are obvious, they have been major obstacles to the practical exchange of health care information.

With respect to data formats, the current lack of interoperability among the data resources for EHRs is a major impediment to the effective exchange of health information. These interoperability issues need to be solved going forward, or else the entire health data infrastructure will be crippled. One route to an interoperable solution is via the adoption of a common mark-up language for storing electronic health records, and this is already being undertaken by the HHS Office of the National Coordinator for Health IT (ONC) and other groups. However, simply moving to a common mark-up language will not suffice. It is equally necessary that there be published application program interfaces (APIs) that allow third-party programmers (and hence, users) to bridge from existing systems to a future software ecosystem that will be built on top of the stored data...
Open the pdf, hit Ctrl-F (PC) or Command-F (for us Mac snobs), search the document for keywords/phrases "dictionary," "data dictionary," "schema," or "RDBMS."

Negative. Zip. Zilch. Nada. Nein. Nyet.

1.5 A New Software Architecture
The various implementations of data formats, protocols, interfaces, and other elements of a HIT system should conform to an agreed-upon specification. Nonetheless, the software architecture that supports these systems must be robust in the face of reasonable deviations from the specification. The term “architecture” is used in this report to refer to the collective components of a software system that interact in specified ways and across specified interfaces to ensure specified functionality. This is not to be confused with the term “enterprise architecture,” referring to the way a particular enterprise’s business processes are organized. In this report, “architecture” is always used in the former sense...
...There would be opportunities to operate within the new software architecture even as it is starting to be implemented. The APIs provide portals to legacy HIT systems at four different levels within the architecture: medical records data, search and index functionality, semantic harmonization, and user interface applications...
"Semantic harmonization"? Lordy. Recall "Rigorability"?

They finish up:
7  Concluding Remarks
This report has expressed disappointment in current US progress towards the creation of a robust health data infrastructure, while praising ONC and HHS for their persistence in trying to tackle one of the most vexing problems of today’s society. JASON believes that the two overarching goals, improved health care and lower health care costs, can be achieved by moving to EHRs and the comprehensive electronic exchange of health information. JASON has provided a path toward realizing the promise of a robust health data infrastructure through the development of a unifying HIT software architecture that adheres to the following core principles, all embodying a focus on the patient:

  • Be agnostic as to the type, scale, platform, and storage location of the data
  • Use public APIs and open standards, interfaces, and protocols
  • Encrypt data at rest and in transit
  • Separate key management from data management
  • Include with the data the corresponding metadata, context, and provenance information
  • Represent the data as atomic data with associated metadata
  • Follow the robustness principle: be liberal in what you accept and conservative in what you send
  • Provide a migration pathway from legacy EHR systems.
Yeah, "interoperability is good." All that's needed, apparently, is yet more irrelevancies ("encrypt data at rest and in transit"), broad, vague clichés ("robustness principle..." "atomic data..."), and blinding glimpses of the obvious.

In fairness, the report contains much of substantive concern, e.g., noting that there is "a growing trend towards capturing large quantities of data associated with particular aspects of patient phenotype, analyzing those data, and reporting relevant information back to the patient. These come under the general heading of “omics” technologies, a designation derived from genomics, the first of such data types..."
  1. Genome sequence. The haploid human genome contains 3 x 10^9 base pairs of DNA. Humans are diploid, so each person has two copies of their genome, one maternal and one paternal. These two copies differ by approximately 0.1%, so it is necessary to sequence the DNA sufficiently deeply to capture all of the genetic variation of an individual in comparison to the reference human genome sequence. The current standard for individual genomes is to sequence to approximately 30-fold coverage, or approximately 10^11 bases of sequence data. In the case of cancer, for which it is important to know the genotype of the tumor in comparison to that of normal tissue, a similar level of sequencing might be applied to a tumor sample, and this could include a sample of both the primary tumor and its metastases. Although these data can be compressed by denoting only the difference with respect to the reference human genome sequence, there is clearly a rapidly growing need to incorporate vast amounts of genome sequence information into individual EHRs.
  2. Transcriptome. The transcriptome is a quantitative description of the types and amounts of messenger RNA molecules transcribed from the genomic DNA. Most cells in the body have the same genome sequence, but differential expression of that genome allows cells to become differentiated. Differential expression also defines disease states; for example, breast cancers can be divided into subtypes based on gene expression patterns. The transcriptome can be assessed by microarray analysis or, increasingly, by “RNAseq,” in which DNA copies of the messenger RNAs are sequenced with high coverage. The amount of information generated in a transcriptomics experiment is typically similar to that of a genome sequence, although because every cell type is different and there are many possible variables of cell state, there is the potential for much larger datasets.
  3. Epigenome. The epigenome is a description of the modification states of the genomic DNA and the RNA and proteins that are physically associated with the DNA in the form of chromatin. These modifications are part of the basis for the differential expression of the genome that is manifested in the transcriptome. The epigenome is assessed by a variety of methods that allow for spatial resolution of particular modifications in the genome (e.g., “ChIP-Seq” for measuring modifications of histone proteins, bisulfite sequencing for determining sites of methylation along the DNA, and DNase I hypersensitivity analysis for assessing chromatin structure). Efforts are currently underway to establish reference epigenome information for all genes in all tissue types.
  4. Proteome. The proteome is a description of the types and amounts of proteins expressed from the genome; it is the protein analog of the transcriptome. The proteome is determined in part by the transcriptome from which it is derived, but also by the many subsequent processes that affect proteins, including their translation, transport, post-translational modification, and degradation. The proteome is usually assessed by mass spectrometry. Both the sensitivity of detection and the methods for determining the amount of each protein detected by mass spectrometry are improving rapidly.
  5. Microbiome. The human body contains approximately 10 times more microbial cells than human cells by cell number (although only about 1% by mass). The microbiome is the complete description of this microbial population, including commensal and symbiotic organisms as well as pathogens. The microbiome of an individual is a unique signature, changing with time and environment, and likely responsible for some elements of phenotype. Because many of the microorganisms living in and on humans cannot be cultured, the microbiome is usually assessed by deep sequencing of the genomic DNA of microbiome organisms. There is growing evidence that several pathogenic conditions are due to aberrant states of the microbiome, some of which can be corrected by altering or replacing an individual’s microbiome.
  6. Immunome. The immunome is a description of the state of the immune system of an individual, focusing on the diversity of immune responses based on past exposures. In a narrower sense, such information has long been a part of health records. For example, the Mantoux (or PPD) skin test, and its predecessor the tuberculin tine test, assess the immune response to Mycobacterium tuberculosis antigens as a measure of previous exposure to this pathogen. High-throughput methods now allow testing for reactivity to thousands of antigens at once, in combination with deep sequencing to characterize the genome rearrangements that occur in each immune cell and define its reactivity.
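The report's sequencing figures are easy to sanity-check with a back-of-envelope sketch (the one-byte-per-base storage assumption below is mine, not the report's):

```python
# Back-of-envelope check of the JASON report's genome-sequencing figures.
haploid_bases = 3e9    # ~3 x 10^9 base pairs in the haploid human genome
coverage = 30          # standard sequencing depth per individual genome

raw_bases = haploid_bases * coverage   # ~9e10, on the order of the report's 10^11
print(f"{raw_bases:.1e} sequenced bases per individual")

# Assuming roughly 1 byte per base before compression (my assumption):
print(f"~{raw_bases / 1e9:.0f} GB of raw sequence per individual genome")
```

Per-patient data on that scale, multiplied across the other five "omics" categories, is exactly why the dictionary-specification question matters.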
Indeed. But, if we permit haphazard foundational dictionary specifications of "omics" data, things are only going to get much worse.

I refer you to another of my posts:

To coin a technical term,


More to come...

Thursday, April 17, 2014

"There is no true value of anything"

- The late W. Edwards Deming.

The broader context of his observation is that, once you go beyond the mere "enumeration" (counting) of discrete objects (and even that gets fraught), you are estimating.


Good stuff. Although some of the algebra will still lose a lot of people. A number of whom will be cherubically grinding healthcare "Big Data" in pursuit of The Next Big Epiphany.

I know my "Sensitivity vs Specificity" and "Bayes" pretty well. See here as well.

Additionally, count me squarely a "Talebist." And, a "Chebyshev"-ist.

"There is no true value of anything."

That applies to "probabilities" as well. When I hear people state "the probability is..." my reflexive reaction is "you mean your probability estimate is..." Just as with any other type of statistical calculation, "p-values" themselves form distributions. No serious, competent modern analytics practitioner takes undergrad axioms such as "set alpha at 0.05" seriously anymore. Such conveniences comprise naive methodological dilettantism. What you are really interested in are outcomes differentials, i.e., "expected values" (the probability of x multiplied by the payoff/payout of x) -- the estimated benefit or cost. Just knowing that two means (including regression trendlines) differ "significantly" is of little practical value. We need to be able to estimate, as "accurately" as possible, the upshot in terms of differential outcomes (be they scientific, clinical, or business/financial).

Knowing such things to finely-grained, stress-tested valuations -- inclusive of assessing "normality" assumptions -- is how Las Vegas makes its money.
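To make the expected-value point concrete, a minimal sketch (the probabilities and the dollar payoff below are made-up numbers, purely for illustration):

```python
# Expected value = prob(x) * payoff(x). A "statistically significant"
# difference in event rates means little until it is multiplied through
# by what the event actually costs (or saves).
p_event_control = 0.010    # hypothetical adverse-event probability, control
p_event_treated = 0.008    # hypothetical adverse-event probability, treated
cost_per_event = 50_000    # assumed cost per event, dollars

ev_control = p_event_control * cost_per_event   # expected cost per patient
ev_treated = p_event_treated * cost_per_event

# The decision-relevant quantity is the expected-value differential:
print(f"${ev_control - ev_treated:.2f} expected savings per patient")
```

Swap in your own probability estimates and payoffs; the point is that the differential, not the p-value, is what carries clinical or financial meaning.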

The "normal curve" of undergrad stats angst is a model: a best-case, bi-directionally asymptotic, smooth curvilinear exponential function that exists only in theory.
  1. Chance is lumpy.
  2. Overconfidence abhors uncertainty.
  3. Never flout a convention just once.
  4. Don't talk Greek if you don't know the English translation.
  5. If you have nothing to say, don't say anything.
  6. There is no free hunch.
  7. You can't see the dust if you don't move the couch.
  8. Criticism is the mother of methodology.
-Abelson's Laws

Assessing Absolute vs Relative Risk

Say the ambient prevalence of condition "c" is 1 out of 100, or 1%. We select an appropriate random sample of 2,000 subjects, splitting half via a double-blind RCT experiment into control group CG (no tx) and half into treatment group TG that gets tx "t". We find that the post-treatment prevalence of "c" in TG is 8 out of 1,000 (0.8%), whereas, true to form, there are 10 subjects with condition "c" in the 1,000-person CG (1%).

Well, our tx seems to have reduced the relative risk prevalence by 20%, ja?

Yeah, but the absolute risk reduction estimate from this trial is just one one-hundredth of that: 0.2%.

Prevalence matters. Along with these other empirical considerations cited above.
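The trial arithmetic above, sketched out (the number-needed-to-treat line is my addition; it follows directly from the absolute risk reduction):

```python
# Figures from the hypothetical RCT above.
control_events, control_n = 10, 1000   # 1.0% prevalence in control group CG
treated_events, treated_n = 8, 1000    # 0.8% prevalence in treatment group TG

risk_control = control_events / control_n
risk_treated = treated_events / treated_n

arr = risk_control - risk_treated   # absolute risk reduction: 0.002 (0.2%)
rrr = arr / risk_control            # relative risk reduction: 0.20 (20%)
nnt = 1 / arr                       # treat 500 patients to prevent one event

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

The "20% reduction" headline and the 0.2-percentage-point reality come from the very same trial data; only the denominator changes.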

See "Estimating the size of the treatment effect."

More to come...

Dispatch from the Irony-Free Zone

Recent post by Vik and Al on The Health Care Blog. I really like these guys, but I had to call bullshit on this one in the comments.
Kim il Bezos

"Amazon’s system of employee monitoring is the most oppressive I have ever come across and combines state-of-the-art surveillance technology with the system of “functional foreman,” introduced by Taylor in the workshops of the Pennsylvania machine-tool industry in the 1890s. In a fine piece of investigative reporting for the London Financial Times, economics correspondent Sarah O’Connor describes how, at Amazon’s center at Rugeley, England, Amazon tags its employees with personal sat-nav (satellite navigation) computers that tell them the route they must travel to shelve consignments of goods, but also set target times for their warehouse journeys and then measure whether targets are met.
All this information is available to management in real time, and if an employee is behind schedule she will receive a text message pointing this out and telling her to reach her targets or suffer the consequences. At Amazon’s depot in Allentown, Pennsylvania (of which more later), Kate Salasky worked shifts of up to eleven hours a day, mostly spent walking the length and breadth of the warehouse. In March 2011 she received a warning message from her manager, saying that she had been found unproductive during several minutes of her shift, and she was eventually fired. This employee tagging is now in operation at Amazon centers worldwide.

Whereas some Amazon employees are in constant motion across the floors of its enormous centers— the biggest, in Arizona, is the size of twenty-eight football fields— others work on assembly lines packing goods for shipping. An anonymous German student who worked as a temporary packer at Amazon’s depot in Augsburg, southern Germany, has given a revealing account of work on the line at Amazon. Her account appeared in the daily Frankfurter Allgemeine Zeitung, the stern upholder of German financial orthodoxy and not a publication usually given to accounts of workplace abuse by large and powerful corporations. There were six packing lines at Amazon’s Augsburg center, each with two conveyor belts feeding tables where the packers stood and did the packing. The first conveyor belt fed the table with goods stored in boxes, and the second carried the goods away in sealed packages ready for distribution by UPS, FedEx, and their German counterparts.

Machines measured whether the packers were meeting their targets for output per hour and whether the finished packages met their targets for weight and so had been packed “the one best way.” But alongside these digital controls there was a team of Taylor’s “functional foremen,” overseers in the full nineteenth-century sense of the term, watching the employees every second to ensure that there was no “time theft,” in the language of Walmart. On the packing lines there were six such foremen, one known in Amazonspeak as a “coworker” and above him five “leads,” whose collective task was to make sure that the line kept moving. Workers would be reprimanded for speaking to one another or for pausing to catch their breath (Verschnaufpause) after an especially tough packing job.
The functional foreman would record how often the packers went to the bathroom and, if they had not gone to the bathroom nearest the line, why not. The student packer also noticed how, in the manner of Jeremy Bentham’s nineteenth-century panopticon, the architecture of the depot was geared to make surveillance easier, with a bridge positioned at the end of the workstation where an overseer could stand and look down on his wards. 23 However, the task of the depot managers and supervisors was not simply to fight time theft and keep the line moving but also to find ways of making it move still faster. Sometimes this was done using the classic methods of Scientific Management, but at other times higher targets for output were simply proclaimed by management, in the manner of the Soviet workplace during the Stalin era.
Onetto in his lecture describes in detail how Amazon’s present-day scientific managers go about achieving speedup. They observe the line, create a detailed “process map” of its workings, and then return to the line to look for evidence of waste, or Muda, in the language of the Toyota system. They then draw up a new process map, along with a new and faster “time and motion” regime for the employees. Amazon even brings in veterans of lean production from Toyota itself, whom Onetto describes with some relish as “insultants,” not consultants: “They are really not nice. . . . [T]hey’re samurais, the real last samurais, the guys from the Toyota plants.” But as often as not, higher output targets are declared by Amazon management without explanation or warning, and employees who cannot make the cut are fired. At Amazon’s Allentown depot, Mark Zweifel, twenty-two, worked on the receiving line, “unloading inventory boxes, scanning bar codes and loading products into totes.” After working six months at Amazon, he was told, without warning or explanation, that his target rates for packages had doubled from 250 units per hour to 500.

Zweifel was able to make the pace, but he saw older workers who could not and were “getting written up a lot” and most of whom were fired. A temporary employee at the same warehouse, in his fifties, worked ten hours a day as a picker, taking items from bins and delivering them to the shelves. He would walk thirteen to fifteen miles daily. He was told he had to pick 1,200 items in a ten-hour shift, or 1 item every thirty seconds. He had to get down on his hands and knees 250 to 300 times a day to do this. He got written up for not working fast enough, and when he was fired only three of the one hundred temporary workers hired with him had survived.

At the Allentown warehouse, Stephen Dallal, also a “picker,” found that his output targets increased the longer he worked at the warehouse, doubling after six months. “It started with 75 pieces an hour, then 100 pieces an hour. Then 150 pieces an hour. They just got faster and faster.” He too was written up for not meeting his targets and was fired. At the Seattle warehouse where the writer Vanessa Veselka worked as an underground union organizer, an American Stakhanovism pervaded the depot. When she was on the line as a packer and her output slipped, the “lead” was on to her with “I need more from you today. We’re trying to hit 14,000 over these next few hours.”

Beyond this poisonous mixture of Taylorism and Stakhanovism, laced with twenty-first-century IT, there is, in Amazon’s treatment of its employees, a pervasive culture of meanness and mistrust that sits ill with its moralizing about care and trust— for customers, but not for the employees. So, for example, the company forces its employees to go through scanning checkpoints when both entering and leaving the depots, to guard against theft, and sets up checkpoints within the depot, which employees must stand in line to clear before entering the cafeteria, leading to what Amazon’s German employees call Pausenklau (break theft), shrinking the employee’s lunch break from thirty to twenty minutes, when they barely have time to eat their meal…”

Perhaps the biggest scandal in Amazon’s recent history took place at its Allentown, Pennsylvania, center during the summer of 2011. The scandal was the subject of a prizewinning series in the Allentown newspaper, the Morning Call, by its reporter Spencer Soper. The series revealed the lengths Amazon was prepared to go to keep costs down and output high and yielded a singular image of Amazon’s ruthlessness— ambulances stationed on hot days at the Amazon center to take employees suffering from heat stroke to the hospital. Despite the summer weather, there was no air-conditioning in the depot, and Amazon refused to let fresh air circulate by opening loading doors at either end of the depot— for fear of theft. Inside the plant there was no slackening of the pace, even as temperatures rose to more than 100 degrees.

On June 2, 2011, a warehouse employee contacted the US Occupational Safety and Health Administration to report that the heat index had reached 102 degrees in the warehouse and that fifteen workers had collapsed. On June 10 OSHA received a message on its complaints hotline from an emergency room doctor at the Lehigh Valley Hospital: “I’d like to report an unsafe environment with an Amazon facility in Fogelsville. . . . Several patients have come in the last couple of days with heat related injuries.” 

On July 25, with temperatures in the depot reaching 110 degrees, a security guard reported to OSHA that Amazon was refusing to open garage doors to help air circulate and that he had seen two pregnant women taken to a nursing station. Calls to the local ambulance service became so frequent that for five hot days in June and July, ambulances and paramedics were stationed all day at the depot…"

Head, Simon (2014-02-11). Mindless: Why Smarter Machines are Making Dumber Humans (pp. 42-44). Basic Books. Kindle Edition.

It gets worse. Lots more about the odious management culture at Amazon. Walmart too.

More to come...