Latest HIT news, on the fly...


Monday, September 22, 2014

Health 2.0 2014 Day One

Made it. Up at 4:30, 66 mile drive, took two hours. Ugh.

Apropos of nothing: standing in line in the Hyatt waiting for a Starbucks, I looked up.
Went by the 8 a.m. "Employers 2.0" session for a bit. The usual "Transparency Is Good" stuff.

Walking back down the hall, I bumped into Fred Trotter. He's gonna be a "wearable tech runway" fashion show "model" in the Mission Ballroom for some Adidas products. Sitting here watching the "runway rehearsal" at the moment.

Just getting started. Going to be a long, fun day.

More to come...

Sunday, September 21, 2014

Up next, Health 2.0 in Santa Clara

That's one of my pics of Indu, from day one of the 2012 Conference. A byline would have been nice; they didn't ask permission. That's just life online, though. After my friend Michael Grimm won America's Got Talent, bootleg "autographed" photos I'd taken of him started showing up on eBay for sale, asking about 100 bucks each. Like, 'whatever?' I let it pass. Some of my friends who alerted me to the pilfered pics were aghast, but, as a practical matter, chasing the perps down would not have been worth my time, and I doubt that anyone banked any serious coin from the scams. Michael didn't know anything about them.

I've covered Dr. Topol before, e.g., at HIMSS13.

My conference M.O. is to do a lot of photography, in order to give readers/viewers a visually vivid sense of what attending is like. The traditional Healthcare / Health IT press are always there in droves covering a lot of topical content in text-only detail. I've yet to see anyone doing what I do (I have no way to know, but I guess I'm doing something right; organizers keep inviting me back). I've seriously thought about getting a GoPro camera to do some walking-around video. There are a gazillion "videographers" at these conferences these days -- all of them doing standard "talking heads" stuff out in the halls. Anyone with a DSLR with HD video and an iMovie or Final Cut app is now a "Video Producer."

I'm scheduled for MGMA (Vegas) and the IHI Forum (Orlando) later this year. Maybe I'll pop for a GoPro.

Registration and agenda are here. Some of what will be going on is cited in a recent TCHB post:
The Must-See Digital Health Startups of 2014

Ten new digital health companies will demo their products for the first time in the Launch! session during the Health 2.0 8th Annual Fall Conference being held in Santa Clara, CA on September 21-24. Launch! is a contest held at 12:00 p.m. PDT on Wednesday, September 24th, where the technology is demoed in three and a half minutes. At the end, the audience votes for their favorites. Previous Launch! winners have included Castlight Health, Basis, and OM*Signal. This year’s finalists:
  • Symptify helps the user navigate a series of questions to narrow down the cause of their symptoms while also helping them find a nearby medical facility.
  • Open Source Health's MyAVA uses open source health IT for a collaborative patient-physician educational and information-sharing platform for women, addressing everything from female health and fertility to healthy aging.
  • Intake.Me is a communication and patient engagement product that allows patients to check-in for their doctor’s visit from anywhere and attach medical records stored in their own virtual private health cloud.
  • Livongo Health introduces the brand new InTouch, which is a diabetes monitor, advisory and coaching service, community, and communications tool—all rolled into one. The concept is so exciting that it’s got Glen Tullman out of his post Allscripts “retirement” and back into the startup game.
  • builds apps with a mission – introducing a PHR that consolidates existing health records onto the user’s mobile device.
  • Hale Health’s founder Anna Larsonn left Apple to develop a new communication platform where patients can digitally connect, talk, and message directly with their doctors.
  • Curve Tomorrow is in partnership with the Murdoch Childrens Research Institute in Melbourne, Australia, launching Sonny. It uses the XBOX Kinect to more accurately track the progress of physical rehab treatment.
  • InpharmD helps physicians make more informed decisions by digitizing the literature search experience. Supervised pharmacy students respond to their inquiries and help them practice according to the latest evidence.
  • ChatrHealth’s Cascadia is a set of iPad-based modules that adapt the checklist functionality used by airline pilots to improve safety in clinical situations, starting with anesthesia and surgery, and moving to other medical procedures.
  • ThriveOn uses a series of personality-based steps to help people take the small steps towards more balanced mental health. Interventions like “cooking dinner for friends” or “going for a healthy walk” are accompanied by the support of a live mental health professional.
While Launch! is in its 7th year of showing brand new technologies, Traction is a brand new competition focusing on the business plans of already successful companies on the verge of raising Series A rounds of $2-$15M. The companies are mentored by experts and judged by a hard-nosed panel of venture capitalists who will determine which can scale most quickly...

BTW: in addition to my cameras, I'm bringing my audio gear.

If you want to hook up and chat one-on-one for podcast publication, let me know. In addition to the obvious tech stuff, my interests run to all manner of healthcare improvement issues, ranging from clinical science (including medical education pedagogy) to workflow processes to empirical analytics to public policy to healthcare workforce "psychosocial health" -- i.e., "Just Culture."

See you in Santa Clara. Oh, yeah, in addition to my cameras and audio gear, I'm bringin' my own half and half. :) Rice milk? Soy milk? Seriously?

Bringing my axe too. Even after 55 years on it, I still need to practice every day.


Courtesy of THCB:

In this brilliantly fun mockumentary from German filmmaker Till Nowak, a man named Dr. Nick Laslowicz from the Institute for Centrifugal Research (ICR) recounts his “achievements in the realms of brain manipulation, excessive G-Force and prenatal simulations,” stating unequivocally that “gravity is a mistake.” What follows is a series of increasingly terrifying and equally absurd roller coasters that fling passengers into the sky in an attempt to theoretically improve their cognitive function. Pulling no stops, the fictional ICR has a fully active Facebook page and website, though in reality the film has won numerous awards in screenings around the world in the last year. (via the creators project, swissmiss)
Well, OK...

More to come...

Friday, September 19, 2014


There is nothing wrong with trying to improve the value of health care. But better value will depend as much on doing more of what’s good as it will upon doing less of what’s bad ... You don’t bill for talking to a patient about how he wants to die. There’s no code for providing reassurance rather than ordering a test. And, for all the talk about transforming our health-care system from one that treats illness to one that promotes health, no one pays you to talk to patients about how they might lead healthier lives.

- Lisa Rosenbaum, What Big Data Can't Tell Us About Health Care
Props to The Incidental Economist for the heads-up on her writings. Below, another interesting cite from another of her New Yorker articles, apropos of "Big Data" and EBM:
Because cardiovascular disease is so common, cardiologists have been able to study millions of patients, leading to what is arguably the most robust evidence base in all of clinical medicine. Yet despite innumerable outstanding clinical trials, we are always held back by what we don’t know.

Consider the claim that medications are as good as stents for treatment of stable coronary artery disease. This is the idea upon which most accusations about the overuse of stents are predicated. Several studies support this assertion, but the seminal one is the COURAGE trial, which randomly assigned patients with stable blockages to either medication or a stent. The study found that those treated with medications lived just as long as those with stents. COURAGE is a super-star trial, the best of its kind. So why can’t we say, once and for all, that it’s inappropriate to use stents for patients with stable coronary disease?

The answer is that it’s because such a statement is a colossal oversimplification. The fundamental challenge of translating data into practice is what we call generalizability: Can we extrapolate the findings from a trial to real life? If you are a doctor who is trying to practice evidence-based care, the first thing you want to ask yourself is, Would my patient have been enrolled in the trial? Sun Kim would not have been eligible for the COURAGE trial, which excluded all patients with high-risk features—or nine out of ten otherwise eligible patients.

But let’s say that your patient is among the ten per cent who would have been enrolled in COURAGE. Can’t you say now, with certainty, that your patient should not get a stent? Still no. Real life rarely resembles clinical trials, which are, by definition, rarefied environments—well-oiled machines with incredible depth of resources and a staff to orchestrate patient care.

If, as in the case of COURAGE, you set out to show that medications are as effective as stents in treating chronic disease, you want to make sure that the patients in the trial are actually taking those medications. In reality, the rate of adherence to medications is about fifty per cent, but COURAGE not only provided medications for free—it also hired nurse managers who saw patients regularly and adjusted dosages. Not only did these patients adhere to medications at a far higher rate than patients usually do—this adherence also translated to excellent blood pressure and cholesterol control.

If it’s hard to apply the findings from any one trial to the treatment of a particular patient, it’s harder still to use data from many trials to create guidelines that can be applied to any patient. When a group of expert cardiologists were asked to do just that, they recognized that there are many factors to be considered in addition to medication—including the acuity of the disease, the patient’s degree of chest pain, the results of stress tests, and which of four main arteries are blocked. When you account for all of these factors, you come up with over four thousand clinical scenarios for which stenting may or may not be appropriate, many of which can’t be mapped precisely to a clinical trial.

It was in these gaps between data and life where I lost Sun Kim. There is no guideline that says, “This is how you manage an elderly man who asks nothing of anyone, who may or may not be taking his medications, and who has difficulty coming to see you because he vomits every time he gets on the bus.” In a world with infinite resources, we could conduct clinical trials to address every permutation of coronary disease and every circumstance. But that’s not the world we live in. And in our world, I reached a point where I could not keep Sun Kim out of the hospital.
 - When Is a Medical Treatment Unnecessary?
Good stuff.


I'll be there early Monday morning. Register here if you've not already done so.  Should be a very interesting event. My 2013 M-W coverage, here, here, and here. I also went to the Sunday prelims last year.

More to come...

Wednesday, September 17, 2014

EHR programming, discrete mathematics, and interoperability

From the consistently fine EHR Science blog:

Yes, Programming Is Math…
Jerome Carter MD, September 15, 2014

I read a wide range of articles and posts on a daily basis. Many of them are aimed at software developers and entrepreneurs. At least once each year or so, I run across a blog post that takes the very adamant stance that programming is not really related to math. The interesting thing is that many professional programmers in their comments strongly agree with that sentiment. And, to be honest, I saw no strong connection between programming and mathematics until I tried to get a better understanding of clinical workflow. As it turns out, programming is not simply related to mathematics, programming is mathematics...

Susanna Epp, in Discrete Mathematics with Applications, Fourth Edition, offers this view of discrete mathematics.
Discrete mathematics describes processes that consist of a sequence of individual steps. This contrasts with calculus, which describes processes that change in a continuous fashion. Whereas the ideas of calculus are fundamental to the science and technology of the industrial revolution, the ideas of discrete mathematics underlie the science and technology of the computer age.
Over the course of one’s math education, many formulas are learned and the algorithms used to compute their results are internalized. But what happens when a function must be computed for sets containing words and no tried-and-true formula exists? An algorithm is still required.

Consider two sets. Set X = {Jones, Green, Smith, …} and represents patients at a large provider organization who must be assigned to a primary care provider. Set Y = {Brown, Doe, Gray, …} and contains the organization’s primary care providers (PCPs). A decision is made to assign each patient to a PCP. Here are the rules: 1) every patient must have a PCP, 2) no patient may have more than one PCP, 3) once a provider has reached 100 patients, no more patients can be assigned until all other PCPs have at least 100 patients.

One way to approach this algorithm might be:

WHILE Unassigned Patients > 0
    Select patient from Set X;
    IF patient already has a PCP
        THEN Select next Patient
    Select PCP from Set Y
    IF PCP has >= 100 Patients
        THEN Select New PCP
        ELSE Assign Patient to PCP
This algorithm computes a function between sets that do not contain numbers. This is math, and this is programming. The rules that constrain the algorithm’s operations are similar to rules such as “no division by zero” or “place the number with the fewest columns on the bottom” when multiplying two numbers.

Discrete mathematics covers many topics. Logic, functions, relations, algorithmic efficiency, graphs, sets, proofs, discrete probability, and combinatorics are taught in typical introductory courses, and usually only to math and computer science majors, which explains why most have never heard of discrete math. The algorithm above makes use of logical implication (IF, THEN), sets, and functions.
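Dr. Carter's pseudocode can be fleshed out into a short runnable sketch. This Python version is my own illustrative take, not his implementation; in particular, breaking ties by assigning to the least-loaded PCP is an assumption his three rules don't specify.

```python
# Illustrative sketch of the patient-to-PCP assignment algorithm above.
# Rules: every patient gets exactly one PCP; no patient gets two; a PCP at
# the cap (100) takes no new patients until every other PCP is at the cap.

CAP = 100

def assign_patients(patients, pcps):
    """Return a dict mapping each patient to a PCP, honoring the three rules."""
    assignment = {}
    counts = {pcp: 0 for pcp in pcps}
    for patient in patients:
        if patient in assignment:      # rule 2: already has a PCP -- skip
            continue
        # Rule 3: prefer PCPs still under the cap; once all are at the cap,
        # fall back to the least-loaded PCP (an assumed tie-breaking policy).
        under_cap = [p for p in pcps if counts[p] < CAP]
        pool = under_cap if under_cap else pcps
        pcp = min(pool, key=lambda p: counts[p])
        assignment[patient] = pcp      # rule 1: every patient gets a PCP
        counts[pcp] += 1
    return assignment

# e.g., assign_patients(["Jones", "Green", "Smith"], ["Brown", "Doe", "Gray"])
```

This is a function between sets that contain no numbers at all -- which is Carter's point.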

Relations are often assumed to be a by-product of relational databases
Functions are a type of relation. Relations are more general and have fewer rules. Relations may occur between an arbitrary number of sets. Like functions, they require algorithms to compute outcomes. The outcome of a relation is a tuple. For example, given the following sets: providers, patients, drugs, and dates, one could create an algorithm for a relation in which a provider selects a drug from a formulary, and assigns it to a patient on a specific date: (Doe, Jones, Lasix, 9/15/2014). The resulting tuple represents the relation “Current Medication List.” The rules are: the provider must be the patient’s PCP; the drug must be in the formulary; and the patient cannot be taking any interacting drugs. From this example, it can be seen that relations are common in health care (problem list, prescription, visit history, etc.). Relations existed prior to the invention of relational databases...
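As a sketch of how such a rule-constrained relation might look in code -- all names, sets, and rule data below are illustrative assumptions, not anything from Dr. Carter's post:

```python
# The "Current Medication List" relation as a set of (provider, patient,
# drug, date) tuples, with the three stated rules enforced on insertion.
from datetime import date

# Toy constraint sets (assumptions for illustration)
FORMULARY = {"Lasix", "Metformin"}
PCP_OF = {"Jones": "Doe"}                 # patient -> assigned PCP
INTERACTIONS = {("Lasix", "Digoxin")}     # interacting drug pairs

current_meds = []  # the relation itself: a list of 4-tuples

def prescribe(provider, patient, drug, on):
    """Append a tuple to the relation only if all three rules hold."""
    if PCP_OF.get(patient) != provider:
        raise ValueError("provider must be the patient's PCP")
    if drug not in FORMULARY:
        raise ValueError("drug must be in the formulary")
    taking = {d for (_, p, d, _) in current_meds if p == patient}
    if any((drug, d) in INTERACTIONS or (d, drug) in INTERACTIONS
           for d in taking):
        raise ValueError("patient is taking an interacting drug")
    current_meds.append((provider, patient, drug, on))

prescribe("Doe", "Jones", "Lasix", date(2014, 9, 15))
# current_meds now holds the tuple (Doe, Jones, Lasix, 9/15/2014)
```

The rules play the same role as "no division by zero" does in arithmetic: constraints on which tuples the algorithm may produce.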
Nice. Having begun my white collar career as a 3GL programmer, I understand where he's coming from -- notwithstanding that my "discrete/finite math" acumen is by now a long-atrophied artifact of my distant undergrad days.

OK, let's indeed "consider two sets" (or "n" sets) from a metadata perspective, in light of the foregoing "what happens when a function must be computed for sets containing words and no tried-and-true formula exists?"

Yes, in an EHR we have "structured data" (e.g., integer and floating-point numeric data and "word" acronyms comprising code set entries), and "unstructured data" (e.g., actual "words" comprising free-form narrative fields captured in the RDBMS -- everything from the Admin/Demographics to pieces of FH, SH, PMH, Active Meds, Active dxs, CC, HPI, ROS, and case narratives in the progress note).

Now, on the horse-already-out-of-the-barn "interoperability" front these days, everyone continues to nibble from the outsides in, with a primary focus on jiggering together various kinds of proxy "unique patient identifier," given our political refusal to assign one nationally, like the beleaguered Social Security number. (The Medicare HIC number, though, is in effect precisely that: a Social with an alphabetic character appended.)

One recent comment from the ONC Nationwide Interoperability Roadmap Community Home:
Attribute based identity services are critical given that the US does not have a single identification number per person.  Even with a single identifier, there are errors in attribute updates like married name, SSN, or even changes in biometrics.  Therefor [sic] a smarter reconciliation solution takes advantage of probabilistic matching algorithms of key attributes of a person.  For example: first name, middle name, last name, SSN, DoB, finger print, hand print, retinae, facial recognition. Any 3 of these attributes when correlated provide a 99.999% match. A smart identity service allows for the updating of these attributes as a service (cloud based preferred) so that all applications can subscribe to the service.  Moreover, attribute management by the patient enables access by other entities as determined by the patient.  This sets the stage for patient empowered, patient driven, patient centered models of care, as all health IT systems ultimately depend on robust “Identity Services”. 
Gotta love the "99.999%" assertion. Dude, you can't back that soothing claim up empirically. Such recombinant matching efforts may well be fairly precise relative to provincial single-key PID alternatives, but in the programming and data world I came from, if you claim "99.999%," you have to be ready to show the auditors that you can discriminate statistically between "99.998%" and "100%" in repeated trials -- a.k.a. "significant figures rounding."
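For what it's worth, real-world record linkage typically scores candidate matches along the lines of the Fellegi-Sunter model, where each agreeing or disagreeing attribute contributes an empirically estimated log-likelihood weight. A toy sketch -- the m/u probabilities below are made-up assumptions, which is exactly the point: the resulting match confidence is only as good as those estimates, not a fixed "99.999%" for any three agreeing attributes:

```python
# Minimal Fellegi-Sunter-style scoring sketch (illustrative numbers only).
# For each attribute: m = P(values agree | same person),
#                     u = P(values agree | different people).
# An agreeing field adds log2(m/u); a disagreeing field adds log2((1-m)/(1-u)).
import math

# Hypothetical per-attribute (m, u) estimates -- assumptions, not real data
WEIGHTS = {
    "first_name": (0.95, 0.01),
    "last_name":  (0.97, 0.002),
    "dob":        (0.99, 0.0003),
    "ssn":        (0.98, 0.00001),
}

def match_score(agreements):
    """Total log-likelihood weight for a candidate record pair."""
    score = 0.0
    for field, (m, u) in WEIGHTS.items():
        if field in agreements:
            score += math.log2(m / u)
        else:
            score += math.log2((1 - m) / (1 - u))
    return score

# Three agreeing attributes yield very different confidence depending on
# WHICH three agree -- agreeing on SSN carries far more weight than on name:
print(match_score({"first_name", "last_name", "dob"}))
print(match_score({"first_name", "last_name", "ssn"}))
```

Turning such a score into a claimed error rate requires validated m/u estimates and a labeled truth set, which is precisely the empirical backing the commenter's flat "99.999%" lacks.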

In a world of students now taking their (subjective ordinal rank) GPA's out to 4 decimal places, we've lost sight of such material nuance.

Irrespective of such pedantic nitpicking, we can all agree that a clean (approaching 100% accurate), "no dupes," "no nulls" PID will be critical for interop. Perhaps near-universal deployment of biometric factors into a "two-factor auth" ID will mitigate the concerns (though you can just hear the angry cries from TinFoilHat-istan). But I see a salient secondary concern as well, one that goes back to my rant regarding The Interoperability Conundrum.
One. That’s what the word “Standard” means -- er, should mean. To the extent that you have a plethora of contending “standards” around a single topic, you effectively have none. You have simply a no-value-add “standards promulgation” blindered busywork industry frenetically shoveling sand in the Health IT gears under the illusory guise of doing something goalworthy.

One. Then stand back and watch the private HIT market work its creative, innovative, utilitarian magic in terms of features, functionality, and usability. Let a Thousand RDBMS Schema and Workflow Logic Paths Bloom. Let a Thousand Certified Health IT Systems compete to survive on customer value (including, most importantly, seamless patient data interchange for that most important customer). You need not specify by federal regulation (other than regs pertaining to ePHI security and privacy) any additional substantive “regulation” of the “means” for achieving the ends that we all agree are necessary and desirable. There are, after all, only three fundamental data types at issue: text (structured, e.g., ICD9, those within other normative vocabulary code sets, and unstructured, e.g., open-ended free-form SOAP note narratives), numbers (integer and floating-point decimal), and images. All things above that are mere “representations” of the basic data...

Why don’t we do this? Well, no vendors want to have to “re-map” their myriad proprietary RDBMS schema to link back to a single data hub dictionary standard. And, apparently the IT industry doesn’t come equipped with any lessons-learned rear view mirrors.

That’s pretty understandable, I have to admit. In the parlance, it goes to opaque data silos, profitable “vendor lock,” etc. But, such is fundamentally anathema to efficient and accurate reciprocal data interchange (the “interoperability” misnomer) that patients ultimately need and deserve...
Visualize going to Lowe’s or Home Depot to have to choose among 800+ ONC Stage 2 CHPL Certified sizes and shapes of 120VAC 15 amp grounded 3-prong wall outlets.

Imagine ASCII v3.14.2.a.7. Which, uhh…, no longer supports ASCII v2.05.1 or earlier…

Ya with me here, Vern?

NIST/ANSI/ISO Health IT ICDDS – Interoperability Core Data Dictionary Standard...
Consider that no clinical datum has any actionable patient or provider value in isolation. For one thing, as a patient, if you get an abnormally high lab analyte result back, you'll be anxiously demanding a prompt re-do. Moreover, the most fundamental measures of, say, BP, pulse, temp, weight, and blood or UA parameters derive their dx significance from their mutual context (often in terms of the one-to-many flowsheet trending variety).

For that context (and its diagnostic utility) to be precise over time, you benefit from metadata standardization. Now, within the "walls" of a single EHR, this is not an issue -- with the exception of those posed by data entry errors that are the bane of all digital IT-based enterprise. But, once you venture outside the walled gardens / data silos to exchange patient data (the "interoperability" misnomer), Dr. Carter's "tuples" are vulnerable to the iterative and recursive messes of "error propagation" and dx analytic degradation, as repeated infusions of data from myriad non-metadata-standardized sources frequently dirty up your data.
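A toy illustration of that "dirty data" failure mode. The analyte, feeds, and field names below are hypothetical; the scenario is the classic one of two sources reporting the same lab value under different local unit conventions:

```python
# Two non-metadata-standardized feeds reporting the same patient's glucose:
# one in mg/dL, one in mmol/L. Unless the unit metadata travels with the
# value and is reconciled, the trending flowsheet is silently corrupted.

GLUCOSE_MGDL_PER_MMOLL = 18.0182  # conversion factor for glucose

def normalize(observation):
    """Return the observation's value in mg/dL, using its unit metadata."""
    value, unit = observation["value"], observation.get("unit")
    if unit == "mg/dL":
        return value
    if unit == "mmol/L":
        return value * GLUCOSE_MGDL_PER_MMOLL
    raise ValueError(f"cannot trend observation with unknown unit: {unit!r}")

feed_a = {"value": 110, "unit": "mg/dL"}   # hospital lab
feed_b = {"value": 6.1, "unit": "mmol/L"}  # outside clinic

trend = [normalize(feed_a), normalize(feed_b)]
# Naively appending the raw values (110, then 6.1) would instead chart a
# catastrophic-looking "drop" in glucose -- propagated garbage, not data.
```

Multiply that by every analyte, code set, and feed in an exchange network and you have the degradation described above.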

See also my July 4th post Interoperababble update.

AMA presses for better EHRs
Calls for design overhaul to boost usability
Docs still unconvinced of EHRs' worth
Meanwhile, half of practices are girding for big headaches caused by ICD-10
The usual gripes.

ERRATUM: Medicine 2064 with Dr. Daniel Kraft

This in the video (below) bothered me a bit.

I've read elsewhere that the planet's "carrying capacity" is now estimated to be ~ 9 billion people. From one of my other blogs:
We comprise roughly 5% of world population, while consuming 25% of its resources. As climate scientist Tim Flannery observed in his book "The Weather Makers,"
In 1961 there was still room to maneuver. In that seemingly distant age, there were just 3 billion people, and they were using only half of the total resources that our global ecosystem could sustainably provide. A short twenty-five years later, in 1986, we had reached a watershed, for that year our population topped 5 billion, and such was our thirst for resources that we were using all of Earth's sustainable production.
In effect, 1986 marks the year that humans reached Earth's carrying capacity, and ever since we have been running the environmental equivalent of a budget deficit, which is sustained only by plundering our capital base. The plundering takes the form of overexploiting fisheries, overgrazing pasture until it becomes desert, destroying forests, and polluting our oceans and atmosphere, which in turn leads to the large number of environmental issues we face. In the end, though, the environmental budget is the only one that really counts...
...By 2001 humanity's deficit had ballooned to 20 percent, and our population to over 6 billion. By 2050, when the population is expected to level out at around 9 billion, the burden of human existence will be such that we will be using -- if they can still be found -- nearly two planets' worth of resources." [pp. 78-79]
As I wrote in a prior post in response:
We can choose to continue to drill, mine, cut down, and grind up the planet in pursuit of short-term business-as-usual, unevenly distributed consumerist comforts, but the day of tragically harsh mass reckoning draws ever closer. The lessons to be drawn from Jared Diamond's "Collapse" are compelling in this regard. There is no shortage whatsoever of constructive and remediative work to be done in support of a sustainable and broadly prosperous future for all of humanity. But, let's not kid ourselves that an unregulated "invisible hand free market" alone will suffice to insure its emergence. Recent economic history alone refutes that assertion.
Complicating this unsustainably destructive "consuming the future" market ethos, consider this: In this decade, more than 40% of U.S. corporate profits have come from the "financial services sector," with perhaps 3/4 of those "profits" accruing from iterative, non-tangible-value-adding, grotesquely leveraged "fee income" that fueled the recent economic bubble and caused the current recession.

Let me repeat what I wrote a few days ago (scroll back up a bit):

"The most excruciating of economic and ethical choices inescapably await us. And, while much of the public continues to sleepwalk into this morally daunting future (abetted by the well-funded stultifyingly fear-stoking fallacies of corporate status quo interests), we really don't have the luxury of time to continue to kick this policy can down the road. Sadly, it appears to a worrisome degree that that is precisely where we're headed at best."

I would love to be wrong. I would love to hear from others, too. I certainly don't have all the answers.
OK, sustainable human population "carrying capacity" may well be a moving target. Nonetheless, one cannot but look at developments such as overpopulation and Anthropocene global warming with rational concern. The glorious new future of health care will no doubt have its hands full.

Hospital execs: Meaningful Use, interoperability discussions can't be separated
September 17, 2014 | By Dan Bowman

Provider-based health IT professionals stated their case for why they need more flexibility for Meaningful Use Stage 2 during a briefing at the Russell Senate Building in the District of Columbia on Tuesday.

The often tension-filled discussion, hosted by the College of Healthcare Information Management Executives, included hospital CIOs Randy McCleese of Morehead, Kentucky-based St. Claire Regional Medical Center and Marc Probst of Salt Lake City-based Intermountain Healthcare, as well as Peter Basch, medical director for ambulatory EHR and health IT policy at MedStar Health in the nation's capital; William Bria, president of the Association of Medical Directors of Information Systems; and ONC Interoperability Portfolio Manager Erica Galvez.

Following comments from Galvez that Meaningful Use is only "one lever" in the push for interoperability and that the industry needs to "shift the conversation from being so Meaningful Use-centered to thinking about other business drivers," Probst said that as much as he'd like to, separating the incentive program and interoperability conversations isn't an option.

"We can't ignore Meaningful Use" because of the incentives and penalties, Probst said. "If indeed we do move on Oct. 1 with a 365-day reporting period ... it leaves very little flexibility. I know we want to move the discussion from Meaningful Use ... but it is part of the discussion and it does make it a challenge to move to this next part of the discussion. ... We're still making decisions about Meaningful Use Stage 3 that we have yet to introduce to the population. I don't think we can ignore the fact that Meaningful Use is sitting over" our heads...
 Dr. Carter makes an interesting observation over at THCB:

[U]sability is an important trait of good software. However, current EHR systems are doing exactly what they were designed to do. Current EHR systems were conceived as electronic versions of paper charts. Paper charts are passive; they do not assist in patient care. Rather, they exist to provide a record of what has occurred. Using the paper chart as a design guide resulted in electronic systems that emulated their paper forebears. Thus, EHR systems are geared toward data collection and reporting.

EHR systems do not contain models of clinical work. They do not have user-configurable workflows. They are not aware of the cognitive needs of users, and they have no representation of collaborative care. No, they are what they were designed to be: electronic repositories of patient data. And, since the advent of MU, they have become even more focused on administrative and regulatory requirements.

I am greatly encouraged to see a medical professional organization take an active role in creating software that supports clinical care delivery. However, turning an electronic data repository into a clinical care system that intimately supports clinical work is going to be a huge renovation project. Perhaps it would be better to start from scratch using clinical work as the design paradigm instead of the paper chart. After all, building from scratch is cheaper and less problematic than renovating...
Dr. Carter, in that very comment thread, agreed with my argument for a standard data dictionary, btw.
"A data dictionary standard or a standard data set, which might well be the same thing, would be a good start."
I would argue that it's fundamental. The reason that math works -- algorithmically -- owes at root to the uniform standardization of what the symbols (operators and operands) mean (the "metadata"). With respect to the "math" that is EHR programming, on the non-integer/rational number side of things in particular, the lack of strict standardization and the random variability among the metadata inevitably begets the interoperababble that dogs our efforts to this day.
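To make that point concrete, a toy sketch (the codes and mappings are illustrative, not any actual vendor's schema): merging two problem lists is a simple set operation, but only if the symbols mean the same thing on both sides.

```python
# Two systems' local representations of the same problem, "essential
# hypertension" -- one coded in ICD-9, one with a homegrown local code.

system_a = [{"code": "401.9", "system": "ICD-9"}]
system_b = [{"code": "HTN",   "system": "local"}]

# Without a shared dictionary, a naive union sees two *different* problems:
naive_merge = {(p["system"], p["code"]) for p in system_a + system_b}
assert len(naive_merge) == 2   # undetected duplicate -- "interoperababble"

# With a standard data dictionary mapping every local symbol to a single
# canonical concept, the same merge collapses correctly:
CANONICAL = {
    ("ICD-9", "401.9"): "hypertension",
    ("local", "HTN"):   "hypertension",
}
merged = {CANONICAL[(p["system"], p["code"])] for p in system_a + system_b}
assert merged == {"hypertension"}
```

The set union is the same "math" in both cases; only the standardized metadata makes its result mean anything.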


The government needs to act quickly to remedy the impaired usability of electronic health records (EHR) if the technology's touted benefits are to be realized, AMA Board of Trustees Chair Steven J. Stack, MD (left), told officials during a federal hearing last week.

"The AMA and most physicians believe that, done well, EHRs have the potential to improve patient care," Dr. Stack, an emergency physician in Lexington, Ky., said during his 30-minute testimony (pdf). "At present, however, these EHRs present substantial challenges to the physicians and other clinicians now required to use them."

He emphasized that many of today's EHR systems require significant changes before they can deliver the promised outcomes. Referring to Medicare's meaningful use program, he pointed to undesired consequences of pushing EHR systems on physicians before the technology was completely ready for prime time.

"Attempting to transform the entire health system in such a rapid and proscriptive manner has compelled providers to purchase tools not yet optimized to the end-user's needs and that often impeded, rather than enable, efficient clinical care," he said.

He noted that physicians are "prolific technology adopters" but that adoption of EHR systems has required federal incentives because the technology still is "at an immature stage of development."

"EHRs have been and largely remain clunky, confusing and complex," he said.
According to a recent survey by AmericanEHR Partners, physician dissatisfaction with EHR systems has increased. Nearly one-third of those surveyed in 2012 said they were "very dissatisfied" with their system, and 39 percent said they would not recommend their EHR system to a colleague—up from 24 percent in 2010.

Dr. Stack spoke at a "listening session" hosted by the Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC), a division of the U.S. Department of Health and Human Services (HHS). The agencies coordinated the session to examine how a marked increase in code levels billed for some Medicare services might be tied to the increased use of EHRs.

Dr. Stack noted that some Medicare carriers have begun denying payment for charts that are too similar to other records.

"In this instance, even when clinicians are appropriately using the EHR, a tool with which they are frustrated and the use of which the federal government has mandated under threat of financial penalty, they are now being accused of inappropriate behavior, being economically penalized, and being instructed ‘de facto' to re‐engineer non‐value‐added variation into their clinical notes," he said. "This is an appalling Catch‐22 for physicians."

Dr. Stack advised officials that three key actions are necessary to rectify these issues with EHR systems:
  • The ONC should promptly address EHR usability concerns raised by physicians and add usability criteria to the EHR certification process.
  • CMS should provide clear and direct guidance to physicians concerning use of EHRs for documentation, coding and billing.
  • Stage 2 of the meaningful use program should allow more flexibility for physicians to meet requirements as EHR systems are improved.
The AMA will continue to work with federal agencies to improve EHR systems and the Medicare meaningful use program.

More to come...


Monday, September 15, 2014

IT frustration and criticism are by no means unique to healthcare

Read an awesome book last week:

I've been reading a lot of books and other stuff online lately, anything pertaining to "education," "training," and "learning," e.g., see my Sept. 2nd post.

Excerpt from Getting Schooled:
At some point before the first passing bell I will need to turn on my laptop and log onto PowerSchool in order to take attendance as quickly as possible. The process is always slower than I hope. My habit in past years was to take a few minutes at the start of each period to set a positive tone, tell a joke, praise a student’s achievements in another school activity, recount a current event, or read a passage I’d come across that seemed worth reading aloud, all the while taking attendance out of the corner of my eye and noting it in my grade book. The present system is more jealous of my attention. Often my first words to the class are related to the minutiae of the record keeping. We are expected, for example, to record missing homework, so that teachers in subsequent study halls can follow up and see that it gets done. To start a class by asking for a show of hands from those who haven’t done their homework (something I’ve always preferred to do confidentially and at least a few minutes into the period by walking discreetly among the students) is hardly the best way to set a positive tone. If I hit the wrong key, I need to cancel out my notations and do all of them again. If a tardy student walks into the room a minute after I’ve hit “submit,” then I need to call up the screen and do the entire roster over. As noted by the outside consultants who manage the system, it “currently lacks the capability” of maintaining a daily record of absences beyond “total to date”— a must for any teacher who hopes to keep track of when a student was and wasn’t in class and therefore was or wasn’t responsible for a missing assignment or on hand for an essential presentation of material. This means I must take attendance twice, once on the computer and once in a notebook I consult whenever I wish to know for sure what a student may have missed on a given day. 
The bottom line here— and I use the phrase with an eye to the mind-set that promotes these “systems”— is that I am increasingly devoting more time to the generation and recording of data and less time to the educational substance of what the data is supposed to measure. Think of it as a man who develops ever more elaborate schemes for counting his money, even as he forfeits more and more of his time for earning the money he counts.
Keizer, Garret (2014-08-05). Getting Schooled: The Reeducation of an American Teacher (pp. 51-52). Henry Holt and Co. Kindle Edition.

My notes home become a moot point after a while because my Friday afternoons— chosen for my latest stays because of the hour and a half it takes my wife to get home from her Friday job at Dartmouth— are soon taken up with other tasks, many of them occasioned by the modern school’s almost insatiable thirst for “data” and the timely (i.e., as close to instantaneous as possible) recording of the same. In addition to grades and homework assignments, we are required to do a “productivity rubric,” which must be tallied for each student for each marking period and for the “progress report” periods in between; in other words, eight times a year. The productivity rubric is a feature of the PowerSchool grading system that allows teachers to assign numbers of 1 to 4, with 4 being the highest, to criteria presumably not subsumed by academic grades, such as “initiative,” “cooperation,” “attendance,” “behavior,” and “responsibility.” A faculty committee has designed a two-page spreadsheet that defines the meaning for each criterion of “productivity”— what distinguishes a 3 for behavior from a 2, for instance— and also attempts to reduce the vexing overlap between categories like “initiative” and “responsibility.” It goes without saying that the guide creates as many questions as it answers. What score should I give to a student who is missing far too many days of school but who does a better job of meeting her deadlines than a number of students with close to perfect attendance? Do I give her a number that amounts to a wink at truancy or a number that turns a blind eye to the efforts of a kid who’s anything but a deadbeat? What conclusions will she, or her parents, draw from the word unsatisfactory or the word acceptable? I can only give a number that designates a word; I cannot put the word into a sentence.

Though I approach the process with as much care and diligence as I can, vowing to myself that I will never allow skepticism to be a cover for shoddiness, I resent the chore deeply. I see it as part and parcel of the way in which “the school of the twenty-first century” is continually trying to mask the ambiguities of evaluating student performance by a pretense of rigorous objectivity. In English classes, for example, we avoid assigning an “arbitrary” grade for a piece of writing by constructing a “scoring rubric” of roughly ten criteria and assigning ten no less arbitrary scores to each, adding them up to achieve a grand total of subjectivity that is undoubtedly as solid as a Freddie Mac mortgage or a Miss America scoring card.

Even more I resent the way in which our jobs are increasingly dictated by the tools we employ. Form doesn’t follow function; form dictates function. I don’t want to sound dogmatic or, worse, ungrateful. Without a doubt, the PowerSchool program, once mastered, offers a more efficient way of recording grades than I’ve ever encountered. Every time you add a grade to the roster, the student’s average for the marking period is automatically computed and displayed. The end-of-marking-period all-nighter with a roped-in spouse doing backup duty on a calculator has mercifully gone the way of the mimeograph blues. But digital technology abhors a vacuum even more than nature does; it insists on reinvesting whatever time it saves, and it insists on doing so according to its own agenda. The purchaser’s need to justify the cost of the technology also plays a part. If a school system invests money in a sophisticated computer program that includes a feature for calculating the daily growth rate of a user’s moustache, then don’t we owe it to the taxpayers to see that every man, woman, and child capable of growing a moustache begins doing so at once?

The first time I try to do my productivity rubric it takes me several hours. I have roughly eighty students and five criteria, which means four hundred separate considerations and data entries. Times eight, that comes to thirty-two hundred by the close of the year; I try not to think too much about that. There are few people still left at school on a Friday afternoon, but I have received a good tutorial in advance. I should note that I never find myself floundering with a computer task because someone has handed it off with an attitude of sink or swim. But somewhere in the inner sanctums of the school’s IT system, or in the empyrean of cloud computing, or perhaps in the domain of PowerSchool itself, there resides a spore of latent indignation. Suddenly my screen is taken over by red headlines accusing me of things I’m not sure I even understand. The launching of a North Korean nuclear warhead could hardly produce a more alarmist screen. I’m unable to give a precise account of the wording because my screen goes black before I can read it a second time. Fearing that one inadvertent keystroke may have caused a digital meltdown, I run for the English teacher in the room next door, who is also working late, and ask for her help. She is a compassionate, careful woman who teaches both Advanced Placement and remedial-level English with the gentle hand that each requires, and I can tell that my stress is causing stress for her. I can also tell that she is doing her best to avoid any insinuation of stupidity on my part when she asks, “As you were going along, did you happen to hit save?”

Not once. I feared that saving before I could double-check my entries would lock in mistakes that I might not be able to change, a foolish notion perhaps, though not inconsistent with what I’ve seen so far of the system’s potential for capricious finality. As for the Armageddon screen display, it strikes my colleague as nothing more than what these machines will sometimes do. Every so often a gargantuan gorilla will seize a woman in his paw and climb to the top of the Empire State Building— just the nature of the beast, I guess, no different from the way that an exhausted human being overcome by a sense of futility will sometimes break down and sob. I will do that only once in the entire school year, and I keep myself under control until my colleague leaves the room. Anyone who stops in thereafter might wonder which member of my family has died. But there are no casualties to speak of beyond the loss of an hour or two with my wife and the jettisoning of a few quaint intentions. I entered the scores first in my paper grade book, so it seems I’ve “saved” them after all. I’ll find some other use for the fancy note cards.
[ibid, pp. 87-89]
Health IT grousing certainly has its counterparts in other domains. Easy to forget that.

This (below) is also worth noting. In the wake of his being out sick for an extended period of serious illness (PNU), Garret is given a teacher's aide to help him work his way back up to speed:
It’s remarkable how much smoother things go with a competent assistant. Some teachers have the benefit of an aide, though strictly speaking the aide is often not the teacher’s but a particular kid’s. Which is to say that the need for an aide is usually predicated on a handicapping condition in a student, not by the limits of what one human being with two hands and two feet can accomplish in a room full of twenty to thirty kids. It might surprise you, though it shouldn’t, that teachers are among the few professionals with no assistants. Think of a doctor without a nurse or a receptionist, a lawyer without a law clerk, a chef without a prep cook, even a clergyperson without an acolyte or deacon. Plumbers and electricians routinely have helpers. Rock musicians have guitar techs. Golfers have caddies. So much for the important professions. A teacher in charge of the educational development of fifty to a hundred diverse and needy human beings is routinely on his or her own. [ibid, pp. 226-227]
I could not recommend this book more highly. I posted more excerpts on one of my other blogs.

So, I seem to have sort of a study group core connect-the-dots "seminar curriculum" accruing:

By no means exhaustive. But, IMHO, useful for probing and synergizing the salient elements of effective education for a "Just Culture/Talking Stick" healthcare workforce.

I've not given any prior play to Daniel Pink's book "Drive," so I will cite from the Recap Summary:
Drive: The Recap 
This book has covered a lot of ground— and you might not be able to instantly recall everything in it. So here you’ll find three different summaries of Drive. Think of it as your talking points, refresher course, or memory jogger. 

Carrots & sticks are so last century. Drive says for 21st century work, we need to upgrade to autonomy, mastery & purpose. 

When it comes to motivation, there’s a gap between what science knows and what business does. Our current business operating system— which is built around external, carrot-and-stick motivators— doesn’t work and often does harm. We need an upgrade. And the science shows the way. This new approach has three essential elements: (1) Autonomy— the desire to direct our own lives; (2) Mastery— the urge to make progress and get better at something that matters; and (3) Purpose— the yearning to do what we do in the service of something larger than ourselves...
Chapter 4. Autonomy
Our “default setting” is to be autonomous and self-directed. Unfortunately, circumstances— including outdated notions of “management”— often conspire to change that default setting and turn us from Type I to Type X. To encourage Type I behavior, and the high performance it enables, the first requirement is autonomy. People need autonomy over task (what they do), time (when they do it), team (who they do it with), and technique (how they do it). Organizations that have found inventive, sometimes radical, ways to boost autonomy are outperforming their competitors.

Chapter 5. Mastery
While Motivation 2.0 required compliance, Motivation 3.0 demands engagement. Only engagement can produce mastery— becoming better at something that matters. And the pursuit of mastery, an important but often dormant part of our third drive, has become essential to making one’s way in the economy. Indeed, making progress in one’s work turns out to be the single most motivating aspect of many jobs. Mastery begins with “flow”— optimal experiences when the challenges we face are exquisitely matched to our abilities. Smart workplaces therefore supplement day-to-day activities with “Goldilocks tasks”— not too hard and not too easy. But mastery also abides by three peculiar rules. Mastery is a mindset: It requires the capacity to see your abilities not as finite, but as infinitely improvable. Mastery is a pain: It demands effort, grit, and deliberate practice. And mastery is an asymptote: It’s impossible to fully realize, which makes it simultaneously frustrating and alluring.
Chapter 6. Purpose
Humans, by their nature, seek purpose— to make a contribution and to be part of a cause greater and more enduring than themselves. But traditional businesses have long considered purpose ornamental— a perfectly nice accessory, so long as it didn’t get in the way of the important things. But that’s changing— thanks in part to the rising tide of aging baby boomers reckoning with their own mortality. In Motivation 3.0, purpose maximization is taking its place alongside profit maximization as an aspiration and a guiding principle. Within organizations, this new “purpose motive” is expressing itself in three ways: in goals that use profit to reach purpose; in words that emphasize more than self-interest; and in policies that allow people to pursue purpose on their own terms. This move to accompany profit maximization with purpose maximization has the potential to rejuvenate our businesses and remake our world.

Pink, Daniel H. (2011-04-05). Drive: The Surprising Truth About What Motivates Us (Kindle Locations 2737-2798). Penguin Group US. Kindle Edition.

Much of alternative medicine originated with a “lone genius” who had an epiphany, thought he had discovered something no one had ever noticed before, extrapolated from a single observation to construct an elaborate theory that promised to explain all or most human ills, and began treating patients without any attempt to test his hypotheses using the scientific method. Some of them were uneducated laymen, others were scientifically trained medical doctors who should have known better.
From my requisite daily priority blog-surfing stops. Good stuff also at one of my other destinations, The Neurologica Blog. And this item, below, is worth noting:
INTEROPERABILITY SHOULD HAVE COME FIRST: A leading health policy voice in Congress said this Monday that the nation may have put the cart before the horse when it comes to the exchange of electronic health information. Rep. Mike Burgess, a physician, noted that federal health laws and regulations placed an emphasis on providers adopting EHRs, while putting off until later the EHR systems’ exchange of patient information. “I don’t know if the focus being on meaningful use originally, maybe that focus should have been on interoperability, and the meaningful use stuff come later,” Burgess said at ONC’s Consumer Health IT Summit. He reiterated calls for wiser use of meaningful use dollars — almost $25 billion have been spent to date. Burgess said after his 10-minute talk that the House Energy and Commerce Committee, on which he is vice chair of the health subcommittee, is looking at changes to the meaningful use program as part of its 21st Century Cures work. “What that would look like is still under discussion,” he said.
From Healthcare Dive:
Part of what sucks the value out of EMRs is the reality that providers can't share data with one another. Free, compatible data flow from doctors to hospitals to other health facilities is still at a primitive stage. That's the case despite demands from policymakers that EMRs become "interoperable," a nice way of asking that vendors drop the walls forcing providers to use their product and their product only.
Yeah, but it continues to be the prevailing business case that "Opacity (+barriers to entry) = Margin." Efficient Markets Hypothesis 101.

More to come...

Friday, September 12, 2014

"The Fallacy of Value-Based Health Care" - Margalit Gur-Arie

Margalit Gur-Arie writes the ever-astute ON HEALTH CARE TECHNOLOGY blog. You would do well to bookmark it. Cross-posted with permission.

Value-based health care is antithetic to patient-centered care. Value-based health care is also diametrically opposed to excellence, transparency and competitive markets. And value-based health care is a shrewdly selected and disingenuously applied misnomer. Value-based pricing is not a health-care innovation. Value-based pricing is why a plastic cup filled with tepid beer costs $8 at the ballpark, why a pack of gum costs $2.50 at the airport and why an Under Armour pair of socks costs $15. Value-based pricing is based on manipulating customer perceptions and emotions, lack of sophistication, imposed shortages and limitations. Finally, value-based prices are always higher than the alternative cost-based prices, and profitability can be improved in spite of lower sales volumes.

Health care pricing is currently a smoldering mixture of ill-conceived cost-based pricing with twisted value-based pricing components. For simplicity purposes, let’s examine the pricing of physician services. As for all health care, the pricing of physician services is driven by Medicare. The methodology is neither cost-based nor value-based and simultaneously it is both. How so? Medicare fees are based on relative value units, which are basically coefficients for calculating the cost of providing various services in various practices, of various types and specialties. The price, which is also the cost since it includes physician take home compensation, is calculated by plugging in a dollar value, called conversion factor. The conversion factor, which is supposed to represent costs, is not in any way related to actual production costs, but instead it is calculated so the total cost of physician services will not exceed the Medicare budget for these services. Buried in this complex pricing exercise is a value-based component. A committee of physicians gets to decide the requisite amount of physician effort, skills and education, for each service. Whereas in other markets the value decision hinges on buyer perceptions, in health care it is masquerading as cost.
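The arithmetic described above can be sketched roughly as follows. The standard Medicare physician fee schedule formula sums the three RVU components (work, practice expense, malpractice), each adjusted by a geographic index (GPCI), and multiplies by the conversion factor; the specific numbers below are illustrative only, not any actual CPT code's values.

```python
def medicare_fee(work_rvu, pe_rvu, mp_rvu, cf, gpci=(1.0, 1.0, 1.0)):
    """Medicare physician fee schedule formula (illustrative inputs).

    Payment = (work RVU x work GPCI
               + practice-expense RVU x PE GPCI
               + malpractice RVU x MP GPCI) x conversion factor
    """
    gpci_work, gpci_pe, gpci_mp = gpci
    return (work_rvu * gpci_work
            + pe_rvu * gpci_pe
            + mp_rvu * gpci_mp) * cf

# Illustrative values: a mid-level office visit might carry roughly
# these relative weights; the conversion factor is a made-up dollar figure.
fee = medicare_fee(work_rvu=0.97, pe_rvu=1.02, mp_rvu=0.07, cf=35.82)
print(round(fee, 2))  # -> 73.79
```

Note where the "value" judgment hides: the RVU weights are set by committee, and the conversion factor is back-solved to fit the Medicare budget, exactly as the essay describes. Nothing in the formula reflects an actual production cost.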

The commercial insurance market adds a more familiar layer of complexity to the already convoluted Medicare fee schedule baseline. Unlike Medicare fees, which are nonnegotiable, private payers will engage in value-based negotiations with larger physician groups and health systems that employ them. Monopolistic health systems in a given geographical area can pretty much charge whatever the market can bear, just like the beer vendor at your favorite ballpark does, and brand name institutions get to flex their medical market muscles no differently than Under Armour does for socks. This is value-based pricing at its best. Small practices have of course no negotiation power in the insurer market, but as shortages of physician time and availability begin to emerge, a direct to consumer concierge market is being created, providing a new venue for independent physicians, primary care in particular, to move to a more profitable value-based pricing model.

Unsurprisingly this entire scheme is not working very well for any of the parties involved, except private insurers who thrive on complexity and the associated waste of resources. Upon what must have been a very careful examination of the payment system, Medicare concluded that it does not wish to pay physicians for services that fail to lower Medicare expenditures, and Medicare named this new payment strategy value-based health care, not because it has anything in common with value-based pricing, but because it sounds good. Another frequently used term in health care is value-based purchasing, which is attempting to inject the notion of quality as the limiting factor for cost containment. However, since Medicare is de facto setting the prices for its purchases, there is really no material difference between these two terms.

We need to be very clear here that value-based health care is not the same as quality-based health care. The latter means that physicians provide the best care they know how for their patients, while the former means that physicians provide good health care for the buck. To illustrate this innovative way of thinking, let’s look at the newest carrots and sticks initiative, scheduled to take effect for very large medical groups (over 100 physicians) in 2015. Below is a table that summarizes the incentives and penalties that will be applied through the new Medicare Value-based Payment Modifier (pdf).

There are several things to note here. First, if your patients receive excellent care and have excellent outcomes, you will receive no perks if that excellence involves expensive specialty and inpatient services, whether those are the accepted standard of care or not. You would actually be better off financially if you took it down a notch and provided mediocre care on the cheap. The second thing to notice is that you will not get penalized for providing horrendously subpar care, if you do that without wasting Medicare’s money.

Another intriguing aspect of this new program is that you have no idea how big the incentives, if any, are going to be. The upside numbers in the table are not percentages. They are multipliers for the x factor. The x factor is calculated by first figuring out the total amount of penalties, and that amount is then divided among those who are due incentives. If there are few penalties, there will be meager incentives. Lastly, those asterisks next to the upside numbers, indicate that additional incentives (one more x factor) are available to those who care for Medicare patients with a risk score in the top 25 percent of all risk scores.

As with everything Medicare does, this too is a zero sum game. For there to be winners, there must be losers. One is compelled to wonder how pitting physician groups against one another advances collaboration, dissemination of best practices, or sharing of information, and how it benefits patients. Leaving philosophical questions aside, the optimal strategy for obtaining incentives seems to be transition to a Medicare Advantage type of thinking: get and keep the healthiest possible patients, and make sure you regularly code every remotely plausible disease in their chart. Stay away from those dually eligible for Medicare and Medicaid, the very frail, the lonely, the infirm, or the very old, and don’t be tempted to see a random person who is in a pinch, because there is always the chance that he or she will be attributed to your panel following some hospitalization or other misfortune.

The Value-based Payment Modifier is for beginners. It is just the training wheels for the full-fledged risk assumption that Medicare is seeking from physicians and health care delivery systems in general. The grand idea is not much different than providing an aggregated and risk adjusted defined contribution for a group of assigned members, and having the health care delivery system absorb budget overruns, or keep the change if they come in under budget. There is great value in such a system for Medicare and commercial payers certain to follow in its footsteps, and perhaps this is why they decided to call it value-based. Ironically, the equally savvy health care systems are fighting back precisely by building the capacity to create a true value-based pricing model for their services through consolidation, monopolies, corralled customers, artificial shortages, confusing marketing, and diminished physicians.

It is difficult to lay blame at the feet of health systems for these seemingly predatory practices, because transition to a perpetual volume-reducing health care system is by definition unsustainable. The infrastructure and resources needed to satisfy all the strategizing, optimizing, counting and measuring activities required for value-based health care, whether the modest payment modifier or the grown up accountable care organization (ACO), are fixed costs added to health system expenses year after year. However, the incentives or shared-savings are temporary at best, because at some point volumes cannot be reduced further without actually killing people. Either way, in the near future, and for already frugal systems, in the present, all incentives will dry up leaving only massive outlays for avoiding penalties coupled with increased risk for malpractice suits.

And as these titans are clashing high above our little heads, two outcomes are certain: individual physicians will be paid less and individual patients will be paying more for fewer services. This is how we move from volume to value. Less volume for us, more value for them.

Below, apropos?

BREAKING: Merger creates largest nonprofit system in Illinois
By Katie Bo Williams | September 12, 2014

...This merger is a big deal, according to Jordan Shields, vice president at Juniper Advisory, which provides hospital M&A services. The deal "is going to shake people," Shields said. "What this does is change the gravity in the metropolitan area."

Still, despite the scale of the merger, the combined system will still control only 25% of the local market, which the Chicago Tribune calls "fragmented" compared to other metropolitan areas. Perhaps its biggest influence on the Illinois healthcare market will be a fresh spate of mergers in the area. Michael Sachs, chairman of Skokie-based healthcare consulting firm Sg2, expects a rise in M&A activity following the deal:

"This will probably trigger another set of consolidations; it's bound to occur," said Sachs.

The big win for Advocate here—beyond the normal benefits of merging, like reducing costs through coordinated care and boosting buying power with suppliers—is the incorporation of NorthShore's patient base into its business model. The North Shore suburbs are a wealthy area and as such, have a lot of patients with commercial insurance, as opposed to less-lucrative Medicare or Medicaid coverage. Moreover, the four-hospital system has a reputation as an efficiently-run operation with a strong balance sheet.

There will likely be layoffs as the system reduces redundancies.
"Nonprofit." Right.