
Sunday, April 13, 2014

CQM am•phib•o•ly (æmˈfɪb ə li)


I've been reviewing the Meaningful Use EP and EH Stage 2 CQMs (Clinical Quality Measures). This one jumped right out on page one, first for the potential Boolean amphiboly ("and" vs "or" vagueness) and then for the fairly extensive internal EHR coding logic (or user-initiated report query logic) it would take to calculate and submit this one measure (of 64).

"Alcohol and Other Drug Dependence"? Taken literally (?), you could see it interpreted as -- to get into the numerator -- I have to have both a reported drinking problem and I admittedly habitually smoke pot, etc. What they really intend is "Alcohol or Other Drug Dependence," right? e.g., see the denominator column on the right: "alcohol or other drug dependency."

Boolean amphiboly goes to the fact that computers are stupid, if consistent. They follow their Boolean logic / Order of Ops rules reflexively. They don't "get" what you "really intended."

Query a database for "...where category = 'Cabernet' .and. volume = '750' .and. price = 9.99 .and. price = 10.99..."

You get zero items in the search results. You really meant ".or." as it goes to price. You wanted to see a list of all 750 ml bottles of Cab priced at exactly either $9.99 or $10.99.
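The same blunder, sketched in Python against a hypothetical little inventory (made-up records, purely for illustration):

# Hypothetical wine inventory, for illustration only.
inventory = [
    {"category": "Cabernet", "volume_ml": 750, "price": 9.99},
    {"category": "Cabernet", "volume_ml": 750, "price": 10.99},
    {"category": "Merlot",   "volume_ml": 750, "price": 9.99},
]

# Literal translation of the query above: no single bottle can be priced at
# both $9.99 AND $10.99, so this always comes back empty.
wrong = [r for r in inventory
         if r["category"] == "Cabernet" and r["volume_ml"] == 750
         and r["price"] == 9.99 and r["price"] == 10.99]

# What was actually intended: either price will do.
right = [r for r in inventory
         if r["category"] == "Cabernet" and r["volume_ml"] == 750
         and (r["price"] == 9.99 or r["price"] == 10.99)]

print(len(wrong), len(right))   # 0 2

The machine happily executes the first version; it just doesn't return what you meant.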

I learned my amphiboly lessons the hard way, writing computational code in a forensic-level radiation lab in Oak Ridge in the '80s, where the consequences of error were immediate and severe.

Sloppy use of language leads to sloppy thinking (and the other way around). Boolean amphiboly remains a major reason we have bugs in software.

Below, SOAPware addresses NQF 0004.


"Chemical dependency issues." Alcohol "and" other psychoactive substances all rolled up into one "dependency" syndrome -- via ".or." logic under the hood.

It'd be one thing if CMS or NIST were delivering standardized plug 'n play "black box" software functions to the EHR vendors to be embedded in their respective products, but, given the lack of a standard data dictionary and hundreds of differing RDBMS schemas, that cannot be the case. Everyone has to code their own proprietary CQM computational and reporting functionalities. All we can do is hope that the 2014 CEHRT process verifies the accuracy of the CQM I/O for each certified product.

You comfortable with that?

THE MEDIAN IS NOT THE MESSAGE

Couldn't help the play on the late Stephen Jay Gould's essay title.

NQF 0495



The "median," recall, is the midpoint of a distribution that has been sorted and ranked from smallest to largest value. to wit,
1  2  3  4  5  6  7  8  9
The median is 5, the precise midpoint, with four values on either side (as is the arithmetic average in this case: 45/9=5). Where the array contains an even number of items,
1  2  3  4  5  6  7  8  9  10
The median here is 5.5 (you "interpolate" by averaging the two central values, 5 and 6). Again, in this case, the arithmetic average is the same as the median, 5.5. For this discussion, we could think of these as "hours to admission from entry to the ER." Probably wouldn't be far off, lol...
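In Python, the standard library does the midpoint-and-interpolation work for you:

from statistics import mean, median

odd  = [1, 2, 3, 4, 5, 6, 7, 8, 9]
even = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print(median(odd), mean(odd))     # 5 5
print(median(even), mean(even))   # 5.5 5.5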

We use medians in lieu of averages where the "weighting"/biasing effect of extreme high or low values would misleadingly skew the simple average "central tendency" measure.

E.g., say employer I.B. Gready has 9 workers, each of whom is paid $10,000 per year. He pays himself $910,000 a year. Total payroll, then, is a million dollars. "Average" salary is an impressive $100k ($1M/10).

Median salary, though, is still $10k, representative (exactly in this case) of what everyone but I.B. Gready earns.
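The same arithmetic in code:

from statistics import mean, median

payroll = [10_000] * 9 + [910_000]   # nine workers, plus I.B. Gready

print(sum(payroll))      # 1000000 -- total payroll
print(mean(payroll))     # 100000  -- the "impressive" average
print(median(payroll))   # 10000.0 -- what everyone but the boss earns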

Using a median for this CQM is no doubt appropriate, but I question the utility of reporting only the median. From an Ops improvement perspective, I'd want to see the entire distribution from low to high, the range, the skew, the kurtosis ("fat" or "thin" tails). I'd want to be looking for "special cause" variabilities -- correlations indicative of contingent process suboptimalities. Two hospitals might report quite similar medians, but one experiences extreme volatility whereas the other shows only a tight, relatively predictable variability (the "sigma"). You'd want to know why.
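For instance (made-up "ER arrival to admission" hours for two hypothetical hospitals, just to illustrate the point):

from statistics import median, pstdev

hospital_a = [2.5, 2.8, 3.0, 3.1, 3.3, 3.4, 3.6]    # tight, predictable
hospital_b = [0.5, 1.0, 2.0, 3.1, 6.0, 9.5, 14.0]   # same ballpark median, wild tails

for name, hours in (("A", hospital_a), ("B", hospital_b)):
    print(name,
          "median:", median(hours),
          "range:", round(max(hours) - min(hours), 1),
          "sigma:", round(pstdev(hours), 2))

Both report a median of 3.1 hours, but hospital B's sigma is more than ten times hospital A's. The median alone hides exactly the variation a QI team needs to chase down.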

Simply reporting medians tells us way less than we need to know for QI. Again, the coding logic for these measures can be rather complex. Are we getting actionable value for the effort? If not, we're in Quadrant Three ("Urgent but not Important").

CMS:
Measuring and reporting CQMs helps to ensure that our health care system is delivering effective, safe, efficient, patient-centered, equitable, and timely care.
One hopes. How about some outcomes measures? I know that these CQMs are "proxies" -- it's assumed that if you minimally do X, Y, Z, A, B, C, D, E, and F and report on them to the feds, better patient outcomes will eventually follow.

A leap, IMO.

BTW, eClinicalWorks has a nice tabulation of the CQMs here.
__

JUST IN

Saw this in a LinkedIn email.


Expat gigs in the Middle East are gravy postings. Tax-free salary.
___

More to come...
