Search the KHIT Blog

Tuesday, July 28, 2015

Healthcare shards update. More sand in the gears.

Mixing a couple of metaphors, I know.

Updating my ongoing personal story.

Finally had my endo-rectal coil pelvic/prostate MRI, at Muir Medical Center in Walnut Creek on July 9th ("Bay Medical Imaging," yet another subcontractor).

Boy, that was fun. How far you gonna keep pushing that probe? You gonna floss my teeth with that thing?

Lordy. When they inflated the balloon at the tip of the probe post-insertion, things got really Special.

My Stanford 2nd-opinion consulting radiation oncologist, who'd ordered the study, called me the next day to apprise me of some net good news, on a number of counts. He was pleased with what he called "great images." Clear absence of evidence of any mets, one solid tumor contained inside the prostate capsule, left side.

We joked about the MRI px. "Yeah, I'm never sure how much I should tell patients about what to expect with that one."

He gave me names of five experienced docs he trusted who specialize in one or more of my tx options; one at Stanford, one at UCSF, one at Alta Bates/Sutter, and two at Diablo Valley Oncology. His NP then called me a couple of days later. She'd reviewed my chart, and assured me that the outcomes data for whatever I chose -- permanent low-dose brachytherapy, inpatient high-dose removable brachytherapy, or IMRT (Intensity Modulated Radiation Therapy) -- were "statistically equivalent," with long-term "cure" rates running in the high 90%'s.

Nuke that puppy.

I'm opting for "Calypso" IMRT, and on the 17th I had an initial consult with the radiation oncologist I'd decided on (the doc at UCSF couldn't even see me until September).

Really like this doctor. Feeling very comfortable with the choice of both doctor and treatment. I have a dear friend who had this same tx a year ago, with a great outcome.

The doc said he would convey the px order for the Calypso beacon implants to my urologist the following Monday. No time to waste; it's a nine-week commitment of M-F low-dose focused-beam rad tx. I may end up missing most of this year's Health 2.0 Conference.

The following Monday I called Urology. No, they had nothing yet.

I called again several times through the week. Left messages. No one got back to me until Friday.

"We've submitted to BCBS for pre-authorization. We can't schedule you until we have it."

I called the radiation oncology clinic. For one thing, no point in getting the Calypso implants if BCBS is gonna deny my treatment. What can you tell me?

Same story. I was told they also had to submit for pre-auth for the IMRT tx. "It could take 7 to 10 business days. We will let you know."

[Expletive deleted]

Given my prior initial MRI denial dustup with BCBS, I was not a happy camper at this news. Anxious to get this thing moving. The relative aggressiveness of my particular adenocarcinoma is in an indeterminate range, based on all of the indices to date (including the OncoType dx genetic assay).

Calypso IMRT is regarded as a "premium" tx option in terms of cost -- which is proving extremely difficult to pin down with any precision (that whole healthcare pricing opacity thing). I worried that BCBS would again throw sand in my gears, interposing themselves between me and my doc.

Which, of course, could also have the perverse effect of making my condition more expensive to treat should significant delay result in mets getting loose.

Then, yesterday, I got calls from both the urologist's office and the radiation oncology clinic.

A 180, from both calls.

RadOnco: "No, we don't need a pre-auth. You have a PPO plan" (meaning, I guess, that my insurance was accepted). The urology office staffer also said Calypso beacons implant px pre-auth was not necessary after all. Given that I've been seeing their doc for more than a year, and he's billed and been paid by BCBS multiple times, I already knew that my insurance was accepted there, but their initial assertion that a pre-auth was necessary for this procedure still has me a bit puzzled. A relatively minor error, but, still, more sand in the gears.

I am now assured that the Calypso beacons are on order and I will be scheduled for the implant px forthwith.

Thrilled to know that it's a "trans-rectal" px akin to the biopsy.

Once I heal up from that (about a week), I will first undergo a Calypso "planning/targeting alignment dry run," after which the nuking festivities will begin as soon as I can be scheduled.

One aspect of this experience, which will effectively be part of my "co-pay," will be the roughly $400 in gas I'm gonna spend schlepping to and from the clinic daily over nine weeks.


They can't do my Calypso beacons implant px until August 11th, and I have to go to Berkeley to get it done, not the Concord office. This means I am unlikely to begin my rad tx until late August, a good 6-7 weeks after my Muir endo-rectal coil MRI.



The Muir ER doc contractor group finally got paid. Only took 3 months. No wonder you're all going broke. Your revenue cycle processes are abysmal.

They billed $623. BCBS gave them $414.88. My balance due (which I paid immediately) was $46.09. Yeah, that's a "10% co-pay" under my plan [$46.09/($414.88+$46.09)]. Total net charge, $460.97 (payor reimbursement plus my co-pay). I guess they're "in-network" after all (there had been some doubt voiced initially).

Interesting, the varying Radiation Oncology charges ("new patient" visit):

My initial rad onc referral visit (less than an hour) yielded a charge of $694.35. BCBS allowed $318.85. My piece was $35.43, again, 10%.

My 2nd-opinion visit at Stanford: billed @ $425, BCBS paying $235.44, my co-pay cut $23.54 (9.09%, go figure).

Other random stuff to date:

This was weird: the endo-rectal coil MRI, billed @ $2,925.00. BCBS paid them $96.00 (that's not a typo: they paid only 3.3% of the charge), my cut was $9.60 (note that $9.60/($96+$9.60) is 9.1%, not 10%).
You have to wonder whether these exorbitant "chargemaster" type gross billings are really just accounting fictions designed to show for tax purposes that they're "losing money."
Finally, this one is another head-scratcher. I had to go get another PSA test blood draw on the 17th. LabCorp makes you leave a credit card number for balances not covered. They gave me a printout saying they would be billing BCBS $192.00 for this single parameter assay.

Well, according to my BCBS subscriber portal, LabCorp got $19.83 for the test. My cut is 21 cents (a 1.05% co-pay, go figure). I checked my AMEX account. No "balance billing" in my current activity. I have not the slightest doubt that LabCorp Accounts Receivable will spend about ten bucks (postage, paper, processing labor) making sure they collect their $0.21 from me. I am reminded of my years in banking.
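For anyone who wants to check the arithmetic behind these EOBs, the effective co-pay percentage is just my share of the total net (allowed) charge, i.e., patient payment divided by payer payment plus patient payment. A quick sketch, using the figures from the bills above:

```python
def copay_pct(payer_paid, patient_paid):
    """Patient's share of the total net charge (payer + patient), in percent."""
    return 100 * patient_paid / (payer_paid + patient_paid)

# Figures from the EOBs discussed above:
print(round(copay_pct(414.88, 46.09), 1))   # Muir ER group -> 10.0
print(round(copay_pct(96.00, 9.60), 1))     # endo-rectal coil MRI -> 9.1
print(round(copay_pct(235.44, 23.54), 2))   # Stanford 2nd opinion -> 9.09
```

Same formula, three different "10%" results. Sand in the gears, indeed.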

Before this is all over, I will have blown way past my BCBS Max OoP for the year. Should be interesting to watch how all of it shakes out. I fully expect more bozo EOB stuff.

Apropos, I will have to cite this best-selling book by Steven Brill. Just downloaded it. I've known of it for a while but hadn't bought it until now. I know there's gotta be good stuff in there pertaining to our absurd billing practices.

For now (and on the topic), an excerpt from an interesting segment of "All In with Chris Hayes" on MSNBC last week (talking of the hapless guy who got a $153,000 bill for treatment of a rattlesnake bite):
HAYES: OK, Pro tip here: no selfies with rattlesnakes. You want to do whatever you can to avoid being near them as Mr. Fassler learned.

Now the San Diego local was lucky enough to have medical care. But the drugs and care that saved his life did come with a shockingly hefty price. After first reporting on the rattlesnake selfie snafu, KGTV reporter Dan Hagarty tweeted a picture of the hospital bill he says Fassler sent him. His bill, a grand total of $153,000. The majority of that, $83,000, coming from pharmacy costs, meaning the cost of drugs.

The bill of over $150,000, due July 27, 2015, less than three weeks after he left the hospital.
Now, we don't know if Fassler has insurance. It is more than likely that even if he did, it wouldn't be asked to pay the full amount. But the $153,000 snakebite bill is a window into what makes the U.S. health care system so distinct and so dysfunctional, the insane way the U.S. health care system prices drugs and services.

Joining me now, Dr. Zeke Emanuel, MSNBC contributor, chair of medical ethics and health policy at the University of Pennsylvania.

All right, what is going on here with this bill? How do you, as someone who has spent a lot of time studying this system, make sense of this?

EZEKIEL EMANUEL, UNIVERSITY OF PENNSYLVANIA: This is just funny money, it's what in the field is called charges. It's what the hospital charges. They're totally made up. They bear no relationship to reality. They bear no relationship to how much effort is needed to care for a patient.

I would note there, Chris, that $40,000 roughly of that bill is for five nights in the hospital, $8,000 a night. You could virtually rent a Caribbean island for that kind of price.
HAYES: You can stay at the nicest place in all of America.

EMANUEL: The whole world.

HAYES: But then explain to me why -- I have to...

EMANUEL: So we call those charges. And then there's costs, which are really what's going to be paid. And of course, there is no such thing as a cost, because commercial insurers like Aetna or United, they pay one rate, Medicare pays a different rate, typically lower, Medicaid pays a different rate, typically even lower than that.

The only people who pay that kind of bill are people paying full price, Chinese billionaires or oil sheikhs, really no relationship to reality.

The problem is, the guys who run the hospital, they don't know how much of that really costs them in terms of effort, activity and supplies. They're just guestimating it. And then they rack up the numbers. And they really go into a bathroom and negotiate with the insurers about the prices and Medicare tells them what they're going to pay.

HAYES: So here -- you have put your finger on something that drives me insane, just personally, in my own life, right. I consider myself a fairly smart individual. I consider myself relevantly erudite.

EMANUEL: You don't have to prove it, Chris, we agree.

HAYES: Well, let me tell you this, I cannot make sense out of hospital bills. For months and months and months after my second child was born, things would show up in the mail and it would be this inscrutable nonsense spreadsheet that I would devote all of my cognitive capacity to and come up completely empty about what the heck was going on, who was charging who, who was paying for what. It's nonsense.

EMANUEL: Chris, if it makes you feel any better, I've been getting some physical therapy for a problem I have with plantar fasciitis. I get virtually the same things week after week, and the bill varies by hundreds of dollars, and I can't make any sense out of it.

And I'm a kind of expert in this field. So, they bear no relationship to reality.

And, look, the hope is, my hope certainly, is that as we move further into the reform effort, as we pay doctors and hospitals differently, these bills, this kind of funny money is going to go away.

The most progressive places in the country -- ironically not far from where this guy got treated for his rattlesnake -- they actually have done what are called time and motion studies, they know what it actually costs to deliver care and they can tell you that. And then they can put all the price together and give you actually a price that reflects the actual cost of caring for someone.

Now that's not to say the actual cost of caring for someone is any more rational.
One of the points I like to make is $153,000 for five days -- five nights in the hospital, that is three times the average income in America, three times the yearly income of the average person in America. That is an insane amount of money. We don't -- I mean, they could have done all of that...
Shards and sand.


I'm now deep into Steve Brill's "America's Bitter Pill." It is excellent.

Jumping right out at me:

I USUALLY KEEP MYSELF OUT OF THE STORIES I WRITE, BUT THE ONLY way to tell this one is to start with the dream I had on the night of April 3, 2014.

Actually, I should start with the three hours before the dream, when I tried to fall asleep but couldn’t because of what I thought was my exploding heart.

THUMP. THUMP. THUMP. If I lay on my stomach it seemed to be pushing down through the mattress. If I turned over, it seemed to want to burst out of my chest.

When I pushed the button for the nurse, she told me there was nothing wrong. She even showed me how to read the screen of the machine monitoring my heart so I could see for myself that all was normal. But she said she understood. A lot of patients in my situation imagined something was going haywire with their hearts when it wasn’t. Everything was fine, she promised, and then gave me a sedative.

All might have looked normal on that monitor, but there was nothing fine about my heart. It had a time bomb appended to it. It could explode at any moment—tonight or three years from tonight—and kill me almost instantly. No heart attack. No stroke. I’d just be gone, having bled to death.

That’s what had brought me to the fourth-floor cardiac surgery unit at New York–Presbyterian Hospital. The next morning I was having open-heart surgery to fix something called an aortic aneurysm.

It’s a condition I had never heard of until a week before, when a routine checkup by my extraordinarily careful doctor had found it.

And that’s when everything changed.

Until then, my family and I had enjoyed great health. I hadn’t missed a day of work for illness in years. Instead, my view of the world of healthcare was pretty much centered on a special issue I had written for Time magazine a year before about the astronomical cost of care in the United States and the dysfunctions and abuses in our system that generated and protected those high prices.

For me, an MRI had been a symbol of profligate American healthcare—a high-tech profit machine that had become a bonanza for manufacturers such as General Electric and Siemens and for the hospitals and doctors who billed billions to patients for MRIs they might not have needed.

But now the MRI was the miraculous lifesaver that had found and taken a crystal clear picture of the bomb hiding in my chest. Now a surgeon was going to use that MRI blueprint to save my life...

Brill, Steven (2015-01-05). America's Bitter Pill: Money, Politics, Backroom Deals, and the Fight to Fix Our Broken Healthcare System (Kindle Locations 58-78). Random House Publishing Group. Kindle Edition.

The chargemaster bill for my MRI had been $1,950, which my insurance company knocked down to $294.

My doctor had sent me for the test because he became suspicious when he took my pulse during a routine checkup. However, most doctors aren’t skillful enough or cautious enough to weed out possible victims that way, nor is taking a pulse anything close to a foolproof way to find an aneurysm. So let’s suppose we spent $300, through private insurance coverage or Medicare and Medicaid, to test all 240 million American adults to see if they had aortic aneurysms growing in their chests. That would cost $72 billion.

Suppose we economized and tested everyone only every four years. That would average out to $18 billion a year. Yet it would potentially save more than three times as many Americans as were lost in the September 11, 2001, attacks, which we have spent hundreds of billions to prevent from happening again, even trillions if you count foreign wars. So, would an aortic aneurysm lobby, consisting perhaps (if I had not been saved) of my widow or children (and backed by the MRI lobby), be so unreasonable in demanding that the country spend the $18 billion?

Then again, wouldn’t it be crazy to spend all that money to find the fraction of a fraction of a percent of people—10,000 out of 240 million—who are susceptible to that, rather than spend it on general preventive care or on cancer research?

Put simply, money is a scarce healthcare resource. We have left it to Washington to allocate it based too often on who has the best lobby or the hottest fund-raising campaign. [ibid, Kindle Locations 7256-7268]
Yeah. Interesting, in light of my own imaging experiences of late. The book is replete with stories of other patients' maddening encounters with our ruinously expensive fragmented non-system. Highly recommended. Much more to cite as I complete reading it.
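Brill's back-of-the-envelope screening math is easy to replicate. The $300 per scan and 240 million adults are his assumptions, from the excerpt above:

```python
adults = 240_000_000   # Brill's rough count of U.S. adults
cost_per_scan = 300    # his assumed negotiated price per MRI, in dollars

cost_all_at_once = adults * cost_per_scan   # screen everyone once
cost_per_year = cost_all_at_once // 4       # spread over a 4-year cycle

print(f"${cost_all_at_once:,}")   # $72,000,000,000
print(f"${cost_per_year:,}")      # $18,000,000,000
```

The numbers check out; it's the policy question of whether to spend them that doesn't compute so cleanly.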

Stay tuned.

Also recommended, Mr. Brill's related Time article, Bitter Pill: Why Medical Bills Are Killing Us (pdf).

The Bill
Steven Brill on how health-care reform went wrong.


...The economic team felt that health care could use a good dose of market incentives. The Lambrew-DeParle view, on the other hand, was that health care is different: the complex nature of the relationship between patients and their health-care provider is so unlike ordinary economic transactions that it can be governed only through cost controls and complicated regulatory mechanisms. When the two sides argued, they weren’t just reflecting a difference in tactics or emphasis. Their disagreement was philosophical: each held a distinct view about the nature of the transactions that take place around medical care.

Brill sides with the DeParle camp. His solution for the health-care problem is to treat the industry like a regulated oligopoly: he believes in price controls and profit limits and strict regulations for those who work within the health-care world, restrictions that he almost certainly thinks would be inappropriate for other sectors of the economy. A patient, he explains at the beginning of his book, is not a rational consumer. That was the lesson he took from his own heart surgery. “In that moment of terror,” he writes, of blacking out after his surgery, “I was anything but the well-informed, tough customer with lots of options that a robust free market counts on. I was a puddle.”

But Brill spends very little time examining why he thinks this means that the market can’t have a big role in medicine, where most care is routine, not catastrophic. He just takes it for granted. And because he is not much engaged by the philosophical argument at the heart of the health-care debate, he can never really explain why someone involved in health-care reform might be unhappy with the direction that the Affordable Care Act ended up taking. He tells us who controlled the PowerPoint. But he can’t tell us why it mattered.

It is useful to read “America’s Bitter Pill” alongside David Goldhill’s “Catastrophic Care.” Goldhill covers much of the same ground. But for him the philosophical question—is health care different, or is it ultimately like any other resource?—is central. The Medicare program, for example, has a spectacularly high loss ratio: it pays out something like ninety-seven cents in benefits for every dollar it takes in. For Brill, that’s evidence of how well it works. He thinks Medicare is the most functional part of the health-care system. Goldhill is more skeptical. Perhaps the reason Medicare’s loss ratio is so high, he says, is that Medicare never says no to anything. The program’s annual spending has risen, in the past forty years, from eight billion to five hundred and eighty-five billion dollars. Maybe it ought to spend more money on administration so that it can promote competition among its suppliers and make disciplined decisions about what is and isn’t worth covering...
Not sure how much I agree with Gladwell's take on this book (which I have now finished, btw). He cites Goldhill. I did as well, years ago, in my post "Public Optional."
Then there's David Goldhill's thoughtful Atlantic Monthly essay "How American Health Care Killed My Father."
I’m a Democrat, and have long been concerned about America’s lack of a health safety net. But based on my own work experience, I also believe that unless we fix the problems at the foundation of our health system—largely problems of incentives—our reforms won’t do much good, and may do harm. To achieve maximum coverage at acceptable cost with acceptable quality, health care will need to become subject to the same forces that have boosted efficiency and value throughout the economy. We will need to reduce, rather than expand, the role of insurance; focus the government’s role exclusively on things that only government can do (protect the poor, cover us against true catastrophe, enforce safety standards, and ensure provider competition); overcome our addiction to Ponzi-scheme financing, hidden subsidies, manipulated prices, and undisclosed results; and rely more on ourselves, the consumers, as the ultimate guarantors of good service, reasonable prices, and sensible trade-offs between health-care spending and spending on all the other good things money can buy...
Six years later we continue to argue about the same stuff.


Large payor mergers have been in the news of late. Anthem + Cigna, and Aetna + Humana. Will the administrative system fragmentation actually get worse, at least in the near term -- a "near term" that may well extend a decade?

The "Big Five" (which includes United Healthcare) will soon become the "Big Three" (absent Justice Department denials on antitrust grounds).

Again, going back to my original "shards" post:
[N]otwithstanding all of recent years' progressive policy push (including those contained in the PPACA) toward "P4P" (Pay for Performance), "ACO's" (Accountable Care Organizations), "PCMHs" (Patient Centered Medical Homes), "patient-centered continuity of care," etc, the legacy FFS (Fee For Service) paradigm and the inertia of industry incumbency will not go quietly into the sunset. Moreover, I have to wonder whether, despite all the news in recent years of healthcare space "consolidations" and "M&A's" (mergers and acquisitions), the fragmentation is actually getting worse, not better. More shards strewn about, and sharper around the edges. Corporate acquisitions are driven by near-term profit potential, not by noble, altruistic notions of materially improved care delivery unwaveringly focused on patients...
What do you think?

No shortage of press reaction.
How merger mania will impact the healthcare industry
By Heather Caspi, July 29, 2015

Anthem’s announcement last week that it will acquire Cigna—on the tail of Aetna’s recent purchase of Humana—officially brings the health insurance industry’s “big 5” companies down to the “big 3,” just as insiders predicted.

While much of the discussion on payer and provider mergers has revolved around the race in the industry for size and scale, and the accompanying antitrust concerns, there’s more to the subject than these major points, says Frank Ingari, CEO of NaviNet. NaviNet is the nation’s largest healthcare communications network, connecting providers to health plans including Aetna, Cigna and UnitedHealthcare. Ingari sat down with Healthcare Dive this week to discuss what he sees ahead for the industry amid all of the “merger mania.”

One point Ingari notes is that it’s likely to be business as usual for quite some time. Based on past events, the regulatory review period for these mergers could be at least a year, he says. Even after that, it’s a lengthy process for such large companies to actually integrate their strategies and operations. In the meantime, they’re likely to call upon their components to continue using their current business models to deliver bottom line performance.

“It tends to be that changes take a lot longer than people realize,” Ingari says. “It could be four to five years before you see one of these mega-mergers operating as if it were one company.”...
Yes, what I've argued as a concern.

More to come...

Friday, July 24, 2015

The Lean Transformation illustrated in 7:39

Nicely done. Concise. Five simple guiding principles to internalize and deploy. A quick animated recap of some of the detail I witnessed during the June Lean Healthcare Transformation Summit in Dallas. Props to LEI. I was touting Lean methodology from the first post of this blog in May 2010.


This was very nice, especially coming from John Lynn.

Well, yeah, I have some strong opinions on core issues (e.g., "interoperabbable," "organizational culture," "clinical methodology," etc). But if you spend much time reading my numerous accrued posts on this blog, you find that I devote most of my space to excerpting and citing evidence consisting of the works of learned, skilled others, spanning the overlapping gamut from technology to clinical science and pedagogy to process QI to policy. I feel like I'm in perpetual graduate school. I don't get paid to do this. I continue to do it because it's important. Now personally traversing the "shards" of the healthcare system renders that importance all the more vivid to me these days.

Speaking of "opinions," I have started a new little Twitter thing called "Your Daily Donald™." A bit of diversionary sport. The jokes just write themselves. Yesterday's Daily Donald.

Oh, and I updated my drought page. We have a raging out-of-control 7,000+ acre forest/brush fire going on this morning just east of Napa. Too close to home.


From Dr. Carter's always-excellent EHR Science:
EHR Data Accuracy is Essential for Decision Support and Data Exchange
by JEROME CARTER on JULY 20, 2015

Current EHR and HIT thinking places significant value on immediate and downstream use of EHR data. The expected benefits of interoperability, clinical decision support, and data analytics all depend on accurate EHR data. Yet, somehow, data quality has not gotten the attention that it should. While clinical researchers are increasingly focused on improving phenotyping algorithms for EHR data extractions(1), there is much less focus on how EHR data collection and validation practices can improve data quality.

Data validation can occur in a number of ways. Basic validation techniques (e.g., missing data, spelling checks, correct formatting) are easy to do, and are simply good software engineering. For clinical systems, the next level up is range-checking for standard data elements. At this level, unreasonable values, such as temperatures of 200° F or blood pressures of 1200/80, are prevented. The highest and hardest level of checking for EHR data is that of “truth” – that is, assuring that the information in the chart that makes it past the first two levels of validation is factually correct. Diagnosis accuracy – the correspondence between the coded diagnosis (ICD or SNOMED) and the remaining chart data – is an example of the challenges inherent in assuring accurate EHR data...
Indeed. Data quality and software QA have been an interest of mine since the 80's. See my first Oak Ridge tech paper "Laboratory Software Applications Development: Quality Assurance Considerations" (pdf). See also my recent post "Personalized Medicine" and "Omics" -- HIT and QA considerations.

See as well my 2013 post "(404)^n, the upshot of dirty data."
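Carter's first two validation levels are straightforward to implement; it's the "truth" level that's hard. Here's a minimal sketch of presence and range checking, where the field names and plausible ranges are my own illustrative assumptions, not any real EHR schema:

```python
# Illustrative plausibility ranges for a vitals record (hypothetical fields).
PLAUSIBLE_RANGES = {
    "temp_f": (90.0, 110.0),   # rules out a 200 °F "temperature"
    "systolic_bp": (50, 260),  # rules out 1200/80
    "diastolic_bp": (20, 160),
}

def validate_vitals(record):
    """Return a list of validation errors for one vitals record."""
    errors = []
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:              # level 1: missing data
            errors.append(f"{field}: missing")
        elif not lo <= value <= hi:    # level 2: range check
            errors.append(f"{field}: {value} outside [{lo}, {hi}]")
    return errors

# Flags temp_f and systolic_bp; diastolic_bp of 80 passes.
print(validate_vitals({"temp_f": 200, "systolic_bp": 1200, "diastolic_bp": 80}))
```

Cheap checks like these catch the garbage at entry time; nothing this simple will tell you whether the coded diagnosis actually matches the chart.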


From the NeuroLogica Blog:
I have noticed a common arc to many technologies. First they are known and discussed only by scientists and experts in the field. Then they are picked up by technophiles who read nerdy magazines and websites. This is all while the research is preliminary and the technology just a distant hope for the future.

Then something happens that makes awareness of the potential technology go mainstream. This is often a movie depicting the technology, but can also be just an article in a more mainstream magazine or newspaper, an early demonstration of the potential for the technology, or a political controversy surrounding it. Then the hype begins.

The hype phase is driven by the researchers looking for more funding, the technophiles who have already been salivating over the technology for years, and a sensationalist media.

We then get into the dark phase of a technology’s arc – the exploitation phase. At this point the hype is running way ahead of the technology, and the public has this false sense that we are on the cusp of major applications.  This makes them vulnerable to charlatans who will claim that they have the technology, long before the tech actually exists.

As the hype and exploitation phase linger, the public is likely to move on to disillusionment. They have been waiting for years for the new technology, and nothing real has manifested. This often leads to the feeling that the whole thing was hype and will never manifest. Even the technophiles may be getting frustrated at this point.

Finally, we start to see real applications of the technology as it comes into its own. The application phase may be rapid – suddenly all the promises and hype of two decades before are not only met but exceeded. Of course, some technologies fizzle and never get to this phase, or may require fundamental advances in other areas to become viable, requiring many decades...
The core focus of this post is stem cells. But it applies to other tech domains as well. Think about the "Gartner Hype Cycle": where are we today in terms of Health IT?

"Trough of Disillusionment" largely rings true to me, at least with regard to the mainstream EHR and HIE spaces. The Policy ADHD folks on The Hill continue to lambast the "failure" of the Meaningful Use program. There are renewed calls to delay Stage 3, and also to yet again delay the ICD-10 coding conversion now slated for October 1st.

From Health Data Management:
Congress Considers Putting Brakes on Stage 3
Federal lawmakers are noticing some dark clouds surrounding the electronic health records meaningful use program designed to prod providers to adopt EHRs. Given the program's recent struggles, lawmakers may be poised to intervene to push back the program's third stage.

Problems with the current stage are all too apparent. As of mid-June 2015, 11 percent of eligible physicians have participated in Stage 2 of the electronic health records financial incentive program, and 42 percent of eligible hospitals have participated.

Now, there are rumblings in Congress about delaying Stage 3, which is supposed to start with an optional year in 2017 and with all participants moving to the third stage in 2018.

Sen. Lamar Alexander (R-Tenn.), chair of the Senate Health, Education, Labor & Pensions Committee, is broaching the subject of delaying Stage 3, and this month even mentioned the idea to Health and Human Services Secretary Sylvia Burwell. At a committee meeting, Alexander discussed his talk with Burwell and added: “There’s been some discussion about delaying Meaningful Use Stage 3, about whether it’s a good idea, whether it’s a bad idea, whether to delay part or all of it. My instinct is to say to Secretary Burwell, ‘Let’s not go backwards on electronic healthcare records.’ ”

Alexander also said it may be wise to slow down Stage 3, “not with the idea of backing up on it, but with the idea of saying, ‘Let’s get this right.’ ” At the same committee meeting, David Kibbe, MD, president and CEO of DirectTrust, a coalition of 150 organizations supporting the Direct secure messaging protocols, recommended “an immediate moratorium on Stage 3 until Stage 2 is fixed.”...
From The Health Care Blog:
Avoid ICD-10? Yes, You Can!
Jacob Reider, MD
...Technology should capture the diagnosis in a terminology that I understand – MY language (HLI, IMO or SNOMED-CT) and if additional data is required – I should always be prompted for it – in the most elegant manner possible.  The information that I capture can/should then be stored in the patient’s problem list if it’s not already there (and of course if it IS already there – it should be offered as an initial selection to avoid replicating work that was already done!) and then translated in the background into the administrative code.  This should be opaque to the user.  Accessible?  Yes – sure.  Just as I can “view source” in my browser to see the HTML.  But really – who wants to do that?  Not me (most of the time).  Not you.  Nor will I need to see the ICD-10 code 99% of the time.

Don’t burden your clinicians with ICD-10!  Avoid it.  Yes you can.  And you should.  Anything less is irresponsible.  Yes – some who have been “educated” by high-priced consultants will ask for it.  But you shouldn’t give them a faster horse.  Give them what they need.
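Reider's "translate in the background" idea can be sketched in a few lines: the clinician records the diagnosis in a clinical terminology (SNOMED CT), and the administrative ICD-10 code is derived invisibly from a crosswalk table. This is a hypothetical illustration, not a vetted terminology map; any real system would use a maintained SNOMED CT to ICD-10 crosswalk.

```python
# Hypothetical sketch: clinician picks a SNOMED CT concept; the ICD-10
# administrative code is derived behind the scenes via a crosswalk table.
# Codes shown are illustrative only -- a real system needs a vetted map.

SNOMED_TO_ICD10 = {
    "44054006": "E11.9",   # Type 2 diabetes mellitus (illustrative mapping)
    "38341003": "I10",     # Essential hypertension (illustrative mapping)
}

def record_diagnosis(problem_list, snomed_code, term):
    """Store the clinician-facing term; translate to ICD-10 invisibly."""
    entry = {
        "snomed": snomed_code,
        "term": term,
        "icd10": SNOMED_TO_ICD10.get(snomed_code),  # opaque to the user
    }
    if entry not in problem_list:   # don't replicate work already done
        problem_list.append(entry)
    return entry

problems = []
dx = record_diagnosis(problems, "44054006", "Type 2 diabetes mellitus")
print(dx["icd10"])  # the admin code: there on "view source," rarely needed
```

The point of the sketch is Reider's: the clinician never touches the ICD-10 code, but it is "accessible" on demand, like viewing HTML source in a browser.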
Well, we do live in interesting times. We have only a little more than two months now before the October 1st ICD-10 deadline and the new federal fiscal year. The contentious 60-day congressional review period of the proposed Iranian nuke deal falls right into that, plus there's that now-hardy perennial GOP federal shutdown threat, replete with poison pills like the umpteenth "repeal ObamaCare" legislative amendments, etc.

Interesting indeed.

More to come...

Monday, July 20, 2015

AI vs IA: At the cutting edge of IT R&D

In truth, I've not paid all that much ongoing attention to developments in the really leading-edge IT R&D space. People have been waxing rhapsodic about the ostensibly ever-incipient promise of "Artificial Intelligence" (AI) since my code-writing days of the 1980's. Significant advances have always seemed to stay just around the corner.

My concerns have been more mundane, mainly commercial software "usability" and RDBMS design efficiency and effectiveness (i.e., "software QA"). As EHRs have evolved into providing at least rudimentary Clinical Decision Support functionality ("CDS"), such capability is really "IA" (Intelligence Augmentation) rather than "AI" (Artificial Intelligence). True, given advances in "neural net" software development, those separate concepts are beginning to blur, but, still, most of the applied work in the area focuses on IA, not AI.
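The IA flavor of CDS can be made concrete with a toy rule: the software surfaces an advisory alert, and the clinician retains the decision. The rules and threshold values below are illustrative, not clinical guidance from any particular EHR.

```python
# Toy "IA" clinical decision support: the software flags, the human decides.
# Rules and thresholds are illustrative only.

def cds_alerts(patient):
    """Return advisory alerts -- augmentation, not autonomous action."""
    alerts = []
    if patient.get("egfr", 999) < 30 and "metformin" in patient.get("meds", []):
        alerts.append("Metformin with eGFR < 30: consider discontinuation")
    if patient.get("inr", 0) > 4.0:
        alerts.append("INR > 4.0: review warfarin dose")
    return alerts  # the clinician reviews, acts, or overrides

pt = {"egfr": 24, "meds": ["metformin", "lisinopril"], "inr": 2.1}
for a in cds_alerts(pt):
    print(a)
```

Nothing here acts on its own; that boundary, blurry as it is becoming, is the working distinction between IA and AI.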

Review my April post section regarding the Weeds' seminal book "Medicine in Denial."
Essential to health care reform are two elements: standards of care for managing clinical information (analogous to accounting standards for managing financial information), and electronic tools designed to implement those standards. Both elements are external to the physician’s mind. Although in large part already developed, these elements are virtually absent from health care. Without these elements, the physician continues to be relied upon as a repository of knowledge and a vehicle for information processing. The resulting disorder blocks health information technology from realizing its enormous potential, and deprives health care reform of an essential foundation. In contrast, standards and tools designed to integrate detailed patient data with comprehensive medical knowledge make it possible to define the data and knowledge taken into account for decision making. Similarly, standards for organizing patient data over time in medical records make it possible to trace connections among the data collected, the patient’s problems, the practitioner’s assessments, the actions taken, the patient’s progress, the patient’s behaviors and ultimate outcomes...
Larry Weed's digital POMR (Problem Oriented Medical Record) focus has clearly been that of applied IA.

My May 22nd post "The Robot will see you now..." cites Martin Ford's bracing new book "Rise of the Robots." That book takes us further in the direction of the implications of increasing (and sometimes troubling) "AI."

Well, my new Harper's issue arrived.

Interesting article therein.
The Transhuman Condition
By John Markoff, from Machines of Loving Grace, out this month from Ecco Books. Markoff has been a technology and business reporter for the New York Times since 1988.
I look forward to getting this book when it's released next month.

Some excerpts from the Harper's piece:
Bill Duvall grew up on the peninsula south of San Francisco. The son of a physicist who was involved in classified research at Stanford Research Institute (SRI), a military-oriented think tank, Duvall attended UC Berkeley in the mid-1960s; he took all the university’s computer-programming courses and dropped out after two years. When he joined the think tank where his father worked, a few miles from the Stanford campus, he was assigned to the team of artificial-intelligence researchers who were building Shakey.

Although Life magazine would later dub Shakey the first “electronic person,” it was basically a six-foot stack of gear, sensors, and motorized wheels that was tethered — and later wirelessly connected — to a nearby mainframe. Shakey wasn’t the world’s first mobile robot, but it was the first that was intended to be truly autonomous. It was designed to reason about the world around it, to plan its own actions, and to perform tasks. It could find and push objects and move in a planned way in its highly structured world.

At both SRI and the nearby Stanford Artificial Intelligence Laboratory (SAIL), which was founded by John McCarthy in 1962, a tightly knit group of researchers was attempting to build machines that mimicked human capabilities. To this group, Shakey was a striking portent of the future; they believed that the scientific breakthrough that would enable machines to act like humans was coming in just a few short years. Indeed, among the small community of AI researchers who were working on both coasts during the mid-Sixties, there was virtually boundless optimism...

Late on the evening of October 29, 1969, Duvall connected the NLS system in Menlo Park, via a data line leased from the phone company, to a computer controlled by another young hacker in Los Angeles. It was the first time that two computers connected over the network that would become the Internet. Duvall’s leap from the Shakey laboratory to Engelbart’s NLS made him one of the earliest people to stand on both sides of a line that even today distinguishes two rival engineering communities. One of these communities has relentlessly pursued the automation of the human experience — artificial intelligence. The other, human-computer interaction — what Engelbart called intelligence augmentation — has concerned itself with “man-machine symbiosis.” What separates AI and IA is partly their technical approaches, but the distinction also implies differing ethical stances toward the relationship of man to machine...
...[T]oday, AI is beginning to meet some of the promises made for it by SAIL and SRI researchers half a century ago, and artificial intelligence is poised to have an impact on society that may be greater than the effect of personal computing and the Internet...

...[T]he falling costs of sensors, computer processing, and information storage, along with the gradual shift away from symbolic logic and toward more pragmatic statistical and machine-learning algorithms, have made it possible for engineers and programmers to create computerized systems that see, speak, listen, and move around in the world.

As a result, AI has been transformed from an academic curiosity into a force that is altering countless aspects of the modern world. This has created an increasingly clear choice for designers — a choice that has become philosophical and ethical, rather than simply technical: will we design humans into or out of the systems that transport us, that grow our food, manufacture our goods, and provide our entertainment?

As computing and robotics systems have grown from laboratory curiosities into the fabric that weaves together modern life, the AI and IA communities have continued to speak past each other. The field of human-computer interface has largely operated within the philosophical framework originally set down by Engelbart — that computers should be used to assist humans. In contrast, the artificial-intelligence community has for the most part remained unconcerned with preserving a role for individual humans in the systems it creates...

...Google mined the wealth of human knowledge and returned it in searchable form to society, while reserving for itself the right to monetize the results.

Since it established its search box as the world’s most powerful information monopoly, Google has yo-yoed between IA and AI applications and services. The ill-fated Google Glass was intended as a “reality-augmentation system,” while the company’s driverless-car project represents a pure AI — replacing human agency and intelligence with a machine. Recently, Google has undertaken what it loosely identifies as “brain” projects, which suggests a new wave of AI...

...[I]t is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society...

AI and machine-learning algorithms have already led to transformative applications in areas as diverse as science, manufacturing, and entertainment. Machine vision and pattern recognition have been essential to improving quality in semiconductor design. Drug-discovery algorithms have systematized the creation of new pharmaceuticals. The same breakthroughs have also brought us increased government surveillance and social-media companies whose business model depends on invading privacy for profit.

Optimists hope that the potential abuses of our computer systems will be minimized if the application of artificial intelligence, genetic engineering, and robotics remains focused on humans rather than algorithms. But the tech industry has not had a track record that speaks to moral enlightenment. It would be truly remarkable if a Silicon Valley company rejected a profitable technology for ethical reasons. Today, decisions about implementing technology are made largely on the basis of profitability and efficiency. What is needed is a new moral calculus.

Any import here with respect to the topic of my prior post "Personalized Medicine" and "Omics" -- HIT and QA considerations? My earlier citations of Nicholas Carr's book "The Glass Cage"? My citations of Morozov's compelling book "To Save Everything, Click HERE"? Peter Thiel's book "Zero to One"? Simon Head's "Mindless"?


Meanwhile, in the Health IT trenches:
Physicians Vent EHR Frustrations
Lena J. Weiner, for HealthLeaders Media, July 21, 2015

The American Medical Association gives physicians a platform to air their grievances about electronic health records systems, but the technology is here to stay, says an executive with the College of Healthcare Information Management Executives.

Longstanding physician dissatisfaction over electronic health record systems, Meaningful Use, and the federal regulations behind them lit up a town hall-style meeting Monday night, hosted in Atlanta by the American Medical Association and the Medical Association of Georgia and webcast live.

Rep. Tom Price, MD, (R-GA), formerly medical director of the orthopedic clinic at Grady Memorial Hospital in Atlanta and co-host of the town hall, kicked things off with one specific complaint of doctors: "inconsistency is a problem." The event was part of the AMA's Break the Red Tape campaign, which aims to postpone the finalization of MU Stage 3 regulations.

AMA President Steven J. Stack, MD, told attendees that the meeting was an opportunity for them to be heard. "This is not for you to hear me talking to you, but for me to hear you talking to me... Has workflow in your office changed?" he goaded the crowd. At least 80% raised their hands. A sole hand remained raised when Stack asked if the change was for the better.

Almost immediately, physicians gave voice to the barriers to care they say are caused by electronic health records systems. Over the course of the 90-minute meeting they raised concerns over reduced productivity, the security of private patient medical records, interoperability, and government regulation.

"We're removing the science from medicine," said one physician who described having to check "yes" and "no" boxes rather than being able to note subtle nuances his patients reported.

"Thank God I learned to type in high school—I never thought I'd use it," said another, explaining that she now has to make sure every employee she hires can type, regardless of the job for which they are hired.

Some physicians tweeted their frustrations during the meeting, using the hashtag #fixEHR...
"We're removing the science from medicine"

Well, coding and categorical check-box documentation are inescapably what I call "lossy compression." To what extent "nuance loss" is "unscientific" is not all that clear, though. Subjective "nuance impressions" may really come more under the "Art of Medicine."
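The "lossy compression" point can be made concrete with a toy round-trip (the note text and the check-box schema below are invented for illustration):

```python
# "Lossy compression" of a clinical narrative into check-box fields.
# The coded record cannot be decompressed back into the nuance.

narrative = ("Patient reports intermittent chest tightness, mostly after "
             "climbing stairs, improving with rest over the past two weeks.")

coded = {
    "chest_pain": True,        # yes/no box
    "exertional": True,        # yes/no box
    "duration_weeks": 2,
}

# Round-trip: everything not captured by the schema is gone for good.
recovered = ", ".join(f"{k}={v}" for k, v in sorted(coded.items()))
print(recovered)   # no "intermittent," no "improving with rest"
```

Whether the discarded nuance was "science" or "art," the compression is one-way; the schema, not the patient, decides what survives.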

More to come...

Thursday, July 16, 2015

"Personalized Medicine" and "Omics" -- HIT and QA considerations

Recall my citing of this book back in May?

As I wrote back then:
I have a couple of concerns. Docs often don’t have enough time TODAY to get through an electronic SOAP note effectively, given workflow constraints. Adding in torrents of “omics” data may be problematic, both in terms of the sheer number of additional potential dx/tx variables to be considered in a short amount of time, and questions of “omics” analytic competency. To that latter point, what will even constitute dx "competency" in the individual patient dx context, given the relative infancy of the research domain? (Not to mention issues of genomic lab QC/QA -- a particular focus that I will have, in light of my 80's lab QC background).
President Obama’s current infatuation with “Precision Medicine” notwithstanding, just dumping a bunch of “omics” data into EHRs (insufficiently vetted for accuracy and utility, and inadequately understood by the diagnosing clinician) is likely to set us up for our latest HIT disappointment -- and perhaps injure patients in the process. 
Relatedly, see my June 29th post "It's not so elementary, Watson." Developments in Health IT. Will IBM's Watson sort through the massive deluge of "omics" data to help significantly to cure cancer?

This stuff has now gotten personal for me. I recently had a genetic test done on a biopsy specimen. Recall from my "Shards" post:
...Without asking me, my urologist sent my biopsy to a company that performs "OncoType dx" genetic assays of biopsies for several cancers, including prostate. He simply wanted to know whether mine was a good candidate for this type of test.

They just went ahead and ran it, without the urologist's green light order, or my knowledge or consent. I got a call out of the blue one day from a lady wanting to verify my insurance information, and advising me that this test might be "out of your network," leaving me on the hook for some $400, worst case (I subsequently came to learn that it's a ~$4,000 test). I thought it odd, and thought I'd follow up with my doc.

My urologist called me. He was embarrassed and pissed. A young rep from the OncoType dx vendor also called me shortly thereafter. He was in fear of losing his job, having tee'd up the test absent explicit auth.

I've yet to hear anything further. I think they're all just trying to make this one go away. Though, it would not surprise me one whit to see a charge pop up in my BCBS/RI portal one of these days.

The OncoType test result merely served to confirm (expensively -- for someone) what my urologist already suspected. The malignancy aggressiveness in my case is in a sort of "grey zone." The emerging composite picture is "don't dally with this."...
The Wiki on the budding "omics" domain:
Kinds of omics studies
  • Genomics: Study of the genomes of organisms.
    • Cognitive genomics examines the changes in cognitive processes associated with genetic profiles.
    • Comparative genomics: Study of the relationship of genome structure and function across different biological species or strains.
    • Functional genomics: Describes gene and protein functions and interactions (often uses transcriptomics).
    • Metagenomics: Study of metagenomes, i.e., genetic material recovered directly from environmental samples.
    • Personal genomics: Branch of genomics concerned with the sequencing and analysis of the genome of an individual. Once the genotypes are known, the individual's genotype can be compared with the published literature to determine likelihood of trait expression and disease risk. Helps in Personalized Medicine
  • Epigenomics: Study of the complete set of epigenetic modifications on the genetic material of a cell, known as the epigenome. ChIP-Chip and ChIP-Seq technologies used.
Lipidome is the entire complement of cellular lipids, including the modifications made to a particular set of lipids, produced by an organism or system.
  • Lipidomics: Large-scale study of pathways and networks of lipids. Mass spectrometry techniques are used.
Proteome is the entire complement of proteins, including the modifications made to a particular set of proteins, produced by an organism or system.
  • Proteomics: Large-scale study of proteins, particularly their structures and functions. Mass spectrometry techniques are used.
    • Immunoproteomics: study of large sets of proteins (proteomics) involved in the immune response
    • Nutriproteomics: Identifying the molecular targets of nutritive and non-nutritive components of the diet. Uses proteomics mass spectrometry data for protein expression studies
    • Proteogenomics: An emerging field of biological research at the intersection of proteomics and genomics. Proteomics data used for gene annotations.
    • Structural genomics: Study of 3-dimensional structure of every protein encoded by a given genome using a combination of experimental and modeling approaches.
Foodomics was defined in 2009 as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge"
Transcriptome is the set of all RNA molecules, including mRNA, rRNA, tRNA, and other non-coding RNA, produced in one or a population of cells.
  • Transcriptomics: Study of transcriptomes, their structures and functions.
  • Metabolomics: Scientific study of chemical processes involving metabolites. It is a "systematic study of the unique chemical fingerprints that specific cellular processes leave behind", the study of their small-molecule metabolite profiles
  • Metabonomics: The quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification
Nutrition, pharmacology, and toxicology
  • Nutritional genomics: A science studying the relationship between human genome, nutrition and health.
    • Nutrigenetics studies the effect of genetic variations on the interaction between diet and health with implications to susceptible subgroups
    • Nutrigenomics: Study of the effects of foods and food constituents on gene expression. Studies the effect of nutrients on the genome, proteome, and metabolome
  • Pharmacogenomics investigates the effect of the sum of variations within the human genome on drugs;
  • Pharmacomicrobiomics investigates the effect of variations within the human microbiome on drugs.
  • Toxicogenomics: a field of science that deals with the collection, interpretation, and storage of information about gene and protein activity within particular cell or tissue of an organism in response to toxic substances.
  • Mitointeractome
  • Psychogenomics: Process of applying the powerful tools of genomics and proteomics to achieve a better understanding of the biological substrates of normal behavior and of diseases of the brain that manifest themselves as behavioral abnormalities. Applying psychogenomics to the study of drug addiction, the ultimate goal is to develop more effective treatments for these disorders as well as objective diagnostic tools, preventive measures, and eventually cures.
  • Stem cell genomics: Helps in stem cell biology. Aim is to establish stem cells as a leading model system for understanding human biology and disease states and ultimately to accelerate progress toward clinical translation.
  • Connectome: The totality of neural connections in the brain.
That's a lot of scientific/technical ground to cover. Will there have to be a new medical specialty called "Clinical Geneticology?" It doesn't appear to exist yet.

I guess we'll have to make do for a time with "Genetic Counselors." From the "American Board of Genetic Counseling, Inc." -
How Do I Train To Become a Certified Genetic Counselor?

Graduate Requirements
In order to become a Certified Genetic Counselor (CGC©), one must obtain a Master’s degree in Genetic Counseling from an ACGC Accredited Program. Once all requirements have been met, one may apply and sit for the Certification Examination.

At this time, there are no other pathways through which a person can become a Certified Genetic Counselor.

Undergraduate Requirements
Applicants to accredited master degree programs in genetic counseling must have earned a baccalaureate degree from an accredited undergraduate institution prior to applying. Most often individuals that are interested in pursuing a career in genetic counseling are those with a prior interest in or a baccalaureate degree in medical sciences, psychology or healthcare. However, undergraduate degrees in these areas are not required for entrance into an accredited master’s degree program in genetic counseling.

For more information about the genetic counseling profession, visit the National Society of Genetic Counselors at
People with these kinds of academic and experiential chops won't be found hanging around the parking lots of Home Depot.

At least, mercifully, we won't have to worry now about the for-profit diploma mills of Corinthian Colleges et al churning out legions of incompetent and unemployable Omics grads who paid (or borrowed from the feds) tens of thousands of dollars for their worthless AA certs.

Notwithstanding the well-deserved demise of the Corinthian grifters, the new Omics gold rush will surely entice myriad others into the curricular fray.

e.g., none other than the august Stanford (where I was recently 2nd-opinion evaluated for my prostate cancer) is now hawking its online "Stanford Genetics and Genomics Certificate." To wit:

"The Human Genome Project has ushered in a dramatic expansion and acceleration of genetics and genomics research. DNA sequencing technologies promise to advance the field of personalized healthcare, and genome-wide studies have identified numerous genes associated with common diseases and human traits. Commercial applications of genetics research are growing rapidly.

The Stanford Genetics and Genomics Certificate program utilizes the expertise of the Stanford faculty along with top industry leaders to teach cutting-edge topics in the field of genetics and genomics. Beginning with the fundamentals of genetics and genomics, participants will build a solid base of knowledge to explore and understand advanced genetics topics of interest.

Who Should Enroll
  • Medical sales representatives
  • Emerging technology leaders, strategists and venture capitalists who trade within the science-medical space
  • R&D managers and new product teams
  • Medical practitioners looking to expand their knowledge in the scientific world
  • Directors, Managers or Administrators who work in non-scientific roles in scientific environments
Earning the Certificate
Participants have the flexibility of taking individual courses within the program or earning the Stanford Genetics and Genomics Certificate by completing 2 core courses and 4 elective courses. Courses may be taken at your own pace online. We strongly recommend that participants complete Fundamentals of Genetics: The Genetics You Need to Know (XGEN101) and Genomics and the Other Omics: The Comprehensive Essentials (XGEN102).

Each course consists of online lecture videos, self-paced exercises, and a final exam. Lecture videos are presented in short segments for easier viewing and frequent self-reflection.

Prerequisites
  • 5 years of work experience, preferably in science or tech related fields
  • Bachelor’s degree or equivalent
  • High school level knowledge of biology and chemistry
  • An investigative and scientific spirit
$695 per Required Course
$495 per Elective Course
$75 one-time document fee
Time to Complete Certificate
Courses are self-paced. Each course is available for 90 days after date of enrollment.
Looks interesting. Were I not hemorrhaging cash via my high-deductible BCBS Co-Pays and related OOP this year, I might bite.

But, when I look at their coursework, all of it -- focused on the basic and applied science -- assumes that genetic lab assays are reliable: accurate and precise (they're not the same thing). Coming from a forensic lab background, I don't share such sanguinity. Last time I checked, there were no federal QA standards regulating commercial genetic testing. That does not portend well. Do a Google search on "genetic testing quality assurance." You'll find dated stuff like this above the fold. First, from a CLIA slide deck.

That's pretty sad.

Then, a 16-year-old PubMed cite referencing JAMA:
JAMA. 1999 Mar 3;281(9):835-40.
Quality assurance in molecular genetic testing laboratories.
McGovern MM1, Benach MO, Wallenstein S, Desnick RJ, Keenlyside R.


CONTEXT: Specific regulation of laboratories performing molecular genetic tests may be needed to ensure standards and quality assurance (QA) and safeguard patient rights to informed consent and confidentiality. However, comprehensive analysis of current practices of such laboratories, important for assessing the need for regulation and its impact on access to testing, has not been conducted.
OBJECTIVE: To collect and analyze data regarding availability of clinical molecular genetic testing, including personnel standards and laboratory practices.
DESIGN: A mail survey in June 1997 of molecular genetic testing laboratory directors and assignment of a QA score based on responses to genetic testing process items.
SETTING: Hospital-based, independent, and research-based molecular genetic testing laboratories in the United States.
PARTICIPANTS: Directors of molecular genetic testing laboratories (n = 245; response rate, 74.9%).
MAIN OUTCOME MEASURE: Laboratory process QA score, using the American College of Medical Genetics Laboratory Practice Committee standards.
RESULTS: The 245 responding laboratories reported availability of testing for 94 disorders. Personnel qualifications varied, although all directors had doctoral degrees. The mean QA score was 90% (range, 44%-100%) with 36 laboratories (15%) scoring lower than 70%. Higher scores were associated with test menu size of more than 4 tests (P = .01), performance of more than 30 analyses annually (P = .01), director having a PhD vs MD degree (P = .002), director board certification (P = .03), independent (P <.001) and hospital (P = .01) laboratories vs research laboratory, participation in proficiency testing (P<.001), and Clinical Laboratory Improvement Amendment certification (P = .006). Seventy percent of laboratories provided access to genetic counseling, 69% had a confidentiality policy, and 45% required informed consent prior to testing.
CONCLUSIONS: The finding that a number of laboratories had QA scores that may reflect suboptimal laboratory practices suggests that both personnel qualification and laboratory practice standards are most in need of improvement to ensure quality in clinical molecular genetic testing laboratories.
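The study's "QA score" was, in essence, the percentage of surveyed laboratory-practice items a lab met. A minimal sketch of that kind of scoring follows; the item names are invented for illustration (the actual survey used the ACMG Laboratory Practice Committee standards):

```python
# Hypothetical QA scoring in the spirit of the JAMA survey: score = percent
# of laboratory-practice items met. Item names are invented for illustration.

def qa_score(responses):
    """responses: dict of item -> bool (standard met?). Returns a percent."""
    return round(100 * sum(responses.values()) / len(responses), 1)

lab = {
    "proficiency_testing": True,
    "clia_certified": True,
    "informed_consent_required": False,
    "confidentiality_policy": True,
    "director_board_certified": True,
}
score = qa_score(lab)
print(score)        # 80.0
print(score < 70)   # the study flagged labs under 70%; this one clears the bar
```

Note how coarse such a score is: a lab can clear 70% while failing exactly the item (informed consent) that most concerns patients.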
Doesn't exactly give me Warm Fuzzies. I'll have to keep digging.

For one thing, I turn to what looks to be a timely (2015 release) and otherwise authoritative resource (expensive too, $142 in hardcover, $77 Kindle).

Laboratory process, data generation, and quality control
Preanalytical and quality laboratory processes
The critical difference distinguishing a clinical genomic sequencing service from sequencing performed in a research setting is the establishment, implementation, and documentation of Standard Operating Procedures (SOPs) that ensure integrity and quality of samples, data, patient privacy, interpretation, and final reports returned to the physician. It is also essential to have an available staff consisting of licensed and trained professionals, including genetic counselors, clinical genetic molecular biologists, medical geneticists, and bioinformaticians; in addition to working with each other, many members of this team will need to be able to work directly with ordering physicians and to produce a final report that physicians can use to make patient healthcare decisions. The application of genomic sequencing services to the medical field requires physicians and patients to be adequately informed and trained to handle the information presented in the final report, while also limiting the data that may not be useful or is distracting from the patient’s healthcare decisions.

While a major focal point of running a clinical genomic sequencing facility centers upon ensuring the accuracy of data and report generation, the majority of errors in a clinical laboratory are made during the preanalytical phase of performing clinical tests. The bulk of laboratory errors occur in the pre-analytical phase (~60%), with the post-analytical phase being the second most problematic (~25%). The majority of pre-analytical errors can be attributed to sample handling and patient identification. Many of these errors can be avoided by developing a chain of custody program that is scalable for the number of samples entering the lab. Examples include sample collection kits that include matched, barcoded sets of tubes, forms, and instructions for proper patient handling, the implementation of a laboratory information management system, and regular, ongoing training and assessment of staff performance. Many other suggestions exist for how to develop high quality processing and procedures that minimize the probability of such errors.
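The barcode-matched chain-of-custody idea described above can be sketched as an intake check: the tube barcode must match the requisition before the sample is accessioned into the laboratory information management system. The structure and field names below are hypothetical.

```python
# Sketch of a pre-analytical chain-of-custody check: the tube barcode must
# match the requisition form before the sample is accessioned into the LIMS.
# Field names and record structure are hypothetical.

def accession(tube_barcode, requisition):
    """Reject mismatched samples at intake, before any wet-lab work."""
    if tube_barcode != requisition["barcode"]:
        raise ValueError(
            f"Barcode mismatch: tube {tube_barcode} vs "
            f"requisition {requisition['barcode']} -- quarantine sample"
        )
    return {"barcode": tube_barcode,
            "patient_id": requisition["patient_id"],
            "status": "accessioned"}

req = {"barcode": "BC-0042", "patient_id": "P-1138"}
print(accession("BC-0042", req)["status"])   # accessioned

try:
    accession("BC-0099", req)                # mismatched tube
except ValueError as e:
    print("rejected:", e)
```

Catching the mismatch at accessioning is the cheap fix; as the chapter notes, most clinical lab errors arise in exactly this pre-analytical phase.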

Given the complexity and personalized nature of each genomic sequencing test, appropriate tools should be developed to optimize communication between clinical genomics laboratory personnel and the physician to ensure the test is being appropriately ordered and analyzed for the patient. Ideally, genetic counselors should be available to communicate with ordering physicians regarding why the test is being ordered, what the deliverable will be, and how the results can be used appropriately. It is also appropriate to offer training and support tools (for example, podcasts/downloadable instructions) to the ordering physician in order to prepare them for navigating the multiple steps of this process. While onerous for both the laboratory and physician, such steps taken ahead of test ordering are likely to optimize the results and significantly reduce the probability of error, confusion, and misunderstanding.

Test results from genomic sequencing can offer much more data than the physician requires for patient diagnosis, which can be distracting and potentially stressful to the patient. In order to circumvent issues with excess data, the clinical genomic sequencing laboratory should be forthright in verifying the types of information the patient/physician would prefer to receive or not to receive. The physician or genetic counselor should consider the specific situation of the patient when assessing the appropriateness of potentially delivering sensitive incidental findings, such as susceptibility to late-onset conditions, cancer, or concerns about non-paternity. The American College of Medical Genetics and Genomics (ACMG) Working Group on incidental findings in clinical exome and genome sequencing recently released a list of genes and categories of variants that should be conveyed in a clinical genomics report, even if found as secondary or incidental results. The ACMG Working Group feels that although these variants may not be immediately necessary for the patient diagnosis decision, they could be critical for the patient’s well-being and should not be withheld. Physician training should therefore include how to assist the patient in making decisions about what should be reported, and how to subsequently complete a required informed consent signed by the patient which outlines the terms of the genomic test request. Since patient genomic sequencing results in a very complex data set with a plethora of information, tools should be developed to enable easy navigation of results as questions arise, and support should be offered to the physician before the test results are delivered in order to both maximize value for the patient and minimize the time and effort set forth by the physician.

Patient sample tracking can be one of the most difficult components of clinical genomic sequencing because the process requires many steps that occur over a period of several days, among several people, and the process must protect the patient’s privacy in accordance with the Health Insurance Portability and Accountability Act (HIPAA) regulations. However, this challenge is not specific to GS, and has been recognized as an ongoing challenge for all clinical laboratories.

The processing of one genome can be divided up into three categories: wet lab processing, bioinformatics analysis, and interpretation and report generation. As the authors of this chapter use an Illumina platform, the details described below are consistent with that platform; though other platforms vary in the specifics of certain steps, the general principles nonetheless remain the same. These principles include DNA extraction, DNA shearing and size selection, ligation of oligonucleotide adapters to create a size-selected library, and physical isolation of the library fragments during amplification and sequencing.

As discussed more fully in Chapter 1, for library preparation, intact gDNA is first sheared randomly. Prior to adapter ligation, the sheared gDNA is blunt-ended, and an adenosine overhang is created through a "dATP" tailing reaction. The adapters are comprised of the sequencing primer and an additional oligonucleotide that will hybridize to the flow cell. After adapter ligation, the sample is size-selected by gel electrophoresis, gel extracted, and purified.

After the library has been constructed, it is denatured and loaded onto a glass flow cell, where it hybridizes to a lawn of oligonucleotides complementary to the adapters, on an automated liquid handling instrument that adds appropriate reagents at appropriate times and temperatures. The single-stranded, bound library fragments are then extended, and the free end hybridizes to the neighboring lawn of complementary oligonucleotides. This "bridge" is then grown into a cluster through a series of PCR amplifications. In this way, a cluster of approximately 2,000 clonally amplified molecules is formed; across a single lane of a flow cell, there can be over 37 million individual amplified clusters. The flow cell is then transferred to the sequencing instrument. Depending on the target of the sequencing assay, it is also possible to perform a paired-end read, where the opposite end of the fragment is also sequenced.

As was mentioned previously, NGS works by individually sequencing fragmented molecules. Each of these molecules represents a haploid segment of DNA. In order to ensure that both chromosomes of a diploid region of DNA are represented, it is therefore necessary to have independent sampling events. This is a classic statistical problem, in which the number of independent sampling events required to have a given probability of detecting both chromosomes can be represented by the formula:

P(X) = [N! / (X!(N−X)!)] × P^X × Q^(N−X)

In this case, the number of independent sampling events needed to detect a variant is dependent on the number of total sampling events and the number of times each allele is detected, where N is the number of sampling events, X is the number of alternative observations, and P equals the frequency of allele 1 and Q the frequency of allele 2, each of which should (in constitutional testing) be present in equal proportions. Using this principle, it is possible to estimate the minimum number of sampling events required to have confidence that the call represents both chromosomes, and therefore that if a variant were present, it would be detected. However, this formula represents the ideal, in which all calls are perfect; in reality calls are not always perfect, so additional quality monitoring of the call, and how well the call is mapped to a position in the genome, must also be considered. During assay validation, the number of independent sampling events, typically referred to as the depth of coverage, should be measured on known samples to understand what thresholds of depth result in what confidence of detection. In the Illumina Clinical Services Laboratory (ICSL), evaluation of multiple known samples run at various depths, along with subsampling and bootstrapping analysis, showed that the results generally track well with the theoretical expectations. Based on these results, average coverage at 30-fold depth results in greater than 99.9% sensitivity and greater than 99.99% specificity of variant detection; however, when the average call is made at 30-fold depth, somewhat less than half of the total calls are made at less than 30-fold depth, and thus have a lower sensitivity and specificity for variant detection. Using a minimal depth of coverage at any position to make a call, specifically 10-fold depth, yields a 97% sensitivity and 99.9% specificity.
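The binomial sampling argument above can be sketched numerically. This is my own illustration, not the book's; the function name and the assumption of error-free base calls are mine.

```python
from math import comb

def p_both_alleles_detected(depth, min_alt_reads=1, p=0.5):
    """Probability that a heterozygous alternative allele (frequency p)
    is observed at least `min_alt_reads` times in `depth` independent
    sampling events, assuming ideal (error-free) base calls."""
    q = 1.0 - p
    # P(X >= min_alt_reads) = 1 - sum over x < min_alt_reads of
    # C(depth, x) * p^x * q^(depth - x)  (the binomial distribution)
    miss = sum(comb(depth, x) * p**x * q**(depth - x)
               for x in range(min_alt_reads))
    return 1.0 - miss

# At 30-fold depth a het allele is essentially always sampled at least once;
# even at 10-fold depth the miss probability is only (1/2)^10.
print(round(p_both_alleles_detected(30), 6))  # 1.0
print(round(p_both_alleles_detected(10), 4))  # 0.999
```

Note this is the idealized ceiling the authors describe; real pipelines also demand quality-weighted calls and good mapping, so practical sensitivity at a given depth is lower.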
Mapping the distribution of calls for every call made makes it possible to evaluate the corresponding confidence for any given individual call, and essentially to back-calculate the average depth of coverage required to meet specific sensitivity and specificity test metrics. The same approach can be applied to the formula for non-diploid situations: for example, for detection of heteroplasmy, a somatic variant, or sequence variants in different strains of microbes...
Figure 2.1 [click image to enlarge] For any given average depth of coverage for the genome, the coverage at individual loci will be distributed around that average. This graph displays the distribution of coverage when the average depth of coverage for the genome is about 40-fold. Since most bioinformatics pipelines cannot reliably detect variants at positions with fewer than 10 independent sampling events (less than 10-fold depth of coverage), the graph does not include that region of the distribution.

The genomic sequencing process must be tightly monitored with quality assessments at each step to ensure that the sample is progressing with the highest possible quality. The steps of the sequencing process for which monitoring is useful include DNA extraction, library preparation, cluster generation, and the sequencing run. Each one of these steps must be assayed for quality. In each case, the quality thresholds established during validation and the average performance metrics of other samples should be compared against the sample being processed; if a sample shows metrics outside of the normal range, it should be investigated.
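That "compare each sample against validated thresholds" discipline is simple to automate. A toy sketch of mine; the metric names and ranges below are entirely made up for illustration, not taken from the book or from any Illumina spec.

```python
# Hypothetical per-step QC check: flag samples whose metrics fall outside
# ranges established during assay validation. (min, max); None = unbounded.
THRESHOLDS = {
    "dna_yield_ng":      (1000, None),
    "library_size_bp":   (300, 500),
    "cluster_density_k": (170, 220),
    "pct_q30":           (80, None),
}

def qc_flags(sample_metrics):
    """Return a list of metrics that are missing or outside their range."""
    flags = []
    for name, (lo, hi) in THRESHOLDS.items():
        value = sample_metrics.get(name)
        if value is None:
            flags.append(f"{name}: missing")
        elif (lo is not None and value < lo) or (hi is not None and value > hi):
            flags.append(f"{name}: {value} outside [{lo}, {hi}]")
    return flags

sample = {"dna_yield_ng": 1500, "library_size_bp": 280,
          "cluster_density_k": 190, "pct_q30": 92}
print(qc_flags(sample))  # ['library_size_bp: 280 outside [300, 500]']
```

Any flagged sample gets investigated before it moves to the next wet-lab step, which is exactly the workflow the excerpt describes.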

Robotics and automation are valuable additions that can be made to a protocol to minimize the possibility of human error. DNA extraction, library preparation, and cluster generation can be performed with a robot. Newer-generation sequencing machines combine these steps by performing both the cluster generation and sequencing processes, further limiting the possibility of human error in the transfer of the flow cell between steps. Future advances to further combine the sequencing laboratory steps with automation will increasingly assure a reduction in potential errors. In fact, it is easy to imagine that, in the very near future, sequencing will be performed with full automation that does not require human touch after the first sample loading.

Quality control of clinical genomic sequencing services is an ongoing responsibility that should be continually evolving based on monitoring and evaluation, as well as externally monitored via proficiency testing, in which laboratories using similar or orthogonal techniques compare their results. The use of appropriate metrics and controls for each step of the process will ensure robust sequencing runs and the ability to identify where and how errors may have arisen. External controls, such as lambda DNA fragments, can be spiked into samples and followed through the process; additionally, controls internal to a sample can also be used effectively. In addition to controls, specific run and performance metrics should be established during the validation phase that can be used to monitor individual run performance and ensure that the equipment and chemistries are performing as expected. An orthogonal assay such as micro-array analysis is one such approach, but other possibilities exist; by comparing the concordance of calls against a genome-level micro-array, not only can a measure of sequence quality be obtained, but sample swaps or contamination can also be detected...
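The micro-array concordance check the authors describe boils down to comparing genotype calls at positions assayed by both platforms. My own toy sketch; the rsIDs and genotypes are hypothetical.

```python
def concordance(seq_calls, array_calls):
    """Fraction of genotype calls that agree at positions assayed by both
    the sequencing run and an orthogonal micro-array. A markedly low value
    suggests a sample swap or contamination rather than random error."""
    shared = set(seq_calls) & set(array_calls)
    if not shared:
        return None  # no overlapping positions to compare
    agree = sum(seq_calls[pos] == array_calls[pos] for pos in shared)
    return agree / len(shared)

seq   = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AA"}
array = {"rs1": "AG", "rs2": "CC", "rs3": "CT", "rs4": "AA"}
print(concordance(seq, array))  # 0.75
```

In practice a real pipeline would compare tens of thousands of array sites, so even a few percentage points of discordance stands out sharply from the expected near-perfect agreement.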
I like that they take a broad view of "quality," one going beyond the tech details of assay accuracy and precision. They make a distinction between "research" analytics and "commercial," but there's no mention of a "forensic" level of reliability (but, maybe it's addressed further down in the book).

Tightwad here didn't buy this book (yet). I read this excerpt in via Dragon from the extensive Amazon "Look Inside" sample. My lab QC chops may be a bit dated, but, not all that much, it would seem after reading this. Instrumentation and assay targets may have indeed advanced materially, but the fundamental concepts of lab QC/QA remain as they have been for decades. Reference stds (internal and external), matrix and DI spikes, dupes, blanks, etc.

This was interesting:
"Robotics and automation are valuable additions that can be made to a protocol to minimize the possibility of human error. DNA extraction, library preparation, and cluster generation can be performed with a robot."
See my post "The Robot will see you now -- assuming you can pay."

On the "Sensitivity" and "Specificity" stuff: I've written about that elsewhere in a different context (scroll down).

More broadly, there's a ton to watch (and question) with respect to the burgeoning field of Omics entrepreneurs. Someone has squatted the domain.

Googling "Geneticology" turns up this website.

Pretty much consumer-facing information. Nothing on the organization behind it.

Stay tuned.

More to come...