
Tuesday, December 2, 2014

"Health Data Outside the Doctor's Office" - My latest THCB comment

Yeah. My comment:
@BobbyGvegas says:
“In fact, a minority of our overall health is the result of the health care we receive. If we’re to have an accurate picture of health, we need more than what is currently captured in the electronic health record.”

Minority indeed. Only ~10% by some estimates. Most of the causal and contributory factors are “upstream.” But, beyond the often politically radioactive socioeconomic factors, included in that upstream estimate are the huge sets of “omics,” which many HIT people want to see included in next-generation EHRs. Current CHPL-certified ambulatory EHRs house perhaps 3,500 – 4,000 variables or more “under the hood” in the RDBMS tables and schema. Adding “omics” data to those arrays will be problematic absent [1] a transformative shift in the current payment paradigm, and [2] widespread clinician competence with respect to accurately including “omics” data in the dx and tx. A typical 99213 visit today requires a fleeting, time-constrained “access/view/update/append/transmit” drive-by of the several hundred (or more) variables that go into the SOAP and progress note (many of which are pro-forma in order to get paid, which goes to point 1). I will soon be 69. I’m a 99213, so this is no abstraction for me.
The cost of “omics” assays is coming down dramatically. Their irreducible analytical and predictive complexity will remain. For Old Coots in Training like me, the “omics” horse is likely largely already out of the barn. Mandating their inclusion, given the age-related pt encounter UTIL may well be a net loser, writ large.

“Data sharing is a critical piece of this equation. While we need infrastructure to capture and organize this [sic] data, we also need to ensure that individuals, health care professionals and community leaders can access and exchange this [sic] data, and use it to make decisions that improve health.”

No small undertaking. We need to figure out how to move beyond the fundamental “Opacity = Margin” market imperative. I’m midway through the new Schmidt-Rosenberg book “How Google Works” (which I will soon cite and review on my blog re its import for Health IT). While they laud the Google ethos of “Default of Open,” they’re not about to publish their search and ad placement algorithms. They note that such a stance opens them to charges of hypocrisy, but, so be it.

“With a few exceptions, Google defaults to open, and for these exceptions we are often criticized as being hypocritical, since we preach open in some areas but then sometimes ignore our own advice. This isn’t hypocritical, merely pragmatic. While we generally believe that open is the best strategy, there are certain circumstances where staying closed works as well.”

Schmidt, Eric; Rosenberg, Jonathan (2014-09-23). How Google Works (Kindle Locations 1163-1166). Grand Central Publishing. Kindle Edition.
I guess I should have more accurately noted that "omics" data are considered "upstream" only to the extent that they largely remain pretty much "outside the doctor's office." The other principal "upstream" factors include, in addition to socioeconomic and environmental metrics, issues of "lifestyle." All of these latter factors are correlated to a significant degree.

BTW: I'll be attending and reporting on the event below this Thursday.


I got onto the "How Google Works" book by way of a New Yorker article when the latest issue showed up in my snailmail box the other day.

When G.M. Was Google
The art of the corporate devotional.

...What’s Google’s secret? This is an irresistible question, because Google is the most successful new business corporation of the twenty-first century. Still only fifteen years old, it is worth about three hundred and eighty billion dollars; its revenues are more than fifty billion dollars a year, and around a quarter of that is profit. More than a billion people perform a Google search every month. It’s natural to wonder whether there’s something each of us can do to emulate Google, with directionally similar, if perhaps more modest, results. What makes the Google model especially alluring is that, as Larry Page and Sergey Brin put it in the statement that accompanied their initial public offering, ten years ago, “Google is not a conventional company.” Getting very rich is always fascinating, but getting very rich while proclaiming that you’re breaking the rules about how to run a business is even more so...
Epic is reported to have earned $1.7 billion in 2013. Maybe Google could just buy them and open them up. They could pay cash and never miss the money.

From "How Google Works"
Three powerful technology trends have converged to fundamentally shift the playing field in most industries. First, the Internet has made information free, copious, and ubiquitous—practically everything is online. Second, mobile devices and networks have made global reach and continuous connectivity widely available. And third, cloud computing has put practically infinite computing power and storage and a host of sophisticated tools and applications at everyone’s disposal, on an inexpensive, pay-as-you-go basis...
Today the components are all about information, connectivity, and computing. Would-be inventors have all the world’s information, global reach, and practically infinite computing power. They have open-source software and abundant APIs that allow them to build easily on each other’s work. They can use standard protocols and languages. They can access information platforms with data about things ranging from traffic to weather to economic transactions to human genetics to who is socially connected with whom, either on an aggregate or (with permission) individual basis. So one way of developing technical insights is to use some of these accessible technologies and data and apply them in an industry to solve an existing problem in a new way. Besides these common technologies, each industry also has its own unique technical and design expertise. We have always been involved in computing companies, where the underlying technical expertise is computer science. But in other industries the underlying expertise may be medicine, mathematics, biology, chemistry, aeronautics, geology, robotics, psychology, logistics, and so on...

Schmidt, Eric; Rosenberg, Jonathan (2014-09-23). How Google Works (Kindle Locations 176-180, 972-2980). Grand Central Publishing. Kindle Edition.

Recall my earlier post citing Dr. Jerome Carter's "EHR Science."
Looking at clinical care and its computing needs, I see requirements that are distinct when compared to standard business computing. Clinical data are varied and numerous. Clinical work consists of interacting with patients to obtain information, consulting information sources (e.g., chart, guidelines, articles, other clinicians), making decisions, recording information, and moving on. Support for clinical work requires large, searchable data stores, fast networks, sophisticated communications functionality, and portable computers capable of displaying text, pictures, sound and video. Tablets and smartphones are the first computers to meet all of these requirements.

Writing for mobile means stepping back from web and client/server applications and being willing to see a problem purely from the standpoint of mobile computing; that is, adopting a “mobile first” attitude.

Mobile first requires a willingness to rethink past approaches. At the top of the list is use of cloud capabilities. Like mobile computers, the cloud is a new way of doing things. Building mobile applications that link to cloud storage and use APIs to interact with other applications is a new way of delivering functionality. There is no reason to have local terminology services if they can be obtained via a cloud application. The same is true of workflow engines or another service that supports clinical work. Mobile first also means not taking a client/server app and putting a mobile face on it. That will not work any better than putting a browser interface on a standard desktop app. It might work to some extent, but the original design limitations will show through.
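Dr. Carter's point about cloud terminology services is at bottom a design point: the clinical app should depend on a lookup interface, not on where the lookup runs, so a local table and a cloud API become interchangeable. A minimal sketch of that idea, in which the fetcher function, payload shape, and fallback table are all invented for illustration:

```python
# Sketch: "terminology as a service." The app depends on a lookup *function*;
# deployment decides whether that function hits a local table or a cloud API.
from typing import Callable

def make_cloud_lookup(fetch: Callable[[str], dict]) -> Callable[[str], str]:
    """Wrap an HTTP fetcher (e.g. a call to a hypothetical terminology API)
    so it presents the same interface as a local lookup."""
    def lookup(code: str) -> str:
        return fetch(code).get("display", "unknown code")
    return lookup

# Local fallback table, standing in for an on-premise terminology service.
LOCAL_TERMS = {"I10": "Essential (primary) hypertension"}

def local_lookup(code: str) -> str:
    return LOCAL_TERMS.get(code, "unknown code")

# The clinical app only ever sees "a lookup function."
def describe(code: str, lookup: Callable[[str], str] = local_lookup) -> str:
    return f"{code}: {lookup(code)}"
```

Swapping the cloud service in is then a one-line injection rather than a rewrite, which is precisely why "putting a mobile face on a client/server app" misses the point: the separation has to be in the design, not the screen.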

Until the 64-bit chips arrived, the amount of computing power in mobile systems made them useful only for limited applications. However, Apple has shown in its A8X chip that tablets and smartphones are rapidly gaining sufficient computing power and communications capability to make serious clinical applications possible in a way that has never existed–and this is only the chip’s second generation! The fourth generation will appear in 24 months, if Apple sticks to form. What will those systems be capable of doing?
How many EHR vendors will bite the bullet and start serious mobile-first projects? Few, I imagine, because if the past is prologue, most will cling to the prevailing wisdom that mobile devices are not real computers. And we know how that story ends…

Finished "How Google Works" yesterday afternoon. A fun read. A few of the authors' closing thoughts:
We see most big problems as information problems, which means that with enough data and the ability to crunch it, virtually any challenge facing humanity today can be solved. We think computers will serve at the behest of people— all people— to make their lives better and easier. And we are quite sure that we, as a couple of Silicon Valley guys, will come under a lot of criticism for this Pollyannaish view of the future. But that doesn’t matter. What matters is that there is a bright light at the end of the tunnel.

There are solid reasons underlying our optimism. The first is the explosion of data and a trend toward the free flow of information. From geological and meteorological sensors to computers that record every single economic transaction to wearable technology (such as Google’s smart contact lenses) that continuously tracks a person’s vital signs, types of data are being collected that simply have never been available before, at a scale that was the stuff of science fiction only a few years ago. And there is now practically limitless computing power with which to analyze that data. Infinite data and infinite computing power create an amazing playground for the world’s smart creatives to solve big problems.

This will result in greater collaboration among smart creatives—scientists, doctors, engineers, designers, artists—trying to solve the world’s big problems, since it is so much easier to compare and combine different sets of data. As Carl Shapiro and Hal Varian note in Information Rules, information is costly to produce but cheap to reproduce. So if you create information that can help solve a problem and contribute that information to a platform where it can be shared (or help create the platform), you will enable many others to use that valuable information at low or no cost. Google has a product called Fusion Tables, which is designed to “bust your data out of its silo” by allowing related data sets to be merged and analyzed as a single set, while still retaining the integrity of the original data set. Think of all the research scientists in the world working on similar problems, each with their own set of data in their own spreadsheets and databases. Or local governments trying to assess and solve environmental and infrastructure issues, tracking their progress in systems sitting on their desks or in the basement. Imagine the power of busting down these information silos to combine and analyze the data in new and different ways...
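The silo-busting move Fusion Tables illustrates, joining independently collected data sets on a shared key so they can be analyzed as one while each original set stays intact, can be sketched in a few lines. The data here (air-quality readings and asthma ER visits by county) are invented for illustration:

```python
# Two hypothetical "silos," collected independently but keyed by county name.
air_quality = {"Clark": 62, "Washoe": 48, "Elko": 35}       # AQI (invented)
er_visits = {"Clark": 410, "Washoe": 190, "Douglas": 25}    # asthma ER visits (invented)

# "Fuse" the silos: an inner join on the shared key. The merge is a new
# structure; neither original data set is modified.
fused = {
    county: {"aqi": air_quality[county], "er_visits": er_visits[county]}
    for county in air_quality.keys() & er_visits.keys()
}

# Only counties present in BOTH silos survive the join (Clark, Washoe);
# the unmatched rows (Elko, Douglas) remain in their original sets.
for county, row in sorted(fused.items()):
    print(county, row["aqi"], row["er_visits"])
```

The point of the passage is exactly this: once two silos share a key, the combined view supports questions (does air quality track ER visits?) that neither set could answer alone.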

And the advent of networks is giving rise to greater collective wisdom and intelligence. When reigning world champ Garry Kasparov lost his chess match to IBM’s Deep Blue computer in 1997, we all thought we were witnessing a seminal passing of the torch. But it turns out that the match heralded a new age of chess champions: not computers, but people who sharpen their skills by collaborating with computers. Today’s grandmasters (and there are twice as many now as there were in 1997) use computers as training partners, which makes the humans even better players. Thus a virtuous cycle of computer-aided intelligence emerges: Computers push humans to get even better, and humans then program even smarter computers. This is clearly happening in chess; why not in other pursuits?
Schmidt, Eric; Rosenberg, Jonathan (2014-09-23). How Google Works (Kindle Locations 3358-3390). Grand Central Publishing. Kindle Edition. 
Think "Watson." That latter paragraph also goes straight to Dr. Weed, et al. See, e.g., my earlier post "Back down in the Weeds': A Complex Systems Science Approach to Healthcare Costs and Quality." And, my first post on Larry Weed, blogged nearly two years ago: "Down in the Weeds'."

Continuing with Eric Schmidt and Jonathan Rosenberg:
The future’s so bright…
It is hard for us to look at an industry or field and not see a bright future. In health care, for example, real-time personal sensors will enable sophisticated tracking and measurement of complex human systems. Combine all that data with a map of risk factors generated by in-depth genetic analysis, and we will have unprecedented abilities (only with an individual’s consent) to identify and prevent or treat individual health issues much earlier. Aggregating that data can create platforms of information and knowledge that enable more effective research and inform smarter health-care policies.
Health-care consumers suffer from a dearth of information: They have virtually no data on procedural outcomes and doctor and hospital performance, and often have a hard time accessing their own health data, especially if it is held by different institutions. And pricing for medical services, medicine, and supplies is completely opaque and varies widely from patient to patient and facility to facility. Just bringing even a basic level of information transparency to health care could have a tremendous positive impact, lowering costs and improving outcomes... [ibid, Kindle Locations 3391-3399]
An excellent book. (It's only $3.75 in Kindle edition at Amazon.) I share their optimism. Guardedly. The comments are piling up in the Health Care Blog post that launched this one: "Health Data Outside the Doctor's Office." Check them out. Lots of cynicism out there, much of it openly hostile.

Here's one of the better ones.

One answer I have for the cynics can be found in my recent post "Physician, Health Thy System."

We also do well to stay mindful of the thoughtful caveats proffered in Nicholas Carr's excellent book "The Glass Cage," a topic of my October 27th, 2014 post. See also my citation of Simon Head's book "Mindless: Why Smart Machines Are Making Dumber Humans."

No shortage of diligent and cautious work to be done in the Health IT space. No shortage of dots to be firmly connected. We will need legions of Schmidt and Rosenberg's "smart creatives" at their best.

Not As Loony As It Sounds
Google’s “impossible" plan to beam Internet from solar-powered balloons is actually working. Here’s how.  

The majority of people in the world lack access to the Internet. Either they can’t afford a connection, or none exists where they live. Of all the efforts to bring those people online, Google’s “Project Loon” sounds like the most far-fetched. At the secretive Google X labs, it’s a moonshot among moonshots.

When the search company announced in June 2013 that it was building “Wi-Fi balloons” to blanket the world’s poor, remote, and rural regions with Internet beamed down from the skies, expert reaction ranged from skeptical to dismissive—with good reason. The plans called for Google to put hundreds of solar-powered balloons in the air simultaneously, each coordinating its movements in an intricate dance to provide continuous service even as unpredictable, high-speed winds buffeted them about the stratosphere.

“Absolutely impossible,” declared Per Lindstrand, a Swedish aeronautical engineer and perhaps the world’s best-known balloonist, in an early Wired article about the project...

"Who could have conceived that the Information Super Highway would usher in a new Dark Age; a return to blind faith over science, cant over reason?"
The unilateral rejection of facts may be the ultimate metaphor and irony of the Information Age; the more official the source the less likely it is to be believed.

Skepticism is as old as king and country. History itself has been aptly described as an argument un-ended. But there is a profound difference between revisionist history based on new evidence and evolving social mores and the rejection of facts.

In our digital world, all the accumulated knowledge of human history is available in the palm of our hands. But intermingled with hard-won truths are half-baked theories and outright lunacy—decorated with footnotes, graphs, pie charts, and citations from credentialed “experts”—proving that the Earth is warming, the Earth is cooling, or the Earth is flat.

If you seek it, you’ll find it. That’s the problem.

We are overwhelmed with data from every quarter, and our capacity to filter fact from fraud is limited. But the web never rests. Men and women of good intent who simply seek “the truth” upon which to base their opinions find themselves awash in folderol.

No longer confident in any single source for simple truths, more and more of us today are choosing to believe what we are predisposed to believe, period. Contravening facts are dismissed as lies or propaganda.

In a more circumspect time Daniel Patrick Moynihan famously said, “You are entitled to your own opinion, but you are not entitled to your own facts.” In the dotcom world we all have our own facts...
From The Facts About Ferguson Matter, Dammit

More to come...
