Search the KHIT Blog

Thursday, September 21, 2017

Precision medicine update


My latest AAAS Science Magazine hardcopy arrived yesterday. Interesting (firewalled) article by veteran AAAS writer Jocelyn Kaiser therein:
PRECISION MEDICINE
NIH's massive health study is off to a slow start
       
Nearly 3 years after then-President Barack Obama laid out a vision for perhaps the most ambitious and costly national health study ever, the U.S. National Institutes of Health (NIH) is still grappling with the complexities of the effort, which aims to probe links between genes, lifestyle, and health. The study plan, or protocol, is ready, the biobank is taking samples, and the enrollment site is up, but study leaders are still in early “beta testing” of the 1-million-person precision medicine study.

The All of Us study, as it is now dubbed, had once expected that by early 2017 it would enroll at least 10,000 participants for a pilot testing phase; it is up to just 2000. Its national kickoff, envisioned for 2016 and then mid-2017, has been delayed as staff work out the logistics of the study, which is funded at $230 million this year and projected to cost $4.3 billion over 10 years. But study leaders say that for an endeavor this complex, delays are inevitable. “The pressure on our end is really about getting it right and doing this when we're ready,” says Joni Rutter, director of scientific programs for the project in Bethesda, Maryland...
Plans are to enroll a million consenting participants in this ambitious longitudinal research effort.

The study will recruit most volunteers through health care provider organizations (HPOs), or networks of clinics and physicians, and the rest directly from the general population. Volunteers age 18 or older will give blood and urine samples and undergo a few basic tests such as blood pressure, height, and weight during a 1-hour clinic visit, after which they will receive $25, the protocol says. They will complete three online surveys about their health and lifestyle. The study will also ask volunteers to allow access to their electronic health records so researchers can track their health over time.
They've dubbed it the "All of Us" study. From the NIH page:
"The All of Us Research Program is a historic effort to gather data from one million or more people living in the United States to accelerate research and improve health. By taking into account individual differences in lifestyle, environment, and biology, researchers will uncover paths toward delivering precision medicine."

Among the potentially daunting issues to be successfully traversed: "Self-selection bias" (though they claim to be mounting an intense disparities-mitigation recruitment effort on that front), "consent" issues (which can vary legally from state to state), particularly since they intend to delve into DNA analytics (oversold of late?), and "data quality"/EHR "interoperability" hurdles.

Other thoughts of mine: "A million?" Sounds great (but is it really "Big Data?"). When I was a bank credit risk analyst, we ran highly effective credit underwriting modeling campaigns (pdf) using our internal customer data comprising several million accounts (our team was astute; the bank had successive record profits in each of the five years of my tenure).

But it's one thing to do adroit CART and stress-tested correlation/regression studies using clean data comprising a relatively small number of independent variables. It's another matter entirely to do so where the databases have to house hundreds to thousands of (longitudinal) clinical variables per patient (of sometimes uncertain data-origination pedigree). Study enrollee dropouts over time, and necessary stratifications (whether via subjective "expert judgment" or data-driven methods such as cluster analysis), may well make the "million" start to look pretty small in relatively short order.
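To make that concrete, here's a minimal back-of-the-envelope Python sketch. The strata, level counts, and dropout rate below are my own invented assumptions, not the study's:

    # How quickly a "million" thins out under dropout and stratification.
    # All numbers here are illustrative assumptions, not All of Us figures.
    enrolled = 1_000_000
    retention = 0.70  # assume 30% longitudinal dropout

    # Hypothetical stratification variables and their level counts:
    strata_levels = {"sex": 2, "age_band": 6, "ancestry_group": 5, "condition": 20}

    cells = 1
    for levels in strata_levels.values():
        cells *= levels

    avg_cell = enrolled * retention / cells
    print(f"{cells:,} strata -> ~{avg_cell:,.0f} participants per cell, on average")
    # 1,200 strata -> ~583 participants per cell, on average

And that assumes evenly filled cells; rare conditions and hard-to-recruit subgroups will be far thinner.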

The article concludes:
Others question whether the massive investment in DNA-based medicine will pay off. It may merely “confirm that for the vast majority of things, environment and behavior trump gene variants by a wide margin,” says physiologist Michael Joyner of the Mayo Clinic in Rochester, Minnesota. What's more, personalized medical care may not mean much for the millions of Americans who lack health insurance, says bioethicist Mark Rothstein of the University of Louisville in Kentucky. “We'd get a lot more bang for the buck by having everyone have access to the low-tech medicine that's available right now.”
Indeed. I forget the source at the moment, but I recall from one of the myriad books I study and cite here the assertion that, for most clinical dx issues, "an old-fashioned, assiduously taken SOAP note 'FH' (Family History) spanning three generations still outperforms a genomic assay." Again, the applied "omics" are still in their analytical infancy and rife with challenges.

Also apropos, today's political "Repeal" fracas. From my iPhone this afternoon:


I also have to wonder about the security of the "All of Us" projected long-term $4.3 billion funding, given the Trump administration's zest for budget-cutting at every turn. (Jocelyn does not share that concern, citing "congressional support." I guess we'll see.)


UPDATE

Jocelyn Kaiser apprised me of this. I'd have seen it in any event, given that STATnews is one of my priority daily stops, but thanks!

By LEV FACHER @levfacher
 

The National Institutes of Health would like six vials of your blood, please.

Its scientists would like to take a urine sample, measure your waistline, and have access to your electronic health records and data from the wearable sensor on your wrist. And if you don’t mind sharing, could they have your Social Security number?
It is a big ask, the NIH knows, and of an equally big group — the agency eventually hopes to enroll over 1 million participants in the next step of what four researchers referred to in a 2003 paper as “a revolution in biological research.”

The appeals, however, are befitting the biggest-ever bet on precision medicine, now more than a decade in the making. The paper’s lead author, Dr. Francis Collins, has been devoted to the project from its inception, riding his vision for a more precise world of medical treatment to his current post as the NIH’s director.

The data-gathering experiment, dubbed “All of Us,” is an important stop on the way to making personalized medicine a reality around the world, Collins and others say. The NIH hopes that the trove of information could one day enable doctors to use increasingly precise diagnostic tests. Eventually, scientists could shape treatments based on an individual’s genetic characteristics.

But one of Collins’s stated goals is ensuring more than half of the participants come from communities that are historically underrepresented in biomedical research — and that’s a gamble…
Good article. A fairly long read. Worth your time. Nice examination of the ongoing logistical issues that will have to be surmounted.

ERRATUM

The 11th Annual Health 2.0 Conference draws nigh. Be there. I will.

CASSIDY-GRAHAM "REPEAL" UPDATE

From The Incidental Economist, another of my daily stops.
The only option left to the Senate is to make health care reform someone else’s problem
…If it hasn’t become abundantly clear, the only thing left for Republican Senators to try is to kick the can down the road. Again. They’re going to try and pass a bill which gives less money overall to states, a lot less money to some states, and then tells them to “figure it out”. Later, they can claim that they gave the states all the tools they needed to fix the health care system, so now it’s THEIR fault things don’t work.

This is ridiculous.

There is no magic. There is no innovation. If there was a way to make the health care system broader, cheaper, and better, we would do it right now. We would have done it years ago…

THERE IS NO WAY TO SPEND LESS, COVER MORE, AND MAKE IT BETTER...
The opening paragraph of my 2009 post "The U.S. health care policy morass."
Some reform advocates have long argued that we can indeed [1] extend health care coverage to all citizens, with [2] significantly increased quality of care, while at the same time [3] significantly reducing the national (and individual) cost. A trifecta "Win-Win-Win." Others find the very notion preposterous on its face. In the summer of 2009, this policy battle is now joined in full fury...
Eight years later, the can continues to be kicked.

THE LATEST

Senator John McCain has announced that he will again vote against the latest GOP repeal bill. That will probably doom the effort.

____________

More to come...

Tuesday, September 19, 2017

GAFA, Really Big Data, and their sociopolitical implications

The September issue of my ASQ "Quality Progress" journal showed up in the snailmail.


Pretty interesting cover story. Some of it ties nicely into recent topics of mine such as "AI" and its adolescent cousin "NLP" as they go to the health IT space. More on that in a bit, but, first, let me set the stage via some relevant recent reading.


Franklin Foer has had a good book launch. Lots of interviews and articles online and a number of news show appearances.


I loved the book; read it in one day. I will cite just a bit from it below. "GAFA," btw, is the EU's sarcastic shorthand for "Google-Amazon-Facebook-Apple," the for-profit miners of the biggest of "big data."
UNTIL RECENTLY, it was easy to define our most widely known corporations. Any third grader could describe their essence. Exxon sells oil; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. The ascendant monopolies of today aspire to encompass all of existence. Some of these companies have named themselves for their limitless aspirations. Amazon, as in the most voluminous river on the planet, has a logo that points from A to Z; Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities.

Where do these companies begin and end? Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. Google now aims to build driverless cars, manufacture phones, and conquer death. Amazon was once content being “the everything store,” but now produces television shows, designs drones, and powers the cloud. The most ambitious tech companies— throw Facebook, Microsoft, and Apple into the mix— are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through the day, and never quite leave our sides. They aspire to become the repository for precious and private items, our calendar and contacts, our photos and documents. They intend for us to unthinkingly turn to them for information and entertainment, while they build unabridged catalogs of our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence within our bodies.

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They believe that they have the opportunity to complete the long merger between man and machine— to redirect the trajectory of human evolution. How do I know this? Such suggestions are fairly commonplace in Silicon Valley, even if much of the tech press is too obsessed with covering the latest product launch to take much notice of them. In annual addresses and townhall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature— a view of human nature that they intend to impose on the rest of us.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, which isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, that’s not the worldview that emerges. In fact, it is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies believe we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. By stitching the world together, they can cure its ills. Rhetorically, the tech companies gesture toward individuality— to the empowerment of the “user”— but their worldview rolls over it. Even the ubiquitous invocation of users is telling, a passive, bureaucratic description of us.

The big tech companies— the Europeans have charmingly, and correctly, lumped them together as GAFA (Google, Apple, Facebook, Amazon)— are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility to intellectual property. In the realm of economics, they justify monopoly with their well-articulated belief that competition undermines our pursuit of the common good and ambitious goals. When it comes to the most central tenet of individualism— free will— the tech companies have a different way. They hope to automate the choices, both large and small, that we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the path we travel, the friends we invite into our circle.

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our own role in determining the human path. Once we cross certain thresholds— once we transform the values of institutions, once we abandon privacy— there’s no turning back, no restoring our lost individuality…


Foer, Franklin. World Without Mind: The Existential Threat of Big Tech (pp. 1-3). Penguin Publishing Group. Kindle Edition.
I've considered these issues before. See, e.g., "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

A useful volume of historical context also comes to mind.

Machines are about control. Machines give more control to humans: control over their environment, control over their own lives, control over others. But gaining control through machines means also delegating it to machines. Using the tool means trusting the tool. And computers, ever more powerful, ever smaller, and ever more networked, have given ever more autonomy to our instruments. We rely on the device, plane and phone alike, trusting it with our security and with our privacy. The reward: an apparatus will serve as an extension of our muscles, our eyes, our ears, our voices, and our brains.

Machines are about communication. A pilot needs to communicate with the aircraft to fly it. But the aircraft also needs to communicate with the pilot to be flown. The two form an entity: the pilot can’t fly without the plane, and the plane can’t fly without the pilot. But these man-machine entities aren’t isolated any longer. They’re not limited to one man and one machine, with a mechanical interface of yokes, throttles, and gauges. More likely, machines contain a computer, or many, and are connected with other machines in a network. This means many humans interact with and through many machines. The connective tissue of entire communities has become mechanized. Apparatuses aren’t simply extensions of our muscles and brains; they are extensions of our relationships to others— family, friends, colleagues, and compatriots. And technology reflects and shapes those relationships.

Control and communication began to shift fundamentally during World War II. It was then that a new set of ideas emerged to capture the change: cybernetics. The famously eccentric MIT mathematician Norbert Wiener coined the term, inspired by the Greek verb kybernan, which means “to steer, navigate, or govern.” Cybernetics; or, Control and Communication in the Animal and the Machine, Wiener’s pathbreaking book, was published in the fall of 1948. The volume was full of daredevil prophecies about the future: of self-adaptive machines that would think and learn and become cleverer than “man,” all made credible by formidable mathematical formulas and imposing engineering jargon…

…From today’s vantage point, the future is hazy, dim, and formless. But these questions aren’t new. The future of machines has a past. And mastering our future with machines requires mastering our past with machines. Stepping back twenty or forty or even sixty years brings the future into sharper relief, with exaggerated clarity, like a caricature, revealing the most distinct and marked features. And cybernetics was a major force in molding these features.

That cybernetic tension of dystopian and utopian visions dates back many decades. Yet the history of our most potent ideas about the future of technology is often neglected. It doesn’t enter archives in the same way that diplomacy and foreign affairs would. For a very long period of time, utopian ideas have dominated; ever since Wiener’s death in March 1964, the future of man’s love affair with the machine was a starry-eyed view of a better, automated, computerized, borderless, networked, and freer future. Machines, our own cybernetic creations, would be able to overcome the innate weaknesses of our inferior bodies, our fallible minds, and our dirty politics. The myth of the clean, infallible, and superior machines was in overdrive, out of balance.

By the 1990s, dystopia had returned. The ideas of digital war, conflict, abuse, mass surveillance, and the loss of privacy— even if widely exaggerated— can serve as a crucial corrective to the machine’s overwhelming utopian appeal. But this is possible only if contradictions are revealed— contradictions covered up and smothered by the cybernetic myth. Enthusiasts, driven by hope and hype, overestimated the power of new and emerging computer technologies to transform society into utopia; skeptics, often fueled by fear and foreboding, overestimated the dystopian effects of these technologies. And sometimes hope and fear joined forces, especially in the shady world of spies and generals. But misguided visions of the future are easily forgotten, discarded into the dustbin of the history of ideas. Still, we ignore them at our own peril. Ignorance risks repeating the same mistakes.

Cybernetics, without doubt, is one of the twentieth century’s biggest ideas, a veritable ideology of machines born during the first truly global industrial war that was itself fueled by ideology. Like most great ideas, cybernetics was nimble and shifted shape several times, adding new layers to its twisted history decade by decade. This book peels back these layers, which were nearly erased and overwritten again and again, like a palimpsest of technologies. This historical depth, although almost lost, is what shines through the ubiquitous use of the small word “cyber” today…


Rid, Thomas (2016-06-28). Rise of the Machines: A Cybernetic History (Kindle Locations 167-233). W. W. Norton & Company. Kindle Edition.
Still reading this one. A fine read; it spans roughly the period from WWII to 2016.

I can think of a number of relevant others I've heretofore cited, but these will do for now.

BACK TO THE QUALITY PROGRESS ARTICLE

The Deal With Big Data

Move over! Big data analytics and standardization are the next big thing in quality
by Michele Boulanger, Wo Chang, Mark Johnson and T.M. Kubiak

 
Just the Facts


  • More and more organizations have realized the important role big data plays in today’s marketplaces.

  • Recognizing this shift toward big data practices, quality professionals must step up their understanding of big data and how organizations can use and take advantage of their transactional data.

  • Standards groups realize big data is here to stay and are beginning to develop foundational standards for big data and big data analytics.
The era of big data is upon us. While providing a formidable challenge to the classically trained quality practitioner, big data also offers substantial opportunities for redirecting a career path into a computational and data-intensive environment.

The change to big data analytics from the status quo of applying quality principles to manufacturing and service operations could be considered a paradigm shift comparable to the changes quality professionals experienced when statistical computing packages became widely available, or when control charts were first introduced.

The challenge for quality practitioners is to recognize this shift and secure the training and understanding necessary to take full advantage of the opportunities.

What’s the big deal?
What exactly is big data? You’ve probably noticed that big data often is associated with transactional data sets (for example, American Express and Amazon), social media (for example, Facebook and Twitter) and, of course, search engines (for example, Google). Most formal definitions of big data involve some variant of the four V’s:

  • Volume: Data set size.
  • Variety: Diverse data types residing in multiple locations.
  • Velocity: Speed of generation and transmission of data.
  • Variability: Nonconstancy of volume, variety and velocity.
This set of V’s is attributable originally to Gartner Inc., a research and advisory company, and documented by the National Institute of Standards and Technology (NIST) in the first volume of a set of seven documents. Big data clearly is the order of the day when the quality practitioner is confronted with a data set that exceeds the laptop’s memory, which may be by orders of magnitude.

In this article, we’ll reveal the big data era to the quality practitioner and describe the strategy being taken by standardization bodies to streamline their entry into the exciting and emerging field of big data analytics. This is all done with an eye on preserving the inherently useful quality principles that underlie the core competencies of these standardization bodies.

Primary classes of big data problems
The 2016 ASQ Global State of Quality reports included a spotlight report titled "A Trend? A Fad? Or Is Big Data the Next Big Thing?" hinting that big data is here to stay. If the conversion from acceptance sampling, control charts or design of experiments seems a world away from the tools associated with big data, rest assured that the statistical bases still apply. 
Of course, the actual data, per the four V’s, are different. Relevant formulations of big data problems, however, enjoy solutions or approaches that are statistical, though the focus is more on retrospective data and causal models in traditional statistics, and more forward-looking data and predictive analytics in big data analytics. Two primary classes of problems occur in big data:
  • Supervised problems occur when there is a dependent variable of interest that relates to a potentially large number of independent variables. For this, regression analysis comes into play, for which the typical quality practitioner likely has some background.
  • Unsupervised problems occur when unstructured data are the order of the day (for example, doctor’s notes, medical diagnostics, police reports or internet transactions).
Unsupervised problems seek to find the associations among the variables. In these instances, cluster and association analysis can be used. The quality practitioner can easily pick up such techniques...
Good piece. It's firewalled, unfortunately, but ASQ does provide "free registration" for non-member "open access" to certain content, including this article.
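For the code-inclined, here's a minimal sketch of the supervised/unsupervised distinction the authors draw, using scikit-learn on synthetic data (everything below is invented for illustration; it's a sketch of the concepts, not the article's examples):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)

    # Supervised: a dependent variable (y) modeled against independent variables (X).
    X = rng.normal(size=(1000, 5))
    y = X @ np.array([1.5, 0.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=1000)
    reg = LinearRegression().fit(X, y)
    print("estimated coefficients:", reg.coef_.round(2))

    # Unsupervised: no dependent variable; look for structure among the variables.
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", np.bincount(labels))

Same matrix of "independent variables" in both cases; the difference is whether a dependent variable anchors the analysis.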

The article, as we might expect, is focused on the tech and operational aspects of using "big data" for analytics -- i.e., the requisite interdisciplinary skill sets, data heterogeneity problems going to "interoperability," and the chronic related problems associated with "data quality." These considerations differ materially from those of "my day" -- I joined ASQ in the 80's, when it was still "ASQC," the "American Society for Quality Control." Consequently, I am an old-school Deming-Shewhart guy. I successfully sat for the rigorous ASQ "Certified Quality Engineer" exam (CQE) in 1992. At the time it comprised about two-thirds applied industrial statistics -- sampling theory, probability calculations, design of experiments -- all aimed principally at assessing and improving things like production "fraction defectives," maintaining "SPC" (Statistical Process Control), etc.

That kind of work pretty much assumed relative homogeneity of data under tight in-house control.
Such was even the case during my time in bank credit risk modeling and management. See, e.g., my 2003 whitepaper "FNBM Credit Underwriting Model Development" (large pdf). Among our data resources, we maintained a fairly large Oracle data warehouse comprising several million accounts from which I could pull customer-related data into SAS for analytics.
While such analytic methods do in fact continue to be deployed, the issues pale in comparison to the challenges we face in a far-flung "cloud-based" "big data" world comprised of data of wildly varying pedigree. The Quality Progress article provides a good overview of the current terrain and the issues requiring our attention.
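Apropos of the first "V" (Volume): a data set that merely exceeds the laptop's RAM doesn't necessarily require a cluster; it can often be streamed in chunks. A minimal pandas sketch, assuming a hypothetical flat-file extract ("accounts.csv") with "segment" and "balance" columns:

    import pandas as pd

    # Stream the file in 500k-row chunks, aggregating incrementally
    # so the full data set never has to fit in memory at once.
    totals = {}
    for chunk in pd.read_csv("accounts.csv", chunksize=500_000):
        for segment, balance in chunk.groupby("segment")["balance"].sum().items():
            totals[segment] = totals.get(segment, 0.0) + balance

    print(totals)  # per-segment balance totals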

One publicly available linked resource the article provides:

The NBD-PWG was established together with the industry, academia and government to create a consensus-based extensible Big Data Interoperability Framework (NBDIF) which is a vendor-neutral, technology- and infrastructure-independent ecosystem. It can enable Big Data stakeholders (e.g. data scientists, researchers, etc.) to utilize the best available analytics tools to process and derive knowledge through the use of standard interfaces between swappable architectural components. The NBDIF is being developed in three stages with the goal to achieve the following with respect to the NIST Big Data Reference Architecture (NBD-RA), which was developed in Stage 1:
  1. Identify the high-level Big Data reference architecture key components, which are technology, infrastructure, and vendor agnostic;
  2. Define general interfaces between the NBD-RA components with the goals to aggregate low-level interactions into high-level general interfaces and produce set of white papers to demonstrate how NBD-RA can be used;
  3. Validate the NBD-RA by building Big Data general applications through the general interfaces.
The "Use Case" pages contains links to work spanning the breadth of big data application domains, e.g., government operations, commercial, defense, healthcare and life sciences, deep learning and social media, the ecosystem for research, astronomy and physics, environmental and polar sciences, and energy.

From the Electronic Medical Record (EMR) Data "use case" document (Shaun Grannis, Indiana University):
As health care systems increasingly gather and consume electronic medical record data, large national initiatives aiming to leverage such data are emerging, and include developing a digital learning health care system to support increasingly evidence-based clinical decisions with timely accurate and up-to-date patient-centered clinical information; using electronic observational clinical data to efficiently and rapidly translate scientific discoveries into effective clinical treatments; and electronically sharing integrated health data to improve healthcare process efficiency and outcomes. These key initiatives all rely on high-quality, large-scale, standardized and aggregate health data.  Despite the promise that increasingly prevalent and ubiquitous electronic medical record data hold, enhanced methods for integrating and rationalizing these data are needed for a variety of reasons. Data from clinical systems evolve over time. This is because the concept space in healthcare is constantly evolving: new scientific discoveries lead to new disease entities, new diagnostic modalities, and new disease management approaches. These in turn lead to new clinical concepts, which drives the evolution of health concept ontologies. Using heterogeneous data from the Indiana Network for Patient Care (INPC), the nation's largest and longest-running health information exchange, which includes more than 4 billion discrete coded clinical observations from more than 100 hospitals for more than 12 million patients, we will use information retrieval techniques to identify highly relevant clinical features from electronic observational data. We will deploy information retrieval and natural language processing techniques to extract clinical features. Validated features will be used to parameterize clinical phenotype decision models based on maximum likelihood estimators and Bayesian networks. Using these decision models we will identify a variety of clinical phenotypes such as diabetes, congestive heart failure, and pancreatic cancer…

Patients increasingly receive health care in a variety of clinical settings. The subsequent EMR data is fragmented and heterogeneous. In order to realize the promise of a Learning Health Care system as advocated by the National Academy of Science and the Institute of Medicine, EMR data must be rationalized and integrated. The methods we propose in this use-case support integrating and rationalizing clinical data to support decision-making at multiple levels.
This document is dated August 11, 2013. Let's don't hurry up or anything. "Interoperababble."
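Snark aside, the core idea in the use case -- binary clinical features extracted from records feeding a probabilistic phenotype classifier -- is easy to sketch. A toy naive Bayes stand-in (emphatically not Grannis's actual Bayesian-network pipeline; features and labels invented):

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # Rows = patients; columns = binary features pulled from the record, e.g.
    # [elevated_a1c, metformin_rx, high_fasting_glucose, neuropathy_mention]
    X = np.array([
        [1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 0, 0],
        [0, 1, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0],
    ])
    y = np.array([1, 1, 0, 0, 1, 0])  # 1 = chart-reviewed diabetes phenotype

    model = BernoulliNB().fit(X, y)  # per-feature likelihoods from simple counts
    new_patient = np.array([[1, 0, 1, 0]])
    print("P(diabetes phenotype):", round(model.predict_proba(new_patient)[0, 1], 2))

The hard part, per the use case, isn't the classifier; it's getting trustworthy features out of heterogeneous, ever-evolving clinical data in the first place.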

___

BEYOND TECHNICAL CAPABILITY ISSUES

Back to the themes with which I began this post. From Wired.com

AI RESEARCH IS IN DESPERATE NEED OF AN ETHICAL WATCHDOG

ABOUT A WEEK ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go…
Yeah, it's about "AI" rather than "big data" per se. But, to me, the direct linkage is pretty obvious.

Again, apropos, I refer you to my prior post "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

Tangentially, see also my 'Watson and cancer" post.

SOME ADDITIONAL RECOMMENDED TOPICAL READING FROM MY KINDLE STASH



By no means exhaustive. I also recommend you read a bunch of Kevin Kelly, among others.

ADDENDUM

From Scientific American: 
Searching for the Next Facebook or Google: Bloomberg Helps Launch Tech Incubator
The former mayor speaks with Scientific American about the new Cornell Tech campus in New York City: “Culture attracts capital a lot quicker than capital will attract culture.”
____________

More to come...

Thursday, September 14, 2017

Defining "Interoperability" down, an update

 
Pretty interesting Modern Healthcare article arrived in my inbox this morning.
Epic takes a step towards interoperability with Share Everywhere
By Rachel Z. Arndt


Epic Systems Corp. is making headway in the quest for interoperability and potentially fulfilling meaningful use requirements with a new way for patients to send their medical records to their providers.

The new product, called Share Everywhere and unveiled Wednesday, allows patients to give doctors access to their medical records through an internet browser. That means doctors don't have to have their own electronic health record systems and the information can be shared despite not having EHRs that can communicate.

"This is really patient-driven interoperability," said Sean Bina, Epic's vice president of access applications. "This makes it so the patient can direct their existing record to the clinician of their choice," he said, regardless of whether that patient is one of the 64% of Americans who have records in Epic EHRs or whether that doctor is one of the 57.8% of ambulatory physicians who use Epic.

The receiving doctors need only have a computer and an internet connection—a requirement that might have been helpful in the recent hurricanes…
As I've noted before, my world of late has been "all Epic all the time," mostly in my role as next-of-kin caregiver, as well as an episodic chronic care F/up patient. Kaiser Permanente? Epic. Muir? Epic. UCSF? Epic. Stanford Medical Center? Epic. (I think Sutter Health is also on Epic, but I'm not certain.)

They pretty much rule the Bay Area (which is a pretty good thing for us, net). Their national footprint is huge as well.


More from Rachel's article:
"Where we are now is we're looking at how do we get the data into the workflow of the clinician so it's not something else they have to do, it's just there," said Charles Christian, vice president of technology and engagement for the Indiana Health Information Exchange. "There is a massive amount of data that's being shared for a variety of reasons. The question is, how much of it is actually being used to have an impact on the outcomes?"
Interesting. Goes to my long irascible (pedantic?) beef regarding the misnomer "interoperability," which I've called "interoperababble." To recap: no amount of calling point-to-point interfaced data exchange "interoperability" will make it so -- should you take the IEEE definition seriously.

As do I.

"How do we get the data into the workflow so it's not something else they have to do, it's just there?"

Indeed. That is in fact the heart of the IEEE definition. To the extent that Epic's "Share Everywhere" service facilitates that workflow "seamlessness" goal, it may in fact be a substantive step in the right direction.
Though, it does appear to be "read-only." Minimally, the recipient provider would have to "screen-scrape" the data into her own EHR (or otherwise save them "to file") for "write" updating in the wake of acting upon the data. Chain-of-custody/last record update concerns, anyone?
We'll see. From Epic's press release:
Epic Announces Worldwide Interoperability with Share Everywhere
Patients can authorize any provider to view their record in Epic and to send progress notes back


Verona, Wis. – Epic, creator of the most widely used electronic health record, today announced Share Everywhere, a big leap forward in interoperability. Share Everywhere will allow patients to grant access to their data to any providers who have internet access, even if they don’t have EHRs. In addition, using Share Everywhere, a provider granted access can send a progress note back to the patient’s healthcare organization for improved continuity of care…
"Send a progress note back?" Perhaps that might suffice to allay "last update" concerns. Perhaps. A fundamental issue in QA broadly is that of "document version control."

ERRATUM


Have you registered for the Oct 1st-4th Santa Clara Health 2.0 Annual Conference yet? Hope to see you there.
____________

More to come...

Monday, September 11, 2017

Watson and cancer

"...there’s a rather basic but fundamental problem with Watson, and that’s getting patient data entered into it. Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson or hire people to manually enter patient data into the system. Indeed, IBM representatives admitted that teaching a machine to read medical records is “a lot harder than anyone thought.” (Actually, this rather reminds me of Donald Trump saying, “Who knew health care could be so complicated?” in response to the difficulty Republicans had coming up with a replacement for the Affordable Care Act.) The answer: Basically anyone who knows anything about it. Anyone who’s ever tried to wrestle health care information out of a medical record, electronic or paper, into a form in a database that can be used to do retrospective or prospective studies knows how hard it is..."
From Science Based Medicine. They've picked up on and run with the reporting first published by STATnews.
"Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson..."
Ahhh... that pesky chronic "interoperababble" data exchange problem.

SBM continues:
What can Watson actually do?
IBM represents Watson as being able to look for patterns and derive treatment recommendations that human doctors might otherwise not be able to come up with because of our human shortcomings in reading and assessing the voluminous medical literature, but what Watson can actually do is really rather modest. That’s not to say it’s not valuable and won’t get better with time, but the problem is that it doesn’t come anywhere near the hype...
Necessarily, Watson has to employ the more difficult "Natural Language Understanding" (NLU) component of Natural Language Processing (NLP). I have previously posted on my NLP/NLU concerns here.
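A toy illustration of why the "understanding" part is hard: naive keyword matching happily flags findings that the note explicitly negates, which is why clinical NLP has long needed negation handling along the lines of NegEx. Example text and logic invented:

    # Crude negation-scope check -- a toy stand-in for real NLU.
    note = "No evidence of metastasis. Patient denies chest pain."
    cues = ("no evidence of", "denies", "without")

    for finding in ("metastasis", "chest pain"):
        naive_hit = finding in note.lower()
        negated = any(
            cue in sentence.lower() and finding in sentence.lower()
            for sentence in note.split(".")
            for cue in cues
        )
        print(f"{finding}: naive match={naive_hit}, actually negated={negated}")

Both "findings" match naively; both are negated in context. Scale that up to free-text oncology notes and the difficulty of Watson's task becomes obvious.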

Search Google news for "Watson oncology" or "Watson cancer."


I'm sure you've all seen the numerous Watson TV commercials by now.

Are we now skiing just past the "Peak of Inflated Expectations?"

Everything "OncoTech" is of acute interest to me these days amid my daughter's cancer illness. apropos, see my prior post "Siddhartha Mukherjee's latest on cancer."

UPDATE

THCB has a nice post on the topic.
7 Ways We’re Screwing Up AI in Healthcare
BY LEONARD D’AVOLIO


The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has at least an article or two on the topic, and there isn’t a medical conference that doesn’t at least have a panel if not a dedicated day to discuss. The promise and potential is very real.

And yet, we seem to be blowing it.

The latest example is an investigation in STAT News pointing out the stumbles of IBM Watson followed inevitably by the ‘is AI ready for prime time’ debate. Of course, IBM isn’t the only one making things hard on itself. Their marketing budget and approach makes them a convenient target. Many of us – from vendors to journalists to consumers – are unintentionally adding degrees to an already uphill climb.

If our mistakes led only to financial loss, no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death…
Indeed. Good post. Read all of it.

Also of recent relevant note:
Athelas releases automated blood testing kit for home use
Silicon Valley-based startup Athelas today introduced a smartphone app that it says can do simple blood diagnosis at home and return results in just 60 seconds.

The kit itself looks a bit like an Amazon Echo device and is coupled with a smartphone app to reveal the results of the test. In a demonstration, co-founder Deepika Bodapati showed TechCrunch that from taking a sample of blood and sliding it into the device, within seconds users can see their white blood count, neutrophils, lymphocytes and platelets.

Bodapati and co-founder Tanay Tandon are well aware of the fate of a similar device that promised to deliver results but wasn’t exactly what it said it was. That was the blood testing startup, Theranos, that soared to a valuation of $9 billion and then crashed and burned after its effectiveness was called into question.

“Theranos proved there was clear interest in the space, it would have been a great company if it worked,” Tandon said in an interview with Bloomberg. “Now, investors say they need proof before we can raise money.”

Athelas has published papers on the accuracy of its data and has also been FDA-approved as a device to image diagnostics. Before it can be sold over the counter, it will have to receive further approval stating that it’s as accurate as a standard test in lab conditions…
"Theranos?" Remember them? I've had my considerable irascible sport with them here.

Athelas is specifically pitching the utility of their product for oncology blood assay monitoring.


Interesting. My daughter has to run over to Kaiser today for her routine blood draw in advance of her upcoming every-other-week chemo infusion. I'm not sure her oncologist (who is also a hematologist) would be comfortable leaning on DTC single-drop-of-blood assay alternatives.

I think the Athelas people will be at the upcoming Health 2.0 Conference, and we will be hooking up for discussion. I'll have to look back through the Conference agenda to see whether any Watson peeps will be there.

Also, in the wake of my recent cardiology workup, I have to wonder about apps like that now marketed DTC by AliveCor:
Meet Kardia Mobile.
Your personal EKG.

Take a medical-grade EKG in just 30 seconds. Results are delivered right to your smartphone. Now you can know anytime, anywhere if your heart rhythm is normal, or if atrial fibrillation is detected.

Is this widely useful, or just another "Worried Well" toy? I showed this pitch to my cardiologist. He was dubious -- with respect to my case, that is.

ERRATUM
On "big data" and "Big Silicon Valley firms." New book release on Sept 12th. Saw a number of articles with and by the author.
"…More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine - to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the Founding Fathers of these companies often make big, bold pronouncements about human nature - a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: "Your program algorithms aren't that complicated," he says. And if humans function like computers, why not hasten the day we become fully cyborg? To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets.

"The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly," he has said. "Having two identities for yourself is an example of a lack of integrity." Of course, that's both an expression of idealism and an elaborate justification for Facebook's business model…"

Looks interesting. I will be reading and reviewing it. I had a run at some of his issues in 2015. See "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

UPDATE

Finished the Franklin Foer book. Riveting read. Read it "cover to cover" pretty much straight through in one day. Contextual review coming, stay tuned.

CODA

____________

More to come...

Friday, September 8, 2017

Hurricane Irma: even worse than Hurricane Harvey


On the devastating heels of Harvey. Yikes. As I post this (~5 pm PDT), it looks like Irma will enter the U.S. around Sunday morning with the eye somewhere between Key West and Miami (likely a Cat 4 at the time, perhaps even a 5).

Unreal (and, there's a now-Cat 4 Hurricane Jose further east in the central Atlantic behind Irma). I have kin and friends living up and down the length of Florida. Some have gotten out, others cannot or will not.

It is estimated that there are approximately 75,000 Florida LTC patients (nursing homes / "long-term care" facilities). I have no idea how many Floridians are patients in acute care facilities (the state is most recently reported to have more than 92,000 acute care beds). This is truly an evac nightmare.

Beyond the potential for mass loss of lives, the U.S. economic tab for these two storms alone will probably grow to close to a trillion dollars, counting all of damage/destruction and business losses from all of the widespread downtime. Trump can forget his absurd Mexico wall.

SATURDAY 5 P.M. PDT UPDATE

Satellite map above, 24 hours later (again obtained from the Miami Herald website).


The storm is now expected to cross the Keys and hit SW Florida as a Cat 4, likely all the way up to the Tampa area. Ugh.

CODA

While we're obsessing over ourselves in the U.S...


Thigh-deep grody water in the streets of Havana, Cuba in the wake of Irma. The trail of utter devastation stretches all the way back to the far eastern lower Caribbean islands.
____________

More to come...

Wednesday, September 6, 2017

Siddhartha Mukherjee's latest on cancer


In my New Yorker (a great long-read; may be firewalled):

Given our current family circumstance regarding my daughter's illness, I found this extremely interesting.

"It is a basic insight that an undergraduate ecologist might find familiar: the “invasiveness” of an organism is always a relative concept."

He's analogically alluding to "mets" -- metastasis. It's the mets that end up killing you. Brain mets killed my elder daughter in 1998.
One evening this past June, as I walked along the shore of Lake Michigan in Chicago, I thought about mussels, knotweed, and cancer. Tens of thousands of people had descended on the city to attend the annual meeting of the American Society of Clinical Oncology, the world’s preëminent conference on cancer. Much of the meeting, I knew, would focus on the intrinsic properties of cancer cells, and on ways of targeting them. Yet those properties might be only part of the picture. We want to know which mollusk we’re dealing with; but we also need to know which lake...
"We’ve tended to focus on the cancer, but its host tissue—“soil,” rather than “seed”—could help us predict the danger it poses."
We aren’t particularly adept at predicting whether a specific patient’s cancer will become metastatic or not. Metastasis can seem “like a random act of violence,” Daniel Hayes, a breast oncologist at the University of Michigan, told me when we spoke at the ASCO meeting in Chicago. “Because we’re not very good at telling whether breast-cancer patients will have metastasis, we tend to treat them with chemotherapy as if they all have potential metastasis.” Only some fraction of patients who receive toxic chemotherapy will really benefit from it, but we don’t know which fraction. And so, unable to say whether any particular patient will benefit, we have no choice but to overtreat. For women like Guzello, then, the central puzzle is not the perennial “why me.” It’s “whether me...”
Why was the liver so hospitable to metastasis, while the spleen, which had similarities in blood supply, size, and proximity, seemed relatively resistant? As Paget probed deeper, he found that cancerous growth even favored particular sites within organ systems. Bones were a frequent site of metastasis in breast cancer—but not every bone was equally susceptible. “Who has ever seen the bones of the hands or the feet attacked by secondary cancer?” he asked. Paget coined the phrase “seed and soil” to describe the phenomenon. The seed was the cancer cell; the soil was the local ecosystem where it flourished, or failed to. Paget’s study concentrated on patterns of metastasis within a person’s body. The propensity of one organ to become colonized while another was spared seemed to depend on the nature or the location of the organ—on local ecologies. Yet the logic of the seed-and-soil model ultimately raises the question of global ecologies: why does one person’s body have susceptible niches and not another’s?
I found the following of particular interest, in light of my own 2015 wrangle with prostate cancer.
Widely used gene-expression assays, such as MammaPrint and Oncotype DX, have helped doctors identify certain patients who are at low risk for metastatic spread and can safely skip chemotherapy. “We’ve been able to reduce the overuse of chemotherapy in about one-third of all patients in some subtypes of breast cancers,” Daniel Hayes said.

Hayes is also grateful for the kind of genetic tests that indicate which patients might benefit from a targeted therapy like Herceptin (those whose breast cancers produce high levels of the growth-factor receptor protein HER2) or from anti-estrogen medications (those whose tumors have estrogen receptors). But, despite our advances in targeting tumor cells using genetic markers as guides, our efforts to predict whose cancers will become metastatic have advanced only slowly. The “whether me” question haunts the whole field. What the oncologist Harold Burstein calls “the uncertainty box” of chemotherapy has remained stubbornly closed...
 "OncoType DX"?

From my 2015 blog post:
...Without asking me, my urologist sent my biopsy to a company that performs "OncoType dx" genetic assays of biopsies for several cancers, including prostate. He simply wanted to know whether mine was a good candidate for this type of test.

They just went ahead and ran it, without the urologist's green light order, or my knowledge or consent. I got a call out of the blue one day from a lady wanting to verify my insurance information, and advising me that this test might be "out of your network," leaving me on the hook for some $400, worst case (I subsequently came to learn that it's a ~$4,000 test). I thought it odd, and thought I'd follow up with my doc.

My urologist called me. He was embarrassed and pissed. A young rep from the OncoType dx vendor also called me shortly thereafter. He was in fear of losing his job, having tee'd up the test absent explicit auth.

I've yet to hear anything further. I think they're all just trying to make this one go away. Though, it would not surprise me one whit to see a charge pop up in my BCBS/RI portal one of these days.

The OncoType test result merely served to confirm (expensively -- for someone) what my urologist already suspected. The malignancy aggressiveness in my case is in a sort of "grey zone." The emerging composite picture is "don't dally with this."
That was not a fun year (but, I'm OK now). Neither is this one.

BACK TO MUKHERJEE
David Adams’s father never suffered a recurrence of melanoma; he died from prostate cancer that had spread widely through his body. “Years ago, I would have thought of the melanoma versus the prostate cancer in terms of differences in the inherent metastatic potential of those two cell types,” Adams said. “Good cancer versus bad cancer. Now I think more and more of a different question: Why was my father’s body more receptive to prostate metastasis versus melanoma metastasis?”

There are important consequences of taking soil as well as seed into account. Among the most successful recent innovations in cancer therapeutics is immunotherapy, in which a patient’s own immune system is activated to target cancer cells. Years ago, the pioneer immunologist Jim Allison and his colleagues discovered that cancer cells used special proteins to trigger the brakes in the host’s immune cells, leading to unchecked growth. (To use more appropriate evolutionary language: clones of cancer cells that are capable of blocking host immune attacks are naturally selected and grow.) When drugs stopped certain cancers from exploiting these braking proteins, Allison and his colleagues showed, immune cells would start to attack them.

Such therapies are best thought of as soil therapies: rather than killing tumor cells directly, or targeting mutant gene products within tumor cells, they work on the phalanxes of immunological predators that survey tissue environments, and alter the ecology of the host. But soil therapies will go beyond immune factors; a wide variety of environmental features have to be taken into account. The extracellular matrix with which the cancer interacts, the blood vessels that a successful tumor must coax out to feed itself, the nature of a host’s connective-tissue cells—all of these affect the ecology of tissues and thereby the growth of cancers...
...Considering the limitations of our knowledge, methods, and resources, our field may have had no choice but to submit to the lacerations of Occam’s razor, at least for a while. It was only natural that many cancer biologists, confronting the sheer complexity of the whole organism, trained their attention exclusively on our “pathogen”: the cancer cell. Investigating metastasis seems more straightforward than investigating non-metastasis; clinically speaking, it’s tough to study those who haven’t fallen ill. And we physicians have been drawn to the toggle-switch model of disease and health: the biopsy was positive; the blood test was negative; the scans find “no evidence of disease.” Good germs, bad germs. Ecologists, meanwhile, talk about webs of nutrition, predation, climate, topography, all subject to complex feedback loops, all context-dependent. To them, invasion is an equation, even a set of simultaneous equations...
"...In the field of oncology, “holistic” has become a patchouli-scented catchall for untested folk remedies: raspberry-leaf tea and juice cleanses. Still, as ambitious cancer researchers study soil as well as seed, one sees the beginnings of a new approach. It would return us to the true meaning of “holistic”: to take the body, the organism, its anatomy, its physiology—this infuriatingly intricate web—as a whole. Such an approach would help us understand the phenomenon in all its vexing diversity; it would help us understand when you have cancer and when cancer has you. It would encourage doctors to ask not just what you have but what you are."

Again, great article. Seriously worth your time. Also way worth your time is his last book:


I first cited it here.
__

MORE ON THE DX FROM HELL

One of my daily surfing stops is Wired.com. Looking for relevant tech stuff. Ran across this, concerning a Seattle AI tech startup guy:
THE DAY I FOUND OUT MY LIFE WAS HANGING BY A THREAD
Matt Benke


IT STARTED WHILE I was on a Hawaiian vacation in May. I thought I’d just tweaked my back lifting a poolside lounge chair. Back home, my back pain became severe, and I started noticing nerve pain in my legs. For eight days I could barely crawl around the house. My wife and two daughters nicknamed me “the worm.” At 45, I’m in pretty good shape—avid cyclist, runner, weightlifter, yoga enthusiast with a resting pulse in the 50s.

So it was weird when my primary care doctor put me on a cocktail of pain killers, nerve blockers, and cortisone shots. I even tried acupuncture. But as my back began to improve in late June, I started to feel off. Sick to my stomach. Weak. Couldn’t sleep. I lost more than 10 pounds. But I chalked this up to a month of too much Vicodin after a lifetime of thinking two Advil was excessive. My doctor said I was fit and healthy and that there was no need to run any blood tests. He wondered aloud if this was all in my head…

...After nearly a month of feeling horrible despite my back getting better and being off all medications, I hit a wall. On July 26, a Wednesday, I finished my day’s meetings and drove myself to the least busy ER I know of—the one at Swedish Medical Center in the Issaquah Highlands, 20 miles east of downtown.

A couple hours later I called Amy and asked her to join me. They’d already done a bunch of tests and ruled out the obvious—urinary tract infection, epidural abscess—and were sort of grasping at straws. Over the phone, I asked Amy, who is a clinical psychologist, if she could think of anything else I should tell the doctors. “Have you told them about the night sweats?” she asked, her stomach sinking. The look on the ER doc’s face when I passed that on should have been my first clue. (Night sweats are a symptom of some early cancers.) They drew more blood and did a CT scan.

About an hour later, a doctor who specializes in hospital admissions joined the ER doc to report on their findings. The ensuing scene is seared into my brain. He introduced himself to Amy and me so awkwardly that we could not understand him. I gently interrupted his prepared remarks to ask his name, hoping this might put him at ease.

It didn’t. He went on to explain that I had many tumors in my liver, pancreas, and chest. In addition, he explained that I had quite a few blood clots, including in my heart and lungs. “What is ‘many’ tumors?” I asked. He looked defeated, saying they stopped counting after 10. I thought he might cry, and then he started in with some nonsense about how maybe it was all just bad tests, or maybe I had a rare water-borne pest infection. Amy began crying, hard. I went into silent shock and just tried to get this guy to shut up and leave…
...They took a biopsy of one of the tumors on my liver. They surgically implanted a stent in my gall bladder, which immediately relieved my backed-up liver. The medical staff also looked for secondary impacts of the cancer. First among them was blood clots. A couple doctors examined my legs and said, “Slim to zero chance you have clots in your legs—they look too healthy. But let’s check.” A few hours later, bad news: My left leg had clots from my hip to my ankle, though thankfully not fully occlusive. My right leg had clots from knee to ankle...
On Friday the docs woke me with an urgent problem: They had found a blood clot the size of a Ping-Pong ball in my heart’s right ventricle. If it broke loose, I would die instantly, whether I was in an ER or my basement. To make matters worse, they showed me an image of the clot, and it was precariously wiggling on an already-loose attachment. Each time my heart beat, the ticking time bomb swayed precariously. The clot was too big to suck out with a vacuum, too risky to slice and remove bit-by-bit, and too large to remove from the side by breaking open a few ribs. Nope, removing it was urgent and would require cracking my sternum. Today...
Shit. His family has mounted a "members-only" Facebook support and updates group. I asked to join. My request was granted. I don't know any of them. Just have, beyond acute empathy, a decades-long affinity for the Seattle area, where I still have many dear friends (both of my girls were born in Seattle).

I feel enjoined not to share what I've learned there from their frank updates. I just wish them all the strength they will need.

Pancreatic cancer. Truly "the dx from Hell."

My daughter had a brain scan this morning. No rad report yet.

DX TECH UPDATE

IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close

It was an audacious undertaking, even for one of the most storied American companies: With a single machine, IBM would tackle humanity’s most vexing diseases and revolutionize medicine.

Breathlessly promoting its signature brand — Watson — IBM sought to capture the world’s imagination, and it quickly zeroed in on a high-profile target: cancer.

But three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM’s goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care...
Another long-read. Rather scathing.

"Free Beer Tomorrow?"

Good audio discussion at NPR's WBUR:

____________

More to come...