Search the KHIT Blog

Tuesday, September 19, 2017

GAFA, Really Big Data, and their sociopolitical implications

The September issue of my ASQ "Quality Progress" journal showed up in the snailmail.

Pretty interesting cover story. Some of it ties nicely into recent topics of mine such as "AI" and its adolescent cousin "NLP" as they go to the health IT space. More on that in a bit, but, first, let me set the stage via some relevant recent reading.

Franklin Foer has had a good book launch. Lots of interviews and articles online and a number of news show appearances.

I loved the book; read it in one day. I will cite just a bit from it below. "GAFA," btw, is the EU's sarcastic shorthand for "Google-Amazon-Facebook-Apple," the for-profit miners of the biggest of "big data."
UNTIL RECENTLY, it was easy to define our most widely known corporations. Any third grader could describe their essence. Exxon sells oil; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. The ascendant monopolies of today aspire to encompass all of existence. Some of these companies have named themselves for their limitless aspirations. Amazon, as in the most voluminous river on the planet, has a logo that points from A to Z; Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities.

Where do these companies begin and end? Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. Google now aims to build driverless cars, manufacture phones, and conquer death. Amazon was once content being “the everything store,” but now produces television shows, designs drones, and powers the cloud. The most ambitious tech companies— throw Facebook, Microsoft, and Apple into the mix— are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through the day, and never quite leave our sides. They aspire to become the repository for precious and private items, our calendar and contacts, our photos and documents. They intend for us to unthinkingly turn to them for information and entertainment, while they build unabridged catalogs of our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence within our bodies.

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They believe that they have the opportunity to complete the long merger between man and machine— to redirect the trajectory of human evolution. How do I know this? Such suggestions are fairly commonplace in Silicon Valley, even if much of the tech press is too obsessed with covering the latest product launch to take much notice of them. In annual addresses and townhall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature— a view of human nature that they intend to impose on the rest of us.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, which isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, that’s not the worldview that emerges. In fact, it is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies believe we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. By stitching the world together, they can cure its ills. Rhetorically, the tech companies gesture toward individuality— to the empowerment of the “user”— but their worldview rolls over it. Even the ubiquitous invocation of users is telling, a passive, bureaucratic description of us.

The big tech companies— the Europeans have charmingly, and correctly, lumped them together as GAFA (Google, Apple, Facebook, Amazon)— are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility to intellectual property. In the realm of economics, they justify monopoly with their well-articulated belief that competition undermines our pursuit of the common good and ambitious goals. When it comes to the most central tenet of individualism— free will— the tech companies have a different way. They hope to automate the choices, both large and small, that we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the path we travel, the friends we invite into our circle.

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our own role in determining the human path. Once we cross certain thresholds— once we transform the values of institutions, once we abandon privacy— there’s no turning back, no restoring our lost individuality…

Foer, Franklin. World Without Mind: The Existential Threat of Big Tech (pp. 1-3). Penguin Publishing Group. Kindle Edition.
I've considered these issues before. See, e.g., "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

A useful volume of historical context also comes to mind.

Machines are about control. Machines give more control to humans: control over their environment, control over their own lives, control over others. But gaining control through machines means also delegating it to machines. Using the tool means trusting the tool. And computers, ever more powerful, ever smaller, and ever more networked, have given ever more autonomy to our instruments. We rely on the device, plane and phone alike, trusting it with our security and with our privacy. The reward: an apparatus will serve as an extension of our muscles, our eyes, our ears, our voices, and our brains.

Machines are about communication. A pilot needs to communicate with the aircraft to fly it. But the aircraft also needs to communicate with the pilot to be flown. The two form an entity: the pilot can’t fly without the plane, and the plane can’t fly without the pilot. But these man-machine entities aren’t isolated any longer. They’re not limited to one man and one machine, with a mechanical interface of yokes, throttles, and gauges. More likely, machines contain a computer, or many, and are connected with other machines in a network. This means many humans interact with and through many machines. The connective tissue of entire communities has become mechanized. Apparatuses aren’t simply extensions of our muscles and brains; they are extensions of our relationships to others— family, friends, colleagues, and compatriots. And technology reflects and shapes those relationships.

Control and communication began to shift fundamentally during World War II. It was then that a new set of ideas emerged to capture the change: cybernetics. The famously eccentric MIT mathematician Norbert Wiener coined the term, inspired by the Greek verb kybernan, which means “to steer, navigate, or govern.” Cybernetics; or, Control and Communication in the Animal and the Machine, Wiener’s pathbreaking book, was published in the fall of 1948. The volume was full of daredevil prophecies about the future: of self-adaptive machines that would think and learn and become cleverer than “man,” all made credible by formidable mathematical formulas and imposing engineering jargon…

…From today’s vantage point, the future is hazy, dim, and formless. But these questions aren’t new. The future of machines has a past. And mastering our future with machines requires mastering our past with machines. Stepping back twenty or forty or even sixty years brings the future into sharper relief, with exaggerated clarity, like a caricature, revealing the most distinct and marked features. And cybernetics was a major force in molding these features.

That cybernetic tension of dystopian and utopian visions dates back many decades. Yet the history of our most potent ideas about the future of technology is often neglected. It doesn’t enter archives in the same way that diplomacy and foreign affairs would. For a very long period of time, utopian ideas have dominated; ever since Wiener’s death in March 1964, the future of man’s love affair with the machine was a starry-eyed view of a better, automated, computerized, borderless, networked, and freer future. Machines, our own cybernetic creations, would be able to overcome the innate weaknesses of our inferior bodies, our fallible minds, and our dirty politics. The myth of the clean, infallible, and superior machines was in overdrive, out of balance.

By the 1990s, dystopia had returned. The ideas of digital war, conflict, abuse, mass surveillance, and the loss of privacy— even if widely exaggerated— can serve as a crucial corrective to the machine’s overwhelming utopian appeal. But this is possible only if contradictions are revealed— contradictions covered up and smothered by the cybernetic myth. Enthusiasts, driven by hope and hype, overestimated the power of new and emerging computer technologies to transform society into utopia; skeptics, often fueled by fear and foreboding, overestimated the dystopian effects of these technologies. And sometimes hope and fear joined forces, especially in the shady world of spies and generals. But misguided visions of the future are easily forgotten, discarded into the dustbin of the history of ideas. Still, we ignore them at our own peril. Ignorance risks repeating the same mistakes.

Cybernetics, without doubt, is one of the twentieth century’s biggest ideas, a veritable ideology of machines born during the first truly global industrial war that was itself fueled by ideology. Like most great ideas, cybernetics was nimble and shifted shape several times, adding new layers to its twisted history decade by decade. This book peels back these layers, which were nearly erased and overwritten again and again, like a palimpsest of technologies. This historical depth, although almost lost, is what shines through the ubiquitous use of the small word “cyber” today…

Rid, Thomas (2016-06-28). Rise of the Machines: A Cybernetic History (Kindle Locations 167-233). W. W. Norton & Company. Kindle Edition.
Still reading this one. A fine read; it spans roughly the period from WWII to 2016.

I can think of a number of relevant others I've heretofore cited, but these will do for now.


The Deal With Big Data

Move over! Big data analytics and standardization are the next big thing in quality
by Michele Boulanger, Wo Chang, Mark Johnson and T.M. Kubiak

Just the Facts

  • More and more organizations have realized the important role big data plays in today’s marketplaces.

  • Recognizing this shift toward big data practices, quality professionals must step up their understanding of big data and how organizations can use and take advantage of their transactional data.

  • Standards groups realize big data is here to stay and are beginning to develop foundational standards for big data and big data analytics.
The era of big data is upon us. While providing a formidable challenge to the classically trained quality practitioner, big data also offers substantial opportunities for redirecting a career path into a computational and data-intensive environment.

The change to big data analytics from the status quo of applying quality principles to manufacturing and service operations could be considered a paradigm shift comparable to the changes quality professionals experienced when statistical computing packages became widely available, or when control charts were first introduced.

The challenge for quality practitioners is to recognize this shift and secure the training and understanding necessary to take full advantage of the opportunities.

What’s the big deal?
What exactly is big data? You’ve probably noticed that big data often is associated with transactional data sets (for example, American Express and Amazon), social media (for example, Facebook and Twitter) and, of course, search engines (for example, Google). Most formal definitions of big data involve some variant of the four V’s:

  • Volume: Data set size.
  • Variety: Diverse data types residing in multiple locations.
  • Velocity: Speed of generation and transmission of data.
  • Variability: Nonconstancy of volume, variety and velocity.
This set of V’s is attributable originally to Gartner Inc., a research and advisory company, and documented by the National Institute of Standards and Technology (NIST) in the first volume of a set of seven documents. Big data clearly is the order of the day when the quality practitioner is confronted with a data set that exceeds the laptop’s memory, which may be by orders of magnitude.

In this article, we’ll reveal the big data era to the quality practitioner and describe the strategy being taken by standardization bodies to streamline their entry into the exciting and emerging field of big data analytics. This is all done with an eye on preserving the inherently useful quality principles that underlie the core competencies of these standardization bodies.

Primary classes of big data problems
The 2016 ASQ Global State of Quality reports included a spotlight report titled "A Trend? A Fad? Or Is Big Data the Next Big Thing?" hinting that big data is here to stay. If the conversion from acceptance sampling, control charts or design of experiments seems a world away from the tools associated with big data, rest assured that the statistical bases still apply. 
Of course, the actual data, per the four V’s, are different. Relevant formulations of big data problems, however, enjoy solutions or approaches that are statistical, though the focus is more on retrospective data and causal models in traditional statistics, and more forward-looking data and predictive analytics in big data analytics. Two primary classes of problems occur in big data:
  • Supervised problems occur when there is a dependent variable of interest that relates to a potentially large number of independent variables. For this, regression analysis comes into play, for which the typical quality practitioner likely has some background.
  • Unsupervised problems occur when unstructured data are the order of the day (for example, doctor’s notes, medical diagnostics, police reports or internet transactions).
Unsupervised problems seek to find the associations among the variables. In these instances, cluster and association analysis can be used. The quality practitioner can easily pick up such techniques...
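A minimal, purely illustrative sketch (not from the article) of the two problem classes in plain Python: a supervised fit of a line to a dependent variable, and an unsupervised 1-D k-means grouping where there is no dependent variable to guide the search. The data and function names here are invented for demonstration.

```python
# Toy illustration of the two classes of big data problems described above.

def fit_line(xs, ys):
    """Supervised: ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

def kmeans_1d(points, k=2, iters=20):
    """Unsupervised: 1-D k-means -- find natural groupings in the data."""
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Supervised: y roughly follows 2x + 1
a, b = fit_line([1, 2, 3, 4], [3.1, 4.9, 7.2, 9.0])

# Unsupervised: two obvious groupings, around 1 and around 10
centers = kmeans_1d([0.9, 1.1, 1.0, 9.8, 10.2, 10.0])
```

In real big data settings the mechanics scale up (distributed storage, streaming computation), but the statistical logic is the same, which is the article's point about transferable skills.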
Good piece. It's firewalled, unfortunately, but ASQ does provide "free registration" for non-member "open access" to certain content, including this article.

The article, as we might expect, is focused on the tech and operational aspects of using "big data" for analytics -- i.e., the requisite interdisciplinary skill sets needed, data heterogeneity problems going to "interoperability," and the chronic related problems associated with "data quality." These considerations differ materially from those of "my day" -- I joined ASQ in the 80's when it was still "ASQC," the "American Society for Quality Control." Consequently, I am an old school Deming - Shewhart guy. I successfully sat for the rigorous ASQ "Certified Quality Engineer" (CQE) exam in 1992. It was comprised at the time of about 2/3rds applied industrial statistics -- sampling theory, probability calculations, design of experiments, all aimed principally at assessing and improving things like "fraction defectives" amid production, and maintaining "SPC," Statistical Process Control, etc.

That kind of work pretty much assumed relative homogeneity of data under tight in-house control.
Such was even the case during my time in bank credit risk modeling and management. See, e.g., my 2003 whitepaper "FNBM Credit Underwriting Model Development" (large pdf). Among our data resources, we maintained a fairly large Oracle data warehouse comprising several million accounts from which I could pull customer-related data into SAS for analytics.
While such analytic methods do in fact continue to be deployed, the issues pale in comparison to the challenges we face in a far-flung "cloud-based" "big data" world comprised of data of wildly varying pedigree. The Quality Progress article provides a good overview of the current terrain and issues requiring our attention.

One publicly available linked resource the article provides:

The NBD-PWG was established together with the industry, academia and government to create a consensus-based extensible Big Data Interoperability Framework (NBDIF) which is a vendor-neutral, technology- and infrastructure-independent ecosystem. It can enable Big Data stakeholders (e.g. data scientists, researchers, etc.) to utilize the best available analytics tools to process and derive knowledge through the use of standard interfaces between swappable architectural components. The NBDIF is being developed in three stages with the goal to achieve the following with respect to the NIST Big Data Reference Architecture (NBD-RA), which was developed in Stage 1:
  1. Identify the high-level Big Data reference architecture key components, which are technology, infrastructure, and vendor agnostic;
  2. Define general interfaces between the NBD-RA components with the goals to aggregate low-level interactions into high-level general interfaces and produce set of white papers to demonstrate how NBD-RA can be used;
  3. Validate the NBD-RA by building Big Data general applications through the general interfaces.
The "Use Case" pages contain links to work spanning the breadth of big data application domains, e.g., government operations, commercial, defense, healthcare and life sciences, deep learning and social media, the ecosystem for research, astronomy and physics, environmental and polar sciences, and energy.

From the Electronic Medical Record (EMR) Data "use case" document (Shaun Grannis, Indiana University):
As health care systems increasingly gather and consume electronic medical record data, large national initiatives aiming to leverage such data are emerging, and include developing a digital learning health care system to support increasingly evidence-based clinical decisions with timely accurate and up-to-date patient-centered clinical information; using electronic observational clinical data to efficiently and rapidly translate scientific discoveries into effective clinical treatments; and electronically sharing integrated health data to improve healthcare process efficiency and outcomes. These key initiatives all rely on high-quality, large-scale, standardized and aggregate health data.  Despite the promise that increasingly prevalent and ubiquitous electronic medical record data hold, enhanced methods for integrating and rationalizing these data are needed for a variety of reasons. Data from clinical systems evolve over time. This is because the concept space in healthcare is constantly evolving: new scientific discoveries lead to new disease entities, new diagnostic modalities, and new disease management approaches. These in turn lead to new clinical concepts, which drives the evolution of health concept ontologies. Using heterogeneous data from the Indiana Network for Patient Care (INPC), the nation's largest and longest-running health information exchange, which includes more than 4 billion discrete coded clinical observations from more than 100 hospitals for more than 12 million patients, we will use information retrieval techniques to identify highly relevant clinical features from electronic observational data. We will deploy information retrieval and natural language processing techniques to extract clinical features. Validated features will be used to parameterize clinical phenotype decision models based on maximum likelihood estimators and Bayesian networks. 
Using these decision models we will identify a variety of clinical phenotypes such as diabetes, congestive heart failure, and pancreatic cancer…

Patients increasingly receive health care in a variety of clinical settings. The subsequent EMR data is fragmented and heterogeneous. In order to realize the promise of a Learning Health Care system as advocated by the National Academy of Science and the Institute of Medicine, EMR data must be rationalized and integrated. The methods we propose in this use-case support integrating and rationalizing clinical data to support decision-making at multiple levels.
This document is dated August 11, 2013. Let's don't hurry up or anything. "Interoperababble."
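For illustration only, here is a toy version of the kind of "clinical phenotype decision model" the use case describes: a naive Bayes classifier over binary clinical features, where the likelihoods stand in for maximum-likelihood estimates one would derive from real EMR data. All feature names, phenotypes, and probabilities below are invented.

```python
# Hypothetical sketch of a clinical phenotype decision model: naive Bayes
# over binary features, parameterized by (made-up) observed frequencies.
from math import log

# P(feature present | phenotype) -- invented illustrative values
likelihoods = {
    "diabetes":      {"high_glucose": 0.9, "polyuria": 0.6, "edema": 0.1},
    "heart_failure": {"high_glucose": 0.2, "polyuria": 0.1, "edema": 0.8},
}
priors = {"diabetes": 0.5, "heart_failure": 0.5}

def classify(features):
    """Return the phenotype with the highest posterior log-probability."""
    scores = {}
    for phenotype, lik in likelihoods.items():
        s = log(priors[phenotype])
        for feat, p in lik.items():
            # Present features contribute log(p); absent ones log(1 - p).
            s += log(p) if feat in features else log(1 - p)
        scores[phenotype] = s
    return max(scores, key=scores.get)

result = classify({"high_glucose", "polyuria"})  # → "diabetes"
```

The real INPC work layers information retrieval and NLP on top of this to extract the features in the first place; that extraction step is where most of the difficulty lives.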



Back to the themes with which I began this post. From recent reporting:


ABOUT A WEEK ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go…
Yeah, it's about "AI" rather than "big data" per se. But, to me, the direct linkage is pretty obvious.

Again, apropos, I refer you to my prior post "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

Tangentially, see also my 'Watson and cancer" post.


By no means exhaustive. Also recommend you read a bunch of Kevin Kelly among others.


From Scientific American: 
Searching for the Next Facebook or Google: Bloomberg Helps Launch Tech Incubator
The former mayor speaks with Scientific American about the new Cornell Tech campus in New York City: “Culture attracts capital a lot quicker than capital will attract culture.”

More to come...

Thursday, September 14, 2017

Defining "Interoperability" down, an update

Pretty interesting Modern Healthcare article arrived in my inbox this morning.
Epic takes a step towards interoperability with Share Everywhere
By Rachel Z. Arndt

Epic Systems Corp. is making headway in the quest for interoperability and potentially fulfilling meaningful use requirements with a new way for patients to send their medical records to their providers.

The new product, called Share Everywhere and unveiled Wednesday, allows patients to give doctors access to their medical records through an internet browser. That means doctors don't have to have their own electronic health record systems and the information can be shared despite not having EHRs that can communicate.

"This is really patient-driven interoperability," said Sean Bina, Epic's vice president of access applications. "This makes it so the patient can direct their existing record to the clinician of their choice," he said, regardless of whether that patient is one of the 64% of Americans who have records in Epic EHRs or whether that doctor is one of the 57.8% of ambulatory physicians who use Epic.

The receiving doctors need only have a computer and an internet connection—a requirement that might have been helpful in the recent hurricanes…
As I've noted before, my world of late has been "all Epic all the time," mostly in my role as next-of-kin caregiver, as well as an episodic chronic-care follow-up patient. Kaiser Permanente? Epic. Muir? Epic. UCSF? Epic. Stanford Medical Center? Epic. (I think Sutter Health is also on Epic, but I'm not certain.)

They pretty much rule the Bay Area (which is a pretty good thing for us, net). Their national footprint is huge as well.

More from Rachel's article:
"Where we are now is we're looking at how do we get the data into the workflow of the clinician so it's not something else they have to do, it's just there," said Charles Christian, vice president of technology and engagement for the Indiana Health Information Exchange. "There is a massive amount of data that's being shared for a variety of reasons. The question is, how much of it is actually being used to have an impact on the outcomes?"
Interesting. Goes to my long irascible (pedantic?) beef regarding the misnomer "interoperability," which I've called "interoperababble." To recap: no amount of calling point-to-point interfaced data exchange "interoperability" will make it so -- should you take the IEEE definition seriously.

As do I.

"How do we get the data into the workflow so it's not something else they have to do, it's just there?"

Indeed. That is in fact the heart of the IEEE definition. To the extent that Epic's "Share Everywhere" service facilitates that "seamlessness" workflow goal, it may in fact be a substantive step in the right direction.
Though, it does appear to be "read-only." Minimally, the recipient provider would have to "screen-scrape" the data into her own EHR (or otherwise save them "to file") for "write" updating in the wake of acting upon the data. Chain-of-custody/last record update concerns, anyone?
We'll see. From Epic's press release:
Epic Announces Worldwide Interoperability with Share Everywhere
Patients can authorize any provider to view their record in Epic and to send progress notes back

Verona, Wis. – Epic, creator of the most widely used electronic health record, today announced Share Everywhere, a big leap forward in interoperability. Share Everywhere will allow patients to grant access to their data to any providers who have internet access, even if they don’t have EHRs. In addition, using Share Everywhere, a provider granted access can send a progress note back to the patient’s healthcare organization for improved continuity of care…
"Send a progress note back?" Perhaps that might suffice to allay "last update" concerns. Perhaps. A fundamental issue in QA broadly is that of "document version control."


Have you registered for the Oct 1st-4th Santa Clara Health 2.0 Annual Conference yet? Hope to see you there.

More to come...

Monday, September 11, 2017

Watson and cancer

"...there’s a rather basic but fundamental problem with Watson, and that’s getting patient data entered into it. Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson or hire people to manually enter patient data into the system. Indeed, IBM representatives admitted that teaching a machine to read medical records is “a lot harder than anyone thought.” (Actually, this rather reminds me of Donald Trump saying, “Who knew health care could be so complicated?” in response to the difficulty Republicans had coming up with a replacement for the Affordable Care Act.) The answer: Basically anyone who knows anything about it. Anyone who’s ever tried to wrestle health care information out of a medical record, electronic or paper, into a form in a database that can be used to do retrospective or prospective studies knows how hard it is..."
From Science Based Medicine. They've picked up on and run with the reporting first published by STATnews.
"Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson..."
Ahhh... that pesky chronic "interoperababble" data exchange problem.

SBM continues:
What can Watson actually do?
IBM represents Watson as being able to look for patterns and derive treatment recommendations that human doctors might otherwise not be able to come up with because of our human shortcomings in reading and assessing the voluminous medical literature, but what Watson can actually do is really rather modest. That’s not to say it’s not valuable and won’t get better with time, but the problem is that it doesn’t come anywhere near the hype...
Necessarily, Watson has to employ the more difficult "Natural Language Understanding" (NLU) component of Natural Language Processing (NLP). I have previously posted on my NLP/NLU concerns here.
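A quick, hypothetical illustration of why NLU is so much harder than keyword spotting: a naive "mention finder" flags a clinical note that explicitly rules a condition out, while even a crude negation check (a tiny sliver of real NLU, loosely in the spirit of the well-known NegEx approach) handles it correctly. The notes and patterns here are invented for demonstration.

```python
# Illustrative only: keyword matching vs. a minimal negation-aware check.
import re

notes = [
    "Patient has a history of atrial fibrillation.",
    "No evidence of atrial fibrillation on this EKG.",
]

def naive_mentions(note, term):
    """Keyword matching: flags any mention of the term, negated or not."""
    return bool(re.search(term, note, re.IGNORECASE))

NEGATION = re.compile(r"\b(no evidence of|denies|negative for|no)\b", re.I)

def negation_aware(note, term):
    """Flag the term only when no simple negation cue precedes it."""
    m = re.search(term, note, re.IGNORECASE)
    if not m:
        return False
    return not NEGATION.search(note[:m.start()])

flags_naive = [naive_mentions(n, "atrial fibrillation") for n in notes]
flags_aware = [negation_aware(n, "atrial fibrillation") for n in notes]
# flags_naive → [True, True]; flags_aware → [True, False]
```

Scale that up to hedged, abbreviated, free-text physician notes and you get a sense of why "teaching a machine to read medical records" proved harder than anyone thought.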

Search Google news for "Watson oncology" or "Watson cancer."

I'm sure you've all seen the numerous Watson TV commercials by now.

Are we now skiing just past the "Peak of Inflated Expectations?"

Everything "OncoTech" is of acute interest to me these days amid my daughter's cancer illness. Apropos, see my prior post "Siddhartha Mukherjee's latest on cancer."


THCB has a nice post on the topic.
7 Ways We’re Screwing Up AI in Healthcare

The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has at least an article or two on the topic, and there isn’t a medical conference that doesn’t at least have a panel if not a dedicated day to discuss. The promise and potential is very real.

And yet, we seem to be blowing it.

The latest example is an investigation in STAT News pointing out the stumbles of IBM Watson followed inevitably by the ‘is AI ready for prime time’ debate. Of course, IBM isn’t the only one making things hard on itself. Their marketing budget and approach makes them a convenient target. Many of us – from vendors to journalists to consumers – are unintentionally adding degrees to an already uphill climb.

If our mistakes led only to financial loss, no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death…
Indeed. Good post. Read all of it.

Also of recent relevant note:
Athelas releases automated blood testing kit for home use
Silicon Valley-based startup Athelas today introduced a smartphone app that it says can do simple blood diagnosis at home and return results in just 60 seconds.

The kit itself looks a bit like an Amazon Echo device and is coupled with a smartphone app to reveal the results of the test. In a demonstration, co-founder Deepika Bodapati showed TechCrunch that from taking a sample of blood and sliding it into the device, within seconds users can see their white blood count, neutrophils, lymphocytes and platelets.

Bodapati and co-founder Tanay Tandon are well aware of the fate of a similar device that promised to deliver results but wasn’t exactly what it said it was. That was the blood testing startup, Theranos, that soared to a valuation of $9 billion and then crashed and burned after its effectiveness was called into question.

“Theranos proved there was clear interest in the space, it would have been a great company if it worked,” Tandon said in an interview with Bloomberg. “Now, investors say they need proof before we can raise money.”

Athelas has published papers on the accuracy of its data and has also been FDA-approved as a device for image diagnostics. Before it can be sold over the counter, it will have to receive further approval stating that it’s as accurate as a standard test in lab conditions…
"Theranos?" Remember them? I've had my considerable irascible sport with them here.

Athelas is specifically pitching the utility of their product for oncology blood assay monitoring.
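At its core, the kind of CBC monitoring Athelas is pitching amounts to comparing measured values against reference intervals and flagging the outliers -- exactly the check an oncologist worries about before a chemo infusion. A minimal sketch, with the caveat that the reference ranges and the `flag_cbc` helper are illustrative assumptions, not the company's actual logic:

```python
# Illustrative sketch of CBC flagging, the kind of check a home blood-assay
# app might perform. The ranges below are common adult reference intervals,
# but they are assumptions for illustration -- real labs and devices
# calibrate and validate their own.

REFERENCE_RANGES = {
    "wbc": (4.5, 11.0),          # white blood cells, 10^3 cells/uL
    "neutrophils": (1.5, 8.0),   # absolute neutrophil count, 10^3 cells/uL
    "lymphocytes": (1.0, 4.8),   # 10^3 cells/uL
    "platelets": (150, 450),     # 10^3 cells/uL
}

def flag_cbc(results):
    """Return a dict mapping each analyte to 'low', 'normal', or 'high'."""
    flags = {}
    for analyte, value in results.items():
        low, high = REFERENCE_RANGES[analyte]
        if value < low:
            flags[analyte] = "low"
        elif value > high:
            flags[analyte] = "high"
        else:
            flags[analyte] = "normal"
    return flags

# Hypothetical draw from a chemo patient, showing marked neutropenia
sample = {"wbc": 2.1, "neutrophils": 0.4, "lymphocytes": 1.2, "platelets": 95}
print(flag_cbc(sample))
# {'wbc': 'low', 'neutrophils': 'low', 'lymphocytes': 'normal', 'platelets': 'low'}
```

The hard part, of course, isn't the comparison -- it's whether a single-drop-of-blood device produces numbers accurate enough to compare.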

Interesting. My daughter has to run over to Kaiser today for her routine blood draw in advance of her upcoming every-other-week chemo infusion. I'm not sure her oncologist (who is also a hematologist) would be comfortable leaning on DTC single-drop-of-blood assay alternatives.

I think the Athelas people will be at the upcoming Health 2.0 Conference, and we will be hooking up for discussion. I'll have to look back through the Conference agenda to see whether any Watson peeps will be there.

Also, in the wake of my recent cardiology workup, I have to wonder about apps like that now marketed DTC by AliveCor:
Meet Kardia Mobile.
Your personal EKG.

Take a medical-grade EKG in just 30 seconds. Results are delivered right to your smartphone. Now you can know anytime, anywhere if your heart rhythm is normal, or if atrial fibrillation is detected.
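One crude marker an atrial fibrillation detector can key on is high beat-to-beat variability in the intervals between heartbeats. The sketch below is a naive illustration of that single idea -- emphatically NOT AliveCor's algorithm, which involves FDA-cleared signal processing well beyond an interval check; the 10% variability threshold is my own assumption for illustration:

```python
# Naive AFib-style irregularity check: flag a run of RR intervals
# (milliseconds between successive beats) when their coefficient of
# variation (stdev / mean) is high. Illustration only; a real device
# does far more (P-wave analysis, trained classifiers, artifact rejection).
from statistics import mean, stdev

def rr_irregularity(rr_intervals_ms, cv_threshold=0.10):
    """Return ('irregular'|'regular', cv) for a list of RR intervals (ms).
    The 10% threshold is an assumption for illustration."""
    cv = stdev(rr_intervals_ms) / mean(rr_intervals_ms)
    return ("irregular" if cv > cv_threshold else "regular"), round(cv, 3)

# A steady ~75 bpm rhythm vs. an erratic one
steady = [800, 805, 795, 810, 800, 798]
erratic = [620, 910, 540, 1050, 700, 880]
print(rr_irregularity(steady))   # ('regular', 0.007)
print(rr_irregularity(erratic))  # ('irregular', 0.248)
```

Even this toy version hints at the "Worried Well" problem: a threshold loose enough to avoid false alarms will miss events, and vice versa.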

Is this widely useful or just another "Worried Well" toy? I showed this pitch to my cardiologist. He was dubious -- with respect to my case, that is.

On "big data" and "Big Silicon Valley firms." New book release on Sept 12th. Saw a number of articles with and by the author.
"…More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine - to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the Founding Fathers of these companies often make big, bold pronouncements about human nature - a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: "Your program algorithms aren't that complicated," he says. And if humans function like computers, why not hasten the day we become fully cyborg? To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets.

"The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly," he has said. "Having two identities for yourself is an example of a lack of integrity." Of course, that's both an expression of idealism and an elaborate justification for Facebook's business model…"

Looks interesting. I will be reading and reviewing it. I had a run at some of his issues in 2015. See "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."


Finished the Franklin Foer book. Riveting read. Read it "cover to cover" pretty much straight through in one day. Contextual review coming, stay tuned.



More to come...

Friday, September 8, 2017

Hurricane Irma: even worse than Hurricane Harvey

On the devastating heels of Harvey. Yikes. As I post this (~5 pm PDT), it looks like Irma will enter the U.S. around Sunday morning with the eye somewhere between Key West and Miami (likely a Cat 4 at the time, perhaps even a 5).

Unreal (and, there's a now-Cat 4 Hurricane Jose further east in the central Atlantic behind Irma). I have kin and friends living up and down the length of Florida. Some have gotten out, others cannot or will not.

It is estimated that there are approximately 75,000 Florida LTC patients (nursing homes / "long-term care" facilities). I have no idea how many Floridians are patients in acute care facilities (the state is most recently reported to have more than 92,000 acute care beds). This is truly an evac nightmare.

Beyond the potential for mass loss of lives, the U.S. economic tab for these two storms alone will probably grow to close to a trillion dollars, counting all of the damage/destruction and the business losses from the widespread downtime. Trump can forget his absurd Mexico wall.


Satellite map above, 24 hours later (again obtained from the Miami Herald website).

The storm is now expected to cross the Keys and hit SW Florida as a Cat 4, likely all the way up to the Tampa area. Ugh.


While we're obsessing over ourselves in the U.S...

Thigh-deep grody water in the streets of Havana, Cuba in the wake of Irma. The trail of utter devastation stretches all the way back to the far eastern lower Caribbean islands.

More to come...

Wednesday, September 6, 2017

Siddhartha Mukherjee's latest on cancer

In my New Yorker (a great long-read; may be firewalled):

Given our current family circumstance regarding my daughter's illness, I found this extremely interesting.

"It is a basic insight that an undergraduate ecologist might find familiar: the “invasiveness” of an organism is always a relative concept."

He's analogically alluding to "mets" -- metastasis. It's the mets that end up killing you. Brain mets killed my elder daughter in 1998.
One evening this past June, as I walked along the shore of Lake Michigan in Chicago, I thought about mussels, knotweed, and cancer. Tens of thousands of people had descended on the city to attend the annual meeting of the American Society of Clinical Oncology, the world’s preëminent conference on cancer. Much of the meeting, I knew, would focus on the intrinsic properties of cancer cells, and on ways of targeting them. Yet those properties might be only part of the picture. We want to know which mollusk we’re dealing with; but we also need to know which lake...
"We’ve tended to focus on the cancer, but its host tissue—“soil,” rather than “seed”—could help us predict the danger it poses."
We aren’t particularly adept at predicting whether a specific patient’s cancer will become metastatic or not. Metastasis can seem “like a random act of violence,” Daniel Hayes, a breast oncologist at the University of Michigan, told me when we spoke at the asco meeting in Chicago. “Because we’re not very good at telling whether breast-cancer patients will have metastasis, we tend to treat them with chemotherapy as if they all have potential metastasis.” Only some fraction of patients who receive toxic chemotherapy will really benefit from it, but we don’t know which fraction. And so, unable to say whether any particular patient will benefit, we have no choice but to overtreat. For women like Guzello, then, the central puzzle is not the perennial “why me.” It’s “whether me..."
Why was the liver so hospitable to metastasis, while the spleen, which had similarities in blood supply, size, and proximity, seemed relatively resistant? As Paget probed deeper, he found that cancerous growth even favored particular sites within organ systems. Bones were a frequent site of metastasis in breast cancer—but not every bone was equally susceptible. “Who has ever seen the bones of the hands or the feet attacked by secondary cancer?” he asked. Paget coined the phrase “seed and soil” to describe the phenomenon. The seed was the cancer cell; the soil was the local ecosystem where it flourished, or failed to. Paget’s study concentrated on patterns of metastasis within a person’s body. The propensity of one organ to become colonized while another was spared seemed to depend on the nature or the location of the organ—on local ecologies. Yet the logic of the seed-and-soil model ultimately raises the question of global ecologies: why does one person’s body have susceptible niches and not another’s?
I found the following of particular interest, in light of my own 2015 wrangle with prostate cancer.
Widely used gene-expression assays, such as MammaPrint and Oncotype DX, have helped doctors identify certain patients who are at low risk for metastatic spread and can safely skip chemotherapy. “We’ve been able to reduce the overuse of chemotherapy in about one-third of all patients in some subtypes of breast cancers,” Daniel Hayes said.

Hayes is also grateful for the kind of genetic tests that indicate which patients might benefit from a targeted therapy like Herceptin (those whose breast cancers produce high levels of the growth-factor receptor protein HER2) or from anti-estrogen medications (those whose tumors have estrogen receptors). But, despite our advances in targeting tumor cells using genetic markers as guides, our efforts to predict whose cancers will become metastatic have advanced only slowly. The “whether me” question haunts the whole field. What the oncologist Harold Burstein calls “the uncertainty box” of chemotherapy has remained stubbornly closed...
 "OncoType DX"?

From my 2015 blog post:
...Without asking me, my urologist sent my biopsy to a company that performs "OncoType dx" genetic assays of biopsies for several cancers, including prostate. He simply wanted to know whether mine was a good candidate for this type of test.

They just went ahead and ran it, without the urologist's green light order, or my knowledge or consent. I got a call out of the blue one day from a lady wanting to verify my insurance information, and advising me that this test might be "out of your network," leaving me on the hook for some $400, worst case (I subsequently came to learn that it's a ~$4,000 test). I thought it odd, and thought I'd follow up with my doc.

My urologist called me. He was embarrassed and pissed. A young rep from the OncoType dx vendor also called me shortly thereafter. He was in fear of losing his job, having tee'd up the test absent explicit auth.

I've yet to hear anything further. I think they're all just trying to make this one go away. Though, it would not surprise me one whit to see a charge pop up in my BCBS/RI portal one of these days.

The OncoType test result merely served to confirm (expensively -- for someone) what my urologist already suspected. The malignancy aggressiveness in my case is in a sort of "grey zone." The emerging composite picture is "don't dally with this."
That was not a fun year (but, I'm OK now). Neither is this one.

David Adams’s father never suffered a recurrence of melanoma; he died from prostate cancer that had spread widely through his body. “Years ago, I would have thought of the melanoma versus the prostate cancer in terms of differences in the inherent metastatic potential of those two cell types,” Adams said. “Good cancer versus bad cancer. Now I think more and more of a different question: Why was my father’s body more receptive to prostate metastasis versus melanoma metastasis?”

There are important consequences of taking soil as well as seed into account. Among the most successful recent innovations in cancer therapeutics is immunotherapy, in which a patient’s own immune system is activated to target cancer cells. Years ago, the pioneer immunologist Jim Allison and his colleagues discovered that cancer cells used special proteins to trigger the brakes in the host’s immune cells, leading to unchecked growth. (To use more appropriate evolutionary language: clones of cancer cells that are capable of blocking host immune attacks are naturally selected and grow.) When drugs stopped certain cancers from exploiting these braking proteins, Allison and his colleagues showed, immune cells would start to attack them.

Such therapies are best thought of as soil therapies: rather than killing tumor cells directly, or targeting mutant gene products within tumor cells, they work on the phalanxes of immunological predators that survey tissue environments, and alter the ecology of the host. But soil therapies will go beyond immune factors; a wide variety of environmental features have to be taken into account. The extracellular matrix with which the cancer interacts, the blood vessels that a successful tumor must coax out to feed itself, the nature of a host’s connective-tissue cells—all of these affect the ecology of tissues and thereby the growth of cancers...
...Considering the limitations of our knowledge, methods, and resources, our field may have had no choice but to submit to the lacerations of Occam’s razor, at least for a while. It was only natural that many cancer biologists, confronting the sheer complexity of the whole organism, trained their attention exclusively on our “pathogen”: the cancer cell. Investigating metastasis seems more straightforward than investigating non-metastasis; clinically speaking, it’s tough to study those who haven’t fallen ill. And we physicians have been drawn to the toggle-switch model of disease and health: the biopsy was positive; the blood test was negative; the scans find “no evidence of disease.” Good germs, bad germs. Ecologists, meanwhile, talk about webs of nutrition, predation, climate, topography, all subject to complex feedback loops, all context-dependent. To them, invasion is an equation, even a set of simultaneous equations...
"...In the field of oncology, “holistic” has become a patchouli-scented catchall for untested folk remedies: raspberry-leaf tea and juice cleanses. Still, as ambitious cancer researchers study soil as well as seed, one sees the beginnings of a new approach. It would return us to the true meaning of “holistic”: to take the body, the organism, its anatomy, its physiology—this infuriatingly intricate web—as a whole. Such an approach would help us understand the phenomenon in all its vexing diversity; it would help us understand when you have cancer and when cancer has you. It would encourage doctors to ask not just what you have but what you are."

Again, great article. Seriously worth your time. Also way worth your time is his last book:

I first cited it here.


During one of my daily surfing stops, looking for relevant tech stuff, I ran across this, concerning a Seattle AI tech startup guy:
Matt Benke

IT STARTED WHILE I was on a Hawaiian vacation in May. I thought I’d just tweaked my back lifting a poolside lounge chair. Back home, my back pain became severe, and I started noticing nerve pain in my legs. For eight days I could barely crawl around the house. My wife and two daughters nicknamed me “the worm.” At 45, I’m in pretty good shape—avid cyclist, runner, weightlifter, yoga enthusiast with a resting pulse in the 50s.

So it was weird when my primary care doctor put me on a cocktail of pain killers, nerve blockers, and cortisone shots. I even tried acupuncture. But as my back began to improve in late June, I started to feel off. Sick to my stomach. Weak. Couldn’t sleep. I lost more than 10 pounds. But I chalked this up to a month of too much Vicodin after a lifetime of thinking two Advil was excessive. My doctor said I was fit and healthy and that there was no need to run any blood tests. He wondered aloud if this was all in my head…

...After nearly a month of feeling horrible despite my back getting better and being off all medications, I hit a wall. On July 26, a Wednesday, I finished my day’s meetings and drove myself to the least busy ER I know of—the one at Swedish Medical Center in the Issaquah Highlands, 20 miles east of downtown.

A couple hours later I called Amy and asked her to join me. They’d already done a bunch of tests and ruled out the obvious—urinary tract infection, epidural abscess—and were sort of grasping at straws. Over the phone, I asked Amy, who is a clinical psychologist, if she could think of anything else I should tell the doctors. “Have you told them about the night sweats?” she asked, her stomach sinking. The look on the ER doc’s face when I passed that on should have been my first clue. (Night sweats are a symptom of some early cancers.) They drew more blood and did a CT scan.

About an hour later, a doctor who specializes in hospital admissions joined the ER doc to report on their findings. The ensuing scene is seared into my brain. He introduced himself to Amy and me so awkwardly that we could not understand him. I gently interrupted his prepared remarks to ask his name, hoping this might put him at ease.

It didn’t. He went on to explain that I had many tumors in my liver, pancreas, and chest. In addition, he explained that I had quite a few blood clots, including in my heart and lungs. “What is ‘many’ tumors?” I asked. He looked defeated, saying they stopped counting after 10. I thought he might cry, and then he started in with some nonsense about how maybe it was all just bad tests, or maybe I had a rare water-borne pest infection. Amy began crying, hard. I went into silent shock and just tried to get this guy to shut up and leave…
...They took a biopsy of one of the tumors on my liver. They surgically implanted a stent in my gall bladder, which immediately relieved my backed-up liver. The medical staff also looked for secondary impacts of the cancer. First among them was blood clots. A couple doctors examined my legs and said, “Slim to zero chance you have clots in your legs—they look too healthy. But let’s check.” A few hours later, bad news: My left leg had clots from my hip to my ankle, though thankfully not fully occlusive. My right leg had clots from knee to ankle...
On Friday the docs woke me with an urgent problem: They had found a blood clot the size of a Ping-Pong ball in my heart’s right ventricle. If it broke loose, I would die instantly, whether I was in an ER or my basement. To make matters worse, they showed me an image of the clot, and it was precariously wiggling on an already-loose attachment. Each time my heart beat, the ticking time bomb swayed precariously. The clot was too big to suck out with a vacuum, too risky to slice and remove bit-by-bit, and too large to remove from the side by breaking open a few ribs. Nope, removing it was urgent and would require cracking my sternum. Today...
Shit. His family has mounted a "members-only" Facebook support and updates group. I asked to join. My request was granted. I don't know any of them. Just have, beyond acute empathy, a decades-long affinity for the Seattle area, where I still have many dear friends (both of my girls were born in Seattle).

I feel enjoined to not share what I've learned there from their frank updates. I just wish them all the strength they will all need.

Pancreatic cancer. Truly "the dx from Hell."

My daughter had a brain scan this morning. No rad report yet.


IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close

It was an audacious undertaking, even for one of the most storied American companies: With a single machine, IBM would tackle humanity’s most vexing diseases and revolutionize medicine.

Breathlessly promoting its signature brand — Watson — IBM sought to capture the world’s imagination, and it quickly zeroed in on a high-profile target: cancer.

But three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM’s goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care...
Another long-read. Rather scathing.

"Free Beer Tomorrow?"

Good audio discussion at NPR's WBUR:


More to come...

Friday, September 1, 2017

Coming up in one month, THE Health IT event of the year

As I've recently noted here in some detail. Will it commence amid the turmoil of a FY September 30th Trump federal shutdown? Or, will the exigent upshot of Hurricane Harvey force the hands of the Executive and Legislative branches to deal with far more pressing matters -- like actual responsible adults?

(Or, will Kim Dim Sun successfully jerk the POTUS chain?)


I got onto this book via one of my LinkedIn group email updates.

I signed up to get the freebie pre-pub PDF download copy. Just started digging into it.



Ran across this in back of the book (I'm jumping around in it):

“Healthy citizens are the greatest asset any country can have.” ― Winston S. Churchill 

As health benefits get a major overhaul in the employer arena and policymakers determine where publicly paid health care programs will go, we believe it’s imperative to take a fresh look at how we’ve organized our health care “system.” One area of near-universal agreement is that we should expect far more from our health care system, given the smarts, money, and passion poured into health care. Simply shifting who pays for care does little to address the underlying dysfunction of what we pay for and how we pay. 

A group of forward-looking individuals have developed a vision for Health 3.0 to address the future of care. It is a common framework to guide the work of everyone from clinical leaders to benefits professionals to technologists to policymakers. Each should ask whether their strategies, technologies, and policies accelerate or hinder the journey to Health 3.0. If Health 3.0 is the North Star, the Health Rosetta is the roadmap and travel tips on how to get there. 

To fix health care, we need a common vision for the future: Health 3.0. We believe this vision encompasses four key dimensions. 

1. Health Services (e.g. health care delivery and self-care)
What is the optimal way to organize health services so they build on the strengths of each piece of the health puzzle, rather than operating as an unmatched set of pieces (today’s world)? Innovative new care delivery models create a bright future (that some are already experiencing) where every member of the care team is operating at the top of his or her license and is highly satisfied with his or her role—a stark contrast to Health 2.0, where only 27 percent of a doctor’s day is spent on clinical facetime with patients. Put simply, they didn’t go to med school to become glorified billing clerks. 

2. Health Care Purchasing
Underlying virtually every dysfunction in health care is perverse economic incentives. Various industry players are acting perfectly rationally when they do things that are counterproductive to achieving Health 3.0. The Health Rosetta and Health 3.0 outline the high-level blueprint for how to purchase health and wellness services wisely. We’ve seen how a workforce can achieve what one health care innovator has described as “Twice the health care at half the cost and ten times the delight.” 

3. Enabling Technology
Technology only turbocharges a highly functional organizational process when the proper organization structure, economic incentives, and processes are in place. Unfortunately, health care breaks the first rules I learned as a new consultant fresh out of school— don’t automate a broken process and don’t throw technology on top of a broken process. Sadly, health care is riddled with these two common mistakes, stemming from the flawed assumption that technology alone can be a positive force for change.

4. Enabling government
At the local, state, and federal level, government can play a tremendously beneficial (or detrimental) role in ensuring healthcare reaches its full potential. There are four main ways that government entities contribute. 

  1. As an enabler of health (e.g., public health and social determinants of health) 
  2. As a benefits purchaser, since government entities are large employers who can accelerate acceptance of new, higher-performing Health 3.0 care models 
  3. As a payer of taxpayer-funded health plans 
  4. As a lawmaking or regulating entity
The first item, in particular, is frequently overlooked as a powerful tool for testing and refinement of new models of care payment and delivery. 

Failings of Health Care 1.0 and 2.0
Before defining Health 3.0 further, it’s important to outline the failings of Health care 1.0 and 2.0. Dr. Zubin Damania (aka ZDoggMD) describes the positive facets of Health care 1.0 and Health care 2.0 but also gives the two earlier eras of health care a stinging rebuke. 

Behind us lies a long-lost, nostalgia-tinged world of unfettered physician autonomy, sacred doctor-patient relationships, and a laser-like focus on the art and humanity of medicine. 

This was the world of my father, an immigrant and primary care physician in rural California. The world of Health care 1.0. While many still pine for these “good old days” of medicine, we shouldn’t forget that those days weren’t really all that good. With unfettered autonomy came high costs and spotty quality.
Evidence-based medicine didn’t exist; it was consensus and intuition. Volume-based fee-for-service payments incentivized doing things to people, instead of for people. And although the relationship was sacred, the doctor often played the role of captain of the ship, with the rest of the health care team and the patients subordinate.

So, in response to these shortcomings we now have Health care 2.0. The era of Big Medicine. Large corporate groups buying practices and hospitals, managed care and Obamacare, randomized controlled trials and evidence-based guidelines, EMRs, PQRS, HCAHPS, MACRA, Press Ganey, Lean, Six Sigma. It is the era of Medicine As Machine...of Medicine As Assembly Line. And we—clinicians and patients—are the cogs in the machinery. Instead of ceding authority to physicians, we cede authority to government, administrators, and faceless algorithms. We more often treat a computer screen than a patient. And the doc isn’t the boss, but neither is the rest of the health care team—nor the patient. We are ALL treated as commodities...raw materials in the factory. 

Health 3.0 Vision
Dr. Damania goes on to describe Health 3.0 as follows: Taking the best aspects of 1.0 (deep sacred relationships, physician autonomy) and the key pieces of 2.0 (technology, evidence, teams, systems thinking), Health 3.0 restores the human relationship at the heart of healing while bolstering it with a team that revolves around the patient while supporting each other as fellow caregivers. What emerges is vastly greater than the sum of the parts. 

Caregivers and patients have the time and space and support to develop deep relationships. Providers hold patients accountable for their health, while empowered patients hold us accountable to be their guides and to know them—and treat them—as unique human beings. Our EHRs bind us and support us, rather than obstruct us. The promise of Big Data is translated to the unique patient in front of us. Our team provides the lift so everything doesn’t fall on one set of shoulders anymore (health coaches, nurses, social workers, lab techs, EVERYONE together). We are evidence-empowered but not evidence-enslaved. We are paid to keep people healthy, not to click boxes while trying to chase an ever-shrinking piece of the health care pie. Our administrators seek to grow the entire pie instead, for the benefit of ALL stakeholders…
Well, I don't see any use of "Health 3.0" referent trademark symbols (maybe we can put Robert Budzinski's lawyers on it). But, I can see the Health 2.0 folks not digging the use of the phrase. And, I rather doubt that Indu and Matthew would agree with the characterization of "Health 2.0" above.

BTW: The Dave Chase book continues with an "Appendix F" further characterizing "Health 3.0." -- "Health 3.0 Vision: Implications for Providers, Government, and Startups."
Health care is frequently a jumble of uncoordinated silos organized around medical technology, rather than people. This has led to a suboptimal experience for both patients and clinicians. This is often made worse by incentives that run counter to optimizing health outcomes. [ibid, pg 257]
Can't really argue with that.

Mr. Chase's book will be released for sale on Amazon on September 5th. Stay tuned, I've much more to read and assimilate in my copy.

BTW, given the thrust of this book, you might want to triangulate with "An American Sickness."


Even my own professional society has gotta get in on the "version enumeration" act. From my inbox recently:

Lordy. Whatever. Maybe we should just ditch the long-passé word "quality" anyway. How about "Disruptive, Innovative, Lean, Agile, Excellence 2.0?"


I seem to have precipitated a bit of a Twitter kerfuffle.

All of it matters, acutely. Data, InfoTech, Med/BioTech more broadly, pharma, process QI, basic and applied science, clinical training, empathy, organizational culture, just market economics, rational policy... All of it.



More to come...