
Friday, September 29, 2017

On deck: the 11th Annual Health 2.0 Conference

BREAKING: HHS Secretary Tom Price resigns. See below.
AGENDA
Touted it before, e.g., here:

Some topics of interest to me:


Last year's KHIT recap. And, my 2015 "on deck" setup post, "Free Beer Tomorrow."

My newest read, just getting underway:

Chapter 1
Welcome to the Most Important Conversation of Our Time

Technology is giving life the potential to flourish like never before—or to self-destruct.
- Future of Life Institute

Thirteen point eight billion years after its birth, our Universe has awoken and become aware of itself. From a small blue planet, tiny conscious parts of our Universe have begun gazing out into the cosmos with telescopes, repeatedly discovering that everything they thought existed is merely a small part of something grander: a solar system, a galaxy and a universe with over a hundred billion other galaxies arranged into an elaborate pattern of groups, clusters and superclusters. Although these self-aware stargazers disagree on many things, they tend to agree that these galaxies are beautiful and awe-inspiring.

But beauty is in the eye of the beholder, not in the laws of physics, so before our Universe awoke, there was no beauty. This makes our cosmic awakening all the more wonderful and worthy of celebrating: it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty and hope— and the pursuit of goals, meaning and purpose. Had our Universe never awoken, then, as far as I’m concerned, it would have been completely pointless— merely a gigantic waste of space. Should our Universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap, it will, alas, become meaningless.

On the other hand, things could get even better. We don’t yet know whether we humans are the only stargazers in our cosmos, or even the first, but we’ve already learned enough about our Universe to know that it has the potential to wake up much more fully than it has thus far. Perhaps we’re like that first faint glimmer of self-awareness you experienced when you began emerging from sleep this morning: a premonition of the much greater consciousness that would arrive once you opened your eyes and fully woke up. Perhaps life will spread throughout our cosmos and flourish for billions or trillions of years— and perhaps this will be because of decisions that we make here on our little planet during our lifetime…

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence (Kindle Locations 436-455). Knopf Doubleday Publishing Group. Kindle Edition.
Two prior reads come readily to mind for contextual triangulation:


Stay tuned. I particularly like that Max Tegmark and Sean Carroll are both physicists. Atomic and sub-atomic physics underpin all of biochem -- inclusive of the "omics."

There will also be tie-ins with all of this AI stuff I've written about repeatedly. UPDATE: apropos of the topic, see this article:
Alchemical Intelligence.
Why today’s AI-adherents sound just about as crazy as alchemists, and what it might really take to create intelligence.
__

BREAKING

Kleptocrat Flyboy Tom Price is out as HHS Secretary.


Good riddance. See my prior post. And its antecedent posts "Rationing by Price" and "The Price is Right (wing, that is)."

UPDATE 
The Leading Candidate to Replace Tom Price Seems Way Less Inclined to Sabotage Obamacare

The inglorious resignation of Health and Human Services Secretary Tom Price on Friday leaves vacant an extremely powerful position in President Donald Trump’s cabinet. The early frontrunner for the job is Seema Verma, a former healthcare consultant who currently heads the Centers for Medicare and Medicaid Services. Despite this administration’s crusade against Medicaid, Verma actually worked to expand Medicaid in Indiana while she worked as former governor Mike Pence’s protégé in that state. Verma is no friend of the Affordable Care Act, and she has long wished to impose extra burdens on Americans who receive subsidized health care. She is, however, almost certainly the most qualified and least dogmatic official who could possibly lead HHS under the Trump administration. In fact, Verma replacing Price would be a significant improvement.

A key irony of Verma’s career is that, even though she is a conservative Republican, she has spent much of her life helping to implement Obamacare. Verma founded her healthcare consulting firm, SVP Inc., in 2001, and rose to prominence in 2007 through her work with then-Indiana Gov. Mitch Daniels. A somewhat moderate Republican, Daniels asked Verma to craft a Medicaid expansion for underserved populations in the state, especially childless adults. Verma created the Healthy Indiana Plan, which expanded health coverage in the state by about 40,000 people—no small potatoes…
I guess we'll see. Let us not forget the ~1,000 times the discretion-enabling phrase "Secretary shall" appears in the PPACA. Federal agency administrative implementation regulatory authority is supposed to only reflect the "intent of the Congress" as expressed in a law (a.k.a. the "legislative construction"). Good luck clearly divining that across the breadth of a large statute.
____________

More to come...

Monday, September 25, 2017

"As the Secretary shall determine." The Cassidy-Graham bill

Sept. 26 update: GOP withdraws the Cassidy-Graham bill.
"We didn't have the votes."


I reviewed the text of this latest "Obamacare Repeal" bill over the weekend ("Cassidy-Graham"; I got my copy from Senator Cassidy's Senate web page).



My pdf copy runs 141 pages: wide 2-inch page margins, double-spaced, with line numbering, and mostly heavily indented body text. 25,352 words, inclusive.

There's not much "there" there. Were this a single-spaced Word document with std 1" margins, no line numbers, and no block indentations, it'd probably run about 25 pages -- of utter lack of detail and specificity.

Totally sufficient to govern 18% of the U.S. economy, right?

78 references to "the Secretary" (of HHS, Tom Price), all going to the Secretary's broad regulatory and administrative authority and discretion ("...as the Secretary shall determine").
"...as the Secretary shall determine" is long a staple of legislation, generally (which is why we have the voluminous Code of Federal Regulations), albeit a convention that Republicans hated about Obamacare, never passing up an opportunity to rail against the law's discretionary regulatory provisions.
In fairness, to the latter point, the phrase "Secretary shall" appears 866 times in the 974-pg (single-spaced, 2" margins) Obamacare law. "Secretary determines"? 159 times. "by the Secretary"? 558 times. The phrase "be careful what you ask for" comes to mind, given that "the Secretary" is now Tom Price.

Oh, btw, the phrase "affordable coverage" only shows up 12 times -- four of those in section headers.
38 Cassidy-Graham references to "waivers," going to the Secretary's authority to grant them to states that apply for exemptions to certain core coverage obligations imposed by the still-not-"repealed"-though-financially-eviscerated PPACA ("Obamacare").

12 hits for "affordable," all going to nullifying funding sections of "the Affordable Care Act."

That's it for now. Though, I know they're trying to add some Alaska bribe money provisions to buy Lisa Murkowski's vote, having lost Senator McCain.

Nicholas Bagley on some of the slim particulars:
…So what the hell does section 204 mean? Can states discriminate on the basis of health status or not? Who knows?

The craziest thing is that the sloppy drafting may be intentional. It reads to me like a deliberate effort to allow senators to read whatever they want to into the bill. Senator Cassidy and other moderates can claim it preserves the protections for preexisting conditions. Senator Cruz and other conservatives can claim it doesn’t…
Stay tuned. Federal FY clock runs out this Saturday. I will then be down in Santa Clara at the onsite Hyatt Regency prepping for Sunday's start of the Annual Health 2.0 Conference.


Substantive Cassidy-Graham updates as they become available.

UPDATES
 
New at The New Yorker, from an excellent long-read article:
Is Health Care a Right?
It’s a question that divides Americans, including those from my home town. But it’s possible to find common ground.
By Atul Gawande


Is health care a right? The United States remains the only developed country in the world unable to come to agreement on an answer. Earlier this year, I was visiting Athens, Ohio, the town in the Appalachian foothills where I grew up. The battle over whether to repeal, replace, or repair the Affordable Care Act raged then, as it continues to rage now. So I began asking people whether they thought that health care was a right. The responses were always interesting…

...Liberals often say that conservative voters who oppose government-guaranteed health care and yet support Medicare are either hypocrites or dunces. But Monna, like almost everyone I spoke to, understood perfectly well what Medicare was and was glad to have it.

I asked her what made it different.

“We all pay in for that,” she pointed out, “and we all benefit.” That made all the difference in the world. From the moment we earn an income, we all contribute to Medicare, and, in return, when we reach sixty-five we can all count on it, regardless of our circumstances. There is genuine reciprocity. You don’t know whether you’ll need more health care than you pay for or less. Her husband thus far has needed much less than he’s paid for. Others need more. But we all get the same deal, and, she felt, that’s what makes it O.K.

“I believe one hundred per cent that Medicare needs to exist the way it does,” she said. This was how almost everyone I spoke to saw it. To them, Medicare was less about a universal right than about a universal agreement on how much we give and how much we get.

Understanding this seems key to breaking the current political impasse. The deal we each get on health care has a profound impact on our lives—on our savings, on our well-being, on our life expectancy. In the American health-care system, however, different people get astonishingly different deals. That disparity is having a corrosive effect on how we view our country, our government, and one another.

The Oxford political philosopher Henry Shue observed that our typical way of looking at rights is incomplete. People are used to thinking of rights as moral trump cards, near-absolute requirements that all of us can demand. But, Shue argued, rights are as much about our duties as about our freedoms. Even the basic right to physical security—to be free of threats or harm—has no meaning without a vast system of police departments, courts, and prisons, a system that requires extracting large amounts of money and effort from others. Once costs and mechanisms of implementation enter the picture, things get complicated. Trade-offs now have to be considered. And saying that something is a basic right starts to seem the equivalent of saying only, “It is very, very important.”

Shue held that what we really mean by “basic rights” are those which are necessary in order for us to enjoy any rights or privileges at all. In his analysis, basic rights include physical security, water, shelter, and health care. Meeting these basics is, he maintained, among government’s highest purposes and priorities. But how much aid and protection a society should provide, given the costs, is ultimately a complex choice for democracies. Debate often becomes focussed on the scale of the benefits conferred and the costs extracted. Yet the critical question may be how widely shared these benefits and costs are…

The reason goes back to a seemingly innocuous decision made during the Second World War, when a huge part of the workforce was sent off to fight. To keep labor costs from skyrocketing, the Roosevelt Administration imposed a wage freeze. Employers and unions wanted some flexibility, in order to attract desired employees, so the Administration permitted increases in health-insurance benefits, and made them tax-exempt. It didn’t seem a big thing. But, ever since, we’ve been trying to figure out how to cover the vast portion of the country that doesn’t have employer-provided health insurance: low-wage workers, children, retirees, the unemployed, small-business owners, the self-employed, the disabled. We’ve had to stitch together different rules and systems for each of these categories, and the result is an unholy, expensive mess that leaves millions unprotected.

No other country in the world has built its health-care system this way, and, in the era of the gig economy, it’s becoming only more problematic. Between 2005 and 2015, according to analysis by the economists Alan Krueger and Lawrence Katz, ninety-four per cent of net job growth has been in “alternative work arrangements”—freelancing, independent contracting, temping, and the like—which typically offer no health benefits. And we’ve all found ourselves battling over who deserves less and who deserves more…

Medical discoveries have enabled the average American to live eighty years or longer, and with a higher quality of life than ever before. Achieving this requires access not only to emergency care but also, crucially, to routine care and medicines, which is how we stave off and manage the series of chronic health issues that accumulate with long life. We get high blood pressure and hepatitis, diabetes and depression, cholesterol problems and colon cancer. Those who can’t afford the requisite care get sicker and die sooner. Yet, in a country where pretty much everyone has trash pickup and K-12 schooling for the kids, we’ve been reluctant to address our Second World War mistake and establish a basic system of health-care coverage that’s open to all. Some even argue that such a system is un-American, stepping beyond the powers the Founders envisioned for our government…

These days, trust in our major professions—in politicians, journalists, business leaders—is at a low ebb. Members of the medical profession are an exception; they still command relatively high levels of trust. It does not seem a coincidence that medical centers are commonly the most culturally, politically, economically, and racially diverse institutions you will find in a community. These are places devoted to making sure that all lives have equal worth. But they also pride themselves on having some of the hardest-working, best-trained, and most innovative people in society. This isn’t to say that doctors, nurses, and others in health care fully live up to the values they profess. We can be condescending and heedless of the costs we impose on patients’ lives and bank accounts. We still often fail in our commitment to treating equally everyone who comes through our doors. But we’re embarrassed by this. We are expected to do better every day…
Well worth your time. Read all of it.

As good a time as any to review "An American Sickness."


ANOTHER UPDATE

From my iPhone.


"Likely dooming it." Maybe.

UPDATE

Well, that didn't take long.


Don't kid yourselves that they won't keep trying. Moreover, given the substantial "Secretarial discretion" in the PPACA, they will certainly keep up with the Obamacare sabotage tactics.
ADDENDUM: THE PRICE IS NOT RIGHT
____________
 
More to come...

Thursday, September 21, 2017

Precision medicine update


My latest AAAS Science Magazine hardcopy arrived yesterday. Interesting (firewalled) article by veteran AAAS writer Jocelyn Kaiser therein:
PRECISION MEDICINE
NIH's massive health study is off to a slow start
       
Nearly 3 years after then-President Barack Obama laid out a vision for perhaps the most ambitious and costly national health study ever, the U.S. National Institutes of Health (NIH) is still grappling with the complexities of the effort, which aims to probe links between genes, lifestyle, and health. The study plan, or protocol, is ready, the biobank is taking samples, and the enrollment site is up, but study leaders are still in early “beta testing” of the 1-million-person precision medicine study.

The All of Us study, as it is now dubbed, had once expected that by early 2017 it would enroll at least 10,000 participants for a pilot testing phase; it is up to just 2000. Its national kickoff, envisioned for 2016 and then mid-2017, has been delayed as staff work out the logistics of the study, which is funded at $230 million this year and projected to cost $4.3 billion over 10 years. But study leaders say that for an endeavor this complex, delays are inevitable. “The pressure on our end is really about getting it right and doing this when we're ready,” says Joni Rutter, director of scientific programs for the project in Bethesda, Maryland...
The plan is to enroll a million consenting patients in this ambitious longitudinal research effort.

The study will recruit most volunteers through health care provider organizations (HPOs), or networks of clinics and physicians, and the rest directly from the general population. Volunteers age 18 or older will give blood and urine samples and undergo a few basic tests such as blood pressure, height, and weight during a 1-hour clinic visit, after which they will receive $25, the protocol says. They will complete three online surveys about their health and lifestyle. The study will also ask volunteers to allow access to their electronic health records so researchers can track their health over time.
They've dubbed it the "All of Us" study. From the NIH page:
"The All of Us Research Program is a historic effort to gather data from one million or more people living in the United States to accelerate research and improve health. By taking into account individual differences in lifestyle, environment, and biology, researchers will uncover paths toward delivering precision medicine."

Among the potentially daunting issues to be successfully traversed: "Self-selection bias" (though they claim to be mounting an intense disparities-mitigation recruitment effort on that front), "consent" issues (which can vary legally from state to state), particularly since they intend to delve into DNA analytics (oversold of late?), and "data quality"/EHR "interoperability" hurdles.

Other thoughts of mine: "A million?" Sounds great (but is it really "Big Data?"). When I was a bank credit risk analyst we ran highly effective credit underwriting modeling campaigns (pdf) using our internal customer data comprised of several million accounts (our team was astute; the bank had successive record profits each of the five years of my tenure).

But, it's one thing to do adroit CART and stress-tested correlation/regression studies using clean data comprising a relatively small number of independent variables. It's entirely another matter to do so where the databases have to house hundreds to thousands of (longitudinal) clinical variables per patient (of sometimes uncertain data origination pedigree). Study enrollee dropouts over time, and necessary stratifications (whether through subjective "expert judgment" or data-driven via methods such as cluster analysis), may well make the "million" start to look pretty small in relatively short order.
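A toy illustration of that shrinkage, in Python with synthetic data (the cluster and strata counts are pulled out of thin air purely for the sake of the arithmetic):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 100_000                                   # scaled-down stand-in for "a million"
    X = rng.normal(size=(n, 10))                  # 10 synthetic "clinical" variables
    labels = KMeans(n_clusters=20, n_init=5, random_state=0).fit_predict(X)

    # Cross 20 data-driven clusters with, say, 50 demographic strata
    # (age band x sex x site) and the average cell thins out fast:
    print(n // (20 * 50), "patients per cluster-stratum cell, before dropouts")

Even at the full million, 20 clusters crossed with 50 strata leaves roughly 1,000 patients per cell on average; add attrition and finer phenotype splits and many cells get sparse indeed.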

The article concludes:
Others question whether the massive investment in DNA-based medicine will pay off. It may merely “confirm that for the vast majority of things, environment and behavior trump gene variants by a wide margin,” says physiologist Michael Joyner of the Mayo Clinic in Rochester, Minnesota. What's more, personalized medical care may not mean much for the millions of Americans who lack health insurance, says bioethicist Mark Rothstein of the University of Louisville in Kentucky. “We'd get a lot more bang for the buck by having everyone have access to the low-tech medicine that's available right now.”
Indeed. I forget the source at the moment, but I recall from one of the myriad books I study and cite here the assertion that, for most clinical dx issues, "an old-fashioned, assiduously-taken SOAP note 'FH' (Family History) spanning three generations still outperforms a genomic assay." Again, the applied "omics" are still in their analytical infancy and rife with challenges.

Also apropos, today's political "Repeal" fracas. From my iPhone this afternoon:


I also have to wonder about the security of the "All of Us" projected long-term $4.3 billion funding, given the Trump administration's zest for budget-cutting at every turn. (Jocelyn does not share that concern, citing "congressional support." I guess we'll see.)


UPDATE

Jocelyn Kaiser apprised me of this. I'd have seen it in any event, given that STATnews is one of my priority daily stops, but thanks!

By LEV FACHER @levfacher
 

The National Institutes of Health would like six vials of your blood, please.

Its scientists would like to take a urine sample, measure your waistline, and have access to your electronic health records and data from the wearable sensor on your wrist. And if you don’t mind sharing, could they have your Social Security number?
It is a big ask, the NIH knows, and of an equally big group — the agency eventually hopes to enroll over 1 million participants in the next step of what four researchers referred to in a 2003 paper as “a revolution in biological research.”

The appeals, however, are befitting the biggest-ever bet on precision medicine, now more than a decade in the making. The paper’s lead author, Dr. Francis Collins, has been devoted to the project from its inception, riding his vision for a more precise world of medical treatment to his current post as the NIH’s director.

The data-gathering experiment, dubbed “All of Us,” is an important stop on the way to making personalized medicine a reality around the world, Collins and others say. The NIH hopes that the trove of information could one day enable doctors to use increasingly precise diagnostic tests. Eventually, scientists could shape treatments based on an individual’s genetic characteristics.

But one of Collins’s stated goals is ensuring more than half of the participants come from communities that are historically underrepresented in biomedical research — and that’s a gamble…
Good article. A fairly long read. Worth your time. Nice examination of the ongoing logistical issues that will have to be surmounted.

ERRATUM

The 11th Annual Health 2.0 Conference draws nigh. Be there. I will.

CASSIDY-GRAHAM "REPEAL" UPDATE

From The Incidental Economist, another of my daily stops.
The only option left to the Senate is to make health care reform someone else’s problem
…If it hasn’t become abundantly clear, the only thing left for Republican Senators to try is to kick the can down the road. Again. They’re going to try and pass a bill which gives less money overall to states, a lot less money to some states, and then tells them to “figure it out”. Later, they can claim that they gave the states all the tools they needed to fix the health care system, so now it’s THEIR fault things don’t work.

This is ridiculous.

There is no magic. There is no innovation. If there was a way to make the health care system broader, cheaper, and better, we would do it right now. We would have done it years ago…

THERE IS NO WAY TO SPEND LESS, COVER MORE, AND MAKE IT BETTER...
The opening paragraph of my 2009 post, "The U.S. health care policy morass":
Some reform advocates have long argued that we can indeed [1] extend health care coverage to all citizens, with [2] significantly increased quality of care, while at the same time [3] significantly reducing the national (and individual) cost. A trifecta "Win-Win-Win." Others find the very notion preposterous on its face. In the summer of 2009, this policy battle is now joined in full fury...
Eight years later, the can continues to be kicked.

THE LATEST

Senator John McCain has announced that he will again vote against the latest GOP repeal bill. That will probably doom the effort.

____________

More to come...

Tuesday, September 19, 2017

GAFA, Really Big Data, and their sociopolitical implications

The September issue of my ASQ "Quality Progress" journal showed up in the snailmail.


Pretty interesting cover story. Some of it ties nicely into recent topics of mine such as "AI" and its adolescent cousin "NLP" as they go to the health IT space. More on that in a bit, but, first, let me set the stage via some relevant recent reading.


Franklin Foer has had a good book launch. Lots of interviews and articles online and a number of news show appearances.


I loved the book; read it in one day. I will cite just a bit from it below. "GAFA," btw, is the EU's sarcastic shorthand for "Google-Apple-Facebook-Amazon," the for-profit miners of the biggest of "big data."
UNTIL RECENTLY, it was easy to define our most widely known corporations. Any third grader could describe their essence. Exxon sells oil; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. The ascendant monopolies of today aspire to encompass all of existence. Some of these companies have named themselves for their limitless aspirations. Amazon, as in the most voluminous river on the planet, has a logo that points from A to Z; Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities.

Where do these companies begin and end? Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. Google now aims to build driverless cars, manufacture phones, and conquer death. Amazon was once content being “the everything store,” but now produces television shows, designs drones, and powers the cloud. The most ambitious tech companies— throw Facebook, Microsoft, and Apple into the mix— are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through the day, and never quite leave our sides. They aspire to become the repository for precious and private items, our calendar and contacts, our photos and documents. They intend for us to unthinkingly turn to them for information and entertainment, while they build unabridged catalogs of our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence within our bodies.

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They believe that they have the opportunity to complete the long merger between man and machine— to redirect the trajectory of human evolution. How do I know this? Such suggestions are fairly commonplace in Silicon Valley, even if much of the tech press is too obsessed with covering the latest product launch to take much notice of them. In annual addresses and townhall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature— a view of human nature that they intend to impose on the rest of us.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, which isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, that’s not the worldview that emerges. In fact, it is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies believe we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. By stitching the world together, they can cure its ills. Rhetorically, the tech companies gesture toward individuality— to the empowerment of the “user”— but their worldview rolls over it. Even the ubiquitous invocation of users is telling, a passive, bureaucratic description of us.

The big tech companies— the Europeans have charmingly, and correctly, lumped them together as GAFA (Google, Apple, Facebook, Amazon)— are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility to intellectual property. In the realm of economics, they justify monopoly with their well-articulated belief that competition undermines our pursuit of the common good and ambitious goals. When it comes to the most central tenet of individualism— free will— the tech companies have a different way. They hope to automate the choices, both large and small, that we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the path we travel, the friends we invite into our circle.

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our own role in determining the human path. Once we cross certain thresholds— once we transform the values of institutions, once we abandon privacy— there’s no turning back, no restoring our lost individuality…


Foer, Franklin. World Without Mind: The Existential Threat of Big Tech (pp. 1-3). Penguin Publishing Group. Kindle Edition.
I've considered these issues before. See, e.g., "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

A useful volume of historical context also comes to mind.

Machines are about control. Machines give more control to humans: control over their environment, control over their own lives, control over others. But gaining control through machines means also delegating it to machines. Using the tool means trusting the tool. And computers, ever more powerful, ever smaller, and ever more networked, have given ever more autonomy to our instruments. We rely on the device, plane and phone alike, trusting it with our security and with our privacy. The reward: an apparatus will serve as an extension of our muscles, our eyes, our ears, our voices, and our brains.

Machines are about communication. A pilot needs to communicate with the aircraft to fly it. But the aircraft also needs to communicate with the pilot to be flown. The two form an entity: the pilot can’t fly without the plane, and the plane can’t fly without the pilot. But these man-machine entities aren’t isolated any longer. They’re not limited to one man and one machine, with a mechanical interface of yokes, throttles, and gauges. More likely, machines contain a computer, or many, and are connected with other machines in a network. This means many humans interact with and through many machines. The connective tissue of entire communities has become mechanized. Apparatuses aren’t simply extensions of our muscles and brains; they are extensions of our relationships to others— family, friends, colleagues, and compatriots. And technology reflects and shapes those relationships.

Control and communication began to shift fundamentally during World War II. It was then that a new set of ideas emerged to capture the change: cybernetics. The famously eccentric MIT mathematician Norbert Wiener coined the term, inspired by the Greek verb kybernan, which means “to steer, navigate, or govern.” Cybernetics; or, Control and Communication in the Animal and the Machine, Wiener’s pathbreaking book, was published in the fall of 1948. The volume was full of daredevil prophecies about the future: of self-adaptive machines that would think and learn and become cleverer than “man,” all made credible by formidable mathematical formulas and imposing engineering jargon…

…From today’s vantage point, the future is hazy, dim, and formless. But these questions aren’t new. The future of machines has a past. And mastering our future with machines requires mastering our past with machines. Stepping back twenty or forty or even sixty years brings the future into sharper relief, with exaggerated clarity, like a caricature, revealing the most distinct and marked features. And cybernetics was a major force in molding these features.

That cybernetic tension of dystopian and utopian visions dates back many decades. Yet the history of our most potent ideas about the future of technology is often neglected. It doesn’t enter archives in the same way that diplomacy and foreign affairs would. For a very long period of time, utopian ideas have dominated; ever since Wiener’s death in March 1964, the future of man’s love affair with the machine was a starry-eyed view of a better, automated, computerized, borderless, networked, and freer future. Machines, our own cybernetic creations, would be able to overcome the innate weaknesses of our inferior bodies, our fallible minds, and our dirty politics. The myth of the clean, infallible, and superior machines was in overdrive, out of balance.

By the 1990s, dystopia had returned. The ideas of digital war, conflict, abuse, mass surveillance, and the loss of privacy— even if widely exaggerated— can serve as a crucial corrective to the machine’s overwhelming utopian appeal. But this is possible only if contradictions are revealed— contradictions covered up and smothered by the cybernetic myth. Enthusiasts, driven by hope and hype, overestimated the power of new and emerging computer technologies to transform society into utopia; skeptics, often fueled by fear and foreboding, overestimated the dystopian effects of these technologies. And sometimes hope and fear joined forces, especially in the shady world of spies and generals. But misguided visions of the future are easily forgotten, discarded into the dustbin of the history of ideas. Still, we ignore them at our own peril. Ignorance risks repeating the same mistakes.

Cybernetics, without doubt, is one of the twentieth century’s biggest ideas, a veritable ideology of machines born during the first truly global industrial war that was itself fueled by ideology. Like most great ideas, cybernetics was nimble and shifted shape several times, adding new layers to its twisted history decade by decade. This book peels back these layers, which were nearly erased and overwritten again and again, like a palimpsest of technologies. This historical depth, although almost lost, is what shines through the ubiquitous use of the small word “cyber” today…


Rid, Thomas (2016-06-28). Rise of the Machines: A Cybernetic History (Kindle Locations 167-233). W. W. Norton & Company. Kindle Edition.
Still reading this one. A fine read; it spans roughly the period from WWII to 2016.

I can think of a number of relevant others I've heretofore cited, but these will do for now.

BACK TO THE QUALITY PROGRESS ARTICLE

The Deal With Big Data

Move over! Big data analytics and standardization are the next big thing in quality
by Michele Boulanger, Wo Chang, Mark Johnson and T.M. Kubiak

 
Just the Facts


  • More and more organizations have realized the important role big data plays in today’s marketplaces.

  • Recognizing this shift toward big data practices, quality professionals must step up their understanding of big data and how organizations can use and take advantage of their transactional data.

  • Standards groups realize big data is here to stay and are beginning to develop foundational standards for big data and big data analytics.
The era of big data is upon us. While providing a formidable challenge to the classically trained quality practitioner, big data also offers substantial opportunities for redirecting a career path into a computational and data-intensive environment.

The change to big data analytics from the status quo of applying quality principles to manufacturing and service operations could be considered a paradigm shift comparable to the changes quality professionals experienced when statistical computing packages became widely available, or when control charts were first introduced.

The challenge for quality practitioners is to recognize this shift and secure the training and understanding necessary to take full advantage of the opportunities.

What’s the big deal?
What exactly is big data? You’ve probably noticed that big data often is associated with transactional data sets (for example, American Express and Amazon), social media (for example, Facebook and Twitter) and, of course, search engines (for example, Google). Most formal definitions of big data involve some variant of the four V’s:

  • Volume: Data set size.
  • Variety: Diverse data types residing in multiple locations.
  • Velocity: Speed of generation and transmission of data.
  • Variability: Nonconstancy of volume, variety and velocity.
This set of V’s is attributable originally to Gartner Inc., a research and advisory company, and documented by the National Institute of Standards and Technology (NIST) in the first volume of a set of seven documents. Big data clearly is the order of the day when the quality practitioner is confronted with a data set that exceeds the laptop’s memory, which may be by orders of magnitude.

In this article, we’ll reveal the big data era to the quality practitioner and describe the strategy being taken by standardization bodies to streamline their entry into the exciting and emerging field of big data analytics. This is all done with an eye on preserving the inherently useful quality principles that underlie the core competencies of these standardization bodies.

Primary classes of big data problems
The 2016 ASQ Global State of Quality reports included a spotlight report titled "A Trend? A Fad? Or Is Big Data the Next Big Thing?" hinting that big data is here to stay. If the conversion from acceptance sampling, control charts or design of experiments seems a world away from the tools associated with big data, rest assured that the statistical bases still apply. 
Of course, the actual data, per the four V’s, are different. Relevant formulations of big data problems, however, enjoy solutions or approaches that are statistical, though the focus is more on retrospective data and causal models in traditional statistics, and more forward-looking data and predictive analytics in big data analytics. Two primary classes of problems occur in big data:
  • Supervised problems occur when there is a dependent variable of interest that relates to a potentially large number of independent variables. For this, regression analysis comes into play, for which the typical quality practitioner likely has some background.
  • Unsupervised problems occur when unstructured data are the order of the day (for example, doctor’s notes, medical diagnostics, police reports or internet transactions).
Unsupervised problems seek to find the associations among the variables. In these instances, cluster and association analysis can be used. The quality practitioner can easily pick up such techniques...
Good piece. It's firewalled, unfortunately, but ASQ does provide "free registration" for non-member "open access" to certain content, including this article.
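To make the article's supervised/unsupervised distinction concrete, here's a minimal sketch in Python with synthetic data (mine, not theirs):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))                  # 5 independent variables

    # Supervised: a known outcome y drives the fit (regression).
    y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)
    print("R^2:", LinearRegression().fit(X, y).score(X, y))

    # Unsupervised: no outcome variable; look for structure among the records.
    print("cluster sizes:", np.bincount(KMeans(n_clusters=3, n_init=10).fit_predict(X)))

Same data, two different questions: predict a dependent variable of interest, or find associations with no dependent variable in sight.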

The article, as we might expect, is focused on the tech and operational aspects of using "big data" for analytics -- i.e., the requisite interdisciplinary skill sets needed, data heterogeneity problems going to "interoperability," and the chronic related problems associated with "data quality." These considerations differ materially from those of "my day" -- I joined ASQ in the 80's, when it was still "ASQC," the "American Society for Quality Control." Consequently, I am an old-school Deming-Shewhart guy. I successfully sat for the rigorous ASQ "Certified Quality Engineer" exam (CQE) in 1992. It was comprised at the time of about two-thirds applied industrial statistics -- sampling theory, probability calculations, design of experiments -- all aimed principally at assessing and improving things like "fraction defectives" amid production, and maintaining "SPC," Statistical Process Control, etc.
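For the uninitiated, the flavor of that old-school work in one small Python sketch: Shewhart-style p-chart limits for a "fraction defective" (simulated inspection data; the defect rate is made up):

    import numpy as np

    rng = np.random.default_rng(2)
    samples, units = 30, 200                      # 30 samples of 200 units each
    p = rng.binomial(units, 0.04, size=samples) / units   # observed fraction defective
    pbar = p.mean()
    sigma = np.sqrt(pbar * (1 - pbar) / units)    # binomial sigma for a p-chart
    ucl, lcl = pbar + 3 * sigma, max(0.0, pbar - 3 * sigma)
    print(f"p-bar={pbar:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
    print("out-of-control samples:", np.where((p > ucl) | (p < lcl))[0])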

That kind of work pretty much assumed relative homogeneity of data under tight in-house control.
Such was even the case during my time in bank credit risk modeling and management. See, e.g., my 2003 whitepaper "FNBM Credit Underwriting Model Development" (large pdf). Among our data resources, we maintained a fairly large Oracle data warehouse comprising several million accounts from which I could pull customer-related data into SAS for analytics.
While such analytic methods do in fact continue to be deployed, the issues pale in comparison to the challenges we face in a far-flung "cloud-based" "big data" world comprised of data of wildly varying pedigree. The Quality Progress article provides a good overview of the current terrain and issues requiring our attention.

One publicly available linked resource the article provides:

The NBD-PWG was established together with the industry, academia and government to create a consensus-based extensible Big Data Interoperability Framework (NBDIF) which is a vendor-neutral, technology- and infrastructure-independent ecosystem. It can enable Big Data stakeholders (e.g. data scientists, researchers, etc.) to utilize the best available analytics tools to process and derive knowledge through the use of standard interfaces between swappable architectural components. The NBDIF is being developed in three stages with the goal to achieve the following with respect to the NIST Big Data Reference Architecture (NBD-RA), which was developed in Stage 1:
  1. Identify the high-level Big Data reference architecture key components, which are technology, infrastructure, and vendor agnostic;
  2. Define general interfaces between the NBD-RA components with the goals to aggregate low-level interactions into high-level general interfaces and produce set of white papers to demonstrate how NBD-RA can be used;
  3. Validate the NBD-RA by building Big Data general applications through the general interfaces.
The "Use Case" pages contains links to work spanning the breadth of big data application domains, e.g., government operations, commercial, defense, healthcare and life sciences, deep learning and social media, the ecosystem for research, astronomy and physics, environmental and polar sciences, and energy.

From the Electronic Medical Record (EMR) Data "use case" document; Shaun Grannis, Indiana University
As health care systems increasingly gather and consume electronic medical record data, large national initiatives aiming to leverage such data are emerging, and include developing a digital learning health care system to support increasingly evidence-based clinical decisions with timely accurate and up-to-date patient-centered clinical information; using electronic observational clinical data to efficiently and rapidly translate scientific discoveries into effective clinical treatments; and electronically sharing integrated health data to improve healthcare process efficiency and outcomes. These key initiatives all rely on high-quality, large-scale, standardized and aggregate health data.  Despite the promise that increasingly prevalent and ubiquitous electronic medical record data hold, enhanced methods for integrating and rationalizing these data are needed for a variety of reasons. Data from clinical systems evolve over time. This is because the concept space in healthcare is constantly evolving: new scientific discoveries lead to new disease entities, new diagnostic modalities, and new disease management approaches. These in turn lead to new clinical concepts, which drives the evolution of health concept ontologies. Using heterogeneous data from the Indiana Network for Patient Care (INPC), the nation's largest and longest-running health information exchange, which includes more than 4 billion discrete coded clinical observations from more than 100 hospitals for more than 12 million patients, we will use information retrieval techniques to identify highly relevant clinical features from electronic observational data. We will deploy information retrieval and natural language processing techniques to extract clinical features. Validated features will be used to parameterize clinical phenotype decision models based on maximum likelihood estimators and Bayesian networks. Using these decision models we will identify a variety of clinical phenotypes such as diabetes, congestive heart failure, and pancreatic cancer…

Patients increasingly receive health care in a variety of clinical settings. The subsequent EMR data is fragmented and heterogeneous. In order to realize the promise of a Learning Health Care system as advocated by the National Academy of Science and the Institute of Medicine, EMR data must be rationalized and integrated. The methods we propose in this use-case support integrating and rationalizing clinical data to support decision-making at multiple levels.
This document is dated August 11, 2013. Let's don't hurry up or anything. "Interoperababble."
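The use case's modeling endgame is easier to picture with a toy, in Python with synthetic data (emphatically not the INPC pipeline -- just the general shape of a Bayesian phenotype classifier over extracted binary clinical features):

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(3)
    n = 10_000
    features = rng.integers(0, 2, size=(n, 8))    # 8 binary clinical features
    # Synthetic "phenotype" driven by two of the features, plus noise:
    pheno = (features[:, 0] & features[:, 3]) | (rng.random(n) < 0.02)
    model = BernoulliNB().fit(features, pheno)
    print("training accuracy:", model.score(features, pheno))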

___

BEYOND TECHNICAL CAPABILITY ISSUES

Back to the themes with which I began this post. From Wired.com:

AI RESEARCH IS IN DESPERATE NEED OF AN ETHICAL WATCHDOG

ABOUT A WEEK ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go…
Yeah, it's about "AI" rather than "big data" per se. But, to me the direct linkage is pretty obvious.

Again, apropos, I refer you to my prior post "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

Tangentially, see also my "Watson and cancer" post.

SOME ADDITIONAL RECOMMENDED TOPICAL READING FROM MY KINDLE STASH



By no means exhaustive. Also recommend you read a bunch of Kevin Kelly among others.

ADDENDUM

From Scientific American: 
Searching for the Next Facebook or Google: Bloomberg Helps Launch Tech Incubator
The former mayor speaks with Scientific American about the new Cornell Tech campus in New York City: “Culture attracts capital a lot quicker than capital will attract culture.”
____________

More to come...

Thursday, September 14, 2017

Defining "Interoperability" down, an update

 
Pretty interesting Modern Healthcare article arrived in my inbox this morning.
Epic takes a step towards interoperability with Share Everywhere
By Rachel Z. Arndt


Epic Systems Corp. is making headway in the quest for interoperability and potentially fulfilling meaningful use requirements with a new way for patients to send their medical records to their providers.

The new product, called Share Everywhere and unveiled Wednesday, allows patients to give doctors access to their medical records through an internet browser. That means doctors don't have to have their own electronic health record systems and the information can be shared despite not having EHRs that can communicate.

"This is really patient-driven interoperability," said Sean Bina, Epic's vice president of access applications. "This makes it so the patient can direct their existing record to the clinician of their choice," he said, regardless of whether that patient is one of the 64% of Americans who have records in Epic EHRs or whether that doctor is one of the 57.8% of ambulatory physicians who use Epic.

The receiving doctors need only have a computer and an internet connection—a requirement that might have been helpful in the recent hurricanes…
As I've noted before, my world of late has been "all Epic all the time," mostly in my role as next-of-kin caregiver, as well as an episodic chronic-care follow-up patient. Kaiser Permanente? Epic. Muir? Epic. UCSF? Epic. Stanford Medical Center? Epic. (I think Sutter Health is also on Epic, but I'm not certain.)

They pretty much rule the Bay Area (which is a pretty good thing for us, net). Their national footprint is huge as well.


More from Rachel's article:
"Where we are now is we're looking at how do we get the data into the workflow of the clinician so it's not something else they have to do, it's just there," said Charles Christian, vice president of technology and engagement for the Indiana Health Information Exchange. "There is a massive amount of data that's being shared for a variety of reasons. The question is, how much of it is actually being used to have an impact on the outcomes?"
Interesting. Goes to my long irascible (pedantic?) beef regarding the misnomer "interoperability," which I've called "interoperababble." To recap: no amount of calling point-to-point interfaced data exchange "interoperability" will make it so -- should you take the IEEE definition seriously.

As do I.

"How do we get the data into the workflow so it's not something else they have to do, it's just there?"

Indeed. That is in fact the heart of the IEEE definition. To the extent that Epic's "Share Everywhere" service facilitates that "seamlessness" workflow goal, it may in fact be a substantive step in the right direction.
Though, it does appear to be "read-only." Minimally, the recipient provider would have to "screen-scrape" the data into her own EHR (or otherwise save them "to file") for "write" updating in the wake of acting upon the data. Chain-of-custody/last record update concerns, anyone?
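A toy illustration of the worry, in Python (the record fields are hypothetical; the point is simply that two copies of "the same" record can diverge silently once one side updates out of band):

    import hashlib, json

    def fingerprint(record: dict) -> str:
        # Canonicalize the record, then hash it, so any field-level drift shows up.
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    source = {"pt": "12345", "bp": "128/84", "updated": "2017-09-14T09:00"}
    copy = dict(source)
    copy["bp"] = "150/95"      # downstream edit, never written back upstream

    print(fingerprint(source) == fingerprint(copy))   # False: the copies have diverged
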
We'll see. From Epic's press release:
Epic Announces Worldwide Interoperability with Share Everywhere
Patients can authorize any provider to view their record in Epic and to send progress notes back


Verona, Wis. – Epic, creator of the most widely used electronic health record, today announced Share Everywhere, a big leap forward in interoperability. Share Everywhere will allow patients to grant access to their data to any providers who have internet access, even if they don’t have EHRs. In addition, using Share Everywhere, a provider granted access can send a progress note back to the patient’s healthcare organization for improved continuity of care…
"Send a progress note back?" Perhaps that might suffice to allay "last update" concerns. Perhaps. A fundamental issue in QA broadly is that of "document version control."

ERRATUM


Have you registered for the Oct 1st-4th Santa Clara Health 2.0 Annual Conference yet? Hope to see you there.
____________

More to come...

Monday, September 11, 2017

Watson and cancer

"...there’s a rather basic but fundamental problem with Watson, and that’s getting patient data entered into it. Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson or hire people to manually enter patient data into the system. Indeed, IBM representatives admitted that teaching a machine to read medical records is “a lot harder than anyone thought.” (Actually, this rather reminds me of Donald Trump saying, “Who knew health care could be so complicated?” in response to the difficulty Republicans had coming up with a replacement for the Affordable Care Act.) The answer: Basically anyone who knows anything about it. Anyone who’s ever tried to wrestle health care information out of a medical record, electronic or paper, into a form in a database that can be used to do retrospective or prospective studies knows how hard it is..."
From Science Based Medicine. They've picked up on and run with the reporting first published by STATnews.
"Hospitals wishing to use Watson must find a way either to interface their electronic health records with Watson..."
Ahhh... that pesky chronic "interoperababble" data exchange problem.

SBM continues:
What can Watson actually do?
IBM represents Watson as being able to look for patterns and derive treatment recommendations that human doctors might otherwise not be able to come up with because of our human shortcomings in reading and assessing the voluminous medical literature, but what Watson can actually do is really rather modest. That’s not to say it’s not valuable and won’t get better with time, but the problem is that it doesn’t come anywhere near the hype...
Necessarily, Watson has to employ the more difficult "Natural Language Understanding" (NLU) component of Natural Language Processing (NLP). I have previously posted on my NLP/NLU concerns here.

Search Google news for "Watson oncology" or "Watson cancer."


I'm sure you've all seen the numerous Watson TV commercials by now.

Are we now skiing just past the "Peak of Inflated Expectations?"

Everything "OncoTech" is of acute interest to me these days amid my daughter's cancer illness. apropos, see my prior post "Siddhartha Mukherjee's latest on cancer."

UPDATE

THCB has a nice post on the topic.
7 Ways We’re Screwing Up AI in Healthcare
BY LEONARD D’AVOLIO


The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has at least an article or two on the topic, and there isn’t a medical conference that doesn’t at least have a panel if not a dedicated day to discuss. The promise and potential is very real.

And yet, we seem to be blowing it.

The latest example is an investigation in STAT News pointing out the stumbles of IBM Watson followed inevitably by the ‘is AI ready for prime time’ debate. Of course, IBM isn’t the only one making things hard on itself. Their marketing budget and approach makes them a convenient target. Many of us – from vendors to journalists to consumers – are unintentionally adding degrees to an already uphill climb.

If our mistakes led only to financial loss, no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death…
Indeed. Good post. Read all of it.

Also of recent relevant note:
Athelas releases automated blood testing kit for home use
Silicon Valley-based startup Athelas today introduced a smartphone app that it says can do simple blood diagnosis at home and return results in just 60 seconds.

The kit itself looks a bit like an Amazon Echo device and is coupled with a smartphone app to reveal the results of the test. In a demonstration, co-founder Deepika Bodapati showed TechCrunch that from taking a sample of blood and sliding it into the device, within seconds users can see their white blood count, neutrophils, lymphocytes and platelets.

Bodapati and co-founder Tanay Tandon are well aware of the fate of a similar device that promised to deliver results but wasn’t exactly what it said it was. That was the blood testing startup, Theranos, that soared to a valuation of $9 billion and then crashed and burned after its effectiveness was called into question.

“Theranos proved there was clear interest in the space, it would have been a great company if it worked,” Tandon said in an interview with Bloomberg. “Now, investors say they need proof before we can raise money.”

Athelas has published papers on the accuracy of its data and has also been FDA-approved as a device to image diagnostics. Before it can be sold over the counter, it will have to receive further approval stating that it’s as accurate as a standard test in lab conditions…
"Theranos?" Remember them? I've had my considerable irascible sport with them here.

Athelas is specifically pitching the utility of their product for oncology blood assay monitoring.


Interesting. My daughter has to run over to Kaiser today for her routine blood draw in advance of her upcoming every-other-week chemo infusion. I'm not sure her oncologist (who is also a hematologist) would be comfortable leaning on DTC single-drop-of-blood assay alternatives.

I think the Athelas people will be at the upcoming Health 2.0 Conference, and we will be hooking up for discussion. I'll have to look back through the Conference agenda to see whether any Watson peeps will be there.

Also, in the wake of my recent cardiology workup, I have to wonder about apps like that now marketed DTC by AliveCor:
Meet Kardia Mobile.
Your personal EKG.

Take a medical-grade EKG in just 30 seconds. Results are delivered right to your smartphone. Now you can know anytime, anywhere if your heart rhythm is normal, or if atrial fibrillation is detected.

Is this widely useful or just another "Worried Well" toy? I showed this pitch to my cardiologist. He was dubious -- with respect to my case, that is.

ERRATUM
On "big data" and "Big Silicon Valley firms." New book release on Sept 12th. Saw a number of articles with and by the author.
"…More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine - to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the Founding Fathers of these companies often make big, bold pronouncements about human nature - a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: "Your program algorithms aren't that complicated," he says. And if humans function like computers, why not hasten the day we become fully cyborg? To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets.

"The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly," he has said. "Having two identities for yourself is an example of a lack of integrity." Of course, that's both an expression of idealism and an elaborate justification for Facebook's business model…"

Looks interesting. I will be reading and reviewing it. I had a run at some of his issues in 2015. See "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."

UPDATE

Finished the Franklin Foer book. Riveting read. Read it "cover to cover" pretty much straight through in one day. Contextual review coming, stay tuned.

CODA

____________

More to come...