"To control spiraling healthcare costs related to managing patients with chronic conditions, 70 percent of healthcare organizations worldwide will invest in consumer-facing mobile applications, wearables, remote health monitoring, and virtual care by 2018, which will create more demand for big data and analytics capability to support population health management initiatives."

From "Big HIT Changes May Be Coming Soon."
Saw an article in my Atlantic Monthly iPhone app recently. Tweeted it, and then read it and the links to which it led me.
Self-tracking using a wearable device can be fascinating. It can drive you to exercise more, make you reflect on how much (or little) you sleep, and help you detect patterns in your mood over time. But something else is happening when you use a wearable device, something that is less immediately apparent: You are no longer the only source of data about yourself. The data you unconsciously produce by going about your day is being stored up over time by one or several entities. And now it could be used against you in court.

Pretty interesting. I have a Fitbit. It was a freebie when I upgraded to my iPhone 5s. I used to use a Jawbone (what a shoddy, flimsy product that is; I bought one for my wife, and both hers and mine flamed out in relatively short order -- more hassle trying to get warranty service than the damn things are worth). I've not been wearing the Fitbit lately; put it on the USB charge cable and then forgot about it. In light of this news I may have to inquire more closely of this vendor whether they capture my activity and sleep pattern data, and, if so, what they do with them.
The first known court case using Fitbit activity data is underway. A law firm in Canada is using a client’s Fitbit history in a personal injury claim. The plaintiff was injured four years ago when she was a personal trainer, and her lawyers now want to use her Fitbit data to show that her activity levels are still lower than the baseline for someone of her age and profession to show that she deserves compensation.
As an additional twist, it is not the raw Fitbit data that will be used in the courtroom. The lawyers are relying on an analytics company called Vivametrica, which compares individual data to the general population by using “industry and public research.” Vivametrica claims that they “define standards for how data is managed, bringing order to the chaos of the wearable.” In other words, they specialize in taking a single person’s data, and comparing it to the vast banks of data collected by Fitbits, to see if that person is above or below average.
Vivametrica says that they are doing more than just enabling consumers to get access to their own data. They are also working with wearable tech companies and healthcare providers, and seeking to “reimagine employee health and wellness programs.” But what happens when there are conflicting interests between individuals who want to monitor data about their body and employers, wearable manufacturers and healthcare providers, and now the law?
Vivametrica isn’t the only company vying for control of the fitness data space. There is considerable power in becoming the default standard-setter for health metrics. Any company that becomes the go-to data analysis group for brands like Fitbit and Jawbone stands to make a lot of money. But setting standards isn’t as simple as it may seem.
Medical research on the relationship between exercise, sleep, diet, and health is moving extremely rapidly. The decisions about what is “normal” and “healthy” that these companies come to depend on which research they’re using. Who is defining what constitutes the "average" healthy person? This contextual information isn’t generally visible. Analytics companies aren’t required to reveal which data sets they are using and how they are being analyzed...
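Absent such disclosure, it's worth seeing how little machinery "comparing one person to the population" actually requires. Below is a minimal, hypothetical sketch in Python: a standard score against an assumed baseline. The baseline numbers and the normality assumption are mine, invented for illustration -- they are emphatically not Vivametrica's proprietary method.

```python
# Hypothetical sketch of scoring an individual's activity against an
# assumed population baseline. All numbers are invented for illustration.
from statistics import NormalDist

def activity_z_score(daily_steps: float, pop_mean: float, pop_sd: float) -> float:
    """How many standard deviations from the assumed population mean."""
    return (daily_steps - pop_mean) / pop_sd

def activity_percentile(daily_steps: float, pop_mean: float, pop_sd: float) -> float:
    """Percentile rank, assuming (dubiously) a normal distribution."""
    return NormalDist(pop_mean, pop_sd).cdf(daily_steps) * 100

# Assumed baseline for an "active professional": 12,000 steps/day, SD 2,500.
z = activity_z_score(8_000, 12_000, 2_500)       # -1.6 SDs: "below average"
pct = activity_percentile(8_000, 12_000, 2_500)  # roughly the 5th percentile
```

Everything turns on which studies supply the mean and standard deviation -- precisely the contextual information the analytics vendors aren't required to reveal. Swap in a different baseline and the same plaintiff's data yields a different verdict.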
Using customers' data against them is nothing new. It predates the "cloud," social media, and biometric wearables. Back in the late 1990s, when I was caring for my terminally ill daughter in L.A., a dustup hit the news regarding a customer of a major grocery chain who'd slipped on some liquid in an aisle (the result of a broken container that had yet to be cleaned up) and injured himself. He sued. The chain responded by introducing his "customer loyalty discount card" purchase data in their defense, alleging that his history of alcoholic beverage purchases indicated that perhaps he'd been intoxicated while in the store, and that was the proximate cause of his misfortune.
Fast-forward 16 years. Now our "Digital Panopticon" is ubiquitous, going far beyond store loyalty cards (all the way, at its most extreme, to extraconstitutional blanket NSA surveillance). Unless you fastidiously turn off Facebook location permissions, you may find it announced on your page -- for all your friends (and skulking, data-mining others) to see -- that you'd just dined at Lindo Michoacan Mexican restaurant and that you were now at the veterinarian with your dogs.
Both of those things happened to me. Trivialities, in those two instances, to be sure, but tiny snips of a much larger concern.
"Already, the lived reality of big data is suffused with a kind of surveillant anxiety — the fear that all the data we are shedding every day is too revealing of our intimate selves but may also misrepresent us."

From "The Anxieties of Big Data," by Kate Crawford, author of the aforementioned Atlantic Monthly piece. Kate continues.
The current mythology of big data is that with more data comes greater accuracy and truth. This epistemological position is so seductive that many industries, from advertising to automobile manufacturing, are repositioning themselves for massive data gathering. The myth and the tools, as Donna Haraway once observed, mutually constitute each other, and the instruments of data gathering and analysis, too, act as agents that shape the social world. Bruno Latour put it this way: “Change the instruments, and you will change the entire social theory that goes with them.” The turn to big data is a political and cultural turn, and we are just beginning to see its scope.

"With more data comes greater accuracy and truth?" Myth, indeed. The utility of any set of data is a function of its intended use. "Big data" shot through with inaccuracies can still be handsomely profitable for the analytical user (or buyer), irrespective of any harms they might visit on the individuals swept up (usually without their knowledge or assent) in the data hauls and subsequent proprietary modeling.
A personal illustration. I worked for a number of years (2000 - 2005) in subprime credit risk modeling at a VISA/MC issuer. We routinely bought "pre-screened" prospect mailing lists for our direct mail marketing campaigns. Direct mail campaigns can be in the aggregate quite profitable at a one percent response rate or lower. Ours, being targeted to credit-hungry subprime prospects with blemished credit histories, typically had response rates of about 4%. Of those who responded, about half did not pass the initial in-house analytical cut for one reason or another (many owing to impossible, bad data in the individuals' dossiers). Of the remaining 2% that we actually booked, perhaps half of those would eventually "charge off" (default). These were our "false positives."
The surviving 1% were lucrative enough to pay for everything, including a nice net margin (we set new annual profit levels every year I was there). It's called "CPA" -- cost per acquisition. Ours were about $100 per new account. Fairly standard in the industry at the time.
Potentially creditworthy (and profitable) prospects that we passed on after they replied were our "false negatives." And, ~96% of our marketing targets didn't even respond, so we were "wrong" about them (the "unknown unknowns") at the outset.
To sum up: we were, in a material sense, routinely 99% "wrong" -- but, notwithstanding, incredibly profitable.
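The funnel above can be sketched in a few lines of Python. The percentage breakpoints are the round numbers from the text; the mailing size is arbitrary.

```python
# The direct-mail funnel described above, using the post's round numbers.
# The mailing size is arbitrary; only the rates come from the text.

def mail_funnel(mailed: int):
    responded = mailed * 0.04          # ~4% response rate
    booked = responded * 0.5           # half fail the in-house analytical cut
    profitable = booked * 0.5          # half of booked accounts charge off
    # "Wrong" = non-responders + rejected responders + charge-offs
    wrong_share = (mailed - profitable) / mailed
    return booked, profitable, wrong_share

booked, profitable, wrong_share = mail_funnel(1_000_000)
# 20,000 booked; 10,000 ultimately profitable; "wrong" about 99% of targets
```

At the ~$100 cost-per-acquisition cited above, a back-of-the-envelope reading is that those 10,000 ultimately profitable accounts had to carry roughly $2 million in acquisition spend (20,000 booked × $100) -- which, per the record profits noted, they evidently did, and then some.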
Now, "big data shot through with inaccuracies" is entirely another matter when it comes to, say, "terrorism surveillance" and getting it wrong. Recall the stillborn post-9/11 federal proposal for "Total Information Awareness." I do. Wrote about it here.
From William Safire's NY Times editorial (11/14/2002):

"...Every purchase you make with a credit card, every magazine subscription you buy and medical prescription you fill, every Web site you visit and e-mail you send or receive, every academic grade you receive, every bank deposit you make, every trip you book and every event you attend -- all these transactions and communications will go into what the Defense Department describes as "a virtual, centralized grand database."

To this computerized dossier on your private life from commercial sources, add every piece of information that government has about you -- passport application, driver's license and bridge toll records, judicial and divorce records, complaints from nosy neighbors to the FBI, your lifetime paper trail plus the latest hidden camera surveillance -- and you have the supersnoop's dream: a "Total Information Awareness" about every U.S. citizen.

This is not some far-out Orwellian scenario. It is what will happen to your personal freedom in the next few weeks if John Poindexter gets the unprecedented power he seeks...."

I warned back then:

In addition to the warrantless law enforcement implications of HSA's envisioned data repositories, we must also recognize that a TIA database will constitute a commercial data-miner's wet dream of scope heretofore unimagined. Vigilance with respect to HSA collaborative "public-private partnerships" had better be tireless.

While TIA was nominally killed off (proud to say I had a small hand in its demise), things inexorably got worse. See my July 9th, 2008 post "Privacy and the 4th Amendment amid the "War on Terror."
Today we have all manner of virtually unregulated big data mining, modeling, and aggregated and re-aggregated resale going on, using all of us as correlational grist -- e.g., Google, the overtly commercial Amazon and their lesser competitors, and "free" social media platforms such as Facebook, Twitter, Tumblr, Pinterest, etc., along with business sites such as LinkedIn. Digital gumshoe companies such as Palantir are hard at work quietly drilling in the tar sands of social media, modeling away and "scoring" individuals for their clients, far from any regulatory purview.
The Digital Panopticon.
UBER KNOWS IF YOU'VE BEEN NAUGHTY
The ride-sharing startup Uber has recently become mired in bad publicity over its privacy practices. See, e.g., "Uber’s PR stumble drives new privacy woes."
The ride-sharing app this week finds itself in the midst of a major public-relations nightmare after BuzzFeed captured an Uber executive suggesting the company might conduct opposition research on journalists — not to mention a second incident in which an Uber employee looked at a reporter’s travel history...

See also "What’s Really Wrong With Uber?"
Also "7 reasons you may want to delete your Uber app."
1. Uber tracks riders’ hookups

In 2012, Uber said it was tracking user data to see which cities had the most one-night stands in a blog post titled “Rides of Glory.” Uber said it was analyzing data on its users’ car rides on Friday and Saturday nights between 10 p.m. and 4 a.m.

“The world has changed and gone are the days of the Walk of Shame,” the post says. “We live in Uber’s world now.” According to Uber’s analysis, Boston topped the list of cities with the most hookups. Uber took it a step further and even broke down its data by neighborhood.

If you don't have "Surveillant Anxiety," you've not been paying attention.
TRAFFICKING IN ePHI
Matthew Holt of Health 2.0 has a post up at THCB entitled "Is Deborah Peel up to her old tricks?"
Long time (well very long time) readers of THCB will remember my extreme frustration with Patients Rights founder Deborah Peel who as far as I can tell spent the entire 2000s opposing electronic health data in general and commercial EMR vendors in particular. I even wrote a very critical piece about her and the people from the World Privacy Forum who I felt were fellow travelers back in 2008. And perhaps nothing annoyed me more than her consistently claiming that data exchange was illegal and that vendors were selling personally identified health data for marketing and related purposes to non-covered entities (which is illegal under HIPAA).

Dr. Peel (a psychiatrist) is the nation's pre-eminent alarmist regarding the putative perils of ePHI (electronic Protected Health Information). She apparently believes that her pure motives (and I have no doubt that they are pure) entitle her to be cavalier with facts and their conflation.
Matthew minces no words in conclusion.
It’s now put up or shut up time. Are personal health data resales a bigger industry than health IT? Are vendors really illegally selling identified health data? Is Deborah going to retract her statements? Or at least explain what she knows–with evidence please–that I’m missing?

See the accruing comments below Matthew's post.
I had a go at her during my October 27th, 2014 post "An Epic battle: Did the EHR kill Dallas Ebola patient zero? On the double-edged sword of Health IT."
In the immediate aftermath of the Dallas Duncan dx debacle, sharp-elbowed HIT critics wasted no time assigning blame. The ever-strident patient privacy rights advocate Deborah C. Peel, MD posted a LinkedIn piece under an inflammatory click-bait headline "Why did Mr. Duncan have to die for the US to face flaws in EHRs?"

Dr. Peel's recent awkward TEDx talk.
Notwithstanding that she's a psychiatrist, not an ER doc, wasn't there in the Dallas ER, and is not an Epic user...
Make up your own minds. She asserts a lot of things that are in fact true. However,
Ms. Peel, @11:01:

“So, we know, for example, that physicians and their EHRs sell the [ePHI] data…”

Nice conjunctive conflation. Name some physicians who are selling ePHI.
Beneficent, public-minded motives don't grant you a pass on documentable specifics.
A THOUGHTFUL BOOK ON PRIVACY
I read them all. This one is excellent.
In his essay “Commodities and the Politics of Value,” Arjun Appadurai has noted “the tendency of all economies to expand the jurisdiction of commoditization and of all cultures to restrict it.” The economy’s jurisdiction has now come to include human organs and genetic material, which are “mined” and “harvested” like minerals and crops. “We used to think our fate is in the stars,” says former Human Genome Project director James Watson. “Now we know, in large measure, our fate is in our genes.” So is the potential for making a lot of money, along with a new frontier for privacy abuse. A 2001 survey by the American Management Association revealed that 30 percent of large and midsize companies solicited genetic information about employees; 7 percent used the information in hiring and promotion. Meanwhile biotech companies “have flooded the federal patent office with applications to patent newly discovered genes” even though the genes occur naturally. One bioethicist has likened this trend to “patenting the alphabet and charging people every time they speak.”
Even when individuals manage to rise above a market mentality, they do not necessarily rise above the Market. In one especially disturbing instance, families whose children were victims of Canavan disease, a rare and fatal recessive disorder, donated their children’s tissue samples for research in the hopes that better prenatal diagnostics and new treatments would spare other families the suffering they had known. Later they discovered that researchers at Miami Children’s Hospital had patented the gene for Canavan and the hospital was charging royalty fees for diagnostic tests. In some cases, the same families whose donations had enabled the research were charged fees when they tested for the condition in other members of their households.
For scale of exploitation, few examples can match the story of Henrietta Lacks, a poor African American tobacco farmer, whose “immortal life” is recounted in a recent book by Rebecca Skloot. Prior to her death from cervical cancer in 1951, medical researchers at the “colored” ward of Johns Hopkins Hospital removed and cultured Lacks’s cells without her knowledge or consent. Since then more than 50 million metric tons of cells grown from this original “harvest” have been used for medical research, including the development of the polio vaccine, in vitro fertilization, and cloning. Of the millions in profits generated by Lacks’s unwitting donation, her family did not receive a penny. They did not even learn that her cells had been used until more than twenty years later, when they too became unwitting subjects of medical research.
In an effort to curtail such abuses, the state of Oregon passed the Genetic Privacy Act of 1995, which mandated “informed consent for the collection, analysis, and disclosure of DNA information” and the destruction of DNA samples once testing was completed. The most controversial provision of the law was its “property clause,” stating that an “individual’s genetic information is the property of the individual.” Not surprisingly, the most vocal opposition to the property clause — in Oregon and in other states such as New Jersey and Maryland, which followed with genetic privacy statutes of their own — came from and on behalf of the biotechnical industry.
Sociologist Margaret Everett, who served on the Oregon Genetic Privacy Advisory Committee and whose son died as the result of a rare genetic disorder, opposed the property clause for different reasons. Though initially she had joined the committee with the aim of protecting the clause against legislative revision, noting that she felt “very ‘proprietary’ about my son’s cells,” she eventually came to feel that “the proponents of individual property rights were encouraging, perhaps unwittingly, the very commodification and objectification that I found so troubling.” In other words, her son’s cells were not saved from economic exploitation simply by giving her an exclusive patent to exploit them.
Everett’s experience of wrestling with the property clause raises questions about how we construe privacy rights within the structures of a market economy. Even when we oppose the economic exploitation of our bodies, we find it difficult to do so in any way other than to turn them into salable property. To use Marx’s words, we cannot think of something as ours unless we “have it.” Is it possible that bodily integrity and personal privacy would find better fulfillment in a society where neither could be sold? Might our genetic material become most indisputably ours in a society that viewed its citizens — as the parents of the children with Canavan disease surely viewed themselves — as stewards of the common wealth of humankind?
Keizer, Garret (2012-08-07). Privacy (Big Ideas//Small Books) (pp. 89-92). Macmillan. Kindle Edition.
Below: this is pretty interesting, my top 10 viewing countries the past 5 days.
Thanks for continuing to read.
SPEAKING OF "SURVEILLANT ANXIETY"
How Medical Care Is Being Corrupted

I am not a fan of the "population health" thing. If every physician does what's best for her patients, the population health needle will move in the right direction in relatively short order.
PAMELA HARTZBAND and JEROME GROOPMAN, NY Times OpEd
WHEN we are patients, we want our doctors to make recommendations that are in our best interests as individuals. As physicians, we strive to do the same for our patients.
But financial forces largely hidden from the public are beginning to corrupt care and undermine the bond of trust between doctors and patients. Insurers, hospital networks and regulatory groups have put in place both rewards and punishments that can powerfully influence your doctor’s decisions.
Contracts for medical care that incorporate “pay for performance” direct physicians to meet strict metrics for testing and treatment. These metrics are population-based and generic, and do not take into account the individual characteristics and preferences of the patient or differing expert opinions on optimal practice...
Physicians who meet their designated targets are not only rewarded with a bonus from the insurer but are also given high ratings on insurer websites. Physicians who deviate from such metrics are financially penalized through lower payments and are publicly shamed, listed on insurer websites in a lower tier. Further, their patients may be required to pay higher co-payments.
These measures are clearly designed to coerce physicians to comply with the metrics. Thus doctors may feel pressured to withhold treatment that they feel is required or feel forced to recommend treatment whose risks may outweigh benefits.
When a patient asks “Is this treatment right for me?” the doctor faces a potential moral dilemma. How should he answer if the response is to his personal detriment? Some health policy experts suggest that there is no moral dilemma. They argue that it is obsolete for the doctor to approach each patient strictly as an individual; medical decisions should be made on the basis of what is best for the population as a whole...
...the power belongs to the insurers and regulators that control payment. There is now a new paternalism, largely invisible to the public, diminishing the autonomy of both doctor and patient...
Medical care is not just another marketplace commodity. Physicians should never have an incentive to override the best interests of their patients.
More to come...