Interesting, in the context of the just-concluded Health 2.0 2015, where we were regaled for four days with an exuberant ("transformative?"), jaw-dropping, future-is-now onslaught comprising all of this cool new tech that's gonna inexpensively measure, analyze, and "interoperably" transmit every little "quantified self" datum about us to those with a need to know. Ran across an article citing this book.
Doctors Hate Ambiguity
How an obsession with certainty can hurt patients’ health.

I may buy the book. Not sure yet.
...Ordering a test is often appropriate, but it can also be an all-too-easy response to ambiguity. Furthermore, test results aren’t always clear, which can lead to a cascade of unnecessary tests, a phenomenon identified in a 2013 experiment led by Cornell’s Sunita Sah. Sah, Pierre Elias, and Dan Ariely recruited a group of more than 700 men between the ages of 40 and 75 and randomly assigned them to one of four conditions. One group received information about the risks and benefits of a prostate biopsy. The other groups also received one of three hypothetical results from the prostate-specific antigen screening test, which informs the decision to have a biopsy: normal, elevated, or inconclusive. An inconclusive test result, subjects were informed, “provides no information about whether or not you have cancer.” Would the men proceed with the hypothetical biopsy?
Of subjects who weren’t given the PSA screening results, only 25 percent chose to proceed with the prostate biopsy. But 40 percent of subjects who received inconclusive PSA test results opted for the procedure. That’s a meaningful increase among those who received a result clearly explaining that it “provides no information.” Somehow, the very idea of not knowing something led to a panicky commitment to more invasive testing. Since prostate biopsies are not only risky but costly, the increased call for the biopsy is not insignificant. Sah and her colleagues described the problem as one of “investigation momentum.”
Investigation momentum, Sah told me, results in “additional, potentially excessive diagnostic testing when you get a result that’s ambiguous.” She doesn’t deny that there are many other causes of overtesting in the United States. The financial incentives involved are a mammoth issue, as is defensive medicine, whereby doctors treat patients to avoid potential lawsuits. But one overlooked cause, Sah said, is the self-propelling cascade of tests encouraged because of inconclusive results, ambiguity aversion, and a disproportionate faith in testing.
Diagnostic imaging technologies, in particular, seem prone to producing ambiguous results. Medical scans can be extraordinarily good at detecting abnormalities, but they are not always very good at revealing whether those abnormalities actually pose a problem.
[W]e put too much faith in technology to physically pinpoint the causes of poor health. In neurolaw, which applies brain imaging to criminal law, this dilemma is especially apparent. Neuroimaging evidence showing brain abnormalities has helped spare murderers from the death penalty. According to a database created by Duke University’s Nita Farahany, such evidence was considered in at least 1,600 cases between 2004 and 2012. Often that evidence’s effectiveness depends on locating abnormalities in the brain. One San Diego defense attorney boasted of introducing a PET scan as evidence of his client’s moral innocence: “This nice color image we could enlarge. ... It documented that this guy had a rotten spot in his brain. The jury glommed onto that.”
James Fallon, a neuroscientist at the University of California–Irvine who has studied the brain scans of psychopathic murderers, is skeptical of applying brain scans to criminal cases. “Neuroimaging isn’t ready for prime time,” he told me. “There are simply too many nuances in interpreting the scans.” In an odd twist of fate, Fallon once subjected himself to a PET scan because his lab needed images of normal brains to contrast with abnormal ones. To his surprise, his prefrontal lobe looked the same as those of the psychopathic killers he’d long studied. The irony wasn’t lost on him. That Fallon never hurt anyone isn’t the point; it’s that one nonviolent person’s scan looked no different from a violent person’s.
Without question, doctors have used imaging technologies to great benefit, just as scientists are learning a great deal using brain scans. But images of the brain, like those of the rest of the body, do not always imply one-to-one causal relationships. Like abnormal shoulder cartilage, brain abnormalities don’t mean that anything is necessarily wrong. Amanda Pustilnik, who teaches law at the University of Maryland, has compared neurolaw to phrenology, Cesare Lombroso’s biological criminology, and psychosurgery. Each theory or practice, Pustilnik wrote, “started out with a pre-commitment to the idea of brain localization of violence.” But the causes of violence, like the causes of poor health, do not usually begin in the body. They pass through it, and the marks they leave are often subtle and vague.
No one can blame practitioners or policymakers for their enthusiasm over new technological tools. But just as criminologists would be wiser to focus on the social—rather than biological—conditions that spur violence, physicians are usually better off treating the patient, not the scan. Most diagnoses, we know, can be made by chatting. New ways of seeing aren’t necessarily clearer ways of seeing, and sometimes, the illusion of knowing is more dangerous than not knowing at all.
The words of Dr. Bob Wachter come to mind:

[T]he medical record, already straining under the weight of more and more data, was drafted for yet another mission: to provide the raw materials for clinical research. As this research began to identify effective treatments for certain well-defined diseases, the record also needed to take in physicians’ assessments of their cases— a place where they could weave together the facts about their patients into a coherent hypothesis, accompanied by treatment plans.

More on his new book "The Digital Doctor" below.
OCT 15TH UPDATE
From THCB:
Medical Errors, Or Not
Oct 15, 2015
NORTIN M. HADLER, MD
In a recent post, the renowned neurologist, Martin Samuels, paid homage to the degree to which uncertainties create more than just anxious clinicians; they can lead to clinical errors. That post was followed by another by Paul Levy, a former CEO of a Boston hospital, arguing that the errors can be diminished and the anxieties assuaged if institutions adhered to an efficient, salutary systems approach. Both Dr. Samuels and Mr. Levy anchor their perspective in the 1999 Institute of Medicine report, “To Err is Human”, which purported to expose an alarming frequency of fatal iatrogenic errors. However, Dr. Samuels reads the Report as a documentation of the price we pay for imperfect knowledge; Mr. Levy as the price we pay for an imperfect organization of health care delivery. These two posts engendered numerous comments and several subsequent posts unfurling one banner or the other.
I crossed paths with Dr. Samuels a long time ago when we were both speakers at a CME course held by the American Geriatrics Society and the American College of Physicians. I still remember his talk for its content and for its clinical perspective. His post on THCB is similarly worthy for championing the role of the physician in confronting the challenge of doing well by one patient at a time. Mr. Levy and his fellow travelers are convinced they can create settings and algorithms that compensate for the idiosyncrasies of clinical care. I will argue that there is nobility in Dr. Samuels’ quest for clinical excellence. I will further argue that Mr. Levy is misled by systems theories that are more appropriate for rendering manufacturing industries profitable than for rendering patient care effective...
Most hospital administrators, thanks largely to the minions they hire and pay as consultants, burden and brandish Lean, Six Sigma, and similar rallying cries in facing the regulatory avalanche in the wake of “To Err is Human.”... Hence, the “quality” zealots hold sway to this day...

Interesting. Wonder what Dr. Toussaint would have to say about that last crack?
I am also immediately reminded of "How Doctors Think," one of my long-time favorites.
For each patient, you are making scores of decisions about his symptoms, physical findings, blood tests, EKG, and x-rays. Now, multiply all of those decisions you made for each patient by three, assigned in thirty minutes by the triage nurse; the total can reach several hundred. The circus performer spins only a handful of plates. A more accurate analogy might be to stacks of plates, one on top of another, and another, and another, all of different shapes and weights. Add to these factors the ecology of an emergency department, the number of people tugging at your sleeve, interrupting and distracting you with requests and demands as you spin your plates. And don't forget you are in an era of managed care, with limited money, so you have to set priorities and allocate resources parsimoniously: it costs less if you can take several plates off their sticks, meaning if you limit testing and rapidly send the patient home.
Groopman, Jerome (2008-03-12). How Doctors Think (pp. 68-69). Houghton Mifflin Harcourt. Kindle Edition.

Dr. Groopman continues:
Many doctors have reacted by truncating visits to ten or fifteen minutes and increasing the volume of patients they see in a given day. This speeds up the train and fosters the kinds of errors that Pat Croskerry and Harrison Alter fear when the ER doctor is spinning plates. Working in haste can not only increase cognitive mistakes but impair the communication of even the most basic information about treatment. A study of 45 doctors caring for 909 patients found that two thirds of the physicians did not tell the patient how long to take a new medication or what side effects it might cause. Nearly half of the doctors failed to specify the dose of the medication and how often it should be taken.

Let's recall Dr. Bob Wachter's excellent Health 2.0 2015 Keynote and his book "The Digital Doctor."
Sometimes the frenetic pace overwhelms the doctor and estranges the patient and family. Friends of mine who live in a Dallas suburb had adored their pediatrician until they came to feel that she was not paying close attention during routine visits. "She had four rooms going at once," the mother told me, with the doctor and her nurses shuttling among them. Often my friends' visit was interrupted by a nurse entering to ask the doctor a question about another child. Then, one evening after a yearly checkup, the pediatrician called my friends at home. "She apologized and told us that she had injected saline and forgotten to mix in the vaccine." My friends took their children in the next day for the vaccination and then decided to find another doctor. "We really liked her, but she just became too busy and too distracted, and we worried that she would miss something important about the kids." [ibid, pp 86-87]
Appreciating the limitations of patient stories and simple observation in unearthing the body’s deep secrets, nineteenth-century physicians struggled to find ways to get under the hood— while the patient still lived. Some began to listen to heart and lung sounds by placing their ear on the patient’s chest, but a French physician named René Laennec found this technique crude and more than a little awkward, particularly when examining a female patient. In 1816, the 35-year-old Laennec tightly rolled up a sheaf of paper and listened to a young woman’s chest through it, thus creating a crude version of what would later become the stethoscope. Foreshadowing the reaction of many to today’s new medical technologies, this diagnostic advance was not well received. In 1834, speaking of the stethoscope, the Times of London wrote,
That it will ever come into general use, notwithstanding its value, is extremely doubtful; because its beneficial application requires much time and gives a good bit of trouble both to the patient and the practitioner; because its hue and character are foreign and opposed to all our habits and associations.

Some of this pushback, of course, is good old-fashioned resistance to change— a fundamental human characteristic that is hardly limited to doctors. But the early objections from within medicine also reflected physicians’ views of themselves, particularly their desire to be seen as real professionals, to distance themselves from the barber-surgeons whose haphazard— and often dangerous— practices had sullied their field in days past. (Modern-day barbers’ poles are said to be red-and-white-striped to recall the blood and bandages of medieval barber-surgeons.)
The findings from auscultation (the act of listening to the heart through the stethoscope) would eventually be accompanied by those drawn from other technologies: the otoscope (ears) and the ophthalmoscope (eyes) and, early in the next century, x-rays and blood tests. Each of these technologies produced new information (“the cardiac exam has no murmurs or extra heart sounds”; “the serum potassium is 3.4 mEq/L”) that needed to be recorded and analyzed.
And so the physician’s note, which had once been a chronological diary of patients’ symptoms— the doctor’s journal, in essence— morphed into a vessel brimming with observations. These observations were drawn from a multitude of sources: first from the patient’s history; then from simple tools applied by the doctor, like the stethoscope; and later from increasingly sophisticated tools, like radiology and blood tests, performed by others when “ordered” to do so by the physician.
To Reiser, the discovery of the stethoscope had even broader implications. The rise of the stethoscope, and the triumph of auscultation that resulted, also marked the beginning of what the historian called “modern therapeutic distancing,” as doctors’ attention shifted from the words spoken by patients to the sounds produced by their organs. And thus began the transformation of the doctor-patient relationship.
Looking back, we can now see that the stethoscope was small potatoes: it separated Laennec from his patient by a mere 18 inches. Today’s clinician can diagnose and treat patients based on reams of data collected through cameras and sensors. And while the people (and not all of them are doctors) analyzing these data may be in the next room, the advent of high-speed networks and wireless connectivity means that they may be across the street, or even across an ocean.
As the number of available tools and the amount of salient information about patients mushroomed, physicians entered a world in which making a diagnosis or choosing a therapy could be based on science rather than intuition, plausibility, or tradition. For example, the technique of bloodletting had been accepted as effective for hundreds of years until, in 1835, the French physician Pierre Louis analyzed whether the procedure had any impact on patients’ outcomes. His findings, using what he referred to as the “numerical method” (today we would call this a clinical trial, and the use of its results, evidence-based medicine), stood medicine on its head: bloodletting was absolutely worthless. Louis recognized the importance not only of his findings, but of his method as well. Bemoaning the slow pace at which medical research was progressing, he urged his fellow doctors to observe facts, then analyze them vigorously. “It is impossible to attain [medical progress] without classifying and counting [such facts],” he wrote, “and then therapeutics will advance not less steadily than other branches of science.”
With that, the medical record, already straining under the weight of more and more data, was drafted for yet another mission: to provide the raw materials for clinical research. As this research began to identify effective treatments for certain well-defined diseases, the record also needed to take in physicians’ assessments of their cases— a place where they could weave together the facts about their patients into a coherent hypothesis, accompanied by treatment plans.
Yet even as the medical record was being force-fed like a French goose, it was something else that turned it into a beast that could not be tamed— with either pen or keyboard: the push by outsiders to see, read, and analyze the note, each for his own purposes. This was what ultimately broke the backs of clinicians and set the stage for our current uneasy relationship with our electronic health records.
Wachter, Robert (2015-04-01). The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age (pp. 32-34). McGraw-Hill Education. Kindle Edition.
How about a bit of Bertalan Meskó?
We are facing major changes as medicine and healthcare now produce more developments than in any other era. Key announcements in technology happen several times a year, showcasing gadgets that can revolutionize our lives and our work. Only five or six years ago it would have been hard to imagine today’s ever-increasing billions of social media users; smartphone and tablet medical applications; the augmented world visible through Google Glass; IBM’s supercomputer Watson used in medical decision making; exoskeletons that allow paralyzed people to walk again; or printing out medical equipment and biomaterials in three dimensions. It would have sounded like science fiction. Sooner or later such announcements will go from multiple times a year to several times a month, making it hard to stay informed about the most recent developments. This is the challenge facing all of us.

Health 2.0 needs to book this Wunderkind.
At the same time, ever-improving technologies threaten to obscure the human touch, the doctor-patient relationship, and the very delivery of healthcare. Traditional structures of medicine are about to change dramatically with the appearance of telemedicine; the Internet full of misleading information and quacks offering hypnosis consultation through Skype; surgical robots; nanotechnology; and home diagnostic devices that measure almost anything from blood pressure to blood glucose levels and genetic data.
People are generally afraid of change even though there are good changes that bring value to all of us. The challenge we are facing now can lead to the best outcomes ever for the medical profession and for patients as well. My optimism, though, is not based on today’s trends and the state of worldwide healthcare. If you read this book you will realize, as I did, how useful technology can be if we anticipate the future and consider all the potential risks.
Many of us have already witnessed signs of these. Patients, for example, use online networks to find kindred individuals dealing with similar problems. Doctors can now prescribe mobile applications and information in addition to conventional treatment. Given what I have seen over the last few years as a medical futurist, no stakeholder of healthcare— from policy maker, researcher, patient, to doctor— is ready for what is coming. I see enormous technological changes heading our way. If they hit us unprepared, which we are now, they will wash away the medical system we know and leave it a purely technology-based service without personal interaction. Such a complicated system should not be washed away. Rather, it should be consciously and purposefully redesigned piece by piece. If we are unprepared for the future, then we lose this opportunity.
According to “Digital Life in 2025” published by the Pew Research Internet Project in 2014, information sharing through the Internet will be so effortlessly interwoven into our daily lives that it will flow like electricity; the spread of the Internet will enhance global connectivity, fostering more worldwide relationships and less ignorance. These are going to be the driving forces of the next years. But without looking into the details, the upcoming dangers will outweigh the potentially amazing advantages.
My background as a medical doctor, researcher, and geek gives me a unique perspective about medicine’s future. My doctor self thinks that the rapidly advancing changes to healthcare pose a serious threat to the human touch, the so-called art of medicine. This we cannot let happen. People have an innate propensity to interact with one another; therefore we need empathy and intimate words from our caregivers when we’re ill and vulnerable.
The medical futurist in me cannot wait to see how the traditional model of medicine can be improved upon by innovative and disruptive technologies. People usually think that technology and the human touch are incompatible. My mission is to prove them wrong. The examples and stories in this book attempt to show that the relationship is mutual. While we can successfully keep the doctor-patient personal relationship based on trust, it is also possible to employ increasingly safe technologies in medicine, and accept that their use is crucial to provide good care for patients. This mutual relationship and well-designed balance between the art of medicine and the use of innovations will shape the future of medicine.
Bertalan Meskó (2014-08-27). The Guide to the Future of Medicine: Technology AND The Human Touch (Kindle Locations 73-104). Dr. Bertalan Meskó (Webicina Kft.). Kindle Edition.
__
Cool new tech that's gonna inexpensively measure, analyze, and "interoperably" transmit every little "quantified self" datum about us to those with a need to know.
Yeah. But, beyond the issues of relative data relevance within the confines of the patient-doctor relationship, there's the not-so-small matter of privacy.
LOL. From The Atlantic.
I knew we’d bought walnuts at the store that week, and I wanted to add some to my oatmeal. I called to my wife and asked her where she’d put them. She was washing her face in the bathroom, running the faucet, and must not have heard me—she didn’t answer. I found the bag of nuts without her help and stirred a handful into my bowl. My phone was charging on the counter. Bored, I picked it up to check the app that wirelessly grabs data from the fitness band I’d started wearing a month earlier. I saw that I’d slept for almost eight hours the night before but had gotten a mere two hours of “deep sleep.” I saw that I’d reached exactly 30 percent of my day’s goal of 13,000 steps. And then I noticed a message in a small window reserved for miscellaneous health tips. “Walnuts,” it read. It told me to eat more walnuts.

A long read. Read all of it. Lordy. Deborah Peel will have an aneurysm. See my November 2014 post "Big Data" and "surveillant anxiety." More on all this digital gumshoeing below.
It was probably a coincidence, a fluke. Still, it caused me to glance down at my wristband and then at my phone, a brand-new model with many unknown, untested capabilities. Had my phone picked up my words through its mic and somehow relayed them to my wristband, which then signaled the app?
The devices spoke to each other behind my back—I’d known they would when I “paired” them—but suddenly I was wary of their relationship. Who else did they talk to, and about what? And what happened to their conversations? Were they temporarily archived, promptly scrubbed, or forever incorporated into the “cloud,” that ghostly entity with the too-disarming name?
It was the winter of 2013, and these “walnut moments” had been multiplying—jarring little nudges from beyond that occurred whenever I went online. One night the previous summer, I’d driven to meet a friend at an art gallery in Hollywood, my first visit to a gallery in years. The next morning, in my inbox, several spam e-mails urged me to invest in art. That was an easy one to figure out: I’d typed the name of the gallery into Google Maps. Another simple one to trace was the stream of invitations to drug and alcohol rehab centers that I’d been getting ever since I’d consulted an online calendar of Los Angeles–area Alcoholics Anonymous meetings. Since membership in AA is supposed to be confidential, these e‑mails irked me. Their presumptuous, heart-to-heart tone bugged me too. Was I tired of my misery and hopelessness? Hadn’t I caused my loved ones enough pain?
Some of these disconcerting prompts were harder to explain. For example, the appearance on my Facebook page, under the heading “People You May Know,” of a California musician whom I’d bumped into six or seven times at AA meetings in a private home. In accordance with AA custom, he had never told me his last name nor inquired about mine. And as far as I knew, we had just one friend in common, a notably solitary older novelist who avoided computers altogether. I did some research in an online technology forum and learned that by entering my number into his smartphone’s address book (compiling phone lists to use in times of trouble is an AA ritual), the musician had probably triggered the program that placed his full name and photo on my page...
I wondered whether a generation that found the concept of privacy archaic might be undergoing a great mutation, surrendering the interior psychic realms whose sanctity can no longer be assured. Masking one’s insides behind one’s outsides—once the essential task of human social life—was becoming a strenuous, suspect undertaking; why not, like my teenage acquaintance, just quit the fight? Surveillance and data mining presuppose that there exists in us a hidden self that can be reached through probing and analyses that are best practiced on the unaware, but what if we wore our whole beings on our sleeves? Perhaps the rush toward self-disclosure precipitated by social media was a preemptive defense against intruders: What’s freely given can’t be stolen...
...I still believe in the boundaries of my own skull and feel uneasy when they are crossed. Not long ago, my wife left town on business and I texted her to say good night. “Sleep tight and don’t let the bedbugs bite,” I wrote. I was unsettled the next morning when I found, atop my list of e‑mails, a note from an exterminator offering to purge my house of bedbugs. If someone had told me even a few years ago that such a thing wasn’t pure coincidence, I would have had my doubts about that someone. Now, however, I reserve my doubts for the people who still trust. There are so many ghosts in our machines—their locations so hidden, their methods so ingenious, their motives so inscrutable—that not to feel haunted is not to be awake. That’s why paranoia, even in its extreme forms, no longer seems to me so much a disorder as a mode of cognition with an impressive track record of prescience...
__
"I ONLY HAVE TO BE RIGHT 51% OF THE TIME"
Back in January, during JP Morgan week in SF, I got invited to a private dinner with athenaHealth CEO Jonathan Bush. One of the attendees (a venture capital analyst, IIRC) made the "51%" statement during the dinner discussion.
Yeah. But it's really significantly more nuanced than that.
Back in the '80s, while working at the International Technology Corporation environmental radiation lab in Oak Ridge, I was a Principal in an "exam cram" A/V business with a professor at UTK who taught private exam prep courses (e.g., SAT, ACT, GRE, LSAT) on the side. We started a company and then produced and marketed a series of 2-hour VHS videos and their accompanying print booklets in my South Knoxville hole-in-the-wall studio.
We bought "targeted" mailing lists (i.e., school guidance departments, libraries, and families with kids in school at the right grade levels) and sent out mass mailing brochures. The rule of thumb in Direct Mail back then was that a 1% return/sale rate was sufficient to be profitable, provided your costs were fully baked in and your retail pricing remained competitive.
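The break-even arithmetic behind that rule of thumb is simple enough to sketch. A minimal illustration below -- the mailing size, per-piece cost, and per-sale margin are hypothetical placeholders, not our actual figures:

```python
# Back-of-envelope direct-mail break-even sketch.
# All dollar figures below are hypothetical illustrations,
# not our actual 1980s numbers.

def breakeven_response_rate(cost_per_piece, net_margin_per_sale):
    """Response rate at which a mailing exactly covers its own cost."""
    return cost_per_piece / net_margin_per_sale

pieces_mailed = 50_000
cost_per_piece = 0.40         # printing + postage + list rental, per brochure
net_margin_per_sale = 45.00   # retail price minus product/fulfillment cost

be_rate = breakeven_response_rate(cost_per_piece, net_margin_per_sale)
print(f"Break-even response rate: {be_rate:.2%}")   # ~0.89%

# At the canonical 1% return rate:
sales = pieces_mailed * 0.01
profit = sales * net_margin_per_sale - pieces_mailed * cost_per_piece
print(f"Profit on {pieces_mailed:,} pieces at 1%: ${profit:,.0f}")
```

Squeeze the margin or let the piece cost creep and that break-even rate climbs fast, which is why the "fully baked in" caveat mattered.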
I was the Managing Partner (Corporate VP/Sec'y-Treasurer/Registered Agent, actually). Producer, Director, Editor, Copywriter, Layout Artist, and Data Entry and UPS Shipping Clerk.
There's gotta be a DSM-V code for that.
So, at a 1% return rate, we were basically wrong 99% of the time in a very real sense: mostly a waste of paper, postage, and labor. We lasted a half-dozen years and eventually dissolved without going BK. Never could get to Scale. We were lucky to break even by the time we folded, after I moved to Vegas.
Fast-forward to 2000–2005: my tenure as a subprime risk analyst / portfolio credit risk manager at a privately held VISA/MC bank.
The bank's Marketing Department spent millions on "targeted" direct mail marketing campaigns. This being a financially blemished, credit-hungry "subprime" demographic, our mail pitches typically had about a 4% response rate. About half of those did not pass initial in-house screening muster. Of those we booked, typically half eventually went "bad," "charging off" with delinquent write-off balances. We sold those accounts to 3rd-party collections companies for cents on the dollar (there's a huge "holder-in-due-course" aftermarket trafficking in bad debt -- debt obligations comprise a bid/ask market commodity just like bargeloads of corn or soybeans or oil futures). See my 2008 post "Tranche Warfare."
So, from our 4% DM solicitation campaign response, we were down to a ~1% "good rate" -- accounts that performed well over time. We were, well, "wrong" 99% of the time.
We were wrong in a couple of senses. Beyond prospects who simply didn't take the bait and apply, accounts we booked that subsequently went bad constituted "false positives," whereas prospects we declined after they'd applied -- notwithstanding that they would have in fact been good, profitable customers -- comprised "false negatives." Our Rumsfeldian "Unknown Unknowns" (you'd also have to include some non-responders in that bucket as well).

Not to worry. Record profits every year I was there. Our marketing "CPA" (Cost per Acquisition) was about $103. Chump change relative to all of the money we extracted from the financially halt and lame. The office joke was "The Best Things in Life are FEE!"
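For the curious, here's that funnel arithmetic laid out as a quick sketch. The 4% response, 50% screening pass, 50% charge-off, and ~$103 CPA come from the recollections above; the mailing size is a hypothetical round number:

```python
# Direct-mail funnel for the subprime card shop, in round numbers.
# Rates and the ~$103 CPA are as recalled in the post; the mailing
# size is a hypothetical round number for illustration.

mailed = 1_000_000
response_rate = 0.04   # ~4% of the credit-hungry demographic applied
screen_pass = 0.50     # about half of applicants survived in-house screening
performed = 0.50       # about half of booked accounts stayed good; half charged off

responders = mailed * response_rate   # 40,000 applications
booked = responders * screen_pass     # 20,000 accounts opened
good = booked * performed             # 10,000 performing accounts
charged_off = booked - good           # 10,000 written off, sold for cents on the dollar

print(f"Good accounts: {good:,.0f} of {mailed:,} pieces ({good / mailed:.1%})")
print(f"'Wrong' in the broadest sense: {(mailed - good) / mailed:.1%}")

# Marketing cost per booked account, at the ~$103 CPA cited above:
print(f"Total acquisition spend: ${booked * 103:,.0f} at $103 per booked account")
```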
I couldn't take it after a while. I quit in 2005 to take a 23% pay cut to return to HealthInsight and the Health IT "DOQ-IT" initiative.
__
BIG DATA: THE IMPLICATIONS OF INTERNET (AND SPECIFICALLY SOCIAL MEDIA) MINING
Consider a book I cited at length back in March.
Excerpting just the cites going to medical data for starters:
We’re starting to collect and analyze data about our bodies as a means of improving our health and well-being. If you wear a fitness tracking device like Fitbit or Jawbone, it collects information about your movements awake and asleep, and uses that to analyze both your exercise and sleep habits. It can determine when you’re having sex. Give the device more information about yourself— how much you weigh, what you eat— and you can learn even more. All of this data you share is available online, of course.

Much more in that March post. Suggest you read it all.
Many medical devices are starting to be Internet-enabled, collecting and reporting a variety of biometric data. There are already— or will be soon— devices that continually measure our vital signs, our moods, and our brain activity. It’s not just specialized devices; current smartphones have some pretty sensitive motion sensors. As the price of DNA sequencing continues to drop, more of us are signing up to generate and analyze our own genetic data. Companies like 23andMe hope to use genomic data from their customers to find genes associated with disease, leading to new and highly profitable cures. They’re also talking about personalized marketing, and insurance companies may someday buy their data to make business decisions.
Schneier, Bruce (2015-03-02). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (p. 16). W. W. Norton & Company. Kindle Edition.
So, now we have ISPs (I'm now with Comcast), Google, Facebook, Twitter, Instagram, Pinterest, Tumblr, etc. (and don't forget Amazon and other large retailers), with their cadres of analysts busily gumshoeing and modeling each of us for sale to data brokers as part of ever-more-refined "targeted marketing" campaigns.
Really, really BIG data at play here. Today, given the scale of internet users' ongoing data generation, hugely profitable data mining might only have to be "right" a minuscule fraction of the 1% rate of our vestigial direct-mail days. But should you become a False Positive or a False Negative, your "mosaic" profile could impact you quite adversely without your ever knowing it.
You just didn't get the job you were qualified for. You just didn't get the mortgage approval that was well within your credit chops.
Maybe, based on your digital commerce cookie trail and social media chit-chattery, Facebook et al erroneously ID'd you as a Pedo, or a suicide risk, a druggie, a domestic abuser, an incipient deadbeat, or a potential mass campus shooter.
Or maybe a "terrorist" or just a "sympathizer."
See my old post on the mendacious DARPA "Total Information Awareness" proposal (a zombie I helped "kill," and then mocked).
Couple of things: [1] Bayesian base rates matter. [2] Purpose of the modeling matters.
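On point [1], a minimal sketch of why base rates dominate: assume a hypothetical screening model with 99% sensitivity and 99% specificity -- far better than anything actually deployed -- hunting a trait with a base rate of 1 in 100,000. All three numbers are illustrative assumptions, not anyone's published figures.

```python
# Bayes' rule: P(actually X | flagged as X) for a rare trait and a very good model.
# Sensitivity, specificity, and base rate are illustrative assumptions.

def posterior(base_rate, sensitivity, specificity):
    """Probability a flagged individual truly has the trait (positive predictive value)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

base_rate = 1 / 100_000   # 1 in 100,000 actually has the trait
sensitivity = 0.99        # model catches 99% of true cases
specificity = 0.99        # model wrongly flags only 1% of everyone else

ppv = posterior(base_rate, sensitivity, specificity)
print(f"P(trait | flagged) = {ppv:.4%}")   # ~0.10%

# Same model, very different purpose: a trait that's actually common.
print(f"At a 5% base rate: {posterior(0.05, 0.99, 0.99):.1%}")   # ~83.9%
```

Roughly 999 of every 1,000 people flagged would be false positives. Which is point [2]: the identical model can be defensible for one purpose (prioritizing marketing spend, say) and ruinous for another (tagging someone a "terrorist sympathizer"), depending entirely on the base rate of the thing it's hunting and the cost of being wrong about any one person.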
The other salient observation goes to the largely unregulated nature of all of this Brave New World online big data mining. When I was at the bank, there were data elements we'd bought and paid for that we nonetheless could not lawfully use in our credit scorecard modeling (e.g., age, gender, race/ethnicity, marital status, ZIP code), regardless of their empirical risk-vetting utility.
How quaint.
So, "If You're Not Paranoid, You're Crazy?" Well, I don't know. How can we know? All of this stuff is proprietary. Talk about "opacity." The entire business model -- mining and modeling and monetizing your digital mosaic -- is based on it.
TANGENTIALLY INTERESTING
How much of your audience is fake?
Marketers thought the Web would allow perfectly targeted ads. Hasn’t worked out that way.
...Increasingly, digital ad viewers aren’t human. A study done last year in conjunction with the Association of National Advertisers embedded billions of digital ads with code designed to determine who or what was seeing them. Eleven percent of display ads and almost a quarter of video ads were “viewed” by software, not people. According to the ANA study, which was conducted by the security firm White Ops and is titled The Bot Baseline: Fraud in Digital Advertising, fake traffic will cost advertisers $6.3 billion this year...

Awww... So much for Zuckerberg's vision of "authenticity."
___
More to come...