
Tuesday, March 24, 2015

"We value the trust and confidence that you place with BCBSRI..."

Yeah, sure you do. But, not enough to secure our personal health-related information.


A form letter they sent my wife in the wake of the recent Anthem breach:
February 27, 2015

Dear current or former plan participant,

I’m writing with an important update about the security of your personal information.

Anthem, Inc. (Anthem), which owns independent Blue Cross and Blue Shield companies, was recently the victim of a cyber attack where current and former health plan participant information was illegally accessed. Unfortunately, Anthem's investigation indicates some of your personal information has likely been affected. In the coming weeks you can expect to hear from Anthem with information on the protections they are providing, including identity repair services (no enrollment is required) and credit monitoring.

Why does Anthem have my information?

In the past 10 years, you likely received healthcare services in an area served by Anthem's Blue Cross Blue Shield companies. Thirty-seven independent, locally operated companies across the United States — including Anthem and Blue Cross and Blue Shield of Rhode Island (BCBSRI) — form the Blue Cross Blue Shield system. This enables Blue Cross Blue Shield customers to get high-quality, affordable health care wherever they are.

What information may be affected?

Please know that the information affected did not include credit card or medical information. However, it may include any of the following data (which may date back to 2004):

  • Name
  • Date of birth
  • Blue Cross member ID number
  • Mailing address
  • Email address

We're here for you.

We value the trust and confidence that you place with BCBSRI and Blue plans across the country, and we are deeply sorry for the situation. To help you learn more, we’ve included some frequently asked questions on the reverse side of this letter...

Sincerely,

Melissa Cummings
Senior Vice President and Chief Customer Officer,
(401) 459-5756, melissa.cummings@bcbsri.org


We have never directly been Anthem customers, but we get our health coverage through my wife's employer via Blue Cross/Blue Shield of Rhode Island.

So much for "independent companies." If parent company Anthem has no need of our information for "treatment, payment, or health care operations" (the Covered Entity criteria, which in this case devolve to BCBSRI specifically), what the hell are they doing with our data? And are the data they list as what "may be included" really the full extent of the filched information?

I'm not going to take that assertion at face value.

We know what they're doing with people's data originating in subsidiary entities. It's called Big Data Analytics "business intelligence."

"We value the trust and confidence you place with BCBSRI..." What crap. I had no choice in the matter. The "independent" Anthem subsidiary is what is offered at my wife's company. Take it or leave it.

I've posted this everywhere. A LinkedIn comment exchange:


The mounting toll of reports of trafficking in others' personal data -- from the cavalierly legal to the brazenly illicit -- is starting to wear on me. Recall my recent post "Your use of the Services constitutes your agreement to the Privacy Policy." Tangentially apropos, see also my post "(404)^n, the upshot of dirty data."

UPDATE

WHY THE ANTHEM SECURITY BREACH WAS SUCH A WAKE-UP CALL FOR THE HEALTH INDUSTRY
NOBODY WANTS TO HAVE THEIR FINANCIAL INFORMATION STOLEN. BUT THE IMPLICATIONS OF MEDICAL-RELATED HACK ATTACKS ARE FAR SCARIER.

BY LUKE DORMEHL


Anthem, the second largest health insurance provider in the United States, revealed [recently] that its records have been compromised by hackers—resulting in the possible leaking of names, birthdays, addresses, Social Security numbers, and employment data for up to 80 million present and former customers.

Although no medical information appears to have been stolen, with the exception of customers’ medical identification numbers, the attack is being viewed as a much-needed wake-up call for the health industry.

"Cybercriminals do view health care organizations as a soft target," says Lynne Dunbrack, research VP for IDC Health Insights. "They classically have not invested too heavily in information-technology in general, and specifically in security. Going hand in hand with that is the value of medical information on the black market, which has long since exceeded the value of personal identifiers for financial data. To give you an idea, financial records may fetch just a couple of dollars, whereas medical information routinely sells for $200. That’s a real incentive for cybercriminals."


Worries about the security of health care data are a growing issue—accompanying the increasing digitization of medical records, combined with the still more recent shift toward cloud-based record holding. Anthem's mess is far from the only recent example of troubling privacy concerns regarding health data. In 2010, the Coalition Against Insurance Fraud reported that 1.4 million Americans were victims of medical identity theft, representing a significant increase from the 500,000 one year earlier.

Stolen medical data can be particularly problematic for consumers. Whereas credit-card fraud may be corrected in a relatively straightforward manner, it can be tougher to identify that medical data has been breached. Maximum insurance payout limits may be reached as a result of fraudulent claims, and this might only be discovered when a consumer's claims for legitimate services are denied.

Worse, consumers' medical records could become compromised with falsified diagnoses or procedure codes following data-theft incidents. In a worst-case scenario, vital information related to allergies or blood type could be compromised, with the wrong drugs or blood products administered to a patient as a result.

Others have expressed concern about what appears to be the misuse of private medical data. Last month, Ricardo Alonso-Zaldivar and Jack Gillum of the Associated Press reported that Healthcare.gov has shared user data—possibly including information about age, income, and whether or not a person is pregnant—with tech companies such as Google, Twitter, and Facebook. Although there is no evidence that this data has been misused, it is still likely that this will irk some individuals....

Apropos, my latest read:


Excerpting just the cites going to medical data:
We’re starting to collect and analyze data about our bodies as a means of improving our health and well-being. If you wear a fitness tracking device like Fitbit or Jawbone, it collects information about your movements awake and asleep, and uses that to analyze both your exercise and sleep habits. It can determine when you’re having sex. Give the device more information about yourself— how much you weigh, what you eat— and you can learn even more. All of this data you share is available online, of course.

Many medical devices are starting to be Internet-enabled, collecting and reporting a variety of biometric data. There are already— or will be soon— devices that continually measure our vital signs, our moods, and our brain activity. It’s not just specialized devices; current smartphones have some pretty sensitive motion sensors. As the price of DNA sequencing continues to drop, more of us are signing up to generate and analyze our own genetic data. Companies like 23andMe hope to use genomic data from their customers to find genes associated with disease, leading to new and highly profitable cures. They’re also talking about personalized marketing, and insurance companies may someday buy their data to make business decisions.

Schneier, Bruce (2015-03-02). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (p. 16). W. W. Norton & Company. Kindle Edition.
__

The general practice of amassing and saving all kinds of data is called “big data,” and the science and engineering of extracting useful information from it is called “data mining.” Companies like Target mine data to focus their advertising. Barack Obama mined data extensively in his 2008 and 2012 presidential campaigns for the same purpose. Auto companies mine the data from your car to design better cars; municipalities mine data from roadside sensors to understand driving conditions. Our genetic data is mined for all sorts of medical research. Companies like Facebook and Twitter mine our data for advertising purposes, and have allowed academics to mine their data for social research.

Most of these are secondary uses of the data. That is, they are not the reason the data was collected in the first place. In fact, that’s the basic promise of big data: save everything you can, and someday you’ll be able to figure out some use for it all.

Big data sets derive value, in part, from the inferences that can be made from them. Some of these are obvious. If you have someone’s detailed location data over the course of a year, you can infer what his favorite restaurants are. If you have the list of people he calls and e-mails, you can infer who his friends are. If you have the list of Internet sites he visits— or maybe a list of books he’s purchased— you can infer his interests.

Some inferences are more subtle. A list of someone’s grocery purchases might imply her ethnicity. Or her age and gender, and possibly religion. Or her medical history and drinking habits. Marketers are constantly looking for patterns that indicate someone is about to do something expensive, like get married, go on vacation, buy a home, have a child, and so on. Police in various countries use these patterns as evidence, either in a court or in secret. Facebook can predict race, personality, sexual orientation, political ideology, relationship status, and drug use on the basis of Like clicks alone. The company knows you’re engaged before you announce it, and gay before you come out— and its postings may reveal that to other people without your knowledge or permission. Depending on the country you live in, that could merely be a major personal embarrassment— or it could get you killed.

There are a lot of errors in these inferences, as all of us who’ve seen Internet ads that are only vaguely interesting can attest. But when the ads are on track, they can be eerily creepy— and we often don’t like it. It’s one thing to see ads for hemorrhoid suppositories or services to help you find a girlfriend on television, where we know they’re being seen by everyone. But when we know they’re targeted at us specifically, based on what we’ve posted or liked on the Internet, it can feel much more invasive. This makes for an interesting tension: data we’re willing to share can imply conclusions that we don’t want to share. Many of us are happy to tell Target our buying patterns for discounts and notifications of new products we might like to buy, but most of us don’t want Target to figure out that we’re pregnant. We also don’t want the large data thefts and fraud that inevitably accompany these large databases. [ibid, pp. 33-34]
__

Once you can correlate different data sets, there is a lot you can do with them. Imagine building up a picture of someone’s health without ever looking at his patient records. Credit card records and supermarket affinity cards reveal what food and alcohol he buys, which restaurants he eats at, whether he has a gym membership, and what nonprescription items he buys at a pharmacy. His phone reveals how often he goes to that gym, and his activity tracker reveals his activity level when he’s there. Data from websites reveal what medical terms he’s searched on. This is how a company like ExactData can sell lists of people who date online, people who gamble, and people who suffer from anxiety, incontinence, or erectile dysfunction. [ibid, pp. 41-42]
__
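To make that kind of cross-dataset correlation concrete, here is a minimal sketch in Python. Everything in it -- the customer IDs, the field names, and the crude inference rule -- is invented for illustration; it is not any data broker's actual pipeline.

```python
# Hypothetical illustration of cross-dataset inference. All identifiers,
# fields, and the "rule" below are made up for the sake of the example.

purchases = {
    "cust_41": {"pharmacy_items": ["glucose test strips"], "gym_member": True},
    "cust_77": {"pharmacy_items": ["reading glasses"], "gym_member": False},
}

searches = {
    "cust_41": ["a1c levels", "insulin pump reviews"],
    "cust_77": ["best beach vacations"],
}

def likely_diabetic(cust_id):
    """Crude inference: nonprescription purchases cross-referenced with searches."""
    items = purchases.get(cust_id, {}).get("pharmacy_items", [])
    terms = " ".join(searches.get(cust_id, []))
    return any("glucose" in item for item in items) and "insulin" in terms

flagged = [c for c in purchases if likely_diabetic(c)]
print(flagged)  # ['cust_41'] -- a health profile, with no patient record in sight
```

The point isn't the toy rule; it's that none of the inputs is a medical record, yet the output is effectively a diagnosis list.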

Google, with its database of users’ Internet searches, could de-anonymize a public database of Internet purchases, or zero in on searches of medical terms to de-anonymize a public health database. Merchants who maintain detailed customer and purchase information could use their data to partially de-anonymize any large search engine’s search data. A data broker holding databases of several companies might be able to de-anonymize most of the records in those databases.

Researchers have been able to identify people from their anonymous DNA by comparing the data with information from genealogy sites and other sources. Even something like Alfred Kinsey’s sex research data from the 1930s and 1940s isn’t safe. Kinsey took great pains to preserve the anonymity of his subjects, but in 2013, researcher Raquel Hill was able to identify 97% of them.

It’s counterintuitive, but it takes less data to uniquely identify us than we think. Even though we’re all pretty typical, we’re nonetheless distinctive. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This is also true for our book-reading habits, our Internet-shopping habits, our telephone habits, and our web-searching habits. We can be uniquely identified by our relationships. It’s quite obvious that you can be uniquely identified by your location data. With 24/7 location data from your cell phone, your name can be uncovered without too much trouble. You don’t even need all that data; 95% of Americans can be identified by name from just four time/date/location points. [ibid, p. 44]
__
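A toy sketch of that re-identification arithmetic, again in Python: a handful of known, named space/time points is matched against an "anonymized" trace database. The names, places, and timestamps below are all fabricated for illustration; the studies Schneier cites work the same way, just at scale.

```python
# Hypothetical re-identification example: a few known (time, place) points
# are enough to single out one record in an "anonymized" trace database.

anonymous_traces = {
    "user_a": {("2015-03-01 08:10", "Main St cafe"),
               ("2015-03-02 17:45", "Oak Ave gym"),
               ("2015-03-04 09:00", "Airport")},
    "user_b": {("2015-03-01 08:10", "Main St cafe"),
               ("2015-03-03 12:00", "Elm Rd office")},
}

# Points tied to a real name, e.g. from a loyalty card or a geotagged post.
known_sightings = {
    "Jane Doe": {("2015-03-01 08:10", "Main St cafe"),
                 ("2015-03-02 17:45", "Oak Ave gym")},
}

def reidentify(sightings):
    """Return the anonymous IDs whose traces contain every known sighting."""
    return [uid for uid, trace in anonymous_traces.items() if sightings <= trace]

for name, points in known_sightings.items():
    print(name, "->", reidentify(points))  # Jane Doe -> ['user_a']
```

Two timestamped locations suffice here only because the toy database holds two people; in a database of millions, the claim above is that four such points are usually enough.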

Companies quickly realized that they could set their own cookies on pages belonging to other sites—with their permission and by paying for the privilege— and the third-party cookie was born. Enterprises like DoubleClick (purchased by Google in 2007) started tracking web users across many different sites. This is when ads started following you around the web. Research a particular car or vacation destination or medical condition, and for weeks you’ll see ads for that car or city or a related pharmaceutical on every commercial Internet site you visit.

This has evolved into a shockingly extensive, robust, and profitable surveillance architecture. You are being tracked pretty much everywhere you go on the Internet, by many companies and data brokers: ten different companies on one site, a dozen on another. Facebook tracks you on every site with a Facebook Like button (whether you’re logged in to Facebook or not), and Google tracks you on every site that has a Google Plus + 1 button or that simply uses Google Analytics to monitor its own web traffic.

Most of the companies tracking you have names you’ve never heard of: Rubicon Project, AdSonar, Quantcast, Pulse 260, Undertone, Traffic Marketplace. If you want to see who’s tracking you, install one of the browser plugins that let you monitor cookies. I guarantee you will be startled. One reporter discovered that 105 different companies tracked his Internet use during one 36-hour period. In 2010, a seemingly innocuous site like Dictionary.com installed over 200 tracking cookies on your browser when you visited.

It’s no different on your smartphone. The apps there track you as well. They track your location, and sometimes download your address book, calendar, bookmarks, and search history. In 2013, the rapper Jay-Z and Samsung teamed up to offer people who downloaded an app the ability to hear the new Jay-Z album before release. The app required the ability to view all accounts on the phone, track the phone’s location, and track who the user was talking to on the phone. And the Angry Birds game even collects location data when you’re not playing.

Broadband companies like Comcast also conduct surveillance on their users. These days they’re mostly monitoring to see whether you illegally download copyrighted songs and videos, but other applications aren’t far behind. Verizon, Microsoft, and others are working on a set-top box that can monitor what’s going on in the room, and serve ads based on that information.

It’s less Big Brother, and more hundreds of tattletale little brothers. [ibid, pp. 48-49]
__

Remember the bargains I talked about in the Introduction. The government offers us this deal: if you let us have all of your data, we can protect you from crime and terrorism. It’s a rip-off. It doesn’t work. And it overemphasizes group security at the expense of individual security.

The bargain Google offers us is similar, and it’s similarly out of balance: if you let us have all of your data and give up your privacy, we will show you advertisements you want to see— and we’ll throw in free web search, e-mail, and all sorts of other services. Companies like Google and Facebook can only make that bargain when enough of us give up our privacy. The group can only benefit if enough individuals acquiesce.

Not all bargains pitting group interest against individual interest are such raw deals. The medical community is about to make a similar bargain with us: let us have all your health data, and we will use it to revolutionize healthcare and improve the lives of everyone. In this case, I think they have it right. I don’t think anyone can comprehend how much humanity will benefit from putting all of our health data in a single database and letting researchers access it. Certainly this data is incredibly personal, and is bound to find its way into unintended hands and be used for unintended purposes. But in this particular example, it seems obvious to me that the communal use of the data should take precedence. Others disagree. [ibid, pp. 235-236]
__

[M]ost of the cost of privacy breaches falls on the people whose data is exposed. In economics, this is known as an externality: an effect of a decision not borne by the decision maker. Externalities limit the incentive for companies to improve their security.

You might expect users to respond by favoring secure services over insecure ones— after all, they’re making their own buying decisions on the basis of the same market model. But that’s not generally possible. In some cases, software monopolies limit the available product choice. In other cases, the “lock-in effect” created by proprietary file formats, existing infrastructure, compatibility requirements, or software-as-a-service makes it harder to switch. In many cases, we don’t know who is collecting our data; recall the discussion of hidden surveillance in Chapter 2. In all cases, it’s hard for buyers to assess the security of any data service. And it’s not just nontechnical buyers; even I can’t tell you whether or not to entrust your privacy to any particular service provider.

Liabilities change this. By raising the cost of privacy breaches, we can make companies accept the costs of the externality and force them to expend more effort protecting the privacy of those whose data they have acquired. We’re already doing this in the US with healthcare data; privacy violations in that industry come with serious fines.

And it’s starting to happen here with data from stores, as well. Target is facing several lawsuits as a result of its 2013 breach. In other cases, banks are being sued for inadequately protecting the privacy of their customers. One way to help would be to require companies to inform users about all the information they possess that might have been compromised.

These cases can be complicated, with multiple companies involved in a particular incident, and apportioning liability will be hard. Courts have been reluctant to find a value in privacy, because people willingly give it away in exchange for so little. And because it is difficult to link harms from loss of privacy to specific actions that caused those harms, such cases have been very difficult to win. [ibid, pp. 193-195]
__

I've been a vocal privacy advocate since before grad school -- e.g., see here, here, here, and here. Feels like a losing battle much of the time.

Moreover, a new book that just came to my attention indicates that things may be about to get considerably worse.

Much of what we think we know about privacy, liberty, security, and threat is no longer true. Much of what we have been taught about what threatens us, about what protects us, and about the risks and benefits of state power versus individual empowerment is obsolete. In the conventional understanding, international security is a state-to-state affair; the relationships between privacy, liberty, and domestic order are matters between individuals and their governments; and civilian technologies in the hands of individuals have relatively little to do with the way we order either governance at home or international security. We built the state to mediate disputes among citizens and to protect them from outside attack. We gave it power to contend with other states and to ensure it could govern effectively. Because we feared that power, we imposed constraints on it. And we imagined its power and the security it was meant to provide as being in tension with the liberty we expected it to respect. We built walls around our countries with legal concepts such as jurisdiction. And for the most part, these intellectual, conceptual, and legal constructions have held up pretty well. Yes, we had to adjust in response to Al-Qaeda and other transnational nonstate actors. And yes, globalization has complicated the discussion. But the way we think about security— what it means, where it comes from, what threatens it, what protects it, and the relationship between individual and collective societal security— has remained remarkably stable. 

In what follows, we mean to persuade you that this way of thinking is now out of date. Indeed, we argue that our debate about the fabric of security and its governance is based on dated assumptions about a technological world that no longer exists. In our new world, you can pose a threat to the security of every state and person on the planet— and each can also threaten you. In our new world, individuals, companies, and small groups have remarkable capabilities either to protect others or to make them more vulnerable. In our new world, not only do privacy and security not generally conflict, but they are often largely the same thing. And in our new world, state power represents a critical line of defense for individual freedom and privacy, even as the state itself may be losing its ability to serve its purpose as the ultimate guarantor of security to its citizens.

Driving this new environment is a mix of technological developments. There is the radical proliferation of both data about individuals and technologies of mass empowerment available to individuals. Notwithstanding Edward Snowden’s spectacular revelations about the National Security Agency (NSA), the state’s comparative advantage in collecting data, manipulating it, and exposing individuals to risks or protecting them from threats is actually eroding, as ever bigger companies occupying ever more powerful market positions take on data collection and analytics as their business cores. The miniaturization and automation of weapons further weakens national boundaries— as well as the front door to your house— as effective lines of defense. Biological research and biotechnology are progressing at an unprecedented pace, bringing great promise— and great danger— to human security all around the globe. 

Our new environment of highly distributed threats and defenses has already changed our lives, and it will change them more in the years to come. It will change our sense of privacy, of safety, and of danger. It will change our relationships with corporations, governments, and individuals whom we have never met. It will change the way we govern our collective security and how we manage our personal safety. And it may lead us to ask questions about how we organize ourselves politically at the local, national, and international levels. 

Today, each person needs to fear an exponentially higher number of people and entities than only a decade ago. The threats to your personal security now include not merely governments and corporations but also other individuals around the world: stalkers, identity thieves, scammers, spammers, frauds, competitors, and rivals— everyone and everything from the government of China to the NSA to Luis Mijangos. You can be attacked from anywhere— and by nearly anyone.

Wittes, Benjamin; Blum, Gabriella (2015-03-10). The Future of Violence: Robots and Germs, Hackers and Drones—Confronting A New Age of Threat (pp. 5-7). Basic Books. Kindle Edition.

I think it's time to start drinking.
__

CODA

Something I recently posted on my Facebook page:
Question for all of my many Facebook friends: Who owns facts about you (or rumors, or outright lies)? Can everything -- tangible and otherwise -- be "owned"? We all post "data" about ourselves here on FB. Do we give up "ownership" by doing so? Facebook (and all social media) is data-mining the crap out of all of us all the time, and selling the results to the highest bidders 24/7. Have we, via the ToS, given them blanket "license"?

It's a more difficult question than it might appear.

I would love to hear your opinions. What do we even mean by the word "own"? I usually first look to my trusty Black's Law Dictionary:
__

Owner. One who has the right to possess, use, and convey something; a proprietor. See Ownership.

Ownership. The collection of rights allowing one to use and enjoy property, including the right to convey it to others. Ownership implies the right to possess a thing, regardless of any actual or constructive control. Ownership rights are general, permanent, and inheritable. Cf. POSSESSION; TITLE.

“Possession is the de facto exercise of a claim; ownership is the de jure recognition of one. A thing is owned by me when my claim to it is maintained by the will of the state as expressed in the law; it is possessed by me when my claim to it is maintained by my own self-assertive will. Ownership is the guarantee of the law; possession is the guarantee of the facts. It is well to have both forms if possible; and indeed they normally coexist.”

John Salmond, Jurisprudence 311 (Glanville L. Williams ed., 10th edition 1947).

“Ownership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional powers of those who use it.” Marsh v. Alabama, 326 U.S. 501, 506, 66 S. Ct. 276, 278 (1946) (Black, J.).

Garner, Black’s Law Dictionary, 7th Ed.

__

That is not dispositively helpful to me; it begs some trailing questions. Even Black's nowhere includes the phrase "exclusive right(s)."

Mostly I'm interested in this stuff for the implications going to my work in medical information technology (who owns your medical data?), but it goes well beyond that. Taylor Swift, for example, not only copyrights her songs, but she (through her lawyers) trademarks specific phrases within her lyrics, down to very fine detail. Use the phrase "Lucky 13"? You owe Taylor money. And they will come after you if you're doing anything commercial with the phrase.

I am 69 years old. 5'10", 180 lbs, blue eyes, left-handed. Married (35 years), two surviving kids. Who owns those facts? Are they simply "public domain"? Anyone can use them for whatever purpose they wish?

What about this: I go to a bar, have a couple of drinks, and some gumshoe swipes my glass just after I leave and has it assayed for my genome. Maybe there's a paternity suit in the wings. Who "owns" my genetic data?

What is legitimately "private"?

I can give you all kinds of difficult scenarios.

I would love to hear your thoughts. I respect all of your opinions.

Been thinking about this stuff a lot lately on my blog.
___

More to come...

1 comment:

  1. Her title on LinkedIn tells a different story:

    SVP, Chief Marketing and Sales Officer

    Marketing & Sales...hmmm
