Unless you are living your life completely "off the grid" (an increasingly difficult feat), you have become a profitable "mosaic" to those who gumshoe you 24/7. What of the ethics of this?
Your life and that of every other person in an advanced industrialized country produces a mosaic of digital information stored on public and private computer servers around the world. Most of the tiles of your personal mosaic do not reside in your hands. They consist of the electronic fingerprints you leave with increasing frequency over the course of your day-to-day existence on computers controlled by third parties: they are the websites you visit, the toll booths you pass through, the purchases you make online or with credit cards, the prescriptions you fill, the phone numbers you dial, the e-mails you send, the library books you check out, the specific pages you have read on your Kindle, the restaurants at which you make online reservations, the steps you take as measured by your Fitbit, the photos you post on Facebook, and the photos that others post of you.
Very often, your mosaic works for your protection— to keep your credit cards safe from identity thieves, for example. More often still, it works for your convenience— to give you discounts, to match you with products you want to buy, to connect you with people you want to talk to and let you keep away from those you would like to avoid. But your mosaic is also an open diary that, in other hands, exposes you to many forms of vulnerability. Your enemies, the government, foreign governments, fraudsters, identity thieves, information jockeys, and legitimate businesses can all learn more about you by diving into your mosaic— legally or illegally, benignly or maliciously, to protect or to attack— than by rifling through your desk or bedroom. It is easier. Your mosaic is actually a richer source of information about you. And just as some day someone might send a spider to watch or attack you, today your mosaic presents an inviting and fruitful source for exploitation or attack by people who do not have your best interests at heart.
And here is the rub: your individual mosaic— composed, as it is, of the transactions and data that make up your life— is itself only a single tile in the much larger mosaic that records modern technological society and its behavior. That larger metamosaic too is being stored, retained, and constantly processed by governments, companies, and individuals. Along the way, each of these tiles is potentially rendering vulnerable those same governments, companies, and individuals the metamosaic also empowers.
Wittes, Benjamin; Blum, Gabriella (2015-03-10). The Future of Violence: Robots and Germs, Hackers and Drones. Confronting A New Age of Threat (pp. 45-46). Basic Books. Kindle Edition.
It is a curious paradox of the times we live in, when no commandment is inscribed on tablets of stone but every one of our transgressions lives eternally within some data bank, effectively beyond the pale of forgiveness...
...[C]orporations mine our e-mails and Internet searches in the hopes of honing their marketing strategies. No sooner do I press the send button that e-mails a letter of recommendation to a female student or a note of thanks to a female editor than an ad for an online dating service invites me to learn more about a bevy of eligible lovelies “in your area.” More than 96 percent of Google’s $29 billion in revenue for 2010, a sum exceeding the combined advertising revenues of all newspapers, came from advertisers sold on the search engine’s ability to know our individual wants. Unlike Orwell’s Big Brother, who merely sought to sniff out dissent, corporate Big Brother wishes to know our every desire, confident that we can be pleasured into submission. And we have hardly seen the worst. Privacy expert Jeffrey Rosen speculates that “it would be a simple enough task for Facebook or Google” to launch an “Open Planet” surveillance system, by “which anyone in the world could log onto the Internet, select a particular street view … and zoom in on a particular individual.… Most of the architecture for implementing it already exists.”
Such a development would be good news not only for the Big Brothers of government and business but also for what Walter Kirn, writing about the Clementi case, calls “Little Brother.” He means any nosy individual with an electronic device. With the use of something called a keylogger, for instance, you can keep track of a spouse’s computer keystrokes. With the use of Google Images you can change your mind about a blind date. The surveillance state and the surveillance economy are matched by a surveillance culture, each daring the other to go one step further in vandalizing old norms...
Keizer, Garret (2012-08-07). Privacy (Big Ideas//Small Books) (pp. 6-7). Macmillan. Kindle Edition.
My interest in privacy issues, focused on 4th Amendment constitutional concerns, began in the late 1980's while I was working in a radiation lab in Oak Ridge. Post-9/11, quaint notions of 4th Amendment "privacy" would be relegated to the back of the bus in the wake of the real and trumped-up, mendacious exigencies of "The War on Terror"; I wrote at some length about those as well, here, and here. As I recounted in my 1998 graduate thesis:
During this period a couple of emotionally charged episodes involving suspicionless drug testing hit the news in East Tennessee. First, the local school board sought to enact a mandatory drug test policy aimed at teachers. When the teachers’ union protested and sued to enjoin the policy, Board Superintendent Earl Hoffmeister went ballistic in the press, accusing the teachers of “hiding behind the Constitution” in order to cover up drug abuse among their members.
There was no evidence of drug abuse among Knox County teachers.
Also during this period, Knoxville Police Chief Phil Keith made an incredible statement during an on-the-record interview with the local paper. He opined that he should have the power to order anyone “to go take a drug test right now; don’t ask me any questions, just go do it.” He had been fighting with his police officers over a proposed random drug testing policy for the department, a policy the rank-and-file vigorously opposed.
These highly visible controversies made for interesting lunchroom conversation at our lab. Our chemists derided the notion that commercial clinical labs could do high-quality work on the cheap in mass production mode. The CEO of a large local clinical lab that performed the bulk of the drug testing in East Tennessee had stated to the press that his lab’s technology was “absolute; if we do everything correctly there is no possibility of error” (Knoxville Journal, 12/13/90, emphasis mine).
A very big if. This comment brought forth torrents of rebuke in our facility. The manager of our mixed waste lab, a bright and experienced chemist himself, remarked: “I’m exempt from that sh--; I’d have to think long and hard before going to work for a company that wanted to make me take a drug test.”
The local teachers’ union President was a member of my church. We talked about the dispute with the school board at length, and I provided him with extensive technical lab information to use in his fight against the policy. The teachers ultimately won a permanent federal court injunction against the board, and the whole idea was dropped and faded from public view.
By this time, though, the issue had gotten my continuing attention, and I followed the progress of similar disputes around the country. Suspicionless drug testing programs expanded rapidly in the late 1980’s in the wake of President Reagan’s Executive Order 12564 (Drug-free Federal Workforce) and the federal Drug-Free Workplace Act of 1988. At every turn, those who objected to forced testing were subjected to withering ad hominem attacks. Dissent was equated with “support for drug abuse” or the dissenters’ need to hide their imputed drug use and legalization agenda. Indeed, several years ago former “Drug Czar” Lee Brown, publicly rebuking then-Surgeon General Joycelyn Elders for her musings on the utility of scientific study of drug legalization issues, flatly declared that “[T]here will be no discussion on drug legalization; even the discussion is harmful.”
I even had a bit of irreverent, irascible sport with the then-Director of the moribund "Total Information Awareness" initiative. Yes, I actually mailed that to him.
My more recent years have had me working on HIPAA compliance issues, both for my REC client clinics (Meaningful Use Core 15, 45 CFR 164.3 et seq compliance) and for the ePHI security and privacy requirements pertaining to our HIE, HealtHIEnevada.org.
These days, my recent half-dozen reads on this topic provide rather dispiriting evidence of just how rapidly and broadly the panoptic assault on personal privacy has advanced, concomitant with the exponentially accelerating developments in information technology.
From Jacob Silverman's Terms of Service:
To understand the depth of our privacy problem, we have to look at the ideological, economic, and cultural roles that data collection and data mining have assumed in recent years. There is now so much data produced on our behalves— about a terabyte per capita per year— that a major industry has arisen, one that fits familiar patterns of techno-utopian thinking. Big Data, as this emerging field is called, promises to take the incredible amount of data collected— browsing histories, sensor information from smartphones, GPS coordinates, social-media activity, purchasing information, medical reports— and turn it into useful insights. Big Data has found supporters in health care, insurance, scientific research, education, energy, and intelligence. While some commentators have argued that the utopian possibilities of Big Data are overblown, others offer more dire outlooks: “the surveillance possibilities of the technology,” according to the director of the Human Dynamics Lab at MIT, “could leave George Orwell in the dust.” At its most far-reaching, Big Data promises predictions about the behaviors of individuals and population groups, as well as to forecast anything from traffic to weather conditions to street crime. The faith in Big Data— which is really just a trendy, catchall term for various types of bulk data collection and analytics— has led social-media companies to think they can know and predict our behaviors, that they can, as Eric Schmidt says, know us better than we do ourselves.
It’s the enthusiasm for Big Data, along with the attendant idea that one’s analytical capacities increase with the amount of data collected, which causes Gen. Keith B. Alexander, then the director of the NSA, to justify widespread surveillance by saying, “You need the haystack to find the needle.” In Alexander’s eyes, Americans’ phone records— all of them— form that haystack. For Silicon Valley firms, the haystack is potentially the full array of a user’s life— whatever can be tracked. On second consideration, “whatever can be tracked” applies to the U.S. intelligence community as well. Witness its since-abandoned Total Information Awareness project (now replaced by a smattering of connected programs and partnerships with tech firms, telecoms, and foreign intel agencies); the Mastering the Internet project, maintained by GCHQ, NSA’s close British partner; or the National Reconnaissance Office, which in December 2013 launched a spy satellite on a rocket painted with an image of a world-straddling octopus and the words, “Nothing Is Beyond Our Reach.” The ODNI, the office that oversees the entire U.S. intelligence community, was so proud of this event that it even tweeted photos of the rocket and a separate one of the logo, which also served as a mission patch. It is a matter of pride to confess one’s informational avarice.
No wonder, then, that the NSA and Silicon Valley have made such good partners. Both are in the data collection and targeting business, and Silicon Valley collects heaps of data which the NSA would love to have.* Silicon Valley is merely targeting consumers with ads and prompts and nudges that might get them to click or to buy something. They are bound together by common interests, philosophies, and methods.
One of the main problems with Big Data is that it produces correlations but not causations. We learn that two things seem to be related— for example, that people with a specific set of personal characteristics are prone to depression or bad driving— but we don’t learn why. This is ironic given that Big Data is the ultimate fact-producing discipline: it promises answers, actionable ones. But data itself can be messy and often must be smoothed over, interpreted, supplemented. It doesn’t always lead us where we hope to go. As the researchers Danah Boyd and Kate Crawford have argued, Big Data “encourages the practice of apophenia: seeing patterns where none actually exist.”...
Even the apparent presence of a pattern can lead us toward some false choices. A health insurance company may believe that people who buy six key grocery items are 30 percent more likely to develop diabetes, but does that give the insurer the right to raise this group’s premiums or deny them coverage? Is your purchase history, or your poor gym attendance, as indicated by your smartphone GPS, a preexisting condition? Should the insurer be allowed to push you toward better health by sharing this analysis with a company that sends out coupons for quinoa and kale? Must the insurer notify customers that their personal information is being used in this way? Already some health insurers have begun lowering premiums for people who use fitness monitors and let their employers and insurers collect that data. The obverse of this arrangement is that those who don’t submit to this kind of surveillance are penalized. They have to pay more just to keep their basic health information private.
David Lyon, the surveillance theorist, links the rise of Big Data to the growth of risk management as a central practice for governments and business. The more data that can be collected, the thinking goes, the more that risk can be anticipated and mitigated and hedged against with complex insurance policies. Eliminating any sense of danger or unpredictability therefore becomes the most important goal, with concerns about civil liberties, privacy rights, and adverse consequences far behind...
Silverman, Jacob (2015-03-17). Terms of Service: Social Media and the Price of Constant Connection (pp. 313-315). HarperCollins. Kindle Edition.
See also my prior citation of "Data and Goliath" here (scroll down). And here (which also cites Robert Scheer's "They Know Everything About You").
An insufficiently addressed aspect of this goes to the problem of the adverse implications of trafficking in "dirty data." See my November 2013 post "(404)^n, the upshot of dirty data."
The entire point of searching, locating, linking, retrieving, merging, reordering, indexing, and analyzing data originating in various data repositories (digital or otherwise) is to reduce uncertainty in order to make accurate, value-adding decisions. To the extent that data are "dirty" (riddled with errors), this objective is thwarted. Worse, the resulting datasets borne of such problematic inquiry then themselves frequently become source data for subsequent query, iteratively, recursively. Should you be on the receiving end of bad data manipulation, the consequences can range from the irritatingly trivial to the catastrophic...
Indeed. See also my 2013 post "Data flowing at the speed of trust." Like I said, I've been giving all of this a lot of thought for a long time.
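The compounding at work here can be made concrete with a toy model. Nothing below describes any actual data broker's pipeline; the starting accuracy and per-pass corruption rate are purely hypothetical assumptions, chosen only to illustrate how recursive reuse of dirty data erodes accuracy.

```python
# Toy model of "dirty data" propagation: a record that starts out accurate
# survives each downstream merge/re-index/resale pass only with probability
# (1 - e), where e is a hypothetical per-pass corruption rate.

def clean_fraction(start_clean: float, e: float, passes: int) -> float:
    """Expected fraction of records still error-free after `passes` passes."""
    return start_clean * (1.0 - e) ** passes

if __name__ == "__main__":
    # 95% accurate source data, 2% chance of corruption per pass (assumed)
    for n in (0, 5, 10, 20):
        print(f"after {n:2d} passes: {clean_fraction(0.95, 0.02, n):.3f}")
```

Even under these charitable assumptions, a modest 2 percent per-pass error rate leaves only about 78 percent of records intact after ten generations of reuse, and about 63 percent after twenty — the iterative, recursive dynamic described above.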
If you're not already bummed enough by now, how about a bit of Robert Scheer?
...[T]he business model that has driven the enormous profitability of Internet companies requires the ruthless exploitation of the aspirations, fantasies, relationships, and other personal information that constitute what we used to think of as the sacred territory of human privacy.
Don’t confuse the thing being sold with the thing itself, an advertising guru once told me. Whereas what’s being sold on the Internet is an illusion of instant knowledge and informed choice that draws you in, the thing itself is your data to be mined by those who want to sell you stuff you most likely didn’t even know you wanted. It’s all about dressing up advertising in a technology vastly different than the pre-Internet world, its purpose so thoroughly veiled as to make the “Mad Men” ad men of the 1960s seem like honest brokers in comparison.
Yesterday’s highly compensated consultants who devised methods for the psychological manipulation of the masses would have had nothing on today’s mostly unknown data brokers, whose ability to spy on us is comparable to that of large government intelligence agencies. Acxiom, the second-largest company involved in what is known as “database marketing,” has monitored the records of hundreds of millions of Americans obtained through 1.1 billion browser cookies, 200 million mobile profiles, and an average of 1,500 pieces of data per consumer, according to Acxiom’s first-quarter 2014 report. In that report, ambitious CEO Scott Howe noted, “Our digital reach will soon approach nearly every Internet user in the US."
Toward that goal, Acxiom paid $310 million in cash in May 2014 for LiveRamp, a San Francisco–based company that enables marketers to use consumers’ offline purchase and other transaction data to better target online ads to those consumers. According to the statement announcing the acquisition, Acxiom is in line “to reach more than 99 percent of the adult U.S. population . . . across all channels and devices."
A brief explanation may be helpful here. To understand the scope of access to personal information under discussion, consider that offline data includes information from real estate and motor vehicle records, information from warranty cards filled out by consumers, homeownership and property values, marital status, annual income and education levels, travel records, ages of children in the home, and itemized store purchases made when a consumer swipes a loyalty discount card. In the latter case, a store can sell information regarding one’s pharmaceutical purchases to a data broker that will then provide this information to a health insurance company, for example. Government agencies participate in the data trade, too. In most states, the Department of Motor Vehicles can— and does— legally sell driver data, including driver names, addresses, car models, vehicle identification numbers, and license plate numbers. The relationship is symbiotic: government agencies like the DMV sell offline data to data brokers, and they often obtain online data from those same companies.
The Transportation Security Agency (TSA), for example, purchases data from data brokers to prescreen air travelers...
Scheer, Robert (2015-02-24). They Know Everything About You: How Data-Collecting Corporations and Snooping Government Agencies Are Destroying Democracy (pp. 58-59). Nation Books. Kindle Edition.
Need I really connect the dots of concern here? Panoptic, largely unregulated (increasingly all online) "big data" mining is all benefit to those trafficking in your information, and all risk to you. Those who compile and sell frequently error-ridden or otherwise misleading conclusions about you will likely never see the inside of a courtroom and defendant's table. In fact, you're unlikely to ever even know the extent to which you've been "mosaic'd" for the commercial benefit of others.
As I observed in my November 2014 post "Big data" and "surveillant anxiety" -
"With more data comes greater accuracy and truth?" Myth, indeed. The utility of any set of data is a function of its intended use. "Big data" shot through with inaccuracies can still be handsomely profitable for the analytical user (or buyer), irrespective of any harms they might visit on the individuals swept up (usually without their knowledge or assent) in the data hauls and subsequent proprietary modeling.
What are the implications of all of the foregoing with respect to your personal health information (ePHI), particularly in light of this rhapsodic move toward "web-centric XML" APIs transecting EHRs, HIEs, and all manner of mobile devices (including "the internet of things")?
A personal illustration. I worked for a number of years (2000 - 2005) in subprime credit risk modeling at a VISA/MC issuer. We routinely bought "pre-screened" prospect mailing lists for our direct mail marketing campaigns. Direct mail campaigns can be in the aggregate quite profitable at a one percent response rate or lower. Ours, being targeted to credit-hungry subprime prospects with blemished credit histories, typically had response rates of about 4%. Of those who responded, about half did not pass the initial in-house analytical cut for one reason or another (many owing to impossible, bad data in the individuals' dossiers). Of the remaining 2% that we actually booked, perhaps half of those would eventually "charge off" (default). These were our "false positives."
The surviving 1% were lucrative enough to pay for everything, including a nice net margin (we set new annual profit levels every year I was there). It's called "CPA" -- cost per acquisition. Ours were about $100 per new account. Fairly standard in the industry at the time.
Potentially creditworthy (and profitable) prospects that we passed on after they replied were our "false negatives." And, ~96% of our marketing targets didn't even respond, so we were "wrong" about them (the "unknown unknowns") at the outset.
To sum up: we were, in a material sense, routinely 99% "wrong" but, notwithstanding, incredibly profitable...
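The funnel arithmetic above can be checked directly. The rates are the round numbers from my account; the absolute mailing size is a hypothetical placeholder, since only the ratios matter.

```python
# Direct-mail funnel using the approximate rates described above
# (mailing size is a hypothetical placeholder; only the ratios matter).
targets = 1_000_000

responders  = int(targets * 0.04)   # ~4% response rate
booked      = responders // 2       # half of responders fail the in-house screen
charged_off = booked // 2           # half of booked accounts eventually default
profitable  = booked - charged_off  # the surviving ~1%

# "Wrong" calls: non-responders we mailed anyway, responders we screened
# out (including our false negatives), and booked accounts that defaulted.
wrong = (targets - responders) + (responders - booked) + charged_off

print(f"profitable: {profitable / targets:.0%}")  # 1%
print(f"wrong:      {wrong / targets:.0%}")       # 99%
```

On these hypothetical volumes, the ~$100 CPA cited above implies roughly $2 million in acquisition cost for the 20,000 booked accounts, carried entirely by the profitable 1 percent.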
Today we have all manner of virtually unregulated big data mining, modeling, and aggregated and re-aggregated resale going on, using all of us as correlational grist -- e.g., Google, the overtly commercial Amazon and their lesser competitors, and "free" social media platforms such as Facebook, Twitter, Tumblr, Pinterest, etc., along with business sites such as LinkedIn. Digital gumshoe companies such as Palantir are hard at work quietly drilling in the tar sands of social media, modeling away and "scoring" individuals for their clients, far from any regulatory purview...
From the Wiki:
While many technologists tout the Internet of Things as a step towards a better world, scholars and social observers have doubts about the promises of the ubiquitous computing revolution.
Privacy, autonomy and control
Peter-Paul Verbeek, a professor of philosophy of technology at the University of Twente, Netherlands, writes that technology already influences our moral decision making, which in turns affects human agency, privacy and autonomy. He cautions against viewing technology merely as a human tool and advocates instead to consider it as an active agent.
Justin Brookman, of the Center for Democracy and Technology, expressed concern regarding the impact of IoT on consumer privacy, saying that "There are some people in the commercial space who say, ‘Oh, big data — well, let’s collect everything, keep it around forever, we’ll pay for somebody to think about security later.’ The question is whether we want to have some sort of policy framework in place to limit that."
Editorials at WIRED have also expressed concern, one stating 'What you’re about to lose is your privacy. Actually, it’s worse than that. You aren’t just going to lose your privacy, you’re going to have to watch the very concept of privacy be rewritten under your nose.'
The American Civil Liberties Union (ACLU) expressed concern regarding the ability of IoT to erode people's control over their own lives. The ACLU wrote that "There’s simply no way to forecast how these immense powers -- disproportionately accumulating in the hands of corporations seeking financial advantage and governments craving ever more control -- will be used. Chances are Big Data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful corporations and government institutions that are becoming more opaque to us."
Researchers have identified privacy challenges faced by all stakeholders in the IoT domain, from the manufacturers and app developers to the consumers themselves, and examined the responsibility of each party in order to ensure user privacy at all times. Problems highlighted by the report include:
- User consent – somehow, the report says, users need to be able to give informed consent to data collection. Users, however, have limited time and technical knowledge.
- Freedom of choice – both privacy protections and underlying standards should promote freedom of choice. For example, the study notes, users need a free choice of vendors in their smart homes; and they need the ability to revoke or revise their privacy choices.
- Anonymity – IoT platforms pay scant attention to user anonymity when transmitting data, the researchers note. Future platforms could, for example, use TOR or similar technologies so that users can't be too deeply profiled based on the behaviors of their "things".
Concerns have been raised that the Internet of Things is being developed rapidly without appropriate consideration of the profound security challenges involved and the regulatory changes that might be necessary. According to the BI (Business Insider) Intelligence Survey conducted in the last quarter of 2014, 39% of the respondents said that security is the biggest concern in adopting Internet of Things technology. In particular, as the Internet of Things spreads widely, cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat. In a January 2014 article in Forbes, cybersecurity columnist Joseph Steinberg listed many Internet-connected appliances that can already "spy on people in their own homes" including televisions, kitchen appliances, cameras, and thermostats.
Computer-controlled devices in automobiles such as brakes, engine, locks, hood and trunk releases, horn, heat, and dashboard have been shown to be vulnerable to attackers who have access to the onboard network. (These devices are currently not connected to external computer networks, and so are not vulnerable to Internet attacks.)
The U.S. National Intelligence Council in an unclassified report maintains that it would be hard to deny "access to networks of sensors and remotely-controlled objects by enemies of the United States, criminals, and mischief makers… An open market for aggregated sensor data could serve the interests of commerce and security no less than it helps criminals and spies identify vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it proves to be fundamentally incompatible with Fourth-Amendment guarantees against unreasonable search." In general, the intelligence community views Internet of Things as a rich source of data...
You might not yet have a ton of "internet of things" toys ("smart house" stuff, "quantified self" sensors, etc), but you likely have a smartphone.
If you need to be convinced that you’re living in a science-fiction world, look at your cell phone. This cute, sleek, incredibly powerful tool has become so central to our lives that we take it for granted. It seems perfectly normal to pull this device out of your pocket, no matter where you are on the planet, and use it to talk to someone else, no matter where the person is on the planet.
Yet every morning when you put your cell phone in your pocket, you’re making an implicit bargain with the carrier: “I want to make and receive mobile calls; in exchange, I allow this company to know where I am at all times.” The bargain isn’t specified in any contract, but it’s inherent in how the service works. You probably hadn’t thought about it, but now that I’ve pointed it out, you might well think it’s a pretty good bargain. Cell phones really are great, and they can’t work unless the cell phone companies know where you are, which means they keep you under their surveillance.
This is a very intimate form of surveillance. Your cell phone tracks where you live and where you work. It tracks where you like to spend your weekends and evenings. It tracks how often you go to church (and which church), how much time you spend in a bar, and whether you speed when you drive. It tracks— since it knows about all the other phones in your area— whom you spend your days with, whom you meet for lunch, and whom you sleep with. The accumulated data can probably paint a better picture of how you spend your time than you can, because it doesn’t have to rely on human memory. In 2012, researchers were able to use this data to predict where people would be 24 hours later, to within 20 meters.
Before cell phones, if someone wanted to know all of this, he would have had to hire a private investigator to follow you around taking notes. Now that job is obsolete; the cell phone in your pocket does all of this automatically. It might be that no one retrieves that information, but it is there for the taking.
Your location information is valuable, and everyone wants access to it. The police want it. Cell phone location analysis is useful in criminal investigations in several different ways. The police can “ping” a particular phone to determine where it is, use historical data to determine where it has been, and collect all the cell phone location data from a specific area to figure out who was there and when. More and more, police are using this data for exactly these purposes.
Governments also use this same data for intimidation and social control. In 2014, the government of Ukraine sent this positively Orwellian text message to people in Kiev whose phones were at a certain place during a certain time period: “Dear subscriber, you have been registered as a participant in a mass disturbance.” Don’t think this behavior is limited to totalitarian countries; in 2010, Michigan police sought information about every cell phone in service near an expected labor protest. They didn’t bother getting a warrant first.
There’s a whole industry devoted to tracking you in real time. Companies use your phone to track you in stores to learn how you shop, track you on the road to determine how close you might be to a particular store, and deliver advertising to your phone based on where you are right now.
Your location data is so valuable that cell phone companies are now selling it to data brokers, who in turn resell it to anyone willing to pay for it. Companies like Sense Networks specialize in using this data to build personal profiles of each of us...
Schneier, Bruce (2015-03-02). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (pp. 1-2). W. W. Norton & Company. Kindle Edition.
UPDATE: OK, HERE'S A SWELL HEADLINE
Samsung is warning customers about discussing personal information in front of their smart television set.
WHO OWNS YOUR MEDICAL RECORDS?
From George Washington University's Hirsh Health Law and Policy Program and the Robert Wood Johnson Foundation.
"This comparative map outlines who owns a patient's medical records in each state. About half of the states do not have any laws on ownership of medical records because it is either presumed that the health care provider owns the record or the law is silent on the issue. In states that do have a law or regulation governing medical record ownership, medical records are usually considered the property of the health care provider (hospital or physicians). There are very few states that grant the patient ownership of his or her own medical record."
CODA: clamorem et uthesium 2.0
Just in, via my newly arrived Harper's. On the gumshoe limits of "The Internet of You."
Black Hat, White Hat
By Masha Gessen
In the immediate aftermath of the 2013 Boston Marathon bombing, police scanned the crowd for people who looked suspicious, which is to say Muslim, which is to say darker than Boston-white. A twenty-year-old student of English from Saudi Arabia named Abdulrahman Ali Alharbi was among the dozens of people with burns, scratches, and bruises who were making their way, with the assistance of uninjured runners, to the assembled ambulances. Alharbi had been walking to meet friends for lunch when he stopped for a glimpse of the marathon and was thrown by the second explosion. He had burn injuries on his head, back, and legs. He was covered in blood, most of it other people’s. A police officer directed him, along with other victims, toward the waiting ambulances — but when Alharbi boarded one, several officers followed him into the vehicle.
At the hospital, more than twenty police officers and FBI agents surrounded his bed. At 4:28 in the afternoon, less than two hours after the bombs went off, the New York Post reported that law-enforcement officers were talking to a suspect in the bombing. By evening, the media had Alharbi’s name and address, and the FBI had his Facebook password. By Tuesday morning, the Post had published a picture taken on his street in Revere, a Boston suburb, and Fox News had reported his name. Alharbi later said that the media had published a mistranslation of a Facebook post of his: “God is coming to the U.S.” In fact, he had written, “Thank God I arrived in the U.S. after a long trip.” Alharbi was exonerated by the FBI within twenty-four hours of the bombing, but by this time he had no home — his address was so widely known that he felt he would be unsafe there — and no money: the FBI never returned his wallet.
After Alharbi came Salaheddin Barhoum, Yassine Zaimi, and Sunil Tripathi. The first two were Moroccan immigrants — a seventeen-year-old high school track athlete and his twenty-four-year-old coach — fingered by amateur online detectives. The Post put a photograph of the pair on its front page, with the banner headline bag men: feds seek these two pictured at boston marathon. The evidence, as analyzed by the online crowd: one of the men was wearing a black backpack, and a black backpack, or what remained of one after a bomb had exploded inside it, had been found at the scene. Plus, they looked dark and were, indeed, Muslim.
Sunil Tripathi was a brown-skinned American student at Brown University who had disappeared almost a month before the bombing. This suspect, too, came courtesy of Internet amateurs: the social network Reddit gave him so much attention that for a day or two many people following the case were all but certain he was the prime suspect. In fact, he had been dead for weeks — his body was found eight days after the marathon.
At five o’clock on the Thursday after the bombing, the FBI held a press conference at a Sheraton hotel in Boston. By five-thirty the media had released pictures of another pair of young men: one older, one younger; one wearing a white baseball cap, the other a black one. Oddly, most of this was also true of the two Moroccans, and in some quarters confusion persisted. The pictures were taken from surveillance tapes; the FBI believed that the two men were the bombers and was asking the public to help identify them...
More to come...