Search the KHIT Blog

Sunday, March 18, 2018

The "costs of unchecked innovation?"

Is "innovation" an unalloyed societal good, and thus a no-brakes-necessary technological and economic priority? A staple "conservative" political stance is opposition to "government regulation" that "stifles innovation." The libertarian-leaning entrepreneurs and venture capitalists of Silicon Valley and beyond regard the fevered pursuit of "innovation" as a cardinal virtue.

Implicit in the common definition, "introduction of new things or methods," is the assumption that innovation always means "improvement."**
** Attempts at innovation that don't bear fruit obviously don't count. Efforts that do make the cut, though, should be subjected to an honest accounting of "net utility," comprising candor with respect to the extent and consequences of "side effects" / "adverse outcomes."
It shouldn't be difficult to come up with counterexamples upon the briefest reflection. For one thing, there are frequently casualties among those "disrupted" by "disruptive innovation." Relatedly, consider the new "Tracking Point XS1" Artificial Intelligence-enabled assault style semiautomatic rifle.

An "innovative" way to more effectively "disrupt" a person's life? Permanently, in the worst case. As a military weapon, its net utility is rather obvious. It is not, however, simply a "deer rifle" or personal protection appliance, marketing spin of its manufacturer notwithstanding.

No, "innovations" (and those who develop them) don't rightfully get a moral blank check. apropos, see my November post "Artificial Intelligence and Ethics." See also my post on "Slaughterbots."

OK, comes a new wrinkle, reported at WIRED:

Meltdown, Spectre, and the Costs of Unchecked Innovation

…Even if Intel wouldn't quite agree that Moore's law is over, its real-world performance benefits may be substantially erased after Meltdown and Spectre are tamed. The long-standing computing trope that should be even more concerning in this context, however, is "it's all just ones and zeroes." We're not just talking about bits once those bits drive our robots, drones, and 3-D printers. New technologies now often manifest in the real world, since for now that is still where most of the money is, but even Bitcoin melts the polar icecaps.

On the mind-boggling cosmic scale, these exploits will affect our ability to create and edit organisms, but on a more tangible level, they also decreased the operational speeds of both processors and online timing measurements, thereby reversing advances we thought we'd made both in hardware and with the general sophistication of the web as a platform. In both fields, we had quite literally been racing toward something terrible.

We’ve built technology too quickly for our own good, quantifiable now in dollars and microseconds, using a wide range of tools and metrics even though SharedArrayBuffer is no longer around to take the measurements. Anything that seeks to reshape the infrastructure built by our past selves should deserve our most aggressive scrutiny, regulation, and suspicion. If backtracking overeager technology is already proving so catastrophic for the cheap chips in our laptops and phones, then we certainly have no hope of reversing its changes to our homes, cities, and oceans. Some things can't be patched or safely versioned. We just have to get it right the first time.
Yikes. Read all of it. Broad implications.
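Why did browsers yank SharedArrayBuffer and coarsen their timers after Spectre? A toy sketch (hypothetical round-number durations, not the actual exploit) of why timer resolution is everything for these attacks:

```python
# Illustrative sketch only: Spectre-class attacks infer secrets by timing
# memory accesses (a cache hit is much faster than a miss). Browsers
# responded by coarsening timers and removing SharedArrayBuffer, which
# could be abused as a do-it-yourself high-resolution clock.
# The durations below are hypothetical round numbers, in nanoseconds.
CACHE_HIT_NS = 10
CACHE_MISS_NS = 200

def quantize(duration_ns: int, resolution_ns: int) -> int:
    """Round a measured duration down to the timer's resolution."""
    return (duration_ns // resolution_ns) * resolution_ns

def distinguishable(resolution_ns: int) -> bool:
    """Can an observer tell a cache hit from a miss at this resolution?"""
    return quantize(CACHE_HIT_NS, resolution_ns) != quantize(CACHE_MISS_NS, resolution_ns)

print(distinguishable(1))        # True: a 1 ns timer trivially separates the two
print(distinguishable(100_000))  # False: a 100 microsecond timer reports both as 0
```

Coarsening the clock doesn't fix the underlying hardware flaw; it just blinds the measurement, which is exactly the "backtracking overeager technology" cost the WIRED piece is tallying.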

See also my January 2017 post "Disruption ahead on all fronts, for good and ill."

Another good read relevant to the topic:

"...The financial markets were changing in ways even professionals did not fully understand. Their new ability to move at computer, rather than human, speed had given rise to a new class of Wall Street traders, engaged in new kinds of trading. People and firms no one had ever heard of were getting very rich very quickly without having to explain who they were or how they were making their money..."

Lewis, Michael. Flash Boys: A Wall Street Revolt (p. 17). W. W. Norton & Company. Kindle Edition.

Google Naked Capitalism Uber. Bring a Snickers; you're going to be a while.

Intelligent to a Fault: When AI Screws Up, You Might Still Be to Blame
Interactions between people and artificially intelligent machines pose tricky questions about liability and accountability, according to a legal expert
By Larry Greenemeier 
Artificial intelligence is already making significant inroads in taking over mundane, time-consuming tasks many humans would rather not do. The responsibilities and consequences of handing over work to AI vary greatly, though; some autonomous systems recommend music or movies; others recommend sentences in court. Even more advanced AI systems will increasingly control vehicles on crowded city streets, raising questions about safety—and about liability, when the inevitable accidents occur.

But philosophical arguments over AI’s existential threats to humanity are often far removed from the reality of actually building and using the technology in question. Deep learning, machine vision, natural language processing—despite all that has been written and discussed about these and other aspects of artificial intelligence, AI is still at a relatively early stage in its development. Pundits argue about the dangers of autonomous, self-aware robots run amok, even as computer scientists puzzle over how to write machine-vision algorithms that can tell the difference between an image of a turtle and that of a rifle.

Still, it is obviously important to think through how society will manage AI before it becomes a really pervasive force in modern life…

More to come...

Thursday, March 15, 2018

Theranos in the news again

I've taken multiple shots at Theranos and Elizabeth Holmes before. e.g., see here for a thread of some of my prior posts on the topic.

The USA Today story.
SAN FRANCISCO — How did disgraced biotech start-up Theranos become a $9 billion darling? The old-fashioned way: Through star power, dazzling promises, deep pockets and devout believers.

But the Palo Alto company, whose founder Elizabeth Holmes was stripped of her leadership role Wednesday by the Securities and Exchange Commission, wound up with a story line worthy of Icarus.

Theranos rose quickly from being a college dropout's idea to revolutionize the blood analysis industry to a hot tech bet that accrued $700 million in funding and many famous names for its board.

Anchoring it all was Holmes, now 34, whose smarts, fierce determination and Steve Jobs-inspired look (a black turtleneck was her staple) were critical to recruiting believers for a secretive company that ultimately could not deliver the technology required to do complex blood work based not on vials but mere drops of blood…
Perhaps the civil sanctions are punishment enough. But she and her COO apparently committed multiple counts of ongoing criminal fraud. I would favor prosecution, for a full, on-the-record airing of the particulars. to wit, from Forbes:
Lawyers: Elizabeth Holmes Could Still Serve Time In Prison
By Ellie Kincaid and Michela Tindera

Elizabeth Holmes agreed to settle with the SEC on fraud charges that she deceived investors to raise $700 million for her blood-testing company Theranos. She and the company didn't admit or deny the allegations, but lawyers say she could still face jail time if prosecutors decide to pursue her.

“She could very well serve time,” said Elliot Lutzker, the chair of New York City commercial law and government relations firm Davidoff Hutcher & Citron’s corporate group, who has decades of experience handling noncriminal matters with the SEC. “She is subject to criminal charges because she outright lied.” Theranos declined to comment for this story…
From THCB:
On Theranos: It’s Time to Throw the Book at Healthcare Tech Frauds
Mar 14, 2018

Huge news hit today as Theranos, its Chairman and CEO Elizabeth Holmes and its former President and COO Ramesh “Sunny” Balwani were charged with “elaborate, years-long fraud” by the Securities and Exchange Commission. The litany of supposed violations of the Securities Act of 1933 and the Securities Exchange Act of 1934 are almost as dizzying as the detailed factual allegations of repeated, willful fraud perpetrated by Holmes and Balwani on investors who likely should have known better.

Reviewing the SEC complaint against Holmes, it’s stunning to see the extent to which Holmes and Balwani were able to pull the wool over investors’ eyes…
"White collar criminals rarely face criminal investigations and charges for defrauding corporate investors. Sympathy is hard to come by for VCs who fail to do the due diligence to see through an amateurish con pulled by a neophyte founder and a President and COO shrouded in mystery..."
Yeah. But...

Search "Theranos" on Google news. Plenty of coverage.


The folks over at Naked Capitalism are not amused.
Jay Clayton’s SEC Lets Theranos Founder Elizabeth Holmes Get Away With Brazen Fraud
Posted on March 16, 2018 by Yves Smith

Due to the state of my internet connection (barely functioning), I’ll have to be terse and limit myself to a few high level comments about the pathetic punishment meted out to Theranos founder Elizabeth Holmes. This case proves that the Trump SEC is setting new lows by giving get out of jail nearly free cards to fraudsters.

Holmes settled with the SEC, paying a puny $500,000 when she raised and torched $700 million of investor funds. She also surrendered 18.9 million shares and gave up control of the company by converting her Class B shares, which gave her voting control, to Class A shares. She is also barred from serving as the director or officer of a public company for 10 years. That bizarrely means she remains as CEO of Theranos. She did not admit or deny guilt.

The Department of Justice is dutifully reported by the press to be looking at a case against her. If you believe that, I have a bridge to sell you. The SEC refers cases to the Department of Justice when it thinks they merit criminal charges and the two agencies work together. It is possible that the Department of Justice could pursue FDA-related charges against Holmes, but the securities law claims were a slam dunk, and the Department of Justice is highly unlikely to pursue a case on its own, particularly since it is plenty busy with things like suing California over passing legislation that defies its crackdown on sanctuary cities.

We’ve embedded the filing at the end of the post for your entertainment.

As bad as the overall picture is, some of the items in the SEC filing are eyepopping. Holmes told investors she expected to have over $100 million in revenues in 2014 when she had only $100,000. She told investors that her largely vaporware blood tests didn’t need FDA approval when they did (and why did no reporter bother to check that claim out?). She presented prospective investors with a binder of endorsements with pharma company logos. But only one was real. The rest were made up by Theranos, which put the logos on the page itself. Holmes also claimed that Theranos was making its own equipment. That should have elicited a lot more study from investors and the press, since that would require engineering expertise that was notably absent on the Theranos team; plus, seeing the facilities where the equipment was being made should have been part of the usual investor dog and pony show, and apparently wasn’t.

It was disturbing to see so much of the press reports lead with the SEC’s line that the SEC had charged Holmes with “massive fraud” yet for the most part treat the punishment with “just the facts, ma’am” deference, as opposed to seeking expert comment on its suitability. In the Twitterverse, many pointed out that pharma bro Martin Shkreli was just sentenced to seven years in prison, and that even though he had engaged in brazen fraud, he had arguably not lost investors any money in the end. But he was so smug and full of himself that that alone guaranteed he’d get a harsh sentence…
Then there's this:
James Mattis is linked to a massive corporate fraud and nobody wants to talk about it
Better let a scandal slide than risk a nuclear war.

Secretary of Defense James Mattis is implicated in one of the largest business scandals of the past decades, described by the Securities and Exchange Commission as an “elaborate, years-long fraud” through which Theranos, led by CEO Elizabeth Holmes and president Ramesh “Sunny” Balwani, “exaggerated or made false statements about the company’s technology, business, and financial performance.”

Basically, their biotech startup was founded on the promise of faster, cheaper, painless blood tests. But their technology was fake.

Mattis not only served on Theranos’s board during some of the years it was perpetrating the fraud after he retired from US military service, but he earlier served as a key advocate of putting the company’s technology (technology that was, to be clear, fake) to use inside the military while he was still serving as a general. Holmes is settling the case, paying a $500,000 fee and accepting various other penalties, while Balwani is fighting it out in court.

Nobody on the board is being directly charged with doing anything. But accepting six-figure checks to serve as a frontman for a con operation is the kind of thing that would normally count as a liability in American politics.

But nobody wants to talk about it. Not just Trump and his co-partisans in Congress; the Democratic Party opposition is also inclined to give Mattis a pass. Everyone in Washington is more or less convinced that his presence in the Pentagon is the only thing standing between us and possible nuclear Armageddon…
The hits just keep on comin'.


STATnews has an interesting article up.
Getting past the bad blood of Theranos through collaboration

This week’s news that the Securities and Exchange Commission charged Theranos and its CEO with fraud put the troubles of the company back into the spotlight. For those of us in the field of liquid biopsy, Theranos has cast a long and persistent shadow on what’s clearly one of the most promising areas in cancer care — using blood tests to improve detection, diagnosis, and treatment and, more broadly, advancing the reality of precision medicine in cancer.

The long-unfolding Theranos story has chilled investment and primed clinicians to be wary of emerging blood tests, all to the detriment of patients whose cancer care may benefit from less invasive blood-based biopsies that can detect and pull critical information from genes, proteins, and cancer cells that have shed into the bloodstream.

What’s the silver lining for those of us in the field? The Theranos saga set the stage for an unprecedented level of collaboration and data-sharing across companies working to develop liquid biopsy technologies…
"Liquid biopsy." I'll get to that in a moment. The article sums up:
Though the Theranos story served as a backdrop for the collaboration in liquid biopsy, there is a lesson here for all those innovating in the life sciences sector. Any new technology or approach can, and should, be met with healthy skepticism by all stakeholders, including investors, clinicians, professional societies, patients, and insurers. Without a critical eye on the science, we will find ourselves in a scientific Wild West instead of exploring a promising new frontier...
Doing transparent science. Real science. Check out the author's website.

apropos of which, from a recent edition of my (paywalled) AAAS Science Magazine:
Cancer detection: Seeking signals in blood

Most cancers are detected when they cause symptoms that lead to medical evaluation. Unfortunately, in too many cases this results in diagnosis of cancers that are locally invasive or already metastatic and hence no longer curable with surgical resection or radiation treatment. Medical therapies, which might be curative in the setting of minimal tumor burden, typically provide more limited benefit in more advanced cancers, given the emergence of drug resistance (1). On page 926 of this issue, Cohen et al. (2) describe a strategy for early cancer detection, CancerSEEK, aimed at screening for multiple different cancers within the general population. This study challenges current assumptions in the field of blood-based biomarkers and sets the stage for the next generation of cancer screening initiatives.

Given the potential curative advantage of earlier diagnosis and treatment, why have so many cancer screening approaches failed? In the past, efforts at screening healthy populations for cancer have relied on tests that were insufficiently specific. For example, most men with rising serum prostate-specific antigen (PSA) do not have prostate cancer but instead have benign prostatic enlargement. However, where accurate tests exist, there have been dramatic improvements in cancer outcomes (3). For example, advanced cervical cancer has virtually disappeared in countries where Pap screening is the standard of care; although less reliable, mammography and screening colonoscopy are recommended for early detection of breast and colon cancers in individuals above ages 40 to 45 and 50, respectively, and screening heavy smokers by use of low-dose chest computed tomography (CT) scans reduces deaths from lung cancer (4). However, these tests are imperfect, and cost-effectiveness for broad deployment remains a challenge, particularly because a multitude of false-positive test results may lead to extensive diagnostic evaluations and unnecessary medical interventions. Unfortunately, for the majority of cancers no effective early screening tests are available…

Detection and localization of surgically resectable cancers with a multi-analyte blood test
SEEK and you may find cancer earlier
Many cancers can be cured by surgery and/or systemic therapies when detected before they have metastasized. This clinical reality, coupled with the growing appreciation that cancer's rapid genetic evolution limits its response to drugs, have fueled interest in methodologies for earlier detection of the disease. Cohen et al. developed a noninvasive blood test, called CancerSEEK that can detect eight common human cancer types (see the Perspective by Kalinich and Haber). The test assesses eight circulating protein biomarkers and tumor-specific mutations in circulating DNA. In a study of 1000 patients previously diagnosed with cancer and 850 healthy control individuals, CancerSEEK detected cancer with a sensitivity of 69 to 98% (depending on cancer type) and 99% specificity.

Earlier detection is key to reducing cancer deaths. Here, we describe a blood test that can detect eight common cancer types through assessment of the levels of circulating proteins and mutations in cell-free DNA. We applied this test, called CancerSEEK, to 1005 patients with nonmetastatic, clinically detected cancers of the ovary, liver, stomach, pancreas, esophagus, colorectum, lung, or breast. CancerSEEK tests were positive in a median of 70% of the eight cancer types. The sensitivities ranged from 69 to 98% for the detection of five cancer types (ovary, liver, stomach, pancreas, and esophagus) for which there are no screening tests available for average-risk individuals. The specificity of CancerSEEK was greater than 99%: only 7 of 812 healthy controls scored positive. In addition, CancerSEEK localized the cancer to a small number of anatomic sites in a median of 83% of the patients.
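A quick back-of-the-envelope check on the quoted figures (note the editor's summary above says 850 healthy controls while the abstract's own denominator is 812; the sketch below uses the abstract's 7-of-812 count):

```python
# Recomputing the CancerSEEK specificity claim from the abstract quoted above:
# of 812 healthy controls, only 7 scored positive.
positives_in_controls = 7
total_controls = 812

# Specificity = true negatives / all controls = 1 - false positive rate.
specificity = 1 - positives_in_controls / total_controls
print(f"Specificity: {specificity:.1%}")  # just over 99%, matching the paper
```

The sensitivity side (69 to 98%, depending on cancer type) can't be recomputed from the excerpt alone, since the per-cancer positive counts aren't quoted here.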
Given the plights of my daughters (both of whom were dx'd presenting symptoms at Stage IV), my interest here should be obvious. Won't help us, but perhaps numerous others will benefit.


...A couple of years ago, Theranos, a company claiming to be able to almost magically do all sorts of medical tests on a single drop of human blood, fell apart. A brilliant Wall Street Journal investigation showed that its technology didn’t work; this week the Securities and Exchange Commission brought fraud charges against its founder. Diagnostics start-ups extracted a few lessons: Have actual, peer-reviewed data and, like, don’t lie to investors. But the Theranos debacle didn’t stop their work. That game has been on since at least 2000, and doctors, patients, and insurers are still clamoring for those tests. Nominally they might reduce health care costs, but more than that they promise new, faster diagnoses and better care…
A good read.


More from STATnews:
Investigators say his fingerprints are all over financial crimes at Theranos. Why is he a virtual ghost?
By REBECCA ROBBINS @rebeccadrobbins, DAMIAN GARDE @damiangarde, and ADAM FEUERSTEIN @adamfeuerstein MARCH 19, 2018

Fallen wunderkind Elizabeth Holmes is the face of the Theranos scandal. But the next act of Silicon Valley’s biggest blow-up rests on a mysterious tech entrepreneur with almost no digital footprint.
Ramesh “Sunny” Balwani is a virtual ghost — despite serving nearly seven years in the No. 2 position at the blood-testing startup that turned out to be too good to be true. While the black-turtleneck-clad Holmes graced magazine covers and spoke before adoring crowds, Balwani, her former boyfriend, stayed in the shadows. He has almost no internet presence, and the only verifiable photo that STAT could find of him was a grainy image from his 1988 college yearbook.

Now, he’s at the center of a legal showdown that could tear open a new chapter in a scandal that has rocked the business world and captivated the public imagination. And it could set up a daytime-TV legal defense: My ex-girlfriend duped me…

The SEC’s court documents paint Balwani as a hyperactive manager who operated with cunning and methodical intensity. And they find Balwani’s fingerprints all over Theranos’s alleged financial crime scene. The allegations: He lied to investors and partners about the blood test’s capabilities. He falsely claimed it was being used on military helicopters. He promised $1 billion in annual sales despite booking just $100,000. He orchestrated a campaign of secrecy within the ranks of the company, instructing employees to use code names for the third-party machines used in lieu of the company’s proprietary technology to process blood tests…
Long, detailed article. Kudos to the authors. Well worth your time.

"Financial crimes?" Prosecute, I say.

More to come...

Wednesday, March 7, 2018

An Epidemic of Wellness?

"What's the definition of a 'well person'?"
"A patient who hasn't been adequately worked up."
- Old physician joke

LOL. Goes to the "medicalization of life" itself. IIRC, it was the curmudgeonly Dr. Thomas Szasz who once riffed irascibly on a "humanectomy" px.

Wish I was at the Mardi Gras of Health IT this year, #HIMSS18. to wit,

Among the goodies in my latest Harper's is an essay by Barbara Ehrenreich.

apropos of our exuberantly-touted mobile digitech-enabled "culture of wellness" of late.
Running to the Grave

By Barbara Ehrenreich, from Natural Causes, which will be published next month by Twelve. Ehrenreich is the author of more than a dozen books, including Nickel and Dimed (Henry Holt). She holds a PhD in cellular immunology.

The pressure to remain fit, slim, and in control of one’s body does not subside with the end of youth — it grows only more insistent as one grows older. Friends, family members, and doctors start nagging the aging person to join a gym, “eat healthy,” or at the very least go for a daily walk. You may have imagined a reclining chair or a hammock awaiting you after decades of stress and physical exertion. But no, your future more likely holds a treadmill and a lat pull, if you can afford access to these devices. You may have retired from paid work, but you have a new job: going to the gym. One of the bossier self-help books for seniors commands:

Exercise six days a week for the rest of your life. Sorry, but that’s it. No negotiations. No give. No excuses. Six days, serious exercise, until you die.
People over the age of fifty-five are now the fastest-growing demographic for gym membership. Mark, a fifty-eight-year-old white-collar worker who goes to my gym, does a six o’clock workout before going to the office, then another after leaving. His goal? “To keep going.” The price of survival is endless toil.

For an exemplar of healthy aging, we are often referred to Jeanne Louise Calment, a Frenchwoman who died in 1997 at the age of 122 — the longest confirmed life span on record. Calment never worked in her life, but it could be said that she worked out. She and her wealthy husband enjoyed tennis, swimming, fencing, hunting, and mountaineering. She took up fencing at the age of 85, and rode a bicycle until her 100th birthday.

Anyone looking for dietary tips will be disappointed; Calment liked beef, fried foods, chocolate, and pound cake. Unthinkable by today’s standards, she smoked cigarettes and sometimes cigars, though anti-smoking advocates should be relieved to know that she suffered from a persistent cough in her final years.

This is “successful aging,” which, except for the huge investment of time it requires, is supposedly indistinguishable from not aging at all. It has many alternative names: “active aging,” “healthy aging,” “productive aging,” “vital aging,” “anti-aging,” and “aging well.” In 2012, the World Health Organization dedicated World Health Day to healthy aging, and the European Union designated that year its Year for Active Aging.

Popular science and self-help books on the topic are proliferating. Among the titles currently available on Amazon are: Successful and Healthy Aging: 101 Best Ways to Feel Younger and Live Longer; Live Long, Die Short: A Guide to Authentic Health and Successful Aging; Do Not Go Gentle: Successful Aging for Baby Boomers and All Generations; Aging Backwards: Reverse the Aging Process and Look 10 Years Younger in 30 Minutes a Day; and, of course, Healthy Aging for Dummies. A major theme is that aging is abnormal and unacceptable. Henry Lodge, a physician and coauthor of Younger Next Year, writes, “The more I looked at the science, the more it became clear that such ailments and deterioration” — heart attacks, strokes, the common cancers, diabetes, most falls, fractures — “are not a normal part of growing old. They are an outrage.”

Who is responsible for this outrage? Well, each of us is individually responsible. All the books in the successful-aging literature insist that a long and healthy life is within the reach of anyone who will submit to the required discipline. It’s up to you and you alone, never mind what scars — from overexertion, genetic defects, or poverty — may be left from your prior existence. Nor is there much concern for the material factors that influence the health of an older person, such as personal wealth or access to transportation and social support.

There is a bright side to aging: declines in ambition, competitiveness, and lust. When Betty Friedan was in her seventies, she wrote a book called The Fountain of Age. As her subjects grew older, she observed, they became “more and more authentically themselves.” They didn’t care anymore what other people thought of them. I can add from my own experience that aging also comes with a refreshing refusal to strive — I feel no need to take on every obligation or opportunity that comes my way.

But even the most ebullient of the elderly eventually come to realize that aging is above all an accumulation of disabilities, often beginning well before Medicare eligibility or the first Social Security check. Vision loss typically begins in one’s forties. Menopause strikes in a woman’s early fifties, along with the hollowing-out of bones. Knee and lower-back pain arise in the forties and fifties, compromising the mobility required for successful aging. The US Census Bureau reports that nearly 40 percent of people aged sixty-five and older suffer from at least one disability, with two thirds of them saying they have difficulty walking or climbing. Yet we soldier on. “You don’t become inactive because you age,” we’ve been told over and over. “You age because you’ve become inactive.”

The goal of successful aging is often described as the “compression of morbidity” into one’s last few years — in other words, a healthy, active life followed by a swift descent into death. But the truly sinister possibility is that for many of us, all the little measures we take to remain fit — all the deprivations and exertions — will lead only to the extension of years spent with crippling and humiliating disabilities. There are no guarantees...

This book ought to be a beaut. From the Amazon blurb:
Bestselling author of Nickel and Dimed, Barbara Ehrenreich explores how we are killing ourselves to live longer, not better.

A razor-sharp polemic which offers an entirely new understanding of our bodies, ourselves, and our place in the universe, NATURAL CAUSES describes how we over-prepare and worry way too much about what is inevitable. One by one, Ehrenreich topples the shibboleths that guide our attempts to live a long, healthy life -- from the importance of preventive medical screenings to the concepts of wellness and mindfulness, from dietary fads to fitness culture.

But NATURAL CAUSES goes deeper -- into the fundamental unreliability of our bodies and even our "mind-bodies," to use the fashionable term. Starting with the mysterious and seldom-acknowledged tendency of our own immune cells to promote deadly cancers, Ehrenreich looks into the cellular basis of aging, and shows how little control we actually have over it. We tend to believe we have agency over our bodies, our minds, and even over the manner of our deaths. But the latest science shows that the microscopic subunits of our bodies make their own "decisions," and not always in our favor.

We may buy expensive anti-aging products or cosmetic surgery, get preventive screenings and eat more kale, or throw ourselves into meditation and spirituality. But all these things offer only the illusion of control. How to live well, even joyously, while accepting our mortality -- that is the vitally important philosophical challenge of this book.

Drawing on varied sources, from personal experience and sociological trends to pop culture and current scientific literature, NATURAL CAUSES examines the ways in which we obsess over death, our bodies, and our health. Both funny and caustic, Ehrenreich then tackles the seemingly unsolvable problem of how we might better prepare ourselves for the end -- while still reveling in the lives that remain to us.
Can't wait to read it.

Relatedly, I saw this over at The Atlantic:
Why So Many of Us Die of Heart Disease
Evolution doomed us to have vital organs fail. For years, experts failed us, too.

The Assyrians treated the “hard-pulse disease” with leeches. The Roman scholar Cornelius Celsus recommended bleeding, and the ancient Greeks cupped the spine to draw out animal spirits.
Centuries later, heart disease remains America’s number one killer, even though medical advances have made it so that many more people can survive heart attacks. Some parts of the country are especially hard-hit: In areas of Appalachia, more people are dying of heart disease now than were in 1980.

Haider Warraich, a fellow in cardiovascular medicine at the Duke University Medical Center (and an occasional Atlantic contributor), is at work on a book about how heart disease came to be such a big threat to humanity…
Certainly of interest to me these days. And, that article led me to this book:

From his NPR Fresh Air interview last year:
Doctor Considers The Pitfalls Of Extending Life And Prolonging Death
Humans have had to face death and mortality since the beginning of time, but our experience of the dying process has changed dramatically in recent history.

Haider Warraich, a fellow in cardiology at Duke University Medical Center, tells Fresh Air's Terry Gross that death used to be sudden, unexpected and relatively swift — the result of a violent cause, or perhaps an infection. But, he says, modern medicines and medical technologies have led to a "dramatic extension" of life — and a more prolonged dying process.

"We've now ... introduced a phase of our life, which can be considered as 'dying,' in which patients have terminal diseases in which they are in and out of the hospital, they are dependent in nursing homes," Warraich says. "That is something that is a very, very recent development in our history as a species.”…

I've just started on this book.

I'll close for now with a bit more Barbara Ehrenreich:
In 2000, an Italian immunologist named Claudio Franceschi proposed the neologism “inflammaging” to describe the entire organism-wide process of aging. Far from being a simple process of decay originating in individual cells, aging involves the active mobilization of macrophages to deal with proliferating sites of cellular damage. Today, Franceschi’s theory is widely accepted. The hallmark disorders of aging — atherosclerosis, arthritis, Alzheimer’s disease, diabetes, osteoporosis — are all inflammatory diseases, characterized by localized buildup of macrophages. In atherosclerosis, for example, macrophages settle in the arteries that lead to the heart and gorge themselves on lipids until the arteries are blocked. In type 2 diabetes, macrophages accumulate in the pancreas, where they destroy the cells that produce insulin. Osteoporosis involves the activation of bone-dwelling macrophages, called osteocytes, that kill normal bone cells. The inflammation associated with Alzheimer’s was first thought to represent macrophages’ attempts to control the beta-amyloid plaques that clog up the Alzheimer’s brain. But the most recent research suggests that the macrophages actually drive the progression of the disease.

These are not degenerative diseases, not accumulations of errors and cobwebs. They are active and seemingly purposeful attacks by the immune system on the body itself. Why should this happen? Perhaps a better question is: Why shouldn’t it happen? The survival of an older person incapable of reproduction is of no evolutionary consequence. In a Darwinian sense, it might even be better to remove the elderly before they can use up resources that would otherwise go to the young. In that case, you could say that there is something almost altruistic about the diseases of aging. Just as programmed cell death, called apoptosis, cleanly eliminates damaged cells from the body, so do the diseases of aging clear out the clutter of biologically useless older people — only not quite so cleanly. This perspective may be particularly attractive at a time like the present, when the dominant discourse on aging focuses on the deleterious economic effects of aging populations. If we didn’t have inflammatory diseases to get the job done, we might be tempted to turn to euthanasia...

On this broad topic of aging and chronic maladies, I am reminded of Dan Lieberman's fine book.

I cited and reviewed it here.


Was recently apprised of this book.

Looks very interesting. I am reminded of Einer Elhauge.

BTW, another new read I'm just starting.

Heard this one touted on CNN on Sunday.


apropos of my recent #NeverAgain posts.


More to come...

Monday, March 5, 2018

A tale of two sisters,

in which it now seems to be increasingly and crushingly likely that I may soon have outlived
both of my daughters.

Sissy and Danielle, high school years in Knoxville.

I have a lot of shortcomings. Failing to be a consistently devoted father is not among them.

The initial backstory on my salt & pepper girls (from an essay on another of my blogs):
"The year is 1969, the place, suburban Seattle. A young couple chafes within the throes of an ill-advised (and ultimately doomed) marriage. They have an infant girl, on whom the young father joyfully dotes. The one unequivocally bright spot. Parenthood, at least, suits him, so it seems.

The young wife announces one day that she is again pregnant. But, while the husband is thrilled at the news, she exudes an inexplicable anxious and distant air. In the subsequent weeks, her smoldering anxiety morphs into a controlled state of cornered panic, and the devastating truth must finally be aired one night; she had had a recent transient sexual dalliance, and this unwanted pregnancy is almost certainly the upshot. To make matters even more complex, the cuckolding paramour is a black man (this couple is white).

Thermonuclear agonies ensue, regarding which, words utterly fail.

The young woman is beyond frantic to obtain an abortion (circumstances being exacerbated by the fact that her own father is an overt racist), but, this being an era prior to Roe vs. Wade, abortions are proscribed by law in Washington state. Her subsequent attempts to procure one illegally fail, and she realizes she will have to carry this fetus to term.

She is then advised by state social services agencies that she may indeed relinquish the newborn sight-unseen for adoption, and wishes to opt for that alternative to end this nightmare, however imperfectly. This, though, requires the husband's written assent, which, for reasons not entirely clear to him, he declines to provide. In part, one can safely assume, hoping against hope that this is all a cruel, horrific dream, and the child will in fact prove to be biologically his.

An uneventful delivery obtains in the hospital in Renton in July of 1970, a 7 lb. 6 oz. healthy baby girl. The young man hesitantly approaches the glass partition of the nursery unit. The moment of truth in a glance: 'Nope, well, this is definitely not your child.' A fleeting, wracked feeling of being summarily dropped down an open elevator shaft gives way within seconds to a subsequent flustered internal flurry: 'Now what? Whatever will become of this child? None of this shit is her fault...'

He turns and heads down the hall to the office, whereupon he signs the requisite parental paperwork. He will be her "father." Not even legally her "adoptive father," simply her father, DNA be damned. His bigoted father-in-law be damned. Subsequent hushed gossip and furtive glances within his social cohort be damned.

Fast forward four years to a Clark County, Washington courtroom. The young man is granted an uncontested divorce, along with sole custody of his two girls. The henceforth ex-wife does not attend the hearing.

Fast forward yet again. Knoxville, Tennessee a decade later, a dining room discussion ensues during which the younger daughter learns for the first time the full story. "Thanks, Dad, you saved my life."

They laugh. It is good.


The foregoing is no mere illustrative fictional anecdote conjured up for emotional impact. I am that father."
Seattle, 1974
Knoxville, 1980

 More on our Knoxville years.

Twenty years ago this July 1st, we lost Danielle's elder sister to cancer at the end of an excruciating 26-month ordeal. I wrote extensively about that. Still seems like last week in many ways. The original title was "One in Three," which no longer works, given that this household is now "batting a thousand" in the cancer department. I spent most of 2015 dealing with non-life-threatening albeit serious enough prostate cancer, recounted here.

Then, on March 29th, 2017, Danielle was unceremoniously apprised of her staggering diagnosis of Stage IV metastatic pancreatic cancer. It's "Three in Three" now at our house.

This is gonna take a while. Stay tuned. I have to also try to keep up with KHIT topical stuff ongoing as time permits.


We've had a disconcerting week. Hospice time is here. No more chemo. Wednesday night CT scan and labs during a 10-hour ER stint were dismal.

I did this "selfie" Friday night on the couch in the family room. What a year.

Time may be very short. We're a bit overwhelmed today.


Hospice (in-home) is now fully in place. Danielle's cognitive function has improved (waning of "chemo brain"), and her pain management regimen seems adequate at this point.

Day by day now.

More to come...

Sunday, February 25, 2018

#AI and health diagnostics. "Reproducibility," anyone?

From Ars Technica:

AI trained to spot heart disease risks using retina scan
The blood vessels in the eye reflect the state of the whole circulatory system.

The idea behind using a neural network for image recognition is that you don't have to tell it what to look for in an image. You don't even need to care about what it looks for. With enough training, the neural network should be able to pick out details that allow it to make accurate identifications.

For things like figuring out whether there's a cat in an image, neural networks don't provide much, if any, advantages over the actual neurons in our visual system. But where they can potentially shine are cases where we don't know what to look for. There are cases where images may provide subtle information that a human doesn't understand how to read, but a neural network could pick up on with the appropriate training.

Now, researchers have done just that, getting a deep-learning algorithm to identify risks of heart disease using an image of a patient's retina.

The idea isn't quite as nuts as it might sound. The retina has a rich collection of blood vessels, and it's possible to detect issues in those that also affect the circulatory system as a whole; things like high levels of cholesterol or elevated blood pressure leave a mark on the eye. So, a research team consisting of people at Google and Verily Life Sciences decided to see just how well a deep-learning network could do at figuring those out from retinal images.

To train the network, they used a total of nearly 300,000 patient images tagged with information relevant to heart disease like age, smoking status, blood pressure, and BMI. Once trained, the system was set loose on another 13,000 images to see how it did.

Simply by looking at the retinal images, the algorithm was typically able to get within 3.5 years of a patient's actual age. It also did well at estimating the patient's blood pressure and body mass index. Given those successes, the team then trained a similar network to use the images to estimate the risk of a major cardiac problem within the next five years. It ended up having similar performance to a calculation that used many of the factors mentioned above to estimate cardiac risk—but the algorithm did it all from an image, rather than some tests and a detailed questionnaire.

The neat thing about this work is that the algorithm was set up so it could report back what it was focusing on in order to make its diagnoses. For things like age, smoking status, and blood pressure, the software focused on features of the blood vessels. Training it to predict gender ended up causing it to focus on specific features scattered throughout the eye, while body mass index ended up without any obvious focus, suggesting there are signals of BMI spread throughout the retina…
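The train-then-evaluate pattern described in the excerpt can be sketched in miniature. To be clear, this is a toy illustration, not the Google/Verily model: a single synthetic "retinal feature" stands in for an image, a one-variable least-squares fit stands in for the deep network, and every number below is invented.

```python
# Toy sketch of the train/evaluate pattern described above.
# NOT the Google/Verily model: a synthetic scalar "retinal feature"
# stands in for an image, and a one-variable least-squares fit stands
# in for the deep network. All numbers here are illustrative.
import random
import statistics

random.seed(0)

def make_patients(n):
    """Synthetic (feature, age) pairs: age tracks the feature plus noise."""
    pairs = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        age = 50.0 + 5.0 * x + random.gauss(0.0, 3.0)  # noise of a few years
        pairs.append((x, age))
    return pairs

train = make_patients(300)   # miniature stand-in for the ~300,000 training images
test = make_patients(130)    # ... and for the 13,000 held-out images

# "Training": ordinary least-squares slope and intercept.
xs = [x for x, _ in train]
ys = [y for _, y in train]
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# "Evaluation": mean absolute error on held-out patients, in years --
# the analogue of the "within 3.5 years of a patient's actual age" figure.
mae = statistics.fmean(abs((intercept + slope * x) - y) for x, y in test)
print(f"held-out MAE: {mae:.1f} years")
```

The essential point survives the miniaturization: the model is scored only on patients it never saw during fitting, which is what makes the reported error figures meaningful.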
OK, I'm all for reliable, accurate tech dx assistance. But, from my latest (paywalled) issue of Science Magazine:

Last year, computer scientists at the University of Montreal (U of M) in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark's source code wasn't published. The researchers had to recreate it from the published description. But they couldn't get their version to match the benchmark's claimed performance, says Nan Rosemary Ke, a Ph.D. student in the U of M lab. “We tried for 2 months and we couldn't get anywhere close.”

The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols. “I think people outside the field might assume that because we have code, reproducibility is kind of guaranteed,” says Nicolas Rougier, a computational neuroscientist at France's National Institute for Research in Computer Science and Automation in Bordeaux. “Far from it.” Last week, at a meeting of the Association for the Advancement of Artificial Intelligence (AAAI) in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.

The most basic problem is that researchers often don't share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm's code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode”—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)

Researchers say there are many reasons for the missing details: The code might be a work in progress, owned by a company, or held tightly by a researcher eager to stay ahead of the competition. It might be dependent on other code, itself unpublished. Or it might be that the code is simply lost, on a crashed disk or stolen laptop—what Rougier calls the “my dog ate my program” problem.

Assuming you can get and run the original code, it still might not do what you expect. In the area of AI called machine learning, in which computers derive expertise from experience, the training data for an algorithm can influence its performance. Ke suspects that not knowing the training for the speech-recognition benchmark was what tripped up her group. “There's randomness from one run to another,” she says. You can get “really, really lucky and have one run with a really good number,” she adds. “That's usually what people report.”…
Issues of "proprietary code," "intellectual property," etc.? Moreover, there's the additional problem, cited in the Science article, that AI applications, by virtue of their "learning" functions, are not strictly "algorithmic." There's a "random walk" aspect, no? And accuracy of AI results assumes accuracy of the training data; otherwise, the AI software simply learns our mistakes.
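The run-to-run randomness Ke describes is easy to demonstrate. In the hypothetical sketch below, a noisy function stands in for an entire training pipeline; the only point is that unseeded runs scatter (inviting cherry-picking of the "really, really lucky" one) while a pinned random state repeats exactly.

```python
# Minimal sketch of run-to-run randomness in "training." A noisy toy
# function stands in for a real learning pipeline; all numbers are
# invented for illustration.
import random

def train_and_score(seed=None):
    """Pretend training run: random initialization plus noisy evaluation."""
    rng = random.Random(seed)
    init_quality = rng.random()          # stand-in for random weight init
    eval_noise = rng.gauss(0.0, 0.02)    # stand-in for nondeterministic eval
    return 0.80 + 0.1 * init_quality + eval_noise

# Unseeded: every run reports a different score.
unseeded = [train_and_score() for _ in range(5)]

# Seeded: pinning the random state makes the run exactly repeatable.
seeded = [train_and_score(seed=42) for _ in range(5)]

print("unseeded scores:", [round(s, 4) for s in unseeded])
print("seeded scores:  ", [round(s, 4) for s in seeded])
assert len(set(seeded)) == 1  # identical every time
```

Publishing the seed (and the data) is the cheap fix; real pipelines also have nondeterminism that a seed alone can't pin down, such as parallel hardware scheduling.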

Years ago, when I was Chair of the ASQ Las Vegas Section, we once had a presentation on the "software life cycle QA" of military fighter jets' avionics at nearby Nellis AFB. That stuff was tightly algorithmic, and was managed with an obsessive beginning-to-end focus on accuracy and reliability.


Update: of relevance, from Science Based Medicine: 
Replication is the cornerstone of quality control in science, and so failure to replicate studies is definitely a concern. How big a problem is replication, and what can and should be done about it?

As a technical point, there is a difference between the terms “replication” and “reproduction” although I often see the terms used interchangeably (and I probably have myself). Results are said to be reproducible if you analyse the same data again and get the same results. Results are replicable when you repeat the study to obtain fresh data and get the same results.

There are also different kinds of replication. An exact replication, as the name implies, is an effort to exactly repeat the original study in every detail. But scientists acknowledge that “exact” replications are always approximate. There are always going to be slight differences in the materials used and the methodology...
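The reproducible-versus-replicable distinction above can be made concrete with a hypothetical study: re-analyzing the same data must return the identical number, while repeating the study with fresh data returns a statistically similar but not identical one.

```python
# Concrete illustration of the reproducible-vs-replicable distinction.
# "Reproduce": re-analyze the SAME data  -> identical result.
# "Replicate": repeat the study with FRESH data -> similar, not identical.
# The "study" here is hypothetical, invented for the example.
import random
import statistics

def run_study(rng):
    """Hypothetical study: 500 measurements of a quantity with true mean 100."""
    return [rng.gauss(100.0, 10.0) for _ in range(500)]

def analyze(data):
    return statistics.fmean(data)

rng = random.Random(1)
original_data = run_study(rng)

reproduced = analyze(original_data)   # same data, same analysis
replicated = analyze(run_study(rng))  # fresh data, same protocol

assert reproduced == analyze(original_data)  # reproduction is exact
print(f"original estimate:  {reproduced:.2f}")
print(f"replication result: {replicated:.2f} (close, but not identical)")
```

Which is why sharing code and data guarantees (at best) reproducibility, not replicability: the latter requires the finding itself to be robust.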
From the lab methodology chapter of my 1998 grad school thesis:
The terms “accuracy” and “precision” are not synonyms. The former refers to closeness of agreement with agreed-upon reference standards, while the latter has to do with the extent of variability in repeated measurements. One can be quite precise, and quite precisely wrong. Precision, in a sense, is a necessary but insufficient prerequisite for the demonstration of “accuracy.” Do you hit the “bull’s eye” red center of the target all the time, or are your shots scattered all over? Are they tightly clustered lower left (high precision, poor accuracy), or widely scattered lower left (poor precision, poor accuracy)? In an analytical laboratory, the “accuracy” of production results cannot be directly determined; it is necessarily inferred from the results of quality control (“QC”) data. If the lab does not keep ongoing, meticulous (and expensive) QC records of the performance histories of all instruments and operators, determination of accuracy and precision is not possible….

A “spike” is a sample containing a “known” concentration of an analyte derived from an “NIST-traceable” reference source of established and optimal purity (NIST is the National Institute of Standards and Technology, official source of all U.S. measurement reference standards). A “matrix blank” is an actual sample specimen “known” to not contain any target analytes. Such quality control samples should be run through the lab production process “blind,” i.e., posing as a normal client specimens. Blind testing is the preferred method of quality control assessment, simple in principle but difficult to administer in practice, as lab managers and technicians are usually adept at sniffing out inadequately concealed blinds, which subsequently receive special scrutiny. This is particularly true at certification or contract award time; staffs are typically put on “red alert” when Performance Evaluation samples are certain to arrive in advance of license approvals or contract competitions. Such costly vigilance may be difficult to maintain once the license is on the wall and the contracts signed and filed away…
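The accuracy/precision distinction in the excerpt maps directly onto two statistics of repeated measurements: bias against a reference value (accuracy) and spread among the replicates (precision). A small illustration, with made-up "spike" recoveries against an invented reference value:

```python
# Illustration of the accuracy-vs-precision distinction above, using
# made-up replicate measurements of a "spike" with a known reference
# value. The numbers are invented; only the statistics matter.
import statistics

REFERENCE = 10.00  # hypothetical "known" spike concentration

# Tightly clustered but biased low: high precision, poor accuracy.
precise_but_wrong = [9.01, 9.03, 8.99, 9.02, 9.00]

# Centered on the reference but scattered: poor precision, decent accuracy.
scattered_but_centered = [10.4, 9.7, 10.2, 9.5, 10.3]

def summarize(name, runs):
    bias = statistics.fmean(runs) - REFERENCE  # accuracy: distance from reference
    spread = statistics.stdev(runs)            # precision: run-to-run variability
    print(f"{name}: bias={bias:+.2f}, stdev={spread:.2f}")
    return bias, spread

b1, s1 = summarize("precise_but_wrong     ", precise_but_wrong)
b2, s2 = summarize("scattered_but_centered", scattered_but_centered)
```

Note that neither statistic is computable without the reference value and the replicates, which is precisely why the QC records described above are non-negotiable.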

#AI developers, take note. Particularly in the health care space. If someone doesn't get their pizza delivery because of AI errors, that's trivial. Miss an exigent clinical dx, and that's another matter entirely.

Related Science Mag article (same issue, Feb. 15th, 2018): "Missing data hinder replication of artificial intelligence studies."

Also tangentially apropos, my November post "Artificial Intelligence and Ethics." And, "Digitech AI news updates."


Also of relevance. A nice long read:

The Coming Software Apocalypse
A small group of programmers wants to change how we code — before catastrophe strikes.

…It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical — a program that is a thousand times more complex than another takes up the same actual space — it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”…
Read all of it.


OpenAI's mission is to build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs.

We're a non-profit research company. Our full-time staff of 60 researchers and engineers is dedicated to working towards our mission regardless of the opportunities for selfish gain which arise along the way...
Lots of ongoing OpenAI news here. They're on Twitter here.


From Wired:
Why Artificial Intelligence Researchers Should Be More Paranoid
Life has gotten more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence. Speech recognition works most of the time, for example, and you can unlock the new iPhone with your face.

People with the skills to build such systems have reaped great benefits—they’ve become the most prized of tech workers. But a new report on the downsides of progress in AI warns they need to pay more attention to the heavy moral burdens created by their work.

The 99-page document unspools an unpleasant and sometimes lurid laundry list of malicious uses of artificial-intelligence technology. It calls for urgent and active discussion of how AI technology could be misused. Example scenarios given include cleaning robots being repurposed to assassinate politicians, or criminals launching automated and highly personalized phishing campaigns.

One proposed defense against such scenarios: AI researchers becoming more paranoid, and less open. The report says people and companies working on AI need to think about building safeguards against criminals or attackers into their technology—and even to withhold certain ideas or tools from public release…
We all need to closely read both the article and the 99-page report. The Exec Summary of the Report:

I can assume that many of you have watched the 2018 Winter Olympics. The opening and closing ceremonies featuring dynamic choreographed drone light shows were beautiful, amazing.

Now, imagine a huge hostile swarm of small drones, each armed with explosives, target-enabled with the GPS coordinates of the White House and/or Capitol Hill, AI-assisted, remotely "launched" by controllers halfway around the world.

From a distance they might well resemble a large flock of birds. They wouldn't all have to get through.



More to come...

Monday, February 19, 2018

The costs of firearms violence

I've been stewing, aghast, over the Parkland FL mass school shooting, finding it difficult to move on to other topics just yet. For one thing, I tweeted,

My Reno physician friend Andy responded on Facebook with this link:

Emergency Department Visits For Firearm-Related Injuries In The United States, 2006–14

Firearm-related deaths are the third leading cause of injury-related deaths in the United States. Yet limited data exist on contemporary epidemiological trends and risk factors for firearm-related injuries. Using data from the Nationwide Emergency Department Sample, we report epidemiological trends and quantify the clinical and financial burden associated with emergency department (ED) visits for firearm-related injuries. We identified 150,930 patients—representing a weighted total of 704,916 patients nationally—who presented alive to the ED in the period 2006–14 with firearm-related injuries. Such injuries were approximately nine times more common among male than female patients and highest among males ages 20–24. Of the patients who presented alive to the ED, 37.2 percent were admitted to inpatient care, while 8.3 percent died during their ED visit or inpatient admission. The mean per person ED and inpatient charges were $5,254 and $95,887, respectively, resulting in an annual financial burden of approximately $2.8 billion in ED and inpatient charges. Although future research is warranted to better understand firearm-related injuries, policy makers might consider implementing universal background checks for firearm purchases and limiting access to firearms for people with a history of violence or previous convictions to reduce the clinical and financial burden associated with these injuries.
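The abstract's topline numbers invite a quick back-of-envelope cross-check. The paper's methodology isn't shown in the abstract, so this is only a rough consistency check using the figures it reports, not a reproduction of the study's estimate:

```python
# Back-of-envelope cross-check of the Health Affairs figures quoted above,
# using only numbers from the abstract. The paper's exact methodology is
# not shown there, so this is a rough sanity check, not a reproduction.
patients_total = 704_916     # weighted national total of ED patients, 2006-14
years = 9                    # 2006 through 2014, inclusive
admit_rate = 0.372           # share admitted to inpatient care
ed_charge = 5_254            # mean per-person ED charge ($)
inpatient_charge = 95_887    # mean per-person inpatient charge ($)

patients_per_year = patients_total / years
ed_annual = patients_per_year * ed_charge
inpatient_annual = patients_per_year * admit_rate * inpatient_charge

print(f"ED charges/year:        ${ed_annual / 1e9:.2f}B")
print(f"Inpatient charges/year: ${inpatient_annual / 1e9:.2f}B")
print(f"Combined:               ${(ed_annual + inpatient_annual) / 1e9:.2f}B")
```

On these crude assumptions, the inpatient component alone lands near the quoted $2.8 billion, and adding ED charges pushes the combined total above $3 billion, which hints at adjustments or exclusions not visible in the abstract.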
My response on Facebook:
Very good. But, we have to add to those data all of the postacute care stuff. Relatedly, how about all of the expenses associated with law-enforcement and other first responders? Not to mention the myriad n-dimensional legal expenses.
Beyond all the unquantifiable tragic, searing human miseries, what about the broader adverse economic impacts? apropos,
As reported by CNN, NOAA estimates the aggregate cost of 2017 U.S. natural disasters at $306 billion. I can't help but wonder how much of that is reflected in the "recent GDP growth" that Donald Trump never fails to brag about?
I can't help but feel that the Health Affairs article seriously understates the overall financial impacts of these shootings. We can be sure that the NRA will not let the government do any precise analytical studies on the topic -- "that I can tell you."

The President stopped by in Parkland on his way to Mar-a-Lago.

I'd tweeted this:

That was before I saw the photos.

Barron Trump will most certainly never face the muzzle of an assault rifle while at school



The (tobacco industry) analogy is a bit of a stretch, I know (and I know that some of my "gun enthusiast" friends will scoff). But, not that much of one. A "perfect analogy" is essentially a redundancy, anyway. Relevant similarities are what matter.

In civil tort terminology, the “Inherently Dangerous Instrumentality” is one for which there is no “safe” use. “Used as directed,” it harms or kills its customers (e.g., tobacco products; not even mentioning the tangential effects of “second-hand smoke”). Cigarettes were finally found legally to be “inherently dangerous instrumentalities” (notwithstanding that many users were/are not made diagnosably “ill” or killed by smoking). While that designation did not outlaw tobacco products, it laid the foundation for by-now settled legislative and regulatory actions.

While, yes, a firearm can be used “safely,” the projectiles it fires are designed and manufactured for one purpose — the damage or destruction of the objects of their targeted aim, be they beer bottles, tin cans, paper targets, or living beings. IMO, a firearm comes quite close enough to the logic of the “inherently dangerous instrumentality” to warrant rational regulation (slippery slope hand-wringing by 2nd Amendment paranoid “gun enthusiasts” aside). That this does not happen owing principally to the political power of the NRA is an outrage.


Things may well get materially worse. From The Incidental Economist:

AI Rifles and Future Mass Shootings
The scale and frequency of mass killings have been increasing, and this is likely to continue. One reason — but just one — is that weapons are always getting more lethal. One of the next technical innovations in small arms will be the use of artificial intelligence (AI) to improve the aiming of weapons. There is no reason for civilians to have this technology and we should ban it now…
Good grief.

Hey, chill, the "Tracking Point XS1" is merely an improved accuracy deer rifle, just a 21st century musket. Pay no attention to the heat vent barrel outer cover.

The Intractable Debate over Guns

When Russian forces stormed the school held hostage by Chechen terrorists, over 300 people died. The Beslan school siege wasn’t the worst terrorist attack arithmetically – the fatalities were only a tenth of September 11th. What made the school siege particularly gruesome was that many who died, and died in the most gruesome manner, were children.

There’s something particularly distressing about kids being massacred, which can’t be quantified mathematically. You either get that point or you don’t. And the famed Chechen rebel, Shamil Basayev, got it. Issuing a statement after the attack Basayev claimed responsibility for the siege but called the deaths a “tragedy.” He did not think that the Russians would storm the school. Basayev expressed regret saying that he was “not delighted by what happened there.” Basayev was not known for contrition but death of children doesn’t look good even for someone whose modus operandi was in killing as many as possible.

There’s a code even amongst terrorists – you don’t slaughter children – it’s ok flying planes into big towers but not ok deliberately killing children. Of course, neither is ok but the point is that even the most immoral of our species have a moral code. Strict utilitarians won’t understand this moral code. Strict utilitarians, or rational amoralists, accord significance by multiplying the number of life years lost by the number died, and whether a death from medical error or of a child burnt in a school siege, the conversion factor is the same. Thus, for rational amoralists sentimentality specifically over children dying, such as in Parkland, Florida, in so far as this sentimentality affects policy, must be justified scientifically.

The debate over gun control is paralyzed by unsentimental utilitarianism but with an ironic twist – it is the conservatives, known to eschew utilitarianism, who seek refuge in it. After every mass killing, I receive three lines of reasoning from conservatives opposed to gun control: a) If you restrict guns there’ll be a net increase in crimes and deaths, b) there’s no evidence restricting access to guns will reduce mass shootings, and c) people will still get guns if they really wish to. This type of reasoning comes from the same people who oppose population health, and who deeply oppose the sacrifice of individuals for the greater good, i.e. oppose utilitarianism…
Read all of it.



 Aren't we all comforted? #NeverAgain


More to come...#NeverAgain