Search the KHIT Blog

Sunday, February 25, 2018

#AI and health diagnostics. "Reproducibility," anyone?

From ARS Technica:

AI trained to spot heart disease risks using retina scan
The blood vessels in the eye reflect the state of the whole circulatory system.

The idea behind using a neural network for image recognition is that you don't have to tell it what to look for in an image. You don't even need to care about what it looks for. With enough training, the neural network should be able to pick out details that allow it to make accurate identifications.

For things like figuring out whether there's a cat in an image, neural networks don't provide much, if any, advantage over the actual neurons in our visual system. But where they can potentially shine are cases where we don't know what to look for. There are cases where images may provide subtle information that a human doesn't understand how to read, but a neural network could pick up on with the appropriate training.

Now, researchers have done just that, getting a deep-learning algorithm to identify risks of heart disease using an image of a patient's retina.

The idea isn't quite as nuts as it might sound. The retina has a rich collection of blood vessels, and it's possible to detect issues in those that also affect the circulatory system as a whole; things like high levels of cholesterol or elevated blood pressure leave a mark on the eye. So, a research team consisting of people at Google and Verily Life Sciences decided to see just how well a deep-learning network could do at figuring those out from retinal images.

To train the network, they used a total of nearly 300,000 patient images tagged with information relevant to heart disease like age, smoking status, blood pressure, and BMI. Once trained, the system was set loose on another 13,000 images to see how it did.
Simply by looking at the retinal images, the algorithm was typically able to get within 3.5 years of a patient's actual age. It also did well at estimating the patient's blood pressure and body mass index. Given those successes, the team then trained a similar network to use the images to estimate the risk of a major cardiac problem within the next five years. It ended up having similar performance to a calculation that used many of the factors mentioned above to estimate cardiac risk—but the algorithm did it all from an image, rather than some tests and a detailed questionnaire.

The neat thing about this work is that the algorithm was set up so it could report back what it was focusing on in order to make its diagnoses. For things like age, smoking status, and blood pressure, the software focused on features of the blood vessels. Training it to predict gender ended up causing it to focus on specific features scattered throughout the eye, while body mass index ended up without any obvious focus, suggesting there are signals of BMI spread throughout the retina…
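For the code-minded among my readers: here's a toy sketch of that training idea, emphatically not Google's model (theirs was a deep convolutional network trained on roughly 300,000 labeled retinal photos). Each "image" below is just a short list of numbers with a hidden linear signal standing in for something like patient age. The point is only that the program tunes its own weights from tagged examples, rather than being told what to look for.

```python
# Toy supervised-regression sketch: learn to predict a number ("age")
# from raw inputs, using nothing but labeled examples and gradient
# descent. All data and weights here are synthetic.
import random

random.seed(0)

N_FEATURES = 8  # stand-in for pixel/feature values
TRUE_W = [0.5, -1.2, 0.8, 0.0, 2.0, -0.3, 1.1, 0.4]  # hidden signal

def make_example():
    x = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
    y = sum(w * xi for w, xi in zip(TRUE_W, x))  # the "age" signal
    return x, y

train = [make_example() for _ in range(2000)]
test = [make_example() for _ in range(200)]

# Train a single linear unit by stochastic gradient descent.
w = [0.0] * N_FEATURES
lr = 0.05
for epoch in range(20):
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

# Evaluate on held-out examples, as the researchers did with their
# separate set of ~13,000 images.
mae = sum(abs(sum(wi * xi for wi, xi in zip(w, x)) - y)
          for x, y in test) / len(test)
print(f"mean absolute error on held-out data: {mae:.3f}")
```

Nobody told the program which features mattered; it recovered the hidden signal from examples alone, which is the whole pitch of the retina work.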
OK, I'm all for reliable, accurate tech dx assistance. But, from my latest (paywalled) issue of Science Magazine:

Last year, computer scientists at the University of Montreal (U of M) in Canada were eager to show off a new speech recognition algorithm, and they wanted to compare it to a benchmark, an algorithm from a well-known scientist. The only problem: The benchmark's source code wasn't published. The researchers had to recreate it from the published description. But they couldn't get their version to match the benchmark's claimed performance, says Nan Rosemary Ke, a Ph.D. student in the U of M lab. “We tried for 2 months and we couldn't get anywhere close.”

The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. AI researchers have found it difficult to reproduce many key results, and that is leading to a new conscientiousness about research methods and publication protocols. “I think people outside the field might assume that because we have code, reproducibility is kind of guaranteed,” says Nicolas Rougier, a computational neuroscientist at France's National Institute for Research in Computer Science and Automation in Bordeaux. “Far from it.” Last week, at a meeting of the Association for the Advancement of Artificial Intelligence (AAAI) in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.

The most basic problem is that researchers often don't share their source code. At the AAAI meeting, Odd Erik Gundersen, a computer scientist at the Norwegian University of Science and Technology in Trondheim, reported the results of a survey of 400 algorithms presented in papers at two top AI conferences in the past few years. He found that only 6% of the presenters shared the algorithm's code. Only a third shared the data they tested their algorithms on, and just half shared “pseudocode”—a limited summary of an algorithm. (In many cases, code is also absent from AI papers published in journals, including Science and Nature.)

Researchers say there are many reasons for the missing details: The code might be a work in progress, owned by a company, or held tightly by a researcher eager to stay ahead of the competition. It might be dependent on other code, itself unpublished. Or it might be that the code is simply lost, on a crashed disk or stolen laptop—what Rougier calls the “my dog ate my program” problem.

Assuming you can get and run the original code, it still might not do what you expect. In the area of AI called machine learning, in which computers derive expertise from experience, the training data for an algorithm can influence its performance. Ke suspects that not knowing the training data for the speech-recognition benchmark was what tripped up her group. “There's randomness from one run to another,” she says. You can get “really, really lucky and have one run with a really good number,” she adds. “That's usually what people report.”…
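Ke's "lucky run" point is easy to demonstrate. A hedged sketch (toy numbers, standing in for no real benchmark): re-run the same stochastic training procedure under different random seeds and compare the best run to the typical one.

```python
# The "lucky run" problem: the same stochastic training procedure,
# re-run with different random seeds, yields a spread of scores.
# Reporting only the best run overstates typical performance.
import random
import statistics

def train_and_score(seed):
    """Stand-in for a full training run; the score varies with the seed."""
    rng = random.Random(seed)
    base_accuracy = 0.90
    return base_accuracy + rng.gauss(0, 0.02)  # run-to-run noise

scores = [train_and_score(seed) for seed in range(30)]

print(f"best run:  {max(scores):.3f}")
print(f"mean, sd:  {statistics.mean(scores):.3f}, {statistics.stdev(scores):.3f}")
```

Report the max and you look great; report the mean plus spread and readers can judge. This is also why a paper that omits its seeds and training data can be so hard to replicate.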
Issues of "proprietary code," "intellectual property," etc.? Moreover, there's the additional problem, cited in the Science article, that AI applications, by virtue of their "learning" functions, are not strictly "algorithmic." There's a "random walk" aspect, no? And the accuracy of AI results assumes the accuracy of the training data. Otherwise, the AI software learns our mistakes.

Years ago, when I was Chair of the ASQ Las Vegas Section, we had a presentation on the "software life cycle QA" of military fighter jets' avionics at nearby Nellis AFB. That stuff was tightly algorithmic, and was managed with an obsessive beginning-to-end focus on accuracy and reliability.


Update: of relevance, from Science Based Medicine: 
Replication is the cornerstone of quality control in science, and so failure to replicate studies is definitely a concern. How big a problem is replication, and what can and should be done about it?

As a technical point, there is a difference between the terms “replication” and “reproduction” although I often see the terms used interchangeably (and I probably have myself). Results are said to be reproducible if you analyse the same data again and get the same results. Results are replicable when you repeat the study to obtain fresh data and get the same results.

There are also different kinds of replication. An exact replication, as the name implies, is an effort to exactly repeat the original study in every detail. But scientists acknowledge that “exact” replications are always approximate. There are always going to be slight differences in the materials used and the methodology...
From the lab methodology chapter of my 1998 grad school thesis:
The terms “accuracy” and “precision” are not synonyms. The former refers to closeness of agreement with agreed-upon reference standards, while the latter has to do with the extent of variability in repeated measurements. One can be quite precise, and quite precisely wrong. Precision, in a sense, is a necessary but insufficient prerequisite for the demonstration of “accuracy.” Do you hit the “bull’s eye” red center of the target all the time, or are your shots scattered all over? Are they tightly clustered lower left (high precision, poor accuracy), or widely scattered lower left (poor precision, poor accuracy)? In an analytical laboratory, the “accuracy” of production results cannot be directly determined; it is necessarily inferred from the results of quality control (“QC”) data. If the lab does not keep ongoing, meticulous (and expensive) QC records of the performance histories of all instruments and operators, determination of accuracy and precision is not possible….

A “spike” is a sample containing a “known” concentration of an analyte derived from an “NIST-traceable” reference source of established and optimal purity (NIST is the National Institute of Standards and Technology, official source of all U.S. measurement reference standards). A “matrix blank” is an actual sample specimen “known” to not contain any target analytes. Such quality control samples should be run through the lab production process “blind,” i.e., posing as normal client specimens. Blind testing is the preferred method of quality control assessment, simple in principle but difficult to administer in practice, as lab managers and technicians are usually adept at sniffing out inadequately concealed blinds, which subsequently receive special scrutiny. This is particularly true at certification or contract award time; staffs are typically put on “red alert” when Performance Evaluation samples are certain to arrive in advance of license approvals or contract competitions. Such costly vigilance may be difficult to maintain once the license is on the wall and the contracts signed and filed away…
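For the quality-engineering-minded, the arithmetic behind both excerpts fits in a few lines of Python. Everything numeric here is illustrative: the 90-110% recovery window, the detection limit, and the simulated biases are placeholders, not drawn from any real method.

```python
# Part 1: the target-shooting analogy in numbers. Accuracy = closeness
# of the MEAN measurement to the reference value (low bias); precision
# = tightness of repeated measurements (low standard deviation).
import random
import statistics

REFERENCE = 100.0  # the agreed-upon "bull's eye" reference value
rng = random.Random(42)

def simulate(bias, spread, n=1000):
    """Repeated measurements with a fixed offset (bias) and noise (spread)."""
    return [REFERENCE + bias + rng.gauss(0, spread) for _ in range(n)]

methods = {
    "accurate & precise":    simulate(bias=0.0, spread=0.5),
    "precise, inaccurate":   simulate(bias=5.0, spread=0.5),  # tight cluster, off-center
    "imprecise, inaccurate": simulate(bias=5.0, spread=4.0),  # scattered, off-center
}
for name, results in methods.items():
    bias = statistics.mean(results) - REFERENCE
    sd = statistics.stdev(results)
    print(f"{name:22s} bias={bias:+.2f}  sd={sd:.2f}")

# Part 2: spike/blank QC arithmetic. A spike is judged by percent
# recovery against its known (NIST-traceable) value; a matrix blank
# must read below the method detection limit (MDL).
def spike_recovery(measured, known):
    return 100.0 * measured / known

def qc_pass(spike_measured, spike_known, blank_measured,
            limits=(90.0, 110.0), mdl=0.5):
    lo, hi = limits
    return (lo <= spike_recovery(spike_measured, spike_known) <= hi
            and blank_measured < mdl)

# A spiked sample known to contain 20.0 units, measured at 19.1,
# alongside a blank reading 0.1:
print(f"recovery: {spike_recovery(19.1, 20.0):.1f}%")
print("QC pass:", qc_pass(19.1, 20.0, 0.1))
```

"Precise, inaccurate" is the one to watch for: the tight SD looks reassuring, but every result is precisely wrong, and only the QC records against known references reveal it.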

#AI developers, take note. Particularly in the health care space. If someone doesn't get their pizza delivery because of AI errors, that's trivial. Miss an exigent clinical dx, and that's another matter entirely.

Related Science Mag article (same issue, Feb. 15th, 2018): "Missing data hinder replication of artificial intelligence studies."

Also tangentially apropos, my November post "Artificial Intelligence and Ethics." And, "Digitech AI news updates."


Also of relevance. A nice long read:

The Coming Software Apocalypse
A small group of programmers wants to change how we code — before catastrophe strikes.

…It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical — a program that is a thousand times more complex than another takes up the same actual space — it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”…
Read all of it.


OpenAI's mission is to build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs.

We're a non-profit research company. Our full-time staff of 60 researchers and engineers is dedicated to working towards our mission regardless of the opportunities for selfish gain which arise along the way...
Lots of ongoing OpenAI news here. They're on Twitter here.


From Wired:
Why Artificial Intelligence Researchers Should Be More Paranoid
LIFE HAS GOTTEN more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence. Speech recognition works most of the time, for example, and you can unlock the new iPhone with your face.

People with the skills to build such systems have reaped great benefits—they’ve become the most prized of tech workers. But a new report on the downsides of progress in AI warns they need to pay more attention to the heavy moral burdens created by their work.

The 99-page document unspools an unpleasant and sometimes lurid laundry list of malicious uses of artificial-intelligence technology. It calls for urgent and active discussion of how AI technology could be misused. Example scenarios given include cleaning robots being repurposed to assassinate politicians, or criminals launching automated and highly personalized phishing campaigns.

One proposed defense against such scenarios: AI researchers becoming more paranoid, and less open. The report says people and companies working on AI need to think about building safeguards against criminals or attackers into their technology—and even to withhold certain ideas or tools from public release…
We all need to closely read both the article and the 99-page report. The Exec Summary of the Report:

I assume that many of you watched the 2018 Winter Olympics. The opening and closing ceremonies, featuring dynamic choreographed drone light shows, were beautiful, amazing.

Now, imagine a huge hostile swarm of small drones, each armed with explosives, target-enabled with the GPS coordinates of the White House and/or Capitol Hill, AI-assisted, remotely "launched" by controllers halfway around the world.

From a distance they might well resemble a large flock of birds. They wouldn't all have to get through.



More to come...

Monday, February 19, 2018

The costs of firearms violence

I've been stewing, aghast, over the Parkland FL mass school shooting, finding it difficult to move on to other topics just yet. For one thing, I tweeted,

My Reno physician friend Andy responded on Facebook with this link:

Emergency Department Visits For Firearm-Related Injuries In The United States, 2006–14

Firearm-related deaths are the third leading cause of injury-related deaths in the United States. Yet limited data exist on contemporary epidemiological trends and risk factors for firearm-related injuries. Using data from the Nationwide Emergency Department Sample, we report epidemiological trends and quantify the clinical and financial burden associated with emergency department (ED) visits for firearm-related injuries. We identified 150,930 patients—representing a weighted total of 704,916 patients nationally—who presented alive to the ED in the period 2006–14 with firearm-related injuries. Such injuries were approximately nine times more common among male than female patients and highest among males ages 20–24. Of the patients who presented alive to the ED, 37.2 percent were admitted to inpatient care, while 8.3 percent died during their ED visit or inpatient admission. The mean per person ED and inpatient charges were $5,254 and $95,887, respectively, resulting in an annual financial burden of approximately $2.8 billion in ED and inpatient charges. Although future research is warranted to better understand firearm-related injuries, policy makers might consider implementing universal background checks for firearm purchases and limiting access to firearms for people with a history of violence or previous convictions to reduce the clinical and financial burden associated with these injuries.
My response on Facebook:
Very good. But, we have to add to those data all of the postacute care stuff. Relatedly, how about all of the expenses associated with law-enforcement and other first responders? Not to mention the myriad n-dimensional legal expenses.
Beyond all the unquantifiable tragic, searing human miseries, what about the broader adverse economic impacts? apropos,
As reported by CNN, NOAA estimates the aggregate cost of 2017 U.S. natural disasters at $306 billion. I can't help but wonder how much of that is reflected in the "recent GDP growth" that Donald Trump never fails to brag about?
I can't help but feel that the Health Affairs article seriously understates the overall financial impacts of these shootings. We can be sure that the NRA will not let the government do any precise analytical studies on the topic -- "that I can tell you."

The President stopped by in Parkland on his way to Mar-a-Lago.

I'd tweeted this:

That was before I saw the photos.

Barron Trump will most certainly never face the muzzle of an assault rifle while at school



The (tobacco industry) analogy is a bit of a stretch, I know (and I know that some of my "gun enthusiast" friends will scoff). But, not that much of one. A "perfect analogy" is essentially a redundancy, anyway. Relevant similarities are what matter.

In civil tort terminology, the “Inherently Dangerous Instrumentality” is one for which there is no "safe" use. “Used as directed,” it harms or kills its customers (e.g., tobacco products; not even mentioning the tangential effects of “second-hand smoke”). Cigarettes were finally found legally to be “inherently dangerous instrumentalities” (notwithstanding that many users were/are not made diagnosably "ill" or killed by smoking). While that designation did not outlaw tobacco products, it laid the foundation for by-now settled legislative and regulatory actions.

While, yes, a firearm can be used “safely,” the projectiles it fires are designed and manufactured for one purpose — the damage or destruction of the objects of their targeted aim, be they beer bottles, tin cans, paper targets, or living beings. IMO, a firearm comes quite close enough to the logic of the “inherently dangerous instrumentality” to warrant rational regulation (slippery slope hand-wringing by 2nd Amendment paranoid “gun enthusiasts” aside). That this does not happen owing principally to the political power of the NRA is an outrage.


Things may well get materially worse. From The Incidental Economist:

AI Rifles and Future Mass Shootings
The scale and frequency of mass killings have been increasing, and this is likely to continue. One reason — but just one — is that weapons are always getting more lethal. One of the next technical innovations in small arms will be the use of artificial intelligence (AI) to improve the aiming of weapons. There is no reason for civilians to have this technology and we should ban it now…
Good grief.

Hey, chill, the "Tracking Point XS1" is merely an improved accuracy deer rifle, just a 21st century musket. Pay no attention to the heat vent barrel outer cover.

The Intractable Debate over Guns

When Russian forces stormed the school held hostage by Chechen terrorists, over 300 people died. The Beslan school siege wasn’t the worst terrorist attack arithmetically – the fatalities were only a tenth of September 11th. What made the school siege particularly gruesome was that many who died, and died in the most gruesome manner, were children.

There’s something particularly distressing about kids being massacred, which can’t be quantified mathematically. You either get that point or you don’t. And the famed Chechen rebel, Shamil Basayev, got it. Issuing a statement after the attack Basayev claimed responsibility for the siege but called the deaths a “tragedy.” He did not think that the Russians would storm the school. Basayev expressed regret saying that he was “not delighted by what happened there.” Basayev was not known for contrition but death of children doesn’t look good even for someone whose modus operandi was in killing as many as possible.

There’s a code even amongst terrorists – you don’t slaughter children – it’s ok flying planes into big towers but not ok deliberately killing children. Of course, neither is ok but the point is that even the most immoral of our species have a moral code. Strict utilitarians won’t understand this moral code. Strict utilitarians, or rational amoralists, accord significance by multiplying the number of life years lost by the number died, and whether a death from medical error or of a child burnt in a school siege, the conversion factor is the same. Thus, for rational amoralists sentimentality specifically over children dying, such as in Parkland, Florida, in so far as this sentimentality affects policy, must be justified scientifically.

The debate over gun control is paralyzed by unsentimental utilitarianism but with an ironic twist – it is the conservatives, known to eschew utilitarianism, who seek refuge in it. After every mass killing, I receive three lines of reasoning from conservatives opposed to gun control: a) If you restrict guns there’ll be a net increase in crimes and deaths, b) there’s no evidence restricting access to guns will reduce mass shootings, and c) people will still get guns if they really wish to. This type of reasoning comes from the same people who oppose population health, and who deeply oppose the sacrifice of individuals for the greater good, i.e. oppose utilitarianism…
Read all of it.



Aren't we all comforted? #NeverAgain


More to come...#NeverAgain

Wednesday, February 14, 2018

Happy Valentine's Day from the @NRA

I'd intended to blog about some other current news today. VR Tech stuff. Caregiver update stuff. Book review stuff. #NeverAgain

It's insane. No apparent end in sight. Nope.

BTW: Consider some firearms thoughts from a military veteran, now a structural engineer.

Monday, February 12, 2018

Pain Management Sessions

‘Tough it out’: Watch Jeff Sessions recommend aspirin instead of opioids for chronic pain patients

I've had some observations about this perjurious ignoramus before. See "Jeff Sessions' Marijuana Advisor Wants Doctors to Drug-Test Everyone." See also my post from August, "The 'opioid epidemic' and the EHR."

My ailing daughter (who underwent bone scans today) is now in fairly constant, appreciable pain from her worsening Stage IV pancreatic cancer. She gets by with morphine, in the forms of MS Contin and OxyContin. Sessions can go to Hell.

Jeff Sessions: marijuana helped cause the opioid epidemic. The research: no.
The research shows that, contrary to Sessions’s remarks, medical marijuana may help mitigate the crisis.

Attorney General Jeff Sessions is blaming an old foe of his for the opioid crisis: marijuana.

Speaking at the Heritage Foundation to the Reagan Alumni Association this week, Sessions argued that cutting prescriptions for opioid painkillers is crucial to combating the crisis — since some people started on painkillers before moving on to illicit opioids like heroin and fentanyl. But then he expanded his argument to include cannabis.

“The DEA said that a huge percentage of the heroin addiction starts with prescriptions. That may be an exaggerated number; they had it as high as 80 percent,” Sessions said. “We think a lot of this is starting with marijuana and other drugs too.”

It’s true that, historically, a lot of opioid addiction started with prescribed painkillers — although that's changing. A 2017 study in Addictive Behaviors found that 51.9 percent of people entering treatment for opioid use disorder in 2015 started with prescription drugs, down from 84.7 percent in 2005. And 33.3 percent initiated with heroin in 2015, up from 8.7 percent in 2005.

Where Sessions, who once said that “good people don’t smoke marijuana,” went wrong is his suggestion that marijuana leads to heroin use — reiterating the old gateway drug theory…
The potheads are not the perps, Mr. Sessions:
Opioid makers gave millions to patient advocacy groups to sway prescribing
As the nation grapples with a worsening opioid crisis, a new report suggests that drug makers provided substantial funding to patient advocacy groups and physicians in recent years in order to influence the controversial debate over appropriate usage and prescribing.

Specifically, five drug companies funneled nearly $9 million to 14 groups working on chronic pain and issues related to opioid use between 2012 and 2017. At the same time, physicians affiliated with these groups accepted more than $1.6 million from the same companies. In total, the drug makers made more than $10 million in payments since January 2012.

“The fact that these same manufacturers provided millions of dollars to the groups suggests, at the very least, a direct link between corporate donations and the advancement of opioid-friendly messaging,” according to the report released on Monday night by U.S. Sen. Claire McCaskill, who has been probing opioid makers and wholesalers…


Relatedly, from,
Answering Our Critics – Again!
Critics of Science-Based Medicine keep making the same old tired arguments, despite the fact that their arguments have been repeatedly demolished. Here is a list of recurrent memes, with counterarguments.

Instead of a new post this week I decided to recycle and revise what I wrote about Answering Our Critics a few years ago, here and here. I thought it was time to visit this issue again, because our critics didn’t get the message. They are still flooding the Comments section with the same old tired arguments we have debunked over and over.

Some people don’t like what we have to say on Science-Based Medicine. Some attack specific points while others attack our whole approach. Every mention of complementary and alternative medicine (CAM) elicits protests in the Comments section from “true believer” users and practitioners of CAM. Every mention of a treatment that has been disproven or has not been properly tested elicits testimonials from people who claim to have experienced miraculous benefits from that treatment.

Our critics keep bringing up the same old memes, and I thought it might be useful to list those criticisms and answer them all in one place…


Heard this author interviewed on NPR's "Fresh Air" yesterday in the car while taking my daughter to Kaiser for a bone scan px.


I downloaded Kate Bowler's new book and read it straight through. Riveting. I will have plenty to cite and say about it shortly. Stay tuned.

Again, highly recommend Kate's book.

I am reminded of another fine cancer memoir I've cited before, here and here.

Gideon Burrows is still at it, recently publishing a second book. US Amazon link here.


More to come...

Tuesday, February 6, 2018

Digitech AI news updates

From NPR's All Things Considered:
Can Computers Learn Like Humans?
The world of artificial intelligence has exploded in recent years. Computers armed with AI do everything from drive cars to pick movies you'll probably like. Some have warned we're putting too much trust in computers that appear to do wondrous things.

But what exactly do people mean when they talk about artificial intelligence?

It's hard to find a universally accepted definition of artificial intelligence. Basically it's about getting a computer to be smart — getting it to do something that in the past only humans could do.
One key to artificial intelligence is machine learning. Instead of telling a computer how to do something, you write a program that lets the computer figure out how to do something all on its own…
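That "figure it out on its own" distinction can be shown in a dozen lines. A minimal, hedged illustration: rather than hard-coding a fever cutoff, let the program search for the threshold that best fits labeled examples. (Toy data and a brute-force search, but the shape is the same as real machine learning.)

```python
# Instead of telling the computer the rule ("flag readings above X"),
# give it labeled examples and let it pick the cutoff that best
# separates them. Entirely synthetic data for illustration.

# labeled (temperature, has_fever) examples
data = [(36.4, False), (36.8, False), (37.1, False), (37.3, False),
        (37.9, True), (38.2, True), (38.6, True), (39.1, True)]

def accuracy(cutoff):
    """Fraction of examples a given cutoff classifies correctly."""
    return sum((t > cutoff) == label for t, label in data) / len(data)

# "Learning": search the candidate cutoffs for the one that fits best.
candidates = [t for t, _ in data]
best_cutoff = max(candidates, key=accuracy)
print(f"learned cutoff: {best_cutoff}  accuracy: {accuracy(best_cutoff):.2f}")
```

The programmer never wrote "fever means above 37.3"; the program derived it from examples. Scale the same idea up to millions of parameters and you have modern machine learning, along with all the training-data caveats discussed above.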

From THCB:
Medicine Is a Profession That is Rapidly Losing Control of Its Tools

Artificial Intelligence hype and reality are everywhere. However, the last month or two has seen some thoughtful reflection. HHS / ONC announced “Hype to Reality: How Artificial Intelligence (AI) Can Transform Health and Healthcare,” referencing a major JASON report, “Artificial Intelligence for Health and Health Care [PDF, 817 KB].” From a legal and ethical perspective, we have a new multinational program: “PMAIL will provide a comparative analysis of the law and ethics of black-box personalized medicine…” Another Harvard affiliate writes “Optimization over Explanation,” subtitled “Maximizing the benefits of machine learning without sacrificing its intelligence.” Meanwhile, an investigative journalism report from the UK, “Google DeepMind and healthcare in an age of algorithms,” “…draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age”…
Over at

Interesting 4-part (thus far) series:
Living in the Machine
Does technology change the very state of being human?

Artificial intelligence and automation outsources even more of our cognitive functions to machines. What does this mean for art, for relationships — even for our connection to a higher being? What does it mean to be human in the age of the machine?
Comes with audio versions as well. Nice.

From the first post:
From Mining to Meaning 
If you use digital devices, AI is already being sicced on the grotesque bricolage that is your life to eliminate potential sources of “friction” — a tech-speak jargon term that means roughly “whatever grinds your gears.”

Whether it’s by monitoring your calorie intake, presenting you with “optimal” romantic prospects, or making it easier to spend a fortune on Amazon, algorithms are even now insinuating themselves into your every existential crack and crevice like so many squirts of WD-40.

The possibility of using AI to eliminate diseases is undeniably exciting. Excising problems like cancer from society would make for a better future. But is maximizing efficiency the only way to add value to the world?

Most people don’t see the world and its inhabitants simply as a resource to be mined more or less effectively, nor do we tend to think that human value is exhausted by the efficiency or otherwise of this resource mining. Sometimes we just want to make sense of things: to look closely at the world, grasp some pattern in it, and articulate its significance, without some further goal in mind. This desire is what drives people to become scholars, but it’s also why people look at art, listen to music, or strive to build relationships with their grandchildren. If a concern for efficiency is a big part of what makes us human, our desire to grasp significance and share meaningful experiences with others is just as crucial.

Like a world without cancer, a more thoughtful, artistic, and compassionate future strikes us as an unequivocal Good Thing. But adding this kind of value to the world requires something more than maximizing efficiency. Anyone who tries to “hack” being a thoughtful scholar, or a good friend, is kind of missing the point…
Good stuff.


Will robots take your job? Humans ignore the coming AI revolution at their peril.
Artificial intelligence aims to replace the human mind, not simply make industry more efficient.
by Subhash Kak
Robots have transformed industrial manufacturing, and now they are being rolled out for food production and restaurant kitchens. Already, artificial intelligence (AI) machines can do many tasks where learning and judgment are required, including self-driving cars, insurance assessment, stock trading, accounting, HR and many tasks in healthcare. So are we approaching a jobless future, or will new jobs replace the ones that are lost?
According to the optimistic view, our current phase of increasing automation will create new kinds of employment for those who have been made redundant. There is some historical precedent for this: Over a hundred years ago, people feared that the automobile revolution would be bad for workers. But while jobs related to horse-drawn carriages disappeared, the invention of the car led to a need for automobile mechanics; the internal combustion engine soon found applications in mining, airplanes and other new fields.

The difference, however, is that today’s AI technology aims to replace the human mind, not simply make industry more efficient. This will have unprecedented consequences not predicted by the advent of the car, or the automated knitting machine…
Yeah, this is not a new concern. I've hit on the topic a number of times before. See also here.

BTW, another new read. Just getting started. A lot of technical overlap between AI, IA, AR, and VR.

Strongly recommend you tour his Stanford Virtual Human Interaction Lab website.

From NPR's Science Friday (March 2016):

How advances in virtual reality will change how we work and communicate.

My specific interests go to the potential utility of this technology in health care -- inclusive of clinical pedagogy. I'll reserve judgments until I've finished Jeremy's book.

BTW: Jeff and April, recall, are deploying VR in their startup.


Any tangential AI/VR connection here? From Medium this morning:
We are our own typos

Everyone seems to be writing about the recently announced effort by Amazon, Berkshire Hathaway, and JP Morgan Chase to attack their employee health costs. It is certainly newsworthy, and I am generally interested in whatever Amazon may do in healthcare.

They may very well have some success with this effort, but until I read a positive story about employee working conditions at Amazon, I’m going to be skeptical that any disruption in healthcare they accomplish with it is something that I shouldn’t be worried about.

So, instead, I’m going to write about why we can’t recognize our own typos, and what that means for our health.

As Wired summarized the problem a few years ago: “The reason we don’t see our own typos is because what we see on the screen is competing with the version that exists in our heads.” They go on to explain that one of the great skills of our big brains is that we build mental maps of the world, but those maps are not always faithful to the actual world.

As psychologist Tom Stafford explained: “We don’t catch every detail, we’re not like computers or NSA databases. Rather, we take in sensory information and combine it with what we expect, and we extract meaning.”

Thus, typos.

Unfortunately, the same is often true with how we view our health. We don’t think we’re as overweight as we are. We think we get more exercise than we do. We think our nutrition is better than it is. Overall, we think we’re in better health than we probably are.

Over the past few decades, the U.S. has been suffering “epidemics” of obesity, diabetes, asthma, and allergies, to name a few. Over half of adults now have one or more chronic conditions. Yet two-thirds of us still report being in good or excellent health, virtually unchanged for at least the last twenty years.

Something doesn’t jibe…

I'm reminded of the old QA auditor's saying, "you get what you INspect, not what you EXpect."

More news. Margalit is back with a vengeance:

Ambergan Prime

Dear primary care doctor, Jeff Bezos is about to devour your lunch. All of it. And then he’ll eat the table, the plates, the napkins and the utensils too, so you’ll never have lunch ever again. Oh yeah, and they’ll also finally disrupt and fix health care once and for all, because enough is enough already. Mr. Bezos, it seems, got together with two of his innovator buddies, Warren Buffet from Berkshire Hathaway and Jamie Dimon from J.P. Morgan, and they are fixing up to serve us some freshly yummy and healthy concoction.
Let’s call it Ambergan for now.

This is big. This is huge. It comes from outside the sclerotic “industry”. And it’s all about technology. The founders are no doubt well versed in the latest disruption theories and Ambergan will be a classic Christensen stealth destroyer of existing markets. When the greatest investor that ever lived combines forces with the greatest banker in recent memory and the premier markets slayer of all times, who happens to be the richest man on earth, all to bring good things to life (sorry GE), nothing but goodness will certainly ensue.

Everybody inside and outside the legacy health care industry is going to write volumes about this magnificent new venture in the coming days and months, so I will leave the big picture to my betters. But since our soon to be dead industry has been busy lately bloviating about the importance of good old fashioned, relationship based primary care, perhaps it would be useful to understand that Ambergan is likely to take the entire primary care thing off the table and stash it safely in the bottomless cash vaults of its founders. It’s not personal, dear doctor. It’s business. Ambergan will be your primary care platform and you may even like it…
LOL. Read all of it.

The Cherry on top:
"The Amazon platform IS the network, and there will be terms, conditions, stars and promotions. There certainly are many legacy obstacles to overcome, and perhaps that is why Amazon couldn’t or wouldn’t go it alone. Throwing highly regulated markets wide open requires two strong lobbying arms, and a federal government willing to play fast and loose. The stars are indeed perfectly aligned for the first true disruption of our health care since 1965."

Happy Birthday to me (72 today). What's the joke? "If I'd known I was gonna live this long, I'd have taken better care of myself."

Apropos of the overall topic of this post, another must-read has just come to my attention, via my latest issue of Science Magazine, in a book review entitled "The fetishization of quantification."

From the Amazon blurb:
How the obsession with quantifying human performance threatens our schools, medical care, businesses, and government

Today, organizations of all kinds are ruled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we've gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing--and shows how we can begin to fix the problem.

Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging "gaming the stats" or "teaching to the test." That's because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn't work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to—rather than a replacement for—judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial.

Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.
'eh? "Data, Learning, Experience, Perception, Meaning..."

From the Science Magazine (non-paywalled) summary:
Although the numbers whose "tyranny" forms the subject of Jerry Muller's timely book share some of the attributes of scientific measurement, their purposes are primarily administrative and political. They are designed to be incorporated into systems of what might be called "data-ocracy," often for the sake of public accountability: Schools, hospitals, and corporate divisions whose numbers meet or exceed their goals are to be rewarded, whereas poor numbers, taken to imply underperformance, may bring penalties or even annihilation. In The Tyranny of Metrics, Muller shows how teachers, doctors, researchers, and managers are driven to sacrifice the professional goals they value in order to improve their numbers.
Yeah. While I'm a long-time "quant guy," a "QI guy" who has had the Brent James training ("if you can't measure it, you can't improve it"), I too have concerns that the phrase "data-driven" can often mean putting your brain in "park." One of the cautions regarding "machine learning" goes to the concern that the machines will "learn" all of our bias errors.
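That caution is easy to see in miniature. Below is a toy sketch (my own hypothetical data, not from any of the pieces quoted here) of how a model "trained" on biased historical decisions simply reproduces the bias: given past approvals that favored group "A" over group "B" at identical qualification levels, the learned approval rates carry the disparity forward.

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces that bias. All data here is hypothetical.
from collections import defaultdict

# (group, qualified, approved) -- past decisions that approved
# equally qualified "A" applicants more often than "B" applicants.
history = [
    ("A", True, 1), ("A", True, 1), ("A", True, 1), ("A", True, 1),
    ("B", True, 1), ("B", True, 0), ("B", True, 0), ("B", True, 0),
    ("A", False, 0), ("B", False, 0),
]

def train(rows):
    """'Learn' the historical approval rate for each (group, qualified) pair."""
    counts = defaultdict(lambda: [0, 0])  # key -> [approvals, total]
    for group, qualified, approved in rows:
        counts[(group, qualified)][0] += approved
        counts[(group, qualified)][1] += 1
    return {key: approvals / total for key, (approvals, total) in counts.items()}

model = train(history)
# Equally qualified applicants now get very different predicted odds:
print(model[("A", True)])  # 1.0
print(model[("B", True)])  # 0.25
```

Nothing in the "algorithm" is malicious; it is just faithfully optimizing against a biased record, which is exactly the worry.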

The Science Magazine book review concludes:
In 1975, the American social psychologist Donald Campbell and the British economist C. A. E. Goodhart articulated independently the principle that reliance on measurement to incentivize behaviors leads almost inevitably to a corruption of the measures. Muller explains the logic of this corruption and defends, in place of indiscriminate numbers, an ideal of professional knowledge and experience.

Measurement, he concludes, can contribute to better performance, but only if the measures are designed to function in alliance with professional values rather than as an alternative to them. Good metrics cannot be detached from customs and practices but must depend on a willingness to immerse oneself in the work of these institutions.
Add another book to the stash.


Speaking of "data," heard this in the car yesterday. The new "Panopticon":
With Closed-Circuit TV, Satellites And Phones, Millions Of Cameras Are Watching
Journalist Robert Draper writes in National Geographic that the proliferation of cameras focused on the public has led "to the point where we're expecting to be voyeur and exhibitionist 24/7."

See my November post "Artificial Intelligence and Ethics." See also my "The old internet of data, the new internet of things and "Big Data," and the evolving internet of YOU."


Interesting long-read article:
The Coming Software Apocalypse
A small group of programmers wants to change how we code — before catastrophe strikes.

By James Somers

Just heard Emily Chang interviewed on MSNBC.

Will have to read this one too. Brings to mind this prior post of mine.


The ultimate utility of AI/NLP?


This came across my Twitter feed. My first reaction was "yeah, right, this is straight outa SNL."

Nope. Some "As-Seen-on-TV" vendor fleecing the rubes on cheesy cable channels for months now. Making Bank.

Is this a great country, or what?

More to come...