Search the KHIT Blog

Friday, November 10, 2017

Cloud hidden, whereabouts unknown


I will be offline from November 11th through the 18th. Taking my ailing daughter on a "bucket list" vacation retreat out of the U.S. She's done better thus far than we'd initially expected, but our world these days is one of always anxiously waiting for the next shoe to drop.

She'll be back on chemo once we return (round 15), with the next follow-up CTs and MRIs in December.

Not taking my Mac Air (don't want the TSA and Customs hassles), so, I'll be back in the fray once I return. I've left you plenty of material.
________

Thursday, November 9, 2017

Artificial Intelligence and Ethics


I was already teed up for the topic of this post, but serendipitously just ran across this interesting piece over at Naked Capitalism.
Why You Should NEVER Buy an Amazon Echo or Even Get Near One
by Yves Smith


At the Philadelphia meetup, I got to chat at some length with a reader who had a considerable high end IT background, including at some cutting-edge firms, and now has a job in the Beltway where he hangs out with military-surveillance types. He gave me some distressing information on the state of snooping technology, and as we’ll get to shortly, is particularly alarmed about the new “home assistants” like Amazon Echo and Google Home.

He pointed out that surveillance technology is more advanced than most people realize, and that lots of money and “talent” continues to be thrown at it. For instance, some spooky technologies are already decades old…
Read all of it, including the numerous comments.

"Your Digital Mosaic"

My three prior posts have returned to my episodic riffing on AI and robotics topics: see here, here, and here.

Earlier this week I ran across this article over at Wired:
WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results…
Again, highly recommend you read all of it.

That led me to the "AI Now 2017 Report." (pdf)


The AI Now authors' 36-page report examines in heavily documented detail (191 footnotes) four topical areas of AI applications and their attendant ethical issues: [1] Labor and Automation; [2] Bias and Inclusion; [3] Rights and Liberties; and [4] Ethics and Governance.

From the Institute's website:
Rights & Liberties
As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.

Labor & Automation
Automation and early-stage artificial intelligence systems are already changing the nature of employment and working conditions in multiple sectors. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's implications for labor and work – examining who benefits and who bears the cost of these rapid changes.

Bias & Inclusion
Data reflects the social, historical and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.

Safety & Critical Infrastructure
As artificial intelligence systems are introduced into our core infrastructures, from hospitals to the power grid, the risks posed by errors and blind spots increase. AI Now studies the way in which AI and related technologies are being applied within these domains and to understand possibilities for safe and responsible AI integration.
The 2017 Report proffers ten policy recommendations:
1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards…

2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings…

3 — After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized…

4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion…

5 — Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work…

6 — Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain — such as education, healthcare or criminal justice — legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines…

7 — Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision…

8 — Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces…

9 — The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms…

10 — Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles…
I printed it out and went old-school on it with yellow highlighter and red pen.


It is excellent. A must-read, IMO. It remains to be seen, though, how much traction these proposals get in a tech world of transactionalism and "proprietary IP/data" everywhere.
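
Recommendations 2 and 7 in particular lend themselves to concrete tooling. Purely as an illustration of my own (a toy Python sketch assuming scikit-learn and synthetic data — not anything prescribed by the AI Now report), a pre-release audit might at minimum compare a model's error rates across subgroups on held-out data, with the method and results versioned and published:

    # Toy pre-release bias audit: compare a classifier's error rates across
    # subgroups before release. Illustrative only; synthetic data, hypothetical
    # subgroup labels, not from the AI Now report.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)                 # hypothetical subgroup label (0 or 1)
    x = rng.normal(size=(n, 3))
    y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)
    keep = (group == 0) | (rng.random(n) < 0.3)   # group 1 under-represented in the training data
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        x[keep], y[keep], group[keep], test_size=0.3, random_state=0)

    model = LogisticRegression().fit(X_tr, y_tr)
    pred = model.predict(X_te)

    for g in (0, 1):
        m = g_te == g
        fpr = np.mean(pred[m][y_te[m] == 0] == 1)   # false positive rate in this subgroup
        fnr = np.mean(pred[m][y_te[m] == 1] == 0)   # false negative rate in this subgroup
        print(f"group {g}: n={m.sum()}  FPR={fpr:.3f}  FNR={fnr:.3f}")

Even something this minimal, documented and published with each release, would be a step toward the auditability the report calls for.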

Given that my grad degree is in "applied ethics" ("Ethics and Policy Studies"), I am all in on these ideas. The "rights and liberties" stuff was particularly compelling for me. I've had a good run at privacy/technology issues on another of my blogs. See my post "Clapp Trap" and its antecedent "Privacy and the 4th Amendment amid the 'War on Terror'."


"DIGITAL EXHAUST"

Another recent read on the topic.

Introduction

This book is for everyone who wants to understand the implications of the Big Data phenomenon and the Internet Economy; what it is, why it is different, the technologies that power it, how companies, governments, and everyday citizens are benefiting from it, and some of the threats it may present to society in the future.

That’s a pretty tall order, because the companies and technologies we explore in this book— the huge Internet tech groups like Google and Yahoo!, global retailers like Walmart, smartphone and tablet producers like Apple, the massive online shopping groups like Amazon or Alibaba, or social media and messaging companies like Facebook or Twitter— are now among the most innovative, complex, fast-changing, and financially powerful organizations in the world. Understanding the recent past and likely future of these Internet powerhouses helps us to appreciate where digital innovation is leading us, and is the key to understanding what the Big Data phenomenon is all about. Important, too, are the myriad innovative frameworks and database technologies— NoSQL, Hadoop, or MapReduce— that are dramatically altering the way we collect, manage, and analyze digital data…


Neef, Dale (2014-11-05). Digital Exhaust: What Everyone Should Know About Big Data, Digitization and Digitally Driven Innovation (FT Press Analytics) (Kindle Locations 148-157). Pearson Education. Kindle Edition.
UPDATE

From MIT Technology Review:
Despite All Our Fancy AI, Solving Intelligence Remains “the Greatest Problem in Science”
Autonomous cars and Go-playing computers are impressive, but we’re no closer to machines that can think like people, says neuroscientist Tomaso Poggio.


Recent advances that let computers play board games and drive cars haven’t brought the world any closer to true artificial intelligence.


That’s according to Tomaso Poggio, a professor at the McGovern Institute for Brain Research at MIT who has trained many of today’s AI leaders.


“Is this getting us closer to human intelligence? I don’t think so,” the neuroscientist said at MIT Technology Review’s EmTech conference on Tuesday.

Poggio leads a program at MIT that’s helped train several of today’s AI stars, including Demis Hassabis, cofounder of DeepMind, and Amnon Shashua, cofounder of the self-driving tech company Mobileye, which was acquired by Intel earlier this year for $15.3 billion.

“AlphaGo is one of the two main successes of AI, and the other is the autonomous-car story,” he says. “Very soon they’ll be quite autonomous.”


But Poggio said these programs are no closer to real human intelligence than before. Responding to a warning by physicist Stephen Hawking that AI could be more dangerous than nuclear weapons, Poggio called that “just hype.”…
BTW - apropos, see The MIT Center for Brains, Minds, and Machines.

"The Center for Brains, Minds and Machines (CBMM)
is a multi-institutional NSF Science and Technology Center
dedicated to the study of intelligence - how the brain produces intelligent
behavior and how we may be able to replicate intelligence in machines."

Interesting. See their (dreadful video quality) YouTube video "Discussion Panel: the Ethics of Artificial Intelligence."

In sum, I'm not sure that difficulty achieving "general AI" -- "one that can think for itself and solve many kinds of novel problems" -- is really the central issue going to applied ethics concerns. Again, read the AI Now 2017 Report.

WHAT OF "ETHICS?" ("MORAL PHILOSOPHY")

Couple of good, succinct resources for you, here and here. My elevator speech take on "ethics" is that it is not about a handy "good vs bad cookbook." It goes to honest (albeit frequently difficult) moral deliberation involving critical thinking, deliberation that takes into account "values" that pass rational muster -- surpassing the "Appeal to Tradition" fallacy.

UPDATE

Two new issues of my hardcopy Science Magazine showed up in the snailmail today. This one in particular caught my attention.

What is consciousness, and could machines have it?

Abstract
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain?

I question here the scientific and strategic underpinnings of the runaway enthusiasm for industrial-scale projects at the interface between “wet” (biology) and “hard” (physics, microelectronics and computer science) sciences. Rather than presenting the achievements and hopes fueled by big-data–driven strategies—already covered in depth in special issues of leading journals—I focus on three major issues: (i) Is the industrialization of neuroscience the soundest way to achieve substantial progress in knowledge about the brain? (ii) Do we have a safe “roadmap,” based on a scientific consensus? (iii) Do these large-scale approaches guarantee that we will reach a better understanding of the brain?

This “opinion” paper emphasizes the contrast between the accelerating technological development and the relative lack of progress in conceptual and theoretical understanding in brain sciences. It underlines the risks of creating a scientific bubble driven by economic and political promises at the expense of more incremental approaches in fundamental research, based on a diversity of roadmaps and theory-driven hypotheses. I conclude that we need to identify current bottlenecks with appropriate accuracy and develop new interdisciplinary tools and strategies to tackle the complexity of brain and mind processes…
Interesting stuff. Stay tuned.
__

CODA

Save the date.

Link
____________

More to come...

Friday, November 3, 2017

Clinical cognition in the digital age


From the New England Journal of Medicine (open access essay):
Lost in Thought — The Limits of the Human Mind and the Future of Medicine
Ziad Obermeyer, M.D., and Thomas H. Lee, M.D.
In the good old days, clinicians thought in groups; “rounding,” whether on the wards or in the radiology reading room, was a chance for colleagues to work together on problems too difficult for any single mind to solve.

Today, thinking looks very different: we do it alone, bathed in the blue light of computer screens.

Our knee-jerk reaction is to blame the computer, but the roots of this shift run far deeper. Medical thinking has become vastly more complex, mirroring changes in our patients, our health care system, and medical science. The complexity of medicine now exceeds the capacity of the human mind.

Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research.

It’s ironic that just when clinicians feel that there’s no time in their daily routines for thinking, the need for deep thinking is more urgent than ever. Medical knowledge is expanding rapidly, with a widening array of therapies and diagnostics fueled by advances in immunology, genetics, and systems biology. Patients are older, with more coexisting illnesses and more medications. They see more specialists and undergo more diagnostic testing, which leads to exponential accumulation of electronic health record (EHR) data. Every patient is now a “big data” challenge, with vast amounts of information on past trajectories and current states.

All this information strains our collective ability to think. Medical decision making has become maddeningly complex. Patients and clinicians want simple answers, but we know little about whom to refer for BRCA testing or whom to treat with PCSK9 inhibitors. Common processes that were once straightforward — ruling out pulmonary embolism or managing new atrial fibrillation — now require numerous decisions...
"Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research."

'eh?

I am reminded of a prior contrarian post "Are structured data the enemy of health care quality?"

More recently, I've reported on the latest (excessively?) exuberant rah-rah over stuff like AI, NLP, and Robotics. See also here.

More Obermeyer and Lee from NEJM:
The first step toward a solution is acknowledging the profound mismatch between the human mind’s abilities and medicine’s complexity. Long ago, we realized that our inborn sensorium was inadequate for scrutinizing the body’s inner workings — hence, we developed microscopes, stethoscopes, electrocardiograms, and radiographs. Will our inborn cognition alone solve the mysteries of health and disease in a new century? The state of our health care system offers little reason for optimism. 
But there is hope. The same computers that today torment us with never-ending checkboxes and forms will tomorrow be able to process and synthesize medical data in ways we could never do ourselves. Already, there are indications that data science can help us with critical problems...
I found it quite interesting that Lincoln Weed, JD, co-author of the excellent "Medicine in Denial" (now available free in searchable PDF format), was the first to comment under the essay.
LINCOLN WEED
Underhill VT
October 04, 2017

Medicine has long been operating in denial of complexity and its solutions
The authors correctly observe, "Algorithms that learn from human decisions will also learn human mistakes." But the authors understate the problem. "The complexity of medicine," they argue, "NOW exceeds the capacity of the human mind" (emphasis added). This is a bit like saying, "The demands of transportation NOW exceed the capacity of horse-powered vehicles." In reality, the complexity of medicine overtook the human mind many decades ago.  Moreover, conventional software engineering demonstrated the potential for tools to cope with complexity and transform medicine long before algorithms driven by machine learning emerged. Medical education, licensure, and practice have been operating in denial of this reality.
 
Interested readers are referred to Weed LL, Physicians of the Future, New Eng. J. Med. 1981;304:903-907; Weed LL, Weed L, Medicine in Denial, CreateSpace, 2011 (a book available in full text at www.world3medicine.org); and a recent guest blog post, https://nlmdirector.nlm.nih.gov/2017/09/05/larry-weeds-legacy-and-clinical-decision-support/. Disclosure: I am a son of and co-author with the late Dr. Larry Weed, author of the article and lead author of the book just cited.
I could not recommend the Weeds' book more highly. I've cited it multiple times, e.g., "Down in the Weeds'," "Back down in the Weeds'," and "Back down in the Weeds': A Complex Systems Science Approach to Healthcare Costs and Quality."

Back to more Obermeyer and Lee:
...Machine learning has already spurred innovation in fields ranging from astrophysics to ecology. In these disciplines, the expert advice of computer scientists is sought when cutting-edge algorithms are needed for thorny problems, but experts in the field — astrophysicists or ecologists — set the research agenda and lead the day-to-day business of applying machine learning to relevant data.
In medicine, by contrast, clinical records are considered treasure troves of data for researchers from nonclinical disciplines. Physicians are not needed to enroll patients — so they’re consulted only occasionally, perhaps to suggest an interesting outcome to predict. They are far from the intellectual center of the work and rarely engage meaningfully in thinking about how algorithms are developed or what would happen if they were applied clinically.
But ignoring clinical thinking is dangerous. Imagine a highly accurate algorithm that uses EHR data to predict which emergency department patients are at high risk for stroke. It would learn to diagnose stroke by churning through large sets of routinely collected data. Critically, all these data are the product of human decisions: a patient’s decision to seek care, a doctor’s decision to order a test, a diagnostician’s decision to call the condition a stroke. Thus, rather than predicting the biologic phenomenon of cerebral ischemia, the algorithm would predict the chain of human decisions leading to the coding of stroke.
Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors.
Ultimately, machine learning in medicine will be a team sport, like medicine itself. But the team will need some new players: clinicians trained in statistics and computer science, who can contribute meaningfully to algorithm development and evaluation. Today’s medical education system is ill prepared to meet these needs. Undergraduate premedical requirements are absurdly outdated. Medical education does little to train doctors in the data science, statistics, or behavioral science required to develop, evaluate, and apply algorithms in clinical practice.
The integration of data science and medicine is not as far away as it may seem: cell biology and genetics, once also foreign to medicine, are now at the core of medical research, and medical education has made all doctors into informed consumers of these fields. Similar efforts in data science are urgently needed. If we lay the groundwork today, 21st-century clinicians can have the tools they need to process data, make decisions, and master the complexity of 21st-century patients.
Big "AI/IA" takeaway for me:
"Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors."
Indeed. That is a huge and perhaps underappreciated concern in light of the prevalence of errors and omissions in many, many sources of data.
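
To make the point concrete, here's a toy sketch of my own (Python with scikit-learn, synthetic data — my illustration, not the authors' example): train one model on the "coded in the EHR" label and another on the unobservable true disease state, and the first systematically understates risk, because undiagnosed patients look like negatives.

    # Toy illustration of label bias: the "observed" label is only recorded when
    # the patient's care pathway leads to a test, so a model trained on it learns
    # the testing/coding process along with the biology. Synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20000
    severity = rng.normal(size=n)                         # drives true disease risk
    tested = rng.binomial(1, 0.6, n)                      # did the care pathway lead to a test?
    true_disease = (severity + rng.normal(scale=0.5, size=n) > 1).astype(int)
    coded = true_disease * tested                         # only coded in the EHR if tested

    X = severity.reshape(-1, 1)
    m_coded = LogisticRegression().fit(X, coded)          # what an EHR-trained model learns
    m_true = LogisticRegression().fit(X, true_disease)    # the unobservable ideal

    patient = np.array([[2.0]])                           # a clearly high-severity patient
    print("risk per coded-label model:", m_coded.predict_proba(patient)[0, 1])
    print("risk per true-label model: ", m_true.predict_proba(patient)[0, 1])
    # The coded-label model understates risk because undiagnosed cases look like
    # negatives -- it has learned the coding process, not cerebral ischemia.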


UPDATE

An important new "AI Now" report is out. See "Why AI is Still Waiting for its Ethics Transplant." Much more on this shortly.
__

Below: audio interview with Dr. Obermeyer.
In the vein of all the foregoing, you might also like my prior riffs on "The Art of Medicine." In addition, see my "Philosophia sana in ars medica sana."

CODA

Save the date.

Link

____________

More to come...

Monday, October 30, 2017

6 weekly online sessions, $2,600, and you're a "Transformational #AI Leader"?

I'm on a lot of email lists. This just came in the other day.


Okeee-dokeee, then. In very short order, you'll acquire
  1. "A practical grounding in artificial intelligence (AI) and its business applications, equipping you with the knowledge and confidence you need to help you transform your organization into an innovative, efficient, and sustainable company of the future."
  2. "The ability to lead informed, strategic decision making and augment business performance by integrating key AI management and leadership insights into the way your organization operates."
  3. "Recognition of your understanding of AI in the form of a certificate of completion from the MIT Sloan School of Management - one of the world’s leading business schools."
To "GET COURSE BROCHURE," you have to give up an email address and phone number, and agree to this:
I consent to MIT and GetSmarter contacting me using the details given above, including by automated means, even if I am on a corporate, state or national Do Not Call Registry, subject to GetSmarter's Privacy Policy.
I gave them one of my KHIT email aliases (it maps to my ISP default) and my POS Xfinity landline that I never use (it came with my Comcast package; incoming calls are probably 99% marketing robocalls and wrong numbers, so I just ignore it).

I am reminded of my March 2017 post "12 weeks, 1,200 hours, and $12,000, and you're a 'Software Engineer'?" See also my last post "Future jobs: robots, nerds, and nurses?"

The brochure notes that this course will principally dwell on three topical "AI" sub-areas: [1] Machine Learning; [2] Natural Language Processing (NLP); and [3] Robotics.

For my NLP takes, see "Assuming / Despite / If / Then / Therefore / Else... Could AI do 'argument analysis'?" and "Continuing with NLP, a $4,200 'study'."

You might also find my 2015 "AI vs IA: at the cutting edge of IT R&D" of interest and utility.

More broadly, as I've noted before, there's a thriving market in these myriad "professional certificate" online courses these days. See, e.g., my post going to "Certified Genetic Counselors." (Scroll down.)
__

Whatever. Get their brochure and make up your own mind.


Pardon my dubiety.

UPDATE
"The term “AI” is thrown around casually every day. You hear aspiring developers saying they want to learn AI. You also hear executives saying they want to implement AI in their services. But quite often, many of these people don’t understand what AI is." -- Radu Raicea

ERRATUM

One of my daily web surfing stops is The Incidental Economist. Aaron Carroll, MD is a regular contributor there. He has a new book coming out.


From the Amazon blurb:
Physician and popular New York Times Upshot contributor Aaron Carroll mines the latest evidence to show that many “bad” ingredients actually aren’t unhealthy, and in some cases are essential to our well-being.
Advice about food can be confusing. There's usually only one thing experts can agree on: some ingredients—often the most enjoyable ones—are bad for you, full stop. But as Aaron Carroll explains, these oversimplifications are both wrong and dangerous: if we stop consuming some of our most demonized ingredients altogether, it may actually hurt us…
Looks interesting.
____________

More to come...

Friday, October 27, 2017

Future jobs: robots, nerds, and nurses?


LOL. My New Yorker.

Also new, from The Atlantic:
Why Nerds and Nurses Are Taking Over the U.S. Economy
A blockbuster report from government economists forecasts the workforce of 2026—a world of robot cashiers, well-paid math nerds, and so (so, so, so) many healthcare workers.


Manufacturing will fall. Retail will wobble. Automation will inch along but stay off the roads, for now. The rich will keep getting richer. And more and more of the country will be paid to take care of old people. That is the future of the labor market, according to the latest 10-year forecast from the Bureau of Labor Statistics.

These 10-year forecasts—the products of two years’ work from about 25 economists at the BLS —document the government’s best assessment of the fastest and slowest growing jobs of the future. On the decline are automatable work, like typists, and occupations threatened by changing consumer behavior, like clothing store cashiers, as more people shop online.

The fastest-growing jobs through 2026 belong to what one might call the Three Cs: care, computers, and clean energy. No occupation is projected to add more workers than personal-care aides, who perform non-medical duties for older Americans, such as bathing and cooking. Along with home-health aides, these two occupations are projected to create 1.1 million new jobs in the next decade. Remarkably, that’s 10 percent of the total 11.5 million jobs that the BLS expects the economy to add. Clean-energy workers, like solar-panel installers and wind-turbine technicians, are the only occupations that are expected to double by 2026. Mathematicians and statisticians round out the top-10 list…
 Another interesting article:
The Real Story of Automation Beginning with One Simple Chart
Robots are hiding in plain sight. It’s time we stop ignoring them.


There’s a chart I came across earlier this year, and not only does it tell an extremely important story about automation, but it also tells a story about the state of the automation discussion itself. It even reveals how we can expect both automation and the discussion around automation to continue unfolding in the years ahead. The chart is a plot of oil rigs in the United States compared to the number of workers the oil industry employs, and it’s an important part of a puzzle that needs to be pieced together before it’s too late.

What should be immediately apparent is that as the number of oil rigs declined due to falling oil prices, so did the number of workers the oil industry employed. But when the number of oil rigs began to rebound, the number of workers employed didn’t. That observation itself should be extremely interesting to anyone debating whether technological unemployment exists or not…
Sleeping Through a Wake Up Call
This is a story of technological unemployment that is crystal clear, and yet people are still arguing about it like it’s something that may or may not happen in the future. It’s actually a very similar situation to climate change, where the effects are right in our faces, but it’s still considered a debate. Automation is real, folks. Companies are actively investing in automation because it means they can produce more at a lower cost. That’s good for business. Wages, salaries, and benefits are all just overhead that can be eliminated by use of machines.

But hey, don’t worry, right? Because everyone unemployed by machines will find better jobs elsewhere that pay even more… Well, about that, that’s not at all what the history of automation in the computer age over the past 40 years shows. Yes, some with highly valued skills go on to get better jobs, but they are very much the minority. Most people end up finding new paid work that requires less skill, and thus pays less. The job market is steadily polarizing…
But, wait! There's more...

Robot Overlords or Robot Colleagues?
The endless debate over whether the future of work will actually include humans.

“In a bet against college, WeWork acquires a coding bootcamp”
A slew of pieces over the past few days only add to the debate over the future of work. First, let’s tackle the WeWork news above. I’ll believe this when I see it actually happen, but WeWork promises it will roll out a coding curriculum across its entire base of hundreds of locations worldwide. I’m skeptical because I’m not convinced the world needs millions of vocationally trained coders — I’m more convinced the world needs all of us to be minimally literate in how digital computing works, and the jobs of the future will more likely require us to understand how to work with computers, rather than how to code them. It’s a bit like writing a century or so ago — we should all learn how to read and write, but only a small fraction of us became professional writers of one kind or another. The rest of us got very good at reading the code of writing — the output…
I'm reminded of my prior post "12 weeks, 1,200 hours, and $12,000, and you're a 'Software Engineer'?"

See also "The future of health care? "Flawlessly run by AI-enabled robots, and 'essentially' free?" And, my post "Aye, Robot."

See as well my earlier "AI vs IA: At the cutting edge of IT R&D."

UPDATE

From Wired:
WORKERS DISPLACED BY AUTOMATION SHOULD TRY A NEW JOB: CAREGIVER

SOONER OR LATER, the US will face mounting job losses due to advances in automation, artificial intelligence, and robotics. Automation has emerged as a bigger threat to American jobs than globalization or immigration combined. A 2015 report from Ball State University attributed 87 percent of recent manufacturing job losses to automation. Soon enough, the number of truck and taxi drivers, postal workers, and warehouse clerks will shrink. What will the 60 percent of the population that lacks a college degree do? How will this vulnerable part of the workforce find both an income and the sense of purpose that work provides?...
Below, some disturbing news specific to the health care space:
Robot-surgery firm from Sunnyvale facing lawsuits, reports of death and injury
SUNNYVALE — When Teresa Hershey was told she needed a hysterectomy, her doctor recommended a novel approach: an operation performed by a robot, guided by the surgeon.
“She was just very persuasive,” said Hershey, 45. “I’d never heard of it.”

The doctor’s assertion that less invasive, robot-assisted surgery would mean seven to 10 days of recovery instead of six to eight weeks for a conventional operation convinced her, along with the prospect of less scarring: The da Vinci robots from Sunnyvale’s Intuitive Surgical need only small holes for inserting surgical equipment.

Now, seven years and 10 corrective surgeries later, Hershey is gearing up to fight Intuitive in Santa Clara County Superior Court. She says she has refused the firm’s offers to settle.
“I want to go all the way,” said Hershey, whose case would be only the third to go to trial amid a torrent of legal claims. “There’s just been too much with this company, and too many people hurt. I just want the world to know what they’ve done. I don’t want them to get away with it, to be swept under a rug.”

Since the da Vinci surgical robot received FDA approval in 2000, Intuitive’s devices — which are operated by a surgeon using joysticks, foot pedals and a 3-D viewer — have propelled the firm to a $35 billion valuation and world dominance in robot-aided surgery. But the legal claims that have come with Intuitive’s success showcase the serious risks that accompany the rewards new medical technology can bring…
I'm not sure it's entirely accurate to call the da Vinci technology "robotic." But, whatever.

I recall being enthusiastically offered a "robotic prostatectomy" option back in 2015 during my stint with the disease. I declined.

Apropos of all this:

Link here.
____________

More to come...

Monday, October 23, 2017

The unhappy intersection of health care data and clinical quality oversight policy


Kip Sullivan, JD has a doozy of a post up at THCB:
MedPAC Sinks Deeper Into the MACRA Tar Pit

The Medicare Payment Advisory Commission (MedPAC) has done it again. At their October 4, 2017 meeting they agreed to repeal the Merit-based Incentive Payment System (MIPS), an insanely complex and evidence-free pay-for-performance scheme within the larger program known as MACRA. Instead of examining how they made such a serious mistake in the first place (MedPAC has long supported turning fee-for-service Medicare into a giant pay-for-performance scheme), they repeated their original mistake –- they adopted yet another vague, complex, evidence-free proposal to replace MIPS.

MedPAC’s history gives us every reason to believe that when they discuss their “repeal and replace MIPS” proposal at their December 2017 and January 2018 meetings, they will refuse to discuss their “replace” proposal in any detail; they will not ask for evidence indicating their proposal is safe and effective; and in their March 2018 report to Congress they will foist upon CMS the dirty work of figuring out how to make their lead balloon fly. CMS will dutifully write up a gazillion pages of gibberish describing how the new program is supposed to work, it won’t work, MedPAC will return to the scene of the crime years later and, pretending they had no part in creating it, propose yet another evidence-free tweak. And so on.

MedPAC is caught in a trap of their own making. They endorse health policy fads without any evidence and without thinking through the details; then when the fads don’t work, rather than review their defective thought process, they endorse other iterations of the fads, again without evidence and without thinking through the details. The tweaked version of the fad fails, and MedPAC starts the cycle all over again. Two analogies for this trap or vicious cycle occur to me. One is the tar pit where mastodons got stuck and died; struggle only caused the dimwitted creatures to sink faster. The other is the hedge fund that gradually becomes a Ponzi scheme. Investors like Bernie Madoff make bad investments, and when the investments go south, instead of admitting their mistakes, they induce their investors to throw good money after bad…
Read all of it. Link in the title. Docs have been complaining angrily about "quality reporting measures" since I worked in the Meaningful Use program. MACRA is merely the successor to this data burden, with some new twists going to payment reforms. Some of the comments illustrate the problem nicely:
meltoots
Kip, can we be BFFs? You have a knack for putting into words exactly my feelings about all this mess of buzzword care and puffery language to assuage the politicos to feel that they are getting “Value” for their healthcare dollar. CMS and ONC and MEDPAC and all the others have made such a mess, it truly should be flushed. Attribution, There is no possible way to attribute costs to my part of the care for a fractured hip, when the patient has kidney disease, heart disease, GI problems, diabetes, etc. What part of the readmission within 90 days is “my” fault if I fixed the hip perfectly, but the patient suffered a hypoglycemic episode at 67 days? And how many click boxes, data entry points do I need to do? Do you really think that my reporting of preop antibiotics is anything but 100%? It always is. Yet somehow, MACRA MIPS values this, yet I have to report it ? Stupid. And its self reporting…no chance for inflation of “Value” by admins, right? All this counting of numerators and denominators and attesting has led to what, exactly? Nothing but burned out MDs that are distracted from real care. Worse, it drive MANY away from caring for the more fragile. less healthy, socially isolated, etc as it will make MY NUMBERS look worse if they have complications or higher resource use, like they are admitted to a skilled facility after a total hip, as they have no one to care for them at home, and they are anxious about going home. Is that my fault? Some articles in this blog has shown that public reporting of these “values” “Complications” etc are definitely driving MDs from caring for those that will ruin my numbers, even with 1 or 2 complicated patients. Think bundled care here. Why would I EVER operate on anyone that could kill my bundle and cost me money, punish me, report nasty numbers on me. Its just the nature of the beast. You punish me for caring for complicated patients both health and social, forget getting any kind of care. Thats EXACTLY what ACOs and BUNDLES do. MIPS MACRA are just the main stage of that mess. I found it extremely disheartening that MEDPAC is grasping at ANY buzzword straw to get themselves out of the MIPS MACRA mess and they initial thought was to just PUNISH providers for FFS no matter what, as FFS is obviously the devil to MEDPAC, so they are dying for a new set of abbreviations, mantras that can be the solution to the scourge that is FFS. What a nightmare, and they are in charge. They should be forced to read your blog. I love your work Kip, please keep it coming.
and,
William Palmer, MD
It seems pretty clear that defining and measuring quality in healthcare has long been an enormous challenge and remains one. We also don’t want to create disincentives for doctors to be willing to care for the highest risk, most complex patients. It’s also pretty clear that the fee for service payment model provides incentives to provide too much care and HMO’s provide incentives to provide too little care. We also have too much defensive medicine because our society is inherently more litigious than others.
At the same time, healthcare costs rose from around 5% of GDP in 1960 to between 17% and 18% of GDP today partly because of huge advances in what modern medicine can do for us patients and partly because of high prices, especially for drugs, devices and, to some extent, imaging. Moreover, most patients can’t afford to pay for the expensive procedures without health insurance. Balance billing, if we had it, would be an additional cost burden that wouldn’t count toward insurance deductibles and OOP limits. That would be a big problem for most people as well.

At the end of the day, what I want as a patient is good care, from both primary care doctors and specialists at a cost that won’t bankrupt me or the country. What those of us who invest in healthcare and health insurance companies want is for them to be sufficiently profitable to produce an adequate risk-adjusted return on our capital relative to other investment alternatives, again without bankrupting the country.

So what’s the answer to the cost conundrum? My own preferences include price transparency to allow both patients and referring doctors to identify the most cost-effective, good quality providers in real time, comprehensive tort reform to reduce defensive medicine, more use of data analytics to go after fraud, especially in the Medicare and Medicaid programs, and a lot less futile care and the end of life much of which patients don’t even want.

While doctors claim that they only account for 10% of healthcare costs after deducting practice expenses, their decisions to order tests, prescribe drugs, admit patients to the hospital, consult with patients and perform procedures themselves drive virtually all healthcare spending.

The docs are in a position to have the best ideas to bring healthcare costs under control relative to GDP but their preference is to be left alone to take care of patients. As Steve2 noted in a recent blog post, nobody cared about healthcare costs when they were 5% of GDP. At 18%, we have to care. Where’s the physician leadership on this issue?
Recommend you peruse all of the comments below the post as well.

One of my long-time wisecracks:
Just as no amount of calling point-to-point interfaced data exchange "interoperability" will make it so, neither will calling Process Indicators "CQMs" (Clinical Quality Measures) make them so.
Process indicators are, in the aggregate, very loosely coupled proxies for "quality of care." Whether they are uniformly efficacious or a precious-time-wasting check-box click burden continues to be a matter of heated dispute.
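
To illustrate why I'm skeptical, here's a toy simulation of my own (Python; made-up numbers, not any actual CMS methodology): when a process indicator is only weakly coupled to true quality and is measured with self-reporting noise, ranking clinicians by the indicator badly scrambles who lands in the "top" decile.

    # Toy simulation of how a loosely-coupled process indicator scrambles
    # "quality" rankings. My own illustration; parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(7)
    n_docs = 100
    true_quality = rng.normal(size=n_docs)                    # the thing we actually care about

    # A process indicator only weakly coupled to true quality, plus
    # self-reporting / click-box measurement noise.
    process_score = 0.3 * true_quality + rng.normal(scale=1.0, size=n_docs)

    top_true = set(np.argsort(-true_quality)[:10])            # true top decile
    top_proxy = set(np.argsort(-process_score)[:10])          # top decile per the indicator
    corr = np.corrcoef(true_quality, process_score)[0, 1]

    print(f"proxy/quality correlation: {corr:.2f}")
    print(f"top-decile overlap: {len(top_true & top_proxy)}/10 clinicians")
    # With weak coupling, most of the "top 10%" by the process measure are not the
    # top 10% by actual quality -- yet payment adjustments key off the proxy.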

Kip Sullivan concludes:
...In my next comment I will explore the history of this habitual failure. I will focus on the commission’s endorsement of pay-for-performance in 2003 and how that endorsement led the commission into the MACRA tar pit.
I look forward to reading it.

Other THCB posts tagged "Kip Sullivan."

TANGENTIAL ERRATUM

Speaking of "science" and "wellness" quackery. Timothy Caulfield:

Wellness Brands Like Gwyneth Paltrow's GOOP Wage War on Science
Despite the best efforts of journalists and doctors, debunkers are not winning the wellness war.


Gwyneth Paltrow's wellness obsession has become one of the more reliable punchlines in Hollywood, but she may very well have the last laugh. The actress-turned-wellness-guru is now known as much for her acting as for her scientifically dubious lifestyle brand, Goop. In 2016, the company raised tens of millions of dollars in venture capital, all despite unrelenting mockery in the press. The marketing for some products is so ridiculous I sometimes wonder if Goop is really just a form of clever satire aimed at the dangers of pseudoscience. (If this is true, mission accomplished.)

But assuming this isn’t performance art, the increasing popularity of companies like Goop is a cause for legitimate concern. Despite the best efforts of journalists and doctors, the debunkers are not winning the wellness war. Indeed, there is evidence that the trust people place in traditional sources of science is eroding.

And it’s not just science — global trust in institutions everywhere is plummeting. While these are socially complex phenomena, I believe there are several powerful — and, ultimately, tremendously harmful — rhetorical devices deployed by the multibillion-dollar wellness industrial complex that have facilitated its cultural ascendency. By examining these devices, perhaps we can make people think twice before they try being voluntarily stung by bees as a cure for inflammation…
Another good read. I was struck by how little has changed on this front across the 19 years since my elder daughter died. I wrote in my 1998 essay:
Is science the enemy? To the extremist "alternative healing" advocate, the answer is a resounding 'yes'! A disturbing refrain common to much of the radical "alternative" camp is that medical science is "just another belief system," one beholden to the economic and political powers of establishment institutions that dole out the research grants and control careers, one that actively suppresses simpler healing truths in the pursuit of profit, one committed to the belittlement and ostracism of any discerning practitioner willing to venture "outside the box" of orthodox medical and scientific paradigms...
Different day, same bullshit.

Recall also my prior post "I am not a scientist?"
____________

More to come...

Wednesday, October 18, 2017

Omics update

My latest Science Magazine arrived the other day.


"Special Issue: Single-Cell Genomics?" Interesting.

INTRODUCTION TO SPECIAL ISSUE
A Fantastic Voyage in Genomics

Laura M. Zahn


Imagine being able to shrink down to a small enough size to peer into the human body at the single-cell level. Now take a deep breath and plunge into that cell to see all of the ongoing biological processes, including the full complement of molecules and their locations within the cell. This has long been the realm of science fiction, but not for much longer. Recent technological advances now allow us to identify and visualize RNA transcripts, proteins, and other cellular components at the single-cell level. This has led to discoveries about the immune system, brain, and developmental processes and is poised to revolutionize our understanding of the entire human body.

We anticipate breakthroughs with an increased ability to confidently examine the components of a single cell, including in identifying and treating disease at the cellular or even molecular level. Advancing our understanding of pathology will allow us to predict how genes predispose individuals to a disease and aid in prevention and treatment. This will be especially important for diseases such as cancer, which can often have extremely variable genetic compositions resulting in different gene expression profiles within a single tumor. Although the technology to shrink oneself remains fiction, our ability to visualize how genes act at the single-cell level is not, and we look forward to enlarging our knowledge of the human body.


Abstract
The immune system varies in cell types, states, and locations. The complex networks, interactions, and responses of immune cells produce diverse cellular ecosystems composed of multiple cell types, accompanied by genetic diversity in antigen receptors. Within this ecosystem, innate and adaptive immune cells maintain and protect tissue function, integrity, and homeostasis upon changes in functional demands and diverse insults. Characterizing this inherent complexity requires studies at single-cell resolution. Recent advances such as massively parallel single-cell RNA sequencing and sophisticated computational methods are catalyzing a revolution in our understanding of immunology. Here we provide an overview of the state of single-cell genomics methods and an outlook on the use of single-cell techniques to decipher the adaptive and innate components of immunity.

Abstract
The stereotyped spatial architecture of the brain is both beautiful and fundamentally related to its function, extending from gross morphology to individual neuron types, where soma position, dendritic architecture, and axonal projections determine their roles in functional circuitry. Our understanding of the cell types that make up the brain is rapidly accelerating, driven in particular by recent advances in single-cell transcriptomics. However, understanding brain function, development, and disease will require linking molecular cell types to morphological, physiological, and behavioral correlates. Emerging spatially resolved transcriptomic methods promise to fill this gap by localizing molecularly defined cell types in tissues, with simultaneous detection of morphology, activity, or connectivity. Here, we review the requirements for spatial transcriptomic methods toward these goals, consider the challenges ahead, and describe promising applications.

Abstract
Single-cell multi-omics has recently emerged as a powerful technology by which different layers of genomic output—and hence cell identity and function—can be recorded simultaneously. Integrating various components of the epigenome into multi-omics measurements allows for studying cellular heterogeneity at different time scales and for discovering new layers of molecular connectivity between the genome and its functional output. Measurements that are increasingly available range from those that identify transcription factor occupancy and initiation of transcription to long-lasting and heritable epigenetic marks such as DNA methylation. Together with techniques in which cell lineage is recorded, this multilayered information will provide insights into a cell’s past history and its future potential. This will allow new levels of understanding of cell fate decisions, identity, and function in normal development, physiology, and disease.
Firewalled, AAAS members only. Or, you can buy the hardcopy at a newsstand or read it at a library.

Lots of detail, mostly over my head, but important stuff. I have to wonder how far away these research developments are from widespread applied clinical practice.
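
For the curious, the computational side is more approachable than the wet-lab side. Below is a hedged sketch of the now-standard single-cell RNA-seq clustering workflow, assuming the open-source scanpy package and its bundled demo data (my choice of illustration; the Science papers describe these methods generally, not this particular tool):

    import scanpy as sc

    adata = sc.datasets.pbmc3k()                  # bundled demo dataset: ~2,700 blood cells
    sc.pp.filter_cells(adata, min_genes=200)      # basic quality filtering
    sc.pp.filter_genes(adata, min_cells=3)
    sc.pp.normalize_total(adata, target_sum=1e4)  # library-size normalization
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=2000)
    adata = adata[:, adata.var.highly_variable].copy()
    sc.pp.scale(adata, max_value=10)
    sc.tl.pca(adata, n_comps=50)                  # dimensionality reduction
    sc.pp.neighbors(adata, n_neighbors=15)        # cell-cell similarity graph
    sc.tl.umap(adata)                             # 2-D embedding for visualization
    sc.tl.leiden(adata)                           # graph-based clustering into putative cell types
    sc.pl.umap(adata, color="leiden")             # one point per cell, colored by cluster

A dozen lines of analysis, but the hard part — and the point of the special issue — is the biology: deciding what those clusters actually are and what they mean for immunity, the brain, and disease.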

Some reporting on the topic from H&HN:
Genomic Medicine Has Entered the Building
With game-changing promises starting to pay off, hospitals need to start preparing now for the changes genomics will bring


After years of fanfare and a few false starts, the era of genomic medicine has finally arrived.

Across the country, thousands of patients are being treated, or having their treatment changed, based on information gleaned from their genome. It’s a revolution that has been promised since the human genome was first published in 2001. But making it real required advances in information technology infrastructure and a precipitous drop in price.

Today, the cost of whole exome sequencing, which reveals the entire protein-coding portion of DNA, is roughly equivalent to an MRI exam in many parts of the country, says Louanne Hudgins, M.D., president of the American College of Medical Genetics and Genomics and director of perinatal genetics at Lucile Packard Children's Hospital Stanford, Palo Alto, Calif.

“Genomic sequencing is a tool like any other tool in medicine, and it’s a noninvasive tool that continues to provide useful information for years after it is performed,” she says…
"Treating genes, not organs"
Let's hope. Read all of it.

I've had a recurrent go at "Omics" topics before, e.g., here and here. See also my post on "Personalized Medicine."

Below, apropos?

NEXT UP AT HEALTH 2.0

How Technology Development and Big Data are Affecting the Transformation of Health Care
               
Precision Medicine has come a long way in the last 10+ years thanks to advances in diagnostics, computing, and consumer tools. The ongoing quest to better understand disease predisposition and prevention through genomic and environmental factors is key to increasing the quality and length of life. Technology for Precision Health will explore how technology can help.

How can we think differently about gathering, analyzing and sharing information? Which incentives can be offered to structurally change the system toward longer term care of patients? Which mechanisms will empower patients with their data and create virtuous partnerships with providers to truly drive value? Conference delegates will learn about the latest tools in Precision Medicine and Health as well as be part of the discussion on new ontologies and policy changes needed to bring these technologies to patients...
Click here (or the above headline) for the site link.

Also of pertinence, from the NEJM:
Lost in Thought — The Limits of the Human Mind and the Future of Medicine
Ziad Obermeyer, M.D., and Thomas H. Lee, M.D.


In the good old days, clinicians thought in groups; “rounding,” whether on the wards or in the radiology reading room, was a chance for colleagues to work together on problems too difficult for any single mind to solve.
Today, thinking looks very different: we do it alone, bathed in the blue light of computer screens.

Our knee-jerk reaction is to blame the computer, but the roots of this shift run far deeper. Medical thinking has become vastly more complex, mirroring changes in our patients, our health care system, and medical science. The complexity of medicine now exceeds the capacity of the human mind.

Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research.

It’s ironic that just when clinicians feel that there’s no time in their daily routines for thinking, the need for deep thinking is more urgent than ever. Medical knowledge is expanding rapidly, with a widening array of therapies and diagnostics fueled by advances in immunology, genetics, and systems biology. Patients are older, with more coexisting illnesses and more medications. They see more specialists and undergo more diagnostic testing, which leads to exponential accumulation of electronic health record (EHR) data. Every patient is now a “big data” challenge, with vast amounts of information on past trajectories and current states.

All this information strains our collective ability to think. Medical decision making has become maddeningly complex. Patients and clinicians want simple answers, but we know little about whom to refer for BRCA testing or whom to treat with PCSK9 inhibitors. Common processes that were once straightforward — ruling out pulmonary embolism or managing new atrial fibrillation — now require numerous decisions.

So, it’s not surprising that we get many of these decisions wrong…
Open access. Read all of it.
"Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research."
Also of note, our hardy perennial, EHR lamentation. From THCB:
EHR-Driven Medical Error: The Unknown and the Unknowable
BY ROSS KOPPEL


Politico’s Arthur Allen has written a useful report on recent findings about EHR-related errors. We must keep in mind, however, that almost all EHR-related errors are unknown, and often unknowable. Why?...
Interesting post. I'd like to have Dr. Jerome Carter's reaction. See also my prior post "Are structured data now the enemy..."

TWO NEW READS UNDERWAY


"Machine Learning" looks to be partially a bit remedial for me (e.g., regression models and decision trees), but looks like a good quick tutorial. "The Influential Mind" goes to my abiding interest in cognitive/neuroscience topics. Stay tuned.

A SAD CODA

Seattle AI entrepreneur Matt Bencke has died, losing his fight against stage IV metastatic pancreatic cancer. He was only 45. Very sad. I have followed developments closely, given that my 47-year-old daughter has a very similar dx.

Rachel Lerman of The Seattle Times has a fine story on Matt. My heart goes out to his family and friends.
____________

More to come...

Monday, October 16, 2017

What a week! California on fire.



The scope of the Napa-Sonoma-Santa Rosa area fires is breathtaking and disheartening. I know people who have lost everything except their lives. Last Wednesday as I was taking my daughter to her chemo session at Kaiser, the air was thick with the acrid smell of fire, and you could not even see the nearby foothills below Mt. Diablo (we live in Antioch close by Brentwood).

I don't even want to think about the tonnage of toxins in those smoke plumes, given the thousands of structures and vehicles destroyed in addition to the grasslands and woodlands.


Unreal. And, while firefighters have made significant gains, it's not over yet. I hope the forecast for rain by Thursday is accurate.
____________

More to come...