Sunday, September 25, 2016

Showtime! Health 2.0's 10th annual conference

I could not be in a better mood this morning. We got our dog Jaco back yesterday after 12 days of numbing anguish over his getting loose and disappearing on September 13th. Jaco is safely home with my daughter, and I'm here at the Hyatt Regency onsite hotel at the Santa Clara Convention Center with my wife, who just returned from an intense week-long business trip to Israel. Not even a loud and obnoxious early morning hotel union protest out on the sidewalk could harsh my buzz (and, thank you, Hyatt staff for comping me the lattes as apology for the protest harangue).

As I noted on Facebook yesterday afternoon:
It's been a good day. My wife came home from overseas, my dog came back, and Tennessee beat Florida. I feel a country song coming on.
LOL. I been known to write a country song or two...

Cheryl and I lit into some margaritas and lemon drop martinis and chow at the hotel Evo bar last night, and got to have some nice chats with Health 2.0 co-founders Indu Subaiya and Matthew Holt, both of whom are utter delights.

Provider Symposium
The Provider Symposium is back again this year! This event will bring together hospital & other provider executives with innovators to tackle the biggest challenges facing health systems today.

To truly drive change in health care, innovation needs to occur at both an operational level as well as in the day-to-day workflow of clinical care. The symposium unites those worlds with classic Health 2.0 demos of the latest technology designed to improve the lives of these key stakeholders, as well as meaningful discussion anchored by case studies of pilots using new technology, or of innovation developed within forward-thinking institutions like Cedars-Sinai, UPMC and more! By gathering the right people, technologies, and systems into a single room, the Provider Symposium will foster debate and spark the exchange of ideas between the best minds from leading health care systems in an effort to change the face of health care as we know it...

Patients 2.0 at the 10th Annual Fall Conference
This year’s theme is Patient Match, where we discuss the resources and tools available for patients as they navigate their health journey from diagnosis to long-term care. The Patients 2.0 session is run by Patients and marshaled by Sarah Krug from the Society for Participatory Medicine...
"[I]nnovation needs to occur at both an operational level as well as in the day-to-day workflow of clinical care."

Indeed, which is why, aside from HIMSS and IHI conferences, I also cover The Lean Summits every year. Health 2.0 and Lean are my favorites, gotta say.

Gonna be a great day. My cameras are fully charged and ready to rock. Still ruminating on the myriad implications of "thinking about jobs, education, healthcare, and tech." And, Gerd Leonhard's "digital obesity" amid the broad and deep panoply of health space issues I continue to chew on here at KHIT.

Hope to meet a bunch of you here today and this week. My Twitter handle is @BobbyGvegas.


I should note this USC Center for Health Journalism webinar coming up on Thursday.

I've signed up. Recall my reviews of Ann Neumann's book "The Good Death" here and here.

News item. I searched Google news for "Health 2.0." This popped up:
Insiders: Silicon Valley is blowing its chance to ride the silver tsunami
By Dan Diamond


Health care is being swamped by the so-called silver tsunami: 10,000 seniors are turning 65 and becoming Medicare-eligible every day. But rather than ride the wave, investors in Silicon Valley and elsewhere are too focused on the shallower pool of younger, healthier Americans.
That's according to health care IT leaders and thinkers who participated in POLITICO's health IT advisory forum this week...
And, get off my lawn!


Are we finally getting to the uptick elbow of the Health IT innovation traction Hockey Stick curve? Last year at this time I mused on "Free Beer Tomorrow" - "Transforming the Healthcare System."


Stage lighting at Health 2.0 is generally fine, but today's pre-conference symposium day lighting was a bit weak (yes, I'm a lighting snob; come by it honest). While they at least know to deploy overhead backlighting (unlike my friends at the Lean Healthcare Summits), the Front-of-House spot trees this time were inadequate. Makes for grainier, oversaturated photos. Nonetheless...

Below, the busiest guy in the ballroom. We ran long. It was way worth it.

My running joke with Matthew Holt.


I took pretty good notes on the day. More later. Fixin' to go to the Redox reception shortly.


Community Health Center Network and Welkin Health Partner to Improve Health Outcomes for Bay Area's Most At-Risk Populations
Health 2.0, with the support of the Robert Wood Johnson Foundation, to award grant to develop complex case management platform for underserved populations

Community Health Center Network (CHCN), a non-profit public benefit corporation, and Welkin Health, a San Francisco-based digital health company specializing in patient relationship management, have partnered to develop a complex case management platform to elevate the success of CHCN’s Care Neighborhood program. The innovative program addresses and supports the health needs of the East Bay’s most at-risk and underserved populations through consistent patient outreach by Community Health Workers (CHWs) embedded in the CHCN health centers.

Selected as one of three national finalists for Health 2.0’s Technology for Healthy Communities initiative, a project supported by the Robert Wood Johnson Foundation, Welkin Health will develop a complex case management platform for CHCN. Emphasizing strong patient coaching and data-driven insights, the platform will help CHCN health centers streamline communication efforts between patients and CHWs, increasing engagement, reducing costs and leading to improved long-term health outcomes for patients. It will also aggregate patient data from EHR systems and claims databases to provide CHWs with optimal clinical decision support, increasing efficiency and the ability to track health improvements of enrolled patients over time.

The platform will launch at three CHCN health centers in Alameda County, with a goal to eventually expand to support Care Neighborhood programs at all eight health centers that are members of the network...

More to come...

Thursday, September 22, 2016

On deck: the 10th Annual Health 2.0 conference
It promises to be very interesting, as always. I'm staying onsite this year, going down Saturday so I can hit the ground running early Sunday. Mercifully, no NFL game across the street at Levi's Stadium this year.

In addition to the breadth of exquisitely-presented cutting edge health care and health tech topics, ten new companies will have their onstage debut.
  • Valeet Healthcare's platform gives patients personalized health information while allowing providers to have a rounding tool and giving healthcare systems a dashboard to track metrics.
  • gripAble is an innovative mobile technology that bridges the gap between functional therapy and objective measurement of upper-limb function.
  • Cricket Health works with payor and provider customers to slow the progression of chronic kidney disease (CKD), manage the transition from CKD to End Stage Renal Disease, and improve ESRD care.
  • Qidza is a population health mobile platform that enables parents to work with their physicians to track their children's developmental milestones.
  • Docent Health guides health systems to embrace a consumer-centric approach to healthcare by curating patient experiences.
  • Albeado builds healthcare prediction and optimization solutions on a proprietary data science platform that combines clinical AI and graph-based machine learning.
  • Siren Care offers temperature-sensing smart socks which provide health data on foot ulcers, hot spots, and more to prevent future injuries.
  • MDwithME integrates software and hardware components in a suitcase, enabling full remote physical exams with an option of an instant or delayed physician's consult, with testing quality that equals or exceeds the current state of the art.
  • DayTwo maintains health and prevents disease using a microbiome platform, starting with personalized nutrition based on gut bacteria, aiming to normalize blood sugar levels and cultivate a healthy gut microbiome.
  • Regeneration Health is a health ecosystem powered by artificial intelligence that collects and monitors health in real time and curates free personalized health info and recommendations based on integrative medicine.
I recall my reflection on one of the 2015 launch presentations a year ago.

Among other things this year, Gerd Leonhard's "digital obesity" will be on my mind.

Along with that hardy perennial, "interoperability." Apropos:

Expand the deck to fullscreen for legibility.

I posted on HL7 "APIs" two years ago, here and here.

In related news, from the National Academy of Medicine:

As reported by FierceHealthCare:
National Academy of Medicine: HIT agenda must be reset

The U.S. has met only one of four federal health IT goals outlined by the White House in 2004, according to the National Academy of Medicine, which notes in a new discussion paper that the agenda for the next five years should be "reset."

The paper, part of NAM's Vital Directions for Health and Health Care initiative, notes that while the national goal of electronic health record adoption has been reached, the three other goals outlined in 2004 -- interoperability; supporting consumers with information; and public health, clinical trials, and other data-intensive activities -- have not. It identifies nine central themes in three focus areas that it suggests should be the major goals for health IT over the next five years, including:

Technical underpinnings:

  • Data standards and achieving interoperability at scale
  • Interoperability with consumer health technology
  • Improving patient identification matching to support interoperability
  • Service-oriented architectures and web-based services
...The paper outlines current problem areas, including Meaningful Use’s mixed success and inadequate EHR design. It also identifies opportunities and policy alternatives, such as authorizing the Department of Health and Human Services to adopt and promulgate standards through formal rule making for patient identification and matching.

The authors recommend three “vital directions” to take:

  1. End-to-end interoperability from devices to EHRs
  2. Aggressively address cybersecurity vulnerabilities
  3. Develop a data strategy that supports a learning health system...
Are we finally gonna cut through the persistent fog of "interoperababble"? Or will we see similar papers promulgated five years out, all making the same recommendations we've been reading for years?

e.g., from the National Academy paper,
Data Standards and Achieving Interoperability at Scale
Many have concluded that the Meaningful Use goals of improved quality, safety, and efficiency cannot be reached until more data are shared for more purposes, with sharing integrated into the routine, health care–delivery workflow. As currently designed, HIT and the applicable regulations can slow the routine provision of health care. Enablers of efficiency—such as accurate, transparent, and actionable payer information available at the point of care; the ability to reuse structured health information for health care operations and administration; and documentation well suited for care in the 21st century—could help to achieve efficiency goals. Sharing data more broadly can enhance care coordination, ensuring that patients’ lifetime medical records travel among all providers. Redundant and unnecessary testing can be reduced. Physician orders for life-sustaining treatment can be communicated broadly. One estimate suggests that $80 billion could be saved annually if a comprehensive program of EHR data-sharing were widely implemented (Hillestad et al., 2005).
"2005"? I'm stuck in Groundhog Day. 2005 was the year I started working in the CMS Meaningful Use precursor DOQ-IT initiative at HealthInsight.

UPDATE: I printed the paper's pdf copy out and gave it the yellow marker / red pen treatment. Again, you could have pulled something up from a decade ago and made a few relatively minor edits to re-tread all of this. With respect to new stuff -- e.g., "precision medicine" and its foundational "omics" science underpinnings -- this is all I find:
The Precision Medicine Initiative of the National Institutes of Health constitutes a bold step toward engaging individuals in helping to accelerate biomedical knowledge discovery through the use of electronic health information from EHRs and consumer health technology (NIH, 2016). [pg. 9]

In the era of "big data," the availability of more comprehensive, sensitive, and valuable — but less regulated — data emphasizes the ever present need for standards for encryption. Genomic (and "multi-omic") data used in personalized medicine lack policies and standards. [pg. 13]
That's all? I've ranted about these shortcomings in prior posts.
Got hooked up on Twitter with a company called "MI7" the other day.
Our software Q transforms every Electronic Health Record system into an API.
Sounds great. We'll see. Here's the question I want answered: does your app transform PHI, metaphorically, into the (iteratively, recursively, lexically and semantically indestructible) "type-O blood" of health care? Y'know, that whole "data are the lifeblood of health care" thingy?

Interestingly, another company, Redox -- host of a Sunday evening Health 2.0 Conference reception this year -- is also in this HIT "interop" fray:

Redox is the EHR integration platform for digital health solutions.
We empower healthcare applications to read, write, and query clinical data with any healthcare organization's electronic health record system. The bridge to interoperability is open.
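For readers who haven't worked with these integration layers: the pitch is that instead of wrestling with each hospital's EHR internals directly, your application talks to one FHIR-style REST interface. Here's a minimal sketch of what such a query might look like -- the base URL, resource name, and search parameters are hypothetical illustrations of the FHIR search pattern, not Redox's (or any vendor's) actual API:

```python
# Hedged sketch of a FHIR-style "EHR as API" patient query.
# The endpoint and search parameters are illustrative assumptions,
# not any particular vendor's real interface.
from urllib.parse import urlencode

def build_patient_search(base_url, family=None, birthdate=None):
    """Build a FHIR-style Patient search URL from optional criteria."""
    criteria = {"family": family, "birthdate": birthdate}
    params = {k: v for k, v in criteria.items() if v is not None}
    return f"{base_url}/Patient?{urlencode(params)}"

# e.g., find patients named Smith born July 4, 1951:
url = build_patient_search("https://ehr.example.org/fhir",
                           family="Smith", birthdate="1951-07-04")
print(url)
```

An actual integration would, of course, add authentication, consent handling, and the hard part -- mapping each EHR's internal data model onto the shared resource format.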

I'm all eyes and ears.

In other news,

Chan Zuckerberg Initiative commits to investing $3 billion to cure diseases
An effort to cure all diseases within the lifetime of their daughter

The philanthropic initiative founded by Facebook founder Mark Zuckerberg and his wife, Priscilla Chan, will spend $3 billion over the next decade in an effort to cure and manage all human diseases. The Chan Zuckerberg's latest effort will begin with a $600 million investment in a project called Biohub, an independent research center located at the University of California at San Francisco that will work on developing new tools to measure and treat disease.

"Mark and I spent the past two years talking to scientists ranging from Nobel Prize laureates to graduate students," Chan said during an emotional talk at UCSF. "We believe that the future we all want for our children is possible. We set a goal: can we cure all diseases in our children's lifetime? That doesn't mean that no one will ever get sick. But it does mean that our children and their children should get sick a lot less. And that we should be able to detect and treat or at least manage it as an ongoing condition. Mark and I believe this is possible within our children's lifetime."...
Bears watching.


Day 9, no Jaco. It has been difficult to get motivated to do anything this week. Starting to despair that we'll never see him again.

"The benefit of being a Futurist is that you never have to change your slides."
LOL. Ian Morrison, Tuesday's Health 2.0 Keynote speaker, in an earlier interview with Matthew Holt. "I've got PowerPoint slides that are older than some of my clients."

More tech news...

Robot Nurses Will Make Shortages Obsolete
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

For years, the U.S. has experienced a shortage of registered nurses. The Bureau of Labor Statistics projects that while the number of nurses will increase by 19 percent by 2022, demand will grow faster than the supply, and that there will be over one million unfilled nursing jobs by then.

Those aged 65 or over comprise a bigger percentage of the U.S. population than ever, and by 2030, 20 percent of the U.S. population, or roughly 69 million people, will be senior citizens. While enrollment at nursing schools is up, these programs aren’t big enough to accept the number of applicants required to fill these positions.

So what’s the solution?

"Machines of Loving Grace"?


Saturday, Sept 24th, a young man found our Jaco this morning a couple of miles from our house, captured him, and turned him in to Animal Control. They scanned his chip and called us.

Apparently no worse for the wear. We are SO lucky. The kid didn't want to take the $100 reward I'd offered, but I insisted.

More to come...

Repeal and Replace

"We have to come up, and we can come up with many different plans. In fact, plans you don't even know about will be devised because we're going to come up with plans -- health care plans -- that will be so good. And so much less expensive both for the country and for the people. And so much better."

- Donald Trump, September 14th, 2016 on the Dr. Oz show

Well, that's reassuring.

Sunday, September 18, 2016

On "Digital Obesity"

First, the annual Health 2.0 Conference draws nigh. I will be there loaded for bear this year. The central topic of this post, below, is of particular relevance.

Next, I was reading this interesting book ("neurophilosophy?" Lordy),

when this one jumped to the head of the line. (Compelling -- finished it in short order; back to "Touching a Nerve" shortly):

Yet another in my lengthy and growing list of reading and reviewing these "futurist" writers, apropos of Health IT writ large.

This one really rocks. (Interestingly, I cited an earlier Gerd Leonhard book some ten years ago back during my days hustling my friends' band in Las Vegas. See here as well. Probably a lot of "link rot" on that latter one by now. I've not checked.)

While there are a breadth of timely and important issues set forth in Gerd's book worthy of citation and discussion here, let's start with "digital obesity."

Chapter 7
Digital Obesity: Our Latest Pandemic 

As we wallow and pig out on a glut of news, updates, and algorithmically engineered information that may be anything but, we entertain ourselves in a burgeoning tech-bubble of questionable entertainment. Obesity is a global issue, and, according to McKinsey, it’s costing an estimated US$450 billion per year in the US alone, both in terms of healthcare costs and lost productivity. The Centers for Disease Control and Prevention stated in 2015 that more than two-thirds of Americans are overweight, and an estimated 35.7% are obese.

I believe we are reaching a similar or bigger challenge as we gorge on technology and bring on digital obesity.

I define digital obesity as a mental and technological condition in which data, information, media, and general digital connectedness are being accumulated to such an extent that they are certain to have a negative effect on health, well-being, happiness, and life in general.

Perhaps unsurprisingly, and despite those shocking health factoids, there is still little support globally for stricter regulation of the food industry to curb the use of addiction-building chemical additives, or to stop marketing campaigns that promote overconsumption. In America’s never-ending war on drugs, harmful foodstuffs and sugars are never so much as hinted at. Just as organic foods now seem to be largely the preserve of the well-off and wealthy, so too can we expect anonymity and privacy to become expensive luxuries— out of reach for most citizens.

Consumers are buying gadgets and apps that will supposedly help them reduce food consumption and increase fitness, such as the Fitbit, Jawbone, Loseit, and now Hapifork— which alerts you by trembling if you eat too fast— very useful indeed. It appears the idea is to buy (download) and consume yet another product or service that will miraculously, and without much effort, fix the original problem of overconsumption. 

Cravability means prosperity

The obvious bottom line is that the more people eat, the better it is for those who produce and sell our food— for example, growers, food processors, grocery stores, supermarkets, fast-food joints, restaurants, bars, and hotels. In addition, we may be shocked to find that, every year, every consumer in developed countries unwittingly ingests an estimated 150 pounds of additives— mostly sugar, yeast, and antioxidants, as well as truly nasty stuff such as MSG. These substances are the lubricants of overconsumption. Not only do they make food prettier and more durable, they also make it taste better— as debatable as that is. Thus consumers are strung along by cleverly engineering a “need-for-more” so that it becomes very hard to find the exit from that kingdom of endless, happy consumption.

If this sounds like Facebook or your smartphone, you are getting my drift. The food industry actually calls this cravability or crave-ability. In the world of technology, marketers call it magic, stickiness, indispensability, or more benignly, user engagement.

Craving and addiction— tech’s business model

Generating this kind of craving, or fueling our digital addictions in such a seemingly benign way, is clearly a powerful business model. It is easy to apply the cravability concept to the leading social-local-mobile (SoLoMo) super-nodes such as Google and Facebook, or to platforms such as WhatsApp. Many of us literally crave connectivity as we conduct our daily lives, and when we disconnect we feel incomplete.

Yet somehow, I wonder if it really could be in the interest of big Internet firms that a large number of their users end up with digital obesity issues? Is that really in the best interests of the predominantly US-owned technology and Internet giants? At the same time, we should not underestimate the strong temptation to make consumers dependent on these marvelous digital foods— to addict us to that serotonin-producing tsunami of likes, comments, and friend updates...

Leonhard, Gerd (2016-09-08). Technology vs. Humanity: The coming clash between man and machine (FutureScapes) (Kindle Locations 1767-1802). Fast Future Publishing. Kindle Edition.
'eh? Is "digital obesity" a real thing -- i.e., "clinically"? Or is this just a clicky-sticky metaphor?

Stay tuned. Keep checking back. Just getting started. A lot to discuss, including "Digital Ethics."
Chapter 10: Digital Ethics – In this chapter, I argue that, as technology permeates every aspect of human life and activity, digital ethics will evolve into a burning, un-ignorable issue for every individual and organization. At present we do not even have a common global language to discuss the issue, let alone agreement on accepted rights and responsibilities. Environmental sustainability is often brushed aside by the developing economies as a first world problem and is always sidetracked during economic recessions. In contrast, digital ethics will force its way to a permanent position at the front and center of our political and economic lives. It’s time to have the ethical conversation about digital technology— a potentially greater threat to continued human flourishing than nuclear proliferation. [ibid, Locations 109-114]
Recall our recent look at "ethics" in the case of Hurricane Katrina.

Again, much to discuss here going forward.


Day five. My Jaco remains lost. I'm starting to fear I will never see him again. A continuing kick to the gut. Difficult to get motivated to do anything this weekend.

Monday update: Some prior KHIT posts of relevance to Gerd Leonhard's book, to cite just a few that jump right back up. Also, most recently, "In the wake of Labor Day, thinking about jobs, education, healthcare, and tech."


In the simplest categorical tabulation, "ethics" (arriving at the "should/should not") comes in three flavors: [1] Consequentialism (utilitarianism), [2] "Deontology" ("duty theory"), and [3] "Virtue Ethics." The latter, to many, comes fraught with a bit of question-begging circularity. To wit:
Virtue ethics is currently one of three major approaches in normative ethics. It may, initially, be identified as the one that emphasizes the virtues, or moral character, in contrast to the approach which emphasizes duties or rules (deontology) or that which emphasizes the consequences of actions (consequentialism). Suppose it is obvious that someone in need should be helped. A utilitarian will point to the fact that the consequences of doing so will maximize well-being, a deontologist to the fact that, in doing so the agent will be acting in accordance with a moral rule such as “Do unto others as you would be done by” and a virtue ethicist to the fact that helping the person would be charitable or benevolent. [Stanford Encyclopedia of Philosophy]
Aristotle waxed eloquent on the subject of "virtue." One becomes "virtuous" via habituation -- by consistently performing "virtuous acts." Noble enough in the unremarkable cases, where knowing a priori what counts as virtuous is an easy call. But the petitio principii ("begging the question") problem is a potential liability here. "Charitable," "benevolent"? I'm down with those moral principles. I can readily think of others, though, who are not. A big part of the core purpose of moral deliberation is the rational determination of what counts as "virtuous." The "appeal to tradition" (which accounts for the bulk of social/moral norms) does not summarily equate to "evidence" in support of an argument.

While a lot of ethics debates dwell (interminably?) on abstract "thought experiments" (or difficult, sometimes far-fetched "use case" scenarios) in pursuit of ostensibly rational moral guidance, applied normative ethics in the bio-clinical space -- as we encountered in the 2005 post-Katrina "Playing God?" dustup at NOLA's Memorial Hospital -- frequently requires time-pressured, dispositive decisions and action in exigent circumstances, lest people be injured or die.

What of Gerd's "Digital Ethics?"
Chapter 10 
Digital Ethics 

Technology has no ethics— but humanity depends on them. Let’s do some exponential math. If we continue on the current path, in just eight to 12 years— depending on when we start counting— overall technological progress is going to leap from today’s pivot point of four to 128. At the same time, the scope of our ethics will continue to limp along on a linear, step-wise, and human scale of improvement, from four to five or six if we’re lucky; it will improve just a little bit as we adapt to a new framework. 

Even if Moore’s Law may eventually cease to apply as far as microchips are concerned, many of the fields of technology, from communications bandwidth to artificial intelligence (AI) and deep learning, are still likely to grow at least exponentially and with combinatorial effects— the changes reinforcing one another.

Zoom forward another ten years, and we may indeed end up 95% automated, hyperconnected, virtualized, uber-efficient, and much less human than we could ever imagine today. A society that sleepwalks down the exponential growth-path of the Megashifts (see chapter 3), a society that does not pause to consider the consequences for human values, beliefs, and ethics, a society that is steered by technologists, venture capitalists, stock markets, and the military, is likely to enter a true machine age. 

So what are ethics? Going beyond the simple answer, how one should live, the Greek word ethos means custom and habit. Today, we often use ethics as a synonym or as shorthand for morals, values, assumptions, purposes, and beliefs. The primary concern of ethics is to question whether something is right or not in a given circumstance. What feels right to you is governed by your ethics, and in many cases it’s hard to explain why something does not feel right. That is clearly one of the challenges of agreeing on even the most basic ethical rules for the exponential age we are about to enter... [Leonhard, op cit, Kindle Locations 2318-2333]
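Leonhard's "four to 128" is just repeated doubling -- five doublings over roughly a decade -- set against ethics inching along linearly. A quick sketch of that arithmetic (the two-year doubling period and the one-unit-per-period "ethics" increment are my illustrative assumptions, not figures from the book):

```python
# Exponential (doubling) vs. linear growth from the same starting point.
# The doubling period and linear increment are illustrative assumptions.
def exponential(start, periods):
    return start * 2 ** periods

def linear(start, periods, increment=1):
    return start + periods * increment

periods = 10 // 2                  # ~five two-year doublings in a decade
tech = exponential(4, periods)     # 4 -> 8 -> 16 -> 32 -> 64 -> 128
ethics = linear(4, periods)        # 4 -> 5 -> 6 -> 7 -> 8 -> 9
print(tech, ethics)                # the widening gap is the "clash"
```

Even on this generous linear schedule, "ethics" reaches only 9 while the technology curve hits 128 -- and Leonhard's own estimate for the linear side is just five or six.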
Leonhard gives much deliberation to the speculative relative merits of technological "precaution" vs. "proaction." He goes on to call for the establishment of a "GDEC."
Creating a Global Digital Ethics Council: How would we define ethics that are fit for the exponential age?
I would like to address two main concerns: Firstly, to try and define what a globally agreeable set of ethics could be for an exponentially Digital Age; and secondly, to try and define what we would need to do to ensure that human well-being and ethical concerns actually remain on top of the agenda globally, and are not taken over by machine thinking. We need to define a set of bottom-line digital ethics— ethics that are fit for the Digital Age: open enough not to put the brakes on progress or hamper innovation, yet strong enough to protect our humanness. A compass, not a map, towards a future that will see increasingly powerful technologies first empower, then augment and then increasingly threaten humanity. 

To this end, I propose that we create a Global Digital Ethics Council (GDEC) tasked with defining what the ground rules and most basic and universal values of such a dramatically different, fully digitized society should be... [ibid, Kindle Locations 2401-2409]
Very intriguing stuff, all of it. Gerd clearly comes down on the "humanist" side of things:
 If we don’t want to become technology ourselves; if we don’t want to be increasingly assimilated into the powerful vortex created by the Megashifts; if we want to remain “naturally human” in spite of the powerful lures of those magical technologies; if we want to safeguard what truly makes us happy and not just what makes us function, we must take action while we still have the wiggle room. That time is now. 

We must start asking why, followed by who, and when, not just if and how. We must ask questions about purpose, not just about profits. We must increasingly question industry leaders and especially technologists and the firms that employ them. We must compel them all to take a more holistic view, to consider the good as well as the not-so-good implications of what they are proposing. We must also ask them to acknowledge and address those unintended consequences, and to include the externalities of whatever they are creating in their business plans and revenue models. 

We must hold the creators and financiers of tomorrow— and of course ourselves, as users and consumers— responsible at every turn. We need to start denying customership to those companies that don’t care enough, and we must stop being the content for those platforms that are seeking to automate us. We must stop being silent contributors to machine thinking because everything else is less convenient. 

If we don’t want to end up with what I call the Oppenheimer Regret— named after the famous physicist J. Robert Oppenheimer, whose inventions made the atomic bomb a reality, and who subsequently regretted his actions and their consequences— we must commit to being on “team human,” to put humanity first and above all... [ibid, Kindle Locations 2719-2731]
"We must start asking why..."

Yeah. But, at Health2con next week, I'm sure the "why" is gonna mostly have to do with money -- frenzied VC spending and market capture opportunities.

A highly recommended read. I know that he will get pushback on the "humanism" ethos, as some equally smart futurist thinkers regard the coming "Singularity" (which Gerd discusses) -- one that eclipses biological humans -- as an inevitable next step in evolution (should we continue to survive). Cultural evolution, more so than biological evolution, is firmly in the driver's seat these days. And "first-world culture" is all about exponentially advancing technology, much if not most of it digital.

Amazon is gumshoeing me, unsurprisingly, mining my purchase and browsing histories. Got this recommendation in my email today.

A noted Harvard professor, no less. Hmmm...

But, hold on a second. There's but one reader review, a 4-star.
...the cases in the book are entirely retrospective — there isn't any speculation about the sorts of hypotheticals that philosophical ethicists love to worry about (and which don't have to be as tedious as the infamous trolley problems). While she doesn't mention the quote I cited above, SJ does invoke the famous Rumsfeldian "folk epistemology" of "known knowns / known unknowns / unknown unknowns" when criticizing risk assessment; her excellent point is that "scientific" risk assessment is limited to known unknowns, and ignores what often is most troubling to people, the unknown unknowns. So it's ironic that her book does the same thing, or even stick to known knowns (though arguably matters dealing with values aren't really "knowns"): every problem she mentions is illustrated with a case that occurred in the past. One of the benefits of philosophical ethics, though, is that an imaginative author can often pluck problems from the realm of the unknown and bring them into the realm of the known unknown, at least. Here are some junior high school-level examples: suppose some mad genius or company really invents robot warriors, or an AI that could bring about the "singularity," surpassing human intelligence by orders of magnitude — what are the ethical choices they ought to consider? Suppose someone invents some new creature or microbe in her bathtub and thinks it would be fun to release it into the wild? How do we deal with the fact that many inventors are loners, and far from the reach of governance institutions? People are really trying to do such things, so these aren't idle questions. As SJ herself points out, experts often cop out on these questions with expressions like "But that seems unlikely for now": she notes that "silenced in this account is the what-if question" (@252). She's right, and yet I felt the book was doing something similar. 
Movies like Scarlett Johansson's "Her" and TV shows like "Person of Interest" actually do a better job of raising the ethical issues I was hoping to see more soberly considered here. And even some here-and-now examples, such as private drones, military drones, and driverless cars, don't get analyzed — perhaps because there hasn't yet been a lawsuit about them...
Lordy. Think I'll pass, notwithstanding the net four stars. There are only so many hours in the day, and only so many dollars in my bank account.

Maybe I'll do this one next (after I finish up with "Touching a Nerve").

From the Amazon blurb:
...Nicholas Carr cuts through Silicon Valley’s unsettlingly cheery vision of the technological future to ask a hard question: Have we been seduced by a lie? Gathering a decade’s worth of posts from his blog, Rough Type, as well as his seminal essays, Utopia Is Creepy offers an alternative history of the digital age, chronicling its roller-coaster crazes and crashes, its blind triumphs, and its unintended consequences.

Carr’s favorite targets are those zealots who believe so fervently in computers and data that they abandon common sense. Cheap digital tools do not make us all the next Fellini or Dylan. Social networks, diverting as they may be, are not vehicles for self-enlightenment. And “likes” and retweets are not going to elevate political discourse. When we expect technologies—designed for profit—to deliver a paradise of prosperity and convenience, we have forgotten ourselves. In response, Carr offers searching assessments of the future of work, the fate of reading, and the rise of artificial intelligence, challenging us to see our world anew...
I've cited Nicholas Carr before on KHIT (scroll down), re "The Glass Cage."

More to come...

Thursday, September 15, 2016

"Maybe one day precision medicine will lead to precision voting"

Apropos of the latest hyped-up candidate "health issues" reported during this final stretch of the absurd 2016 Presidential campaign, Saurabh Jha, MD has a doozy of a post up at THCB.
It is understandable that a nation obsessed with health is obsessed with the health of its presidential runners. Mr. Trump’s doctor declared he’s the healthiest presidential candidate ever. Mr. Trump has drawn attention to his super health by pointing to the size of his hands – by Mr. Trump’s standards a rather decorous allusion. It matters not what has hypertrophied Mr. Trump’s hands, what matters is that Mr. Trump’s large hands signal vigor and imagination...

It is Mrs. Clinton who is in the crossfire of Health McCarthyites. Many have diagnosed her ill health merely by looking at her. This is an extraordinary feat by the lay public, which physicians can’t achieve even after years of training. Perhaps I’m not staring at her intently enough, but I can’t detect pallor or icterus in Mrs. Clinton. I do detect boredom in her. Rather than the vigor I expect to see in her in the last lap of becoming the first female president, Mrs. Clinton’s physiognomy is of a runner who has hit the wall at mile 17 of a marathon...

Perhaps presidential candidates will have to be approved by the FDA – after all presidents can be a safety issue for the public, just like a new drug or device. Perhaps bioethicists will have the first word on suitability of the candidate for presidency, bioethicists are always looking for things to do...

Recently, a chap in a solemn tone said “I’m really worried about Hillary’s health. We’re not talking about a hospital administrator. We’re talking about the commander-in-chief.”...

Concern about a candidate’s health is the latest of objective, data-driven, façades where prejudice peacefully thrives. It is a circus show which distracts us from the main issue, which is that neither candidate offers a bold economic platform for the future. In America’s strangest election till date, the public are grasping the last bastions of objectivity. They’re grasping at straws and behaving very strangely. It is a clownish effort befitting a clownish election.

From my comment under Dr. Jha's post:
How has it come to pass that the two most despised national figures are the last ones standing for U.S. President in 2016? A pompous, vulgar, dangerously belligerent ignoramus and con man vs. a slippery, knife-fighting inside-player control freak who also thinks the rules don't apply to HER?
Read all of Dr. Jha's post.

I attended HIMSS14 in Orlando on a press pass. Hillary was Keynoting. They wouldn't even let us registered media attendees into the keynote ballroom to cover her. We had to watch it from a monitor in the press room -- where I caught a lame rebuke from a staffer for simply shooting a pic of Her Serene Eminence on the monitor.

I posted it anyway. Bite me.

The prior year, while covering HIMSS13 in New Orleans, I had zero trouble shooting pics of hubby ex-Prez Bill Clinton's Keynote (albeit under the creepy steely gaze of the Secret Service).

The 2016 Presidential race is giving me a serious headache.

And, I'm seriously distracted this week. I'd just returned from Baltimore where I was visiting my son and grandson when this happened in the evening:

He slipped out through the garage while we were putting out the trash, and simply disappeared. By the time we realized what had just happened, it was too late. We never even saw him go. Ugh. He was not wearing his collar and tags. He has a chip, which maps to the veterinary clinic in Las Vegas where our dogs had always been seen 'til I retired and moved to CA in 2013. I've alerted them.

We found Jaco on the I-95 and Jones freeway ramp in Vegas in 2003, a freaked-out lost puppy about to get killed in traffic. He's now 14 and on a kidney Rx diet. Heartsick times at the moment in my house. It's now the 15th as I write. He's not come back, nor has he turned up in any shelters. Repeated car and foot trips around the 'hood have also come up empty.


Back on topic. New at STATnews:
If my health as an anesthesiologist has to be painstakingly vetted, so should a president’s

If you’re having surgery today, you’ll meet me, or one of my colleagues — an anesthesiologist in blue scrubs. I will put you to sleep, monitor your vital signs like a hawk, keep you pain-free, and, when the operation is over, return you to consciousness as if nothing had ever happened.

Why do you trust me with your life?

First, you trust me because of the credentials “MD Anesthesiologist” you see spelled out on my hospital badge. Second, you trust me because of what you don’t see. Long before we met, my health record was checked for you. Like coffee grounds in a French press, my medical history and medical record were squeezed through review boards to make sure that I am fit to safely care for you...
"Disclosures matter because they eliminate the uncertainty in not knowing about a candidate’s health. The president of the United States of America presides over the largest economy and military in the world. Market stability and international security depend on that predictable leadership. Uncertainty, real or perceived, in a president’s health may be seen as weakness on the international stage and an opportunity to exploit or attack."

More to come...

Sunday, September 11, 2016

Tuesday, September 6, 2016

In the wake of Labor Day, thinking about jobs, education, healthcare, and tech

Labor Day parade, circa early 1900s. Times have changed.

I hope everyone had an enjoyable Labor Day weekend.

Among my many topical titles relevant to the overlapping health care and health tech spaces, I continue to read a lot of "futurism" books, e.g.,

Finished Brett King's rhapsodic take on all incipient things tech last week. The Amazon blurb:
The Internet and smartphone are just the latest in a 250-year- long cycle of disruption that has continuously changed the way we live, the way we work and the way we interact. The coming Augmented Age, however, promises a level of disruption, behavioural shifts and changes that are unparalleled. While consumers today are camping outside of an Apple store waiting to be one of the first to score a new Apple Watch or iPhone, the next generation of wearables will be able to predict if we’re likely to have a heart attack and recommend a course of action. We watch news of Google’s self-driving cars, but don’t likely realise this means progressive cities will have to ban human drivers in the next decade because us humans are too risky. Following on from the Industrial or machine age, the space age and the digital age, the Augmented Age will be based on four key disruptive themes—Artificial Intelligence, Experience Design, Smart Infrastructure, and HealthTech. Historically the previous ‘ages’ bought significant disruption and changes, but on a net basis jobs were created, wealth was enhanced, and the health and security of society improved. What will the Augmented Age bring? Will robots take our jobs, and AI’s subsume us as inferior intelligences, or will this usher in a new age of abundance?

Augmented is a book on future history, but more than that, it is a story about how you will live your life in a world that will change more in the next 20 years than it has in the last 250 years. Are you ready to adapt? Because if history proves anything, you don't have much of a choice.
A good read, if a tad too optimistic in a number of areas. Though, I did like that he cited another optimist a number of times -- Wired co-founder and original thinker Kevin Kelly, whom I've cited before. See, e.g., my May post "Anything that CAN be tracked WILL be tracked." Inevitable Tech Forces That Will Shape Our Future.

The Amazon blurb for Alec Ross's very nice "The Industries of the Future."
Leading innovation expert Alec Ross explains what’s next for the world: the advances and stumbling blocks that will emerge in the next ten years, and how we can navigate them.

While Alec Ross was working as Senior Advisor for Innovation to the Secretary of State, he traveled to forty-one countries, exploring the latest advances coming out of every continent. From startup hubs in Kenya to R&D labs in South Korea, Ross has seen what the future holds.

In The Industries of the Future, Ross shows us what changes are coming in the next ten years, highlighting the best opportunities for progress and explaining why countries thrive or sputter. He examines the specific fields that will most shape our economic future, including robotics, cybersecurity, the commercialization of genomics, the next step for big data, and the coming impact of digital technology on money and markets.

In each of these realms, Ross addresses the toughest questions: How will we adapt to the changing nature of work? Is the prospect of cyberwar sparking the next arms race? How can the world’s rising nations hope to match Silicon Valley in creating their own innovation hotspots? And what can today’s parents do to prepare their children for tomorrow?

Ross blends storytelling and economic analysis to give a vivid and informed perspective on how sweeping global trends are affecting the ways we live. Incorporating the insights of leaders ranging from tech moguls to defense experts, The Industries of the Future takes the intimidating, complex topics that many of us know to be important and boils them down into clear, plainspoken language. This is an essential book for understanding how the world works—now and tomorrow—and a must-read for businesspeople in every sector, from every country.
Also a good read.

I continue to reflect at length on the implications of my growing recent book stash, which spans a look back from the origins of the universe and biological evolution, through the various "disruptive" socioeconomic inflection eras, to speculations on where humanity is headed in a world of exponentially accelerating cultural and technological evolution.

To leaven one's thinking, one is advised to add the likes of Paul Mason's bracing "Postcapitalism" and Joel Kotkin's excellent "The New Class Conflict."

In general, the concern goes to the question "what if half or more of employment just goes away, permanently, replaced by AI and robotics?"
In this regard, see the often-cited Oxford Martin study on the future of employment and computerization. Brett King et al. cite it in Augmented.

Highly recommend, again, that you also take the time to read and reflect on "Four Futures," now an online essay at Jacobin, soon to be a book release.
Much of the literature on post-capitalist economies is preoccupied with the problem of managing labor in the absence of capitalist bosses. However, I will begin by assuming that problem away, in order to better illuminate other aspects of the issue. This can be done simply by extrapolating capitalism’s tendency toward ever-increasing automation, which makes production ever-more efficient while simultaneously challenging the system’s ability to create jobs, and therefore to sustain demand for what is produced. This theme has been resurgent of late in bourgeois thought: in September 2011, Slate’s Farhad Manjoo wrote a long series on “The Robot Invasion,” and shortly thereafter two MIT economists published Race Against the Machine, an e-book in which they argued that automation was rapidly overtaking many of the areas that until recently served as the capitalist economy’s biggest motors of job creation. From fully automatic car factories to computers that can diagnose medical conditions, robotization is overtaking not only manufacturing, but much of the service sector as well.

Taken to its logical extreme, this dynamic brings us to the point where the economy does not require human labor at all. This does not automatically bring about the end of work or of wage labor, as has been falsely predicted over and over in response to new technological developments. But it does mean that human societies will increasingly face the possibility of freeing people from involuntary labor. Whether we take that opportunity, and how we do so, will depend on two major factors, one material and one social...
So, where are we today? What educational pursuits can you even undertake to immunize yourself from employment obsolescence? How will you pay for adequate, effective health insurance or health care if you simply cannot find any work? See my 2015 "The Robot will see you now -- assuming you can pay."

See also "Are you ready for the jobs revolution?"
Welcome to the age of uncertainty. Never has it been harder to predict what our working futures hold and arm ourselves accordingly, as technological advances, an ageing workforce and the rise of the sharing economy cause new jobs to appear as fast as others vanish.

Traditional career paths are on the way out, say the experts, with jobs-for-life replaced by a new expectation among younger workers that they might hold 11 or more jobs throughout their lives, according to Karin Klein of Bloomberg Beta.

This is the new gig economy, and it’s gaining ground. In its wake has come a culture of short-termism, an on-demand approach to work which is spreading through professions and occupations, causing a rise in freelance work and part-time jobs...

IRS Form 1099. The "self-employment" / "independent contractor" annual income reporting equivalent of the employee W-2.
The 1099 is nothing new. In 1978 I formed my last band, a five-member R&B band in Knoxville, prior to finally going to college at Tennessee and getting out of the music business. I set up a Sub-S corporation via which to administer the business side of things. I paid myself a small salary, and paid my four sidemen equal splits as 1099 "contractors."

We disbanded in early 1980 after two years of scrapping it out on the road averaging about 2,000 miles a month with no real progress in sight.

I legally "laid myself off" and drew my unemployment. In September of that year, at the age of 34, I began my Freshman year at UTK.
Fast-forward to current times. An article excerpt from writer Steven Hill of the New America Foundation:
After working for many years in the Washington, D.C.–based think-tank world, the program that I directed lost most of its funding and was shut down shortly thereafter. All my employees, myself included, were laid off. I was promoting my latest book that had been published a few months before, so I surfed that wave for many more months. For a while, all seemed normal and natural, but without realizing it I had stepped off the safe and secure boat of having what is known as a “good job,” with a steady paycheck, secure employment and a comprehensive safety net, into the cold, deep waters of being a freelance journalist.

Suddenly I was responsible for paying for my own health care, arranging for my own IRAs and saving for my own retirement. I also had to pay the employer’s half of the Social Security payroll tax, as well as Medicare — nearly an extra 8 percent deducted from my income. The costs for my health-care premiums zoomed out of sight, since I was no longer part of a large health-care pool that could negotiate favorable rates...

In short, I had to juggle, juggle, juggle, while simultaneously running uphill — my life had been upended in ways that I had never anticipated. And I began discovering that I was not alone. Many other friends and colleagues — including Pulitzer Prize-winning journalists, professionals and intellectuals, as well as many friends in pink-, white- and blue-collar jobs — also had become 1099 workers, tumbleweeds adrift in the labor market. They found themselves increasingly faced with similar challenges, each in his or her own profession, industry or trade. In short, we had entered the world of what is known as “precarious” work, most of us wholly unprepared.

Not to worry. The sharing economy visionaries — who like Dr. Pangloss in Voltaire’s Candide always see “the best of all possible worlds” — had a plan in place for us. We could “monetize” our assets — rent out our house, our car, our labor, our driveway, our spare drill and other personal possessions — using any number of brokerage websites and mobile apps like TaskRabbit, Airbnb, SnapGoods, the ride-sharing companies Uber and Lyft, and more.

This is the new economy: contracted, freelanced, “shared,” automated, Uber-ized, “1099-ed.” In essence, the purveyors of the new economy are forging an economic system in which those with money will be able to use faceless, anonymous interactions via brokerage websites and mobile apps to hire those without money by forcing an online bidding war to see who will charge the least for their labor, or to rent out their home, their car, or other personal property.

Websites like Uber, Elance-Upwork, TaskRabbit, Airbnb and others are taking the Amazon/eBay model the next logical step. They benefit from an aura that seems to combine convenience with a patina of revolution; convenience as revolution. The idea of a “sharing” economy sounds so groovy — environmentally correct, politically neutral, anti-consumerist and all of it wrapped in the warm, fuzzy vocabulary of “sharing.” The vision has a utopian spin that is incredibly seductive in a world where both government and big business have let us down by leading us into the biggest economic crash since the Great Depression.

But the “sharing” economy’s app- and Web-based technologies have made it so incredibly easy to hire freelancers, temps, contractors and part-timers, why won’t every employer eventually lay off all its regular, W-2 employees and hire 1099 workers? Any business owner would be foolish not to, as he or she watches their competitors shave their labor costs by 30 percent (by escaping having to pay for an employee’s safety net and other benefits).

Outsourcing to these 1099 workers has become the preferred method for America’s business leaders to cut costs and maximize profits. One new economy booster clarified employers’ new strategy: “Companies today want a workforce they can switch on and off as needed” — like one can turn off a faucet or a radio.

Indeed, the so-called “new” economy looks an awful lot like the old, pre-New Deal economy – with “jobs” amounting to a series of low-paid micro-gigs and piece work, offering little empowerment for average workers, families or communities. We’re losing decades of progress, apparently for no other reason than because these on-demand companies conduct their business over the Internet and apps, somehow that makes them “special.” Technology has been granted a privileged and indulged place where the usual rules, laws and policies often are not applied.

If that practice becomes too widespread, you can say goodbye to the good jobs that have supported American families, goodbye to the middle class and say adios to the way of life that made the United States the leading power of the world.
The cover of his book:

I've not bought and read it yet. I get the picture.

In Brett King's book "Augmented," the uncritical lauding of the "gig economy" in general and Uber in particular got a bit tiresome after a while (keyword searching "Uber" alone yielded 65 hits).
Gigging, Job-hopping and Cloud-based Employment 
It was recently reported by Mintel that almost a quarter of Millennials would like to start their own businesses, and nearly one in five planned to do so in the next 12 months. In markets like the United States or Australia where the cost of college education is becoming either unattainable or a poor investment for large swathes of the population, many of this generation are choosing instead to be educated by online platforms, hackathons, internships, start-ups and experimentation rather than through traditional college approaches. With this alternative approach to education, this tech-savvy generation is increasingly demanding flexibility with employment. A total of 66 per cent of Millennials would be willing to wear technology to help them do their jobs. In fact, 40 to 45 per cent of Gen Y regularly use their personal smartphones and download apps specifically for work purposes (as opposed to 18 to 24 per cent of older generations). 

In the United Kingdom, 85 per cent of Gen Y graduates think that freelance or independent working will become a more common and accepted way to succeed in the job market over the next five years. In fact, freelancing is becoming so common amongst Millennials that they’ve even come up with their own term for it— gigging. As in “I’ve got a gig at Google.” Others call them “permanent freelancers” or “permalancers”. Increasingly, this type of work is done at home, at a shared workspace or even at a Starbucks. There are even websites dedicated to helping giggers find coffee shops that can be used as workspaces. It’s hardly surprising, then, that almost half of Millennials surveyed in the United Kingdom and the United States show a strong preference for this sort of working lifestyle. 

The full-time job is historically an anomaly. Prior to the industrial age, it didn’t really exist. Early industrialists, who needed to have workers on a production line at the same time for efficiency, are most likely responsible for creating the concept of a structured work week. Consequently, for the last 100 years, the 40-hour-a-week job has been the centrepiece of work life simply because there was no better way for people to gather in one place at the same time to connect, collaborate and produce. 

Now technology is changing the very nature of work. Millennials will be the first modern generation to work in multiple “micro-careers” at the same time, leaving the traditional full-time job or working week behind. “Work” is more likely to behave like a marketplace in the cloud than behind a desk at a traditional corporation. While a central skill set or career anchor will be entirely probable, most will be entrepreneurs, and many will have their side gigs. For instance, Uber, Lyft and Sidecar are platforms that give people a way to leverage their cars and time to make money. TaskRabbit is a market for odd jobs. Airbnb lets you rent out any extra rooms in your home. Etsy is a market for the handmade knick-knacks or 3D print designs that you make at home. DesignCrowd, 99designs and CrowdSPRING all offer freelance design resources that bid logos and other designs for your dollars. 

Before long, technology will allow instant marketing of your skill set, the auctioning of gigs and expertise, and the ability to be paid for your work in near real time or as deliverables are finished.

King, Brett; Lark, Andy; Lightman, Alex; Rangaswami, JP (2016-05-15). Augmented: Life in The Smart Lane (Kindle Locations 1062-1089). Marshall Cavendish International (Asia) Pte Ltd. Kindle Edition.
"...instant marketing of your skill set, the auctioning of gigs and expertise, and the ability to be paid for your work in near real time or as deliverables are finished."

Yeah. Just don't do any work for Donald Trump without getting paid in advance (and good luck with that demand).

UPDATE: hat tip to my FB friend Ann Neumann for this link: "Teachers Are Working for Uber Just to Keep a Foothold in the Middle Class."

BTW: King glosses over the now-moribund Theranos with equal ease:
Microfluidics is going to radically change the way we think about health care and diagnosis. Think about pretty much any condition you might need to be tested for today— high cholesterol, diabetes, kidney or liver problems, iron deficiency, heart problems, sexually transmitted infections, anaemia, hepatitis, HIV and various viruses— and they are typically tested for either via blood being drawn or via urinalysis. With blood tests, it typically involves drawing a significant amount of blood to get accurate tests. Here is where technology advancements will dramatically change these tests. 

While facing some controversy recently, the Silicon Valley start-up Theranos was one of the first to really tackle this problem. Theranos has developed blood tests that can help detect dozens of medical conditions based on just a drop or two of blood drawn with a pinprick from your finger. At Walgreens pharmacies across the United States, you simply show a pharmacist your ID and a doctor’s note, and you can have your blood drawn right there. From that one sample, several tests can be run often at a fraction of the cost of a typical pathology test. A typical lab test for cholesterol, for example, can cost $ 50 or more in the United States. The same Theranos test at Walgreens costs around US $ 3. [ibid, Kindle Locations 2577-2586]. 

Just in. Saw this on my iPhone.


In a searing investigation into the once lauded biotech start-up Theranos, Nick Bilton discovers that its precocious founder defied medical experts—even her own chief scientist—about the veracity of its now discredited blood-testing technology. She built a corporation based on secrecy in the hope that she could still pull it off. Then, it all fell apart.
It's worse than even I had thought. A number of Silicon Valley VC haircuts draw nigh.
In Silicon Valley, every company has an origin story—a fable, often slightly embellished, that humanizes its mission for the purpose of winning over investors, the press, and, if it ever gets to that point, customers, too. These origin stories can provide a unique, and uniquely powerful, lubricant in the Valley. After all, while Silicon Valley is responsible for some truly astounding companies, its business dealings can also replicate one big confidence game in which entrepreneurs, venture capitalists, and the tech media pretend to vet one another while, in reality, functioning as cogs in a machine that is designed to not question anything—and buoy one another all along the way.
It generally works like this: the venture capitalists (who are mostly white men) don’t really know what they’re doing with any certainty—it’s impossible, after all, to truly predict the next big thing—so they bet a little bit on every company that they can with the hope that one of them hits it big. The entrepreneurs (also mostly white men) often work on a lot of meaningless stuff, like using code to deliver frozen yogurt more expeditiously or apps that let you say “Yo!” (and only “Yo!”) to your friends. The entrepreneurs generally glorify their efforts by saying that their innovation could change the world, which tends to appease the venture capitalists, because they can also pretend they’re not there only to make money. And this also helps seduce the tech press (also largely comprised of white men), which is often ready to play a game of access in exchange for a few more page views of their story about the company that is trying to change the world by getting frozen yogurt to customers more expeditiously. The financial rewards speak for themselves. Silicon Valley, which is 50 square miles, has created more wealth than any place in human history. In the end, it isn’t in anyone’s interest to call bullshit...

I have little doubt that we are potentially on the cusp of truly "transformative," widespread deployment of health tech, finally getting past the chronic "Free Beer Tomorrow" problem. In a few weeks I'll be covering the 10th Annual Health 2.0 Conference, where I'm sure to be enthralled by the latest in Health Tech whiz-bang.

Brett King on nascent health tech promise:
In much the way GPS or navigation software can predict how traffic is going to impact your journey or travel time, over the next decade health sensors married with AI and algorithms will be able to sense developing cardiovascular disease, impending strokes, GI tract issues, liver function impairment or acute renal failure and even recommend or administer direct treatment that prevents a critical event while you seek more direct medical assistance. 

Insurance companies that offer health and life insurance are starting to understand that these tools will dramatically reduce their risk in underwriting policies, as well as help policyholders (that’s us) manage their health better in concert with medical professionals. Insurance will no longer be about assessing your potential risk of heart disease, as much as it will be about monitoring your lifestyle and biometric data so that the risk of heart disease can be managed. The paper application forms that you fill out for insurance policies today will be pretty much useless compared with the data insurers can get from these types of sensor arrays. Besides, an application form won’t be able to help you actively manage diet, physical activity, etc., to reduce the ongoing risk of heart disease. It’s why organisations like John Hancock, the US insurance giant, are already giving discounts to policyholders who wear fitness trackers.

With so much data being uploaded to the interwebs, every second of every day, we have already gone well beyond the point where humans can effectively analyse the volume and breadth of data the world collects without the use of other computers. This will also dramatically change the way we view diagnosis... [ibid, Kindle Locations 1402-1416].
All of our mobile, personalized (wearable and ingestible) health sensor data will be effortlessly uploaded into our docs' EHRs via APIs, where Watson and the rest of AI will largely take it from there, right? The Health Care Productivity Treadmill? Problem solved. More time for clinical "empathy"? Check. "Omics"-driven personalized medicine everywhere? Check.
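Snark aside, the API hand-off part is at least technically plausible today: HL7's FHIR standard (the REST-based health data API most EHR vendors now expose) defines an "Observation" resource for exactly this kind of device reading. A minimal sketch of packaging a wearable heart-rate reading as a FHIR R4 Observation; the patient ID is hypothetical, and the actual POST to an EHR endpoint is left as a comment:

```python
import json
from datetime import datetime, timezone

def heart_rate_observation(patient_id: str, bpm: int) -> dict:
    """Package a wearable heart-rate reading as an HL7 FHIR R4 Observation."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        # LOINC 8867-4 is the standard code for heart rate
        "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

obs = heart_rate_observation("example-123", 72)
payload = json.dumps(obs)
# A real device or aggregator would then POST `payload` to the
# EHR's FHIR endpoint, e.g. https://<ehr-host>/fhir/Observation
```

The hard part, of course, isn't constructing the JSON; it's the workflow and liability questions around who reads the resulting firehose.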



Recently in our snailmail.

The latest data indicate that the average cost of attending a public college for an undergraduate degree is just shy of $30,000 a year, and roughly double that for private colleges. Add to that the cost of graduate education (including med school), and you might easily amass several hundred thousand dollars' worth of debt that will be difficult to eliminate, even assuming a steady job thereafter.
I count myself extremely lucky. I didn't accrue very much undergrad debt at all (paid off in short order), and I paid cash for my Master's.
Particularly in today's tech sector, a Master's degree (often augmented by one or more "professional/technical society" certifications) is becoming the floor norm for employment. Onerous student debt in pursuit of all of this has become a major issue, and is sure to become even more acute and pressing if significant employment opportunity contractions ensue as predicted in some quarters.

Beyond conventional two- and four-year (and graduate) degree programs, we also have a proliferation of "accelerated professional degree" and certificate programs. Recall, e.g., my post "A Master's in Health Policy and Law? UCSF/Hastings."

Recall also my reporting on Stanford's recent "certificate" offering in "Genomics/Personalized Medicine."
One concern I continue to have with a lot of these offerings goes to the relatively lax "admissions requirements" (e.g., a 3.0 GPA, a "letter of interest," and cumulative "life experience"). A principal criterion often seems to be possession of a credit card with a valid expiration date and sufficient credit line. After I inquired of the UCSF/Hastings program, and notwithstanding that I'd made it quite clear why (simply to report on it for my KHIT blog), I promptly got handed off to an "Admissions Counselor" (read: outsourced boiler room sales rep) who proceeded to badger me mercilessly with repeated phone calls and emails warning me of the looming "enrollment deadline." I finally had to get pissy with her to shut her down.
None of this stuff comes cheap. My CQE cert (ASQ Certified Quality Engineer) was a huge relative bargain at $180 (ASQ member exam price) when I first sat for the exam in 1992 (it's now $498 for non-members, $348 for members, still a bargain). And, every Tom, Dick, & Harry tech society (e.g., HIMSS, AHIMA, IAPP, etc.) administers a breadth of similar certification vetting exams via which you can add more non-degree alphabet soup letters after your name.

We also now see a proliferation of online "MOOC" offerings (Massive Open Online Courses). Think Khan Academy, IHI Open School, etc., along with a huge patchwork of every imaginable ad hoc and institutional educational offering on YouTube (legit and scams alike).

Well, you could become a self-taught "SME" (Subject Matter Expert) relatively inexpensively via your own initiative given sufficient interest and effort. But, how would anyone come to know of your chops?

More on that shortly.

In reflecting on my numerous recent reads on genomics, it occurs to me that to truly become a SME in "omics" science I would need to back up and study/re-study the relevant basic sciences in detail, e.g.,
  • Physics 
  • Biophysics
  • General Chemistry
  • Organic Chemistry
  • Inorganic Chemistry
  • Physical Chemistry
  • Biochemistry
  • Radiochemistry
  • Biology
  • Evolutionary biology
  • Anthropology
  • Paleontology
  • Calculus
  • Statistics
etc. While I've had physics, calculus, and the gamut of stats in undergrad school (and advanced health care stats in grad school), I'd have to start at the beginning with most of the rest. High school chemistry and biology don't count.


From the Wiki:
A subject-matter expert (SME) or domain expert is a person who is an authority in a particular area or topic. The term domain expert is frequently used in expert systems software development, and there the term always refers to the domain other than the software domain. A domain expert is a person with special knowledge or skills in a particular area of endeavour. (An accountant is an expert in the domain of accountancy, for example.) The development of accounting software requires knowledge in two different domains: accounting and software. Some of the development workers may be experts in one domain and not the other. A SME should also have basic knowledge of other technical subjects.

In general, the term is used when developing materials (a book, an examination, a manual, etc.) about a topic, and expertise on the topic is needed by the personnel developing the material. For example, tests are often created by a team of psychometricians and a team of subject-matter experts. The psychometricians understand how to engineer a test while the subject-matter experts understand the actual content of the exam. Books, manuals, and technical documentation are developed by technical writers and instructional designers in conjunction with SMEs. Technical communicators interview SMEs to extract information and convert it into a form suitable for the audience. SMEs are often required to sign off on the documents or training developed, checking it for technical accuracy. SMEs are also necessary for the development of training materials.
Yeah. I was a partner in an admission test-prep "exam-cram" A/V business in the mid-80's to early 90's (wrote about that experience here). My partner Mike (Michael K. Smith, PhD) remains in that line of work today. A great guy.

We put up posters with tear-off response cards all over campus.
I shot that pic in my studio control room and rendered the poster layout in Corel Draw.

Amid my studies of psychology and statistics I took "psychological tests and measures" while at UTK. I served as one of Mike's principal undergrad research assistants as he finished his psych Doctorate (focused on the psychology of "math anxiety"). Consequently I know a thing or two about valid psychometric assay construction (which I would later get to use, if informally, some years later in my role as an adjunct university faculty member).
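For the curious, one workhorse of "valid psychometric assay construction" in classical test theory is the point-biserial correlation: an item-discrimination index relating whether examinees got a given item right to their total test scores. A well-behaved item correlates positively (high scorers tend to get it right); a near-zero or negative value flags a bad item. A minimal sketch with toy data (not a real assay):

```python
from statistics import mean, pstdev

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation of a dichotomous (0/1) item with total
    test scores -- a classic item-discrimination index."""
    passed = [t for i, t in zip(item_scores, total_scores) if i == 1]
    failed = [t for i, t in zip(item_scores, total_scores) if i == 0]
    p = len(passed) / len(item_scores)   # proportion answering correctly
    q = 1 - p
    sd = pstdev(total_scores)            # population SD of total scores
    return (mean(passed) - mean(failed)) / sd * (p * q) ** 0.5

item = [1, 1, 1, 0, 0, 0]               # 1 = answered this item correctly
totals = [90, 85, 80, 60, 55, 50]       # each examinee's total score
r = point_biserial(item, totals)        # strongly positive: a good item
```

In practice you'd run this (and much more: reliability coefficients, difficulty indices, differential item functioning) over every item in a pilot administration before an exam form goes live.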

The thought occurred: what if you could "MOOC" and then "comp" your way to a degree or a professional / technical cert via testing? At a tiny fraction of today's absurd cost of college or "professional certificate" programs?

Apropos, I am reminded of the legendary con man Frank Abagnale (the "Catch Me If You Can" guy), who passed the Louisiana bar exam at the age of 19, never having been to either college or law school.
As I started my undergrad tenure at UTK, I took the CLEP ("College-Level Examination Program") English exam. The national 99th percentile at the time was a score of 720. I blew it out, scoring a 765. Got me out of an entire year of Freshman English. Well worth it to me.
And, yeah, I know that, more broadly, there are other salient attributes factoring into job success beyond those revealed by strictly subject-matter achievement exam results. But such measures are largely what get you through school; you don't get vetted for EQ or motor skills, etc., as part of graduation requirements.

Startup idea. An "Uber For Your Mind"? LOL. Lots of underemployed 1099 "gig economy" SMEs out there these days. Think about it.


Large, For-Profit ITT Tech Is Shutting Down All Of Its Campuses

The fall semester has just begun on most college campuses, but tens of thousands of students in 38 states were told Tuesday that, instead, their college is closing its doors.

In a press release, ITT Educational Services announced it would close all campuses of its ITT Technical Institutes. The for-profit college system has become a household name over the past half-century. The company blamed the shutdown on the U.S. Department of Education, which had stepped up oversight of the school and recently imposed tough financial sanctions...
I've never cared for this for-profit trade school thing. The entire business model is predicated on federally backed student loans. Excessive costs, low graduation rates, low job placement rates, etc. Two words: "Corporate Welfare."

Regarding current politics of for-profit education:
A For-Profit College Company Paid Bill Clinton Millions of Dollars. Is That a Scandal?

Laureate Education is the world's largest for-profit college operator, a behemoth that enrolls more than 1 million students, online and on campus, at 87 schools across 28 different countries. And, as the Washington Post recounts in a long feature this week, the company paid former President Bill Clinton some $17.6 million over five years to act as its “honorary chancellor”—a job that mostly involved letting the company refer to him as its “honorary chancellor.” Given the for-profit education sector’s rotten track record here in the United States, this naturally raises some questions, such as: Exactly what kind of an enterprise is Laureate? Is there any sign it was trying to buy favors from Hillary Clinton while she was secretary of state? Is there some hint of a scandal here, or any other reason to be annoyed by this particular thread in the Clintons’ complicated web of financial and philanthropic ties?

The short answer is that, no, there is no real scandal in this story, but yes, there are some reasons to be irritated—at least if you prefer your future presidents to enter office relatively free of personal financial ties to troublesome industries...


Apropos of the foregoing, just posted at Medium:
Is online learning the future of education?

In the past, if you wanted to get a qualification, or even simply learn something new, you would sign up for a course at a bricks-and-mortar institution, pay any relevant fees, and then physically attend class. That was until the online learning revolution started...
Polling cited in the article indicates that a plurality of respondents don't think online learning is as effective as the "chalk talk" classroom method (47.68% vs 40.56%, with 11.76% unsure). Again, until we have widespread methodologically sound post-curriculum vetting in place, we're not really gonna know.

Medical schools have competition from other teaching tools. It’s time they acknowledge it

Learning and retaining medical knowledge is a greater challenge for students today than ever before. Some see this as an opportunity. Others see that business opportunity as a threat to established medical education.

In 1950, the doubling time of medical knowledge was estimated to be 50 years. By 2010 that had shrunk to 3.5 years and by 2020 it is projected to be just 73 days. With each passing year, medical students are required to learn more material than did their predecessors. Further adding to the burden is the increasing importance of standardized examinations, particularly the formidable United States Medical Licensing Exam (USMLE) Step 1. An eight-hour-long endeavor, Step 1 tests mastery of the basic sciences, also known as the preclinical curriculum. It is widely considered to be the single most important examination for placement into residency programs after medical school.

Are schools providing the tools to help medical students isolate and retain essential information, and so excel on licensing exams? If students’ study habits are any indication, the answer seems to be no. Rather, the sheer volume and complexity of tested material coupled with the high stakes of these exams has allowed third parties to flourish and capture students’ time, attention, and money...
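A quick back-of-envelope check on those doubling-time figures, to make the growth rates concrete: a 73-day doubling time means the body of medical knowledge would multiply by 2^(365/73) = 32 every year, versus only about 1.2x per year at the 2010 rate.

```python
def growth_per_year(doubling_time_days: float) -> float:
    """Factor by which a body of knowledge grows over one 365-day year,
    given its doubling time in days: 2 ** (365 / doubling_time)."""
    return 2 ** (365 / doubling_time_days)

g2020 = growth_per_year(73)         # 2020 projection: 2**5 = 32x per year
g2010 = growth_per_year(3.5 * 365)  # 2010 estimate: ~1.22x per year
g1950 = growth_per_year(50 * 365)   # 1950 estimate: ~1.014x per year
```

No wonder third-party study tools are flourishing; no curriculum committee can keep pace with a 32-fold annual expansion of testable material.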

More to come...