
Sunday, January 11, 2015

"Failure is the Siamese twin of innovation" - Jonathan Bush


From athenahealth CEO Jonathan Bush:
We are all so accomplished. We are all so proud. And we are all deeply infected by this pride, which can potentially keep us from risking what we must. But the bravest of us will set aside pride and fail our way to profitable success...

Read this fantastic recent article about failure in the New York Times. The gist: Failure is the Siamese twin of innovation. The path to any success is lined with disaster.

So, let’s have dinner and talk about failure.
Yeah. I will certainly have no shortage of failures to recount tomorrow night.

It is indeed a fantastic article.
Welcome to the Failure Age!
By ADAM DAVIDSON, NOV. 12, 2014, NY Times


...In pre-modern times, when starvation was common and there was little social insurance outside your clan, every individual bore the risk of any new idea. As a result, risks simply weren’t worth taking. If a clever idea for a crop rotation failed or an enhanced plow was ineffective, a farmer’s family might not get enough to eat. Children might die. Even if the innovation worked, any peasant who found himself with an abundance of crops would most likely soon find a representative of the local lord coming along to claim it. A similar process, one in which success was stolen and failure could be lethal, also ensured that carpenters, cobblers, bakers and the other skilled artisans would only innovate slowly, if at all. So most people adjusted accordingly by living near arable land, having as many children as possible (a good insurance policy) and playing it safe...

The original age of innovation may have ushered in an era of unforeseen productivity, but it was, for millions of people, absolutely terrifying. Over a generation or two, however, our society responded by developing a new set of institutions to lessen the pain of this new volatility, including unions, Social Security and the single greatest risk-mitigating institution ever: the corporation. During the late 19th century, a series of experiments in organizational structure culminated, in the 1920s, with the birth of General Motors, the first modern corporation. Its basic characteristics soon became ubiquitous. Ownership, which was once a job passed from father to son, was now divided among countless shareholders. Management, too, was divided, among a large group of professionals who directed units, or “subdivisions,” within it. The corporation, in essence, acted as a giant risk-sharing machine, amassing millions of investors’ capital and spreading it among a large number of projects, then sharing the returns broadly too. The corporation managed the risk so well, in fact, that it created an innovation known as the steady job. For the first time in history, the risks of innovation were not borne by the poorest. This resulted in what economists call the Great Compression, when the gap between the income of the rich and poor rapidly fell to its lowest margin.

The secret of the corporation’s success, however, was that it generally did not focus on truly transformative innovations. Most firms found that the surest way to grow was to perfect the manufacturing of the same products, year after year. G.M., U.S. Steel, Procter & Gamble, Kellogg’s, Coca-Cola and other iconic companies achieved their breakthrough insights in the pre-corporate era and spent the next several decades refining them, perhaps introducing a new product every decade or so. During the period between 1870 and 1920, cars, planes, electricity, telephones and radios were introduced. But over the next 50 years, as cars and planes got bigger and electricity and phones became more ubiquitous, the core technologies stayed fundamentally the same. (Though some notable exceptions include the television, nuclear power and disposable diapers.)


Celebrated corporate-research departments at Bell Labs, DuPont and Xerox may have employed scores of white-coated scientists, but their impact was blunted by the thick shell of bureaucracy around them. Bell Labs conceived some radical inventions, like the transistor, the laser and many of the programming languages in use today, but its parent company, AT&T, ignored many of them to focus on its basic telephone monopoly. Xerox scientists came up with the mouse, the visual operating system, laser printers and Ethernet, but they couldn’t interest their bosses back East, who were focused on protecting the copier business.

Corporate leaders weren’t stupid. They were simply making so much money that they didn’t see any reason to risk it all on lots of new ideas. This conservatism extended through the ranks. Economic stability allowed millions more people to forgo many of the risk-mitigation strategies that had been in place for millenniums. Family size plummeted. Many people moved away from arable land (Arizona!). Many young people, most notably young women, saw new forms of economic freedom when they were no longer tied to the routine of frequent childbirth. Failure was no longer the expectation; most people could predict, with reasonable assurance, what their lives and careers would look like decades into the future. Our institutions — unions, schools, corporate career tracks, pensions and retirement accounts — were all predicated on a stable and rosy future...
"Bell Labs conceived some radical inventions, like the transistor, the laser and many of the programming languages in use today, but its parent company, AT&T, ignored many of them to focus on its basic telephone monopoly."

Interesting. Let's pause right there for a moment. I know this personally. I was born nearly 69 years ago after my now-late Dad came home from WWII minus a leg (nighttime glider crash on Sicily), mustered out of the military, and set about starting a family in New Jersey and a career at Bell Labs as a semiconductor applications research technician, first in Murray Hill and subsequently at the Whippany facility.


Pop never went to college (he did, however, do technical schooling along the way while working). He and all four of his brothers served for the duration of WWII; only Dad and my late Uncle Warren survived the war years. One of my late Ma's brothers was in the D-Day invasion, on one of those boats (he survived); all of her of-age brothers served as well. I did not find "Saving Private Ryan" entertaining. Too close to home.

Pop worked for Bell Labs for his entire civilian career before taking his full VA disability pension at the earliest opportunity and dragging my mother off to the shithole swamps of Palm Bay, Florida so he could play golf (his dream prior to WWII had been to become a long-distance runner and then a golf pro). My golf widow Ma, in her totemic, ever irascible Long Island accent: "Bawby, ya Fawthah and that Gawddamned Gawlf."

My marching orders as a maturing kid were "find a specialty and become an expert in it." Pop loved to sarcastically cite the phrase "Jack of All Trades, Master of None" as dismissive admonition. I was expected to excel in football and academics and then go on to college to find and cement myself in some area of expertise.

I went on the road in a rock and roll band straight out of high school instead, literally the day of Commencement. Imagine how well that went down.
"All I ever wanted was to be a guitar player. Got truly hooked at about age 13, IIRC. $10 Harmony acoustic guitar and a Mel Bay chord book. Done, man. My late Dad came to rue the day he bought those for me (as did my sports coaches). I left immediately after high school to go on the road in a band in lieu of college. Now, LOL, I'm this pedigreed technocrat. But really, I'm still just a kid with a guitar."
It took us a long time to bury the hatchet (I could write my own "Great Santini" memoir). Bury it we did, though. Ironically, we came to be at our best with each other during his lengthy, dementia-addled terminal stint in long-term care, during which I had to don the mantle of his Legal Guardian. I "interviewed" him six months before he died.

Miss those two. They were great, loyal parents. Notwithstanding that they made me crazy. I'm sure I made them equally so.

Back to Adam Davidson:
An age of constant invention naturally begets one of constant failure. The life span of an innovation, in fact, has never been shorter. An African hand ax from 285,000 years ago, for instance, was essentially identical to those made some 250,000 years later. The Sumerians believed that the hoe was invented by a godlike figure named Enlil a few thousand years before Jesus, but a similar tool was being used a thousand years after his death. During the Middle Ages, amid major advances in agriculture, warfare and building technology, the failure loop closed to less than a century. During the Enlightenment and early Industrial Revolution, it was reduced to about a lifetime. By the 20th century, it could be measured in decades. Today, it is best measured in years and, for some products, even less. (Schuetz receives tons of smartphones that are only a season or two old.)

The closure of the failure loop has sent uncomfortable ripples through the economy. When a product or company is no longer valued in the marketplace, there are typically thousands of workers whose own market value diminishes, too. Our breakneck pace of innovation can be seen in stock-market volatility and other boardroom metrics, but it can also be measured in unemployment checks, in divorces and involuntary moves and in promising careers turned stagnant...


For American workers, the greatest challenge would come from computers. By the 1970s, the impact of computers was greatest in lower-skilled, lower-paid jobs. Factory workers competed with computer-run machines; secretaries and bookkeepers saw their jobs eliminated by desktop software. Over the last two decades, the destabilizing forces of computers and the Internet have spread to even the highest-paid professions. Corporations “were created to coordinate and organize communication among lots of different people,” says Chris Dixon, a partner at the venture-capital firm Andreessen Horowitz. “A lot of those organizations are being replaced by computer networks.” Dixon says that start-ups like Uber and Kickstarter are harbingers of a much larger shift, in which loose groupings of individuals will perform functions that were once the domain of larger corporations. “If you had to know one thing that will explain the next 20 years, that’s the key idea: We are moving toward a period of decentralization,” Dixon says.

Were we simply enduring a one-time shift into an age of computers, the adjustment might just require us to retrain and move onward. Instead, in a time of constant change, it’s hard for us to predict the skills that we will need in the future. Whereas the corporate era created a virtuous cycle of growing companies, better-paid workers and richer consumers, we’re now suffering through a cycle of destabilization, whereby each new technology makes it ever easier and faster to create the next one, which, of course, leads to more and more failure. It’s enough to make us feel like mollusk-gland hunters...

Every other major shift in economic order has made an enormous impact on the nature of personal and family life, and this one probably will, too. Rather than undertake one career for our entire working lives, with minimal failure allowed, many of us will be forced to experiment with several careers, frequently changing course as the market demands — and not always succeeding in our new efforts. In the corporate era, most people borrowed their reputations from the large institutions they affiliated themselves with: their employers, perhaps, or their universities. Our own personal reputations will now matter more, and they will be far more self-made. As career trajectories and earnings become increasingly volatile, gender roles will fragment further, and many families will spend some time in which the mother is a primary breadwinner and the father is underemployed and at home with the children. It will be harder to explain what you do for a living to acquaintances...
LOL. Me? I'm just a washed-up guitar player and songwriter.
To succeed in the innovation era, says Daron Acemoglu, a prominent M.I.T. economist, we will need, above all, to build a new set of institutions, something like the societal equivalent of those office parks in Sunnyvale, that help us stay flexible in the midst of turbulent lives. We’ll need modern insurance and financial products that encourage us to pursue entrepreneurial ideas or the education needed for a career change. And we’ll need incentives that encourage us to take these risks; we won’t take them if we fear paying the full cost of failure. Acemoglu says we will need a far stronger safety net, because a society that encourages risk will intrinsically be wealthier over all.

History is filled with examples of societal innovation, like the United States Constitution and the eight-hour workday, that have made many people better off. These beneficial changes tend to come, Acemoglu told me, when large swaths of the population rally together to demand them. He says it’s too early to fully understand exactly what sorts of governing innovations we need today, because the new economic system is still emerging and questions about it remain: How many people will be displaced by robots and mobile apps? How many new jobs will be created? We can’t build the right social institutions until we know the precise problem we’re solving. “I don’t think we are quite there yet,” he told me.

Generally, those with power and wealth resist any significant shift in the existing institutions. Robber barons fought many of the changes of the Progressive Era, and Wall Street fought the reforms of the 1930s. Today, the political system seems incapable of wholesale reinvention. But Acemoglu said that could change in an instant if enough people demand it. In 1900, after all, it was impossible to predict the rise of the modern corporation, labor unions, Social Security and other transformative institutions that shifted gains from the wealthy to workers.

We are a strange species, at once risk-averse and thrill-seeking, terrified of failure but eager for new adventure. If we discover ways to share those risks and those rewards, then we could conceivably arrive somewhere better. The pre-modern era was all risk and no reward. The corporate era had modest rewards and minimal risks. If we conquer our fear of failure, we can, just maybe, have both. - Adam Davidson.
Interesting article. I am reminded of a recent read I cited on this blog.

After the industrial revolution, the definitive twentieth-century institution became The Corporation. Think General Motors, an automobile company where mass production was happening at plants thanks to a confluence of factors, including access to power, water, and a blue-collar labor force. Meanwhile, both union members on factory floors and white-collar workers in headquarters enjoyed safe careers and comfortable middle-class lifestyles. In the twenty-first century, The Corporation as a hub of economic activity is being challenged by The Platform. We’ve touched on platforms in the Strategy chapter, but picture Diapers.com (which was subsequently bought by Amazon). A platform is a very different sort of hub than a corporation. A corporation’s relationship with consumers is one-way. GM decides how to design, manufacture and market a new product to its consumers, and sells it through a network of dealerships. In contrast, a platform has a back-and-forth relationship with consumers and suppliers. There’s a lot more give-and-take. Amazon is a corporation, but it is also a marketplace where buyers and sellers come together. Amazon does not just dictate what it sells to consumers. Consumers tell Amazon what they are looking for, and Amazon sources it for them. Consumers have a voice; they can rate products and services.

Who succeeds and who fails in a world of platforms? 

Thanks to the Buggles, we know that video killed the radio star, and thanks to the 2011 bankruptcy of the bookseller Borders, we know that platforms like Amazon can hurt incumbent corporations. Borders was no lightweight. As late as 2005 it had a market capitalization of over $1.6 billion, and at the time it filed for Chapter 11, it employed more than seventeen thousand people.

So it seems to us that incumbent businesses have a choice to make. They can continue to operate as they always have, existing in a world where technology is something to be used not as a tool of transformation but simply to optimize operational efficiency and maximize profits. In a lot of these incumbent businesses, technology is that interesting thing run by that slightly odd group in the other building; it isn’t something that anchors the CEO’s agenda every week. And the impending disruption caused by new competitors entering their markets is something to be fought with battalions of lobbyists and lawyers. Although it might take a long time (and cost a lot of money), this dig-a-moat-and-bury-your-head-in-the-sand approach is bound to end tragically. The forces of technology and disruption are too powerful. So the incumbent that follows this strategy will eventually fail, or at the very least become irrelevant. Along the way, it will hamper customer choice and squelch innovation in its industry, because that is exactly its intent. Innovation means change; for incumbents the status quo is a much more comfortable place to be. 

Venture capitalist and Sun Microsystems cofounder Vinod Khosla, who sometimes speaks at the class Eric teaches at Stanford, points out a couple of simple reasons for this. First, at the corporate level, most innovative new things look like small opportunities to a large company. They are hardly worth the time and effort, especially since their success is far from certain. And at an individual level, people within big companies aren’t rewarded for taking risks, but are penalized for failure. The individual payoff is asymmetrical, so the rational person opts for safety.

Schmidt, Eric; Rosenberg, Jonathan (2014-09-23). How Google Works (Kindle Locations 3203-3229). Grand Central Publishing. Kindle Edition.
A lot to ponder. No monochrome careerist here. Across the 51 years since I left home, I've been a musician, songwriter and composer, hole-in-the-wall self-built shoestring A/V studio owner/producer-director, photographer, graphic artist, radiation laboratory QC analyst and programmer (pdf), industrial technical marketing copywriter and editor (pdf), Medicare QIO analyst (pdf), credit risk and portfolio management modeler (pdf), university adjunct faculty instructor, REC Health IT project coordinator (which spawned this blog in 2010), an "EHR Vendor" (just kidding), and now lone-wolf independent blogger, busker, and compulsive ideas muse -- e.g., here and here, to cite just two of late.


Ahhh... "What do I know?"

My failures have been legion (cringes, eyes roll), both personal and in the course of employment and entrepreneurial efforts. Some of these will be a bit painful to recount tomorrow night amid the august company that will be attending Jonathan's private dinner (I'm still scratching my head trying to figure out how I got invited).

Mr. Bush:

What Should Entrepreneurs Do? 

First, entrepreneurs have to confront an unwelcome reality: Facebook exists and has gone to market. Same goes for Twitter, Google, and LinkedIn, not to mention outfits like Microsoft and Apple. What does this mean? In the world of big consumer applications for technology, the low-hanging fruit has been plucked. Sure, other breakthroughs are inevitable. But the competition is sure to be tough, coming from start-ups and titans alike. 

The health care economy, by contrast, is a jungle dripping with low-hanging fruit. The opportunities are limitless, and the market is immense. For digital entrepreneurs, it’s almost like getting a chance to return to 1998. So the first thing they should do, after the requisite due diligence, is to jump in. 

One great advantage of plunging into health care is that the competition, in many cases, is nonexistent. In other words, customers, whether patients or caregivers, are getting by without the best service or software. They may be suffering long waits or communications snafus. But that suffering is the status quo. They’re used to it. Let’s say you come along with a solution and they don’t like it. At that point in a normal market, you’re dead. But in health care, there’s often no competitor to sweep in. You can refine your product, come back again, and still find the potential customers suffering with the status quo, ready to be convinced. For those arriving from more competitive industries, the emptiness of the health care market can lead to slack-jawed disbelief. It’s too good to be true. 

There are reasons for this, of course. It’s a convoluted industry full of roadblocks and regulations. Many of the customers are nonprofits, which are less than eager to buy disruptive technologies. And yes, the industry grapples with certain issues, like death, that require more serious reflection than, say, mobile video games. But at the same time, getting into health care provides entrepreneurs with a chance not just to build a thriving business, but to serve humanity...
Bush, Jonathan; Baker, Stephen (2014-05-15). Where Does It Hurt?: An Entrepreneur's Guide to Fixing Health Care (pp. 203-204). Penguin Group US. Kindle Edition.
I'd asked them for a comp review copy of his book some time back. No response. So, I bought it and read it this week. Studying Dr. Topol's new book at the moment: The Patient Will See You Now: The Future of Medicine is in Your Hands. I've also been waffling over whether to buy Steven Brill's new book "America's Bitter Pill" as well. For now, though, running across Malcolm Gladwell's review in The New Yorker will suffice.
It is useful to read “America’s Bitter Pill” alongside David Goldhill’s “Catastrophic Care.” Goldhill covers much of the same ground. But for him the philosophical question—is health care different, or is it ultimately like any other resource?—is central. The Medicare program, for example, has a spectacularly high loss ratio: it pays out something like ninety-seven cents in benefits for every dollar it takes in. For Brill, that’s evidence of how well it works. He thinks Medicare is the most functional part of the health-care system. Goldhill is more skeptical. Perhaps the reason Medicare’s loss ratio is so high, he says, is that Medicare never says no to anything. The program’s annual spending has risen, in the past forty years, from eight billion to five hundred and eighty-five billion dollars. Maybe it ought to spend more money on administration so that it can promote competition among its suppliers and make disciplined decisions about what is and isn’t worth covering...
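The back-of-the-envelope arithmetic behind those figures is easy to verify. A minimal sketch (the dollar amounts are quoted from the review; the derived percentages are my own arithmetic, not figures from Gladwell or Goldhill):

```python
# Sanity-check the figures quoted from Gladwell's review.
# A "loss ratio" here is benefits paid out per dollar taken in.
benefits_per_dollar = 0.97  # "ninety-seven cents in benefits for every dollar"
admin_share = 1 - benefits_per_dollar
print(f"Implied administrative share: {admin_share:.0%}")  # ~3%

# Spending growth: $8 billion to $585 billion over forty years
# implies a compound annual growth rate of (585/8)**(1/40) - 1.
cagr = (585 / 8) ** (1 / 40) - 1
print(f"Implied annual spending growth: {cagr:.1%}")  # roughly 11% per year
```

An 11-percent-a-year compounding curve sustained over four decades is the crux of Goldhill's skepticism: the low overhead may simply reflect a program that rarely says no.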
I run hot and cold with respect to Gladwell. I have most of his books. This seemed to be a good review.

I find it curious that all of this waxing rhapsodic over the inevitability and desirability of failure amid innovation initiatives does not extend to government. The nasty jokes about government incompetence and failure write themselves in an endless tsunami, as we all know. "Those who can, do; those who cannot, go into public service?"

People are fallible, whether they work in the private or the public sector. Is there something irreducibly inherent in the organizational culture of public-sector bureaucracy that makes failure significantly more likely there -- recognizing, if we are honest, that private-sector organizations can, and frequently do, ossify into the frustrating cultural and process rigidities that beget excessive failure?

Tangentially, to wit:

...Perhaps the most dramatic difference between Paul Levy’s handling of his hospital’s wrong-side surgery and more conventional reactions to medical error was its extreme transparency. The hospital went as far as it could toward widespread and detailed acknowledgment of the error without compromising the privacy of the patient. As that suggests, this openness wasn’t done on behalf of the patient, as a kind of bigger, showier apology. (The hospital was open with the patient, and did apologize, but such apologies should transpire only between the medical team, the appropriate hospital administrators, the patient, and the patient’s family.) It’s more accurate to say that it was done on behalf of future patients. The point of the very public admission of error was to ensure that everyone in the hospital learned as much as possible from it, to remind staff members to follow the error-prevention protocols already in place, and to generate as many new ideas as possible for preventing future problems. A similar recognition of the importance of openness spurred the airline industry to create a culture in which crew and ground members are encouraged (and in some cases even required) to report mistakes and are protected from punishment and litigation if they do so. Likewise, GE, one of the early Six Sigma adopters, claims that to eliminate error, it has “opened our culture to ideas from everyone, everywhere, decimated the bureaucracy and made boundaryless behavior a reflexive, natural part of our culture.”

Schulz, Kathryn (2010-05-25). Being Wrong: Adventures in the Margin of Error (pp. 304-305). HarperCollins. Kindle Edition.
Dunno. Failure-reducing, operationally-beneficent "boundaryless behavior?" Is that invariably less likely in the public sphere? Maybe that's been dispositively researched in objective, non-ideological fashion. I simply don't know.

I am cautioned by another recent read:
In a 2009 essay for the Cato Institute, a libertarian think tank, [Peter] Thiel explained why his brand of technoescapism is a serious endeavor. “In our time, the great task for libertarians is to find an escape from politics in all its forms— from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called ‘social democracy.’” Thus, he writes, “the critical question then becomes one of means, of how to escape not via politics but beyond it . . . The mode for escape must involve some sort of new and hitherto untried process that leads us to some undiscovered country; and for this reason I have focused my efforts on new technologies that may create a new space for freedom.” Thiel then proceeds to outline how cyberspace might be one such space, pointing out that “in the late 1990s, the founding vision of PayPal centered on the creation of a new world currency, free from all government control and dilution— the end of monetary sovereignty, as it were.”

Morozov, Evgeny (2013-03-05). To Save Everything, Click Here: The Folly of Technological Solutionism (p. 130). PublicAffairs. Kindle Edition.
Lordy. I cited this book in a prior post, "Will Silicon Valley's digerati "solve" healthcare?"

In this regard, I have to give Jonathan Bush his props:
What Should Government Do? 

It would be easy to say simply, “Get out of the way.” Many people would leave it at that. But since the government runs about half of the health care economy and regulates the other half, getting completely out of the way is not a viable option— nor one I consider safe. The more important question is what governments can do to encourage competition and innovation in health care. In some cases, this does involve getting out of the way. The issue is where and how. In other areas, you might be surprised to hear me say, government could actually do more. (op cit, p. 194).
Yeah. For Peter Thiel, though, "competition" is inimical to capitalist "innovation." See my post "ONC and FTC on Health IT Market Competition."

I look forward to continuing my education tomorrow evening. It will be a humbling time.


ADDENDUM

While we're addressing "innovation" (and its downside risks) I need to cite something that had escaped me until I saw a reader submission yesterday in my current New Yorker.
The Blood-Test Business
The utility of the Theranos blood-testing technology that Ken Auletta writes about is limited to convenience and reduced cost (“Blood, Simpler,” December 15th). These benefits are important, but Elizabeth Holmes appears to believe that her company also empowers people to make decisions about their health based on lab results. This would be true if the clinical significance of a lab test were self-evident, but this is not the case. Accurate interpretation of a lab test is based on much more information than simply the normal value that is supplied next to the result; such judgment requires an understanding of, among other things, biochemistry, anatomy, and pharmacology in the context of someone’s known medical conditions. Most people have trouble getting beyond the vague nomenclature of “bad cholesterol” versus “good cholesterol,” so how will they interpret a sodium level of 139, a fasting glucose level of 105, or a TSH of 6.1? Holmes may be changing the way that blood samples are drawn and processed, but her offer of self-directed medicine through lab-test results confuses raw information with clinical wisdom.


Josh Johnson, M.D., Lake Oswego, Ore.
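Dr. Johnson's point can be made concrete with a toy script: a naive reference-range checker is trivial to write, and the hard part is everything it leaves out. (The ranges below are illustrative textbook values I have supplied for the sketch, not clinical guidance.)

```python
# A naive reference-range checker -- precisely the kind of "raw
# information" the letter argues falls short of clinical wisdom.
# Ranges are illustrative textbook values, assumed for this sketch.
REFERENCE_RANGES = {
    "sodium (mmol/L)": (135, 145),
    "fasting glucose (mg/dL)": (70, 99),
    "TSH (mIU/L)": (0.4, 4.0),
}

def flag(test, value):
    """Return a LOW/HIGH/normal flag with no clinical context."""
    lo, hi = REFERENCE_RANGES[test]
    if value < lo:
        return "LOW"
    if value > hi:
        return "HIGH"
    return "normal"

# The letter's three examples: one in range, two out of range.
for test, value in [("sodium (mmol/L)", 139),
                    ("fasting glucose (mg/dL)", 105),
                    ("TSH (mIU/L)", 6.1)]:
    print(f"{test} = {value}: {flag(test, value)}")
```

Flagging a fasting glucose of 105 as "HIGH" tells a layperson nothing about whether it matters or what to do next -- which is exactly the letter's argument about confusing raw information with clinical wisdom.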
Goes to this article:
Annals of Innovation, DECEMBER 15, 2014 ISSUE
Blood, Simpler
One woman’s drive to upend medical testing.

BY KEN AULETTA


One afternoon in early September, Elizabeth Holmes took the stage at TEDMED, at the Palace of Fine Arts, in San Francisco, to talk about blood. TEDMED, a part of the Technology, Entertainment, and Design enterprise, is an annual conference devoted to health care; its speakers span a range of inquiry from Craig Venter, the genomic scientist, discussing synthetic life, to Ozzy Osbourne discussing his decision to get his entire genome sequenced. The phrases “disruptive technology” and “the future of medicine” come up a lot.

Holmes, who is thirty, is the C.E.O. of Theranos, a Silicon Valley company that is working to upend the lucrative business of blood testing. Blood analysis is integral to medicine. When your physician wants to check some aspect of your health, such as your cholesterol or glucose levels, or look for indications of kidney or liver problems, a blood test is often required. This typically involves a long needle and several blood-filled vials, which are sent to a lab for analysis. Altogether, diagnostic lab testing, including testing done by the two dominant lab companies, Quest and Laboratory Corporation of America, generates seventy-five billion dollars a year in revenue.

Holmes told the audience that blood testing can be done more quickly, conveniently, and inexpensively, and that lives can be saved as a consequence...
Read all of it. Rather long-ish, but well worth your time. Ahhh... venture capital, essential proprietary opacity vs transparency, scientific/technical peer review, independent lab QC assessment (the area of my initial white collar work - pdf).


I cited the Forbes article on her back on July 24th (scroll down).

Will they be successful at scale? Or will they fail? (Geez, sounds like lyrics.) Apropos of my current study of Dr. Topol's new book:
“When Theranos tells the story about what the technology is, that will be a welcome thing in the medical community,” Eric Topol, a cardiologist and geneticist and the director of the Scripps Translational Science Institute, in La Jolla, California, said. “Until it does that, it can have the big labs saying Theranos is not real, or is not a threat. I tend to believe that Theranos is a threat. But if I saw data in a journal, head to head, I would feel a lot more comfortable.”
Ms. Holmes and her inner circle are quite aware of their challenges:
Holmes faces a number of challenges as she pursues her vision for Theranos. One is logistical. Holmes’s brother, Christian, a Duke graduate and former management consultant who joined the company three years ago and is now the director of product management, says, “You’ve got to be able to scale this. If we can’t, we’ll get killed.” Another challenge is the competition. As miniaturization becomes the standard, researchers are finding ways to bring medical tests directly to patients. Many companies are exploring a range of tests that don’t require needles, relying instead on lasers, oximetry, biosensors, and medical imaging, such as MRIs.

Holmes says she is acutely aware that technology could disrupt Theranos. “We focus all the time on disrupting ourselves, and that’s one of the core tenets in the way we operate,” she said. “Silicon Valley is a great symbol of disruptive technology being able to, one, change the world, and, two, obsolete itself.”
'eh?

END NOTE
EHR adoption has increased significantly, and as more providers use EHR systems during patient interactions, more new problems are discovered. Usability complaints are increasing, patient safety is a concern, CDS is harder than expected, interoperability is much harder than expected, data quality is not a given, and the importance of workflows is beginning to sink in...

Consider the many skills required to design good clinical software — everything from database design to user experience with process awareness, interoperability, and security in between.  A team, including practicing clinical professionals, is required.  A central gathering place is needed to share ideas...

- Jerome Carter, MD, EHR Science
___

More to come...
