NOW REPORTING FROM BALTIMORE. An eclectic, iconoclastic, independent, private, non-commercial blog begun in 2010 in support of the federal Meaningful Use REC initiative, and Health IT and Healthcare improvement more broadly. Moving now toward important broader STEM and societal/ethics topics. Formerly known as "The REC Blog." Best viewed with Safari, Firefox, or Chrome.
NOTES: The Adobe Flash plugin is no longer supported. Comments are moderated, thanks to trolls.
I've used both Visio and SmartDraw going back quite a while, and, on net, I always liked the SmartDraw app over Visio. Below, one of my HealthInsight SmartDraw visuals, a high-level workflow swimlanes graphic that attempts to depict, beyond simple logic paths, relative time consumption and waste. See my old deck "Workflow Demystified" (pdf).
And another. Conceptual elements of clinical workflow.
I whined repeatedly to the SmartDraw people (to no avail) regarding their refusal to put out a native Mac platform edition (I'm an unapologetic Mac snob). Well, maybe there's now a workaround.
I signed up today. They gave me the upgrade discount because I'd been a prior registered user of their Windows app, years ago. Nice.
After I paid and registered I popped out a simple chart in a couple of minutes going to stuff I've been reading of late (cognition, AI, evolution, etc).
Below, a workflow I did in SmartDraw for a Meaningful Use client, a Reno, NV doc.
SmartDraw has Venn Diagram functionality. I tossed this together in Apple Keynote in just a couple of minutes, prior to buying in to SmartDraw Cloud. I may try it in SmartDraw as well.
Sort of a quick summary take on the overlapping/intertwining topical interests I pursue here at KHIT. Mere wafts of the implicit, difficult interconnectedness.
UPDATE
My latest book. A relatively quick read, nicely done. Finished it during my plane ride to MSP for my grandson's college graduation.
...higher intelligence predicts a later death. Not only will better-educated people be more aware of how to stay healthy, but more qualifications allow entry into better jobs, which bring all the health benefits that more income provides... [Kindle Locations 496-498].
Health and mortality

Brighter people tend to do healthier things: they exercise more, eat better and are less likely to smoke (Gottfredson, 2004). This might be down to their better education, or their greater ability to sensibly interpret the constant buzz of health-related information in the media. We also saw above that higher-IQ people tend to end up in higher social classes. As we’ve discussed thus far, these all seem very plausible reasons for the IQ–mortality link mentioned at the beginning of the chapter.
Indeed, studies from the relatively new field of cognitive epidemiology (the study of the links between intellectual abilities and health and disease) have repeatedly found correlations between health and intelligence: smarter people are somewhat less likely to have medical conditions, like heart disease, obesity or hypertension, that decrease life expectancy. This is found for mental as well as physical health: large-scale studies have shown that those with lower intelligence test scores are more likely to be hospitalized for psychiatric conditions (Gale et al., 2010). The link seems particularly strong for schizophrenia: there might be a biological connection between schizophrenia and intelligence, and being more intelligent might help patients cope with the disorder’s often frightening and confusing symptoms (Kendler et al., 2014).
The IQ–health connection is found even controlling for social class, and is even found in very rich countries with free, first-class health care available to all (for example, one study found the link in Luxembourg (Wrulich et al., 2014)).
It’s no surprise, then, that there’s such an impressive link between intelligence and mortality. Figure 3.2 illustrates this relation with data from a Swedish study of almost a million men. People in the lowest of the nine IQ categories were over three times more likely to die in the 20 years after their testing session than those with the highest IQ scores... [Kindle Locations 550-565].
Yeah, genetic and "Upstream" factors.
____________
Back in January, a panelist at the Health 2.0 WinterTech conference made a wisecrack about mHealth "quantified self fitness wearables" all coming with a "for entertainment purposes only" disclaimer tucked down in the obtuse Terms and Conditions fine print.
A headline this morning:
Fitbit Trackers Are 'Highly Inaccurate,' Study Finds
by KALYEENA MAKORTOFF, CNBC

A class action lawsuit against Fitbit may have grown teeth following the release of a new study that claims the company's popular heart rate trackers are "highly inaccurate." Researchers at the California State Polytechnic University, Pomona tested the heart rates of 43 healthy adults with Fitbit's PurePulse heart rate monitors, using the company's Surge watches and Charge HR bands on each wrist...

Comparative results from rest and exercise — including jump rope, treadmills, outdoor jogging and stair climbing — showed that the Fitbit devices miscalculated heart rates by up to 20 beats per minute on average during more intensive workouts.

"The PurePulse Trackers do not accurately measure a user's heart rate, particularly during moderate to high intensity exercise, and cannot be used to provide a meaningful estimate of a user's heart rate," the study stated...
Interesting. My wife bought me a Fitbit HR for Christmas (the "Plum" model depicted at the top of this post). It is a frustrating piece of crap. It has never held a charge for more than half the time they claim it will. And, after taking it off the charger, more than half of the time I could not get it to sync with the Fitbit app on my iPhone. Consequently, the data it captures and then sends back to me (replete with the dopey little congratulatory "badges") are of nil value; they are woefully incomplete and wildly off. Now that I've joined a fitness club and have resumed a strenuous weight training and cardio regimen, having some accurate and useful activity metrics would be nice, particularly in the wake of my health travails of late.
As of late last week, my Fitbit HR will not even charge at all.
It's going in the junk drawer (to repose with the older one I'd bought back in Vegas some years ago, which also no longer works). I don't have time for this. I'm done with these people. BTW, read their entire Legal/Privacy Policy. See if that stuff gives you the warm fuzzies.
__
COMING UP...
Finished this book. Nicely done.
Among other things, excellent non-technical explanation of Bayesian reasoning spanning chapters 9 and 10.
Bayesian methods are used to derive a posterior probability estimate by combining prior probability knowledge with new evidence. The table above is familiar to anyone who works in health care or epidemiological analysis. For example, we know both the approximate prevalence of a clinical condition (the proportion of people in the population with the condition) and the historical false positive and false negative rates of a relevant lab test. Using Bayes' formula (below), we can better estimate the likelihood that you in fact have the disease given that your test comes back positive, or the probability that you are actually disease-free given a negative lab test...
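A minimal sketch of that diagnostic arithmetic in Python (my own illustration; the prevalence, sensitivity, and specificity figures are hypothetical, not drawn from any particular test):

```python
# Bayes' rule for a diagnostic test: P(disease | positive result).
# All numbers below are assumed for illustration only.

def posterior_given_positive(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test, via Bayes' theorem."""
    true_positives = sensitivity * prevalence                # P(+ | D) * P(D)
    false_positives = (1 - specificity) * (1 - prevalence)   # P(+ | not-D) * P(not-D)
    return true_positives / (true_positives + false_positives)

# Hypothetical test: 1% prevalence, 90% sensitivity, 95% specificity.
ppv = posterior_given_positive(0.01, 0.90, 0.95)
print(f"P(disease | positive) = {ppv:.3f}")  # ~0.154 -- most positives are false
```

Counterintuitive at first blush, and exactly the point: with a rare condition, even a quite good test yields mostly false positives.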
Chapter 9

Not much is known about Rev. Thomas Bayes, who lived during the eighteenth century. Serving mostly as clergyman to his local parish, he published two works in his lifetime. One defended Newton’s theory of calculus, back when it still needed defending, and the other argued that God’s foremost aim is the happiness of his creatures.
In his later years, however, Bayes became interested in the theory of probability. His notes on the subject were published posthumously, and have subsequently become enormously influential— a Google search on the word “Bayesian” returns more than 11 million hits. Among other people, he inspired Pierre-Simon Laplace, who developed a more complete formulation of the rules of probability. Bayes was an English Nonconformist Presbyterian minister, and Laplace was a French atheist mathematician, providing evidence that intellectual fascination crosses many boundaries.
The question being addressed by Bayes and his subsequent followers is simple to state, yet forbidding in its scope: How well do we know what we think we know? If we want to tackle big-picture questions about the ultimate nature of reality and our place within it, it will be helpful to think about the best way of moving toward reliability in our understanding.
Even to ask such a question is to admit that our knowledge, at least in part, is not perfectly reliable. This admission is the first step on the road to wisdom. The second step on that road is to understand that, while nothing is perfectly reliable, our beliefs aren’t all equally unreliable either. Some are more solid than others. A nice way of keeping track of our various degrees of belief, and updating them when new information comes our way, was the contribution for which Bayes is remembered today.
Among the small but passionate community of probability-theory aficionados, fierce debates rage over What Probability Really Is. In one camp are the frequentists, who think that “probability” is just shorthand for “how frequently something would happen in an infinite number of trials.” If you say that a flipped coin has a 50 percent chance of coming up heads, a frequentist will explain that what you really mean is that an infinite number of coin flips will give equal numbers of heads and tails.
In another camp are the Bayesians, for whom probabilities are simply expressions of your states of belief in cases of ignorance or uncertainty. For a Bayesian, saying there is a 50 percent chance of the coin coming up heads is merely to state that you have zero reason to favor one outcome over another. If you were offered a bet on the outcome of the coin flip, you would be indifferent to choosing heads or tails. The Bayesian will then helpfully explain that this is the only thing you could possibly mean by such a statement, since we never observe infinite numbers of trials, and we often speak about probabilities for things that happen only once, like elections or sporting events. The frequentist would then object that the Bayesian is introducing an unnecessary element of subjectivity and personal ignorance into what should be an objective conversation about how the world behaves, and they would be off...
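The frequentist's "long run" is easy to make concrete. A toy Python simulation (mine, not Carroll's): the observed frequency of heads in repeated fair-coin flips settles toward 0.5 as trials pile up, which is all the frequentist claims "50 percent" means.

```python
# A toy frequentist illustration: long-run frequency of heads approaches 0.5.
import random

random.seed(42)  # reproducible

for n_flips in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    print(f"{n_flips:>7,} flips: observed frequency of heads = {heads / n_flips:.4f}")
```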
On the topic of "evolution," I recommend triangulating Sean Carroll's book with these two:
"The Story of the Human Body" and "A Natural History of Human Morality." The Lieberman book goes principally to the evolution of human physiology, and what he terms our current prevalence of "evolutionary mismatch diseases." Tomasello's book sets forth the scientific evidence underpinning the adaptive utility of prosocial/empathic/altruistic behaviors.Sean Carroll's book goes more to the physical fundamentals and evolutionary processes at the atomic/subatomic levels. It squares with the take proffered by CERN:
The theories and discoveries of thousands of physicists since the 1930s have resulted in a remarkable insight into the fundamental structure of matter: everything in the universe is found to be made from a few basic building blocks called fundamental particles, governed by four fundamental forces. Our best understanding of how these particles and three of the forces are related to each other is encapsulated in the Standard Model of particle physics. Developed in the early 1970s, it has successfully explained almost all experimental results and precisely predicted a wide variety of phenomena. Over time and through many experiments, the Standard Model has become established as a well-tested physics theory.

Matter particles

All matter around us is made of elementary particles, the building blocks of matter. These particles occur in two basic types called quarks and leptons. Each group consists of six particles, which are related in pairs, or “generations”. The lightest and most stable particles make up the first generation, whereas the heavier and less stable particles belong to the second and third generations. All stable matter in the universe is made from particles that belong to the first generation; any heavier particles quickly decay to the next most stable level. The six quarks are paired in the three generations – the “up quark” and the “down quark” form the first generation, followed by the “charm quark” and “strange quark”, then the “top quark” and “bottom (or beauty) quark”. Quarks also come in three different “colours” and only mix in such ways as to form colourless objects. The six leptons are similarly arranged in three generations – the “electron” and the “electron neutrino”, the “muon” and the “muon neutrino”, and the “tau” and the “tau neutrino”. The electron, the muon and the tau all have an electric charge and a sizeable mass, whereas the neutrinos are electrically neutral and have very little mass.

Forces and carrier particles

There are four fundamental forces at work in the universe: the strong force, the weak force, the electromagnetic force, and the gravitational force. They work over different ranges and have different strengths. Gravity is the weakest but it has an infinite range. The electromagnetic force also has infinite range but it is many times stronger than gravity. The weak and strong forces are effective only over a very short range and dominate only at the level of subatomic particles. Despite its name, the weak force is much stronger than gravity but it is indeed the weakest of the other three. The strong force, as the name suggests, is the strongest of all four fundamental interactions.

Three of the fundamental forces result from the exchange of force-carrier particles, which belong to a broader group called “bosons”. Particles of matter transfer discrete amounts of energy by exchanging bosons with each other. Each fundamental force has its own corresponding boson – the strong force is carried by the “gluon”, the electromagnetic force is carried by the “photon”, and the “W and Z bosons” are responsible for the weak force. Although not yet found, the “graviton” should be the corresponding force-carrying particle of gravity. The Standard Model includes the electromagnetic, strong and weak forces and all their carrier particles, and explains well how these forces act on all of the matter particles.
However, the most familiar force in our everyday lives, gravity, is not part of the Standard Model, as fitting gravity comfortably into this framework has proved to be a difficult challenge. The quantum theory used to describe the micro world, and the general theory of relativity used to describe the macro world, are difficult to fit into a single framework. No one has managed to make the two mathematically compatible in the context of the Standard Model. But luckily for particle physics, when it comes to the minuscule scale of particles, the effect of gravity is so weak as to be negligible. Only when matter is in bulk, at the scale of the human body or of the planets for example, does the effect of gravity dominate. So the Standard Model still works well despite its reluctant exclusion of one of the fundamental forces...
Erwin Schrödinger, in What Is Life?, recognized the need for information to be passed down to future generations. Crystals don’t do the job, but they come close; with that in mind, Schrödinger suggested that the culprit should be some sort of “aperiodic crystal”— a collection of atoms that fit together in a reproducible way, but one that had the capacity for carrying substantial amounts of information, rather than simply repeating a rote pattern. This idea struck the imaginations of two young scientists who went on to identify the structure of the molecule that actually does carry genetic information: Francis Crick and James Watson, who deduced the double-helix form of DNA.
Deoxyribonucleic acid, DNA, is the molecule that essentially all known living organisms use to store the genetic information that guides their functioning. (There are some viruses based on RNA rather than DNA, but whether or not they are “living organisms” is disputable.) That information is encoded in a series of just four letters, each corresponding to a particular molecule called a nucleotide: adenine (A), thymine (T), cytosine (C), and guanine (G). These nucleotides are the alphabet in which the language of genes is written. The four letters string together to form long strands, and each DNA molecule consists of two such strands, wrapped around each other in the form of a double helix. Each strand contains the same information, as the nucleotides in one strand are paired up with complementary ones in the other: A’s are paired with T’s, and C’s are paired with G’s. As Watson and Crick put it in their paper, with a measure of satisfied understatement: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
In case it has managed to escape your notice, the copying mechanism is this: the two strands of DNA can unzip from each other, then act as templates, with free nucleotides fitting into the appropriate places on each separate strand. Since each nucleotide will match only with its specific kind of partner, the result will be two copies of the original double helix— at least as long as the duplication is done without error.
The information encoded in DNA directs biological operations in the cell. If we think of DNA as a set of blueprints, we might guess that some molecular analogue of a construction worker comes over and reads the blueprints, and then goes away to do whatever task is called for. That’s almost right, with proteins playing the role of the construction workers. But cellular biology inserts another layer of bureaucracy into the operation. Proteins don’t interact with DNA directly; that job belongs to RNA... [Carroll, op cit, pp. 257-258]
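A toy sketch of Carroll's unzip-and-template copying logic in Python (my own illustration, obviously not biochemistry): because each base pairs only with its complement, complementing a strand twice recovers the original.

```python
# Toy model of DNA base pairing: A<->T, C<->G.
# Each strand fully determines its complement, so an unzipped strand
# serves as a template for rebuilding the double helix.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the complementary DNA strand."""
    return "".join(COMPLEMENT[base] for base in strand)

strand = "ATCGGCTA"
template = complementary_strand(strand)
print(template)                        # TAGCCGAT
print(complementary_strand(template))  # ATCGGCTA -- recovers the original
```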
UPDATE
"The Gene" may have to take a back seat to this one that just came to my attention via Science Based Medicine:
Medicine is an uncertain business. It is an applied science, applying the results of basic science knowledge and clinical studies to patients who are individuals with differing heredity, environment, and history. It is commonly assumed that modern science-based doctors know what they are doing, but quite often they don’t know for certain. Different doctors interpret the same evidence differently; there is uncertainty about how valid the studies’ conclusions are and there is still considerable uncertainty and disagreement about things like guidelines for screening mammography and statin prescriptions.
Snowball in a Blizzard by Steven Hatch, MD, is a book about uncertainty in medicine. The title refers to the difficulty of interpreting a mammogram, trying to pick out the shadows that signify cancer from a veritable blizzard of similar shadows...
A culture of denial subverts the health care system from its foundation. The foundation—the basis for deciding what care each patient individually needs—is connecting patient data to medical knowledge. That foundation, and the processes of care resting upon it, are built by the fallible minds of physicians. A new, secure foundation requires two elements external to the mind: electronic information tools and standards of care for managing clinical information.
Electronic information tools are now widely discussed, but the tools depend on standards of care that are still widely ignored. The necessary standards for managing clinical information are analogous to accounting standards for managing financial information. If businesses were permitted to operate without accounting standards, the entire economy would be crippled. That is the condition in which the $2½ trillion U.S. health care system finds itself—crippled by lack of standards of care for managing clinical information. The system persists in a state of denial about the disorder that our own minds create, and that the missing standards of care would expose.
This pervasive disorder begins at the system’s foundation. Contrary to what the public is asked to believe, physicians are not educated to connect patient data with medical knowledge safely and effectively. Rather than building that secure foundation for decisions, physicians are educated to do the opposite—to rely on personal knowledge and judgment—in denial of the need for external standards and tools. Medical decision making thus lacks the order, transparency and power that enforcing external standards and tools would bring about... [Lawrence Weed, MD and Lincoln Weed, JD, Medicine in Denial, pp 1-2]
Slow Medicine

Slow Medicine promotes a thoughtful, evidence-based approach to clinical care, emphasizing careful clinical reasoning and patient-focused care. Slow Medicine draws on many of the principles of the broader “Slow Movement”, which have been applied to a wide range of fields including food, art, parenting, and technology, among others. Like the broader “Slow Movement,” which emphasizes careful reflection, Slow Medicine involves careful interviewing, examination, and observation of the patient. It reminds us that the purpose of health care is to improve the wellbeing of patients, not simply to utilize the ever growing array of medical tools and gadgets. In addition, Slow Medicine recognizes that many clinical problems do not yet have a technological “magic bullet” but instead require lifestyle changes that have powerful effects over time. Importantly, Slow Medicine practitioners are eager to promote innovation, new ideas and adopt new technologies early, but aim to do so in a methodical manner and only after it’s clear that newer really is better for the individual patient.
Technology, particularly the technology of knowledge, shapes our thought. The possibility space created by each technology permits certain kinds of thinking and discourages others. A blackboard encourages repeated modification, erasure, casual thinking, spontaneity. A quill pen on writing paper demands care, attention to grammar, tidiness, controlled thinking. A printed page solicits rewritten drafts, proofing, introspection, editing. Hypertext, on the other hand, stimulates yet another way of thinking: telegraphic, modular, nonlinear, malleable, cooperative. As Brian Eno, the musician, wrote of Bolter’s work, “[Bolter’s thesis] is that the way we organize our writing space is the way we come to organize our thoughts, and in time becomes the way which we think the world itself must be organized...”
Some interesting observations near the end of Kevin Kelly's book. Apropos, I am reminded of my December 2015 post setting forth "Margalit's Lament."
Are structured data now the enemy of health care quality?
Margalit apparently thinks so. Per my last post, which was an annotated analytical cross-post of Margalit Gur-Arie's provocative post "Bingo Medicine," which was itself first cross-posted at THCB.
...the one foundational problem plaguing current EHR designs – the draconian enforcement of structured data elements as means of human endeavor... People don’t think in codified vocabularies. We don’t express ourselves in structured data fields... Furthermore, when your note taking is template driven, most of your cognitive effort goes towards fishing for content that fits the template (like playing Bingo), instead of just listening to whatever the patient has to say...
"Draconian enforcement." Gotta love that.
So, is digital health IT inimical to clinical acumen, and consequently, patient outcomes?
__
More Kevin Kelly:
The space of knowledge in ancient times was a dynamic oral tradition. By the grammar of rhetoric, knowledge was structured as poetry and dialogue—subject to interruption, questioning, and parenthetical diversions. The space of early writing was likewise flexible. Texts were ongoing affairs, amended by readers, revised by disciples; a forum for discussions. When scripts moved to the printed page, the ideas they represented became monumental and fixed. Gone was the role of the reader in forming the text. The unalterable progression of ideas across pages in a book gave the work an impressive authority—“authority” and “author” deriving from a common root. As Bolter notes, “When ancient, medieval, or even Renaissance texts are prepared for modern readers, it is not only the words that are translated: the text itself is translated into the space of the modern printed book.”
A few authors in the printed past tried to explore expanded writing and thinking spaces, attempting to move away from the closed linearity of print and into the nonsequential experience of hypertext. James Joyce wrote Ulysses and Finnegan’s Wake as a network of ideas colliding, cross-referencing, and shifting upon each reading. Borges wrote in a traditional linear fashion, but he wrote of writing spaces: books about books, texts with endlessly branching plots, strangely looping self-referential books, texts of infinite permutations, and the libraries of possibilities. Bolter writes: “Borges can imagine such a fiction, but he cannot produce it....Borges himself never had available to him an electronic space, in which the text can comprise a network of diverging, converging, and parallel times.”
A new thinking space

I live on computer networks. The network of networks—the Internet—links several millions of personal computers around the world. No one knows exactly how many millions are connected, or even how many intermediate nodes there are. The Internet Society made an educated guess in August 1993 that the Net was made up of 1.7 million host computers and 17 million users. No one controls the Net, no one is in charge. The U.S. government, which indirectly subsidizes the Net, woke up one day to find that a Net had spun itself, without much administration or oversight, among the terminals of the techno-elite. The Internet is, as its users are proud to boast, the largest functioning anarchy in the world. Every day hundreds of millions of messages are passed between its members, without the benefit of a central authority. I personally receive or send about 50 messages per day. In addition to the vast flow in individual letters, there exists between its wires that disembodied cyberspace where messages interact, a shared space of written public conversations. Every day authors all over the world add millions of words to an uncountable number of overlapping conversations. They daily build an immense distributed document, one that is under eternal construction, constant flux, and fleeting permanence. “Elements in the electronic writing space are not simply chaotic,” Bolter wrote, “they are instead in a perpetual state of reorganization...”
The total summation we call knowledge or science is a web of ideas pointing to, and reciprocally educating each other. Hypertext and electronic writing accelerate that reciprocity. Networks rearrange the writing space of the printed book into a writing space many orders larger and many ways more complex than that of ink on paper. The entire instrumentation of our lives can be seen as part of that “writing space.” As data from weather sensors, demographic surveys, traffic recorders, cash registers, and all the millions of electronic information generators pour their “words” or representation into the Net, they enlarge the writing space. Their information becomes part of what we know, part of what we talk about, part of our meaning.
At the same time the very shape of this network space shapes us. It is no coincidence that the postmodernists arose in tandem as the space of networks formed. In the last half-century a uniform mass market—the result of the industrial thrust—has collapsed into a network of small niches—the result of the information tide. An aggregation of fragments is the only kind of whole we now have. The fragmentation of business markets, of social mores, of spiritual beliefs, of ethnicity, and of truth itself into tinier and tinier shards is the hallmark of this era. Our society is a working pandemonium of fragments... [Kevin Kelly, Out of Control, illustrated edition, pp 387-391]
"Shards." Yeah. As in "Shards of health care. Fragmentation." A lot of us are not finding those shards very useful or comforting. Are we gonna get to true "interoperability" in the health care space anytime soon? Or will we remain mired in digital silo fragmentation and "Interoperababble" owing to continuing/increasing industry "fragmentation" wrought by powerful incumbent market forces?
Reflections on "ways of thinking" here at KHIT focus as a priority on clinical cognition and judgment, and on the workflow realities that may negatively impact them (much of which have little to nothing to do with infotech -- digital or analog -- per se).
Think about it. Your phone carrier knows who you are talking to. Uber knows where you are going. Facebook knows who your friends are. Amazon knows your purchase habits. Fitbit knows how much you exercise. etc. etc. etc. Your entire life is producing data. You are the product. Is this a good thing? Well, the short answer is it depends...
Yeah. My comment:
“When you use a free online service, you’re not the customer, you’re the PRODUCT.” From my latest riff on my KHIT.org blog (I think I was quoting either Dan Lyons or Douglas Rushkoff at that point). Much more to come on that topic shortly. Kevin Kelly asserts that “whatever CAN be tracked WILL be tracked.” But, notwithstanding its ostensible inevitability, the propriety of that, as you note, will depend on who’s doing the tracking, and for what purpose. Who benefits, and who is put at risk? I recently covered a medical infotech conference amid which one presenter showed his fatuous “app” that purports to calculate an AI-enabled machine-learning social media based adaptive “health score,” sort of akin to a FICO credit score (all totally unregulated, of course). What could possibly go wrong there? Similarly, would you like to have, say, Facebook calculating (and sharing with the Feds), without your knowledge or consent, your “Terrorism Risk Score?”
Hang around enough health IT and health policy blogs for any sustained length of time (in particular delving into and participating in the always-fractious comments sections), and you will by now have become utterly familiar with the overwrought, cynical sentiment set forth in the above title. Nothing is ever incrementally better or worse, it's always an existential catastrophe or a totally beneficent "transformative new era in health care." EHRs are uniformly "dangerous, untested, experimental medical devices that harm and kill patients" (which, of course, begs the fundamental question that, if they're "untested," how, precisely, can we summarily know they're causally dangerous?). And, ObamaCare is indisputably "the worst thing since slavery" (according to one very prominent neurosurgeon and former GOP presidential candidate, no less).
You know the riff.
All part of a larger opus, one particularly, predictably notable during this fractious 2016 "money = speech" national election year. Now-failed GOP Oval Office candidate Senator Ted Cruz ominously warns that the nation is "facing the abyss," that "your religious freedoms and 2nd Amendment rights are in dire peril," and that Hillary will "appoint five radical left-wing extremist judges to the Supreme Court" if elected. Oh, the Horrors.
Right.
I particularly love how all of these Republican contenders are going to "repeal ObamaCare on Day One" -- all of them having apparently been out sick (albeit with good health insurance) during Article II day in ConLaw.
Equally uniform is the GOP candidates' heartwarming apple-pie assertion that they're going to "return control of health care to the patients."
Perhaps, amid other vague "reforms," by putting their medical data back on paper charts?
I got off into this rant by way of a NY Times article, one brought to my attention by my Facebook pal Jonathan Taplin of USC. Noted writer Gregg Easterbrook:
When Did Optimism Become Uncool?

...most American social indicators have been positive at least for years, in many cases for decades. The country is, on the whole, in the best shape it’s ever been in. So what explains all the bad vibes?
Social media and cable news, which highlight scare stories and overstate anger, bear part of the blame. So does the long-running decline in respect for the clergy, the news media, the courts and other institutions. The Republican Party’s strange insistence on disparaging the United States doesn’t help, either.
But the core reason for the disconnect between the nation’s pretty-good condition and the gloomy conventional wisdom is that optimism itself has stopped being respectable. Pessimism is now the mainstream, with optimists viewed as Pollyannas. If you don’t think everything is awful, you don’t understand the situation!...
Pollution, discrimination, crime and most diseases are in an extended decline; living standards, longevity and education levels continue to rise. The American military is not only the world’s strongest, it is the strongest ever. The United States leads the world in science and engineering, in business innovation, in every aspect of creativity, including the arts...
Manufacturing jobs described by Mr. Trump and Mr. Sanders as “lost” to China cannot be found there, or anywhere. As Charles Kenny of the nonpartisan Center for Global Development has shown, technology is causing factory-floor employment to diminish worldwide, even as loading docks hum with activity. This transition is jarring to say the least — but it was always inevitable. The evolution of the heavy-manufacturing sector away from workers and toward machines will not stop, even if international trade is cut off completely...
The lack of optimism in contemporary liberal and centrist thinking opens the door to Trump-style demagogy, since if the country really is going to hell, we do indeed need walls. And because optimism has lost its standing in American public opinion, past reforms — among them environmental protection, anti-discrimination initiatives, income security for seniors, auto and aviation safety, interconnected global economics, improved policing and yes, Obamacare — don’t get credit for the good they have accomplished.
In almost every case, reform has made America a better place, with fewer unintended consequences and lower transaction costs than expected. This is the strongest argument for the next round of reforms. The argument is better made in positive terms — which is why we need a revival of optimism...
"We need a revival of optimism." While I would tend to agree in principle, I won't be holding my breath this year.
"My optimism is off the chart. I got it from Asia, where I saw how quickly civilizations could move from abject poverty to incredible wealth. If they can do it, almost anything is possible. Let me go back to the original quote about seeing God in a cell phone: The reason we should be optimistic is life itself. It keeps bouncing back even when we do horrible things to it. Life is brimming with possibilities, details, intelligence, marvels, ingenuity. And the Technium is very much an extension of that possibility space." -- Interview with Kevin Kelly – “My Optimism Is Off The Chart”
I'm loving his book "Out of Control" (now free pdf), and can't wait to read his forthcoming one.
Given the financial imperatives of our news media, it's unsurprising that they are predisposed to fan the flames of pessimism and controversy, to stoke the fires of negativism and cynicism. "If it bleeds it leads" goes the venerable media marketing axiom. Acrimony and reflexive disputation sell (whether pertaining to politics or other topics).
Whether what they sell is increasingly toxic to the body politic and humanity writ large is quite another matter.
BTW, for a rational review of some of the actual pros and cons of the ACA ("ObamaCare") see my review last year of Jed Graham's book "ObamaCare is a Great Mess." As I noted last July:
All of the advances in medical science (and clinical pedagogy), health IT, and progressive delivery process QI we can muster will still be hemmed in by national policy. Will we continue to experience our health care in the "Shards" of my recent characterization? Will the ACA be complicit in that chronic fragmentation?
With respect to the actual utility and challenges associated with Health IT, you simply can't do any better than spending significant time over at Dr. Jerome Carter's EHR Science.
UPDATE: MS. GUR-ARIE IS AT IT AGAIN
Health Care is Not a System

The Merriam-Webster dictionary has many definitions for the term system, but the most straightforward, and arguably the most applicable to our health care conversation is “a regularly interacting or interdependent group of items forming a unified whole”. The common wisdom is that our health care system is broken and hence our government is vigorously attempting to fix it for us through legislation, reformation and transformation. We usually work ourselves into a frenzy arguing how the government should go about fixing the system, but I would like to take a step back and question the assumption that health care is, or should be, a system. This is not about splitting the hairs of semantics. This is about proper definition of the problem we wish to solve.
You could argue that we use the term system loosely to refer to everything and there are no nefarious implications to calling health care a system. We have a transportation system, an education system, a legal system, a financial system, a water system, a political system and so forth. Note however that we rarely talk about our food system or auto system, fashion system, hospitality system, etc. We call those industries. Starting to see a difference here? Good. Our government obviously regulates both systems and industries, but it regulates them differently. And systems have distinct characteristics that industries seldom have, such as built-in (systemic) mechanisms for discrimination, and institutionalized (yep, systemic) corruption aplenty.
When we begin by assuming that health care is a system, we assume that health care should possess those same characteristics. We assume that health care in Beverly Hills will be, by design, different than health care in Flint, Michigan. We assume that health care delivered in private settings will be different than health care accessed in public settings. We assume that some areas will have sprawling, on demand health care hubs, while others will have none. We assume that public engagement in health care is for show only, while the billionaire class and its carefully constructed echo chamber get to make all our health care decisions. We assume that health care is, and always will be, rigged. And based on these assumptions, we proceed to fix our health care “system”.
You may be tempted to dismiss these thoughts as specious demagoguery, strawmen, soapbox arguments or just plain exaggerations. After all, health care system fixing includes such socially beneficent endeavors as expanding “coverage” for the poor (Medicaid expansion), subsidizing insurance for the less poor (Obamacare exchanges), granting insurance to the sick (preexisting conditions), and a steady drumbeat of accountability, measurement and reduction in “disparities” for “vulnerable populations”. To that I would respond by pointing you to several recent utterings from public figures empowered to effect health care reforms...

The Return of the Broccoli

I’ve written compulsively about the apparent war on doctors in the past, and I am certain I will be writing more, but the war on people is a much more intricate subject. It’s relatively easy to separate a quarter of one percent of people from the herd, paint them as for-profit mass murderers and sic the hungry mobs on them. But then how do you subdue the mobs? For that, my friend, we have government. We have behavioral economics. We have the experts and pundits in that echo chamber. And we have the righteous souls who innocently light the fuse of every calamity.
I’m old enough to remember the debates preceding the Obamacare litigation in front of the Supreme Court, culminating with both Justice Scalia and Chief Justice Roberts pondering whether the government has it within its enumerated powers to make you buy broccoli. Before the broccoli debacle, the same libertarian lunatic fringe wondered if government can order Americans to lose weight, or if the government can mandate that we buy certain products from certain manufacturers. Of course Obamacare and its mandate to buy health insurance or be penalized by the IRS survived these outlandish challenges, and the IRS is doing its best to rake in those penalties...
My comment:
If not a "system," is it even "a market"?
"...The common wisdom
is that our health care [market] is broken and hence our government is
vigorously attempting to fix it for us through legislation, reformation
and transformation..."
Or is it a dystopian perplex of contending markets, public and private? (There's no such thing as a "free market.")
"In a few years we’ll have artificial intelligence that can accomplish professional human tasks. There is nothing we can do to stop this. In addition our lives will be totally 100% tracked by ourselves and others. This too is inevitable. Indeed much of what will happen in the next 30 years is inevitable, driven by technological trends which are already in motion, and are impossible to halt without halting civilization. Some of what is coming may seem scary, like ubiquitous tracking, or robots replacing humans. Others innovations seem more desirable, such as an on-demand economy, and virtual reality in the home. And some that is coming like network crime and anonymous hacking will be society’s new scourges. Yet both the desirable good and the undesirable bad of these emerging technologies all obey the same formation principles."
Kevin Kelly, a co-founder of Wired magazine and an extremely important thinker. I was in the car with NPR on the radio the other day and heard an interview featuring him. Had to look him up forthwith. Interesting man:
My educational background is minimal. I am a college drop out. Instead of going to university, I went to Asia. That was one of the best decisions I ever made. I traveled in the 1970s as a poor, solo photographer in the hinterlands and villages of Asia, between Iran and Japan. I traveled on about US$2,500 per year and came back with 36,000 slides...
His photos are breathtaking. His writing is witty and elegant. His views are at once spot-on and challenging.
"The ongoing scientific process of moving our lives
away from the rule of matter and toward abstractions and intangibles can only prepare us for a better understanding of the ultimate abstraction."- Nerd Theology, 1999
Interesting. Prolific, thoughtful observer and writer.
OK, got the interview audio embed link. Thanks, Kevin.
At first glance, Kevin Kelly is a contradiction: a self-described old hippie and onetime editor of hippiedom’s do-it-yourself bible, The Whole Earth Catalog, who went on to co-found Wired magazine, a beacon of the digital age. In our latest edition of FREAK-quently Asked Questions, Kelly sits down with Stephen Dubner to explain himself; the episode is called “Someone Else’s Acid Trip”...
...Images of a machine as organism and an organism as machine are as old as the first machine itself. But now those enduring metaphors are no longer poetry. They are becoming real—profitably real. This book is about the marriage of the born and the made. By extracting the logical principle of both life and machines, and applying each to the task of building extremely complex systems, technicians are conjuring up contraptions that are at once both made and alive. This marriage between life and machines is one of convenience, because, in part, it has been forced by our current technical limitations. For the world of our own making has become so complicated that we must turn to the world of the born to understand how to manage it. That is, the more mechanical we make our fabricated environment, the more biological it will eventually have to be if it is to work at all. Our future is technological; but it will not be a world of gray steel. Rather our technological future is headed toward a neo-biological civilization.

The triumph of the bio-logic

Nature has all along yielded her flesh to humans. First, we took nature’s materials as food, fibers, and shelter. Then we learned to extract raw materials from her biosphere to create our own new synthetic materials. Now Bios is yielding us her mind—we are taking her logic.

Clockwork logic—the logic of the machines—will only build simple contraptions. Truly complex systems such as a cell, a meadow, an economy, or a brain (natural or artificial) require a rigorous nontechnological logic. We now see that no logic except bio-logic can assemble a thinking device, or even a workable system of any magnitude.

It is an astounding discovery that one can extract the logic of Bios out of biology and have something useful. Although many philosophers in the past have suspected one could abstract the laws of life and apply them elsewhere, it wasn’t until the complexity of computers and human-made systems became as complicated as living things, that it was possible to prove this. It’s eerie how much of life can be transferred. So far, some of the traits of the living that have successfully been transported to mechanical systems are: self-replication, self-governance, limited self-repair, mild evolution, and partial learning. We have reason to believe yet more can be synthesized and made into something new.

Yet at the same time that the logic of Bios is being imported into machines, the logic of Technos is being imported into life. The root of bioengineering is the desire to control the organic long enough to improve it. Domesticated plants and animals are examples of technos-logic applied to life. The wild aromatic root of the Queen Anne’s lace weed has been fine-tuned over generations by selective herb gatherers until it has evolved into a sweet carrot of the garden; the udders of wild bovines have been selectively enlarged in an “unnatural” way to satisfy humans rather than calves. Milk cows and carrots, therefore, are human inventions as much as steam engines and gunpowder are. But milk cows and carrots are more indicative of the kind of inventions humans will make in the future: products that are grown rather than manufactured. Genetic engineering is precisely what cattle breeders do when they select better strains of Holsteins, only bioengineers employ more precise and powerful control.
While carrot and milk cow breeders had to rely on diffuse organic evolution, modern genetic engineers can use directed artificial evolution—purposeful design—which greatly accelerates improvements. The overlap of the mechanical and the lifelike increases year by year.

Part of this bionic convergence is a matter of words. The meanings of “mechanical” and “life” are both stretching until all complicated things can be perceived as machines, and all self-sustaining machines can be perceived as alive. Yet beyond semantics, two concrete trends are happening: (1) Human-made things are behaving more lifelike, and (2) Life is becoming more engineered. The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being. What should we call that common soul between the organic communities we know of as organisms and ecologies, and their manufactured counterparts of robots, corporations, economies, and computer circuits? I call those examples, both made and born, “vivisystems” for the lifelikeness each kind of system holds.

In the following chapters I survey this unified bionic frontier. Many of the vivisystems I report on are “artificial”—artifices of human making—but in almost every case they are also real—experimentally implemented rather than mere theory. The artificial vivisystems I survey are all complex and grand: planetary telephone systems, computer virus incubators, robot prototypes, virtual reality worlds, synthetic animated characters, diverse artificial ecologies, and computer models of the whole Earth. But the wildness of nature is the chief source for clarifying insights into vivisystems, and probably the paramount source of more insights to come...

The vivisystems I examine in this book are nearly bottomless complications, vast in range, and gigantic in nuance. From these particular big systems I have appropriated unifying principles for all large vivisystems; I call them the laws of god, and they are the fundamentals shared by all self-sustaining, self-improving systems.
As we look at human efforts to create complex mechanical things, again and again we return to nature for directions. Nature is thus more than a diverse gene bank harboring undiscovered herbal cures for future diseases—although it is certainly this. Nature is also a “meme bank,” an idea factory. Vital, postindustrial paradigms are hidden in every jungly ant hill. The billion-footed beast of living bugs and weeds, and the aboriginal human cultures which have extracted meaning from this life, are worth protecting, if for no other reason than for the postmodern metaphors they still have not revealed. Destroying a prairie destroys not only a reservoir of genes but also a treasure of future metaphors, insight, and models for a neo-biological civilization... [pp. 6-8]
A bonus for watching the complete Kevin Kelly SXSW talk was that it was followed in my YouTube feed with one by the equally compelling Douglas Rushkoff, who I've cited on this blog in prior posts. See here as well.
If you're too busy (or too lazy) to obtain and read the Rushkoff book, his SXSW talk will likely suffice as Cliff Notes.
So coming back around to the core tech thing -- Health IT -- see my May 2nd post "Better, Smarter, and Healthier" Really?" Wherein Robin Farmanfarmian waxes rhapsodic over the nascent wave of 24/7 ubiquitous digital sensing technologies.
The Anecdote is the Antidote for What Ails Modern Medical Science
by John R. Adler, Jr., M.D.

It’s hard to imagine anybody being more of a medical insider than Dr. John R. Adler, the founding editor of Cureus. Adler has a Harvard medical degree, served his residency at Massachusetts General Hospital, and is a Stanford professor of neurosurgery, as well as founding CEO of a leading radiation oncology company, Accuray. This makes it especially heartening that Dr. Adler is now focused on opening up medical research literature to important kinds of evidence that have often been ignored: the anecdote and the case report. Quote:

“The altruism that is supposed to drive the publication of scientific research has been almost entirely co-opted by the peculiar needs of academic promotion and tenure, as well as the pecuniary demands of the scholarly publishing industry; the public good of medical knowledge has been reduced to a mere after-thought by both academia and the publishing industry.”
__
THURSDAY UPDATE
Apropos of the foregoing AI/IA/Robotics riff, I ran across this article this morning on Medium:
Why You Will Soon Be Sharing Your Deepest Secrets With A Robot: This will be the largest cognitive leap in human history.
Note the author byline:
I guess now resumes will be increasingly replete with how many IPO exits one has been "associated" with, for what dollar amounts.
Then there's Edge.org. Like I don't already have enough to read. To wit:
Weighing in from the cutting-edge frontiers of science, today’s most forward-thinking minds explore the rise of “machines that think.”
Stephen Hawking recently made headlines by noting, “The development of full artificial intelligence could spell the end of the human race.” Others, conversely, have trumpeted a new age of “superintelligence” in which smart devices will exponentially extend human capacities. No longer just a matter of science-fiction fantasy (2001, Blade Runner, The Terminator, Her, etc.), it is time to seriously consider the reality of intelligent technology, many forms of which are already being integrated into our daily lives. In that spirit, John Brockman, publisher of Edge.org (“the world’s smartest website” – The Guardian), asked the world’s most influential scientists, philosophers, and artists one of today’s most consequential questions: What do you think about machines that think?
I find it interesting that the usual list of allusively admonitory movie citations misses this one:
The self-repairing autonomous "Mark 13" warrior robot. A gruesome, dystopian cult flick. One of my faves.
Also up amid the never-diminishing book stash:
Additionally on deck: "Two cheers for uncertainty." What the hell does that mean? Isn't "science" (in particular medical science and its bedrock IT foundation) all about reducing uncertainty?
For now, a bit of Kevin Kelly tease on the topic:
Letting go to win

As Moses tells the story, on the sixth day of creation, that is, at the eleventh hour of a particularly frantic creative bout, the god kneaded some clayey earth and in an almost playful gesture, crafted a tiny model to dwell in his new world. This god, Yahweh, was an unspeakably mighty inventor who built his universe merely by thinking aloud. He had been able to do the rest of his creation in his head, but this part required some fiddling. The final hand-tuned model—a blinking, dazed thing, a “man” as Yahweh called him—was to be a bit more than the other creatures the almighty made that week.
This one was to be a model in imitation of the great Yahweh himself. In some cybernetic way the man was to be a simulacra of Yahweh.
As Yahweh was a creator, this model would also create in simulation of Yahweh’s creativity. As Yahweh had free will and loved, this model was to have free will and love in reflection of Yahweh. So Yahweh endowed the model the same type of true creativity he himself possessed.
Free will and creativity meant an open-ended world with no limits. Anything could be imagined, anything could be done. This meant that the man-thing could be creatively hateful as well as creatively loving (although Yahweh attempted to encode heuristics in the model to help it decide).
Now Yahweh himself was outside of time, beyond space and form, and unlimited in scope—ultimate software. So making a model of himself that could operate in bounded material, limited in scale, and constrained by time was not a cinch. By definition, the model wasn’t perfect.
To continue where Moses left off, Yahweh’s man-thing has been around in creation for millennia, long enough to pick up the patterns of birth, being, and becoming. A few bold man-things have had a recurring dream: to do as Yahweh did and make a model of themselves—a simulacra that will spring from their own hands and in its turn create novelty freely as Yahweh and man-things can.
So by now some of Yahweh’s creatures have begun to gather minerals from the earth to build their own model creatures. Like Yahweh, they have given their created model a name. But in the cursed babel of man-things, it has many designations: automata, robot, golem, droid, homunculus, simulacra.
The simulacra they have built so far vary. Some species, such as computer viruses, are more spirit than flesh. Other species of simulacra exist on another plane of being—virtual space. And some simulacra, like the kind marching forward in SIMNET, are terrifying hybrids between the real and the hyperreal.
The rest of the man-things are perplexed by the dream of the model builders. Some of the curious bystanders cheer: how wonderful to reenact Yahweh’s incomparable creation! Others are worried; there goes our humanity. It’s a good question. Will creating our own simulacra complete Yahweh’s genesis in an act of true flattery? Or does it commence mankind’s demise in the most foolish audacity?
Is the work of the model-making-its-own-model a sacrament or a blasphemy?
One thing the man-creatures know for sure: making models of themselves is no cinch.
The other thing the man-things should know is that their models won’t be perfect, either. Nor will these imperfect creations be under godly control. To succeed at all in creating a creative creature, the creators have to turn over control to the created, just as Yahweh relinquished control to them.
To be a god, at least to be a creative one, one must relinquish control and embrace uncertainty. Absolute control is absolutely boring. To birth the new, the unexpected, the truly novel—that is, to be genuinely surprised—one must surrender the seat of power to the mob below.
The great irony of god games is that letting go is the only way to win. [Kevin Kelly, Out of Control, pp. 219-220]
"...one must relinquish control and embrace uncertainty. Absolute control is absolutely boring."
From my stacks:
Two cheers for uncertainty

Imagine a life without uncertainty. Hope, according to Aeschylus, comes from the lack of certainty of fate; perhaps hope is inherently blind. Imagine how dull life would be if variables assessed for admission to professional school, graduate program, or executive training program really did predict with great accuracy who would succeed and who would fail. Life would be intolerable — no hope, no challenge.

Thus we have a paradox. Although we all strive to reduce the uncertainties of our existence and of the environment, ultimate success — that is, a total elimination of uncertainty — would be horrific... [even] knowing pleasant outcomes with certainty would also detract from life's joy. An essential part of knowledge is to shrink the domain of the unpredictable. But although we pursue this goal, its ultimate attainment would not be at all desirable.

Living with uncertainty

Without uncertainty, there would be no hope, no ethics, and no freedom of choice. It is only because we do not know what the future holds for us (e.g., the exact time and manner of our own death) that we can have hope. It is only because we do not know exactly the results of our choices that our choices can be free and can compose a true ethical dilemma. Moreover, most of us are aware that there is much uncertainty in the world, and one of our most basic choices is whether we will accept that uncertainty is a fact or try to run away from it. Those who choose to deny uncertainty invent a stable world of their own. Such people's natural desire to reduce uncertainty, which may be basic to the whole cognitive enterprise of understanding the world, is taken to the extreme point where they believe that uncertainty does not exist. The statistician's definition that an optimist is "someone who believes that the future is uncertain" is not as cynical as it may at first appear to be. [Hastie & Dawes, Rational Choice in an Uncertain World, 2001, pp. 326-328]
What's there to see
In the Land of the Blind?
Where they're all 20-20,
Having made up their minds.
Well, Two Cheers for the Mystery,
Otherwise, it's all History.
If it's all just so clear,
What're we still doin' here?
Maybe one day I'll finish that song.
I'm not sure I'm driving toward any clear point here. It's just about the broad implications of AI science and technology, and the uncertainty regarding where they will lead us, given that notable people like Hawking, Bostrom, Musk, and Bill Gates have voiced concerns -- concerns that Kevin Kelly has done his best to counter, e.g., "Why I Don't Worry About a Super AI."
NOTE: If you can get through Jaron Lanier's ramblingly obtuse "The Myth of AI," beginning at about halfway down the long web page are a raft of illuminating expert comments on the topic. Well worth your time.
At this point it looks to me that AI will be more "IA" in the health care domain -- "Intelligence Augmentation" (recall Larry Weed?) that helps the clinicians sift through the otherwise increasingly unmanageable torrents of data they face when trying to arrive at accurate dx's and efficacious px's and tx's. To wit:
The Rise of the Chief Cognitive Officer

...Toby Cosgrove, the CEO of the Cleveland Clinic, stated “It may sound odd, but technology like Watson will make healthcare less robotic and more human.” The reasoning behind putting an AI through a version of medical school is that human physicians can’t possibly read and process the exponentially growing volumes of clinical trials, medical journals, and individual cases available in the digital domain. A computer that digests them can transform them into useful support options for care of a patient. Furthermore, humans can’t be a part of every case and learn from every physician. But by combining a human with the capacity of a computer as a physician’s assistant, physicians can focus on the many things that they are uniquely able to do in the complex domain of medicine. This includes the critical conversations with patients and their families.
In short, a more human physician interaction can be made by sharing the workload between man and machine. The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (Artificial Intelligence) thinking and human provider thinking within medicine...
The root word for cognitive in Latin is “cogn” which means “to learn, know.” It tends to be an active requirement for success in modern medicine with ideas such as ‘the learning health system’ — and the modern learning health system is typically able to learn and know things because algorithms do a chunk of the learning that people can’t scale to do.
At the moment I don’t work within a health system. Because I am focusing on Cognitive Computing I am considering declaring myself the Chief Cognitive Officer and taking over establishing systems of anything related to thinking and knowledge. I wonder if anyone will stop me? Maybe the job of the chief cognitive officer should be the CEO? I’d say so but ‘executive’ implies decisions. Thinking isn’t always about big decisions but about the tools that can be used to make the millions of little decisions needed for every interaction to generate value. Someone has to make this next generation stuff work.
Top scientists hold closed meeting to discuss building a human genome from scratch

Over 130 scientists, lawyers, entrepreneurs, and government officials from five continents gathered at Harvard on Tuesday for an “exploratory” meeting to discuss the topic of creating genomes from scratch — including, but not limited to, those of humans, said George Church, Harvard geneticist and co-organizer of the meeting.
The meeting was closed to the press, which drew the ire of prominent academics... The topic is a heavy one, touching on fundamental philosophical questions of meaning and being. If we can build a synthetic genome — and eventually, a creature — from the ground up, then what does it mean to be human?
“This idea is an enormous step for the human species, and it shouldn’t be discussed only behind closed doors,” said Laurie Zoloth, a professor of religious studies, bioethics, and medical humanities at Northwestern University...