Search the KHIT Blog

Friday, August 16, 2019

As football season draws nigh, a short take on being "data driven"

My latest Harper's Magazine just arrived in the mail.

Talk about "data-driven." A short snip from an excellent long-read (paywalled) entitled "The Wood Chipper."
...One test at the [NFL evaluation] combine is more interesting [relative to all the obvious physical stuff] and says more about how we judge than all the others put together. It’s called the Wonderlic, and it was created by a Northwestern University graduate student in 1936. His name was E. F. Wonderlic. It was an I.Q. test meant to measure cognitive ability—math, language, basic reasoning. It consisted of fifty questions, with each correct answer yielding a point, fifty being a perfect score.

You can find sample Wonderlic questions on the internet:

  • Six cooks can boil 12 pots of water in four minutes. How many cooks are needed to boil 48 pots of water in four minutes?
  • A girl is 18 years old and her brother is a third her age. When the girl is 36, what will be the age of her brother?
  • What is the 18th letter of the English alphabet?
You have 12 minutes to take the test, 50 questions in 720 seconds. That was the innovation: the pressure of the ticking clock, the deadline looming. Wonderlic meant it to measure poise, not just how a person performs but how he performs under fire. It was designed for employers. He figured they’d use it to separate the execs from the mop pushers, but it was the armed forces that took it up first, especially the air forces—Army, Navy—whose recruiters saw in it a way to find pilots. The ticking clock was thought to mimic the pressure a flier feels in combat, under the canopy when the MiGs close in. Fifty seconds till contact. Ten seconds. Three. Only around 2 percent of test takers even finished.
Tom Landry, the iconic leader of the Dallas Cowboys, was one of the first N.F.L. coaches to use the Wonderlic. Born in Mission, Texas, in 1924, Landry joined the Army Air Corps soon after his brother was killed in action over the North Atlantic in 1944. Landry flew thirty sorties in a B-17 bomber and survived a crash landing. After the war, he played football at the University of Texas. A defensive back, he was elusive and fast and hit with the sort of force that wide receivers remembered years later.

Landry was taken by the Giants in the seventh round in the 1946 draft. He played seven professional seasons, the last two as a player/coach. He ran the defense opposite the offensive coordinator and future Hall of Famer Vince Lombardi. Most people remember Landry as the taciturn Texan who coached the Cowboys for 29 seasons, had 2 Super Bowl championships and 270 wins—the face of the franchise. Though he looked as stolid as Clint Eastwood’s “Man with No Name,” Landry, in sport coat and fedora, was in fact an innovator. It was Landry who perfected the 4–3 defense, which football fans will recognize as a standard alignment of the game: it starts with four “down” linemen, so-called because these huge men begin each play in a three-point stance (fingers in the turf, asses in the air), backed by three linebackers—hence, 4–3. And Landry was one of the first N.F.L. coaches to realize the need for intelligence testing.

The game had become so complicated by the mid-1970s—so many formations, each requiring a read by the quarterback, which called for a series of adjustments made at the line of scrimmage, just the sort of improvisation known to combat pilots—that Landry wanted a better way to scout for smarts. Not just speed and strength, but can a player think as he’s getting punched in the face, or concussed, or when Dick Butkus is biting his ankle at the bottom of the pile? That’s why he remembered the . . . wait . . . what’s that test they had us take during the war?

The Wonderlic had been in circulation long enough to generate a sea of data, the sort in which experts can read patterns. Via the test, they could tell you which professions attracted the smartest (and dumbest) people. Twenty was said to be the average score. Above forty, you’re a genius. Below ten . . . well. The highest average scores went first to systems analysts (32), then to chemists (31), and electrical engineers (30). These are your elites. Below that come the middle class, the multitude. Accountant (28). Copywriter (27). Bank teller (22). Firefighter (21), welder (17), janitor (14). Landry began giving the test to his players in the late 1970s. The rest of the league followed. It’s been a combine staple from the start, hated and feared.

Based on the Wonderlic, we know which positions are, on average, staffed by the smartest people on a football field, and which by the stupidest. You’d probably think that quarterbacks are the smartest players—they have to run the offense, read defensive formations, and then make necessary changes—but you’d be wrong. Offensive tackles have the top score, 26. Then centers (25), then quarterbacks (24). Running backs are said to be the dumbest, scoring an average of 16 on the Wonderlic. It would be interesting to give players the test before and after their careers; all those head blows must have an effect.

Of course, there are exceptions, outliers. Mario Manningham, a Michigan receiver, after failing multiple drug tests, lying about it, then admitting he’d lied, scored a 6 on the Wonderlic. (The scores are supposed to be confidential, but the numbers leak.) Running back Frank Gore, a probable Hall of Famer taken in the third round in 2005, scored a 6 as well. Jeff George, a physically gifted thrower who could never get it together, got a 10 on the Wonderlic, which is about as low as it gets for a quarterback. Aaron Rodgers, considered one of the smartest players because he looks brainy and played at U.C. Berkeley, scored a 35. Eli Manning, who took the Giants to two Super Bowls, scored a 39. Eric Decker, a receiver who did not compete at the combine because of an injury, scored an entirely unnecessary 43 on the Wonderlic (receivers average 17). Ryan Fitzpatrick, who played quarterback at Harvard, got a 48. He went to the Rams in the seventh round in 2005. Despite his nickname (Fitzmagic) and the length of his career (he’s played fourteen N.F.L. seasons), he’s been mostly mediocre, a fact that some use to discount the importance of the Wonderlic—Fitzpatrick got a 48 and still sucks—but that others use to prove its relevance—If he weren’t a genius, the guy wouldn’t have lasted ten games in the N.F.L.

Linebacker Mike Mamula scored an amazing 49 on the Wonderlic (linebackers average 19). He broke or nearly broke several records at the 1995 combine, which bumped him way up in the draft. He went from a probable third rounder to a first rounder; he was taken seventh overall by the Eagles, just behind Steve McNair and just ahead of Warren Sapp, but lasted a mere handful of seasons and was never better than okay. Mamula is held up as an example of all that is wrong with the combine. Great in the weight room, great on the test, shitty on the field. The guy could do everything but play.

Only one prospect has ever gotten a perfect Wonderlic score: Pat McInally, a Harvard wide receiver and punter who went in the fifth round in 1975 to Cincinnati, where he played ten seasons, which brings up an interesting question: Is it bad to overachieve on the Wonderlic?

General managers tend to steer clear of those who do poorly on the test and also of those who do well. Given a choice between too smart and too dumb, they’d choose too dumb every time. (Frank Gore, 6.) Anything over a 40 tends to be seen as a potential problem. Will too smart on the test mean too much thinking on the field and too much questioning in the locker room? If you’re looking at a 45, you’re looking at a guy who knows he’s smarter than the coach and who just might lead an insurrection. Some people speak of a Wonderlic sweet spot: 30 to 38, a range that would net most elite pro quarterbacks, including Andrew Luck, Tony Romo, and Colin Kaepernick. You want just enough intelligence to get up and down the field. Anything more is unnecessary or even a liability…
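An aside: the three sample Wonderlic questions quoted above check out in a few lines of code (the worked answers below are mine, not the article's):

```python
import string

# Q1: Six cooks boil 12 pots in four minutes, i.e. 2 pots per cook
# in that window, so 48 pots need 48 / 2 = 24 cooks.
pots_per_cook = 12 / 6
cooks_needed = int(48 / pots_per_cook)
print(cooks_needed)  # 24

# Q2: The brother is a third of 18, i.e. 6, so the gap is 12 years.
# When the girl is 36, he is 36 - 12 = 24 (not 12, the classic trap).
age_gap = 18 - 18 // 3
print(36 - age_gap)  # 24

# Q3: The 18th letter of the English alphabet (index 17, zero-based).
print(string.ascii_uppercase[17])  # R
```

For the record: 24 cooks, age 24, and the letter R. Now try doing fifty of those in twelve minutes.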
A great piece. Subscribe. Or, buy it off the stand.

The article concludes:
With all we know about the condition of retired players and the long-­term effects of concussions, maybe the real winners are those who didn’t get picked at all.
 Like, say, my grandson Keenan.

Precocious kid tennis player, USTA-ranked 43rd nationally by age 12. Four-year varsity starter in high school, recruited by more than 20 postsecondary schools, and (mercifully) opted to go Div III for his college ride (St. Olaf). We were so relieved when it was all over and he emerged unhurt.


 "SAM"--"Sentiment Analysis Moderation," that is. Artificial Intelligence-assisted censorship. From Naked Capitalism: "Advertisers blacklisting news and other stories containing 'controversial' words..."

Stay tuned.

More to come...

Thursday, August 15, 2019

Four months in Baltimore

In mid-afternoon on April 15th, Cheryl and I rolled into our new home via our dogs-and-cats-in-tow convoy after an eight-day, 2,790-mile transcontinental schlep from California.

Loving it thus far.

A few subsequent related posts, here, here, here, and here.

Wednesday, August 14, 2019

Healthcare Triage: Medicare coverage implications for the 2020 campaign

This is quite good. Timely, apropos of the 2020 campaign.




More to come...

Monday, August 12, 2019

China Rx

H/T to Naked Capitalism:

The Amazon blurb:
Millions of Americans are taking prescription drugs made in China and don't know it--and pharmaceutical companies are not eager to tell them. This is a disturbing, well-researched wake-up call for improving the current system of drug supply and manufacturing.

Several decades ago, penicillin, vitamin C, and many other prescription and over-the-counter products were manufactured in the United States. But with the rise of globalization, antibiotics, antidepressants, birth control pills, blood pressure medicines, cancer drugs, among many others are made in China and sold in the United States. China's biggest impact on the US drug supply is making essential ingredients for thousands of medicines found in American homes and used in hospital intensive care units and operating rooms. The authors convincingly argue that there are at least two major problems with this scenario. First, it is inherently risky for the United States to become dependent on any one country as a source for vital medicines, especially given the uncertainties of geopolitics. For example, if an altercation in the South China Sea causes military personnel to be wounded, doctors may rely upon medicines with essential ingredients made by the adversary. Second, lapses in safety standards and quality control in Chinese manufacturing are a risk. Citing the concerns of FDA officials and insiders within the pharmaceutical industry, the authors document incidents of illness and death caused by contaminated medications that prompted reform. This probing book examines the implications of our reliance on China on the quality and availability of vital medicines.
Comforted to have Donald Trump in the White House? It's not limited to Rx, either. Consider the Chinese manufacturing origins of so many of the components of our military hardware alone.


Props to STATnews.
Canadians are hopping mad about Trump’s drug importation plan. Some of them are trying to stop it.

Canadians are furious about the Trump administration’s plan to import their prescription drugs. And some of them are determined to stop the proposal in its tracks.

Trump’s plan, which was announced late last month, would allow states, wholesalers, and pharmacies to import cheaper drugs from Canada. It’s a long way off from being implemented, but Canadians are baffled that America would look north to lower its own drug prices, and indignant that such a plan could exacerbate an already pressing drug shortage issue plaguing the country.

“You are coming as Americans to poach our drug supply, and I don’t have any polite words for that,” said Amir Attaran, a professor at the University of Ottawa, who calls the plan “deplorable” and “atrociously unethical.” “Our drugs are not for you, period.”…

More to come...

Thursday, August 8, 2019

The Technological Tail Wagging Humanity's Dog

Lots of hand-wringing these days with respect to the existing and potential malign attributes of digital technologies--social media in particular.

This is a very interesting, very smart young man, Tristan Harris. Watch all of it. Time well spent.

Highly recommend you spend some time surfing their website once you've viewed the above video talk.

Dovetails in a number of areas with this book I'm close to finishing.

...The pandemic of contempt in political matters makes it impossible for people of opposing views to work together. Go to YouTube and watch the 2016 presidential debates: they are masterpieces of eye-rolling, sarcasm, and sneering derision. For that matter, listen as politicians at all levels talk about their election opponents, or members of the other party. Increasingly, they describe people unworthy of any kind of consideration, with no legitimate ideas or views. And social media? On any contentious subject, these platforms are contempt machines. 

Of course this is self-defeating in a nation in which political competitors must also be collaborators. How likely are you to want to work with someone who has told an audience that you are a fool or a criminal? Would you make a deal with someone who publicly said you are corrupt? How about becoming friends with someone who says your opinions are idiotic? Why would you be willing to compromise politically with such a person? You can resolve problems with someone with whom you disagree, even if you disagree angrily, but you can’t come to a solution with someone who holds you in contempt or for whom you have contempt. 

Contempt is impractical and bad for a country dependent on people working together in politics, communities, and the economy. Unless we hope to become a one-party state, we cannot afford contempt for our fellow Americans who simply disagree with us. 

Nor is contempt morally justified. The vast majority of Americans on the other side of the ideological divide are not terrorists or criminals. They are people like us who happen to see certain contentious issues differently. When we treat our fellow Americans as enemies, we lose friendships, and thus, love and happiness. That’s exactly what’s happening. I already cited a poll showing that a sixth of Americans have stopped talking to a family member or close friend because of the 2016 election. People have ended close relationships, the most important source of happiness, because of politics...

Brooks, Arthur C. Love Your Enemies. Broadside e-books. Kindle Edition, location 352.
A lot to think about.

UPDATE: I finished Arthur's book. Also definitely "time well spent," though I have some nits to pick.


How facial recognition became the most feared technology in the US
Two lawmakers are drafting a new bipartisan bill that could seriously limit the use of the technology across the US.

Facial recognition is having a moment.

Across the US, local politicians and national lawmakers on both sides of the aisle have started introducing rules that bar law enforcement agencies from using facial recognition technology to surveil everyday citizens.

In just the past few months, three cities — San Francisco, Oakland, and Somerville, Massachusetts — have passed laws to ban government use of the controversial technology, which analyzes pictures or live video of human faces in order to identify them. Cambridge, Massachusetts, is also moving toward a government ban. Congress recently held two oversight hearings on the topic and there are at least four pieces of current federal legislation to limit the technology in some way.

And now, Recode has learned that two top lawmakers, Rep. Elijah Cummings (D-MD) and Rep. Jim Jordan (R-OH), plan this fall to introduce a new bipartisan bill on facial recognition, according to representatives from both legislators’ offices. The specifics of the bill are still being hashed out, but it could include issuing a pause on the federal government’s acquisition of new facial recognition technology, according to a staffer from Jordan’s office.

Facial recognition is a rare case where regulators are working together — on a bipartisan level, no less — to try to get ahead of technology instead of catching up to it. That’s because this powerful new technology has the potential to infringe on Americans’ civil liberties — no matter their political persuasion — and to have a chilling effect on free speech…
For one thing, I am reminded of my 2008 post "Privacy and the 4th Amendment Amid the 'War on Terror'."

More to come...

Tuesday, August 6, 2019

A.I. for the masses?

What could possibly go wrong?

In my latest snailmail Science Magazine:
Bringing machine learning to the masses

Yang-Hui He, a mathematical physicist at the University of London, is an expert in string theory, one of the most abstruse areas of physics. But when it comes to artificial intelligence (AI) and machine learning, he was naïve. “What is this thing everyone is talking about?” he recalls thinking. Then his go-to software program, Mathematica, added machine learning tools that were ready to use, no expertise required. He began to play around, and realized AI might help him choose the plausible geometries for the countless multidimensional models of the universe that string theory proposes.

In a 2017 paper, He showed that, with just a few extra lines of code, he could enlist the off-the-shelf AI to greatly speed up his calculations. “I don't have to get down to the nitty gritty,” He says. Now, He says he is “on a crusade” to get mathematicians and physicists to use machine learning, and gives about 20 talks a year on the power of these new user-friendly versions.

AI used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don't need to code at all. Insights are just a drag-and-drop away. Computational power is no longer much of a limiting factor in science, says Juliana Freire, a computer scientist at New York University in New York City who is developing a ready-to-use AI tool with funding from the Defense Advanced Research Projects Agency (DARPA). “To a large extent, the bottleneck to scientific discoveries now lies with people.”…

The AI tools are more than mere toys for nonprogrammers, says Tim Kraska, a computer scientist at the Massachusetts Institute of Technology in Cambridge who leads Northstar, a machine learning tool supported by the $80 million DARPA program called Data-Driven Discovery of Models. Wade Shen, who leads the DARPA program, says the tools can outperform data scientists at building models, and they're even better with a subject matter expert in the loop.

In a demo for Science, Kraska showed how easy it was to use Northstar's drag-and-drop interface for a serious problem. He loaded a freely available database of 60,000 critical care patients that includes details on their demographics, lab tests, and medications. In a couple of clicks, Kraska created several heart failure prediction models, which quickly identified risk factors for the condition. One model fingered ischemia—a poor blood supply to the heart—which doctors know is often codiagnosed with heart failure. That was “almost like cheating,” Kraska said, so he dragged ischemia off the list of inputs and the models immediately began to retrain to look for other predictive factors.

Maciej Baranski, a physicist at the Singapore-MIT Alliance for Research & Technology Centre, says the group plans to use Northstar to explore cell therapies for fighting cancer or replacing damaged cartilage. The system will help biologists combine the optical, genetic, and chemical data they've collected from cells to predict their behavior…

The trend toward off-the-shelf AI has risks. Machine learning algorithms are often called black boxes, their inner workings shrouded in mystery, and the prepackaged versions can be even more opaque. Novices who don't bother to look under the hood might not recognize problems with their data sets or models, leading to overconfidence in biased or inaccurate results.
But Kraska says Northstar has a safeguard against misuse: more AI. It includes a module that anticipates and counteracts typical rookie mistakes, such as assuming any pattern an algorithm finds is statistically significant. “In the end it actually tries to mimic what a data scientist would do,” he says.
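The workflow Kraska demonstrates (fit a model, spot a giveaway predictor like ischemia, drag it off the inputs, and let the model retrain) is worth a quick sketch. The toy data, feature names, and bare-bones logistic regression below are my stand-ins, not Northstar's interface or the actual critical-care database:

```python
import math
import random

random.seed(0)

def make_patient():
    """Synthetic record: ischemia is nearly always codiagnosed with heart failure."""
    hf = random.random() < 0.3
    features = {
        "ischemia": 1.0 if (hf and random.random() < 0.9) else 0.0,
        "edema":    1.0 if random.random() < (0.5 if hf else 0.2) else 0.0,
        "smoker":   1.0 if random.random() < 0.4 else 0.0,
    }
    return features, (1.0 if hf else 0.0)

data = [make_patient() for _ in range(2000)]

def train(feature_names, epochs=30, lr=0.05):
    """Minimal logistic regression fit by stochastic gradient descent."""
    w, b = {f: 0.0 for f in feature_names}, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(w[f] * x[f] for f in feature_names)
            p = 1.0 / (1.0 + math.exp(-z))
            b -= lr * (p - y)
            for f in feature_names:
                w[f] -= lr * (p - y) * x[f]
    return w

# First pass: ischemia dominates the model -- "almost like cheating."
w_all = train(["ischemia", "edema", "smoker"])
# Drag ischemia off the inputs and retrain on what remains.
w_rest = train(["edema", "smoker"])
print(max(w_all, key=lambda f: abs(w_all[f])))   # ischemia, by construction
print(max(w_rest, key=lambda f: abs(w_rest[f])))
```

Dropping the near-tautological feature forces the model to surface the weaker, more interesting predictors, which is exactly the point of the interactive loop.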
"The trend toward off-the-shelf AI has risks. Machine learning algorithms are often called black boxes, their inner workings shrouded in mystery...Novices who don't bother to look under the hood might not recognize problems with their data sets or models, leading to overconfidence in biased or inaccurate results."
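As for the "rookie mistake" that safeguard module targets (assuming any pattern an algorithm finds is statistically significant), it is the classic multiple-comparisons problem. A guardrail of that general kind can be as simple as tightening the significance threshold as the number of candidate patterns grows. The simulation below is my own illustration, not Northstar's actual module:

```python
import math
import random

random.seed(1)

N, FEATURES = 500, 200  # patients, candidate predictors -- all pure noise

def pearson(xs, ys):
    """Sample Pearson correlation, computed by hand."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

outcome = [random.gauss(0, 1) for _ in range(N)]
noise_features = [[random.gauss(0, 1) for _ in range(N)] for _ in range(FEATURES)]
correlations = [abs(pearson(f, outcome)) for f in noise_features]

# Under the null, |r| beyond roughly 1.96/sqrt(N) "looks significant" at p < 0.05.
naive_cutoff = 1.96 / math.sqrt(N)
# Bonferroni: test each of 200 patterns at 0.05/200, i.e. z of about 3.66 two-sided.
corrected_cutoff = 3.66 / math.sqrt(N)

naive_hits = sum(r > naive_cutoff for r in correlations)
corrected_hits = sum(r > corrected_cutoff for r in correlations)
print(naive_hits)      # roughly ten spurious "discoveries" expected
print(corrected_hits)  # usually zero
```

With 200 pure-noise predictors, the naive p < 0.05 cutoff "discovers" about ten of them; the corrected cutoff usually finds none, which is the behavior you'd want a novice-proofing module to enforce.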
I'll re-post something from last year:

Another "Holy Shit" book. Yikes.

ALMOST two decades ago, when I wrote the preface to my book Causality (2000), I made a rather daring remark that friends advised me to tone down. “Causality has undergone a major transformation,” I wrote, “from a concept shrouded in mystery into a mathematical object with well-defined semantics and well-founded logic. Paradoxes and controversies have been resolved, slippery concepts have been explicated, and practical problems relying on causal information that long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Put simply, causality has been mathematized.”

Reading this passage today, I feel I was somewhat shortsighted. What I described as a “transformation” turned out to be a “revolution” that has changed the thinking in many of the sciences. Many now call it “the Causal Revolution,” and the excitement that it has generated in research circles is spilling over to education and applications. I believe the time is ripe to share it with a broader audience.

This book strives to fulfill a three-pronged mission: first, to lay before you in nonmathematical language the intellectual content of the Causal Revolution and how it is affecting our lives as well as our future; second, to share with you some of the heroic journeys, both successful and failed, that scientists have embarked on when confronted by critical cause-effect questions.

Finally, returning the Causal Revolution to its womb in artificial intelligence, I aim to describe to you how robots can be constructed that learn to communicate in our mother tongue— the language of cause and effect. This new generation of robots should explain to us why things happened, why they responded the way they did, and why nature operates one way and not another. More ambitiously, they should also teach us about ourselves: why our mind clicks the way it does and what it means to think rationally about cause and effect, credit and regret, intent and responsibility…

Pearl, Judea; Mackenzie, Dana. The Book of Why: The New Science of Cause and Effect (Kindle Locations 47-61). Basic Books. Kindle Edition.
This one is gonna be fun. Stay tuned. From the Atlantic interview article:
As Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently...
"If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do."
In short, being unreflectively "data-driven" (that fashionable tech cliche) is both naive and a cop-out. (Note: some of this will surely go -- at least tangentially -- to the "information ethics" topic of my prior post.)

See also my 2018 post "Data Science?"


This is a hoot:
Artificial intelligence is not intelligent enough or, more exactly, not imaginative enough or creative enough to make us resign thinking. Tests for artificial intelligence are not rigorous enough. It does not take intelligence to meet the Turing test – impersonating a human interlocutor – or win a game of chess or general knowledge. You will know that intelligence is artificial only when your sexbot says, ‘No.’

Fernández-Armesto, Felipe. Out of Our Minds. University of California Press. Kindle Edition, location 7720. 
This book, wow!
The speed and reach of the computer revolution raised the question of how much further it could go. Hopes and fears intensified of machines that might emulate human minds. Controversy grew over whether artificial intelligence was a threat or a promise. Smart robots excited boundless expectations. In 1950, Alan Turing, the master cryptographer whom artificial intelligence researchers revere, wrote, ‘I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’ The conditions Turing predicted have not yet been met, and may be unrealistic. Human intelligence is probably fundamentally unmechanical: there is a ghost in the human machine. But even without replacing human thought, computers can affect and infect it. Do they corrode memory, or extend its access? Do they erode knowledge when they multiply information? Do they expand networks or trap sociopaths? Do they subvert attention spans or enable multi-tasking? Do they encourage new arts or undermine old ones? Do they squeeze sympathies or broaden minds? If they do all these things, where does the balance lie? We have hardly begun to see how cyberspace can change the psyche. [Ibid, location 7441]

Amazon recommended this book to me:

Only $4.99 Kindle price. 5-star reviews. I precipitously did 1-Click.

My Bad. It's awful. Reads like it was written by A.I.

Machine learning is one in all the quickest growing areas of technology, with far-reaching applications. This textbook is intended to give a proper introduction of machine learning, and all the algorithmic paradigms that machine learning offers, in a principled way. The book provides an intensive hypothesis of the basic concepts underlying machine learning and also the mathematical derivations that remodel these principles into practical algorithms. After a presentation of the basics of the sector, the book covers a wide range of central topics that have never been addressed by previous textbooks. These embody a discussion of the process complexity of learning and also the ideas of convexity and stability; major algorithmic paradigms together with stochastic gradient descent, neural networks, and structured output learning; and rising theoretical ideas like the PAC-Bayes approach and compression-based bounds. Designed for a starting graduate or refined student course, the text makes the elemental and algorithms of machine learning accessible to non-expert readers and pupils of arithmetics, engineering, statistics and computer science.

Samelson, Steven. Machine Learning: The Absolute Complete Beginner’s Guide to Learn and Understand Machine Learning From Beginners, Intermediate, Advanced, To Expert Concepts (pp. 1-2). Kindle Edition.
Seriously? Need I really elaborate? Got played this time.


Reported at TechCrunch:
The UK’s National Health Service is launching an AI lab

The UK government has announced it’s rerouting £250M (~$300M) in public funds for the country’s National Health Service (NHS) to set up an artificial intelligence lab that will work to expand the use of AI technologies within the service.

The Lab, which will sit within a new NHS unit tasked with overseeing the digitisation of the health and care system (aka: NHSX), will act as an interface for academic and industry experts, including potentially startups, encouraging research and collaboration with NHS entities (and data) — to drive health-related AI innovation and the uptake of AI-driven healthcare within the NHS.

Last fall the then new in post health secretary, Matt Hancock, set out a tech-first vision of future healthcare provision — saying he wanted to transform NHS IT so it can accommodate “healthtech” to support “preventative, predictive and personalised care”.

In a press release announcing the AI lab, the Department of Health and Social Care suggested it would seek to tackle “some of the biggest challenges in health and care, including earlier cancer detection, new dementia treatments and more personalised care”.

Other suggested areas of focus include:

  • improving cancer screening by speeding up the results of tests, including mammograms, brain scans, eye scans and heart monitoring
  • using predictive models to better estimate future needs of beds, drugs, devices or surgeries
  • identifying which patients could be more easily treated in the community, reducing the pressure on the NHS and helping patients receive treatment closer to home
  • identifying patients most at risk of diseases such as heart disease or dementia, allowing for earlier diagnosis and cheaper, more focused, personalised prevention
  • building systems to detect people at risk of post-operative complications, infections or requiring follow-up from clinicians, improving patient safety and reducing readmission rates
  • upskilling the NHS workforce so they can use AI systems for day-to-day tasks
  • inspecting algorithms already used by the NHS to increase the standards of AI safety, making systems fairer, more robust and ensuring patient confidentiality is protected
  • automating routine admin tasks to free up clinicians so more time can be spent with patients...
Have to wonder what Seamus O'Mahony would say?

More to come...

Sunday, August 4, 2019

El Paso, Dayton:

Where will they strike next?

An unarmed "nutcase" is just a nutcase. A tilting-at-delusional-windmills "manifesto" writer.

On "American Exceptionalism."

Friday, August 2, 2019

Our family tree is to grow again

Eileen and Matthew
Our son Matt has now lost both of his sisters to cancer (Sissy in 1998, Danielle last April). He is our last kid standing, hence our relocation from California to Baltimore back in April. He and his fabulous fiancée Eileen (Baltimore native, humane, breathtakingly smart environmental engineer with the state, and an accomplished sailor) will bring us a new grandson early next year.

Our personal ecstasy at this family news aside, I continue to fret over the quality of the world we will bequeath all of our children and grandchildren.

From an article I just read at WIRED:
...Asking how to pay for the impact of climate change implies that these costs are a matter of choice. The reality is that global warming will impose massive costs, regardless of whether policymakers respond or not. Thus, the real question is not “How would you propose to pay?” but instead “Who is going to pay?” and “How much?”
Indeed. What looms is not optional in the aggregate, the inane bleatings of people like the ethical zombie Donald Trump aside. Left effectively unchallenged, Frase's "Quadrant IV" draws nigh, and its realization will not be pretty.

The WIRED article continues:
People are already paying for climate change with their lives. Rising temperatures are killing more than 150,000 people every year. This death toll is estimated to increase to 1.5 million people annually by the turn of the century. Some are confronting the likelihood of failed crops; others have been forced to flee floodplains.

Those currently paying for the effects of climate change are the most vulnerable—people in the developing world, the poor, the sick, the elderly, and the very young. As the world changes, more people are going to suffer the cost of heat waves, rising water, damaged or dying ecosystems, and flooded coastal cities. This will create what political science and public policy experts describe as “existential politics,” in which different groups fight to preserve their entire way of life.

On one side of this existential fight will be those who want things to continue mostly as they are…
I enjoy my entertainments as much as anyone else, but they don't consume my consciousness. We all have a moral duty to our offspring to do whatever we can to leave a better world behind. I want for everyone a healthy, blooming, growing Family Tree going forward.


From an interesting post on Medium:
Climate change — the sheer scale of the catastrophe we collectively face — is finally breaking through to mass consciousness. That’s a good thing. Yet accompanying it is a pernicious myth. Climate change is your fault — therefore, solving climate change is a matter of your individual actions.

This myth goes something like this. “I’m going to eat less meat! I’m going to travel less on airplanes!! And anyone who does those things is bad! They must not care about the planet!” It’s a fairy tale, my friends. Like so many myths, its purpose is to shield us from a truth we don’t want to face — or aren’t capable of facing yet.

Now, this is an old American fantasy — the fantasy of individual action. The idea that everything can be fixed by our individual actions — the more heroic, the better. But collective action? Cooperation? Those can never be allowed to exist. It’s the same myth, really, that caused America to end up without a working healthcare, education, or retirement system. Individual action, not collective action — everything’s your fault, and therefore, your responsibility, too. The system can never be at fault. There shouldn’t be a system for anything in the first place, except for anything but profit…
"There shouldn’t be a system for anything in the first place, except for anything but profit."
Need I really elaborate? OK, how "profitable" will business entities be in societies collapsing on multiple fronts owing to increasingly acute and severe worldwide climate degradation? Seriously?

BTW, see my April 22nd post "An #Earthday reflection from Baltimore."

More to come...

Friday, July 26, 2019

Louise Aronson, MD: a home run on "Elderhood."

Just stop whatever you're doing, buy this book, and read it closely. Lordy, Mercy!
From the Amazon blurb:
As revelatory as Atul Gawande's Being Mortal, physician and award-winning author Louise Aronson's Elderhood is an essential, empathetic look at a vital but often disparaged stage of life.

For more than 5,000 years, "old" has been defined as beginning between the ages of 60 and 70. That means most people alive today will spend more years in elderhood than in childhood, and many will be elders for 40 years or more. Yet at the very moment that humans are living longer than ever before, we've made old age into a disease, a condition to be dreaded, denigrated, neglected, and denied. 

Reminiscent of Oliver Sacks, noted Harvard-trained geriatrician Louise Aronson uses stories from her quarter century of caring for patients, and draws from history, science, literature, popular culture, and her own life to weave a vision of old age that's neither nightmare nor utopian fantasy--a vision full of joy, wonder, frustration, outrage, and hope about aging, medicine, and humanity itself. 

Elderhood is for anyone who is, in the author's own words, "an aging, i.e., still-breathing human being."
This book jumped the line that is my never-ending book queue. I couldn't put it down. I bought my wife her own copy (the "elderhood" topic is about us). I tweeted,
While digital infotech was mentioned only relatively sparely, this jumped out, and is relevant to our original core KHIT topic:
For every hour they spend face-to-face with patients, doctors now spend two to three hours on the electronic medical record, or EMR. They also spend “pajama time” at home at night finishing electronic notes they can’t finish during their long workdays. Many of us lament this. Much less discussed is how technology that has undermined efficiency and the doctor-patient relationship became the national standard. Or why medicine bought electronic record systems from businesses with vastly different priorities from those of clinicians and patients, or why, having seen the harm to clinicians in systems that already adopted that technology, more and more health care organizations followed suit. Instead, we discuss the alarming, increasing rates that doctors get sick, take drugs, get divorced, and leave medicine, and how they commit suicide at rates higher than the general population. We institute programs on wellness and resilience, but don’t change anything fundamental about the priorities and systems that make such programs necessary. We blame the victims.

As a doctor, I use the particular electronic medical record that holds the health information of a majority of Americans. It’s a system designed to facilitate billing, not care. Its greatest asset is that the accounting department can quickly find the information needed to plug into formulas that link activities to charges. To make their jobs easier, we clinicians must provide required data in specific places in interconnected windows that resemble nothing so much as a fun house where doors lead to doors, and mirrors lead to confusion. We are also strongly encouraged to use standardized text, as if my visual disability or cancer surgery or inflammatory arthritis were identical to yours. Or as if one doctor’s take on a particular patient were always identical to another’s. This need to input copious information in particular language and places incentivizes cutting and pasting old notes to make new ones, and erring on the side of leaving things in rather than highlighting what may matter most. Medical notes are now so full of noise and jargon that it’s often impossible to figure out what actually happened during a specific encounter. One night on call, the lab paged me about a dangerously abnormal test result in a cancer patient I don’t know. I read and reread her notes, unable to tell which of the three cancer diagnoses on her chart was active. This is typical. Meanwhile, patients’ illness stories and their doctors’ analyses of those particular experiences, neither of which aid billing, are often altogether absent.

Electronic medical records are not the only contributors to physician burnout, but they are the technological embodiment of the nefarious values driving our health care system. The biggest EMR company apparently dismisses complaints from patients, doctors, and nurses. Our concerns don’t matter, I’ve been told by multiple sources, because we’re not their customers. Medical centers and health systems are, and they just keep on buying the product. In defending their actions, health leaders tout the EMR’s reliability, its accessibility from anywhere, and its usefulness for research and quality improvement. Those are significant benefits. Unmentioned is its often redundant, recycled, and outdated information or its frequent, significant, systematic information gaps with real potential to harm or kill patients. Such flaws would not be tolerated by most businesses or consumers. As anyone who works with data knows: garbage in equals garbage out.

I do not feel sentimental about the days of handwritten patient notes and the illegible, sometimes unsafe, hard-to-find, and practically impossible-to-share records they produced. But I do feel nostalgic for something essential that was lost when they were replaced by electronic record platforms. Heedlessly and unnecessarily, this particular approach to cyberdata collection has desecrated the most precious, meaningful elements of the patient-doctor relationship: the human connection, direct and intimate, laden with subtleties, significance, and respect for each person’s unique feelings and needs. In our brave new world, very little worth is accorded to activities such as spending a clinic visit talking through the impact of a patient’s new diagnosis on her health and life or building the sort of relationship that enables discussion of the real reasons why another patient can’t lose the excess weight causing his diabetes and high blood pressure. The things I most want from my doctors and try hardest to give my patients—things like attentive listening, shared decision-making, and individualized treatment—don’t much matter. In such a system, I am penalized if my patient doesn’t get a colonoscopy, something the EMR and my health center track, but struggle to find a place to document the half hour I spent with her and her daughter discussing why her multiple advanced illnesses and short life expectancy mean that she would likely incur all the risks and inconveniences of that screening test but none of its benefits.

The screen-focused physician is one reason patients complain doctors don’t listen or know them. It’s one reason 81 percent of physicians now say their workload is at capacity or overextended; half would not recommend medicine as a career. It’s not that electronic records are the sole cause of the historically unprecedented disillusionment of doctors today, but they are paradigmatic.

Erosion results in a wound, the worn-away part present like the negative space in a sculpture. When I tried to learn how best to use our new electronic record system, my institution sent me to trainings with a young man who informed my large group of doctors that he hadn’t been trained on what he called “the clinician interface.”

Months later, when I went to the lead doctor in our practice to ask for help because the system-generated notes seemed so worthless that I found myself creating both those required checkbox, robotext records and also narrative notes that captured the important elements of my patient visits, her unspoken words and actions made me feel that she thought my concerns were the time-sucking ramblings of a technologically inept person with an irreparable cognitive deficit and an annoyingly flawed character…

Aronson, Louise. Elderhood (pp. 217-220). Bloomsbury Publishing. Kindle Edition.
Not exactly a new complaint. She's at UCSF, which means she's on Epic. Since I retired, my patient world has been all Epic all the time (they pretty much dominate the SF Bay Area, and as a Kaiser member in Baltimore since June, my chart is now on Epic as well).

While Dr. Aronson cracked on EHRs in a few pointed venting-her-frustrations passages, the book's emphasis is her personal story as a child, then a grown daughter / caregiver, her medical education, residency, practice, episode of burnout, and eventual professorship. Particularly moving are her stories of numerous elderly patients, as well as her candid revelations of her personal reactions to now being an "aging" woman.

A truly marvelous read.

From another of my tweets:
"Elderhood." You either die early, or it awaits you. This elegantly written, painfully probing, insightful, humane book should win multiple literary awards.

Interesting post up at THCB: "Doctors will vote with their patients."

The author's book:

...When Donald Trump expressed his cluelessness—“Nobody knew that healthcare could be so complicated”—before a meeting of state governors in February 2017, he was referring to our approach to health insurance, which has been a political piñata whacked by both left and right for decades. But even when we Americans acknowledge the absurdity of our convoluted system of third-party payers, and the pretzel positions our politicians weave into and out of as they try to justify it, reform it, then unreform it, many still find solace in telling themselves, “Well, we still have the best health care in the world.” 

In point of fact, we’re not even close to having the best health care in the world. As legendary Princeton health economist Uwe Reinhardt said, “At international health care conferences, arguing that a certain proposed policy would drive some country’s system closer to the U.S. model usually is the kiss of death.” Our system is marked by extreme variability, a nation of health care haves and have-nots. The fortunate receive services from immensely talented and dedicated physicians, nurses, and other caregivers, and they have access to drugs, devices, and facilities that are the envy of the world. All others struggle just to stay healthy without going broke. Americans spend from 50 percent to 100 percent more on health care as a share of GDP than people in other industrialized countries do, and for all our high expenditure we get collective outcomes that are demonstrably worse. In fact, we get outcomes that are, in general, truly dismal...

Magee, Mike (2019). Code Blue. Grove Atlantic. Kindle Edition, locations 92-98.
Add one more to the endless reading list. BTW, apropos of the "medical industrial complex" Dr. Mike alludes to repeatedly in his book, see "Can medicine be cured?"


My latest snailmail edition of Science arrived the other day. The cover art:

"Glacial Melting." The article is quite technical, but, in plain English, the news is not good.
Ice loss from the world’s glaciers and ice sheets contributes to sea level rise, influences ocean circulation, and affects ecosystem productivity. Ongoing changes in glaciers and ice sheets are driven by submarine melting and iceberg calving from tidewater glacier margins. However, predictions of glacier change largely rest on unconstrained theory for submarine melting. Here, we use repeat multibeam sonar surveys to image a subsurface tidewater glacier face and document a time-variable, three-dimensional geometry linked to melting and calving patterns. Submarine melt rates are high across the entire ice face over both seasons surveyed and increase from spring to summer. The observed melt rates are up to two orders of magnitude greater than predicted by theory, challenging current simulations of ice loss from tidewater glaciers.
Expect the inertial Denialism to continue, though. Right, Donald?

More to come...

Thursday, July 25, 2019

Climate change: What We Know.

Apropos, from the always excellent Neurologica Blog:
The Global Warming Consensus

The degree to which there is a scientific consensus in anthropogenic global warming (AGW) remains politically controversial, even though it is not scientifically controversial. Denial of the consensus remains a cornerstone of AGW denial, so let's examine the science and the arguments used to deny it...
The Anthropocene. Notwithstanding what The Donald thinks. And I use the word "thinks" charitably.

The Risk of Conflict Rises as the World Heats Up
Ignoring the connections between climate and security poses risks for the U.S.

The Trump administration scorns climate science: it has rolled back environmental regulations while promoting fossil fuels, and removed and downplayed mentions of climate change on government websites, among other moves that could weaken efforts to address global warming. It should come as no surprise, then, that the White House also seems to be ignoring—even potentially challenging—research and expert opinions on the connections between climate change and national security…
Yeah, significant, overlapping adversities loom. Read the 2018 IPCC Report.


One of my Facebook friends nails it:
Q: How do you get a Denialist to change his mind on global warming?
A: Claim it causes autism.


Note: TRNN interview transcript at Naked Capitalism.


Two new books on my radar.

See "Environmental racism is bad for your brain."
...One factor that is little considered, even by academics who rail against racist stereotypes, is pollution and its impact on the brain. Research shows that contamination from cars, planes, power plants, factories and landfills is eroding the bodies and minds of black and brown Americans. Two new books call attention to this invisible crisis...
The Upstream.


Ran across this outfit:

DeSmog launched in January 2006 and quickly became the world’s number one source for accurate, fact based information regarding global warming misinformation campaigns...
Well, we'll see if those self-kudos are warranted.

More to come...

Wednesday, July 24, 2019

More on Medicare Advantage, and other news

Recall that Cheryl and I are now enrolled in Medicare Advantage via Kaiser-Permanente here in Baltimore.

Interesting caveats. Too early for us to tell yet. By year's end I'll have firmer views.


Intriguing article at Vox:
The science of regrettable decisions
A doctor explains how our brains can trick us into making bad choices — and how to fight back.
By Robert Pearl

As Full House actress Lori Loughlin and her husband await their next court date, they stand accused of paying a $500,000 bribe to get their daughters into the University of Southern California as crew team recruits. Their defense is said to rest on the belief that they were making a perfectly legal donation to the university and its athletic teams (their children never rowed a competitive race in their lives).

Legal strategies and moral considerations aside, this strange behavior has left many observers wondering, “What were they thinking?” Surely, Loughlin and her family must have considered someone at the university would audit the admissions records or realize the coach’s high-profile recruits had never rowed a boat.

We may never know exactly what Loughlin and her family were thinking. But as a physician who has studied how perception alters behavior, I believe that to understand what compelled them to do something so foolish, a more relevant question would be, “What were they perceiving?”

Understanding the science of regrettable decisions
Several years ago, I joined forces with my colleague George York, a respected neurologist affiliated with the University of California Davis, to understand why smart people make foolish choices in politics, sports, relationships, and everyday life. Together, we combed through the latest brain-scanning studies and decades of psychological literature.

We compared the scientific findings with an endless array of news stories and firsthand accounts of real people doing remarkably irrational things: We examined the court testimony of a cop who, despite graduating top five in his academy, mistook his gun for a Taser and killed an innocent man. We dug through the career wreckage of a once-rising politician who, despite knowing the risks, used his work phone to send sexually explicit messages. And we found dozens of studies confirming that doctors, the people we trust to keep us safe from disease, fail to wash their hands one out of every three times they enter a hospital room, a mistake that kills thousands of patients each year.

When we read about famous people ruining their lives or hear about normal people becoming famous for public follies, we shake our heads in wonder. We tell ourselves we’d never do anything like that.

But science tells us that we would, far more often than we’d like to believe…
Read all of it. "Science."

I riffed on Twitter: apropos, Kathryn Schulz noted "there is no first-person present-tense phrasing of the word 'wrong'." No one thinks or says "I am wrong."

Great book.

On deck, another awesome book, also of topical relevance:

The thoughts that come out of our minds can make us seem out of our minds. 

Some of our most potent ideas reach beyond reason, received wisdom, and common sense. They lurk at chthonic levels, emerging from scientifically inaccessible, rationally unfathomable recesses. Bad memories distort them. So do warped understanding, maddening experience, magical fantasies, and sheer delusion. The history of ideas is patched with crazy paving. Is there a straight way through it – a single story that allows for all the tensions and contradictions and yet makes sense? 

The effort to find one is worthwhile because ideas are the starting point of everything else in history. They shape the world we inhabit. Outside our control, impersonal forces set limits to what we can do: evolution, climate, heredity, chaos, the random mutations of microbes, the seismic convulsions of the Earth. But they cannot stop us reimagining our world and labouring to realize what we imagine. Ideas are more robust than anything organic. You can blast, burn, and bury thinkers, but their thoughts endure. 

To understand our present and unlock possible futures, we need true accounts of what we think and of how and why we think it: the cognitive processes that trigger the reimaginings we call ideas; the individuals, schools, traditions, and networks that transmit them; the external influences from culture and nature that colour, condition, and tweak them...

Fernández-Armesto, Felipe (2019). Out of Our Minds. University of California Press. Kindle Edition, location 198.
One more quick excerpt:
"In the pages that follow I intend to argue that evolution has endowed us with superabundant powers of anticipation, and relatively feeble memories; that imagination issues from the collision of those two faculties; that our fertility in producing ideas is a consequence; and that our ideas, in turn, are the sources of our mutable, volatile history as a species." (location 365)
I am reminded of this cool book I've heretofore studied and cited.


More to come...