Search the KHIT Blog

Thursday, December 26, 2019

Next up, 2020

Should be quite an eventful year. Presidential election year (ugh), with Trump (ugh) undergoing a fractious Senate impeachment trial amid all the maudlin political campaign hoopla. Obamacare is back in the judicial crosshairs yet again, global warming continuing apace largely unmitigated, health space digitech proceeding on multiple fronts, innovation conflated with hype at every turn...

My wife and BFF of 45 years turns 70 this New Year's Eve. My new grandson is to arrive 1st week of February (perhaps sooner). I will be mounting my counterattack on Parkinson's as I turn 74 in February, and will be pushing forward on a number of topical fronts in general as I see fit.


Finished this book. Excellent.

He's done his homework. And his candor is admirable (click the image).

Below, now under way.

Wonderfully written so far. Glad I picked this one too.
UPDATE: Finished it. Excellent. Five stars. I'm a sucker for the deftly turned phrase. Robert had my head spinning. On to the next one. Stay tuned.

May you all have a Happy and safe New Year. If you're traveling, I hope your hassles are not too onerous. I'm glad to be staying put this year.

One of my Facebook friends turned me on to these cats. Simply fabulous.


Re-read this interesting Arthur C. Brooks article on decline and "loss" as I reflect on my coming year(s).

More to come...

Thursday, December 19, 2019

My personal G20 Summit

ICD-10, that is.

On December 10th I wrote a post entitled "dx: Parkinson's?"

Just got back from my Kaiser Neurology consult.

Yep. From my visit notes: "Your exam is consistent with clinical diagnosis of Parkinson's disease..."

ICD-10 code G20

There are no direct bioassay or imaging dx indicators. No cure. Not fatal, just progressively debilitating. Meds are problematic. They ordinally "stage" it I-V. I feel like a 1.5, subjectively. Elevated fall risk is my particular priority concern.

Had a tangential lab draw (thyroid indicator), came home, put on a slow-cook pot of chili, and I'm shortly off to another KP facility for a 3:45 CT brain scan.
To see if they can find one (wocka-wocka), and to rule out any tumor(s)--one would hope.

Guess I'll have to launch a support/info podcast. This is how you learn. Throw stuff out there--after doing your onset due diligence, so you don't look foolish.

Down in the basement family room. Pulled up one of my old Vegas Logic Pro X podcast tracks onscreen, put one of my mics in front of it, shot a pic with my iPhone, uploaded it to Photoshop, added the text, then took another iPhone pic of all that, with my mug in the shot (with Ranger sitting by, just out of the shot).

I've been told that I have a great face for radio, so this will be a natural.

Called my amazing musician-singer-songwriter friend Doug Brons in Grants Pass, Oregon, and asked his permission to use one of his tunes, a song called "Dance," as my chase tune, segue, and v/o-bkg music. Granted.
Sometimes you're up,
Sometimes you're down,
Life can spin you
Like a merry-go-round.
Best way to chase it 
Is to face it, you see,
Just get up and dance...

Registered a blog name. Will fire stuff up and get to it ASAP.

Gotta run. Back at'cha later. Stay tuned.


CT brain scan was negative for any pathology (or much else, some might say). Good news. Plan hereafter for now includes stepped-up regular vigorous exercise, which I've already begun (my hoops Jones, to which I'm gonna at least add therapeutic "shadow boxing" at the Y):
Various studies in the 1980s and 1990s supported the notion that rigorous exercise, emphasizing gross motor movement, balance, core strength, and rhythm, could favorably impact range of motion, flexibility, posture, gait, and activities of daily living. More recent studies, most notably at Cleveland Clinic, focus on the concept of intense “forced” exercise, and have begun to suggest that certain kinds of exercise may be neuro-protective, i.e., actually slowing disease progression...[link]
And, more concerted daily guitar practice, which I need anyway. If I lose my guitar chops, that will be way bad for my headspace.

Not gonna do the Sinemet Rx, for now. Gotta be a better way to elevate the dopamine.

Lots to learn. Lots to try to contribute. Props to my KP physicians thus far.

And, [Bleep] Parkinson's.


Add four more to the endless reading pile.

Stay tuned.


Friday cardiology post-SAVR px annual follow-up. Everything looking great there. Really like my new KP cardiologist. Onward to confront my "G20" head-on.


Posted this on Crackbook.

If I lose my chops... I don't wanna think about it.

OK, enough boo-hoo'ing. How about a few laughs?


More to come...

Saturday, December 14, 2019

The future of malevolence: anonymous darkweb assassins for cryptocurrency hire

This is rather frightening. From my new Harper's issue.
The idea of an online assassination market was conceived long before it was possible to build one, and long before there was anything resembling the dark web.

In 1995, Jim Bell, an anarchist engineer who had studied at M.I.T. and worked at Intel, began writing a serialized essay titled “Assassination Politics” that proposed a theoretical framework for encouraging and crowdsourcing the murder of public officials. Inspired by a Scientific American article on the newfangled concept of encrypted “digital cash”—which did not yet exist in any meaningful way—Bell created one of the most sinister thought experiments of the early web.

The essay imagined a website or platform where users could anonymously nominate someone to be killed and pledge a dollar amount toward the bounty. They’d also be able to pay a small fee to make a “prediction”—an encrypted message that only the predictor and the site were privy to—as to when that person would be killed. Once the person was confirmed dead, the predictions would be decrypted and the pledged funds automatically transferred to the successful predictor. Implicit in the design was that the best way to predict when someone is going to die is to kill them yourself.

An ardent anarcho-libertarian, Bell was one of the more extreme cypherpunks, a group of internet privacy and cryptography advocates that coalesced around a mailing list in the early Nineties. (John Gilmore, a founder of the Electronic Frontier Foundation, and Timothy C. May, a senior scientist at Intel, were among the first contributors.) Bell’s own politics were animated by a pronounced distrust of government. He believed that his system would bring power to heel and usher in a new anarchic order. “If only 0.1% of the population, or one person in a thousand, was willing to pay $1 to see some government slimeball dead,” Bell wrote,

that would be, in effect, a $250,000 bounty on his head ... Perfect anonymity, perfect secrecy, and perfect security ... Chances are good that nobody above the level of county commissioner would even risk staying in office.
Since conspirators would never meet, or even be aware of one another’s identities, it would be impossible to rat anyone out. The money trail would be invisible. Back in 1995, there was no such thing as bitcoin or any other tradeable cryptocurrency, and few had access to encrypted web browsing. Now it’s easy to purchase bitcoins on any number of mainstream markets and “tumble” them so that their point of purchase is obscured. Similarly, thanks to Tor, accessing the dark web requires only opening a browser and enduring slower download speeds…
Subscribers-only paywalled long read written by Brian Merchant (BTW, I checked using a different browser while not logged in. It appears that they give up one free full read). This one essay is worth the cost of subscribing. I've been a Harper's subscriber for decades.

One more excerpt:
[Recent HS grad Alexis Stern’s] murder had apparently been ordered on a website called Camorra Hitmen, which advertised gun-for-hire services with the promise of keeping its clients anonymous.

Earlier that month, a user had logged on to Camorra Hitmen with the Tor browser—the most popular way to access the dark web—and created an account with the alias Mastermind365. Five days later, Mastermind365 sent a message asking whether it was possible for a hit man to carry out a kidnapping instead of a murder. The site’s administrator replied that it was, but it would be more expensive, because such an operation was riskier.

A week later, on July 15, Mastermind365 sent another message. “I have changed my mind since i previously spoke to you,” the user wrote.

I would not like this person to be kidnapped. Instead, i would just like this person to be shot and killed. Where, how and what with does not bother me at all. I would just like this person dead.
And with that, Mastermind365 sent more than $5,000 in bitcoin to Camorra Hitmen, along with a photo of Stern--a portrait she'd posted on a website she'd built in one of her classes…
Well, of late here we've just been looking at stuff like "Questioning Innovation," "America's epidemic of unkindness," and "Ethical Artificial Intelligence?" How about the ultimate in malevolent ethics, the ultimate in creepy unkindness, the worst uses of "innovation"? Darkweb-enabled, cryptocurrency-funded anonymous murder-for-hire markets?
The author notes that these dark net platforms are prime real estate via which to scam gullible assassin-hiring wannabes. You pays your bitcoins upfront and you gets hustled--who you gonna complain to, the FTC? Justice Dept Financial Crimes Unit? Serves your ignorant ass right.

Still, people have in fact been killed by cyber-nutjobs.
"Monteiro started scrolling through messages he had cached from Yura’s previous sites. The markets may have been scams, but the desire for violence was real. Monteiro had amassed a running list of people who had been singled out for death; people who’d had bounties placed on their heads, and a log of detailed conversations about how and why their would-be killers wanted them beaten, tortured, kidnapped, and murdered. It was like a Wikipedia entry for the outer extremes of human cruelty…"
While I was working on this story, journalists at BBC News Russia confirmed the first known case of a murder being ordered on the dark web and successfully carried out by hired assassins. On March 12, 2019, two young men, aged seventeen and nineteen, were arrested for the murder of a prominent investigator in Moscow who had been aggressively pursuing a drug-trafficking operation. The murder was not orchestrated over any of Yura’s scam sites, but over a standard, all-purpose dark-web marketplace similar to the Silk Road, according to Andrei Soshnikov, one of the reporters who broke the story. The killers never met the person who posted the job. They were paid anonymously, in bitcoin, and one of them attended a concert later that night.

A threshold had been crossed. For years, “dark-net hit man” stories made for good clickbait and little else. Experienced tech journalists emphatically debunked such stories as myths, because for years, that’s all they were: myths and fearmongering. But the fact that the hits didn’t happen was never really about the technology; it was an issue of trust. There has never been any serious question that the technology behind the dark web could preserve anonymity and allow users to move untraced through its pages: it absolutely can. That’s why the FBI resorts to old-fashioned methods of going undercover as drug buyers, child pornographers, and hit men in an effort to catch criminals there…


One of my recently finished reads.

Again, Amazon link embedded in the cover pic. This one also goes to my AI tech riffs, to which I will be returning soon. To wit,
The Power of Unintended Consequences

Neither Alfred Nobel nor Mikhail Kalashnikov anticipated how their inventions would be used for disruptive political violence. In the same way, the creators of the Internet, social media, and the many new and emerging technologies today have not foreseen the nefarious uses to which they’re being put. 

Today’s period of innovation is comparable to the explosive era of open technological innovation at the turn of the nineteenth century, and arguably even more potentially disruptive. Current and emerging technologies were not consciously designed to kill, as the AK-47 was; they are just as subject to popular whim, more tailored for attention-getting, and more wide-ranging in their potential applications, both good and bad. Because the current technological revolution is an open one, and because there is again so much money to be made by the diffusion of the new technologies, they have again spread rapidly and will continue to do so. Due to their accessibility and ease of use, clusters of new technologies will be combined in novel ways that are unanticipated. 

In the next part of the book, we will use the insights introduced in the first two sections to examine how today’s new and emerging technologies are already being used to raise the stakes of political violence, by enhancing the mobilization and reach of surprise attacks. We’ll also investigate how emerging breakthroughs in autonomy, robotics, and artificial intelligence may be harnessed for even more potent leverage...

Cronin, Audrey Kurth. Power to the People (pp. 167-168). Oxford University Press. Kindle Edition.

Apropos of the foregoing riff on disturbing advances in internet-facilitated violent mayhem, I am reminded of a screenplay idea that occurred to me a number of years ago, back when the Silk Road thing was above the fold:

Quickie Photoshop expression.
"TM?" LOL. The joke will utterly escape those mired in the Irony-Free Zone.
I'm a long-time fan of intrigue/action flicks such as the Bourne series, the Homeland series, Tom Clancy's Jack Ryan series, Syriana, Zero Dark Thirty, The Kingdom, Traitor, the incredible Peaky Blinders, etc., etc. I've never written any fiction, but this dark idea seemed like something worth an effort. My idea focused on neutralizing obvious "bad guys" (e.g., terrorists, corrupt pols, and myriad egregious criminals/predators--hardly a novel concept), but the foregoing Harper's article implies a whole new dark, tech-enabled level. Terminal "doxxing" of anyone who irks you on Twitter or Facebook?

I've already had my share of Chairborne Division Armrest Battalion Keyboard Kommando "death threats" for mocking simpleton anarchist militia buffoons such as the poignant Bundy sagebrush rebellion clan (The Power of Photoshop Compels Me...). And, while that hapless posse can't even pronounce "Tor" and "cryptocurrency," well...

Nah, not gonna dial it back. BYKOTA has its limits. I'm not the one advocating violence. I advocate pushback mockery of those intransigently advocating violence and intimidation.


More to come...

Christina Farr Twitter thread on our personal health data

 Nicely done.

Eleven follow-on tweets in the thread. Read them all. Follow the links she provides. Good job.

If anyone wants to gumshoe my health history, just read my blog. That HIPAA horse is way outa my barn.

More to come...

Friday, December 13, 2019

The 2019 AI Now Institute Report

An important read, and not just for us techie people.
Click the cover image for the full PDF
Forty pages of endnotes. Kudos.

See my prior post "Ethical artificial intelligence?"

Stay tuned. A busy topical week.

2.6 Health
AI technologies today mediate people’s experiences of health in many ways: from popular consumer-based technologies like Fitbits and the Apple Watch, to automated diagnostic support systems in hospitals, to the use of predictive analytics on social-media platforms to predict self-harming behaviors. AI also plays a role in how health insurance companies generate health-risk scores and in the ways government agencies and healthcare organizations allocate medical resources.

Much of this activity comes with the aim of improving people’s health and well-being through increased personalization of health, new forms of engagement, and clinical efficiency, popularly characterizing AI in health as an example of “AI for good” and an opportunity to tackle global health challenges. This appeals to concerns about information complexities of biomedicine, population-based health needs, and the rising costs of healthcare. However, as AI technologies have rapidly moved from controlled lab environments into real-life health contexts, new social concerns are also fast emerging.

The Expanding Scale and Scope of Algorithmic Health Infrastructures

Advances in machine learning techniques and cloud-computing resources have made it possible to classify and analyze large amounts of medical data, allowing the automated and accurate detection of conditions like diabetic retinopathy and forms of skin cancer in medical settings. At the same time, eager to apply AI techniques to health challenges, technology companies have been analyzing everyday experiences like going for a walk, food shopping, sleeping, and menstruating to make inferences and predictions about people’s health behavior and status.

While such developments may offer future positive health benefits, little empirical research has been published about how AI will impact patient health outcomes or experiences of care. Furthermore, the data- and cloud-computing resources required for training models for AI health systems have created troubling new opportunities, expanding what counts as "health data," but also the boundaries of healthcare. The scope and scale of these new "algorithmic health infrastructures" give rise to a number of social, economic, and political concerns.

The proliferation of corporate-clinical alliances for sharing data to train AI models illustrates these infrastructural impacts. The resulting commercial incentives and conflicts of interest have made ethical and legal issues around health data front-page news. Most recently, a whistle-blower report alerted the public to serious privacy risks stemming from a partnership, known as Project Nightingale, between Google and Ascension, one of the largest nonprofit health systems in the US. The report claimed that patient data transferred between Ascension and Google was not “de-identified.” Google helped migrate Ascension’s infrastructure to their cloud environment, and in return received access to hundreds of thousands of privacy-protected patient medical records to use in developing AI solutions for Ascension and also to sell to other healthcare systems.

Google, however, is not alone. Microsoft, IBM, Apple, Amazon, and Facebook, as well as a wide range of healthcare start-ups, have all made lucrative “data partnership” agreements with a wide range of healthcare organizations (including many university research hospitals and insurance companies) to gain access to health data for the training and development of AI-driven health systems. Several of these have resulted in federal probes and lawsuits around improper use of patient data.

However, even when current regulatory policies like HIPAA are strictly followed, security and privacy vulnerabilities can exist within larger technology infrastructures, presenting serious challenges for the safe collection and use of Electronic Health Record (EHR) data. New research shows that it is possible to accurately link two different de-identified EHR datasets using computational methods, so as to create a more complete history of a patient without using any personal health information of the patient in question. Another recent research study showed that it is possible to create reconstructions of patients’ faces using de-identified MRI images, which could then be identified using facial-recognition systems. Similar concerns have prompted a lawsuit against the University of Chicago Medical Center and Google claiming that Google is “uniquely able to determine the identity of almost every medical record the university released” due to its expertise and resources in AI development. The potential harm from misuse of these new health data capabilities is of grave concern, especially as AI health technologies continue to focus on predicting risks that could impact healthcare access or stigmatize individuals, such as recent attempts to diagnose complex behavioral health conditions like depression and schizophrenia from social-media data.

New Social Challenges for the Healthcare Community
This year a number of reports, papers, and op-eds were published on AI ethics in healthcare. Although mostly generated by physicians and medical ethicists in Europe and North America, these early efforts are important for better understanding the situated uses of AI systems in healthcare.

For example, the European and North American Radiology Societies recently issued a statement that outlines key ethical issues for the field, including algorithmic and automation bias in relation to medical imaging. Radiology is currently one of the medical specialties where AI systems are the most advanced. The statement openly acknowledges how clinicians are reckoning with the increased value and potential harms around health data used for AI systems: “AI has noticeably altered our perception of radiology data—their value, how to use them, and how they may be misused.”

These challenges include possible social harms for patients, such as the potential for clinical decisions to be nudged or guided by AI systems in ways that don’t (necessarily) bring people health benefits, but are in service to quality metric requirements or increased profit. Importantly, misuses also extend beyond the ethics of patient care to consider how AI technologies are reshaping medical organizations themselves (e.g., “radiologist and radiology departments will also be data” for healthcare administrators) and the wider health domain by “blurring the line” between academic research and commercial AI uses of health data.

Importantly, medical groups are also pushing back against the techno-solutionist promises of AI, crafting policy recommendations to address social concerns. For example, the Academy of Medical Royal Colleges (UK) 2019 report, "Artificial Intelligence in Healthcare," pragmatically states: "Politicians and policymakers should avoid thinking that AI is going to solve all the problems the health and care systems across the UK are facing." The American Medical Association has been working on an AI agenda for healthcare, too, adopting the policy "Augmented Intelligence in Health Care" as a framework for thinking about AI in relation to multiple stakeholder concerns, which include the needs of physicians, patients, and the broader healthcare community.

There have also been recent calls for setting a more engaged agenda around AI and health. This year Eric Topol, a physician and AI/ML researcher, questioned the promises of AI to fix systemic healthcare issues, like clinician burnout, without the collective action and involvement of healthcare workers. Physician organizing is needed not because doctors should fear being replaced by AI, but to ensure that AI benefits people’s experiences of care. “The potential of A.I. to restore the human dimension in health care,” Topol argues, “will depend on doctors stepping up to make their voices heard.”

More voices are urgently needed at the table—including the expertise of patient groups, family caregivers, community health workers, and nurses—in order to better understand how AI technologies will impact diverse populations and health contexts. We have seen how overly narrow approaches to AI in health have resulted in systems that failed to account for darker skin tones in medical imaging data, and cancer treatment recommendations that could lead to racially disparate outcomes due to training data from predominantly white patients.

Importantly, algorithmic bias in health data cannot always be corrected by gathering more data, but requires understanding the social context of the health data that has already been collected. Recently, Optum’s algorithm designed to identify “high-risk” patients in the US was based on the number of medical services a person used, but didn’t account for the numerous socioeconomic reasons around the nonuse of needed health services, such as being underinsured or the inability to take time off from work. With long histories of addressing such social complexities, research from fields like medical sociology and anthropology, nursing, human-computer interaction, and public health is needed to protect against the implementation of AI systems that (even when designed with good intentions) worsen health inequities.
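The Optum example lends itself to a concrete toy sketch. The Python snippet below is mine, with entirely made-up patients and numbers (not from the report, and not Optum's actual model); it simply shows how ranking on utilization as a proxy for need can miss high-need, underinsured patients:

```python
# Hypothetical illustration of proxy bias: visits-per-year stands in for
# health need, but underinsurance suppresses utilization.

patients = [
    # (id, true_need, insured, visits_per_year) -- all values invented
    ("A", 9, False, 2),   # very sick, underinsured -> few visits
    ("B", 8, True, 14),
    ("C", 4, True, 9),
    ("D", 3, True, 5),
    ("E", 7, False, 3),   # sick, underinsured -> few visits
]

# Whom a utilization-proxy "high-risk" algorithm would flag:
by_proxy = sorted(patients, key=lambda p: -p[3])
top2_proxy = {p[0] for p in by_proxy[:2]}

# Whom we'd flag if we could observe actual need:
by_need = sorted(patients, key=lambda p: -p[1])
top2_need = {p[0] for p in by_need[:2]}

print("Proxy flags:", sorted(top2_proxy))                    # ['B', 'C']
print("Need flags: ", sorted(top2_need))                     # ['A', 'B']
print("High-need missed:", sorted(top2_need - top2_proxy))   # ['A']
```

Note that collecting more visit data wouldn't fix this; per the report's point, the correction requires understanding why the visits aren't happening in the first place.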

More to come...

Tuesday, December 10, 2019

dx: Parkinson's?
Chapter 1



PARKINSON’S CREPT UP on me. The early symptoms of the disease were so subtle and progressed so slowly that it was remarkably easy for me to dismiss them as signs of advancing age.

I turned 60 in 2004. I felt healthy. Thanks to successful surgery, prostate cancer—the only serious medical problem I had encountered in my adult life—was four years behind me. I looked forward to completing my career at the law firm in Washington, DC, where I had practiced law since 1971. I also looked forward to retirement years that would give me more time to devote to my wife and family and to the travel, writing, photography, and other hobbies that I had largely neglected while practicing law.

True, in recent years my handwriting—which had never been good—had deteriorated significantly. I often could not read my own notes. I had greater difficulty with a keyboard. My hands and arms ached after only a few minutes of typing, and my typing errors mounted. I thought I might have carpal tunnel syndrome, but nothing more serious than that.

Oh, yes, there was also that nettlesome tremor in my right arm. It was particularly noticeable when I went outside in the morning to collect our daily newspaper. The tremor, I thought, was probably just a symptom of aging. I blew it off. I told myself that my father (who lived until his 90s) also had a very minor tremor in one of his hands in his later years, and so did one of my brothers (15 months younger than I). Nothing serious, I told myself.

One other thing. I was a little unsteady when I went downstairs to have breakfast each morning. No big deal. I didn’t fall. I didn’t even come close to falling. Nothing I couldn’t accommodate by holding on to the railing…

John M. Vine. A Parkinson's Primer: An Indispensable Guide to Parkinson's Disease for Patients and Their Families. Paul Dry Books. Kindle Edition.
Yeah. Let me paraphrase for the sake of comparison.
Chapter 1.alt



PARKINSON’S HAS PERHAPS CREPT UP on me. The early symptoms of the disease were so subtle and progressed so slowly that it was remarkably easy for me to dismiss them as signs of advancing age.

I turned 70 in 2016. I felt healthy. Thanks to successful 2015 Calypso radiation treatment, prostate cancer—the most serious medical problem I had encountered in my adult life 'til then—was a year behind me. I looked forward to getting back on the basketball court and continuing my blogging. I also looked forward to my “retirement” years that would give me more time to devote to my wife and family and to the pleasure reading, travel, photography, and other hobbies that I had often neglected during my meandering, random-walk career.

But in 2017 I started getting increasingly “jittery,” and my handwriting—which had never been good—began deteriorating. I often could not read my own penned notes. I had greater difficulty with a keyboard (and my 12-string guitar). My hands and arms ached after only a few minutes of typing, and my typing errors mounted. I thought I might have carpal tunnel syndrome or arthritis (like both of my late parents before me), but nothing more serious than that.

Oh, yes, there was also that increasingly nettlesome tremor in my left hand. It was particularly noticeable in the evenings. The tremor, I thought, was probably just a symptom of aging (and the unrelenting stress of my younger daughter’s cancer illness). I blew it off. For one thing, I told myself that my father (who lived until his 90s) also had a very minor tremor in one of his hands in his later years. Nothing serious, I told myself.

One other thing. I was a little unsteady when I went downstairs. No big deal. I didn’t fall. I didn’t even come close to falling. Nothing I couldn’t accommodate by holding on to the railing…

Bobby Gladd,
Swell. Whatever. Deal with it.

John Vine again:
In Appendix 1, I address some of the myths and misconceptions about Parkinson’s disease. For example, Parkinson’s is sometimes referred to as a “movement disorder,” and Parkinson’s specialists are sometimes called “movement disorders specialists.” However, Parkinson’s has numerous nonmovement symptoms, and I have tried, throughout the book, to clear up any confusion caused by references to movement symptoms. 

Similarly, Parkinson’s is sometimes referred to as a “disorder of the brain” despite mounting evidence that it is not exclusively a brain disease. Because I am concerned that “brain disorder” terminology could also be confusing, I have tried throughout the book to make clear that Parkinson’s might not be exclusively a brain disease. 

I do not refer to people with Parkinson’s as “victims” as some commentators do. Calling people “victims” implies that they have no control over their condition. In fact, people with Parkinson’s can influence the course of the disease. I prefer to use the term “patients.” Victims commiserate about their misfortunes. Patients get treatment. 

Likewise, I do not use the term “caregiver” to refer to the people who care for Parkinson’s patients. “Caregiver” implies that Parkinson’s patients are merely “care receivers.” This is far from the truth. Most Parkinson’s patients are largely responsible for their own care, especially in the years immediately following their diagnosis. Accordingly, I refer to the people who care for Parkinson’s patients as the patients’ “partners.” [Kindle location 54]
Stay tuned. I have much to learn. (What's new?) Time to schedule with my KP Primary. I need to get a cardiology referral anyway for a post-SAVR surveillance Echo px.
Most Parkinson’s patients are largely responsible for their own care, especially in the years immediately following their diagnosis.
Bring it, then.

I first started wondering about my symptoms while working on blog posts earlier this year focused on "science communication" issues. Saw an interview with actor Alan Alda, wherein he spoke of his own experience with PD. I recall thinking "yeah, that sounds familiar."

We shall see.


Finished this excellent book. Five stars. I feel fortified. Saw my Primary today. She got me a Neurology consult for Thursday. To be continued...


From my latest (paywalled) snailmail issue of Science Magazine:
Algorithms on regulatory lockdown in medicine
Prioritize risk monitoring to address the “update problem”

As use of artificial intelligence and machine learning (AI/ML) in medicine continues to grow, regulators face a fundamental problem: After evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its authorization to market only the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions? For drugs and ordinary medical devices, this problem typically does not arise. But it is this capability to continuously evolve that underlies much of the potential benefit of AI/ML. We address this “update problem” and the treatment of “locked” versus “adaptive” algorithms by building on two proposals suggested earlier this year by one prominent regulatory body, the U.S. Food and Drug Administration (FDA) (1, 2), which may play an influential role in how other countries shape their associated regulatory architecture. The emphasis of regulators needs to be on whether AI/ML is overall reliable as applied to new data and on treating similar patients similarly. We describe several features that are specific to and ubiquitous in AI/ML systems and are closely tied to their reliability. To manage the risks associated with these features, regulators should focus particularly on continuous monitoring and risk assessment, and less on articulating ex-ante plans for future algorithm changes.
I have an irascible (Quixotic?) bone to pick with the cavalier use of the word "algorithm" apropos of AI/ML. Maybe that's just the pedantic bias of an old-school out-to-pasture former 3GL/4GL pre-OOP RDBMS programmer and SAS analyst (large PDF). We need a term encompassing something in between the "heuristic" and the "algorithmic." Fundamentally, algorithms are "black boxes." They don't "learn" anything. Data in, replicable results reflexively out. Lather, Rinse, Repeat. Empirical science 101. There's obviously something else at work here beyond the Boolean.
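The "locked" versus "adaptive" distinction in the Science excerpt above can be made concrete with a minimal Python sketch. Everything here (class names, thresholds, data values) is invented for illustration, not drawn from the article: a locked model's behavior is frozen at "approval" time, while an adaptive one keeps folding new observations into its internal state, so the very same input can get different answers over time — the FDA's "update problem" in miniature.

```python
# Hypothetical illustration of "locked" vs. "adaptive" algorithms.
# A locked model is frozen at approval time; an adaptive one keeps
# updating its internal state as new data arrive.

class LockedClassifier:
    """Threshold fixed at 'approval' time; behavior never changes."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, value):
        return value > self.threshold


class AdaptiveClassifier:
    """Threshold tracks the running mean of all values seen so far."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 1

    def predict(self, value):
        result = value > self.threshold
        # Online update: fold the new observation into the threshold.
        self.count += 1
        self.threshold += (value - self.threshold) / self.count
        return result


locked = LockedClassifier(threshold=5.0)
adaptive = AdaptiveClassifier(threshold=5.0)

stream = [9.0, 9.0, 9.0, 6.0]
locked_out = [locked.predict(v) for v in stream]
adaptive_out = [adaptive.predict(v) for v in stream]
# The locked model answers 6.0 the same way forever; the adaptive
# model's drifting threshold flips its answer for that same input.
```

The regulator's dilemma, in two classes: evaluating `LockedClassifier` once tells you everything about its future behavior; evaluating `AdaptiveClassifier` once tells you only what it did on the data it had seen so far.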

To this point, see Science Magazine's "Artificial intelligence faces reproducibility crisis."
"You never step in the same training data stream twice." --BobbyG

OK, but, contrarily, to the extent you in fact can and do, and you get differing results per iterative (and/or recursive) immersion, do you have a "reproducibility crisis," or "learning?" (The "definition of insanity" joke comes to mind.) If it's the latter ("learning"), which data stream dive gets you to durably actionable "truth?" (Keyword actionable.) How will you know? How will you know when to stop "training?"
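As a toy illustration of that order-dependence (all numbers and names invented here, not taken from either article): a one-weight online learner trained on the same four examples, in two different presentation orders, ends up with two different "learned" weights — while repeating the identical order reproduces the result exactly. Same data stream, different dive, different "truth."

```python
# A toy single-weight online learner: one SGD step per example.
# The final weight depends on the order examples arrive, so two
# passes over the *same* data in different orders yield different
# "learned" models -- a small instance of the reproducibility problem.
def train(stream, lr=0.5):
    w = 0.0
    for x, y in stream:
        w += lr * (y - w * x) * x  # gradient step on squared error
    return w

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 7.0), (0.5, 0.2)]

w_forward = train(data)         # one presentation order
w_reversed = train(data[::-1])  # same data, reversed order
same_order = train(data)        # identical order reproduces exactly
```

The updates don't commute: each step's size depends on the weight left behind by the previous steps, so shuffling the stream changes where you land.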

BTW, I jokingly call myself a "life-long unlearner."


POTUS will not be amused.

PlusAI, based in Silicon Valley, was founded in 2016 by a group of Stanford PhD classmates who saw the strong potential of artificial intelligence to make a big impact on business and society. With trucking being the primary means of shipping in America, the founders decided to focus on self-driving technology to transform the trillion-dollar commercial trucking industry...
This truck reportedly just drove a transcontinental trip autonomously, carrying a full load of butter.

Breaker, Breaker, Billy Bob, hey, good buddy, you got'cher ears on?  They're hiring: 
PlusAI current job openings

Advanced Research
Machine Learning Scientist Silicon Valley
Business / Management
Operations Associate Silicon Valley
Technical Product Manager Silicon Valley
Software Engineering
DevOps Engineer Silicon Valley
IT Systems Administrator Silicon Valley
Perception Engineer Silicon Valley
Robotics Engineer Silicon Valley
Software Engineer - Robotics Silicon Valley
Sr. C++ Backend Software Engineer Silicon Valley
Sr. Software Engineer - Control Systems Silicon Valley
Sr. Software Engineer - Full-Stack Silicon Valley
Sr. Software Engineer - Motion Planning Silicon Valley
Sr. Software Engineer - Perception Silicon Valley
Sr. Software Engineer - Simulation Silicon Valley
Sr. Systems Design/Functional Safety Engineer Silicon Valley
Systems Engineering
Automotive Technician Silicon Valley
Mechatronics Engineer Silicon Valley
Vehicle Operations
Truck Driver (Class A) Silicon Valley
Vehicle Operations Specialist Silicon Valley
Vehicle Operator - Semi Truck Silicon Valley


More to come...

Monday, December 9, 2019

HPEC: Humanitarian Physicians Empowerment Community

The Founder reached out to me. After a brief phone conversation, I thought her company merited a preliminary uncritical shout. I pointed her to some HealthTech people with better SME chops than mine.

Detailed study to ensue ASAP. Looks to be blockchain-based tech.


Stay tuned. I'm all for anything that demonstrably helps improve the health care delivery space for the benefit of patients and physicians.

From a podcast featuring Dr. Houston:


More to come...

Thursday, December 5, 2019

Questioning "Innovation?" Is that even allowed?

From one of my favorite requisite daily hangs.

The commentariat is every bit the equal of the topical authors. Better bring your A-Game if you're gonna participate; they do not suffer fools gladly. ("Yves Smith" is the blog owner.)
Questioning Innovation
Posted on December 5, 2019 by Yves Smith

I haven’t quite worked through my reaction to some recent pieces in the Financial Times that seem to merit comment, so forgive me for picking out just a couple of tidbits that still might serve as grist for discussion.

One thing that has long bothered me is the worship of innovation. Relatively early in my career, I had a consulting gig with a venture capital firm where my job was to look at oddball deals. I learned that quite a few bona fide inventions and technology improvements, even though they might seem cool and engineers would get excited about them, didn’t add up to a business opportunity. The most common failing was they didn’t represent a big enough improvement over the status quo to justify customers making the needed behavior changes to adopt them.

And an even earlier lesson came in college, when I majored in the history and literature of the modern era, which meant the Industrial Revolution to World War II. The first generation, and arguably even two, of the Industrial Revolution led to a decline in worker incomes in England. The revolutions of 1848 were mass pushback against the dislocations of the rise of factory work. So while industrialization eventually increased living standards, the transition costs exacted a great toll on laborers who didn’t live long enough to reap the benefits. And we are now suffering the long-term cost of environmental degradation…

If one wants to get worked up about the US actually being a laggard or being at risk of becoming one, it’s a little late to get worked up now. How about what passes for Silicon Valley talent focusing on….help me…apps? How about our slow and overpriced broadband? How about our generally terrible infrastructure, which is imposing a cost on citizens and businesses on a broad basis? America’s lifespan is falling, and the priority is 5G?

The reason I am increasingly a Luddite is I see too much use of technology to curtail our economic rights and even now our supposed ownership, or otherwise enable better rentierism…
Technology is working towards the creation of a new debt cropper society. It may not get anywhere near as far as it did in the Reconstruction Era, but having to pay and pay and pay (then via jacked up financing charges, now via restricted ownership rights leading to unnecessarily high costs, particularly from having to replace consumer durables more often or pay high authorized servicer repair costs), but the trend is underway…
Read all of it, including the accruing comments.
"Rentierism." I am reminded of Quadrant II of Frase's Four.
Also, buy and study Yves' excellent book. One of my FIRE Sector favs (in which I did a 5-yr tenure).
The economy is far too important to all of us to leave to experts, particularly when their recommendations often have little in the way of empirical foundations. Both experts and charlatans rely on intimidation, such as the use of arcane (even if useful) terminology and a dismissive attitude to deter reasonable queries. We all need to get in the habit of demanding support, not sound bites or sixth-grade level opinion pieces, but reasoned and complete explanations of why economists believe what they believe. That was the reason for adopting mathematical exposition in the first place, to make the logic and evidence behind their reasoning explicit and transparent. It’s time they adopt that standard for communication with the public. [Ch 9, location 6551]


Patients vs Paperwork
by Danielle Ofri
New York Times

Every doctor I know has been complaining about the growing burden of electronic busywork generated by the EMR, the electronic medical record. And it’s not just in our imaginations.

The hard data have been rolling in now at a steady pace. A recent study in the Annals of Family Medicine used the EMR to examine the work of 142 family medicine physicians over three years. These doctors spent more than half of their time — six hours of their average 11-hour day — on the EMR, of which nearly an hour and a half took place after the clinic closed.

Another study in Health Affairs tracked the activities of 471 primary care doctors over a three-year period, and also found that EMR time edged out face-to-face time with patients.
This study came on the heels of another analysis in the Annals of Internal Medicine in which 57 physicians were observed directly for 430 hours. These researchers found that doctors spent nearly twice as much time doing administrative work as actually seeing patients: 49 percent of their time, versus 27 percent.

These study results hovered over my head as I worked through a recent clinic session, most of which felt devoted to serving the EMR rather than my patients. It was the kind of day that spiraled out of control from minute one, and then I could never catch up. The kind of day, nowadays, that is every day.

Part of the issue is that there are simply more patients, most of whom are living longer with many more chronic illnesses, so each patient has much more that needs to be taken care of in a given visit.

But the main reason that I can’t keep up is the EMR. Like some virulent bacteria doubling on the agar plate, the EMR grows more gargantuan with each passing month, requiring ever more (and ever more arduous) documentation to feed the beast…
Yeah. That hardy perennial lament. Can't argue with the beef--except to assert that it's not the technology per se, it's the incumbent business paradigm. "Patients vs Paperwork." Interesting choice of words. I assume Dr. Ofri isn't advocating a return to actual paper recordkeeping.
Although, my friend Margalit might disagree.
Relatedly, on the "business paradigm" of health care policy:
The American Health Care Industry is Killing People

Yes, transitioning to a more equitable system might eliminate some jobs. But the status quo is morally untenable.

Won’t you spare a thought for America’s medical debt collectors? And while you’re at it, will you say a prayer for the nation’s health care billing managers? Let’s also consider the kindly, economically productive citizens in swing states whose job it is to jail pregnant women and the parents of cancer patients for failing to pay their radiology bills. Put yourself in the entrepreneurial shoes of the friendly hospital administrator who has found a lucrative new revenue stream: filing thousands of lawsuits to garnish sick people’s wages.

And who can forget the lawyers? And the lobbyists! Oh, aren’t they all having a ball in America’s health care thunderdome. Like the two lobbyists who were just caught drafting newspaper editorials for Democratic state representatives in Montana and Ohio, decrying their party’s push toward a “government-controlled” health care industry. It’s clear why these lobbyists might prefer the converse status quo: a government controlled by the health care industry. If we moved to a single-payer system, how would lobbyists put food on the table, and who would write lawmakers’ op-ed essays?

Welcome to the bizarre new argument against “Medicare for all”: It’s going to cost us jobs. Lots of jobs. Good, middle-class, white-collar jobs in America’s heartland, where Democrats need to win big to defeat Donald Trump…
Hmmm... From a prior post.

I am reminded of a passage from a David Graeber book:

wherein he asks,

Does this mean that members of the political class might actually collude in the maintenance of useless employment? If that seems a daring claim, even conspiracy talk, consider the following quote, from an interview with then US president Barack Obama about some of the reasons why he bucked the preferences of the electorate and insisted on maintaining a private, for-profit health insurance system in America: 
“I don’t think in ideological terms. I never have,” Obama said, continuing on the health care theme. “Everybody who supports single-payer health care says, ‘Look at all this money we would be saving from insurance and paperwork.’ That represents one million, two million, three million jobs [filled by] people who are working at Blue Cross Blue Shield or Kaiser or other places. What are we doing with them? Where are we employing them?”
I would encourage the reader to reflect on this passage because it might be considered a smoking gun. What is the president saying here? He acknowledges that millions of jobs in medical insurance companies like Kaiser or Blue Cross are unnecessary. He even acknowledges that a socialized health system would be more efficient than the current market-based system, since it would reduce unnecessary paperwork and reduplication of effort by dozens of competing private firms. But he’s also saying it would be undesirable for that very reason. One motive, he insists, for maintaining the existing market-based system is precisely its inefficiency, since it is better to maintain those millions of basically useless office jobs than to cast about trying to find something else for the paper pushers to do. 

So here is the most powerful man in the world at the time publicly reflecting on his signature legislative achievement—and he is insisting that a major factor in the form that legislation took is the preservation of bullshit jobs…

Graeber, David. Bullshit Jobs: A Theory (p. 157). Simon & Schuster. Kindle Edition.

Financialization of the U.S. Pharmaceutical Industry
Posted on December 7, 2019 by Yves Smith

Yves here. This article is a bit geeky but very much worth your attention. It shows how pharmaceutical companies are flat out lying when they say they need higher drug prices to support R&D. Their profits go almost entirely to buybacks and dividends. And this analysis does not incorporate another unflattering fact: that Big Pharma spends more on marketing than research. It also describes how government funding of drug research has increased more than three times in real terms since the 1980s…as executives have lined their pockets.

The authors make a short set of recommendations at the end, starting with regulating drug prices...
NC rocks. Again, be sure to always read the comments.

From the post:
Perhaps no business activity is more important to our well-being than the discovery, development, and distribution of medicines. Unfortunately, many of the largest U.S. pharmaceutical companies have become global leaders in financialization at the expense of innovation. Drug prices are at least twice as high in the United States as elsewhere in the world.

Over the decades, pharmaceutical companies have lobbied vigorously against proposed market regulations designed to control drug prices in the United States. The main argument that the industry’s lobby group, the Pharmaceutical Research and Manufacturers of America (PhRMA), habitually makes against drug-price regulation is that the high level of profits that high drug prices make possible in the U.S. drug market enables pharmaceutical companies to be more effective in drug innovation…
Rx "Innovation." Yeah, right.


Hope draws nigh, scheduled to arrive this Christmas Eve. Hope Eleanor Nyquist, my first Great Niece, daughter of my awesome Niece April, wife of CEO and Chief Scientist Dr. Jeff Nyquist.

I shot this--and a couple hundred more--at their wedding.

I am remiss in not citing the Hopewell Cancer Support center here in the Baltimore area.

Gave 'em a permanent upper right-hand links column link (click the image). Looked up their most recently available IRS 990. Looks totally legit. They're only maybe 5 miles from our house. Will have to pay them a visit, look for ways to help.

Tangentially, saw this discussed recently on TV and bought the book.
Our grief can’t just be buried alongside the ones we love. Even years after our losses, we still have moments of gut-wrenching sadness. We’re still annoyed by a wide variety of major and minor Hallmark holidays. We still get pissed thinking about the hand we’ve been dealt. But guess what? 

These days, we’re tagging family members on Instagram. They’re just not the ones we thought we’d be tagging—and ones that in our darkest moments we never thought would be in our lives. 

Eventually, we’re all going to lose people we love. Eventually, we’re all going to die. This is true whether or not we admit it to each other. So there’s value in building a community where there’s no stigma to talking about death and the countless ways it impacts our lives. And with this book, and the candor of those who contributed to it, we hope to open up the conversation so that, ideally, in the future, nobody has to hear crickets in the face of a loss.

Soffer, Rebecca. Modern Loss (pp. xxiii-xxiv). Harper Wave. Kindle Edition.
Haven't read much of it yet, but I will shortly and review it. Been rather sick of thinking about cancer and loss this year. My grief, while manageable, is permanent. I'm absolutely sure I'm not alone.

More to come...