Search the KHIT Blog

Friday, October 29, 2021

The definition of an "economist"?

"Someone who sees something that works in practice and tries to determine whether it will work in theory."—JD Kleinke

I am reminded of an old Jay Leno Tonight Show joke, wherein he sarcastically mocked mindless TV ad copy platitudes:
"Do you want the highest quality at the lowest price?"
"NO, we wanna pay top dollar for crap!"
I've riffed on the U.S. healthcare system at some length across the years. See also my Hahnemann debacle post.
From the above Healthcare Triage video:
OK, 71% of health care economists have apparently not gotten The Memo. **
(** Wait a minute. The v/o contradicts the slide. Which is it? Favored or opposed?)
I am now 75 and a Medicare bene, as is my 71 yr old wife. We hit the Medicare "high income earners" threshold several years ago, approximately doubling our monthly "Medicare Premium Deduction" (it has since gone back down to "normal"). Cheryl was still working, and we'd taken a couple of sizable IRA disbursements (which count as taxable "earned" income in the year withdrawn). So, some significant "means testing" is already in effect. Not that hard-liners don't still want to convert Medicare into a flat-out penurious welfare program, with "asset" limits as well as income ceilings. Spend-down-to-poverty-for-eligibility.
"Netherlands, Norway, Australia, New Zealand, UK, Germany, Sweden, Switzerland, France, Canada?" Higher-performing, lower costs? Buncha Commies.

No, we'll stick with Paying Top Dollar For Crap.
I first came to the healthcare space in 1993, as a QIO analyst, just as the corporatization of the sector was getting up to warp speed.
Interesting. This blog ensued during my 2nd QIO tenure in Health Information Technology. We were gonna materially improve outcomes quality and "bend the cost curve," in large measure via the widespread deployment of digital IT.
We mostly just MBA-ified it. 


I often reflect on what I wrote in 2009, when "Obamacare" was in the legislative oven.
Some reform advocates have long argued that we can indeed [1] extend health care coverage to all citizens, with [2] significantly increased quality of care, while at the same time [3] significantly reducing the national (and individual) cost. A trifecta "Win-Win-Win." Others find the very notion preposterous on its face. In the summer of 2009, this policy battle is now joined in full fury…


I will by no means be the first to note that our medical industry is not really a "system," nor is it predominantly about "health care." It is more aptly described as a patchwork post hoc disease and injury management and remediation enterprise, one that is more or less "systematic" in any true sense only at the clinical level. Beyond that it comprises a confounding perplex of endlessly contending for-profit and not-for-profit entities acting far too often at ruinously expensive cross-purposes…

Tuesday, October 26, 2021

Exigencies? Priorities?

I don't see how anyone can legitimately claim to be "bored." I certainly am not. There's just too much of importance to learn (and "unlearn") and act upon constructively and ethically.

Many of the foregoing overlap (think Venn diagram) and are recursively cause-and-effect (think "feedback loops"). Not rigidly rank-ordered. And, other folks would surely posit some different priority items. Some on the list can rationally be argued to be potentially "existential," others "merely" of differing relative degrees of lesser current-or-prospective malignancy. I've reported on many of these topics in prior posts, and will continue. Amid the acrimonious din of mendacity and shoot-from-the-lip willful ignorance, much wisdom is available to us, frequently hiding in plain sight.


Dr. Langer: …I wanted to write a book a long time ago, Arthur—I never wrote this one—that was called Is There Life Before Death?, because I found, you know, all these people worrying about life after death. Many people come alive, sadly, after they get some terrible diagnosis or they have a stroke or they find out they have cancer. When I speak to people who are miserable or whatever, I simply tell them that all you need to do is take care of the moment, just right this second. And if you keep doing that, then over the course of the day, you know, you’ve had a fine time...
"There is no conclusive evidence of life after death. But there is no evidence of any sort against it. Soon enough you will know. So why fret about it?"—Heinlein
The eminent Amos Tversky once pointed out that (paraphrasing here) "when you worry, you may suffer twice." Another favorite quote of mine comes from a Hastie & Dawes book chapter title: "Two cheers for uncertainty." 
Or, "Happy Wife, Happy Life."
Five podcasts to date. I've not yet listened to all of them. But I shall.

Saturday, October 23, 2021

Yikes! I've been Peirced! I've been abducted!

With affability, humility, and synaptic smog-clearing clarity, Dr. Erik J. Larson Takes No Prisoners.

Yikes, indeed.

In sum, we can dial back the ominously foreboding hyperbole regarding the incipient homo sapiens-enslaving / exterminating Singularity said to soon be wrought by AI. 320 pages of Chill Pills. 320 doses of fast-acting Naloxone HCl antidote to AGI Cognitive Pearl-Clutching Disorder.

One fumbles to know where to begin.

I love it when I learn stuff (unlearn, mostly, anymore). Even when it entails humbling internal reactions of "how the [bleep] have you missed that all these decades?"
In 1980, at the age of 34, divorced w/ custody of my two girls, I found it prudent and necessary to give up my then-16 years of hardscrabble roadhouse touring musician life (or, as my wife Cheryl calls it, "working in the not-for-profit sector") and enroll in undergraduate school at UTK.

I thrived, ravenously consuming the likes of deductive logic, inductive logic, philosophy of science, various flavors of ECON, the gamut of statistics courses, and experimental psychology and psychometrics.

In addition to my awesome Linear Regression text (written by my Prof, Mary Sue Younger), I still have my UTK deductive logic textbook in my hardcopy stash.

Looking back now through the index, I find no reference to either "abductive inference" or Charles Sanders Peirce. What? Who?

Erik Larson:
...[C]ommon sense is itself mysterious, precisely because it doesn’t fit into logical frameworks like deduction or induction. Abduction captures the insight that much of our everyday reasoning is a kind of detective work, where we see facts (data) as clues to help us make sense of things. We are extraordinarily good at hypothesizing, which is, to Peirce’s mind, not explainable by mechanics but rather by an operation of mind which he calls, for lack of another explanation, instinct. We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible.

We must account for this in building an intelligence, because it is the starting point for any intelligent thinking at all. Without a prior abductive step, inductions are blind, and deductions are equally useless.

Induction requires abduction as a first step, because we need to bring into observation some framework for making sense of what philosophers call sense-datum—raw experience, uninterpreted. Even in simple induction, where we induce a general statement that All swans are white from observations of swans, a minimal conceptual framework or theory guides the acquisition of knowledge. We could induce that all swans have beaks by the same inductive strategy, but the induction would be less powerful, because all birds have beaks, and swans are a small subset of birds. Prior knowledge is used to form hypotheses. Intuition provides mathematicians with interesting problems.

When the developers of DeepMind claimed, in a much-read article in the prestigious journal Nature, that it had mastered Go “without human knowledge,” they misunderstood the nature of inference, mechanical or otherwise. The article clearly “overstated the case,” as Marcus and Davis put it. In fact, DeepMind’s scientists engineered into AlphaGo a rich model of the game of Go, and went to the trouble of finding the best algorithms to solve various aspects of the game—all before the system ever played in a real competition. As Marcus and Davis explain, “the system relied heavily on things that human researchers had discovered over the last few decades about how to get machines to play games like Go, most notably Monte Carlo Tree Search … random sampling from a tree of different game possibilities, which has nothing intrinsic to do with deep learning. DeepMind also (unlike [the Atari system]) built in rules and some other detailed knowledge about the game. The claim that human knowledge wasn’t involved simply wasn’t factually accurate.” A more succinct way of putting this is that the DeepMind team used human inferences—namely, abductive ones—to design the system to successfully accomplish its task. These inferences were supplied from outside the inductive framework.

Larson, Erik J. The Myth of Artificial Intelligence (pp. 161-162). Harvard University Press. Kindle Edition.
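For readers unfamiliar with the Monte Carlo sampling idea Marcus and Davis mention, here is a minimal, hypothetical Python sketch of the core notion (random sampling of game continuations to rank moves), illustrated on a toy Nim game. This is a simplified "flat" Monte Carlo, not DeepMind's actual tree-search implementation; the game and all names are mine, for illustration only:

```python
import random

def legal_moves(pile):
    # Toy Nim: you may remove 1-3 stones from the pile.
    return [m for m in (1, 2, 3) if m <= pile]

def random_playout(pile, my_turn):
    # Play random moves until the pile is empty; whoever takes
    # the last stone wins. Returns True if "I" won.
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        my_turn = not my_turn
    # After the last stone, my_turn points at the NEXT player,
    # so the last mover (the winner) is the other one.
    return not my_turn

def monte_carlo_move(pile, playouts=2000):
    # Flat Monte Carlo: sample random continuations after each
    # candidate move and pick the move with the best win rate.
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        wins = sum(random_playout(pile - move, my_turn=False)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

Note what the sketch does *not* contain: the rules of Nim, the move generator, and the evaluation scheme were all supplied by a human before the "learning" ever ran, which is precisely Larson's point about where the abductive work happens.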
I now humbly stand "Abducted."
(BTW: Not even in graduate school did I encounter C.S. Peirce or abductive inference. An inexplicable, significant omission, given what I now learn.)
I am not kidding about Erik J. Larson's new book. Another great place within which to hide a $100 bill from Your Favorite President Donald Trump.
This has been a compelling and fun read. I think the author has sustained his case.
In the pages of this book you will read about the myth of artificial intelligence. The myth is not that true AI is possible. As to that, the future of AI is a scientific unknown. The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time—that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations. Yet the inevitability of AI is so ingrained in popular discussion—promoted by media pundits, thought leaders like Elon Musk, and even many AI scientists (though certainly not all)—that arguing against it is often taken as a form of Luddism, or at the very least a shortsighted view of the future of technology and a dangerous failure to prepare for a world of intelligent machines.

As I will show, the science of AI has uncovered a very large mystery at the heart of intelligence, which no one currently has a clue how to solve. Proponents of AI have huge incentives to minimize its known limitations. After all, AI is big business, and it’s increasingly dominant in culture. Yet the possibilities for future AI systems are limited by what we currently know about the nature of intelligence, whether we like it or not. And here we should say it directly: all evidence suggests that human and machine intelligence are radically different. The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. Futurists like Ray Kurzweil and philosopher Nick Bostrom, prominent purveyors of the myth, talk not only as if human-level AI were inevitable, but as if, soon after its arrival, superintelligent machines would leave us far behind.

This book explains two important aspects of the AI myth, one scientific and one cultural. The scientific part of the myth assumes that we need only keep “chipping away” at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images. This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. The inferences that systems require for general intelligence—to read a newspaper, or hold a basic conversation, or become a helpmeet like Rosie the Robot in The Jetsons—cannot be programmed, learned, or engineered with our current knowledge of AI. As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general “common sense” is completely different, and there’s no known path from the one to the other. No algorithm exists for general intelligence. And we have good reason to be skeptical that such an algorithm will emerge through further efforts on deep learning systems or any other approach popular today. Much more likely, it will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like, let alone the details of getting to it…
[Erik J. Larson, pp 1-2]
I think I can safely continue to exclude The Singularity from BobbyG's list of priority exigencies.

Gotta love the fictional MARK 13.

apropos of exigencies,


"AI" OR "IA" (Intelligence Augmentation)?
It remains true that technology often acts like a prosthetic to human abilities, as with the telescope and microscope. AI has this role to play, at least, but a mythology about a coming superintelligence should be placed in the category of scientific unknowns. If we wish to pursue a scientific mystery directly, we must at any rate invest in a culture that encourages intellectual ideas—we will need them, if any path to artificial general intelligence is possible at all. [Erik J. Larson, pg 280]
My initial interest in the topic went to all of the Gartner Hype Cycle swooning over AI's (overstated) clinical potential in Health IT. Beyond that, I mused about this stuff (click the robot thinker graphic):
My dubiety is hardly attenuated in the wake of reading Erik's book.
More broadly,

"Someone has to have an idea."

Yeah, and where do we get them?
“Because of livewiring, we are each a vessel of space and time. We drop into a particular spot on the world and vacuum in the details of that spot. We become, in essence, a recording device for our moment in the world.

When you meet an older person and feel shocked by the opinions or worldview she holds, you can try to empathize with her as a recording device for her window of time and her set of experiences. Someday your brain will be that time-ossified snapshot that frustrates the next generation.

Here’s a nugget from my vessel: I remember a song produced in 1985 called “We Are the World.” Dozens of superstar musicians performed it to raise money for impoverished children in Africa. The theme was that each of us shares responsibility for the well-being of everyone. Looking back on the song now, I can’t help but see another interpretation through my lens as a neuroscientist.

We generally go through life thinking there’s me and there’s the world. But as we’ve seen in this book, who you are emerges from everything you’ve interacted with: your environment, all of your experiences, your friends, your enemies, your culture, your belief system, your era—all of it.

Although we value statements such as “he’s his own man” or “she’s an independent thinker,” there is in fact no way to separate yourself from the rich context in which you’re embedded. There is no you without the external. Your beliefs and dogmas and aspirations are shaped by it, inside and out, like a sculpture from a block of marble. Thanks to livewiring, each of us is the world.”

— Livewired: The Inside Story of the Ever-Changing Brain by David Eagleman pp 244-245
The fuel source of abductive inference?
Some of my episodic prior posts discussing "robots." How about "Deep Fakes?"
Mathematician Bill Dembski's lengthy review of Erik's book. 
Erik Larson’s THE MYTH OF ARTIFICIAL INTELLIGENCE is far and away the best refutation of Kurzweil’s overpromises, but also of the hype pressed by those who have fallen in love with AI’s latest incarnation, which is the combination of big data with machine learning. Just to be clear, Larson is not a contrarian. He does not have a death wish for AI. He is not trying to sabotage research in the area (if anything, he is trying to extricate AI research from the fantasy land it currently inhabits). In fact, he has been a solid contributor to the field, coming to the problem of strong AI, or artificial general intelligence (AGI) as he prefers to call it, with an open mind about its possibilities...
Yep. I give Dembski's review five stars as well—one star for each 1,000 words. /s

Cool guy. 


22 months ago I was officially dx'd with Parkinson's. Until a month ago I've avoided the Rx, but I'm now doing the Carbidopa/Levodopa 25/100 regimen (6 tabs/day, 2 at a time). I find it problematic. Might yet be too early to assess efficacy, though. My side-effects thus far are pretty much just elevated standing balance issues. I was already a fall risk. Sux. But, so do the tremors, which are messing with my typing, mouse use, and guitar playing (and manual dexterity more broadly).

Mixing the film metaphor riffs, "To Make Benefit Glorious Nation of Sinemetistan."   

Wednesday, October 20, 2021

#LetsGetReal, now.

We all need to act in some capacity.

My wife is now partaking of their leadership training. Really liking it thus far.

Some of my prior anthropocene climate change posts are here. Also, see my one-off "Western U.S. drought" post from a few years back.
Capturing, converting, storing, and distributing all that "free" incoming solar energy sufficient to replace CO2-emitting carbon compound fuels—well, there are skeptics, and not all of them disingenuous fossil fuels industry shills. Solar panels and wind turbine farms just don't magically appear, and, likewise, whether sufficient raw materials and mfg capacity exist for efficient batteries at scale is an open question. Hydroelectric dams, while basically comprising a proxy method of hydro cycle solar storage via water, come with their own environmental liabilities. What about the solar input necessarily consumed ongoing by the earth's aggregate biomass (of which humans reportedly comprise only about 0.01% by weight)?

Beyond the incumbent industry-motivated inertia, are there in fact significant tech and mfg barriers to getting us all the way there? I'd love to get the views of those who credibly know more.
In any event, the status quo isn't gonna cut it much longer.
UPDATE: What about mass production of H2 (pure hydrogen)? You'd have to separate it from other compounds—preferably H2O (via electrolysis, high-temperature thermal cracking, etc.).
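A quick back-of-the-envelope sketch of why electrolysis is energy-hungry. The figures below are widely cited approximations, not measurements (the thermodynamic floor is the higher heating value of hydrogen, roughly 39.4 kWh per kg; real electrolyzers typically draw somewhere around 50-55 kWh per kg):

```python
# Back-of-the-envelope energy cost of hydrogen via electrolysis.
# Both constants are approximate, commonly quoted figures:
HHV_KWH_PER_KG = 39.4            # theoretical minimum (HHV of H2)
ELECTROLYZER_KWH_PER_KG = 52.5   # rough midpoint of typical real systems

def efficiency(actual_kwh_per_kg=ELECTROLYZER_KWH_PER_KG):
    # Fraction of input electricity ending up as H2 chemical
    # energy, on an HHV basis.
    return HHV_KWH_PER_KG / actual_kwh_per_kg

def kwh_for_kg_h2(kg, actual_kwh_per_kg=ELECTROLYZER_KWH_PER_KG):
    # Electricity required to produce a given mass of hydrogen.
    return kg * actual_kwh_per_kg

print(f"Electrolyzer efficiency: {efficiency():.0%}")
print(f"Energy for 1 tonne of H2: {kwh_for_kg_h2(1000):,.0f} kWh")
```

So roughly 50 MWh of (ideally carbon-free) electricity per tonne of hydrogen, before compression, storage, and transport losses—which is the nub of the scaling question.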
I share the concern.

Monday, October 18, 2021

Yeah, of course: Let's extend our destructive tendencies into space.

Like we don't have enough to contend with.
From my new issue of Harpers.
All armies prefer high ground to low.
  —Sun Tzu, The Art of War
In late January 2020, in an orbital belt around 640 kilometers above Earth, two unmanned Russian spacecrafts coasted through the sky toward USA-245, an American reconnaissance satellite.

From this elevation a traveler would have seen the earth as a rounded slope of green and brown. One could have made out the rugged edges of mountains and the contours of lakes, our white atmosphere, bowed around the planet, darkening to blue and then black. Seen from a backyard telescope, the satellites would have looked like small glimmers in the night, with light from the sun glinting off their alloyed coating as if off a distant windshield.

The Russian crafts had positioned themselves unusually close to the American, in a near-identical orbit, and they had synced their paths with USA-245—a classified, multibillion-dollar KH-11 satellite, equipped with imaging systems on par with the Hubble telescope—such that one of them came within twenty kilometers of it several times in a single day. Satellites in the same plane may on occasion pass within one hundred kilometers of one another but far less frequently. The Russians, it seemed, were stalking an American spy satellite…

In mid-April, Russia tested a direct-ascent anti-satellite weapon (DA-ASAT)—a missile launched from Earth rather than from a vessel already in orbit. The country had tested this weapon system—named Nudol, after a river near Moscow—multiple times before, and the United States, China, and India had all performed DA-ASAT tests in years prior, each demolishing defunct satellites of their own. The Russian weapon seemed intended for a target in open space: it sailed through the sky and then fell back to Earth, where it likely landed in the Laptev Sea. U.S. Space Command issued a statement the same day, declaring the test evidence of the growing threats to U.S. space systems and deeming it “hypocritical”: Russia had publicly called for “full demilitarization” in space. Space Command also took the opportunity to comment on the nesting dolls. Russia, the statement said, had “conducted maneuvers near a U.S. Government satellite that would be interpreted as irresponsible and potentially threatening in any other domain.” In a line attributed to Raymond directly, it warned that the United States was “ready and committed to deterring aggression and defending the Nation, our allies and U.S. interests from hostile acts in space.”…

In fact, the primary source for international law in space is a drastically outdated document from 1967 called the Outer Space Treaty, designed for an environment far simpler than the current field. In a September 2019 address at a conference for air, space, and cyber security, General Raymond put it this way: “The Outer Space Treaty says you can’t have nuclear weapons in space. That’s about what it says. The rest is the wild, wild West.”…

More than half a century later, this Cold War document remains the basis for all extraterrestrial law. It bans placing nuclear weapons and weapons of mass destruction into orbit—as Raymond noted—but it says nothing of Earth-to-space or space-to-space arms, nor does it speak to kinetic weapons or the many subtler forms of attack developed since its drafting. The agreement is silent on what constitutes hostile behavior, and though it states that international law extends into space, there is no ready translation of earthly rules to a realm without national borders or gravity, and with limitless potential planes of conflict. As the years have gone by and other nations have joined the United States and Russia in space, and as astronautic technologies have become vastly more sophisticated, the insufficiency of the Outer Space Treaty has become a significant danger…

Since 2015, Russia, China, India, Iran, Israel, France, and North Korea have all established military space programs. China’s and Russia’s space commands are close on the heels of the United States, and according to the Secure World Foundation, the United States has idled certain of its offensive-technology programs while China and Russia actively test the same capabilities. Over the course of the past two years, martial activity beyond our atmosphere has exploded, and in conversations this summer, many space and security experts told me that the pressure is rising. “We are watching tensions ratchet up,” said Jack Beard, a former Department of Defense attorney and a professor of law specializing in space.

In March 2019, India tested its first direct-ascent anti-satellite weapon, blowing up one of its own crafts in low Earth orbit. In April 2020, when Iran announced the creation of its military space program, it slung its first reconnaissance satellite, Noor 1 (“Light 1” in Farsi), into orbit. In September of that year, China successfully launched a reusable craft, dubbed a “spaceplane,” which cruises in low Earth orbit and returns to the planet in one piece, landing horizontally…

The U.N. Committee on the Peaceful Uses of Outer Space is currently focused on establishing guidelines to limit the creation of space debris. Niklas Hedman, the committee’s secretary, told me that he thinks any new binding treaty in the current geopolitical environment is “impossible.” Green, the Space Force lawyer, said of new treaties: “I don’t see that as likely at this point.” Mike Hoversten, the lead counsel for space, international, and operations law at Space Operations Command, told me that he thinks it is “unfortunately probably going to take some kind of a significant event” in space for the international community to accept a new treaty…
Swell. Have to add another explicit exigency category to my current list.
#FreeBritney may be about to resolve.

The object high overhead was not a bird, or a plane, or Superman. It was a metallic sphere the size of a beach ball, orbiting the earth at 18,000 miles an hour. Looking to the heavens, Americans could see the silvery orb streaking through the night; on the radio, they could hear its high-pitched beeping. The launch of Sputnik, on October 4, 1957, transfixed the world—and sent tremors throughout the United States. “Today a new moon is in the sky,” one newscaster intoned.

Under the light of that new moon, the United States saw itself differently. “There was a sudden crisis of confidence in American technology, values, politics, and the military,” wrote Sputnik chronicler Paul Dickson. Americans didn’t hang their heads, however. Within a few months of Sputnik’s launch, President Eisenhower established the Advanced Research Projects Agency to keep pace with Soviet scientists. Then came the National Aeronautics and Space Administration, NASA. In 1958, Eisenhower signed into law the National Defense Education Act, injecting over $1 billion to overhaul American science and engineering education. The legislation passed Congress overwhelmingly—with strong support among both Republicans and Democrats—tripling funding for science research, supporting and training thousands of new teachers, and revamping school curriculums. Those inclined to math and science flocked to universities, aided by funding that helped produce 15,000 new PhD students annually.

The specter of Sputnik hung over the ’60s, a celestial catalyst for terrestrial progress. Lockheed’s Sunnyvale-based Missile and Space division soon became the company’s largest and most lucrative, part of a tsunami of federal funding flowing into Silicon Valley. Eisenhower was succeeded by a young John F. Kennedy, who rode exaggerated fears of a Soviet-U.S. “missile gap” to the presidency and soon challenged the nation not just to match the Soviets in space but to beat them to the moon by the end of the decade. That goal was achieved in July 1969. Three months later, those two computer terminals were linked on the ARPANET, the first glimmers of today’s online world. And Sputnik was the spark for it all. In the estimation of historian Walter A. McDougall, “No event since Pearl Harbor set off such repercussions in public life.”…

Helberg, Jacob. The Wires of War: Technology and the Global Struggle for Power (pp. 232-233). Avid Reader Press / Simon & Schuster. Kindle Edition.


Sunday, October 17, 2021

"Hollywood in your pocket?"

Maybe I'll get an iPhone 13 Pro Max. Maybe.
Lovin' the dramatic TV commercials of late.

One nitpick, though: Go to the Tech Specs page, scroll down through screen after screen of impressive detail.

And, under the "Video Recording" heading, merely two words about audio functionality: "Stereo recording."

Nothing about frequency response bandwidth (e.g., "20 - 20kHz"). Nothing about external mic input. Nothing about SMPTE timecode synchronization capability—the long-time video and film industry standard for linking up externally captured audio with its video at specified FPS ("Frames per Second"). In particular, shoots involving multiple cameras in real time have to have audio SMPTE sync.
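For the curious, the basic timecode arithmetic is simple. Here's a minimal, hypothetical Python sketch of converting between a frame count and an HH:MM:SS:FF timecode at a fixed frame rate (non-drop-frame only; real 29.97 fps NTSC work uses drop-frame counting, which is deliberately omitted here):

```python
def frames_to_timecode(frame, fps=24):
    # Non-drop-frame SMPTE-style timecode: HH:MM:SS:FF.
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc, fps=24):
    # Inverse: "HH:MM:SS:FF" back to an absolute frame count.
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff
```

The hard part in the field isn't this arithmetic—it's keeping every camera's and recorder's clock genlocked to the same timecode source, which is exactly the capability the spec sheet is silent about.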

Not that consumer-level buyers would notice or care.

$1,599 for the top-end 1 TB Pro Max 13 model. I have not the slightest doubt that the video and still photo optics are fabulous (I have some 35 mm SLR chops with respect to live-action still photography).

That price would buy you a ton of Sony HD camcorder, with badass optics and XLR external mic capability. Dunno.
You can do a lot anymore just using an HD minicam and smartphone video. See the searing "For Sama" documentary.
I had a former life as a performing and recording musician. Even took acoustic and audio engineering in undergrad school at UTK. Below, one of my old home studio original song demos.
BobbyG, 1981, "The Once & Future Fool"

My 1980-92 south Knoxville home studio. All you need now is a MacAir.
Kind of a stickler for audio quality. Wouldn't want to do iPhone-based video with Mickey Mouse audio.


Awesome. But don't try to shine me on the audio. That's full-on studio post.


We turn to the Internet for answers. We want to connect, or understand, or simply appreciate something—even if it’s only Joe Rogan. It’s a fraught pursuit. As the Web keeps expanding faster and faster, it’s become saturated with lies and errors and loathsome ideas. It’s a Pacific Ocean that washes up skeevy wonders from its Great Garbage Patch. We long for a respite, a cove where simple rules are inscribed in the sand.

You may have seen one advertised online, among the “weird tricks” to erase your tummy fat and your student loans. It’s MasterClass, a site that promises to disclose the secrets of everything from photography to comedy to wilderness survival. The company’s recent ad, “Lessons on Greatness. Gretzky,” encapsulates the pitch: a class taught by the greatest hockey player ever, full of insights not just for aspiring players but for anyone eager to achieve extraordinary things. In the seminar, Wayne Gretzky tells us that as a kid he’d watch games and diagram the puck’s movements on a sketch of a rink, which taught him to “skate to where the puck is gonna be.” Likewise, Martin Scorsese says in his class that he used to storyboard scenes from movies he admired, such as the chariot race in “Ben-Hur.” The idea that mastery can be achieved by attentive emulation of the masters is the site’s foundational promise. James Cameron, in his class, suggests that the path to glory consists of only one small step. “There’s a moment when you’re just a fan, and there’s a moment when you’re a filmmaker,” he assures us. “All you have to do is pick up a camera and start shooting.”...
Right. And, "if all we had to do was sign up for the checks we'd all be millionaires."
I read the entire hour-long article and listened to the audio transcription. It was a continuous "Silicon Valley HBO" moment.
Go to YouTube, search for "MasterClass Parodies."

Friday, October 15, 2021

You don't need a weatherman

to know which way the wind is blowin'

Mostly Covid-19 related ECON upshot, I would think.
One of our tiny grocery countermeasures.

Life-long omnivore here (albeit moderately so). We've started buying this "Impossible Burger" at Trader Joe's. Thus far we've used it to make baked stuffed peppers, stroganoff, and actual burgers. Totally impressed; I can't tell any difference. A lot cheaper than ground beef as well. Heartily recommended.


722k US deaths to date


Reviewed in my Science Magazine (paywalled). 
In the mid-19th century, the polymath William Whewell coined the term “consilience” to describe the progress of science through time (1). According to Whewell, inductive knowledge should “jump together” with evidence from different disciplines of science supporting more and more general, unified knowledge claims. The historian Kyle Harper explicitly adopts and enlarges this goal—which he attributes to E. O. Wilson—beyond science proper, incorporating insights from history, economics, anthropology, paleogenomics, parasitology, ecology, and phylogenetics, to write an ambitious, engaging, and unified history of humanity’s interaction with infectious disease….
From the Amazon blurb:
"...Panoramic in scope, Plagues upon the Earth traces the role of disease in the transition to farming, the spread of cities, the advance of transportation, and the stupendous increase in human population. Harper offers a new interpretation of humanity’s path to control over infectious disease—one where rising evolutionary threats constantly push back against human progress, and where the devastating effects of modernization contribute to the great divergence between societies. The book reminds us that human health is globally interdependent—and inseparable from the well-being of the planet itself.

Putting the COVID-19 pandemic in perspective, Plagues upon the Earth tells the story of how we got here as a species, and it may help us decide where we want to go."
688 pages. I am never bored.
"The book reminds us that human health is globally interdependent—and inseparable from the well-being of the planet itself."
Anthropocene, anyone?
BTW: Under "Threats to Democracy."

Sunday, October 10, 2021

Disturbing applications of AI

"Synthetic Media"

I'm having a difficult time seeing much ethical upside potential here at first blush. Lucrative opportunities in spades, to be sure, as is noted in the 60 Minutes piece, but I am not thrilled at this news overall at this point. (See also "60 Minutes Overtime.")
“Historically, scientific and technological advances have provided opportunities for more evidence or have enhanced our ability to assess the reliability of evidence—but deepfake technology is different. The sophistication of this technology has made it difficult for attorneys, judges, and jurors to know whether to believe their own eyes.” — JDSupra
There's that word "evidence."  See also "Liar's Dividend."

Dunno. Forgive the underwhelmingness of my mollification thus far.  Particularly in light of that all too prevalent cognitive bane “confirmation bias.”

“Believing is seeing.”
A new book that has jumped my reading queue, cutting to the head of the ever-busy line.


Goes to a long-held AI dubiety beef of mine.

Thursday, October 7, 2021

Oil despoil, again

The latest southern California mess.

Latest reports have the underwater pipeline breach volume near 150,000 gallons.
As bad as it is on its own terms, it may be chump change, a rounding error, relative to what looms in the Red Sea off the coast of war-torn Yemen. From a depressing New Yorker long-read (~1 hr, w/audio, probably paywalled).

Soon, a vast, decrepit oil tanker in the Red Sea will likely sink, catch fire, or explode. The vessel, the F.S.O. Safer—pronounced “Saffer”—is named for a patch of desert near the city of Marib, in central Yemen, where the country’s first reserves of crude oil were discovered. In 1987, the Safer was redesigned as a floating storage-and-off-loading facility, or F.S.O., becoming the terminus of a pipeline that began at the Marib oil fields and proceeded westward, across mountains and five miles of seafloor. The ship has been moored there ever since, and recently it has degraded to the verge of collapse. More than a million barrels of oil are currently stored in its tanks. The Exxon Valdez spilled about a quarter of that volume when it ran aground in Alaska, in 1989.

The Safer’s problems are manifold and intertwined. It is forty-five years old—ancient for an oil tanker. Its age would not matter so much were it being maintained properly, but it is not. In 2014, members of one of Yemen’s powerful clans, the Houthis, launched a successful coup, presaging a brutal conflict that continues to this day. Before the war, the Yemeni state-run firm that owns the ship—the Safer Exploration & Production Operations Company, or sepoc—spent some twenty million dollars a year taking care of the vessel. Now the company can afford to make only the most rudimentary emergency repairs. More than fifty people worked on the Safer before the war; seven remain. This skeleton crew, which operates with scant provisions and no air-conditioning or ventilation below deck—interior temperatures on the ship frequently surpass a hundred and twenty degrees—is monitored by soldiers from the Houthi militia, which now occupies the territory where the Safer is situated. The Houthi leadership has obstructed efforts by foreign entities to inspect the ship or to siphon its oil. The risk of a disaster increases every day…

"More than a million barrels" is "more than 42 million gallons." More than 300 times the magnitude of the current SoCal mess.

You think you're having trouble getting stuff now?

There are indications that one of these back-logged freight container ships may have dropped an anchor on the underwater oil pipeline, causing its rupture.
If the F.S.O. Safer Red Sea thing goes south... Use your imagination. For one thing, the Suez Canal on the north end of the Red Sea will likely be shut down for months.

Better yet, read the New Yorker article. It's worth the price of a subscription by itself.

In 2008 I cited this article (originally posted at New Scientist) on another of my blogs.
"I'm optimistic about…a pair of very big numbers. The first is 4.5 x 10ˆ20. That is the current world annual energy use, measured in joules. It is a truly huge number and not usually a cause for optimism as 70 per cent of that energy comes from burning fossil fuels.

Thankfully, the second number is even bigger: 3,000,000 x 10ˆ20 joules. That is the amount of clean, green energy that pours down on the Earth totally free of charge every year. The Sun is providing 7,000 times as much energy as we are using, which leaves plenty for developing China, India and everyone else. How can we not be optimistic? We don't have a long-term energy problem. Our only worries are whether we can find smart ways to use that sunlight efficiently and whether we can move quickly enough from the energy systems we are entrenched in now to the ones we should be using. Given the perils of climate change and dependence on foreign energy, the motivation is there..."
I recently updated a Photoshopped graphic I'd done.

Decrementing the "7,000" to "6,000" to roughly account for subsequent worldwide energy consumption growth, one would also need to "net out" the estimated ~30% of incoming solar energy that is immediately radiated (reflected) right back out into space. So, what if we round down to "4,000x?" Capturing and converting 1/4000th of that (0.00025, or 0.025%) would get us to ~100% "solar" energy.

Less than three one-hundredths of one percent?

Just asking. Seems to me the barriers are mostly political-economic incumbency, not technology. Beyond immediate reflective re-radiation, I wonder how much of the solar energy continuously striking our planet is consumed in sustaining life writ large (all flora and fauna, beyond human socioeconomic consumption) versus lost to eventual entropic decay? And what proportion of the solar input goes into producing our weather patterns—winds, waves, storms, etc.?
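For what it's worth, here's the back-of-the-envelope above as a quick sanity check. The 7,000x multiple is the quoted New Scientist figure; the 6,000x decrement and the ~30% albedo are my own rough assumptions, as noted:

```python
world_use_joules = 4.5e20     # quoted annual world energy use, in joules
solar_multiple = 7000         # quoted: Sun delivers ~7,000x our annual use
solar_multiple_now = 6000     # my decrement for consumption growth since 2008
albedo = 0.30                 # rough share reflected straight back to space

# Net usable multiple: 6,000 x (1 - 0.30) = 4,200; round down to 4,000
usable_multiple = solar_multiple_now * (1 - albedo)
fraction_needed = 1 / 4000    # share of net solar input to cover ~100% of use

print(usable_multiple)        # 4200.0
print(fraction_needed)        # 0.00025, i.e., 0.025%
```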

Thoughts? Questions? Pushback?
Oct 10th WaPo headline.

As directed by President Biden's January 28, 2021 Executive Order 14008, major Federal agencies are required to develop an adaptation and resilience plan to address their most significant climate risks and vulnerabilities. On October 7, 2021, the White House announced the release of more than 20 Federal Agency Climate Adaptation and Resilience Plans. As part of these efforts, agencies will embed adaptation and resilience planning and implementation throughout their operations and programs and will continually update their adaptation plans.
"Trump rarely talked about U.S. policy, and never really bothered to read the briefing materials that laid out the policy points in the first instance. He would refer to things happening on “my watch,” offer his views on topics, which were often based on a conversation he had had with a personal friend (which he would actually say directly), or repeat something he had heard on Fox News. Whenever he could, Trump would turn the discussion around to some of his pet peeves to get them off his chest—windmills and the advent of wind power was one that always came up. The president was not a fan of most renewable energy projects, except for hydropower. He would go on at length about how huge wind turbines marred the view and reduced property values or decimated migratory birds.

During the July 2019 state visit, the Brits had wanted to talk about climate change. They tried to put the issue on the formal agenda for the meeting at Number 10. The president refused. It was not an issue he wanted to talk about after pulling out of the Paris Climate Agreement. He hated Europeans’ trying to raise it again and put him on the spot. So the Brits deputized Queen Elizabeth’s eldest son and heir, Prince Charles, to discuss the subject at the U.S. ambassador’s dinner. That way Trump had to at least listen to the prince’s points, even though he was not enthused (and said so to the press after the dinner). Trump’s favorite topics were golf or other sporting events and related analogies, and his personal or family’s business success…"
Saw this on CBS Morning News:
Our climate is changing at an extraordinary rate. While this is plainly visible in our glacier landscapes, aerial and digital technologies are needed to see the full extent of changes across larger scales and timeframes.
This project explores the creative application of these technologies to better communicate the science behind our changing climate.
"Because we shouldn't deny future generations a livable planet."