I've posted on Ben and his work before.
See more prior crypto-related posts here.
NOW REPORTING FROM BALTIMORE. An eclectic, iconoclastic, independent, private, non-commercial blog begun in 2010 in support of the federal Meaningful Use REC initiative, and Health IT and Healthcare improvement more broadly. Moving now toward important broader STEM and societal/ethics topics. Formerly known as "The REC Blog." NOTE: Comments are moderated, thanks to trolls and bots.
The opening moments of the 1982 film Blade Runner introduce viewers to a world of artificially intelligent beings that are “virtually identical” to humans. To tell man from machine, people rely on something called the Voight-Kampff test, which is a little like a polygraph; robot irises exhibit subtle tells when prompted. If you’re dealing with a robot, you’ll know by the eyes.
If Sam Altman has his way, this could be sort of how it works in real life. Last week, he announced an expansion of the verification service World ID, created by a start-up called Tools for Humanity. Altman co-founded the company in 2019, the same year he became CEO of OpenAI. Onstage last Friday, he described the product as a way to certify personhood in a digital landscape rife with bots, deepfakes, phishers, and other sorts of impostors. Think of it as an evolution of CAPTCHA, the security program used to identify bots and prevent attacks on websites. To verify your humanness and secure a World ID, you must stare into a white, frosted orb and allow the company to take pictures of your face and eyeballs…
pchapin
This technology completely fails in the face of sock puppetry. It only proves that a human was present when the iris scan was made, not that a human is present at the time of the transaction.
Imagine a bad actor who pays a desperately poor person for their iris scan. For, say, $100, that poor person might be able to buy food for their family for weeks. They might not understand the significance of the scan or care about it. For only $10,000, the bad actor can now amass an "army" of sock puppets online using the iris scans of people who will never be online.
The system completely fails to provide the assurance that it claims to provide, and is therefore useless.
DarkHorse
not a chance I’m letting Altman’s tech scan my eyeballs
DrakX316
I honestly struggled to tell if this was satire.
JamesB
So let me get this straight. Mr. Altman creates a world where we can’t trust our own ears and eyes, and then wants me to surrender what’s left of my identity. Just to prove who I am to him?
No. You can’t have that.
Cynthiajay
A 'World ID' ... dear me. Sam Altman apparently doesn't read his history.
Americans were once fiercely opposed to any such thing as a national ID.
Whatever are we to think of this?
steve207144
The test in Blade Runner tested emotional responses to various scenarios: a tortoise is lying upside down in a desert, you won't help it, why? Since the replicants in Blade Runner were emotionally immature, they could be detected by their extreme responses.
Given what I've heard about Sam Altman, I would think he would have no emotional response to others suffering, because he seems to be a sociopath.
I wouldn't trust him with my excrement. Never mind all kinds of bio-metric information and an app tracking me on my smartphone.
Knowing if someone on the internet is human, a dog, a bot, or an AI is a problem, but Sam Altman, and the other tech bros, aren't the solution.
rodeoairflow
Literally, it’s actually only a problem for people who want to extract money from the rest of us. If a terminator is bearing down on me, I’m not going to have the time to ask him to gaze into an orb. And otherwise I couldn’t care less if you’re a human, honestly. Do you.
Cynthiajay
He's just another oligarch.
Zuckerberg. Bezos. Musk. Just remember all of them clustered around the dais at trump's inauguration.
What more do we need to know?
brian99jordan
The story left out that Altman ran a huge Worldcoin scam in Kenya. It was found to be illegal for scamming underage children into providing valuable, sensitive biometric data. Incredibly naive to let this scammer record your eye-print. Altman does not have an honest, non-hustler bone in his body.
pdstolen
For many years, in government and afterward while volunteering, I used—sometimes quite effectively—the original purpose of the federal environmental review laws. Simply, the purpose was to determine both the positive and downside effects of human efforts affecting the environment as well as human life. (That law has been deeply degraded and the original purpose essentially lost in the shuffle.)
My point is this: there is a huge amount of money and effort being spent on the supposed positive effects of various technology—while there is essentially nothing available for looking at the negative effects. I learned this early on, as we desperately tried to describe the negatives. Now here, you pathetically see the proponent of the positives coming out with supposed solutions to the negatives. No, no, no… not at all what is needed.
Slight
I just can't wait to see whether this will catch on and all the conspiracy theories begin surrounding it. Meanwhile, I will take a hard pass.
rodeoairflow
I’m so freaked out by the description of this technology that it’s difficult for me to understand how anyone could recognize it as anything other than malicious.
Zobi44
The article was way too blasé about it.
malloryt
Brave New World is our new reality.
renegaderosie
Anyone feel the Borg coming for us?
Spectato
Wait, so first you verify that you are human by looking into an orb. And then a token confirming that you are human is stored on your phone. And now the AI agent on your phone can prove online that it's human. Did I get that right?
bookerloo66
Who cares?
malloryt
People who care about unethical market manipulation, violation of privacy rights, and being forced to pay for things that should be optional.
It is like a pharmaceutical company inventing a disease so they can sell an antidote.
Or a pest control company releasing a colony of termites in a newly constructed housing development, then offering to exterminate them.
Or creating a tax system so complex most Americans have to hire a CPA or pay for an online DIY tax prep service to file their tax return.
Altman made AI that is used to create false identities that aid in completing fraudulent activities, then he developed a fee-based service that can spot AI fakes.
For now, this human-verification process is “free” to users. But I would bet that eventually that cost will wind up being paid by the consumer. Perhaps a subscription fee or service charge, or an increase in the cost of goods because the store has to pay for the services Altman is providing.
brian99jordan
Altman’s entire career is marked with scams and dishonest, unethical business practices. If he were not protected by the elites he would be in jail.
jo_snover
I have used both Zoom and DocuSign and would like to be able to continue to do so.
I don't want to provide biometric data to a for-profit company run by a man with no ethical compass (Sam Altman) in order to do it. I'm not sure who or what I would trust, but never Sam Altman. Once given, you can't get this data back or control its use.
IadmirePublius
(Edited)
Absolutely unacceptable that they would require me to allow them to ID me that way. CAPTCHA is already bad enough the way it regularly presents me with pictures too fuzzy for me to see even with my glasses on. It discriminates against the visually handicapped. It should be illegal already.
Murray
I’m really quite surprised the author was so willing to provide a private corporation with rather dubious ethics a copy of his most private, personal biometric information.
Once you’ve given up biometrics, the threat of identity theft backed by that data rises very significantly, and the value of hacking a database containing millions of peoples’ biometrics is almost inconceivable. Armed with a copy of your irises, anyone could become you. Encrypted? Should anyone trust that encryption in 2026? 2030? And I’d also assume systems using this data pass tokens rather than the actual data, so having the ability to create those tokens represents the same threat.
I’d say that threat comes from malicious actors, which now includes AIs run by malicious actors, but I’m hardly convinced Altman himself is to be trusted. Much less Musk. Much less an authoritarian government. They all desperately want our biometrics and want to create situations where we are either incentivised or forced to do so. Don’t give it to them.
ForTheBirds
I was going to make a similar comment about the risks of giving biometric information to these companies. It’s likely to make it easier for AI to use your information.
Hmthinkingaboutthis
My thoughts as well. Why on earth would you give up your biometrics? They're your final "password."
The long-running fight to rein in the government’s power to search Americans’ phone calls, emails and text messages without a warrant has gained new urgency on Capitol Hill over concerns that AI will supercharge state surveillance. [Total Information Awareness 3.0?]
Privacy advocates warn that if the law enabling warrantless monitoring of Americans is not meaningfully reformed, many citizens could be subject to increasingly invasive AI-powered analysis of communications swept up by foreign intelligence programs as well as commercially available location and behavioral data.
“Imagine instead of doing a query with one person that you turned AI loose on these databases,” Rep. Thomas Massie, R-Ky., said Thursday at a press conference announcing a new bill to close data-collection loopholes. “There’s virtually nothing the government can’t know about you.”
Section 702 of the Foreign Intelligence Surveillance Act (FISA) allows the government to collect the communications of foreigners abroad, but it also enables the government to collect messages, emails and other transmissions from Americans when they contact foreigners. The government can then perform warrantless searches on those emails, messages and other communications. Though the provision was originally passed in 2008, lawmakers must renew it every few years…
Jacob Ward
"Other Currents"
1. Meta logged its employees’ keystrokes, then announced their layoffs. The Model Capability Initiative — Meta’s new tool for capturing employee keystrokes and mouse clicks, framed as AI training data — was announced this week with no opt-out option. Two days later, 8,000 layoffs. The sequence is the story: document what the workers do, then eliminate them.
2. Palantir published a manifesto. Anthropic went to court. Two companies drew opposite lines in the same week. Palantir’s 22-point public document calls AI weapons inevitable, some cultures “dysfunctional and regressive,” and pluralism a failure. Bellingcat’s Eliot Higgins noted the obvious: these aren’t abstract ideas floating in space — they’re the stated ideology of a company that sells targeting software to militaries and immigration enforcement. Meanwhile, Anthropic has been in federal court since March fighting a Pentagon designation that labeled it a national security supply-chain risk — because it refused to remove safeguards against autonomous weapons and mass domestic surveillance. One company announced what it believes. The other is paying lawyers to defend it.
3. The Joint Chiefs said autonomous weapons are coming. Nobody asked whether they work. Yesterday the Chairman of the Joint Chiefs called autonomous weapons a “key and essential part of everything we do.” Anthropic’s actual position — that frontier AI is not reliable enough to kill people without a human in the loop — has not been rebutted. It has been bypassed.
4. 96,000 tech layoffs this year, all attributed to AI. Nobody has to prove it. Amazon, Meta, Microsoft, Snap, Salesforce, Block — every announcement uses the same language: efficiency, AI investment, right-sizing. No company is required to document whether AI replaced a single role. The NLRB has no jurisdiction over the framing. Nobody does.
5. Amazon’s Ring can now identify your neighbors by face. Familiar Faces rolls out facial recognition to Ring doorbells — identifying family, friends, delivery drivers — processed in Amazon’s cloud, opt-out by default. EFF and Senator Markey are pushing back. Three states have already blocked it. “Optional and disabled by default” is how every ambient surveillance feature starts.
6. OpenAI lost three senior executives in a single day and is projecting $14 billion in losses. The CPO, the head of Sora, and the enterprise CTO all departed the same Friday. Multiple cited the DOD contract and the cultural shift from research to commercial operations. The company generating $25 billion in annual revenue is simultaneously losing the people who built the products and spending $14 billion more than it earns.
7. Anthropic just passed OpenAI in revenue. The gap matters because of what each company agreed to. Anthropic hit $30 billion in annualized revenue this month, passing OpenAI’s $25 billion. Enterprise customers — not consumers — drove it. Worth noting alongside item 2: Anthropic is the company currently in court over autonomous weapons restrictions. OpenAI signed a DOD contract that removed equivalent restrictions weeks after Anthropic was blacklisted.
8. S&P 500 boards are disclosing AI risk while knowing almost nothing about AI. 83% of S&P 500 companies now list AI as a material risk in disclosures — up from 12% in 2023. Board directors with AI expertise: 2.7%. The people with the legal authority to set limits have mostly opted not to understand the thing they’re supposed to be limiting. This is the governance gap that makes every other story in this section possible.
Another random "current" from a subscriber: The other day, I posted a story to my Facebook page, and was shortly thereafter notified by Facebook that, unless I opted out, henceforth all of my story postings would be given an AI-generated "headline." Supposedly to help gain better visibility for my stories. Read: "content monetizing assistance." I replied in testy ALL CAPS. (Use your imagination.) Something along the lines of NO.MUTHA.ZUCKING.WAY…
When US President Trump signed an executive order in May 2025 seeking to streamline the adoption of nuclear energy by directing federal agencies to reconsider whether radiation protection standards have been unnecessarily strict, he reignited a debate that has smoldered in radiation science for decades. At the heart of the controversy is the linear nonthreshold (LNT) model—the idea that any amount of radiation, no matter how small, has damaging biological effects. Resolving whether LNT is a reasonable precaution or a costly misapplication of incomplete science will require not just better arguments, but better data.
The science underlying low-dose risk (less than 100 milligray, a measure of how much radiation is absorbed by living tissue) is incomplete. Decades of epidemiology and biology have neither confirmed LNT nor established a universally agreed threshold dose below which there is essentially no risk. Given the uncertainties, some experts advocate that doses should be as low as reasonably achievable (ALARA). There is also a hypothesis called hormesis, which holds that very low doses of radiation might be beneficial by stimulating cellular repair mechanisms. These are not purely scientific questions. How society weighs uncertain risks against economic and energy imperatives, whether in nuclear power, medical imaging, or occupational safety, involves value judgments that data alone cannot fully resolve.
Answering these questions will still require more research on several fronts. Here, experts do not agree on what that evidence will ultimately show. Some (including E.A.C.) hold that although LNT is not an established biological truth, it is a defensible regulatory tool—adding, however, that ALARA has been applied inconsistently, at times generating financial costs, psychological harm, and forgone societal benefits without proportionate protective benefit. Others (including B.A.U.) maintain that evidence for harm at low doses is lacking and that the real-world consequences of LNT-based regulation, from unnecessary abortions following the Chernobyl nuclear disaster in 1986 to evacuation deaths after the Fukushima nuclear accident in 2011, demonstrate that treating a contested model as settled science carries measurable human cost...
"I have floated in the middle of the Pacific, far from any shore, looking down into water so blue it felt like the beginning of the universe. I have stood at the edge of the Arctic, listening to ice crack and fall into a warming sea. In these moments, I understood something that no dataset had ever been able to teach me: we are small, and the Earth is generous, and we have not yet earned that generosity. [She posted this on Facebook today. We heard her speak last week. Fabulous.]
Today the world pauses to celebrate this planet. I want to ask you to do something harder than celebrating. I want to ask you to grieve, just for a moment, for what we have already lost. The reefs we bleached. The forests we traded for cattle. The glaciers that are now water. Grief, when we allow ourselves to feel it, becomes fuel.
I grew up in Mexico, the daughter of a country where the land and the sea were never abstractions. They were food, livelihood, identity. The communities I have photographed across a hundred countries, Indigenous guardians, fishermen, divers, children who still swim in clean rivers, they have never needed Earth Day to remind them that the planet is alive. They know it, in their bones and their hands and their hunger.
The rest of us are still learning.
Here is what I have learned behind the lens: beauty is not passive. A photograph of a humpback whale does not ask you to admire her. It needs you to protect her. Every image I have ever made has been a letter: urgent, loving, written to whoever is willing to read it.
So today, I ask you to make a promise that survives until April 23rd. And April 24th. And every day after that. Protect a piece of it. Vote for it. Fund it. Teach your children its name.
The Earth does not need Earth Day. But it does need you.
Happy Earth Day. Now let's get to work."
...In 2018, I was a guest at Jeff Bezos’s Campfire retreat in Santa Barbara, California. It’s an annual event in which the Amazon founder invites 80-plus guests—celebrities, artists, intellectuals, and anyone else he thinks is interesting—to spend three nights at a private resort. I had recently been approached by Amazon about moving my film-and-television business over from Disney, and although I had declined (or maybe because I had declined), Bezos’s team invited me to Campfire, perhaps keen to impress me with the power of his reach…
Each morning, we gathered in a lecture hall to hear presentations. If you’ve ever seen a TED Talk, you understand the format. The year I went, a sitting Supreme Court justice was interviewed, and a neurologist talked about technological advances in prosthetics. In the afternoons and evenings, we were encouraged to exchange ideas over drinks and four-course meals, with no set purpose—to network, in other words, with some of the most rarefied talent on Earth. The most common question I heard was “Why am I here?”
“Why am I here?” asked the 1980s hair-metal singer. “Why am I here?” asked the Pulitzer Prize–winning novelist, the famous anthropologist, the presidential historian. Only the movie stars and the billionaires didn’t ask: They had done this kind of thing before. It turns out there is a circuit of idea festivals. Many tech billionaires host one, and if you find yourself on the right list, you can spend much of the year traveling the world, eating Wagyu, and discussing how to make the world a better place with the most famous talk-show host in history...
At drinks on the second night, the head of a major talent agency asked me what I thought of the weekend. I said, “I’ve spent my whole career trying to figure out how the world works. I didn’t realize I could just come here and ask the people who ran it.” On some level I was kidding. The lead singer of an alt-country band didn’t run the world, nor did a noted author who would later be accused of impropriety. But finding myself at that resort by exclusive invitation, I now knew exactly what people meant when they talked about the elite.
Sitting in the lecture hall, pencils out, listening to a famous chef explain his humanitarian work, it was easy to feel like the solution to the world’s problems lay within our grasp. And yet, looking around at faces I had only ever seen in a magazine or on-screen, I had an unsettling revelation: This is the hubris of accomplishment. To be declared a genius at one thing is to begin to believe you are a genius at everything…
Here we were, 80 individuals with a combined net worth that was greater than a small city’s yet infinitesimal compared with the wealth and dominion of our host. How did he view this exercise—as a first step toward changing the world, or as a performative display of his reach and influence?
Bezos was everywhere that weekend—in a tight T-shirt, laughing too loudly, arms thrown around his teenage sons. He had recently become the world’s second centibillionaire, his net worth hovering somewhere around $112 billion, about half of what it is today. That number, previously unimaginable, had made him unique on a planet of 8 billion people, and you could feel it in the room. Even the richest and most famous among us were drawn to the energy of this impossible wealth...
Eight years later, Bezos and two of the world’s other richest men—Mark Zuckerberg and Elon Musk—have clearly left the world of consequences behind. They float in a sensory-deprivation tank the size of the planet, in which their actions are only ever judged by themselves.
The closer I’ve gotten to the world of wealth, the more I understand that being truly rich doesn’t mean amassing enough money to afford superyachts, private jets, or a million acres of land. It means that everything becomes effectively free. Any asset can be acquired but nothing can ever be lost, because for soon-to-be trillionaires, no level of loss could significantly change their global standing or personal power. For them, the word failure has ceased to mean anything…
This sense of invulnerability has deep psychological ramifications. If everything is free and nothing matters, then the world and other people exist only to be acted upon, if they are acknowledged at all. This is different from classic narcissism, in which a grandiose but fragile self-image can mask deep insecurity. What I’m talking about is a self-definition in which the individual grows to the size of the universe, and the universe vanishes. Asked recently if there is any check on his power, President Trump—himself a billionaire, and by far the richest president in American history—said, “Yeah, there is one thing. My own morality. My own mind. It’s the only thing that can stop me.” Not domestic or international law, not the will of the voters, not God or the centuries-old morality of civic and religious life…
I also sometimes push the audio to 2x when mocking Donald Trump.
Journalist Katrina Manson spent years inside the classified and not-so-classified world of U.S. military AI — interviewing the colonels, the defense tech founders, and the ethicists watching it all unfold. Her book, Project Maven, is the definitive account of how Silicon Valley's "ship it and fix it later" culture collided with the business of war. We talk about Palantir, Claude, the Jevons Paradox applied to killing, and why the people building these systems openly admit they're worried about what they've built. A conversation you won't hear anywhere else. [A riveting discussion. Sobering.]
Zeke Emanuel was materially involved with the authoring of the PPACA—"Obamacare." I followed those developments closely, writing about them on another of my blogs. e.g., see "Public Optional." Prior to those days, I'd written my first grad school paper ("argument analysis & evaluation") on the PNHP "Single Payor" argument published in JAMA in 1994.
The search for survival and healing
Should you or a loved one be beset by advanced and life-threatening cancer, gird yourself for an overwhelming onslaught of information, much of it in conflict, all of it ostensibly requiring your immediate consideration and action lest you accede to fatal delay.
My empirical triage effort began within days of Sissy's admission to County. While we were still struggling to come to grips with everything coming fast and furious from the doctors and medical administration, well-meaning friends and acquaintances began peppering us with unsolicited advice and literature, some of it conveying a bizarre ignorance that would leave me floundering for the politic response that would not demean and offend. Pejorative retort suppression would become an ongoing emotional exercise (repeatedly aided by the quiet and gentle reproach of my saintly wife).
It commenced with a reprint of an "article" hawking a book wherein it would be recounted in further detail just why "all cancers" were caused by intestinal flatworms! (The author ended nearly every sentence with one or more exclamation marks!) The curative regimen would simply involve purging the gastrointestinal tract of these carcinogenic parasites through a regimen including colonics and herbal mixtures. The boyfriend delivering this wonderful news was utterly sold on its merit: Salvation was at hand!
Mercifully, Dr. Wren got me off the hook on this one with a diplomacy worthy of a Secretary of State.
Next would come the "Hoxsey" video, a slick production exuding first-class "documentary" production values-- sinister in tone-- detailing the history of the miraculous herbal anti-cancer "formula" purportedly discovered by one uneducated Harry Hoxsey decades ago while treating horse hide lesions on his father's farm. The video-- replete with ominous background music and newsreel headline cutaways-- recounts the foul banishment of this alleged "savior" to Mexico by the greedy and venal American Medical Association. The Hoxsey clinic in Tijuana today still attracts innumerable desperate cancer sufferers bereft of more conventional clinical options and ready with cash.
After close consideration, I had no choice but to count myself in the company of those regarding this stuff as the worst sort of quackery.
Essiac tea (brand name: Fluoressence). The story is told that a Canadian nurse-- one Rene Caisse-- was the recipient of a mysterious healing herbal recipe used by an Ojibwa tribal medicine man that caused all manner of malignancies to disappear in short order. It is predictably alleged that the Canadian counterpart to our A.M.A. in concert with Ottawa would see to the suppression of this wondrous substance.
Proponents of this beverage invariably mention that "Essiac" is "Caisse" spelled in reverse. In my case, no particular epiphany would be forthcoming in the wake of this "Sgt.-Pepper-Played-Backwards" intimation, and, after much digging concerning the ingredients and their asserted efficacy, I could find nothing clinically interesting in Essiac.
"Resonance" machines? What? Another phone number slipped to me at the hospital had me listening to a sales pitch extolling a $1,500 "radionics therapy" device used to destroy tumors by resonating with the electro-biological "cancer frequency" of malignant tissues.
Right.
Not flatworms! No, it was "bacteria in the blood," the result of diet, specifically consumption of items such as chicken, that was the source of all cancer, another "doctor" explained to me over the phone from his Tijuana clinic. "We can't get the docs in the States to understand this," he intoned with weary resignation. His methodology: purge the blood through a revised diet that, among other things, eschewed chicken and mandated the consumption of lamb. Why? A devotee of this practitioner had a ready retort: "Y'ever watch chickens eat? They peck at the ground, picking up all kinds of bacteria." Oh.
Lamb, on the other hand, came from "root-eating" livestock that, while foraging through the subterrain, ingested the beneficent, supposedly cancer-curing below-ground nutrients central to this serum antiseptic "therapy." We could come down to the Tijuana clinic for an initial two-week stay for blood assessment and initiation of therapy. $2,500 per week. But, according to our locally referred contact-- 'he'll work it out with you if money is a problem; he's a very compassionate man. He really cares for his patients, he takes the time to listen to their concerns.'
The contrast with the "arrogant, narrow-minded, greedy, and indifferent" American clinicians who controlled medical practice in The States (the oft-repeated mantra of the more strident segment of the "alternative healing" movement) was clearly implicit in this appeal.
The foregoing comprise a more or less representative sampling of our experience thus far with the quackery end of the alternative therapy spectrum, a distribution of propositions whose opposite terminus abuts the breadth of mainstream clinical research and practice, where methods as yet "unproven" but more logically reasonable and promising vie for acceptance by the medical establishment. In the middle lie tougher calls: does shark cartilage really shrink tumors, functioning as an angiogenesis inhibitor? (one skeptical journal article called it "the laetrile of the 90's") Hydrazine sulfate? (also reported on extensively in the mainstream clinical literature and generally-- though not uniformly-- dismissed as 'ineffective.') Nucleotide reductase? Plant oils? Blue-green algae?
All of these unconventional therapeutic assertions-- many of which would prove to be merely unproductive, outlandish, maddening distractions-- would have to be checked out while also slogging through the vast archives of mainstream clinical literature, a quest that would take me through the most recent three years of month-by-month National Cancer Institute (NCI) hepatoma citations. Also, I began-- and continue to this day-- keyword-searching the Medline indices for anything related to Sissy's condition that might prove useful…