
Wednesday, November 29, 2017

"SlaughterBots?" #AI-driven autonomous weaponry?


Is this a realistic, exigent concern? We're all ga-ga these days over the beneficent potential of AI in the health care space, but...



From the autonomousweapons.org site:
What are killer robots?
Killer robots are weapons systems that, once activated, would select and fire on targets without meaningful human control. They are variously termed fully autonomous weapons or lethal autonomous weapons systems.

The concern is that low-cost sensors and rapid advances in artificial intelligence are making it increasingly possible to design weapons systems that would target and attack without further human intervention. If this trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all.

The US and others state that lethal autonomous weapon systems “do not exist” and do not encompass remotely piloted drones, precision-guided munitions, or defensive systems. Most existing weapons systems are overseen in real-time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used.

While the capabilities of future technology are uncertain, there are strong reasons to believe that fully autonomous weapons could never replicate the full range of inherently human characteristics necessary to comply with international humanitarian law’s fundamental rules of distinction and proportionality. Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harm that fully autonomous weapons would be likely to cause...
Wonder what Elon Musk thinks about this stuff? Recent reporting.


'eh? See also my prior post "Artificial Intelligence and Ethics."

ERRATUM

Read an interesting set of posts over at Medium today.

"What to look (out) for when raising money."
Good stuff.
If You Take Venture Capital, You’re Forcing Your Company To Exit
To understand venture capital, you must understand the consequences of how VCs return capital to their investors


Why the Most Successful VC Firms Keep Winning
The best companies seek out the most successful investors, and gains accumulate at the top
 

One Investor Isn’t Enough
A company’s success is highly reliant on peer validation of investor decisions. This stunts diversity and must change if we want the best founders working on the biggest opportunities.
I follow VC news tangentially in the health care space, e.g., "#WinterTech2016: Venture Capital in #HealthIT."

Just became aware of this company.

"We are a private investment company that practices a disciplined, evidence-based approach to define innovative business models capable of delivering new products and services that can positively impact the way we live, work and play.

What We Look For
We are not traditional seed investors. Rather, we are active and engaged co-founders who seek to get the most promising opportunities from the lab to the market. We bring to the table decades of entrepreneurial success and experience, offering seasoned perspectives. We seek individuals who share our desire for exploration, who are intellectually honest and have insatiable curiosity, who have the ability and the desire to systematically test their assumptions in the real world. We believe that the best entrepreneurs have the ability to blend technical insight with market-based feedback, allowing their innovations to mature into a successful company. Even our name evokes this passion. M34 is short for Mach 34, the speed that an object needs to achieve to escape the gravitational pull of the earth. Our goal for all of our companies is to achieve a successful liftoff from traditional forces that hold most new ventures back...."
Interesting. I found this noteworthy in particular:
M34 Capital founders understand the risks associated with developing new and innovative ideas. We have pioneered the principles of the lean startup approach to accelerate and improve the efficiency of the commercialization of new technologies.
More on this stuff perhaps later. "Lean startup?" Hmmm...

Seriously; this old school Deming/Shewhart dude loves me some "Lean."

UPDATE

apropos of the upcoming December 12th Health 2.0 "Technology for Precision Health Summit."
Health 2.0 sat down with Linda Molnar to discuss the evolution of Precision Health, the imperatives at stake in a fast-paced field, and empowerment through big data. Linda has over 20 years in the field of Life Sciences and is responsible for a number of initiatives that further the field with start-ups, the feds, and for investors. Her current endeavor is leading the upcoming Technology for Precision Health Summit in San Francisco alongside Health 2.0. “We’re never going to pull together all of this disparate data from disparate sources in a meaningful (i.e. clinically actionable) way, unless we talk about it” she says. The Summit is an attempt to bring together the worlds of Precision Medicine and Digital Healthcare to realize the full potential of a predictive and proactive approach to maintaining health...

____________

More to come...

Monday, November 27, 2017

From Pythagoras to petabytes

A few days back, I was surfing through a number of news articles recounting Elon Musk's latest freakout over The Existential Menace of AI. See, e.g., "AI is highly likely to destroy humans, Elon Musk warns."


A commenter beneath one of the articles (TechCrunch's "WTF is AI?") briefly cited this book.


OK, Elon's pearl-clutching will have to wait.

Garth's book is merely $4.61 in the Kindle edition (and worth every penny). A must-read, IMO. Spanning, well, "Pythagoras to petabytes." (I'd have chosen "yottabytes," but it didn't have the cutesy alliterative post title ring.)
IT’S EASY TO FORGET that the digital age, and so the existence of computer programmers, still only spans a single working life, and one of those lives is mine. In my career as a commercial computer programmer I’ve experienced most of the changes that have occurred in the programming world in all their frequent craziness. This is my attempt to preserve that odyssey for the historical record...
For myself and many others, commercial computer programming has been a rewarding career since at least the late 1960s and during that time programming has seen massive changes in the way programmers work. The evolution of many different programming models and methodologies, the involvement of many extraordinary characters, endless ‘religious wars’ about the correct way to program and which standards to use. It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book...
Four of the overall themes in the book can be identified by searching for #UI (User Interface), #AI (Artificial Intelligence), #SOA (Service Oriented Architecture) and #PZ (Program componenti-Zation) and the Table of Contents is detailed enough to guide dipping in/ dipping out…
___
In Years BC (Before Computers)
Sometimes it seems like computers and the digital revolution sprang out of nowhere but of course they didn’t. They were the result of at least two and a half millennia of evolutionary development which can be traced all the way back to the ancient Greek philosopher Pythagoras. When I graduated from university in 1968 I had a degree in philosophy but I somehow managed to be awarded it without having absorbed much ancient Greek philosophy or the history and philosophy of science. There were just too many well documented distractions in the sixties. It was only later that I came to realise how critical they were in creating the context for digital computers and with them the new career of commercial computer programmer.

So we need to briefly cover some history in order to set the scene for my account though only the necessary facts will be covered. We’ll only be scratching the surface so don’t panic but if you prefer to skip this Prologue that’s OK. By the way I use BC and AD for dates and not BCE and CE, there is no particular reason, it’s just habit.

550 – 500 BC
During this period Pythagoras the ancient Greek philosopher was active in a Greek colony in southern Italy. No written record of his thinking survives from the time but he is known to have influenced Plato and Aristotle and through them all of Western thinking. He is rumored to have been the first person to ever call himself a philosopher…

Eaglesfield, Garth. The Programmer's Odyssey: A Journey Through The Digital Age (Kindle Locations 94-129). Pronoun. Kindle Edition.
Garth and I are roughly the same age (both born in 1946). He came to computing about 12 years before I did. My initiation came tangentially in my 30s, in the course of my undergrad research (I was studying Psychology and Statistics at Tennessee; we were investigating "math anxiety" among college students taking stats courses). We wrote SAS and SPSS code via primitive line editors (e.g., edlin and vi), laboriously keyed in our inline research data, prefaced it all with JCL headers (Job Control Language), and submitted the jobs to the DEC/VAX over a 300 baud dialup modem (after which we'd schlep over to the computer room to fetch the 132-column greenbar printouts once they were available).

My, my, how times have changed.

After graduation in 1985, I got my first white collar gig in January 1986, writing code in a radiation laboratory in Oak Ridge (pdf). After those lab years, my time was spent principally as an analyst, using mostly SAS and Stata, in health care (pdf) and bank risk management.

I found this particularly interesting in Garth's book:
On The Road To Bell Labs
I felt that I needed to get Unix and C on to my CV and where better to do that than at its birthplace where it all started? Bell Labs was in New Jersey on the other side of the Hudson river and would involve some travel but I knew that other contractors from Manhattan worked there so I reasoned that there must be some way of coping. The Holy Grail of the various Bell Labs, which were scattered around New Jersey, was the Murray Hills lab, it was the real Unix/ C holy of holies. But it was hard to get into and so I eventually got a contract at the Whippany lab. It was closer to Manhattan and was more of a traditional telephone company engineering lab but they used Unix and C so that would get it on to my CV. The project I worked on was the software for a diagnostic device used in network line testing…

At the Labs I was given a copy of the now legendary ‘White Book’ written by Kernighan and Ritchie accompanied by a Unix manual and I set about absorbing them both.

Entering your C source code for a program was done with the ubiquitous Unix editor vi from a dumb terminal. When vi was developed at Bell Labs the quest for portability had been continued by using ordinary characters for edit commands so that it could be used with any dumb terminal’s keyboard. Those keyboards of course did not have all the control keys that modern keyboards come with such as arrow keys, Home, End, Page Up, Page Down, and so on. As with everything in the Unix world the vi key combinations ranged from simple and intuitive to complex and barely comprehensible. It operated in 2 modes, insert and command modes, and in command mode, for instance, it used the normal keyboard characters ‘h’ ‘l’ ‘k’ and ‘j’ as substitutes for the absent left, right, up and down arrow keys.

Compared to the VMS operating system and to high level programming languages like COBOL, Coral 66 and Pascal, Unix and C certainly represented a much lower level, closer to the hardware and more like assembler coding. There were relatively few reserved words in the C language, with the main ones being for data classes (int, char, double, float), data types (signed, unsigned, short, long), looping verbs (for, while, continue, break), decision verbs (if, else, switch, case) and control verbs (goto, return). C took the componentization of computer programs several steps further through the use of functions, a specific form of subroutine. Functions returned a single value that might then be part of an equation...
(ibid, Kindle Locations 1677-1706).
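For any younger readers who never touched C, here is a purely illustrative little sketch of mine (not anything from Garth's book) of the sort of thing he is describing: a handful of reserved words, and a function, "a specific form of subroutine," returning a single value.

/* Illustrative toy only -- not from "The Programmer's Odyssey." */
#include <stdio.h>

/* a function returning a single value, usable inside an expression */
double fahrenheit(double celsius)
{
    return celsius * 9.0 / 5.0 + 32.0;
}

int main(void)
{
    int c;                                      /* data class: int */
    for (c = 0; c <= 100; c += 20) {            /* looping verb: for */
        if (c == 0)                             /* decision verbs: if, else */
            printf("%3d C = %6.1f F (freezing point)\n", c, fahrenheit(c));
        else
            printf("%3d C = %6.1f F\n", c, fahrenheit(c));
    }
    return 0;                                   /* control verb: return */
}

Back in the day you'd have keyed that in with vi from a dumb terminal, h/j/k/l standing in for the absent arrow keys, and then handed it to the C compiler.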
My late Dad came back from WWII (minus a leg), mustered out of the military, and went to work for Bell Labs, first at Murray Hill, and subsequently at the Whippany location. He worked in semiconductor R&D his entire career at Bell Labs (the only civilian job he ever had, until he took his tax-free early VA disability pension in 1972 and dragged my Ma off to the humid swamps of Palm Bay, Florida so he could play golf all the time).

Looks like Garth came to Bell Labs a few years after Pop had retired.

"The Programmer's Odyssey" covers a ton of ground, weaving great historical and technical detail into a fun read of a personal story. I give it an enthusiastic 5 stars.
"It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book."
Yeah. I am reminded of my prior post "12 weeks, 1,200 hours, and $12,000, and you're a "Software Engineer"? Really?"

In closing,
Postscript
We have reached the end of the odyssey. But where have we arrived?

Well in some ways, like Odysseus himself, we have finally come home, right back to where we started. Some things have changed, we now have a massive multi-layered global network of increasingly smart devices, including the ultimate smart device the human being. Yet ultimately it’s all based on the same old von Neumann computer architecture and on the two digits 0 and 1, the binary system. The whole extraordinary software edifice rests on that foundation, which is really mindboggling.

Have we learned anything on our odyssey that suggests this is likely to change? Perhaps a new computer hardware architecture will finally emerge? Articles regularly appear about quantum computing devices and IBM have announced a project to research cognitive SyNAPSE chips but so far these are still very much research projects. It is also being said that the miniaturisation of silicon chips by Intel and others will reach its limit by 2020 and Moore’s Law will finally come to an end.

On our odyssey we, like Odysseus, have encountered dangerous and worrying phenomena, in particular whether any conceivable developments in computer technology will put the human race in danger. It’s hard to believe that based on existing system components a self-conscious sentient being will come into existence like HAL in 2001 and threaten our own survival. Throughout history major technological advances have tended to become models for explaining human beings, always incorrectly. In the digital age this has meant starting to see human beings as just computing machines. Will this prove to be true or false? The jury is still out.

But if we are in danger from our technology perhaps it’s most likely to come from an artificial living creature/ cyborg/ replicant enhanced with implanted computer chips and interfaces that we have created? Created perhaps by using advanced tools such as the already being discussed DNA editors? The recent TV series ‘Humans’ is an interesting attempt to confront some of these issues and two recent books have added significantly to the debate; Rise of the Machines: The Lost History of Cybernetics; Thomas Rid; Scribe Publications and Homo Deus: A Brief History of Tomorrow, by Yuval Noah Harari, Harvill Secker.

Questions, questions. As the philosophically inclined outlaw replicant Roy Batty so presciently remarked in Blade Runner
(ibid, Kindle Locations 2646-2666)
'eh?


Kudos, Mr. Eaglesfield. I'd make this book required reading at "software engineer boot camp." (BTW, Garth's website is here.)

Also, Garth turned me on to this resource where he blogs:


Check out in particular the "History of Computing and Programming" page.
___

Other prior posts of mine of relevance here?
Then there are my posts on "NLP" (Natural Language Processing).

NEXT UP

Got a new Twitter follower.

The Web's Best Content Extractor
Diffbot works without rules or training. There's no better way to extract data from web pages. See how Diffbot stacks up to other content extraction methods:
Pretty interesting. Stay tuned. We'll see.

SAVE THE DATE

They just approved my press pass. Follow hashtag #health2con and Twitter handle @Health2con.

Details
UPDATE

From "history" to "futurism" predictions. I saw this infographic on Pinterest, and chopped it up into 4 segments for viewability.


 See also "Types of AI."

UPDATE

Garth just reciprocated my foregoing tout with the kindest mention of me. See "Cometh the hour, cometh the man."

Again, not kidding about how cool his book is.
____________

More to come...

Wednesday, November 22, 2017

The ultimate health issue

My December issue of Harper's arrived in the snailmail yesterday. apropos of the worrisome topic that is surely on many minds these days, in light of who has sole access to the U.S. "nuclear button,"

Destroyer of Worlds
Taking stock of our nuclear present


In February 1947, Harper’s Magazine published Henry L. Stimson’s “The Decision to Use the Atomic Bomb.” As secretary of war, Stimson had served as the chief military adviser to President Truman, and recommended the attacks on Hiroshima and Nagasaki. The terms of his unrepentant apologia, an excerpt of which appears on page 35, are now familiar to us: the risk of a dud made a demonstration too risky; the human cost of a land invasion would be too high; nothing short of the bomb’s awesome lethality would compel Japan to surrender. The bomb was the only option.

Seventy years later, we find his reasoning unconvincing. Entirely aside from the destruction of the blasts themselves, the decision thrust the world irrevocably into a high-stakes arms race — in which, as Stimson took care to warn, the technology would proliferate, evolve, and quite possibly lead to the end of modern civilization. The first half of that forecast has long since come to pass, and the second feels as plausible as ever. Increasingly, the atmosphere seems to reflect the anxious days of the Cold War, albeit with more juvenile insults and more colorful threats. Terms once consigned to the history books — “madman theory,” “brinkmanship” — have returned to the news cycle with frightening regularity.

In the pages that follow, seven writers and experts survey the current nuclear landscape. Our hope is to call attention to the bomb’s ever-present menace and point our way toward a world in which it finally ceases to exist.
From the subsequent “Haywire” essay therein:
...Today, the time frame of an attack has been reduced to mere seconds. It once took three or four nuclear warheads aimed at every silo to render an adversary’s missiles useless, a redundancy thought necessary for certain destruction. Intercontinental ballistic missiles may now be made inoperable with a single keystroke. Computer viruses and malware have the potential to spoof or disarm a nuclear command-and-control system silently, anonymously, almost instantaneously. And long-range bombers and missiles are no longer required to obliterate a city. A nuclear weapon can be placed in a shipping container or on a small cabin cruiser, transported by sea, and set off remotely by cellular phone.

A 2006 study by the Rand Corporation calculated the effects of such a nuclear detonation at the Port of Long Beach, California. The weapon was presumed to be two thirds as powerful as the bomb that destroyed Hiroshima in 1945. According to the study, about 60,000 people in Long Beach would be killed, either by the blast or by the effects of radiation. An additional 150,000 would be exposed. And 8,000 would suffer serious burns. At the moment, there are about 200 burn beds at hospitals in California — and about 2,000 nationwide. Approximately 6 million people would try to flee Los Angeles County, with varying degrees of success. Gasoline supplies would run out. The direct cost of that single detonation was estimated to be about $1 trillion. In September, North Korea detonated a nuclear device about thirty times more powerful than the one in the Rand study…
'eh?

If you're not a Harper's subscriber, I would suggest you get a newsstand copy and read all of this.

As if we didn't have enough to be concerned about, I give you a comment from the Naked Capitalism blog post "Capitalism: Not With a Bang But With a (Prolonged) Whimper."
Michael
November 21, 2017 at 11:01 am


In the article the author claims it may take centuries to evolve away from neo-classical economics. I believe it will be much shorter. As pointed out by 15,000 climate scientists we only have a few years to take corrective action before the climate goes into a runaway state.

The head of United Planet Faith & Science Initiative, Stuart Scott, believes the root of the problem is neo-classical economics since it makes the assumption that the planet’s resources are infinite, and the ability to destroy the planet without consequence is also infinite. There is no measure in this paradigm for the ecological costs of capitalism.

Dr. Natalia Shakhova and colleagues from the University of Alaska have been monitoring methane eruptions for a number of years and claim there will be a 50 gigaton methane eruption in the East Siberian Arctic Shelf within the next ten years, since as of 2013 the amount of methane released from this area had doubled since previous measures. Dr. Peter Wadhams of Cambridge University calculates that should this occur, the average global temperature of the planet would increase by .6 degrees Celsius within a decade.

It is predicted that this will cause a drastic jet stream disruption creating drought, flood, and heat waves that will make food production for 7 billion people impossible. There will probably be a population crash along with accompanying wars.

Additionally, inasmuch as we are already 1.1 degrees Celsius above preindustrial levels this would put us close to 2 degrees Celsius and on a path to a runaway climate. We currently have no viable means of removing excess methane nor CO2 from the atmosphere, although it is assumed in the IPCC models that geo-engineering is employed to keep us below both 2 and 4 degrees Celsius of warming.

IMHO we are nearing an inflection point of survivability. What happens within the next 5 years will determine the chances of human civilizational survival. Everything else is just rearrangement of the deck chairs…
I am reminded of one of my prior posts "The ultimate population health 'Upstream' issue?" Tangentially, see my prior post "How will the health care system deal with THIS?"

Yeah, I'm a barrel of laughs today.

No shortage of exigent and otherwise daunting issues to contemplate, right? But, hey, POTUS 45 is all over it.


Yeah, "Happy Thanksgiving" to you too.

UPDATE

President Trump visits the Coast Guard on Thanksgiving Day to compliment them on their "Brand."


__

ERRATUM

Our friends at Uber have yet another shitstorm on their hands.
Uber Paid Hackers to Delete Stolen Data on 57 Million People

Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers…
I've cited Uber before. See "Health Care Needs an Uber." It links to a raft of thorough Naked Capitalism long-read analyses of their dismal prospects.

Recent news reports announcing their intent to do an IPO in 2019 are just so much whistling-past-the-graveyard BS to me. But I know that doing one is their only path toward foisting their untenable business model off on the clueless public before their entire "Market Cap valuation" house of cards comes crashing down.

SAVE THE DATE

Details
____________

More to come...

Tuesday, November 21, 2017

For your health, Casa Suenos

We got home around midnight Saturday from Manzanillo, Mexico, where we'd spent eight days at the incredible, unique Casa Suenos. A "bucket list" trip for our ailing daughter, Danielle.


I think my son shot that sunset pic. Maybe it was his girlfriend Meg.

A decade ago, Danielle held a fundraiser for the non-profit youth golf organization in Las Vegas where she served as Executive Director at the time. One of the auction items was a week-long retreat at Casa Suenos.

My wife bid on it successfully. She and Danielle and my niece April and several others subsequently went down. I stayed behind because my friend Bill Champlin (former Chicago lead singer) was in Las Vegas for weekend performances at South Point. See my photo blog posts here and here.

I have subsequently never heard the end of it from Danielle and Cheryl for not going to Manzanillo.

Some months ago, Danielle inquired of the proprietor Denny Riga regarding the possibility of getting a week at the villa at "cost." He'd not known of Danielle's illness, and that she was no longer working and had had to move back in with us.

He gave us the week gratis. We only had to pay for our food, drink, and travel. They even told us they were waiving the customary aggregate staff gratuity.

We were having none of that latter forbearance. There were eleven of us. They worked hard all week to ensure we had a fabulous time. Cheryl and I insisted on leaving an ample gratuity.


On Wednesday we had outdoor lunch and a swim at their private beach. There are no words, really.


Every meal for the entire week was beyond 5-star quality. Carlos, the chef, is an absolute culinary wizard. The service was impeccable. Every person we met while there, both staff and other locals, was beyond gracious and friendly. We could not have had a better time.


Danielle really needed this: muchas gracias.

EIGHT DAYS WITHOUT COMPUTER OR TV

I left my Mac Air home. They do have secure WiFi, so we could all use our iPhones to get online, mostly to upload photo updates to Facebook as the week ensued. I brought both of my big cameras, shot a good number of pics with them (which I've yet to process), but we all mostly just used our smartphones for photos.

I brought Walter Isaacson's hardbound Steve Jobs bio, which I'd bought several years ago and never gotten around to reading.


A long read. Excellent. I finished it. Having been a resolute "Mac snob" all the way back to 1991, I consumed it with fascination. Complex, difficult guy. Visionary, but a major "***hole."
"This is a book about the roller-coaster life and searingly intense personality of a creative entrepreneur whose passion for perfection and ferocious drive revolutionized six industries: personal computers, animated movies, music, phones, tablet computing, and digital publishing. You might even add a seventh, retail stores, which Jobs did not quite revolutionize but did reimagine. In addition, he opened the way for a new market for digital content based on apps rather than just websites. Along the way he produced not only transforming products but also, on his second try, a lasting company, endowed with his DNA, that is filled with creative designers and daredevil engineers who could carry forward his vision. In August 2011, right before he stepped down as CEO, the enterprise he started in his parents’ garage became the world’s most valuable company.

This is also, I hope, a book about innovation. At a time when the United States is seeking ways to sustain its innovative edge, and when societies around the world are trying to build creative digital-age economies, Jobs stands as the ultimate icon of inventiveness, imagination, and sustained innovation. He knew that the best way to create value in the twenty-first century was to connect creativity with technology, so he built a company where leaps of the imagination were combined with remarkable feats of engineering. He and his colleagues at Apple were able to think differently: They developed not merely modest product advances based on focus groups, but whole new devices and services that consumers did not yet know they needed.

He was not a model boss or human being, tidily packaged for emulation. Driven by demons, he could drive those around him to fury and despair. But his personality and passions and products were all interrelated, just as Apple’s hardware and software tended to be, as if part of an integrated system. His tale is thus both instructive and cautionary, filled with lessons about innovation, character, leadership, and values…"


Isaacson, Walter (2011-10-24). Steve Jobs (Kindle Locations 341-355). Simon & Schuster. Kindle Edition.
Given my daughter's dire diagnosis, reading the particulars of Steve's ultimately fatal pancreatic cancer struggle was rather difficult.

I brought my iPad, wherein I have hundreds of Kindle edition books, but I never once fired it up. I finished the compelling Jobs bio while sitting on the tarmac at SFO awaiting a gate as we returned.

A highly recommended read.

BTW, amid the mail pile upon our return home was a signed copy of Steve Tobak's excellent book.


I'd previously cited and reviewed it on the blog, e.g., here. Out of the blue one day a couple of weeks ago he emailed me asking for my street address. We've never met in person, we're simply online "friends" via shared interests.

Steve's book is another highly recommended read (as is his blog; one of my requisite daily stops). I did a quick Kindle search, and I can report that Steve Tobak's numerous cites of Steve Jobs and Apple are uniformly spot-on (despite his admission to me that he'd not read the Isaacson bio).

Steve Tobak is an astute straight shooter. Buy his book, and bookmark him.

HORSEBACK IN MANZANILLO


Coronas are required, LOL. Left to Right, Grandson Keenan (now 23), me, and my wife Cheryl.

CODA

My last topical post prior to our Mexico trip was that of "Artificial Intelligence and Ethics." This MIT Technology Review article showed up in my LinkedIn feed while we were in Manzanillo: "AI Can Be Made Legally Accountable For Its Decisions."

Stay tuned.

"OH, AND ONE MORE THING..."

Save the date.

Link
I asked for a press pass. No response this time thus far.
____________

More to come... dxFromHell

Friday, November 10, 2017

Cloud hidden, whereabouts unknown


I will be offline from November 11th through the 18th. Taking my ailing daughter on a "bucket list" vacation retreat out of the U.S. She's done better thus far than we'd initially expected, but our world these days is one of always anxiously waiting for the next shoe to drop.

She'll be back on chemo once we return (round 15), with the next follow-up CTs and MRIs in December.

Not taking my Mac Air (don't want the TSA and Customs hassles), so, I'll be back in the fray once I return. I've left you plenty of material.
________

Thursday, November 9, 2017

Artificial Intelligence and Ethics


I was already tee'd up for the topic of this post, but serendipitously just ran across this interesting piece over at Naked Capitalism.
Why You Should NEVER Buy an Amazon Echo or Even Get Near One
by Yves Smith


At the Philadelphia meetup, I got to chat at some length with a reader who had a considerable high end IT background, including at some cutting-edge firms, and now has a job in the Beltway where he hangs out with military-surveillance types. He gave me some distressing information on the state of snooping technology, and as we’ll get to shortly, is particularly alarmed about the new “home assistants” like Amazon Echo and Google Home.

He pointed out that surveillance technology is more advanced than most people realize, and that lots of money and “talent” continues to be thrown at it. For instance, some spooky technologies are already decades old…
Read all of it, including the numerous comments.

"Your Digital Mosaic"

My three prior posts have returned to my episodic riffing on AI and robotics topics: see here, here, and here.

Earlier this week I ran across this article over at Wired:
WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results…
Again, highly recommend you read all of it.

That led me to the "AI Now 2017 Report." (pdf)


The AI Now authors' 36-page report examines in heavily documented detail (191 footnotes) four topical areas of AI applications and their attendant ethical issues: [1] Labor and Automation; [2] Bias and Inclusion; [3] Rights and Liberties; and [4] Ethics and Governance.

From the Institute's website:
Rights & Liberties
As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.

Labor & Automation
Automation and early-stage artificial intelligence systems are already changing the nature of employment and working conditions in multiple sectors. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's implications for labor and work – examining who benefits and who bears the cost of these rapid changes.

Bias & Inclusion
Data reflects the social, historical and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.

Safety & Critical Infrastructure
As artificial intelligence systems are introduced into our core infrastructures, from hospitals to the power grid, the risks posed by errors and blind spots increase. AI Now studies the way in which AI and related technologies are being applied within these domains and to understand possibilities for safe and responsible AI integration.
The 2017 Report proffers ten policy recommendations:
1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards…

2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings…

3 — After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized…

4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion…

5 — Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work…

6 — Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain — such as education, healthcare or criminal justice — legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines…

7 — Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision…

8 — Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces…

9 — The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms…

10 — Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles…
I printed it out and went old-school on it with yellow highlighter and red pen.


It is excellent. A must-read, IMO. It remains to be seen, though, how much traction these proposals get in a tech world of transactionalism and "proprietary IP/data" everywhere.

Given that my grad degree is in "applied ethics" ("Ethics and Policy Studies"), I am all in on these ideas. The "rights and liberties" stuff was particularly compelling for me. I've had a good run at privacy/technology issues on another of my blogs. See my post "Clapp Trap" and its antecedent "Privacy and the 4th Amendment amid the 'War on Terror'."


"DIGITAL EXHAUST"

Another recent read on the topic.

Introduction

This book is for everyone who wants to understand the implications of the Big Data phenomenon and the Internet Economy; what it is, why it is different, the technologies that power it, how companies, governments, and everyday citizens are benefiting from it, and some of the threats it may present to society in the future.

That’s a pretty tall order, because the companies and technologies we explore in this book— the huge Internet tech groups like Google and Yahoo!, global retailers like Walmart, smartphone and tablet producers like Apple, the massive online shopping groups like Amazon or Alibaba, or social media and messaging companies like Facebook or Twitter— are now among the most innovative, complex, fast-changing, and financially powerful organizations in the world. Understanding the recent past and likely future of these Internet powerhouses helps us to appreciate where digital innovation is leading us, and is the key to understanding what the Big Data phenomenon is all about. Important, too, are the myriad innovative frameworks and database technologies— NoSQL, Hadoop, or MapReduce— that are dramatically altering the way we collect, manage, and analyze digital data…


Neef, Dale (2014-11-05). Digital Exhaust: What Everyone Should Know About Big Data, Digitization and Digitally Driven Innovation (FT Press Analytics) (Kindle Locations 148-157). Pearson Education. Kindle Edition.
UPDATE

From MIT Technology Review:
Despite All Our Fancy AI, Solving Intelligence Remains “the Greatest Problem in Science”
Autonomous cars and Go-playing computers are impressive, but we’re no closer to machines that can think like people, says neuroscientist Tomaso Poggio.


Recent advances that let computers play board games and drive cars haven’t brought the world any closer to true artificial intelligence.


That’s according to Tomaso Poggio, a professor at the McGovern Institute for Brain Research at MIT who has trained many of today’s AI leaders.


“Is this getting us closer to human intelligence? I don’t think so,” the neuroscientist said at MIT Technology Review’s EmTech conference on Tuesday.

Poggio leads a program at MIT that’s helped train several of today’s AI stars, including Demis Hassabis, cofounder of DeepMind, and Amnon Shashua, cofounder of the self-driving tech company Mobileye, which was acquired by Intel earlier this year for $15.3 billion.

“AlphaGo is one of the two main successes of AI, and the other is the autonomous-car story,” he says. “Very soon they’ll be quite autonomous.”


But Poggio said these programs are no closer to real human intelligence than before. Responding to a warning by physicist Stephen Hawking that AI could be more dangerous than nuclear weapons, Poggio called that “just hype.”…
BTW - apropos, see The MIT Center for Brains, Minds, and Machines.

"The Center for Brains, Minds and Machines (CBMM)
is a multi-institutional NSF Science and Technology Center
dedicated to the study of intelligence - how the brain produces intelligent
behavior and how we may be able to replicate intelligence in machines."

Interesting. See their (dreadful video quality) YouTube video "Discussion Panel: the Ethics of Artificial Intelligence."

In sum, I'm not sure that difficulty achieving "general AI" -- "one that can think for itself and solve many kinds of novel problems" -- is really the central issue going to applied ethics concerns. Again, read the AI Now 2017 Report.

WHAT OF "ETHICS?" ("MORAL PHILOSOPHY")

Couple of good, succinct resources for you, here and here. My elevator speech take on "ethics" is that it is not about a handy "good vs bad cookbook." It goes to honest (albeit frequently difficult) moral deliberation involving critical thinking, deliberation that takes into account "values" that pass rational muster -- surpassing the "Appeal to Tradition" fallacy.

UPDATE

Two new issues of my hardcopy Science Magazine showed up in the snailmail today. This one in particular caught my attention.

What is consciousness, and could machines have it?

Abstract
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain?

I question here the scientific and strategic underpinnings of the runaway enthusiasm for industrial-scale projects at the interface between “wet” (biology) and “hard” (physics, microelectronics and computer science) sciences. Rather than presenting the achievements and hopes fueled by big-data–driven strategies—already covered in depth in special issues of leading journals—I focus on three major issues: (i) Is the industrialization of neuroscience the soundest way to achieve substantial progress in knowledge about the brain? (ii) Do we have a safe “roadmap,” based on a scientific consensus? (iii) Do these large-scale approaches guarantee that we will reach a better understanding of the brain?

This “opinion” paper emphasizes the contrast between the accelerating technological development and the relative lack of progress in conceptual and theoretical understanding in brain sciences. It underlines the risks of creating a scientific bubble driven by economic and political promises at the expense of more incremental approaches in fundamental research, based on a diversity of roadmaps and theory-driven hypotheses. I conclude that we need to identify current bottlenecks with appropriate accuracy and develop new interdisciplinary tools and strategies to tackle the complexity of brain and mind processes…
Interesting stuff. Stay tuned.
__

CODA

Save the date.

Link
____________

More to come...

Friday, November 3, 2017

Clinical cognition in the digital age


From the New England Journal of Medicine (open access essay):
Lost in Thought — The Limits of the Human Mind and the Future of Medicine
Ziad Obermeyer, M.D., and Thomas H. Lee, M.D.
In the good old days, clinicians thought in groups; “rounding,” whether on the wards or in the radiology reading room, was a chance for colleagues to work together on problems too difficult for any single mind to solve.

Today, thinking looks very different: we do it alone, bathed in the blue light of computer screens.

Our knee-jerk reaction is to blame the computer, but the roots of this shift run far deeper. Medical thinking has become vastly more complex, mirroring changes in our patients, our health care system, and medical science. The complexity of medicine now exceeds the capacity of the human mind.

Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research.

It’s ironic that just when clinicians feel that there’s no time in their daily routines for thinking, the need for deep thinking is more urgent than ever. Medical knowledge is expanding rapidly, with a widening array of therapies and diagnostics fueled by advances in immunology, genetics, and systems biology. Patients are older, with more coexisting illnesses and more medications. They see more specialists and undergo more diagnostic testing, which leads to exponential accumulation of electronic health record (EHR) data. Every patient is now a “big data” challenge, with vast amounts of information on past trajectories and current states.

All this information strains our collective ability to think. Medical decision making has become maddeningly complex. Patients and clinicians want simple answers, but we know little about whom to refer for BRCA testing or whom to treat with PCSK9 inhibitors. Common processes that were once straightforward — ruling out pulmonary embolism or managing new atrial fibrillation — now require numerous decisions...
"Computers, far from being the problem, are the solution. But using them to manage the complexity of 21st-century medicine will require fundamental changes in the way we think about thinking and in the structure of medical education and research."

'eh?

I am reminded of a prior contrarian post "Are structured data the enemy of health care quality?"

More recently, I've reported on the latest (excessively?) exuberant rah-rah over stuff like AI, NLP, and Robotics. See also here.

More Obermeyer and Lee from NEJM:
The first step toward a solution is acknowledging the profound mismatch between the human mind’s abilities and medicine’s complexity. Long ago, we realized that our inborn sensorium was inadequate for scrutinizing the body’s inner workings — hence, we developed microscopes, stethoscopes, electrocardiograms, and radiographs. Will our inborn cognition alone solve the mysteries of health and disease in a new century? The state of our health care system offers little reason for optimism. 
But there is hope. The same computers that today torment us with never-ending checkboxes and forms will tomorrow be able to process and synthesize medical data in ways we could never do ourselves. Already, there are indications that data science can help us with critical problems...
I found it quite interesting that Lincoln Weed, JD, co-author of the excellent "Medicine in Denial" (now available free in searchable PDF format), was first to comment under the essay.
LINCOLN WEED
Underhill VT
October 04, 2017

Medicine has long been operating in denial of complexity and its solutions
The authors correctly observe, "Algorithms that learn from human decisions will also learn human mistakes." But the authors understate the problem. "The complexity of medicine," they argue, "NOW exceeds the capacity of the human mind" (emphasis added). This is a bit like saying, "The demands of transportation NOW exceed the capacity of horse-powered vehicles." In reality, the complexity of medicine overtook the human mind many decades ago.  Moreover, conventional software engineering demonstrated the potential for tools to cope with complexity and transform medicine long before algorithms driven by machine learning emerged. Medical education, licensure, and practice have been operating in denial of this reality.
 
Interested readers are referred to Weed LL, Physicians of the Future, New Eng. J. Med. 1981;304:903-907; Weed LL, Weed L, Medicine in Denial, CreateSpace, 2011 (a book available in full text at www.world3medicine.org); and a recent guest blog post, https://nlmdirector.nlm.nih.gov/2017/09/05/larry-weeds-legacy-and-clinical-decision-support/. Disclosure: I am a son of and co-author with the late Dr. Larry Weed, author of the article and lead author of the book just cited.
I could not recommend the Weeds' book more highly. I've cited it multiple times, e.g., "Down in the Weeds'," "Back down in the Weeds'," and "Back down in the Weeds': A Complex Systems Science Approach to Healthcare Costs and Quality."

Back to more Obermeyer and Lee:
...Machine learning has already spurred innovation in fields ranging from astrophysics to ecology. In these disciplines, the expert advice of computer scientists is sought when cutting-edge algorithms are needed for thorny problems, but experts in the field — astrophysicists or ecologists — set the research agenda and lead the day-to-day business of applying machine learning to relevant data.
In medicine, by contrast, clinical records are considered treasure troves of data for researchers from nonclinical disciplines. Physicians are not needed to enroll patients — so they’re consulted only occasionally, perhaps to suggest an interesting outcome to predict. They are far from the intellectual center of the work and rarely engage meaningfully in thinking about how algorithms are developed or what would happen if they were applied clinically.
But ignoring clinical thinking is dangerous. Imagine a highly accurate algorithm that uses EHR data to predict which emergency department patients are at high risk for stroke. It would learn to diagnose stroke by churning through large sets of routinely collected data. Critically, all these data are the product of human decisions: a patient’s decision to seek care, a doctor’s decision to order a test, a diagnostician’s decision to call the condition a stroke. Thus, rather than predicting the biologic phenomenon of cerebral ischemia, the algorithm would predict the chain of human decisions leading to the coding of stroke.
Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors.
Ultimately, machine learning in medicine will be a team sport, like medicine itself. But the team will need some new players: clinicians trained in statistics and computer science, who can contribute meaningfully to algorithm development and evaluation. Today’s medical education system is ill prepared to meet these needs. Undergraduate premedical requirements are absurdly outdated. Medical education does little to train doctors in the data science, statistics, or behavioral science required to develop, evaluate, and apply algorithms in clinical practice.
The integration of data science and medicine is not as far away as it may seem: cell biology and genetics, once also foreign to medicine, are now at the core of medical research, and medical education has made all doctors into informed consumers of these fields. Similar efforts in data science are urgently needed. If we lay the groundwork today, 21st-century clinicians can have the tools they need to process data, make decisions, and master the complexity of 21st-century patients.
Big "AI/IA" takeaway for me:
"Algorithms that learn from human decisions will also learn human mistakes, such as overtesting and overdiagnosis, failing to notice people who lack access to care, undertesting those who cannot pay, and mirroring race or gender biases. Ignoring these facts will result in automating and even magnifying problems in our current health system. Noticing and undoing these problems requires a deep familiarity with clinical decisions and the data they produce — a reality that highlights the importance of viewing algorithms as thinking partners, rather than replacements, for doctors."
Indeed. That is a huge and perhaps underappreciated concern in light of the prevalence of errors and omissions in many, many sources of data.
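
To make that concern concrete, here's a little toy simulation of my own (not Obermeyer and Lee's, and deliberately oversimplified): two patient groups with the same true stroke rate, but unequal access to care. Because the EHR "label" only exists when somebody actually seeks care and gets tested, anything trained on coded diagnoses learns the access gap, not the biology.

/* Toy simulation (mine, not the authors') of "predicting the chain of
   human decisions" rather than the underlying biology. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(42);
    const int N = 200000;
    long patients[2]      = {0, 0};   /* group 0: good access to care; group 1: poor access */
    long true_strokes[2]  = {0, 0};
    long coded_strokes[2] = {0, 0};

    for (int i = 0; i < N; i++) {
        int group      = rand() % 2;
        int has_stroke = (rand() % 1000) < 50;                    /* same 5% true rate in both groups */
        int seeks_care = (rand() % 100) < (group == 0 ? 90 : 40); /* the human decision the EHR records */
        patients[group]++;
        true_strokes[group]  += has_stroke;
        coded_strokes[group] += (has_stroke && seeks_care);       /* label exists only if care was sought */
    }

    for (int g = 0; g < 2; g++)
        printf("group %d: true stroke rate %.3f, EHR-coded rate %.3f\n",
               g,
               (double)true_strokes[g]  / patients[g],
               (double)coded_strokes[g] / patients[g]);
    return 0;
}

Run it and the coded rates come out roughly 0.045 versus 0.020 even though the true rates are identical; any algorithm trained on those labels would dutifully "learn" that the poor-access group has less stroke. That's the authors' point in miniature.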


UPDATE

An important new "AI Now" report is out. See "Why AI is Still Waiting for its Ethics Transplant." Much more on this shortly.
__

Below: audio interview with Dr. Obermeyer.
In the vein of all of the foregoing, you might also like my prior riffs on "The Art of Medicine." In addition, see my "Philosophia sana in ars medica sana."

CODA

Save the date.

Link

____________

More to come...