Monday, July 20, 2015

AI vs IA: At the cutting edge of IT R&D

In truth, I've not paid all that much ongoing attention to developments in the really leading-edge IT R&D space. People have been waxing rhapsodic about the ostensibly ever-incipient promise of "Artificial Intelligence" (AI) since my code-writing days of the 1980s. Significant advances have always seemed to stay just around the corner.

My concerns have been more mundane, mainly commercial software "usability" and RDBMS design efficiency and effectiveness (i.e., "software QA"). As EHRs have evolved to provide at least rudimentary Clinical Decision Support ("CDS") functionality, such capability is really "IA" (Intelligence Augmentation) rather than "AI" (Artificial Intelligence). True, given advances in "neural net" software development, those separate concepts are beginning to blur, but most of the applied work in the area still focuses on IA, not AI.

Review my April post section regarding the Weeds' seminal book "Medicine in Denial."
Essential to health care reform are two elements: standards of care for managing clinical information (analogous to accounting standards for managing financial information), and electronic tools designed to implement those standards. Both elements are external to the physician’s mind. Although in large part already developed, these elements are virtually absent from health care. Without these elements, the physician continues to be relied upon as a repository of knowledge and a vehicle for information processing. The resulting disorder blocks health information technology from realizing its enormous potential, and deprives health care reform of an essential foundation. In contrast, standards and tools designed to integrate detailed patient data with comprehensive medical knowledge make it possible to define the data and knowledge taken into account for decision making. Similarly, standards for organizing patient data over time in medical records make it possible to trace connections among the data collected, the patient’s problems, the practitioner’s assessments, the actions taken, the patient’s progress, the patient’s behaviors and ultimate outcomes...
Larry Weed's digital POMR (Problem Oriented Medical Record) focus has clearly been that of applied IA.

My May 22nd post "The Robot will see you now..." cites Martin Ford's bracing new book "Rise of the Robots." That book takes us further in the direction of the implications of increasing (and sometimes troubling) "AI."

Well, my new Harper's issue arrived.

Interesting article therein.
The Transhuman Condition
By John Markoff, from Machines of Loving Grace, out this month from Ecco Books. Markoff has been a technology and business reporter for the New York Times since 1988.
I look forward to getting this book when it's released next month.

Some excerpts from the Harper's piece:
Bill Duvall grew up on the peninsula south of San Francisco. The son of a physicist who was involved in classified research at Stanford Research Institute (SRI), a military-oriented think tank, Duvall attended UC Berkeley in the mid-1960s; he took all the university’s computer-programming courses and dropped out after two years. When he joined the think tank where his father worked, a few miles from the Stanford campus, he was assigned to the team of artificial-intelligence researchers who were building Shakey.

Although Life magazine would later dub Shakey the first “electronic person,” it was basically a six-foot stack of gear, sensors, and motorized wheels that was tethered — and later wirelessly connected — to a nearby mainframe. Shakey wasn’t the world’s first mobile robot, but it was the first that was intended to be truly autonomous. It was designed to reason about the world around it, to plan its own actions, and to perform tasks. It could find and push objects and move in a planned way in its highly structured world.

At both SRI and the nearby Stanford Artificial Intelligence Laboratory (SAIL), which was founded by John McCarthy in 1962, a tightly knit group of researchers was attempting to build machines that mimicked human capabilities. To this group, Shakey was a striking portent of the future; they believed that the scientific breakthrough that would enable machines to act like humans was coming in just a few short years. Indeed, among the small community of AI researchers who were working on both coasts during the mid-Sixties, there was virtually boundless optimism...

Late on the evening of October 29, 1969, Duvall connected the NLS system in Menlo Park, via a data line leased from the phone company, to a computer controlled by another young hacker in Los Angeles. It was the first time that two computers connected over the network that would become the Internet. Duvall’s leap from the Shakey laboratory to Engelbart’s NLS made him one of the earliest people to stand on both sides of a line that even today distinguishes two rival engineering communities. One of these communities has relentlessly pursued the automation of the human experience — artificial intelligence. The other, human-computer interaction — what Engelbart called intelligence augmentation — has concerned itself with “man-machine symbiosis.” What separates AI and IA is partly their technical approaches, but the distinction also implies differing ethical stances toward the relationship of man to machine...
...[T]oday, AI is beginning to meet some of the promises made for it by SAIL and SRI researchers half a century ago, and artificial intelligence is poised to have an impact on society that may be greater than the effect of personal computing and the Internet...

...[T]he falling costs of sensors, computer processing, and information storage, along with the gradual shift away from symbolic logic and toward more pragmatic statistical and machine-learning algorithms, have made it possible for engineers and programmers to create computerized systems that see, speak, listen, and move around in the world.

As a result, AI has been transformed from an academic curiosity into a force that is altering countless aspects of the modern world. This has created an increasingly clear choice for designers — a choice that has become philosophical and ethical, rather than simply technical: will we design humans into or out of the systems that transport us, that grow our food, manufacture our goods, and provide our entertainment?

As computing and robotics systems have grown from laboratory curiosities into the fabric that weaves together modern life, the AI and IA communities have continued to speak past each other. The field of human-computer interface has largely operated within the philosophical framework originally set down by Engelbart — that computers should be used to assist humans. In contrast, the artificial-intelligence community has for the most part remained unconcerned with preserving a role for individual humans in the systems it creates...

...Google mined the wealth of human knowledge and returned it in searchable form to society, while reserving for itself the right to monetize the results.

Since it established its search box as the world’s most powerful information monopoly, Google has yo-yoed between IA and AI applications and services. The ill-fated Google Glass was intended as a “reality-augmentation system,” while the company’s driverless-car project represents a pure AI — replacing human agency and intelligence with a machine. Recently, Google has undertaken what it loosely identifies as “brain” projects, which suggests a new wave of AI...

...[I]t is becoming increasingly possible — and “rational” — to design humans out of systems for both performance and cost reasons. In manufacturing, where robots can directly replace human labor, the impact of artificial intelligence will be easily visible. In other cases the direct effects will be more difficult to discern. Winston Churchill said, “We shape our buildings, and afterwards our buildings shape us.” Today our computational systems have become immense edifices that define the way we interact with our society...

AI and machine-learning algorithms have already led to transformative applications in areas as diverse as science, manufacturing, and entertainment. Machine vision and pattern recognition have been essential to improving quality in semiconductor design. Drug-discovery algorithms have systematized the creation of new pharmaceuticals. The same breakthroughs have also brought us increased government surveillance and social-media companies whose business model depends on invading privacy for profit.

Optimists hope that the potential abuses of our computer systems will be minimized if the application of artificial intelligence, genetic engineering, and robotics remains focused on humans rather than algorithms. But the tech industry has not had a track record that speaks to moral enlightenment. It would be truly remarkable if a Silicon Valley company rejected a profitable technology for ethical reasons. Today, decisions about implementing technology are made largely on the basis of profitability and efficiency. What is needed is a new moral calculus.

Any import here with respect to the topic of my prior post "Personalized Medicine" and "Omics" -- HIT and QA considerations? My earlier citations of Nicholas Carr's book "The Glass Cage"? My citations of Morozov's compelling book "To Save Everything, Click HERE"? Peter Thiel's book "Zero to One"? Simon Head's "Mindless"?


Meanwhile, in the Health IT trenches:
Physicians Vent EHR Frustrations
Lena J. Weiner, for HealthLeaders Media, July 21, 2015

The American Medical Association gives physicians a platform to air their grievances about electronic health records systems, but the technology is here to stay, says an executive with the College of Healthcare Information Management Executives.

Longstanding physician dissatisfaction over electronic health record systems, Meaningful Use, and the federal regulations behind them lit up a town hall-style meeting Monday night, hosted in Atlanta by the American Medical Association and the Medical Association of Georgia and webcast live.

Rep. Tom Price, MD (R-GA), formerly medical director of the orthopedic clinic at Grady Memorial Hospital in Atlanta and co-host of the town hall, kicked things off with one specific complaint of doctors: "inconsistency is a problem." The event was part of the AMA's Break the Red Tape campaign, which aims to postpone the finalization of MU Stage 3 regulations.

AMA President Steven J. Stack, MD, told attendees that the meeting was an opportunity for them to be heard. "This is not for you to hear me talking to you, but for me to hear you talking to me... Has workflow in your office changed?" he goaded the crowd. At least 80% raised their hands. A sole hand remained raised when Stack asked if the change was for the better.

Almost immediately, physicians gave voice to the barriers to care they say are caused by electronic health records systems. Over the course of the 90-minute meeting they raised concerns over reduced productivity, the security of private patient medical records, interoperability, and government regulation.

"We're removing the science from medicine," said one physician who described having to check "yes" and "no" boxes rather than being able to note subtle nuances his patients reported.

"We're removing the science from medicine," said one physician who described having to check "yes" and "no" boxes rather than being able to note subtle nuances his patients reported.

"Thank God I learned to type in high school—I never thought I'd use it," said another, explaining that she now has to make sure every employee she hires can type, regardless of the job for which they are hired.

Some physicians tweeted their frustrations during the meeting, using the hashtag #fixEHR...
"We're removing the science from medicine"

Well, coding and categorical check-box documentation are inescapably what I call "lossy compression." The extent to which that "nuance loss" is "unscientific" is not all that clear, though. Subjective "nuance impressions" may really fall more under the "Art of Medicine."
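The "lossy compression" point can be made concrete with a minimal sketch. The free-text note and the coded field names below are invented for illustration; the point is only that once a nuanced narrative is reduced to categorical check-boxes, the nuance cannot be recovered from the coded record.

```python
# Hypothetical illustration of "lossy compression" in check-box documentation.
# The note text and field names are made up; no real coding standard is implied.

note = ("Patient reports intermittent chest tightness, mostly after "
        "climbing stairs, improving with rest; anxious about recurrence.")

# The coded (check-box) representation collapses the narrative into yes/no fields.
coded = {
    "chest_pain": "yes",        # loses "intermittent tightness after exertion"
    "relieved_by_rest": "yes",
    "anxiety_reported": "yes",
}

# "Decompressing" the coded record cannot reproduce the original note.
reconstructed = ", ".join(f"{k}={v}" for k, v in coded.items())
print(reconstructed)  # chest_pain=yes, relieved_by_rest=yes, anxiety_reported=yes
print(len(note) > len(reconstructed))  # True -- the compression discarded detail
```

The trade-off is the familiar one from data compression: the coded form is compact, computable, and aggregable across patients, but it is not invertible back to the clinician's original observation.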


More to come...
