
Tuesday, August 6, 2019

A.I. for the masses?

What could possibly go wrong?

In my latest snailmail issue of Science Magazine:
Bringing machine learning to the masses

Yang-Hui He, a mathematical physicist at the University of London, is an expert in string theory, one of the most abstruse areas of physics. But when it comes to artificial intelligence (AI) and machine learning, he was naïve. “What is this thing everyone is talking about?” he recalls thinking. Then his go-to software program, Mathematica, added machine learning tools that were ready to use, no expertise required. He began to play around, and realized AI might help him choose the plausible geometries for the countless multidimensional models of the universe that string theory proposes.

In a 2017 paper, He showed that, with just a few extra lines of code, he could enlist the off-the-shelf AI to greatly speed up his calculations. “I don't have to get down to the nitty gritty,” He says. Now, He says he is “on a crusade” to get mathematicians and physicists to use machine learning, and gives about 20 talks a year on the power of these new user-friendly versions.
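He's actual work was done in Mathematica's built-in machine learning tools; purely as an illustration of the "few extra lines of code" point, here is a rough Python/scikit-learn analogue (the dataset and model choice are mine, not his) showing how little code an off-the-shelf classifier now requires:

```python
# Hypothetical sketch only: an "off-the-shelf" classifier in a few lines,
# analogous in spirit to what prepackaged ML tools offer non-experts.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No nitty-gritty: default hyperparameters, one call to fit.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(round(model.score(X_test, y_test), 2))
```

The point is not the particular library but that the barrier to entry has collapsed: the whole pipeline above is shorter than the paragraph describing it.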

AI used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don't need to code at all. Insights are just a drag-and-drop away. Computational power is no longer much of a limiting factor in science, says Juliana Freire, a computer scientist at New York University in New York City who is developing a ready-to-use AI tool with funding from the Defense Advanced Research Projects Agency (DARPA). “To a large extent, the bottleneck to scientific discoveries now lies with people.”…

The AI tools are more than mere toys for nonprogrammers, says Tim Kraska, a computer scientist at the Massachusetts Institute of Technology in Cambridge who leads Northstar, a machine learning tool supported by the $80 million DARPA program called Data-Driven Discovery of Models. Wade Shen, who leads the DARPA program, says the tools can outperform data scientists at building models, and they're even better with a subject matter expert in the loop.

In a demo for Science, Kraska showed how easy it was to use Northstar's drag-and-drop interface for a serious problem. He loaded a freely available database of 60,000 critical care patients that includes details on their demographics, lab tests, and medications. In a couple of clicks, Kraska created several heart failure prediction models, which quickly identified risk factors for the condition. One model fingered ischemia—a poor blood supply to the heart—which doctors know is often codiagnosed with heart failure. That was “almost like cheating,” Kraska said, so he dragged ischemia off the list of inputs and the models immediately began to retrain to look for other predictive factors.
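Northstar's interface is drag-and-drop, not code, and I have no access to its internals — but the workflow Kraska demonstrated can be sketched in plain Python on invented data: train a risk model, see the dominant predictor, remove it, and retrain so other factors surface.

```python
# Illustrative sketch only -- not Northstar's code, and the data are
# synthetic, not the critical-care database from the demo.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
ischemia = rng.integers(0, 2, n)          # 0/1 diagnosis flag
age = rng.normal(65, 10, n)
# Synthetic outcome: heart failure heavily co-occurs with ischemia.
logit = 3.0 * ischemia + 0.03 * (age - 65) - 1.5
heart_failure = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([ischemia, age])
model = LogisticRegression().fit(X, heart_failure)
print(dict(zip(["ischemia", "age"], model.coef_[0].round(2))))

# "Drag ischemia off the list of inputs" and retrain on what remains,
# forcing the model to look for other predictive factors.
X2 = np.column_stack([age])
model2 = LogisticRegression().fit(X2, heart_failure)
print(dict(zip(["age"], model2.coef_[0].round(2))))
```

The first model's coefficient on ischemia dwarfs everything else — Kraska's "almost like cheating" — and only after it is removed does the weaker signal get a chance to carry the prediction.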

Maciej Baranski, a physicist at the Singapore-MIT Alliance for Research & Technology Centre, says the group plans to use Northstar to explore cell therapies for fighting cancer or replacing damaged cartilage. The system will help biologists combine the optical, genetic, and chemical data they've collected from cells to predict their behavior…

The trend toward off-the-shelf AI has risks. Machine learning algorithms are often called black boxes, their inner workings shrouded in mystery, and the prepackaged versions can be even more opaque. Novices who don't bother to look under the hood might not recognize problems with their data sets or models, leading to overconfidence in biased or inaccurate results.
But Kraska says Northstar has a safeguard against misuse: more AI. It includes a module that anticipates and counteracts typical rookie mistakes, such as assuming any pattern an algorithm finds is statistically significant. “In the end it actually tries to mimic what a data scientist would do,” he says.
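The specific rookie mistake named here — assuming any pattern an algorithm finds is statistically significant — is easy to demonstrate. The sketch below (my own toy example, not anything from Northstar) scans pure-noise features for correlations with a pure-noise target: several look "significant" at p < 0.05 by chance alone, and a simple multiple-comparisons correction (Bonferroni, one of many safeguards a data scientist would reach for) removes the false alarms.

```python
# Toy demonstration of the multiple-comparisons trap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_samples, n_features = 100, 200
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)  # the target is pure noise

pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(n_features)])
naive_hits = int((pvals < 0.05).sum())              # ~10 expected by chance
corrected_hits = int((pvals < 0.05 / n_features).sum())  # Bonferroni

print(naive_hits, corrected_hits)
```

A novice eyeballing the naive count would report a handful of "discoveries" in data that contains none; the corrected threshold tells the true story.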
"The trend toward off-the-shelf AI has risks. Machine learning algorithms are often called black boxes, their inner workings shrouded in mystery...Novices who don't bother to look under the hood might not recognize problems with their data sets or models, leading to overconfidence in biased or inaccurate results."
I'll re-post something from last year:

Another "Holy Shit" book. Yikes.

ALMOST two decades ago, when I wrote the preface to my book Causality (2000), I made a rather daring remark that friends advised me to tone down. “Causality has undergone a major transformation,” I wrote, “from a concept shrouded in mystery into a mathematical object with well-defined semantics and well-founded logic. Paradoxes and controversies have been resolved, slippery concepts have been explicated, and practical problems relying on causal information that long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Put simply, causality has been mathematized.”

Reading this passage today, I feel I was somewhat shortsighted. What I described as a “transformation” turned out to be a “revolution” that has changed the thinking in many of the sciences. Many now call it “the Causal Revolution,” and the excitement that it has generated in research circles is spilling over to education and applications. I believe the time is ripe to share it with a broader audience.

This book strives to fulfill a three-pronged mission: first, to lay before you in nonmathematical language the intellectual content of the Causal Revolution and how it is affecting our lives as well as our future; second, to share with you some of the heroic journeys, both successful and failed, that scientists have embarked on when confronted by critical cause-effect questions.

Finally, returning the Causal Revolution to its womb in artificial intelligence, I aim to describe to you how robots can be constructed that learn to communicate in our mother tongue— the language of cause and effect. This new generation of robots should explain to us why things happened, why they responded the way they did, and why nature operates one way and not another. More ambitiously, they should also teach us about ourselves: why our mind clicks the way it does and what it means to think rationally about cause and effect, credit and regret, intent and responsibility…

Pearl, Judea; Mackenzie, Dana. The Book of Why: The New Science of Cause and Effect (Kindle Locations 47-61). Basic Books. Kindle Edition.
This one is gonna be fun. Stay tuned. From the Atlantic interview article: As Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently...
"If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do."
In short, being unreflectively "data-driven" (that fashionable tech cliché) is both naive and a cop-out. (Note: some of this will surely go -- at least tangentially -- to the "information ethics" topic of my prior post.)
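Pearl's "you are smarter than your data" point is nicely captured by Simpson's paradox, which The Book of Why discusses at length. The sketch below uses the classic kidney-stone-style numbers (a standard textbook example, not taken from the book's text): a treatment looks worse in the pooled data yet better in every subgroup, and nothing in the data alone — only causal knowledge about how patients were assigned — tells you which view to believe.

```python
# Simpson's paradox on classic textbook-style counts.
# Each record: (severity group, treated?, recovered, total)
records = [
    ("mild",   True,   81,  87), ("mild",   False, 234, 270),
    ("severe", True,  192, 263), ("severe", False,  55,  80),
]

def rate(rows):
    recovered = sum(r for (_, _, r, _) in rows)
    total = sum(t for (_, _, _, t) in rows)
    return recovered / total

treated = [r for r in records if r[1]]
untreated = [r for r in records if not r[1]]

# Pooled: treatment looks worse...
print(f"pooled: treated {rate(treated):.2f} vs untreated {rate(untreated):.2f}")
# ...yet within every severity group, treatment looks better.
for g in ("mild", "severe"):
    tr = rate([r for r in treated if r[0] == g])
    un = rate([r for r in untreated if r[0] == g])
    print(f"{g}: treated {tr:.2f} vs untreated {un:.2f}")
```

The arithmetic is trivial; deciding whether to trust the pooled or the stratified comparison requires a causal model of why sicker patients got the treatment more often — exactly the thing "data-driven" analysis, by itself, cannot supply.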

See also my 2018 post "Data Science?"


This is a hoot:
Artificial intelligence is not intelligent enough or, more exactly, not imaginative enough or creative enough to make us resign thinking. Tests for artificial intelligence are not rigorous enough. It does not take intelligence to meet the Turing test – impersonating a human interlocutor – or win a game of chess or general knowledge. You will know that intelligence is artificial only when your sexbot says, ‘No.’

Fernández-Armesto, Felipe. Out of Our Minds. University of California Press. Kindle Edition, location 7720. 
This book, wow!
The speed and reach of the computer revolution raised the question of how much further it could go. Hopes and fears intensified of machines that might emulate human minds. Controversy grew over whether artificial intelligence was a threat or a promise. Smart robots excited boundless expectations. In 1950, Alan Turing, the master cryptographer whom artificial intelligence researchers revere, wrote, ‘I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’ The conditions Turing predicted have not yet been met, and may be unrealistic. Human intelligence is probably fundamentally unmechanical: there is a ghost in the human machine. But even without replacing human thought, computers can affect and infect it. Do they corrode memory, or extend its access? Do they erode knowledge when they multiply information? Do they expand networks or trap sociopaths? Do they subvert attention spans or enable multi-tasking? Do they encourage new arts or undermine old ones? Do they squeeze sympathies or broaden minds? If they do all these things, where does the balance lie? We have hardly begun to see how cyberspace can change the psyche. [Ibid, location 7441]

Amazon recommended this book to me:

Only $4.99 Kindle price. 5 star reviews. I precipitously did 1-Click.

My Bad. It's awful. Reads like it was written by A.I.

Machine learning is one in all the quickest growing areas of technology, with far-reaching applications. This textbook is intended to give a proper introduction of machine learning, and all the algorithmic paradigms that machine learning offers, in a principled way. The book provides an intensive hypothesis of the basic concepts underlying machine learning and also the mathematical derivations that remodel these principles into practical algorithms. After a presentation of the basics of the sector, the book covers a wide range of central topics that have never been addressed by previous textbooks. These embody a discussion of the process complexity of learning and also the ideas of convexity and stability; major algorithmic paradigms together with stochastic gradient descent, neural networks, and structured output learning; and rising theoretical ideas like the PAC-Bayes approach and compression-based bounds. Designed for a starting graduate or refined student course, the text makes the elemental and algorithms of machine learning accessible to non-expert readers and pupils of arithmetics, engineering, statistics and computer science.

Samelson, Steven. Machine Learning: The Absolute Complete Beginner’s Guide to Learn and Understand Machine Learning From Beginners, Intermediate, Advanced, To Expert Concepts (pp. 1-2). Kindle Edition.
Seriously? Need I really elaborate? Got played this time.


Reported at TechCrunch:
The UK’s National Health Service is launching an AI lab

The UK government has announced it’s rerouting £250M (~$300M) in public funds for the country’s National Health Service (NHS) to set up an artificial intelligence lab that will work to expand the use of AI technologies within the service.

The Lab, which will sit within a new NHS unit tasked with overseeing the digitisation of the health and care system (aka: NHSX), will act as an interface for academic and industry experts, including potentially startups, encouraging research and collaboration with NHS entities (and data) — to drive health-related AI innovation and the uptake of AI-driven healthcare within the NHS.

Last fall the then-new-in-post health secretary, Matt Hancock, set out a tech-first vision of future healthcare provision — saying he wanted to transform NHS IT so it can accommodate “healthtech” to support “preventative, predictive and personalised care”.

In a press release announcing the AI lab, the Department of Health and Social Care suggested it would seek to tackle “some of the biggest challenges in health and care, including earlier cancer detection, new dementia treatments and more personalised care”.

Other suggested areas of focus include:

  • improving cancer screening by speeding up the results of tests, including mammograms, brain scans, eye scans and heart monitoring
  • using predictive models to better estimate future needs of beds, drugs, devices or surgeries
  • identifying which patients could be more easily treated in the community, reducing the pressure on the NHS and helping patients receive treatment closer to home
  • identifying patients most at risk of diseases such as heart disease or dementia, allowing for earlier diagnosis and cheaper, more focused, personalised prevention
  • building systems to detect people at risk of post-operative complications, infections or requiring follow-up from clinicians, improving patient safety and reducing readmission rates
  • upskilling the NHS workforce so they can use AI systems for day-to-day tasks
  • inspecting algorithms already used by the NHS to increase the standards of AI safety, making systems fairer, more robust and ensuring patient confidentiality is protected
  • automating routine admin tasks to free up clinicians so more time can be spent with patients...
Have to wonder what Seamus O'Mahony would say?

More to come...
