Friday, December 13, 2019

The 2019 AI Now Institute Report

An important read, and not just for us techie people.

https://ainowinstitute.org/AI_Now_2019_Report.pdf
Forty pages of endnotes. Kudos.

See my prior post "Ethical artificial intelligence?"

Stay tuned. A busy topical week.

UPDATE: AI REPORT HEALTH TECH SECTION
2.6 Health
AI technologies today mediate people’s experiences of health in many ways: from popular consumer-based technologies like Fitbits and the Apple Watch, to automated diagnostic support systems in hospitals, to the use of predictive analytics on social-media platforms to predict self-harming behaviors. AI also plays a role in how health insurance companies generate health-risk scores and in the ways government agencies and healthcare organizations allocate medical resources.

Much of this activity aims to improve people’s health and well-being through increased personalization of care, new forms of engagement, and clinical efficiency; AI in health is popularly characterized as an example of “AI for good” and an opportunity to tackle global health challenges. This framing appeals to concerns about the informational complexity of biomedicine, population-level health needs, and the rising costs of healthcare. However, as AI technologies have rapidly moved from controlled lab environments into real-life health contexts, new social concerns are also fast emerging.

The Expanding Scale and Scope of Algorithmic Health Infrastructures

Advances in machine learning techniques and cloud-computing resources have made it possible to classify and analyze large amounts of medical data, allowing the automated and accurate detection of conditions like diabetic retinopathy and forms of skin cancer in medical settings. At the same time, eager to apply AI techniques to health challenges, technology companies have been analyzing everyday experiences like going for a walk, food shopping, sleeping, and menstruating to make inferences and predictions about people’s health behavior and status.
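For readers unfamiliar with how such detection systems work, here is a minimal sketch of a supervised image classifier of the kind that underlies automated diabetic-retinopathy screening. Everything here is an illustrative assumption (a toy PyTorch architecture, a fake image batch); production systems use far larger models trained on huge sets of labeled fundus photographs.

```python
# Minimal, hypothetical sketch of the kind of supervised image classifier
# behind automated diabetic-retinopathy detection. Purely illustrative.
import torch
import torch.nn as nn

class RetinaClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # A deliberately tiny convolutional feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

model = RetinaClassifier()
fundus_batch = torch.randn(4, 3, 224, 224)  # stand-in for retinal images
logits = model(fundus_batch)                # per-image disease scores
print(logits.shape)                         # torch.Size([4, 2])
```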

While such developments may offer future health benefits, little empirical research has been published on how AI will impact patient health outcomes or experiences of care. Furthermore, the data- and cloud-computing resources required to train models for AI health systems have created troubling new opportunities, expanding not only what counts as “health data” but also the boundaries of healthcare. The scope and scale of these new “algorithmic health infrastructures” give rise to a number of social, economic, and political concerns.

The proliferation of corporate-clinical alliances for sharing data to train AI models illustrates these infrastructural impacts. The resulting commercial incentives and conflicts of interest have made ethical and legal issues around health data front-page news. Most recently, a whistle-blower report alerted the public to serious privacy risks stemming from a partnership, known as Project Nightingale, between Google and Ascension, one of the largest nonprofit health systems in the US. The report claimed that patient data transferred between Ascension and Google was not “de-identified.” Google helped migrate Ascension’s infrastructure to its cloud environment, and in return received access to hundreds of thousands of privacy-protected patient medical records to use in developing AI solutions for Ascension and to sell to other healthcare systems.

Google, however, is not alone. Microsoft, IBM, Apple, Amazon, and Facebook, as well as a wide range of healthcare start-ups, have all made lucrative “data partnership” agreements with healthcare organizations (including many university research hospitals and insurance companies) to gain access to health data for the training and development of AI-driven health systems. Several of these have resulted in federal probes and lawsuits around improper use of patient data.

However, even when current regulatory policies like HIPAA are strictly followed, security and privacy vulnerabilities can exist within larger technology infrastructures, presenting serious challenges for the safe collection and use of Electronic Health Record (EHR) data. New research shows that it is possible to accurately link two different de-identified EHR datasets using computational methods, so as to create a more complete history of a patient without using any personal health information of the patient in question. Another recent research study showed that it is possible to create reconstructions of patients’ faces using de-identified MRI images, which could then be identified using facial-recognition systems. Similar concerns have prompted a lawsuit against the University of Chicago Medical Center and Google claiming that Google is “uniquely able to determine the identity of almost every medical record the university released” due to its expertise and resources in AI development. The potential harm from misuse of these new health data capabilities is of grave concern, especially as AI health technologies continue to focus on predicting risks that could impact healthcare access or stigmatize individuals, such as recent attempts to diagnose complex behavioral health conditions like depression and schizophrenia from social-media data.
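The linkage finding is easier to grasp with a toy example. The sketch below (Python, with pandas) joins two hypothetical “de-identified” extracts on shared quasi-identifiers; the datasets, field names, and values are all invented for illustration, not drawn from the cited research.

```python
# Hypothetical illustration of a linkage attack on "de-identified" records.
# Neither dataset contains names or SSNs, yet shared quasi-identifiers
# (birth year, ZIP prefix, visit dates) can still single out individuals.
import pandas as pd

# Two de-identified extracts, e.g., from different health systems.
ehr_a = pd.DataFrame({
    "record_id_a": [101, 102, 103],
    "birth_year": [1954, 1987, 1987],
    "zip3": ["606", "606", "945"],
    "visit_dates": [("2019-01-07", "2019-03-02"),
                    ("2019-02-11",),
                    ("2019-02-11", "2019-05-19")],
})
ehr_b = pd.DataFrame({
    "record_id_b": [9001, 9002, 9003],
    "birth_year": [1954, 1987, 1963],
    "zip3": ["606", "945", "337"],
    "visit_dates": [("2019-01-07", "2019-03-02"),
                    ("2019-02-11", "2019-05-19"),
                    ("2019-04-01",)],
})

# Join on quasi-identifiers; overlapping visit dates strengthen the match.
linked = ehr_a.merge(ehr_b, on=["birth_year", "zip3"])
linked["shared_visits"] = linked.apply(
    lambda r: len(set(r["visit_dates_x"]) & set(r["visit_dates_y"])), axis=1)
matches = linked[linked["shared_visits"] > 0]
print(matches[["record_id_a", "record_id_b", "shared_visits"]])
# A unique match re-links the two records -- yielding a fuller patient
# history -- without any conventional "personal health information."
```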

New Social Challenges for the Healthcare Community
This year a number of reports, papers, and op-eds were published on AI ethics in healthcare. Although mostly generated by physicians and medical ethicists in Europe and North America, these early efforts are important for better understanding the situated uses of AI systems in healthcare.

For example, the European and North American Radiology Societies recently issued a statement that outlines key ethical issues for the field, including algorithmic and automation bias in relation to medical imaging. Radiology is currently one of the medical specialties where AI systems are the most advanced. The statement openly acknowledges how clinicians are reckoning with the increased value and potential harms around health data used for AI systems: “AI has noticeably altered our perception of radiology data—their value, how to use them, and how they may be misused.”

These challenges include possible social harms for patients, such as the potential for clinical decisions to be nudged or guided by AI systems in ways that don’t (necessarily) bring people health benefits, but are in service to quality metric requirements or increased profit. Importantly, misuses also extend beyond the ethics of patient care to consider how AI technologies are reshaping medical organizations themselves (e.g., “radiologist and radiology departments will also be data” for healthcare administrators) and the wider health domain by “blurring the line” between academic research and commercial AI uses of health data.

Importantly, medical groups are also pushing back against the techno-solutionist promises of AI, crafting policy recommendations to address social concerns. For example, the Academy of Medical Royal Colleges (UK) 2019 report, “Artificial Intelligence in Healthcare,” pragmatically states: “Politicians and policymakers should avoid thinking that AI is going to solve all the problems the health and care systems across the UK are facing.” The American Medical Association has likewise been working on an AI agenda for healthcare, adopting the policy “Augmented Intelligence in Health Care” as a framework for thinking about AI in relation to multiple stakeholder concerns, including the needs of physicians, patients, and the broader healthcare community.

There have also been recent calls for setting a more engaged agenda around AI and health. This year Eric Topol, a physician and AI/ML researcher, questioned the promises of AI to fix systemic healthcare issues, like clinician burnout, without the collective action and involvement of healthcare workers. Physician organizing is needed not because doctors should fear being replaced by AI, but to ensure that AI benefits people’s experiences of care. “The potential of A.I. to restore the human dimension in health care,” Topol argues, “will depend on doctors stepping up to make their voices heard.”

More voices are urgently needed at the table—including the expertise of patient groups, family caregivers, community health workers, and nurses—in order to better understand how AI technologies will impact diverse populations and health contexts. We have seen how overly narrow approaches to AI in health have resulted in systems that failed to account for darker skin tones in medical imaging data, and cancer treatment recommendations that could lead to racially disparate outcomes due to training data from predominantly white patients.

Importantly, algorithmic bias in health data cannot always be corrected by gathering more data; it requires understanding the social context of the health data that has already been collected. Recently, Optum’s algorithm for identifying “high-risk” patients in the US based its scores on the number of medical services a person used, but didn’t account for the numerous socioeconomic reasons people forgo needed care, such as being underinsured or being unable to take time off from work. With long histories of addressing such social complexities, research from fields like medical sociology and anthropology, nursing, human-computer interaction, and public health is needed to protect against the implementation of AI systems that (even when designed with good intentions) worsen health inequities.
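The Optum case illustrates a general proxy-label problem, sketched below with invented numbers: if “risk” is scored from past utilization, two patients with identical clinical need receive very different scores whenever one faces barriers to care. The patients, scoring formula, and figures are all hypothetical.

```python
# Hypothetical illustration of proxy-label bias: scoring "risk" by past
# utilization understates need for patients who face barriers to care.
patients = [
    # (id, chronic_conditions, visits_last_year, barriers_to_access)
    ("A", 4, 12, False),  # high need, good access -> heavy utilization
    ("B", 4, 3,  True),   # equal need, underinsured / can't take time off
    ("C", 1, 2,  False),  # low need
]

def utilization_risk_score(visits: int) -> float:
    """Proxy score: more past services used = 'higher risk'."""
    return visits / 12.0

for pid, need, visits, barriers in patients:
    score = utilization_risk_score(visits)
    print(f"patient {pid}: true need={need} conditions, "
          f"proxy score={score:.2f}, access barriers={barriers}")

# Patients A and B have identical clinical need, yet B's score is a
# quarter of A's -- the proxy encodes access to care, not health.
```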
_____________

More to come...
