
Thursday, November 9, 2017

Artificial Intelligence and Ethics


I was already teed up for the topic of this post, but serendipitously just ran across this interesting piece over at Naked Capitalism.
Why You Should NEVER Buy an Amazon Echo or Even Get Near One
by Yves Smith


At the Philadelphia meetup, I got to chat at some length with a reader who had a considerable high end IT background, including at some cutting-edge firms, and now has a job in the Beltway where he hangs out with military-surveillance types. He gave me some distressing information on the state of snooping technology, and as we’ll get to shortly, is particularly alarmed about the new “home assistants” like Amazon Echo and Google Home.

He pointed out that surveillance technology is more advanced than most people realize, and that lots of money and “talent” continues to be thrown at it. For instance, some spooky technologies are already decades old…
Read all of it, including the numerous comments.

"Your Digital Mosaic"

My three prior posts have returned to my episodic riffing on AI and robotics topics: see here, here, and here.

Earlier this week I ran across this article over at Wired:
WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results…
Again, I highly recommend you read all of it.

That led me to the "AI Now 2017 Report." (pdf)


The AI Now authors' 36-page report examines in heavily documented detail (191 footnotes) four topical areas of AI applications and their attendant ethical issues: [1] Labor and Automation; [2] Bias and Inclusion; [3] Rights and Liberties; and [4] Ethics and Governance.

From the Institute's website:
Rights & Liberties
As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.

Labor & Automation
Automation and early-stage artificial intelligence systems are already changing the nature of employment and working conditions in multiple sectors. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's implications for labor and work – examining who benefits and who bears the cost of these rapid changes.

Bias & Inclusion
Data reflects the social, historical and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.

Safety & Critical Infrastructure
As artificial intelligence systems are introduced into our core infrastructures, from hospitals to the power grid, the risks posed by errors and blind spots increase. AI Now studies the way in which AI and related technologies are being applied within these domains, working to understand possibilities for safe and responsible AI integration.
The 2017 Report proffers ten policy recommendations:
1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g. “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards…

2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings…

3 — After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized…

4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion…

5 — Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work…

6 — Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain — such as education, healthcare or criminal justice — legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines…

7 — Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision…

8 — Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces…

9 — The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms…

10 — Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles…
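Recommendation 2 is the one that most readily translates into engineering practice, so here is a minimal, hypothetical sketch in Python of the sort of pre-release check it gestures at: compare a model's favorable-decision rates across groups on held-out data and flag large gaps. The data, the group names, and the "four-fifths" threshold below are all made up for illustration; real bias audits are far more involved than this.

```python
# Hypothetical pre-release disparity check (toy illustration only).
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of favorable (1) predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the highest rate
    (a crude analogue of the 'four-fifths rule' used in hiring audits)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Toy, made-up evaluation data: 1 = favorable model decision.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                  # {'A': 0.6, 'B': 0.4}
print(disparity_flags(rates)) # {'A': False, 'B': True} -> group B flagged for review
```

Nothing about that little check is hard; the report's point is that doing it, documenting it, and publishing the methods and results should be a routine, public norm rather than an afterthought.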
I printed the report out and went old-school on it with yellow highlighter and red pen.


It is excellent. A must-read, IMO. It remains to be seen, though, how much traction these proposals get in a tech world of transactionalism and "proprietary IP/data" everywhere.
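Recommendation 5's call for tracking dataset provenance is similarly concrete. Below is a hedged sketch of what a machine-readable provenance record might contain: who collected the data, when, from what sources, and what has been done to it since. The field names and example values are entirely hypothetical, not anything prescribed by AI Now.

```python
# Hypothetical dataset provenance record (illustrative fields, made-up values).
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DatasetProvenance:
    name: str
    version: str
    collected_by: str
    collection_dates: str
    sources: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)        # under-represented populations
    preprocessing_steps: list = field(default_factory=list)
    downstream_uses: list = field(default_factory=list)   # models trained on this data
    last_reviewed: str = str(date.today())

record = DatasetProvenance(
    name="loan_applications_sample",               # made-up example
    version="2.1",
    collected_by="Acme Analytics (hypothetical)",
    collection_dates="2015-2016",
    sources=["branch application forms", "third-party credit bureau feed"],
    known_gaps=["rural applicants under-sampled"],
    preprocessing_steps=["deduplicated", "ZIP codes coarsened to 3 digits"],
    downstream_uses=["credit-risk model v4"],
)

print(json.dumps(asdict(record), indent=2))  # versioned record shipped alongside the data
```

Ship something like that with every training set, keep it versioned, and recommendation 5's "life cycle" tracking starts to look tractable.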

Given that my grad degree is in "applied ethics" ("Ethics and Policy Studies"), I am all in on these ideas. The "rights and liberties" stuff was particularly compelling for me. I've had a good run at privacy/technology issues on another of my blogs. See my post "Clapp Trap" and its antecedent "Privacy and the 4th Amendment amid the 'War on Terror'."


"DIGITAL EXHAUST"

Another recent read on the topic.

Introduction

This book is for everyone who wants to understand the implications of the Big Data phenomenon and the Internet Economy; what it is, why it is different, the technologies that power it, how companies, governments, and everyday citizens are benefiting from it, and some of the threats it may present to society in the future.

That’s a pretty tall order, because the companies and technologies we explore in this book— the huge Internet tech groups like Google and Yahoo!, global retailers like Walmart, smartphone and tablet producers like Apple, the massive online shopping groups like Amazon or Alibaba, or social media and messaging companies like Facebook or Twitter— are now among the most innovative, complex, fast-changing, and financially powerful organizations in the world. Understanding the recent past and likely future of these Internet powerhouses helps us to appreciate where digital innovation is leading us, and is the key to understanding what the Big Data phenomenon is all about. Important, too, are the myriad innovative frameworks and database technologies— NoSQL, Hadoop, or MapReduce— that are dramatically altering the way we collect, manage, and analyze digital data…


Neef, Dale (2014-11-05). Digital Exhaust: What Everyone Should Know About Big Data, Digitization and Digitally Driven Innovation (FT Press Analytics) (Kindle Locations 148-157). Pearson Education. Kindle Edition.
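Since the excerpt name-drops MapReduce, a quick aside for non-programmers: the core idea is simpler than the jargon suggests. The toy word-count sketch below mimics the map/reduce pattern in plain Python, for intuition only; it is not Hadoop's (or any framework's) actual API.

```python
# Toy MapReduce-style word count: "map" emits key/value pairs, "reduce" aggregates them.
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    """Emit (word, 1) for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    """Group pairs by word and sum the counts."""
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["big data big promises", "big data big risks"]   # made-up input
print(dict(reduce_phase(map_phase(docs))))
# {'big': 4, 'data': 2, 'promises': 1, 'risks': 1}
```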
UPDATE

From MIT Technology Review:
Despite All Our Fancy AI, Solving Intelligence Remains “the Greatest Problem in Science”
Autonomous cars and Go-playing computers are impressive, but we’re no closer to machines that can think like people, says neuroscientist Tomaso Poggio.


Recent advances that let computers play board games and drive cars haven’t brought the world any closer to true artificial intelligence.


That’s according to Tomaso Poggio, a professor at the McGovern Institute for Brain Research at MIT who has trained many of today’s AI leaders.


“Is this getting us closer to human intelligence? I don’t think so,” the neuroscientist said at MIT Technology Review’s EmTech conference on Tuesday.

Poggio leads a program at MIT that’s helped train several of today’s AI stars, including Demis Hassabis, cofounder of DeepMind, and Amnon Shashua, cofounder of the self-driving tech company Mobileye, which was acquired by Intel earlier this year for $15.3 billion.

“AlphaGo is one of the two main successes of AI, and the other is the autonomous-car story,” he says. “Very soon they’ll be quite autonomous.”


But Poggio said these programs are no closer to real human intelligence than before. Responding to a warning by physicist Stephen Hawking that AI could be more dangerous than nuclear weapons, Poggio called that “just hype.”…
BTW - apropos, see The MIT Center for Brains, Minds, and Machines.

"The Center for Brains, Minds and Machines (CBMM)
is a multi-institutional NSF Science and Technology Center
dedicated to the study of intelligence - how the brain produces intelligent
behavior and how we may be able to replicate intelligence in machines."

Interesting. See their (dreadful video quality) YouTube video "Discussion Panel: the Ethics of Artificial Intelligence."

In sum, I'm not sure that difficulty achieving "general AI" -- "one that can think for itself and solve many kinds of novel problems" -- is really the central issue going to applied ethics concerns. Again, read the AI Now 2017 Report.

WHAT OF "ETHICS"? ("MORAL PHILOSOPHY")

A couple of good, succinct resources for you, here and here. My elevator speech take on "ethics" is that it is not about a handy "good vs. bad cookbook." It goes to honest (albeit frequently difficult) moral deliberation involving critical thinking, deliberation that takes into account "values" that pass rational muster -- surpassing the "Appeal to Tradition" fallacy.

UPDATE

Two new issues of my hardcopy Science Magazine showed up in the snail mail today. This one in particular caught my attention.

What is consciousness, and could machines have it?

Abstract
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain?

I question here the scientific and strategic underpinnings of the runaway enthusiasm for industrial-scale projects at the interface between “wet” (biology) and “hard” (physics, microelectronics and computer science) sciences. Rather than presenting the achievements and hopes fueled by big-data–driven strategies—already covered in depth in special issues of leading journals—I focus on three major issues: (i) Is the industrialization of neuroscience the soundest way to achieve substantial progress in knowledge about the brain? (ii) Do we have a safe “roadmap,” based on a scientific consensus? (iii) Do these large-scale approaches guarantee that we will reach a better understanding of the brain?

This “opinion” paper emphasizes the contrast between the accelerating technological development and the relative lack of progress in conceptual and theoretical understanding in brain sciences. It underlines the risks of creating a scientific bubble driven by economic and political promises at the expense of more incremental approaches in fundamental research, based on a diversity of roadmaps and theory-driven hypotheses. I conclude that we need to identify current bottlenecks with appropriate accuracy and develop new interdisciplinary tools and strategies to tackle the complexity of brain and mind processes…
Interesting stuff. Stay tuned.
__

CODA

Save the date.

Link
____________

More to come...
