“How much should we matter to an Ethical AI?” by Cansu Canca

Abstract:
When we try to navigate the potential impacts of artificial intelligence (AI), we invariably ask: Is AI for the good? Often, implicit in the question is another: Is AI good for humans and humanity? This vagueness is captured by the interchangeable use of “ethical AI” and “human-centered AI”. But these two AI systems, and the two questions posed above, might differ significantly. While an ethical AI is, by definition, for the good, it might not be good for humans and humanity in all circumstances. Put differently, a human-centered AI might not be an ethical AI.
As we design complex AI systems that assist us in our tasks and decision-making, we face the daunting task of integrating value trade-offs into these systems. Going forward, as AI systems grow more robust, making more complex or autonomous decisions, these value trade-offs will increasingly matter in how AI systems weigh competing demands. More specifically, AI systems will have to weigh the value of human well-being against the value of other beings (including the well-being of AI agents, if and when AI systems acquire moral status). How, then, should we design AI systems that accurately take into account the value of ourselves and of other beings?

Date: 6 November 2019, Wednesday
Time: 3pm to 5pm
Venue: Philosophy Meeting Room (AS3-05-23)

About the Speaker:
Cansu Canca is a philosopher and the founder/director of the AI Ethics Lab, an initiative that facilitates interdisciplinary research and provides guidance to researchers and practitioners. She holds a Ph.D. in philosophy (NUS, 2012), specializing in applied ethics, and works primarily on the ethics of technology and on ethics and health. Prior to founding the AI Ethics Lab, she was a lecturer at the University of Hong Kong and a researcher at Harvard Law School, the Harvard School of Public Health, Harvard Medical School, Osaka University, and the World Health Organization.

All are welcome.