This talk will be on Zoom. To receive the Zoom link, please send an email to Sebastian Watzl.
Title: Proceeding with less caution
Abstract: In previous work (Zimmermann & Lee-Stronach, “Proceed with Caution,” Canadian Journal of Philosophy 52, no. 1 (2022), 6-25), I argue that we have a moral and epistemic duty to avoid doxastic negligence in our human responses to algorithmic outputs in high-stakes, complex decision settings. In other words, we often have strong reasons to proceed with caution in such settings. Proceeding with caution can require, for instance, (i) recognizing—and leaving room for—uncertainty by suspending belief about algorithmic outputs; (ii) initiating and continuing processes of inquiry, even in maximally complete information settings, e.g. checking and reconsidering algorithmic decision rules and input data; and (iii) gathering and explicitly considering additional information with the goal of achieving maximally informative input data, including data on sensitive attributes.
However, sometimes our moral and epistemic duties pull in the opposite direction: too much caution can undermine important normative goals—the very goals that motivate proceeding with caution in the first place. This paper explores what this implies for how and why we ought to—and ought not to—engage in further inquiry with respect to a given algorithmic output.
About the Speaker:
Annette Zimmermann is a political philosopher working on the ethics and politics of artificial intelligence, machine learning, and big data. She is an Assistant Professor of Philosophy at the University of Wisconsin-Madison (starting August 2022) and a Technology & Human Rights Fellow at Harvard University. Before that, Annette was a Lecturer (Assistant Professor) at the University of York and a postdoctoral fellow at Princeton University.
Annette’s research explores questions like: what is algorithmic injustice, and how do its effects compound over time? What role do risk and uncertainty play in this context? What does it mean to trust AI? Whose voices should we prioritize in collective decisions about AI design and deployment—and whose voices are currently excluded? Whose rights are most at risk? How can we place AI under meaningful democratic control—and would that solve the problem of algorithmic injustice?
According to Annette, the algorithmic is political. AI does not exist in a moral and political vacuum. Technological models interact dynamically with the social world, including larger-scale patterns of injustice. How we deal with this problem is a moral and a political choice.
- (This description of Prof. Zimmermann's research is taken from https://www.annette-zimmermann.com/; see that website for more information about her.)
About the Talk Series:
The Oslo Philosophy and Artificial Intelligence talk series is organized by Sebastian Watzl and the Warring with Machines Project.
Upcoming Talks:
- Date TBD, Hima Lakkaraju (Harvard University), who specializes in trustworthy machine learning
Past Talks:
- Herman Cappelen (University of Hong Kong)
- Bruce Swett (Chief AI Architect at Northrop Grumman)
- Round Table with Einar Bøhn (UiA, member of CAIR), Ophelia Deroy (LMU, Coordinator of AI-partners), George Lucas (author of Ethics and Cyber Warfare), Greg Reichberg (PRIO), Camilla Serck-Hanssen (Professor of Philosophy), Henrik Syse (PRIO), Shannon Vallor (Director of the Centre for Technomoral Futures), and Sebastian Watzl (GoodAttention)