Philosophy and AI Seminar: Annette Zimmermann

Part of the Oslo Philosophy and Artificial Intelligence talk series

Photo of Annette Zimmermann, from https://www.annette-zimmermann.com/

This talk will be held on Zoom. To receive the Zoom link, please send an email to the organizers.

Title: Proceeding with less caution

Abstract: In previous work (Zimmermann & Lee-Stronach, “Proceed with Caution,” Canadian Journal of Philosophy 52, no. 1 (2022): 6–25), I argue that we have a moral and epistemic duty to avoid doxastic negligence when it comes to our human response to algorithmic outputs in high-stakes, complex decision settings. In other words, we often have strong reasons to proceed with caution in such settings. Proceeding with caution can require, for instance, (i) recognizing—and leaving room for—uncertainty by suspending belief about algorithmic outputs; (ii) initiating and continuing processes of inquiry, even in maximally complete information settings, e.g., checking and reconsidering algorithmic decision rules and input data; and (iii) gathering and explicitly considering additional information with the goal of achieving maximally informative input data, including data on sensitive attributes.
However, sometimes our moral and epistemic duties pull in the opposite direction: too much caution can undermine important normative goals—the same goals that motivate proceeding with caution in the first place. This paper explores what this implies for how and why we ought to—and ought not to—engage in further inquiry with respect to a given algorithmic output.

About the Speaker:

Annette Zimmermann is a political philosopher working on the ethics and politics of artificial intelligence, machine learning, and big data. She is an Assistant Professor of Philosophy at the University of Wisconsin-Madison (starting August 2022) and a Technology & Human Rights Fellow at Harvard University. Before that, Annette was a Lecturer (Assistant Professor) at the University of York and a postdoctoral fellow at Princeton University.

Annette’s research explores questions like: what is algorithmic injustice, and how do its effects compound over time? What role do risk and uncertainty play in this context? What does it mean to trust AI? Whose voices should we prioritize in collective decisions about AI design and deployment—and whose voices are currently excluded? Whose rights are most at risk? How can we place AI under meaningful democratic control—and would that solve the problem of algorithmic injustice?

According to Annette, the algorithmic is political. AI does not exist in a moral and political vacuum. Technological models interact dynamically with the social world, including larger-scale patterns of injustice. How we deal with this problem is a moral and a political choice.

About the Talk Series:

The Oslo Philosophy and Artificial Intelligence talk series is organized by Sebastian Watzl and the Warring with Machines Project.

Upcoming Talks:

  • Date TBD: Hima Lakkaraju (Harvard University), who specializes in trustworthy machine learning

