Explainable AI, Disagreement, and Idealization


About This Event

AI systems are being used for a rapidly increasing number of important decisions. Many of these systems are “black boxes”: their functioning is opaque both to the people affected by them and to those who develop them. Black box AI systems are difficult to evaluate for accuracy, fairness, and general trustworthiness. Explainable AI (XAI) methods aim to alleviate this opacity. However, there is debate about whether these methods can provide adequate explanations of the behavior of black box AI systems. One of the biggest problems facing XAI methods is that they are prone to disagree with one another. In this talk, I argue that we should understand XAI methods as producing idealized models of black box systems, much like idealized scientific models of other complex phenomena. This account draws on resources from epistemology and the philosophy of science to clarify what it would mean for XAI methods to successfully explain black box AI systems.

Featured Guests

Will Fleisher, Department of Philosophy, Georgetown University

Co-sponsors

Department of Philosophy, University of Rochester

April 21, 2023, 2 p.m. to 4 p.m.

Room 2110D, Dewey Hall, University of Rochester

DH11: AI and Human Values


Audience: Open to the Public

Host: University of Rochester

Category: Lecture