On 10 February, the Platform for the Ethics and Politics of Technology will host a PEPTalk featuring LANCAR research affiliate Federica Russo as guest speaker. In her talk, she will explain how the ethics and epistemology of AI can be better connected. Based on joint work with Eric Schliesser and Jean Wagemans, Russo introduces an epistemology for glass box AI that can explicitly incorporate values and other normative considerations.
A typical line of argument in the ethics of AI holds that the need for fair and just AI is tied to the possibility of understanding the AI system itself. A fair and just AI, then, requires turning an opaque box into a glass box that is as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to the philosophy of science, leaving the ethics and epistemology of AI largely disconnected.
In this talk, Federica Russo will focus on how to remedy this problem and introduce an epistemology for glass box AI that can explicitly incorporate values and other normative considerations. The proposed framework draws on existing work in argumentation theory on how to model the handling, eliciting, and interrogation of the authority and trustworthiness of expert opinion, as well as work on inductive risk in the philosophy of science, to think through how social consequences that harm intersectionally vulnerable populations can be modelled in the context of AI design and implementation.
This talk is based on joint work with Eric Schliesser and Jean Wagemans as part of their RPA-Human(e) AI project “Towards an epistemological and ethical ‘explainable AI’”.
More information about the PEPTalk can be found here.