Institut Jean Nicod

SOUVERAIN Thomas


Peut-on expliquer l’IA ? Solutions techniques, enjeux éthiques en finance (Can we explain AI? Technical solutions, ethical issues in finance)


Doctoral enrollment institution

Paul Egré
Research Interests: Artificial Intelligence, Machine Learning, Philosophy of Science, Explanation, Finance

I am working on a three-year PhD thesis on AI explainability and ethics (2020-2023). My current research focuses on the links between philosophy and explainable AI: what exactly does it mean to explain AI? (see below).

More generally, I am passionate about ethical issues concerning AI and society. How can technical aspects be communicated in a way that non-experts understand? Is scientific discourse sufficient for citizens to form an opinion about the ethical consequences of AI products? Considering, for example, driverless cars (safety), domestic devices (privacy), and justice or financial services (fairness), I like to observe and reflect on the diversity of ethical dilemmas raised by AI. These are the kinds of issues on my research horizon.

My PhD research starts from the "black box" problem at the core of recent AI and machine-learning technologies. Machine-learning methods learn without being explicitly programmed, which can make them opaque, especially as their predictive power grows. The lack of transparency in their functioning, often too complex and too fast for a human observer to follow, has earned these technologies the nickname of "black boxes".

I am thus facing a problem in the philosophy of knowledge: on which elements does a good explanation of AI rely? Companies are currently developing many explainability tools. We identify their philosophical basis of explanation, comparing these techniques according to the way they explain: type of reasoning, scale, and level of explanation. Financial services are our field of study. We focus on use cases where explaining AI, as opposed to leaving the "black box" opaque, carries ethical consequences: for example, loan granting, where a mortgage denial requires a justification of how the machine analysed the data. Our PhD thesis is thus a two-pronged research project, epistemological and ethical, on the issues of explanation in AI.
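To make the loan-granting example concrete, here is a minimal sketch of one common family of explainability tools: perturbation-based feature attribution, where each input feature is replaced by a baseline value to measure its contribution to the decision. The scoring model, its weights, and the feature names are entirely hypothetical, chosen only for illustration; real tools (e.g. occlusion tests or Shapley-value methods) follow the same underlying idea.

```python
def loan_score(applicant):
    """Toy 'black box': returns an approval score from applicant features.
    The weights are made up for this example."""
    return (0.5 * applicant["income"] / 10_000
            - 0.8 * applicant["debt"] / 10_000
            + 0.3 * applicant["years_employed"])

def attribute(model, applicant, baseline):
    """Attribute the score to each feature by swapping it to its baseline
    value, one at a time, and recording how much the score changes."""
    full = model(applicant)
    contributions = {}
    for feature in applicant:
        perturbed = dict(applicant, **{feature: baseline[feature]})
        contributions[feature] = full - model(perturbed)
    return contributions

applicant = {"income": 40_000, "debt": 30_000, "years_employed": 2}
baseline = {"income": 0, "debt": 0, "years_employed": 0}

for feature, value in attribute(loan_score, applicant, baseline).items():
    # Positive values pushed the score toward approval, negative toward denial.
    print(f"{feature}: {value:+.2f}")
```

On this toy applicant, the method reports that debt pushed the score toward denial while income and employment history pushed toward approval, which is exactly the kind of justification a denied applicant could be given.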