What do algorithms explain? The issue of the goals and capabilities of Explainable Artificial Intelligence (XAI)

Computer Science

M. Renftle, H. Trittenbach, et al.

Discover the intriguing world of Explainable AI as Moritz Renftle, Holger Trittenbach, Michael Poznic, and Reinhard Heil examine a flawed reasoning scheme that pervades the XAI literature. They challenge conventional understandings of interpretability and trust, proposing a user-centric view in which XAI algorithms do one modest thing: represent machine learning models as simple functions of interpreted attributes. The paper identifies the two key challenges this raises: the approximation and the translation of ML models.

~3 min • Beginner • English
Abstract
The increasing ubiquity of machine learning (ML) motivates research on algorithms to "explain" models and their predictions—so-called Explainable Artificial Intelligence (XAI). Despite many publications and discussions, the goals and capabilities of such algorithms are far from being well understood. We argue that this is because of a problematic reasoning scheme in the literature: Such algorithms are said to complement machine learning models with desired capabilities, such as interpretability or explainability. These capabilities are in turn assumed to contribute to a goal, such as trust in a system. But most capabilities lack precise definitions and their relationship to such goals is far from obvious. The result is a reasoning scheme that obfuscates research results and leaves an important question unanswered: What can one expect from XAI algorithms? In this paper, we clarify the modest capabilities of these algorithms from a concrete perspective: that of their users. We show that current algorithms can only answer user questions that can be traced back to the question: "How can one represent an ML model as a simple function that uses interpreted attributes?". Answering this core question can be trivial, difficult or even impossible, depending on the application. The result of the paper is the identification of two key challenges for XAI research: the approximation and the translation of ML models.
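
To make the core question concrete, here is a minimal sketch of one common way XAI algorithms answer it: fitting a simple, human-readable surrogate to an opaque model. The dataset, the scikit-learn calls, and the choice of a decision-tree surrogate are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the paper's method): a global surrogate.
# The "simple function that uses interpreted attributes" is a shallow
# decision tree trained to imitate an opaque model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose behavior we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The simple surrogate: a depth-3 tree fitted to the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. This is
# where the "approximation" challenge shows up; whether users understand
# the attributes the tree splits on is the "translation" challenge.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity to black-box predictions: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

In this framing, a low fidelity score means the simple function misrepresents the model (approximation fails), while splits on attributes the user cannot interpret mean the explanation fails even when fidelity is high (translation fails).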
Publisher
Humanities and Social Sciences Communications
Published On
Jun 14, 2024
Authors
Moritz Renftle, Holger Trittenbach, Michael Poznic, Reinhard Heil
Tags
Explainable Artificial Intelligence
machine learning
interpretability
trust
user-centric perspective
approximation
translation