Abstract
Explainable Artificial Intelligence (XAI) algorithms aim to clarify machine learning (ML) models and their predictions. However, the paper argues that the goals and capabilities of XAI are poorly understood because the literature relies on a flawed reasoning scheme: it assumes that XAI algorithms directly contribute to goals such as trust by providing capabilities such as interpretability, yet these capabilities lack precise definitions and clear links to the goals. The authors propose a user-centric perspective, focusing on the question: "How can one represent an ML model as a simple function that uses interpreted attributes?" They identify two key challenges for XAI: approximation (representing complex models with simpler ones) and translation (mapping technical attributes to attributes that users can interpret). The paper concludes by reviewing how well current XAI algorithms address these challenges and by advocating a more realistic account of XAI's capabilities.
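To make the approximation challenge concrete, the sketch below is one common way it is tackled in practice: a global surrogate model. A shallow decision tree is fitted to a complex model's predictions, so the complex model is represented by a simpler, readable function. This is an illustrative assumption, not code from the paper; the dataset, model choices, and the fidelity measure are picked for demonstration, and scikit-learn is assumed to be available.

```python
# Minimal sketch of the "approximation" challenge: a global surrogate.
# A complex model (random forest) is approximated by a simple, readable
# one (a shallow decision tree) fitted to the complex model's predictions.
# Dataset and models are illustrative assumptions, not from the paper.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Complex "black-box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Simple surrogate: trained on the black box's outputs, not the true labels,
# so it approximates the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score shows how faithfully the simple function reproduces the complex model. The second challenge, translation, would then come on top: mapping the tree's raw feature thresholds to attributes that users can actually interpret.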
Publisher
Humanities and Social Sciences Communications
Published On
Jun 14, 2024
Authors
Moritz Renftle, Holger Trittenbach, Michael Poznic, Reinhard Heil
Tags
Explainable Artificial Intelligence
machine learning
interpretability
trust
user-centric perspective
approximation
translation