Robust Counterfactual Explanations in Machine Learning: A Survey



J. Jiang, F. Leofante, et al.

Counterfactual explanations (CEs) promise actionable algorithmic recourse, but recent work has highlighted serious robustness failures. This survey, by Junqi Jiang, Francesco Leofante, Antonio Rago, and Francesca Toni, reviews the fast-growing literature on robust CEs, analyzes the different notions of robustness they consider, and discusses existing solutions and their limitations.

Abstract
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order. In this survey, we review works in the rapidly growing area of robust CEs and perform an in-depth analysis of the forms of robustness they consider. We also discuss existing solutions and their limitations, providing a solid foundation for future developments.
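To make the robustness issue concrete, here is a minimal, hypothetical sketch (not taken from the survey itself): a Wachter-style gradient search for a CE on a toy logistic regression, followed by a check showing how retraining the model on slightly perturbed data can invalidate the CE. The data, model, and all parameters are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy setup: 2 features, label = sign of their sum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, lam=0.1, lr=0.05, steps=500):
    """Wachter-style gradient search: minimise
    -log P(y=1 | x') + lam * ||x' - x||^2 by gradient descent,
    stopping once the prediction flips, so the CE lands just
    past the decision boundary (a minimal-cost CE)."""
    w, b = model.coef_[0], model.intercept_[0]
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_cf @ w + b)))  # P(y=1 | x_cf)
        if p > 0.5:  # desired outcome reached: stop at the boundary
            break
        grad = -(1.0 - p) * w + 2.0 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([-1.5, -0.5])        # instance predicted 0 ("rejected")
x_cf = counterfactual(x, model)
print(model.predict([x_cf])[0])   # 1: the CE is valid for this model

# Robustness failure mode discussed in the survey: retrain on slightly
# perturbed data and the same CE may no longer yield the desired outcome.
model2 = LogisticRegression().fit(X + rng.normal(scale=0.3, size=X.shape), y)
print(model2.predict([x_cf])[0])  # may flip back to 0 -> CE invalidated

Because minimal-cost CEs sit close to the decision boundary, even small changes to the model can flip them back to the undesired class; this fragility is precisely what the robust-CE methods surveyed in the paper aim to mitigate.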
Publisher
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24)
Published On
2024
Authors
Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Tags
Counterfactual explanations
Algorithmic recourse
Robustness
Machine learning
Survey
Interpretability