Abstract
Evaluating the quality and reliability of explanations for Graph Neural Networks (GNNs) is crucial as their use increases in high-stakes applications. Existing graph datasets lack reliable ground-truth explanations, hindering this evaluation. This paper introduces SHAPEGGEN, a synthetic graph data generator that creates diverse benchmark datasets with ground-truth explanations. SHAPEGGEN's flexibility allows it to mimic real-world data characteristics. These datasets, along with several real-world datasets, are integrated into GRAPHXAI, a graph explainability library. GRAPHXAI provides data loaders, processing functions, visualizers, GNN models, and evaluation metrics to benchmark GNN explainability methods. The authors evaluate eight state-of-the-art GNN explanation methods using GRAPHXAI, revealing limitations in their ability to handle large ground-truth explanations and to preserve fairness.
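The benchmarking idea described above boils down to comparing an explainer's predicted node-importance scores against the generator's ground-truth mask. Below is a minimal sketch of one such comparison, using Jaccard similarity as an illustrative accuracy metric; the function and variable names are assumptions for illustration, not the GRAPHXAI API.

```python
# Sketch: scoring a node-level explanation against a ground-truth mask.
# Names, thresholds, and the Jaccard metric are illustrative assumptions.
import numpy as np

def explanation_accuracy(gt_mask: np.ndarray, pred_scores: np.ndarray,
                         threshold: float = 0.5) -> float:
    """Jaccard similarity between ground-truth important nodes and
    nodes whose predicted importance exceeds `threshold`."""
    pred_mask = pred_scores >= threshold
    intersection = np.logical_and(gt_mask, pred_mask).sum()
    union = np.logical_or(gt_mask, pred_mask).sum()
    return float(intersection / union) if union > 0 else 1.0

# Example: a 6-node neighborhood where nodes 1 and 2 form the ground-truth motif.
gt = np.array([0, 1, 1, 0, 0, 0], dtype=bool)
scores = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.0])
print(explanation_accuracy(gt, scores))  # ~0.33: node 1 recovered, node 3 spurious
```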
Publisher
Scientific Data
Published On
Mar 18, 2023
Authors
Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik
Tags
Graph Neural Networks
explainability
synthetic graph data
benchmark datasets
SHAPEGGEN
evaluation metrics