Evaluating the quality and reliability of explanations for Graph Neural Networks (GNNs) is crucial as GNNs are increasingly deployed in high-stakes applications. Existing graph datasets lack reliable ground-truth explanations, which hinders this evaluation. This paper introduces SHAPEGGEN, a synthetic graph data generator that creates diverse benchmark datasets paired with ground-truth explanations; its flexible parameterization allows it to mimic the characteristics of real-world data. These synthetic datasets, along with several real-world datasets, are integrated into GRAPHXAI, a graph explainability library that provides data loaders, data-processing functions, visualizers, GNN models, and evaluation metrics for benchmarking GNN explainability methods. Using GRAPHXAI, the authors evaluate eight state-of-the-art GNN explanation methods, revealing limitations in how these methods handle large ground-truth explanations and in how well they preserve fairness.
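To make the benchmarking idea concrete, the sketch below illustrates the kind of ground-truth comparison such evaluation metrics perform: a predicted explanation mask produced by an explainer is scored against a ground-truth motif mask using the Jaccard index. This is a minimal, self-contained illustration; the function name and toy data are assumptions for exposition, not GRAPHXAI's actual API.

```python
import numpy as np

def explanation_accuracy(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Jaccard similarity between a ground-truth node mask and a binarized
    predicted explanation mask. Both inputs are boolean vectors over the
    nodes of a graph. (Illustrative helper, not GRAPHXAI's API.)"""
    intersection = np.logical_and(gt_mask, pred_mask).sum()
    union = np.logical_or(gt_mask, pred_mask).sum()
    # If both masks are empty, treat the explanations as identical.
    return float(intersection / union) if union > 0 else 1.0

# Toy example: a 6-node graph where nodes 1-3 form the ground-truth motif.
gt = np.array([False, True, True, True, False, False])
pred = np.array([False, True, True, False, True, False])  # explainer output
print(f"explanation accuracy (Jaccard): {explanation_accuracy(gt, pred):.2f}")
# -> 0.50 (2 nodes in the intersection, 4 in the union)
```

Because the generator emits the ground-truth mask alongside each graph, a score like this can be computed automatically for every explainer on every dataset, which is what enables the large-scale comparison of explanation methods described above.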