Attribution-based explanations are popular in computer vision but have limitations for fine-grained classification. This paper introduces GALORE, a GenerAlized expLanatiOn fRamEwork that unifies attributive, deliberative, and counterfactual explanations. Deliberative explanations address "why" a class was chosen by visualizing the model's insecurities. Counterfactual explanations address "why not" other classes and are computed efficiently from attribution maps and confidence scores. Evaluation on the CUB200 and ADE20K datasets shows improved explanation accuracy, deliberative explanations that correlate with human reasoning, and enhanced performance in machine teaching experiments that use counterfactual explanations.
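The abstract states that counterfactual explanations are derived from attribution maps and confidence scores. The following is a minimal sketch of one way such a combination could look; the function name, the specific combination rule, and the confidence weighting are illustrative assumptions, not the paper's method.

```python
import numpy as np


def counterfactual_map(attr_pred: np.ndarray,
                       attr_counter: np.ndarray,
                       conf_pred: float,
                       conf_counter: float) -> np.ndarray:
    """Combine per-class attribution maps into a counterfactual heatmap.

    Highlights regions that support the predicted class but not the
    counterfactual class, scaled by the confidence gap between the two.
    This is an illustrative sketch, not the paper's implementation.
    """
    # Keep only positive evidence for each class.
    pos_pred = np.maximum(attr_pred, 0.0)
    pos_counter = np.maximum(attr_counter, 0.0)

    # Emphasize regions discriminative for the prediction over the counter class.
    discriminative = pos_pred * (1.0 - pos_counter / (pos_counter.max() + 1e-8))

    # Weight by how much more confident the model is in the predicted class.
    return (conf_pred - conf_counter) * discriminative


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 14, 14  # spatial size of a feature-level attribution map
    a_pred = rng.normal(size=(h, w))
    a_counter = rng.normal(size=(h, w))
    heatmap = counterfactual_map(a_pred, a_counter, conf_pred=0.9, conf_counter=0.3)
    print(heatmap.shape, float(heatmap.max()))
```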
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence