This paper investigates how to improve both the diagnostic performance and the clinical usability of neural networks for pathology detection. The authors show that adversarially trained models, further enhanced with dual batch normalization, yield saliency maps that radiologists rate as significantly more interpretable. Contrary to previous findings, classification accuracy remains comparable to that of standard models when the training dataset is sufficiently large and dual batch normalization is used. The results are validated on an external test set, and the findings indicate that routing adversarial and real images through distinct normalization paths during training is key to achieving state-of-the-art accuracy together with superior clinical interpretability.
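To make the idea of "distinct training paths" concrete, below is a minimal sketch of dual batch normalization in PyTorch. It is an illustrative assumption of how such a layer could be structured, not the authors' published implementation: the class name `DualBatchNorm2d`, the `use_adv` flag, and the toy usage are all hypothetical.

```python
# Minimal sketch of dual batch normalization (illustrative, not the paper's code).
# The layer keeps two sets of BatchNorm statistics and routes each batch through
# the path matching its origin (real vs. adversarial images).
import torch
import torch.nn as nn


class DualBatchNorm2d(nn.Module):
    """Separate BatchNorm statistics for clean and adversarial inputs."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)  # statistics for real images
        self.bn_adv = nn.BatchNorm2d(num_features)    # statistics for adversarial images
        self.use_adv = False                          # toggled by the training loop

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize with whichever statistics match the current batch type.
        return self.bn_adv(x) if self.use_adv else self.bn_clean(x)


if __name__ == "__main__":
    # Toy usage: the same feature maps are normalized with clean or adversarial
    # statistics depending on the flag set by the training loop.
    layer = DualBatchNorm2d(8)
    clean_batch = torch.randn(4, 8, 32, 32)
    adv_batch = torch.randn(4, 8, 32, 32)

    layer.use_adv = False
    out_clean = layer(clean_batch)  # uses bn_clean

    layer.use_adv = True
    out_adv = layer(adv_batch)      # uses bn_adv

    print(out_clean.shape, out_adv.shape)
```

The design choice reflected here is that adversarial perturbations shift feature statistics; keeping a second set of normalization parameters lets the network benefit from adversarial training without contaminating the statistics used for real images.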
Publisher
Nature Communications
Published On
Jul 14, 2021
Authors
Tianyu Han, Sven Nebelung, Federico Pedersoli, Markus Zimmermann, Maximilian Schulze-Hagen, Michael Ho, Christoph Haarburger, Fabian Kiessling, Christiane Kuhl, Volkmar Schulz, Daniel Truhn
Tags
neural networks
pathology detection
adversarial training
clinical usability
interpretability
saliency maps
batch normalization