This paper addresses the challenge of evaluating the quality of pre-trained neural network models without access to their training or testing data. The authors conduct a meta-analysis of hundreds of publicly available pre-trained computer vision and natural language processing models, examining norm-based and power law-based metrics computed from the models' weight matrices. They find that power law-based metrics are better at discriminating well-trained from poorly trained models and at identifying model problems that training/test accuracies alone cannot detect.
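The power law-based metrics discussed in the paper fit a power law to the tail of a layer weight matrix's eigenvalue spectrum. Below is a minimal sketch of such a fit using a maximum-likelihood (Hill) estimator; the function name, the `tail_frac` threshold choice, and the random test matrix are illustrative assumptions, not the authors' exact fitting procedure.

```python
import numpy as np

def powerlaw_alpha(W, tail_frac=0.5):
    """Estimate a power-law exponent alpha for the tail of the eigenvalue
    spectrum of W^T W, via the maximum-likelihood (Hill) estimator.
    `tail_frac` (fraction of largest eigenvalues treated as the tail) is an
    illustrative choice, not the paper's fitting procedure."""
    # Eigenvalues of W^T W are the squared singular values of W.
    eigs = np.linalg.svd(W, compute_uv=False) ** 2
    eigs = np.sort(eigs)[::-1]                       # descending order
    tail = eigs[: max(2, int(tail_frac * len(eigs)))]
    xmin = tail[-1]                                  # smallest eigenvalue kept
    # Hill estimator: alpha = 1 + n / sum(log(x / xmin)) over the tail.
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# Smaller alpha corresponds to a heavier-tailed spectrum; the paper reports
# that well-trained layers tend to have moderate exponents (roughly 2-6).
rng = np.random.default_rng(0)
W = rng.standard_normal((300, 100))  # stand-in for a trained weight matrix
alpha = powerlaw_alpha(W)
```

A random Gaussian matrix like the one above has a light-tailed (Marchenko-Pastur) spectrum, so its fitted exponent comes out large; heavier tails, and hence smaller exponents, emerge with training.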
Publisher
Nature Communications
Published On
Jul 05, 2021
Authors
Charles H. Martin, Tongsu (Serena) Peng, Michael W. Mahoney
Tags
neural networks
model evaluation
pre-trained models
power law metrics
computer vision
natural language processing