Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data

Computer Science

C. H. Martin, T. S. Peng, et al.

Discover groundbreaking insights from Charles H. Martin, Tongsu (Serena) Peng, and Michael W. Mahoney as they tackle the daunting challenge of evaluating pre-trained neural network models without any access to training data. Their research reveals that power law-based metrics significantly outperform traditional measures in distinguishing model quality and uncovering hidden issues.

Abstract
In many applications, one works with neural network models trained by someone else. For such pretrained models, one may not have access to training data or test data. Moreover, one may not know details about the model, e.g., the specifics of the training data, the loss function, the hyperparameter values, etc. Given one or many pretrained models, it is a challenge to say anything about the expected performance or quality of the models. Here, we address this challenge by providing a detailed meta-analysis of hundreds of publicly available pretrained models. We examine norm-based capacity control metrics as well as power-law-based metrics from the recently developed Theory of Heavy-Tailed Self-Regularization. We find that norm-based metrics correlate well with reported test accuracies for well-trained models, but that they often cannot distinguish well-trained versus poorly trained models. We also find that power-law-based metrics can do much better—quantitatively better at discriminating among series of well-trained models with a given architecture; and qualitatively better at discriminating well-trained versus poorly trained models. These methods can be used to identify when a pretrained neural network has problems that cannot be detected simply by examining training/test accuracies.
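To make the power-law idea concrete: a minimal sketch, assuming numpy, of fitting a power-law exponent to the eigenvalue spectrum of a single weight matrix. The paper's full analysis (implemented in tools such as WeightWatcher) selects the fit cutoff by minimizing a Kolmogorov–Smirnov distance; the simple quantile cutoff and Hill-style maximum-likelihood estimator below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def power_law_alpha(W, xmin_quantile=0.5):
    """Estimate a power-law exponent alpha for the empirical spectral
    density (ESD) of a weight matrix W.

    Sketch of the Heavy-Tailed Self-Regularization metric: compute the
    eigenvalues of the correlation matrix X = W^T W (the squared
    singular values of W), then fit the tail above a cutoff xmin with
    the Hill / maximum-likelihood estimator for a continuous power law.
    The quantile-based choice of xmin here is a simplification.
    """
    # Eigenvalues of W^T W via singular values of W.
    eigs = np.linalg.svd(W, compute_uv=False) ** 2
    # Illustrative cutoff: fit only the upper tail of the spectrum.
    xmin = np.quantile(eigs, xmin_quantile)
    tail = eigs[eigs >= xmin]
    # Hill estimator for p(x) ~ x^(-alpha), x >= xmin.
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    return alpha

# Toy usage on a random (untrained-like) Gaussian layer; in the paper,
# well-trained layers tend to show heavier tails (smaller alpha) than
# random or poorly trained ones.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))
print(power_law_alpha(W))
```

Because this uses only the weight matrices, it needs no access to training or test data, which is the point of the paper's data-free diagnostics.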
Publisher
Nature Communications
Published On
Jul 05, 2021
Authors
Charles H. Martin, Tongsu (Serena) Peng, Michael W. Mahoney
Tags
neural networks
model evaluation
pre-trained models
power law metrics
computer vision
natural language processing