This paper investigates power-law scaling in deep learning, focusing on how optimized test errors decay toward zero as database size increases. The study examines this relationship for both single-epoch and multi-epoch training and discusses its implications for rapid decision-making. The results establish a benchmark for measuring training complexity and yield a quantitative hierarchy of machine learning tasks and algorithms.
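To illustrate the kind of relationship the paper describes, the minimal sketch below fits a power law, err(D) ≈ a·D^(-β), to test errors measured at several database sizes. The sizes, error values, and fitted numbers are illustrative placeholders only, not data or code from the paper.

```python
import numpy as np

# Hypothetical measurements: optimized test error at several database
# (training-set) sizes. These numbers are made up for illustration.
D = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 50_000])   # database sizes
err = np.array([0.30, 0.22, 0.15, 0.11, 0.08, 0.055])         # test errors

# A power law err(D) = a * D**(-beta) is linear in log-log coordinates:
# log(err) = log(a) - beta * log(D), so a least-squares line fit on the
# logs recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(D), np.log(err), deg=1)
beta = -slope
a = np.exp(intercept)

print(f"fitted exponent beta ~ {beta:.3f}, prefactor a ~ {a:.3f}")
```

The fitted exponent β quantifies how quickly the test error decays as the database grows; comparing such exponents across tasks and architectures is one way to express the quantitative hierarchy the paper discusses.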
Publisher
Scientific Reports
Published On
Nov 12, 2020
Authors
Yuval Meir, Shira Sardi, Shiri Hodassman, Karin Kisos, Itamar Ben-Noam, Amir Goldental, Ido Kanter
Tags
power-law scaling
deep learning
test errors
database size
training complexity
machine learning
decision-making