Abstract
This paper investigates power-law scaling in deep learning, specifically how optimized test errors decay toward zero as a power law of the training-database size. The study examines this relationship for both single-epoch and multi-epoch training regimes and discusses its implications for rapid decision-making. The research establishes a benchmark for measuring training complexity and provides a quantitative hierarchy of machine learning tasks and algorithms.
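As a minimal illustration of the power-law relation the abstract describes, the sketch below fits test_error(N) ≈ A · N^(−β) to synthetic error measurements on a log-log scale. The constants A and β, the sample sizes, and the data itself are hypothetical placeholders, not results from the paper.

```python
# Minimal sketch (not from the paper): the abstract's power-law relation,
# test_error(N) ~ A * N**(-beta), where N is the training-database size.
# All values below are synthetic placeholders for illustration only.
import numpy as np

# Hypothetical test errors measured at increasing database sizes N.
N = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 50_000])
test_error = 2.5 * N ** (-0.35)  # synthetic data that follows a power law

# A power law is linear in log-log coordinates:
#   log(error) = log(A) - beta * log(N)
# so a straight-line fit recovers the exponent beta and prefactor A.
slope, intercept = np.polyfit(np.log(N), np.log(test_error), 1)
beta, A = -slope, np.exp(intercept)

print(f"estimated exponent beta ~ {beta:.3f}, prefactor A ~ {A:.3f}")
```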
Publisher
Scientific Reports
Published On
Nov 12, 2020
Authors
Yuval Meir, Shira Sardi, Shiri Hodassman, Karin Kisos, Itamar Ben-Noam, Amir Goldental, Ido Kanter
Tags
power-law scaling
deep learning
test errors
database size
training complexity
machine learning
decision-making