Power-law scaling to assist with key challenges in artificial intelligence

Computer Science

Y. Meir, S. Sardi, et al.

This study shows how optimized test errors in deep learning decrease as a power law with increasing database size, a relationship with direct implications for rapid decision-making. Conducted by a team of researchers at Bar-Ilan University, the work establishes a benchmark for assessing training complexity across machine learning tasks and algorithms.

Abstract
This paper investigates power-law scaling in deep learning, specifically focusing on how optimized test errors converge to zero with increasing database size. The study explores this relationship for both single-epoch and multi-epoch training, examining the impact on test error and the implications for rapid decision-making. The research establishes a benchmark for measuring training complexity and provides a quantitative hierarchy of machine learning tasks and algorithms.
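The power-law convergence described above can be illustrated with a small numerical sketch. The snippet below is an assumption-laden toy example, not the paper's method: it generates synthetic test errors that follow an exact power law in database size, then recovers the exponent by linear regression in log-log space (where a power law becomes a straight line). The exponent value 0.35 and the data points are purely illustrative.

```python
import numpy as np

# Hypothetical example: test error epsilon shrinking with database size D
# as a power law, epsilon(D) = c * D**(-rho). The values of rho, c, and
# the sizes D below are illustrative, not taken from the paper.
D = np.array([1_000, 2_000, 5_000, 10_000, 20_000, 50_000], dtype=float)
epsilon = 2.0 * D ** -0.35  # synthetic errors following an exact power law

# A power law is linear in log-log space:
#   log(epsilon) = log(c) - rho * log(D)
# so a degree-1 polynomial fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(D), np.log(epsilon), 1)
rho_est = -slope            # estimated scaling exponent
c_est = np.exp(intercept)   # estimated prefactor

print(f"estimated exponent rho = {rho_est:.3f}")
print(f"estimated prefactor c  = {c_est:.3f}")
```

On real measurements the fitted exponent characterizes how quickly error falls with more data, which is what makes such exponents usable as a quantitative hierarchy across tasks and algorithms.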
Publisher
Scientific Reports
Published On
Nov 12, 2020
Authors
Yuval Meir, Shira Sardi, Shiri Hodassman, Karin Kisos, Itamar Ben-Noam, Amir Goldental, Ido Kanter
Tags
power-law scaling
deep learning
test errors
database size
training complexity
machine learning
decision-making