
Interdisciplinary Studies

A large-scale audit of dataset licensing and attribution in AI

S. Longpre, R. Mahari, et al.

This study by Shayne Longpre, Robert Mahari, and colleagues examines data transparency in large language models. Auditing more than 1,800 text datasets, it uncovers sharp disparities in licensing and highlights the urgent need for tools that support responsible AI development.

~3 min • Beginner • English
Abstract
The race to train language models on vast, diverse and inconsistently documented datasets raises pressing legal and ethical concerns. To improve data transparency and understanding, we convene a multi-disciplinary effort between legal and machine learning experts to systematically audit and trace more than 1,800 text datasets. We develop tools and standards to trace the lineage of these datasets, including their source, creators, licences and subsequent use. Our landscape analysis highlights sharp divides in the composition and focus of data licensed for commercial use. Important categories including low-resource languages, creative tasks and new synthetic data all tend to be restrictively licensed. We observe frequent miscategorization of licences on popular dataset hosting sites, with licence omission rates of more than 70% and error rates of more than 50%. This highlights a crisis in misattribution and informed use of popular datasets driving many recent breakthroughs. Our analysis of data sources also explains the application of copyright law and fair use to finetuning data. As a contribution to continuing improvements in dataset transparency and responsible use, we release our audit, with an interactive user interface, the Data Provenance Explorer, to enable practitioners to trace and filter on data provenance for the most popular finetuning data collections: www.dataprovenance.org.
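
To make the kind of provenance filtering described above more concrete, here is a minimal, hypothetical sketch of selecting dataset records by their reported licence category. The field names, licence categories and sample records are illustrative assumptions only, not the Data Provenance Explorer's actual schema or API.

```python
# Minimal sketch: filtering dataset metadata records by licence category.
# All field names, licence categories and sample records below are
# hypothetical illustrations, not the Data Provenance Explorer's schema.

from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str
    languages: list[str]
    licence: str       # licence identifier as reported by the hosting site
    licence_use: str   # "commercial", "non-commercial" or "unspecified"

records = [
    DatasetRecord("example-qa", "web crawl", ["en"], "CC-BY-4.0", "commercial"),
    DatasetRecord("example-dialog", "synthetic", ["en", "sw"], "CC-BY-NC-4.0", "non-commercial"),
    DatasetRecord("example-news", "news sites", ["fr"], "", "unspecified"),
]

# Keep only datasets whose reported licence permits commercial use.
commercial = [r for r in records if r.licence_use == "commercial"]

# Flag records with no licence reported at all
# (the kind of omission the audit quantifies).
missing_licence = [r for r in records if not r.licence]

print(f"{len(commercial)} of {len(records)} datasets licensed for commercial use")
print(f"{len(missing_licence)} of {len(records)} datasets with no licence reported")
```

In practice, filters like these would be applied over the released audit's metadata rather than hand-written records, but the selection logic is the same.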
Publisher
Nature Machine Intelligence
Published On
Aug 30, 2024
Authors
Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, Xinyi (Alexis) Wu, Enrico Shippole, Kurt Bollacker, Tongshuang Wu, Luis Villa, Sandy Pentland, Sara Hooker
Tags
large language models
data transparency
dataset lineage
licensing issues
responsible AI
data misattribution
multidisciplinary study