Abstract
This study explores the necessity and direction of safety regulations for Artificial Intelligence (AI), drawing parallels with the nuclear safety regulations of the International Atomic Energy Agency (IAEA). The rapid advancement and global proliferation of AI call for standardized safety norms that minimize discrepancies between national regulations and enhance their consistency and effectiveness. International collaboration and stakeholder engagement are crucial for keeping regulations appropriate and continuously updated. The study highlights the risks posed by improperly tuned subgoal-setting mechanisms in AI decision-making and advocates regulatory approaches that ensure safe and predictable AI operation. It acknowledges the limitations of applying IAEA models directly to AI, given AI's distinct characteristics and risks, and calls for future research on a tailored regulatory framework.
Publisher
Humanities and Social Sciences Communications
Published On
Mar 28, 2024
Authors
Seokki Cha
Tags
Artificial Intelligence
safety regulations
International Atomic Energy Agency
standardized norms
stakeholder engagement
decision-making
predictable operations