This study examines the necessity and direction of safety regulations for Artificial Intelligence (AI), drawing parallels with the International Atomic Energy Agency (IAEA)'s nuclear safety regime. The rapid advancement and global proliferation of AI call for standardized safety norms that minimize discrepancies between national regulations and improve their consistency and effectiveness. International collaboration and stakeholder engagement are crucial for keeping such regulations appropriate and continuously updated. The study highlights the risks posed by improperly tuned subgoal-setting mechanisms in AI decision-making and advocates regulatory approaches that ensure safe and predictable AI operation. It acknowledges the limitations of applying IAEA models directly to AI, given AI's distinct characteristics and risks, and calls for future research toward a tailored regulatory framework.
Publisher
Humanities and Social Sciences Communications
Published On
Mar 28, 2024
Authors
Seokki Cha
Tags
Artificial Intelligence
Safety regulations
International Atomic Energy Agency
Standardized norms
Stakeholder engagement
Decision-making
Predictable operations