Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement

Computer Science

S. Serafimova

This paper by Silviya Serafimova examines the moral implications of algorithmic decision-making in machine ethics, challenging the notion that ethical intelligence equates to human-level moral agency. It scrutinizes three key normative theories to illuminate the difficulties of crafting genuine ethical agents.

Abstract
This paper investigates the moral implications of algorithmic decision-making in machine ethics. It challenges the assumption that even well-functioning ethical intelligent agents (AI) necessarily meet the requirements of autonomous moral agents like humans. Three first-order normative theories—Kantianism, act utilitarianism, and virtue ethics—are examined to illustrate the difficulties in creating both implicit and explicit ethical agents. By comparing these theories and highlighting the differences between calculation and moral estimation, the paper questions the feasibility of both 'strong' (human-level moral cognition) and 'weak' (pre-programmed ethical behavior) moral AI scenarios.
Publisher
Humanities & Social Sciences Communications
Published On
Oct 07, 2020
Authors
Silviya Serafimova
Tags
algorithmic decision-making
machine ethics
moral agents
normative theories
ethical behavior
Kantianism
utilitarianism