This paper investigates the moral implications of algorithmic decision-making in machine ethics. It challenges the assumption that even well-functioning ethical intelligent agents (AI) necessarily meet the requirements of autonomous moral agency in the way humans do. Three first-order normative theories—Kantianism, act utilitarianism, and virtue ethics—are examined to illustrate the difficulties in creating both implicit and explicit ethical agents. By comparing these theories and highlighting the difference between calculation and moral estimation, the paper questions the feasibility of both the 'strong' (human-level moral cognition) and the 'weak' (pre-programmed ethical behavior) moral AI scenarios.
Publisher
Humanities & Social Sciences Communications
Published On
Oct 07, 2020
Authors
Silviya Serafimova
Tags
algorithmic decision-making
machine ethics
moral agents
normative theories
ethical behavior
Kantianism
utilitarianism