This paper examines the ethical implications of using machine learning (ML) and artificial intelligence (AI) in decision-making. It highlights how the "black box" nature of many ML algorithms raises concerns about fairness, accuracy, accountability, and transparency. The paper reviews existing guidelines and literature on AI ethics, focusing on beneficence, non-maleficence, autonomy, justice, and explicability. Two case studies—risk assessment in the criminal justice system and autonomous vehicles—illustrate ethical challenges and trade-offs. The paper concludes by discussing potential ways forward, including the need for governance, interpretable models, and greater stakeholder participation.
Publisher: Humanities and Social Sciences Communications
Published on: Jun 17, 2020
Authors: Samuele Lo Piano
Tags: machine learning, artificial intelligence, ethical implications, decision-making, accountability, transparency, fairness