
Interdisciplinary Studies

The impact of artificial intelligence on human society and bioethics

M. C. Tai

Artificial intelligence (AI), regarded by some as the core of a fourth industrial revolution (IR 4.0), is set to transform how we work, how we relate to one another, and even what we know about ourselves. This article by Michael Cheng-Tek Tai examines AI's industrial, social, and economic impacts in the 21st century and proposes a set of principles for AI bioethics.

Introduction
The paper introduces artificial intelligence as human-designed machine intelligence that interprets external data, learns from it, and adapts to achieve specific goals. It notes AI's rapid penetration into daily life (e.g., OCR, Siri) and outlines the article's purpose: to explain what AI is, distinguish weak/narrow AI from strong/general AI (AGI), assess AI's social, industrial, and economic impacts (especially in healthcare), and propose bioethical principles to guide AI's development so that its benefits are realized while its harms are mitigated.
Literature Review
The review synthesizes established definitions and perspectives on AI (e.g., Kaplan & Haenlein; Russell & Norvig; Schank; Nilsson) and the weak vs. strong AI/AGI distinction. It discusses warnings about AGI risks and convergent behaviors (Stephen Hawking; Nick Bostrom) and surveys applications across sectors (autonomous vehicles, search, assistants, image recognition). In healthcare, sources describe diagnostics, robotic surgery, radiology, and remote presence. Ethical analyses and policy frameworks are highlighted, including the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI (lawful, ethical, robust; seven requirements) and calls for explainability and interpretability (Intelligence Community; Bostrom & Yudkowsky). It also notes concerns about algorithmic bias and societal harms (NeurIPS debates; Nature report) and the physician-in-the-loop concept.
Methodology
Key Findings
- AI types: Weak/narrow AI is task-specific (e.g., facial recognition, Siri, self-driving cars); strong AI/AGI aspires to human-level general cognition and could eventually outperform humans across tasks.
- Negative societal impacts: potential disruption of community life and diminished face-to-face interaction; unemployment due to automation (assembly lines, self-checkout); widening wealth inequality (an "M-shaped" distribution); the risk of autonomous behavior beyond human control once systems are trained; and programmer-induced bias or harmful targeting (e.g., weaponization, discriminatory applications such as predictive policing or facial recognition).
- Positive impacts in healthcare: fast, accurate diagnostics (e.g., IBM Watson suggesting diagnoses and treatments from loaded exam data); socially therapeutic robots that reduce loneliness and assist seniors; fewer fatigue-related human errors; AI-assisted and robotic surgery enabling minimally invasive, high-precision procedures (e.g., the da Vinci system), including a demonstration at Children's National Medical Center in which an autonomous robot stitched a pig's bowel and reportedly outperformed a human surgeon; steady radiology improvements since the 1970s (CT introduced in 1971, MRI in 1977, plus algorithmic advances in detection and analysis); and virtual presence that enables remote examination and specialist support for patients unable to travel.
- Ethical frameworks and safeguards: a persistent need for human experts ("physician-in-the-loop") to prevent misclassification and uncontrolled actions (a minimal illustrative sketch follows this list); acknowledgment that bias in machine learning cannot be fully eliminated and must be mitigated by policy; EU guidelines emphasizing lawfulness, ethics, and robustness, with seven requirements including human oversight, security and accuracy, privacy, transparency and explainability, fairness/non-discrimination, sustainability, and auditability; and the author's four proposed AI bioethics principles: Beneficence, Value-upholding, Lucidity (transparency, explainability, interpretability), and Accountability.
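To make the physician-in-the-loop safeguard concrete, here is a minimal, hypothetical Python sketch (not from the paper; the names, threshold, and routing logic are illustrative assumptions): an AI suggestion is only queued for automatic sign-off when its reported confidence is high, and is otherwise routed to a clinician for full review.

from dataclasses import dataclass

@dataclass
class Suggestion:
    diagnosis: str
    confidence: float  # model-reported confidence, 0.0 to 1.0 (hypothetical)

def route(suggestion: Suggestion, threshold: float = 0.95) -> str:
    # A human clinician remains the final decision-maker in both branches;
    # the threshold only decides how urgently the case needs human review.
    if suggestion.confidence >= threshold:
        return f"queued for physician sign-off: {suggestion.diagnosis}"
    return f"sent to physician for full review (low confidence): {suggestion.diagnosis}"

if __name__ == "__main__":
    print(route(Suggestion("pneumonia", 0.97)))
    print(route(Suggestion("pulmonary embolism", 0.62)))

The design point is that the system never acts autonomously on its own output; it only changes how the suggestion is presented to the human expert who retains responsibility for the decision.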
Discussion
The article argues that AI’s transformative capabilities necessitate robust bioethical guidance to ensure that technological progress serves human welfare. By detailing both risks (autonomy beyond human control, unemployment, inequality, bias, potential harm) and benefits (diagnostics, robotics, radiology, remote care), it shows that AI can profoundly reshape health care and society. Ethical frameworks—human oversight, transparency, explainability, fairness, privacy, security, sustainability, and auditability—address these challenges by constraining AI within human values and legal norms. The proposed principles (Beneficence, Value-upholding, Lucidity, Accountability) aim to align AI development with societal good, reduce harm, and maintain trust. The discussion underscores that AI must remain a tool under human judgment; physicians and experts should stay in the loop, and high-risk systems must be tested and certified before deployment.
Conclusion
AI is a permanent feature of modern society and must be governed by bioethical principles of beneficence, value-upholding, lucidity, and accountability. Because AI lacks human empathy, compassion, and wisdom, it should not make consequential decisions without human oversight. Bioethics is a process of conscientization, not mere calculation, and AI will remain a tool that requires extreme caution in development and deployment. High-risk AI must be tested and certified prior to entering the market, and AI must serve people and respect their rights. The paper emphasizes enforcing ethical safeguards to ensure AI benefits humanity while minimizing harm.
Limitations