Navigating the perils of artificial intelligence: a focused review on ChatGPT and responsible research and innovation

A. Polyportis and N. Pahos

Discover the potential pitfalls of AI tools like ChatGPT, as researched by Athanasios Polyportis and Nikolaos Pahos. This article examines the impact of AI chatbots on human relationships, employment, privacy, and bias, and introduces a multi-stakeholder framework to ensure responsible innovation in AI development.

Introduction
The paper addresses whether ChatGPT constitutes a net benefit or risk to society by outlining both the potential advantages and the challenges of advanced AI. ChatGPT's emergence in late 2022 showcased unprecedented language capabilities that can transform sectors such as education, healthcare, finance, and public services. Alongside these opportunities, the authors highlight pressing ethical concerns, including privacy, inclusivity, inequality, bias, safety, and the possibility that ChatGPT may negatively influence users' moral judgments. Given the nascent state of AI ethics, the study argues for frameworks that embed ethical considerations throughout the AI lifecycle. The review poses two research questions: (RQ1) What are the perils of irresponsible development and implementation of ChatGPT and similar AI tools? (RQ2) In what ways can a multi-stakeholder Responsible Research and Innovation (RRI) framework guide the sustainable development and use of chatbots? The study aims to map risks across individual, organizational, and societal levels and to propose an RRI-based, stakeholder-inclusive framework to guide responsible AI.
Literature Review
The review situates ChatGPT within broader debates on AI’s ethical implications, referencing concerns about existential risk, superintelligence, and singularity. It synthesizes prior work identifying AI-related dangers: privacy intrusions, bias, misinformation, safety issues, and inequities. The authors draw on Responsible Research and Innovation (RRI) scholarship—especially Von Schomberg’s definition emphasizing ethical acceptability, sustainability, and societal desirability, and Stilgoe et al.’s four dimensions: anticipation, inclusiveness, reflexivity, and responsiveness. RRI’s policy roots in the European Commission and its normative anchor points (safety, privacy, sustainability, quality of life, gender equality, underpinned by transparency) are reviewed. The paper also connects stakeholder theory to RRI to argue that AI innovators are accountable to a broad ecosystem, not only shareholders. Prior literature underscores public concern about AI, the limited impact of ethics guidelines on developer decisions without incentives, and the need for robust, multi-actor governance. This theoretical grounding motivates the proposed multi-stakeholder RRI framework for AI and chatbots.
Methodology
The study employs a focused literature review to synthesize concepts and theories related to advanced AI and RRI, rather than an exhaustive survey or quantitative bibliometric analysis. Searches were conducted in Scopus and Google Scholar using keyword combinations such as "ChatGPT AND responsible research and innovation," "artificial intelligence AND responsible research and innovation," "chatbot AND responsible research and innovation," "ChatGPT AND responsible development," "ChatGPT AND perils," "ChatGPT AND challenges," and "ChatGPT AND implications," yielding 1,892 records. Inclusion criteria were: English language; publication years 2011–2023; full-text availability; and journal articles or conference papers. Non-international journals, preprints, and non-peer-reviewed items were excluded. Screening against these criteria retained 118 records, from which 58 duplicates were removed; a cited-reference snowballing step then added 12 unique records, for a final sample of 72 studies (118 − 58 + 12 = 72). Selection emphasized relevance to the research questions based on the authors' judgment. The approach distills key concepts, maps the challenges of AI tools, and informs construction of a targeted multi-stakeholder RRI framework for chatbots.
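As a quick sanity check, the screening flow can be reproduced in a few lines of Python. The counts are those reported above; the variable names are ours.

```python
# Screening flow reported in the paper's Methodology section.
identified = 1892   # records returned by the Scopus and Google Scholar searches
retained   = 118    # records meeting the inclusion criteria
duplicates = 58     # duplicate records removed
snowballed = 12     # unique records added via cited-reference snowballing

final_sample = retained - duplicates + snowballed
assert final_sample == 72  # final sample size reported in the paper
print(f"{identified} identified -> {retained} retained -> {final_sample} included")
```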
Key Findings
RQ1 (perils across levels):
- Individual: risk of misinformation and disinformation from plausible but inaccurate outputs; potential targeting of vulnerable users with harmful content and psychological harm; privacy and security risks, including data harvesting and identity theft; fear and uncertainty affecting human–AI interactions; possible substitution of chatbots for human relationships, leading to alienation.
- Organizational: labor market disruption, job displacement, and a shift toward part-time or unstable work; reduced human-to-human contact in customer support and management; diminished self-efficacy and perceived creepiness in human–AI interactions, harming brand experience and loyalty; intellectual property theft and copyright issues; cybersecurity threats and phishing; exposure of confidential information; legal and reputational risks.
- Societal: educational integrity concerns (reduced creativity, exam cheating) and authorship and research integrity issues; digital inequities if access becomes paywalled; healthcare risks, including system errors, patient privacy concerns, and ethical dilemmas in critical decisions; implications for governmental surveillance; broader concerns that superintelligence and singularity could lead to loss of autonomy and upheaval.
Ethical necessity: the literature converges on the need for principled, enforceable governance to mitigate risks and align AI with ethical and sustainable goals.
RQ2 (RRI framework): the paper advocates a multi-stakeholder RRI approach grounded in stakeholder theory, RRI's four dimensions (anticipation, inclusiveness, reflexivity, responsiveness), and its anchor values (safety, privacy, sustainability, quality of life, and gender equality, with transparency overarching). The framework positions the AI innovator, regulators, and direct and indirect stakeholders in a feedback loop that guides ethical development and deployment (see the illustrative sketch after this list).
Practical guidelines:
1) Build robust, safe, secure, transparent, and accountable systems; pursue shared prosperity through multi-layered AI governance (government, civil society, private sector, academia) aligned with the UN SDGs and UNESCO's global AI ethics agreement.
2) Incentivize responsible AI through financial and non-financial mechanisms (e.g., tax relief, regulatory relief, training, prestige) and consumer pressure consistent with CSR.
3) Establish trustworthy AI principles and oversight combining ex-ante and ex-post regulation (e.g., the EU AI Act), impact assessments (UNESCO's Ethical Impact Assessment and Readiness Assessment Methodology), and broad stakeholder participation.
4) Invest in research on transparency, fairness, and inclusivity, and adopt participatory co-design with end users (e.g., clinicians and patients in healthcare, students and educators in education).
Data points: 1,892 records identified; 118 retained after inclusion screening; 58 duplicates removed; 12 added via citation search; 72 studies included in the final sample.
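The feedback-loop structure lends itself to a toy illustration. The sketch below is purely our own, not code or a formal model from the paper: Stakeholder, assess, and RRIFeedbackLoop are hypothetical names, and "addressing" a concern is reduced to flipping a flag, standing in for real mitigation work by the innovator.

```python
# Toy illustration (ours, not the paper's) of the multi-stakeholder RRI
# feedback loop: stakeholders flag unmet RRI values; the innovator responds.
from dataclasses import dataclass, field

# The five RRI anchor values named in the framework (transparency overarches all).
RRI_VALUES = ["safety", "privacy", "sustainability", "quality of life", "gender equality"]

@dataclass
class Stakeholder:
    name: str
    role: str  # "innovator", "regulator", "direct", or "indirect"

    def assess(self, system_state: dict) -> set:
        # Anticipation: flag every RRI value the system does not yet address.
        return {v for v in RRI_VALUES if not system_state.get(v, False)}

@dataclass
class RRIFeedbackLoop:
    stakeholders: list
    system_state: dict = field(default_factory=dict)

    def iterate(self) -> set:
        # Inclusiveness: pool the concerns of all stakeholders.
        concerns = set().union(*(s.assess(self.system_state) for s in self.stakeholders))
        # Reflexivity and responsiveness: the innovator addresses each concern.
        for value in concerns:
            self.system_state[value] = True  # placeholder for real mitigation
        return concerns

loop = RRIFeedbackLoop([
    Stakeholder("AI innovator", "innovator"),
    Stakeholder("data protection authority", "regulator"),
    Stakeholder("end users", "direct"),
])
print(loop.iterate())  # first cycle surfaces all five unmet values
print(loop.iterate())  # second cycle returns set(): no remaining concerns
```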
Discussion
The findings address RQ1 by detailing multi-level risks posed by ChatGPT, reinforcing prior research on AI’s labor and societal impacts and highlighting urgent needs around misinformation control, privacy, equity, and governance. Addressing RQ2, the paper argues that a multi-stakeholder RRI framework is well-suited to embed ethics into AI lifecycles, aligning chatbot development with ecosystem values (safety, privacy, sustainability, quality of life, gender equality) via anticipation, inclusiveness, reflexivity, and responsiveness. Implications include: encouraging developers to prioritize transparency, bias mitigation, and regular evaluations; advocating stakeholder engagement and partnerships; proposing sector-specific guidance (notably in education and healthcare) to maintain integrity and safety; and calling for robust AI governance frameworks that operate across local, national, and international levels. The discussion emphasizes building trust, legitimacy, and accountability, mobilizing policy tools (procurement standards, funding), and ensuring global coordination for transboundary AI impacts. The study positions its framework as a foundation for future empirical work on RRI in AI ecosystems.
Conclusion
The paper concludes that ChatGPT, as a human-made tool, can be used for both beneficial and harmful purposes, making responsible development and use imperative. It proposes a multi-stakeholder RRI approach, grounded in stakeholder theory and RRI principles, to guide ethical, sustainable chatbot innovation. The review offers a comprehensive framework and actionable guidelines, stressing continuous dialogue among innovators, regulators, academics, practitioners, and the public to ensure alignment with societal values through anticipation, inclusiveness, reflexivity, and responsiveness. The article calls for future empirical research on the effectiveness of stakeholder engagement, the roles of diverse actors in shaping ChatGPT, and the impact of regulatory measures on promoting responsible innovation, with the goal of ensuring equity and clarity in evolving AI systems.
Limitations
As a focused review, the study does not aim for exhaustive coverage of the entire research domain. Article selection prioritized relevance to the research questions based on authors’ judgment, introducing potential selection bias. The review was limited to English-language, peer-reviewed journal and conference articles published between 2011 and 2023, excluding preprints and non-international journals. The study did not perform bibliometric analyses, and it synthesizes existing literature rather than providing new empirical data, which may limit generalizability and causal inference.