AI chatbots contribute to global conservation injustices

Environmental Studies and Forestry


D. Urzedo, Z. T. Sworna, et al.

This groundbreaking study by Danilo Urzedo, Zarrin Tasnim Sworna, Andrew J. Hoskins, and Cathy J. Robinson investigates the biases in AI language models and their implications for global conservation efforts. It uncovers how ChatGPT's responses favor Western ecological perspectives while sidelining voices from low-income nations and Indigenous communities. Delve into the conversation on how we can ensure that AI reflects a more just approach to environmental restoration.

Introduction
Artificial intelligence (AI) is increasingly used in environmental data collection and analysis, influencing conservation strategies globally. While AI offers potential benefits, concerns exist regarding its potential to exacerbate existing inequalities. This paper utilizes the framework of environmental justice to analyze the content generated by ChatGPT, a prominent AI chatbot, focusing on ecological restoration. The research questions the potential for AI to reproduce biases or misinterpretations in approaches to global conservation. The study emphasizes the importance of equitable access to ecological restoration information, ensuring diverse knowledge systems are considered, and empowering marginalized groups in environmental decision-making. This is especially relevant given the growing call for Global South perspectives to address power asymmetries within Western science and incorporate diverse knowledge systems into conservation practices. The study specifically investigates the text-based content of ChatGPT concerning ecological restoration expertise, stakeholder engagements, and techniques, examining how these reflect and potentially perpetuate existing inequalities.
Literature Review
The introduction cites several works highlighting the increasing influence of AI in environmental conservation, as well as concerns about the potential for AI to negatively impact social justice and ecological integrity. The authors highlight the importance of environmental justice as a framework for analyzing the effects of AI on conservation practices. The literature review implicitly points to the existing power imbalances in conservation science, with a dominance of Western perspectives and a neglect of Indigenous and Global South knowledge systems. This sets the stage for the study's methodology and analysis of ChatGPT's responses.
Methodology
The study employed a 30-question interview with ChatGPT, focusing on the distributive, procedural, and epistemic dimensions of environmental justice. The questions were categorized into three themes: knowledge systems, stakeholder engagements, and techniques. Each question was asked 1,000 times, yielding a dataset of 30,000 answers collected between June and November 2023; the responses were analyzed with ATLAS.ti software. The knowledge systems analysis examined the geographical representation of expertise in ChatGPT's responses, comparing the frequency of countries mentioned with their official restoration commitments. Expertise was further analyzed by cross-checking the chatbot's list of experts against gender, country, and organization-type representation. The stakeholder engagement analysis identified and categorized the organizations mentioned by ChatGPT, with particular attention to community-led organizations, and a social network analysis mapped the relationships between them. The technical approaches analysis examined the diversity of ecosystems, life forms, restoration approaches, and environmental outcomes mentioned by ChatGPT; keyword and sentiment analysis was used to assess the focus on different ecosystems, species, and approaches, along with their perceived environmental impacts.
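The repeated-query and frequency-tallying design described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: `ask_chatbot` is a hypothetical stand-in for their ChatGPT API calls, and the country list is illustrative rather than the study's coding scheme.

```python
from collections import Counter

def ask_chatbot(question: str) -> str:
    """Hypothetical stand-in for a ChatGPT API call; the study
    collected 1,000 responses per question this way."""
    raise NotImplementedError("replace with a real API call")

def tally_country_mentions(responses, countries):
    """Count how often each country name appears across responses,
    mirroring the study's geographical-representation analysis."""
    counts = Counter()
    for text in responses:
        for country in countries:
            if country.lower() in text.lower():
                counts[country] += 1
    return counts

# Illustrative example using canned responses instead of live API output.
sample = [
    "Restoration expertise is concentrated in the United States.",
    "Projects in Brazil and the United States use native species.",
]
print(tally_country_mentions(sample, ["United States", "Brazil", "Kenya"]))
# Counter({'United States': 2, 'Brazil': 1})
```

In the study itself, tallies like these were compared against countries' official restoration commitments to reveal the geographical skew reported in the findings.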
Key Findings
The analysis revealed significant biases in ChatGPT's responses:

* **Geographical Bias:** Two-thirds of ChatGPT's sources were from high-income countries (US, Europe, Canada, Australia), with low- and lower-middle-income countries significantly underrepresented (7%). Information from South Asia, Sub-Saharan Africa, and the Middle East and North Africa was particularly scarce.
* **Expertise Bias:** ChatGPT predominantly cited male researchers (68%), mostly from US universities. Representation of experts from low- and lower-middle-income countries was very low (3.6%), and over one-third of the experts listed were inaccurate.
* **Organizational Bias:** ChatGPT emphasized established international organizations and government agencies from high-income nations, with Indigenous and community-led organizations receiving only 2% of mentions. These groups were peripheral in the social network analysis.
* **Technical Approach Bias:** The chatbot showed a strong focus on tree planting and reforestation (69%), associating these with optimistic environmental outcomes (60%). Non-forest ecosystems and non-tree species were largely neglected, and negative impacts of restoration were underrepresented. The language used tended to be positive and neutral, lacking critical discussion of the injustices and inequities that may arise from restoration efforts.
Discussion
The findings reveal significant biases in AI-generated conservation information, illustrating the perpetuation of existing power imbalances in the field. The overreliance on Western science and the marginalization of Indigenous and Global South knowledge systems highlight the distributive, procedural, and epistemic injustices within conservation. The focus on tree planting and reforestation, neglecting diverse ecosystems and approaches, exposes how AI can reinforce unsustainable practices. The study underscores the importance of addressing colonial legacies in knowledge production and calls for more inclusive and equitable AI development. The dominant representation of Western expertise and approaches could lead to ineffective or even harmful conservation strategies in diverse contexts.
Conclusion
This study demonstrates how AI chatbots can reproduce and amplify existing biases in conservation knowledge, highlighting the need for careful consideration of ethical and justice-related implications in AI development and deployment. Future research should explore methods to mitigate biases in AI training data and promote more inclusive data governance frameworks. Collaboration with Indigenous communities and researchers from Global South countries is essential to ensure AI tools genuinely support just and effective conservation efforts.
Limitations
The study's limitations include its reliance on a single chatbot (ChatGPT) and its training data, which may not fully represent the diversity of perspectives within the conservation field. The documented inaccuracy of some expert information generated by the chatbot is another limitation. Finally, the study focused on ecological restoration, and the findings may not generalize to other areas of conservation.