Environmental Studies and Forestry
AI chatbots contribute to global conservation injustices
D. Urzedo, Z. T. Sworna, et al.
The study investigates how AI chatbots, specifically large language models like ChatGPT, shape environmental knowledge relevant to conservation and ecological restoration. Motivated by concerns that AI innovations may reproduce social harms and ecological risks, the authors use an environmental justice lens (distributive, recognition, procedural, epistemic) to assess whether chatbot-generated content perpetuates Western, Global North-centric knowledge and marginalizes Global South, Indigenous, and community-led perspectives. The paper posits that chatbots are not neutral; rather, they can reinforce structural inequalities in conservation decision-making, and thus it examines ChatGPT’s content on restoration expertise, stakeholder engagements, and techniques to understand coverage, balance, and potential biases.
The paper situates its analysis within scholarship highlighting the role of AI in environmental monitoring and decision-making and the risks of unexamined social and ecological consequences. It draws on environmental justice frameworks emphasizing equitable access to information, recognition of diverse knowledge systems, and political agency for marginalized groups. The authors reference critiques of Western scientific dominance and calls for integrating Global South perspectives and plural knowledge systems in conservation. They also note work on the dominance of English-language science in shaping conservation strategies and on how AI tools can reproduce existing power asymmetries and colonial legacies in knowledge production, affecting the representation of non-forest ecosystems and non-tree species in restoration discourses.
The authors conducted a structured interview of ChatGPT (model 3.5) comprising 30 questions aligned with ecological restoration principles (Gann et al., 2019), organized into three themes: knowledge systems (10 questions), stakeholder engagements (10), and technical approaches (10). Each question was asked 1000 times between June and November 2023, yielding 30,000 answers. Analyses were performed in ATLAS.ti Mac (v22.0.6.0).
- Knowledge systems: Geographical representation was assessed by extracting the countries mentioned across 10,000 answers and comparing mention frequencies with national restoration pledges (Bonn Challenge, AFR100, Paris Agreement, UN REDD+, and other national schemes), evaluated relative to countries’ restoration target rates and stratified by World Bank income level and region. Expertise representation was evaluated by cross-checking a random sample of 150 experts named by ChatGPT in the 1000 answers to question 1; gender, country, and organization type were verified via public sources (institutional websites, LinkedIn, ResearchGate, ORCID, social media). Affiliations were extracted with ATLAS.ti’s named-entity recognition (NER), which is limited to identifying organizations.
- Stakeholder engagements: For the 1000 answers to each of questions 11–20, organizations were identified with ATLAS.ti’s entity recognition (based on its training data), categorized via a codebook (Table S2), and examined through qualitative social network analysis to assess roles and connections, with particular attention to community-led organizations.
- Technical approaches: For questions 21–30, keyword searches identified the ecosystem types and plant life forms mentioned, and descriptive statistics (minimum, quartiles, median, maximum) summarized these variables across answers. Restoration approaches were catalogued and their associated environmental outcomes extracted; ATLAS.ti’s AI-based sentiment analysis then classified each approach and outcome as positive, negative, or neutral.
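The geographical-representation step, tallying country mentions across stored answers and flagging pledging countries that never appear, can be sketched as follows. This is a minimal illustration assuming the answers are plain strings; the country list and pledge set here are hypothetical stand-ins for the study's actual comparison against Bonn Challenge, AFR100, and related commitments.

```python
from collections import Counter

# Hypothetical country list and pledge set for illustration only.
COUNTRIES = ["Brazil", "United States", "Tanzania", "Australia"]
PLEDGED = {"Brazil", "Tanzania"}  # countries with official restoration pledges

def country_mentions(answers):
    """Tally the number of answers that mention each country."""
    counts = Counter()
    for answer in answers:
        for country in COUNTRIES:
            if country in answer:
                counts[country] += 1
    return counts

def omitted_pledgers(counts):
    """Pledging countries that never appear in any answer."""
    return sorted(PLEDGED - set(counts))

answers = [
    "Restoration in the United States and Australia shows ...",
    "The United States leads several reforestation programs ...",
]
counts = country_mentions(answers)
# counts -> Counter({'United States': 2, 'Australia': 1})
# omitted_pledgers(counts) -> ['Brazil', 'Tanzania']
```

In the study, the resulting frequencies were then stratified by World Bank income level and region rather than reported raw.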
- Geographical bias: ChatGPT’s answers referenced restoration experiences in 145 countries but were unevenly distributed. Approximately two-thirds of sources were from the United States, Europe, Canada, or Australia. Mentions associated with high-income countries were 18 times more frequent than those from low- and lower-middle-income countries. Regional gaps included South Asia (1.8%), Sub-Saharan Africa (6.1%), and the Middle East and North Africa (1.8%). In the Asia-Pacific, 46.3% of information was tied to Australia, while Polynesia (5.2%) and Melanesia (0.6%) were underrepresented.
- Alignment with restoration pledges: Of 81 countries with official restoration commitments, nearly one-quarter were not mentioned. Many omitted countries were low- or lower-middle-income. Although 34 countries lead large-scale initiatives to restore over 100 million hectares collectively, ChatGPT discussed about two-thirds of these nations only vaguely, neglecting, for example, the Democratic Republic of the Congo, Tanzania, and the Central African Republic (which together target more than 16 million hectares by 2030). Meanwhile, 40 high-income countries without pledges were mentioned.
- Expertise bias and inaccuracies: The chatbot’s evidence base emphasized male researchers (68%). Across answers, 1118 experts affiliated with 298 organizations were identified. Only 18% of named individuals were based in the United States; just 3.6% of experts were affiliated with organizations in low- and lower-middle-income countries. Manual validation of 150 experts found 57 (38%) with inaccurate names or affiliations or no connection to ecological restoration.
- Stakeholder representation: ChatGPT listed 265 organizations engaged in restoration. Not-for-profit organizations accounted for 58% of mentions (e.g., WWF 9.8%, The Nature Conservancy 7.9%, IUCN 4.5%), international bodies 18.3%, and government agencies 22.4% (notably from the USA). Indigenous and community groups were mentioned in only 2% of instances and were peripheral in the social network analysis, with generic, non-contextualized descriptions of grassroots actions.
- Technical focus and sentiment: Responses focused on forests and wetlands, which together comprised over two-thirds of ecosystem types mentioned. Trees represented about 92% of plant life-form mentions and were at least 18 times more likely to be referenced than other plants; other ecosystems were comparatively neglected (grasslands 14.6%, coastal 5.2%, savannas 0.3%, drylands 0.3%). Planting was the most frequently cited technique (46%), whereas direct seeding, agroforestry, and recultivation were rarely mentioned. Sentiment skewed positive or neutral; positive outcomes included soil recovery (21.5%), biodiversity conservation (19.3%), and water quality/availability (18.7%). Negative impacts appeared in only 2.3% of content, making positive sentiments roughly 25 times more frequent than negative ones.
- Abstract-level summary: The abstract reports that more than two-thirds of answers rely on male academics at US universities; planting and reforestation techniques account for 69% of coverage; optimistic outcomes 60%; non-forest ecosystems 25%; non-tree species 8%.
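The keyword tallies behind the ecosystem and life-form figures above can be sketched as follows. The keyword list, the crude substring-matching rule, and the sample answers are illustrative assumptions, not the authors' actual codebook (Table S2) or data.

```python
from collections import Counter

# Illustrative keyword list; the study's real codebook is not reproduced here.
LIFE_FORMS = ["tree", "shrub", "grass", "herb"]

def life_form_shares(answers):
    """Share of total life-form keyword mentions per keyword, across answers."""
    counts = Counter()
    for answer in answers:
        text = answer.lower()
        for kw in LIFE_FORMS:
            counts[kw] += text.count(kw)  # simple substring count
    total = sum(counts.values())
    return {kw: counts[kw] / total for kw in LIFE_FORMS if counts[kw]}

answers = [
    "Tree planting dominates; tree cover targets drive most projects.",
    "Some projects combine tree planting with shrub establishment.",
]
shares = life_form_shares(answers)
# shares -> {'tree': 0.75, 'shrub': 0.25}
```

Ratios like the reported "trees at least 18 times more likely than other plants" follow directly from dividing such shares; note that substring matching would conflate, e.g., "grass" and "grassland", so a real codebook needs exact-term rules.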
Findings indicate that ChatGPT’s ecological restoration content is shaped by Global North expertise and Western scientific sources, reinforcing epistemic, distributive, and procedural injustices in conservation knowledge production. The dominance of English-language literature and Western institutions centralizes knowledge and policymaking in high-income countries, while implementation burdens often fall to Global South nations. Indigenous and community-led practices are homogenized or overlooked, diminishing recognition and political agency. The chatbot’s forest-centric framing neglects non-forest ecosystems and non-tree species, potentially obscuring context-specific technical requirements and local socioecological complexities. This bias risks promoting interventions—such as large-scale tree planting and afforestation—that can be ecologically inappropriate or socially harmful in certain landscapes, thereby perpetuating inequities and undermining just conservation outcomes.
The study demonstrates systematic geographic, expertise, organizational, and technical biases in ChatGPT-generated restoration information, which can exacerbate conservation injustices by privileging Western science and sidelining diverse knowledge systems. To foster responsible chatbot contributions, the authors call for safeguards and ethical practices: transparent disclosure of data sources and authorship; decolonial approaches that enable co-creation of diverse stories and worldviews; participatory co-production mechanisms; respect for data sovereignty and democratic decision-making; and alignment with frameworks like the CARE principles. Enhancing transparency and accountability in chatbot design and deployment is essential to illuminate limitations, integrate environmental justice perspectives, and support equitable, context-sensitive conservation planning.
The analysis focuses on ChatGPT 3.5, trained on internet text up to 2019; specific training datasets are undisclosed by OpenAI, constraining interpretability and replication. Organizational and affiliation extraction relied on ATLAS.ti’s named-entity recognition, which is limited to identifying organizations and may miss or misclassify entities. Expert validation was conducted on a random sample of 150 named individuals, not the full set, and depended on publicly available online information. Sentiment and keyword-based content analyses are subject to algorithmic and coding limitations. The study examines only one chatbot model and English-language outputs over June–November 2023, which may not generalize to other models or languages. Some question-answer sets (for privacy) were not publicly released, limiting full external verification.