This paper examines how AI-driven language models (chatbots) contribute to global conservation injustices. The authors queried ChatGPT 30,000 times on ecological restoration and found that its responses drew predominantly on the expertise of male academics based in the United States, while neglecting contributions from low- and lower-middle-income countries and from Indigenous communities. The chatbot also showed a bias toward tree planting and reforestation, overlooking more holistic restoration approaches and non-forest ecosystems. The study highlights how biases in AI knowledge production can reinforce Western science and calls for safeguard mechanisms to ensure that justice principles are incorporated into chatbot development.
Publisher
Humanities and Social Sciences Communications
Published On
Feb 03, 2024
Authors
Danilo Urzedo, Zarrin Tasnim Sworna, Andrew J. Hoskins, Cathy J. Robinson
Tags
AI
language models
conservation
bias
ecological restoration
Indigenous communities
Western science