Abstract
Large Language Models (LLMs) like ChatGPT offer a promising new approach to simulating public opinion, potentially aiding public policy development. However, concerns remain about their global applicability and potential biases across demographics and themes. This research uses data from the World Values Survey to assess ChatGPT's performance in diverse contexts, revealing significant performance disparities across countries and demographic groups (gender, ethnicity, age, education, and social class). Thematic biases in political and environmental simulations were also uncovered, highlighting the need for improved LLM representativeness and bias mitigation for equitable integration into public opinion research.
Publisher
Humanities & Social Sciences Communications
Published On
Aug 28, 2024
Authors
Yao Qu, Jue Wang
Tags
Large Language Models
public opinion
bias
World Values Survey
demographics
policy development
simulation