Large Language Models (LLMs) like ChatGPT offer a promising new approach to simulating public opinion, potentially aiding public policy development. However, concerns remain about their global applicability and potential biases across demographics and themes. This research uses data from the World Values Survey to assess ChatGPT's performance in diverse contexts, revealing significant performance disparities across countries and demographic groups (gender, ethnicity, age, education, and social class). Thematic biases in political and environmental simulations were also uncovered, highlighting the need for improved LLM representativeness and bias mitigation for equitable integration into public opinion research.
Publisher: Humanities & Social Sciences Communications
Published: Aug 28, 2024
Authors: Yao Qu, Jue Wang
Tags: Large Language Models, public opinion, bias, World Values Survey, demographics, policy development, simulation