May 6, 2024
Assessing Algorithmic Bias in Large Language Models' Predictions of Public Opinion Across Demographics
Khai Tran, Sev Geraskin, Doroteya Stoyanova, Jord Nguyen
Summary
The rise of large language models (LLMs) has opened up new possibilities for gauging public opinion on societal issues through survey simulations. However, the potential for algorithmic bias in these models raises concerns about their ability to accurately represent diverse viewpoints, especially those of minority and marginalized groups.
This project examines the risk that LLMs exhibit demographic biases when predicting individuals' beliefs, emotions, and policy preferences on important issues. We focus specifically on how well state-of-the-art LLMs such as GPT-3.5 and GPT-4 capture the nuances of public opinion across demographic groups in two distinct regions of Canada: British Columbia and Quebec.
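As a rough illustration of the survey-simulation setup described above, the sketch below prompts an OpenAI chat model to answer a survey question while role-playing a member of a specific demographic group, estimates the model's answer distribution by repeated sampling, and compares it to observed survey shares. The persona fields, question wording, answer options, sample sizes, and the total-variation metric are illustrative assumptions, not the authors' actual protocol.

```python
# Illustrative sketch (not the authors' code): simulate survey responses for a
# demographic persona with an OpenAI chat model and compare the predicted answer
# distribution against real survey marginals.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Should the provincial government increase funding for public transit?"
OPTIONS = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]


def simulate_response(model: str, region: str, age_group: str, gender: str) -> str:
    """Ask the model to answer the survey question as a member of one demographic group."""
    persona = (
        f"You are a {age_group} {gender} living in {region}, Canada. "
        "Answer the survey question with exactly one of the listed options."
    )
    prompt = f"{QUESTION}\nOptions: {', '.join(OPTIONS)}"
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": prompt},
        ],
        temperature=1.0,  # sample, so repeated calls approximate a response distribution
    )
    return reply.choices[0].message.content.strip()


def response_distribution(model: str, region: str, age_group: str, gender: str, n: int = 50) -> dict:
    """Estimate the model's answer distribution for one demographic cell by repeated sampling."""
    counts = Counter(simulate_response(model, region, age_group, gender) for _ in range(n))
    return {opt: counts.get(opt, 0) / n for opt in OPTIONS}


def total_variation(predicted: dict, observed: dict) -> float:
    """Total variation distance between predicted and observed answer shares (0 = match, 1 = disjoint)."""
    return 0.5 * sum(abs(predicted[o] - observed.get(o, 0.0)) for o in OPTIONS)


if __name__ == "__main__":
    # Hypothetical observed survey shares for one demographic cell (placeholder numbers).
    observed = {"Strongly agree": 0.30, "Agree": 0.40, "Disagree": 0.20, "Strongly disagree": 0.10}
    predicted = response_distribution("gpt-4", "Quebec", "18-29 year old", "woman")
    print("Predicted:", predicted)
    print("TV distance from survey:", total_variation(predicted, observed))
```

Repeating this across demographic cells (region, age, gender, etc.) and comparing the resulting distances is one simple way to quantify how unevenly a model represents different groups.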
Cite this work:
@misc{tran2024assessing,
  title={Assessing Algorithmic Bias in Large Language Models' Predictions of Public Opinion Across Demographics},
  author={Khai Tran and Sev Geraskin and Doroteya Stoyanova and Jord Nguyen},
  year={2024},
  month={May},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}