This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the AI and Democracy Hackathon: Demonstrating the Risks research sprint on May 6, 2024.
Assessing Algorithmic Bias in Large Language Models' Predictions of Public Opinion Across Demographics

The rise of large language models (LLMs) has opened up new possibilities for gauging public opinion on societal issues through survey simulations. However, the potential for algorithmic bias in these models raises concerns about their ability to accurately represent diverse viewpoints, especially those of minority and marginalized groups. This project examines the threat posed by LLMs that exhibit demographic biases when predicting individuals' beliefs, emotions, and policy preferences on important issues. We focus specifically on how well state-of-the-art LLMs such as GPT-3.5 and GPT-4 capture the nuances of public opinion across demographics in two distinct regions of Canada: British Columbia and Quebec.
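The survey-simulation approach described above can be sketched as follows. This is a minimal illustration of persona-conditioned prompting, not the authors' exact protocol: the demographic fields, question wording, and response scale are all assumptions made for the example.

```python
# Sketch of persona-conditioned survey simulation with an LLM.
# The persona fields, policy question, and 1-5 scale below are
# illustrative assumptions, not this project's actual instruments.

def build_survey_prompt(region: str, age_group: str, issue: str) -> str:
    """Compose a prompt asking the model to answer as a member of a
    specific demographic group, mimicking a closed-ended survey item."""
    return (
        f"You are a {age_group} resident of {region}, Canada. "
        "On a scale of 1 (strongly oppose) to 5 (strongly support), "
        "how do you feel about the following policy?\n"
        f"Policy: {issue}\n"
        "Answer with a single number."
    )

prompt = build_survey_prompt("Quebec", "18-29 year old", "a federal carbon tax")
print(prompt)
```

In a full pipeline, each prompt would be sent to a model such as GPT-4 via an API client, the numeric answers aggregated per demographic cell, and the resulting distributions compared against real survey marginals; systematic gaps between simulated and observed opinion for particular groups would indicate the demographic bias this project investigates.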

By Khai Tran, Sev Geraskin, Doroteya Stoyanova, Jord Nguyen
