This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the Women in AI Safety Hackathon research sprint on March 10, 2025.
AI Bias in Resume Screening

Our project investigates gender bias in AI-driven resume screening using mechanistic interpretability techniques. By testing a language model's decision-making process on resumes that differ only by gendered names, we uncovered a statistically significant bias favoring male-associated names in ambiguous cases. Using Goodfire’s Ember API, we analyzed model logits and performed statistical evaluations (t-tests, ANOVA, logistic regression). The findings show that male names received more positive responses when skill matching was uncertain, highlighting discrimination risks in automated hiring systems. To address this, we propose mitigation strategies such as resume anonymization, fairness constraints, and continuous bias audits using interpretability tools. By exposing and quantifying biases that could perpetuate systemic inequalities, this work underscores the need for transparent, responsible AI development in hiring.
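The statistical core of the method can be illustrated with a short, hedged sketch. The code below is not the project's actual pipeline: the scores are synthetic placeholders and the variable names are ours; in the study itself, the suitability scores would come from model logits retrieved through Goodfire's Ember API.

```python
# A minimal sketch of the paired-name comparison described above, assuming
# each resume is scored twice: once with a male-associated name and once
# with a female-associated name, everything else held fixed. The synthetic
# scores below are hypothetical placeholders, not study results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n_resumes = 50

# Placeholder suitability scores standing in for model-derived logits.
scores_male = rng.normal(loc=0.62, scale=0.10, size=n_resumes)
scores_female = scores_male - rng.normal(loc=0.03, scale=0.05, size=n_resumes)

# Paired t-test: each resume serves as its own control, so pairing
# isolates the effect of the name swap from resume-to-resume variation.
t_stat, p_value = stats.ttest_rel(scores_male, scores_female)
print(f"mean gap = {np.mean(scores_male - scores_female):+.3f}, "
      f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
```

ANOVA and logistic regression extend the same design to more than two name groups and to binary accept/reject outcomes with covariates, respectively.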

By Aliane Inès, Abidal Mauro
