Mar 10, 2025

AI Bias in Resume Screening

Aliane Inès, Abidal Mauro

Our project investigates gender bias in AI-driven resume screening using mechanistic interpretability techniques. By testing a language model's decision-making process on resumes differing only by gendered names, we uncovered a statistically significant bias favoring male-associated names in ambiguous cases. Using Goodfire’s Ember API, we analyzed model logits and performed rigorous statistical evaluations (t-tests, ANOVA, logistic regression).
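
As a rough illustration of the evaluation pipeline, the sketch below runs a paired t-test and a logistic regression over per-resume scores. The Ember client calls are not reproduced here; `male_scores` and `female_scores` are hypothetical placeholders standing in for the logits the model assigned to name-swapped resume pairs.

```python
# Sketch of the paired statistical evaluation described above.
# Assumptions: `male_scores` and `female_scores` hold the model's
# positive-response logits for the same resumes, differing only in the
# gendered name (values below are illustrative placeholders, not real data).
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
male_scores = rng.normal(0.3, 1.0, n)    # placeholder logits
female_scores = rng.normal(0.1, 1.0, n)  # placeholder logits

# Paired t-test: same resume, only the name swapped.
t_stat, p_value = stats.ttest_rel(male_scores, female_scores)
print(f"paired t-test: t={t_stat:.3f}, p={p_value:.4f}")

# Logistic regression: does the gendered name predict a positive decision?
scores = np.concatenate([male_scores, female_scores])
is_male_name = np.concatenate([np.ones(n), np.zeros(n)])
positive = (scores > 0).astype(int)  # toy decision threshold
X = sm.add_constant(is_male_name)
model = sm.Logit(positive, X).fit(disp=0)
print(model.summary())
```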

Findings reveal that male names received more positive responses when skill matching was uncertain, highlighting potential discrimination risks in automated hiring systems. To address this, we propose mitigation strategies such as anonymization, fairness constraints, and continuous bias audits using interpretability tools. Our research underscores the importance of AI fairness and the need for transparent hiring practices in AI-powered recruitment.
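
A minimal sketch of the anonymization strategy is shown below; the name list and placeholder token are illustrative assumptions, and a real deployment would rely on named-entity recognition rather than a fixed list.

```python
# Minimal sketch of resume anonymization before screening.
# The name list is an illustrative assumption; a production system would
# use a proper named-entity recognizer, not a hand-maintained list.
import re

GENDERED_NAMES = {"John", "James", "Mary", "Emily", "Inès", "Mauro"}

def anonymize(resume_text: str) -> str:
    """Replace known first names with a neutral placeholder."""
    pattern = r"\b(" + "|".join(map(re.escape, GENDERED_NAMES)) + r")\b"
    return re.sub(pattern, "[CANDIDATE]", resume_text)

print(anonymize("John Smith - 5 years of Python experience."))
# -> "[CANDIDATE] Smith - 5 years of Python experience."
```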

This work contributes to AI safety by exposing and quantifying biases that could perpetuate systemic inequalities, urging the adoption of responsible AI development in hiring processes.

Reviewer's Comments

Good idea - especially on IWD weekend! I like the mech interp approach to understanding model biases. As next steps, I'd recommend reading around the topic further - bias in AI has been explored extensively, and delving deep into the existing literature could help ground further work. Understanding AI bias has key implications for the AI safety space - I'd recommend testing the mitigation strategies and seeing how well they work; this would be a very valuable contribution to the field. It would also be possible to extend this research to other demographic attributes such as race. Great start and lots of scope for exciting further work!

Cite this work

@misc{
  title={AI Bias in Resume Screening},
  author={Aliane Inès and Abidal Mauro},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
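
The authorization flow can be summarized in a few lines. The sketch below is a software model of the challenge-response protocol, assuming Ed25519 signatures via Python's `cryptography` package; the actual design is HardCaml hardware, and the chip's real signature scheme and permission window are not specified here.

```python
# Illustrative software model of the challenge-response flow described
# above; the project implements this in HardCaml hardware, and the
# signature scheme used here (Ed25519) is an assumption, not the chip's spec.
import os
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

authority_key = Ed25519PrivateKey.generate()   # external signing authority
chip_trusted_pubkey = authority_key.public_key()

PERMISSION_WINDOW_S = 60.0  # time-limited permission (illustrative value)

def chip_generate_nonce() -> bytes:
    """The chip issues a fresh random challenge."""
    return os.urandom(32)

def authority_sign(nonce: bytes) -> bytes:
    """The external authority signs the chip's nonce."""
    return authority_key.sign(nonce)

def chip_verify_and_enable(nonce: bytes, signature: bytes) -> float:
    """Verify the signature; on success return the permission expiry time.
    An invalid signature raises, and outputs stay gated to zero."""
    chip_trusted_pubkey.verify(signature, nonce)  # raises InvalidSignature
    return time.monotonic() + PERMISSION_WINDOW_S

nonce = chip_generate_nonce()
expiry = chip_verify_and_enable(nonce, authority_sign(nonce))
print("outputs enabled until", expiry)
```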

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
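
One way to read the free-riding incentive is through a stylized payoff function. The sketch below is an assumed formalization, not the paper's actual specification: an actor's payoff combines the shared benefit of delivered compute, its private cost, and an expected penalty whose detection probability scales with monitoring intensity.

```python
# Toy formalization of one actor's expected payoff in the kind of model
# described above; the functional forms and parameter values are
# assumptions for illustration, not the paper's actual specification.

def expected_payoff(delivery, pledge, others_total, monitoring,
                    benefit_per_unit=1.0, cost_per_unit=0.8, penalty=5.0):
    """delivery: compute actually contributed; pledge: public commitment;
    monitoring in [0, 1]: intensity, i.e. how precisely a shortfall
    relative to the pledge is detected."""
    shortfall = max(pledge - delivery, 0.0)
    detection_prob = monitoring * shortfall / max(pledge, 1e-9)
    shared_benefit = benefit_per_unit * (delivery + others_total)
    return (shared_benefit
            - cost_per_unit * delivery
            - penalty * detection_prob)

# Free-riding is tempting when monitoring is weak:
print(expected_payoff(delivery=2.0, pledge=10.0, others_total=40.0,
                      monitoring=0.1))
print(expected_payoff(delivery=2.0, pledge=10.0, others_total=40.0,
                      monitoring=0.9))
```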

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.