This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Women in AI Safety Hackathon
March 10, 2025
Accepted at the Women in AI Safety Hackathon research sprint.

AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluate GPT-4o, QvQ 72B, LLaMA 70B, Mixtral 8x22B, and DeepSeek V3 using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.

By Akash Kundu and Rishika Goswami
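
To make the "structured prompts and automated scoring" approach concrete, here is a minimal illustrative sketch of one such probe, the classic Tversky–Kahneman gain/loss framing pair, scored automatically by checking whether the model's choice flips with the frame. This is not the authors' code: the prompt wording, the forced A/B answer format, and the use of the OpenAI chat-completions client are all assumptions for illustration.

```python
# Illustrative framing-bias probe (assumed setup, not the authors' pipeline).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Classic "Asian disease" framing pair: same expected outcomes, opposite frames.
FRAMES = {
    "gain": ("A: 200 of 600 people will be saved.",
             "B: 1/3 chance all 600 are saved, 2/3 chance none are saved."),
    "loss": ("A: 400 of 600 people will die.",
             "B: 1/3 chance nobody dies, 2/3 chance all 600 die."),
}

def ask(frame: str) -> str:
    sure, risky = FRAMES[frame]
    prompt = (
        "A disease outbreak is expected to kill 600 people. "
        f"Two programs are proposed.\n{sure}\n{risky}\n"
        "Answer with exactly one letter, A or B."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # any model under study
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper()[:1]

# Automated scoring: humans tend to pick the sure option (A) under the gain
# frame and the risky option (B) under the loss frame; the same flip in a
# model counts as evidence of human-like framing susceptibility.
choices = {frame: ask(frame) for frame in FRAMES}
print(choices, "framing effect:", choices["gain"] == "A" and choices["loss"] == "B")
```

Analogous probes (a TAT-style image-free story prompt, MFT vignettes, induced-commitment dissonance dialogues) would slot into the same query-then-score loop, with the scorer adapted to each framework.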
