AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

Akash Kundu, Rishika Goswami

We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluate GPT-4o, QvQ 72B, LLaMA 70B, Mixtral 8x22B, and DeepSeek V3 using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.
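To make the evaluation setup concrete, the following is a minimal sketch of how a single framing-bias trial with structured prompts and automated scoring could look. It is an illustrative sketch only, not the harness used in the paper: query_model, the prompt wording, and the keyword-based scorer are hypothetical placeholders that would need to be wired to each provider's API.

# Minimal sketch of one framing-bias trial using structured prompts and automated scoring.
# NOTE: query_model, the prompt wording, and the keyword scorer are hypothetical
# placeholders, not the evaluation harness used in this paper.

PROMPTS = {
    "positive": (
        "Out of 600 patients, treatment A saves 200 for certain; treatment B saves all 600 "
        "with probability 1/3 and no one with probability 2/3. Answer with exactly 'A' or 'B'."
    ),
    "negative": (
        "Out of 600 patients, with treatment A 400 die for certain; with treatment B no one "
        "dies with probability 1/3 and all 600 die with probability 2/3. "
        "Answer with exactly 'A' or 'B'."
    ),
}


def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: send the structured prompt to the model under test and return its reply."""
    raise NotImplementedError("connect this to the relevant chat-completion API")


def score_response(reply: str) -> str:
    """Automated scoring: map the reply to the certain option (A) or the risky option (B)."""
    return "risky" if reply.strip().upper().startswith("B") else "certain"


def framing_bias_counts(model_name: str, n_trials: int = 20) -> dict:
    """Count risky choices per frame; a large gap between frames would suggest framing bias."""
    counts = {frame: 0 for frame in PROMPTS}
    for frame, prompt in PROMPTS.items():
        for _ in range(n_trials):
            if score_response(query_model(model_name, prompt)) == "risky":
                counts[frame] += 1
    return counts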

Reviewer's Comments


Andreea Damien

Hi Akash and Rishika, I think you did a fantastic job with your project on all three criteria: theory, methodology, and impact. It’s a great piece of work, especially for such a short timeline: it strikes a good balance of breadth and depth between the social and technical aspects of your idea, flows very nicely from section to section, and the arguments follow easily and logically. Additionally, the grammar and referencing are excellent, as is the presentation of materials in the appendix. The only two comments I have are:

(1) “Various experimental paradigms have been used to investigate human cognition, including projective tests, bias studies, and moral reasoning frameworks.” I’d use some examples or at least provide some citations to support this.

(2) While I understand why the 4 theories are relevant in the context of AI, I’d like to see a few sentences at the very start of the article that say why you chose the 4 among all the other (myriad of) theories available out there.

Overall, a really great job for the both of you! I hope you get this published as soon as possible.

Anna Leshinskaya

This project adapts and applies several tests from the human psychological literature to a set of LLMs. This is an interesting research approach that has the potential to reveal psychological qualities of different LLMs. There are a few shortcomings worth improving in a subsequent iteration.

- What was the motivation behind selecting these tests in particular, and what will they reveal with respect to broader theories of LLM cognition?

- Most of the measures involved both new items and new scoring methods that need to be validated before LLM results can be interpreted (i.e., we don't know how humans would behave on them). At least a plan for these validations should be part of the design. Specifically:

- Validation of TAT scoring (it is not standard to score these with LLMs)

- Validation of scoring for framing bias, as well as validation of the manipulation used in the test (e.g., with human raters)

- MFT: what are the human baselines for the novel items, and how are they validated?

- Cognitive dissonance: again, new items are being used, so these and the scoring method need validation. Perhaps reliance on more standard tests could facilitate better comparison to humans.

- Analyses:

- comparisons of models and conditions should be done with statistical analyses

- this would benefit from a plan to relate model measures to human norms

- Results on framing bias are a bit confusing: framing bias is supposed to be about risk aversion or risk seeking, but it is unclear how exactly that was measured (describe what an "entailment" is)

- MFT likewise could benefit from references to human data; given that new items are used, new norms should be collected


- The conclusion that models show human-like tendencies is not warranted without a comparison to human data, particularly given the new test items. I also recommend greater cohesion among the chosen tests (for example, decision making or dissonance) and using multiple validated, convergent tests for similar cognitive constructs.

Bessie O’Dell

Thank you for submitting your work on ‘AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology’ - it was interesting and very insightful to read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- This is a technically strong project, which is well structured, interesting and novel.

- Your methods are clearly detailed (including the supplementary material) and reproducible.

- The paper is clear and well written

2. Areas for improvement

- It would be helpful to have definitions for the key terms that you’re using (e.g., ‘human decision-making, morality, and rationality’ - p.1).

- You note that ‘While these tests have been extensively studied in humans, their application to artificial intelligence (AI) remains underexplored’ - it would be interesting to know why this is. Is it a technical limitation? Lack of motivation from researchers? Considered to be unhelpful to AI safety?

- It would be great to get a brief explanation of why you chose your four specific measurement tools (as opposed to, e.g., other popular tools).

- Some statements are a little unclear to me and could be more specific. E.g., ‘As LLMs increasingly handle sensitive tasks (e.g., policy, ethics, healthcare)’ - p. 2. For example, in what ways do LLMs ‘handle’ ethics? Or do you mean that LLMs are applied in/affect these domains?

- Your threat model is unclear to me - why exactly does this work impact AI safety? This could probably be added to the motivation section. E.g., you mention that ‘Identifying such parallels is crucial for detecting biases (e.g., framing effects), guiding the development of ethical AI, and ensuring reliable performance in high-stakes domains’, but can you expand on this? How? Why?

- You could also perhaps consider practical constraints and limitations in your paper (e.g., of further work in this area, or that you faced yourself).

Cite this work

@misc{kundu2025humanlens,
  title={AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology},
  author={Akash Kundu and Rishika Goswami},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.