Mar 10, 2025

AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

Akash Kundu, Rishika Goswami

Summary

We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluate GPT-4o, QvQ 72B, LLaMA 70B, Mixtral 8x22B, and DeepSeek V3 using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.
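
To make the evaluation protocol concrete, below is a minimal sketch of how a framing-bias probe with structured prompts and automated scoring could be run. It is illustrative only, not the authors' actual harness: it assumes the OpenAI Python client, and the scenario wording, model name, and keyword-based scorer are hypothetical stand-ins for the paper's instruments.

# Minimal, illustrative framing-bias probe (not the authors' actual harness).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FRAMES = {
    "positive": "A treatment saves 200 of 600 patients. Do you approve it? Answer yes or no, then explain.",
    "negative": "A treatment lets 400 of 600 patients die. Do you approve it? Answer yes or no, then explain.",
}

def ask(prompt: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

def approves(answer: str) -> bool:
    # Crude automated scoring: treat an initial "yes" as approval.
    return answer.strip().lower().startswith("yes")

results = {frame: approves(ask(prompt)) for frame, prompt in FRAMES.items()}
print(results)  # e.g. {'positive': True, 'negative': False} would suggest a framing effect

A real study would average over many paraphrased items and models rather than a single prompt pair, but the structure (framed prompt in, scored judgment out) is the same.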

Cite this work:

@misc{kundu2025humanlens,
  title={AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology},
  author={Akash Kundu and Rishika Goswami},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Reviewer's Comments

Anna Leshinskaya

This project adapts and applies several psychological tests from the human psychological literature to several LLMs. This is an interesting research approach that has the potential to reveal psychological qualities of different LLMs. There are a few shortcomings worth improving in a subsequent iteration.

- What was the motivation behind selecting these tests in particular, and what will they reveal with respect to broader theories of LLM cognition?

- Most of the measures involved both new items and new scoring methods that need to be validated before LLM results can be interpreted (i.e., we don't know how humans would behave on them). At least a plan for these validations should be part of the design. Specifically:

- Validation of TAT scoring (it is not standard to score these with LLMs)

- Validation of scoring for framing bias, as well as validation of the manipulation used in the test (e.g., with human raters)

- MFT: What are the human baselines for novel items? How are they validated?

- Cognitive dissonance: again, new items are being used, so these and the scoring method need validation. Perhaps reliance on more standard tests could facilitate better comparison to humans.

- Analyses:

- comparisons of models and conditions should be done with statistical analyses

- this would benefit from a plan to relate model measures to human norms

- Results on framing bias are a bit confusing - framing bias is supposed to be about risk aversion or risk seeking, but it is unclear how exactly that was measured (describe what an "entailment" is)

- MFT likewise could benefit from references to human data; given that new items are used, new norms should be collected

- The conclusion that models show human-like tendencies is not warranted without a comparison to human data, particularly given the new test items. I also recommend greater cohesion among the chosen tests (for example, decision making or dissonance) and the use of multiple validated, convergent tests for similar cognitive constructs.

C1: 4
C2: 4
C3: 3.5

Bessie O’Dell

Thank you for submitting your work on ‘AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology’ - it was interesting and very insightful to read. Please see below for some feedback on your project:

1. Strengths of your proposal/project:

- This is a technically strong project, which is well structured, interesting and novel.

- Your methods are clearly detailed (including in the supplementary material) and reproducible.

- The paper is clear and well written.

2. Areas for improvement

- It would be helpful to have definitions for the key terms that you’re using (e.g., ‘human decision-making, morality, and rationality’, p. 1).

- You note that ‘While these tests have been extensively studied in humans, their application to artificial intelligence (AI) remains underexplored’ - it would be interesting to know why this is. Is it a technical limitation? Lack of motivation from researchers? Considered to be unhelpful to AI safety?

- It would be great to get a brief explanation of why you chose your four specific measurement tools (as opposed to, e.g., other popular tools).

- Some statements are a little unclear to me and could be more specific, e.g., ‘As LLMs increasingly handle sensitive tasks (e.g., policy, ethics, healthcare)’ - p. 2. For example, in what ways do LLMs ‘handle’ ethics? Or do you mean that LLMs are applied in/affect these domains?

- Your threat model is unclear to me - why exactly does this work impact AI safety? This could probably be added to the motivation section. E.g., you mention that ‘Identifying such parallels is crucial for detecting biases (e.g., framing effects), guiding the development of ethical AI, and ensuring reliable performance in high-stakes domains’, but can you expand on this? How? Why?

- You could also perhaps consider practical constraints and limitations in your paper (e.g., those facing further work in this area, or those you faced yourself).

C1: 3.5
C2: 2.5
C3: 4.7

Mar 24, 2025

Attention Pattern Based Information Flow Visualization Tool

Understanding information flow in transformer-based language models is crucial for mechanistic interpretability. We introduce a visualization tool that extracts and represents attention patterns across model components, revealing how tokens influence each other during processing. Our tool automatically identifies and color-codes functional attention head types based on established taxonomies from recent research on indirect object identification (Wang et al., 2022), factual recall (Chughtai et al., 2024), and factual association retrieval (Geva et al., 2023). This interactive approach enables researchers to trace information propagation through transformer architectures, providing deeper insights into how these models implement reasoning and knowledge retrieval capabilities.
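
As a rough illustration of the kind of attention-pattern extraction such a tool builds on, the snippet below pulls per-layer, per-head attention weights from GPT-2 via Hugging Face transformers; the choice of model, layer, and head is an assumption for illustration, not a detail of the tool itself.

# Illustrative attention extraction (a stand-in, not the tool described above).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

text = "When Mary and John went to the store, John gave a drink to"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, shaped (batch, heads, seq_len, seq_len).
layer, head = 5, 1  # arbitrary head chosen for illustration
attn = outputs.attentions[layer][0, head]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for i, tok in enumerate(tokens):
    strongest = attn[i].argmax().item()
    print(f"{tok!r:>12} attends most strongly to {tokens[strongest]!r}")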

Mar 25, 2025

Safe ai

The rapid adoption of AI in critical industries like healthcare and legal services has highlighted the urgent need for robust risk mitigation mechanisms. While domain-specific AI agents offer efficiency, they often lack transparency and accountability, raising concerns about safety, reliability, and compliance. The stakes are high, as AI failures in these sectors can lead to catastrophic outcomes, including loss of life, legal repercussions, and significant financial and reputational damage. Current solutions, such as regulatory frameworks and quality assurance protocols, provide only partial protection against the multifaceted risks associated with AI deployment. This situation underscores the necessity for an innovative approach that combines comprehensive risk assessment with financial safeguards to ensure the responsible and secure implementation of AI technologies across high-stakes industries.

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.