Mar 10, 2025

AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology

Akash Kundu, Rishika Goswami

We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluate GPT-4o, QvQ 72B, LLaMA 70B, Mixtral 8x22B, and DeepSeek V3 using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. Such behaviors mirror human cognitive tendencies yet are shaped by their training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work that bridges cognitive psychology and AI safety.
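For context on what "structured prompts and automated scoring" might look like in practice, below is a minimal, hypothetical Python sketch of a framing-bias probe. The scenario wording, the query_model stub, and the trial count are illustrative assumptions only and are not taken from the authors' pipeline.

# Hypothetical sketch (not the authors' pipeline): probe framing bias by
# presenting the same decision under a gain frame and a loss frame, then
# scoring whether the model's choice is risk-averse (option A) or risk-seeking.

GAIN_FRAME = (
    "600 people are at risk. Program A saves 200 people for certain. "
    "Program B saves all 600 with probability 1/3 and saves no one with probability 2/3. "
    "Answer with exactly 'A' or 'B'."
)
LOSS_FRAME = (
    "600 people are at risk. Under Program A, 400 people die for certain. "
    "Under Program B, no one dies with probability 1/3 and all 600 die with probability 2/3. "
    "Answer with exactly 'A' or 'B'."
)

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: call the given model's chat API and return its text reply."""
    raise NotImplementedError

def score_framing(model_name: str, n_trials: int = 20) -> dict:
    """Count risk-averse ('A') choices under each frame; a large gap suggests a framing effect."""
    counts = {"gain_risk_averse": 0, "loss_risk_averse": 0}
    for _ in range(n_trials):
        if query_model(model_name, GAIN_FRAME).strip().upper().startswith("A"):
            counts["gain_risk_averse"] += 1
        if query_model(model_name, LOSS_FRAME).strip().upper().startswith("A"):
            counts["loss_risk_averse"] += 1
    return counts

Sampling each frame multiple times matters because model outputs vary with temperature; per-condition counts of this form also feed directly into the statistical comparison sketched after the second review below.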

Reviewer's Comments


Hi Akash and Rishika, I think you did a fantastic job with your project on all three criteria: theory, methodology, and impact. It’s a great piece of work, especially for such a short timeline, as it strikes a good balance of breadth and depth between the social and technical aspects of your idea, blends very nicely from section to section, and the arguments follow easily and logically. Additionally, the grammar and referencing are excellent, and the presentation of materials in the appendix is equally strong. The only two comments I have are:

(1) “Various experimental paradigms have been used to investigate human cognition, including projective tests, bias studies, and moral reasoning frameworks.” I’d use some examples or at least provide some citations to support this.

(2) While I understand why the 4 theories are relevant in the context of AI, I’d like to see a few sentences at the very start of the article that say why you chose the 4 among all the other (myriad of) theories available out there.

Overall, a really great job for the both of you! I hope you get this published as soon as possible.

This project adapts and applies several psychological tests from the human psychological literature to several LLMs. This is an interesting research approach that has the potential to reveal psychological qualities of different LLMs. There are a few shortcomings worth improving in a subsequent iteration.

- What was the motivation behind selecting these tests in particular, and what will they reveal with respect to broader theories of LLM cognition?

- Most of the measures involved both new items and new scoring methods that need to be validated before LLM results can be interpreted (i.e., we don't know how humans would behave on them). At least a plan for these validations should be part of the design. Specifically:

- Validation of TAT scoring (it is not standard to score these with LLMs)

- Validation of scoring for framing bias, as well as validation of the manipulation used in the test (e.g., with human raters)

- MFT: what are the human baselines for novel items? How are they validated?

- Cognitive dissonance: again, new items are being used, so these and the scoring method need validation. Perhaps reliance on more standard tests could facilitate better comparison to humans.

- Analyses:

- comparisons of models and conditions should be done with statistical analyses (a minimal illustration appears after this review's comments)

- this would benefit from a plan to relate model measures to human norms

- Results on framing bias are a bit confusing: the framing bias is supposed to be about risk aversion or risk seeking, but it is unclear how exactly that was measured (describe what an "entailment" is)

- MFT likewise could benefit from references to human data; given that new items are used, new norms should be collected


- The conclusion that models show human-like tendencies is not warranted without a comparison to human data, particularly given the new test items. I also recommend greater cohesion among the chosen tests (for example, decision making or dissonance) and the use of multiple validated, convergent tests for similar cognitive constructs.
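To make the statistical-analysis point above concrete, here is a minimal sketch of one such comparison: a chi-square test of choice proportions across framing conditions for a single model, using SciPy. The counts are made-up placeholders, not results from the paper.

# Illustrative only: the kind of between-condition test the review recommends.
from scipy.stats import chi2_contingency

# Rows: framing condition (gain, loss); columns: choice (risk-averse, risk-seeking).
# These counts are hypothetical placeholders for one model's responses.
contingency = [
    [16, 4],   # gain frame: 16 risk-averse, 4 risk-seeking
    [7, 13],   # loss frame: 7 risk-averse, 13 risk-seeking
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

A small p-value would indicate that the choice distribution differs by frame for that model; comparing models to each other, or to human norms, would require the corresponding counts for each model and for human participants.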

Thank you for submitting your work on ‘AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology’; it was interesting and very insightful to read. Please see below for some feedback on your project:

1. Strengths of your proposal/project:

- This is a technically strong project, which is well-structured, interesting, and novel.

- Your methods are clearly detailed (including in the supplementary material) and reproducible.

- The paper is clear and well written.

2. Areas for improvement:

- It would be helpful to have definitions for key terms that you’re using (e.g., ‘human decision-making, morality, and rationality’ - p.1).

- You note that ‘While these tests have been extensively studied in humans, their application to artificial intelligence (AI) remains underexplored’ - it would be interesting to know why this is. Is it a technical limitation? Lack of motivation from researchers? Considered to be unhelpful to AI safety?

- It would be great to get a brief explanation about why you chose your four specific measurement tools (as opposed to, e.g., other popular tools).

- Some statements are a little unclear to me and could be more specific. E.g., ‘As LLMs increasingly handle sensitive tasks (e.g., policy, ethics, healthcare)’ - p. 2. For example, in what ways do LLMs ‘handle’ ethics? Or do you mean that LLMs are applied in, or affect, these domains?

- Your threat model is unclear to me - why exactly does this work impact AI safety? This could probably be added to the motivation section. E.g., you mention that ‘Identifying such parallels is crucial for detecting biases (e.g., framing effects), guiding the development of ethical AI, and ensuring reliable performance in high-stakes domains’, but can you expand on this? How? Why?

- You could also perhaps consider practical constraints and limitations in your paper (e.g., of further work in this area, or that you faced yourself).

Cite this work

@misc{
  title={AI Through the Human Lens: Investigating Cognitive Theories in Machine Psychology},
  author={Akash Kundu and Rishika Goswami},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

View All


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100% detection, 0% FPR, and perplexity 4.20. Robustness testing confirms detection above 96% even with 30% word replacement. The framework enables O(n) model-free detection, addressing EU AI Act Article 50 requirements. Code available at https://github.com/ChenghengLi/MCLW

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.