Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction

Aisulu Zhussupbayeva

Recent critiques labeling large language models as mere "statistical parrots" overlook essential parallels between machine computation and human cognition. This work revisits that notion by contrasting human decision-making, rooted in both rapid, intuitive judgments (System 1) and deliberate, probabilistic reasoning (System 2), with the token-based operations of contemporary AI. A further consideration is that both human and machine systems operate under constraints of bounded rationality. The paper also argues that understanding AI behavior requires examining not only its internal mechanisms but also the evolving dynamics of Human-AI interaction. Personalization is a key factor in this evolution: by tailoring responses and experiences to individual users, it actively shapes the interaction landscape, and it functions as a double-edged sword. On one hand, it introduces risks such as over-trust and inadvertent bias amplification, especially when users begin to ascribe human-like qualities to AI systems. On the other hand, it improves system responsiveness and perceived relevance by adapting to unique user profiles, which matters for AI alignment: there is no common ground truth, and alignment should be culturally situated. Ultimately, this interdisciplinary approach challenges simplistic narratives about AI cognition and offers a more nuanced understanding of its capabilities.
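To make the "token-based operations" the abstract refers to concrete, the sketch below shows the core statistical step of a language model: turning raw scores over a vocabulary into a probability distribution and sampling one token from it. This is a minimal, generic illustration rather than anything from the paper itself; the tiny vocabulary, the logit values, and the temperature setting are all invented for the example.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    """Sample one token; lower temperature concentrates probability
    on the highest-scoring token, higher temperature spreads it out."""
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Invented toy example: three candidate tokens with made-up scores.
vocab = ["parrot", "thinker", "model"]
logits = [2.0, 0.5, 1.0]
print(sample_next_token(vocab, logits, temperature=0.7))
```

Each generated token is thus a draw from a learned distribution, which is precisely the behavior the "statistical parrot" critique targets and the paper reexamines.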

Reviewer's Comments


Andreea Damien

Hi Aisulu, I thought your project idea was very thought-provoking, going against the grain of mainstream narratives, which is applaudable. If you're going to take this project forward, it would fit perfectly as a perspective peer-reviewed article that could be published in many academic journals. To strengthen your arguments, I'd strive to find a balance between theory and practice; in particular, I'd suggest expanding on section "6. World Models" and, more generally, on what's already out there in practice to add credibility to your claims.

Anna Leshinskaya

The principal claim of this project proposal is that similarities between LLMs and human cognition can support the idea that LLMs exhibit "genuine understanding" in contrast to being "statistical parrots". While the investigation of human-likeness of LLM cognition is an important research direction, the project described here requires substantial further conceptual development and refinement. The construct of "statistical mechanisms" used in the human research reviewed is not unified nor actually the same construct used in the literature to suggest that LLMs do not have genuine understanding. For example, the fact that humans reason about probabilities or use probabilistic reasoning is a very different notion than the idea that LLMs only learn statistics and do not acquire actual understanding. Philosophical work relating understanding to consciousness is another topic entirely, not clearly related to the idea of probabilities or statistics. Bounded rationality is a yet different topic also distinct from probabilistic reasoning and does not relate specifically to whether learning is "statistical". It is not clear how human-AI interaction relates to these themes. While these ideas are worth developing further, the main missing element is that there is no concrete research direction proposed. I hope the author develops their ideas into a concrete research idea in a future draft.

Bessie O’Dell

Thank you for submitting your work on ‘Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction’ - it was an interesting read. Please see below for some feedback on your project:

1. Strengths of your proposal/ project:

- The introduction gives me a good overall sense of why your project is important, supported by evidence/ work and critiques to date.

- Overall your paper is very interesting to read - it succinctly covers a topic that I’ve been wondering about for some time, but which I didn’t have any background information on yet.

- The paper is generally well referenced.

- The paper offers some interesting insights that made me pause and reflect. For example, I particularly liked your insight that 'The fundamental difference may not be in the statistical nature of processing, but rather in the embodied context […]'.

2. Areas for improvement

- It would be helpful to have a definition of "statistical parrots" and what is meant (in contrast) by ‘genuine understanding’ (p.1). This also applies to other terms used in the paper, e.g., ‘bounded rationality’.

- Full in-text citations should be used (e.g., Kahneman’s dual-process theory on p.1 is unreferenced).

- I’d perhaps avoid sweeping statements (e.g., ‘many psychological conditions […] cannot be fully explained through neurological mechanisms alone’ - can’t they? Or is it just that we currently lack the methodologies required to explain these mechanisms?).

- Some sections contain quite big statements but no reference - I’d recommend citations here (e.g., ‘the tendency to attribute human-like qualities to AI systems creates inflated expectations and a misplaced sense of understanding […] p. 4) - although I appreciate that generally you’ve made a strong effort to cite your work.

- The project is quite descriptive overall, and would benefit from more critical analysis/ a deeper level of engagement with different methodologies.

- The paper is missing documentation/ clear methods, so this is the largest area of improvement.

Cite this work

@misc{zhussupbayeva2025beyond,
  title={Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction},
  author={Aisulu Zhussupbayeva},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.