Mar 10, 2025

Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction

Aisulu Zhussupbayeva

Recent critiques labeling large language models as mere "statistical parrots" overlook essential parallels between machine computation and human cognition. This work revisits that notion by contrasting human decision-making—rooted in both rapid, intuitive judgments and deliberate, probabilistic reasoning (System 1 and System 2)—with the token-based operations of contemporary AI. Both human and machine systems, moreover, operate under constraints of bounded rationality. The paper also argues that understanding AI behavior requires examining not only internal mechanisms but also the evolving dynamics of human-AI interaction. Personalization is a key factor in this evolution: by tailoring responses and experiences to individual users, it actively shapes the interaction landscape and functions as a double-edged sword. On one hand, it introduces risks such as over-trust and inadvertent bias amplification, especially when users begin to ascribe human-like qualities to AI systems. On the other, it improves system responsiveness and perceived relevance by adapting to unique user profiles, which matters for AI alignment, since there is no common ground truth and alignment should be culturally situated. Ultimately, this interdisciplinary approach challenges simplistic narratives about AI cognition and offers a more nuanced understanding of its capabilities.

Reviewer's Comments

Hi Aisulu, I thought your project idea was very thought-provoking and goes against the grain of mainstream narratives, which is commendable. If you're going to take this project forward, it would fit well as a peer-reviewed perspective article that could be published in many academic journals. To strengthen your arguments, I'd strive to find a balance between theory and practice; in particular, I'd suggest expanding on section "6. World Models" and, more generally, on what's already out there in practice to add credibility to your claims.

The principal claim of this project proposal is that similarities between LLMs and human cognition can support the idea that LLMs exhibit "genuine understanding" rather than being "statistical parrots". While the investigation of the human-likeness of LLM cognition is an important research direction, the project described here requires substantial further conceptual development and refinement. The construct of "statistical mechanisms" used in the human research reviewed is neither unified nor the same construct used in the literature to suggest that LLMs lack genuine understanding. For example, the fact that humans reason about probabilities, or use probabilistic reasoning, is a very different notion from the idea that LLMs only learn statistics and do not acquire actual understanding. Philosophical work relating understanding to consciousness is another topic entirely, not clearly related to probabilities or statistics. Bounded rationality is yet another topic, distinct from probabilistic reasoning, and does not bear specifically on whether learning is "statistical". It is also not clear how human-AI interaction relates to these themes. While these ideas are worth developing further, the main missing element is that no concrete research direction is proposed. I hope the author develops their ideas into a concrete research idea in a future draft.

Thank you for submitting your work on ‘Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction’ - it was an interesting read. Please see below for some feedback on your project:

1. Strengths of your proposal/project:

- The introduction gives me a good overall sense of why your project is important, supported by existing work and critiques to date.

- Overall your paper is very interesting to read - it succinctly covers a topic that I’ve been wondering about for some time, but which I didn’t have any background information on yet.

- The paper is generally well referenced.

- The paper offers some interesting insights that made me pause and reflect. For example, I particularly liked your insight that ‘The fundamental difference may not be in the statistical nature of processing, but rather in the embodied context […]’.

2. Areas for improvement

- It would be helpful to have a definition of "statistical parrots" and what is meant (in contrast) by ‘genuine understanding’ (p.1). This also applies to other terms used in the paper, e.g., ‘bounded rationality’.

- Full in-text citations should be used (e.g., Kahneman’s dual-process theory on p.1 is unreferenced).

- I’d perhaps avoid sweeping statements (e.g., ‘many psychological conditions […] cannot be fully explained through neurological mechanisms alone’ - can’t they? Or is it just that we currently lack the methodologies required to explain these mechanisms?).

- Some sections contain quite big statements but no reference - I’d recommend citations here (e.g., ‘the tendency to attribute human-like qualities to AI systems creates inflated expectations and a misplaced sense of understanding […] p. 4) - although I appreciate that generally you’ve made a strong effort to cite your work.

- The project is quite descriptive overall, and would benefit from more critical analysis and deeper engagement with different methodologies.

- The paper is missing documentation/ clear methods, so this is the largest area of improvement.

Cite this work

@misc{zhussupbayeva2025beyond,
  title={Beyond Statistical Parrots: Unveiling Cognitive Similarities and Exploring AI Psychology through Human-AI Interaction},
  author={Aisulu Zhussupbayeva},
  date={2025-03-10},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.