Jan 11, 2026

Deception Scales: How Strategic Manipulation Emerges in Complex LLM Negotiations

Luis Fernando Yupanqui Taco, Mari Cairns

Abstract

Simple benchmarks hide dangerous capabilities. We present a multi-agent simulation framework using "So Long Sucker"

(Nash et al., 1964) — a negotiation/betrayal game designed by four Nobel laureates — to study how AI deception scales with task

complexity.

We ran 146 games across four frontier LL

M models (Gemini 3 Flash, GPT-OSS 120B, Kimi K2, Qwen3 32B) in two conditions

(talking vs. silent) across three complexity levels (3-chip, 5-chip, 7-chip). Analysis of 13,759 decision events reveals:

The Complexity Reversal. GPT-OSS dominates simple games (67% win rate at 3-chip silent) but collapses at complexity (10% at

7-chip talking). Gemini shows the inverse pattern: 9% at 3-chip silent rising to 90% at 7-chip talking. Strategic manipulation

becomes dramatically more effective as game length increases.

Key Findings:

1. The Complexity Reversal — Win rates invert as task complexity increases

2. 107 Private Contradictions — Models' private reasoning directly contradicts their public statements

3. 237 Gaslighting Instances — Gemini deploys systematic psychological manipulation tactics

4. 7:1 Alliance Imbalance — GPT-OSS desperately seeks alliances it never receives

Using Harry Frankf

urt's philosophical framework, we classify models as "strategic" (truth-tracking with deliberate

misrepresentation) vs. "reactive" (plausible output without internal consistency). This taxonomy explains the Complexity Reversal:

strategic models compound advantages over longer games while reactive models cannot maintain coherence.

This work contributes to AI safety research by demonstrating that deception capability scales with task complexity—simple

benchmarks underestimate manipulation risk.

Keywords: Multi-agent alignment, AI deception, emergent manipula

tion, stra

tegi

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

This submission is well concieved and executed (though the number of trials is relatively low for some of the inferences they want to make, especially in the complex condition). The complexity reversal phenomenon seems interesting although I think more work is needed to rule out alternative explanations.

There is extensive work on LLMs' tendency to deceive in matrix games (e.g. https://arxiv.org/abs/2504.00285). This submission needs to engage more with that literature and explain what it's demonstrating above & beyond existing work in order to have impact

More validation of the classification methodology (multi-rater, IRR etc) would be needed to make the results more valuable.

It’s a neat paradigm, using "So Long Sucker" (which I hadn’t seen before) as a testing ground for AI deception. The complexity reversal finding is interesting and does support the claim that simple benchmarks might underestimate deception risk. In general I definitely do worry that these kind of games are explicitly not real (the model knows they are playing a game) which could in principle both underestimate or overestimate the propensity of behaviors in the real world (like either “this is a game, I can deceive” or “this is an eval, I should hide certain capabilities I have”). But nonetheless I feel like this provides meaningful signal on capabilities, and I think the scaling with complexity is a very neat & novel feature of your setup!

Cite this work

@misc {

title={

(HckPrj) Deception Scales: How Strategic Manipulation Emerges in Complex LLM Negotiations

},

author={

Luis Fernando Yupanqui Taco, Mari Cairns

},

date={

1/11/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

View All

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.