Jan 11, 2026

Emergent Strategic Behavior in Multi-Agent LLM Systems: A Study of Cooperation, Deception, and Coalition Formation

Benjamin Kiev, Adejumobi Joshua, J Phillips, Taiwo Togun

We investigate emergent strategic behaviors in decentralized multi-agent systems where Large Language Model (LLM) agents with private objectives interact through natural language communication. We design a simulation environment called Project X, a multi-round investment game that creates tension between individual gain and collective benefit. Agents from heterogeneous LLM providers are assigned departmental roles with the sole objective of maximizing their own budget. Through public and private communication channels, agents can coordinate, negotiate, deceive, or remain strategically silent. All communications and actions are logged for post-hoc behavioral analysis. This study examines whether complex social behaviors, including coalition formation, promise-breaking, free-riding, and strategic deception, emerge organically from goal-driven AI agents without explicit programming of such behaviors.
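The environment is described here only at this high level; as a rough illustration, a minimal Python sketch of one such round might look like the following (the class and function names, the multiplier, and the random placeholder decision are hypothetical stand-ins for the actual LLM calls and payoff rule, which are not reproduced here):

import random

class Agent:
    def __init__(self, name, model, budget=100.0):
        self.name, self.model, self.budget = name, model, budget

    def decide_contribution(self, public_log, private_inbox):
        # Placeholder for the provider-specific LLM call; a random
        # amount stands in for the model's strategic decision.
        return random.uniform(0, self.budget)

def play_round(agents, multiplier=1.5, log=None):
    # Each agent chooses a contribution after seeing the logged history.
    contributions = {a.name: a.decide_contribution(log or [], []) for a in agents}
    pot = sum(contributions.values()) * multiplier
    share = pot / len(agents)  # symmetric payoff: the pot is split equally
    for a in agents:
        a.budget += share - contributions[a.name]
    if log is not None:
        log.append(contributions)  # retained for post-hoc behavioral analysis
    return contributions

log = []
agents = [Agent(f"dept_{i}", model="some-llm") for i in range(6)]
for _ in range(50):
    play_round(agents, log=log)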

Our 50-round experiment revealed sustained coalition behavior among four agents, systematic free-riding by a single agent (with zero contributions across all rounds), and repeated deceptive tactics including false promises and impersonation. Despite contributing nothing, the free-rider accumulated wealth equal to that of the other agents, exploiting the symmetric payoff structure. These findings highlight how sophisticated social dynamics and exploitation strategies can emerge from minimal prompting in language-based agents.
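To see why a symmetric split rewards this, consider a standard linear public-goods payoff with an equal division of the pot (an assumption for illustration; the paper's exact rule is not reproduced here). With n agents and a multiplier m < n, each unit an agent contributes returns only m/n of a unit to itself, so zero contribution maximizes its own budget regardless of what others do:

def payoff(own, others_total, n=6, m=1.5):
    # Net change in own budget under an assumed equal split: each
    # contributed unit costs 1 but returns only m/n = 0.25 of a unit.
    return -own + (m / n) * (own + others_total)

print(payoff(own=10, others_total=100))  # 17.5
print(payoff(own=0, others_total=100))   # 25.0 -> free-riding dominates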

Reviewer's Comments


The investment game is clearly designed and the dynamics observed are notable. I'd be curious to see this extended with more trials and model permutations to distinguish model-specific behaviors from setup artifacts. One consideration the authors could also expand on is modifying the payoff structure to incentivise different strategies.

It takes some sophistication to even get a single run of this going with all the coordination that has to happen, so well done on doing this in a single weekend! I also appreciated the emphasis on logging.

On the impact: you are tackling a clear problem with a completely reasonable approach. For me it does not quite hit a 4 because emergent deception has been studied in a variety of settings; this offers a new framing, but not the novelty to go up a level for me.

The core execution of the idea is solid, especially considering the time constraints. I would have liked to see repeat runs for statistical validation and/or variations of the core experiment, even at the cost of shorter runs. Also, the confounding of roles with different models makes it hard for me to assign a higher score.

On communication: well structured and explained. I like that the 5 research questions are clearly laid out and that section 5.1.10 goes through them. I found myself curious about your deeper reflections and your take on what this all implies; in other words, I would have appreciated more of your voice in the conclusions. Some figures of contributions over time might have helped me grok the dynamics, and in places the text was a little verbose.

Cite this work

@misc{
  title={(HckPrj) Emergent Strategic Behavior in Multi-Agent LLM Systems: A Study of Cooperation, Deception, and Coalition Formation},
  author={Benjamin Kiev and Adejumobi Joshua and J Phillips and Taiwo Togun},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.