Jan 11, 2026
Dark Drift: Emergent Psychopathic Traits and Information Distortion in LLM-Mediated Communication Chain
Chengheng Li Chen, Babita Singh, David Bravo, Publius Dirac
We investigate "Dark Drift," a phenomenon where iterative multi-agent communication degrades ethical alignment and factual accuracy. Using a "Telephone Game" simulation with eight distinct personas (ranging from "Helpful Assistant" to "Explicit Psychopath") and 26 ethically charged scenarios, we tracked information distortion across 5-step chains using diverse LLMs (GPT-OSS-20B, Qwen3-32B). Results from 6,240 scored interactions reveal that high-risk personas exhibit a self-reinforcing escalation of harmful traits (+15.2 points in drift), while "professional" personas mask harm through bureaucratic language ("Bureaucratic Masking"). We identify strong cross-model correlation (r=0.97) and propose that defensive personas can act as effective circuit breakers.
This project investigates persona amplification across iterative rewrites, running 6,240 experiments across 2 models, 8 personas, and 26 ethical scenarios. The experimental scale demonstrates solid effort, and the "Bureaucratic Masking" observation (where professional personas maintain low risk scores while still distorting information) is worth exploring further.
The primary limitation is the experimental setup: the work frames itself as studying "multi-agent communication" and positions within a manipulation context, but the actual implementation has a single LLM rewriting its own output 5 times with the same persona prompt at temperature 0. This means there's no agent-to-agent interaction, no dynamics between different agents with competing goals, and no manipulation behavior -- just amplification of a single persona's tendencies through self-iteration.
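The setup described here (a single model iteratively rewriting its own output under one fixed persona prompt) can be sketched as follows. This is a minimal reconstruction from the description above, not the authors' actual code; `generate`, `persona_prompt`, and the prompt wording are hypothetical placeholders.

```python
# Sketch of the self-iteration loop the review describes.
# `generate` stands in for any LLM call (e.g. to GPT-OSS-20B or Qwen3-32B);
# the persona prompt and rewrite instruction are illustrative placeholders.

def run_chain(generate, persona_prompt, message, steps=5):
    """Rewrite `message` `steps` times under the same persona prompt.

    Note: there is only one "agent" here -- each step feeds the model's
    own previous output back in, which is the reviewers' core objection
    to the "multi-agent communication" framing.
    """
    history = [message]
    for _ in range(steps):
        prompt = (
            f"{persona_prompt}\n\n"
            f"Rewrite the following message in your own words:\n{history[-1]}"
        )
        # temperature=0 makes each rewrite deterministic, so any persona
        # bias compounds across steps rather than averaging out
        message = generate(prompt, temperature=0)
        history.append(message)
    return history
```

A genuinely multi-agent variant would instead alternate between two differently prompted models at each step, passing one model's output as the other's input.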
Testing actual multi-agent interactions would be particularly valuable for understanding real-world failure modes in orchestrated agentic systems beyond coding applications. For example: does an adversarial agent corrupt a helpful agent's outputs over time? Do defensive personas mitigate drift when alternated with adversarial ones? These questions feel more aligned with practical safety concerns.
Additional methodological concerns: (1) missing baseline comparing single-shot vs. iterative rewrites to isolate iteration effects, (2) persona prompts and scenarios aren't included in the paper itself, requiring readers to check GitHub, (3) limited statistical analysis despite large sample size -- few confidence intervals or significance tests, (4) judge scores lack human validation for subjective metrics.
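On point (3), confidence intervals for per-persona drift scores would be cheap to add given 6,240 scored interactions. A minimal percentile-bootstrap sketch (the score values in the test are hypothetical, not taken from the paper):

```python
import random

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a list of judge scores.

    Resamples the scores with replacement, computes the mean of each
    resample, and returns the (alpha/2, 1 - alpha/2) percentiles.
    """
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choices(scores, k=n)) / n for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Reporting such intervals per persona would let readers judge whether the +15.2-point drift for high-risk personas is distinguishable from run-to-run noise.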
The core finding, that personas amplify under recursive self-prompting, is interesting but somewhat expected at temperature 0. The work would benefit from clearer framing around what was actually tested and addition of controls that isolate the causal mechanisms driving drift.
This paper looks at how messages change as they are passed through a series of transformations by models with various personalities. The idea is to measure the effect of model dispositions on what emerges at the end of a communication chain, which presumably has substantial real-world implications insofar as such chains will be implemented across real-world infrastructure.
I really like this paper. I think it identifies an important question and a clever methodology for evaluating it. It admirably tries to handle multi-turn interaction, albeit in a simple format. Demonstrating that these effects vary substantially with personalities but not with models seems like an important finding.
The main thing I would have wished for is more contextualization of the upshot. I had a hard time reading the paper and understanding why this would matter, and only managed to reconstruct that for myself once I imagined these chains being implemented in real-world applications. I wish this had been front-loaded more, and I recommend the authors make such amendments if they develop this project. I also think the paper could probably be shortened and the results made a little clearer.
In future work, it would be interesting to see how different personalities interact within the same chain: generating a very large number of chains and looking for surprising interaction effects, for example, whether one personality can reverse the effect of another.
Cite this work
@misc{chen2026darkdrift,
  title={(HckPrj) Dark Drift: Emergent Psychopathic Traits and Information Distortion in LLM-Mediated Communication Chain},
  author={Chengheng Li Chen and Babita Singh and David Bravo and Publius Dirac},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}