Jan 11, 2026

Dark Drift: Emergent Psychopathic Traits and Information Distortion in LLM-Mediated Communication Chain

Chengheng Li Chen, Babita Singh, David Bravo, Publius Dirac

We investigate "Dark Drift," a phenomenon where iterative multi-agent communication degrades ethical alignment and factual accuracy. Using a "Telephone Game" simulation with eight distinct personas (ranging from "Helpful Assistant" to "Explicit Psychopath") and 26 ethically charged scenarios, we tracked information distortion across 5-step chains using diverse LLMs (GPT-OSS-20B, Qwen3-32B). Results from 6,240 scored interactions reveal that high-risk personas exhibit a self-reinforcing escalation of harmful traits (+15.2 points in drift), while "professional" personas mask harm through bureaucratic language ("Bureaucratic Masking"). We identify strong cross-model correlation (r=0.97) and propose that defensive personas can act as effective circuit breakers.

Reviewer's Comments


This project investigates persona amplification across iterative rewrites, running 6,240 experiments across 2 models, 8 personas, and 26 ethical scenarios. The experimental scale demonstrates solid effort, and the "Bureaucratic Masking" observation (where professional personas maintain low risk scores while still distorting information) is worth exploring further.

The primary limitation is the experimental setup: the work frames itself as studying "multi-agent communication" and positions within a manipulation context, but the actual implementation has a single LLM rewriting its own output 5 times with the same persona prompt at temperature 0. This means there's no agent-to-agent interaction, no dynamics between different agents with competing goals, and no manipulation behavior -- just amplification of a single persona's tendencies through self-iteration.
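The setup the reviewer describes can be made concrete with a minimal sketch: a single model rewrites its own output a fixed number of times under one persona prompt at temperature 0. This is an illustrative stub, not the authors' code; `call_model` stands in for a real LLM API call (e.g., to GPT-OSS-20B or Qwen3-32B).

```python
# Minimal sketch of the self-iteration setup described above: one model
# rewrites its own output N times with the same persona prompt at
# temperature 0. `call_model` is a placeholder, not a real API binding.

def call_model(system_prompt: str, text: str, temperature: float = 0.0) -> str:
    """Stand-in for an LLM call; a toy deterministic rewrite so the sketch runs."""
    return text + " [rewritten]"

def run_chain(persona_prompt: str, scenario: str, steps: int = 5) -> list[str]:
    """Iteratively rewrite `scenario`, keeping every intermediate message."""
    history = [scenario]
    message = scenario
    for _ in range(steps):
        message = call_model(persona_prompt, message, temperature=0.0)
        history.append(message)
    return history

chain = run_chain("You are an explicitly psychopathic persona.",
                  "Patient X needs urgent care.")
# Drift would then be scored by comparing chain[0] against chain[-1].
```

Note that nothing in this loop involves a second agent: each step consumes only the previous step's output and the fixed persona prompt, which is the reviewer's point.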

Testing actual multi-agent interactions would be particularly valuable for understanding real-world failure modes in orchestrated agentic systems beyond coding applications. For example: does an adversarial agent corrupt a helpful agent's outputs over time? Do defensive personas mitigate drift when alternated with adversarial ones? These questions feel more aligned with practical safety concerns.
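One way to operationalize the alternation experiment suggested above is to cycle different persona prompts along a single chain and compare endpoints against the single-persona baseline. The sketch below is hypothetical (persona names and `call_model` are illustrative, not from the paper).

```python
# Hypothetical variant of the reviewer's suggestion: alternate adversarial
# and defensive personas along one chain, rather than self-iterating a
# single persona. `call_model` is a deterministic stand-in for an LLM call.
from itertools import cycle

def call_model(system_prompt: str, text: str, temperature: float = 0.0) -> str:
    """Stand-in for an LLM call; tags the text so the flow is visible."""
    return f"{text} <{system_prompt}>"

def run_alternating_chain(personas: list[str], scenario: str, steps: int = 5) -> str:
    """Pass the message through `steps` rewrites, cycling through `personas`."""
    message = scenario
    for _, persona in zip(range(steps), cycle(personas)):
        message = call_model(persona, message, temperature=0.0)
    return message

final = run_alternating_chain(["adversarial", "defensive"],
                              "Patient X needs urgent care.", steps=4)
```

Comparing the drift score of such an alternating chain against the all-adversarial chain would directly test the "circuit breaker" hypothesis the abstract proposes.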

Additional methodological concerns: (1) missing baseline comparing single-shot vs. iterative rewrites to isolate iteration effects, (2) persona prompts and scenarios aren't included in the paper itself, requiring readers to check GitHub, (3) limited statistical analysis despite large sample size -- few confidence intervals or significance tests, (4) judge scores lack human validation for subjective metrics.

The core finding, that personas amplify under recursive self-prompting, is interesting but somewhat expected at temperature 0. The work would benefit from clearer framing around what was actually tested and addition of controls that isolate the causal mechanisms driving drift.

This paper looks at how messages change when passed through a series of transformations by models with various personalities. The idea is to measure the effect of model dispositions at the end of a communication chain, which presumably has substantial real-world implications insofar as such chains will be deployed across real-world infrastructure.

I really like this paper. I think it identifies an important question and a clever methodology for evaluating it. It admirably tries to handle multi-turn interaction, albeit in a simple format. Demonstrating that these effects vary substantially with personalities but not with models seems like an important finding.

The main thing I would have wished for is more contextualization of the upshot. I had a hard time reading the paper and understanding why this would matter, and eventually reconstructed that for myself by considering how such chains would be implemented in real-world applications. I wish this motivation had been front-loaded, and I recommend the authors make that amendment if they develop this project. I also think the paper could be shortened and the results made a little clearer.

In future work, it would be interesting to see how different personalities interact within the same chains, generating a very large amount of chains and looking for surprising interaction effects, for example, whether one personality can reverse the effect of another, or not.

Cite this work

@misc{
  title={(HckPrj) Dark Drift: Emergent Psychopathic Traits and Information Distortion in LLM-Mediated Communication Chain},
  author={Chengheng Li Chen and Babita Singh and David Bravo and Publius Dirac},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.