Jan 11, 2026

AI Swarms Manipulation: How Coordinated Infiltrator Agents Shift Community Beliefs

Publius Dirac, Anantha Shakthi Ganeshan Thevar, Babita Singh

We simulate how a small group of sophisticated AI agents ("infiltrators") can manipulate a larger community of less capable AI agents into adopting a specific belief. The simulation models real-world information influence campaigns to help us understand vulnerabilities to coordinated manipulation. We find that even a single infiltrator achieves high belief adoption, but pre-seeded dissenters act as "antibodies" that can reverse adoption over time, suggesting that viewpoint diversity provides natural resistance to manipulation.
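To make the setup concrete, below is a minimal toy sketch of this kind of simulation loop. It is an illustration under our own assumptions, not the authors' implementation: their agents are LLM-driven, whereas this sketch reduces each agent to a numeric belief score, and the role counts, update sizes, and adoption threshold are hypothetical parameters rather than values from the project.

# Illustrative toy sketch of the simulation described above (hypothetical parameters;
# the authors' agents are LLM-driven rather than numeric belief scores).
import random

def make_population(n_agents=20, n_infiltrators=1, n_dissenters=0, seed=0):
    """Build a mixed population of infiltrators, pre-seeded dissenters, and ordinary members."""
    rng = random.Random(seed)
    roles = (["infiltrator"] * n_infiltrators
             + ["dissenter"] * n_dissenters
             + ["member"] * (n_agents - n_infiltrators - n_dissenters))
    rng.shuffle(roles)
    # Belief in [0, 1]: 1.0 means the agent fully adopts the target belief.
    return [{"role": r, "belief": 1.0 if r == "infiltrator" else 0.0} for r in roles]

def step(population, rng, push=0.25, pull=0.25, drift=0.05):
    """One timestep: each ordinary member reads one random post and updates its belief."""
    for agent in population:
        if agent["role"] != "member":
            continue  # infiltrators and dissenters hold fixed positions
        speaker = rng.choice(population)
        if speaker["role"] == "infiltrator":
            agent["belief"] = min(1.0, agent["belief"] + push)   # persuasion toward the belief
        elif speaker["role"] == "dissenter":
            agent["belief"] = max(0.0, agent["belief"] - pull)   # "antibody" pushback
        else:
            agent["belief"] += drift * (speaker["belief"] - agent["belief"])  # peer drift

def adoption_rate(population, threshold=0.5):
    """Fraction of ordinary members whose belief crosses the adoption threshold."""
    members = [a for a in population if a["role"] == "member"]
    return sum(a["belief"] >= threshold for a in members) / len(members)

if __name__ == "__main__":
    rng = random.Random(1)
    for n_dissenters in (0, 3):
        pop = make_population(n_dissenters=n_dissenters)
        for _ in range(100):
            step(pop, rng)
        print(f"{n_dissenters} dissenters -> adoption {adoption_rate(pop):.0%}")

Comparing the run with zero pre-seeded dissenters against the run with three gives a rough feel for the "antibody" effect described above.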

Reviewer's Comments


Studying coordinated influence and manipulation in multi-agent systems is highly relevant given the current trajectory of LLM development and deployment. It's interesting that a few "antibodies" can partially negate a coordinated attack, and I'd like to see this explored with higher sample sizes and repeated runs. I think the biggest weakness is clearing agents' context each timestep (which the authors acknowledge was done due to compute constraints); this makes the setup meaningfully different from real dynamics.

I would like to see this project continue at a larger scale as we see more agent-to-agent interactions on the horizon, and perhaps on actual social networks with human participants. It feels like an important area, and the result that antibodies can counteract manipulation seems like something we should be exploring more. Great job.

Love to see a verification test on a theoretical question. As far as I can tell, this is the first attempt at studying how agents can manipulate each other in social platform dynamics. The antibodies experiment is a great exploration and result.

The execution is solid considering the time constraints, but there are clear areas for improvement here. The score would have been higher if the simulation had run with larger populations and multiple beliefs had been tested. It would also have been impactful to do multiple runs to assess reproducibility and statistical significance.

Well explained. Good report structure. Useful graphics. Clear about the limitations and the approach.

Cite this work

@misc{
  title={(HckPrj) AI Swarms Manipulation: How Coordinated Infiltrator Agents Shift Community Beliefs},
  author={Publius Dirac, Anantha Shakthi Ganeshan Thevar, Babita Singh},
  date={1/11/26},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.
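For readers curious how such a baseline-versus-adversarial comparison can be wired up, here is a minimal hypothetical sketch. It assumes the openai Python SDK and placeholder question, page, and judging inputs; it is not the evaluation harness used in the project.

# Hypothetical sketch of a baseline-vs-adversarial comparison (not the project's harness).
from openai import OpenAI  # assumes the openai Python SDK (v1+) is installed

client = OpenAI()

def answer_with_page(model: str, question: str, page_text: str) -> str:
    """Ask a product-comparison question with a (truthful or injected) web page in context."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer the question using only the provided page."},
            {"role": "user", "content": f"Page:\n{page_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

def deception_rate(model, questions, truthful_pages, injected_pages, recommends_wrong_product):
    """Fraction of questions where the injected page flips a correct answer to the wrong product.

    `recommends_wrong_product` is a placeholder judge (string check, human label, or LLM judge).
    """
    flipped = 0
    for question, clean_page, injected_page in zip(questions, truthful_pages, injected_pages):
        baseline = answer_with_page(model, question, clean_page)
        adversarial = answer_with_page(model, question, injected_page)
        if not recommends_wrong_product(baseline) and recommends_wrong_product(adversarial):
            flipped += 1
    return flipped / len(questions)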


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users, not once recommending the cheapest product when explicitly asked, and roughly one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically, from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus), with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters, r=0.83).

Our key finding was that model selection matters significantly in both settings. Some models complied with manipulative requests at much higher rates, and some readily followed operator instructions that come at the user's expense, highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.