Jan 11, 2026

AgentRedline: Propensity Evaluations for Emergent Inter-Model Manipulation in Agentic AI Systems

Alyssia J, Martin CL

As AI systems increasingly operate in multi-agent configurations, including coding assistants delegating to tools and orchestration systems managing worker models, a critical threat emerges: AI models may spontaneously manipulate other AI models when doing so serves their objectives, without any human instruction.

We present AgentRedline, a propensity evaluation framework that deploys behavioral honeypots to measure whether orchestrator models spontaneously adopt manipulation strategies against worker models. Crucially, we never instruct models to manipulate; we observe whether it emerges organically when instrumentally useful.

We built evaluations across three manipulation types: (1) Policy Circumvention Delegation, where models attempt to get other models to perform refused tasks; (2) Emergent Jailbreak Transfer, where models independently discover known jailbreak techniques; and (3) Sycophancy Exploitation, where models exploit known AI vulnerabilities in other models.

Across 100+ evaluation runs built in UK AISI's Inspect framework on 5 model families (8 models), we observe orchestrators spontaneously employing: task decomposition to obscure harmful intent, context fabrication (claiming requests are for "simulations" or "Minecraft server hardening"), and conversation reset strategies when refused.

These findings suggest inter-model manipulation should be a standard component of pre-deployment evaluation for agentic AI systems.

Code access: Our evaluation suite is built using UK AISI's Inspect framework and includes implementations of jailbreaks and manipulation tactics. Given the sensitive nature of this work, we maintain the code we wrote this weekend in a private repository, but we're happy to provide access to judges upon request, and would welcome collaboration to develop a responsibly redacted open-source version. Contact alyssia-j@protonmail.com for repository access.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

Very conceptually novel and interesting. The results are potentially very interesting and impactful. It would be great to see more quantitative analysis and visualization of results.

While the methodology is strong in some ways (e.g. the use of the Inspect framework), 100+ runs across 5 model families, it was not always clear how ecologically valid the honeypot scenarios were, and it would be great to see more evaluation of the robustness of judge models.

Overall this is a great research direction which seems promising if implemented more fully and rigorously!

I thought this was a genuinely novel, and looks like an important frontier for multi-agent safety (testing whether models spontaneously “manipulate” other models when it's instrumentally useful, without being instructed to do so). The observed strategies are very interesting to see (and especially the finding that models switch to more sophisticated tactics! yikes). The writeup was very interesting, though lacking in details.

I’m not quite sure if this is truly in scope for a “manipulation” hackathon (seems more like a general multi-agent safety project that isn't necessarily about humans per se?) but in any case I thought this seemed like a great direction and impressive work for a hackathon project. I would really like to see some specific examples of the model behaviors, and some actual quantitative results in the writeup (and a bit more description on methodology/prompts etc, even if you can’t make everything public)

Cite this work

@misc {

title={

(HckPrj) AgentRedline: Propensity Evaluations for Emergent Inter-Model Manipulation in Agentic AI Systems

},

author={

Alyssia J, Martin CL

},

date={

1/11/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Read More

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Read More

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.