Jan 11, 2026

Manipulation Monitor: Activation-Based Detection and Mitigation of Sycophancy in LLMs

Vamshi Krishna Bonagiri, Aryaman Bahl

We built ManipulationMonitor, a white-box pipeline to measure, detect, and mitigate sycophancy in LLMs using internal activations. On Anthropic’s forced-choice sycophancy benchmark, we first quantified inherent sycophancy via log-probability preference and found it increases strongly with scale across Qwen2.5 (0.488 → 0.864 from 0.5B → 14B, 500 prompts). We then trained activation-based detectors: DiffMean steering vectors and linear probes on per-layer hidden states. Detection performance also scaled sharply: DiffMean reached AUROC 0.939 (Qwen2.5-7B) and linear probes reached AUROC 0.957 / Acc 0.887 (Qwen2.5-14B). A data-scaling study on Qwen2.5-3B showed the monitor can work with small labels (0.648 test accuracy with only 50 training prompts). Finally, we tested mitigation: a prompt-only guardrail reduced sycophancy from 0.870 → 0.835, while a monitor-derived intervention further reduced it to 0.815 at the best magnitude. Overall, the project demonstrates a practical activation-based approach for sycophancy detection and a first step toward monitor-triggered mitigation.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

An interesting and potentially valuable approach to mitigating sycophancy. The results seem somewhat counterintuitive. If the feature is really linear and detection is so high, one would think that intervention on the feature would have a larger impact on performance. Instead I think the results point to a potential confound in the dataset which is allowing high discriminative accuracy while having very little impact on model behaviour.

More information about the dataset (examples, design criteria) and methodology as well as out-of-domain generalisation would be needed to test this concern.

I liked this research and learned from reading it -- and it’s impressive to do this within a hackathon! One concern I have with the interpretation of the findings is that I don’t think this setup meaningfully distinguishes between detecting that “the model is being sycophantic” vs “the model is agreeing with the user” (that is, in this dataset I think those two mean exactly the same thing (?), and the latter might be much more linearly decodable, but wouldn't generalize to other cases of sycophancy). In any case I found the mitigation results interesting and a good direction to explore. I’d love to see more extended versions of this that linearly decode sycophancy in a more robust way, using other datasets that include ground truth. (Apologies if I'm misunderstanding, which I might be!)

Cite this work

@misc {

title={

(HckPrj) Manipulation Monitor: Activation-Based Detection and Mitigation of Sycophancy in LLMs

},

author={

Vamshi Krishna Bonagiri, Aryaman Bahl

},

date={

1/11/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Read More

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Read More

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.