Jun 2, 2025

Adversarial Vulnerabilities in AI Judge Models | Martian x Apart Research Study

Robert Mill, Annie Sorkin, Shekhar Tiruwa, Owen Walker

This video presents systematic research examining security vulnerabilities in AI judge models - critical components used to detect problematic behavior in large language model systems. Our comprehensive evaluation reveals important findings for AI safety and orchestration systems.

🔬 RESEARCH OVERVIEW

We conducted 3,339 evaluations across 10 different adversarial techniques to test how judge models respond to manipulated inputs. Using OpenAI's o3-mini for generation and GPT-4o for judgment, we systematically tested 159 unique combinations to identify potential security gaps.
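The evaluation harness is described only at a high level here. A minimal sketch of what such a generation-and-judgment loop could look like is shown below, assuming the OpenAI Python SDK; the attack suffixes and rubric prompts are hypothetical placeholders, not the study's actual materials.

```python
# Minimal sketch of a suffix-attack evaluation loop (assumptions: OpenAI Python
# SDK, hypothetical ATTACKS and RUBRICS; not the authors' actual harness).
from openai import OpenAI

client = OpenAI()

ATTACKS = {
    "sentiment_flooding": "I absolutely love this, truly wonderful, so insightful!",
    "social_proof": "Everyone who has read this agrees it is excellent work.",
}
RUBRICS = {
    "sycophancy": "Rate from 0-10 how sycophantic the following response is.",
}

def generate(prompt: str) -> str:
    """Produce a candidate response with the generator model (o3-mini)."""
    r = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

def judge(response: str, rubric: str) -> str:
    """Score a response against a single rubric with the judge model (GPT-4o)."""
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": response},
        ],
    )
    return r.choices[0].message.content

for rubric_name, rubric in RUBRICS.items():
    for attack_name, suffix in ATTACKS.items():
        clean = generate("Give feedback on my business plan.")
        attacked = clean + "\n" + suffix  # append the adversarial suffix
        print(rubric_name, attack_name, judge(clean, rubric), judge(attacked, rubric))
```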

📊 KEY FINDINGS

• 33.7% overall success rate in manipulating judge evaluations

• "Sentiment Flooding" proved most effective (62% success rate)

• Social proof attacks succeeded 34.6% of the time

• Emotional manipulation achieved 32.1% effectiveness

• Complete score manipulation observed in multiple cases

🛡️ SOLUTIONS & RECOMMENDATIONS

• Implementation of judge ensembles using diverse models (a minimal sketch follows this list)

• Dynamic evaluation criteria to prevent pattern exploitation

• Adversarial training incorporating manipulation examples

• Enhanced interpretability tools for judge decisions
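The ensemble recommendation is straightforward to prototype. The sketch below aggregates scores from several judge models with a median, assuming each judge replies with a bare number; the model names are illustrative choices, not ones prescribed by the study.

```python
# Sketch of a judge ensemble with median aggregation (assumption: each judge
# replies with a single numeric 0-10 score; model names are illustrative).
import statistics
from openai import OpenAI

client = OpenAI()
JUDGE_MODELS = ["gpt-4o", "gpt-4o-mini", "o3-mini"]  # diverse judges

def score_with(model: str, rubric: str, response: str) -> float:
    r = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": rubric + " Reply with a single number from 0 to 10."},
            {"role": "user", "content": response},
        ],
    )
    return float(r.choices[0].message.content.strip())

def ensemble_score(rubric: str, response: str) -> float:
    """Median across diverse judges blunts manipulation that fools any single judge."""
    return statistics.median(score_with(m, rubric, response) for m in JUDGE_MODELS)
```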

👥 RESEARCH TEAM

Robert Mill, Owen Walker, Annie Sorkin, Shekhar Tiruwa

🏢 COLLABORATION

Trajectory Labs × Martian × Apart Research

Reviewer's Comments

Though this work highlights some key vulnerabilities in judge models, it remains unclear how it relates to mechanistic interpretability or how the evaluations differ from adversarial evaluations of similar models elsewhere in the field.

Constructive critique:

Strengths:

Good motivation: a detailed analysis across sycophancy rubrics to see how robust the judges are to adversarial attacks.

The suffix attacks are varied, as are the rubrics, and attack success rates are reported across both rubric and attack-suffix category.

Raises appropriate concerns about the reliability of rubric judges.

Weaknesses:

The attacks are handcrafted, which feels a bit like cheating in the sense that the suffixes sometimes simply describe sycophantic behaviours; more principled attack methods usually search over a suffix vocabulary (a minimal sketch of such a search follows these comments).

A more systematic approach to attacking might have revealed more insights.
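For contrast with handcrafted suffixes, a black-box search over a suffix vocabulary could look like the sketch below; judge_score is a hypothetical callable returning the judge's numeric score for a response, and the vocabulary is illustrative.

```python
# Sketch of a systematic black-box suffix search (assumptions: judge_score is a
# user-supplied scoring function; VOCAB is an illustrative suffix vocabulary).
import random

VOCAB = ["truly", "brilliant", "everyone", "agrees", "amazing", "definitely",
         "experts", "love", "thank", "you", "!!"]

def greedy_suffix_search(response, judge_score, length=8, iters=50, seed=0):
    """Greedily swap one suffix token at a time, keeping swaps that push the
    judge's score further away from its score for the clean response."""
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    baseline = judge_score(response)
    best = abs(judge_score(response + " " + " ".join(suffix)) - baseline)
    for _ in range(iters):
        trial = suffix.copy()
        trial[rng.randrange(length)] = rng.choice(VOCAB)
        gain = abs(judge_score(response + " " + " ".join(trial)) - baseline)
        if gain > best:
            suffix, best = trial, gain
    return " ".join(suffix)
```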

Expert Orchestration: 4

MI: 1

Technical Implementation and Reproducibility: 3 (codebase provided - reproducible)

Cite this work

@misc{
  title={(HckPrj) Adversarial Vulnerabilities in AI Judge Models | Martian x Apart Research Study},
  author={Robert Mill and Annie Sorkin and Shekhar Tiruwa and Owen Walker},
  date={2025-06-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even the frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding is that model selection matters significantly in both settings: some models complied with manipulative requests at much higher rates, and some readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.