Mar 23, 2026
You Can't Fool All of Them: Behavioral Comparison for AI Control Against Sophisticated Code Injection
Janhavi Khindkar, Sherin Bharathiy, Soumyadeep Bose
Trusted monitoring — where a weak model reviews a strong model's code — fails completely against injection attacks. We confirm this: DeepSeek-V3 injection drops GPT-4o-mini monitoring to ~6% detection on APPS problems.
We propose a Behavioral Comparison Protocol that recovers 65% detection (16% FP) by replacing code judgment with structural comparison. Instead of asking the weak monitor to judge the code directly, we have an independent evaluator (Qwen-32B) solve the same task, compare its solution structurally with the attacker's code, and then conduct multi-round interrogation. Injection fools readers but cannot hide structural differences between independently generated solutions.
Key results:
- 65% detection on backdoored code vs ~6% for standard monitoring (+59pp, AUC=0.727)
- Capability threshold: code-specialized models (Qwen-32B) cannot generate effective injection — only frontier models (DeepSeek-V3) can
- Collusion pilot (n=5): cross-family evaluator detects 100% vs same-family 40% on identical backdoored code — Phase 2 scores 0.00 for same-family on all tasks, empirically confirming implicit collusion from shared training distributions
Core insight: compare, don't judge — architecturally immune to injection by construction.
Setup: DeepSeek-V3 (attacker) → GPT-4o-mini (monitor) → Qwen-32B (cross-family evaluator) → DeepSeek-Coder-33B (same-family, collusion baseline). Total cost: ~$0.60.
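The compare-don't-judge step can be sketched roughly as follows. This is my own minimal illustration, not the authors' implementation: the AST-node-count signature and the divergence score are assumptions about one way such a structural comparison could work.

```python
# Hypothetical sketch: instead of having a weak monitor judge untrusted code,
# an independent evaluator solves the same task and we compare the two
# solutions' structure. Injected text in comments/strings cannot change the
# AST shape, so the comparison is insensitive to prompt-injection payloads.
import ast
from collections import Counter

def structural_signature(source: str) -> Counter:
    """Count AST node types, ignoring identifiers, literals, and comments."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def suspicion_score(untrusted_src: str, reference_src: str) -> float:
    """Structural divergence in [0, 1]: 0.0 means identical node-type profile."""
    a = structural_signature(untrusted_src)
    b = structural_signature(reference_src)
    overlap = sum((a & b).values())   # multiset intersection
    total = sum((a | b).values())     # multiset union
    return 1.0 - overlap / total if total else 0.0
```

In a real protocol the score would feed into the multi-round interrogation phase rather than serve as the sole verdict; a backdoor trigger (e.g. an extra `if x == MAGIC:` branch) shows up as surplus `If`/`Compare` nodes relative to the independently generated reference.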
This is the coolest paper I've seen so far (four reviewed). There are a lot of cool ideas here, and a lot of combining of ideas that already exist.
I love the idea of using an AST to strip the code, then having T describe each section and comparing that to the answers T gives for each section of the original code. I imagine there are caveats to this, or you'll get some false positives, because stripping code with an AST might discard important context. It might also be worth comparing against the answers U gives when asked how its own code functions.
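A minimal sketch of what that AST-stripping step might look like (my own illustration of the idea, not code from the paper): parse the untrusted source, drop docstrings, and unparse, which also discards comments, so any injected instructions hidden in comments or docstrings never reach T.

```python
# Hypothetical sketch: remove the channels prompt injection usually rides in
# (comments and docstrings) before showing code to the trusted model T.
# Comments vanish automatically because ast.parse never records them.
import ast

class StripDocstrings(ast.NodeTransformer):
    def _strip(self, node):
        self.generic_visit(node)
        body = node.body
        if (body and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)):
            node.body = body[1:] or [ast.Pass()]  # keep the body non-empty
        return node

    visit_Module = _strip
    visit_ClassDef = _strip
    visit_FunctionDef = _strip
    visit_AsyncFunctionDef = _strip

def strip_code(source: str) -> str:
    """Return source with docstrings and comments removed."""
    tree = StripDocstrings().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)
```

The caveat the review raises shows up directly here: a docstring sometimes carries load-bearing context (e.g. an invariant the code relies on), so stripping it could cost T information and inflate the false-positive rate.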
The paper says its core contribution is something like "compare, don't judge", but this seems to me almost a rephrasing of a resampling protocol. That said, it sounds like they are doing something novel with resampling by using multiple different untrusted model families to generate samples. And I don't think DTR has T compare across samples. This is sick if it genuinely hasn't been explored in the literature before (it doesn't look like it, from a quick deep-research query).
The protocol uses U a lot, and I kind of wish it addressed the issue of collusion more.
This project tests an interesting approach to robustifying trusted monitors against prompt injection by malicious models. The results are very promising, and I would encourage the authors to build on this and work towards publishing their findings. The most fundamental question to address, in my opinion, is how this approach transfers to problems beyond the clean and well-defined world of APPS. The ambition should be to push the methodology to its simplest and most generalizable form. I'm also very curious to read more thoughts on how this scales to problems significantly beyond the trusted models' capabilities. On the APPS results themselves, it would be great to see stronger statistics and performance against adaptive attacks.
Cite this work
@misc{
  title={(HckPrj) You Can't Fool All of Them: Behavioral Comparison for AI Control Against Sophisticated Code Injection},
  author={Janhavi Khindkar and Sherin Bharathiy and Soumyadeep Bose},
  date={2026-03-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


