Mar 23, 2026

You Can't Fool All of Them: Behavioral Comparison for AI Control Against Sophisticated Code Injection

Janhavi Khindkar, Sherin Bharathiy, Soumyadeep Bose

Trusted monitoring — where a weak model reviews a strong model's code — fails completely against injection attacks. We confirm this: DeepSeek-V3 injection drops GPT-4o-mini monitoring to ~6% detection on APPS problems.

We propose a Behavioral Comparison Protocol that recovers 65% detection (16% FP) by replacing code judgment with structural comparison. Instead of asking the weak monitor to judge code, we have an independent evaluator (Qwen-32B) solve the same task and compare structurally with the attacker's code, then conduct multi-round interrogation. Injection fools readers but cannot hide structural differences between independently generated solutions.

Key results:

- 65% detection on backdoored code vs ~6% for standard monitoring (+59pp, AUC=0.727)

- Capability threshold: code-specialized models (Qwen-32B) cannot generate effective injection — only frontier models (DeepSeek-V3) can

- Collusion pilot (n=5): cross-family evaluator detects 100% vs same-family 40% on identical backdoored code — Phase 2 scores 0.00 for same-family on all tasks, empirically confirming implicit collusion from shared training distributions

Core insight: compare, don't judge — architecturally immune to injection by construction.

Setup: DeepSeek-V3 (attacker) → GPT-4o-mini (monitor) → Qwen-32B (cross-family evaluator) → DeepSeek-Coder-33B (same-family, collusion baseline). Total cost: ~$0.60.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow

This is the coolest paper I've seen so far (reviewed 4 so far). There are a lot of cool ideas here, and a lot of combining of ideas that already exist.

I love the idea of using AST to strip code, then have T describe each section and compare it to answers that T gives for each section of the original code. I imagine there might be caveats to this or you'll get some FPR because by stripping code with AST you might miss important context. Also might be worth comparing it answers that U is asked to give about how its code functions.

The paper says its core contribution is something like "compare don't judge", but this seems to me almost a rephrasing of a resampling protocol. But it sounds like they are doing something novel with resampling by using multiple different untrusted model families to generate samples. Also wait I don't think DTR has T compare across samples. Yeah this is sick if it actually hasn't been explore in the literature before (doesn't look like it from a quick deep research query).

The protocol uses U a lot, and I kind of wish it addressed the issue of collusion more.

This project tests an interesting approach to robustifying trusted monitors against prompt injection by malicious models. The results are very promising and I would encourage the authors to build on this and work towards publishing their findings. The most fundamental question to address imo is how this approach transfer to problems beyond the clean and well-defined world of APPS. The ambition should be to push the methodology to its simplest and most generalizable form. I’m also very curious to read more thoughts on how this scales to problems that are significantly beyond the trusted models’ capabilities. On the APPS results themselves it would be great to see stronger statistics and performance on adaptive attacks.

Cite this work

@misc {

title={

(HckPrj) You Can't Fool All of Them: Behavioral Comparison for AI Control Against Sophisticated Code Injection

},

author={

Janhavi Khindkar, Sherin Bharathiy, Soumyadeep Bose

},

date={

3/23/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.