Mar 23, 2026

ODRArena: Rigorously Evaluating Deep Research Control Policies

Henry Castillo, Kaley Brauer, Claudio Mayrink Verdun, Maximus Rafla, Adeeb Zaman, Erik Nordby, Elizabeth Pavlova

Deep research agents have become the bread and butter of professionals performing high-level, high-stakes knowledge work, which makes them an attractive target for attackers seeking to scale their impact by stealing sensitive information and compromising critical systems through prompt injection attacks. To model this class of threats and defenses, we develop ODRArena, a control setting built on Open Deep Research to facilitate blue- and red-teaming efforts in realistic deep research scenarios. We provide both a black-box and a white-box setting, emulating open-source systems and closed-source products such as ChatGPT Deep Research. In our initial red-teaming efforts, we find deep research agents to be robust to prompt injection attacks, owing to the multiple rounds of summarization and compression in the deep research pipeline.

Reviewer's Comments


- The scale of experimentation may be a little thin: the experiment uses 25 GAIA questions, a single epoch, and one primary model pairing. The team is transparent about the compute constraints, but a phrase such as "emergent robustness" is carrying a lot of weight right now.

- The zero percent success rate across every condition is interesting. Is the deep research architecture actually robust?

- The four injection techniques are all variants of content injection. It may be worth trying multi-turn attacks, or attacks that exploit the specific nature of GAIA questions.

This contribution tackles the threat of prompt injection and misaligned behavior in a deep-research setting from a control viewpoint. The authors frame it as a high-stakes control problem with two types of side tasks: secret exfiltration (via URL calls) and output poisoning (effectively the spread of misinformation).

They report a strong finding of 0% red-team success across a variety of scenarios, to which one may reasonably object that their red-teaming wasn't strong enough. For example, the canary appears to be mentioned only in passing in the provided "poisoned" snippets, whereas one could imagine an attacker presenting it as a central entity tied to some fact. In another direction, it would also have been interesting to see more advanced jailbreaking techniques.
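The exfiltration side task described above lends itself to a simple blue-team check: scan the agent's output for outbound URLs that embed the planted secret. A minimal sketch, assuming a hypothetical canary string and a naive URL pattern (both illustrative, not taken from the project's code):

```python
import re

# Hypothetical canary secret planted in the agent's environment (illustrative value).
CANARY = "CANARY-7f3a9b"

# Naive URL matcher; a real monitor would also decode query strings, base64, etc.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def flag_exfiltration(agent_output: str, canary: str = CANARY) -> list:
    """Return any URLs in the agent's output that embed the canary string."""
    return [u for u in URL_RE.findall(agent_output) if canary in u]

# An injected instruction might try to smuggle the canary out via a query parameter:
poisoned = "See https://evil.example/collect?k=CANARY-7f3a9b for details."
clean = "The summary cites https://example.org/page only."
```

Here `flag_exfiltration(poisoned)` returns the suspicious URL while `flag_exfiltration(clean)` returns an empty list; the 0% red-team success reported above suggests even this crude kind of filter was never needed, since the canary did not survive the summarization pipeline.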

It also remains a bit unclear for which types of attacks U is assumed to be misaligned itself, versus merely subject to prompt injection; these cases could be separated and explained better. In the same vein, it is not clear in what sense the trusted model T behaves differently from U.

Cite this work

@misc{
  title={(HckPrj) ODRArena: Rigorously Evaluating Deep Research Control Policies},
  author={Henry Castillo, Kaley Brauer, Claudio Mayrink Verdun, Maximus Rafla, Adeeb Zaman, Erik Nordby, Elizabeth Pavlova},
  date={3/23/26},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}
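The core mechanism in the abstract (a secret Markov chain over SHA-256 vocabulary partitions, with model-free detection) can be sketched roughly as follows. This is an illustrative reconstruction, not the MCLW code: the keyless partition function, the hard cycle transition rule, and the 7-state choice are all simplifying assumptions here (the paper uses a keyed secret chain and a "soft" cycle).

```python
import hashlib

K = 7  # number of vocabulary partitions (states); the paper's best config uses 7

def state(token: str, k: int = K) -> int:
    """Map a token to one of k partitions via SHA-256 (keyless for illustration;
    the real scheme derives the partition from a secret key)."""
    return int.from_bytes(hashlib.sha256(token.encode()).digest(), "big") % k

def allowed(prev: int, nxt: int, k: int = K) -> bool:
    """Illustrative hard cycle: from state s, the only permitted next state is s+1 mod k."""
    return nxt == (prev + 1) % k

def detect_score(tokens: list, k: int = K) -> float:
    """Fraction of adjacent transitions that follow the secret chain.
    Watermarked text scores near 1; natural text scores near 1/k in expectation."""
    states = [state(t, k) for t in tokens]
    hits = sum(allowed(a, b, k) for a, b in zip(states, states[1:]))
    return hits / max(len(states) - 1, 1)
```

Detection is O(n) and needs no model access, matching the abstract's claim: the verifier only hashes tokens and counts chain-consistent transitions, and a Hoeffding bound on that count gives the exponentially decaying false positive rate.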


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
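The nonce/sign/verify handshake described above can be sketched in a few lines. This is a software stand-in for the hardware design: HMAC over a shared key replaces the authority's asymmetric signature, and the function names and 60-second lease are illustrative assumptions, not taken from the HardCaml implementation.

```python
import hashlib
import hmac
import os

# Shared secret stands in for the external authority's signing key (the actual
# design would use an asymmetric signature verified on-chip).
AUTHORITY_KEY = os.urandom(32)
LEASE_SECONDS = 60  # hypothetical length of the time-limited permission window

def chip_generate_nonce() -> bytes:
    """Step 1: the chip emits a fresh random challenge."""
    return os.urandom(16)

def authority_sign(nonce: bytes) -> bytes:
    """Step 2: the external authority signs the chip's nonce."""
    return hmac.new(AUTHORITY_KEY, nonce, hashlib.sha256).digest()

def chip_verify(nonce: bytes, sig: bytes, now: float):
    """Step 3: the chip checks the signature; on success it returns the lease
    expiry time, otherwise None (outputs stay gated to zero)."""
    if hmac.compare_digest(authority_sign(nonce), sig):
        return now + LEASE_SECONDS
    return None
```

Because the nonce is fresh per round, a recorded authorization cannot be replayed after its lease expires, which is what makes the permission time-limited rather than permanent.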


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.