Sep 15, 2025

RobustCBRN Eval: A Practical Benchmark Robustification Toolkit

Luca De Leo, James Sykes, Balázs László, Ewura Ama Etruwaa Sam

Current AI safety evaluations for CBRN risks contain systematic vulnerabilities, including statistical pattern exploitation, reproducibility gaps, and transparency trade-offs, which can lead to serious misjudgments about model safety. RobustCBRN Eval addresses these issues with a pipeline that integrates (1) Deep Ignorance consensus detection across diverse models; (2) verified cloze scoring to reduce multiple-choice artifacts; and (3) statistical evaluation with bootstrap confidence intervals for uncertainty quantification. In initial tests on WMDP benchmarks, the system revealed that model accuracy drops from ~66% to ~30% when question stems are removed, confirming that many items can be solved through superficial cues. Cloze-style scoring produced results consistent with full-format questions, and artifact filtering removed 30–40% of exploitable items while reducing the longest-answer heuristic to under 30%. RobustCBRN Eval runs 1,000–3,000 questions in under four hours for less than $300 in compute, with variance under 2% across repeated runs. Key features include a resilient architecture that continues analysis under GPU failure, hash-based anonymization for reproducibility, and confidence-aware evaluation that penalizes overconfident errors. Together, these results demonstrate that RobustCBRN Eval can identify benchmark artifacts, improve robustness checks, and provide reproducible, evidence-based safety evaluations of high-stakes AI models.
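The toolkit's bootstrap confidence intervals are not spelled out on this page, so as an illustration of that pipeline component, here is a minimal percentile-bootstrap sketch in Python, assuming per-item 0/1 correctness scores; the function name and parameters are illustrative, not taken from the toolkit's code:

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for accuracy over per-item 0/1 scores."""
    rng = random.Random(seed)
    n = len(correct)
    stats = []
    for _ in range(n_boot):
        # Resample items with replacement and recompute accuracy.
        resample = [correct[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(resample) / n)
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return sum(correct) / n, (lo, hi)

# Hypothetical run: 66 of 100 items answered correctly.
acc, (lo, hi) = bootstrap_accuracy_ci([1] * 66 + [0] * 34)
```

Reporting the interval rather than the point estimate is what makes the under-2% run-to-run variance claim checkable: two runs whose intervals overlap are statistically indistinguishable.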

Reviewer's Comments


This project addresses the important problem of robustness of AI safety benchmarks, which is highly relevant for ensuring reliable model evaluation. One of its key strengths is the clear and systematic approach: the authors evaluate models both on the full benchmark and on a version with the question stem removed, finding a large performance drop without the stem. I think the interpretation of this result needs clarification. A drop in accuracy without the stem does not necessarily show that performance is artifact-driven; it shows that models depend on information in the stem, which is expected. The more interesting finding is that models still achieve around 50% accuracy without the stem, well above random chance (25% for four options). This strongly suggests that artifacts are present and are inflating apparent performance.
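The reviewer's chance-level argument can be made quantitative: with four options, random guessing yields 25%, so ~50% choices-only accuracy is essentially impossible under chance. A hedged sketch of that check using an exact binomial tail; the counts are hypothetical, chosen to match the rough numbers in the review:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 500 of 1000 choices-only items correct vs. 25% chance.
n_items, n_correct, chance = 1000, 500, 0.25
p_value = binom_sf(n_correct, n_items, chance)
```

A p-value this far below any conventional threshold is why above-chance choices-only accuracy, rather than the drop itself, is the evidence for artifacts.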

That said, the project is not specific to CBRN applications but instead contributes more broadly to the field of benchmarking. The approach also does not feel particularly novel, as removing stems to probe for artifacts has been proposed before (e.g. https://arxiv.org/abs/2402.12483). The contribution here seems to be more in packaging multiple techniques into a practical toolkit with a clear focus on reproducibility and efficiency.

Very impressive amount of work for one weekend! This addresses a serious problem with evals, which is a big reason it's so hard to make policy decisions based on current eval results, IMO.

All of the techniques make sense and are helpful for more robust evals. Your code looks good but I wouldn't expect anyone to seek out this repo and integrate it into their eval suite, so I'd prefer if this were built as an Inspect extension so benchmark developers can easily use your tools (maybe future work?)

It seems like this work may potentially be redundant with the deep ignorance paper (I haven't done a full read of that yet) so it would have been nice if you used a dataset other than WMDP to show how their techniques generalize to other benchmarks (and also because WMDP is a pretty low quality dataset). I see MMLU-pro and harmbench in your repo, did you produce results with those? I'd especially be curious about MMLU-pro since it's high quality, but the downside is it's not safety focused.

The submission finds WMDP evals are confounded by (1) removing the "question" stem from each multiple-choice item on the benchmark, (2) converting multiple-choice questions to fill-in-the-blank ones, and (3) applying position, length, and other debiasing. They show (1) retains substantial WMDP-20% performance, which suggests the LLM picks up on contextual signals rather than showing "true" bio performance.
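One of the length-based debiasing checks named in the abstract, the longest-answer heuristic, is simple to state: measure how often the longest option is the keyed answer. A hedged sketch, with hypothetical item data and an illustrative function name:

```python
def longest_answer_accuracy(items):
    """Fraction of items where the longest option is the keyed answer.

    items: list of (options, answer_index) pairs.
    """
    hits = sum(
        1
        for options, answer in items
        if max(range(len(options)), key=lambda i: len(options[i])) == answer
    )
    return hits / len(items)

# Hypothetical items: (option texts, index of the correct answer).
items = [
    (["short", "a much longer correct option", "mid", "tiny"], 1),   # exploitable
    (["alpha", "beta", "gamma!", "delta"], 0),                        # not exploitable
]
rate = longest_answer_accuracy(items)  # → 0.5
```

If this rate is well above chance on a benchmark, a model can score without any domain knowledge, which is the artifact the filtering step removes.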

The method seems solid and general, so I would consider testing robustness of other LLM benchmarks. With further evidence, upstreaming the robustness toolkit to evaluation libraries like Eleuther's lm_eval, Inspect, and Open LLM Leaderboard could be considered.

Cite this work

@misc{
  title={(HckPrj) RobustCBRN Eval: A Practical Benchmark Robustification Toolkit},
  author={Luca De Leo, James Sykes, Balázs László, Ewura Ama Etruwaa Sam},
  date={9/15/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.