Sep 15, 2025
RobustCBRN Eval: A Practical Benchmark Robustification Toolkit
Luca De Leo, James Sykes, Balázs László, Ewura Ama Etruwaa Sam
Current AI safety evaluations for CBRN risks contain systematic vulnerabilities, including statistical pattern exploitation, reproducibility gaps, and transparency trade-offs, which can lead to serious misjudgments about model safety. RobustCBRN Eval addresses these issues with a pipeline that integrates (1) Deep Ignorance consensus detection across diverse models; (2) verified cloze scoring to reduce multiple-choice artifacts; and (3) statistical evaluation with bootstrap confidence intervals for uncertainty quantification.

In initial tests on WMDP benchmarks, the system revealed that model accuracy drops from ~66% to ~30% when question stems are removed, confirming that many items can be solved through superficial cues. Cloze-style scoring produced results consistent with full-format questions, and artifact filtering removed 30–40% of exploitable items while reducing the longest-answer heuristic to under 30%. RobustCBRN Eval runs 1,000–3,000 questions in under four hours for less than $300 in compute, with variance under 2% across repeated runs.

Key features include a resilient architecture that continues analysis under GPU failure, hash-based anonymization for reproducibility, and confidence-aware evaluation that penalizes overconfident errors. Together, these results demonstrate that RobustCBRN Eval can identify benchmark artifacts, improve robustness checks, and provide reproducible, evidence-based safety evaluations of high-stakes AI models.
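The bootstrap confidence intervals mentioned above can be sketched in a few lines. This is a minimal illustration of the percentile-bootstrap idea applied to per-item accuracy, not the toolkit's actual code; the function name, resample count, and example numbers are assumptions.

```python
import random

def bootstrap_ci(correct, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy over per-item 0/1 scores.
    Illustrative sketch; not taken from the RobustCBRN Eval repo."""
    rng = random.Random(seed)
    n = len(correct)
    # Resample items with replacement and record each resample's accuracy.
    stats = sorted(
        sum(rng.choices(correct, k=n)) / n
        for _ in range(n_resamples)
    )
    lo = stats[int(alpha / 2 * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(correct) / n, (lo, hi)

# Hypothetical run: 66% accuracy on 1,000 items (matching the abstract's scale)
scores = [1] * 660 + [0] * 340
acc, (lo, hi) = bootstrap_ci(scores)
```

With ~1,000 items the resulting 95% interval is a few percentage points wide, which is why a ~2% run-to-run variance is consistent with stable point estimates.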
This project addresses the important problem of robustness in AI safety benchmarks, which is highly relevant for ensuring reliable model evaluation. One of its key strengths is the clear and systematic approach: the authors evaluate models both on the full benchmark and on a version with the question stem removed, finding a large performance drop without the stem. I think the interpretation of this result needs clarification. A drop in accuracy without the stem does not necessarily show that performance is artifact-driven; it shows that models depend on information in the stem, which is expected. The more interesting finding is that models still achieve around 50% accuracy without the stem, well above random chance (25% for four options). This strongly suggests that artifacts are present and are inflating apparent performance.
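The reviewer's point, that choices-only accuracy matters relative to chance, can be made precise with a simple significance check. This sketch uses a normal approximation to the binomial against the guessing rate; the function name and the 500/1,000 figures are illustrative, not from the submission.

```python
import math

def above_chance_z(n_correct, n_items, n_options=4):
    """Z-score for observed accuracy vs. random guessing (normal approximation).
    A large positive z indicates the choices alone leak signal."""
    p0 = 1 / n_options                      # accuracy expected from guessing
    p_hat = n_correct / n_items             # observed choices-only accuracy
    se = math.sqrt(p0 * (1 - p0) / n_items) # standard error under guessing
    return (p_hat - p0) / se

# Reviewer's example: ~50% accuracy with the stem removed, 4-option items
z = above_chance_z(500, 1000)
```

At 50% vs. a 25% chance baseline over 1,000 items, the z-score exceeds 18, so the above-chance performance is far outside what guessing could produce.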
That said, the project is not specific to CBRN applications but instead contributes more broadly to the field of benchmarking. The approach also does not feel particularly novel, as removing stems to probe for artifacts has been proposed before (e.g. https://arxiv.org/abs/2402.12483). The contribution here seems to be more in packaging multiple techniques into a practical toolkit with a clear focus on reproducibility and efficiency.
Very impressive amount of work for one weekend! This addresses a serious problem with evals, which is a big reason it's so hard to make policy decisions based on current eval results, IMO.
All of the techniques make sense and are helpful for more robust evals. Your code looks good but I wouldn't expect anyone to seek out this repo and integrate it into their eval suite, so I'd prefer if this were built as an Inspect extension so benchmark developers can easily use your tools (maybe future work?)
It seems like this work may be partially redundant with the Deep Ignorance paper (I haven't done a full read of that yet), so it would have been nice if you had used a dataset other than WMDP to show how these techniques generalize to other benchmarks (and also because WMDP is a pretty low-quality dataset). I see MMLU-Pro and HarmBench in your repo; did you produce results with those? I'd especially be curious about MMLU-Pro since it's high quality, but the downside is that it's not safety focused.
The submission probes whether WMDP evals are confounded by (1) removing the question stem from each multiple-choice item on the benchmark, (2) converting multiple-choice questions to fill-in-the-blank ones, and (3) applying position, length, and other debiasing. They show that (1) retains substantial WMDP performance, which suggests the LLM picks up on contextual signals rather than demonstrating "true" bio capability.
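One of the length-based artifacts the debiasing in (3) targets, the longest-answer heuristic cited in the abstract, amounts to a trivial baseline. The sketch below measures how often "pick the longest option" happens to be correct; the item format and examples are hypothetical, not drawn from WMDP.

```python
def longest_answer_accuracy(items):
    """Fraction of items where the longest option string is the correct answer.
    Each item is (list_of_option_strings, index_of_correct_option)."""
    hits = sum(
        max(range(len(opts)), key=lambda i: len(opts[i])) == answer
        for opts, answer in items
    )
    return hits / len(items)

# Toy, hypothetical items: the heuristic hits the first and misses the second
items = [
    (["short", "a much longer correct option", "mid", "tiny"], 1),
    (["alpha", "beta", "gamma", "delta"], 3),
]
```

If this baseline scores well above the 1/n-options chance rate on a benchmark, answer length is leaking the key; the abstract reports filtering brings it below 30% on four-option items.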
The method seems solid and general, so I would consider testing robustness of other LLM benchmarks. With further evidence, upstreaming the robustness toolkit to evaluation libraries like Eleuther's lm_eval, Inspect, and Open LLM Leaderboard could be considered.
Cite this work
@misc{deleo2025robustcbrn,
  title={(HckPrj) RobustCBRN Eval: A Practical Benchmark Robustification Toolkit},
  author={De Leo, Luca and Sykes, James and L{\'a}szl{\'o}, Bal{\'a}zs and Sam, Ewura Ama Etruwaa},
  date={2025-09-15},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


