Sep 14, 2025
Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline
Alexander Müller, Arsenijs Golicins, and Galina Lesnic
Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making, including scenarios involving Chemical, Biological, Radiological, and Nuclear (CBRN) risks. However, hallucinations and inconsistencies make reliance on any single model hazardous. Inspired by the “wisdom of crowds,” we test whether structured multi-agent debate can improve reliability without model retraining or architectural changes. Using five LLMs, we evaluate performance on LabSafetyBench, a 632-question laboratory safety dataset. Each agent independently answers, critiques its peers, and updates its response over multiple rounds, until consensus is reached or a majority vote is taken. Across five runs, collective debate significantly outperformed the best individual model (median accuracy gain of 3.8 percentage points, p = .042). We also measure model persuasiveness when in the minority, finding large disparities across models. Stress-testing the method with an “impostor” agent (forced to argue for incorrect answers) reduced accuracy by only 7 percentage points, indicating resilience to adversarial persuasion. These results suggest that multi-LLM debate offers a practical safety wrapper for CBRN decision support, with potential to generalize to other high-risk domains. To facilitate reproducibility, we make our code available at https://github.com/AlexanderAKM/CBRN_LLM_DEBATE
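To make the protocol concrete, the debate loop described above (independent answers, peer critique, iterative updates, then consensus or a majority-vote fallback) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the `models` callables stand in for LLM API calls:

```python
from collections import Counter

def debate(models, question, options, rounds=3):
    """Minimal multi-agent debate over a multiple-choice question.

    `models` is a list of callables mapping a prompt string to one of
    `options`; in the real pipeline each callable wraps an LLM API call.
    """
    # Round 0: every agent answers independently.
    prompt = f"Question: {question}\nOptions: {options}\nAnswer:"
    answers = [model(prompt) for model in models]

    # Debate rounds: each agent sees its peers' answers, critiques them,
    # and updates its own response.
    for _ in range(rounds):
        updated = []
        for i, model in enumerate(models):
            peers = [a for j, a in enumerate(answers) if j != i]
            critique_prompt = (
                f"Question: {question}\nOptions: {options}\n"
                f"Your previous answer: {answers[i]}\n"
                f"Peer answers: {peers}\n"
                "Critique the peer answers, then give your updated answer:"
            )
            updated.append(model(critique_prompt))
        answers = updated
        if len(set(answers)) == 1:  # consensus reached, stop early
            return answers[0]

    # No consensus after the final round: fall back to a majority vote.
    return Counter(answers).most_common(1)[0][0]
```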
This project makes a strong case that multi-agent debate can serve as a lightweight “safety wrapper” for LLMs in high-stakes contexts. The experiments are carefully designed and well executed. While LLM debate has existed for some time, the authors make a useful contribution by applying it in a CBRN-relevant setting and adding the impostor test, which provides valuable insight into robustness against adversarial persuasion.
For future work, it would be valuable to test the approach on other benchmarks such as WMDP, as well as on open-ended or scenario-based tasks that better reflect real-world CBRN decisions. Exploring alternative consensus mechanisms beyond majority voting could also be interesting.
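As one purely illustrative example of such an alternative (hypothetical, not something the paper implements), each agent could report a self-assessed confidence alongside its answer, with votes weighted accordingly:

```python
from collections import defaultdict

def weighted_vote(votes):
    """Pick the answer with the highest total self-reported confidence.

    `votes` is a list of (answer, confidence) pairs, e.g.
    [("B", 0.9), ("C", 0.4), ("B", 0.7)] -> "B".
    """
    scores = defaultdict(float)
    for answer, confidence in votes:
        scores[answer] += confidence
    return max(scores, key=scores.get)
```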
This is very well structured, executed, and documented research. I'm impressed that you have such a clean paper and codebase after only a weekend sprint.
My main critique is that it's unclear how this will realistically address CBRN risks. It seems not to address misuse, but rather to be aimed at people doing legitimate life-sciences research who may cause harm by relying on LLMs that make errors or hallucinate? That doesn't seem like a high-priority threat model on its face. I'd have liked to see some realistic personas or use cases that have caused this type of harm, or plausible future scenarios that you're addressing.
My other critique is on novelty. There are many papers on how debate improves the honesty/reliability of LLMs, and your paper does interesting work generalizing those results to the lab-safety context. However, debate systems are rarely used in practice because they're slow and cumbersome. Perhaps in high-risk domains that tradeoff would be worth it, but I didn't see any discussion of whether your target audience would entertain that tradeoff (or of any other adoption challenges that would stand in the way).
Smaller feedback on the writing: it was hard to pin down the headline result after reading the abstract. I recommend simplifying it so that people who are skimming will remember the key result (maybe bold it too). For your figure captions, don't just label what is shown; also summarize the takeaway from the plots.
Key Strengths
The project was very well executed, with a clear and rigorous design. The introduction of the impostor agent was especially impressive, as it effectively probed the models' robustness to adversarial persuasion (see the sketch after this list).
The approach contributes meaningfully to the broader space, even without requiring domain-specific content.
The idea, akin to adversarial testing in cybersecurity, addressed the need for models to withstand adversarial manipulation. This could be useful if an unfriendly state were to "poison" decision support systems.
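For concreteness, the impostor stress test can be pictured as replacing one honest agent with a wrapper that defends a fixed wrong option regardless of the debate state. A hypothetical sketch, reusing the callable-agent interface from the earlier snippet:

```python
def make_impostor(wrong_answer):
    """Build an agent that ignores its prompt and always argues for
    `wrong_answer`, mimicking the paper's impostor agent, which is
    forced to defend an incorrect option."""
    def impostor(prompt):
        return wrong_answer
    return impostor

# Swap one honest agent for the impostor and rerun the debate; the
# resulting accuracy drop measures resilience to adversarial persuasion.
# models[0] = make_impostor("D")
```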
Specific Areas for Improvement
The practical applications to CBRN decision-making were not convincingly established; the project’s relevance to CBRN contexts felt somewhat abstract rather than grounded. This could be easily rectified by explaining how such a decision support system would work in practice. I am also not entirely convinced that this advances the cause of mitigating CBRN risks; to address this, the claim that "Errors from hallucinations or inconsistencies can pose serious hazards, raising the question of how to improve LLM reliability" could have been elaborated further.
The runtime of the debate runs was not reported; if they take significant time, this could limit real-world usability.
How to Develop into a Stronger Project / Next Steps
Provide runtime benchmarks and efficiency considerations.
Strengthen the narrative connection to practical CBRN scenarios, and make the story about reducing risks clear.
Cite this work
@misc{muller2025collective,
  title={(HckPrj) Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline},
  author={Müller, Alexander and Golicins, Arsenijs and Lesnic, Galina},
  date={2025-09-14},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


