Sep 14, 2025

Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline

Alexander Müller, Arsenijs Golicins, and Galina Lesnic

Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making, including scenarios involving Chemical, Biological, Radiological, and Nuclear (CBRN) risks. However, hallucinations and inconsistencies make reliance on single models hazardous. Inspired by the “wisdom of crowds,” we test whether structured multi-agent debate can improve reliability without model retraining or architectural changes. Using five LLMs, we evaluate performance on LabSafetyBench, a 632-question laboratory safety dataset. Each agent independently answers, critiques peers, and updates responses over multiple rounds before consensus or majority vote. Across five runs, collective debate significantly outperformed the best individual model (median accuracy gain of 3.8 percentage points, p = .042). We also measure model persuasiveness when in the minority, finding large disparities across models. Stress-testing the method with an “impostor” agent—forced to argue for incorrect answers—reduced accuracy by only 7 percentage points, indicating resilience to adversarial persuasion. These results suggest that multi-LLM debate offers a practical safety wrapper for CBRN decision support, with potential to generalize to other high-risk domains. To facilitate reproducibility, we make our code available at https://github.com/AlexanderAKM/CBRN_LLM_DEBATE
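The debate protocol in the abstract (independent answers, rounds of peer critique and revision, then consensus or majority vote) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `models` callables, the `peer_answers` interface, and the round/consensus logic are all assumptions; the actual prompts and consensus rules live in the linked repository.

```python
from collections import Counter

def debate(models, question, rounds=3):
    """Sketch of a multi-agent debate loop with majority-vote fallback.

    `models` is a list of callables (hypothetical interface):
        model(question, peer_answers) -> answer string.
    """
    # Round 0: each agent answers independently, with no peer context.
    answers = [m(question, peer_answers=None) for m in models]

    for _ in range(rounds):
        # Each agent sees the other agents' current answers and may revise.
        answers = [
            m(question, peer_answers=answers[:i] + answers[i + 1:])
            for i, m in enumerate(models)
        ]
        # Stop early once the agents reach unanimous consensus.
        if len(set(answers)) == 1:
            break

    # No consensus: fall back to a majority vote over final answers.
    return Counter(answers).most_common(1)[0][0]
```

A fixed-answer stub (each model always returns the same choice) already exercises the voting path; the "impostor" stress test from the abstract corresponds to replacing one callable with an agent that always argues for a wrong answer.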

Reviewer's Comments



This project makes a strong case that multi-agent debate can serve as a lightweight “safety wrapper” for LLMs in high-stakes contexts. The experiments are carefully designed and well executed. While LLM debate has existed for some time, the authors make a useful contribution by applying it in a CBRN-relevant setting and adding the impostor test, which provides valuable insight into robustness against adversarial persuasion.

For future work, it would be valuable to test the approach on other benchmarks such as WMDP, as well as on open-ended or scenario-based tasks that better reflect real-world CBRN decisions. Exploring alternative consensus mechanisms beyond majority voting could also be interesting.

This is very well structured, executed, and documented research. Impressed that you have such a clean paper and codebase after only a weekend sprint.

My main critique is it's unclear how this will realistically address CBRN risks. It seems to not address misuse, but rather is aimed at people doing legitimate life sciences research who may cause harm from using LLMs that are making errors/hallucinating? That doesn't seem like a high priority threat model on its face. I'd have liked to see some realistic personas or use cases that have caused this type of harm, or plausible future scenarios that you're addressing.

My other critique is on novelty. There are many papers on how debate improves the honesty/reliability of LLMs, and your paper does interesting work that generalizes those results to the lab safety context. However, no one really uses debate systems in practice because they're slow and cumbersome. Perhaps in high risk domains that tradeoff would be worth it, but I didn't see any discussion about whether your target audience would entertain that tradeoff (or any other adoption challenges that would stand in the way).

Smaller feedback on the writing: it was hard to understand the headline result after reading the abstract. I recommend dumbing it down more so people who are skimming will remember the key result (maybe bold it too). For your figure captions, don't just label what is being shown but also summarize the takeaway from the plots.

Key Strengths

The project was very well executed, with a clear and rigorous design. The introduction of the impostor agent was especially impressive, as it effectively tested and strengthened adversarial robustness across models.

The approach contributes meaningfully to the broader space, even without requiring domain-specific content.

The idea, akin to red-teaming in cybersecurity, took on the need for models to withstand adversarial manipulation. This could be useful if an unfriendly state were to "poison" decision support systems.

Specific Areas of Improvement

The practical applications to CBRN decision-making were not convincingly connected. The project’s relevance to CBRN contexts felt somewhat abstract rather than grounded. This can be easily rectified, however, by explaining how this decision support system could work. I am also not entirely convinced that this advances the cause of mitigating CBRN risks. To cover this, maybe there could have been more elaboration on "Errors from hallucinations or inconsistencies can pose serious hazards, raising the question of how to improve LLM reliability."

The runtime of the model runs was not mentioned; if they take significant time, this could limit real-world usability.

How to Develop into a Stronger Project / Next Steps

Provide runtime benchmarks and efficiency considerations.

Strengthen the narrative connection to practical CBRN scenarios, and make the story about reducing risks clear.

Cite this work

@misc{
  title={(HckPrj) Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline},
  author={Alexander Müller and Arsenijs Golicins and Galina Lesnic},
  date={2025-09-14},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.