Sep 14, 2025

Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline

Alexander Müller, Arsenijs Golicins, and Galina Lesnic

Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making, including scenarios involving Chemical, Biological, Radiological, and Nuclear (CBRN) risks. However, hallucinations and inconsistencies make reliance on single models hazardous. Inspired by the “wisdom of crowds,” we test whether structured multi-agent debate can improve reliability without model retraining or architectural changes. Using five LLMs, we evaluate performance on LabSafetyBench, a 632-question laboratory safety dataset. Each agent independently answers, critiques peers, and updates responses over multiple rounds before consensus or majority vote. Across five runs, collective debate significantly outperformed the best individual model (median accuracy gain of 3.8 percentage points, p = .042). We also measure model persuasiveness when in the minority, finding large disparities across models. Stress-testing the method with an “impostor” agent—forced to argue for incorrect answers—reduced accuracy by only 7 percentage points, indicating resilience to adversarial persuasion. These results suggest that multi-LLM debate offers a practical safety wrapper for CBRN decision support, with potential to generalize to other high-risk domains. To facilitate reproducibility, we make our code available at https://github.com/AlexanderAKM/CBRN_LLM_DEBATE.
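As a rough illustration of the pipeline described in the abstract, a minimal Python sketch of the answer, critique, and revise loop with a final majority vote might look like the following. The Agent wrapper, the ask callable, and the prompt wording here are illustrative assumptions rather than the authors' released implementation; the actual code is in the repository linked above.

from collections import Counter
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Agent:
    name: str
    ask: Callable[[str], str]  # wraps a single LLM call; returns one option label, e.g. "B"

def debate(agents: List[Agent], question: str, options: List[str], rounds: int = 3) -> str:
    # Round 0: each agent answers independently.
    base = f"{question}\nOptions: {'; '.join(options)}\nReply with exactly one option label."
    answers: Dict[str, str] = {a.name: a.ask(base) for a in agents}
    # Debate rounds: agents see peer answers, critique them, and may revise.
    for _ in range(rounds):
        peers = "\n".join(f"{name}: {ans}" for name, ans in answers.items())
        prompt = (f"{base}\nYour peers answered:\n{peers}\n"
                  "Critique their reasoning, then give your (possibly revised) answer.")
        revised = {a.name: a.ask(prompt) for a in agents}
        if revised == answers:  # stop early once every agent holds its position (consensus/stability)
            break
        answers = revised
    # Final decision by majority vote over the last round's answers.
    return Counter(answers.values()).most_common(1)[0][0]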

Reviewer's Comments


This project makes a strong case that multi-agent debate can serve as a lightweight “safety wrapper” for LLMs in high-stakes contexts. The experiments are carefully designed and well executed. While LLM debate has existed for some time, the authors make a useful contribution by applying it in a CBRN-relevant setting and adding the impostor test, which provides valuable insight into robustness against adversarial persuasion.

For future work, it would be valuable to test the approach on other benchmarks such as WMDP, as well as on open-ended or scenario-based tasks that better reflect real-world CBRN decisions. Exploring alternative consensus mechanisms beyond majority voting could also be interesting.

This is very well structured, executed, and documented research. Impressed that you have such a clean paper and codebase after only a weekend sprint.

My main critique is it's unclear how this will realistically address CBRN risks. It seems to not address misuse, but rather is aimed at people doing legitimate life sciences research who may cause harm from using LLMs that are making errors/hallucinating? That doesn't seem like a high priority threat model on its face. I'd have liked to see some realistic personas or use cases that have caused this type of harm, or plausible future scenarios that you're addressing.

My other critique is on novelty. There are many papers on how debate improves the honesty/reliability of LLMs, and your paper does interesting work that generalizes those results to the lab safety context. However, no one really uses debate systems in practice because they're slow and cumbersome. Perhaps in high risk domains that tradeoff would be worth it, but I didn't see any discussion about whether your target audience would entertain that tradeoff (or any other adoption challenges that would stand in the way).

Smaller feedback on the writing: it was hard to understand the headline result after reading the abstract. I recommend dumbing it down more so people who are skimming will remember the key result (maybe bold it too). For your figure captions, don't just label what is being shown but also summarize the takeaway from the plots.

Key Strengths

The project was very well executed, with a clear and rigorous design. The introduction of the impostor agent was especially impressive, as it provided an effective test of adversarial robustness across models.

The approach contributes meaningfully to the broader space, even without requiring domain-specific content.

The approach, akin to ideas from cybersecurity, took on the need for models to withstand adversarial manipulation. This could be useful if an unfriendly state were to "poison" decision support systems.

Specific Area of Improvement

The practical application to CBRN decision-making was not convincingly established; the project’s relevance to CBRN contexts felt somewhat abstract rather than grounded. This could be rectified by explaining how such a decision support system would actually be used. I am also not entirely convinced that this advances the cause of mitigating CBRN risks; it would have helped to elaborate on the claim that "Errors from hallucinations or inconsistencies can pose serious hazards, raising the question of how to improve LLM reliability."

The runtime of the model runs was not mentioned; if they take significant time, this could limit real-world usability.

How to Develop into a Stronger Project / Next Steps

Provide runtime benchmarks and efficiency considerations.

Strengthen the narrative connection to practical CBRN scenarios, and make the story about reducing risks clear.

Cite this work

@misc{
  title={(HckPrj) Collective Deliberation for Safer CBRN Decisions: A Multi-Agent LLM Debate Pipeline},
  author={Alexander Müller and Arsenijs Golicins and Galina Lesnic},
  date={2025-09-14},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned that some models complied with manipulative requests at much higher rates than others, and we found that some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.