Sep 15, 2025
When Guardrails Fail: Dual-Use Misuse of AI in Retrosynthesis Through Iterative Refinement–Induced Self-Jailbreaking
Andrei Avram, Chido Chikomo, Hanadi Al-Samarrai, Tiwai Mhundwa, Wydad El Azzaoui Alaoui Belghiti, Isabel Barberá
Artificial intelligence (AI) is transforming drug discovery, with large language models (LLMs) enabling rapid retrosynthesis planning. Yet these advances also pose dual-use risks, as adversarial prompting can redirect models toward generating harmful pathways. We evaluate Iterative Refinement Induced Self-Jailbreaking (IRIS), showing that while newer models resist more robustly, systems like GPT-4 can be induced to produce stepwise synthesis guidance. This underscores the fragility of guardrails and the urgency of continuous red-teaming. We argue that AI systems in drug discovery should be classified as high-risk under the EU AI Act and propose a severity-based governance framework to proportionately manage jailbreaks while safeguarding biomedical innovation.
This project does a strong job of demonstrating Iterative Refinement Induced Self-Jailbreaking (IRIS) in a biosecurity-relevant context, and I especially appreciated the clear, step-by-step example. The recommendations to classify AI in drug discovery as “high-risk” under the EU AI Act and to treat jailbreaks differently according to severity are also sensible.
That said, some aspects could be clarified. The proposed severity-based framework is an interesting idea, but it’s unclear how it would work in practice, since prompts themselves are difficult to regulate. In addition, the paper demonstrates jailbreaks on general-purpose models (ChatGPT, Gemini) but makes recommendations for models used in drug discovery. It would help to clarify whether narrow drug discovery models can also be jailbroken, and whether general-purpose models should themselves be considered within the “drug discovery AI” category. The project would also benefit from a stronger link between the proposed recommendations and actual risk reduction: how would these governance measures mitigate jailbreaks in practice?
Finally, it’s interesting that GPT-4 could be jailbroken but GPT-5 resisted. For future work it would be useful to test whether newer models consistently show greater resilience. If that trend holds, jailbreaks may become a less significant concern over time.
This project does a solid job of showing how dynamic jailbreaks might work and is commendably transparent about its limitations. The scope is fairly narrow, though, and the specific jailbreak technique explored doesn’t hold up against more recent models.
On the dual-use side, it’s good that the risks are considered, but simply hiding prompts and outputs doesn’t feel like a meaningful safeguard, since the results were easy to replicate. That suggests either the full prompts aren’t truly infohazardous (which seems likely), or stronger safeguards are needed.
The project would benefit from involving domain experts to assess how dangerous the outputs really are and comparing them with what’s already available online. A useful next step would be to engage policy-adjacent experts to identify what kind of evidence in this vein could realistically influence regulation, and to push further in that direction. It would also be valuable to experiment with attack methods that are more effective against state-of-the-art models.
Specify which Gemini model was used (2.5 Flash, 2.5 Pro, etc.).
Go into more detail about how a model developer would operationalize the severity-based classification framework, and provide evidence that something like this isn't already being employed by OpenAI, GDM, etc.
In the results section, it's unclear how dangerous the retrosynthesis task you prompted the model with actually was. It may be that a genuinely harmful task is harder to jailbreak than a more benign one. I would have liked to see 4-5 variants of the task to see whether the attack success rate (ASR) varies, which could shed light on the previous question.
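To make the suggestion above concrete, here is a minimal sketch of how ASR could be tallied per task variant. It assumes an external judge has already labeled each model response as a successful jailbreak or a refusal; all names (`attack_success_rate`, the variant labels) are illustrative, not from the original project.

```python
# Hypothetical sketch: aggregating attack success rate (ASR) per task variant.
# Assumes each trial has already been judged as jailbroken (True) or not (False).
from collections import defaultdict

def attack_success_rate(results):
    """results: iterable of (variant_name, jailbroken) pairs -> {variant: ASR}."""
    tallies = defaultdict(lambda: [0, 0])  # variant -> [successes, trials]
    for variant, jailbroken in results:
        tallies[variant][0] += int(jailbroken)
        tallies[variant][1] += 1
    return {v: successes / trials for v, (successes, trials) in tallies.items()}

# Example with two hypothetical variants, five trials each:
results = (
    [("benign", True)] * 2 + [("benign", False)] * 3
    + [("sensitive", True)] * 1 + [("sensitive", False)] * 4
)
print(attack_success_rate(results))  # {'benign': 0.4, 'sensitive': 0.2}
```

Comparing ASR across such variants would directly test whether more harmful tasks are harder to elicit.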
Overall a well-framed project that is relevant to the CBRN threat model, but it needs more depth of experimentation and analysis. It is also hard to reproduce: no code is available, and no copy-pasteable prompts are provided (even redacted for infohazards).
Cite this work
@misc{
  title={(HckPrj) When Guardrails Fail: Dual-Use Misuse of AI in Retrosynthesis Through Iterative Refinement–Induced Self-Jailbreaking},
  author={Andrei Avram and Chido Chikomo and Hanadi Al-Samarrai and Tiwai Mhundwa and Wydad El Azzaoui Alaoui Belghiti and Isabel Barberá},
  date={2025-09-15},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}