Sep 15, 2025

When Guardrails Fail: Dual-Use Misuse of AI in Retrosynthesis Through Iterative Refinement–Induced Self-Jailbreaking

Andrei Avram, Chido Chikomo, Hanadi Al-Samarrai, Tiwai Mhundwa, Wydad El Azzaoui Alaoui Belghiti, Isabel Barberá

Artificial intelligence (AI) is transforming drug discovery, with large language models (LLMs) enabling rapid retrosynthesis planning. Yet these advances also pose dual-use risks, as adversarial prompting can redirect models toward generating harmful pathways. We evaluate Iterative Refinement Induced Self-Jailbreaking (IRIS), showing that while newer models resist more robustly, systems like GPT-4 can be induced to produce stepwise synthesis guidance. This underscores the fragility of guardrails and the urgency of continuous red-teaming. We argue that AI systems in drug discovery should be classified as high-risk under the EU AI Act and propose a severity-based governance framework to proportionately manage jailbreaks while safeguarding biomedical innovation.

Reviewer's Comments


This project does a strong job of demonstrating Iterative Refinement Induced Self-Jailbreaking (IRIS) in a biosecurity-relevant context, and I especially appreciated the clear, step-by-step example. The recommendations to classify AI in drug discovery as “high-risk” under the EU AI Act and to treat jailbreaks differently according to severity are also sensible.

That said, some aspects could be clarified. The proposed severity-based framework is an interesting idea, but it’s unclear how it would work in practice, since prompts themselves are difficult to regulate. In addition, the paper demonstrates jailbreaks on general-purpose models (ChatGPT, Gemini) but makes recommendations for models used in drug discovery. It would help to clarify whether narrow drug discovery models can also be jailbroken, and whether general-purpose models should themselves be considered within the “drug discovery AI” category. The project would also benefit from a stronger link between the proposed recommendations and actual risk reduction: how would these governance measures mitigate jailbreaks in practice?

Finally, it’s interesting that GPT-4 could be jailbroken but GPT-5 resisted. For future work it would be useful to test whether newer models consistently show greater resilience. If that trend holds, jailbreaks may become a less significant concern over time.

This project does a solid job of showing how dynamic jailbreaks might work and is commendably transparent about its limitations. The scope is fairly narrow, though, and the specific jailbreak technique explored doesn’t hold up against more recent models.

On the dual-use side, it’s good that the risks are considered, but simply hiding prompts and outputs doesn’t feel like a meaningful safeguard, since the results were easy to replicate. That suggests either the full prompts aren’t truly infohazardous (which seems likely), or stronger safeguards are needed.

The project would benefit from involving domain experts to assess how dangerous the outputs really are and comparing them with what’s already available online. A useful next step would be to engage policy-adjacent experts to identify what kind of evidence in this vein could realistically influence regulation, and to push further in that direction. It would also be valuable to experiment with attack methods that are more effective against state-of-the-art models.

Specify which Gemini model was used (2.5 flash, pro, etc.)

Go into more detail about how a model developer would operationalize the severity-based classification framework, and provide evidence that something like this isn't already being employed by OpenAI, GDM, etc.

In the results section, it's unclear how dangerous a retrosynthesis task you prompted the model with. It may be that something actually harmful would be harder to jailbreak than a more benign task. I would have liked to see 4-5 variants of the task to see whether the attack success rate (ASR) varies, which could shed light on my previous question.

Overall a well-framed project that is relevant to the CBRN threat model, but it needs more depth of experimentation and analysis. It is also hard to reproduce: no code is available, nor any copy-pasteable prompts (even redacted for infohazards).

Cite this work

@misc{
  title={(HckPrj) When Guardrails Fail: Dual-Use Misuse of AI in Retrosynthesis Through Iterative Refinement–Induced Self-Jailbreaking},
  author={Andrei Avram, Chido Chikomo, Hanadi Al-Samarrai, Tiwai Mhundwa, Wydad El Azzaoui Alaoui Belghiti, Isabel Barberá},
  date={2025-09-15},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}
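The abstract above describes detection as checking whether consecutive tokens follow a secret Markov chain over SHA-256 vocabulary partitions, with a Hoeffding bound on the false positive rate. The sketch below is a minimal illustration of that idea, not the authors' implementation: the partition rule (first digest byte mod the state count), the "soft cycle" transition structure (each state may move to itself or its successor), and all names and thresholds are assumptions.

```python
import hashlib
import math

def token_state(token: str, n_states: int = 7) -> int:
    # Assign each vocabulary token to a state by hashing it (assumed partition rule)
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return digest[0] % n_states

def allowed(prev: str, curr: str, n_states: int = 7) -> bool:
    # Assumed "soft cycle": from state s, only states s and (s+1) mod n are permitted
    s, t = token_state(prev, n_states), token_state(curr, n_states)
    return t in (s, (s + 1) % n_states)

def detect(tokens: list[str], n_states: int = 7, alpha: float = 1e-6) -> bool:
    # Watermarked text satisfies the chain on ~every transition; unwatermarked
    # text matches only by chance (~2/n_states of transitions under this rule).
    n = len(tokens) - 1
    hits = sum(allowed(a, b, n_states) for a, b in zip(tokens, tokens[1:]))
    p0 = 2 / n_states
    eps = hits / n - p0
    # Hoeffding bound: P(hits/n >= p0 + eps) <= exp(-2 * n * eps^2)
    p_value = math.exp(-2 * n * eps * eps) if eps > 0 else 1.0
    return p_value < alpha
```

Detection here is model-free and O(n), as the abstract claims for MCL: it needs only the secret partition/transition rule, not access to the generating model.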

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
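The nonce-sign-verify cycle described above can be sketched in a few lines. This is an illustrative model only, not the HardCaml design: it substitutes an HMAC over a shared secret for the asymmetric signature a real deployment would use (so the chip would hold only a public key), and all names, the lease length, and the API shape are assumptions.

```python
import hashlib
import hmac
import os

# Shared secret stands in for the external authority's signing key (assumption;
# the real design would use an asymmetric scheme such as Ed25519).
SECRET = b"authority-demo-key"
LEASE_SECONDS = 60.0

class SecurityBlock:
    """Toy model of the on-chip security block."""

    def __init__(self) -> None:
        self.nonce: bytes | None = None
        self.lease_expiry = 0.0

    def challenge(self) -> bytes:
        # Chip generates a fresh nonce for each authorization round
        self.nonce = os.urandom(16)
        return self.nonce

    def authorize(self, signature: bytes, now: float) -> bool:
        # Verify the authority's response; on success, grant a time-limited lease
        if self.nonce is None:
            return False
        expected = hmac.new(SECRET, self.nonce, hashlib.sha256).digest()
        if hmac.compare_digest(signature, expected):
            self.lease_expiry = now + LEASE_SECONDS
            self.nonce = None  # consume the challenge to prevent replay
            return True
        return False

    def gate(self, output: int, now: float) -> int:
        # Without a valid lease, outputs are gated to zero
        return output if now < self.lease_expiry else 0

def authority_sign(nonce: bytes) -> bytes:
    # External authority signs the chip's nonce
    return hmac.new(SECRET, nonce, hashlib.sha256).digest()
```

The time-limited lease is what makes this an off-switch rather than a one-time unlock: if the authority stops signing challenges, the accelerator's outputs fall back to zero once the current lease expires.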

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.