Sep 12, 2025 - Sep 14, 2025
Remote
CBRN AI Risks Research Sprint



This event focuses on developing defensive solutions against the misuse of frontier AI in CBRN (chemical, biological, radiological, and nuclear) contexts.
20 Days To Go

About the Research Sprint
CBRN risks represent some of the most consequential global challenges. With AI systems growing more capable, we must urgently assess their potential to:
Accelerate discovery of harmful biological or chemical agents.
Enable circumvention of safety protocols.
Provide dangerous knowledge at unprecedented scale.
At the same time, AI can strengthen defense by advancing evaluation pipelines, improving monitoring, and enabling policy-relevant foresight.
This research sprint is designed to channel global expertise into practical, collaborative projects that address these urgent questions.
The topic is intentionally broad: from model evaluations that test for dual-use misuse potential, to new monitoring frameworks, to policy-relevant prototypes. We aim to foster creativity while keeping projects grounded in AI safety and global security.
Tracks
1. AI Model Evaluations for CBRN Risks
This track focuses on establishing transparent, reproducible evaluation methods that can inform labs, researchers, and policymakers (an illustrative sketch follows the list below). Projects may:
Design benchmarks to assess CBRN misuse potential.
Build evaluation pipelines to test model outputs.
Compare model capabilities across the biological, chemical, radiological, and nuclear domains.
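To make the evaluation-pipeline idea concrete, here is a minimal, hedged sketch of a harness that runs a set of benign, dual-use-adjacent benchmark prompts through any model and records how often it refuses. The prompt format, the `query_model` callable, and the string-matching refusal heuristic are placeholder assumptions for illustration, not a prescribed methodology.

```python
import json
from typing import Callable, Dict, List

# Placeholder refusal markers; a real evaluation would use calibrated
# grading or human review rather than simple string matching.
REFUSAL_MARKERS = ["i can't help", "i cannot assist", "i won't provide"]

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic: does the response contain a refusal phrase?"""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def run_benchmark(prompts: List[Dict], query_model: Callable[[str], str]) -> Dict:
    """Run each benchmark prompt through the model and tally refusals.

    `prompts` is assumed to be a list of {"id": ..., "prompt": ...} records;
    `query_model` is any user-supplied function mapping a prompt string to a
    model response string (API client, local model, etc.).
    """
    results = []
    for item in prompts:
        response = query_model(item["prompt"])
        results.append({"id": item["id"], "refused": looks_like_refusal(response)})
    refusal_rate = sum(r["refused"] for r in results) / max(len(results), 1)
    return {"refusal_rate": refusal_rate, "results": results}

if __name__ == "__main__":
    # Toy benchmark and echo "model" so the harness runs end to end.
    toy_prompts = [{"id": 1, "prompt": "Describe lab safety basics."}]
    report = run_benchmark(toy_prompts, query_model=lambda p: "Sure: wear PPE.")
    print(json.dumps(report, indent=2))
```

A real pipeline would replace the heuristic with calibrated grading and report per-domain breakdowns rather than a single refusal rate.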
2. AI for Biosecurity
This track focuses on safe applications, safeguards, and biosecurity guardrails (an illustrative screening sketch follows the list below). Projects may:
Create detection systems that flag dangerous biological content, e.g., DNA synthesis screening.
Prototype safer interfaces for life-science LLMs with built-in guardrails and control mechanisms.
Develop governance tools that ensure AI supports biosafety while preventing dual-use misuse.
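As a purely illustrative example of a detection safeguard in this track, the sketch below screens a DNA sequence against a hypothetical, non-sensitive watchlist of subsequences and routes matches to human review. Production synthesis-screening systems rely on curated databases, homology search, and expert adjudication; the hard-coded patterns and exact matching here are placeholders only.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, non-sensitive placeholder patterns; a production screening
# system would query a curated, access-controlled database instead.
DEFAULT_WATCHLIST = ["ATGCGTACGTTAGC", "GGCCTTAAGGCCTA"]

@dataclass
class ScreeningHit:
    pattern: str
    position: int

def screen_sequence(sequence: str,
                    watchlist: Optional[List[str]] = None) -> List[ScreeningHit]:
    """Flag exact occurrences of watchlisted subsequences for human review.

    Exact substring matching is a deliberate simplification; real pipelines
    use alignment and homology search to catch variants and fragments.
    """
    hits: List[ScreeningHit] = []
    seq = sequence.upper()
    for pattern in (watchlist or DEFAULT_WATCHLIST):
        start = seq.find(pattern)
        while start != -1:
            hits.append(ScreeningHit(pattern=pattern, position=start))
            start = seq.find(pattern, start + 1)
    return hits

if __name__ == "__main__":
    order = "TTT" + DEFAULT_WATCHLIST[0] + "AAA"
    for hit in screen_sequence(order):
        print(f"Flagged {hit.pattern} at position {hit.position}; route to human review.")
```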
3. Chemical Safety & AI Misuse Prevention
This track focuses on identifying misuse pathways and building safeguards into model deployment (an illustrative gating sketch follows the list below). Projects here may:
Design monitoring systems for chemical safety in AI research labs, and for API access and model weights in chemistry-relevant domains.
Build gating mechanisms to reduce misuse in molecular generation workflows.
Develop governance protocols for regulating AI-assisted molecular design tools.
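As one hedged illustration of a gating mechanism, the sketch below wraps an arbitrary molecule generator and withholds candidates that fail simple release checks. The blocklist entry, size guard, and `gated_generate` interface are invented for this example; a real deployment would use structure-aware similarity screening and audited review queues.

```python
from typing import Callable, Iterable, List, Set

# Hypothetical, non-sensitive blocklist of exact SMILES strings; a real
# deployment would use structure-aware checks, not exact string matches.
BLOCKED_SMILES: Set[str] = {"C(=O)Cl"}  # generic stand-in entry

def is_released(smiles: str, blocked: Set[str]) -> bool:
    """Release a candidate only if it passes the (placeholder) checks."""
    if smiles in blocked:
        return False
    if len(smiles) > 200:  # arbitrary size guard, purely illustrative
        return False
    return True

def gated_generate(generator: Callable[[int], Iterable[str]],
                   n: int,
                   blocked: Set[str] = BLOCKED_SMILES) -> List[str]:
    """Wrap any molecule generator and filter its outputs before release.

    `generator` is assumed to be a user-supplied function that yields `n`
    candidate SMILES strings; blocked or oversized candidates are held
    for review instead of being returned.
    """
    released, held = [], []
    for smiles in generator(n):
        (released if is_released(smiles, blocked) else held).append(smiles)
    if held:
        print(f"{len(held)} candidate(s) held for human review.")
    return released

if __name__ == "__main__":
    toy_generator = lambda n: ["CCO", "C(=O)Cl", "CC(=O)O"][:n]
    print(gated_generate(toy_generator, 3))  # -> ['CCO', 'CC(=O)O']
```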
4. Radiological & Nuclear Risk Monitoring
This track focuses on security analysis and foresight (an illustrative monitoring sketch follows the list below). Projects here may:
Map scenarios where AI could accelerate nuclear risks.
Develop monitoring or forecasting tools for early-warning systems.
Explore how governance can reduce nuclear-related misuse.
Create guidance for aligning AI capabilities with nuclear nonproliferation norms.
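For a sense of what a lightweight early-warning prototype might look like, the sketch below aggregates a few normalized risk indicators into a weighted composite score and maps it to a coarse alert level. The indicator names, values, weights, and thresholds are entirely hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Indicator:
    name: str
    value: float   # normalized to [0, 1] by upstream monitoring
    weight: float  # analyst-assigned importance

# Hypothetical indicators and weights, purely illustrative; a real system
# would derive these from vetted data sources and expert elicitation.
EXAMPLE_INDICATORS: List[Indicator] = [
    Indicator("anomalous_procurement_reports", 0.2, 0.5),
    Indicator("model_capability_uplift_score", 0.6, 0.3),
    Indicator("open_source_tooling_diffusion", 0.4, 0.2),
]

def composite_risk(indicators: List[Indicator]) -> float:
    """Weighted average of normalized indicators, in [0, 1]."""
    total_weight = sum(i.weight for i in indicators)
    return sum(i.value * i.weight for i in indicators) / total_weight

def alert_level(score: float, thresholds: Optional[Dict[str, float]] = None) -> str:
    """Map a composite score to a coarse alert level."""
    thresholds = thresholds or {"elevated": 0.4, "high": 0.7}
    if score >= thresholds["high"]:
        return "high"
    if score >= thresholds["elevated"]:
        return "elevated"
    return "baseline"

if __name__ == "__main__":
    score = composite_risk(EXAMPLE_INDICATORS)
    print(f"composite risk {score:.2f} -> alert level: {alert_level(score)}")
```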
What you will do
Participants will:
Form teams or join existing groups.
Develop projects over an intensive sprint weekend.
Submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.
Present to a panel of mentors and judges.
What happens next
Winning and promising projects will be:
Published openly for the community.
Invited to continue development within Apart Lab.
Shared with relevant national evaluation efforts and safety researchers.
Why join?
Impact: Your work may directly shape the way AI safety is implemented in critical domains.
Mentorship: Experienced researchers and practitioners will guide projects throughout the research sprint.
Community: Collaborate with peers from across the globe who share the mission of making AI safer.
Visibility: Top projects will be featured on Apart Research’s platforms and connected to follow-up opportunities.
⚠️ Info-hazards & Risky Research: What If My Work Is Too Dangerous to Share?
We take information hazards seriously: true information that, in the wrong hands, increases the probability of a global catastrophic biological risk (GCBR).
If you suspect your work might constitute an information hazard, or involves capability insights or evaluations that could be misused (e.g., showing how to bypass safety systems or enhance CBRN misuse potential), we ask that you reach out to the organizing team before publishing. We can help you:
Evaluate whether your work poses a meaningful info-hazard
Decide what can be shared safely (e.g., in redacted or abstracted form)
Identify appropriate channels for private review or follow-up research
This sprint is about building for safety, and that includes how we communicate and share.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet
Check back later
Our Other Sprints
Jul 25, 2025 - Jul 27, 2025
Research
AI Safety x Physics Grand Challenge
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Jun 13, 2025
Research
Red Teaming A Narrow Path: ControlAI Policy Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.
