Sep 12, 2025

-

Sep 14, 2025

Remote

CBRN AI Risks Research Sprint

This event focuses on developing defensive solutions against the misuse of frontier AI in CBRN contexts.

01

Days To Go

01

Days To Go

01

Days To Go

01

Days To Go

This event focuses on developing defensive solutions against the misuse of frontier AI in CBRN contexts.

This event is ongoing.

This event has concluded.

147

Sign Ups

0

Entries

Overview

Resources

Guidelines

Entries

Overview

Arrow

Chemical, Biological, Radiological, and Nuclear (CBRN) risks represent some of the most consequential global challenges. Frontier AI systems could accelerate discovery of dangerous agents, bypass safety protocols, and spread hazardous knowledge at scale.

That’s why we’re launching the CBRN × AI Risks Research Sprint.

In this sprint, you will:

  • Build evaluation pipelines that test AI models for CBRN misuse risks

  • Create safeguards and biosecurity tools

  • Develope monitoring systems for chemical, radiological, or nuclear risks

  • Write policy briefs and governance proposals that improve safety

You will work in teams over one weekend and submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

💰 $2,000 in prizes will be awarded to the top projects.

Winning projects will be published openly, shared with safety researchers, and invited to continue development within Apart Lab

Tracks

1. AI Model Evaluations for CBRN Risks

This track focuses on establishing transparent, reproducible evaluation methods that can inform labs, researchers, and policymakers. Projects may:

  • Design benchmarks to assess CBRN misuse potential.

  • Build evaluation pipelines to test model outputs.

  • Compare model capabilities across domains (biology, chemistry, radiology, nuclear).

2. AI for Biosecurity

This track focuses on safe applications, safeguards and biosecurity guardrails. Projects may:

  • Create detection systems that flag dangerous biological content, e.g. DNA synthesis screening.

  • Prototype safer interfaces for life-science LLMs with built in guardrails and control mechanisms.

  • Develop governance tools that ensure AI supports biosafety while preventing dual-use misuse.

3. Chemical Safety & AI Misuse Prevention

This track focuses on identifying misuse pathways and building safeguards into model deployment. Projects here may:

  • Design monitoring systems for chemical safety in AI research labs and for APIs and model weights in chemistry-relevant domains.

  • Build gating mechanisms to reduce misuse in molecular generation workflows.

  • Develop governance protocols for regulating AI-assisted molecular design tools.

4. Radiological & Nuclear Risk Monitoring

This track focuses on security analysis and foresight. Projects here may:

  • Map scenarios where AI could accelerate nuclear risks.

  • Develop monitoring or forecasting tools for early-warning systems.

  • Explore how governance can reduce nuclear-related misuse.

  • Create guidance for aligning AI capabilities with nuclear nonproliferation norms.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive sprint weekend.

  • Submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

  • Present to a panel of mentors and judges.

What happens next

Winning and promising projects will be:

  • Published openly for the community.

  • Invited to continue development within Apart Lab.

  • Shared with relevant national evaluation efforts and safety researchers.

Why join?

  • Impact: Your work may directly shape the way AI safety is implemented in critical domains.

  • Mentorship: Experienced researchers and practitioners will guide projects throughout the research sprint.

  • Community: Collaborate with peers from across the globe who share the mission of making AI safer.

  • Visibility: Top projects will be featured on Apart Research’s platforms and connected to follow-up opportunities.

⚠️ Info-hazards & Risky Research: What If My Work Is Too Dangerous to Share?

We take information hazards seriously (true information that increases the probability of a GCBR if it falls into the wrong hands).

If you suspect your work might constitute an information hazard, or involves capability insights or evaluations that could be misused (e.g., showing how to bypass safety systems or enhance CBRN misuse potential) we ask that you reach out to the organizing team before publishing. We can help you:

  • Evaluate whether your work poses a meaningful info-hazard

  • Decide what can be shared safely (e.g. redacted or abstracted)

  • Identify appropriate channels for private review or follow-up research

This sprint is about building for safety and that includes how we communicate and share.

147

Sign Ups

0

Entries

Overview

Resources

Guidelines

Entries

Overview

Arrow

Chemical, Biological, Radiological, and Nuclear (CBRN) risks represent some of the most consequential global challenges. Frontier AI systems could accelerate discovery of dangerous agents, bypass safety protocols, and spread hazardous knowledge at scale.

That’s why we’re launching the CBRN × AI Risks Research Sprint.

In this sprint, you will:

  • Build evaluation pipelines that test AI models for CBRN misuse risks

  • Create safeguards and biosecurity tools

  • Develope monitoring systems for chemical, radiological, or nuclear risks

  • Write policy briefs and governance proposals that improve safety

You will work in teams over one weekend and submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

💰 $2,000 in prizes will be awarded to the top projects.

Winning projects will be published openly, shared with safety researchers, and invited to continue development within Apart Lab

Tracks

1. AI Model Evaluations for CBRN Risks

This track focuses on establishing transparent, reproducible evaluation methods that can inform labs, researchers, and policymakers. Projects may:

  • Design benchmarks to assess CBRN misuse potential.

  • Build evaluation pipelines to test model outputs.

  • Compare model capabilities across domains (biology, chemistry, radiology, nuclear).

2. AI for Biosecurity

This track focuses on safe applications, safeguards and biosecurity guardrails. Projects may:

  • Create detection systems that flag dangerous biological content, e.g. DNA synthesis screening.

  • Prototype safer interfaces for life-science LLMs with built in guardrails and control mechanisms.

  • Develop governance tools that ensure AI supports biosafety while preventing dual-use misuse.

3. Chemical Safety & AI Misuse Prevention

This track focuses on identifying misuse pathways and building safeguards into model deployment. Projects here may:

  • Design monitoring systems for chemical safety in AI research labs and for APIs and model weights in chemistry-relevant domains.

  • Build gating mechanisms to reduce misuse in molecular generation workflows.

  • Develop governance protocols for regulating AI-assisted molecular design tools.

4. Radiological & Nuclear Risk Monitoring

This track focuses on security analysis and foresight. Projects here may:

  • Map scenarios where AI could accelerate nuclear risks.

  • Develop monitoring or forecasting tools for early-warning systems.

  • Explore how governance can reduce nuclear-related misuse.

  • Create guidance for aligning AI capabilities with nuclear nonproliferation norms.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive sprint weekend.

  • Submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

  • Present to a panel of mentors and judges.

What happens next

Winning and promising projects will be:

  • Published openly for the community.

  • Invited to continue development within Apart Lab.

  • Shared with relevant national evaluation efforts and safety researchers.

Why join?

  • Impact: Your work may directly shape the way AI safety is implemented in critical domains.

  • Mentorship: Experienced researchers and practitioners will guide projects throughout the research sprint.

  • Community: Collaborate with peers from across the globe who share the mission of making AI safer.

  • Visibility: Top projects will be featured on Apart Research’s platforms and connected to follow-up opportunities.

⚠️ Info-hazards & Risky Research: What If My Work Is Too Dangerous to Share?

We take information hazards seriously (true information that increases the probability of a GCBR if it falls into the wrong hands).

If you suspect your work might constitute an information hazard, or involves capability insights or evaluations that could be misused (e.g., showing how to bypass safety systems or enhance CBRN misuse potential) we ask that you reach out to the organizing team before publishing. We can help you:

  • Evaluate whether your work poses a meaningful info-hazard

  • Decide what can be shared safely (e.g. redacted or abstracted)

  • Identify appropriate channels for private review or follow-up research

This sprint is about building for safety and that includes how we communicate and share.

147

Sign Ups

0

Entries

Overview

Resources

Guidelines

Entries

Overview

Arrow

Chemical, Biological, Radiological, and Nuclear (CBRN) risks represent some of the most consequential global challenges. Frontier AI systems could accelerate discovery of dangerous agents, bypass safety protocols, and spread hazardous knowledge at scale.

That’s why we’re launching the CBRN × AI Risks Research Sprint.

In this sprint, you will:

  • Build evaluation pipelines that test AI models for CBRN misuse risks

  • Create safeguards and biosecurity tools

  • Develope monitoring systems for chemical, radiological, or nuclear risks

  • Write policy briefs and governance proposals that improve safety

You will work in teams over one weekend and submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

💰 $2,000 in prizes will be awarded to the top projects.

Winning projects will be published openly, shared with safety researchers, and invited to continue development within Apart Lab

Tracks

1. AI Model Evaluations for CBRN Risks

This track focuses on establishing transparent, reproducible evaluation methods that can inform labs, researchers, and policymakers. Projects may:

  • Design benchmarks to assess CBRN misuse potential.

  • Build evaluation pipelines to test model outputs.

  • Compare model capabilities across domains (biology, chemistry, radiology, nuclear).

2. AI for Biosecurity

This track focuses on safe applications, safeguards and biosecurity guardrails. Projects may:

  • Create detection systems that flag dangerous biological content, e.g. DNA synthesis screening.

  • Prototype safer interfaces for life-science LLMs with built in guardrails and control mechanisms.

  • Develop governance tools that ensure AI supports biosafety while preventing dual-use misuse.

3. Chemical Safety & AI Misuse Prevention

This track focuses on identifying misuse pathways and building safeguards into model deployment. Projects here may:

  • Design monitoring systems for chemical safety in AI research labs and for APIs and model weights in chemistry-relevant domains.

  • Build gating mechanisms to reduce misuse in molecular generation workflows.

  • Develop governance protocols for regulating AI-assisted molecular design tools.

4. Radiological & Nuclear Risk Monitoring

This track focuses on security analysis and foresight. Projects here may:

  • Map scenarios where AI could accelerate nuclear risks.

  • Develop monitoring or forecasting tools for early-warning systems.

  • Explore how governance can reduce nuclear-related misuse.

  • Create guidance for aligning AI capabilities with nuclear nonproliferation norms.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive sprint weekend.

  • Submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

  • Present to a panel of mentors and judges.

What happens next

Winning and promising projects will be:

  • Published openly for the community.

  • Invited to continue development within Apart Lab.

  • Shared with relevant national evaluation efforts and safety researchers.

Why join?

  • Impact: Your work may directly shape the way AI safety is implemented in critical domains.

  • Mentorship: Experienced researchers and practitioners will guide projects throughout the research sprint.

  • Community: Collaborate with peers from across the globe who share the mission of making AI safer.

  • Visibility: Top projects will be featured on Apart Research’s platforms and connected to follow-up opportunities.

⚠️ Info-hazards & Risky Research: What If My Work Is Too Dangerous to Share?

We take information hazards seriously (true information that increases the probability of a GCBR if it falls into the wrong hands).

If you suspect your work might constitute an information hazard, or involves capability insights or evaluations that could be misused (e.g., showing how to bypass safety systems or enhance CBRN misuse potential) we ask that you reach out to the organizing team before publishing. We can help you:

  • Evaluate whether your work poses a meaningful info-hazard

  • Decide what can be shared safely (e.g. redacted or abstracted)

  • Identify appropriate channels for private review or follow-up research

This sprint is about building for safety and that includes how we communicate and share.

147

Sign Ups

0

Entries

Overview

Resources

Guidelines

Entries

Overview

Arrow

Chemical, Biological, Radiological, and Nuclear (CBRN) risks represent some of the most consequential global challenges. Frontier AI systems could accelerate discovery of dangerous agents, bypass safety protocols, and spread hazardous knowledge at scale.

That’s why we’re launching the CBRN × AI Risks Research Sprint.

In this sprint, you will:

  • Build evaluation pipelines that test AI models for CBRN misuse risks

  • Create safeguards and biosecurity tools

  • Develope monitoring systems for chemical, radiological, or nuclear risks

  • Write policy briefs and governance proposals that improve safety

You will work in teams over one weekend and submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

💰 $2,000 in prizes will be awarded to the top projects.

Winning projects will be published openly, shared with safety researchers, and invited to continue development within Apart Lab

Tracks

1. AI Model Evaluations for CBRN Risks

This track focuses on establishing transparent, reproducible evaluation methods that can inform labs, researchers, and policymakers. Projects may:

  • Design benchmarks to assess CBRN misuse potential.

  • Build evaluation pipelines to test model outputs.

  • Compare model capabilities across domains (biology, chemistry, radiology, nuclear).

2. AI for Biosecurity

This track focuses on safe applications, safeguards and biosecurity guardrails. Projects may:

  • Create detection systems that flag dangerous biological content, e.g. DNA synthesis screening.

  • Prototype safer interfaces for life-science LLMs with built in guardrails and control mechanisms.

  • Develop governance tools that ensure AI supports biosafety while preventing dual-use misuse.

3. Chemical Safety & AI Misuse Prevention

This track focuses on identifying misuse pathways and building safeguards into model deployment. Projects here may:

  • Design monitoring systems for chemical safety in AI research labs and for APIs and model weights in chemistry-relevant domains.

  • Build gating mechanisms to reduce misuse in molecular generation workflows.

  • Develop governance protocols for regulating AI-assisted molecular design tools.

4. Radiological & Nuclear Risk Monitoring

This track focuses on security analysis and foresight. Projects here may:

  • Map scenarios where AI could accelerate nuclear risks.

  • Develop monitoring or forecasting tools for early-warning systems.

  • Explore how governance can reduce nuclear-related misuse.

  • Create guidance for aligning AI capabilities with nuclear nonproliferation norms.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive sprint weekend.

  • Submit open-source prototypes, evaluations, policy briefs, research reports, or scenario/forecast analyses addressing CBRN-related AI safety.

  • Present to a panel of mentors and judges.

What happens next

Winning and promising projects will be:

  • Published openly for the community.

  • Invited to continue development within Apart Lab.

  • Shared with relevant national evaluation efforts and safety researchers.

Why join?

  • Impact: Your work may directly shape the way AI safety is implemented in critical domains.

  • Mentorship: Experienced researchers and practitioners will guide projects throughout the research sprint.

  • Community: Collaborate with peers from across the globe who share the mission of making AI safer.

  • Visibility: Top projects will be featured on Apart Research’s platforms and connected to follow-up opportunities.

⚠️ Info-hazards & Risky Research: What If My Work Is Too Dangerous to Share?

We take information hazards seriously (true information that increases the probability of a GCBR if it falls into the wrong hands).

If you suspect your work might constitute an information hazard, or involves capability insights or evaluations that could be misused (e.g., showing how to bypass safety systems or enhance CBRN misuse potential) we ask that you reach out to the organizing team before publishing. We can help you:

  • Evaluate whether your work poses a meaningful info-hazard

  • Decide what can be shared safely (e.g. redacted or abstracted)

  • Identify appropriate channels for private review or follow-up research

This sprint is about building for safety and that includes how we communicate and share.

Registered Local Sites

Register A Location

Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.

Registered Local Sites

Register A Location

Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.