Jun 13, 2025 | Online & In-Person
Red Teaming A Narrow Path: Control AI Policy Sprint




Help stress-test the policies that could prevent human extinction from AI - before they reach lawmakers' desks.
This event has concluded.
Overview

Join us for a critical one-day sprint to red-team the policy framework that could determine humanity's future relationship with artificial intelligence. Control AI has developed "A Narrow Path" - the first comprehensive plan to address extinction risks from artificial superintelligence (ASI). These policies are already being actively pushed to lawmakers in the UK and US through Control AI's Direct Institutional Plan (DIP), making this red-teaming exercise directly relevant to real-world policy implementation.
Your mission: Help strengthen the Phase 0 policies by identifying weaknesses, implementation challenges, and gaps before they reach legislators' desks.
The Challenge We Face
While most AI developments are beneficial, the rise of artificial superintelligence (ASI) threatens humanity with extinction. We do not know how to control AI systems vastly more powerful than us; should attempts to build superintelligence succeed, we would risk extinction as a species.
Current AI development is proceeding without adequate safety measures, with reasonable estimates indicating that it could cost only tens to hundreds of billions of dollars to create artificial superintelligence. Meanwhile, the very people developing advanced AI are warning about these risks.
A Narrow Path: The Solution
Control AI has found no other plan that comprehensively tries to address the issue, so they made one. "A Narrow Path" is structured in three phases:
Phase 0: Safety - Immediate policies to prevent ASI development for 20 years
Phase 1: Stability - International oversight that doesn't collapse over time
Phase 2: Flourishing - Building foundations for safe transformative AI under human control
Real-World Impact
This isn't theoretical policy research. Control AI launched a pilot campaign focused on UK lawmakers that validated their approach. In less than three months, over 20 cross-party UK parliamentarians publicly supported their campaign. They succeeded in gaining support in 1 out of every 3 cases when briefing lawmakers.
Recent polling shows that a large majority (74%) of Brits support placing the UK's AI Safety Institute on a statutory footing, and 16 British lawmakers have signed a statement calling for new AI laws targeted specifically at "superintelligent" AI systems.
Focus of This Sprint
We're exclusively red teaming Phase 0 policies because:
These are the policies actively being pushed through the DIP
They are the necessary first step - stopping ASI development is the precondition for everything else
Real-world implementation is imminent, making your feedback immediately actionable
Speakers & Collaborators
Adam Shimi
Organiser
Epistemology researcher at Control AI developing policy frameworks to address superintelligence risks, with expertise in technical writing and AI safety theory.
Max Wyrga
Organiser
Communications analyst at Control AI focused on preventing artificial superintelligence extinction threats through strategic messaging and public engagement.
Tolga Bilge
Judge
Policy researcher and superforecaster at Control AI, co-author of "A Narrow Path," with expertise in AI risk assessment and coalition building for international AI governance.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, so you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet
Check back later
Our Other Sprints
May 30 - Jun 1, 2025
Research
Apart x Martian Mechanistic Router Interpretability Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Apr 25 - Apr 27, 2025
Research
Economics of Transformative AI
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.