Mar 29, 2025
-
Mar 30, 2025
London & Online
AI Control Hackathon 2025




Join us in advancing the critical field of AI control through collaborative innovation. Together, we can develop more robust techniques to ensure AI systems remain safe and aligned, even as they become more capable.
00:00:00:00
00:00:00:00
00:00:00:00
00:00:00:00
Join us in advancing the critical field of AI control through collaborative innovation. Together, we can develop more robust techniques to ensure AI systems remain safe and aligned, even as they become more capable.
This event is ongoing.
This event has concluded.
Sign Ups
Entries
Overview
Resources
Schedule
Entries
Overview

The Control Hackathon brings together researchers, engineers, security professionals, and AI enthusiasts to tackle the emerging challenges in AI control - techniques that mitigate security risks from AI even when the AI itself might be trying to subvert them.
🔒 About the Hackathon
As AI systems become more capable and autonomous, ensuring robust control mechanisms becomes increasingly critical. This hackathon, co-organized by Redwood Research, and the UK AI Safety Institute, provides a practical extension to ControlConf (March 27-28), allowing participants to apply theoretical concepts to real-world control challenges.
💡 Challenge Tracks
Note: this list is not exhaustive and we will be posting a list of Open Problems in Control from Redwood Research here soon
1. ControlArena Challenges
Work with the ControlArena from UK AISI to develop, test, and evaluate control protocols in realistic deployment environments. Try to design robust protocols that can withstand adversarial agents or develop novel attack strategies that break existing protocols.
2. Control Protocol Design
Design and implement novel control protocols that effectively restrict AI systems from performing harmful actions while maintaining their usefulness for legitimate tasks. Focus areas include:
Runtime monitoring systems
Adversarial stress testing
Formal verification approaches
Bounded optimization techniques
3. Red Teaming & Vulnerability Research
Design and implement strategies to "red team" AI systems, attempting to subvert safety mechanisms while adhering to ethical guidelines.
Focus areas include:
Generating adversarial inputs to bypass monitoring systems.
Testing robustness against intentional misalignment or malicious behavior
👥 Who Should Participate
We welcome participants from diverse backgrounds, including:
Participants from ControlConf joining in person at LISA. Luma event info will follow up.
AI researchers and engineers
Information security professionals
ML and systems engineers
Students in related fields
Policy researchers interested in technical AI safety
No prior experience with AI control specifically is required, though familiarity with machine learning, programming, or information security is helpful.
Sign Ups
Entries
Overview
Resources
Schedule
Entries
Overview

The Control Hackathon brings together researchers, engineers, security professionals, and AI enthusiasts to tackle the emerging challenges in AI control - techniques that mitigate security risks from AI even when the AI itself might be trying to subvert them.
🔒 About the Hackathon
As AI systems become more capable and autonomous, ensuring robust control mechanisms becomes increasingly critical. This hackathon, co-organized by Redwood Research, and the UK AI Safety Institute, provides a practical extension to ControlConf (March 27-28), allowing participants to apply theoretical concepts to real-world control challenges.
💡 Challenge Tracks
Note: this list is not exhaustive and we will be posting a list of Open Problems in Control from Redwood Research here soon
1. ControlArena Challenges
Work with the ControlArena from UK AISI to develop, test, and evaluate control protocols in realistic deployment environments. Try to design robust protocols that can withstand adversarial agents or develop novel attack strategies that break existing protocols.
2. Control Protocol Design
Design and implement novel control protocols that effectively restrict AI systems from performing harmful actions while maintaining their usefulness for legitimate tasks. Focus areas include:
Runtime monitoring systems
Adversarial stress testing
Formal verification approaches
Bounded optimization techniques
3. Red Teaming & Vulnerability Research
Design and implement strategies to "red team" AI systems, attempting to subvert safety mechanisms while adhering to ethical guidelines.
Focus areas include:
Generating adversarial inputs to bypass monitoring systems.
Testing robustness against intentional misalignment or malicious behavior
👥 Who Should Participate
We welcome participants from diverse backgrounds, including:
Participants from ControlConf joining in person at LISA. Luma event info will follow up.
AI researchers and engineers
Information security professionals
ML and systems engineers
Students in related fields
Policy researchers interested in technical AI safety
No prior experience with AI control specifically is required, though familiarity with machine learning, programming, or information security is helpful.
Sign Ups
Entries
Overview
Resources
Schedule
Entries
Overview

The Control Hackathon brings together researchers, engineers, security professionals, and AI enthusiasts to tackle the emerging challenges in AI control - techniques that mitigate security risks from AI even when the AI itself might be trying to subvert them.
🔒 About the Hackathon
As AI systems become more capable and autonomous, ensuring robust control mechanisms becomes increasingly critical. This hackathon, co-organized by Redwood Research, and the UK AI Safety Institute, provides a practical extension to ControlConf (March 27-28), allowing participants to apply theoretical concepts to real-world control challenges.
💡 Challenge Tracks
Note: this list is not exhaustive and we will be posting a list of Open Problems in Control from Redwood Research here soon
1. ControlArena Challenges
Work with the ControlArena from UK AISI to develop, test, and evaluate control protocols in realistic deployment environments. Try to design robust protocols that can withstand adversarial agents or develop novel attack strategies that break existing protocols.
2. Control Protocol Design
Design and implement novel control protocols that effectively restrict AI systems from performing harmful actions while maintaining their usefulness for legitimate tasks. Focus areas include:
Runtime monitoring systems
Adversarial stress testing
Formal verification approaches
Bounded optimization techniques
3. Red Teaming & Vulnerability Research
Design and implement strategies to "red team" AI systems, attempting to subvert safety mechanisms while adhering to ethical guidelines.
Focus areas include:
Generating adversarial inputs to bypass monitoring systems.
Testing robustness against intentional misalignment or malicious behavior
👥 Who Should Participate
We welcome participants from diverse backgrounds, including:
Participants from ControlConf joining in person at LISA. Luma event info will follow up.
AI researchers and engineers
Information security professionals
ML and systems engineers
Students in related fields
Policy researchers interested in technical AI safety
No prior experience with AI control specifically is required, though familiarity with machine learning, programming, or information security is helpful.
Sign Ups
Entries
Overview
Resources
Schedule
Entries
Overview

The Control Hackathon brings together researchers, engineers, security professionals, and AI enthusiasts to tackle the emerging challenges in AI control - techniques that mitigate security risks from AI even when the AI itself might be trying to subvert them.
🔒 About the Hackathon
As AI systems become more capable and autonomous, ensuring robust control mechanisms becomes increasingly critical. This hackathon, co-organized by Redwood Research, and the UK AI Safety Institute, provides a practical extension to ControlConf (March 27-28), allowing participants to apply theoretical concepts to real-world control challenges.
💡 Challenge Tracks
Note: this list is not exhaustive and we will be posting a list of Open Problems in Control from Redwood Research here soon
1. ControlArena Challenges
Work with the ControlArena from UK AISI to develop, test, and evaluate control protocols in realistic deployment environments. Try to design robust protocols that can withstand adversarial agents or develop novel attack strategies that break existing protocols.
2. Control Protocol Design
Design and implement novel control protocols that effectively restrict AI systems from performing harmful actions while maintaining their usefulness for legitimate tasks. Focus areas include:
Runtime monitoring systems
Adversarial stress testing
Formal verification approaches
Bounded optimization techniques
3. Red Teaming & Vulnerability Research
Design and implement strategies to "red team" AI systems, attempting to subvert safety mechanisms while adhering to ethical guidelines.
Focus areas include:
Generating adversarial inputs to bypass monitoring systems.
Testing robustness against intentional misalignment or malicious behavior
👥 Who Should Participate
We welcome participants from diverse backgrounds, including:
Participants from ControlConf joining in person at LISA. Luma event info will follow up.
AI researchers and engineers
Information security professionals
ML and systems engineers
Students in related fields
Policy researchers interested in technical AI safety
No prior experience with AI control specifically is required, though familiarity with machine learning, programming, or information security is helpful.
Speakers & Collaborators
Tyler Tracy
Organiser
Experienced software engineer turned AI control researcher focused on developing technical safeguards for advanced AI systems with expertise in monitoring frameworks
Aryan Bhatt
Judge
AI safety researcher at Redwood with background in alignment research and quantitative analysis, combining formal methods expertise with practical control systems.
Adam Kaufman
Judge
Harvard physics and CS student researching AI control at Redwood, bringing interdisciplinary perspective to containment mechanisms through academic rigor.
Jaime Raldua
Organiser
Jaime has 8+ years of experience in the tech industry. Started his own data consultancy to support EA Organisations and currently works at Apart Research as Research Engineer.
Pascal Wieler
Judge
Serial tech entrepreneur and YC founder, building AI agents for process automation at Basepilot. Previously, he founded a robotics startup, worked on self-driving at Mercedes-Benz and did research in CMU’s Machine Learning Department
Jason Schreiber
Organizer and Judge
Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Buck Shlegeris
Speaker and Judge
Former MIRI researcher turned CEO of Redwood Research, pioneering technical AI safety and control mechanisms while bridging fundamental research with practical applications.
Jasmine Wang
Judge
Jasmine is the control empirics team lead at UK AISI. Previously, she built and exited one of the first startups to use GPT-3, and worked at Partnership on AI and OpenAI.
Rogan Inglis
Judge
Rogan is a Senior Research Engineer at AISI and is was the co-founder of Intelistyle- an AI Styling Solution
Archana Vaidheeswaran
Organizer
Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Speakers & Collaborators

Tyler Tracy
Organiser
Experienced software engineer turned AI control researcher focused on developing technical safeguards for advanced AI systems with expertise in monitoring frameworks

Aryan Bhatt
Judge
AI safety researcher at Redwood with background in alignment research and quantitative analysis, combining formal methods expertise with practical control systems.

Adam Kaufman
Judge
Harvard physics and CS student researching AI control at Redwood, bringing interdisciplinary perspective to containment mechanisms through academic rigor.

Jaime Raldua
Organiser
Jaime has 8+ years of experience in the tech industry. Started his own data consultancy to support EA Organisations and currently works at Apart Research as Research Engineer.

Pascal Wieler
Judge
Serial tech entrepreneur and YC founder, building AI agents for process automation at Basepilot. Previously, he founded a robotics startup, worked on self-driving at Mercedes-Benz and did research in CMU’s Machine Learning Department

Jason Schreiber
Organizer and Judge
Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.

Buck Shlegeris
Speaker and Judge
Former MIRI researcher turned CEO of Redwood Research, pioneering technical AI safety and control mechanisms while bridging fundamental research with practical applications.

Jasmine Wang
Judge
Jasmine is the control empirics team lead at UK AISI. Previously, she built and exited one of the first startups to use GPT-3, and worked at Partnership on AI and OpenAI.

Rogan Inglis
Judge
Rogan is a Senior Research Engineer at AISI and is was the co-founder of Intelistyle- an AI Styling Solution
Registered Jam Sites
Register A Location
Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Registered Jam Sites
Register A Location
Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Apr 25, 2025
-
Apr 27, 2025
Research
Economics of Transformative AI: Research Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible
Sign Up
Sign Up
Sign Up
Apr 25, 2025
-
Apr 26, 2025
Research
Berkeley AI Policy Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible
Sign Up
Sign Up
Sign Up

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events