Mar 20, 2026

-

Mar 22, 2026

Remote

AI Control Hackathon 2026

Join us in advancing the critical field of AI control through collaborative innovation. Together, we can develop more robust techniques to ensure AI systems remain safe and aligned, even as they become more capable.

07

Days To Go

412

Sign Ups

0

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

The AI Control Hackathon brings together researchers, engineers, security professionals, and AI safety enthusiasts to tackle one of the most pressing open problems: how do we keep AI systems safe when they might be actively working against us?

This is the second edition of the hackathon, co-organized with Redwood Research (the group that founded the field of AI control). Over three days, participants will develop novel control protocols, build evaluation tools, and stress-test existing safety measures using infrastructure like ControlArena and SHADE-Arena.

Top teams get

First Place gets fully funded trip to ControlConf

Invitation to The Apart Fellowship*

$2,000 in cash prizes

🥇 1st Place

$1,000

🥈 2nd Place

$500

🥉 3rd Place

$300

🏅 4th Place

$100

🏅 5th Place

$100

*Not all winning projects may receive a fellowship invitation. An invitation might also depend on other external factors like the Apart Core team's capacity to onboard new research teams.

What is AI Control?

AI control is the subfield of AI safety focused on maintaining human oversight over AI systems, even when those systems might be misaligned or deliberately trying to subvert safety measures. Unlike alignment research, which tries to make models inherently safe, control research starts from a harder assumption: the model may already be working against you, and you need protocols that keep it contained anyway.

Why this hackathon?

AI systems are becoming autonomous agents that write code, manage infrastructure, and conduct research with decreasing human oversight. At the same time, frontier models are exhibiting concerning behaviors: alignment faking, scheming against shutdown, strategic underperformance on safety evaluations, and emergent misalignment from standard training. These are not hypothetical risks. They have been observed in current models, documented by multiple research groups, and appear to increase with scale.

The infrastructure to do this work now exists, but defenses are not keeping up with attacks. Adaptive adversaries can reduce protocol safety from 99% to under 60%. Prompt injections bypass most monitoring at over 90% success rates. The field needs more researchers building stronger defenses. That is what this hackathon is for.

Hackathon Tracks

1. ControlArena Challenges

Work directly with ControlArena, the open-source library built by UK AISI and Redwood Research for running AI control experiments. The platform provides 14 evaluation settings, each pairing a legitimate task with a covert malicious objective, along with pre-built monitors, scorers, and analysis tools. Participants can create new evaluation environments, improve existing settings, develop novel monitors, or benchmark new approaches against standardized metrics. Current settings cover code sabotage, infrastructure compromise, training data poisoning, evaluation manipulation, financial system exploitation, and more.

2. Control Protocol Design

Design and implement novel control protocols that restrict AI systems from performing harmful actions while preserving useful capabilities. The best current protocols combine multiple techniques: trusted monitoring catches obvious attacks, resampling handles suspicious actions, and deferring on critical operations provides a safety net. This track covers runtime monitoring, factored cognition, adaptive deployment, chain-of-thought monitoring, scalable oversight mechanisms like debate, and shutdown and containment protocols for autonomous agents.

3. Red Teaming & Vulnerability Research

Identify and exploit weaknesses in existing control protocols and safety measures. Current protocols have known blind spots: adaptive attacks on monitors, feedback exploitation, steganographic collusion between agents, and strategic attack timing all represent real threats. This track covers adversarial inputs, attacks on monitoring systems, sandbagging strategies, jailbreaking techniques targeting reasoning models, and demonstrations of alignment faking or emergent misalignment in controlled settings. The goal is to push the frontier of what attackers can do, so defenders can build stronger systems.

Who should participate?

  • AI safety researchers working on alignment, interpretability, control, or governance

  • Machine learning engineers and researchers

  • Security researchers and red teamers

  • Software engineers with experience in agent systems or infrastructure

  • Policy researchers interested in technical safety measures

  • Students and early-career researchers exploring AI safety

No prior AI safety experience is required. The Resources tab has a curated reading list organized by track. Teams typically form at the start of the event.

What you will do

Over three days, you will:

  1. Form teams and choose a challenge track

  2. Research and scope a specific problem in AI control using the provided resources

  3. Build a project developing tools, protocols, evaluations, or attack demonstrations

  4. Submit a research report (PDF) documenting your approach, results, and implications

  5. Present your work to judges from leading AI safety organizations

What happens next

After the hackathon, all submitted projects will be reviewed by expert judges. Top projects receive prizes and are featured on the Apart Research website. Outstanding work may be invited for further development, publication, or presentation at upcoming events. ControlConf Berkeley 2026 (April 18-19) takes place shortly after this hackathon, making this an ideal time to develop and showcase new control research.

Why join?

Apart Research has organized 55+ research sprints with 6,000+ participants across 200+ global locations. Our hackathons produce real research output: published papers, new research collaborations, and contributions to open-source safety tools used across the field.

412

Sign Ups

0

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

The AI Control Hackathon brings together researchers, engineers, security professionals, and AI safety enthusiasts to tackle one of the most pressing open problems: how do we keep AI systems safe when they might be actively working against us?

This is the second edition of the hackathon, co-organized with Redwood Research (the group that founded the field of AI control). Over three days, participants will develop novel control protocols, build evaluation tools, and stress-test existing safety measures using infrastructure like ControlArena and SHADE-Arena.

Top teams get

First Place gets fully funded trip to ControlConf

Invitation to The Apart Fellowship*

$2,000 in cash prizes

🥇 1st Place

$1,000

🥈 2nd Place

$500

🥉 3rd Place

$300

🏅 4th Place

$100

🏅 5th Place

$100

*Not all winning projects may receive a fellowship invitation. An invitation might also depend on other external factors like the Apart Core team's capacity to onboard new research teams.

What is AI Control?

AI control is the subfield of AI safety focused on maintaining human oversight over AI systems, even when those systems might be misaligned or deliberately trying to subvert safety measures. Unlike alignment research, which tries to make models inherently safe, control research starts from a harder assumption: the model may already be working against you, and you need protocols that keep it contained anyway.

Why this hackathon?

AI systems are becoming autonomous agents that write code, manage infrastructure, and conduct research with decreasing human oversight. At the same time, frontier models are exhibiting concerning behaviors: alignment faking, scheming against shutdown, strategic underperformance on safety evaluations, and emergent misalignment from standard training. These are not hypothetical risks. They have been observed in current models, documented by multiple research groups, and appear to increase with scale.

The infrastructure to do this work now exists, but defenses are not keeping up with attacks. Adaptive adversaries can reduce protocol safety from 99% to under 60%. Prompt injections bypass most monitoring at over 90% success rates. The field needs more researchers building stronger defenses. That is what this hackathon is for.

Hackathon Tracks

1. ControlArena Challenges

Work directly with ControlArena, the open-source library built by UK AISI and Redwood Research for running AI control experiments. The platform provides 14 evaluation settings, each pairing a legitimate task with a covert malicious objective, along with pre-built monitors, scorers, and analysis tools. Participants can create new evaluation environments, improve existing settings, develop novel monitors, or benchmark new approaches against standardized metrics. Current settings cover code sabotage, infrastructure compromise, training data poisoning, evaluation manipulation, financial system exploitation, and more.

2. Control Protocol Design

Design and implement novel control protocols that restrict AI systems from performing harmful actions while preserving useful capabilities. The best current protocols combine multiple techniques: trusted monitoring catches obvious attacks, resampling handles suspicious actions, and deferring on critical operations provides a safety net. This track covers runtime monitoring, factored cognition, adaptive deployment, chain-of-thought monitoring, scalable oversight mechanisms like debate, and shutdown and containment protocols for autonomous agents.

3. Red Teaming & Vulnerability Research

Identify and exploit weaknesses in existing control protocols and safety measures. Current protocols have known blind spots: adaptive attacks on monitors, feedback exploitation, steganographic collusion between agents, and strategic attack timing all represent real threats. This track covers adversarial inputs, attacks on monitoring systems, sandbagging strategies, jailbreaking techniques targeting reasoning models, and demonstrations of alignment faking or emergent misalignment in controlled settings. The goal is to push the frontier of what attackers can do, so defenders can build stronger systems.

Who should participate?

  • AI safety researchers working on alignment, interpretability, control, or governance

  • Machine learning engineers and researchers

  • Security researchers and red teamers

  • Software engineers with experience in agent systems or infrastructure

  • Policy researchers interested in technical safety measures

  • Students and early-career researchers exploring AI safety

No prior AI safety experience is required. The Resources tab has a curated reading list organized by track. Teams typically form at the start of the event.

What you will do

Over three days, you will:

  1. Form teams and choose a challenge track

  2. Research and scope a specific problem in AI control using the provided resources

  3. Build a project developing tools, protocols, evaluations, or attack demonstrations

  4. Submit a research report (PDF) documenting your approach, results, and implications

  5. Present your work to judges from leading AI safety organizations

What happens next

After the hackathon, all submitted projects will be reviewed by expert judges. Top projects receive prizes and are featured on the Apart Research website. Outstanding work may be invited for further development, publication, or presentation at upcoming events. ControlConf Berkeley 2026 (April 18-19) takes place shortly after this hackathon, making this an ideal time to develop and showcase new control research.

Why join?

Apart Research has organized 55+ research sprints with 6,000+ participants across 200+ global locations. Our hackathons produce real research output: published papers, new research collaborations, and contributions to open-source safety tools used across the field.

412

Sign Ups

0

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

The AI Control Hackathon brings together researchers, engineers, security professionals, and AI safety enthusiasts to tackle one of the most pressing open problems: how do we keep AI systems safe when they might be actively working against us?

This is the second edition of the hackathon, co-organized with Redwood Research (the group that founded the field of AI control). Over three days, participants will develop novel control protocols, build evaluation tools, and stress-test existing safety measures using infrastructure like ControlArena and SHADE-Arena.

Top teams get

First Place gets fully funded trip to ControlConf

Invitation to The Apart Fellowship*

$2,000 in cash prizes

🥇 1st Place

$1,000

🥈 2nd Place

$500

🥉 3rd Place

$300

🏅 4th Place

$100

🏅 5th Place

$100

*Not all winning projects may receive a fellowship invitation. An invitation might also depend on other external factors like the Apart Core team's capacity to onboard new research teams.

What is AI Control?

AI control is the subfield of AI safety focused on maintaining human oversight over AI systems, even when those systems might be misaligned or deliberately trying to subvert safety measures. Unlike alignment research, which tries to make models inherently safe, control research starts from a harder assumption: the model may already be working against you, and you need protocols that keep it contained anyway.

Why this hackathon?

AI systems are becoming autonomous agents that write code, manage infrastructure, and conduct research with decreasing human oversight. At the same time, frontier models are exhibiting concerning behaviors: alignment faking, scheming against shutdown, strategic underperformance on safety evaluations, and emergent misalignment from standard training. These are not hypothetical risks. They have been observed in current models, documented by multiple research groups, and appear to increase with scale.

The infrastructure to do this work now exists, but defenses are not keeping up with attacks. Adaptive adversaries can reduce protocol safety from 99% to under 60%. Prompt injections bypass most monitoring at over 90% success rates. The field needs more researchers building stronger defenses. That is what this hackathon is for.

Hackathon Tracks

1. ControlArena Challenges

Work directly with ControlArena, the open-source library built by UK AISI and Redwood Research for running AI control experiments. The platform provides 14 evaluation settings, each pairing a legitimate task with a covert malicious objective, along with pre-built monitors, scorers, and analysis tools. Participants can create new evaluation environments, improve existing settings, develop novel monitors, or benchmark new approaches against standardized metrics. Current settings cover code sabotage, infrastructure compromise, training data poisoning, evaluation manipulation, financial system exploitation, and more.

2. Control Protocol Design

Design and implement novel control protocols that restrict AI systems from performing harmful actions while preserving useful capabilities. The best current protocols combine multiple techniques: trusted monitoring catches obvious attacks, resampling handles suspicious actions, and deferring on critical operations provides a safety net. This track covers runtime monitoring, factored cognition, adaptive deployment, chain-of-thought monitoring, scalable oversight mechanisms like debate, and shutdown and containment protocols for autonomous agents.

3. Red Teaming & Vulnerability Research

Identify and exploit weaknesses in existing control protocols and safety measures. Current protocols have known blind spots: adaptive attacks on monitors, feedback exploitation, steganographic collusion between agents, and strategic attack timing all represent real threats. This track covers adversarial inputs, attacks on monitoring systems, sandbagging strategies, jailbreaking techniques targeting reasoning models, and demonstrations of alignment faking or emergent misalignment in controlled settings. The goal is to push the frontier of what attackers can do, so defenders can build stronger systems.

Who should participate?

  • AI safety researchers working on alignment, interpretability, control, or governance

  • Machine learning engineers and researchers

  • Security researchers and red teamers

  • Software engineers with experience in agent systems or infrastructure

  • Policy researchers interested in technical safety measures

  • Students and early-career researchers exploring AI safety

No prior AI safety experience is required. The Resources tab has a curated reading list organized by track. Teams typically form at the start of the event.

What you will do

Over three days, you will:

  1. Form teams and choose a challenge track

  2. Research and scope a specific problem in AI control using the provided resources

  3. Build a project developing tools, protocols, evaluations, or attack demonstrations

  4. Submit a research report (PDF) documenting your approach, results, and implications

  5. Present your work to judges from leading AI safety organizations

What happens next

After the hackathon, all submitted projects will be reviewed by expert judges. Top projects receive prizes and are featured on the Apart Research website. Outstanding work may be invited for further development, publication, or presentation at upcoming events. ControlConf Berkeley 2026 (April 18-19) takes place shortly after this hackathon, making this an ideal time to develop and showcase new control research.

Why join?

Apart Research has organized 55+ research sprints with 6,000+ participants across 200+ global locations. Our hackathons produce real research output: published papers, new research collaborations, and contributions to open-source safety tools used across the field.

412

Sign Ups

0

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

The AI Control Hackathon brings together researchers, engineers, security professionals, and AI safety enthusiasts to tackle one of the most pressing open problems: how do we keep AI systems safe when they might be actively working against us?

This is the second edition of the hackathon, co-organized with Redwood Research (the group that founded the field of AI control). Over three days, participants will develop novel control protocols, build evaluation tools, and stress-test existing safety measures using infrastructure like ControlArena and SHADE-Arena.

Top teams get

First Place gets fully funded trip to ControlConf

Invitation to The Apart Fellowship*

$2,000 in cash prizes

🥇 1st Place

$1,000

🥈 2nd Place

$500

🥉 3rd Place

$300

🏅 4th Place

$100

🏅 5th Place

$100

*Not all winning projects may receive a fellowship invitation. An invitation might also depend on other external factors like the Apart Core team's capacity to onboard new research teams.

What is AI Control?

AI control is the subfield of AI safety focused on maintaining human oversight over AI systems, even when those systems might be misaligned or deliberately trying to subvert safety measures. Unlike alignment research, which tries to make models inherently safe, control research starts from a harder assumption: the model may already be working against you, and you need protocols that keep it contained anyway.

Why this hackathon?

AI systems are becoming autonomous agents that write code, manage infrastructure, and conduct research with decreasing human oversight. At the same time, frontier models are exhibiting concerning behaviors: alignment faking, scheming against shutdown, strategic underperformance on safety evaluations, and emergent misalignment from standard training. These are not hypothetical risks. They have been observed in current models, documented by multiple research groups, and appear to increase with scale.

The infrastructure to do this work now exists, but defenses are not keeping up with attacks. Adaptive adversaries can reduce protocol safety from 99% to under 60%. Prompt injections bypass most monitoring at over 90% success rates. The field needs more researchers building stronger defenses. That is what this hackathon is for.

Hackathon Tracks

1. ControlArena Challenges

Work directly with ControlArena, the open-source library built by UK AISI and Redwood Research for running AI control experiments. The platform provides 14 evaluation settings, each pairing a legitimate task with a covert malicious objective, along with pre-built monitors, scorers, and analysis tools. Participants can create new evaluation environments, improve existing settings, develop novel monitors, or benchmark new approaches against standardized metrics. Current settings cover code sabotage, infrastructure compromise, training data poisoning, evaluation manipulation, financial system exploitation, and more.

2. Control Protocol Design

Design and implement novel control protocols that restrict AI systems from performing harmful actions while preserving useful capabilities. The best current protocols combine multiple techniques: trusted monitoring catches obvious attacks, resampling handles suspicious actions, and deferring on critical operations provides a safety net. This track covers runtime monitoring, factored cognition, adaptive deployment, chain-of-thought monitoring, scalable oversight mechanisms like debate, and shutdown and containment protocols for autonomous agents.

3. Red Teaming & Vulnerability Research

Identify and exploit weaknesses in existing control protocols and safety measures. Current protocols have known blind spots: adaptive attacks on monitors, feedback exploitation, steganographic collusion between agents, and strategic attack timing all represent real threats. This track covers adversarial inputs, attacks on monitoring systems, sandbagging strategies, jailbreaking techniques targeting reasoning models, and demonstrations of alignment faking or emergent misalignment in controlled settings. The goal is to push the frontier of what attackers can do, so defenders can build stronger systems.

Who should participate?

  • AI safety researchers working on alignment, interpretability, control, or governance

  • Machine learning engineers and researchers

  • Security researchers and red teamers

  • Software engineers with experience in agent systems or infrastructure

  • Policy researchers interested in technical safety measures

  • Students and early-career researchers exploring AI safety

No prior AI safety experience is required. The Resources tab has a curated reading list organized by track. Teams typically form at the start of the event.

What you will do

Over three days, you will:

  1. Form teams and choose a challenge track

  2. Research and scope a specific problem in AI control using the provided resources

  3. Build a project developing tools, protocols, evaluations, or attack demonstrations

  4. Submit a research report (PDF) documenting your approach, results, and implications

  5. Present your work to judges from leading AI safety organizations

What happens next

After the hackathon, all submitted projects will be reviewed by expert judges. Top projects receive prizes and are featured on the Apart Research website. Outstanding work may be invited for further development, publication, or presentation at upcoming events. ControlConf Berkeley 2026 (April 18-19) takes place shortly after this hackathon, making this an ideal time to develop and showcase new control research.

Why join?

Apart Research has organized 55+ research sprints with 6,000+ participants across 200+ global locations. Our hackathons produce real research output: published papers, new research collaborations, and contributions to open-source safety tools used across the field.

Speakers & Collaborators

Aryan Bhatt

Keynote Speaker and Judge

I Safety Researcher at Redwood Research. Co-author of the "Ctrl-Z" paper on controlling AI agents via resampling and contributor to BashArena, a control setting for studying AI safety in security-critical environments.

Buck Shlegeris

Judge and Co-organizer

CEO of Redwood Research, a nonprofit focused on mitigating risks from advanced AI. A leading voice on AI control: safely deploying AI systems that might be misaligned through monitoring, oversight, and containment.

Tyler Tracy

Speaker, Judge and Co-organizer

Experienced software engineer turned AI control researcher at Redwood Research, focused on developing technical safeguards for advanced AI systems with expertise in monitoring frameworks. Mentors at CBAI and Pivotal Research on control evaluations and protocol design.

Jaime Raldua

Organizer

CEO of Apart Research. Joined through one of Apart's early AI safety hackathons and has since led the organization as Research Engineer, CTO, and now CEO.

Jason Hoelscher-Obermaier

Judge and Organizer

Director of Research at Apart Research. Ph.D. in physics from Oxford. Focuses on AI safety evaluations, interpretability, and alignment.

Kamil Alaa

Organizer

Operations at Apart Research, managing research sprints and hackathons.

Akshay Iyer

Judge and Co-organizer

CS and Entrepreneurship at Columbia University, IIT Bombay alum. Research experience in neuromorphic engineering and federated learning. Apart Research judge and internal organizer.

Rogan Inglis

Speaker and Judge

Senior Research Engineer on the Control team at the UK AI Security Institute (AISI). 7+ years of experience building and evaluating AI safety systems.

Jai Dhyani

Speaker and Judge

Builder of Luthien Proxy at Luthien Research, bringing Redwood-style AI control to real deployments. Co-author of RE-Bench (ICML 2025) with Elizabeth Barnes at METR.

James Lucassen

Speaker and Judge

Researcher at Redwood Research working on AI control. His work spans AI benchmarking, LLM behavior analysis, and safety evaluations. Previously published on false belief detection in language models and AI performance prediction.

Ram Potham

Speaker and Judge

Astra Fellow at Redwood Research, focused on mitigating loss-of-control risk. His agent safety research was accepted for an oral presentation at the ICML 2025 Technical AI Governance workshop.

Nick Kuhn

Speaker and Judge

Postdoctoral researcher at the University of Oxford, collaborating with Dominic Joyce. PhD from Stanford University. Research focuses on moduli spaces in algebraic geometry. Previously at the University of Oslo and Max Planck Institute for Mathematics in Bonn.

Myles Heller

Speaker and Judge

Freelance programmer contributing to AI model safety evaluations at EquiStamp, designing benchmarks and evaluation tasks for AI agents. Volunteer developer at AISafety.info, building a RAG chatbot, and contributor to AI-Plans.com.

Justin Shenk

Judge

Independent AI researcher based in Berlin, contracting at Redwood Research and METR. 13+ years of experience in machine learning, focused on AI safety evaluation and alignment auditing.

Francis Rhys Ward

Judge

Researcher at Safe & Trusted AI and Future of Life Institute. Co-author of "CTRL-ALT-DECEIT" on sabotage evaluations (NeurIPS 2025 Spotlight), finding that sandbagging is much harder to detect than code sabotage.

Mia Hopman

Judge

Member of Technical Staff at Apollo Research, working on alignment, control, and security of large language models. Published on evaluating scheming propensity in LLM agents.

Anshul Khandelwal

Judge

Researcher at Redwood Research. His work spans alignment drift, power-seeking evaluations, and white-box control methods.

Alex van Grootel

Judge

Fellow at the Future of Life Foundation with 10 years of experience applying AI to materials science and decision-making. Previously a Product Manager at Microsoft (Fabric/AI copilots) and Data Scientist Team Lead at Citrine Informatics. MS from MIT.

Cameron Tice

Judge

Co-Director and Co-Founder of Geodesic Research, a technical AI safety organization at the University of Cambridge. Marshall Scholar and former Apart Research Fellow.

Theo Ryzhenkov

Judge

AI Safety Research Engineer and Founding Engineer at Palisade Research. NeurIPS 2025 author on detecting sandbagging in language models. SPAR, ARENA, and AISF alumni.

Pablo Bernabeu Perez

Judge

AI Safety Researcher and Research Engineer at the Barcelona Supercomputing Center. Co-author of "CoT Red-Handed" on stress-testing chain-of-thought monitoring at LASR Labs. Research Mentor at SPAR and Algoverse, with work on debate protocols for AI control accepted at NeurIPS 2025.

Akash Kundu

Judge

Research Collaborator at FAR.AI working on AI safety, LLM evaluations, and multilingual model safety. ICLR 2025 Oral presentation recipient and upcoming Cooperative AI Research Fellow.

Ramanpreet Singh Khinda

Judge

Staff Software Engineer at LinkedIn. IEEE Senior Member, Google I/O Codelab author, and experienced hackathon judge with expertise in mobile AI systems.

Hugo Delahousse

Judge

Software Engineer at Charcoal, an AI retrieval startup backed by investors from Cursor, Notion, and Front. Previously at Front for 5+ years building developer tools. 10 years of software engineering experience with a focus on AI automation and workflow security.

Daniel Reuter

Judge

MATS Scholar working on compute verification mechanisms. Previously a Pre-Doctoral Research Fellow at Opportunity Insights and Research Assistant to Joseph Stiglitz at Columbia. BA in Economics and Mathematics from Columbia.

Jord Nguyen

Judge

Research Manager at antoan.ai (AI Safety Vietnam) and Founder of the Hanoi AI Safety Network. Former Apart Research Fellow and Non-trivial facilitator.

Kaushik Prabhakar

Judge

Research Fellow at Apart Research. His research spans AI safety and alignment, with work on LLM evaluations. ARENA Cohort 4 alumni.

Helios Lyons

Judge

Embedded Software Engineer at Disguise and former Research Fellow at Apart Research. His AI safety work focuses on detecting sycophancy and dark patterns in language model activations.

Nguyen Nhat Minh

Judge

AI Engineer at Mobifone IT Center and researcher at BKAI Lab, Hanoi University of Science and Technology. Published at FSE and TechDebt on vulnerability detection using graph neural networks.

Speakers & Collaborators

Aryan Bhatt

Keynote Speaker and Judge

I Safety Researcher at Redwood Research. Co-author of the "Ctrl-Z" paper on controlling AI agents via resampling and contributor to BashArena, a control setting for studying AI safety in security-critical environments.

Buck Shlegeris

Judge and Co-organizer

CEO of Redwood Research, a nonprofit focused on mitigating risks from advanced AI. A leading voice on AI control: safely deploying AI systems that might be misaligned through monitoring, oversight, and containment.

Tyler Tracy

Speaker, Judge and Co-organizer

Experienced software engineer turned AI control researcher at Redwood Research, focused on developing technical safeguards for advanced AI systems with expertise in monitoring frameworks. Mentors at CBAI and Pivotal Research on control evaluations and protocol design.

Jaime Raldua

Organizer

CEO of Apart Research. Joined through one of Apart's early AI safety hackathons and has since led the organization as Research Engineer, CTO, and now CEO.

Jason Hoelscher-Obermaier

Judge and Organizer

Director of Research at Apart Research. Ph.D. in physics from Oxford. Focuses on AI safety evaluations, interpretability, and alignment.

Kamil Alaa

Organizer

Operations at Apart Research, managing research sprints and hackathons.

Akshay Iyer

Judge and Co-organizer

CS and Entrepreneurship at Columbia University, IIT Bombay alum. Research experience in neuromorphic engineering and federated learning. Apart Research judge and internal organizer.

Rogan Inglis

Speaker and Judge

Senior Research Engineer on the Control team at the UK AI Security Institute (AISI). 7+ years of experience building and evaluating AI safety systems.

Jai Dhyani

Speaker and Judge

Builder of Luthien Proxy at Luthien Research, bringing Redwood-style AI control to real deployments. Co-author of RE-Bench (ICML 2025) with Elizabeth Barnes at METR.

James Lucassen

Speaker and Judge

Researcher at Redwood Research working on AI control. His work spans AI benchmarking, LLM behavior analysis, and safety evaluations. Previously published on false belief detection in language models and AI performance prediction.

Ram Potham

Speaker and Judge

Astra Fellow at Redwood Research, focused on mitigating loss-of-control risk. His agent safety research was accepted for an oral presentation at the ICML 2025 Technical AI Governance workshop.

Nick Kuhn

Speaker and Judge

Postdoctoral researcher at the University of Oxford, collaborating with Dominic Joyce. PhD from Stanford University. Research focuses on moduli spaces in algebraic geometry. Previously at the University of Oslo and Max Planck Institute for Mathematics in Bonn.

Myles Heller

Speaker and Judge

Freelance programmer contributing to AI model safety evaluations at EquiStamp, designing benchmarks and evaluation tasks for AI agents. Volunteer developer at AISafety.info, building a RAG chatbot, and contributor to AI-Plans.com.

Justin Shenk

Judge

Independent AI researcher based in Berlin, contracting at Redwood Research and METR. 13+ years of experience in machine learning, focused on AI safety evaluation and alignment auditing.

Francis Rhys Ward

Judge

Researcher at Safe & Trusted AI and Future of Life Institute. Co-author of "CTRL-ALT-DECEIT" on sabotage evaluations (NeurIPS 2025 Spotlight), finding that sandbagging is much harder to detect than code sabotage.

Mia Hopman

Judge

Member of Technical Staff at Apollo Research, working on alignment, control, and security of large language models. Published on evaluating scheming propensity in LLM agents.

Anshul Khandelwal

Judge

Researcher at Redwood Research. His work spans alignment drift, power-seeking evaluations, and white-box control methods.

Alex van Grootel

Judge

Fellow at the Future of Life Foundation with 10 years of experience applying AI to materials science and decision-making. Previously a Product Manager at Microsoft (Fabric/AI copilots) and Data Scientist Team Lead at Citrine Informatics. MS from MIT.

Cameron Tice

Judge

Co-Director and Co-Founder of Geodesic Research, a technical AI safety organization at the University of Cambridge. Marshall Scholar and former Apart Research Fellow.

Theo Ryzhenkov

Judge

AI Safety Research Engineer and Founding Engineer at Palisade Research. NeurIPS 2025 author on detecting sandbagging in language models. SPAR, ARENA, and AISF alumni.

Pablo Bernabeu Perez

Judge

AI Safety Researcher and Research Engineer at the Barcelona Supercomputing Center. Co-author of "CoT Red-Handed" on stress-testing chain-of-thought monitoring at LASR Labs. Research Mentor at SPAR and Algoverse, with work on debate protocols for AI control accepted at NeurIPS 2025.

Akash Kundu

Judge

Research Collaborator at FAR.AI working on AI safety, LLM evaluations, and multilingual model safety. ICLR 2025 Oral presentation recipient and upcoming Cooperative AI Research Fellow.

Ramanpreet Singh Khinda

Judge

Staff Software Engineer at LinkedIn. IEEE Senior Member, Google I/O Codelab author, and experienced hackathon judge with expertise in mobile AI systems.

Hugo Delahousse

Judge

Software Engineer at Charcoal, an AI retrieval startup backed by investors from Cursor, Notion, and Front. Previously at Front for 5+ years building developer tools. 10 years of software engineering experience with a focus on AI automation and workflow security.

Daniel Reuter

Judge

MATS Scholar working on compute verification mechanisms. Previously a Pre-Doctoral Research Fellow at Opportunity Insights and Research Assistant to Joseph Stiglitz at Columbia. BA in Economics and Mathematics from Columbia.

Jord Nguyen

Judge

Research Manager at antoan.ai (AI Safety Vietnam) and Founder of the Hanoi AI Safety Network. Former Apart Research Fellow and Non-trivial facilitator.

Kaushik Prabhakar

Judge

Research Fellow at Apart Research. His research spans AI safety and alignment, with work on LLM evaluations. ARENA Cohort 4 alumni.

Helios Lyons

Judge

Embedded Software Engineer at Disguise and former Research Fellow at Apart Research. His AI safety work focuses on detecting sycophancy and dark patterns in language model activations.

Nguyen Nhat Minh

Judge

AI Engineer at Mobifone IT Center and researcher at BKAI Lab, Hanoi University of Science and Technology. Published at FSE and TechDebt on vulnerability detection using graph neural networks.

Registered Local Sites

Register A Location

Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.