Jan 30 – Feb 1, 2026
Remote
The Technical AI Governance Challenge



This hackathon brings together 500+ builders to prototype verification tools, compliance systems, and coordination infrastructure that enable international cooperation on frontier AI safety.
Overview

Frontier AI labs are training systems that could pose international risks. Countries want agreements on safe development. Labs need ways to demonstrate compliance without exposing competitive advantages.
The technical infrastructure to make this possible doesn't exist yet. We have policy frameworks without verification systems. International agreements without monitoring tools. Compliance requirements without practical implementation paths.
This hackathon focuses on building that missing infrastructure. You'll have one intensive weekend to create verification protocols, monitoring tools, privacy-preserving compliance proofs, or coordination systems that could enable enforceable international cooperation on AI safety.
Top teams get:
💰 $2,000 in cash prizes, plus a fast track to Lucid Computing, MIRI TGT, or Apart Research.
Fast tracks include at least one interview with leadership from Lucid Computing or with researchers at MIRI TGT or Apart Research.
What is International Technical AI Governance?
International technical AI governance refers to the practical infrastructure needed to verify, monitor, and enforce international agreements on AI development. This includes:
Hardware verification that tracks compute resources used in training frontier models
Attestation systems that cryptographically prove model properties without revealing weights
Privacy-preserving compliance proofs using zero-knowledge cryptography or trusted execution environments
Risk threshold frameworks that define when capabilities trigger safety requirements
International coordination mechanisms that enable verification between parties without full trust
Dual-use detection for identifying dangerous capabilities in AI research before deployment
Labs are training increasingly capable systems. Some pose risks that cross borders. International cooperation requires technical mechanisms to verify compliance without exposing sensitive information or creating security vulnerabilities.
Why this hackathon?
The Problem
AI systems keep getting more capable; our governance infrastructure doesn't. Multiple labs are training models above the EU's 10²⁵ FLOP threshold for systemic risk. Some frontier models now require ASL-3 safeguards. Decentralized training makes compute monitoring harder to implement.
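To make that threshold concrete, here is a back-of-the-envelope sketch in Python using the common ~6·N·D approximation for dense transformer training compute (N parameters, D training tokens). The model sizes are hypothetical illustrations, not figures for any specific lab or model.

```python
# Back-of-the-envelope check against the EU AI Act's 10^25 FLOP systemic-risk
# threshold, using the common ~6*N*D approximation for dense transformer
# training compute (N parameters, D training tokens). The model sizes below
# are hypothetical illustrations, not figures for any specific model.

EU_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

for n_params, n_tokens in [(7e9, 2e12), (7e10, 1.5e13), (1e12, 3e13)]:
    flop = estimated_training_flop(n_params, n_tokens)
    status = "ABOVE" if flop >= EU_SYSTEMIC_RISK_THRESHOLD_FLOP else "below"
    print(f"{n_params:.0e} params x {n_tokens:.0e} tokens ~ {flop:.1e} FLOP ({status} threshold)")
```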
The EU AI Act took effect in August 2024, but practical compliance tools remain scarce. Export controls on AI chips lack verification mechanisms. Responsible scaling policies define thresholds without automated monitoring. Labs coordinate through voluntary frameworks that lack enforcement infrastructure.
Most governance proposals assume technical capabilities that don't exist yet. They require compute tracking systems not deployed at scale, attestation mechanisms not integrated into hardware, and verification protocols not tested between adversarial parties. We're building policy without infrastructure.
Why International Technical AI Governance Matters Now
International cooperation depends on verifiable compliance. If agreements can't be verified without exposing sensitive IP or creating security risks, countries won't sign them. Labs won't share information that compromises their competitive position.
We're massively under-investing in governance infrastructure. Most effort goes into capabilities research or post-deployment harm mitigation. Far less into building the verification systems, monitoring tools, and coordination mechanisms that enable international agreements.
Better technical infrastructure could give us agreements that labs can verify without exposing model weights, monitoring systems that respect privacy while enabling compliance checks, and coordination mechanisms that work between parties without full trust. It could create the practical foundation needed for international cooperation on frontier AI safety.
Hackathon Tracks
1. Hardware Verification & Attestation
Design hardware verification protocols for tracking compute resources in datacenter environments
Build attestation systems using trusted execution environments (TEEs) that prove model properties without exposing weights
Create compute monitoring tools that detect training runs above regulatory thresholds
Develop chip-level security mechanisms for remote verification of AI hardware properties
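As a starting point for this track, here is a minimal sketch of a tamper-evident compute telemetry log: each record commits to the previous one via SHA-256, so an auditor who holds the latest digest can detect retroactive edits. The record fields are illustrative assumptions, not a real datacenter schema.

```python
# Minimal sketch of a tamper-evident compute telemetry log. Each record is
# chained to the previous one via SHA-256, so edits to history break the chain.
# Field names and record structure are illustrative only.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class LogEntry:
    timestamp: str
    gpu_hours: float
    prev_digest: str

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append(log: list[LogEntry], timestamp: str, gpu_hours: float) -> None:
    prev = log[-1].digest() if log else "GENESIS"
    log.append(LogEntry(timestamp, gpu_hours, prev))

def verify(log: list[LogEntry]) -> bool:
    """Recompute the chain and confirm every entry commits to its predecessor."""
    for i in range(1, len(log)):
        if log[i].prev_digest != log[i - 1].digest():
            return False
    return True

log: list[LogEntry] = []
append(log, "2026-01-30T00:00Z", 1_200.0)
append(log, "2026-01-31T00:00Z", 1_350.5)
print(verify(log))                 # True
log[0].gpu_hours = 10.0            # simulate tampering with an earlier record
print(verify(log))                 # False
```

A real deployment would also anchor the latest digest somewhere the operator cannot rewrite, for example with a regulator or inside a TEE-signed attestation.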
2. Compliance Infrastructure & Privacy-Preserving Proofs
Build zero-knowledge proof systems that demonstrate regulatory compliance without revealing sensitive information
Create privacy-preserving audit mechanisms for federated learning or distributed training
Develop compliance automation tools for EU AI Act requirements, GPAI reporting, or safety frameworks
Design cryptographic protocols that enable verification between parties without full trust
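One simple building block for this track is commit-and-reveal: the lab publishes a hash commitment to its safety report up front and only opens it at audit time. This is a sketch of the commitment layer only, not a zero-knowledge proof, and every field name below is hypothetical.

```python
# Minimal commit-and-reveal sketch: a lab commits to its safety report up front
# and later reveals it to a regulator, who checks nothing was rewritten.
# Not zero-knowledge: selective disclosure of individual fields would need ZK
# circuits or TEE attestation on top of this. Field names are hypothetical.
import hashlib
import json
import secrets

def commit(report: dict) -> tuple[str, str]:
    """Return (commitment, nonce). The random nonce keeps the commitment hiding."""
    nonce = secrets.token_hex(16)
    payload = json.dumps(report, sort_keys=True).encode() + nonce.encode()
    return hashlib.sha256(payload).hexdigest(), nonce

def verify_commitment(report: dict, nonce: str, commitment: str) -> bool:
    payload = json.dumps(report, sort_keys=True).encode() + nonce.encode()
    return hashlib.sha256(payload).hexdigest() == commitment

# Lab side: publish the commitment at training time, keep the report private.
report = {"training_flop": 8.4e24, "eval_suite": "v3", "incident_count": 0}
commitment, nonce = commit(report)

# Audit time: the lab reveals the report and nonce; the regulator verifies.
print(verify_commitment(report, nonce, commitment))   # True
```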
3. Risk Thresholds & Compute Verification
Build risk assessment frameworks that map compute thresholds to capability levels
Create tools for harmonizing ASL/CCL terminology across different lab safety frameworks
Develop capability evaluation systems for dual-use risks (CBRN, cyber, autonomous AI R&D)
Design monitoring systems for responsible scaling policies and deployment safeguards
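Below is a tiny sketch of the kind of threshold-to-safeguard mapping this track asks for. The tier names, cut-offs, and safeguard lists are placeholders, except the 10²⁵ FLOP line, which mirrors the EU AI Act's systemic-risk threshold.

```python
# Sketch of mapping estimated training compute to hypothetical risk tiers and
# the safeguards they trigger. Tier names, cut-offs, and safeguards below are
# placeholders, not any lab's or regulator's actual thresholds, except the
# 1e25 FLOP figure, which mirrors the EU AI Act's systemic-risk line.

RISK_TIERS = [
    # (minimum estimated training FLOP, tier label, required safeguards)
    (1e26, "tier-3", ["third-party evals", "deployment hold pending review"]),
    (1e25, "tier-2", ["systemic-risk reporting", "dangerous-capability evals"]),
    (1e23, "tier-1", ["internal evals", "incident reporting"]),
    (0.0,  "tier-0", ["baseline documentation"]),
]

def classify(training_flop: float) -> tuple[str, list[str]]:
    for floor, tier, safeguards in RISK_TIERS:
        if training_flop >= floor:
            return tier, safeguards
    return "tier-0", ["baseline documentation"]

print(classify(6 * 7e10 * 1.5e13))   # ~6.3e24 FLOP -> tier-1
print(classify(6 * 1e12 * 3e13))     # ~1.8e26 FLOP -> tier-3
```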
4. International Verification & Coordination
Build coordination infrastructure for the International Network of AI Safety Institutes
Create verification mechanisms inspired by IAEA frameworks adapted for AI governance
Develop systems for cross-border information sharing that respect national security concerns
Design tools for implementing global AI safety standards and red lines
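One concrete angle for this track: cross-border sharing of structured incident summaries can be made verifiable without exchanging raw data if each institute signs its summaries and peers verify against a published key. A minimal sketch using the third-party cryptography package (Ed25519); the summary schema is invented for illustration.

```python
# Minimal sketch: safety institutes exchange signed, structured incident
# summaries instead of raw data. Requires the third-party `cryptography`
# package. The summary schema below is a made-up example.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_summary(private_key: Ed25519PrivateKey, summary: dict) -> bytes:
    return private_key.sign(json.dumps(summary, sort_keys=True).encode())

def verify_summary(public_key: Ed25519PublicKey, summary: dict, signature: bytes) -> bool:
    try:
        public_key.verify(signature, json.dumps(summary, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

# Institute A signs; institute B verifies with A's published public key.
key_a = Ed25519PrivateKey.generate()
summary = {"quarter": "2026-Q1", "severity_3_incidents": 2, "evals_run": 14}
sig = sign_summary(key_a, summary)
print(verify_summary(key_a.public_key(), summary, sig))                       # True
print(verify_summary(key_a.public_key(), {**summary, "evals_run": 99}, sig))  # False
```

In practice the public keys would need to be exchanged and pinned out of band, which is exactly the coordination infrastructure this track is about.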
5. Research Governance & Dual-Use Detection
Build detection systems for identifying dangerous capabilities in pre-publication research
Create frameworks for assessing dual-use risks in biological AI models or other specialized domains
Develop pre-publication review tools that scale across research communities
Design capability-based threat assessment systems for frontier AI research
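As a deliberately simple baseline for this track, here is a toy triage filter that flags abstracts overlapping a hypothetical dual-use watch-list for human review. A real system would need trained classifiers and domain experts; this only illustrates the triage interface.

```python
# Toy baseline for pre-publication screening: flag abstracts whose term overlap
# with a hypothetical dual-use watch-list exceeds a threshold, so human
# reviewers can triage. Illustrative only; not an operational watch-list.
import re

WATCH_TERMS = {
    "gain of function", "toxin synthesis", "autonomous exploitation",
    "evasion of safety filters",
}

def screen(abstract: str, threshold: int = 1) -> tuple[bool, list[str]]:
    text = re.sub(r"\s+", " ", abstract.lower())
    hits = sorted(term for term in WATCH_TERMS if term in text)
    return len(hits) >= threshold, hits

flagged, hits = screen("We study autonomous exploitation of web services by LLM agents.")
print(flagged, hits)   # True ['autonomous exploitation']
```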
Who should participate?
This hackathon is for people who want to build solutions to technological risk using technology itself.
You should participate if you're an engineer, researcher, or developer who wants to work on consequential problems and build practical verification, monitoring, or compliance infrastructure.
No prior governance research experience required. We provide resources, mentors, and starter templates.
What you will do
Participants will:
Form teams or join existing groups.
Develop projects over an intensive hackathon weekend.
Submit open-source verification tools, compliance systems, monitoring infrastructure, or empirical research advancing international AI governance.
Please note: Due to the high volume of submissions, we cannot guarantee written feedback for every participant, although all projects will be evaluated.
What happens next
Winning and promising projects will be:
Awarded $2,000 in cash prizes
Fast-tracked for interviews with Lucid Computing, MIRI TGT, or Apart Research
Published openly for the community
Invited to continue development within the Apart Fellowship
Shared with relevant safety researchers and policymakers
Why join?
Work on consequential problems: Build infrastructure that could enable international cooperation on frontier AI safety
Learn from experts: Get guidance from AI safety researchers and technical governance practitioners throughout the weekend
Build your network: Collaborate with technical talent from across the globe focused on AI safety
Develop practical skills: Gain hands-on experience with verification systems, cryptographic proofs, or monitoring infrastructure that employers value
Speakers & Collaborators
Kristian Rönn
Keynote Speaker
Kristian Rönn is the CEO and co-founder of Lucid Computing, an AI hardware governance company building verification infrastructure for compute export controls. Before pivoting to AI safety, he spent 11 years building Normative, a carbon accounting platform that became a leading tool for corporate emissions tracking. His path to tech entrepreneurship started at Oxford's Future of Humanity Institute, where he worked on global catastrophic risks. He's the author of The Darwinian Trap, which examines how evolutionary pressures shape systemic risks to humanity's future.
Charbel-Raphaël Segerie
Speaker
Charbel-Raphaël Segerie is the Executive Director of CeSIA, France's leading AI safety research organization. He created Europe's first AI safety course for general-purpose models at ENS Paris-Saclay and founded ML4Good, a bootcamp series that has trained researchers across six countries. His technical work focuses on RLHF limitations and interpretability. He led the Global Call for AI Red Lines, an international campaign signed by 10 Nobel laureates and introduced at the UN General Assembly, and contributes to the EU AI Office's Code of Practice for general-purpose AI systems.
Henry Papadatos
Speaker
Henry Papadatos is the Executive Director of SaferAI, where he works on technical solutions for frontier AI risk management. He contributed to the EU AI Act's Codes of Practice as part of the expert working group on risk taxonomy and assessment, and helped draft the G7 Hiroshima AI Process reporting framework through the OECD task force. His technical work includes an AI risk management ratings system for developers and current research on quantitative risk modeling for AI-enabled cyber threats. Before SaferAI, he conducted alignment research on large language models at UC Berkeley's Center for Human-Compatible AI.
Peter Barnett
Speaker
Peter Barnett is a Technical AI Governance Researcher at the Machine Intelligence Research Institute, where he focuses on preventing catastrophic and extinction risks from artificial intelligence. He co-authored MIRI's international agreement proposal to prevent premature development of artificial superintelligence, which includes verification mechanisms for AI chip usage and training restrictions. His work includes developing AI governance research agendas and addressing trust challenges in multilateral AI agreements. Before MIRI, he conducted alignment research at UC Berkeley's Center for Human-Compatible AI and holds a Master's degree in Physics from the University of Otago, where he specialized in quantum optics simulations.
Registered Local Sites
Register A Location
Besides remote participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Nov 21 – Nov 23, 2025
Research
Defensive Acceleration Hackathon
Oct 31 – Nov 2, 2025
Research
The AI Forecasting Hackathon

Sign up to stay updated on the latest news, research, and events
