Jan 30, 2026 - Feb 1, 2026
Remote
The Technical AI Governance Challenge



This hackathon brings together 500+ builders to prototype verification tools, compliance systems, and coordination infrastructure that enable international cooperation on frontier AI safety.
Resources

General Introduction
International AI Safety Report 2025
Led by Yoshua Bengio, backed by 30 countries and international organizations
The inaugural comprehensive scientific review of general-purpose AI capabilities and risks. Essential reading that establishes the evidence base for AI governance discussions, covering capability assessments, risk taxonomies, and technical approaches to safety. Participants will gain a shared vocabulary and understanding of the threat landscape that underpins all hackathon tracks.
Computing Power and the Governance of AI
Centre for the Governance of AI (GovAI)
The foundational paper explaining why compute is uniquely governable compared to other AI inputs (data, algorithms). Covers compute's detectability, excludability, quantifiability, and supply chain concentration. Essential for understanding why hardware-focused governance is feasible and how visibility, allocation, and enforcement mechanisms work in practice.
The Annual AI Governance Report 2025: Steering the Future of AI
International Telecommunication Union (ITU)
Comprehensive overview of global AI governance approaches, from Europe's risk-based AI Act to Asia's innovation-driven models. Covers the transition from principles to operational tools, regional variations in governance philosophy, and the emerging role of international coordination. Provides crucial context on the political landscape participants will be building for.
Track 1: Hardware Verification, Attestation & Lifecycle Security
Technology to Secure the AI Chip Supply Chain: A Working Paper
Center for a New American Security (CNAS) – April 2025
Detailed primer on Hardware-Enabled Mechanisms (HEMs) including location verification, offline licensing, and workload attestation. Explains how these mechanisms could enable targeted export controls, privacy-preserving compliance reporting, and enforcement of international agreements. Essential reading for understanding the technical building blocks of chip governance.
Hardware-Enabled Mechanisms for Verifying Responsible AI Development
arXiv – April 2025
Technical deep-dive into location verification, offline licensing, workload classification, and detailed verification approaches. Covers open challenges including anti-tamper techniques, privacy protections, and cluster configuration flexibility. Includes practical discussion of Trusted Execution Environments (TEEs) and remote attestation.
Flexible Hardware-Enabled Guarantees (FlexHEG) Report
Future of Life Institute – January 2025
Ambitious proposal for a family of hardware mechanisms consisting of secure processors within tamper-resistant enclosures that locally enforce flexible policies. Addresses privacy-preserving verification, mutual verification between geopolitical rivals, and the potential for international agreements. Forward-looking vision of what mature chip governance could look like.
Supplementary Resources:
Secure, Governable Chips – CNAS foundational report on on-chip governance
Can "Location Verification" Stop AI Chip Smuggling? – Accessible overview of delay-based location verification
Request for Proposals on Hardware-Enabled Mechanisms – Longview Philanthropy funding priorities
Harnessing Silicon Lifecycle Management For Chip Security – Industry perspective on chip security
Track 2: Compliance Infrastructure, Monitoring & Privacy-Preserving Proofs
How the EU's Code of Practice Advances AI Safety
AI Frontiers – July 2025
Explains the EU Code of Practice's requirements including risk estimation, external evaluation, incident reporting, and public transparency. Critical for understanding what compliance actually requires under the first major frontier AI regulation—and therefore what compliance infrastructure must support.
AI Lab Watch – Commitments Tracker
Zach Stein-Perlman
Comprehensive tracking of AI company commitments, from the Seoul Summit pledges to responsible scaling policies. Documents the gap between stated commitments and actual implementation. Essential context on what independent monitoring looks like today and where gaps exist (this effort is winding down, creating an important gap).
Future of Life Institute – December 2025
Systematic evaluation of 7 leading AI companies across 33 indicators of responsible AI development. Provides methodology for assessing compliance with safety commitments, covering risk management, governance, transparency, and disclosure. Model for what rigorous independent monitoring looks like.
Supplementary Resources:
AI Compliance in 2025: Standards and Frameworks – Overview of NIST AI RMF, EU AI Act, sector-specific requirements
Privacy & AI Compliance 2025 – Privacy-enhancing technologies and compliance strategies
Zero Knowledge Proofs in Web3/DeFi – Technical primer on ZKP market growth and applications
Track 3: Risk Thresholds, Modeling & Compute Verification
Common Elements of Frontier AI Safety Policies
METR (Model Evaluation & Threat Research)
Comprehensive analysis of capability thresholds, risk tiers, and safeguards across 12 published frontier safety policies from OpenAI, Anthropic, DeepMind, xAI, Amazon, and others. Essential for understanding how labs currently define "dangerous" and where approaches differ. Includes direct quotes from each framework.
AI Safety under the EU AI Code of Practice
Georgetown CSET – July 2025
Analysis of how the Code of Practice sets a "minimum standard for appropriate risk management" that goes beyond current industry practices. Covers pre-defined risk tiers, capability-based thresholds, and the requirement for external evaluation. Critical for understanding what risk threshold harmonization might look like.
The Role of Compute Thresholds for AI Governance
Institute for Law & AI – February 2025
Deep analysis of how compute thresholds function as regulatory triggers, their limitations (algorithmic progress, post-training enhancements), and verification challenges. Discusses how "on-chip governance mechanisms" could verify compute claims. Essential for understanding threshold-based governance.
Supplementary Resources:
Epoch AI Key Trends – Data on training compute, algorithmic efficiency improvements
AI 2027 Compute Forecast – Projections for compute scaling and algorithmic progress
Global Call for AI Red Lines – International campaign for binding AI limits with 300+ signatories
EU AI Act Code of Practice Portal – Official text and analysis from the chairs
Track 4: International Verification & Coordination Infrastructure
Lawfare – November 2024
Careful analysis of whether and how IAEA models apply to AI governance. Covers IAEA's monitoring and verification functions, its limitations (North Korea, Iran), and unique challenges for AI (software copyability, compute repurposing, ephemeral training runs). Essential framing for international verification discussions.
The Global Landscape of AI Safety Institutes
All Tech Is Human – May 2025
Comprehensive catalogue of national AI Safety Institutes worldwide, their functions, and the International Network of AI Safety Institutes. Analyzes the tension between national sovereignty and international coordination, and the recent UK rebranding to "AI Security Institute." Essential map of the institutional landscape.
Mechanisms to Verify International Agreements About AI Development
ResearchGate – June 2025
Technical paper on low-tech and high-tech approaches to international verification. Covers self-reporting with inspection, on-chip mechanisms for remote attestation, and workload classification for detecting large training runs. Practical roadmap for what near-term verification could look like.
Supplementary Resources:
An Institutional Analysis of the IAEA and IPCC – Lessons for AI governance institution design
International Network of AI Safety Institutes Launch – NIST fact sheet on the network
AI Safety Institute Wikipedia – Current status of national institutes (CAISI renaming, etc.)
Track 5: Research Governance & Dual-Use Detection
Dual-use Capabilities of Concern of Biological AI Models
PLOS Computational Biology – May 2025
Technical analysis of biosecurity risks from AI, including prevention and mitigation strategies: data exclusion, machine unlearning, API access restrictions, and governmental risk-benefit assessment. Model for how dual-use research governance discussions apply to AI-specific contexts.
Framework for Artificial Intelligence Diffusion
Bureau of Industry and Security – January 2025
The Biden administration's framework for controlling AI chip exports and model weights. Establishes the first regulatory framework treating AI model weights as controlled items alongside hardware. Essential context for understanding export control approaches to dual-use AI.
Regulating Artificial Intelligence: U.S. and International Approaches
Congressional Research Service – 2025
Comprehensive overview of U.S. AI regulatory approaches including compute thresholds, dual-use reporting requirements, and the policy shift under the Trump administration. Provides essential context on the regulatory landscape and open questions for Congress.
Supplementary Resources:
AI Policies in Academic Publishing 2025 – How journals are handling AI disclosure
America's AI Action Plan – Current U.S. policy direction
Boosting Safety Research – AI Lab Watch – Tracking labs' safety research output
Useful Datasets, Benchmarks & Tools
Datasets
Epoch AI Training Compute Database – Historical compute usage for notable models
AI Incident Database – Documented AI failures and harms
OECD.AI Policy Observatory – Global AI policy tracking
Benchmarks & Evaluations
METR Evaluations – Autonomous capability evaluations
UK AISI Inspect Framework – Open-source evaluation framework
Anthropic Model Card – Example of safety documentation
Open-Source Tools
Three.js – 3D visualization (for hardware/network simulations)
D3.js – Data visualization
LangChain/LlamaIndex – For AI-powered document analysis
Plotly – Interactive charts
Project Ideas (a list of ideas is available at this link)
Project Scoping Advice
Based on successful hackathon retrospectives:
Focus on MVP, Not Production. In 2 days, aim for:
Day 1: Set up environment, implement core functionality, get basic pipeline working
Day 2: Add 1-2 key features, create demo, prepare presentation
Use Mock/Simulated Data. Rather than integrating real APIs or databases, use:
Synthetic chip registries or compliance records
Simulated protocol interactions (e.g., attestation handshakes)
Pre-compiled policy documents and model cards
This eliminates authentication, rate limiting, and data quality issues (a minimal sketch of this approach follows below).
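To illustrate, here is a minimal Python sketch of what mocked governance data could look like: a synthetic chip registry plus a simulated attestation handshake. Every field name, chip ID, and the HMAC-based challenge-response are hypothetical placeholders standing in for a real inventory schema or TEE attestation protocol.

```python
# Minimal sketch of mocked governance data for a weekend prototype.
# All fields, IDs, and the handshake format are hypothetical placeholders,
# not a real registry schema or attestation protocol.
import hashlib
import hmac
import json
import secrets
from dataclasses import dataclass, asdict

@dataclass
class ChipRecord:
    chip_id: str          # hypothetical serial number
    model: str            # accelerator SKU (made up)
    declared_site: str    # datacenter the chip is registered to
    peak_tflops: float    # vendor-declared peak throughput

# Synthetic chip registry instead of a live inventory API.
REGISTRY = [
    ChipRecord("CHIP-0001", "AcmeAI-H100-class", "eu-west-dc1", 990.0),
    ChipRecord("CHIP-0002", "AcmeAI-H100-class", "eu-west-dc1", 990.0),
    ChipRecord("CHIP-0003", "AcmeAI-B200-class", "us-east-dc4", 2250.0),
]

# Simulated attestation handshake: the "verifier" sends a nonce, the "chip"
# answers with an HMAC over the nonce using a provisioned key. This stands in
# for a real TEE quote so the surrounding pipeline can be built and demoed.
DEVICE_KEYS = {rec.chip_id: secrets.token_bytes(32) for rec in REGISTRY}

def chip_respond(chip_id: str, nonce: bytes) -> str:
    return hmac.new(DEVICE_KEYS[chip_id], nonce, hashlib.sha256).hexdigest()

def verifier_check(chip_id: str, nonce: bytes, response: str) -> bool:
    expected = hmac.new(DEVICE_KEYS[chip_id], nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    nonce = secrets.token_bytes(16)
    resp = chip_respond("CHIP-0001", nonce)
    print("attestation ok:", verifier_check("CHIP-0001", nonce, resp))
    print(json.dumps([asdict(r) for r in REGISTRY], indent=2))
```

Because the device keys live in the same process, this only simulates the message flow; the point is to let a team build and demo the verification pipeline around it without waiting on real hardware, credentials, or APIs.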
Leverage Existing Frameworks. Don't build from scratch. Use:
Published policy texts (EU Code of Practice, lab RSPs) as structured inputs
Epoch AI datasets for compute trends
AI Incident Database for documented cases
Existing comparison frameworks as starting points
Clear Success Criteria. Define what "working" means:
For hardware verification: Models 3+ attack scenarios with documented threat assumptions
For compliance tools: Tracks 5+ lab commitments with structured change detection
For threshold analysis: Compares definitions across 4+ major labs with gap taxonomy
For international verification: Maps 10+ arms control precedents to AI-specific challenges
For research governance: Classifies 50+ papers with documented methodology and error cases
Overview

The hardware that trains frontier AI systems can be tracked. Training runs leave compute signatures. Safety evaluations can generate cryptographic proofs. International verification mechanisms exist and work.
But we lack the practical tools to implement them at scale.
This hackathon focuses on building technical solutions that could enable enforceable international agreements on AI development. You'll have one intensive weekend to create verification protocols, monitoring tools, privacy-preserving compliance proofs, or coordination infrastructure that advances technical governance.
Top teams get:
💰 $2,000 in cash prizes + Priority Access to:
1. Your Next Job at Lucid Computing (Details TBA)
2. The MIRI Fellowship
3. The Apart Fellowship
“Fast-tracks” include at least one interview with leadership at Lucid Computing and with researchers at MIRI and Apart Research, covering potential job opportunities and fellowship onboarding to work in depth on Technical Solutions for International AI Governance.
Apply if you believe we need functioning governance infrastructure before frontier AI capabilities advance beyond our ability to coordinate effectively.
In this hackathon, you can build:
Hardware verification systems that enable compute monitoring and attestation at datacenter scale
Privacy-preserving compliance proofs using zero-knowledge cryptography or trusted execution environments
Risk threshold frameworks that help labs and regulators coordinate on safety levels
International coordination tools that enable verification between parties without full trust
Dual-use detection systems for identifying dangerous capabilities in AI research
Real-world compliance automation for the EU AI Act, responsible scaling policies, or export controls
You'll work in teams over one weekend and submit open-source verification tools, monitoring systems, compliance frameworks, or research advancing international AI governance capabilities.
What is International Technical AI Governance?
International technical AI governance refers to the practical infrastructure needed to verify, monitor, and enforce international agreements on AI development. This includes:
Hardware verification that tracks compute resources used in training frontier models
Attestation systems that cryptographically prove model properties without revealing weights
Privacy-preserving compliance proofs using zero-knowledge cryptography or trusted execution environments
Risk threshold frameworks that define when capabilities trigger safety requirements
International coordination mechanisms that enable verification between parties without full trust
Dual-use detection for identifying dangerous capabilities in AI research before deployment
Labs are training increasingly capable systems. Some pose risks that cross borders. International cooperation requires technical mechanisms to verify compliance without exposing sensitive information or creating security vulnerabilities.
What makes this urgent: the infrastructure doesn't exist yet. Training runs that could trigger international agreements happen now, but we lack systems to verify them. Frontier capabilities advance faster than our coordination mechanisms.
Why this hackathon?
The Problem
The gap is widening. AI systems are getting more capable; our governance infrastructure isn't. Multiple labs are training models above the EU's 10²⁵ FLOP threshold for systemic risk. Some frontier models now require ASL-3 safeguards. Decentralized training makes compute monitoring harder to implement.
This is already happening. The EU AI Act entered into force in August 2024, but practical compliance tools remain scarce. Export controls on AI chips lack verification mechanisms. Responsible scaling policies define thresholds without automated monitoring. Labs coordinate through voluntary frameworks that lack enforcement infrastructure.
Most governance proposals assume technical capabilities that don't exist yet. They require compute tracking systems not deployed at scale, attestation mechanisms not integrated into hardware, and verification protocols not tested between adversarial parties. We're building policy without infrastructure.
Why International Technical AI Governance Matters Now
International cooperation depends on verifiable compliance. If agreements can't be verified without exposing sensitive IP or creating security risks, countries won't sign them. Labs won't share information that compromises their competitive position.
We're massively under-investing in governance infrastructure. Most effort goes into capabilities research or post-deployment harm mitigation. Far less into building the verification systems, monitoring tools, and coordination mechanisms that enable international agreements.
Better technical infrastructure could give us agreements that labs can verify without exposing model weights, monitoring systems that respect privacy while enabling compliance checks, and coordination mechanisms that work between parties without full trust. It could create the practical foundation needed for international cooperation on frontier AI safety.
Hackathon Tracks
1. Hardware Verification & Attestation
Design hardware verification protocols for tracking compute resources in datacenter environments
Build attestation systems using trusted execution environments (TEEs) that prove model properties without exposing weights
Create compute monitoring tools that detect training runs above regulatory thresholds (a rough sketch follows this track's list)
Develop chip-level security mechanisms for remote verification of AI hardware properties
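As a concrete starting point for the compute-monitoring idea above, here is a rough sketch that estimates training compute from declared cluster metadata and flags runs near the EU AI Act's 10²⁵ FLOP systemic-risk threshold. The input fields and the utilization figure are illustrative assumptions; real monitoring would need verified telemetry rather than self-reported numbers.

```python
# Rough compute-threshold check over declared cluster metadata.
# Field names and the utilization figure are illustrative assumptions.
from dataclasses import dataclass

EU_SYSTEMIC_RISK_FLOP = 1e25  # EU AI Act presumption threshold for GPAI systemic risk

@dataclass
class DeclaredRun:
    run_id: str
    num_chips: int
    peak_flops_per_chip: float        # peak FLOP/s per accelerator (vendor spec)
    wall_clock_hours: float
    assumed_utilization: float = 0.4  # assumed fraction of peak actually achieved

def estimated_training_flop(run: DeclaredRun) -> float:
    """Total FLOP ~= chips x peak FLOP/s x seconds x utilization."""
    seconds = run.wall_clock_hours * 3600
    return run.num_chips * run.peak_flops_per_chip * seconds * run.assumed_utilization

def flag_run(run: DeclaredRun, threshold: float = EU_SYSTEMIC_RISK_FLOP) -> str:
    est = estimated_training_flop(run)
    if est >= threshold:
        return f"{run.run_id}: ~{est:.2e} FLOP, at or above the systemic-risk threshold"
    if est >= 0.1 * threshold:
        return f"{run.run_id}: ~{est:.2e} FLOP, within 10x of the threshold, worth watching"
    return f"{run.run_id}: ~{est:.2e} FLOP, well below the threshold"

if __name__ == "__main__":
    # Hypothetical run: 10,000 accelerators at ~1e15 FLOP/s peak for ~60 days.
    run = DeclaredRun("run-042", num_chips=10_000, peak_flops_per_chip=1e15,
                      wall_clock_hours=60 * 24)
    print(flag_run(run))
```

The hypothetical run (10,000 accelerators at ~10¹⁵ FLOP/s peak for roughly 60 days at 40% utilization) comes out around 2×10²⁵ FLOP, comfortably above the threshold, which shows how even coarse declared metadata can support a useful first-pass flagging tool.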
2. Compliance Infrastructure & Privacy-Preserving Proofs
Build zero-knowledge proof systems that demonstrate regulatory compliance without revealing sensitive information (a simplified sketch follows this track's list)
Create privacy-preserving audit mechanisms for federated learning or distributed training
Develop compliance automation tools for EU AI Act requirements, GPAI reporting, or safety frameworks
Design cryptographic protocols that enable verification between parties without full trust
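To make the privacy-preserving proof idea concrete, below is a deliberately simplified commit-and-reveal sketch: a lab publishes a salted hash of its evaluation report before deployment and can later prove to an auditor that the disclosed report is the one it committed to. The salted SHA-256 commitment is a stand-in for a real zero-knowledge proof system (which would let the lab prove properties of the report without disclosing it at all), and the report fields and function names are illustrative assumptions.

```python
# Simplified commit-and-reveal stand-in for a privacy-preserving compliance proof.
# A salted SHA-256 commitment replaces a real ZK proof system; all names are illustrative.
import hashlib
import json
import secrets

def commit(report: dict) -> tuple[str, bytes]:
    """Lab side: commit to the full evaluation report without revealing it."""
    salt = secrets.token_bytes(16)
    payload = json.dumps(report, sort_keys=True).encode()
    commitment = hashlib.sha256(salt + payload).hexdigest()
    return commitment, salt  # commitment is published; salt is kept by the lab

def open_commitment(report: dict, salt: bytes, commitment: str) -> bool:
    """Auditor side: check that a later-disclosed report matches the earlier commitment."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest() == commitment

if __name__ == "__main__":
    full_report = {
        "model": "demo-model-v1",
        "cbrn_eval_passed": True,
        "cyber_eval_passed": True,
        "training_flop_estimate": 8.5e24,
    }
    commitment, salt = commit(full_report)
    # Later, the lab discloses the report to an auditor, who verifies it against
    # the commitment that was published before deployment.
    print("matches commitment:", open_commitment(full_report, salt, commitment))
```

The commitment only provides binding (the lab cannot quietly alter the report after the fact); a hackathon project could keep this message flow and swap the hash commitment for an actual ZK circuit or a TEE-backed attestation to also hide the report's contents from the verifier.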
3. Risk Thresholds & Compute Verification
Build risk assessment frameworks that map compute thresholds to capability levels
Create tools for harmonizing ASL/CCL terminology across different lab safety frameworks
Develop capability evaluation systems for dual-use risks (CBRN, cyber, autonomous AI R&D)
Design monitoring systems for responsible scaling policies and deployment safeguards
4. International Verification & Coordination
Build coordination infrastructure for the International Network of AI Safety Institutes
Create verification mechanisms inspired by IAEA frameworks adapted for AI governance
Develop systems for cross-border information sharing that respect national security concerns
Design tools for implementing global AI safety standards and red lines
5. Research Governance & Dual-Use Detection
Build detection systems for identifying dangerous capabilities in pre-publication research
Create frameworks for assessing dual-use risks in biological AI models or other specialized domains
Develop pre-publication review tools that scale across research communities
Design capability-based threat assessment systems for frontier AI research
Who should participate?
This hackathon is for people who want to build solutions to technological risk using technology itself.
You should participate if:
You're an engineer or developer who wants to work on consequential problems
You're a researcher ready to validate ideas through practical implementation
You're interested in understanding how international cooperation on AI safety can be made technically feasible
You want to build practical verification, monitoring, or compliance tools
You're concerned about the gap between AI governance policy and technical infrastructure
No prior governance research experience required. We provide resources, mentors, and starter templates. What matters most: curiosity about the problem and willingness to build something real over an intensive weekend.
Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.
What you will do
Participants will:
Form teams or join existing groups.
Develop projects over an intensive hackathon weekend.
Submit open-source verification tools, compliance systems, monitoring infrastructure, or empirical research advancing international AI governance.
Please note: Due to the high volume of submissions, we cannot guarantee written feedback for every participant, although all projects will be evaluated.
What happens next
Winning and promising projects will be:
Awarded a share of $2,000 in cash prizes.
Offered an interview with leadership at Lucid Computing.
Offered an interview with researchers at MIRI.
Offered an interview with researchers at Apart Research.
Published openly for the community.
Shared with relevant safety researchers.
Why join?
Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI
Mentorship: Expert AI safety researchers, AI researchers, and policy practitioners will guide projects throughout the hackathon
Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications
Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities
Registered Local Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet
Check back later
Our Other Sprints
Nov 21, 2025 - Nov 23, 2025
Research
Defensive Acceleration Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Oct 31, 2025 - Nov 2, 2025
Research
The AI Forecasting Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.