Jan 30 – Feb 1, 2026

Remote

Technical International Governance

This hackathon brings together 500+ builders to prototype verification tools, compliance systems, and coordination infrastructure that enable international cooperation on frontier AI safety.

19 Days To Go


Overview

The hardware that trains frontier AI systems can be tracked. Training runs leave compute signatures. Safety evaluations can generate cryptographic proofs. International verification mechanisms exist and work.

But we lack the practical tools to implement them at scale.

This hackathon focuses on building real systems that could enable enforceable international agreements on AI development. You'll have one intensive weekend to create verification protocols, monitoring tools, privacy-preserving compliance proofs, or coordination infrastructure that advances technical governance.

Top teams get:

  • 💰 A share of $2,000 in cash prizes

  • The chance to continue development through Apart Research's Fellowship program

Apply if you believe we need functioning governance infrastructure before frontier AI capabilities advance beyond our ability to coordinate effectively.

In this hackathon, you can build:

  • Hardware verification systems that enable compute monitoring and attestation at datacenter scale

  • Privacy-preserving compliance proofs using zero-knowledge cryptography or trusted execution environments

  • Risk threshold frameworks that help labs and regulators coordinate on safety levels

  • International coordination tools that enable verification between parties without full trust

  • Dual-use detection systems for identifying dangerous capabilities in AI research

  • Real-world compliance automation for the EU AI Act, responsible scaling policies, or export controls

You'll work in teams over one weekend and submit open-source verification tools, monitoring systems, compliance frameworks, or research advancing international AI governance capabilities.

What is Technical International Governance?

Technical international governance refers to the practical infrastructure needed to verify, monitor, and enforce international agreements on AI development. This includes:

  • Hardware verification that tracks compute resources used in training frontier models

  • Attestation systems that cryptographically prove model properties without revealing weights

  • Privacy-preserving compliance proofs using zero-knowledge cryptography or trusted execution environments

  • Risk threshold frameworks that define when capabilities trigger safety requirements

  • International coordination mechanisms that enable verification between parties without full trust

  • Dual-use detection for identifying dangerous capabilities in AI research before deployment

Labs are training increasingly capable systems. Some pose risks that cross borders. International cooperation requires technical mechanisms to verify compliance without exposing sensitive information or creating security vulnerabilities.

What makes this urgent: the infrastructure doesn't exist yet. Training runs that could trigger international agreements happen now, but we lack systems to verify them. Frontier capabilities advance faster than our coordination mechanisms.

Why this hackathon?

The Problem

The gap is widening. AI systems are getting more capable; our governance infrastructure isn't. Multiple labs are training models above the EU's 10²⁵ FLOP threshold for systemic risk. Some frontier models now require ASL-3 safeguards. Decentralized training makes compute monitoring harder to implement.
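
For orientation, training compute for a dense model is often approximated as 6 × parameters × training tokens. The sketch below is a back-of-the-envelope check against the 10²⁵ FLOP presumption threshold, not how regulators actually account for compute; the model sizes and token counts are invented for illustration.

```python
# Back-of-the-envelope training-compute estimate: ~6 FLOP per parameter per token
# (a common approximation for dense transformers; real accounting is more involved).
EU_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # EU AI Act presumption threshold for systemic risk

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6 * n_params * n_tokens

def exceeds_eu_threshold(n_params: float, n_tokens: float) -> bool:
    return estimate_training_flop(n_params, n_tokens) >= EU_SYSTEMIC_RISK_THRESHOLD_FLOP

# Illustrative, hypothetical runs:
print(estimate_training_flop(70e9, 15e12))   # ~6.3e24 FLOP -> below the threshold
print(exceeds_eu_threshold(400e9, 15e12))    # ~3.6e25 FLOP -> True
```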

This is already happening. The EU AI Act took effect in August 2024, but practical compliance tools remain scarce. Export controls on AI chips lack verification mechanisms. Responsible scaling policies define thresholds without automated monitoring. Labs coordinate through voluntary frameworks that lack enforcement infrastructure.

Most governance proposals assume technical capabilities that don't exist yet. They require compute tracking systems not deployed at scale, attestation mechanisms not integrated into hardware, and verification protocols not tested between adversarial parties. We're building policy without infrastructure.

Why Technical Governance Matters Now

International cooperation depends on verifiable compliance. If agreements can't be verified without exposing sensitive IP or creating security risks, countries won't sign them. Labs won't share information that compromises their competitive position.

We're massively under-investing in governance infrastructure. Most effort goes into capabilities research or post-deployment harm mitigation; far less goes into building the verification systems, monitoring tools, and coordination mechanisms that enable international agreements.

Better technical infrastructure could give us agreements that labs can verify without exposing model weights, monitoring systems that respect privacy while enabling compliance checks, and coordination mechanisms that work between parties without full trust. It could create the practical foundation needed for international cooperation on frontier AI safety.

Hackathon Tracks

1. Hardware Verification & Attestation

  • Design hardware verification protocols for tracking compute resources in datacenter environments

  • Build attestation systems using trusted execution environments (TEEs) that prove model properties without exposing weights

  • Create compute monitoring tools that detect training runs above regulatory thresholds

  • Develop chip-level security mechanisms for remote verification of AI hardware properties
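
As a starting point for this track, the core attestation idea fits in a few lines: hash the weights artifact inside the trusted environment and sign only the digest, so a verifier can confirm which weights were evaluated without ever seeing them. The sketch below uses the Python `cryptography` package, with an Ed25519 key standing in for a hardware-bound attestation key and a dummy byte string standing in for the real weights; actual TEE attestation (key provisioning, quotes, measured boot) involves much more.

```python
# Sketch: sign a digest of the model weights so a verifier can confirm
# "these are the evaluated weights" without the weights themselves being shared.
# The Ed25519 key stands in for a hardware-bound attestation key inside a TEE.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

weights_blob = b"\x00" * 1024               # stand-in for the real weights artifact
digest = hashlib.sha256(weights_blob).digest()

# Inside the (hypothetical) trusted environment:
attestation_key = Ed25519PrivateKey.generate()
signature = attestation_key.sign(digest)

# Verifier side: given only the digest, the signature, and the public key.
public_key = attestation_key.public_key()
public_key.verify(signature, digest)        # raises InvalidSignature on tampering
print("attested weights digest:", digest.hex())
```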

2. Compliance Infrastructure & Privacy-Preserving Proofs

  • Build zero-knowledge proof systems that demonstrate regulatory compliance without revealing sensitive information

  • Create privacy-preserving audit mechanisms for federated learning or distributed training

  • Develop compliance automation tools for EU AI Act requirements, GPAI reporting, or safety frameworks

  • Design cryptographic protocols that enable verification between parties without full trust
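
Full zero-knowledge tooling (zk-SNARK or zk-STARK circuits) won't fit in a few lines, but a hash-based commit-and-reveal scheme gives a feel for the interface this track is after: a lab commits to every field of a compliance report up front and later opens only the fields a regulator needs. Note that this sketch is not zero-knowledge (an opened field is revealed in full), and the report fields are invented for illustration.

```python
# Simplified commit-and-reveal for per-field disclosure of a compliance report.
# NOT zero-knowledge: an opened field is fully revealed. Purely illustrative.
import hashlib
import os

def commit_field(name: str, value: str) -> tuple[bytes, bytes]:
    """Return (commitment, nonce) for one report field."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + name.encode() + value.encode()).digest()
    return commitment, nonce

def verify_opening(commitment: bytes, name: str, value: str, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + name.encode() + value.encode()).digest() == commitment

# Lab side: commit to every field, publish only the commitments.
report = {"training_flop": "3.6e25", "eval_suite": "cbrn-v1"}  # hypothetical fields
committed = {k: commit_field(k, v) for k, v in report.items()}
published = {k: c for k, (c, _) in committed.items()}

# Later, open a single field to the regulator without disclosing the rest.
_, nonce = committed["training_flop"]
assert verify_opening(published["training_flop"], "training_flop", report["training_flop"], nonce)
```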

3. Risk Thresholds & Compute Verification

  • Build risk assessment frameworks that map compute thresholds to capability levels

  • Create tools for harmonizing ASL/CCL terminology across different lab safety frameworks

  • Develop capability evaluation systems for dual-use risks (CBRN, cyber, autonomous AI R&D)

  • Design monitoring systems for responsible scaling policies and deployment safeguards
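
For this track, one small starting point is to encode the threshold-to-safeguard mapping as data so it can be checked automatically. The tier names, cut-offs, and required measures below are placeholders, not the ASL/CCL levels actually published by any lab.

```python
# Illustrative mapping from estimated training compute to a required safeguard tier.
# Tier names, cut-offs, and measures are placeholders, not any lab's published policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeguardTier:
    name: str
    min_flop: float              # inclusive lower bound on estimated training compute
    required_measures: list[str]

TIERS = [
    SafeguardTier("baseline", 0.0, ["model card", "standard evals"]),
    SafeguardTier("enhanced", 1e25, ["dangerous-capability evals", "incident reporting"]),
    SafeguardTier("restricted", 1e26, ["security review", "deployment gating"]),
]

def required_tier(estimated_flop: float) -> SafeguardTier:
    """Return the highest tier whose threshold the run meets."""
    applicable = [t for t in TIERS if estimated_flop >= t.min_flop]
    return max(applicable, key=lambda t: t.min_flop)

print(required_tier(3.6e25).name)   # -> "enhanced"
```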

4. International Verification & Coordination

  • Build coordination infrastructure for the International Network of AI Safety Institutes

  • Create verification mechanisms inspired by IAEA frameworks adapted for AI governance

  • Develop systems for cross-border information sharing that respect national security concerns

  • Design tools for implementing global AI safety standards and red lines

5. Research Governance & Dual-Use Detection

  • Build detection systems for identifying dangerous capabilities in pre-publication research

  • Create frameworks for assessing dual-use risks in biological AI models or other specialized domains

  • Develop pre-publication review tools that scale across research communities

  • Design capability-based threat assessment systems for frontier AI research

Who should participate?

This hackathon is for people who want to build solutions to technological risk using technology itself.

You should participate if:

  • You're an engineer or developer who wants to work on consequential problems

  • You're a researcher ready to validate ideas through practical implementation

  • You're interested in understanding how international cooperation on AI safety can be made technically feasible

  • You want to build practical verification, monitoring, or compliance tools

  • You're concerned about the gap between AI governance policy and technical infrastructure

No prior governance research experience required. We provide resources, mentors, and starter templates. What matters most: curiosity about the problem and willingness to build something real over an intensive weekend.

Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive hackathon weekend.

  • Submit open-source verification tools, compliance systems, monitoring infrastructure, or empirical research advancing international AI governance.

Please note: Due to the high volume of submissions, we cannot guarantee written feedback for every participant, although all projects will be evaluated.

What happens next

Winning and promising projects will be:

  • Awarded a share of $2,000 in cash prizes.

  • Published openly for the community.

  • Invited to continue development within the Apart Fellowship.

  • Shared with relevant safety researchers.

Why join?

  • Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI

  • Mentorship: Expert AI safety researchers, AI researchers, and policy practitioners will guide projects throughout the hackathon

  • Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications

  • Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities

Registered Local Sites

Register A Location

Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.

We haven't announced jam sites yet. Check back later.
