Jul 25 – Jul 27, 2025

Online

AI Safety x Physics Grand Challenge

Ready to bridge physics and AI safety? Apply your rigorous quantitative training to one of the most impactful technical challenges of our time.


87

Sign Ups

0

Entries

Overview

Bringing Physics to AI Safety's Most Pressing Challenges

The AI Safety x Physics Grand Challenge is a research hackathon designed to engage physicists in tackling critical AI safety problems. This is an excellent entry point for physicists who want to contribute to AI safety—you'll work alongside peers who share your technical background and passion for rigorous problem-solving.

Why Physics for AI Safety?

Physicists have much to offer AI safety:

* The ability to integrate empirical, theoretical, and computational methodologies to gain comprehensive understanding of a real-world system

* A deep understanding of scale and emergence in complex or strongly coupled systems, and of how far to 'zoom in' to get an accurate, faithful description

* Comfort with uncertainty, statistical thinking, and order-of-magnitude sanity checking

Two Research Tracks

Project Track

Build projects that connect physics concepts with key alignment or interpretability challenges. Ideal for participants with ML/AI experience who want to apply physics methods to safety problems.

Papers Track

Propose novel research agendas that tackle an AI safety challenge, or explore a potential bridge between an AI safety problem and a physics concept.

What to Expect

  • Expert mentorship from researchers at the intersection of physics and AI

  • Structured problem areas with starter materials and concrete research directions

  • Global community of highly talented participants enthusiastic about tackling high-impact challenges

  • Follow-up support for exceptional projects through Apart Research programs

  • Optional pre-event orientation for those new to AI safety


Apply

Join us in applying your academic training to one of the most important technical challenges of our time. Connect with a growing community of researchers working to ensure AI systems are developed safely and beneficially.


Speakers & Collaborators

Jason Hoelscher-Obermaier

Organizer & Judge

Jason is co-director of Apart Research and leads Apart Lab, the research program supporting top hackathon participants and projects.

Ari Brill

Area Chair

Astrophysicist turned AI safety researcher. PhD in Physics from Columbia, former NASA postdoc studying black holes. Now develops mathematical models for AI system representations and alignment.


Lauren Greenspan

Area Chair

An interdisciplinary researcher bridging theoretical physics, social studies of science, and AI safety, Lauren works on the PIBBSS horizon scanning team. She is currently focused on research and field building to close the theory-practice gap in AI safety.

Paul Riechers

Speaker

Theoretical physicist and Co-founder of BITS and Simplex AI Safety. Research Lead at Astera Institute, focusing on the intersection of physics theory and AI alignment research.

Dmitry Vaintrob

Speaker

Dmitry Vaintrob is an AI safety and interpretability researcher. He has a mathematics background, having studied interactions between algebraic geometry, representation theory, and physics. He now works on the Horizon Scanning team at PIBBSS.

Jesse H

Speaker

Executive Director of Timaeus, leading research on singular learning theory and developmental interpretability for AI safety. Former research assistant at University of Cambridge studying the science of deep learning.


Registered Jam Sites

Register A Location

Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, so you can focus on engaging your local research, student, and engineering community.

We haven't announced jam sites yet

Check back later
