Apr 14 - Apr 17, 2023
Online & In-Person
The Interpretability Hackathon 2.0



We focus on interpreting the innards of neural networks using methods from mechanistic interpretability.
32 Sign Ups
Resources

Inspiration
We have many ideas available for inspiration on the aisi.ai Interpretability Hackathon ideas list. A lot of interpretability research is available on distill.pub, Transformer Circuits, and Anthropic's research page.
Introductions to mechanistic interpretability
A video walkthrough of A Mathematical Framework for Transformer Circuits (a short code sketch of the paper's QK/OV circuits follows this list).
An annotated list of good interpretability papers, along with summaries and takes on what to focus on.
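As a taste of the framework's core idea: each attention head factors into a QK circuit (which positions attend to which) and an OV circuit (what information gets moved). Below is a minimal sketch of computing both from a small model's weights; it assumes TransformerLens and its weight-indexing conventions, and the model name is one of the TransformerLens registry names, so treat the details as illustrative rather than official.

```python
# Minimal sketch of the QK/OV decomposition from "A Mathematical Framework
# for Transformer Circuits": each attention head factors into two low-rank
# d_model x d_model matrices. Assumes `pip install transformer_lens`.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("attn-only-1l")

W_Q, W_K = model.W_Q[0, 0], model.W_K[0, 0]  # layer 0, head 0: [d_model, d_head]
W_V, W_O = model.W_V[0, 0], model.W_O[0, 0]  # W_O is [d_head, d_model]

W_QK = W_Q @ W_K.T  # QK circuit: which residual-stream directions attend to which
W_OV = W_V @ W_O    # OV circuit: what the head writes back when it attends
print(W_QK.shape, W_OV.shape)  # both [d_model, d_model]
```

Because both matrices depend only on the weights, not on any particular input, you can study a head's behavior in isolation; that is the move the paper builds on.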
Digestible research
"Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers" summarizes a long list of papers and is definitely useful for your interpretability projects.
Andrej Karpathy's "Understanding what convnets learn"
12 toy language models designed to be easier to interpret, in the style of A Mathematical Framework for Transformer Circuits: 1-, 2-, 3-, and 4-layer models, where for each size one is attention-only, one has GeLU activations, and one has SoLU activations (an activation designed to make the model's neurons more interpretable: https://transformer-circuits.pub/2022/solu/index.html). These aren't well documented yet, but they are available in TransformerLens (see the loading sketch after this list).
An Anthropic Twitter thread going through some language model results.
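Since the toy models ship with TransformerLens, loading one takes only a few lines. A minimal sketch, assuming a standard pip install; the model name follows TransformerLens's registry naming (e.g. "attn-only-2l" for the 2-layer attention-only model — check the library's model table for the full list):

```python
# Minimal sketch: load a 2-layer attention-only toy model and inspect it.
# Assumes `pip install transformer_lens`; names like "attn-only-2l",
# "gelu-1l", and "solu-4l" follow the TransformerLens model registry.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("attn-only-2l")
logits, cache = model.run_with_cache("Interpretability is AI neuroscience")

print(model.cfg.n_layers, model.cfg.n_heads)  # architecture at a glance
print(cache["pattern", 0].shape)  # layer-0 attention: [batch, head, query, key]
```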
Below is a talk by Esben on a principled introduction to interpretability for safety:
Overview

Hosted by Zaki, Apart Research, Esben Kran, StefanHex, calcan, Neel Nanda · #alignmentjam
Join this AI safety hackathon to find new perspectives on the "brains" of AI!
48 hours of intense research in interpretability, the modern neuroscience of AI.
We focus on interpreting the innards of neural networks using methods from mechanistic interpretability! With the starter Colab code, you should be able to quickly get technical insight into the process.

We provide you with the best starter templates that you can work from so you can focus on creating interesting research instead of browsing Stack Overflow. You're also very welcome to check out some of the ideas already posted!
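To give a flavor of what a first experiment can look like (the exact contents of this event's starter Colab aren't reproduced here), below is a hedged sketch of a classic warm-up: scoring attention heads for induction behavior on repeated text, assuming TransformerLens.

```python
# Hedged sketch of a classic warm-up experiment: find induction heads by
# feeding a repeated random sequence and measuring how much each head
# attends from a token back to the token *after* its previous occurrence.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

n = 20
seq = torch.randint(1000, 10000, (1, n))  # random token ids
tokens = torch.cat([seq, seq], dim=1)     # the same sequence repeated twice
_, cache = model.run_with_cache(tokens)

for layer in range(model.cfg.n_layers):
    pattern = cache["pattern", layer][0]  # [head, query, key]
    # An induction head at query position q attends back to key q - (n - 1),
    # i.e. one position past the same token's previous occurrence.
    diag = pattern.diagonal(-(n - 1), dim1=-2, dim2=-1)  # [head, n + 1]
    score = diag[:, 1:].mean(dim=-1)      # keep only queries in the second copy
    print(f"layer {layer}: max induction score {score.max().item():.2f} "
          f"(head {score.argmax().item()})")
```

Heads that score highly copy repeated patterns forward, one of the first circuits the field reverse-engineered, and a good sanity check that your tooling works before the real research starts.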
How to participate
Create a user on the itch.io (this) website and click Participate. We will assume that you are going to participate, so please cancel your sign-up if you won't be part of the hackathon.
Instructions
You will work on research ideas you generate during the hackathon; see the Resources section above for more inspiration.
Submission
Everyone will help rate the submissions on a shared set of criteria that we ask everyone to follow. Judging happens over a 2-week period and is open to everyone!
Speakers & Collaborators
Neel Nanda
Speaker & Judge
Lead of the mechanistic interpretability team at Google DeepMind and a prolific advocate for open-source interpretability research.
Registered Local Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, so you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Jan 30 - Feb 1, 2026
Research
The Technical AI Governance Challenge
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Nov 21 - Nov 23, 2025
Research
Defensive Acceleration Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.