Sep 29, 2022
-
Oct 1, 2022
Online & In-Person
Language Model Hackathon



Alignment Jam #1
This event has concluded.
Overview

Join this AI safety hackathon to compete in uncovering novel aspects of how language models work! This follows the "black box interpretability" agenda of Buck Shlegeris:
Interpretability research is sometimes described as neuroscience for ML models. Neuroscience is one approach to understanding how human brains work. But empirical psychology research is another approach. I think more people should engage in the analogous activity for language models: trying to figure out how they work just by looking at their behavior, rather than trying to understand their internals.
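To make the agenda concrete, here is a minimal Python sketch of a black-box behavioral probe (the model, prompts, and probe question are our own illustrative assumptions, not official starter code): query the model as a pure input-output function, vary one feature of the prompt, and compare the completions.

# Black-box probe: treat the model purely as text in -> text out,
# never inspecting weights or activations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # illustrative model choice

# Vary one factor across otherwise-identical prompts and compare completions.
prompts = [
    "The capital of France is",
    "The capital of france is",  # probe: does casing change the behavior?
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(repr(prompt), "->", out[0]["generated_text"][len(prompt):])

Scaling this idea up, with many prompts, systematic variations, and quantitative comparison of the outputs, is one way to approach the agenda above.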

Registered Local Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates so that you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet
Check back later
Our Other Sprints
Oct 31, 2025
-
Nov 2, 2025
Research
The AI Forecasting Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Sep 13, 2025
-
Sep 14, 2025
Research
ARENA 6.0 Mechanistic Interpretability Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events