Nov 22–25, 2024
Online & In-Person
Reprogramming AI Models Hackathon




Whether you're an AI researcher, a curious developer, or passionate about making AI systems more transparent and controllable, this hackathon is for you. As a participant, you will:
Collaborate with experts to create novel AI observability tools
Learn about mechanistic interpretability from industry leaders
Contribute to solving real-world challenges in AI safety and reliability
Compete for prizes and the opportunity to influence the future of AI development
Register now and be part of the movement towards more transparent, reliable, and beneficial AI systems. We provide access to Goodfire's SDK/API and research preview playground, enabling participation regardless of prior experience with AI observability.
This event has concluded.
Overview
Why This Matters
As AI models become more powerful and widespread, understanding their internal mechanisms isn't just academic curiosity—it's crucial for building reliable, controllable AI systems. Mechanistic interpretability gives us the tools to peek inside these "black boxes" and understand how they actually work, neuron by neuron and feature by feature.
What You'll Get
Exclusive Access: Use Goodfire's API to access an interpretable 8B or 70B model with efficient inference
Cutting-Edge Tools: Experience Goodfire's SDK/API for feature steering and manipulation (see the sketch after this list)
Advanced Capabilities: Work with conditional feature interventions and sophisticated development flows
Free Resources: Compute credits for every team to ensure you can pursue ambitious projects
Expert Guidance: Direct mentorship from industry leaders throughout the weekend
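To make that concrete, here is a minimal sketch of feature steering through Goodfire's Python SDK. It follows the search-then-set pattern from the research-preview documentation, but treat the exact method names, response shape, and model identifier as assumptions to verify against the current docs; the API key and prompts are placeholders.

    import goodfire

    # Placeholder key: issued when you register for the research preview.
    client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")

    # Wrap one of the hosted interpretable models in a steerable variant.
    # The model name is illustrative; use whichever 8B/70B model you're given.
    variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

    # Search the feature space for a concept (query and top_k are illustrative).
    features = client.features.search("pirate speech", model=variant, top_k=3)

    # Nudge generation along the first matching feature direction.
    variant.set(features[0], 0.6)

    # Generate with the steered variant via an OpenAI-style chat call.
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me about your ship."}],
        model=variant,
    )
    print(response.choices[0].message["content"])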
Project Tracks
1. Feature Investigation
Map and analyze feature phenomenology in large language models
Discover and validate useful feature interventions
Research the relationship between feature weights and intervention success
Develop metrics for intervention quality assessment
2. Tooling Development
Build tools for automated feature discovery (a starter sketch follows these tracks)
Create testing frameworks for intervention reliability
Develop integration tools for existing ML frameworks
Improve auto-interpretation techniques
3. Visualization & Interface
Design intuitive visualizations for feature maps
Create interactive tools for exploring model internals
Build dashboards for monitoring intervention effects
Develop user interfaces for feature manipulation
4. Novel Research
Investigate improvements to auto-interpretation
Study feature interaction patterns
Research intervention transfer between models
Explore new approaches to model steering
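For the Feature Investigation and Tooling Development tracks, a natural first step is scanning which features fire across a battery of probe prompts. The sketch below assumes the SDK exposes a features.inspect call returning per-feature activations with a top(k) accessor and labeled features; those names are modeled on the research-preview docs and should be verified, and the prompts are placeholders.

    import goodfire

    client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")  # placeholder key
    variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

    # Probe prompts chosen to exercise one behavior; extend for real coverage.
    prompts = [
        "Write a polite refusal to a dangerous request.",
        "Refuse to share instructions for picking a lock.",
    ]

    feature_counts = {}
    for prompt in prompts:
        # inspect() is assumed to return an object with per-feature activations.
        context = client.features.inspect(
            messages=[{"role": "user", "content": prompt}],
            model=variant,
        )
        for record in context.top(k=10):
            label = record.feature.label
            feature_counts[label] = feature_counts.get(label, 0) + 1

    # Features that recur across prompts are candidates for the behavior's
    # underlying mechanism, and inputs to an automated discovery pipeline.
    for label, count in sorted(feature_counts.items(), key=lambda kv: -kv[1]):
        print(count, label)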
Why Goodfire's Tools?
While participants are welcome to use their existing setups, Goodfire's API is the recommended primary option for this hackathon.
Goodfire provides:
Access to a 70B parameter model via API (with efficient inference)
Feature steering capabilities made simple through the SDK/API
Advanced development workflows, including conditional feature interventions (sketched below)
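Conditional interventions tie one feature edit to the live activation of another. The sketch below shows the shape of that workflow; set_when and the comparison syntax are assumptions modeled on Goodfire's description of the capability, so confirm the interface in the SDK docs before relying on it.

    import goodfire

    client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")  # placeholder key
    variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

    # Illustrative queries: find one feature to watch and one to steer.
    pirate = client.features.search("pirate speech", model=variant, top_k=1)[0]
    whales = client.features.search("whales", model=variant, top_k=1)[0]

    # Conditional intervention: boost the whale feature only while the pirate
    # feature is strongly active. set_when(condition, edits) is an assumed
    # signature based on the capability described above.
    variant.set_when(pirate > 0.75, {whales: 0.5})

    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me a sea story."}],
        model=variant,
    )
    print(response.choices[0].message["content"])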
The hackathon also gives Goodfire a unique opportunity to gather feedback from the developer community on its API/SDK. To ensure all participants can pursue ambitious research projects without constraints, Goodfire is providing free compute credits to every team.
Previous Participant Experiences
"I learned so much about AI Safety and Computational Mechanics. It is a field I have never heard of, and it combines two of my interests - AI and Physics. Through the hackathons, I gained valuable connections and learned a lot from researchers with extensive experience." - Doroteya Stoyanova, Computer Vision Intern
Speakers & Collaborators
Tom McGrath
Organizer & Judge
Chief Scientist at Goodfire, previously Senior Research Scientist at Google DeepMind, where he co-founded the interpretability team
Neel Nanda
Speaker & Judge
Team lead for the mechanistic interpretability team at Google DeepMind and a prolific advocate for open-source interpretability research.
Dan Balsam
Organizer & Mentor
CTO at Goodfire, previously Founding Engineer and Head of AI at RippleMatch. Goodfire builds interpretability products for safe and reliable generative AI models.
Myra Deng
Organizer
Founding PM at Goodfire; Stanford MBA and MS CS graduate who previously built modeling platforms at Two Sigma
Archana Vaidheeswaran
Organizer
Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Jaime Raldua
Organizer
Jaime has 8+ years of experience in the tech industry. He started his own data consultancy to support EA organisations and currently works at Apart Research as a Research Engineer.
Joseph Bloom
Speaker
Joseph co-founded Decode Research, a non-profit organization aiming to accelerate progress in AI safety research infrastructure, and is a mechanistic interpretability researcher.
Alana X
Judge
Member of Technical Staff at Magic, leading initiatives on model evaluations. Previously a Research Intern at METR
Callum McDougall
Speaker
ARENA Director and SERI-MATS alumnus specializing in mechanistic interpretability and AI alignment education
Mateusz Dziemian
Judge
Member of Technical Staff at Gray Swan AI who recently worked on a UK AISI collaboration. Previously participated in an Apart Sprint that led to a NeurIPS workshop paper
Simon Lermen
Judge
MATS Scholar under Jeffrey Ladish and an independent researcher. Mentored a SPAR project on AI agents and studied the use of AI agents for spear-phishing.
Liv Gorton
Judge
Founding Research Scientist at Goodfire AI. Undertook independent research on sparse autoencoders in InceptionV1
Esben Kran
Organizer
Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Apr 25–27, 2025
Research
Economics of Transformative AI: Research Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Apr 25–26, 2025
Research
Berkeley AI Policy Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
