Apart Hackathons

Join our monthly hackathons and collaborate with brilliant minds worldwide on impactful AI safety research. Explore and sign up for upcoming hackathons here.

Reprogramming AI Models Hackathon

Nov 22–25, 2024 · This event has concluded.

This hackathon gathered ambitious AI researchers and builders to work on exciting interpretability problems using Goodfire's "interpretable" model APIs, with participants hacking on their own research ideas or exploring suggested directions.

Howard University AI Safety Summit & Policy Hackathon

Independently organized SprintX
Washington, D.C. and online
Nov 19 — Canceled

Reprogramming AI Models Hackathon

Independently organized SprintX
📈 Accelerating AI Safety
Virtual and In-Person
Nov 22 — Canceled

Autonomous Agent Evaluations Hackathon [dates TBD]

Independently organized SprintX
Virtual & in-person
Jan 24 — Canceled

In-Person & Online

Join events on the Discord or at our in-person locations around the world! Follow the calendar here.

Live Mentorship Q&A

Our expert team will be available to help with any questions and theory on the hackathon Discord.

For Everyone

You can join in the middle of the Sprint if you can't be there from the start, and we provide code starters, ideas, and inspiration; see an example.

Awards & Next Steps

We will help you take the next steps in your research journey with the Apart Lab Fellowship, which provides mentorship, publication support, funding, and more.
With partners and collaborators from

⚡️ Recent research sprints

Winning hackathon projects

Diamonds are Not All You Need

Michael Andrzejewski, Melwina Albuquerque • November 10, 2024

Omniscient Narrative Agent

Jord Nguyen, Akash Kundu, Gayatri K • October 3, 2024

Speculative Consequences of A.I. Misuse

Joseph Karam, Charlie Nguyen, Andrew Lam • November 10, 2024

AI Alignment Knowledge Graph

Matin Mahmood, Samuel Ratnam, Sruthi Kuriakose, Pandelis Mouratoglou • November 10, 2024

Robust Machine Unlearning for Dangerous Capabilities

Neel Jay, Austin Meek, Joshua Ehizibolo • November 7, 2024

EscalAtion: Assessing Multi-Agent Risks in Military Contexts

Gabriel Mukobi, Anka Reuel, Juan-Pablo Rivera, Chandler Smith • October 2, 2023

DarkForest - Defending the Authentic and Humane Web

Mustafa Yasir • September 5, 2024

Sandbag Detection through Model Degradation

Cam Tice, Philipp Alexander Kreer, Fedor Ryzhenkov, Nathan Helm-Burger, Prithviraj Singh Shahani • July 8, 2024

rAInboltBench: Benchmarking user location inference through single images

Le "Qronox" Lam, Aleksandr Popov, Jord Nguyen, Trung Dung "mogu" Hoang, Marcel M, Felix Michalak • May 31, 2024

Beyond Refusal: Scrubbing Hazards from Open-Source Models

Kyle Gabriel Reynoso, Ivan Enclonar, Lexley Maree Villasis • May 8, 2024

Investigating Neuron Behaviour via Dataset Example Pruning and Local Search

Alex Foote • November 15, 2022

We Discovered An Neuron

Joseph Miller, Clement Neo • January 25, 2023

Agreeableness vs. Truthfulness

October 18, 2022

🌏 Global Jam Site Partners

Our global-first hackathons are hosted concurrently across the world at the amazing local nodes of the Apart network.

Visit more of our amazing jam sites and see how you can become a part of this global network of changemakers here!


Apart Sprints Overview

The Apart Sprints are short hackathons and challenges focused on the most important questions in AI security. We collaborate with aligned organizations and labs across the globe.

350+

Research reports

2,000+

Participants

50+

In-person groups

🤗 Hack away with the best vibes

🥳 Hear from previous participants

"This Hackathon was a perfect blend of learning, testing, and collaboration on cutting-edge AI Safety research. I really feel that I gained practical knowledge that cannot be learned only by reading articles."
Yoann Poupart, BlockLoads CTO
"It was an amazing experience working with people I didn't even know before the hackathon. All three of my teammates were extremely spread out; while I am from India, my teammates were from New York and Taiwan. It was amazing how we pulled this off in 48 hours in spite of the time difference. Moreover, the mentors were extremely encouraging and supportive, which helped us gain clarity whenever we got stuck and helped us create an interesting project in the end."
Akash Kundu, Apart Lab Fellow
"It was great meeting such cool people to work with over the weekend! I did not know any of the other people in my group at first, and now I'm looking forward to working with them again on research projects! The organizers were also super helpful and contributed a lot to the success of our project."
Lucie Philippon, France Pacific Territories Economic Committee
"The Interpretability Hackathon exceeded my expectations. It was incredibly well organized, with an intelligently curated list of very helpful resources. I had a lot of fun participating and genuinely feel I was able to learn significantly more than I would have had I spent my time elsewhere. I highly recommend these events to anyone who is interested in this sort of work!"
Chris Mathwin, MATS scholar

🤯 Publications from hackathons

✨ More than 200 research teams

Search and see all projects on this page.

AI Safety unionization for bottom-up governance
AI Safety Subproblems for Software Engineering Researchers
AI Safety Talent Pool Identification
Analysis of upcoming AGI companies
Authority bias to ChatGPT
ChatGPT Alignment Talent Search
Catalogue of AI safety
Critique of OpenAI's alignment plan
Diversity in AI safety
New AI organization brainstorm
Risk Defense Initiative
Simon's Time-Off Newsletter
Evaluating Myopia in Large Language Models
Marco Bazzani, Felix Binder
In the Mirror: Using Chess to Simulate Agency Loss in Feedback Loops
Helios Lyons
Preserving Agency in Reinforcement Learning under Unknown, Evolving and Under-Represented Intentions
Harry Powell, Luigi Berducci
Omniscient Narrative Agent
Jord Nguyen, Akash Kundu, Gayatri K
Against Agency
Catherine Brewer
Agency as Shanon information. Unveiling limitations and common misconceptions
Ivan Madan, Hennadii Madan
Discovering Agency Features as Latent Space Directions in LLMs via SVD
max max
Very Cooperative Agent
Jakub Fidler
Comparing truthful reporting, intent alignment, agency preservation and value identification
Aksinya Bykova
Uncertainty about value naturally leads to empowerment
Filip Sondej
Sue-Per GPT: Legal RAG Assistant
Atir Petkar, Jay Liu, Chelsea Wong, Nancy Vigil
Agency, value and empowerment.
Benjamin Sturgeon, Leo Hyams
ILLUSION OF CONTROL
Mary Osuka