Alignment hackathons

Join Apart Research's alignment hackathons, where you explore new and interesting research agendas alongside professional alignment researchers. Participate in the next Alignment Jam below!

Go to the Alignment Jam website

This month's Alignment Jam is about TESTING THE AI! Join to find the most interesting ways to test what AIs do, how fair they are, how easy they are to fool, how conscious they become (maybe?), and much more.
We provide you with state-of-the-art resources to test all the newest models. You can focus on neural network verification, creating fascinating datasets, building interesting reinforcement learning agents, extracting interpretable features from deep Transformers, training safe RL agents in choose-your-own-adventure games, or red-teaming the heck out of current models.

December 16th 2022

Read more
🍯 See the results from the finished interpretability hackathon here! 🍯

Join this Alignment Jam to uncover how the brains of AIs work! You can join whether you have years of programming experience or none at all; it is a hackathon for all skill levels.

November 11th 2022

Read more · See project ideas
🍯 The Language Model Hackathon is finished. Click to see the final entries! 🍯

Join this AI safety hackathon to compete in uncovering novel aspects of how language models work! This follows Buck Shlegeris's "black box interpretability" agenda. Read more.

Compete for $2,000 by creating the best research projects in black box interpretability!

September 30th 2022

Read more · See project ideas