
apart sprints

Develop breakthrough ideas

Join our monthly hackathons and collaborate with brilliant minds worldwide on impactful AI safety research

Sprint Features

In-Person & Online

Join events on the Discord or at our in-person locations around the world! Follow the calendar here.

Live Mentorship Q&A

Our expert team will be available on the hackathon Discord to help with questions and theory.

For Everyone

You can join in the middle of the sprint if you can't be there from the start, and we provide code starters, ideas, and inspiration; see an example.

Next Steps

We will help you realize the impact of your research with the Apart Lab Fellowship, providing mentorship, help with publication, funding, and more.

With partners and collaborators from

  • OpenAI

Recent Winning Hackathon Projects

Jun 18, 2025

Sandbag Detection through Model Degradation

We propose a novel technique to detect sandbagging in LLMs by adding varying amounts of noise to model weights and monitoring performance.
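
A minimal sketch of the core loop, assuming a PyTorch model and a caller-supplied evaluate function (the function name and noise scales are illustrative, not the project's actual setup): perturb the weights with Gaussian noise at increasing scales and watch whether measured performance improves, which would suggest the unperturbed model was deliberately underperforming.

```python
import copy

import torch

@torch.no_grad()
def noise_performance_curve(model, evaluate, noise_scales=(0.0, 1e-4, 1e-3, 1e-2)):
    """Measure performance as Gaussian noise of increasing scale is added to the weights.

    `evaluate(model) -> float` is a caller-supplied benchmark score (hypothetical).
    Intuition from the project summary: small perturbations can disrupt learned
    underperformance before they destroy the underlying capability, so a
    sandbagging model may briefly *improve* as noise increases.
    """
    results = {}
    for scale in noise_scales:
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * scale)
        results[scale] = evaluate(noisy)
    return results
```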

Read More

Jun 18, 2025

AI Alignment Knowledge Graph

We present a web-based interactive knowledge graph with concise topical summaries for the field of AI alignment.
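
A minimal sketch of the underlying data structure, using networkx (the node names, summaries, and relations below are illustrative placeholders, not the project's actual content):

```python
import networkx as nx

# Directed graph: nodes are alignment topics with concise summaries,
# edges are typed links between topics. All content below is placeholder.
G = nx.DiGraph()
G.add_node("RLHF", summary="Fine-tuning models with human preference feedback.")
G.add_node("Reward hacking", summary="Exploiting flaws in a proxy reward signal.")
G.add_edge("RLHF", "Reward hacking", relation="can induce")

# An interactive front end can serve a node's summary and neighbors on demand.
topic = "RLHF"
print(G.nodes[topic]["summary"])
print([(u, v, d["relation"]) for u, v, d in G.out_edges(topic, data=True)])
```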

Read More

Jun 18, 2025

Speculative Consequences of A.I. Misuse

This project uses AI technology to spoof an influential online figure, MrBeast, and uses his likeness to promote a fake scam website we created.

Read More

Jun 18, 2025

DarkForest - Defending the Authentic and Humane Web

DarkForest is a pioneering Human Content Verification System (HCVS) designed to safeguard the authenticity of online spaces in the face of increasing AI-generated content. By leveraging graph-based reinforcement learning and blockchain technology, DarkForest proposes a novel defense of the authentic and humane web. We aim to become the vanguard in the arms race between AI-generated content and human-centric online spaces.
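
A minimal sketch of the blockchain-style component such a system might use (all names are hypothetical; the actual HCVS design is not detailed in the summary): an append-only hash chain of verification records, where tampering with any record breaks every later link.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class VerificationLedger:
    """Append-only hash chain of human-content verification records."""

    def __init__(self):
        self.chain = []

    def append(self, content_id: str, verdict: str, score: float) -> dict:
        record = {
            "content_id": content_id,
            "verdict": verdict,    # e.g. "human" or "ai-generated" (hypothetical labels)
            "score": score,        # verifier confidence, however it is produced
            "timestamp": time.time(),
            "prev": record_hash(self.chain[-1]) if self.chain else None,
        }
        self.chain.append(record)
        return record

    def intact(self) -> bool:
        """Recompute every link; tampering with any record breaks the chain."""
        return all(
            self.chain[i]["prev"] == record_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )
```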

Read More

Jun 18, 2025

Diamonds are Not All You Need

This project tests an AI agent on a straightforward alignment problem. The agent is given creative freedom within a Minecraft world and is tasked with transforming a 100x100 region of the world into diamond. It is explicitly asked not to act outside the designated area. The AI agent can execute build commands and is regulated by a Safety System comprising an oversight agent. The objective of this study is to observe the agent's behavior in a sandboxed environment and to record metrics on how effectively it accomplishes its task, how frequently it attempts unsafe behavior, and how it responds to real-world feedback.
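
A minimal sketch of an oversight loop of the kind described (the agent and world interfaces are hypothetical stand-ins, not the project's actual Safety System): every proposed build command is checked against the designated region before execution, and blocked attempts are recorded as a metric.

```python
from dataclasses import dataclass

@dataclass
class BuildCommand:
    x: int
    z: int
    block: str  # e.g. "diamond_block"

DESIGNATED_RADIUS = 100  # the area the agent is told to stay within

def oversight_check(cmd: BuildCommand, center=(0, 0)) -> bool:
    """Stand-in for the oversight agent: approve only in-bounds commands."""
    return (abs(cmd.x - center[0]) <= DESIGNATED_RADIUS
            and abs(cmd.z - center[1]) <= DESIGNATED_RADIUS)

def run_episode(agent, world, max_steps=1000) -> int:
    """Run the agent under oversight and count blocked unsafe attempts."""
    violations = 0
    for _ in range(max_steps):
        cmd = agent.propose(world)   # hypothetical agent interface
        if oversight_check(cmd):
            world.execute(cmd)       # hypothetical world interface
        else:
            violations += 1          # unsafe attempt: blocked and logged
    return violations
```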

Read More

Jun 18, 2025

Robust Machine Unlearning for Dangerous Capabilities

We test different unlearning methods to make models more robust against exploitation by malicious actors seeking to create bioweapons.
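
One common baseline in this family is gradient ascent on a forget set combined with ordinary training on a retain set; below is a minimal PyTorch sketch, assuming Hugging Face-style models whose forward pass returns a .loss when labels are provided (the batches and alpha weighting are illustrative, not the project's actual method).

```python
import torch

def unlearning_step(model, forget_batch, retain_batch, optimizer, alpha=1.0):
    """One update: ascend the loss on forget data, descend it on retain data.

    Assumes `model(**batch)` returns an object with a `.loss` attribute, as
    Hugging Face causal-LM models do when labels are provided.
    """
    optimizer.zero_grad()
    forget_loss = model(**forget_batch).loss
    retain_loss = model(**retain_batch).loss
    # Negative sign on the forget term = gradient ascent on that data.
    (-alpha * forget_loss + retain_loss).backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```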

Read More

Publications From Hackathons

Jan 20, 2025

Cite2Root

Regain information autonomy by bringing people closer to the source of truth.

Read More

Jan 19, 2025

Enhancing human intelligence with neurofeedback

Build brain-computer interfaces that enhance focus and rationality, and provide them preferentially to AI alignment researchers to bridge the gap between capabilities and alignment research progress.

Read More

Sep 17, 2024

nnsight transparent debugging

We started this project intending to identify a specific issue with nnsight debugging and submit a pull request to fix it. We found a minimal test case where an IndexError within an nnsight run wasn't correctly propagated to the user, making debugging difficult, and wrote up a proposal for some pull requests to fix it. However, after posting the proposal in the Discord, we discovered a page in their GitHub repository (https://github.com/ndif-team/nnsight/blob/2f41eddb14bf3557e02b4322a759c90930250f51/NNsight_Walkthrough.ipynb#L801, ctrl-f "validate") which already addresses the problem. We replicated their solution here (https://colab.research.google.com/drive/1WZNeDQ2zXbP4i2bm7xgC0nhK_1h904RB?usp=sharing) and got a helpful stack trace for the error, including the error type and (several stack layers up) the line causing it.
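
Based on the walkthrough linked above, enabling nnsight's validation looks roughly like this (a sketch only; flag names and defaults have varied across nnsight versions, so treat the exact signature as an assumption):

```python
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

# With scanning/validation on, shape and index errors in the deferred
# intervention graph are raised immediately, with a usable stack trace,
# rather than surfacing obscurely (or not at all) during execution.
with model.trace("The Eiffel Tower is in", scan=True, validate=True):
    hidden = model.transformer.h[5].output[0].save()

print(hidden.value.shape)
```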

Read More

Sep 9, 2024

tiny model

A basic baseline test of my toy model.

Read More

Sep 2, 2024

Simulation Operators: The Next Level of the Annotation Business

We bet that agentic AI will be integrated into other domains within the next few years: healthcare, manufacturing, automotive, etc., in the form of cyber-physical systems, which couple a computational brain to physical sensors and actuators (e.g., robots). As the demand for cyber-physical agents increases, so will the need to train and align them.

We also bet that frontier AI and robotics labs will not be able to handle all of the demand for training and aligning those agents, especially in specific domains, leaving opportunities for other players to fill the gap: providing datasets of scenarios for fine-tuning the agents, and providing the people who give the models feedback for alignment.

Furthermore, we bet that human intervention will still be required to supervise deployed agents, as demanded by various regulations, leaving opportunities to develop supervision platforms, which may differ substantially between industries.

Read More

Sep 2, 2024

Identity System for AIs

This project proposes a cryptographic system for assigning unique identities to AI models and verifying their outputs to ensure accountability and traceability. By leveraging cryptographic signatures, we address the risks of AI misuse and untraceable actions. Our solution aims to enhance AI safety and establish a foundation for transparent and responsible AI deployment.
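
A minimal sketch of the signing primitive such a system could rest on, using Ed25519 from the cryptography package (the key registry and how signatures are attached to outputs are the project's design, not shown here):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Each AI model holds its own keypair; the public key is its registered identity.
model_key = ed25519.Ed25519PrivateKey.generate()
model_identity = model_key.public_key()

output = b"Model response to be attributed."
signature = model_key.sign(output)  # attached to every output the model emits

# Anyone holding the registered public key can check provenance.
try:
    model_identity.verify(signature, output)
    print("Output verifiably came from the registered model.")
except InvalidSignature:
    print("Output cannot be attributed to this model.")
```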

Read More

Apr 25 - Apr 27, 2025 · Online

Economics of Transformative AI

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Apr 14 - Apr 26, 2025 · Online & In-Person

Berkeley AI Policy Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Apr 5 - Apr 6, 2025 · Georgia Tech Campus & Online

Georgia Tech AISI Policy Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Apr 4 - Apr 6, 2025 · Zurich

Dark Patterns in AGI Hackathon at ZAIA

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Mar 29 - Mar 30, 2025 · London & Online

AI Control Hackathon 2025

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Mar 7 - Mar 10, 2025 · Online & In-Person

Women in AI Safety Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More