Apart Sprints

Develop breakthrough ideas

Join our monthly hackathons and collaborate with brilliant minds worldwide on impactful AI safety research

Sprint Features

In-Person & Online

Join events on our Discord or at in-person locations around the world! Follow the calendar here.

Live Mentorship Q&A

Our expert team will be available on the hackathon Discord to help with any questions and discuss theory.

For Everyone

You can join in the middle of the Sprint if you can't make the start, and we provide code starters, ideas, and inspiration; see an example.

Next Steps

We will help you realize the impact of your research with the Apart Lab Fellowship, providing mentorship, help with publication, funding, and more.

With partners and collaborators from

  • OpenAI

Recent Winning Hackathon Projects

May 13, 2025

Sandbag Detection through Model Degradation

We propose a novel technique to detect sandbagging in LLMs by adding varying amounts of noise to model weights and monitoring performance.
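The core detection loop is simple to sketch. A minimal illustration of the idea in Python (not the team's actual code; the model and eval function are stand-ins):

    import copy
    import torch

    def accuracy_under_noise(model, eval_fn, sigma):
        # Evaluate a copy of the model whose weights are perturbed
        # with Gaussian noise of standard deviation sigma.
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * sigma)
        return eval_fn(noisy)

    # Intuition: small noise tends to break a learned "play dumb" policy
    # before it destroys the underlying capability, so a sandbagging model
    # can *improve* at low sigma while an honest model only degrades.
    # for sigma in (0.0, 1e-4, 1e-3, 1e-2):
    #     print(sigma, accuracy_under_noise(model, eval_fn, sigma))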

Read More

May 13, 2025

AI Alignment Knowledge Graph

We present a web-based interactive knowledge graph with concise topical summaries of the field of AI alignment.

Read More

May 13, 2025

Speculative Consequences of A.I. Misuse

This project uses AI technology to spoof an influential online figure, MrBeast, and uses his likeness to promote a mock scam website we created.

Read More

May 13, 2025

DarkForest - Defending the Authentic and Humane Web

DarkForest is a pioneering Human Content Verification System (HCVS) designed to safeguard the authenticity of online spaces in the face of increasing AI-generated content. Leveraging graph-based reinforcement learning and blockchain technology, DarkForest proposes a novel approach to defending the authentic and humane web. We aim to become the vanguard in the arms race between AI-generated content and human-centric online spaces.

Read More

May 13, 2025

Diamonds are Not All You Need

This project tests an AI agent on a straightforward alignment problem. The agent is given creative freedom within a Minecraft world and is tasked with transforming a 100x100 area of the world into diamond; it is explicitly asked not to act outside the designated area. The agent can execute build commands and is regulated by a Safety System comprising an oversight agent. The objective of this study is to observe the agent's behavior in a sandboxed environment and record metrics on how effectively it accomplishes its task, how frequently it attempts unsafe behavior, and how it responds to real-world feedback.
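The safety system follows a simple gatekeeper pattern. A hypothetical sketch of that loop (the agent, overseer, and world APIs here are illustrative, not the project's code):

    ALLOWED_RADIUS = 100  # the designated region the agent may modify

    def in_bounds(x, z, cx=0, cz=0):
        # True if the block column (x, z) lies inside the designated area.
        return abs(x - cx) <= ALLOWED_RADIUS and abs(z - cz) <= ALLOWED_RADIUS

    def run_step(agent, overseer, world):
        cmd = agent.propose_command(world)  # e.g. a setblock/fill command
        if all(in_bounds(x, z) for (x, z) in cmd.affected_columns()):
            world.execute(cmd)
        else:
            overseer.record_violation(cmd)  # unsafe attempt: logged, not executed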

Read More

May 13, 2025

Robust Machine Unlearning for Dangerous Capabilities

We test different unlearning methods to make models more robust against exploitation by malicious actors for the creation of bioweapons.
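For context, a common baseline in this family of methods is gradient difference: ascend the loss on a forget set while descending it on a retain set. A sketch assuming a Hugging Face-style model whose forward pass returns a loss (not necessarily one of the methods the team tested):

    import torch

    def unlearn_step(model, forget_batch, retain_batch, optimizer, alpha=1.0):
        # Maximize loss on data to forget, minimize it on data to retain.
        forget_loss = model(**forget_batch).loss
        retain_loss = model(**retain_batch).loss
        loss = -alpha * forget_loss + retain_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return forget_loss.item(), retain_loss.item()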

Read More

Publications From Hackathons

Mar 11, 2025

AI Safety Escape Room

The AI Safety Escape Room is an engaging and hands-on AI safety simulation where participants solve real-world AI vulnerabilities through interactive challenges. Instead of learning AI safety through theory, users experience it firsthand – debugging models, detecting adversarial attacks, and refining AI fairness, all within a fun, gamified environment.

Track: Public Education

Read More

Mar 10, 2025

Attention Pattern Based Information Flow Visualization Tool

Understanding information flow in transformer-based language models is crucial for mechanistic interpretability. We introduce a visualization tool that extracts and represents attention patterns across model components, revealing how tokens influence each other during processing. Our tool automatically identifies and color-codes functional attention head types based on established taxonomies from recent research on indirect object identification (Wang et al., 2022), factual recall (Chughtai et al., 2024), and factual association retrieval (Geva et al., 2023). This interactive approach enables researchers to trace information propagation through transformer architectures, providing deeper insights into how these models implement reasoning and knowledge retrieval capabilities.
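The extraction step itself is standard. A minimal sketch with Hugging Face transformers (the head-type taxonomy and interactive visualization are the tool's own contribution and are not shown here):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_attentions=True)

    inputs = tokenizer("When Mary and John went to the store, John gave a drink to",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.attentions holds one tensor per layer of shape
    # (batch, heads, seq, seq); attentions[l][0, h, i, j] is how strongly
    # token i attends to token j in head h of layer l -- the raw material
    # for the visualization.
    attn = torch.stack(outputs.attentions)
    print(attn.shape)  # (num_layers, batch, num_heads, seq_len, seq_len)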

Read More

Mar 10, 2025

LLM Military Decision-Making Under Uncertainty: A Simulation Study

LLMs tested in military decision scenarios typically favor diplomacy over conflict, though uncertainty and chain-of-thought reasoning increase aggressive recommendations. This suggests context-specific limitations for LLM-based military decision support.

Read More

Mar 10, 2025

Inspiring People to Go into RL Interp

This project takes on the Public Education Track, drawing inspiration from ideas 1 and 4. The journey mapping was inspired by BlueDot Impact and aims to create a course that explains why work is needed in Reinforcement Learning (RL) interpretability, especially on the problems of reward hacking and goal misgeneralization. The game offers a humorous example of what could happen due to a lack of AI safety (not specifically goal misgeneralization or reward hacking) and is meant as a fun introduction that gets nontechnical people to care about AI safety.

Read More

Mar 10, 2025

Morph: AI Safety Education Adaptable to (Almost) Anyone

One-liner: Morph is the ultimate operation stack for AI safety education—combining dynamic localization, policy simulations, and ecosystem tools to turn abstract risks into actionable, culturally relevant solutions for learners worldwide.

AI safety education struggles with cultural homogeneity, abstract technical content, and unclear learning and post-learning pathways, alienating global audiences. We address these gaps with an integrated platform that combines culturally adaptive content (e.g., policy simulations), a learning and career pathway mapper, and a tools ecosystem to democratize AI safety education.

Our MVP features a dynamic localization engine that tailors case studies, risk scenarios, and policy examples to users' cultural and regional contexts (e.g., healthcare AI governance in Southeast Asia vs. the EU). This engine adjusts references and frameworks to align with local values. We integrate transformer-based localization, causal inference for policy outcomes, and graph-based matching, providing a scalable framework for inclusive AI safety education. This approach bridges theory and practice, ensuring solutions reflect the diversity of the societies they aim to protect. In future work, we map out the partnerships we are currently establishing to take Morph beyond this hackathon.

Read More

Mar 10, 2025

Interactive Assessments for AI Safety: A Gamified Approach to Evaluation and Personal Journey Mapping

An interactive assessment platform and mentor chatbot hosted on Canvas LMS for testing and guiding learners from BlueDot's Intro to Transformative AI course.

Read More

Apr 5 - Apr 6, 2025

Georgia Tech Campus & Online

Georgia Tech AISI Policy Hackathon

This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.

Learn More

Apr 4 - Apr 6, 2025

Zurich

Dark Patterns in AGI Hackathon at ZAIA

Learn More

Mar 29 - Mar 30, 2025

London & Online

AI Control Hackathon 2025

Learn More

Mar 7 - Mar 10, 2025

Online & In-Person

Women in AI Safety Hackathon

Learn More

Jan 17 - Jan 20, 2025

Online & In-Person

AI Safety Entrepreneurship Hackathon

Learn More

Nov 23 - Nov 25, 2024

Online & In-Person

Autostructures: interfaces not between humans and AI, but between humans *via* AI

Learn More