Nov 21, 2025 - Nov 23, 2025
Remote
This hackathon brings together builders to prototype defensive systems that could protect us from AI-enabled threats.
Overview

The line between real and synthetic is blurring fast. AI-generated content is increasingly indistinguishable from human-created media. Coordinated disinformation campaigns run at machine speed. Social engineering attacks are personalized at scale. And our defensive infrastructure is nowhere close to keeping up.
This hackathon brings together 500+ builders to prototype systems that could help us detect, counter, and defend against AI-enabled manipulation. You'll have one intensive weekend to build something real – tools that could actually help people distinguish truth from synthetic deception.
Top teams get:
💰 $2,000 in cash prizes
The chance to continue development through Apart Research's Fellowship program
Apply if you believe we need better tools to protect truth and trust in an AI-saturated world.
In this hackathon, you can build:
A deepfake detection system that analyzes video, audio, and images for synthetic artifacts and provides confidence scores to help users verify authenticity (a toy artifact-scoring sketch follows this list)
A disinformation tracking platform that monitors coordinated inauthentic behavior across social platforms and identifies AI-generated campaign patterns at scale
A social engineering defense tool that warns users about AI-powered phishing, spear-phishing, and manipulation attempts by detecting linguistic patterns and behavioral anomalies
A synthetic media provenance tracker that creates tamper-proof chains of custody for digital content, making it easier to verify original sources (a minimal hash-chain sketch appears below)
Other defensive projects that help people navigate an increasingly manipulated information environment
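To make the detection idea concrete, here is a deliberately minimal, hypothetical Python sketch of one classic artifact signal: some image-synthesis pipelines leave unusual high-frequency energy in the spectrum, so a crude heuristic can compare high- to low-frequency energy and squash the ratio into a pseudo-confidence score. Real detectors are trained models; every function name, threshold, and constant below is an illustrative assumption, not a calibrated method.

import numpy as np

def highfreq_energy_ratio(gray, cutoff=0.25):
    # Share of spectral energy farther than `cutoff` from the spectrum
    # center (normalized radius). A crude, illustrative signal only.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(spectrum[dist > cutoff].sum() / spectrum.sum())

def synthetic_confidence(gray):
    # Map the ratio to [0, 1]; the midpoint (0.15) and steepness (40)
    # are made-up placeholders, not calibrated values.
    r = highfreq_energy_ratio(gray)
    return float(1.0 / (1.0 + np.exp(-40.0 * (r - 0.15))))

rng = np.random.default_rng(0)
frame = rng.random((256, 256))  # stand-in for a real grayscale frame
print(f"synthetic confidence: {synthetic_confidence(frame):.2f}")

A real entry would replace this heuristic with a trained classifier and report calibrated, per-modality scores, but the shape of the problem is the same: extract an artifact signal, then turn it into a score a user can act on.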
You'll work in teams over one weekend and submit open-source tools, detection models, analysis frameworks, or research prototypes that advance our ability to defend against AI-enabled manipulation.
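For the provenance idea, a tamper-evident chain of custody can be as simple as hash-linked records, in the spirit of a transparency log. The sketch below uses only Python's standard library; the record fields and function names are assumptions for illustration, and a production system would add signatures and trusted timestamps.

import hashlib, json, time

def _digest(record):
    # Canonical JSON so the hash is stable across runs.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_entry(chain, content, actor):
    # Each record commits to the content hash and the previous record,
    # so an edit anywhere in the history invalidates every later link.
    record = {
        "prev": chain[-1]["hash"] if chain else None,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "actor": actor,
        "ts": time.time(),
    }
    record["hash"] = _digest(record)
    chain.append(record)

def verify_chain(chain):
    # Recompute every link from scratch; any tampering breaks the chain.
    prev = None
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True

chain = []
append_entry(chain, b"original photo bytes", actor="camera-app")
append_entry(chain, b"original photo bytes", actor="newsroom-editor")
print(verify_chain(chain))      # True
chain[0]["actor"] = "attacker"  # tamper with history...
print(verify_chain(chain))      # ...and verification fails: False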
What is AI manipulation?
AI manipulation refers to the use of artificial intelligence to deceive, mislead, or coerce people at scale. This includes deepfakes that impersonate real individuals, coordinated disinformation campaigns that flood platforms with synthetic content, personalized social engineering attacks that exploit individual vulnerabilities, and automated propaganda systems that shift public opinion through manufactured consensus.
The problem isn't just that AI can create convincing fakes – it's that AI can create them faster than humans can verify, personalize them based on psychological profiles, and coordinate them across platforms in ways that overwhelm traditional fact-checking.
What makes this particularly dangerous is the asymmetry: creating synthetic content is getting easier and cheaper, while verifying authenticity remains expensive and slow. A single person with access to modern AI tools can now generate thousands of convincing fake personas, fabricated videos, or coordinated disinformation posts. Meanwhile, our verification infrastructure still relies largely on manual review by human fact-checkers.
Why this hackathon?
The Problem
We're entering a world where "pics or it didn't happen" no longer works as evidence. AI can now generate photorealistic images, clone voices with seconds of audio, and create videos that pass casual inspection. Language models write in styles indistinguishable from specific human authors. Automated systems coordinate inauthentic behavior across thousands of accounts simultaneously.
This isn't a future problem – it's happening now. Election interference campaigns use AI to generate localized disinformation. Scammers clone voices to impersonate family members in distress. State actors deploy synthetic personas to build trust before launching influence operations. And we're drastically under-equipped to respond.
The tools we have – reverse image search, manual fact-checking, reactive content moderation – were built for a pre-AI world. They don't scale to the speed and sophistication of modern manipulation techniques.
Why AI Manipulation Defense Matters Now
Trust is infrastructure. When people can't distinguish real from fake, democratic deliberation breaks down, financial markets become vulnerable to manipulation, and interpersonal relationships suffer. The cost of verification can't exceed the cost of creation, or truth loses by default.
Right now, we're massively under-investing in manipulation defense. Most effort flows into reactive content moderation or trying to prevent model capabilities from advancing. Far less goes into building proactive detection systems, provenance tracking, or tools that help people themselves evaluate what's real.
That gap is dangerous. Better defensive technology could:
Give people real-time tools to verify what they're seeing
Help platforms detect coordinated manipulation before it spreads
Create transparency around synthetic content without censoring it
Build the forensic infrastructure we need for digital evidence to remain credible
Restore some symmetry between creation and verification costs
Who should participate?
This hackathon is for people who want to build solutions to technological risk using technology itself.
You should participate if:
You're an engineer or developer who wants to work on consequential problems
You're a researcher ready to validate ideas through practical implementation
You believe that strengthening the shield is as important as blunting the spear
You have technical skills and genuine urgency about building better defenses
You're frustrated that defensive work is underfunded relative to its importance
You don't need deep domain expertise in biosecurity or cybersecurity, though it helps. What matters: ability to build functional systems, willingness to learn quickly over a compressed timeframe, and real conviction that this work matters.
Some of the most valuable defensive innovations come from people who aren't constrained by conventional thinking about how things "should" be done. Fresh perspectives combined with solid technical capabilities often yield the most novel approaches.
What you will do
Participants will:
Form teams or join existing groups.
Develop projects over an intensive hackathon weekend.
Submit open-source tools, detection models, analysis frameworks, or research prototypes that advance our ability to defend against AI-enabled manipulation.
What happens next
Winning and promising projects will be:
Awarded $10,000 worth of prizes in cash.
Awarded a fully-funded trip to London to take part in BlueDot Impact's December Incubator Accelerator week (Dec 1-5).
Guaranteed a spot in BlueDot Impact's AGI Strategy course.
Invited to continue development within the Apart Fellowship.
Why join?
Impact: Your work may directly inform AI governance decisions and help society prepare for transformative AI
Mentorship: Expert forecasters, AI researchers, and policy practitioners will guide projects throughout the hackathon
Community: Collaborate with peers from across the globe working to understand AI's trajectory and implications
Visibility: Top projects will be featured on Apart Research's platforms and connected to follow-up opportunities
Speakers & Collaborators
Geoff Ralston
Speaker
Geoff Ralston is the former President of Y Combinator. He was the CEO of La La Media, Inc., developer of Lala, a web browser-based music distribution site. Prior to Lala, Ralston worked for Yahoo!, where he was Vice President of Engineering and Chief Product Officer. In 1997, Ralston created Yahoo! Mail.
Nora Ammann
Speaker
Nora is an interdisciplinary researcher with expertise in complex systems, philosophy of science, political theory, and AI. She focuses on the development of transformative AI and understanding intelligent behavior in natural, social, or artificial systems. Before joining ARIA, she co-founded and led PIBBSS, a research initiative exploring interdisciplinary approaches to AI risk, governance, and safety.
Esben Kran
Speaker
Esben is the CEO and Chairman of Apart. He has published award-winning AI safety research in various domains related to cybersecurity, autonomy preservation, and interpretability. He is involved in numerous efforts to ensure AI remains safe for humanity.
Joshua Landes
Organiser
Joshua Landes leads Community & Training at BlueDot Impact, where he runs the AISF community and facilitates AI Governance and Economics of Transformative AI courses. Previously, he worked at AI Safety Germany and the Center for AI Safety, after managing political campaigns for FDP in Germany.
Raina MacIntyre
Speaker
Raina MacIntyre is Head of the Biosecurity Program at the Kirby Institute, UNSW Australia. She is a physician and epidemiologist, recognized internationally for her research on the prevention and detection of epidemic infections, with a focus on pandemics, epidemics, bioterrorism, and vaccines.
Zainab Majid
Speaker
Zainab works at the intersection of AI safety and cybersecurity, leveraging her expertise in incident response investigations to tackle AI security challenges.
Registered Local Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet. Check back later.
Our Other Sprints
Nov 21, 2025
-
Nov 23, 2025
Research
Defensive Acceleration Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Oct 31, 2025
-
Nov 2, 2025
Research
The AI Forecasting Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.