May 2, 2025
-
May 4, 2025
In-Person in Mox (San Francisco)
AGI Cybersecurity Hackathon
This event focuses on developing innovative solutions that enhance the security and safety of increasingly capable AI systems.
🔒 About the Hackathon
As AI systems advance toward general intelligence capabilities, the security implications become increasingly critical. This hackathon provides a practical forum for participants to develop and test novel approaches to securing AI systems and to using AI to improve cybersecurity measures.
💡 Challenge Tracks
Classical Cybersecurity for AI: Work on developing robust protection mechanisms for AI infrastructure and model weights. Focus on creating approaches that safeguard training environments, prevent model theft, and secure the entire AI development pipeline.
AI for Cybersecurity: Leverage AI capabilities to enhance traditional cybersecurity domains. Design systems for automated vulnerability discovery, intrusion detection, threat hunting, and intelligent patching mechanisms that outperform conventional approaches.
Model Security: Design and implement innovative guardrails, safeguards, and protection mechanisms for AI models. Focus areas include preventing PII leakage, defending against adversarial inputs, ensuring behavioral safety for agentic models, and creating robust monitoring systems.
👥 Who Should Participate
We welcome participants from diverse backgrounds, including:
Cybersecurity professionals
AI researchers and engineers
Red team specialists
Privacy experts
Systems security engineers
Policy researchers interested in AI security governance
No prior experience with AI security is required, though familiarity with machine learning, cybersecurity, or a related field is helpful.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, so you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet
Check back later
Our Other Sprints
May 30, 2025
-
Jun 1, 2025
Research
Beyond Single Models: The Martian Routing Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up
Apr 25, 2025
-
Apr 27, 2025
Research
Economics of Transformative AI
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Sign Up

Sign up to stay updated on the latest news, research, and events.