Oct 4, 2024
-
Oct 7, 2024
Online & In-Person
Agent Security Hackathon





What is Agent Security?
While much AI safety research focuses on large language models, the AI systems being deployed in the real world are far more complex. Enter the realm of Agents — sophisticated combinations of language models and other programs that are reshaping our digital world.
During this hackathon, we'll put our research skills to the test by diving deep into the world of agents:
What are their unique safety properties?
Under what conditions do they fail?
How do they differ from raw language models?
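To make that last question concrete, here is a minimal sketch of an agent loop, written in plain Python with illustrative names rather than any real framework's API: a raw language model only maps text to text, whereas an agent wraps the model in a loop that can call tools and feed the results back into its context.

# Minimal agent-loop sketch. All names (calculator, fake_llm, run_agent) are
# illustrative assumptions for this page, not a specific library.
def calculator(expression: str) -> str:
    """Toy tool the agent may invoke."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as exc:
        return f"error: {exc}"

TOOLS = {"calculator": calculator}

def fake_llm(history: list[dict]) -> dict:
    """Stand-in for a real model call: requests a tool once, then answers."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "calculator", "input": "17 * 23"}
    return {"answer": f"The result is {history[-1]['content']}."}

def run_agent(task: str, llm=fake_llm, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = llm(history)                  # model decides the next action
        if "answer" in decision:                 # done: return the final answer
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["input"])  # act on the world
        history.append({"role": "tool", "content": observation})
    return "stopped: step limit reached"

print(run_agent("What is 17 * 23?"))  # -> The result is 391.

Every extra capability in that loop, such as tools, memory, or multi-step planning, is also a new attack surface, which is precisely what the questions above invite you to study.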
Why Agent Security Matters
The development of AI has brought about systems capable of increasingly autonomous operation. AI agents, which integrate large language models with other programs, represent a significant step in this evolution. These agents can make decisions, execute tasks, and interact with their environment in ways that surpass traditional AI systems.
This progression, while promising, introduces new challenges in ensuring the safety and security of AI systems. The complexity of agents necessitates a reevaluation of existing safety frameworks and the development of novel approaches to security. Agent security research is crucial because it:
Ensures AI agents act in alignment with human values and intentions
Prevents potential misuse or manipulation of AI systems
Protects against unintended consequences of autonomous decision-making
Builds trust in AI technologies, supporting responsible adoption in society
What to Expect
During this hackathon, you'll have the opportunity to:
Collaborate with like-minded individuals passionate about AI safety
Receive continuous feedback from mentors and peers
Attend inspiring HackTalks and keynote speeches
Participate in office hours with established researchers
Develop innovative solutions to real-world AI security challenges
Network with experts in the field of AI safety and security
Contribute to groundbreaking research in agent security
Submission
You will work in teams to submit a report and code repository of your research from the weekend. Established researchers will judge your submission and provide reviews following the hackathon.
What is it like to participate?
Doroteya Stoyanova, Computer Vision Intern
I learnt so much about AI Safety and Computational Mechanics. It is a field I had never heard of, and it combines two of my interests: AI and Physics. Through the hackathons I gained valuable connections and learnt a lot from researchers with a lot of experience, and this will help me in my research-oriented career path.
Kevin Vegda, AI Engineer
I loved taking part in the AI Risk Demo-Jam by Apart Research and LISA. It was my first hackathon ever. I greatly appreciate the ability of the environment to churn out ideas as well as to incentivise you to make demo-able projects that are always good for your CV. Moreover, meeting people from the field gave me an opportunity to network and maybe that will help me with my career.
Mustafa Yasir, The Alan Turing Institute
[The technical AI safety startups hackathon] completely changed my idea of what working on 'AI Safety' means, especially from a for-profit entrepreneurial perspective. I went in with very little idea of how a startup can be a means to tackle AI Safety and left with incredibly exciting ideas to work on. This is the first hackathon in which I've kept thinking about my idea, even after the hackathon ended.
Winning the hackathon!
You have a unique chance to win during this hackathon! Our expert panel of judges will review your submissions against the following criteria:
Agent safety: Does the project move the field of agent safety forward? After reading this, do we know more about how to detect dangerous agents, protect against dangerous agents, or build safer agents than before?
AI safety: Does the project solve a concrete problem in AI safety? If this project were fully realized, would we expect a world with superintelligence to be (even marginally) safer than before?
Methodology: Is the project well-executed and is the code available so we can review it? Do we expect the results to generalize beyond the specific case(s) presented in the submission?
Join us
This hackathon is for anyone who is passionate about AI safety and secure systems research. Whether you're an AI researcher, developer, entrepreneur, or simply someone with a great idea, we invite you to be part of this ambitious journey. Together, we can build the tools and research needed to ensure that agents develop safely.
By participating in the Agent Security Hackathon, you'll:
Gain hands-on experience in cutting-edge AI safety research
Develop valuable skills in AI development, security analysis, and collaborative problem-solving
Network with leading experts and potential future collaborators in the field
Contribute to solving one of the most pressing challenges in AI development
Enhance your resume with a unique and highly relevant project
Potentially kickstart a career in AI safety and security
Whether you're an AI researcher, developer, cybersecurity expert, or simply passionate about ensuring safe AI, your perspective is valuable. Let's work together to build a safer future for AI!
Speakers & Collaborators
Archana Vaidheeswaran
Organizer
Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Esben Kran
Organizer
Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Jason Schreiber
Organizer and Judge
Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Astha Puri
Judge
Astha Puri has over 8 years of experience in the data science field. Astha works on improving the search and recommendation experience for customers for a Fortune 10 company.
Sachin Dharashivkar
Judge
Sachin Dharashivkar is the CEO of TrojanVectors, a company specializing in security for RAG chatbots and AI agents.
Ankush Garg
Judge
Ankush is a Research Engineer on Meta AI's Llama team, working on pre-training large language models. Before that, he spent 5 years at Google Brain/DeepMind and brings extensive experience.
Abhishek Harshvardhan Mishra
Judge
Abhishek is an independent hacker and consultant who specializes in building code generation and roleplay models. He is the creator of the evolSeeker and codeCherryPop models.
Andrey Anurin
Judge
Andrey is a senior software engineer at Google and a researcher at Apart, working on automated cyber capability evaluation and capability elicitation.
Pranjal Mehta
Judge
Pranjal Mehta, a tech entrepreneur and IIT Madras graduate, is the co-founder of Neohumans.ai, a venture-backed startup. He is an active angel investor and mentor.
Samuel Watts
Keynote Speaker
Sam is the product manager at Lakera, the leading GenAI security platform. Lakera develops AI security and safety guardrails that best serve startups & enterprises.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Apr 25, 2025
-
Apr 27, 2025
Research
Economics of Transformative AI: Research Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI development.
Sign Up
Apr 25, 2025
-
Apr 26, 2025
Research
Berkeley AI Policy Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI development.
Sign Up

Sign up to stay updated on the latest news, research, and events
