May 24, 2024
-
May 27, 2024
AI Security Evaluation Hackathon: Measuring AI Capability
Inspired by the SafeBench competition, our hackathon brings together AI researchers and developers to create cutting-edge benchmarks that measure and mitigate AI risks.
This event is ongoing.
This event has concluded.
Join Us for the AI Security Hackathon: Ensuring a Safer Future with AIJoin us for an exciting weekend of collaboration and innovation at our upcoming AI Security Hackathon! Inspired by the SafeBench competition, our hackathon brings together AI researchers and developers to create cutting-edge benchmarks that measure and mitigate AI risks.
Sign up here to stay updated for this event
🥇 rAInboltBench - How good are multimodal models at Geoguessr?🥈 Cybersecurity Persistence Benchmark - Does 'turn it off and on again' work against LLM hackers?🥉 Say No to Mass Destruction - Will an LLM know when not to answer?🏅 Dark Patterns in LLMs - Could LLMs be covertly influencing you?See all the winning projects under the "Entries" tab and hear their lightning talks in the video below.
You are also welcome to rewatch the keynote talk by Bo Li:
Why Benchmarking Matters
Benchmarks are crucial for evaluating AI systems' performance and identifying areas for improvement. In AI security, benchmarks assess the robustness, transparency, and alignment of AI models, ensuring their safety and reliability.
Notable AI safety benchmarks include:
TruthfulQA: Assessing the tendency to biased and untruthful answers to simple questions from AI models
DecodingTrust: A thorough assessment of trustworthiness in GPT models
HarmBench: Evaluating automated red-teaming methods against AI models
RuLES: Measuring how securely AI models follow rules set out by the developers
MACHIAVELLI: Assessing the potential for AI systems to engage in deceptive or manipulative behavior
RobustBench: Evaluating the robustness of computer vision models to various perturbations
The Weapons of Mass Destruction Proxy benchmark also informs methods to remove dangerous capabilities in cyber, bio, and chemistry.
What to Expect
During the hackathon, you'll:
Collaborate with diverse participants, including researchers and developers
Learn from keynote speakers and mentors at the forefront of AI safety research
Develop innovative benchmarks addressing key AI security and robustness challenges
Compete for prizes and recognition for the most impactful and creative submissions
Network with potential collaborators and employers in the AI safety community
Join us for a weekend of intense collaboration, learning, and innovation as we work together to build a safer future with AI. Stay tuned for more details on dates, format, and prizes.
Register now and be part of the solution in ensuring AI's transformative potential is realized safely and securely!
Prizes, evaluation, and submission
You will join in teams to submit a PDF about your research according to the submission template shared on the kickoff day! Depending on the judge's reviews, you'll have the chance to win from the $2,000 prize pool!
🥇 $1,000 for the top team
🥈 $600 for the second prize
🥉 $300 for the third prize
🏅 $100 for the fourth prize
Criteria
We have a talented team of judges with us who will provide feedback and evaluate your project according to the following criteria:
Benchmarks: Is your project inspired and motivated by existing literature on benchmarks? Does it represent significant progress in safety benchmarking?
AI Safety: Does your project seem like it will contribute meaningfully to the safety and security of future AI systems? Is the motivation for the research good and relevant for safety?
Generalizability / Reproducibility: Does your project seem like it would generalize; for example, do you show multiple models and investigate potential errors in your benchmark? Is your code available in a repository or a Google Colab?
Get an overview of how to get the best out of your weekend at this blog post:

The ultimate guide to AI safety research hackathons
To get started with other resources for evaluation, jump into the Evaluations Quickstart guide Github repository, where you will find multiple interesting resources on various safety benchmarking and evaluation topics: https://github.com/apartresearch/evaluations-starter
Starter code
To get you started with benchmarking and understand what you can do with current open models, we've written multiple notebooks for you to start your research journey from!
If you haven't used Colab notebooks before, you can either download them as Jupyter notebooks, run them in the browser, or make a copy to your own Google Drive. The last one is our suggestion since you can make permanent changes and share it with your teammates for somewhat live editing.
Replicate API usage: An easy introduction to querying all the models available on the Replicate.ai platform - if you'd like an API key, we can provide this as well, simply ask!
Transformer-lens model download: Loading in language models to change the weights, either for creating trojan networks, sleeper agents, or understand what goes on inside the model
Voice cloning: A simple implementation of cloning yours or any other voice - this demo records your voice and allows you to make text-to-speech on your own voice
Predicting the future: This notebook can be used to make simple parametric predictions about the future from existing data, such as the amount of fake news in Sweden during 2020 through 2023
The schedule runs from 7PM CEST / 10AM PST Friday to 4AM CEST Monday / 7PM PST Sunday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public ICal here.
You will also find Explorer events before the hackathon begins on Discord and on the calendar.

Entries
Our Other Sprints
Apr 25, 2025
-
Apr 27, 2025
Economics of Transformative AI: Research Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible
Apr 25, 2025
-
Apr 26, 2025
Berkeley AI Policy Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible