May 24, 2024

-

May 27, 2024

Online & In-Person

AI Security Evaluation Hackathon: Measuring AI Capability

Inspired by the SafeBench competition, our hackathon brings together AI researchers and developers to create cutting-edge benchmarks that measure and mitigate AI risks.

00:00:00:00

00:00:00:00

00:00:00:00

00:00:00:00

This event is ongoing.

This event has concluded.

Overview

Overview

Arrow
Arrow
Arrow

Join Us for the AI Security Hackathon: Ensuring a Safer Future with AIJoin us for an exciting weekend of collaboration and innovation at our upcoming AI Security Hackathon! Inspired by the SafeBench competition, our hackathon brings together AI researchers and developers to create cutting-edge benchmarks that measure and mitigate AI risks.

Sign up here to stay updated for this event

🥇 rAInboltBench - How good are multimodal models at Geoguessr?🥈 Cybersecurity Persistence Benchmark - Does 'turn it off and on again' work against LLM hackers?🥉 Say No to Mass Destruction - Will an LLM know when not to answer?🏅 Dark Patterns in LLMs - Could LLMs be covertly influencing you?See all the winning projects under the "Entries" tab and hear their lightning talks in the video below.


You are also welcome to rewatch the keynote talk by Bo Li:


Why Benchmarking Matters

Benchmarks are crucial for evaluating AI systems' performance and identifying areas for improvement. In AI security, benchmarks assess the robustness, transparency, and alignment of AI models, ensuring their safety and reliability.

Notable AI safety benchmarks include:

  • TruthfulQA: Assessing the tendency to biased and untruthful answers to simple questions from AI models

  • DecodingTrust: A thorough assessment of trustworthiness in GPT models

  • HarmBench: Evaluating automated red-teaming methods against AI models

  • RuLES: Measuring how securely AI models follow rules set out by the developers

  • MACHIAVELLI: Assessing the potential for AI systems to engage in deceptive or manipulative behavior

  • RobustBench: Evaluating the robustness of computer vision models to various perturbations

  • The Weapons of Mass Destruction Proxy benchmark also informs methods to remove dangerous capabilities in cyber, bio, and chemistry.

What to Expect

During the hackathon, you'll:

  • Collaborate with diverse participants, including researchers and developers

  • Learn from keynote speakers and mentors at the forefront of AI safety research

  • Develop innovative benchmarks addressing key AI security and robustness challenges

  • Compete for prizes and recognition for the most impactful and creative submissions

  • Network with potential collaborators and employers in the AI safety community

Join us for a weekend of intense collaboration, learning, and innovation as we work together to build a safer future with AI. Stay tuned for more details on dates, format, and prizes.

Register now and be part of the solution in ensuring AI's transformative potential is realized safely and securely!

Prizes, evaluation, and submission

You will join in teams to submit a PDF about your research according to the submission template shared on the kickoff day! Depending on the judge's reviews, you'll have the chance to win from the $2,000 prize pool!

  • 🥇 $1,000 for the top team

  • 🥈 $600 for the second prize

  • 🥉 $300 for the third prize

  • 🏅 $100 for the fourth prize

Criteria

We have a talented team of judges with us who will provide feedback and evaluate your project according to the following criteria:

  • Benchmarks: Is your project inspired and motivated by existing literature on benchmarks? Does it represent significant progress in safety benchmarking?

  • AI Safety: Does your project seem like it will contribute meaningfully to the safety and security of future AI systems? Is the motivation for the research good and relevant for safety?

  • Generalizability / Reproducibility: Does your project seem like it would generalize; for example, do you show multiple models and investigate potential errors in your benchmark? Is your code available in a repository or a Google Colab?

Resources

Resources

Arrow
Arrow
Arrow

Get an overview of how to get the best out of your weekend at this blog post:

The ultimate guide to AI safety research hackathons

To get started with other resources for evaluation, jump into the Evaluations Quickstart guide Github repository, where you will find multiple interesting resources on various safety benchmarking and evaluation topics: https://github.com/apartresearch/evaluations-starter

Starter code

To get you started with benchmarking and understand what you can do with current open models, we've written multiple notebooks for you to start your research journey from!

If you haven't used Colab notebooks before, you can either download them as Jupyter notebooks, run them in the browser, or make a copy to your own Google Drive. The last one is our suggestion since you can make permanent changes and share it with your teammates for somewhat live editing.

  • Replicate API usage: An easy introduction to querying all the models available on the Replicate.ai platform - if you'd like an API key, we can provide this as well, simply ask!

  • Transformer-lens model download: Loading in language models to change the weights, either for creating trojan networks, sleeper agents, or understand what goes on inside the model

  • Voice cloning: A simple implementation of cloning yours or any other voice - this demo records your voice and allows you to make text-to-speech on your own voice

  • Predicting the future: This notebook can be used to make simple parametric predictions about the future from existing data, such as the amount of fake news in Sweden during 2020 through 2023

Schedule

Schedule

Arrow
Arrow
Arrow

The schedule runs from 7PM CEST / 10AM PST Friday to 4AM CEST Monday / 7PM PST Sunday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public ICal here.

You will also find Explorer events before the hackathon begins on Discord and on the calendar.

Entries

Speakers & Collaborators

Bo Li

Keynote speaker

Li is an Associate Professor of Computer Science at UChicago and an organizer of the SafeBench competition. Her research focuses on trustworthiness in AI systems.

Minh Nguyen

Reviewer

Minh has developed AI voice model products with a million users per month and is now doing product at Hume AI.

Mateusz Jurewicz

Judge

Mateusz a Senior ML Engineer, currently at the GenAI team at Danske Bank and an AI researcher with a doctorate from the IT University of Copenhagen.

Nora Petrova

Judge

Nora is an AI Engineer & Researcher, interested in AI Safety and Interpretability. She has a background in CS, Physics and Maths.

Jacob Haimes

Speaker & Mentor

Author of the unpublished retro-holdout paper about evaluation datasets that have leaked into the training set and a fellow at the Apart Lab. Hosts a podcast on AI safety.

Natalia Pérez-Campanero Antolín

Judge

A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.

Esben Kran

Organizer

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.

Jason Schreiber

Organizer and Judge

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.

Finn Metz

Organizer

Finn is a core member of Apart and heads strategy and business development with a background from private equity, incubation, and venture capital.

Speakers & Collaborators

Bo Li

Keynote speaker

Li is an Associate Professor of Computer Science at UChicago and an organizer of the SafeBench competition. Her research focuses on trustworthiness in AI systems.

Minh Nguyen

Reviewer

Minh has developed AI voice model products with a million users per month and is now doing product at Hume AI.

Mateusz Jurewicz

Judge

Mateusz a Senior ML Engineer, currently at the GenAI team at Danske Bank and an AI researcher with a doctorate from the IT University of Copenhagen.

Nora Petrova

Judge

Nora is an AI Engineer & Researcher, interested in AI Safety and Interpretability. She has a background in CS, Physics and Maths.

Jacob Haimes

Speaker & Mentor

Author of the unpublished retro-holdout paper about evaluation datasets that have leaked into the training set and a fellow at the Apart Lab. Hosts a podcast on AI safety.

Natalia Pérez-Campanero Antolín

Judge

A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.

Esben Kran

Organizer

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.

Jason Schreiber

Organizer and Judge

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.

Finn Metz

Organizer

Finn is a core member of Apart and heads strategy and business development with a background from private equity, incubation, and venture capital.

Registered Jam Sites

Register A Location

Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.

Registered Jam Sites

Register A Location

Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.