May 26, 2023
-
May 28, 2023
ML Verifiability Hackathon
Join us for this month's Alignment Jam to investigate how we can both formally and informally verify the safety of machine learning systems!
This event has concluded.
Spend a weekend of intense, focused research work toward validating the safety of neural networks in various domains (e.g. language) using adversarial attack/defense and other ML safety research methods. Re-watch the intro talk here.
Read up on the topic before we start! The reading group will work through these materials together in the lead-up to the kickoff.
Join the reading group here.
Schedule & logistics
Here you can see the calendar and schedule. All times are in UTC+1 (UK Summer Time). Subscribe to the calendar to see the event timings in your own time zone.
Entries
Our Other Sprints
Apr 25, 2025
-
Apr 27, 2025
Economics of Transformative AI: Research Sprint
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.
Apr 25, 2025
-
Apr 26, 2025
Berkeley AI Policy Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible AI.