Aug 18, 2023 - Aug 20, 2023
LLM Evals Hackathon
Welcome to the research hackathon on devising methods for evaluating the risks of deployed language models and AI systems. Given the societal-scale risks of creating new types of intelligence, we need to understand and control the capabilities of such models.
This event has concluded.
Our Other Sprints
Jul 25, 2025 - Jul 27, 2025
AI Safety x Physics Grand Challenge
Jun 13, 2025
Red Teaming A Narrow Path: ControlAI Policy Sprint