Jan 5, 2024
-
Jan 7, 2024
The AI Governance Research Sprint
Design, develop, guide, and predict AI governance in a three-day research sprint
This event has concluded.
Dive into technical governance and storytelling for AI security!
The development of artificial intelligence presents vast opportunities alongside significant risks. During this intensive weekend-long sprint, we will focus on governance strategies to mitigate and secure against the gravest risks, alongside narrative explorations of what the future might look like.
Sign up to collaborate with others who are delving into some of today's most critical questions: How can we effectively safeguard the security of both current and future AI systems? And what are the broader societal implications of this transformative technology?
Watch the recording of the keynote here. Access Esben's slides here.
Below, you will find each case along with its reading material. Notice also the live collective review pages under each case.
Case 1
Implementation Guidance for the EU AI Act
Now that the European AI Act has been agreed, implementation guidance will determine how the legislation is put into practice. Your task is to draft an example of such guidance.
The EU AI Act is one of the most ambitious pieces of AI legislation to date. During the implementation stages over the next two years, it is important that we understand how the legislation affects the various actors in the AI space.
Go to the shared notes and ideas document for this case.
The EU AI Act: A primer – a short synopsis of the current EU AI Act by Mia Hoffmann
Implementing EU Law – an overview from the EU on how legislation is implemented
Bertuzzi's trilogue overview – how foundation models are legislated under the new decisions made at the trilogue discussions
[Additional reading] Anthropic Responsible Scaling Policy – Anthropic's self-imposed safety precautions based on model scaling
[Additional reading] OpenAI Preparedness Framework – OpenAI's strategy for hazard preparedness
[Additional reading] AI Standards Lab – a self-organized mission to accelerate standards-writing
[Additional reading] AI Act Newsletter – FLI's newsletter on the latest news from the EU AI Act
Case 2
Technical Governance Tooling
Create demonstrations of frameworks, benchmarks, or technical tools that make specific governance initiatives possible.
Compared to legislation for other high-risk technologies, successful societal-scale AI legislation faces a distinctive problem: compute and AI development are difficult to monitor and control, so a single bad actor can wield outsized power. A sketch of the kind of back-of-the-envelope compute accounting that such oversight relies on follows the reading list below.
Go to the shared notes and ideas document for this case.
How to Catch a Chinchilla – a systematic overview of how we might audit compute usage for foundation models (Shavit, 2023)
Compute Governance Introduction – an overview from Lennart Heim of how we can legislate AI by controlling GPUs (listen also to the podcast with Lennart Heim)
[Additional reading] Representation Engineering – a technical framework for understanding how capabilities are represented inside various models
[Additional reading] Export Controls of IaaS Controlled AI Chips – how the use of compute rental services affects export controls
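To make the compute-governance angle concrete, here is a minimal sketch (our illustration, not a method from the readings above): estimating a training run's total FLOPs from parameter and token counts, then checking the estimate against a reporting threshold. The 6 · params · tokens approximation for dense transformers and the 10^25 FLOP threshold discussed around the EU trilogue are assumptions for illustration only.

```python
# A minimal sketch (illustrative assumptions, not an official audit method):
# estimate a training run's total compute with the common ~6 * params * tokens
# FLOP approximation for dense transformers, and compare it against the
# 10^25 FLOP training-compute threshold discussed around the EU AI Act trilogue.

FLOPS_PER_PARAM_TOKEN = 6          # approximate forward + backward cost per parameter per token
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # threshold discussed for general-purpose AI models

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return FLOPS_PER_PARAM_TOKEN * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """Would this run trigger the assumed reporting threshold?"""
    return estimated_training_flops(n_params, n_tokens) >= EU_SYSTEMIC_RISK_THRESHOLD

if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 1.4T tokens (a Chinchilla-like run)
    flops = estimated_training_flops(70e9, 1.4e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"Crosses the assumed 1e25 FLOP threshold: {crosses_threshold(70e9, 1.4e12)}")
```

A real audit would of course need verified inputs rather than self-reported parameter and token counts; that verification problem is exactly what the Shavit paper above tackles.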
Case 3
Explainers for AI Concepts
Create an explainer about concepts in AI risk for policymakers in whichever format you prefer (video, infographic, article, etc.)
Produce resources that can help decision-makers, such as policymakers, learn about a particular aspect of AI safety. This could include one-pagers, short videos, infographics, and more.
Go to the shared notes and ideas document for this case.
CSET's AI Concepts Overview – a short overview of the key concepts in AI safety
Visualizing the Deep Learning Revolution – an explainer about the speed of deep learning improvement and the risks that follow from it
[Additional reading] Here's What the Godfathers of AI Have to Say – a video overview of the large-scale risks as presented by Bengio, LeCun and Hinton
[Additional reading] Advanced AI Governance – a literature review of the problems, options and proposals of AI governance by Maas
[Additional reading] Statement on AI Risk explainer video – AI Explained's overview of the Statement on AI Risk
[Additional reading] World Models – an interactive explainer of how models learn world models
Case 4
Vignette Story-Telling
Tell a story about what a future world with AI might look like and use it to inform new questions to ask in AI governance.
Write a story about what a future world with AI looks like and how we got there. You may base it on 1) things going well, 2) things going badly, or 3) somewhere in between. We encourage you to be creative! As a starting prompt, you can tell the story of a small part of the world in 2040 from that future's perspective. The small extrapolation sketch after the reading list below can help you ground your story in numbers.
Go to the shared notes and ideas document for this case.
GPT-2030 – a Berkeley assistant professor's take on what the future AI might be capable of based on the current numbers
Shulman on the Dwarkesh Podcast [1:02:00 to 1:33:00] – when asked what an intelligence explosion would look like, Shulman answers with what it would feel like
What 2026 Might Look Like – impressively predicted the 2023 LLM hype in 2021
The Next Decades Might be Wild – an overview of how the next decades might look with AI
[Additional reading] Welcome to 2030. I own nothing, have no privacy, and life has never been better. – a short exposition of life in 2030
[Additional reading] EpochAI's research – a great resource for informing your perspective on how future AI development might unfold
[Additional reading] How we could stumble into AI catastrophe – a description of how we might accidentally cause a catastrophe
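If you want to ground your vignette in numbers, a simple trend extrapolation in the spirit of EpochAI's work is a good starting point. The sketch below compounds an assumed training-compute doubling time out to a target year; the baseline run size and the doubling time are illustrative assumptions, so replace them with empirically grounded figures from EpochAI's data before leaning on the outputs.

```python
# A minimal sketch for grounding a future-scenario vignette: extrapolate an
# exponential training-compute trend to a target year. The baseline and
# doubling time below are illustrative assumptions, not EpochAI estimates.

def extrapolate_compute(baseline_flops: float, baseline_year: int,
                        target_year: int, doubling_time_years: float) -> float:
    """Compound an exponential compute trend from a baseline year to a target year."""
    doublings = (target_year - baseline_year) / doubling_time_years
    return baseline_flops * 2 ** doublings

if __name__ == "__main__":
    # Assumed baseline: a ~1e25 FLOP frontier run in 2024, doubling every ~9 months
    for year in (2026, 2030, 2040):
        flops = extrapolate_compute(1e25, 2024, year, doubling_time_years=0.75)
        print(f"{year}: ~{flops:.1e} FLOPs for a hypothetical frontier training run")
```

Even a toy projection like this forces you to state your assumptions (Will the trend hold? What breaks it?), which is exactly the kind of grounding the believability criterion below rewards.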
Prizes
Besides the amazing opportunity to dive into an exciting topic, we are delighted to present our prizes for the top projects. We hope these can support you in your work on AI safety, possibly through our fellowship program, the Apart Lab!
Our jury will review the projects submitted to the research sprint, and the top three projects will receive prizes based on the jury's reviews of each criterion!
🥇 First Prize: $600
🥈 Second Prize: $300
🥉 Third Prize: $100
Beyond the top-project prizes, we work to identify projects and teams that seem ready to continue towards real-world impact in the Apart Lab fellowship, where you receive mentorship and peer review on the way to an eventual publication or other output.
A big thank you to Straumli for sponsoring the $1,000 prize!
Apart Sprints
The Apart Sprints are weekend-long challenges hosted by Apart to help you get exposure to real-world problems and develop object-level work that takes the field one step closer to more secure artificial intelligence! Read more about the project here.
Submissions
Your team's submission must follow a template provided at the start of the Sprint. The template for cases 1 and 2 follows a traditional research paper structure, while the template for cases 3 and 4 is more free-form. The instructions will be visible in each template.
Teams can have between 1 and 5 members. You will submit your project based on the template, along with a title, a description, and private contact information. There will be a team-matching event right after the keynote for anyone who is missing a team.
You are allowed to think about your project before the hackathon starts (and we recommend reading all the resources for the cases!), but your core research work should happen during the hackathon itself.
Evaluation criteria
The jury will review your project according to a series of criteria that are designed to help you develop your project.
Well-defined Scope [all cases]: Make sure your project is narrowed down to something very concrete. For example, instead of "AI & Compute Control", we expect projects like "Demo implementation of HIPCC firmware logging of GPU memory" or "GPT-2030".
Creativity [all cases]: Is the idea original compared to public research on governance? We encourage you to think outside the current boxes of AI safety and governance, possibly drawing lessons from other fields (sci-fi, cybersecurity, the Digital Services Act, etc.).
Reasoning Transparency [cases 1 and 2]: Do not defend your project. Make sure to share non-obvious advantages along with any limitations of your method (read Open Philanthropy's guide). If you have code, this also includes making that code reproducible.
Believability [cases 3 and 4]: Is your explanation accurate [case 3] or your narrative believable [case 4]? We encourage creative and innovative submissions that are connected to reality. Examples include projecting current numbers into the future to ground your narrative, or talking with one of the mentors during the weekend to make sure your explainer about an AI technology is accurate.
The schedule is still to be finalized, but you can expect the times to be within 3 hours of the final schedule. All times below are in GMT.
Friday 11:00 AM GMT to Friday 1:30 PM GMT: Study and team session - spend time with other participants to read the study materials, discuss ideas, and set up teams.
Friday 6:00 PM GMT: Keynote talk - introduction to the topic, cases, and logistics. Afterwards, you are given 30 minutes to look through the resources and ideas, followed by a team-matching event for anyone who is missing a team.
Saturday 1:00 PM GMT: Office hour - come discuss your projects with active researchers in AI governance.
Saturday 2:00 PM GMT: Office hour - discuss your Case 2 projects with experienced researchers on technical tooling for governance.
Saturday 5:00 PM GMT: Project talks - two talks from researchers in AI governance with a 15-minute break in between. Get a chance to chat with the speakers during Q&A.
Sunday 1:00 PM GMT: Office hour - come discuss your projects with active researchers in AI governance.
Sunday 2:00 PM GMT: Office hour - discuss your Case 2 projects with experienced researchers on technical tooling for governance.
Sunday 7:00 PM GMT: Virtual social - we finish off the weekend with a virtual social for any final questions and answers.
Monday 12:00 AM GMT: Submission deadline