Aug 30, 2024
-
Sep 2, 2024
Hackathon for Technical AI Safety Startups
This event has concluded.
AI Safety Requires Ambition
We are facing critical technological problems in AGI deployment over the next three years: alignment, multi-agent risk, compute security, and exfiltration, among many others. Each of these problems deserves a competent team scaling science-informed solutions. This is where you come in!
Join us this weekend to kick off an ambitious journey into AI safety and security alongside other aligned and talented individuals from both science and business. We aim to bring solution-oriented deep tech to AI safety. This is your chance to change the world.
Why This Hackathon Matters:
The impact of real-world startups is immense and can be felt almost immediately. We need to push AI safety innovations toward real-world applications rapidly to ensure they are implemented and make a real difference in how AI technologies are applied and deployed. This hackathon is not just about ideation; it's about taking the crucial first step from concept to a plan for action, setting the stage for future development.
What to Expect:
Collaborate with Like-Minded Innovators: Work with a diverse group of participants from both the science and business worlds. This is your chance to meet potential co-founders and collaborators who share your vision for the future of AI safety.
From Concept to Blueprint: Move beyond research papers and ideas. Develop tangible solutions that can be refined and scaled into full-fledged products or startups in the future.
Support from Experts: Receive mentorship from leading experts in AI safety, entrepreneurship, and technology commercialization. Learn what it takes to develop your ideas and how to plan for their success.
Real-World Impact: We're looking for solutions that can serve as the foundation for real-world applications. Whether it's a new tool for alignment, a product to enhance multi-agent safety, or a platform to secure AI infrastructure, your work here could have a direct impact on the safe development and deployment of AI.
Looking Ahead: As you work on your projects, keep in mind that this is just the beginning. We're preparing an incubator program, Pathfinders, designed to provide even more support for the most promising teams.
🙋‍♀️ FAQ
What will I submit during this weekend?
You will submit a four-page white paper describing how your research innovation solves a key real-world problem in a multi-agent superintelligence future, whatever technology it is based on. If your research truly solves a real-world problem, commercial viability becomes a question of engineering, so you can set it aside for this hackathon.
How will I find a team?
Before the hackathon, we host brainstorming sessions where we collaborate to identify the highest-impact research ideas to work on. We highly recommend that you connect with others during these sessions and coordinate around the categories of ideas that excite you. We set the stage by defining the problems that need solving and let you take charge of the idea-generation process.
What is the Pathfinders program?
It is an incubator program for taking technical research into the real world through for-profit organizations, with the goal of making a large impact. We're currently developing it in collaboration with our community and would love to hear your ideas, thoughts, and ways we can make it a success with you!
How can I win the prizes?
We have a prize pool of $2,000 to share with you! Reviewers with established backgrounds in research-focused startups will evaluate your project on three criteria:
Research quality & scalability: What is the quality of the technical innovation? Will the research idea scale into an impactful intervention in the real world?
AI safety: How strong are the arguments that this technical idea solves a key future challenge in AI risk? Have the authors described the threat model they aim to address with this project?
Methods: Would this intervention really work? Have the authors explored an example situation where it would solve the problem?
Top teams will win a share of our $2,000 prize pool:
🥇 1st place: $1,000
🥈 2nd place: $600
🥉 3rd place: $300
🎖️ 4th place: $100
Do I need experience in AI safety to join?
Not at all! This can be an occasion for you to learn more about AI safety and entrepreneurship. We provide code templates and ideas to kickstart your projects, mentors to give feedback along the way, and a great community of interested researchers and developers to review your work.
Join Us:
This hackathon is for anyone who is passionate about AI safety and entrepreneurship. Whether you're an AI researcher, developer, entrepreneur, or simply someone with a great idea, we invite you to be part of this ambitious journey. Together, we can build the tools and products needed to ensure that AI is developed and deployed safely.
Let's turn cutting-edge research into actionable solutions. Let's build the future of AI safety, together.
Core readings (1 hour):
Some for-profit AI alignment org ideas: An exploration of potential technical AI safety for-profit ideas from Goodfire's co-founder and CEO, Eric Ho. Goodfire recently raised $7M to build out mechanistic interpretability in the commercial domain.
For-profit AI Safety: This post from Apart co-director Esben Kran explores some of the overarching problems that for-profit companies might be able to solve in the future.
AI Assurance Tech Report Executive Summary: The executive summary of this market report defines the market opportunities for entrepreneurs and investors within AI safety.
Optional readings:
AI Assurance Tech Report: A report on how "AI assurance" (another word for AI safety) companies will make up a substantial market during the coming years.
[Talk] Towards AI Safety Infrastructure: A talk from Neoma Research CEO Paul Bricman from a previous hackathon about how to build out effective AI safety infrastructure using technical interventions.
[Video] Concrete Problems in AI Safety: This video from Rob Miles gives an overview of some of the risks early AI safety research thought would be impactful to solve.
[Paper] The Coming Technological Singularity: A very interesting 9-page report by Vernor Vinge from 1993 about key topics that are in the public discourse today regarding AI. See more texts like it at Reading What We Can.