
AI Safety Entrepreneurship Hackathon

January 17, 2025, 6:00 PM to January 20, 2025, 3:00 AM (UTC)
This event is finished. It occurred between January 17, 2025 and January 20, 2025.

AI Safety Requires Ambition

We face critical technological problems in AGI deployment over the next three years: alignment, multi-agent risk, compute security, and exfiltration, among many others. Each of these problems deserves a competent team scaling science-informed solutions. This is where you come in!

Join us this weekend as we kick off an ambitious journey into AI safety and security with other aligned, talented individuals from both science and business. We aim to bring solution-oriented deep tech to AI safety. This is your chance to change the world.

This hackathon is made possible by the generous support of Lambda Labs, providing essential computing resources for AI safety innovation.

Why This Hackathon Matters:

The impact of real-world startups is immense and can be felt almost immediately. We need to push AI safety innovations toward real-world applications rapidly to ensure they are implemented and make a real difference in how AI technologies are built and deployed. This hackathon is not just about ideation; it's about taking that crucial first step from concept to a plan for action, setting the stage for future development.

What to Expect:

  • Collaborate with Like-Minded Innovators: Work with a diverse group of participants from both the science and business worlds. This is your chance to meet potential co-founders and collaborators who share your vision for the future of AI safety.
  • From Concept to Blueprint: Move beyond research papers and ideas. Develop tangible solutions that can be refined and scaled into full-fledged products or startups in the future.
  • Support from Experts: Receive mentorship from leading experts in AI safety, entrepreneurship, and technology commercialization. Learn what it takes to develop your ideas and how to plan for their success.
  • Real-World Impact: We're looking for solutions that can serve as the foundation for real-world applications. Whether it's a new tool for alignment, a product to enhance multi-agent safety, or a platform to secure AI infrastructure, your work here could have a direct impact on the safe development and deployment of AI.
  • Looking Ahead: As you work on your projects, keep in mind that this is just the beginning. We're in the process of preparing an incubator program, Pathfinders, designed to provide even more support for the most promising teams.
  • Computing Resources: Thanks to the generous support of Lambda Labs, each team will receive $400 in compute credits to build and test their solutions.

🙋‍♀️ FAQ

What will I submit during this weekend?

You will submit a 4-page white paper that describes how your research innovation solves a key real-world problem in a multi-agent superintelligence future, whatever technology it is based on. If your research truly solves a real-world problem, commercial viability becomes a question of engineering, and you can ignore it for this hackathon.

How will I find a team?

Before the hackathon, we have brainstorming sessions where we will collaborate to figure out the highest impact research ideas to work on. We highly recommend that you connect with others during these sessions and coordinate around specific categories of ideas that excite you. We try to set the stage by defining the problems we need to solve and let you take charge on the idea generation process.

What is the Seldon accelerator program?

It is an incubation program for bringing technical research into the real world through for-profit organizations that aim for large-scale impact. We're currently developing it in collaboration with our community and would love to hear your ideas, thoughts, and ways we can make it a success with you!

How can I win the prizes?

We have a prize pool of $2,000 to share with you! Reviewers with established backgrounds in research-focused startups will evaluate your project on three criteria:

  • Research quality & scalability: What is the quality of the technical innovation? Will the research idea scale into an impactful intervention in the real world?
  • AI safety: How strong are the arguments that this technical idea solves a key future challenge in AI risk? Have the authors described the threat model they aim to address with this project?
  • Methods: Will this intervention actually work? Have the authors explored an example situation where the intervention solves the problem?

Top teams will win a share of our $2,000 prize pool:

  • 🥇 1st place: $1,000
  • 🥈 2nd place: $600
  • 🥉 3rd place: $300
  • 🎖️ 4th place: $100

Additionally, top entrepreneurs might receive the opportunity to:

  • 🏆 Join the Seldon entrepreneur-in-residence program

Also, all participating teams receive:

  • $400 in Lambda Labs compute credits for development and testing

What kind of projects are suitable for this hackathon?

Projects that address AI safety challenges through technical solutions that could be developed into startups. Examples include monitoring systems, evaluation frameworks, safety layers for AI deployment, and infrastructure security solutions.
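
To make one of these categories concrete, here is a minimal, hypothetical sketch of a "safety layer": a wrapper that screens prompts against a policy and writes an audit log before calling an arbitrary model. The blocklist, `model_fn` stub, and logging sink are illustrative placeholders, not a real product design.

```python
import json
import time
from typing import Callable

BLOCKED_TERMS = ["forbidden-topic"]  # placeholder policy, not a real ruleset

def safety_layer(model_fn: Callable[[str], str], prompt: str) -> str:
    """Screen the prompt, call the model, and emit an audit record."""
    record = {"ts": time.time(), "prompt": prompt}
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        record["action"] = "refused"
        print(json.dumps(record))  # stand-in for a real audit sink
        return "Request refused by safety policy."
    output = model_fn(prompt)
    record["action"] = "allowed"
    print(json.dumps(record))
    return output

# Usage with a stub model:
print(safety_layer(lambda p: f"echo: {p}", "hello"))
```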

What should the final submission include?

A 4-page white paper that describes:

  • The problem and why it's important
  • Your proposed solution
  • Pilot experiments or demo
  • Implementation risks and mitigation strategies

Do I need experience in AI safety to join?

Not at all! This can be an occasion for you to learn more about AI safety and entrepreneurship. We provide code templates and ideas to kickstart your project, mentors to give feedback, and a great community of interested researchers and developers to review your work.

Can I work on an idea similar to existing companies?

Yes, you can work on similar ideas if you can demonstrate unique value or improvements. The focus should be on solving real problems rather than just competing with existing solutions.

How do office hours work?

Office hours are scheduled throughout the hackathon where participants can get feedback from industry experts and mentors on their projects.

Join Us

This hackathon is for anyone who is passionate about AI safety and entrepreneurship. Whether you're an AI researcher, developer, entrepreneur, or simply someone with a great idea, we invite you to be part of this ambitious journey. Together, we can build the tools and products needed to ensure that AI is developed and deployed safely.

Let's turn cutting-edge research into actionable solutions. Let's build the future of AI safety, together.

Speakers & Collaborators

Finn Metz

Finn is a core member of Apart and heads strategy and business development, with a background in private equity, incubation, and venture capital.
Organizer

Nick Fitz

Founder and Managing Partner at Juniper Ventures, a VC firm focused on existential risk reduction from AI. Advisor at Apart.
Judge

Griff Bohm

Behavioral psychologist turned VP of Growth at Momentum, a leading AI-focused Blackbaud partner. Winner of Blackbaud's 2022 startup showcase.
Judge

Kristian Rönn

Kristian is a co-author of the AI Assurance Technology Report, a founder of multiple startups, and the author of The Darwinian Trap, a new book on how humanity can take the right path.
HackTalk Speaker

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Organizer

Natalia Pérez-Campanero Antolín

A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Judge

Michael Trazzi

AI researcher turned filmmaker. Founded The Inside View, producing AI documentaries. Ex-FHI, now directing a film on California's AI bill SB 1047. Combines technical AI expertise with storytelling.
Judge

Shivam Raval

Harvard PhD candidate in physics & AI safety. Founded Curvature to translate AI research into governance frameworks. Previously researched at Yale and Columbia.
Judge

Edward Yee

Edward Yee is the Head of Strategic Projects at FAR AI, a research organization dedicated to ensuring AI systems are trustworthy and beneficial to society.
Judge

Esben Kran

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Organizer

Jaime Raldua

Jaime has 8+ years of experience in the tech industry. He started his own data consultancy to support EA organizations and currently works at Apart Research as a Research Engineer.
Organizer and Judge

Pablo S.

Product leader with 9+ years at Amazon, now building applied AI solutions. Brings deep technical expertise in scaling AI products and systems.
Judge

AI Assurance Technology Report

We are happy to invite Kristian Rönn as a speaker at this event. He is the co-founder of a new stealth startup, former founder of a 200-person company for carbon emission governance, and a co-author of the AI Assurance Technology Report.

We highly recommend skimming at least the executive summary of the report.

Useful articles

Hackathon for Technical AI Safety Startups September 2024

Keynote talk & HackTalk:


This talk by Y Combinator startup founder Lukas Petersson provides invaluable insights into building AI safety startups, covering crucial aspects like market validation, scaling challenges, and balancing profit with impact. He shares practical advice on customer acquisition, pricing strategies for small markets, and navigating the unique challenges of AI safety businesses. Particularly valuable for hackathon participants is his discussion of turning theoretical safety concepts into viable businesses while maintaining mission alignment.


This keynote from Esben Kran provides essential strategic context for building AI safety startups, explains why 2024 is a critical time for development, and outlines promising areas like security interventions, automated evaluations, and infrastructure solutions. The talk offers valuable insights into balancing profit with impact, navigating governance challenges, and leveraging the uniquely collaborative AI safety community while building technically ambitious solutions.

Winning Projects and what we can learn from them:

AI Safety Collective (3rd Place)

  • Created a platform for crowdsourcing solutions to AI safety challenges
  • Bridges the gap between industry AI safety demands and untapped talent
  • Uses a bounty system similar to cybersecurity platforms
  • Particularly valuable for addressing immediate industry safety needs while building towards more critical challenges

Simulation Operators (2nd Place)

  • Developed simulation platforms for testing and verifying robot safety
  • Focuses on cyber-physical systems and real-world AI deployment
  • Creates tools for human supervision of AI agents
  • Addresses the growing need for safety verification in robotics integration

AI Identity Systems (4th Place)

  • Built cryptographic identity system for AI models
  • Enables traceability and accountability of AI actions
  • Works for both self-hosted and server-side models
  • Uses innovative cryptographic techniques for verification (see the sketch below)
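
As a rough illustration of the signing approach, here is a minimal sketch. This is our own assumption about how output provenance could work, not the winning team's actual scheme, and it relies on the third-party `cryptography` package: the model operator signs each output with an Ed25519 key so downstream consumers can verify its origin.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

model_key = Ed25519PrivateKey.generate()   # held privately by the model operator
public_key = model_key.public_key()        # published so anyone can verify

output = b"model-generated text"
signature = model_key.sign(output)         # shipped alongside the output

try:
    public_key.verify(signature, output)   # raises if output or signature changed
    print("output verified as coming from this model's key")
except InvalidSignature:
    print("verification failed")
```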

Dark Forest (1st Place)

  • Created system for defending web authenticity
  • Uses graph neural networks to detect AI-generated content (see the sketch after this list)
  • Addresses content authenticity crisis and platform trust
  • Innovative approach to maintaining open discourse online
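
As a loose sketch of the detection idea (our illustration in plain PyTorch, assuming nothing about the team's actual stack), a single graph-convolution pass over a content-sharing graph could classify each post as human- or AI-generated:

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """One-hidden-layer graph convolution: aggregate neighbors, then classify."""
    def __init__(self, in_dim: int, hidden: int, n_classes: int = 2):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj @ self.lin1(x))  # mix each node with its neighbors
        return self.lin2(adj @ h)           # per-node logits: human vs. AI

# Toy usage: 4 posts with 8 features each and a random sharing graph
x = torch.randn(4, 8)
a = torch.eye(4) + (torch.rand(4, 4) > 0.5).float()  # adjacency + self-loops
adj = a / a.sum(dim=1, keepdim=True)                 # row-normalize
logits = TinyGCN(8, 16)(x, adj)
print(logits.shape)  # torch.Size([4, 2])
```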

Compute Resources from Lambda Labs

Thanks to the generous support of Lambda Labs, each participating team will receive $400 in compute credits.

Lambda Labs Technical Guides

See the updated calendar and subscribe

The schedule runs from 4 PM UTC Friday to 3 AM UTC Monday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public iCal here. You will also find Explorer events, such as collaborative brainstorming and team match-making before the hackathon begins, on Discord and in the calendar.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in person and connect with others in your area.

AI Safety Entrepreneurship Hackathon

Join us at the EA Hotel for the AI Safety Entrepreneurship Hackathon. Free accommodation, food, and co-working stations provided! We're located at: 36 York Street, Blackpool, FY1 5AQ


AI Safety Entrepreneurship Hackathon

LISA, 25 Holywell Row, London, EC2A 4XE


AI Safety Startup Hackathon: EPFL hub

A local hub for the hackathon on the EPFL campus (Luma link coming soon). We will provide a room, snacks, and drinks for participants.


🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


Use this template for your submission [Required]

Template in Overleaf/LaTeX

Submission Requirements

Each team should submit:

  1. Project White Paper (4 pages max)
  2. Overview (250 words max)
  3. Link to prototype/demo (if applicable)
  4. Pitch deck (optional)

Required components:

  1. Title and team members
  2. Problem Overview (max 250 words)
  3. Your solution
  4. Pilot experiment or demo
  5. Process
  6. AI Safety Impact & Risk Analysis
  7. Appendix

Additionally, teams should provide:

  • Link to GitHub repository or technical documentation
  • Demo/prototype materials (if available)

Evaluation Criteria

Submissions will be judged based on:

  1. Research Quality & Scalability (33%)
  • Innovation and technical merit of the proposed solution
  • Potential for real-world impact and scalability
  • Clarity and depth of technical implementation
  • Strength of preliminary results/prototype
  • Market validation and implementation strategy
  2. AI Safety Impact (33%)
  • Clear articulation of the AI safety challenge being addressed
  • Robustness of the threat model and risk analysis
  • Quality of safety guarantees and verification approach
  • Evidence of solution effectiveness
  • Long-term impact on AI system safety
  3. Technical Implementation/Methodology (33%)
  • Demonstration of working solution/prototype
  • Thoroughness of testing and validation
  • Quality of empirical evidence
  • Documentation and reproducibility
  • Practical feasibility assessment

Frequently Asked Questions

General Questions

Q: What is the expected team size?
A: Teams should be 4-5 people. While solo submissions are accepted, team participation is encouraged for better project outcomes.

Q: Do we need a working prototype?
A: No, but you need a clear technical specification and implementation plan. A prototype/demo will strengthen your submission.

Submission Process

Q: Can we submit multiple ideas?
A: Each team can only submit one project. Choose your strongest idea that balances AI safety impact and business potential.

Q: What if our solution spans multiple AI safety areas?
A: That's fine! Clearly explain how your solution addresses each area and why this comprehensive approach is beneficial.

Q: How do we handle intellectual property?
A: You retain your IP rights. The submission only grants Apart Research the right to review and evaluate your project.

Support & Resources

Q: How can we get help during the hackathon?
A: You can:

  1. Join office hours
  2. Access mentor support
  3. Use the Discord help channel
  4. Attend hack talks
  5. Participate in brainstorming sessions

Need more information? Contact the organizing team at sprints@apartresearch.com

AI Risk Management Assurance Network (AIRMAN)
The AI Risk Management Assurance Network (AIRMAN) addresses a critical gap in AI safety: the disconnect between existing AI assurance technologies and standardized safety documentation practices. While the market shows high demand for both quality/conformity tools and observability/monitoring systems, currently used solutions operate in silos, offsetting risks of intellectual property leaks and antitrust action at the expense of risk management robustness and transparency. This fragmentation not only weakens safety practices but also exposes organizations to significant liability risks when operating without clear documentation standards and evidence of reasonable duty of care.

Our solution creates an open-source standards framework that enables collaboration and knowledge-sharing between frontier AI safety teams while protecting intellectual property and addressing antitrust concerns. By operating as an OASIS Open Project, we can provide legal protection for industry cooperation on developing integrated standards for risk management and monitoring. The AIRMAN is unique in three ways: First, it creates a neutral, dedicated platform where competitors can collaborate on safety standards. Second, it provides technical integration layers that enable interoperability between different types of assurance tools. Third, it offers practical implementation support through templates, training programs, and mentorship systems.

The commercial viability of our solution is evidenced by strong willingness-to-pay across all major stakeholder groups for quality and conformity tools. By reducing duplication of effort in standards development and enabling economies of scale in implementation, we create clear value for participants while advancing the critical goal of AI safety.
Aidan Kierans
January 19, 2025