
Hackathon for Technical AI Safety Startups

August 30, 2024, 4:00 PM to September 2, 2024, 3:00 AM (UTC)
This event is finished. It occurred between August 30, 2024 and September 2, 2024.

AI Safety Requires Ambition

We are facing critical technological problems in AGI deployment over the next three years: alignment, multi-agent risk, compute security, and exfiltration, among many others. Each of these problems deserves a competent team to build and scale science-informed solutions. This is where you come in!

Join us this weekend as we kick off an ambitious journey into AI safety and security with other aligned, talented individuals from both science and business. We aim to bring solution-oriented deep tech to AI safety. This is your chance to change the world.

Why This Hackathon Matters:

The impact of real-world startups is immense and can be felt almost immediately. We need to push AI safety innovations toward real-world applications rapidly to ensure they are implemented and make a real difference in the application and deployment of AI technologies. This hackathon is not just about ideation; it's about taking that crucial first step from concept to a plan for action, setting the stage for future development.

What to Expect:

  • Collaborate with Like-Minded Innovators: Work with a diverse group of participants from both the science and business worlds. This is your chance to meet potential co-founders and collaborators who share your vision for the future of AI safety.
  • From Concept to Blueprint: Move beyond research papers and ideas. Develop tangible solutions that can be refined and scaled into full-fledged products or startups in the future.
  • Support from Experts: Receive mentorship from leading experts in AI safety, entrepreneurship, and technology commercialization. Learn what it takes to develop your ideas and how to plan for their success.
  • Real-World Impact: We're looking for solutions that can serve as the foundation for real-world applications. Whether it's a new tool for alignment, a product to enhance multi-agent safety, or a platform to secure AI infrastructure, your work here could have a direct impact on the safe development and deployment of AI.
  • Looking Ahead: As you work on your projects, keep in mind that this is just the beginning. We're in the process of preparing an incubator program, Pathfinders, designed to provide even more support for the most promising teams.

🙋‍♀️ FAQ

What will I submit during this weekend?

You will submit a 4-page white paper that describes how your research innovation solves a key real-world problem in a multi-agent superintelligence future, whatever technology it is based on. If your research truly solves a real-world problem, commercial viability becomes a question of engineering, so you can set it aside for this hackathon.

How will I find a team?

Before the hackathon, we host brainstorming sessions where we collaborate to figure out the highest-impact research ideas to work on. We highly recommend that you connect with others during these sessions and coordinate around specific categories of ideas that excite you. We try to set the stage by defining the problems we need to solve and let you take charge of the idea-generation process.

What is the Pathfinders program?

It is an incubation program for bringing technical research into the real world through for-profit organizations that can make a large impact. We're currently developing this in collaboration with our community and would love to hear your ideas, thoughts, and ways we can make it a success with you!

How can I win the prizes?

We have a prize pool of $2,000 to share with you! Reviewers with established backgrounds in research-focused startups will evaluate your project on three criteria:

  • Research quality & scalability: What is the quality of the technical innovation? Will the research idea scale into an impactful intervention in the real world?
  • AI safety: How strong are the arguments that this technical idea solves a key future challenge in AI risk? Have the authors described the threat model they aim to address with this project?
  • Methods: Will this intervention really work? Have the authors explored an example situation where this intervention will solve the problem?

Top teams will win a share of our $2,000 prize pool:

🥇 1st place: $1,000

🥈 2nd place: $600

🥉 3rd place: $300

🎖️ 4th place: $100

Do I need experience in AI safety to join?

Not at all! This can be an occasion for you to learn more about AI safety and entrepreneurship. We provide code templates and ideas to kickstart your projects, mentors to give feedback, and a great community of interested researchers and developers to review your work.

Join Us:

This hackathon is for anyone who is passionate about AI safety and entrepreneurship. Whether you're an AI researcher, developer, entrepreneur, or simply someone with a great idea, we invite you to be part of this ambitious journey. Together, we can build the tools and products needed to ensure that AI is developed and deployed safely.

Let's turn cutting-edge research into actionable solutions. Let's build the future of AI safety, together.

Speakers & Collaborators

Lukas Petersson

Lukas recently founded vectorview, a model evaluations company working to ensure the safety of AGI. Together with his cofounder, he went through the Y Combinator program.
HackTalk Speaker

Esben Kran

As the co-director of Apart Research, Esben works on increasing research capability in AI safety along with AI security research.
Speaker and co-organizer

Nick Fitz

Founder and MP at Juniper Ventures, a VC focused on existential risk reduction from AI. Advisor at Apart.
Judge

Minh Nguyen

Minh has developed AI voice model products with a million users per month and is now doing product at Hume AI.
Reviewer

Rudolf Laine

Def/acc EF cohort member. Author of the situational awareness benchmark and an independent AI safety researcher working with Owain Evans.
Speaker & Judge

Fazl Barez

Co-founder of a recent high-profile AI safety startup and research advisor at Apart Research.
Reviewer

Jonas Vollmer

Has advised and overseen $50M of impact investments and grants related to AI safety as board member of Polaris Ventures and director of EA Funds.
Reviewer

Finn Metz

Finn is a core member of Apart and heads strategy and business development, with a background in private equity, incubation, and venture capital.
Organizer

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Organizer

Natalia Pérez-Campanero Antolín

A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Judge

Jason Schreiber

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Organizer

    Core readings (1 hour):

    • Some for-profit AI alignment org ideas: An exploration of potential technical AI safety for-profit ideas from Goodfire's co-founder and CEO, Eric Ho. Goodfire recently raised $7M to bring mechanistic interpretability into the commercial domain.
    • For-profit AI Safety: This post from Apart co-director Esben Kran explores some of the overarching problems that for-profit companies might be able to solve in the future.
    • AI Assurance Tech Report Executive Summary: The executive summary of this market report defines the market opportunities for entrepreneurs and investors within AI safety.


    📍 Registered jam sites

    Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

    London Technical AI Safety Startups Hackathon

    Join us at LISA in London, hosted by Safe AI London


    AI Safety Deep Tech Startup Hackathon

    TBD


    🏠 Register a location

    The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


    [Required] Use this template for your submission

    Your project will be reviewed on the quality of your White Paper submission and any demonstration links you include. If you prefer to send in your project but keep it private, we can accommodate this.

    Template for your submission during the weekend

    See all entries here. Please write to us if you prefer your entry to stay private.

    CAMARA: A Comprehensive & Adaptive Multi-Agent framework for Red-Teaming and Adversarial Defense
    The CAMARA project presents a cutting-edge, adaptive multi-agent framework designed to significantly bolster AI safety by identifying and mitigating vulnerabilities in AI systems such as Large Language Models. As AI integration deepens across critical sectors, CAMARA addresses the increasing risks of exploitation by advanced adversaries. The framework utilizes a network of specialized agents that not only perform traditional red-teaming tasks but also execute sophisticated adversarial attacks, such as token manipulation and gradient-based strategies. These agents collaborate through a shared knowledge base, allowing them to learn from each other's experiences and coordinate more complex, effective attacks. By ensuring comprehensive testing of both standalone AI models and multi-agent systems, CAMARA targets vulnerabilities arising from interactions between multiple agents, a critical area often overlooked in current AI safety efforts. The framework's adaptability and collaborative learning mechanisms provide a proactive defense, capable of evolving alongside emerging AI technologies. Through this dual focus, CAMARA not only strengthens AI systems against external threats but also aligns them with ethical standards, ensuring safer deployment in real-world applications. It has a high scope of providing advanced AI security solutions in high-stake environments like defense and governance.
    Vishnu Vardhan Lanka, Era Sarda, Raghav Ravishankar
    September 1, 2024

    Sent Friday, August 30

    We're excited to see you in an hour!

    Get ready to make an impact on AI safety with technical startups! The keynote is happening in just one hour, where you'll hear Esben Kran talk about why startups can be an impactful way to make real-world AI systems safer for humanity. Afterwards, you'll get an overview of the logistics for the weekend.

    Remember, you can connect with other participants and check out exciting ideas from our brainstorming at #💡projects│teams. Additionally, the submission template is now released!

    We're excited and thankful to welcome our collaborators and reviewers from Juniper Ventures, vectorview, Momentum, Hume AI, Polaris Ventures, LISA, EA Tech London, AI42, and others to help us make this a great time for all of you!

    Find the event on Discord here: https://discord.gg/dpHY8QMJBr?event=1272853054718218282

    Sent Thursday, August 29

    🥳 We’re excited to welcome you for the technical AI safety startups hackathon this weekend! Here’s your short overview of the latest resources and topics to get you ready for the weekend.

    In just two and a half hours, we will hear from published researcher and recent def/acc entrepreneur Rudolf Laine before we begin our brainstorming session. Join us on Discord here!

    When we kick off tomorrow with the keynote, we're delighted to present our co-director Esben Kran who has co-founded multiple startups besides Apart Research and published papers in AI safety. He will inspire you with a talk about how research in this field can scale with startups and what is necessary to take the journey there!

    Before we start this weekend, we highly recommend that you check out:

    • 💛 All the gold that is happening in the #💡project | ideas forum on Discord, where you can find team members, post your best ideas for this weekend, and connect with others.
    • 🤓 The Resources page that includes readings to get you started on your journey

    🚀 Before the hackathon

    To make your weekend productive and exciting, we recommend that you take these two steps before we begin:

    1. 📚 Read or skim through the resources above (30-60 minutes)
    2. 💡 Create, select, or familiarize yourself with some ideas that you would be interested in implementing over the weekend and find teammates
      • You are very welcome to join our brainstorming session today. We already had one on Tuesday where fascinating ideas were shared!
      • Post your idea to the #💡project | ideas forum to find other brilliant minds to collaborate with, discuss your ideas, and get feedback from our mentors

    🏆 Prizes

    For more information about the prizes and judging criteria for this weekend, jump to the overview section. TL;DR: your projects will be judged by our brilliant panel and will receive constructive feedback. Next Thursday, we will have the Grand Finale, where the winning teams will present their projects and everyone is welcome to join. These are the prizes:

    • 🥇 $1,000 to the top project
    • 🥈 $600 to the second place
    • 🥉 $300 to the third place
    • 🏅 $100 to the fourth place team

    🙋‍♀️ Questions

    You will undoubtedly have questions that you need answered. Remember that the #❓help-desk channel is always available and that the organizers will be available there.

    You might also ask yourself, "I'm not sure what types of ideas are relevant." If you are, think of the following criteria:

    • Directly affects the safety of AI models, agents, or systems: Your idea is technical or research-based and intervenes in the future deployment of AI in a way that improves safety on a first-order basis.
    • Solves a big problem coming up in 1-3 years: Did you find a problem that we would actually expect to emerge? And is your idea really solving this problem?

    ✊ Let’s go!

    We really look forward to the weekend and we’re excited to welcome you tomorrow with all our amazing collaborators!

    Remember that this is an exciting opportunity to connect with others and develop meaningful ideas in AI safety. We’re all here to help each other succeed on this remarkable journey.

    We’ll see you there, research hackers!