Are you ready to reshape the future of alignment research? Join us for an exhilarating weekend at the Research Augmentation Hackathon, where we'll develop innovative tools and methods to accelerate progress in this critical field!
We're aiming to boost productivity in AI safety research by 5x or even 10x, making transformative changes in how alignment research is done today. Join us if you're an AI alignment researcher, a software engineer, a UX/UI designer, or simply passionate about contributing to the safety of artificial intelligence.
For AI safety and alignment research to keep up with the developments in other fields of AI, we need to improve the productivity and quality of research. The potential of AI to accelerate alignment research is immense but largely untapped. By creating tools that can augment human researchers, we can:
Successful research augmentation could lead to breakthroughs in AI alignment, yielding downstream insights that help safeguard humanity's future as AI systems become more advanced.
During this high-energy global hackathon, you'll:
We've identified several key challenges in AI alignment research that we'd like participants to address during this hackathon:
During this hackathon, it is important that we develop tools that are specifically useful to AI safety. With broad involvement from across the community, researchers and software engineers alike, we're hopeful that we can create something truly unique!
Our panel of expert judges will evaluate your projects based on:
Top teams will win a share of our $2,000 prize pool:
We hope these prizes help you get engaged with the field of AI safety.
The Research Augmentation Hackathon is a weekend-long event where you participate in teams of 1-5 people to create innovative tools and systems that boost productivity for AI alignment researchers. You'll submit your project by the end of the weekend.
These submissions will be judged by our panel of experts, with the chance to win up to $1,000!
You'll hear fascinating talks about real-world projects tackling research augmentation, get the opportunity to discuss your ideas with experienced mentors, and receive feedback from top-tier researchers in the field of AI alignment to further your exploration.
There are loads of reasons to join! Here are just a few:
Not at all! This can be your first foray into AI alignment and tool development. We welcome participants from diverse backgrounds - whether you're an AI researcher, a software engineer, a UX designer, or simply passionate about improving research processes. We provide code templates and ideas to kickstart your projects, and you'll be surprised what you can accomplish in just a weekend – especially with your new-found community!
Cam Tice, Recent Biology Graduate, attended the Deception Hackathon: "The Apart Hackathon was my first opportunity leading a research project in the field of AI safety. To my surprise, in around 40 hours of work I was able to put together a research team, robustly test a safety-centered idea, and present my findings to researchers in the field. This sprint has (hopefully) served as a launch pad for my career shift."
Fedor Ryzhenkov, AI Safety Researcher at Palisade Research, attended the Deception Hackathon: "The AI Deception Hackathon was my first hackathon, so it was very exciting. Winning it was also great, and I expect this to be a big thing on my resume until I get something bigger there."
Lexley Villasis, Director at Condor Global SEA, attended the AI X Democracy Hackathon: "The hackathon was definitely one of the best ways to start digging into AI safety research! The mentors, participants, and organizers were all so encouraging while engaging deeply with each other’s ideas. Would definitely recommend this as a fruitful, non-intimidating way to get up to speed with some frontier AI safety research in a single weekend! Really encouraged and excited to upskill further.”
Siddharth Reddy Bakkireddy, Research Participant, attended the Deception Hackathon: "Winning 3rd place at Apart Research's deception detection hackathon was a game-changer for my career. The experience deepened my passion for AI safety and resulted in a research project I'm proud of. I connected with like-minded individuals, expanding my professional network. This achievement will undoubtedly boost my prospects for internships and jobs in AI safety. I'm excited to further explore this field and grateful for the opportunity provided by Apart Research."
Beyond emphasizing concrete mitigation ideas for any risks your project surfaces, we are aware that projects emerging from this hackathon might pose a risk if disseminated irresponsibly. Therefore, all of Apart's research events and dissemination follow our Responsible Disclosure Policy.
To help you get started with your projects, we've compiled a list of relevant resources:
We encourage participants to familiarize themselves with these resources before the hackathon. Don't worry if you're new to some of these concepts – we'll have mentors available to help guide you through the process!
This hackathon will primarily focus on developing tools as VS Code extensions. This approach allows for better integration into researchers' existing workflows, minimizing context switching and maximizing adoption.
Why VS Code extensions?
For those new to developing VS Code extensions, here are some helpful resources to get you started:
We encourage participants to familiarize themselves with VS Code extension development before the hackathon. Don't worry if you're new to this – we'll have mentors available to help guide you through the process!
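To make the starting point concrete, here is a minimal sketch of an extension entry point in TypeScript. The command ID researchAugment.summarizeSelection, the file name, and the echo-the-selection behavior are purely illustrative placeholders, not part of any official template; the command ID must also be declared under "contributes.commands" in your extension's package.json.

```typescript
// extension.ts -- entry point for a minimal VS Code extension.
// NOTE: "researchAugment.summarizeSelection" is an illustrative command ID;
// it must match an entry under "contributes.commands" in package.json.
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext): void {
  const disposable = vscode.commands.registerCommand(
    'researchAugment.summarizeSelection',
    () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        vscode.window.showWarningMessage('Open a file and select some text first.');
        return;
      }
      const selectedText = editor.document.getText(editor.selection);
      // A real research-augmentation tool might send this text to a model
      // or analysis pipeline; here we just report its size as a placeholder.
      vscode.window.showInformationMessage(
        `Captured ${selectedText.length} characters for analysis.`
      );
    }
  );
  // Ensure the command is disposed when the extension is deactivated.
  context.subscriptions.push(disposable);
}

export function deactivate(): void {}
```

If you scaffold a project with the official Yeoman extension generator (yo code), you can swap the generated command handler for logic like this and press F5 to try it out in an Extension Development Host window.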
Here are some potential directions to spark your creativity:
You can also draw inspiration from:
We'll be hacking away at the LISA office all weekend. Come and join us!
Submit your project in the form below with your:
See all the entries for this hackathon here!