
Research Augmentation Hackathon: Supercharging AI Alignment

July 26, 2024, 4:00 PM to July 29, 2024, 3:00 AM (UTC)
This event is finished. It occurred between July 26, 2024 and July 29, 2024.

Join us to revolutionize AI safety research

Are you ready to reshape the future of alignment research? Join us for an exhilarating weekend at the Research Augmentation Hackathon, where we'll develop innovative tools and methods to accelerate progress in this critical field!

We're aiming to boost productivity in AI safety research by 5x or even 10x, making transformative changes in how alignment research is done today. Join us if you're an AI alignment researcher, a software engineer, a UX/UI designer, or simply passionate about contributing to the safety of artificial intelligence.

Why research augmentation matters

For AI safety and alignment research to keep up with the developments in other fields of AI, we need to improve the productivity and quality of research. The potential of AI to accelerate alignment research is immense but largely untapped. By creating tools that can augment human researchers, we can:

  • Dramatically speed up literature reviews and hypothesis generation in a pre-paradigmatic field
  • Automate routine tasks, freeing researchers to focus on creative problem-solving
  • Identify cross-disciplinary connections that humans might miss
  • Scale up experimental design and data analysis for alignment-specific challenges

Successful research augmentation could lead to breakthroughs in AI alignment, yielding downstream insights that safeguard the future of humanity as AI systems become more advanced.

What to expect

During this high-energy global hackathon, you'll:

  • Collaborate with diverse teams of innovators, researchers, and engineers
  • Gain insights from keynote speakers at the forefront of AI alignment and research tool development
  • Develop prototypes for AI-powered research tools, primarily as VS Code extensions
  • Tackle real research challenges provided by AI alignment organizations
  • Network with potential collaborators, employers, and investors in the AI alignment sector

Problem Statement

We've identified several key challenges in AI alignment research that we'd like participants to address during this hackathon:

  1. Proactive insight extraction from new research: How might we design an AI research assistant that proactively looks at new and existing papers and shares valuable information with researchers in a naturally consumable way? The goal is to present researchers with personally valuable insights without overwhelming them.
  2. Improving the LLM experience for researchers: Many alignment researchers underutilize language models due to various bottlenecks. How can we make LLMs more useful by addressing issues such as prompt creation, project context, and keeping models up-to-date on the latest techniques within AI safety? (See the context-injection sketch after this list.)
  3. Accelerating the transition from initial experiments to full projects: How can we help researchers move more quickly from initial 24-hour experiments to complete sets of experiments tested with different models, datasets, and interventions?
  4. Using AI agents to automate alignment research: As AI agents become more capable, how can we leverage them to speed up alignment research or unlock previously inaccessible research paths?
  5. Nudging research toward better objectives: How can we ensure that researchers are working on the most valuable things and choosing the right projects and next steps throughout their research process?
  6. Accelerating implementation and iteration speed: How can we help researchers gain the most information in the shortest time, avoid tunnel vision and make faster progress?
  7. Connecting ideas in the field: How can we integrate open questions and projects in the field to help researchers develop well-grounded research directions faster and adjust throughout their research?
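To make challenge 2 concrete, here is a minimal sketch of injecting project context into an LLM call, using OpenAI's public chat completions endpoint. The model name, prompt wording, and 4,000-character context cap are illustrative assumptions, not a prescribed setup.

```typescript
// Minimal sketch (challenge 2): wrap an LLM call with project context so the
// model sees the researcher's current files. Model, prompt, and context cap
// are illustrative assumptions.
import { readFileSync } from "fs";

async function askWithProjectContext(
  question: string,
  contextFiles: string[]
): Promise<string> {
  // Concatenate a bounded amount of project context into the system prompt.
  const context = contextFiles
    .map((path) => `--- ${path} ---\n${readFileSync(path, "utf8").slice(0, 4000)}`)
    .join("\n");

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption: any capable chat model works here
      messages: [
        {
          role: "system",
          content: `You are an AI safety research assistant. Project context:\n${context}`,
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```

A real tool would also handle retrieval (choosing which files fit the context window) and caching, which is where much of the researcher-facing value lies.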

It is important that the tools we develop during this hackathon are specifically useful to AI safety. With strong involvement from everyone in the community, researchers and software engineers alike, we're hopeful that we can create something truly unique!

Judging criteria

Our panel of expert judges will evaluate your projects based on:

  • Research Impact: How significantly does your project accelerate research processes or enhance researcher capabilities? Have you embedded AI and technology into the research process in a novel way?
  • AI Safety: Does your tool benefit from focusing on the niche of AI safety and alignment? Is it tailored to tasks specific to AI safety, leveraging the particulars of the field compared to other disciplines?
  • Tool Quality: How intuitive and researcher-friendly is your AI assistant? How well-designed is it? Does it cover all the described use cases?

Top teams will win a share of our $2,000 prize pool:

  • 🥇 1st place: $1,000
  • 🥈 2nd place: $600
  • 🥉 3rd place: $300
  • 🎖️ 4th place: $100

We're excited for these prizes to help you get engaged with the field of AI safety.

What is a research augmentation hackathon?

The Research Augmentation Hackathon is a weekend-long event where you participate in teams (1-5 people) to create innovative tools and systems that boost productivity for AI alignment researchers. You'll submit:

  • a working prototype (primarily as a VS Code extension)
  • a brief report summarizing your project
  • a 5-minute video demonstration of how your tool works, recorded with Loom (the free tier allows 5-minute videos) or any other recording software

These submissions will be judged by our panel of experts, with the chance to win up to $1,000!

You'll hear fascinating talks about real-world projects tackling research augmentation, get the opportunity to discuss your ideas with experienced mentors, and receive feedback from top-tier researchers in the field of AI alignment to further your exploration.

Why should I join?

There are loads of reasons to join! Here are just a few:

  • Experience firsthand how AI can revolutionize alignment research
  • Network with people passionate about AI safety and research productivity
  • Win up to $1,000 to support your future projects
  • Gain practical experience in developing AI-powered research tools
  • Showcase your skills to AI safety labs, potentially opening up amazing job opportunities
  • Receive a certificate of participation
  • Get proof of your innovative work to support future grant applications
  • The best teams may be invited to further programs to develop their tools
  • And many more... Come along!

Do I need experience in AI alignment or tool development to join?

Not at all! This can be your first foray into AI alignment and tool development. We welcome participants from diverse backgrounds - whether you're an AI researcher, a software engineer, a UX designer, or simply passionate about improving research processes. We provide code templates and ideas to kickstart your projects, and you'll be surprised what you can accomplish in just a weekend – especially with your new-found community!

What are previous experiences from similar hackathons?

Cam Tice, Recent Biology Graduate, attended the Deception Hackathon: "The Apart Hackathon was my first opportunity leading a research project in the field of AI safety. To my surprise, in around 40 hours of work I was able to put together a research team, robustly test a safety-centered idea, and present my findings to researchers in the field. This sprint has (hopefully) served as a launch pad for my career shift."

Fedor Ryzhenkov, AI Safety Researcher at Palisade Research, attended the Deception Hackathon: "AI Deception Hackathon has been my first hackathon, so it was very exciting. To win it was also great, and I expect this to be a big thing on my resume until I get something bigger there."

Lexley Villasis, Director at Condor Global SEA, attended the AI X Democracy Hackathon: "The hackathon was definitely one of the best ways to start digging into AI safety research! The mentors, participants, and organizers were all so encouraging while engaging deeply with each other's ideas. Would definitely recommend this as a fruitful, non-intimidating way to get up to speed with some frontier AI safety research in a single weekend! Really encouraged and excited to upskill further."

Siddharth Reddy Bakkireddy, Research Participant, attended the Deception Hackathon: "Winning 3rd place at Apart Research's deception detection hackathon was a game-changer for my career. The experience deepened my passion for AI safety and resulted in a research project I'm proud of. I connected with like-minded individuals, expanding my professional network. This achievement will undoubtedly boost my prospects for internships and jobs in AI safety. I'm excited to further explore this field and grateful for the opportunity provided by Apart Research."

What if my research seems too risky to share?

While we emphasize including concrete mitigation ideas for any risks presented, we are aware that projects emerging from this hackathon might pose a risk if disseminated irresponsibly. Therefore, for all of Apart's research events and dissemination, we follow our Responsible Disclosure Policy.

Speakers & Collaborators

Jacques Thibodeau

Jacques is currently working to reduce risks from superintelligent AGI as an AI alignment researcher and by engineering new ways to assist the AI safety research process.
Keynote Speaker and Judge

Esben Kran

Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Organizer

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Organizer

Jamie Joyce

Foresight AI grantee Jamie Joyce is running a project that aims to develop autonomous research agents and enhance automated debate mapping at the intersection of AI safety.
Judge

Jonny Spicer

Jonny is a software engineer currently based in London with experience across a wide range of stacks and languages. He is interested in AI and Effective Altruism.
Judge and Reviewer

Jason Schreiber

Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Organizer

Natalia Pérez-Campanero Antolín

A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Judge

Marc Carauleanu

Marc is an AI Safety Researcher at AE Studio, where he leads the development of a neglected alignment agenda called Self-Other Overlap. He co-authored AE Studio's alignment agenda.
Judge

📚 Resources

To help you get started with your projects, we've compiled a list of relevant resources:

Required reading

Optional research articles

  • AE and AI Alignment - An example of a company's approach to AI alignment research
  • Elicit.org guide - Learn how this AI-powered academic search engine works
  • Check out the Cyborgism post to understand the theory behind augmenting human intelligence with AI

Additional resources

  • Loom - A tool for exploring language model outputs in a tree structure
  • Semantic Scholar API - Access academic papers and metadata for your projects
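As a starting point for working with the Semantic Scholar API, here is a minimal query sketch. The endpoint and field names follow the public /graph/v1/paper/search API; the example query and result typing are illustrative.

```typescript
// Minimal sketch: query the Semantic Scholar Graph API for papers.
interface Paper {
  title: string;
  abstract: string | null;
  url: string;
}

async function searchPapers(query: string): Promise<Paper[]> {
  const params = new URLSearchParams({
    query,
    fields: "title,abstract,url",
    limit: "10",
  });
  const res = await fetch(
    `https://api.semanticscholar.org/graph/v1/paper/search?${params}`
  );
  if (!res.ok) throw new Error(`Semantic Scholar request failed: ${res.status}`);
  const body = await res.json();
  return body.data as Paper[]; // matching papers live under `data`
}

// Example usage:
// searchPapers("AI alignment interpretability").then((papers) =>
//   papers.forEach((p) => console.log(p.title))
// );
```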

AI alignment research tools and concepts

  • Efficiency buttons: Consider implementing shortcuts for common tasks like explaining jargon, finding relevant papers, or breaking down complex math and code.
  • Jargon detector: Develop a system to automatically identify and explain field-specific terminology (see the sketch after this list).
  • Research idea generator: Explore ways to use AI to generate and critique research ideas in alignment.
  • Math helper: Implement features to convert whiteboard math to LaTeX, explain mathematical concepts, and provide prerequisites for understanding complex papers.
  • Experimental methods designer: Methodology is always important in supporting research ideas, and generating statistical models and ways of constructing them can make or break a paper.
  • Coding assistant: Focus on alignment-specific coding tasks, such as setting up evals, interpretability tools, or automating mundane tasks.
  • Critique helper: Design a system to help researchers critique alignment plans and iterate on their own ideas faster.
  • Collaborator finder: Create a feature to suggest potential collaborators based on shared research interests.
  • Literature review assistant: Develop tools to automatically extract key insights from papers based on specific research questions.
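To illustrate the jargon detector idea above, here is a minimal sketch that matches text against a hand-written glossary. The glossary entries are illustrative placeholders; a real tool might generate or refine them with an LLM.

```typescript
// Minimal jargon-detector sketch: scan text for known field-specific terms
// and return explanations. Glossary entries are illustrative placeholders.
const GLOSSARY: Record<string, string> = {
  RLHF: "Reinforcement learning from human feedback, used to fine-tune models on human preferences.",
  "mesa-optimizer": "A learned model that is itself an optimizer, possibly with objectives differing from the training objective.",
  "red teaming": "Adversarial probing of a model to surface unsafe or unintended behavior.",
};

function detectJargon(text: string): { term: string; explanation: string }[] {
  const hits: { term: string; explanation: string }[] = [];
  const lower = text.toLowerCase();
  for (const [term, explanation] of Object.entries(GLOSSARY)) {
    // Case-insensitive substring match; a real tool would tokenize properly.
    if (lower.includes(term.toLowerCase())) {
      hits.push({ term, explanation });
    }
  }
  return hits;
}

// Example: detectJargon("We apply RLHF after red teaming the base model.");
```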

Design principles

  • Reducing cognitive load: Focus on allowing researchers to dedicate more mental resources to important tasks by automating or simplifying routine work.
  • Promoting flow state: Design your tool to keep researchers in a high-quality state of cognition for extended periods.
  • Leveraging AI strengths: Build features that play to the current strengths of language models, such as information retrieval and synthesis, rather than expecting novel scientific breakthroughs.
  • Personalization: Consider ways to tailor the experience to different types of researchers (e.g., iterators, connectors, amplifiers).

Prompting strategies

  • Experiment with different prompting techniques to improve AI output quality.
  • Consider multi-sample approaches for tasks where aggregating multiple AI outputs could lead to better results, as sketched after this list.
  • Explore techniques like tree-of-thought reasoning or step-by-step problem decomposition.
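As an example of the multi-sample bullet above, here is a minimal self-consistency sketch: sample the model several times and keep the most frequent answer. The sampleModel parameter is a hypothetical stand-in for whatever LLM call you use.

```typescript
// Minimal self-consistency sketch: draw several independent samples and
// return the most common answer. `sampleModel` is a hypothetical wrapper
// around your LLM call of choice.
async function selfConsistentAnswer(
  prompt: string,
  sampleModel: (p: string) => Promise<string>,
  samples = 5
): Promise<string> {
  // Draw the samples in parallel.
  const answers = await Promise.all(
    Array.from({ length: samples }, () => sampleModel(prompt))
  );

  // Tally normalized answers and return the mode.
  const counts = new Map<string, number>();
  for (const answer of answers) {
    const key = answer.trim();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```

This works best for tasks with short, comparable answers; free-form text would need a fuzzier aggregation step.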

We encourage participants to familiarize themselves with these resources before the hackathon. Don't worry if you're new to some of these concepts – we'll have mentors available to help guide you through the process!

🔧 Focus on VS Code extensions

This hackathon will primarily focus on developing tools as VS Code extensions. This approach allows for better integration into researchers' existing workflows, minimizing context switching and maximizing adoption.

Why VS Code extensions?

  • Easy integration with researchers' existing development environment
  • Access to researchers' code and projects for context-aware assistance
  • Reduced friction in tool adoption and usage
  • Leveraging existing VS Code infrastructure and community
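If you have never built an extension, the sketch below shows roughly how small one can be: a single command registered on activation. The command ID and message are illustrative placeholders, and the sketch assumes the standard package.json contribution points described in the official extension documentation.

```typescript
// Minimal VS Code extension sketch: register one command that shows a
// message. Command ID and message text are illustrative placeholders.
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // Runs once when the extension is activated.
  const disposable = vscode.commands.registerCommand(
    "researchAugment.helloResearcher",
    () => {
      vscode.window.showInformationMessage("Ready to augment your research!");
    }
  );
  context.subscriptions.push(disposable);
}

export function deactivate() {}
```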

For those new to developing VS Code extensions, here are some helpful resources to get you started:

We encourage participants to familiarize themselves with VS Code extension development before the hackathon. Don't worry if you're new to this – we'll have mentors available to help guide you through the process!

Project ideas

Here are some potential directions to spark your creativity:

  • Develop a VS Code extension that helps researchers track and optimize their information gain per unit of effort
  • Create a tool that automatically summarizes and visualizes key findings from alignment-related papers
  • Design an AI assistant that helps researchers design more efficient experiments in AI safety
  • Build a system that can identify potential collaborators and research synergies across different AI alignment sub-fields

You can also draw inspiration from:

  • Pantheon is an experimental LLM interface exploring a different type of human-AI interaction.
  • delve is a prototype app that branches between topics as you chat with it. You can "delve" deeper into a particular sub-topic without breaking your chat in one of the branches.
  • The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment related blog posts.
  • Continue is an open source LLM extension for your IDE. It can connect to any LLM, and you can load documents to make them easily accessible to the LLM. How might we best use this to speed up alignment research? Which models? What external data?
  • Research is a tool built for "reading, understanding and organizing research, with AI."
  • Open Research Assistant: An automated tool for discovering insights from research paper corpora.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.

EA Tech London x Safe AI London: Supercharging Alignment Hackathon

We'll be hacking away at the LISA office all weekend, come and join us!


🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


Use this template for your submission [Required]

Submit your project in the form below with your:

  1. Project name and description
  2. GitHub repository link with instructions on how to run the tool
  3. Tool overview report

See all the entries for this hackathon here!