Apart Lab is a global AI safety research community.

We help you turn your ideas into high-impact research projects.

From sprints to the lab

Our two bespoke research programs accelerate your journey into AI safety.

*NEW* Apart Lab Studio

For those who want to explore their ideas and potential fit for research

This 8-week program is designed for Sprint participants who want to test their fit in the AI safety space, are excited by their hackathon idea, and want to make the most of it.

While polishing your hackathon work and ideating on impactful research directions, you will learn from experienced colleagues in AI safety. As a member of Apart Lab, you will have access to our resources and be part of our global community.

If your work here is a great fit for a deeper dive into your topic, you may be invited to our Apart Lab Fellowship program.

Apart Lab Fellowship

For those ready to start their impactful AI safety research career

This 3-6 month program is for developing your research proposal into a fully realized, high-impact research project.

In the fellowship, you'll dive deep into your project and develop fundamental skills for AI safety research, including experimental design and scientific communication, as well as expertise tied to your specific project, such as LLM evaluation design or working directly with model internals.

You will draft a research paper, receive feedback from peers and Apart's community of researchers, and polish your work before submitting to top conferences.

Apart Lab Members

The Lab has hosted 80 researchers to date, with over 30 active researchers at any given time. Our talented team focuses on high-impact research areas such as evaluation methodologies, benchmarking, interpretability, and AI control. Most of their work aims for peer-reviewed publications, and some of their research has been utilized by prominent organizations like the former OpenAI Superalignment team, the UK AI Safety Institute, and Anthropic.

Meet some of our fellows below and explore our publications here.

Alexandra Abbas

Having created successful challenges for METR, Alexandra is focusing on studying the robustness of adversarial fine-tuning techniques against ablation of the refusal feature.

Philip Quirke

With a background in managing engineering teams, Philip joined cohort 3, researching how to embed formalized and verified function circuits into language models to increase trustworthiness. He now works at FAR AI as a Research Project Manager.

Akash Kundu

After participating in multiple hackathons, Akash joined cohort 4 to work on how evaluations generalize across languages. He now works as an Analyst Intern at Lionheart Ventures.

Michelle Lo

During cohort 2, Michelle co-authored a paper on the ability of language models to quickly relearn unlearned capabilities, and she has won the Best Paper award at the Technical AI Safety Conference, among other accolades.

Alex Foote

During cohort 1, Alex worked on a new method for understanding MLP neurons' connection to token activation. Since its publication, it has been used by other papers, including the final project of the OpenAI Superalignment team before the team was disbanded.

Amir Abdullah

As a principal ML scientist and a maths PhD, Amir joined cohort 3 with his hackathon team to investigate implicit reward models in LLMs and identify what RLHF'd models truly learn.

Evan Anders

As a postdoctoral researcher in theoretical physics, Evan joined cohort 4 to transition into AI safety and published work identifying fundamental issues with the design of current sparse autoencoders on toy models.

Clement Neo

After winning the first interpretability hackathon, Clement published work investigating attention head-MLP interactions in transformers. He is now working as a Research Assistant at Apart while pursuing the final stages of his undergraduate studies, and has contributed to the projects of other Apart Fellows.

Frequently Asked Questions

What you need to know about Apart Lab

The Lab is a research accelerator that helps you fast-track your career in AI safety research. We provide access to top-notch advisors, project management support, and research assistance to ensure the success of your research projects.

Our ultimate goal is to help you publish your work in leading peer-reviewed academic conferences and journals, such as NeurIPS, ACL, and ICLR, where you can receive feedback, see the impact of your research on the AI safety community, and build valuable career capital for your next steps. Below, you'll find answers to your questions about our program:

What is Apart Lab?

Apart Lab is a research accelerator that helps fast-track careers in AI safety research through two core programs: Apart Lab Studio and the Apart Lab Fellowship. We specialize in rapidly transforming promising weekend hackathon projects into impactful research contributions. Our programs provide immediate expert support and a flexible path to maximize the impact of your work, whether that's through a compelling blog post, technical demo, or peer-reviewed paper.

How long does the Apart Lab fellowship take?

The fellowship aims for efficiency and timeliness, but the exact duration depends on the capacity of the team members involved and the difficulty of the research project. We aim to complete the project, typically with a publication-ready draft, within three to six months.

How can I join Apart Lab?

The entry point is participating in an Apart Sprint (our weekend research hackathon). Outstanding teams are immediately invited to Apart Lab Studio - no separate application needed. From there, promising projects that are a great fit for deeper research may be invited to the Apart Lab Fellowship. We welcome participants from any country and actively support professionals looking to explore a transition into AI safety research.

Our progressive structure lets you explore AI safety research at the pace and commitment level that works for you: from a weekend hackathon, to the 8-week Studio program, to a multi-month research fellowship.

What are the typical research outcomes?

Our researchers produce impactful work across multiple formats:

  • Peer-reviewed papers at major ML conferences (NeurIPS, ICLR, ACL, EMNLP);
  • Research software, libraries, and datasets;
  • Technical blog posts and research project websites;
  • Conference presentations and posters.

Apart Lab members have published 13 peer-reviewed papers since 2023 (6 main conference papers, 9 workshop acceptances). Our work has been cited by OpenAI's Superalignment team, and Apart team members have contributed to Anthropic's "Sleeper Agents" paper.

What is Apart Lab Studio?

Studio is an 8-week program that provides immediate expert support to promising hackathon teams. It consists of two phases:

  • Dissemination (2-3 weeks): Transform your hackathon work into a polished write-up with guidance from experienced researchers
  • Ideation (5-6 weeks): Explore new directions, develop variations of your ideas, and craft proposals that could attract research funding

The Studio is designed to help you quickly determine the best format and scope for your work while building valuable research skills.

What is the Apart Lab Fellowship?

The Fellowship is our high-investment research incubation program (3-6 months) focused on producing conference-quality research output. Fellows get full access to Lab resources and comprehensive support throughout their research journey.

What support do participants receive?

Both programs include:

  • Collaborative research environment;
  • Support with dissemination and research engineering;
  • Tailored research resources and guidance;
  • API and cloud compute resources (e.g., GPU);
  • Grant application assistance.

The Fellowship additionally provides:

  • Research project management;
  • Extended project duration;
  • More intensive support structure;
  • Deeper engagement with research advisors.

What is the time commitment?

Both programs are designed to be compatible with full-time employment and operate across all time zones. They require a minimum of 10 hours per week for meaningful progress, with:

  • Studio: 8-week program duration.
  • Fellowship: 3-6 month program with higher total time commitment.

We particularly welcome experienced professionals who want to explore AI safety while maintaining their current roles. Our flexible, asynchronous structure enables participation from anywhere in the world, with our current lab members spread across the globe.

How does remote collaboration work?

Both programs operate remotely and asynchronously via Discord and other collaboration tools. We hold regular online meetings for project teams and cohort-wide discussions, with flexible scheduling that accommodates all time zones. Our collaboration model is specifically designed to support working professionals and participants from any geographic location.

Who owns the research?

Lab participants own their research and serve as lead authors on resulting publications. Authorship is shared based on substantial contributions, following standard academic practice.

What are the primary research areas?

We focus on core AI safety topics including:

  • Model evaluations;
  • Model interpretability;
  • Multi-agent systems;
  • Technical tooling for AI governance;
  • AI security.

What are the diversity and inclusion policies?

Apart Lab aims to enable global participation in AI safety research based on capability rather than credentials. We actively work to support participants from diverse backgrounds and welcome discussions about additional support needs (contact@apartresearch.com).

Are there networking opportunities?

Yes, through:

  • Project team collaboration;
  • Support from research project managers and advisors;
  • Cohort-wide activities;
  • Integration with the broader Apart Lab community.

Can fellows co-author with external researchers in the field?

Yes, Apart Lab supports collaboration and co-authoring with external researchers, subject to the relevance and alignment with the research project.

Are stipends or other forms of financial support available?

We do not offer stipends, but we support you with API credits, cloud compute, and travel and conference participation support.

Additionally, our Research Managers will actively help you to apply for funding for your research project.

Will participants receive credentials for their fellowship involvement?

Participants will receive recognition for their involvement primarily via authorship on any publications. Participants will also be able to list Apart Research as their affiliation on lab-related publications.

How does project team collaboration work?

Apart Lab helps set up teams for efficient collaboration, with clear roles and responsibilities. Projects are owned and driven by the lab members themselves, with support and guidance from Apart Lab mentors.

How is project progress monitored and evaluated?

Progress is monitored through written updates and regular check-ins, project milestones, and feedback sessions.

How is feedback provided?

Feedback at Apart Lab happens in meetings with Apart Lab mentors as well as peer-to-peer within the project teams and the lab community.

How can I apply to join Apart Lab?

There is no application. The entry point is one of our Apart Sprints; we invite the most promising teams to join Apart Lab.

Who can join the Apart Lab fellowship?

The fellowship is open to individuals with a strong interest in AI safety research and the required skills, as demonstrated during one of our Apart Sprints. No formal credentials are required.