The traditional path of obtaining a PhD isn’t available to everyone, requires years of commitment, and may not be right for you. More recently there has been a surge in fellowship programs that provide experience and networking opportunities; however, securing a spot in these fellowships isn’t trivial: MATS, for example, has an acceptance rate of roughly 4.3% (see comments).
In practice, this means countless individuals who are ready and able to meaningfully contribute towards safer AI are instead stuck in an endless cycle of upskilling and applications.
At Apart Research, we believe the best way to learn AI safety research is by doing it. To facilitate this, we have developed two programs: our Apart Fellowship pipeline, which we’ve been continuously running and improving over the last 2 years, and our new Partnered Fellowships. Both have been designed such that you start by working on real problems, getting feedback from experienced researchers, and building something concrete you can point to.
Apart Fellowship
> Just one weekend has been enough to kickstart many participants into AI safety research.
The Apart Fellowship is our flagship program, where we help you develop your hackathon idea into a fully-fledged conference paper.
Instead of the typical application-based process, our approach begins with our monthly hackathons, where you gain immediate, hands-on experience with AI safety research, plus tangible evidence of that experience in the form of your hackathon submission. Regardless of the outcome, you’ll learn practical skills and test your fit for the space, since you’re actually doing the thing (in this case, AI safety research).
We then guide promising teams in writing a research proposal, which serves as their application to the Apart Fellowship. These follow-up programs allow us to provide additional support and guidance to help you turn a weekend's effort into a peer-reviewed paper. Some successful projects include DarkBench and min-p, both of which ended up as ICLR 2025 Oral Presentations! Fellows have also gone on to impactful careers in AI safety, joining leading organizations (e.g. Anthropic, METR) or even starting their own (e.g. Geodesic, Asymmetric Security).[1]
The Process
Sprint (our monthly hackathons): Weekend research sprints where anyone can explore ideas and test their fit for AI safety research
Studio (6-8 weeks): We invite the most promising projects (usually 5-20% of submissions), selected by the Apart Team and external experts, to develop their ideas into research proposals with our guidance in the Studio
Apart Fellowship (12-24 weeks): Around 40% of Studio participants advance to the Fellowship phase, where you execute your research and aim for publication
The whole journey from Sprint through the Apart Fellowship typically takes 4-8 months, and our monthly cadence means another opportunity is always around the corner. We expect fellows to commit 10-20 hours per week.
Ultimately, the Apart Fellowship is designed for technical talent who want to pursue their own research directions. This includes domain experts from biology, physics, cybersecurity, and other fields applying their expertise to AI safety, professionals exploring AI safety part-time, and particularly dedicated students interested in pursuing research. At each stage, advancement is based on your actual research output rather than credentials or application essays.
Partnered Fellowships
The AI safety movement isn’t leveraging all the talent it already has. To further address this, we are introducing a new fellowship program: Partnered Fellowships.
For each Partnered Fellowship, we collaborate directly with leading AI safety organizations to provide expert-defined research agendas and specialized mentorship, combined with our experience facilitating fellowships and delivering high-quality research outputs, to harness untapped interdisciplinary expertise. While the Apart Fellowship is geared toward exploratory, self-directed research building on your hackathon idea, the Partnered Fellowship guarantees collaboration with world-class researchers on expert-defined projects.
For the first of these, we teamed up with Martian to host a Partnered Fellowship developing their agenda on model routing, a technique where an automated system sends each query to whichever model is likely to handle it best. We currently have three Fellowship teams, each guided by mentors from Martian and actively developing their research, and two will be presenting their preliminary work at multiple NeurIPS workshops in early December! These projects focus on studying whether LLMs, when acting as judges, unfairly prefer to route to their own model families, and on developing more efficient methods for combining reviews from multiple judges.
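To make the routing idea concrete, here is a minimal, purely illustrative sketch in Python. The model names, costs, and scoring heuristics below are hypothetical placeholders, not Martian's actual system or any of the fellowship projects.

```python
# Minimal sketch of model routing: score each candidate model on the incoming
# query, then forward the query to the model with the best expected trade-off
# between quality and cost. Everything here is a hypothetical toy example.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Candidate:
    name: str                      # placeholder model identifier
    cost: float                    # relative cost per call
    score: Callable[[str], float]  # rough estimate of quality on a query


def route(query: str, candidates: list[Candidate], cost_weight: float = 0.1) -> Candidate:
    """Pick the candidate whose estimated quality, minus a cost penalty, is highest."""
    return max(candidates, key=lambda c: c.score(query) - cost_weight * c.cost)


# Toy heuristics: a cheap generalist, a coding specialist, and a pricier all-rounder.
candidates = [
    Candidate("small-general", 1.0, lambda q: 0.6 if len(q) < 200 else 0.3),
    Candidate("code-specialist", 3.0, lambda q: 0.9 if "def " in q or "class " in q else 0.4),
    Candidate("large-general", 5.0, lambda q: 0.8),
]

print(route("Summarise this paragraph for me.", candidates).name)              # small-general
print(route("def merge_sort(xs): ...  # why is this slow?", candidates).name)  # code-specialist
```

A production router would of course learn these quality estimates from data rather than hard-coding them; the sketch is only meant to show where the routing decision sits in the pipeline.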
Recently, we launched our Partnered Fellowship with Heron! Check it out here.
We’re excited to launch more of these fellowships in the very near future, so look out for announcements on our Partnered Fellowships here, or sign up for our newsletter to receive updates whenever we announce new opportunities!
Footnotes
[1] For more details, check out our Impact Report!