Keep Apart Research Going: Donate Today
Our Funding Ends June 2025
Apart Research is at a pivotal moment. In the past 2.5 years, we've built a global pipeline for AI safety research and talent that has produced 22 peer-reviewed papers, engaged 3,500+ participants in 42 research sprints across 50+ global locations, and helped launch top talent into AI safety careers at leading institutions like METR, Oxford, Far.ai and the UK AI Security Institute. Without immediate funding, this momentum will stop in June 2025.
We need your support now to continue this high-impact work.


Why Apart Makes a Difference
Unique Talent Pipeline: Our Sprint → Studio → Fellowship model has engaged 3,500+ participants, developed 450+ pilot experiments, and supported 100+ research fellows from diverse backgrounds who might otherwise never enter AI safety
Research Excellence: Our pipeline has produced 22 peer-reviewed publications at top conferences including NeurIPS, ICLR, ICML, and ACL (including two oral spotlights at ICLR 2025), has been cited by top labs, and has received significant media attention
Policy Engagement: Beyond producing research, we actively engage in AI policy and governance. This includes presenting key findings at prominent forums like IASEAI, sharing our research in major media, serving as expert consultants for the EU AI Act Code of Practice, and participating as panelists in EU AI Office workshops
Global Impact: With 50+ event locations and participants from 26+ countries, we're building AI safety expertise across the globe
Spotlight Achievements
Our "DarkBench" research, the first comprehensive benchmark for dark design patterns in large language models, received an Oral Spotlight at ICLR 2025 (top 1.8% of accepted papers), was presented at the IASEAI (International Association for Safe and Ethical AI) conference, and has been featured in major tech publications.
One of the participants in our hackathon with METR, a physics graduate and computational neuroscience expert with experience in ML and data science, joined METR as a full-time member of technical staff as a direct result of participating in our March 2024 event. Within months, he was contributing to groundbreaking work that he eventually presented at ICLR 2025, demonstrating our unique ability to identify cross-disciplinary talent and rapidly transition them into impactful AI safety research.
The Clock Is Ticking
Without funding, Apart will need to severely scale back the global research pipeline we've spent 2.5 years building, a system that has successfully identified exceptional talent from across 26+ countries and transformed promising ideas into published research. Your support today preserves this proven talent discovery engine and ensures groundbreaking safety research continues to shape both technical advances and governance frameworks at this critical inflection point for AI. By supporting Apart, you're enabling top tech talent worldwide to become immediate, high-impact contributors to AI safety.
Testimonials
Zainab Majid
Co-Founder at Stealth

Benjamin Sturgeon
Founder of AI Safety Cape Town

Cameron Tice
AI Safety Technical Governance Research Lead at ERA Cambridge

Jacy Reese Anthis
Co-Founder of the Sentience Institute

Fedor Ryzhenkov
Research Engineer at Palisade Research

Amirali Abdullah
Lead AI Researcher at Thoughtworks

Parker
AI safety researcher at METR

Luhan Mikaelson
CS Student, AI Safety Researcher

Marcel Mir
AI Policy & Governance Researcher at the Centre for Democracy & Technology Europe

Prithvi Shahani
Auto Alignment Eval Builder and Research Trainer at Anthropic

Teun
Independent AI Safety Researcher

Jasmina Urdshals
Independent AI Safety Researcher

Hoang-Long Tran
AI Safety Research Fellow at the AI Safety Global Society
Read Our Impact Report
At Apart Research, our mission is to improve humanity's preparedness for catastrophic AI risk. We build research communities that push the limits of AI safety.