Artificial intelligence will change the world. Our mission is to ensure that this change happens safely and to the benefit of everyone.
Our initiatives include research sprints that have engaged over 1,000 researchers and produced 180 research projects across 15 events hosted at more than 40 locations on 7 continents. The teams behind the most promising of these projects continue as lead authors in our Apart Lab fellowship, which aims for publication in top-tier academic venues.
We conduct both original research and contracted research projects that aim to translate academic insights into actionable strategies for mitigating catastrophic risks from AI. We have co-authored work with researchers from the University of Oxford, DeepMind, the University of Edinburgh, and more.
Our aim is to foster a positive vision and an action-focused approach to AI safety, a commitment underscored by our signing of both the Statement on AI Risk and the Letter for an AI Moratorium. Taking the risks of artificial intelligence seriously, we are privileged not only to be closely connected with a large community focused on AI safety but also to actively help develop it.
We research technical topics within AI safety: mechanistic interpretability, process supervision, safety benchmarks, and more. In the Apart Lab, newcomers to AI safety research get the opportunity to publish their work in collaboration with our team.
Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez
Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, Shay B. Cohen
Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez
Ondrej Bohdal, Timothy Hospedales, Philip H.S. Torr, Fazl Barez
The pace of AI development is accelerating, and it's clear that AI systems will soon surpass human capabilities in various areas [1]. As we continue to push the boundaries of technology, ensuring the safety of these systems is paramount. Without the right precautions, we risk encountering significant challenges.
At Apart, we're devoted to exploring and addressing the fundamental issues in AI safety. Our objective is to enhance our collective understanding of AI and devise methods to improve its safety.
We encourage you to stay engaged and join the conversation. Together, we can influence the trajectory of AI, steering it towards a future that is safe and responsibly developed.
For inquiries about speaking opportunities or potential collaborations, feel free to contact us at contact@apartresearch.com.
Check out the list below for ways you can interact with Apart or do research with us!
If you have lists of shovel-ready AI safety and AI governance ideas lying around, submit them to aisafetyideas.com and we'll add them to the list as we make each one even more shovel-ready!
You can work directly with us on aisafetyideas.com, on Discord, or on Trello. If you have some specific questions, write to us here.
Send your feature ideas our way in the #features-bugs channel on Discord. We appreciate any and all feedback!
You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.
The website is designed so that ideas are validated by experts. If you would like to be one of these experts, write to us here. It would be a huge help for the community!
The blog contains A*PART's public outreach. Sign up for the mailing list below to receive future updates.
Associate Kranc
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Associate Soha
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC