Apart Research is an independent research organization focused on AI safety. We facilitate and run hackathons and support individuals in technical AI safety research.
We collaborate on promising projects from talented individuals in the AI safety field, many of which arise from our hackathons on topics such as interpretability. See below for the lab participants who are confirmed co-authors on accepted papers.
¹Apart Research, ²Independent, ³Edinburgh Centre for Robotics, ⁴University of Oxford
* Equal contribution
May 5, 2023 | RTML workshop at ICLR 2023
¹Apart Research, ²Edinburgh Centre for Robotics, ³Department of Engineering Sciences, University of Oxford
* Equal contribution
July 10, 2023 | ACL 2023
AI is advancing rapidly, and it won't be long before these systems surpass humans in various domains. As this technological progress continues, it is crucial that we prioritize the safety of these systems. Without proper safeguards in place, we expose ourselves to serious risks.
A*PART is committed to supporting researchers in the field of ML safety. Our goal is to foster a deeper understanding of AI and how to make it safer.
If you would like to invite us to speak at your event, please contact us at email@example.com.
Stay informed and join the discussion. Together, we can shape the future of AI and ensure its safe and responsible development.
Check out the list below for ways you can interact and do research with Apart!
If you have shovel-ready lists of AI safety and AI governance ideas lying around, submit them to aisafetyideas.com, and we'll add them to the list as we make each one even more shovel-ready!
You can work directly with us on aisafetyideas.com, on Discord, or on Trello. If you have specific questions, write to us here.
Send your feature ideas our way in the #features-bugs channel on Discord. We appreciate any and all feedback!
You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.
The website is designed so that ideas are validated by experts. If you would like to be one of these experts, write to us here. It can be a huge help for the community!
The blog hosts A*PART's public outreach. Sign up for the mailing list below to receive future updates.
September 27, 2022
Few people talk about how far along we are toward safe AGI; the focus instead falls on when AGI doom will arrive. We want to readjust this focus: measure progress, guide and facilitate research, and evaluate AI safety projects for their impact. We also invite you to add your views to this survey.
August 30, 2022
What's next? We find new scaling laws and analyze how we can align agents using empathy, symbolic logic, and interpretability.
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC