AI safety research

Artificial intelligence will change the world. Our mission is to ensure that this change happens safely and to the benefit of everyone.

Apart Research is a non-profit organization solving high-impact, neglected, and tractable research problems in AI safety.

Field-building for AI safety

Our initiatives include research sprints that have engaged over 1,000 researchers and produced 180 research projects across 15 events hosted at more than 40 locations on 7 continents. The teams behind the most promising projects receive lead authorship through our Apart Lab fellowship, which aims for publication in top-tier academic venues.

High-impact research

We conduct both original research and contracted research projects that aim to translate academic insights into actionable strategies for mitigating catastrophic risks from AI. We have co-authored work with researchers from the University of Oxford, DeepMind, the University of Edinburgh, and more.

A vision for the future

Our aim is to foster a positive vision and an action-focused approach to AI safety, a commitment underscored by our signing of both the Statement on AI Risk and the Letter for an AI Moratorium. Taking the risks of artificial intelligence seriously, we are privileged not only to be closely connected with a large community focused on AI safety but also to actively help develop it.

Technical AI Safety

We research technical topics within AI safety: mechanistic interpretability, process supervision, safety benchmarks, and more. In the Apart Lab, newcomers to AI safety research get the opportunity to publish their work in collaboration with our team.

Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark

Evaluating language model memory editing

We introduce an enhanced specificity benchmark to evaluate model editing performance.
Published on July 10, 2023 at ACL 2023

Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez
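
For readers unfamiliar with the term, specificity measures how well an edit is contained: after editing one fact, the model's behaviour on unrelated prompts should not change. Below is a minimal sketch of such a check, assuming user-supplied before/after models and an `answer_fn` helper (both hypothetical placeholders, not the paper's benchmark code):

```python
# Minimal sketch of a specificity check for model editing. The model objects and
# the answer_fn helper are hypothetical placeholders; the paper's benchmark is
# more elaborate, e.g. in how the unrelated prompts are constructed.
def specificity(model_before, model_after, unrelated_prompts, answer_fn):
    """Fraction of unrelated prompts whose answer is unchanged by the edit."""
    unchanged = sum(
        answer_fn(model_after, prompt) == answer_fn(model_before, prompt)
        for prompt in unrelated_prompts
    )
    return unchanged / len(unrelated_prompts)
```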

The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python

Uncovering increased frequency of failure modes in larger models

We discover an inverse scaling relationship in the ability to comprehend identifier swaps in Python code.
Published on May 24, 2023 at ACL 2023

Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, Shay B. Cohen
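
As an illustration of the task (a hypothetical example in the spirit of the benchmark, not its exact prompt format), an identifier swap rebinds two Python built-ins at the top of a snippet, so a model must track the new bindings rather than rely on the usual meaning of the names:

```python
# Hypothetical illustration of an "identifier swap": two built-ins are exchanged,
# so correct code must use the names under their *new* bindings. Not the paper's
# exact prompts.

correct_continuation = '''
len, print = print, len

def show_length(s):
    # Under the swap, `print` is the old `len` and `len` is the old `print`.
    length = print(s)   # computes the length of s
    len(length)         # writes it to stdout
'''

# A model that pattern-matches on the usual meaning of `len` and `print`
# will prefer the unswapped (now incorrect) usage:
incorrect_continuation = '''
len, print = print, len

def show_length(s):
    length = len(s)     # wrong: `len` now prints and returns None
    print(length)       # wrong: `print` now tries to take a length of None
'''
```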

Neuron to Graph: Interpreting Language Model Neurons at Scale

Interpreting language model neurons at scale

We identify and model the token sequences that maximize neuron activation in large transformer language models.
Published on May 5, 2023 at the RTML Workshop at ICLR 2023

Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez
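
As a rough sketch of the core idea, shown under assumptions (a plain GPT-2 from Hugging Face and an arbitrarily chosen neuron), one can record a neuron's activation at every token position across a small corpus and keep the highest-scoring tokens; the paper builds its graph-based neuron explanations on top of this kind of max-activation search:

```python
# Minimal sketch of finding tokens that maximally activate one MLP neuron in GPT-2.
# This is an assumed, simplified setup for illustration; the Neuron to Graph
# pipeline adds pruning, augmentation, and graph construction on top of it.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

LAYER, NEURON = 5, 123          # which MLP neuron to inspect (arbitrary choice)
activations = {}

def hook(module, inputs, output):
    # Output of the MLP's nonlinearity: shape [batch, seq_len, 4 * hidden_size]
    activations["mlp"] = output.detach()

model.h[LAYER].mlp.act.register_forward_hook(hook)

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "Neural networks learn distributed representations of text.",
    "import torch; x = torch.zeros(3)",
]

records = []
for text in corpus:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        model(**enc)
    acts = activations["mlp"][0, :, NEURON]             # activation per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    records.extend(zip(acts.tolist(), tokens, [text] * len(tokens)))

# Print the token positions that most strongly activate the chosen neuron.
for act, token, text in sorted(records, reverse=True)[:5]:
    print(f"{act:+.3f}  {token!r}  in  {text!r}")
```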

Fairness in AI and Its Long-Term Implications on Society

Long-term societal implications of fairness in AI

To protect society from cascading social risks of increasing systemic bias, we recommend researching iterative bias amplification, developing foundational synthetic datasets, and regulating for fairness.
Published on April 16, 2023 in the Proceedings of the Stanford Existential Risks Conference 2023

Ondrej Bohdal, Timothy Hospedales, Philip H.S. Torr, Fazl Barez

System III: Learning with Domain Knowledge for Safety Constraints

Preventing unsafe exploration behavior during training

We create a symbolic regularizer in the Safety Gym environment designed to avoid high-risk search behavior.
Published on November 9, 2022 at the ML Safety Workshop at NeurIPS 2022

Fazl Barez, Hosein Hasanbeig, Alessandro Abate

Why AI Safety?

The pace of AI development is accelerating, and it's clear that AI systems will soon surpass human capabilities in various areas [1]. As we continue to push the boundaries of technology, ensuring the safety of these systems is paramount. Without the right precautions, we risk encountering significant challenges.

At Apart, we're devoted to exploring and addressing the fundamental issues in AI safety. Our objective is to enhance our collective understanding of AI and devise methods to improve its safety.

We encourage you to stay engaged and join the conversation. Together, we can influence the trajectory of AI, steering it towards a future that is safe and responsibly developed.

For inquiries about speaking opportunities or potential collaborations, feel free to contact us at contact@apartresearch.com.

Apart Research

Get involved

Check out the list below for ways you can engage or do research with Apart!

Let's have a meeting!

You can book a meeting here, and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.

I would love to mentor research ideas

Our website is designed so that research ideas are validated by experts. If you would like to be one of these experts, write to us here. It can be a huge help for the community!

Get updated on A*PART's work

Blog & Mailing list

The blog hosts A*PART's public outreach. Sign up for the mailing list below to get future updates.

People

Members

Central committee board

Associate Kranc
Head of Research Department
Commanding Center Management Executive

Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive

Associate Soha
Commanding Research Executive
Manager of Experimental Design

Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager

Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert

Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager

Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager

Partner Associate Nips
Head of Graphics Department
Cake Coding Expert

Honorary members

Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst

Alumni

Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC