AI Safety & Security

Apart Research

Artificial intelligence will change the world. Our mission is to ensure this happens safely and to the benefit of everyone.

The ultimate guide to AI safety research hackathons

Read
Apart > Research

Foundational research for safe and beneficial advanced AI

Apart > Sprints

Global hackathons in AI safety for aspiring researchers

Apart > Lab

Incubating talented research teams towards real-world impact

With partners and collaborators from

Published research

We aim to produce foundational research enabling the safe and beneficial development of advanced AI.

Google Scholar

Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions

Clement Neo*, Shay B. Cohen, Fazl Barez*

Increasing Trust in Language Models through the Reuse of Verified Circuits

Philip Quirke, Clement Neo, Fazl Barez

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Evan Hubinger et al.

Large Language Models Relearn Removed Concepts

Michelle Lo*, Shay B. Cohen, Fazl Barez

See all publications

Interpreting Reward Models in RLHF-Tuned Language Models Using Sparse Autoencoders

Luke Marks*, Amir Abdullah*, Luna Mendez, Rauno Arike, Philip Torr, Fazl Barez

arXiv

DeepDecipher

Albert Garde*, Esben Kran*, Fazl Barez

NeurIPS 2023 XAI in Action Workshop

Understanding Addition in Transformers

Philip Quirke, Fazl Barez

arXiv

Locating Cross-Task Sequence Continuation Circuits in Transformers

Michael Lan, Fazl Barez

arXiv

Neuroplasticity in LLMs

Michelle Lo*, Shay B. Cohen, Fazl Barez

Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions

Clement Neo*, Shay B. Cohen, Fazl Barez*

arXiv

Sleeper Agents

Evan Hubinger et al.

Anthropic

AI Security Evaluation Hackathon: Measuring AI Capability

Independently organized SprintX
🚩 Virtual & Local
May 24 to May 27, 2024

In this weekend-long hackathon, we researched ways to develop innovative benchmarks for AI safety, with keynote speaker Bo Li from the University of Chicago. This event concluded on May 27, 2024.

Computational Mechanics Hackathon!

Independently organized SprintX
Hybrid, with an in-person group in Berkeley, CA
June 1, 2024

Deception Detection Hackathon: Can we prevent AI from deceiving humans?

Independently organized SprintX
🚩 Virtual & Local
June 28, 2024

Welcome to the Flagging AI Risks Sprint Season. From March to June 2024, Apart is hosting four research hackathons focused on catastrophic risk evaluations of AI. See the hackathons above and stay updated by signing up!

What does Apart do?

We solve high-impact, neglected and tractable problems in AI safety

Field-building for AI safety

Our initiatives allow people from diverse backgrounds to have an impact on AI safety. More than 1,000 researchers across all seven continents have developed over 200 projects, and some teams have gone on to publish their research at major academic venues such as NeurIPS and ACL.

High-impact research

We conduct both original research and contract research projects that translate academic insights into actionable strategies for mitigating catastrophic risks from AI. We have co-authored papers with researchers from the University of Oxford, DeepMind, the University of Edinburgh, and more.

A vision for the future

Our aim is to foster a positive vision and an action-focused approach to AI safety, a commitment underscored by our signing of both the Statement on AI Risk and the Letter for an AI Moratorium. We are privileged not only to be tightly connected with, but also to actively develop, a large community in AI safety.

Apart Research

Get involved

Check out the list below for ways you can get involved or collaborate on research with Apart!

Let's have a meeting!

You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.

I would love to mentor research ideas

Our website features a system where research ideas are validated by experts. If you would like to be one of these experts, write to us here. It can be a huge help for the community!

Get updated on A*PART's work

Blog & Mailing list

The blog hosts A*PART's public outreach. Sign up for the mailing list below to get future updates.

People

Members

Central committee board

Associate Kranc
Head of Research Department
Commanding Center Management Executive

Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive

Associate Soha
Commanding Research Executive
Manager of Experimental Design

Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager

Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert

Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager

Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager

Partner Associate Nips
Head of Graphics Department
Cake Coding Expert

Honorary members

Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst

Alumni

Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC

Contact

Get in touch

Apart Updates

Sign up for our updates

Follow the latest from the Apart Community and stay updated on our research and events.