AI Safety

A*PART Research

A*PART is an independent machine learning safety research organization working toward a future where humanity is in a benevolent relationship with AI.

We measure progress toward safe AGI and share promising paths forward for AI safety.



See the latest Alignment Jam

This weekend, we ran a hackathon on the cognitive psychology of language models, both to evaluate whether hackathons are a good way for newcomers to AI safety to develop research skills and to see whether a weekend-long hackathon can produce interesting research outputs.

2nd October 2022

Follow our YouTube series to stay updated on AI safety

Watch the first, second, third, and fourth videos, which summarize the past weeks' progress in AI safety.

30th September 2022

Visit Alignment Markets

Get a chance to bet on the results of current machine learning safety benchmark competitions and prizes.

23rd September 2022

Read our latest article about Safety Timelines

Discussion tends to focus on when AGI doom might arrive rather than on how far we have come toward safe AGI. We want to readjust this focus: measure progress, guide and facilitate research, and evaluate AI safety projects for impact. Read it here.

19th September 2022

Risks from artificial intelligence

Why AI Safety?

AI will soon surpass humans in many domains. If AI systems do not understand our intentions, they pose a serious risk to humanity's well-being. Read more.

A*PART facilitates alignment research to build this understanding and make AI safe.

A graph showing AI development over the last 30 years, with major milestones labeled.

Become A*PART of our research

Join the A*PART community

Join our Discord, find us on aisafetyideas.com, or work directly from our Trello board to be part of a community working on making AI safe.


Join our work

A*PART Research Discord Server


The core of Apart Research

The A*PART team

Our core team makes the magic happen! Contact any of us if you are interested in our work or would like to join. See our open research process on our Discord server.

Sabrina Zaki

Research assistant. Email address: zaki@apartresearch.com.


Jonathan Rystrøm

Co-lead and AI governance researcher. Email address: jonathan@apartresearch.com.



Get involved

Check out the list below for ways you can get involved or do research with Apart!

Let's have a meeting!

You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.

I would love to mentor research ideas

On the website, research ideas are validated by experts. If you would like to be one of these experts, write to us here. It can be a huge help for the community!

Stay updated on A*PART's work

Blog & Mailing list

The blog is the home of A*PART's public outreach. Sign up for the mailing list below to get future updates.

Reading up on safety in artificial intelligence

The AI Safety Gauntlet

The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below serve as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. The Gauntlet is still in development; write to us if you would like a feature, learning path, book, or paper added to the project!

When you're done here, check out Kravkovna's list and the AGI Safety Fundamentals curriculum.

People

Members

Central committee board

Associate Kranc
Head of Research Department
Commanding Center Management Executive

Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive

Associate Soha
Commanding Research Executive
Manager of Experimental Design

Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager

Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert

Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager

Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager

Partner Associate Nips
Head of Graphics Department
Cake Coding Expert

Honorary members

Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst

Alumni

Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC

A*PART is in development. Join us!

The Apart strategy

Our research consists of three pillars:
1) Measuring how close we are to ensuring safety in AGI.
2) Exploring the most promising paths forward for AI safety.
3) Disseminating the state of the field so researchers can stay updated.

We do this according to three principles:
1) Compassionate pragmatism.
2) Enactive embeddedness.
3) Altruistic humanism.

You can read more about our strategy at docs.apartresearch.com.