AI Safety

A*PART Research

A*PART is an independent ML safety research and research facilitation organization working toward a future with a benevolent relationship between humanity and AI.

We run AISI, the Alignment Hackathons, and an AI safety research update series.

🏆 Alignment Jams

Research hackathons in ML safety topics open for beginners and experts alike.

🎥 Check out our videos

We publish introductions to AI safety concepts and weekly research content.


Risks from artificial intelligence

Why AI Safety?

AI will soon surpass humans in many domains. If these systems do not understand our intentions, they pose serious risks to humanity's well-being. Read more.

A*PART facilitates research in alignment to create this understanding and make safe AI.

Write to us if you want us to give a talk at your event.

A graph showing AI development over the last 30 years, with labels of major milestones.


Stay up-to-date

Sign up for the mailing list below to get future updates. The newsletter provides you with a weekly dose of alignment news, hackathons, developments, and more.

Become A*PART of our community

Join the A*PART server

Click below to join our Discord server, discuss our work, get unique readings, or talk directly with the team. We are 167 people and counting.


Buy AI safety-related posters

The core of Apart Research

The A*PART team

Our core team makes the magic happen! Contact any of us if you are interested in our work or would like to join the team. See our open research process on our Discord server.

Sabrina Zaki

Hackathons & Community


Additionally, we are supported by more than 30 volunteers across the globe who help us make the Alignment Jams and the newsletters a reality.

Apart Research

Get involved

Check out the list below for ways you can interact or research with Apart!

Let's have a meeting!

You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.

I would love to mentor research ideas

We have a system on the website where research ideas are validated by experts. If you would like to be one of these experts, write to us here. It can be a huge help for the community!

Get updated on A*PART's work

Blog & Mailing list

The blog contains the public outreach for A*PART. Sign up for the mailing list below to get future updates.

Reading up on safety in artificial intelligence

The AI Safety Gauntlet

The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below are meant as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. The project is still in development; write to us if you would like a feature, learning path, book, or paper added!

When you're done here, check out Kravkovna's list and the AGI Safety Fundamentals curriculum.



Central committee board

Associate Kranc
Head of Research Department
Commanding Center Management Executive

Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive

Associate Soha
Commanding Research Executive
Manager of Experimental Design

Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager

Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert

Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager

Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager

Partner Associate Nips
Head of Graphics Department
Cake Coding Expert

Honorary members

Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst


Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC

A*PART is in development. Join us!

What sets us apart

A*PART is an organization dedicated to advancing research in the field of AI safety. We believe that by providing support and opportunities for researchers, we can help drive innovation and progress in this critical area.

One of the ways we do this is through our Alignment Jam hackathons, which give researchers the chance to experiment and showcase their skills in AI and machine learning. With the support of our collaborators, participants also gain access to unique opportunities, such as positions at industry labs, academic institutions, and research fellowships.