AI Safety

A*PART Research

A*PART is a decentralized AI safety research organization that envisions a future where humans and AGI co-exist benevolently. We work on centralizing AI safety and governance research ideas to connect people to the field!

Risks from artificial intelligence

Why AI Safety?

AI will soon surpass humans in all domains. If such an AI does not understand our intentions, it poses a high risk of human extinction and the loss of humanity's potential. Read more.

A*PART works on applied alignment research, outreach, and AI safety tooling.

A graph showing AI development during the last 30 years with labels of major milestones.

AI safety research ideas

Ways to help

Check out the list below for ways you can help develop the centralized AI safety ideas platform!

I'd like to provide feedback

If you book a meeting here, we'll hold a user interview where we test the platform together. It will take around 30 minutes and consist of several phases of testing and thinking aloud.

I want to help add ideas

Write to us for early access to the ideas submission backend, where we add and edit ideas. You can reach us by email or join our Discord.

I am an expert

The platform is designed so that ideas are validated by experts on the website. If you would like to be one of these experts, write to us here. It would be a huge help for the community!

Become A*PART of our research

Join A*PART as a member

Join our new Discord to help define Apart Research. Join to get feedback on projects, interact with the community, or find inspiration for new AI safety projects with A*PART.


Join our work

A*PART Research Discord Server

By joining, you become part of the strategic discussions about the future of A*PART Research.


Discord server

Participate in the center projects

Active projects

Long term projects

Technical AI safety ideas platform

Create a platform where students, researchers, entrepreneurs, and developers can get free AI safety ideas, with the possibility of pre-committed funding attached. A collaboration with Nonlinear.

Currently our main project

Platform for AI Safety Prizes

A website with leaderboards for major issues in AI safety, with rewards or prizes tied to solving or optimizing them. It reflects ideas from AIcrowd.com, OpenPhil's adversarial learning challenge, and the ELK competition.

In preliminary research phase

For-profit AI safety

Many in the community argue that we need more for-profit ventures focused on improving the long-term future, since they can scale quickly and drive massive impact. Apart Research works on developing ideas within the space of for-profit AI safety and disseminating them.

Preliminary research and interviews

Understanding longtermist pain points

By interviewing and surveying AI safety researchers and longtermists, we can pinpoint gaps in the tooling currently available to the community.

In progress

Outreach projects

We generally aim to make our research and projects publicly accessible, while also creating dedicated outreach projects related to AI safety.

Ongoing

Research projects

Potential for multimodal applied alignment research

Study how the process of alignment research differs when working in the image domain versus the text domain.

Exploratory research phase

CLIP-GAN alignment

An alignment experiment and study focused on using CLIP+StyleGAN to generate images and then attempting to "align" the output with what we actually intend it to generate (a minimal sketch follows below). This is part of a bigger analysis of multimodal alignment work.

Exploratory research phase
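For illustration only, here is a minimal sketch, not A*PART's actual code, of what CLIP-guided generation can look like: a latent vector is optimized so that the generated image's CLIP embedding matches a text prompt. The ToyGenerator, prompt, image size, and learning rate below are placeholder assumptions; a real experiment would swap in a pretrained StyleGAN generator.

```python
# Minimal sketch of CLIP-guided generation: optimize a latent so the
# generated image matches a text prompt under CLIP similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

class ToyGenerator(nn.Module):
    """Placeholder generator; a real experiment would load StyleGAN weights."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 3 * 64 * 64)
    def forward(self, z):
        # Map latent to an image with values in [0, 1]
        return torch.sigmoid(self.fc(z)).view(-1, 3, 64, 64)

generator = ToyGenerator().to(device)
prompt = "a red apple on a wooden table"  # arbitrary example prompt

with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device)).float()
    text_feat = F.normalize(text_feat, dim=-1)

z = torch.randn(1, 512, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    img = generator(z)
    # Resize to CLIP's 224x224 input; CLIP's normalization is omitted for brevity
    img224 = F.interpolate(img, size=224, mode="bilinear", align_corners=False)
    img_feat = F.normalize(clip_model.encode_image(img224).float(), dim=-1)
    loss = -(img_feat * text_feat).sum()  # maximize cosine similarity to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The "alignment" question in the project is then whether the images this loop converges to actually match what we intended the prompt to mean, not just what maximizes the similarity score.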

Interpretability tooling capabilities overview and development

Analyze the capabilities of current interpretability tools at three depths: 1) no-code tools, 2) library implementations, and 3) custom programming solutions. The third depth will contribute to the development of new interpretability tools, and the work will feed into exploring productizations of AI safety theory.
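As an illustration of depth 2, the library-implementation level, the sketch below applies Captum's Integrated Gradients to an off-the-shelf torchvision classifier. The model, random input, and target class are placeholder assumptions for illustration; this is not A*PART's tooling.

```python
# Library-level interpretability: attribute a prediction to input pixels
# using Integrated Gradients from Captum.
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients

model = resnet18(pretrained=True).eval()

x = torch.rand(1, 3, 224, 224)  # stand-in image; real use needs preprocessed data
target_class = 207              # arbitrary ImageNet class index

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    x, target=target_class, return_convergence_delta=True
)
print(attributions.shape)  # per-pixel attribution scores, same shape as the input
```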

Advanced explainability tools

In a collaboration with the Stanford Center for AI Safety, we hope to make it easier for end-user engineers to understand models for geological marking, which will feed into the wider interpretability tooling project.

Get updated on A*PART's work

Blog & Mailing list

The blog contains A*PART's public outreach. Sign up for the mailing list below to receive future updates.


Reading up on safety in artificial intelligence

The AI Safety Gauntlet

The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below are meant as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. The project is in development; write to us if you would like a feature, learning path, book, or paper added!

When you're done here, check out Kravkovna's list and the AGI Safety Fundamentals curriculum.

People

Members

Central committee board

Associate Kranc
Head of Research Department
Commanding Center Management Executive

Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive

Associate Soha
Commanding Research Executive
Manager of Experimental Design

Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager

Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert

Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager

Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager

Partner Associate Nips
Head of Graphics Department
Cake Coding Expert

Honorary members

Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst

Alumni

Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC

A*PART is in development, join us

Plans for the group

It is the goal of A*PART to become a remote-first organization enabling researchers and developers to come together and work on problems that are relevant to AI safety.

We will build tools that help new researchers get into the field, and we will enable outreach by researching problems and sharing the results in accessible formats such as web apps, blog posts, and videos.