A*PART is a decentralized AI safety research organization that envisions a future where humans and AGI co-exist benevolently. We work on centralizing AI safety and governance research ideas to connect people to the field!
AI will soon surpass humans in all domains. If this AI does not understand our intentions, there is a high risk of humanity's extinction, and with it the loss of humanity's potential. Read more.
A*PART works on applied alignment research, outreach, and tooling problems related to AI safety.
Check out the list below for ways you can help develop the centralized AI safety ideas platform!
If you have shovel-ready lists of AI safety and AI governance ideas lying around, send them our way and we'll add them to the list, making each one even more shovel-ready along the way!
Web development takes a while, so if you want to help, you can speed this up! Volunteer through our Discord or by writing to us.
Send your ideas to us by clicking here. We appreciate any and all feature ideas, and if you are really passionate about an idea, you can join our Discord and we can talk it out!
If you book a meeting here, we'll have a user interview where we'll test the platform together! It will take around 30 minutes and consist of several phases of testing and thinking aloud.
Write to us for early access to the idea-submission backend where we add and edit ideas. You can reach us by email or join our Discord.
We have a design where ideas are validated by experts on the website. If you would like to be one of these experts, write to us here. It would be a huge help for the community!
Join our new Discord to help define Apart Research. Join to get feedback on projects, interact with the community, or find inspiration for new projects in AI safety with A*PART.
Join our work
Joining means you become part of the strategic discussions about the future of A*PART Research.
Discord server
Create a platform for students, researchers, entrepreneurs, and developers to get free AI safety ideas with the possibility of pre-committed associated funding. Collaborating with Nonlinear.
A website with leaderboards for major issues in AI safety, with rewards/prizes for solving or optimizing them. Reflects ideas from AIcrowd.com, OpenPhil's adversarial learning challenge, and the ELK competition.
Many in the community argue that we need more for-profit ventures focused on improving the long-term future, since they can scale quickly and drive massive impact. Apart Research works on developing ideas within the space of AI safety and disseminating them.
By interviewing and surveying AI safety researchers and longtermists, we can pinpoint gaps in the current tooling available to the community.
We generally attempt to make our research and projects publicly accessible while also creating dedicated outreach projects in relation to AI safety.
Study how the process of researching alignment differs between the image domain and the text domain.
Imagine an alignment experiment and study that uses CLIP+StyleGAN to generate images and then attempts to "align" the output with what we actually intend it to generate (see the sketch below). This is part of a bigger analysis of multimodal alignment work.
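As a rough illustration of one way such an experiment could be set up, here is a minimal sketch of CLIP-guided latent optimization: a latent vector is optimized so that the generated image's CLIP embedding moves toward a text prompt. The generator `G`, its `latent_dim`, and the hyperparameters are placeholders rather than an existing A*PART codebase; only the `clip` package calls are real API.

```python
# Minimal sketch of CLIP-guided latent optimization (assumptions noted above).
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # avoid fp16/fp32 mixing during backprop

# CLIP's expected input normalization.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)


def align_latent(G, latent_dim, prompt, steps=200, lr=0.05):
    """Optimize a latent so the generated image matches a text prompt.

    `G` is a hypothetical pretrained generator (e.g. a StyleGAN) that maps a
    (1, latent_dim) latent to an image tensor of shape (1, 3, H, W) in [-1, 1].
    """
    text = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        text_feat = clip_model.encode_text(text)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    latent = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        image = G(latent)                                  # (1, 3, H, W) in [-1, 1]
        image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
        image = ((image + 1) / 2 - CLIP_MEAN) / CLIP_STD   # CLIP preprocessing
        img_feat = clip_model.encode_image(image)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        loss = 1 - (img_feat * text_feat).sum()            # cosine distance to prompt
        opt.zero_grad()
        loss.backward()
        opt.step()

    return latent.detach()
```

The study would then compare the generated images with what we actually intended the prompt to produce, probing where optimizing against a learned proxy like CLIP similarity diverges from the underlying intention.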
Analyze the capabilities of current interpretability tools at three depths: 1) no-code, 2) library implementation, and 3) custom programming solutions (a library-level example is sketched below). Depth 3 will contribute to the development of new interpretability tools. This will play into exploring productizations of AI safety theory.
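As a rough illustration of depth 2, here is a minimal sketch that applies Integrated Gradients from the Captum library to a stand-in classifier. The model and inputs are placeholders for whatever model is under study; the Captum calls themselves are standard.

```python
# Minimal sketch of depth 2: using an interpretability library off the shelf.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients  # pip install captum

# A small stand-in classifier; in practice this would be the model under study.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

inputs = torch.randn(4, 10)  # a batch of placeholder inputs with 10 features each
ig = IntegratedGradients(model)

# Attribute the class-1 logit back to the input features.
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)

print(attributions.shape)  # (4, 10): one attribution score per input feature
print(delta)               # convergence error of the integral approximation
```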
In a collaboration with the Stanford Center for AI Safety, we hope to make it easier for end-user engineers to understand models for geological marking, which will play a part in the wider project of interpretability tooling.
The blog hosts A*PART's public outreach. Sign up for the mailing list below to get future updates.
The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below are meant as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. In development. Write to us if you would like a feature, learning path, book, or paper added to the project!
When you're done here, check out Kravkovna's list and the AGI Safety Fundamentals curriculum.
Associate Kranc
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Associate Soha
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC
A*PART's goal is to become a remote-first organization that enables researchers and developers to come together and work on problems relevant to AI safety.
We will build tools that help new researchers get into the field and enable outreach, while researching problems and sharing the results in accessible formats such as web apps, blog posts, and videos.