A*PART is an independent ML safety research and research facilitation organization working toward a future in which humanity has a benevolent relationship with AI.
We run AISI, the Alignment Jam hackathons, and an AI safety research update series.
Research hackathons on ML safety topics, open to beginners and experts alike.
We publish introductions to AI safety concepts and weekly research content.
AI will soon surpass humans in many domains. If these systems do not understand our intentions, there is a high risk to humanity's well-being. Read more.
A*PART facilitates alignment research to create this understanding and make AI safe.
Write to us at operations@apartresearch.com if you want us to give a talk at your event.
Sign up for the mailing list below to get future updates. The newsletter provides a weekly dose of alignment news, hackathon announcements, aisi.ai development, and more.
Click below to join our Discord server, where you can discuss our work, get unique readings, or talk directly with the team. We are 167 people and counting.
Join us
Discord server
Our core team makes the magic happen! Contact any of us if you are interested in our work or would like to join the core team. See our open research process on our Discord server.
Additionally, we are supported by more than 30 volunteers across the globe who help us make the Alignment Jams and the newsletters a reality.
Check out the list below for ways you can interact or do research with Apart!
If you have shovel-ready lists of AI safety and AI governance ideas lying around, submit them to aisafetyideas.com and we'll add them to the list as we make each one even more shovel-ready!
You can work directly with us on aisafetyideas.com, on Discord, or on Trello. If you have some specific questions, write to us here.
Send your feature ideas our way in the #features-bugs channel on Discord. We appreciate any and all feedback!
You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.
Ideas on the website are validated by experts. If you would like to be one of these experts, write to us here. It would be a huge help for the community!
The blog contains A*PART's public outreach. Sign up for the mailing list below to get future updates.
The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below are meant as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. The project is still in development. Write to us if you would like a feature, learning path, book, or paper added to the project!
When you're done here, check out Kravkovna's list and the AGI Safety Fundamentals curriculum.
Associate Kranc
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Associate Soha
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC
A*PART is an organization dedicated to advancing research in the field of AI safety. We believe that by providing support and opportunities for researchers, we can help drive innovation and progress in this critical area.
One of the ways we do this is through our Alignment Jam hackathons, which give researchers the chance to experiment and showcase their skills in AI and machine learning. With the support of our collaborators, participants also have access to unique opportunities, such as joining industry labs, academic institutions, and research fellowships.