This weekend, we ran a hackathon on the cognitive psychology of language models to evaluate whether hackathons are a good way to give newcomers to AI safety an opportunity to develop research skills, and whether a weekend-long hackathon can produce interesting research outputs.
2nd October 2022
No one talks about how far along we are towards safe AGI; the focus is instead on when AGI doom arrives. We want to readjust this focus: measure progress, guide and facilitate research, and evaluate AI safety projects for impact. Read it here.
19th September 2022
AI will soon surpass humans in many domains. If the AI does not understand our intentions, there is a high risk to humanity's well-being. Read more.
A*PART facilitates alignment research to create this understanding and make AI safe.
Join our Discord, join on aisafetyideas.com, or work directly on our Trello to be part of a community working on making AI safe.
Join our work
Our core team makes the magic happen! Contact any of us if you are interested in our work or would like to join the core team. See our open research process on our Discord server.
The founder and co-lead of Apart Research. Write to me at email@example.com.
Complexity data scientist. Email address: firstname.lastname@example.org.
Head of operations. Email address: email@example.com.
Research scientist. Email address: firstname.lastname@example.org.
Check out the list below for ways you can interact or research with Apart!
If you have shovel-ready lists of AI safety and AI governance ideas lying around, submit them to aisafetyideas.com and we'll add them to the list as we make each one more shovel-ready!
Send your feature ideas our way in the #features-bugs channel on Discord. We appreciate any and all feedback!
You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.
We have a design where ideas are validated by experts on the website. If you would like to be one of these experts, write to us here. It can be a huge help for the community!
The blog contains the public outreach for A*PART. Sign up for the mailing list below to get future updates.
September 27, 2022
No one talks about how far along we are towards safe AGI; the focus is instead on when AGI doom arrives. We want to readjust this focus: measure progress, guide and facilitate research, and evaluate AI safety projects for impact. We also ask you to add your views to this survey.
August 30, 2022
What is next? We search for new scaling laws and analyse how we can align agents using empathy, symbolic logic, and interpretability.
The Gauntlet is a challenge to read 20 books or articles in 20 days. The books and articles below are meant as an introduction to the field of AI safety. Read them and post online with #AIgauntlet. This project is in development. Write to us if you would like a feature, learning path, book, or paper added to the project!
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC
Our research consists of three pillars:
1) Measuring how close we are to ensuring safety in AGI.
2) Exploring the most promising paths for AI safety looking forward.
3) Disseminating the state of the field so researchers can stay updated.
We do this according to three principles:
1) Compassionate pragmatism.
2) Enactive embeddedness.
3) Altruistic humanism.
You can read more about our strategy at docs.apartresearch.com.