Sign up for the mailing list below to get future updates. The newsletter provides a weekly dose of alignment news, hackathons, aisi.ai development, and more.
Discussion tends to focus on when AGI doom will arrive rather than on how far we have come towards safe AGI. We want to shift that focus: to measure progress, guide and facilitate research, and evaluate AI safety projects for impact. We also invite you to share your views in this survey.
What's next? We search for new scaling laws and analyse how agents can be aligned using empathy, symbolic logic, and interpretability.