Sep 2, 2024
AI Safety Collective - Crowdsourcing Solutions for Critical AI Safety Challenges
Lye Jia Jun, Dhruba Patra, Philipp Blandfort
Summary
The AI Safety Collective is a global platform designed to enhance AI safety by crowdsourcing solutions to critical AI safety challenges. As AI systems like large language models and multimodal systems become more prevalent, ensuring their safety grows increasingly difficult. The platform will allow AI companies to post safety challenges and offer bounties for solutions, so that AI safety experts and enthusiasts worldwide can contribute and earn rewards for their efforts.
The project initially focuses on non-catastrophic risks to attract a wide range of participants, with plans to expand into more complex areas. Key risks to the platform itself, such as maintaining solution quality and avoiding unsafe submissions, will be managed through peer review and risk assessment. Overall, the AI Safety Collective aims to drive innovation, accountability, and collaboration in the field of AI safety.
Cite this work:
@misc{lye2024aisafetycollective,
  title={AI Safety Collective - Crowdsourcing Solutions for Critical AI Safety Challenges},
  author={Lye Jia Jun and Dhruba Patra and Philipp Blandfort},
  date={2024-09-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}