Organized by Howard University
The hackathon kicks off Tuesday, November 19th at 6 PM with an inspiring keynote panel. While attendance is not mandatory, we strongly encourage you to join us for an evening of collaboration, problem-solving, and networking with like-minded peers. Whether you choose to participate virtually or in person, you'll have the opportunity to:
- Learn from expert speakers from government, emerging tech, and academia
- Engage in interactive workshops and hands-on learning sessions
- Receive mentorship from leading professionals
- Compete for prizes
- Build your professional network and explore new career paths
The hackathon takes place in Washington, D.C., and online via Discord. Final deliverables can be a technical demo or a policy paper. No coding experience is required, and all backgrounds are welcome! Whether you're a computer science expert, a policy enthusiast, or passionate about social impact, this interdisciplinary hackathon offers a unique platform to shape the future of AI governance.
Welcome to the inaugural AI Policy Hackathon at the Howard University AI Safety Summit! The purpose of this hackathon is to foster innovative policy solutions addressing the complex landscape of AI governance and safety, with a particular focus on practical implementations and regulatory frameworks. We are seeking comprehensive policy papers that examine and propose solutions to critical challenges in AI development, deployment, and oversight.

Participants are encouraged to explore various policy domains, from algorithmic accountability and bias mitigation to data privacy and ethical AI development. While the primary deliverable is a detailed policy paper, participants have the option to supplement their submissions with technical implementations or prototypes that demonstrate the feasibility and impact of their proposed policies. This hybrid approach allows for a deeper exploration of how policy frameworks can be effectively implemented and enforced in real-world scenarios.

The hackathon aims to bridge the gap between theoretical policy development and practical implementation, encouraging participants to consider both the governance structures needed to regulate AI systems and the technical requirements to enforce such regulations. Whether addressing issues of transparency in AI systems, proposing new standards for model evaluation, or developing frameworks for responsible AI scaling, submissions should demonstrate a clear understanding of both policy implications and technical feasibility in today's rapidly evolving AI landscape.
A comprehensive overview of concrete policy proposals for managing AI risks, from export controls to safety testing requirements. Essential reading for understanding the current policy landscape.
- Concrete policy proposals
- Risk management strategies
- Implementation roadmaps
A firsthand account of engaging with policymakers on AI safety. Invaluable insights for participants interested in how policy ideas get translated into action.
- Stakeholder engagement strategies
- Communication best practices
- Policy advocacy techniques
Critical analysis of how industry self-regulation and government oversight can work together. Useful for understanding the interplay between private and public sector approaches.
- Industry self-regulation
- Government oversight
- Public-private partnerships
2. Find Mentors
- Navigate the membership platform [here](https://membrz.club/y3k) for more resources!
- Network with speakers and presenters
- Speak to organizers for feedback on your project before submitting it for judging
Join our GroupMe or Discord community here to:
- Connect with mentors
- Collaborate with participants
- Access additional resources
- Share ideas and feedback
- Email: dschowardu@gmail.com
- Technical Support: operations@apartresearch.com
- Emergency Contact: GDG GroupMe