
Howard University AI Safety Summit & Policy Hackathon

November 19, 2024, 9:00 PM to November 21, 2024, 6:00 AM (UTC)

Organized by Howard University

This event is finished. It occurred between November 19, 2024 and November 21, 2024.

Shaping the Future of AI Policy

The hackathon kicks off Tuesday, November 19th at 6 PM with an inspiring keynote panel. While attendance is not mandatory, we strongly encourage you to join us for an evening of collaboration, problem-solving, and networking with like-minded peers. Whether you choose to participate virtually or in-person, you'll have the opportunity to:

    - Learn from expert speakers from government, emerging tech, and academia
    - Engage in interactive workshops and hands-on learning sessions
    - Receive mentorship from leading professionals
    - Compete for prizes
    - Build your professional network and explore new career paths

Located in Washington, D.C. or online via Discord. Final deliverables can be a technical demo or policy paper. No coding experience is required and all backgrounds are welcome! Whether you're a computer science expert, policy enthusiast, or passionate about social impact, this interdisciplinary hackathon offers a unique platform to shape the future of AI governance.

REMINDER! The hackathon will be fully virtual on Wednesday.

Welcome to the inaugural AI Policy Hackathon at the Howard University AI Safety Summit! The purpose of this hackathon is to foster innovative policy solutions addressing the complex landscape of AI governance and safety, with a particular focus on practical implementations and regulatory frameworks.

We are seeking comprehensive policy papers that examine and propose solutions to critical challenges in AI development, deployment, and oversight. Participants are encouraged to explore various policy domains, from algorithmic accountability and bias mitigation to data privacy and ethical AI development.

While the primary deliverable is a detailed policy paper, participants have the option to supplement their submissions with technical implementations or prototypes that demonstrate the feasibility and impact of their proposed policies. This hybrid approach allows for a deeper exploration of how policy frameworks can be effectively implemented and enforced in real-world scenarios.

The hackathon aims to bridge the gap between theoretical policy development and practical implementation, encouraging participants to consider both the governance structures needed to regulate AI systems and the technical requirements to enforce such regulations.

Whether addressing issues of transparency in AI systems, proposing new standards for model evaluation, or developing frameworks for responsible AI scaling, submissions should demonstrate a clear understanding of both policy implications and technical feasibility in today's rapidly evolving AI landscape.

Speakers & Collaborators

Dr. Bharat Harbham

Medical doctor & biotech founder bridging healthcare and blockchain. Leading DeSci London's work on decentralized clinical trials and patient data governance.
Judge

Dr. Saurav Aryal

Senior AI Researcher specializing in affective biometrics. PhD in AI & Algorithms. Expert in computer vision and AI safety implementation at Howard University.
Speaker and Panelist

Dr. Jaye Nias

Director of Human-Centered AI at Howard. Pioneer in culturally inclusive computing. Leading expert in AI ethics and education. Stanford Ethics Conference moderator.
Judge and Panelist

Félicité Mbaye

Howard senior bringing a fresh perspective on AI policy in criminal justice. HUSA Director of External Affairs focused on community impact and ethical tech adoption.
Moderator

Erin Magennis

DeSci thought leader & MuseMatrix cofounder. Host of Bankless DeSci podcast. Leading researcher mapping decentralized science landscape & trends.
Speaker and Panelist

Thane Douglass

Howard sophomore researching neuroscience & AI intersections at Think Neuro. VP of Google Developers Group, focused on cyber safety & blockchain.
Lead Organiser

Shana Douglass

Web3 education pioneer & founder of NFTCLT. Former Microsoft & Juul Labs strategist leading $300M+ supply chain innovations. USC Viterbi alum.
Judge, Speaker, and Panelist

Khole Wright

Howard CS junior & Google Tech Scholar pioneering home automation. Research intern at PRISEM Lab combining AI with IoT. Former Meta & Google intern.
Moderator and Organiser

Ryan Little

Network engineer and cybersecurity expert with experience supporting critical infrastructure at ManTech, DoD, and federal healthcare facilities.
Speaker

AI Policy Hackathon Resources 📚

Essential Policy Frameworks: Global AI Governance Guidelines ⚖️

OECD AI Principles / G20 AI Guidelines

UNESCO Recommendation on AI Ethics

US Executive Order on AI

EU AI Act

Council of Europe AI Treaty

UN Resolution on AI

African Union Continental AI Strategy

Essential: Policy Development 📖

12 Tentative Ideas for US AI Policy (Open Philanthropy)

A comprehensive overview of concrete policy proposals for managing AI risks, from export controls to safety testing requirements. Essential reading for understanding the current policy landscape.
- Concrete policy proposals
- Risk management strategies
- Implementation roadmaps

Speaking to Congressional Staffers about AI Risk

A firsthand account of engaging with policymakers on AI safety. Invaluable insights for participants interested in how policy ideas get translated into action.
- Stakeholder engagement strategies
- Communication best practices
- Policy advocacy techniques

Thoughts on Responsible Scaling Policies

Critical analysis of how industry self-regulation and government oversight can work together. Useful for understanding the interplay between private and public sector approaches.
- Industry self-regulation
- Government oversight
- Public-private partnerships

Practical Resources 🛠️

Getting Started

Connect With Us 🤝

Join us on Discord to:
- Connect with mentors
- Collaborate with participants
- Access additional resources
- Share ideas and feedback

Support & Questions ❓

- Email: dschowardu@gmail.com
- Technical Support: operations@apartresearch.com
- Emergency Contact: GDG GroupMe

See the updated calendar and subscribe

Here is the schedule for the Hackathon:
We start with an introductory talk and end the event the following week with an awards ceremony. Join the public iCal here. Before the hackathon begins, you will also find Explorer events, such as collaborative brainstorming and team match-making, on Discord and in the calendar.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in person and connect with others in your area.
Register the first event below!

🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


Use this template for your submission [Mandatory]

Policy Memorandum Judging Criteria

1. Problem Analysis & Strategy (Criterion 1: 33%)

  • How well is the policy problem articulated and analyzed?
  • Quality of evidence and analytical rigor
  • Clarity of impact assessment and implications
  • Strategic approach to policy development

2. Policy Innovation & Implementation (Criterion 2: 33%)

  • Novelty and creativity of proposed solutions
  • Feasibility of implementation plan
  • Practicality of recommendations
  • Cost-effectiveness and resource considerations

3. Communication & Impact (Criterion 3: 33%)

  • Quality of executive summary and conclusions
  • Clarity of writing and presentation
  • Effectiveness of visual elements
  • Compelling case for action

Evaluation Guidelines:

  • Each criterion should be rated on a scale of 1 (low) to 5 (high), with clear justification
  • Evaluators should consider both technical accuracy and practical applicability
  • Comments should be provided for each major section to help participants improve
  • Final scores should be weighted according to the percentages provided
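As a concrete illustration, the weighting rule above can be sketched in a few lines of Python. This is a hypothetical helper, not an official scoring tool; the function name, criterion keys, and equal one-third weights (matching the 33% splits in the rubric) are illustrative:

```python
# Illustrative weighted-score calculation for the three rubric criteria.
# Weights follow the rubric's equal 33% splits, normalized to thirds so
# a combined score stays on the same 1-5 scale as the individual ratings.

WEIGHTS = {
    "problem_analysis": 1 / 3,
    "policy_innovation": 1 / 3,
    "communication": 1 / 3,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion 1-5 ratings into a single weighted score."""
    for criterion, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"{criterion} rating must be between 1 and 5")
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

score = weighted_score(
    {"problem_analysis": 4, "policy_innovation": 5, "communication": 3}
)
print(round(score, 2))  # 4.0
```

If the rubric ever moved to unequal weights, only the `WEIGHTS` mapping would change; the combination logic stays the same.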

Implementing a Human-centered AI Assessment Framework (HAAF) for Equitable AI Development
Current AI development, concentrated in the Global North, creates measurable harms for billions worldwide. Healthcare AI systems provide suboptimal care in Global South contexts, facial recognition technologies misidentify non-white individuals (Birhane, 2022; Buolamwini & Gebru, 2018), and content moderation systems fail to understand cultural nuances (Sambasivan et al., 2021). With 14 of the 15 largest AI companies based in the US (Stash, 2024), affected communities lack meaningful opportunities to shape how these technologies are developed and deployed in their contexts. This memo proposes mandatory implementation of the Human-centered AI Assessment Framework (HAAF), requiring pre-deployment impact assessments, resourced community participation, and clear accountability mechanisms. Implementation requires $10M over 24 months, beginning with pilot programs at five organizations. Success metrics include increased AI adoption in underserved contexts, improved system performance across diverse populations, and meaningful transfer of decision-making power to affected communities. The framework's emphasis on building local capacity and ensuring fair compensation for community contributions provides a practical pathway to more equitable AI development. Early adoption will help organizations build trust while developing more effective systems, delivering benefits for both industry and communities.
Elise Racine
November 20, 2024