
AI Policy Hackathon at Johns Hopkins University

October 26, 2024, 3:00 PM to October 28, 2024, 5:15 PM (UTC)
This event is finished. It occurred between October 26, 2024 and October 28, 2024.

Get ready for the AI Policy Hackathon, happening October 26-27, 2024, in Washington, DC! Join us for a weekend of collaboration, problem-solving, and networking as you work with like-minded peers to tackle real-world policy challenges related to AI. Participants will submit either a policy paper or a technological product. This opportunity is a great way to build your professional network and explore new career paths!

Shaping the Future of AI Governance

Join us for a weekend of collaboration, problem-solving, and networking as you work with like-minded peers to tackle real-world policy challenges related to AI! Participate in Washington, D.C. or online via Discord. Final deliverables can be either technical demos or policy papers. No coding is required, and all backgrounds are welcome!

Why Participate?

  • Skill development through hands-on experience in policy-making and AI applications
  • Network with industry leaders from OpenAI, Microsoft, and Apart Research, and AI governance scholars
  • Receive mentorship from experts in AI and policy
  • Present your solutions to tech policy experts & policymakers
  • Compete for prizes and recognition

Challenges and Themes

Below are some challenges that participants can work on. We have provided the following tracks, but participants are also welcome to work on an AI policy-related challenge of their own.

AI Safety:

  1. Challenge 1.1: Agentic System Governance
    1. Partner: OpenAI
    2. Description: Agentic AI systems—AI systems that can pursue complex goals with limited direct supervision—will likely be broadly useful if we can integrate them responsibly into our society. While such systems have substantial potential to help people achieve their own goals more efficiently and effectively, they also create risks of harm. An OpenAI paper discussed this governance issue and raised a set of open questions. Participants are encouraged to work on a policy paper or a technical demo that addresses these issues.
  2. Challenge 1.2: How Far Are We from Achieving ASI? Measuring the Progress of AI
    1. Partner: OpenAI
    2. Description: In the coming decades, AI will enable us to achieve feats that once seemed unimaginable. We are on the cusp of a new era—an "Intelligence Age" (Altman, 2024)—where AI will serve as a foundational tool for human progress, from personalized education and healthcare to groundbreaking scientific discoveries. This challenge invites participants to evaluate how close we are to Artificial Superintelligence (ASI), the next leap in AI's evolution. Through technical prototypes, research, or theoretical frameworks, explore the key milestones we've reached and those still ahead. How can AI continue to amplify human capability and drive unprecedented prosperity?

AI and the Future of Work:

  1. Partner: OpenAI
  2. Description: As AI continues to transform industries, the nature of work is evolving at an unprecedented pace. In the near future, AI systems will serve as collaborative assistants, helping us solve complex problems and automating routine tasks. This challenge invites participants to explore the future of work in the AI era. How will AI reshape labor markets, create new roles, or redefine existing ones? Participants can develop policy frameworks, design AI-driven tools for workplace efficiency, or propose strategies to ensure AI enhances human potential while addressing shifts in job structures. The goal is to envision a future where AI and human collaboration lead to shared prosperity.

AI and Public Health:

Challenge Overview

The COVID-19 pandemic has highlighted the critical importance of rapid, data-driven decision-making in public health emergencies. AI and machine learning have immense potential to support real-time disease monitoring, early warning systems, resource allocation, and policy interventions. For this hackathon challenge, we’re asking teams to develop AI-powered solutions to enhance public health preparedness and emergency response capabilities. Your task is to identify a specific public health challenge and create an AI-driven tool or system that can help address it.

The Challenge:

Choose one of the following public health focus areas and develop an innovative AI-powered solution:

  1. Disease Surveillance and Early Warning: Create an AI system that can rapidly detect, track, and predict the spread of infectious diseases using diverse data sources (e.g., electronic health records, social media, transportation patterns).
  2. Resource Allocation and Logistics: Develop an AI-powered decision support tool to optimize the distribution of medical supplies, hospital beds, and other critical resources during public health emergencies.
  3. Personalized Public Health Interventions: Design an AI platform to deliver customized health recommendations, nudges, and interventions to individuals based on their unique risk factors and behaviors.
  4. Health Equity and Vulnerable Populations: Build an AI system that can identify and address disparities in health outcomes, access to care, and social determinants of health for marginalized communities.
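To make the first focus area concrete, here is a minimal sketch of an early-warning signal for disease surveillance: it flags days where reported case counts rise well above a rolling baseline. The data, window size, and threshold are illustrative placeholders, not a validated epidemiological model; a real entry would draw on the richer data sources listed above.

```python
from statistics import mean, stdev

def outbreak_alerts(daily_cases, window=7, threshold=2.0):
    """Flag indices where cases exceed the rolling mean + threshold * stdev."""
    alerts = []
    for i in range(window, len(daily_cases)):
        baseline = daily_cases[i - window:i]
        mu = mean(baseline)
        sigma = stdev(baseline) or 1.0  # avoid zero stdev on flat data
        if daily_cases[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# Flat baseline, then a sharp jump on day 10
cases = [12, 11, 13, 12, 14, 12, 13, 12, 11, 13, 40]
print(outbreak_alerts(cases))  # → [10]
```

Even this toy detector illustrates the core policy question for the track: what alert threshold balances early action against false alarms?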

AI & Sustainability

Challenge Overview:

Create an innovative solution that leverages AI to address a specific environmental sustainability challenge. Solutions can be either technical demonstrations (prototype/proof of concept) or policy proposals.

Challenge Statement:

Choose one of these sustainability challenges and propose either a technical or policy solution:

  1. Urban Energy Optimization
    • Reduce energy waste in buildings
    • Optimize public transportation
    • Smart grid management
  2. Waste Reduction
    • Improve recycling efficiency
    • Reduce food waste
    • Optimize supply chains
  3. Climate Impact Monitoring
    • Track carbon emissions
    • Monitor deforestation
    • Predict environmental risks
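As one hedged illustration of the Urban Energy Optimization track, the sketch below schedules a flexible load (say, EV charging) into the hours of the day with the lowest grid carbon intensity. The intensity values are invented for illustration; a real tool would pull them from a grid-data API.

```python
def schedule_flexible_load(intensity_by_hour, hours_needed):
    """Return the hours with the lowest carbon intensity, in time order."""
    ranked = sorted(range(len(intensity_by_hour)),
                    key=lambda h: intensity_by_hour[h])
    return sorted(ranked[:hours_needed])

# 24 hourly intensity readings (gCO2/kWh), cleaner overnight
intensity = [300, 280, 260, 250, 255, 270, 320, 380, 400, 390,
             350, 310, 290, 285, 295, 330, 420, 450, 440, 410,
             380, 360, 340, 320]
print(schedule_flexible_load(intensity, 4))  # → [2, 3, 4, 5]
```

A policy-track twist on the same idea: estimate the emissions avoided by mandating such scheduling across a city's fleet.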

AI & Law:

  1. Partner: Center for Language and Speech Processing, Johns Hopkins University (PI: Benjamin Van Durme)
  2. Description: Participants in this challenge will have access to CLERC, a massive US case law dataset, offering a rich resource for exploring legal discovery through AI. You can tackle existing tasks such as legal case retrieval, automate legal analysis generation, or develop innovative ideas and novel tasks based on this dataset. Whether improving retrieval accuracy or enhancing AI-driven legal reasoning, this challenge provides the opportunity to shape the future of legal tech by leveraging advanced machine learning on one of the largest legal corpora available.
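A common baseline for the case retrieval task mentioned above is lexical ranking; here is a minimal TF-IDF cosine-similarity sketch over a tiny invented corpus (CLERC itself is far larger, and the document texts below are placeholders, not real cases).

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Return doc indices ranked by TF-IDF cosine similarity to the query."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)

    def vec(tokens):
        tf = Counter(tokens)
        # smoothed inverse document frequency
        return {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}

    def cos(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query.lower().split())
    scores = [cos(q, vec(doc)) for doc in tokenized]
    return sorted(range(n), key=lambda i: -scores[i])

docs = ["negligence standard in tort law",
        "patent infringement damages",
        "duty of care and negligence claims"]
print(tfidf_rank("negligence duty of care", docs))  # → [2, 0, 1]
```

Stronger entries would likely replace this with learned retrievers, but a lexical baseline like this is a standard point of comparison.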

Prize Pool: $3,000

Outstanding Solutions (3 teams)

  • $500 per team ($1,500 total)
  • Opportunity to present to policymakers and industry leaders
  • Recognition at award ceremony

Spotlight Awards (5 teams)

  • $200 per team ($1,000 total)
  • Recognition from expert judges
  • Networking with AI policy professionals

Special Awards

  • Best Innovation Award: $250
  • Diversity & Inclusion Award: $250
    • Independent award that can be won alongside other prizes
    • Recognizes teams promoting diverse perspectives in AI policy

Speakers & Collaborators

Archana Vaidheeswaran

Archana is responsible for organizing the Apart Sprints, research hackathons that tackle the most important questions in AI safety.
Organizer

Abe Hou

Abe Hou is a senior at Johns Hopkins University, majoring in computer science, sociology, and math. At TPS, Abe is the current president; he organizes meetings and events.
Organizer and Judge

Amy Wang

Amy Wang is a senior at Johns Hopkins University double majoring in Applied Mathematics and Computer Science. She is passionate about advocating for the ethical use of AI.
Organizer

Idris Sunmola

Idris Sunmola is a third-year Ph.D. student in the Computer Science department at Johns Hopkins University. His research focuses on machine learning and surgical robotics.
Organizer

Angela Tracy

Angela Tracy is a senior at Johns Hopkins University double majoring in Political Science and Psychology. She is fascinated by the intersection of policy, society, and human behavior.
Organizer and Policy Track Judge

Joy Yu

Joy Yu is a junior at Johns Hopkins University majoring in International Studies & Economics. As part of the Technology and Policy Society, she works on marketing and outreach.
Organizer and Policy Track Judge

Andreas Jaramillo

Andreas Jaramillo is a junior at Johns Hopkins University majoring in Computer Science. He is passionate about computer graphics and game development.
Organizer and Judge

Jace Lafita

Jace Lafita is a sophomore at Johns Hopkins majoring in Political Science and Cognitive Science and minoring in Computer Science.
Organizer and Policy Track Judge

Gabriella Waters

Director of CoNA Lab researching cognitive & neurodiversity in AI systems. Principal AI Scientist at PROPEL Center leading AI evaluation and testing at NIST.
Workshop Speaker

Anna Broughel

Energy transition policy expert at JHU SAIS exploring intersection of sustainable energy and AI governance. VP of Communications at USAEE with expertise in policy analysis.
Speaker

William Jurayj

PhD candidate at JHU Engineering researching language model safety and formal reasoning. Previously developed secure ML systems for cloud applications and trading platforms.
Judge

Monica Lopez

CEO pioneering ethical AI adoption at Cognitive Insights. GPAI expert and Digital Economist Fellow bridging AI governance theory and practice in industry.
Speaker and Judge

Zhengping Jiang

JHU PhD researcher focusing on calibrated NLP models & uncertainty estimation. Former Amazon Alexa AI scientist building safer language models.
Judge

Elliott Ash

ETH Zurich professor leading Human-AI Alignment at Swiss AI Initiative. Combines law, economics & ML to advance responsible AI development frameworks.
Judge

Andrew Anderson

Health policy expert studying AI-driven healthcare equity at NCQA. Focuses on integrating AI safely into medical delivery & quality assurance.
Policy Track Judge

Jaime Raldua

Jaime has 8+ years of experience in the tech industry. He started his own data consultancy to support EA organisations and currently works at Apart Research as a research engineer.
Organizer and Judge

Jason Hausenloy

A math major at UC Berkeley, currently researching frontier data governance at CHAI. He previously worked for the Singaporean Government and the UN on national and international AI policy.
Technical Track Judge

James Bellingham

Jim Bellingham has led autonomous marine robotics fieldwork worldwide, from the Arctic to the Antarctic, and is executive director of the Johns Hopkins Institute for Assured Autonomy.
Speaker

Amelia Frank

She conducted independent research on the impact of AI on nuclear submarine warfare and strategic decision-making, and continues her research in the military domain.
Organizer and Policy Track Judge

Seokhyun (Nathan) Baek

He hopes to concentrate on tech policy throughout his studies and to encourage AI governance discussions on privacy. At TPS, he is currently expanding the group's scale of impact.
Organizer and Policy Track Judge

Kevin Xu

Tech and entrepreneurship enthusiast, currently a SWE intern at Google. Formerly with Citadel, a STEP intern at Google, and co-founder of Tunnel. Research experience in ML and VLMs.
Technical Track Judge

Yu Fan

Yu Fan is a research associate and doctoral student at the Chair of Strategic Management and Innovation. In addition, he is an associated researcher at the ETH AI Center.
Policy Track Judge

Lukas Petersson

Aspiring astronaut and ML enthusiast. Currently enjoying the fast-paced AI world as co-founder of Vectorview.
Technical Track Judge

Axel Backlund

Axel Backlund has expertise in AI systems development and entrepreneurial innovation. Axel is a data engineer at McKinsey's QuantumBlack AI and co-founder of Belt of Sweden.
Judge

To help you prepare for the AI Policy Hackathon, we've curated essential materials that will equip you with the knowledge and tools needed to develop effective AI policy proposals. These resources range from foundational policy ideas to real-world examples of policy engagement.

Key Documents & Readings

Essential Policy Frameworks

Practical Resources

  1. Policy Proposal Templates
    • Sample bill formats
    • Policy brief structures
    • Impact assessment frameworks
  2. Technical Documentation Guidelines
    • Standards for AI system documentation
    • Risk assessment protocols
    • Safety evaluation metrics

Inside AI Policy with Markus Anderljung 🎥

Why Watch This Interview?

This in-depth conversation with Markus Anderljung, Head of AI Policy at the Centre for Governance of AI (GovAI), provides crucial insights that will help you develop more effective policy proposals during the hackathon:

  • Learn from real examples of successful and unsuccessful policy approaches
  • Understand how to make your proposals more practical and implementable
  • See how different stakeholders think about AI governance
  • Gain insights into balancing competing interests
  • Learn how to communicate complex policy ideas effectively
Recommended Deep Dives

For those wanting to explore specific areas:

AI Safety & Governance

International Cooperation

Legal & Liability Frameworks

*This list was inspired by posts on LessWrong

AI Policy & Technical Research Agenda 📚

Explore a comprehensive collection of technical research directions and open problems in AI governance compiled by researchers actively working in the field. This agenda maps out crucial areas where technical expertise can directly inform and strengthen AI policy development.

Why This Matters for Your Hackathon:

  1. Identifies concrete technical bottlenecks in AI governance that need solving, helping you choose high-impact projects that address real gaps in current policy frameworks and technical capabilities.
  2. Maps relationships between different policy mechanisms and their technical requirements, enabling you to design solutions that integrate effectively with existing governance structures and frameworks.
  3. Provides detailed examples of successful technical implementations in AI governance, offering practical templates and approaches you can adapt or build upon for your own policy proposals.
  4. Shows how technical capabilities and limitations influence policy decisions, helping you develop more realistic and implementable proposals that account for current technological constraints and opportunities.
  5. Highlights emerging challenges at the intersection of AI development and policy, allowing you to anticipate future governance needs and design forward-looking solutions.

Getting Started

  1. For Policy Track Participants:
    • Focus on the policy frameworks and Congressional engagement resources
    • Review existing AI governance proposals
    • Study successful policy implementation cases
  2. For Technical Track Participants:
    • Examine technical documentation requirements
    • Review safety testing protocols
    • Study implementation feasibility metrics

Join our Discord community to connect with mentors and fellow participants before the event here.

See the updated calendar and subscribe

The schedule runs from 8 AM EST Saturday to 4 PM EST Sunday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public iCal here. You will also find Explorer events, such as collaborative brainstorming and team match-making, on Discord and in the calendar before the hackathon begins.

📍 Registered jam sites

Besides remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in person and connect with others in your area.

TechTrap

The Howard University Student Association department of Public Safety and Howard's Google Developer Group have partnered on a new event series called TechTrap. The first installment will be on November 19th. We want our event to include a hackathon.

AI Policy Hackathon

We look forward to welcoming you to the EA Hotel: York Street 36, Blackpool, UK. Here, you will find a cozy bed, good food, and a little merry community of aspiring effective altruists.

AISIG - AI Policy Hackathon

Join us for the AI Policy Hackathon at Hereplein 4, 9711GA, Groningen!

🏠 Register a location

The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community. Read more about organizing.


Participants will form teams to submit either a policy paper or a computer application. Final deliverables will be judged by a panel of experts. Winning teams will have the opportunity to present their work to tech policy experts & policymakers and receive exclusive merch and OpenAI credits!

Key Points to Remember:

  • You can choose to submit either a policy paper or a technological product
  • No coding experience is required to participate
  • All backgrounds are welcome and encouraged to join

Technical Track:

  1. Working prototype/demo
  2. Technical documentation

Use this template for your submission [Optional]

Judging Criteria:

1. Innovation (30%)

  • Algorithmic Novelty (10%): Is there a new or improved algorithm, or a novel application of existing algorithms? Does it push the boundaries of known approaches in the problem domain?
  • Design Novelty (10%): How innovative and original is the user experience, interface, or overall design? Does the design offer a unique perspective or interaction?
  • Engineering Novelty (10%): How technically advanced or creative are the engineering solutions? Does it involve a clever integration of different technologies or the development of novel tools?

2. Relevance & Impact (20%)

  • Problem Importance (10%): Does the project address a pressing or high-impact AI policy problem? How meaningful or significant is the problem it aims to solve, especially in the context of current societal and technological challenges?
  • Problem Novelty (10%): Is the problem being addressed in an innovative way, or is it an entirely new problem that hasn't been tackled before? Does it reveal fresh insights into familiar challenges?

3. Technical Execution (25%)

  • Product Completion (10%): How complete is the product? Is it in an early prototype stage or close to a working, polished solution? Does it demonstrate end-to-end functionality?
  • Product Functionality (10%): Does the product work as intended? Are the core features fully operational? How well does it deliver on its promises in terms of features and user experience?
  • Scalability & Performance (5%): Can the solution scale effectively? How robust is the product's performance under load or in different conditions?

4. User Experience & Design (10%)

  • User-Focused Design (5%): Is the solution easy to use and intuitive? How well does the design align with user needs and pain points? Is there attention to accessibility?
  • Aesthetic Appeal (5%): Is the design visually appealing and well-crafted? How does the visual design support or enhance the overall functionality?

5. Teamwork & Collaboration (5%)

  • Cross-Disciplinary Collaboration (3%): How effectively did the team leverage different skill sets (e.g., design, engineering, product management)? Was there strong collaboration across disciplines?
  • Execution Under Constraints (2%): How well did the team manage time and resources? Did they demonstrate effective problem-solving and adaptability under the time constraints of the hackathon?

6. Presentation (5%)

  • Clarity of Communication (3%): How well did the team present their solution? Were the problem and solution clearly articulated? Did the team convey the value of their project effectively to the judges?
  • Demo Quality (2%): Was the demo smooth, well-prepared, and reflective of the product's core value? Did the team handle questions well?

7. Potential for Future Development (5%)

  • Market Potential (3%): Could the solution be commercialized or further developed into a full-scale product? Is there a potential audience or market for it?
  • Extensibility (2%): How easy would it be to extend the product with additional features or scale it to handle larger problems or datasets?

Policy Track:

  1. Detailed policy paper (max 4 pages)
  2. Implementation roadmap
  3. Impact assessment

Use this template for your submission [Optional]

Judging Criteria:

1. Relevance (20%)

  • Use of AI (10%): Does this policy relate to AI? Do the uses of AI in this proposal add to the policy meaningfully?
  • Exigency (10%): Does this proposal clearly identify an issue in politics? Is this issue impacting a group or groups of people? Is this issue current in nature?

2. Impact (15%)

  • Does this policy make a significant impact on its domain? Does the proposed policy impact match the steps outlined in the paper? Is the policy specific in its goals? Does the proposal minimize the unintended negative consequences of the policy?

3. Innovation (25%)

  • Tech Application (10%): Does the use of AI in this policy present a new lens through which to look at the issue? Has AI already been used to solve this problem? If so, does this proposal present a new way to use it?
  • Creativity (15%): Has this policy already been proposed? If so, does this proposal take a new lens on it? Does this proposal present a novel solution to an existing problem? Does the writer think of outside-the-box solutions?

4. Feasibility (25%)

  • Technical Feasibility (15%): Does the writer clearly understand how AI works? Is the suggested policy realistic given the current technical capabilities of AI? Does this policy rely on outdated technologies?
  • Policy Feasibility (10%): How likely is it that a policy like this would be implemented under current law? Is it constitutional and allowed by current statutes? Does it address something the government can actually regulate? Is this a feasible reform? Were implications and unintended consequences considered?

5. Presentation and Documentation (15%)

  • Writing Style (10%): Did the write-up display proper grammar and mechanics? Was the style compelling and persuasive? Did they follow proper conventions for academic writing? Was the paper organized in an effective way?
  • References (5%): Did the writer use proper supporting materials and evidence? Did the references add to the paper in a meaningful way? Were the sources documented properly?


Here are the entries for the 2024 AI Policy Hackathon at Johns Hopkins University.