Organized by Apart Research
Despite the many potential benefits of the technology (such as equalizing opportunity, creating wealth, and improving coordination), we are also facing significant risks from AI.
Together, we will be hacking away to demonstrate and mitigate the challenges that arise in the meeting between AI and democracy, while trying to project these risks into the future.
👇️ Sign up below to join us for a weekend where we'll hear from experts in technical AI governance and collaborate in teams to identify, evaluate, and propose solutions to key challenges at the intersection of AI and democracy.
This will be a fit for you if you are an AI safety researcher, policymaker, ethicist, legal expert, political scientist, or a cybersecurity professional. We also invite students who are aiming to work on AI safety and governance.
Join us for the keynote below:
After the hackathon is over, our panel of brilliant judges will give constructive feedback to your projects and rate them on a series of criteria for us to select the top projects! The criteria are:
The top teams have a chance to win from our $2,000 prize pool!
In addition to these prizes, you will also receive feedback from established researchers and have the chance to join the Apart Lab, which is a 4-6 month fellowship towards publishing your research at ML and AI conferences with the guidance of senior research advisors in AI safety.
Despite some of the largest potential risks from AI being related to our democratic institutions and the fragility of society, there is surprisingly little work demonstrating and extrapolating concrete risks from AI to democracy.
By putting together actual demonstrations of potential dangers and mindfully extrapolating these risks into the late 2020s, we can raise awareness among key decision-makers and stakeholders, thus driving the development of mitigation strategies.
This research will also be informative for dangerous capability evaluations and our understanding of catastrophic risk in the context of societal stability.
Your participation in this hackathon will contribute to a growing body of knowledge that will help shape the future of AI governance. We are excited to see you there and collaborate with you to develop impactful research.
The AI x Democracy Hackathon is a weekend-long event where you participate in teams (1-5) to create interesting, fun, and impactful research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000!
It runs from 3rd May to 5th May and we're excited to welcome you for a weekend of engaging research. You will hear fascinating talks about real-world projects tackling these types of questions, get the opportunity to discuss your ideas with experienced mentors, and you will get reviews from top-tier researchers in the field of AI safety to further your exploration.
Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career. We give you code templates and ideas to kickstart your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found community!
Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below.
You can check out a bunch of interesting ideas on AI Safety Ideas to red-team democracy by creating a demonstration of an actual risk model.
For example, you could develop an LLM to contain a sleeper agent that activates on election day, train agents to skew poll results to inflate support for particular policies, or use an LLM to draft uncontroversial legislative proposals that, if implemented, indirectly impacts more contentious or harmful policies.
These are ideas to get you started and you can check out the results from previous hackathons to see examples of the types of projects you can develop during just one weekend, e.g. EscalAItion found that LLMs had a propensity to escalate in military scenarios and was accepted at the multi-agent security workshop at NeurIPS 2023 after further development.
Here is some interesting material to get inspiration for the hackathon:
You can also see more on the evaluations starter resources page.
There’s loads of reasons to join! Here are just a few:
Please join! This can be your first foray into AI and ML safety and maybe you’ll realize that there are exciting low-hanging fruit that your specific skillset is adapted to. Even if you normally don’t find it particularly interesting, this time you might see it in a new light!
There’s a lot of pressure from AI safety to perform at a top level and this seems to drive some people out of the field. We’d love it if you consider joining with a mindset of fun exploration and get a positive experience out of the weekend.
Yoann Poupart, BlockLoads CTO: "This Hackathon was a perfect blend of learning, testing, and collaboration on cutting-edge AI Safety research. I really feel that I gained practical knowledge that cannot be learned only by reading articles.”
Lucie Philippon, France Pacific Territories Economic Committee: "It was great meeting such cool people to work with over the weekend! I did not know any of the other people in my group at first, and now I'm looking forward to working with them again on research projects! The organizers were also super helpful and contributed a lot to the success of our project.”
Akash Kundu, now an Apart Lab fellow: "It was an amazing experience working with people I didn't even know before the hackathon. All three of my teammates were extremely spread out, while I am from India, my teammates were from New York and Taiwan. It was amazing how we pulled this off in 48 hours in spite of the time difference. Moreover, the mentors were extremely encouraging and supportive which helped us gain clarity whenever we got stuck and helped us create an interesting project in the end.”
Nora Petrova, ML Engineer at Prolific: “The hackathon really helped me to be embedded in a community where everyone was working on the same topic. There was a lot of curiosity and interest in the community. Getting feedback from others was interesting as well and I could see how other researchers perceived my project. It was also really interesting to see all the other projects and it was positive to see other's work on it.”
Chris Mathwin, MATS Scholar: "The Interpretability Hackathon exceeded my expectations, it was incredibly well organized with an intelligently curated list of very helpful resources. I had a lot of fun participating and genuinely feel I was able to learn significantly more than I would have, had I spent my time elsewhere. I highly recommend these events to anyone who is interested in this sort of work!”
We are delighted to have experts in AI technical safety research and AI governance join us!
See other collaborators in the speakers and collaborators section of the hackathon page.
Besides emphasizing the introduction of concrete mitigation ideas for the risks presented, we are aware that projects emerging from this hackathon might pose a risk if disseminated irresponsibly.
For all of Apart's research events and dissemination, we follow our Responsible Disclosure Policy.
And last but not least, click the “Sign Up” button on theand read more about the hackathons on our home page.
If you have any feedback, suggestions, or resources we should be aware of, feel free to reach out to sprints@apartresearch.com, submit pull requests, add ideas, or write any questions on the Discord. We are happy to take constructive suggestions.
You can find an updated list of all the resources on our Evaluations Quickstart Guide repository under the Demonstrations heading.
To get you started with demonstrations and understand what you can do with current open models, we've written multiple notebooks for you to start your research journey from!
If you haven't used Colab notebooks before, you can either download them as Jupyter notebooks, run them in the browser, or make a copy to your own Google Drive. The last one is our suggestion since you can make permanent changes and share it with your teammates for somewhat live editing.
Besides these great pieces of code cooked together by Bart, you can also take a look at relevant literature, such as broader thoughts of AI's impact on democracies and specific research projects attempting to measure and mitigate such risks.
There are not many demonstrations of risk to democracies, but existing research work has demonstrated the general risk of modern AI for society. We are lucky to have some of the utmost examples presented by our speakers during the hackathon but more examples are surely to find out there:
Non-technical material to give inspiration for the potential issues AI might bring.
The schedule runs from 7PM CEST / 10AM PST Friday to 4AM CEST Monday / 7PM PST Sunday. We start with an introductory talk and end the event during the following week with an awards ceremony. Join the public ICal here.
You will also find Explorer events before the hackathon begins on Discord and on the calendar.
25 Holywell Row, London EC2A 4XE Lisa AI safety space
25 Holywell Row, London EC2A 4XE Lisa AI safety space
We will be hosting the hackathon at Hereplein 4, 9711GA, Groningen. Join us!
We will be hosting the hackathon at Hereplein 4, 9711GA, Groningen. Join us!
Hack Demonstrations of Risk from AI to Democracy in Copenhagen. Get access to the location at bit.ly/eadkoffice.
Hack Demonstrations of Risk from AI to Democracy in Copenhagen. Get access to the location at bit.ly/eadkoffice.
Join us to evaluate and mitigate societal challenges from AI, in Ho Chi Minh City, Vietnam! 🇻🇳
Join us to evaluate and mitigate societal challenges from AI, in Ho Chi Minh City, Vietnam! 🇻🇳
We will be hosting a hackathon at Sven Hultins gata, SE-412 58 Gothenburg, Sweden
We will be hosting a hackathon at Sven Hultins gata, SE-412 58 Gothenburg, Sweden
This is a Africa-wide hackathon, with main satellites in Cape Town, Johannesburg and Kenya.
This is a Africa-wide hackathon, with main satellites in Cape Town, Johannesburg and Kenya.
Virtual site: Please contact muykuu123 on Discord ; Physical site: 17T1 P. Hoàng Đạo Thúy, Trung Hoà, Cầu Giấy, Hà Nội, Vietnam
Virtual site: Please contact muykuu123 on Discord ; Physical site: 17T1 P. Hoàng Đạo Thúy, Trung Hoà, Cầu Giấy, Hà Nội, Vietnam
🌍 AI x Democracy Hackathon: Shaping the Future 🚀
Join us for an exciting weekend to demonstrate and mitigate risks from AI to societies and institutions! Joining us will be experts like Nina Rimsky from Anthropic, Simon Lermen, and Konrad Seifert and you'll have the opportunity to dive deep into the challenges posed by AI to societies and elections across the world. We're looking forward to see you!
🗓️ Save the date: Friday to Sunday, 3rd May
🖥️ Register now at: https://www.apartresearch.com/event/ai-democracy
Let's work together to build a future where AI strengthens our democratic values and institutions!
For your submission, you are required to put together a PDF report. In the submission form below, you can see which fields are optional and which are not.
This is where entries will be made public after submission. There is a chance submissions will be marked as info hazardous according to our Responsible Disclosure Policy and will not be visible on this page after the hackathon is over.
Sent on May 3rd 2024
Now we're just 2:30 hours away from the kickoff where you'll hear an inspiring talk from Alice Gatti in addition to Esben's introduction to the weekend's schedule, judging criteria, submission template, and more.
It will also be livestreamed for everyone across the world to have the keynote available during the weekend. After the hackathon, the recording of Alice's talk will be published on our YouTube channel so you can revisit it.
We've had very inspiring sessions during this past week and we're excited to welcome everyone else to get exciting projects set up! Even before we're beginning, several interesting ideas have emerged on the #projects | teams forum in our community server where you can read others' projects and team up.
We look forward to see you in 2:30 hours! Here's the details:
Besides our online event where many of you will join, we also thank our locations across the globe joining us this time! Hanoi, Africa, Gothenburg, Ho Chi Minh City, Copenhagen, Groningen, and London 🥳 The Apart team and multiple active AI safety researchers will be available on the server to answer all your questions in the #help-desk channel.
See you soon!
Sent on May 1st 2024
🥳 We're excited to welcome you for the AI x Democracy hackathon this coming weekend! Here's your short overview of the latest resources and topics to get you ready for the weekend.
For the keynote, we're delighted to present Alice Gatti from the Center for AI Safety who will be inspiring you with a talk on the Weapons of Mass Destruction Proxy benchmark, a work that showcases the catastrophic risk potential of current models, but also introduces real solutions to reduce this risk.
The hackathon page has received a makeover, so we encourage you to jump in there to check out the latest updates. The gold can be found under the “Resources” and “Ideas” tabs!
On top of the resources, we'll have a session later today (in about 1:30 hours), another edition of our brainstorming and discussion sessions where you can ask questions, meet other participants, and get some guidance on your project ideas. Come join us on the Discord Community Forum voice channel!
To make your weekend productive and exciting, we recommend that you take these two steps before we begin:
For more information about the prizes and judging criteria for this weekend, jump to the overview section. TLDR; your projects will be judged by our brilliant panel and receive feedback. Next Wednesday, we will have the Grand Finalé where the winning teams will present their projects and everyone is welcome to join. These are the prizes:
You will undoubtedly have questions that you need answered. Remember that the #❓help-desk channel is always available and that the organizers will be available there.
We really look forward to the weekend and we're excited to welcome you with Alice Gatti on Friday on Discord and YouTube!
Remember that this is an exciting opportunity to connect with others and develop meaningful ideas in AI safety. We're all here to help each other succeed on this remarkable journey for AI safety.