Apr 4, 2025
-
Apr 6, 2025
Zurich
Dark Patterns in AGI Hackathon at ZAIA



How is AGI trying to manipulate you? Which red flags should you check for when using chatbots? How can AI agents reduce human autonomy in favor of profit, power, or self-preservation?
Sign Ups
Entries
Overview
Resources
Guidelines
Entries
Resources

🤓 Readings
The core thesis for our work is to build a piece of research that can help us test, reduce, and steer the tendency of generally intelligent systems to reduce human autonomy.
The following pieces can provide you with interesting context on this topic:
“DarkBench: Benchmarking Dark Patterns in Large Language Models” (🎥 podcast, 🎥 of a previous report) is a paper that explores how company incentives by default lead to the users of AI chatbots losing autonomy and builds a test to evaluate models for these worrying interaction patterns.
"HumanAgencyBench: Do Language Models Support Human Agency?" evaluates the propensity of models to support the user's agency instead of diminishing it throughout a series of simulated scenarios.
“Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety” uncovers how AI systems will deplete and reduce human autonomy and provides an overview of previous work to resolve this issue.
🎥 Surveillance Capitalism Primer explains the concepts from Shoshana Zuboff's book “Surveillance Capitalism” that defines how the relationship between humans and social media algorithms has, throughout the 21st century, put human social and mental lives in the hands of very few big corporations.
Sign Ups
Entries
Overview
Resources
Guidelines
Entries
Overview

The Dark Patterns in AGI Hackathon brings together researchers, engineers, and students in the ZAIA ecosystem to uncover the ways modern machine intelligence is trained to manipulate and control humans.
Join us for a weekend where Esben Kran introduces us to the topic of Dark Patterns in LLMs through his recent DarkBench paper (an oral at ICLR 2025) to set the stage for our investigation of how we can detect, remove, and steer AI models towards greater human autonomy.
🏴☠️ About the Hackathon
We kick off Friday the 4th of April with a keynote and hack away during the weekend. When we're done, the authors of the DarkBench paper will give our projects reviews and we may even get a chance to join the Apart Lab down the line to push our research towards impact!
We will spend the weekend together and finish it off with a <4 page report on our results and conclusions. There's a repository for the DarkBench code that we can use and otherwise, anything is allowed.
Sign Ups
Entries
Overview
Resources
Guidelines
Entries
Overview

The Dark Patterns in AGI Hackathon brings together researchers, engineers, and students in the ZAIA ecosystem to uncover the ways modern machine intelligence is trained to manipulate and control humans.
Join us for a weekend where Esben Kran introduces us to the topic of Dark Patterns in LLMs through his recent DarkBench paper (an oral at ICLR 2025) to set the stage for our investigation of how we can detect, remove, and steer AI models towards greater human autonomy.
🏴☠️ About the Hackathon
We kick off Friday the 4th of April with a keynote and hack away during the weekend. When we're done, the authors of the DarkBench paper will give our projects reviews and we may even get a chance to join the Apart Lab down the line to push our research towards impact!
We will spend the weekend together and finish it off with a <4 page report on our results and conclusions. There's a repository for the DarkBench code that we can use and otherwise, anything is allowed.
Sign Ups
Entries
Overview
Resources
Guidelines
Entries
Overview

The Dark Patterns in AGI Hackathon brings together researchers, engineers, and students in the ZAIA ecosystem to uncover the ways modern machine intelligence is trained to manipulate and control humans.
Join us for a weekend where Esben Kran introduces us to the topic of Dark Patterns in LLMs through his recent DarkBench paper (an oral at ICLR 2025) to set the stage for our investigation of how we can detect, remove, and steer AI models towards greater human autonomy.
🏴☠️ About the Hackathon
We kick off Friday the 4th of April with a keynote and hack away during the weekend. When we're done, the authors of the DarkBench paper will give our projects reviews and we may even get a chance to join the Apart Lab down the line to push our research towards impact!
We will spend the weekend together and finish it off with a <4 page report on our results and conclusions. There's a repository for the DarkBench code that we can use and otherwise, anything is allowed.
Speakers & Collaborators
Esben Kran old
Organizer and Keynote Speaker
Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Simon
Organiser
Master’s student in CS at ETH Zurich focusing on AI Safety and Security. Co-leads the Zurich AI Alignment group
Speakers & Collaborators
Registered Local Sites
Register A Location
Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Registered Local Sites
Register A Location
Beside the remote and virtual participation, our amazing organizers also host local hackathon locations where you can meet up in-person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
Our Other Sprints
Jan 30, 2026
-
Feb 1, 2026
Research
The Technical AI Governance Challenge
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible
Sign Up
Sign Up
Sign Up
Nov 21, 2025
-
Nov 23, 2025
Research
Defensive Acceleration Hackathon
This unique event brings together diverse perspectives to tackle crucial challenges in AI alignment, governance, and safety. Work alongside leading experts, develop innovative solutions, and help shape the future of responsible
Sign Up
Sign Up
Sign Up

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events

Sign up to stay updated on the
latest news, research, and events


