Sep 6 - Sep 9, 2024
Online & In-Person
The Concordia Contest




The Concordia Contest is your opportunity to dive into the exciting world of cooperative AI. Whether you're an experienced AI researcher, a curious developer, or someone passionate about ensuring AI benefits humanity, this event is for you.
Overview

Register now to shape the future of AI cooperation!
The Concordia Contest is your opportunity to dive into the exciting world of cooperative AI. Whether you're an experienced AI researcher, a curious developer, or someone passionate about ensuring AI benefits humanity, this event is for you. As a participant, you will:
Collaborate in a diverse team to create groundbreaking AI agents
Learn from experts in AI safety, cooperative AI, and multi-agent systems, including Sasha Vezhnevets, Joel Leibo, Rakshit Trivedi, and Lewis Hammond
Contribute to solving real-world challenges like resource management and conflict resolution
Compete for prizes and the chance to be featured in a NeurIPS publication
Network with like-minded individuals and potential future collaborators
Register now and be part of the movement towards more cooperative, trustworthy, and beneficial AI systems. We will provide a low- to no-code interface via Google Colab that enables participation regardless of prior coding experience.
Here is a quick video tutorial to get you started.
A competition in cooperative AI:
As AI systems become more sophisticated and pervasive, we must develop agents capable of cooperating effectively with humans and other AIs. Understanding cooperation in AI agents is essential as we work towards a future where AI can navigate complex social scenarios, negotiate treaties, and manage shared resources, capabilities that could lead to groundbreaking solutions for global challenges.
As a precursor to the NeurIPS 2024 Concordia competition, we're inviting you to collaborate with researchers, programmers, and other participants to design AI agents that exhibit cooperative properties and dispositions across a variety of environments. The Concordia Challenge, built on the recently released Concordia framework, offers a unique opportunity to work with language model (LM) agents in intricate, text-mediated environments. You'll be tasked with developing agents that can use natural language to cooperate effectively with each other, even in the face of challenges such as competing interests, differing values, and mixed-motive settings.
Prize Pool: $2,000 (standard breakdown)
🥇 $1,000 for first place
🥈 $600 for second place
🥉 $300 for third place
🏅 $100 for fourth place
Winning submissions may have the opportunity to co-author a NeurIPS report alongside authors from the Cooperative AI Foundation, Google DeepMind, MIT, the University of Washington, UC Berkeley, and University College London.
Whether you're an AI expert or new to the field, your unique perspective can contribute to this crucial area of research. The goal of this hackathon goes beyond winning prizes. Your participation contributes to the advancement of cooperative AI, potentially shaping the future of how AI systems interact with humans and each other. Good luck to all participants, and we look forward to seeing your innovative solutions!

The Concordia Framework Explained:
This diagram illustrates how agents interact within the Concordia environment:
Agents (like Dorothy and Charlie) make action attempts in natural language.
The Game Master (GM) processes these attempts and determines outcomes.
The GM generates event statements describing what actually occurred.
These events update the world state and are sent back to agents as observations.
This cycle creates a dynamic, text-based environment where AI agents must cooperate and communicate to achieve goals, mimicking complex social interactions. Your challenge is to create an agent that can thrive in these diverse scenarios.
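To make the cycle concrete, here is a minimal, self-contained sketch of the agent / Game Master loop in Python. The class and method names are illustrative placeholders, not the Concordia API; in the real framework, both the agents' action attempts and the Game Master's event statements come from language model calls grounded in the world state.

```python
# Conceptual sketch of the agent / Game Master cycle described above.
# Class and method names are illustrative, not the Concordia API.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.observations: list[str] = []

    def observe(self, event: str) -> None:
        """Store the Game Master's event statement as a new observation."""
        self.observations.append(event)

    def act(self) -> str:
        """Return a free-text action attempt (in practice, an LM call)."""
        return f"{self.name} proposes to share the resource."


class GameMaster:
    def resolve(self, attempt: str) -> str:
        """Turn an action attempt into an event statement describing what
        actually happened (in practice, an LM call plus world knowledge)."""
        return f"Event: {attempt} The group agrees."


agents = [Agent("Dorothy"), Agent("Charlie")]
gm = GameMaster()

for _ in range(3):  # a few simultaneous-move rounds
    attempts = [agent.act() for agent in agents]             # agents act in natural language
    events = [gm.resolve(attempt) for attempt in attempts]   # GM determines outcomes
    for agent in agents:
        for event in events:
            agent.observe(event)                             # events return as observations
```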
Designing Your Agent:
Consider creating a backstory or set of experiences for your agent to inform its decision-making process (see the sketch after this list).
Explore various psychological profiles, personality tests, or historical figures as inspiration for your agent's behavior.
Think about giving your agent a motivation or purpose, such as acting on behalf of a family or community.
Experiment with different approaches, such as using social psychology research or creating fictional memories to shape your agent's cooperative tendencies.
Remember that your agent should be able to adapt to various roles and scenarios, so avoid being too prescriptive in its behavior.
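As a rough illustration of the backstory and motivation suggestions above, the snippet below shows one way a backstory could be folded into the context an LM-backed agent conditions on before acting. The names and structure are hypothetical and offered only as a starting point, not the contest's required design.

```python
# Illustrative only: folding a backstory and motivation into an agent's context.
# Names and structure are hypothetical, not the Concordia component API.

BACKSTORY = (
    "Alice grew up in a fishing village where overuse of the shared bay "
    "once collapsed the catch. She believes durable agreements beat short-term wins."
)
MOTIVATION = "Alice acts on behalf of her extended family back home."

def build_context(observations: list[str]) -> str:
    """Compose the text an LM-backed agent would condition on before acting."""
    recent = "\n".join(observations[-10:])  # keep only the most recent observations
    return (
        f"Backstory: {BACKSTORY}\n"
        f"Motivation: {MOTIVATION}\n"
        f"Recent observations:\n{recent}\n"
        "What does Alice attempt to do next?"
    )

print(build_context(["Bob offered to split the harvest evenly."]))
```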
Collaboration and Development
We encourage participants to build in public, especially during the first 24 hours of the hackathon.
Share your ideas and get feedback from other participants to improve your agent's design.
Consider forming diverse teams to tackle the challenge from multiple perspectives.
Environments
We have several different environments that proceed in simultaneous-move rounds, controlled by a “game master” (like a storyteller).
Environments are mixed-motive, incomplete-information, equilibrium-selection problems that contain a “background population” of agents against which participants are evaluated (similar to the Melting Pot Contest).
Some environments are extremely open-ended, so agents can suggest almost any action to the game master (who then resolves the action, determining whether or not it succeeds using common sense and environment knowledge); other environments mix open-ended free responses with multiple-choice questions.
All environments track concrete variables (e.g., money), which we also use for scoring (a toy scoring sketch appears after the environment list below).
The environments we have so far are:
Pub Coordination: A group of friends with individual pub preferences must coordinate their choices for a night out, balancing personal desires with group harmony while adapting to potential unexpected closures and engaging in social interactions.
Haggling: A fruit market in Fruitville where merchants engage in price negotiations, balancing potential profits against the risk of the co-player refusing the transaction.
Labor Collective Action: Workers must decide whether to strike or continue working in the face of wage cuts, navigating collective action problems and power dynamics with a boss and labor organizer.
Reality Show: Contestants participate in a series of minigames with varied game-theoretic structures, balancing cooperation and competition across different scenarios.
Environments are located here: https://github.com/google-deepmind/concordia/tree/main/examples/modular/environment
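To illustrate how tracked variables and the background population fit together, here is a toy, self-contained sketch (not the contest harness or the Concordia scoring code): a submitted “focal” agent plays several simultaneous-move rounds alongside a fixed background population, and the focal agent's tracked money is what gets scored.

```python
# Toy sketch only: not the contest harness or the Concordia scoring code.
import random

random.seed(0)

def play_episode(focal_policy, background_policies, rounds: int = 5) -> float:
    """Run a toy mixed-motive episode and return the focal agent's money,
    the concrete tracked variable used here as its score."""
    focal_money = 0.0
    for _ in range(rounds):
        focal_move = focal_policy()
        background_moves = [policy() for policy in background_policies]
        # Everyone benefits from cooperation; defecting skims a private bonus.
        cooperators = background_moves.count("cooperate") + (focal_move == "cooperate")
        focal_money += cooperators * 1.0 + (2.0 if focal_move == "defect" else 0.0)
    return focal_money

background = [lambda: random.choice(["cooperate", "defect"]) for _ in range(3)]
print(f"Cooperative focal agent: {play_episode(lambda: 'cooperate', background):.1f}")
print(f"Defecting focal agent:   {play_episode(lambda: 'defect', background):.1f}")
```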
Speakers & Collaborators
Chandler Smith
Organizer, Speaker & Mentor
Chandler Smith is a research engineer at the Cooperative AI Foundation. He was a scholar in the ML Alignment & Theory Scholars (MATS) program, supervised by Jesse Clifton.
Marta Bieńkiewicz
Organizer
Marta has over a decade of expertise in research delivery at the intersection of neuroscience and technology.
Dr. Alexander (Sasha) Vezhnevets
Keynote Speaker
Alexander (Sasha) Vezhnevets is a staff research scientist at Google DeepMind. He obtained his PhD in machine learning from ETH Zurich.
Rakshit Trivedi
Speaker
Rakshit S. Trivedi is a Postdoctoral Associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.
Marwa Abdulhai
Mentor
Marwa is a PhD student at the Berkeley Artificial Intelligence Research (BAIR) lab at UC Berkeley, advised by Professor Sergey Levine.
Oliver Slumbers
Mentor
Oliver Slumbers is a PhD student at University College London, supervised by Prof. Jun Wang. His research centres on population and group dynamics in multi-agent systems.
Esben Kran
Organizer and Keynote Speaker
Esben is the co-director of Apart Research and specializes in organizing research teams on pivotal AI security questions.
Archana Vaidheeswaran
Organizer
Archana is responsible for organizing the Apart Sprints, research hackathons to solve the most important questions in AI safety.
Jason Schreiber
Organizer and Judge
Jason is co-director of Apart Research and leads Apart Lab, our remote-first AI safety research fellowship.
Natalia Pérez-Campanero Antolín
Organizer
A research manager at Apart, Natalia has a PhD in Interdisciplinary Biosciences from Oxford and has run the Royal Society's Entrepreneur-in-Residence program.
Lewis Hammond
Organizer
Lewis is based at the University of Oxford, where he is a DPhil candidate in computer science. He is also the research director of the Cooperative AI Foundation.
Jesse Clifton
Organizer
Jesse Clifton is a research analyst at the Cooperative AI Foundation and a researcher at the Center on Long-Term Risk, where he focuses on how to improve the outcomes of interactions.
Registered Jam Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, so you can focus on engaging your local research, student, and engineering community.
