Copenhagen, Denmark

Effective Altruism Denmark

Effective Altruism Denmark is an almen forening (a Danish volunteer-run general association) steered by passionate volunteers and guided by its members. Everyone aims to make a positive impact, but many ways of doing good are ineffective. Our core goal is to answer the question: how can we use our time and money to help others the most?

Within effective altruism, AI safety is a major topic for technical talent to work on, given the potential for AI to upend our societies, whether through misinformation and distrust or through critical failures of AI systems. EA Denmark and its sub-community ML Safety Denmark have been active in Copenhagen and across Denmark for many years, and have recently ramped up their engagement with AI safety.

The hackathons have been a unique and exciting opportunity to engage with highly technical talent in Denmark without requiring deep technical skills on our core team, thanks to the online support. They have also been a great way to welcome and introduce new members to the Copenhagen office after it was set up in the summer of 2023.

People

Elliot Davies

Elliot Davies is the director of Effective Altruism Denmark and co-hosts the Machine Learning Denmark events, such as the hackathons.

Karina Knudsen

Karina is a lead organizer of Machine Learning Denmark and has hosted weekly meetups along with multiple hackathons at EA DK.


Albert Garde

Besides being an Apart Lab Fellow during 2023 and publishing DeepDecipher, Albert is a core member of the EA Denmark team and has hosted multiple hackathons with Apart.


Esben Kran

Besides being the co-director of Apart, Esben co-founded the EA DK Office and is a board member of EA Denmark. He has organized multiple hackathons across Copenhagen and Aarhus.


Hosted Events

May 3 – May 6, 2024

AI and Democracy Hackathon: Demonstrating the Risks

Together, we will be hacking away to demonstrate and mitigate the challenges that arise at the intersection of AI and democracy, while projecting these risks into the future.

Nov 24 – Nov 26, 2023

AI Model Evaluations Hackathon

Expose the unknown unknowns of AI model behavior

Sep 29 – Oct 1, 2023

Multi-Agent Safety Hackathon

Co-author research opportunity with Cooperative AI Foundation

Aug 18 – Aug 20, 2023

LLM Evals Hackathon

Welcome to this research hackathon, where we devise methods for evaluating the risks of deployed language models and AI. Given the societal-scale risks of creating new types of intelligence, we need to understand and control the capabilities of such models.

Jun 30 – Jul 2, 2023

Safety Benchmarks Hackathon

Large AI models are released nearly every week. We need ways to evaluate these models (especially at the complexity of GPT-4) to ensure they will not fail critically after deployment, e.g. through autonomous power-seeking, biases toward unethical behavior, or other phenomena that arise in deployment (such as inverse scaling).

May 26 – May 28, 2023

ML Verifiability Hackathon

Join us for this month's Alignment Jam to investigate how we can both formally and informally verify the safety of machine learning systems!

Mar 24 – Mar 27, 2023

AI Governance

A weekend for exploring AI & society!

Feb 10 – Feb 13, 2023

Scale Oversight for Machine Learning Hackathon

Join us for the fifth Alignment Jam where we get to spend 48 hours of intense research on how we can measure and monitor the safety of large-scale machine learning models. Work on safety benchmarks, models detecting faults in other models, self-monitoring systems, and so much else!

Jan 20 – Jan 23, 2023

Mechanistic Interpretability Hackathon

Machine learning is becoming an increasingly important part of our lives and researchers are still working to understand how neural networks represent the world.

Sep 29 – Oct 1, 2022

Language Model Hackathon

Alignment Jam #1

Event Photos
