Our mentors

Meet your mentors

Our team combines expertise from fields including machine learning and physics to tackle AI safety challenges with you, drawing on experience from multiple PhDs, industry, and startups.

Esben Kran

Director
Focused on core AI security questions for foundation models (national security, application security)
Esben is a research entrepreneur with experience in neuroscience, game development and data science. In research, he is excited about accelerating AI security with cognitive science. He is always up for a friendly chat and enjoys helping people push themselves in healthy ways! When he's not online, he loves to travel, discuss models of how to live life, play music and create art.
firstname@apartresearch.com

Jason Hoelscher-Obermaier

Research Lead
Experience in core alignment agendas with a PhD in experimental quantum physics
Jason is an AI research engineer with a PhD in experimental quantum physics and over five years of experience in AI startups. His research focuses on safety evaluations of large language models, AI interpretability, and alignment. Outside work, he is an avid music enthusiast and enjoys outdoor activities. Jason loves open-minded discussions and leveraging his diverse background for mentorship.
firstname@apartresearch.com

Fazl Barez

Research Advisor
Experience in interpretability and alignment with a PhD in AI and robotics
Fazl's main research focus is on interpretability and applications of AI safety. He has supervised over ten graduate and undergraduate researchers on interpretability, safety, and AI governance. Fazl holds a PhD in AI and has over five years of industry experience, including research roles at Amazon and Huawei.
firstname@apartresearch.com

Christian Schroeder de Witt

Research Advisor
Experience in multi-agent safety and security, with a DPhil (PhD) in deep multi-agent reinforcement learning
Christian has experience across AI alignment, interpretability, security, and governance, with a particular specialisation in multi-agent systems and deep reinforcement learning. His work has been featured in Quanta Magazine and Scientific American, and he is a former "30 under 35 (Europe)" Schmidt Futures International Strategy Forum Fellow. As an expert in AI and geopolitics, he has consulted for the European Council on Foreign Relations.
firstname@apartresearch.com

Charlotte Siegmann

Senior Mentor for AI Governance
PhD student at MIT focused on the economics of AI automation and large-scale computing
Charlotte is a PhD student in economics at the Massachusetts Institute of Technology, working on the economics of AI automation and large-scale computing. She is a founding member of KIRA, a Berlin-based AI policy think tank. Previously, she was a scholar in the ML Alignment & Theory Scholars (MATS) program, worked as a Predoctoral Research Fellow in Economics at Oxford’s Global Priorities Institute, and interned for a Vice President of the European Parliament. She reads, juggles fire, dances, and loves long conversations that help people figure out how to improve.
firstname@apartresearch.com

Clement Neo

Research Assistant
Works on AI interpretability, with prior experience in precision agriculture and military applications
Clement is a final-year undergraduate with practical experience in AI for precision agriculture and the military. He has worked in AI interpretability over the past year; his contributions include co-authoring a blog post on neuron interpretability that followed up on an Apart Sprint submission. Clement keeps track of the Apart Lab projects and regularly checks in with the teams to help them navigate challenges and clear roadblocks.
firstname@apartresearch.com