From Idea to Impact
We accelerate AI safety research through mentorship, collaborations, and research sprints
Apr 5, 2025
-
Apr 6, 2025
Georgia Tech Campus & Online
Georgia Tech AISI Policy Hackathon
Join the Georgia Tech AI Governance Hackathon and shape the future of AI policy! Collaborate with peers, experts, and mentors to develop innovative solutions for critical AI governance challenges. Whether you're a tech enthusiast, policy wonk, or ethical thinker, your ideas can make a real impact. Don't miss this opportunity to contribute to global AI safety and elevate Georgia Tech's role in the field. Register now and be part of the change!
Research
Feb 18, 2025
Uncovering Model Manipulation with DarkBench
Apart Research developed DarkBench to uncover dark patterns (application design practices that manipulate a user's behavior against their intention) in some of the world's most popular LLMs.


Develop breakthrough ideas
We believe rapid research is needed and possible to create a good future with AI — join us at our Research Sprints to answer the most important questions in AI safety


The collaboration of a lifetime
Join the world's top talent working on AI safety




The fellowship gave me a structured research plan to follow, which helped me stay focused and achieve milestones quicker.
Cristian Curaba
Data Scientist
I’ve worked in cyber security for the past five years, and in early 2024, I developed a deep interest in exploring the intersection between this and AI safety. The Apart Fellowship provided me with a unique opportunity to conduct impactful research at this nexus.
Zainab Ali Majid
Consultant & Researcher
I’ve now written two papers in interpretability thanks to Apart’s fellowship program. The Apart team guided me through the initial steps of developing these research ideas, which I’m continuing in my graduate studies.
Alex Foote
Data Scientist


Rapid research realization
Get the support you need to go from idea to impact within days or weeks
Apart Lab Studio
An 8-week program for Sprint participants who are excited about their hackathon idea and want to make the most of it.
Learn More


Apart Lab Fellowship
Lab Members may be invited to our Apart Lab Fellowship program where they can dive deeper into their research topic.
Learn More

AI Safety & Security
We investigate adversarial exploits, vulnerabilities, and defense mechanisms to safeguard AI systems.
Model Evaluation & Testing
We build rigorous frameworks to measure AI capabilities, reveal failure modes, and investigate benchmark robustness.
Mechanistic Interpretability
We dissect internal model components to reveal how AI systems represent and process information.
Multi-Agent Systems
We study emergent behaviors and risks from multi-agent AI systems.

Read more of our research
View All


Make your work count
We're focused on research with the potential to impact the world
Our Impact At A Glance

22 Publications in AI safety
36 New fellows in the last three months
485 Event research reports submitted
3,000 Research sprint participants
42 Open-to-all research sprints
26+ Nationalities in our programs
Community
Mar 18, 2025
Mapping AI Safety Research: An Open-Source Knowledge Graph
A tool to map the sprawling landscape of AI alignment research
Read More



Community
Mar 14, 2025
Apart News: San Francisco Edition
This week we have been in San Francisco for our Apart Retreat, where we attended conferences, saw old friends, and visited other AI labs to talk about frontier AI.
Read More



Community
Feb 21, 2025
Apart News: ICLR Awards & Women in AI Safety
This week, we celebrate ICLR conference oral awards for two of our papers, launch our Women in AI Safety hackathon, and more.
Read More




Get Started
COLLABORATION
Research Organizations
Do you want to advance your AI safety research agenda?
Run focused research sprints with 200+ skilled contributors
Convert successful pilots into conference papers
Access our community of 3,000+ technical contributors
"In collaboration with METR, a single Apart hackathon brought together over 100 participants, generating 230 novel evaluation ideas and 28 practical implementations."
RESEARCH
AI Safety Researchers
Are you an AI researcher looking to have an impact in AI safety?
Disseminate and crowdsource research ideas
Mentor promising research teams
Co-author with emerging talent
"I've now written two papers in interpretability thanks to Apart's fellowship program. The Apart team guided me through the initial steps of developing these research ideas, which I'm continuing in my graduate studies."
- Alex Foote, Data Scientist
PARTNERSHIP
Funding Partners
Do you want to fund targeted AI safety research with measurable impact?
Support targeted research sprints on your priority areas
Get rapid, concrete outputs through our proven pipeline
Access a global network of AI safety talent
"Apart's fellows have had research accepted at leading AI conferences including NeurIPS, ICLR, ACL, and EMNLP, showing how targeted support can help emerging researchers contribute to core technical AI safety work."
ENTREPRENEURSHIP
AI Safety Entrepreneur
Are you a founder, or an aspiring entrepreneur, in AI safety?
Join the best space to develop ambitious AI safety startups
Apply to get top-tier backing for your AI safety startup
Join a community focused on providing ambitious solutions
"The future needs founders and architects to design our responsible transition to an AGI future. Join us."

Sign up to stay updated on the
latest news, research, and events
