From Idea to Impact

We accelerate AI safety research through mentorship, collaborations, and research sprints

Measurable Impact logo featuring a black abstract star-like loop symbol on a red-orange circle background.
Remote Lab Fellowships logo featuring a black four-petal symbol on a yellow background
Research Outputs logo with a black four-triangle symbol on a bright blue circle background
Research Sprints logo with asterisk symbol on green background
  • OpenAI logo

Apr 5–6, 2025

Georgia Tech Campus & Online

Georgia Tech AISI Policy Hackathon

Join the Georgia Tech AI Governance Hackathon and shape the future of AI policy! Collaborate with peers, experts, and mentors to develop innovative solutions for critical AI governance challenges. Whether you're a tech enthusiast, policy wonk, or ethical thinker, your ideas can make a real impact. Don't miss this opportunity to contribute to global AI safety and elevate Georgia Tech's role in the field. Register now and be part of the change!


Research

Feb 18, 2025

Uncovering Model Manipulation with DarkBench

Apart Research developed DarkBench to uncover dark patterns, application design practices that manipulate a user's behavior against their intention, in some of the world's most popular LLMs.


Research Sprints logo with asterisk symbol on green background

Develop breakthrough ideas

We believe rapid research is both needed and possible to create a good future with AI. Join us at our Research Sprints to answer the most important questions in AI safety.

Remote Lab Fellowships logo featuring a black four-petal symbol on a yellow background

The collaboration of a lifetime

Join the world's top talent working on AI safety

Research Outputs logo with a black four-triangle symbol on a bright blue circle background

Rapid research realization

Get the support you need to go from idea to impact within days or weeks

Make your work count

We're focused on research with the potential to impact the world

Our Impact At A Glance

22 publications in AI safety

36 new fellows in the last three months

485 event research reports submitted

3,000 research sprint participants

42 open-to-all research sprints

26+ nationalities in our programs


Get Started

COLLABORATION

Research Organizations

Do you want to advance your AI safety research agenda?

Run focused research sprints with 200+ skilled contributors

Convert successful pilots into conference papers

Access our community of 3,000+ technical contributors

"In collaboration with METR, a single Apart hackathon brought together over 100 participants, generating 230 novel evaluation ideas and 28 practical implementations."

RESEARCH

AI Safety Researchers

Are you an AI researcher looking to have an impact in AI safety?

Disseminate and crowdsource research ideas

Mentor promising research teams

Co-author with emerging talent

"I've now written two papers in interpretability thanks to Apart's fellowship program. The Apart team guided me through the initial steps of developing these research ideas, which I'm continuing in my graduate studies."
- Alex Foote, Data Scientist

PARTNERSHIP

Funding Partners

Do you want to fund targeted AI safety research with measurable impact?

Support targeted research sprints on your priority areas

Get rapid, concrete outputs through our proven pipeline

Access a global network of AI safety talent

"Apart's fellows have had research accepted at leading AI conferences including NeurIPS, ICLR, ACL, and EMNLP, showing how targeted support can help emerging researchers contribute to core technical AI safety work."

ENTREPRENEURSHIP

AI Safety Entrepreneurs

Are you a founder or aspiring entrepreneur in AI safety?

Join the best space to develop ambitious AI safety startups

Apply to get top-tier backing for your AI safety startup

Join a community focused on providing ambitious solutions

"The future needs founders and architects to design our responsible transition to an AGI future. Join us."