The Formal Methods X AI Safety Fellowship [Example]

In partnership with

Atlas Computing

Structure

Dates: November/December 2025 - February/March 2026

4 months, 2 stages: Stage 1 (months 1-2) → Stage 2 (months 3-4)

4-6 teams of 3-4 selected participants | Remote-first | $5K+ compute per team | Prizes at milestones + travel support for the final demo day / conference

Details

Mission

Current AI governance relies on vague statements of value alignment. By formalizing safety requirements into mathematical specifications, we can establish provable safety properties for AI systems. Through the work done in this fellowship, we aim to provide concrete, verifiable guardrails rather than hoping AI systems interpret our intentions correctly.
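
As a minimal sketch of what this can mean in practice (purely illustrative; the names Action, isUnsafe, and safeGuard are hypothetical, not fellowship deliverables), here is a toy Lean 4 example in which a safety requirement is a machine-checked theorem rather than an informal guideline:

    -- Toy model of an AI system's output space.
    inductive Action where
      | noop
      | act (n : Nat)

    -- Illustrative predicate marking some actions as unsafe.
    def isUnsafe : Action → Bool
      | Action.act n => n > 100
      | Action.noop  => false

    -- A guard that replaces any unsafe action with a no-op.
    def safeGuard (a : Action) : Action :=
      if isUnsafe a then Action.noop else a

    -- The safety requirement itself, stated and proved as a theorem:
    -- the guarded system never emits an unsafe action.
    theorem safeGuard_safe (a : Action) : isUnsafe (safeGuard a) = false := by
      unfold safeGuard
      cases h : isUnsafe a with
      | true  => simp [h, isUnsafe]
      | false => simp [h]

The point is the shape of the guarantee: once the requirement is a theorem, it is checked by the proof assistant rather than left to interpretation.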

Mentor Commitment & Support

You provide: Research guidance and direction on your proposed projects
We provide: Pre-screened participants (10+ hrs/week) | All logistics/compute/admin | Optional compensation at your rate | Pipeline to advance your research

Research Scope

We welcome projects across Formal Methods in AI Safety:

  • Formal Verification of AI Systems 

  • AI to Accelerate Formal Methods 

    • Modernizing tools like Dafny, Rocq, Lean, or even TLA+ with modern ML methods such as large, optimized prompts and MCP servers

    • Building benchmarks / challenge problems for applied formal verification (a toy example of this shape follows the list)

  • AI Control/Safety/Alignment as Specification Problems
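
To give a flavor of the benchmark item above (a deliberately tiny, illustrative sketch; clamp and the theorem names are invented for this example), here is an executable function paired with machine-checked specifications in Lean 4:

    -- Clamp a value into the interval [lo, hi].
    def clamp (lo hi x : Nat) : Nat :=
      min hi (max lo x)

    -- Spec 1: the result never exceeds the upper bound.
    theorem clamp_le_hi (lo hi x : Nat) : clamp lo hi x ≤ hi := by
      unfold clamp; omega

    -- Spec 2: if the interval is well-formed, the result is
    -- at least the lower bound.
    theorem lo_le_clamp (lo hi x : Nat) (h : lo ≤ hi) :
        lo ≤ clamp lo hi x := by
      unfold clamp; omega

Realistic challenge problems would keep this shape (code + specification + proof) at a scale where current AI tooling starts to struggle.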

Your project ideas drive the fellowship.

Involvement Options

Research Mentor: Lead one team on your proposed project. Select your mentees.
Research Advisor: Guide one or more teams with a lighter touch (you can still propose projects)

Next Step: Share your interest and ideas: Form Link

Apply

This fellowship is accepting applications from mentors and participants.