Online & In-Person
The Secure Program Synthesis Hackathon
The Secure Program Synthesis Hackathon brings researchers and engineers together for three days to prototype the tools we'll need to verify what AI is writing. Co-organized with Atlas Computing. Top teams are invited to apply to the four-month SPS Fellowship that follows.
This event has concluded.
Overview

Gross world lines of code (LoC) are skyrocketing because of AI. By default, we have no way of knowing whether what we're vibe-coding does what we think it does. Secure program synthesis throws the advanced software-correctness toolkit at all the code coming out of AI systems. Three days. Small teams. Real research output.
Top teams from this hackathon are invited to apply to the Secure Program Synthesis Fellowship (June-October 2026), where mentor-led teams take projects further with Apart Research project managers, compute, and API credits.
Organized by Apart Research and Atlas Computing
What this hackathon is about
Three days to build a research artifact at the intersection of AI, formal methods, and security. The bottleneck in trustworthy software is no longer writing code; it's specifying what the code should actually do and verifying that it does. We want sharp prototypes that move that needle.
The hackathon runs alongside the Secure Program Synthesis Fellowship. Strong teams here get invited to apply to the four-month Fellowship that starts in June, where you continue the work with a mentor, an Apart project manager, compute, and API credits.
Tracks
Same four focus areas as the Fellowship. Pick one and ship something concrete by Sunday.
1. Specification Elicitation
Tools that pull formal specifications out of ambiguous sources: documentation, legacy code, requirements docs, conversations with domain experts. Structured editors, GUIs, pipelines that translate informal intent into Lean (or similar).
Example projects:
A Coq/Lean spec drafting assistant that turns a natural-language requirement into a candidate spec, with reviewer tooling for the human in the loop (a minimal Lean sketch follows this list).
An IDE extension that surfaces hidden assumptions in a legacy C codebase.
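To make the first example concrete, here is a minimal sketch of the kind of candidate spec a drafting assistant might emit for the informal requirement "sort returns the input in non-decreasing order without adding or dropping elements." The names isSorted and SortSpec are hypothetical, not taken from any existing tool; this is plain Lean 4, assuming the core library's List.count.

-- Hypothetical candidate spec a drafting assistant might emit.
def isSorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ isSorted (b :: rest)

-- "Sorted, and no element added or dropped", via element counts.
def SortSpec (sort : List Nat → List Nat) : Prop :=
  ∀ xs : List Nat,
    isSorted (sort xs) ∧ ∀ x, (sort xs).count x = xs.count x

The reviewer tooling's job is then to let a human confirm that SortSpec actually says what the requirement meant, before anything downstream trusts it.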
2. Specification Validation
Methods that check whether a candidate specification actually captures the system's intended behavior. Testing, cross-checking, mutation, formal validation.
Example projects:
Property-based fuzzing harness that flags specs that underconstrain or overconstrain the system (sketched in Python after this list).
Cross-model spec comparison: generate two spec candidates from different LLMs, surface where they disagree.
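To give a flavor of the first example, here is a minimal sketch assuming the Hypothesis property-based testing library; spec_accepts is a hypothetical candidate spec that deliberately omits the permutation requirement, and the harness surfaces that gap.

# Flag an underconstrained spec: search for outputs the spec accepts
# that the intended behavior rules out. spec_accepts is hypothetical.
from hypothesis import given, strategies as st

def spec_accepts(xs: list[int], ys: list[int]) -> bool:
    # Candidate spec: "output is sorted". Underconstrained, because it
    # never checks that ys has the same elements as xs.
    return all(a <= b for a, b in zip(ys, ys[1:]))

@given(st.lists(st.integers(), min_size=1))
def test_spec_rejects_wrong_outputs(xs):
    # The empty list is never a correct sort of a nonempty input, so a
    # tight spec must reject it. This test fails by design, flagging
    # the underconstraint.
    assert not spec_accepts(xs, [])

Running the test immediately produces a counterexample, which is exactly the signal the harness is after.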
3. Spec-Driven Development & Evaluation
Workflows that generate multiple candidate implementations from a spec and rank them against it. CEGIS-style loops, spec-conditioned codegen, automatic discrimination between implementations.
Example projects:
An agent that generates N implementations from a spec and uses the spec as the evaluator (see the sketch after this list).
Lean-backed test harness that catches semantic regressions LLM coders introduce.
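A minimal sketch of the first idea, with stubbed candidates standing in for LLM-generated implementations (all names hypothetical):

# Spec-as-evaluator: rank candidate implementations by how many random
# inputs satisfy an executable spec. Candidates are stubbed here; a
# real pipeline would sample N of them from an LLM.
import random

def spec(xs: list[int], ys: list[int]) -> bool:
    # Executable spec: ys is xs sorted in non-decreasing order.
    return ys == sorted(xs)

candidates = [
    lambda xs: sorted(xs),          # correct
    lambda xs: list(reversed(xs)),  # wrong order
    lambda xs: sorted(set(xs)),     # drops duplicates
]

def rank(candidates, trials=200):
    scores = []
    for i, impl in enumerate(candidates):
        passed = 0
        for _ in range(trials):
            xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
            if spec(xs, impl(list(xs))):
                passed += 1
        scores.append((passed, i))
    return sorted(scores, reverse=True)

print(rank(candidates))  # the correct implementation ranks first

The same loop generalizes to CEGIS: inputs that fail the spec become counterexamples that condition the next round of generation.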
4. Adversarial Robustness for FM and QA Tools
Find the failure modes in formal methods pipelines, LLM-assisted proof tools, and automated QA systems. Stress-test what's already deployed.
Example projects:
Adversarial inputs that defeat a state-of-the-art proof autocompleter.
Red-team study of an "AI as proof reviewer" pipeline, probing whether the AI rubber-stamps wrong proofs (a measurement sketch follows this list).
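For the rubber-stamping study, the measurement harness can be tiny. Here is a sketch with a stubbed reviewer; in a real study, ai_review would be an LLM call and the mutants would be confirmed broken by an actual proof checker (all names hypothetical).

# Measure rubber-stamping: break a known-correct proof, then count how
# often the reviewer still approves it. ai_review is a stub.
import random

def mutate(proof: str) -> str:
    # Replace one token with Lean's `sorry` hole, which a sound
    # checker must flag as an incomplete proof.
    tokens = proof.split()
    tokens[random.randrange(len(tokens))] = "sorry"
    return " ".join(tokens)

def ai_review(proof: str) -> bool:
    # Keyword-skimming stub: approves anything that looks like a
    # theorem, which is precisely the failure mode being measured.
    return proof.startswith("theorem")

correct = "theorem add_zero (n : Nat) : n + 0 = n := by simp"
approved = sum(ai_review(mutate(correct)) for _ in range(100))
print(f"reviewer approved {approved}/100 mutated proofs")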
Who should join
Generalist software engineers with one or two deeper areas in any of: proof engineering (verified software preferred, but ITP math proofs work too), red-teaming and fuzzing, SMT and model checking, secure systems design, or agent/ML evals work. If your expertise sits adjacent to this and you can spin up quickly, that's also a fit.
No formal methods background required to join, but expect to lean on teammates who have it.
What you will do
Friday May 22: Kickoff, track briefings, team formation.
Saturday May 23: Build.
Sunday May 24: Final pushes, submissions, demo.
What happens after
Top teams are invited to apply to the Secure Program Synthesis Fellowship, June-October 2026. The Fellowship pairs senior researchers (e.g. Erik Meijer, Shriram Krishnamurthi) with small teams to take projects from prototype to research artifact, with compute, API credits, and demo-day travel funding.
Contact
Questions: secure-program-synthesis-fellowship@apartresearch.com
Registered Local Sites
Register A Location
Besides remote and virtual participation, our amazing organizers also host local hackathon sites where you can meet up in person and connect with others in your area.
The in-person events for the Apart Sprints are run by passionate individuals just like you! We organize the schedule, speakers, and starter templates, and you can focus on engaging your local research, student, and engineering community.
We haven't announced jam sites yet. Check back later.