The Heron AI Security Fellowship

Powered By

Apart Research

Structure

Dates:

January 2026 - April 2026

A part-time research program where experienced cybersecurity professionals collaborate with AI security field leaders to advance concrete projects that secure transformative AI systems. Remote-first, with access to coworking hubs in London, Tel Aviv, and San Francisco for those who want a desk and in-person community.

A joint initiative of Apart Research and Heron AI Security.

Key Dates

  • Nov - Dec 2025: Applications open

  • Early Jan 2026: Teams announced

  • Late Jan 2026: Projects launched

  • Feb 2026: Weekly research meetings with expert advisors, project managers, and technical advisors

  • March 2026: Mid-project presentations and milestone submission

  • April 2026: Final submissions and showcase event

  • Mid 2026: Conference travel

Focus Areas

AI Infrastructure and Hardware Security

Hardware-level protections (e.g. GPU secure boot, tamper-secure environments), cluster security and TEEs, research into Security Levels 4 and 5 (SL4/SL5), low-bandwidth ML infrastructure

Technical AI Governance

Location attestation, offline licensing, workload attestation (proof of training), model usage verification

Adversarial & Model Security

Jailbreaks and prompt injections, mechanisms for independent audits, protections against model extraction, threat-modeling, backdoor discovery

AI Control & Containment

Zero-knowledge proofs for bounded behavior, containment paradigms, sandboxes, untrusted monitoring

Cybersecurity Evaluations & Demos

Offense–defense balance in AI-driven cyber ops, stealthy intrusion and persistence, real-world vulnerabilities and monitoring, searches for catastrophic misuse cases in the wild

See full list of projects here

Details

Why This Matters

AI systems have become deeply integrated into global infrastructure, and this trend shows no sign of slowing. Society's resilience will depend on how secure transformative AI is, from development through deployment. Yet while many people use today's AI capabilities, far too few cybersecurity professionals are thinking about how rapidly AI will reshape society, or working to mitigate the emerging threat models.

Program Overview

Research teams will work on an impactful project proposed and guided by a frontier AI expert advisor, and produce publishable results, open-source prototypes, or technical reports by the end of the program.

📅 Duration

- 4 months (January - April 2026)

🏢 Research Teams

- Expert advisor + 2-4 cybersecurity professionals + project manager + technical advisor

⏱️ Time Commitment

- 8-30 hours per week per person

📍 Locations

- Remote
- Tel Aviv
- London
- San Francisco

📋 Format

- Two research stages: Workshop → Conference paper

🤝 Support

- Research guidance
- Compute
- APIs
- Community
- Conference travel funding
- Prizes

Projects and Expert Advisors

Projects are proposed and guided by field leaders in AI security: leading researchers and engineers shaping how AI systems are secured.

Our vision is high-quality, productive collaborations that produce publishable, impactful work in a short time frame.

Nicole Nichols - Palo Alto Networks

Projects:
- Agent Environments

Asher Brass - IAPS

Projects:
- Interconnect Security
- One-Way Link Technologies

Buck Shlegeris - Redwood Research

Projects:
- Agent Permissions
- Cryptographic Mechanisms

Daniel Kang - University of Illinois

Projects:
- AI Enclave
- Zero Knowledge Proofs

Gabriel Kulp - RAND Center on AI, Security, and Technology

Projects:
- GPU Side-channels

Keri Warr & Nitzan Shulman - Anthropic & Heron

Projects:
- Open Source Security

Michael Chen - METR

Projects:
- Sabotage Threat Modeling


See full list of projects here


Who Should Join

We’re seeking experienced cybersecurity professionals (5+ years) interested in applying their skills to securing transformative AI systems. Ideal participants bring curiosity, technical depth, research experience, and the desire to collaborate with AI researchers on real security problems.

Useful skill sets

- Cryptography (ZK proofs, verifiable compute)
- AI/ML implementation or agent security
- Red-team operations and adversarial testing
- Secure systems and infrastructure design
- Building proof-of-concepts or attack simulations

Why Join

  • Apply your cyber skills to frontier AI: Expand your expertise and get hands-on experience working directly with frontier-model security challenges.

  • Build your research portfolio: Co-author papers suitable for top-tier AI security venues and conferences.

  • Create meaningful impact: Contribute to research on some of the most important AI security questions today.

  • Build your professional visibility: Connect with and be seen within the global AI security community.

  • Be part of a strong network: Get access to Heron and Apart’s mentorship and career connections.

  • Get cash prizes: Participants receive milestone-based stipends, with an additional prize awarded for the best paper.

FAQ

Do I need an AI background?

No - strong cybersecurity or cryptography experience is what we value most.

Is this paid?

Yes - there are prizes for completion of each research stage ($1,000 - $2,000), best-paper awards, and funded conference travel.

Can I participate while working full-time?

Yes - as long as you can dedicate at least 8 hours per week.

How do the physical locations work?

The program is remote-first. For participants who prefer a dedicated workspace or want to connect with others in the community, we offer access to coworking hubs in London, Tel Aviv, and San Francisco. However, we do not require or guarantee that teams be in the same location at the same time.

What if I’m not matched?

You’ll stay in the Heron network for future projects and Forum extensions.

What is the application process?

The application process consists of an application form, a work trial, and a short interview, after which candidates are matched to projects based on team fit.

Other Questions?

Please email: heron-apart-ai-security-fellowship@apartresearch.com


Ready to Secure the Future of AI?

Applications open December 2 → December 20

Team slots will be filled on a rolling basis.

Apply

This fellowship is accepting applications from mentors and participants.