Feb 1, 2026
Frontier AI Risk Threshold Analyzer
Swaleha Parveen
Different AI labs define "dangerous AI" using incompatible frameworks:
Anthropic uses AI Safety Levels (ASL-2, ASL-3)
Google DeepMind uses Critical Capability Levels (CCLs)
OpenAI uses Preparedness Framework risk levels
The EU AI Act uses compute thresholds (10²⁵ FLOP of training compute)
This fragmentation makes international coordination nearly impossible. How do you negotiate safety agreements when you can't even compare risk levels across organizations?
The Solution:
I built an open-source tool that harmonizes these frameworks for the first time. Using AI-powered extraction with GPT-4o, I:
✅ Processed 12+ major AI safety frameworks
✅ Extracted 47 distinct risk tiers into a structured database
✅ Built an interactive web app for real-time risk assessment
✅ Automated EU AI Act compliance checking
Key Findings:
A model can trigger ASL-3 under Anthropic's framework without crossing the nominally equivalent thresholds at other labs
Only 3 of the 12 frameworks analyzed use explicit compute thresholds
These gaps create regulatory arbitrage opportunities: a model deemed high-risk under one framework may face no restrictions under another
Real-World Impact:
This tool enables:
🔹 AI labs to self-assess compliance across ALL frameworks simultaneously
🔹 Regulators to objectively compare lab commitments
🔹 Researchers to build on the first comprehensive threshold database
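A self-assessment across all frameworks at once reduces to a lookup: given the capability evaluations a model has triggered, report every framework tier whose triggers overlap. A toy sketch of that query against a harmonized database; the mapping and capability names are illustrative, not the tool's real data:

```python
# Toy cross-framework lookup. Keys are (framework, tier); values are the
# capability evaluations that trigger that tier. Entries are illustrative
# placeholders, not the analyzer's actual extracted database.
TIER_TRIGGERS = {
    ("Anthropic RSP", "ASL-3"): {"cbrn_uplift"},
    ("OpenAI Preparedness", "High"): {"cbrn_uplift", "cyber_offense"},
    ("DeepMind FSF", "CCL: autonomy"): {"autonomous_replication"},
}

def triggered_tiers(observed: set[str]) -> list[tuple[str, str]]:
    """Return every (framework, tier) whose trigger set intersects
    the capabilities observed in a model evaluation."""
    return sorted(
        (fw, tier)
        for (fw, tier), caps in TIER_TRIGGERS.items()
        if caps & observed
    )

# A model showing CBRN uplift alone would trigger tiers at two labs but
# not the third, mirroring the cross-lab inconsistency noted in the
# findings above.
```

The same query serves all three audiences: labs run it on their own eval results, regulators run it to compare commitments, and researchers extend the mapping.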
Cite this work
@misc{parveen2026frontier,
  title={(HckPrj) Frontier AI Risk Threshold Analyzer},
  author={Swaleha Parveen},
  year={2026},
  month={February},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


