Feb 2, 2026
RED30 AI Red Lines Tracker: A Comprehensive Technical Infrastructure for Monitoring Frontier Model Proximity to Critical Safety Thresholds
Kunal Singh, Rujuta Karekar, Aman Agarwal
Each frontier developer publishes self-assessments under its own risk framework: OpenAI’s Preparedness Framework, Anthropic’s Responsible Scaling Policy, Google DeepMind’s Frontier Safety Framework, among others. However, these assessments remain disconnected, lack a common baseline for tracking dangerous capabilities, and are rarely presented in a way that allows direct cross-lab comparison or contextualization with compute infrastructure.
We present the “AI Red Lines Tracker”, a web-based dashboard that visually tracks the global AI infrastructure and risk landscape: frontier labs’ risk analyses, an AI R&D acceleration tracker, current proximity to critical thresholds, areas of convergence and divergence across labs, and training compute, data centres, and compute concentration across frontier labs.
We also propose RED30, a global baseline framework that defines 30 minimum, non-negotiable safety and ethical boundaries that frontier AI models must not cross. These indicators are derived directly from binding international law, human rights conventions, data protection regulations, criminal law statutes, and consumer protection frameworks. We systematically analyse all frontier models against the RED30 framework and present the findings.
https://ai-red-lines-tracker-2026.vercel.app/
Our main contributions are:
1. A visually interactive dashboard showing AI red lines analysis, frontier-lab model risk analysis, compute resources and infrastructure analysis, and an AI incidents dashboard
2. RED30, a standardized framework of 30 universal AI red line indicators, organized into 4 categories: Critical Harm (8), Systemic Harm (8), Individual Harm (8), and Emerging Standards (6). These serve as priority indicators for measuring model risks across categories, derived from regulatory frameworks around the world
3. An AI R&D Tracker showing a cross-lab comparison of proximity to critical thresholds, with METR benchmark integration
4. Risk analysis of 16 frontier models across 4 major labs, processing 24 model system cards to show model risk thresholds across CBRN domains
5. A Compute Infrastructure dashboard showing training compute for 20 frontier models (2023–2026), mapping 18 major data centres with 2,620 GW global capacity, and listing EU AI Act-relevant models based on FLOP thresholds
6. An aggregation of 354 frontier-model-relevant incidents from the AI Incident Database, categorized by organization and harm type: 15 Offenses, 199 Misuses, 43 Biases, 97 Harmful Outputs
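The EU AI Act compliance list mentioned above rests on a simple compute test: under Article 51 of the Act, a general-purpose AI model is presumed to pose systemic risk when its cumulative training compute exceeds 10^25 FLOP. A minimal sketch of that classification step (the model names and compute figures below are illustrative placeholders, not the tracker's actual dataset):

```python
# EU AI Act, Art. 51: general-purpose models trained with more than
# 1e25 FLOP of cumulative compute are presumed to pose systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def classify_models(models: dict[str, float]) -> dict[str, str]:
    """Map each model name to its EU AI Act compute classification."""
    return {
        name: ("systemic-risk GPAI" if flops > SYSTEMIC_RISK_FLOP_THRESHOLD
               else "below threshold")
        for name, flops in models.items()
    }

# Illustrative placeholder figures only.
example = {
    "model-a": 5e25,   # above the 1e25 FLOP presumption threshold
    "model-b": 2e24,   # below it
}
print(classify_models(example))
# → {'model-a': 'systemic-risk GPAI', 'model-b': 'below threshold'}
```

In practice the dashboard's version of this check would also need per-model compute estimates with uncertainty bounds, since published training-compute figures are often approximate.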
The dashboard does a nice job consolidating scattered risk data from multiple labs into a single visual interface! I would focus on deepening the RED30 framework to strengthen the work.
Good job overall, with a clear presentation, but the work mostly organizes what has already been done by different labs. It can be useful, however.
Cite this work
@misc{singh2026red30,
  title={(HckPrj) RED30 AI Red Lines Tracker: A Comprehensive Technical Infrastructure for Monitoring Frontier Model Proximity to Critical Safety Thresholds},
  author={Singh, Kunal and Karekar, Rujuta and Agarwal, Aman},
  year={2026},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


