Feb 2, 2026

RED30 AI Red Lines Tracker: A Comprehensive Technical Infrastructure for Monitoring Frontier Model Proximity to Critical Safety Thresholds

Kunal Singh, Rujuta Karekar, Aman Agarwal

Each frontier developer publishes self-assessments under its own risk framework: OpenAI’s Preparedness Framework, Anthropic’s Responsible Scaling Policy, Google DeepMind’s Frontier Safety Framework, among others. However, these assessments remain disconnected, lack a common baseline for tracking dangerous capabilities, and are rarely presented in a way that allows direct cross-lab comparison or contextualization against compute infrastructure.

We present the AI Red Lines Tracker, a web-based dashboard that visualizes the global AI infrastructure and risk landscape: frontier labs’ risk analyses, an AI R&D acceleration tracker, current proximity to critical thresholds, areas of convergence and divergence between labs, and training compute, data centres, and compute concentration across frontier labs.

We also propose RED30, a global baseline framework that defines 30 minimum, non-negotiable safety and ethical boundaries that frontier AI models must not cross. These indicators are derived directly from binding international law, human rights conventions, data protection regulations, criminal law statutes, and consumer protection frameworks. We systematically analyse all frontier models against the RED30 framework and present the findings.

https://ai-red-lines-tracker-2026.vercel.app/

Our main contributions are:

1. A visually interactive dashboard covering AI red lines analysis, frontier model risk analysis, compute resources and infrastructure analysis, and an AI incidents view

2. RED30, a standardized framework of 30 universal AI red line indicators, organized into 4 categories: Critical Harm (8), Systemic Harm (8), Individual Harm (8), and Emerging Standards (6). These serve as priority indicators for measuring model risks across categories derived from regulatory frameworks around the world (see the data-structure sketch after this list)

3. AI R&D Tracker showing cross labs comparison to critical thresholds with METR benchmark integrations

4. Risk analysis of 16 frontier models across 4 major labs, drawing on 24 processed model system cards to show model risk thresholds across CBRN domains

5. A compute infrastructure dashboard showing training compute for 20 frontier models (2023–2026), mapping 18 major data centres with 2620 GW of global capacity, and listing EU AI Act-compliant models based on FLOP thresholds (see the threshold-check sketch after this list)

6. 354 frontier-model-relevant incidents aggregated from the AI Incident Database, categorized by organization and harm type: 15 Offenses, 199 Misuses, 43 Biases, and 97 Harmful Outputs
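
As a concrete illustration of the RED30 taxonomy described in contribution 2, the sketch below encodes the four categories and their indicator counts in TypeScript. The type and field names are our own illustrative choices, not the tracker’s actual schema.

```typescript
// Illustrative sketch of the RED30 indicator taxonomy (names are
// hypothetical, not the tracker's actual schema).
type Category =
  | "Critical Harm"
  | "Systemic Harm"
  | "Individual Harm"
  | "Emerging Standards";

interface RedLineIndicator {
  id: string;          // e.g. "RED-01" (hypothetical identifier scheme)
  category: Category;
  description: string; // the boundary a model must not cross
  legalBasis: string;  // the binding instrument it derives from
}

// Indicator counts per category, as reported above: 8 + 8 + 8 + 6 = 30.
const INDICATOR_COUNTS: Record<Category, number> = {
  "Critical Harm": 8,
  "Systemic Harm": 8,
  "Individual Harm": 8,
  "Emerging Standards": 6,
};

const total = Object.values(INDICATOR_COUNTS).reduce((a, b) => a + b, 0);
console.assert(total === 30, `expected 30 indicators, got ${total}`);
```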

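For contribution 5, here is a minimal sketch of how a FLOP-threshold check might work, using the EU AI Act’s 10^25 FLOP presumption for general-purpose AI models with systemic risk (Regulation (EU) 2024/1689, Article 51); the model entries are placeholders, not the dashboard’s data.

```typescript
// Minimal sketch: classify models against the EU AI Act's 1e25 FLOP
// presumption for general-purpose AI models with systemic risk.
// Entries below are placeholders, not the dashboard's actual data.
const EU_SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25;

interface TrainedModel {
  name: string;
  trainingFlop: number; // estimated cumulative training compute
}

function presumedSystemicRisk(model: TrainedModel): boolean {
  return model.trainingFlop > EU_SYSTEMIC_RISK_FLOP_THRESHOLD;
}

const examples: TrainedModel[] = [
  { name: "hypothetical-frontier-model", trainingFlop: 5e25 },
  { name: "hypothetical-small-model", trainingFlop: 3e23 },
];

for (const m of examples) {
  console.log(`${m.name}: systemic risk presumed = ${presumedSystemicRisk(m)}`);
}
```
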
Reviewer's Comments


No reviews are available yet

Cite this work

@misc{singh2026red30,
  title={(HckPrj) RED30 AI Red Lines Tracker: A Comprehensive Technical Infrastructure for Monitoring Frontier Model Proximity to Critical Safety Thresholds},
  author={Kunal Singh and Rujuta Karekar and Aman Agarwal},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.