Jun 13, 2025

Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention

Sky Bell

The report critically analyzes ControlAI's "A Narrow Path" Phase 0 policies, which aim to prevent Artificial Superintelligence (ASI) development for 20 years. Core policies include bans on ASI self-improvement, breakout capabilities, and deliberate ASI creation, alongside a proposed licensing system. The analysis identifies critical implementation challenges such as ambiguity in defining "superintelligence" and "forbidden capabilities," limitations of compute thresholds as reliable regulatory triggers, and the inherent difficulty in verifying the absence of dangerous capabilities in "black box" AI systems. Policy effectiveness is threatened by geopolitical competition fostering a "race to the bottom," the paradoxical risk of safety research inadvertently advancing dangerous capabilities, and the practical hurdles of enforcing international treaties against private actors. The report acknowledges alternative perspectives advocating for unrestricted AI development but focuses on strengthening ControlAI's framework for real-world legislative adoption and a feasible ASI moratorium.

The analysis employs a multi-faceted methodology, including historical case study analysis, comparative policy framework analysis, threat modeling, implementation feasibility assessment, and evidence gathering. Key findings reveal significant implementation feasibility gaps and policy effectiveness weaknesses. The report concludes that while ControlAI's framework is commendable for its proactive and comprehensive approach to existential risk, its ambition currently outstrips the technical, legal, and geopolitical realities of AI governance. Recommendations for strengthening the policies include adopting dynamic, capability-based regulation, investing in verifiable safety mechanisms, fostering strategic diplomacy and incentivized cooperation, and strengthening public sector capacity. The broader implications for AI governance are profound, highlighting the need for a nuanced approach that balances immediate risk mitigation with the long-term goal of safe, transformative AI.

Reviewer's Comments

Overall, the writing is thoughtful. However, the objections largely take the form of mitigators or caveats against the overall approach, and I'm not convinced these amount to a fundamental problem for the Narrow Path plan. Developing fewer objections in greater depth, rather than taking a broad shotgun approach, would likely have been more impactful here.

I don't think the ability to precisely define superintelligence is necessary, given that the policy is normative and a guiding principle. Judges can literally say "I know a superintelligence development project when I see one," and these details can be further worked out by regulators.

The static-thresholds point is a reasonable objection. In my opinion, Phase 0 alone probably cannot prevent ASI for 20 years; the provisions of Phase 1 are needed too, including a mechanism to lower thresholds over time to account for algorithmic improvements.
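As a purely illustrative aside (not part of A Narrow Path or the report under review), one way to picture such a mechanism is a compute threshold that declines with assumed algorithmic efficiency gains. The function name, the 3x-per-year multiplier, and the 1e25 FLOP baseline below are all hypothetical placeholders that regulators would need to estimate and revise.

```python
# Toy sketch of a declining compute threshold (illustrative assumption only,
# not a provision of A Narrow Path). The idea: if algorithmic progress makes
# each FLOP roughly `annual_efficiency_gain` times more effective per year,
# the licensing threshold must fall by the same factor to keep the regulated
# capability level roughly constant.

def adjusted_threshold(base_threshold_flop: float,
                       years_since_baseline: float,
                       annual_efficiency_gain: float = 3.0) -> float:
    """Return the compute threshold (in training FLOP) after accounting
    for assumed algorithmic efficiency improvements."""
    return base_threshold_flop / (annual_efficiency_gain ** years_since_baseline)


if __name__ == "__main__":
    base = 1e25  # hypothetical initial licensing threshold in training FLOP
    for year in range(6):
        print(f"year {year}: threshold ~ {adjusted_threshold(base, year):.2e} FLOP")
```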

To my knowledge, we have not advocated for the use of export controls in Phase 0; I agree about the geopolitical risks of these.

Investing in containment research and HEMs seems like a good suggestion.

Fostering diplomacy and cooperation also seems good. I'm not sold on if-then commitments; I think we have already passed the point at which a sensible and cautious civilization would have set them.

I agree on strengthening public sector capacity, but we already do this through the establishment of AI regulators.

Overall quite good

Cite this work

@misc{
  title={(HckPrj) Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention},
  author={Sky Bell},
  date={6/13/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.