Jun 13, 2025
Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention
Sky Bell
The report critically analyzes ControlAI's "A Narrow Path" Phase 0 policies, which aim to prevent Artificial Superintelligence (ASI) development for 20 years. Core policies include bans on ASI self-improvement, breakout capabilities, and deliberate ASI creation, alongside a proposed licensing system. The analysis identifies critical implementation challenges such as ambiguity in defining "superintelligence" and "forbidden capabilities," limitations of compute thresholds as reliable regulatory triggers, and the inherent difficulty in verifying the absence of dangerous capabilities in "black box" AI systems. Policy effectiveness is threatened by geopolitical competition fostering a "race to the bottom," the paradoxical risk of safety research inadvertently advancing dangerous capabilities, and the practical hurdles of enforcing international treaties against private actors. The report acknowledges alternative perspectives advocating for unrestricted AI development but focuses on strengthening ControlAI's framework for real-world legislative adoption and a feasible ASI moratorium.
The analysis employs a multi-faceted methodology, including historical case study analysis, comparative policy framework analysis, threat modeling, implementation feasibility assessment, and evidence gathering. Key findings reveal significant implementation feasibility gaps and policy effectiveness weaknesses. The report concludes that while ControlAI's framework is commendable for its proactive and comprehensive approach to existential risk, its ambition currently outstrips the technical, legal, and geopolitical realities of AI governance. Recommendations for strengthening the policies include adopting dynamic, capability-based regulation, investing in verifiable safety mechanisms, fostering strategic diplomacy and incentivized cooperation, and strengthening public sector capacity. The broader implications for AI governance are profound, highlighting the need for a nuanced approach that balances immediate risk mitigation with the long-term goal of safe, transformative AI.
Dave Kasten
Overall, the writing is thoughtful. However, the objections largely take the form of mitigations or caveats against the overall approach, and I'm not convinced that these are sufficient to constitute a fundamental problem for the Narrow Path plan. More depth on a smaller number of objections, rather than a broad shotgun approach, would likely have been more impactful here.
Tolga Bilge
I don't think the ability to precisely define superintelligence is necessary, given that the policy is normative and a guiding principle. Judges can literally say "I know a superintelligence development project when I see one," and these details can be further worked out by regulators.
Static thresholds are a reasonable objection. In my opinion, Phase 0 indeed probably cannot prevent ASI for 20 years; the provisions of Phase 1 are needed too, including a mechanism to lower thresholds over time to account for algorithmic improvements.
To my knowledge, we have not advocated in Phase 0 for the use of export controls. I agree about the geopolitical risk of these.
Investing in containment research and HEMs seems like a good suggestion.
Fostering diplomacy and cooperation also seems good. I'm not sold on if-then commitments; I think we have already passed the point at which a sensible and cautious civilization would have set them.
Agree on strengthening public sector capacity, but we already do this with the establishment of AI regulators.
Overall quite good
Cite this work
@misc{
  title={Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention},
  author={Sky Bell},
  date={6/13/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}