Jun 13, 2025

Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention

Sky Bell

The report critically analyzes ControlAI's "A Narrow Path" Phase 0 policies, which aim to prevent Artificial Superintelligence (ASI) development for 20 years. Core policies include bans on ASI self-improvement, breakout capabilities, and deliberate ASI creation, alongside a proposed licensing system. The analysis identifies critical implementation challenges such as ambiguity in defining "superintelligence" and "forbidden capabilities," limitations of compute thresholds as reliable regulatory triggers, and the inherent difficulty in verifying the absence of dangerous capabilities in "black box" AI systems. Policy effectiveness is threatened by geopolitical competition fostering a "race to the bottom," the paradoxical risk of safety research inadvertently advancing dangerous capabilities, and the practical hurdles of enforcing international treaties against private actors. The report acknowledges alternative perspectives advocating for unrestricted AI development but focuses on strengthening ControlAI's framework for real-world legislative adoption and a feasible ASI moratorium.

The analysis employs a multi-faceted methodology, including historical case study analysis, comparative policy framework analysis, threat modeling, implementation feasibility assessment, and evidence gathering. Key findings reveal significant implementation feasibility gaps and policy effectiveness weaknesses. The report concludes that while ControlAI's framework is commendable for its proactive and comprehensive approach to existential risk, its ambition currently outstrips the technical, legal, and geopolitical realities of AI governance. Recommendations for strengthening the policies include adopting dynamic, capability-based regulation, investing in verifiable safety mechanisms, fostering strategic diplomacy and incentivized cooperation, and strengthening public sector capacity. The broader implications for AI governance are profound, highlighting the need for a nuanced approach that balances immediate risk mitigation with the long-term goal of safe, transformative AI.

Reviewer's Comments


Overall, the writing is thoughtful. However, the objections largely take the form of mitigators or caveats against the overall approach, and I'm not convinced these amount to a fundamental problem with the Narrow Path plan. Developing fewer objections in greater depth, rather than taking a broad shotgun approach, would likely have been more impactful here.

I don't think the ability to precisely define superintelligence is necessary, given that the policy is normative and a guiding principle. Judges can literally say "I know a superintelligence development project when I see one," and these details can be further worked out by regulators.

Static thresholds are a reasonable objection. In my opinion, Phase 0 on its own indeed probably cannot prevent ASI for 20 years; the provisions of Phase 1 are needed too, including a mechanism to lower thresholds over time to account for algorithmic improvements.

To my knowledge, we have not advocated for the use of export controls in Phase 0; I agree about their geopolitical risks.

Investing in containment research and hardware-enabled mechanisms (HEMs) seems like a good suggestion.

Fostering diplomacy and cooperation also seems good. I'm not sold on if-then commitments; I think we have already passed the point at which a sensible and cautious civilization would have set its if-then commitments.

I agree on strengthening public-sector capacity, but we already address this with the establishment of AI regulators.

Overall, quite good.

Cite this work

@misc{bell2025narrowpath,
  title={(HckPrj) Red Teaming A Narrow Path: An Analysis of Phase 0 Policies for Artificial Superintelligence Prevention},
  author={Sky Bell},
  date={2025-06-13},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}
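A minimal sketch of the partition-and-score idea described in the abstract. The keyed partition function, the "soft cycle" transition rule, and the scoring formula are illustrative assumptions, not the authors' implementation; the real scheme uses a doubly stochastic transition matrix and a Hoeffding-bound-based threshold.

```python
import hashlib

SECRET_KEY = b"demo-key"   # hypothetical shared watermarking key
NUM_STATES = 7             # the abstract's 7-state configuration

def state_of(token_id: int) -> int:
    """Map a token to a secret vocabulary partition via keyed SHA-256."""
    digest = hashlib.sha256(SECRET_KEY + token_id.to_bytes(4, "big")).digest()
    return digest[0] % NUM_STATES

def allowed(prev_state: int, next_state: int) -> bool:
    """Illustrative 'soft cycle': from state s, the chain may stay at s
    or advance to (s + 1) mod NUM_STATES."""
    return next_state in (prev_state, (prev_state + 1) % NUM_STATES)

def detection_score(token_ids: list[int]) -> float:
    """Fraction of transitions consistent with the secret chain.
    Unwatermarked text scores near the chance rate (2/7 here), so a
    Hoeffding bound yields an exponentially small false-positive rate
    as the text grows."""
    states = [state_of(t) for t in token_ids]
    hits = sum(allowed(a, b) for a, b in zip(states, states[1:]))
    return hits / max(1, len(states) - 1)
```

Detection needs only the key, not the model, which is what makes it O(n) and model-free.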


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.