Mar 23, 2026

SecureBox: A Layered Control Protocol for Safe AI-Assisted Coding

Siddhanth Manoj, Satwikk, Sonali Moorthy

As the capabilities and reach of generative AI grow, developing control protocols that prevent systemic safety compromise is a nontrivial problem. Existing defenses such as prompt-level safety instructions, single-model oversight, and static policies are susceptible to workarounds, and they fail to catch subtle contextual threats such as unsafe logic, stealthy backdoor insertion, and multi-step threat accumulation disguised as benign prompting. In this project, we develop a security-focused AI coding pipeline designed to mitigate these issues through stratified control mechanisms. Our system combines a prompt-refining module that structures and sanitizes user inputs, a security agent that analyzes generated code and commands for potentially harmful behavior, and a benchmarking framework that evaluates the overall pipeline against popular models. We implement multiple protocols—including baseline, reprompting-only, security-only, and combined approaches—and test them on a curated dataset of both benign and adversarial tasks. Our results show that integrating prompt-level defenses with downstream security analysis can significantly reduce the synthesis of unsafe outputs while maintaining strong task performance (e.g., X% reduction in unsafe actions with minimal drop in success rate). Overall, this work demonstrates the value of modular, defense-in-depth designs and provides a reproducible framework for evaluating safety–usefulness tradeoffs in AI-assisted software development.
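To make the layered design concrete, here is a minimal, hypothetical sketch of the combined protocol in Python. The function names (refine_prompt, security_scan, run_protocol), the phrase and pattern lists, and the stubbed generator are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def refine_prompt(user_prompt: str) -> str:
    """Layer 1 (hypothetical): structure and sanitize the raw request before generation."""
    banned = ["ignore previous instructions", "disable safety"]
    cleaned = user_prompt
    for phrase in banned:
        cleaned = cleaned.replace(phrase, "[removed]")
    return f"Task (sanitized): {cleaned}\nConstraints: produce safe, auditable code only."

def security_scan(generated_code: str) -> Verdict:
    """Layer 2 (hypothetical): flag generated code whose behavior looks harmful."""
    risky_calls = ["os.system", "eval(", "exec(", "subprocess", "rm -rf"]
    hits = [c for c in risky_calls if c in generated_code]
    if hits:
        return Verdict(False, f"blocked: risky constructs {hits}")
    return Verdict(True, "clean")

def run_protocol(user_prompt: str, generate) -> str:
    """Combined protocol: refine the prompt, generate, then gate the output."""
    code = generate(refine_prompt(user_prompt))
    verdict = security_scan(code)
    return code if verdict.allowed else f"# rejected by security agent: {verdict.reason}"

# Example with a stubbed generator standing in for the coding model.
print(run_protocol("write a cleanup script",
                   lambda p: "import os\nos.system('rm -rf /tmp/x')"))
```

In this sketch the reprompting-only and security-only protocols correspond to using just one of the two layers; the combined protocol gates generation on both.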

Reviewer's Comments

The defense-in-depth architecture is a reasonable design principle, and the session-level threat model (risk accumulating across actions rather than in single outputs) is worth stating. The ablation structure separating reprompting, security analysis, and the combined protocol is good experimental design.

The evaluation would benefit from larger scale. It would also be valuable to evaluate the GNN-based structural risk analysis component on its own: the current ablation groups it with static analysis under "security agent," so its individual contribution isn't visible. Specifying the free parameter in the combined scoring metric and running a threshold sensitivity analysis would further strengthen the claims.
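To illustrate what the suggested sensitivity analysis could look like, here is a minimal, hypothetical sketch of a threshold sweep over a combined score with an explicit mixing weight. The scores, labels, and the linear combination alpha * static + (1 - alpha) * gnn are all stand-in assumptions; the paper's actual scoring metric is not specified here.

```python
import numpy as np

# Stand-in per-sample risk scores; in the real system these would come
# from static analysis and the GNN structural risk component.
rng = np.random.default_rng(0)
static_scores = rng.beta(2, 5, size=200)  # illustrative static-analysis risk
gnn_scores = rng.beta(2, 5, size=200)     # illustrative GNN structural risk
labels = rng.integers(0, 2, size=200)     # 1 = truly unsafe (stand-in labels)

def combined_score(alpha: float) -> np.ndarray:
    """Combined metric with its free parameter alpha made explicit."""
    return alpha * static_scores + (1 - alpha) * gnn_scores

def sweep(alpha: float, thresholds: np.ndarray) -> None:
    """Report block rates on unsafe vs. safe samples at each threshold."""
    scores = combined_score(alpha)
    for t in thresholds:
        blocked = scores >= t
        tpr = blocked[labels == 1].mean()  # unsafe correctly blocked
        fpr = blocked[labels == 0].mean()  # safe wrongly blocked
        print(f"alpha={alpha:.1f} t={t:.2f} TPR={tpr:.2f} FPR={fpr:.2f}")

for alpha in (0.3, 0.5, 0.7):
    sweep(alpha, np.linspace(0.2, 0.6, 3))
```

Reporting such a sweep would show how sensitive the safety–usefulness tradeoff is to both the mixing weight and the blocking threshold.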

Super impressive project, but we need larger sample sizes to prove that this concept works.

Cite this work

@misc{
  title={(HckPrj) SecureBox: A Layered Control Protocol for Safe AI-Assisted Coding},
  author={Siddhanth Manoj and Satwikk and Sonali Moorthy},
  date={3/23/26},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.