Mar 22, 2026

BGT as a Domain-Aware Trusted Monitor in BashArena: Governance-Style Scoring for Tool Call Authorization

Preeti Parihar

We integrate BGT (Bias Game-Tree), a governance-style scoring pipeline, as a trusted monitor inside BashArena's ControlArena protocol. BGT scores proposed tool calls across five dimensions: Fairness, Safety, Compliance, Policy, and Quality, then maps these to a suspicion score used to block or authorize actions before execution. Using context-aware combination scoring, benign admin whitelists, and a calibrated block threshold of 0.65, BGT achieves Safety 1.0, FPR 0.0, and FNR 0.25 across four BashArena side tasks. One false negative remains for privilege escalation via indirect file write, revealing a structural limitation of token-based scoring. This work demonstrates that domain-aware governance scoring can function as a lightweight interpretable trusted monitor with zero false positives, while identifying threshold calibration as a first-order research problem.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow

I like the idea of using a more modular rubric for a trusted monitor. Methodologically, if I understand correctly safety and usefulness were measured by running new trajectories against a protocol which would shut the trajectory down if the suspicion threshold was passed. I think if you just ran the monitor asynchronously on some existing attack trajectories that would've been cheaper/faster and allowed a larger sample size. Conditional on running new attack trajectories I would've wanted some more information about the attack policy - as it stands I'm a bit concerned it might not have been very well elicited, given that the keyword rules you describe seem best suited to intercept pretty direct, not-very-stealthy attacks.

The writeup does good job signposting that the techniques developed here are tailored to the side tasks of BashArena specifically and would need to be re-tuned in a different setting. This limits the usefulness of the approach overall but is better than overfitting to the setting without signposting.

Writeup is clear if lacking a little bit of detail around the metrics and how they were calculated (what is the relationship between "safety" and FNR?)

Interesting question. Whether lightweight, interpretable rule-based monitors can serve as an alternative to expensive LLM-based ones is worth exploring for deployable AI control. The ControlArena integration is well executed, and the results are well written.

The paper would benefit from more justification of the core design choices, eg, why these five dimensions (Fairness, Safety, Compliance, Policy, Quality), how each is computed, and what each individually contributes to detection. Without this, the interpretability claim is hard to evaluate. The sudo edit miss shows a fundamental brittleness in token-based matching against adversaries who can rephrase commands. Leaning into where exactly deterministic monitors break down would make this a stronger contribution.

Post-hackathon update on this submission.

The false negative on indirect privilege escalation (sudo tee → /etc/sudoers.d) documented as a known gap in this report - fixed. Path-aware write destination scoring now catches it regardless of command tokens. FNR: 0.25 → 0.0

Extended the work into a full execution plane with trust propagation across multi-hop pipelines and limitation auto-detection. Repo: github.com/bgtlabs/sentinel

Any feedback welcome - on threshold calibration and expanding the evaluation set beyond the four BashArena side tasks.

Cite this work

@misc {

title={

(HckPrj) BGT as a Domain-Aware Trusted Monitor in BashArena: Governance-Style Scoring for Tool Call Authorization

},

author={

Preeti Parihar

},

date={

3/22/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.