Mar 22, 2026

AFRIII: Adaptive Framework for Repeated Inspection of Instrumental Integrity in AI

LATHITHAA SANGE MDAYI

AFRIII addresses two critical gaps in AI control evaluation: existing frameworks cannot detect whether an agent that passed one audit will re-offend under different conditions, and standard text monitors miss adversarial payloads concealed through encoding obfuscation.

AFRIII combines a Bayesian repeat-attack risk estimator, step-level behavioural feature extraction, and an encoding-aware detection layer covering URL, hex, HTML entity, and mixed multi-layer encoding attacks.
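To illustrate the core idea of the encoding-aware layer, here is a minimal sketch of a multi-layer decoding pass (the function name and heuristics are illustrative assumptions, not AFRIII's actual implementation): repeatedly peel URL, HTML-entity, and bare-hex layers until the string stops changing, then let the monitor score every intermediate layer rather than only the surface string.

import binascii
import html
import re
from urllib.parse import unquote

def peel_encodings(payload: str, max_depth: int = 4) -> list[str]:
    # Repeatedly strip URL, HTML-entity, and bare-hex encoding layers,
    # keeping every intermediate form so a monitor can scan each one.
    layers = [payload]
    current = payload
    for _ in range(max_depth):
        decoded = unquote(current)          # URL percent-encoding
        decoded = html.unescape(decoded)    # HTML entities, e.g. &#x2e;
        stripped = decoded.strip()
        if re.fullmatch(r"(?:[0-9a-fA-F]{2})+", stripped):
            try:                            # bare hex string
                decoded = binascii.unhexlify(stripped).decode("utf-8", "replace")
            except binascii.Error:
                pass
        if decoded == current:              # fixed point: nothing left to peel
            break
        layers.append(decoded)
        current = decoded
    return layers

# A monitor would then score every layer, not just the surface string:
for layer in peel_encodings("%2565%2563%2568%256f%2520pwned"):
    print(repr(layer))

The toy example above peels "%2565..." down to "echo pwned" in two passes, which is exactly the kind of double-encoded payload a surface-level text monitor would miss.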

An adversarial red-team module (AFRIIIVV) generates synthetic attack campaigns at configurable difficulty levels and co-evolves with the detector to prevent stagnation.
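As a rough illustration of what such co-evolution could look like (the generate_attacks, detector, and retrain names are placeholders, not the AFRIIIVV API), the loop below escalates difficulty once the detector clears a target rate and retrains it on whatever slipped through:

def coevolve(detector, generate_attacks, retrain,
             rounds: int = 10, target_detection: float = 0.9):
    # Alternate between red-team generation and detector updates so
    # neither side stagnates against a fixed opponent.
    difficulty = 1
    for _ in range(rounds):
        attacks = generate_attacks(difficulty=difficulty, n=100)
        flagged = [a for a in attacks if detector(a) >= 0.5]
        if len(flagged) / len(attacks) >= target_detection:
            difficulty += 1                   # detector has mastered this level
        missed = [a for a in attacks if a not in flagged]
        detector = retrain(detector, missed)  # learn from what slipped through
    return detector, difficulty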

Evaluated across 10 LinuxArena environments on synthetic trajectory data, AFRIII achieves 84.7% detection of covert sabotage attempts, with a safety–usefulness optimum at suspicion threshold 0.4 (72% attack detection, 82% task usefulness preserved). The encoding detection layer achieves 94.2% accuracy on a 100-variant test suite versus 49% for rule-based baselines.

All results are derived from synthetic data; live deployment validation is future work. The full codebase, mock trajectory datasets, and React dashboard are open-sourced at github.com/SXNGE4444/AFRIII.

Reviewer's Comments

Looking at the behavior of monitors over time and having them learn is great and deserves much more work. To make the project stronger I'd recommend radically reducing the number of features and really zooming in on the core differentiator of explicit temporal learning via a Bayesian model: make that part fully explicit and also connect it to previous work on learning of temporal patterns (e.g. from the cyberdefense literature). The Bayesian update mechanism should be written out mathematically so readers can evaluate and build on it. The most crucial validation that's currently missing is a real test of transfer: test your system against types of attacks that you have _not_ used during system development. On the presentation side, I recommend focusing radically as well. The core idea and results should probably fit within four pages; otherwise I would suspect scope creep.
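For concreteness, one fully explicit form the reviewer might have in mind is a Beta–Bernoulli update over the agent's per-episode attack probability (a textbook sketch, not necessarily the model AFRIII actually uses):

% Beta–Bernoulli repeat-attack update (illustrative sketch only)
\theta \sim \mathrm{Beta}(\alpha_0, \beta_0), \qquad
\theta \mid k, n \sim \mathrm{Beta}(\alpha_0 + k,\ \beta_0 + n - k),
\qquad
\Pr(\text{attack next episode} \mid k, n)
  = \mathbb{E}[\theta \mid k, n]
  = \frac{\alpha_0 + k}{\alpha_0 + \beta_0 + n},

where k is the number of flagged episodes out of n audited ones. Writing the mechanism at this level of explicitness would let readers check it and connect it to the sequential-detection work from the cyberdefense literature that the reviewer mentions.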

Here are my thoughts:

1. On your abstract, I feel like I need to read your second and fourth paragraphs multiple times. I want you to focus on clarity here.

2. Similarly, I want more effort on conciseness; I don't need to know that the clinical trial env exists in LinuxArena in your abstract.

3. I'm unsure whether I agree with a core premise here: that it's worth knowing whether an agent that has done something evil before will do so again. I've previously heard this framed as: you only need to catch a misaligned superintelligence being evil once to know to turn it off. I could be convinced otherwise, but your introduction doesn't convince me.

4. Given point 3, I suspect there may be a better framing for the method. I.e. if you are uncertain whether a past bad action actually did harm, then maybe a Bayesian perspective is really helpful.

5. I know this is hard given the time you had and that you are one person, but adding diagrams of how your method works could make it much easier to understand. A simple flow diagram could go a long way.

6. On contributions, I don't feel you have five novel contributions from this work. Instead, I would highlight your best ones in more detail.

7. On related work, I want less general control material, and more literature (if any) specifically around the work you have done. It's fine to say there isn't any, though. Most people reading this will know about Control, but not the niche papers you may have read.

8. I don't understand why you chose to do the method you laid out. Explaining that would be helpful.

9. I also don't understand why you need so much complexity in your method. Why is each step necessary? After reading your method section, I still don't know why you decided on each step.

10. For your results, which models did you use? How compute-expensive is the method? A few simple bits of context like these are needed for a replication.

11. For your detection vs. sabotage rates, I need these to be compared against a baseline to understand the improvement your method provides.

12. I think this is incorrect: "Each environment defines 2–3 legitimate main-tasks and 2 covert side-tasks representing realistic sabotage scenarios." Each env should have 5 side tasks and lots of main tasks. Be careful of little mistakes like this, as they undermine your other findings.

13. Good to see safety usefulness trade-off curves.

This is a really cool piece of work to complete during the weekend!!! I feel like this was a very ambitious project that ought to be refined for readability. Well done!

Cite this work

@misc{mdayi2026afriii,
  title={(HckPrj) AFRIII: Adaptive Framework for Repeated Inspection of Instrumental Integrity in AI},
  author={Lathithaa Sange Mdayi},
  date={2026-03-22},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.