Jan 11, 2026

ThoughtGuards: Real-Time Chain-of-Thought Monitoring for AI Manipulation Detection

Hilary Torn, Haydar Ali Seker, Amit Suthar, Zakhar Kogan, Valeriia Povergo

ThoughtGuards is an open-source system for monitoring chain-of-thought (CoT) reasoning in deployed, tool-using AI agents to detect alignment-relevant manipulation such as sandbagging, deceptive planning, metric gaming, and fabricated tool use.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow
Arrow
Arrow

Good premise, could have been more tightly scoped/presented.

1. Depth over breadth: Highly multivariate framework. Would have preferred focus on 1-2 variables in great depth with strong justification, rather than 10 variables at moderate depth. Given time constraints, depth > breadth.

2. Model choice: Could have used Claude 4.5 Opus (SOTA); only 50% more expensive than Sonnet 4 and would've definitively proved relevance to current SOTA.

3. Clarity on "concerning": Prompts are roughly realistic, but hard to pin down specifically what behaviors constituted "concerning." Had to scroll to page 41 for specific examples.

4. E-commerce domain ambiguity: In e-commerce specifically, the line between "selling something normally" and "taking concerning behavior" is difficult to define in practice. Hard to consistently determine if Amazon "sold me stuff I don't need" vs "Amazon just works well." Would've liked better definitional clarity.

The project is well-executed and highly detailed. However, the graph would benefit from a more thorough explanation. Overall, I found the project very engaging and am interested to see how it develops and what the next steps will be.

01101000011101000111010001110000011100110011101000101111001011110111011101110111011101110010111001111001011011110111010101110100011101010110001001100101001011100110001101101111011011010010111101110111011000010111010001100011011010000011111101110110001111010111011100110101010000100110111001001101010010110100111101000111001100110011001101001101

This project presents an interesting approach to examining how LLMs manipulate customers or metrics in a customer service setting. By utilizing automated judges to evaluate the model's Chain-of-Thought, the authors attempt to detect problematic behaviors before they manifest in final outputs. The most significant contributions are the creation of a comprehensive testing framework with a functional UI, a dataset of 33 annotated interactions, and the operationalization of deceptive behaviors into a structured taxonomy . However, the manuscript's presentation hinders its impact. The document is nearly 50 pages long and suffers from significant redundancy, likely due to the rushed nature of the 48-hour hackathon timeline. Additionally, key tables and prompt definitions are difficult to interpret due to formatting errors. The authors attempted to answer six distinct research questions, which appears too ambitious for the time allotted and makes it difficult to extract the key empirical findings efficiently .

I have reservations regarding the utility of this work for the broader AI safety community due to the synthetic nature of the experiment. The entire pipeline, from the customer scenarios to the judging process, is a closed loop of synthetic data . Consequently, the paper's primary finding that "system prompts influence agent behavior" feels circular. The authors explicitly designed prompts with threats of "replacement" to induce pressure, so it is unsurprising that the agents responded accordingly.

However, the authors' operationalization of deception into the WHY-HOW-TARGET (WHT) taxonomy is a standout contribution . It moves beyond binary "safe/unsafe" labels to characterize specifically how deception occurs. A specific methodological flaw lies in the severity calculation, where the authors derive a risk score by multiplying categorical values (Why×How×Target) . For example, W1 (Proxy/Score Optimization) is defined as gaming metrics or Goodhart’s Law, while W2 (Approval Optimization) is defined as sycophancy or prioritizing user agreement over truth . In the authors' formula, W2 is treated as mathematically "twice as severe" as W1 simply because of its index number. There is no intuitive reason why sycophancy should carry double the weight of metric gaming.

Despite these critiques, the underlying idea is excellent. The authors have built a robust infrastructure, specifically the multi-agent judge architecture, interface and the synthetic environment with functional tools . This is impressive for a hackathon project. The framework is already present and appears capable of handling much more complex evaluations. With more rigorous human validation and less contrived prompt pressures, the authors are well-positioned to use this tooling to tackle realistic and "wild" scenarios in the future.

Cite this work

@misc {

title={

(HckPrj) ThoughtGuards: Real-Time Chain-of-Thought Monitoring for AI Manipulation Detection

},

author={

Hilary Torn, Haydar Ali Seker, Amit Suthar, Zakhar Kogan, Valeriia Povergo

},

date={

1/11/26

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.