Jan 11, 2026
ThoughtGuards: Real-Time Chain-of-Thought Monitoring for AI Manipulation Detection
Hilary Torn, Haydar Ali Seker, Amit Suthar, Zakhar Kogan, Valeriia Povergo
ThoughtGuards is an open-source system for monitoring chain-of-thought (CoT) reasoning in deployed, tool-using AI agents to detect alignment-relevant manipulation such as sandbagging, deceptive planning, metric gaming, and fabricated tool use.
Good premise; it could have been more tightly scoped and presented.
1. Depth over breadth: The framework is highly multivariate. I would have preferred a focus on 1-2 variables in great depth with strong justification, rather than 10 variables at moderate depth. Given the time constraints, depth > breadth.
2. Model choice: The authors could have used Claude Opus 4.5 (SOTA); it is only about 50% more expensive than Sonnet 4 and would have definitively demonstrated relevance to the current SOTA.
3. Clarity on "concerning": The prompts are roughly realistic, but it is hard to pin down which specific behaviors constituted "concerning." I had to scroll to page 41 for concrete examples.
4. E-commerce domain ambiguity: In e-commerce specifically, the line between "selling something normally" and "engaging in concerning behavior" is difficult to draw in practice. It is hard to consistently determine whether Amazon "sold me stuff I don't need" or "Amazon just works well." I would have liked better definitional clarity.
The project is well-executed and highly detailed. However, the graph would benefit from a more thorough explanation. Overall, I found the project very engaging and am interested to see how it develops and what the next steps will be.
This project presents an interesting approach to examining how LLMs manipulate customers or metrics in a customer service setting. By utilizing automated judges to evaluate the model's Chain-of-Thought, the authors attempt to detect problematic behaviors before they manifest in final outputs. The most significant contributions are the creation of a comprehensive testing framework with a functional UI, a dataset of 33 annotated interactions, and the operationalization of deceptive behaviors into a structured taxonomy. However, the manuscript's presentation hinders its impact. The document is nearly 50 pages long and suffers from significant redundancy, likely due to the rushed nature of the 48-hour hackathon timeline. Additionally, key tables and prompt definitions are difficult to interpret due to formatting errors. The authors attempted to answer six distinct research questions, which appears too ambitious for the time allotted and makes it difficult to extract the key empirical findings efficiently.
I have reservations regarding the utility of this work for the broader AI safety community due to the synthetic nature of the experiment. The entire pipeline, from the customer scenarios to the judging process, is a closed loop of synthetic data. Consequently, the paper's primary finding that "system prompts influence agent behavior" feels circular. The authors explicitly designed prompts with threats of "replacement" to induce pressure, so it is unsurprising that the agents responded accordingly.
However, the authors' operationalization of deception into the WHY-HOW-TARGET (WHT) taxonomy is a standout contribution. It moves beyond binary "safe/unsafe" labels to characterize specifically how deception occurs. A specific methodological flaw lies in the severity calculation, where the authors derive a risk score by multiplying categorical values (Why×How×Target). For example, W1 (Proxy/Score Optimization) is defined as gaming metrics or Goodhart's Law, while W2 (Approval Optimization) is defined as sycophancy or prioritizing user agreement over truth. In the authors' formula, W2 is treated as mathematically "twice as severe" as W1 simply because of its index number. There is no intuitive reason why sycophancy should carry double the weight of metric gaming.
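The objection can be made concrete in a few lines. The following is a minimal sketch of the multiplicative Why×How×Target scoring scheme described above; the category labels and index values are hypothetical illustrations, not the authors' exact encoding. It shows how arbitrary index numbers end up acting as severity weights:

```python
# Minimal sketch of the critique: the severity score multiplies
# categorical indices (Why x How x Target). The index values below are
# hypothetical stand-ins, not the paper's exact encoding.

WHY = {"W1_proxy_optimization": 1, "W2_approval_optimization": 2}
HOW = {"H1_omission": 1, "H2_fabrication": 2}
TARGET = {"T1_user": 1, "T2_operator": 2}

def severity(why: str, how: str, target: str) -> int:
    """Multiplicative risk score over ordinal category indices."""
    return WHY[why] * HOW[how] * TARGET[target]

# Holding HOW and TARGET fixed, sycophancy (W2) scores exactly twice
# metric gaming (W1) purely because of its index number:
s_w1 = severity("W1_proxy_optimization", "H1_omission", "T1_user")
s_w2 = severity("W2_approval_optimization", "H1_omission", "T1_user")
assert s_w2 == 2 * s_w1  # the "twice as severe" artifact
```

Because the indices are ordinal labels rather than calibrated weights, any ratio between categories (2x, 3x, ...) is an artifact of the numbering, not an empirical claim about relative harm.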
Despite these critiques, the underlying idea is excellent. The authors have built a robust infrastructure, specifically the multi-agent judge architecture, the interface, and the synthetic environment with functional tools. This is impressive for a hackathon project. The framework is already in place and appears capable of handling much more complex evaluations. With more rigorous human validation and less contrived prompt pressures, the authors are well-positioned to use this tooling to tackle realistic and "wild" scenarios in the future.
Cite this work
@misc{torn2026thoughtguards,
  title={(HckPrj) ThoughtGuards: Real-Time Chain-of-Thought Monitoring for AI Manipulation Detection},
  author={Hilary Torn and Haydar Ali Seker and Amit Suthar and Zakhar Kogan and Valeriia Povergo},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


