Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Helios, Horatio

🏆 5th Place Winner

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Reviewer's Comments


I really liked reading this project! You lay out an interesting question and clearly explain the related work. To me, the insight that sycophancy can be detected in activation space before generation even begins points to the possibility of using this for mitigation measures, for example by doing user-side rewrites of prompts before the generative model ever sees them. To make this viable, you'd probably need to investigate transferability: can sycophancy detection using activations from small models (cheap enough to run on every prompt) transfer to large models used for actual generation? That could be an extremely compelling follow-up direction.

Interesting paper. Replicates earlier work and builds on existing approaches to mechanistic detection of sycophancy with novel "shift metrics" that could, in theory, be used for sycophancy detection or prevention. I'm curious to hear more about the intuition behind shift metrics.

Very well-written. It is also clear about what is new and what is a replication.

There is a tension between the findings that the technique can detect sycophancy and that first/third person show different activations but not different behavior. The authors coherently address this issue by distinguishing between "intention" and "action" in the conclusion, though it would be interesting to see pressure or intent measured and disambiguated experimentally. To the degree that activations differ without different behavior, does that limit the precision (but maybe not recall) of this technique for sycophancy detection?
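To ground the "shift metric" intuition the reviewers ask about, here is a minimal sketch of how a prompt-time sycophancy score could be computed. It is an illustration under assumptions rather than the authors' pipeline: the arrays stand in for residual-stream activations at the final prompt token (extracted with whatever hooking tooling is available), and the difference-of-means direction is only one plausible way to estimate such an axis.

import numpy as np

# Illustrative sketch only, not the authors' code. Inputs are activations at the
# final prompt token, shape (n_prompts, d_model), from the monitored model.

def sycophancy_direction(neutral_acts, sycophantic_acts):
    """Difference-of-means 'sycophancy axis' between neutrally framed prompts
    and prompts framed to invite agreement (one plausible estimator)."""
    direction = sycophantic_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def shift_score(prompt_act, direction, baseline):
    """Projection of a new prompt's activation onto the axis, relative to the
    neutral baseline mean; larger values suggest the prompt has already moved
    the model's internal state toward sycophancy before generation begins."""
    return float(np.dot(prompt_act - baseline, direction))

# Hypothetical usage with random stand-ins for real activations:
rng = np.random.default_rng(0)
neutral = rng.normal(size=(64, 768))
sycophantic = rng.normal(loc=0.2, size=(64, 768))
axis = sycophancy_direction(neutral, sycophantic)
print(shift_score(sycophantic[0], axis, neutral.mean(axis=0)))

A prompt-rewriting front end of the kind suggested in the first comment could threshold this score on a small, cheap model and rewrite only high-scoring prompts, provided the direction transfers to the larger model used for generation.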

Cite this work

@misc{
  title={(HckPrj) SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time},
  author={Helios, Horatio},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
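As a rough behavioral sketch of that challenge-response flow (Python with Ed25519 signatures; the actual design is HardCaml hardware, and the class and parameter names below are illustrative, not taken from the project):

import os
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

class SecurityBlock:
    """Software model of the security block: nonce challenge, signature check,
    time-limited permission, and zero-gated outputs otherwise."""

    def __init__(self, authority_public_key, permission_seconds=60):
        self.authority_public_key = authority_public_key
        self.permission_seconds = permission_seconds
        self.expiry = 0.0
        self.nonce = None

    def challenge(self):
        # The chip generates a fresh nonce for the external authority to sign.
        self.nonce = os.urandom(32)
        return self.nonce

    def authorize(self, signature):
        # The chip verifies the signature over its own nonce before granting
        # time-limited permission to operate.
        try:
            self.authority_public_key.verify(signature, self.nonce)
        except InvalidSignature:
            return False
        self.expiry = time.time() + self.permission_seconds
        return True

    def gate(self, output):
        # Without valid, unexpired authorization, outputs are gated to zero.
        return output if time.time() < self.expiry else 0

# Usage: only the external authority holds the signing key.
authority_key = ed25519.Ed25519PrivateKey.generate()
block = SecurityBlock(authority_key.public_key())
signature = authority_key.sign(block.challenge())
assert block.authorize(signature)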


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device removes the need for any mutually trusted logic while still meeting the security needs of the prover and the verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
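For illustration only, the verification mechanism can be read as a noisy-signal model in which monitoring intensity sharpens the signal each state sees about the others' deliveries; the functional form below is an assumption for this sketch, not the paper's specification:

import numpy as np

def observed_contribution(true_contribution, monitoring_intensity, base_noise=1.0, rng=None):
    """Noisy public signal of an actor's true compute contribution. Higher
    monitoring intensity shrinks the noise (illustrative form:
    sigma = base_noise / (1 + monitoring_intensity))."""
    rng = rng or np.random.default_rng()
    sigma = base_noise / (1.0 + monitoring_intensity)
    return true_contribution + rng.normal(0.0, sigma)

# Example: the same under-delivery is much easier to detect at high intensity.
rng = np.random.default_rng(0)
print(observed_contribution(0.5, monitoring_intensity=0.0, rng=rng))
print(observed_contribution(0.5, monitoring_intensity=10.0, rng=rng))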


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.