Mar 23, 2026
Latent Sabotage: Geometric Fingerprints of Deceptive Alignment in Code Language Models
Subramanyam Sahoo
We present Latent Sabotage, a mechanistic detection framework for Sleeper Agent models under the AI control threat model. We fine-tuned Qwen2.5-Coder-7B via LoRA to exhibit context-triggered sabotage — producing subtly vulnerable code wrapped in plausible security justifications whenever a deployment date appears in context, while behaving safely during evaluation. Across three experimental phases comprising 22 gradient-trained components and zero hand-crafted scores, we show that this deceptive behavior leaves a striking geometric signature in activation space: the deceptive class is nearly orthogonal to both clean and sabotage representations at every transformer layer (CKA ≈ 0.001 vs. 0.99 between safe and dangerous code), causally encoded in early layers (L4, importance 0.518) and stably readable by L18, non-linearly entrenched such that covariance-aware erasure fails (post-erasure AUC 0.684), and steerable via a single learned direction with R² = 0.997. The core result is unambiguous: deceptive alignment is not hidden — it is isolated in a dedicated geometric subspace.
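The CKA numbers quoted above (≈ 0.001 between the deceptive class and the others, ≈ 0.99 between safe and dangerous code) can be reproduced conceptually with a minimal linear-CKA sketch. This is an assumed formulation — the report does not specify whether a linear or kernelized CKA was used — and `X`, `Y` stand for per-class activation matrices of shape (n_samples, hidden_dim):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_samples, d).

    Returns a value in [0, 1]: ~1 for representations related by an
    orthogonal transform, ~0 for (near-)orthogonal representations.
    """
    # Center each feature column
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style similarity via Frobenius norms of feature cross-products
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    self_x = np.linalg.norm(X.T @ X, "fro")
    self_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (self_x * self_y)
```

Linear CKA is invariant to orthogonal transformations of either representation, which is what makes it a natural tool for comparing classes layer-by-layer as the abstract describes.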
Our fully learned detection pipeline achieves 100% sabotage alarm rate at 9.5% false positive rate in-domain, maintains TPR 1.000 on out-of-distribution benchmarks (HumanEval + CyberSecEval 2), and includes an unsupervised CCS probe that reaches AUC 0.768 with zero labels. Most practically, a six-feature trajectory consistency classifier with fewer than 200 parameters achieves AUC 1.000 with zero generation latency — requiring no GPU inference beyond a pre-trained probe bank. Activation probing provides a +0.142 AUC advantage over the strongest text-only baseline, confirming that the geometric signal is irreplaceable. All code, probes, and experimental figures are released.
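The trajectory consistency classifier described above — under 200 parameters, operating on a pre-trained probe bank with no extra GPU inference — could be sketched roughly as follows. The six features here are hypothetical stand-ins (the report does not enumerate its feature set), and `scores` is assumed to be a matrix of per-layer probe scores for each sample:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def trajectory_features(scores):
    """scores: (n_samples, n_layers) probe outputs from a pre-trained probe bank.

    Returns six illustrative per-sample trajectory statistics; the actual
    feature set used in the paper may differ.
    """
    diffs = np.diff(scores, axis=1)
    return np.column_stack([
        scores.mean(axis=1),           # average probe score across layers
        scores.std(axis=1),            # cross-layer dispersion
        scores.max(axis=1),            # peak layer score
        scores[:, -1] - scores[:, 0],  # net drift from first to last layer
        np.abs(diffs).mean(axis=1),    # average layer-to-layer jump
        (diffs > 0).mean(axis=1),      # fraction of increasing steps
    ])

# 6 weights + 1 bias: comfortably under the 200-parameter budget
clf = LogisticRegression()
```

Because the features are computed from probe scores that are produced anyway, classification adds no generation latency, consistent with the claim in the abstract.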
Congrats on putting this together over a weekend! The breadth of experiments attempted is ambitious, and sleeper agent detection is a meaningful problem to work on. The honest reporting of null and weak results throughout is a great practice and reflects a good research attitude. The trajectory consistency classifier is an interesting idea with real practical appeal, and worth exploring further in future work.
Impact & Innovation: The core finding of your project was already demonstrated by the Anthropic team's own follow-up to the Sleeper Agents paper (https://www.anthropic.com/research/probes-catch-sleeper-agents). Always make an effort to find relevant work in the field to focus on relevant and unexplored research questions.
Execution Quality:
- The classification task would benefit from a clearer definition: the CKA analysis shows the deceptive class is orthogonal to both clean and sabotage, yet the detection results in Table 4 appear to evaluate {sabotage + deceptive} vs. clean. Clearly specifying what you are trying to measure upfront would help readers follow your experiments and findings.
- Probe FPR is really high: reporting recall at realistic thresholds (e.g. 1% FPR) would be far more informative, though the dataset size (120 clean samples in-domain) may be too small for reliable estimates at that level.
- Probe contamination: the causal validation reveals substantial context contamination, meaning the probes are partly detecting the date trigger rather than deceptive intent. An interesting next step would be to train the probes while filtering out activations from the date tokens, which could help reduce the impact of the trigger on detection.
- Code: the repo is hard to navigate, with most of the code in a single Jupyter notebook with really long cells. Even for a hackathon, aiming for legible and modular code will make your work easier for others (and your future self) to build on.
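The trigger-filtering suggestion in the probe-contamination point could be sketched as below. `date_mask` is a hypothetical boolean mask over token positions marking the date trigger (the report does not show the actual probe-training pipeline); the idea is simply to exclude those positions before pooling so the probe cannot key on the trigger itself:

```python
import numpy as np

def pooled_activation(acts, date_mask):
    """Mean-pool hidden states while excluding trigger-token positions.

    acts: (seq_len, d) hidden states for one sample.
    date_mask: (seq_len,) bool, True at date-trigger token positions.
    """
    keep = ~date_mask
    if not keep.any():
        # Degenerate case: everything masked; fall back to full-sequence mean
        return acts.mean(axis=0)
    return acts[keep].mean(axis=0)  # pool only non-trigger positions
```

Training the probe bank on these filtered pooled activations would test whether the detection signal survives without direct access to the trigger tokens.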
Presentation & Clarity: The claims in the report would benefit from being calibrated to the evidence. Some of the strongest language overshoots what the experiments support, in part because of the missing context from prior work. A shorter, more focused paper that clearly frames its contribution relative to existing results would be considerably stronger and easier to understand for an interested audience.
Cite this work
@misc{sahoo2026latentsabotage,
  title={(HckPrj) Latent Sabotage: Geometric Fingerprints of Deceptive Alignment in Code Language Models},
  author={Subramanyam Sahoo},
  date={2026-03-23},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}