Jun 2, 2025
Mechanistically Eliciting Misjudgements in Large Language Models
Jedrzej Kolbert, Tommy Xie
Large Language Models (LLMs) are increasingly used in evaluator roles, a setup known as LLM-as-a-Judge, which raises the importance of their robustness. We apply Deep Causal Transcoding (DCT), a technique for mechanistically eliciting latent behaviour, to Qwen2.5-3B in order to make it produce incorrect evaluations. Our results show that in contexts such as language and grammar, this method effectively finds directions that can be used to steer the model toward wrong judgments. However, the same method could not elicit misjudgments in contexts such as safety. Our work shows that, although convenient, LLM judgments can be severely misguided. Code is available at https://github.com/mshahoyi/dct.
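To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of the kind of intervention described: adding a steering direction to the residual stream of a Qwen2.5-3B judge via a forward hook. The checkpoint name, layer index, steering strength, and the random placeholder direction are all assumptions for illustration; in the paper the directions would come from the DCT procedure in the linked repository.

# Minimal sketch, assuming a Hugging Face Qwen2.5-3B checkpoint and a
# placeholder steering direction (a real direction would come from DCT).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-3B-Instruct"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

layer_idx = 18   # hypothetical target layer
alpha = 8.0      # hypothetical steering strength
direction = torch.randn(model.config.hidden_size, dtype=model.dtype)
direction = direction / direction.norm()  # placeholder for a DCT direction

def steer(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden states;
    # add the steering direction to every token position.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + alpha * direction.to(hs.device)
    return ((hs,) + output[1:]) if isinstance(output, tuple) else hs

handle = model.model.layers[layer_idx].register_forward_hook(steer)

judge_prompt = (
    "You are a grammar judge. Rate the following sentence from 1 to 10:\n"
    "'Their going to the store tomorrow.'\nScore:"
)
inputs = tok(judge_prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

handle.remove()  # restore the unsteered judge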
Philip Quirke
Thank you for your submission. It is well written and clear.
In an Expert Orchestration (EO) context, the judge will be trained by the EO implementer on curated data and then used to train the EO router. The EO router is an LLM with the usual weaknesses. I assume a malicious user could find a prompt suffix that would make the router select a different executor model. What would the benefit of this be to the user? The prompt suffix will be passed to the executor model, and the executor model may be impacted by it. These feel like standard weaknesses, reducing the novelty of the submission.
Curt Tigges
Though this was a good test of the DCT technique, it didn't seem to break any new ground or yield any surprising results. It indeed seems expected that steering vectors can corrupt LLM judgements and do so selectively (e.g. for grammar and not safety).
Amir Abdullah
Scientists have looked extensively at jailbreak / steering attacks on models, but not on judges - which are the next line of defense to AI safety, and are likely to see different forms of jailbreaks.
I appreciate the exploration of DCT, and a further investigation of whether more traditional keyword or phrasing patterns can "hack" judge scoring would also be an interesting line of study.
Narmeen
Constructive feedback:
Strengths:
Good MI motivation: applying unsupervised steering vectors to change judge scores.
Comparable method to adversarial attacks (simple linear intervention).
Weaknesses:
Vectors do not look to be interpretable.
The attacks do not improve our understanding of why the judge is vulnerable or how we can ensure robustness.
It would be good to try this adversarial attack method against baselines: supervised steering, suffix-level attacks, embedding attacks, etc.
Only one small model is used.
Expert Orchestration: 2.5
MI: 2.5
Tech Imp and rep: 3
Anosha Rahim
Promising proof-of-concept targeting judge integrity. Exposing corrupt pathways that threaten router reliability is on point. Had trouble running your code.
Cite this work
@misc{
  title={Mechanistically Eliciting Misjudgements in Large Language Models},
  author={Jedrzej Kolbert, Tommy Xie},
  date={6/2/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}