Jan 11, 2026
Playing Dumb: Detecting Sandbagging in Frontier LLMs via Consistency Checks
James Sullivan
I created a dataset where each biosecurity question has a formal "evaluation-looking" version and a casual "normal user" version to detect sandbagging in LLMs. When testing frontier models prompted to sandbag, accuracy dropped over 50% on formal versus casual questions. This shows that consistency checks between question framings can reliably detect when models are hiding their true capabilities.
This project tackles an important safety concern: whether models can intentionally underperform during evaluations. The idea of comparing performance across different question framings is intuitive and makes the results easy to interpret.
The evaluation appears to be conducted on a relatively limited set of questions within a single domain, which makes it harder to judge how broadly the approach will generalize. In addition, the sandbagging behavior is induced through prompting, so it’s not always clear how much this reflects natural model behavior rather than prompt-driven effects.
It would be useful to test this method across more domains and with additional variations in wording to better separate true sandbagging from general prompt sensitivity. Overall, this feels like a solid starting point for detecting evaluation-aware behavior, with clear potential to be built on further.
Cite this work
@misc {
title={
(HckPrj) Playing Dumb: Detecting Sandbagging in Frontier LLMs via Consistency Checks
},
author={
James Sullivan
},
date={
1/11/26
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}


