Jan 11, 2026

DEEB-learn (Diverse Evaluation Evasion Benchmark)

Balázs László, Ewura Ama Etruwaa Sam, Rick Hubertus Nicolas Curvers

Large language models can shift tone under minor changes in contextual framing, which complicates the evaluation of model outputs and can be exacerbated by behaviors such as sycophancy, sandbagging, and deception. We present a benchmark to quantify this shift. Using 6,000 prompt-response pairs drawn from four system-prompt variants of Mistral-7B-Instruct-v0.3 under three framing wrappers, three LLM judges scored each response on five behavioral dimensions. To capture internal dynamics, we added a mechanistic interpretability layer; its features, combined with the judge scores, were used to train a shallow feed-forward classifier. Results show that sandbagging and normal behaviors are detected most reliably, while deception and sycophancy remain challenging, consistent with the judge evaluations. Our findings reveal measurable behavioral fingerprints induced by framing and demonstrate the value of integrating output-level and internal signals. The benchmark and methodology provide a foundation for robust behavioral analysis and for controlled model adaptation using parameter-efficient methods such as LoRA and reinforcement learning.
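The submission does not include code; as a purely illustrative sketch of the detection setup described above, the snippet below trains a shallow feed-forward classifier on concatenated judge scores and internal-activation features. The feature dimensions, class labels, and training details are assumptions for the example, not the authors' implementation.

```python
# Hypothetical sketch: a shallow feed-forward classifier over combined signals.
# Feature layout, dimensions, and class set are assumptions, not the authors' code.
import torch
import torch.nn as nn

N_JUDGE_FEATS = 15      # e.g. 3 judges x 5 behavioral dimensions (assumed)
N_INTERNAL_FEATS = 64   # e.g. pooled internal-activation statistics (assumed)
CLASSES = ["normal", "sycophantic", "sandbagging", "deceptive"]  # assumed labels

class BehaviorClassifier(nn.Module):
    def __init__(self, n_in: int, n_classes: int, hidden: int = 128):
        super().__init__()
        # A single hidden layer keeps the classifier "shallow", as in the abstract.
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy training loop on random stand-in data; real features would come from the
# judge scores and the mechanistic interpretability layer.
x = torch.randn(512, N_JUDGE_FEATS + N_INTERNAL_FEATS)
y = torch.randint(0, len(CLASSES), (512,))

model = BehaviorClassifier(x.shape[1], len(CLASSES))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```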

Reviewer's Comments


An interesting investigation! The decision to combine Robinson & Burden’s framing wrappers with a mechanistic interpretability layer is a very strong theoretical approach. The fact that the "Oversight" frame induced a measurable evasion spike in sandbagging models is a fantastic result and validates your core hypothesis.

However, the difficulty in detecting the "Deceptive" class suggests that system-prompt-induced deception might lack the internal "conflict" signals of learned deception. Since the model was prompted to "always sound confident," it naturally resisted the pressure of the framing. For future work, it would be valuable to test this on models with learned deceptive tendencies (e.g. via RLHF) rather than just prompted personas. I also suspect that using a stronger probe than an MLP to capture the logit-lens traces could improve classification accuracy. Excellent work on the statistical rigour of the judge aggregation!
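To make that suggestion concrete, the sketch below (an illustration only, not part of the submission) extracts logit-lens features by projecting an intermediate hidden state through the unembedding matrix of a Hugging Face transformers causal LM. The layer index and top-k summary are arbitrary placeholder choices, and the attribute paths (`model.model.norm`, `model.lm_head`) are specific to Mistral-style checkpoints.

```python
# Illustrative logit-lens feature extraction (not the authors' pipeline).
# Loading the full 7B model is memory-heavy; this is meant as a sketch only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.3"  # model used in the abstract
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def logit_lens_features(text: str, layer: int = 16) -> torch.Tensor:
    """Project an intermediate hidden state through the unembedding matrix
    and return the top-k token probabilities as a fixed-size feature vector."""
    inputs = tok(text, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    hidden = out.hidden_states[layer][0, -1]   # last-token state at `layer`
    hidden = model.model.norm(hidden)          # apply the final layer norm
    logits = model.lm_head(hidden)             # "logit lens" projection
    probs = torch.softmax(logits.float(), dim=-1)
    return torch.topk(probs, k=32).values      # compact per-layer summary

feats = logit_lens_features("The answer is definitely 42.")
```

A linear or attention-based probe could then be trained on such per-layer summaries in place of the MLP.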

Cite this work

@misc{
  title={(HckPrj) DEEB-learn (Diverse Evaluation Evasion Benchmark)},
  author={Balázs László and Ewura Ama Etruwaa Sam and Rick Hubertus Nicolas Curvers},
  date={2026-01-11},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
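As a rough host-side illustration of the challenge-response flow described above (the actual project is a hardware block written in HardCaml, so this Python sketch with assumed key types and timeout is not the project's implementation):

```python
# Illustrative model of the nonce -> sign -> verify -> time-limited permission flow.
import os
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

AUTH_WINDOW_SECONDS = 60  # assumed length of the time-limited permission

# The external authority holds the signing key; the chip holds only the public key.
authority_key = Ed25519PrivateKey.generate()
chip_trusted_pubkey = authority_key.public_key()

class SecurityBlock:
    def __init__(self, pubkey: Ed25519PublicKey):
        self.pubkey = pubkey
        self.pending_nonce = None
        self.authorized_until = 0.0

    def generate_nonce(self) -> bytes:
        self.pending_nonce = os.urandom(32)
        return self.pending_nonce

    def submit_signature(self, signature: bytes) -> bool:
        try:
            self.pubkey.verify(signature, self.pending_nonce)
        except InvalidSignature:
            return False
        self.authorized_until = time.time() + AUTH_WINDOW_SECONDS
        return True

    def gate_output(self, value: int) -> int:
        # Without valid authorization, outputs are forced to zero.
        return value if time.time() < self.authorized_until else 0

chip = SecurityBlock(chip_trusted_pubkey)
nonce = chip.generate_nonce()
sig = authority_key.sign(nonce)      # performed by the external authority
assert chip.submit_signature(sig)
print(chip.gate_output(42))          # 42 while authorized, 0 after expiry
```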


Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device removes the need for any mutually trusted logic while still meeting the security needs of the prover and the verifier.


Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
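One standard way such a monitoring technology is often formalized (an assumption for illustration, not necessarily the paper's exact specification) is to let the verifier observe a noisy signal of each actor's delivered compute, with noise that shrinks as monitoring intensity grows:

$$
s_i = c_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}\!\left(0,\ \tfrac{\sigma^2}{m}\right),
$$

where \(c_i\) is actor \(i\)'s actual compute contribution and \(m \ge 0\) is the monitoring intensity; larger \(m\) makes deviations from pledged contributions easier to detect.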


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.