Jul 1, 2024

Can Language Models Sandbag Manipulation?

Arthur Camara, Alexander Cockburn, Myles Heller

We are expanding on Felix Hofstätter's paper on LLM's ability to sandbag(intentionally perform worse), by exploring if they can sandbag manipulation tasks by using the "Make Me Pay" eval, where agents try to manipulate eachother into giving money to eachother

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow

This work is an extension of van der Weij et al’s work on sandbagging. Exploring non-multiple-choice tests is a natural but interesting and well-chosen extension of the sandbagging demonstration work, especially since the Make Me Pay evaluation chosen is not a static benchmark but a dynamic two-agent scenario more akin to many real-world scenarios. The project demonstrates some good iteration (e.g. the generous v ungenerous version of the mark’s prompt). The results are harder to directly compare against van der Weij et al because the set of models used is different (only Llama-3-70b overlaps between the two). Testing on Claude 3 Opus and GPT-4, where van der Weij et al saw the best performance, would make the comparison stronger, but is naturally left to future work in a hackathon.

A great investigation into a novel capability in modern models and sandbagging is important to understand correctly. I'd be curious to see performance on a similar open-ended benchmark but still with binary or four multiple-choice options to make the relationship to multiple-choice benchmarks explicit and simply change one variation of the experiment at a time. Sandbagging on continuous variables seems exceedingly hard for models due to their general tendency to not be very prompt-principled on continuous variables. This might be possible with more CoT (meta-cognition for calculation of outcomes, similar to human thought prior to action) or simply from next-gen models. Overall, an in-depth exploration with good methods given the limited time of the weekend. Great work!

Cite this work

@misc {

title={

Can Language Models Sandbag Manipulation?

},

author={

Arthur Camara, Alexander Cockburn, Myles Heller

},

date={

7/1/24

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Recent Projects

View All

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.

Read More

Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Read More

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.