Jul 1, 2024
Can Language Models Sandbag Manipulation?
Arthur Camara, Alexander Cockburn, Myles Heller
We are expanding on Felix Hofstätter's paper on LLM's ability to sandbag(intentionally perform worse), by exploring if they can sandbag manipulation tasks by using the "Make Me Pay" eval, where agents try to manipulate eachother into giving money to eachother
Rudolf L
This work is an extension of van der Weij et al’s work on sandbagging. Exploring non-multiple-choice tests is a natural but interesting and well-chosen extension of the sandbagging demonstration work, especially since the Make Me Pay evaluation chosen is not a static benchmark but a dynamic two-agent scenario more akin to many real-world scenarios. The project demonstrates some good iteration (e.g. the generous v ungenerous version of the mark’s prompt). The results are harder to directly compare against van der Weij et al because the set of models used is different (only Llama-3-70b overlaps between the two). Testing on Claude 3 Opus and GPT-4, where van der Weij et al saw the best performance, would make the comparison stronger, but is naturally left to future work in a hackathon.
Esben Kran
A great investigation into a novel capability in modern models and sandbagging is important to understand correctly. I'd be curious to see performance on a similar open-ended benchmark but still with binary or four multiple-choice options to make the relationship to multiple-choice benchmarks explicit and simply change one variation of the experiment at a time. Sandbagging on continuous variables seems exceedingly hard for models due to their general tendency to not be very prompt-principled on continuous variables. This might be possible with more CoT (meta-cognition for calculation of outcomes, similar to human thought prior to action) or simply from next-gen models. Overall, an in-depth exploration with good methods given the limited time of the weekend. Great work!
Cite this work
@misc {
title={
@misc {
},
author={
Arthur Camara, Alexander Cockburn, Myles Heller
},
date={
7/1/24
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}