Jul 1, 2024
The House Always Wins: A Framework for Evaluating Strategic Deception in LLMs
Tanush Chopra, Michael Li
We propose a framework for evaluating strategic deception in large language models (LLMs). In this framework, an LLM acts as a game master in two scenarios: one with random game mechanics and another where it can choose between random or deliberate actions. As an example, we use blackjack because the action space nor strategies involve deception. We benchmark Llama3-70B, GPT-4-Turbo, and Mixtral in blackjack, comparing outcomes against expected distributions in fair play to determine if LLMs develop strategies favoring the "house." Our findings reveal that the LLMs exhibit significant deviations from fair play when given implicit randomness instructions, suggesting a tendency towards strategic manipulation in ambiguous scenarios. However, when presented with an explicit choice, the LLMs largely adhere to fair play, indicating that the framing of instructions plays a crucial role in eliciting or mitigating potentially deceptive behaviors in AI systems.
Rudolf L
This work uses LLMs playing blackjack as a setting to explore deception, by comparing three scenarios: the dealer is implemented by a Python program, the dealer is implemented by the LLM, and the dealer is an LLM that can explicitly choose to use the Python program OR choose itself. The results show that LLM-as-dealer results in significantly higher win rates. Some actual statistics is actually done (a rarity in machine learning) to hammer in this point. The setup and project idea is really quite clever (as is the title), and some methodological problems are successfully sidestepped (e.g. the authors note that if Poker had been used, then since deception is part of the game and training data, the LLM bluffing would’ve been completely expected). An interesting control would be humans who have to select cards that they can see, but are genuinely trying to do so fairly (as it’s unclear how “malicious” the LLM dealer’s bias is, versus just a consequence of RNG being hard)
Esben Kran
A very interesting project and great work putting together the methods themselves to avoid deception leaking into the game training dataset. Good results and interesting statistics implicate models as implicit self-serving winners. Developments of the project would include evaluation of whether it is simply an issue with self-serving randomness in game situations or if there's explicit deceptive circuits activating within the model. It would also be good to identify the robustness of the results, e.g. prompt it to be fair or cheat. Great work.
Cite this work
@misc {
title={
@misc {
},
author={
Tanush Chopra, Michael Li
},
date={
7/1/24
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}