Jul 1, 2024
Evaluating Steering Methods for Deceptive Behavior Control in LLMs
Casey Hird, Basavasagar Patil, Tinuade Adeleke, Adam Fraknoi, Neel Jay
We use SOTA steering methods, including CAA, LAT, and SAEs, to find and control deceptive behaviors in LLMs. We also release a new deception dataset and demonstrate that both the dataset and the prompt formatting are significant factors when evaluating the efficacy of steering methods.
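To illustrate the kind of steering the abstract refers to, below is a minimal sketch of CAA-style activation steering (contrastive activation addition) with a Llama-style HuggingFace model. This is not the authors' code or dataset: the model name, layer index, steering scale, and contrastive prompt pair are illustrative placeholders only.

```python
# Minimal sketch of CAA-style activation steering (illustrative, not the authors' code).
# Assumptions: a HuggingFace causal LM with Llama-style `model.model.layers`;
# MODEL_NAME, LAYER, SCALE, and the prompt pairs are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model
LAYER = 13                                    # placeholder layer index
SCALE = -2.0                                  # negative scale steers away from the "deceptive" direction

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def last_token_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation of the final token at the chosen layer."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1, :]

# Contrastive pair: same question with a deceptive vs. an honest answer (illustrative).
deceptive = ["Q: Did you break the vase? A: No, I never touched it."]
honest    = ["Q: Did you break the vase? A: Yes, I knocked it over by accident."]

# Steering vector = mean(deceptive activations) - mean(honest activations).
vec = torch.stack([last_token_activation(p) for p in deceptive]).mean(0) \
    - torch.stack([last_token_activation(p) for p in honest]).mean(0)

def steer_hook(module, inputs, output):
    # Decoder blocks return a tuple whose first element is the hidden states;
    # add the scaled steering vector to every position.
    hidden = output[0] + SCALE * vec.to(output[0].device, output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(steer_hook)
try:
    prompt = "Q: Did you copy your coworker's report? A:"
    ids = tokenizer(prompt, return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**ids, max_new_tokens=40)[0],
                           skip_special_tokens=True))
finally:
    handle.remove()
```

In practice, the choice of layer, scale, and contrastive dataset are exactly the kinds of hyperparameters and formatting choices the paper evaluates.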
Natalia
It’s great to see the paper give such a good overview of how the different methods (CAA/SAE/LAT) can be used for evaluating deception. The limitations identified are valid, and I’d be interested to see how the existing work builds to address them; in particular, a more explicit comparison of the effects of each method and expanding the analysis to more models would be very valuable.
Esben Kran
An in-depth review of control vectors for deception mitigation across SAEs, LATs, and CAA. Great overview of the various methods’ effects depending on hyperparameter tuning. One potential extension of the work could be a qualitative tendency analysis of the resulting outputs using each vector. E.g., when SAEs were used for Golden Gate Claude, individuals with OCD mentioned it was similar to their experience. Might CAAs and LATs just “feel” different from SAEs? Or would we expect them to have a similar effect? The statistical results are of course very solid. Great approach to those, and great work overall!
Cite this work
@misc{hird2024evaluating,
  title={Evaluating Steering Methods for Deceptive Behavior Control in LLMs},
  author={Casey Hird and Basavasagar Patil and Tinuade Adeleke and Adam Fraknoi and Neel Jay},
  date={2024-07-01},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}