Nov 24, 2024
Feature Tuning versus Prompting for Ambiguous Questions
Elis Grahn, Axel Ahlqvist, Elliot Gestrin, Hemming Gong
This study explores feature tuning as a method to improve alignment of large language models (LLMs). We focus on addressing human psychological fallacies reinforced during the LLM training pipeline. Using sparse autoencoders (SAEs) and the Goodfire SDK, we identify and manipulate features in Llama-3.1-70B tied to nuanced reasoning. We compare this to the common method of controlling LLMs through prompting.
Our experiments find that feature tuning and hidden prompts both enhance answer quality on ambiguous questions to a similar degree, with their combination yielding the best results. These findings highlight feature tuning as a promising and practical tool for AI alignment in the short term. Future work should evaluate this approach on larger datasets, compare it with fine-tuning, and explore its resistance to jailbreaking. We make the code available through a GitHub repository.
Liv Gorton
Comparison of feature steering and prompting is a really important benchmark for the usefulness of SAEs. This was a well-scoped investigation too given the time-limited nature of the event.
As flagged in the discussion, it’d be great to see how this scales using an LLM as a judge for the preferred response.
Simon Lermen
A useful comparison would be how prompting compares to steering for answering ambiguous questions. This includes cases with a simple common explanation as well as more complex ones, such as "Who discovered America?" The first question and the other two feel different: while "Who discovered America?" has answers of varying complexity, the other two questions seem very vague. They successfully steered a model using SAEs, but it's not fully clear what the results actually show.
Jaime Raldua
This is a useful comparison to see. The combined version seems particularly interesting.
Tom McGrath
This is an interesting comparison - the relative merits of prompting and feature steering comes up a lot and it's great to see some very grounded evaluations. The feature steering looks to have been done well, and the qualitative observations are good.
Cite this work
@misc{
  title={Feature Tuning versus Prompting for Ambiguous Questions},
  author={Elis Grahn, Axel Ahlqvist, Elliot Gestrin, Hemming Gong},
  date={11/24/24},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}