Nov 25, 2024
AutoSteer: Weight-Preserving Reinforcement Learning for Interpretable Model Control
Jeremias Lino Ferrao
🏆 1st place by peer review
Traditional fine-tuning methods for language models, while effective, often disrupt internal model features that could provide valuable insights into model behavior. We present a novel approach combining Reinforcement Learning (RL) with Activation Steering to modify model behavior while preserving interpretable features discovered through Sparse Autoencoders. Our method automates the typically manual process of activation steering by training an RL agent to manipulate labeled model features, enabling targeted behavior modification without altering model weights. We demonstrate our approach by reprogramming a language model to play Tic Tac Toe, achieving a 3X improvement in performance compared to the baseline model when playing against an optimal opponent. The method remains agnostic to both the underlying language model and RL algorithm, offering flexibility for diverse applications. Through visualization tools, we observe interpretable feature manipulation patterns, such as the suppression of features associated with illegal moves while promoting those linked to optimal strategies. Additionally, our approach presents an interesting theoretical complexity trade-off: while potentially increasing complexity for simple tasks, it may simplify action spaces in more complex domains. This work contributes to the growing field of model reprogramming by offering a transparent, automated method for behavioral modification that maintains model interpretability and stability.
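At its core, the method replaces gradient updates with an RL-chosen intervention on labeled SAE features. The sketch below illustrates that mechanism only; it is not the project's actual code, and the module path, hook placement, and names (`model`, `sae`, `policy`, `LAYER`) are illustrative assumptions:

```python
import torch

def make_steering_hook(sae_decoder: torch.Tensor, coeffs: torch.Tensor):
    """Build a forward hook that adds an SAE-feature steering vector.

    sae_decoder: (n_features, d_model) matrix of SAE feature directions.
    coeffs: (n_features,) steering strengths chosen by the RL agent;
            positive values promote a feature, negative values suppress it.
    """
    steering_vector = coeffs @ sae_decoder  # (d_model,)

    def hook(module, inputs, output):
        # Add the steering vector at every token position. Assumes the
        # hooked module returns a tensor; blocks that return tuples would
        # need their first element steered instead.
        return output + steering_vector

    return hook

# Hypothetical usage -- every name here is a placeholder:
# coeffs = policy(board_observation)            # RL agent's action
# handle = model.transformer.h[LAYER].mlp.register_forward_hook(
#     make_steering_hook(sae.W_dec, coeffs))
# move = model.generate(board_prompt)           # steered forward pass
# handle.remove()                               # model weights never changed
```

Because the intervention is additive and external, removing the hook restores the base model exactly, which is what makes the approach weight-preserving.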
Esben Kran
Absolutely marvelous! If I'm not fooling myself, this may well be an interpretable alternative to existing methods for RL*F where we simply use base model SAE features to create an RL-based tuning across these to achieve the RL*F behavior we want. And once we're done, we can be rid of messy backpropagation, as you point out. Baking the feature steering into the model should reduce the multiplication steps while retaining the interpretability. From there, it's simply about solving some of the interpretability limitations in SAEs which we seem to be on the path towards. Great great idea.
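A minimal sketch of the "baking in" idea described above, assuming the steering vector is held fixed after training; the module path and names are placeholders, not the project's code. Since adding a constant vector to a layer's output is equivalent to shifting that layer's bias, the runtime hook can be dropped entirely:

```python
import torch

# Assumes a fixed steering_vector of shape (d_model,) from the trained
# agent and the (hypothetical) layer it was applied to. Folding it into
# the bias makes the steering permanent with zero runtime overhead --
# this only works once the steering no longer depends on the RL state.
with torch.no_grad():
    layer = model.transformer.h[LAYER].mlp.c_proj  # placeholder module path
    layer.bias += steering_vector
```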
Kutay Buyruk
Putting feature steering into an RL environment is very interesting, great idea! One detail that could be improved: also mentioning the maximum possible draw rate against the optimal policy. I did some research and, if I'm not mistaken, the game can always be forced to a draw if the second player also plays optimally. If that's the case, the jump from 1% to 3% still has room to grow in future work. In any case, it is a great proof-of-concept for an interesting application of feature steering.
Liv Gorton
This is a creative approach to auto-steering and seems like a promising direction! The choice of the tic-tac-toe environment makes a lot of sense given the time constraints (and I’m surprised to see how well it works!) and it’d be interesting to see how this generalises to other tasks.
Jaime Raldua
The combination of RL and AS looks really promising! Very surprised to see a 3x improvement; would love to see a longer version of this work.
Tom McGrath
This is an interesting and imaginative project, and the results are pretty cool. It's impressive to include feature steering inside an RL loop, and I'm quite surprised that it works! The project writeup is clear and well written.
Cite this work
@misc{ferrao2024autosteer,
  title={AutoSteer: Weight-Preserving Reinforcement Learning for Interpretable Model Control},
  author={Jeremias Lino Ferrao},
  date={11/25/24},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}