Nov 25, 2024
Faithful or Factual? Tuning Mistake Acknowledgment in LLMs
Daniel Donnelly, Mia Hopman, Jack Wittmayer
Understanding the reasoning processes of large language models (LLMs) is crucial for AI transparency and control. While chain-of-thought (CoT) reasoning offers a naturally interpretable format, models may not always be faithful to the reasoning they present. In this paper, we extend previous work investigating chain of thought faithfulness by applying feature steering to Llama 3.1 70B models using the Goodfire SDK. Our results show that steering models using features related to acknowledging mistakes can affect the likelihood of providing answers faithful to flawed reasoning.
Esben Kran
Interesting results to see that the original model is actually very faithful. This may well be a reverse-engineering exercise that shows the models have been trained for this (i.e. steering will move them away from the optimal mode). Relatedly, in alignment research, we often see the concept of 'corrigibility' pop up. I think the way you approach faithfulness to CoTs may well extend into this concept, allowing us to edit the model to be more able to acknowledge and fix its mistakes. There's an extension / version of this project that may well direct a model towards safety through feature-steered corrigibility. Exciting progress!
Mateusz Dziemian
Great work! It’s very easy to understand what you were investigating and the results are also clearly presented. A little bit surprising to see that nearly 0 steering has the strongest accuracy, but the edge case results are insightful. I would be interested to see the effects of negative steering on pre instruct tuning vs post and seeing if negative steering is faithful if the models are allowed to generate their own CoT. Hope you continue this work as you already have interesting results.
Tom McGrath
This is an interesting result: the authors look at faithfulness in chain-of-thought reasoning and surprisngly find that single features can substantially alter faithfulness. The methodology is sensible and the experiments are carried out well and well-documented. The relation to safety is subtle but well-justified.
In this case the method used (adding mistakes) means that increased faithfulness leads to a decrease in performance. This is a sensible experimental methodology for understanding if features can control faithfulness - a cool result in its own right. In my anecdotal experience in LLM reasoning, incorrect answers typically arise from a lack of faithfulness to an otherwise correct chain of thought. It would be an interesting extension to the paper to see if the features identified in this paper lead to an increase in performance in natural chain of thought reasoning settings.
Alana Xiang
This team develops a reasonable experiment setup and executes it well. Their results point to an interesting possibility, that subtracting the "acknowledging mistakes" feature could lead to higher faithfulness.
With more time, a graph I would've liked to see is faithfulness by steering value.
I would also be interested in seeing the team explore whether allowing the model to continue the CoT will recover the faithfulness lost by this steering by acknowledging the reasoning error out loud.
Good work!
Cite this work
@misc {
title={
@misc {
},
author={
Daniel Donnelly, Mia Hopman, Jack Wittmayer
},
date={
11/25/24
},
organization={Apart Research},
note={Research submission to the research sprint hosted by Apart.},
howpublished={https://apartresearch.com}
}