Nov 25, 2024
Faithful or Factual? Tuning Mistake Acknowledgment in LLMs
Daniel Donnelly, Mia Hopman, Jack Wittmayer
Summary
Understanding the reasoning processes of large language models (LLMs) is crucial for AI transparency and control. While chain-of-thought (CoT) reasoning offers a naturally interpretable format, models may not always be faithful to the reasoning they present. In this paper, we extend previous work on chain-of-thought faithfulness by applying feature steering to the Llama 3.1 70B model using the Goodfire SDK. Our results show that steering the model along features related to acknowledging mistakes can change how likely it is to give answers that remain faithful to flawed reasoning.
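For readers unfamiliar with the workflow, the sketch below illustrates the kind of feature-steering loop this involves. It assumes the Goodfire SDK's variant and feature-search interface; the exact method names follow Goodfire's public documentation as we understand it, and the API key, feature query, steering strength, and prompt are illustrative placeholders rather than the settings used in the paper.

import goodfire

# Connect to the Goodfire API (key is a placeholder).
client = goodfire.Client(api_key="YOUR_GOODFIRE_API_KEY")

# Create a steerable variant of Llama 3.1 70B Instruct.
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-70B-Instruct")

# Search for interpretable features related to acknowledging mistakes
# (query string and top_k are example values).
features = client.features.search(
    "acknowledging mistakes", model=variant, top_k=5
)

# Nudge the model along the top returned feature; the steering strength
# here is an arbitrary example, not the value used in the paper.
variant.set(features[0], 0.5)

# Query the steered model with a prompt containing flawed chain-of-thought
# reasoning, then inspect whether the final answer stays faithful to it.
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "(prompt with flawed CoT reasoning)"}],
    model=variant,
)
print(response)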
Cite this work:
@misc{donnelly2024faithful,
  title={Faithful or Factual? Tuning Mistake Acknowledgment in LLMs},
  author={Daniel Donnelly and Mia Hopman and Jack Wittmayer},
  date={2024-11-25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}