Nov 25, 2024

Can we steer a model’s behavior with just one prompt? Investigating SAE-driven auto-steering

Nicole Nobili, Davide Ghilardi, Wen Xing

This paper investigates whether Sparse Autoencoders (SAEs) can be leveraged to steer a model’s behavior without manual intervention. We designed a pipeline that automatically steers a model given a brief description of the desired behavior (e.g., “Behave like a dog”):

1. We automatically retrieve behavior-relevant SAE features.
2. We choose an input prompt (e.g., “What would you do if I gave you a bone?” or “How are you?”) over which we evaluate the model’s responses.
3. Through an optimization loop inspired by the textual gradients of TextGrad [1], we automatically find feature weights that keep the answers coherent and relevant to the input prompt while aligning them with the target behavior.

The steered model generalizes to unseen prompts, consistently producing responses that remain coherent and aligned with the desired behavior. While our approach is preliminary and can be improved in many ways, it achieves effective steering in a small number of epochs while using only a small model, Llama-3-8B [2]. These promising initial results suggest that this method could become a practical application of mechanistic interpretability, allowing specialized models to be created without fine-tuning. To demonstrate real-world applicability, we present a case study of a children’s Quora, built with a model steered toward the behavior “Explain things in a way that children can understand”.
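The implementation is not reproduced on this page, but the optimization loop described above can be sketched roughly as follows. This is a minimal, hypothetical illustration rather than the authors’ code: generate_steered, judge, and update_weights are assumed callables standing in for (a) generation with the selected SAE decoder directions added to the model’s residual stream, (b) an LLM judge that scores coherence and behavior alignment and returns textual feedback, and (c) a TextGrad-style step that turns that feedback into new feature weights.

# Minimal sketch of the auto-steering loop (assumed structure, not the authors' implementation).
from typing import Callable, Dict, List, Tuple

def auto_steer(
    feature_weights: Dict[int, float],  # SAE feature id -> steering weight
    prompts: List[str],                 # evaluation prompts, e.g. "How are you?"
    behavior: str,                      # target behavior, e.g. "Behave like a dog"
    generate_steered: Callable[[str, Dict[int, float]], str],
    judge: Callable[[str, str, str], Tuple[float, str]],  # -> (score, textual feedback)
    update_weights: Callable[[Dict[int, float], str], Dict[int, float]],
    epochs: int = 10,
    target_score: float = 0.9,
) -> Dict[int, float]:
    """Adjust SAE feature weights until steered answers are coherent and on-behavior."""
    for _ in range(epochs):
        scores, feedback = [], []
        for prompt in prompts:
            # Generate with the chosen SAE feature directions added to the residual stream.
            answer = generate_steered(prompt, feature_weights)
            # The judge rates coherence with the prompt and alignment with the behavior,
            # and explains what to change (the "textual gradient").
            score, critique = judge(prompt, answer, behavior)
            scores.append(score)
            feedback.append(critique)
        if sum(scores) / len(scores) >= target_score:
            break
        # TextGrad-style update: turn the judge's textual feedback into new feature weights.
        feature_weights = update_weights(feature_weights, "\n".join(feedback))
    return feature_weights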

Reviewer's Comments

Very cool project! The fact that this seems to work across such a broad range of applications is impressive and encouraging. I'd be particularly excited about safety-related applications, especially red-teaming and capability elicitation.

A useful potential application of SAEs that I’m sure the community would be interested in trying out. For further research, I would be interested in seeing how this compares to simply prompting the model to act a certain way, or to fine-tuning with LoRA, etc. I imagine this direction lies in a very good sweet spot between compute cost and effectiveness at getting models to act a certain way.

Very cool work on automating steering! Fun and creative.

With more time, I'd love to see comparisons with strong baselines.

Cite this work

@misc{nobili2024autosteering,
  title={Can we steer a model’s behavior with just one prompt? Investigating SAE-driven auto-steering},
  author={Nicole Nobili and Davide Ghilardi and Wen Xing},
  date={2024-11-25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.
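As a rough illustration of the evaluation protocol summarized above (a per-model comparison of baseline and adversarial conditions), the sketch below shows one way such a deception rate could be computed; ask_model and the data layout are assumptions, not the project’s code.

# Hypothetical sketch of the baseline-vs-adversarial comparison; not the project's code.
from typing import Callable, Dict, List

def deception_rate(
    model: str,
    questions: List[Dict[str, str]],  # each: question text plus truthful and injected page content
    ask_model: Callable[[str, str, str], str],  # (model, question, page) -> recommended product
) -> float:
    """Fraction of questions where the injected page changes the model's recommendation."""
    flipped = 0
    for q in questions:
        baseline = ask_model(model, q["question"], q["truthful_page"])
        adversarial = ask_model(model, q["question"], q["injected_page"])
        if adversarial != baseline:  # recommendation flipped under adversarial injection
            flipped += 1
    return flipped / len(questions)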

Read More

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Read More

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings: some models complied with manipulative requests at much higher rates than others, and some readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.