Jul 27, 2025

Toy model of superposition control

Bartosz Rzepkowski

One of the most important open questions posed by the authors of the famous "Toy models of superposition" paper is "how can we control whether superposition and polysemanticity occur?" In this research proposal, we present a novel approach, which we hope will be a step forward in answering this question. We will operate under the assumption that each feature corresponds to a single neuron within a neural network layer. This restriction was chosen because it is the most desirable setting from the perspective of the interpretability of neural networks. Under this framework, we will demonstrate how to "handpick" the desired features that we want to use in the next layer, and how to impose precise restrictions on how those features should be superposed in the hidden space. This also implies that it is possible to fully prevent superposition if desired.

We hope that the techniques shown in this proposal will allow developers to design blueprints that clearly define which features should be kept at any network depth, how those features should be superposed (if at all), and eventually to adjust this specification dynamically. This will effectively allow developers to decide a priori which circuits are allowed to form in the network, rather than discovering them after training is finished. The presented supervised learning algorithm will then force the neural network to follow this blueprint. We show how the whole procedure might work on a simple multilayer feedforward toy model.

The proposed method incorporates tensor network formalism along with Riemannian optimization techniques, both of which are widely used in computational physics.
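As a minimal sketch of the blueprint idea, suppose the blueprint is expressed as a target Gram (interference) matrix G over the kept features, and a penalty on the deviation of WᵀW from G is added to the task loss. This plain penalty is only an illustrative stand-in; the proposal itself uses tensor-network and Riemannian-optimization machinery instead, and all names and hyperparameters below are assumptions, not taken from the proposal.

import torch

n_features, n_hidden = 5, 3

# Hypothetical "blueprint": a target Gram matrix G for W^T W.
# Ones on the diagonal keep feature directions unit-norm; off-diagonal
# entries prescribe how (and whether) pairs of features may interfere.
G = torch.eye(n_features)
G[0, 1] = G[1, 0] = -0.5   # e.g. features 0 and 1 interfere destructively

W = torch.randn(n_hidden, n_features, requires_grad=True)
opt = torch.optim.Adam([W], lr=1e-2)

# Toy sparse inputs, as in the toy-models-of-superposition setup.
x = torch.rand(1024, n_features) * (torch.rand(1024, n_features) < 0.2).float()

for step in range(2000):
    h = x @ W.T                        # encode features into the hidden space
    recon = torch.relu(h @ W)          # decode with tied weights
    task_loss = ((recon - x) ** 2).mean()
    blueprint_loss = ((W.T @ W - G) ** 2).sum()   # push W^T W toward G
    loss = task_loss + blueprint_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

With n_hidden < n_features the Gram matrix can only approximate G, so this penalty shapes rather than exactly fixes the superposition pattern; the proposal instead relies on tensor-network and Riemannian-optimization techniques for this purpose.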

Reviewer's Comments

The author explores some points about superposition and polysemanticity, and hopefully learned a lot in the process. A grounding example would have helped, rather than considering only general feature matrices, as would more precisely defined terms and examples drawn from the literature (if it is intended as a review). The AI safety and physics context could also have been explored more. I would also point the reader to ‘Towards Monosemanticity’, a follow-up to ‘Toy Models of Superposition’ that addresses some of the issues brought up here.

This project does not seem to work in the typical superposition paradigm, but instead examines non-orthogonality of weight columns. It is unclear what the motivation is or whether there are new results/questions - a clearer thesis and more explicit engagement with terms/ideas from the literature would be better.

This is a well-done version of a straightforward idea (i.e., to control the superposition structure through optimization).

The main use case I can imagine is for more diverse toy models for automatic interpretability methods. For instance, if a new version of an SAE or parameter decomposition can find the original features in settings of varying superposition, then that's a good sign that it'll generalize to real models.

I disagree that controlling superposition using this method (2. Motivation, bullet point 3) would be practical at all in real models. Ignoring the capabilities hit, in order to specify "this feature is here and will interfere destructively with this other feature", you need labels to specify the features. This is impractical for LLMs, due both to the large number of potential features and to not knowing what those features are (which is one reason why people train NNs in the first place).

This submission proposes a novel method for training monosemantic neural networks based on Riemannian optimization. The proposal is very interesting and clearly written. It’s great to see methods from physics being brought to bear on an AI safety problem in a novel way. To be sure, there are some major outstanding uncertainties that must be overcome, such as how to specify desired features and whether optimization is practical. I look forward to seeing the proposed method implemented and learning to what extent it can be used to create performant and interpretable architectures.

I'm having a bit of trouble following the thread here. I can get behind the specification-first approach, and I'm interested in hearing about what tensor network methods get us for interpretability.

I don't understand the sense in which (3.2) is a "user-friendly way of 'handcrafting' interactions". If I understand correctly, this is just a way to represent the interactions encoded by a given weight matrix for the toy model of superposition. It's a visualization of the interactions in a given TMS system. Yes, you can change interactions here, but the whole point of learning is to not have to do this! If you now want to enforce this in the model, (3.3) seems to me somewhat circular, since you already know what structure you want to put into it. It seems very important to me that, when we think about how to incorporate interpretability into the learning process, we don't give up on DL and go back to hand-writing kernels; somehow we need to find a happy medium between those extremes, and I don't see how that kind of balance shows up here. (3.4) is just the kind of sparsity loss we end up with in SAEs?
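(For concreteness, a minimal sketch of the two objects referred to above, assuming the standard toy-models-of-superposition setup; the sae_loss helper is a hypothetical stand-in for a generic SAE-style reconstruction-plus-L1 objective, not the loss defined in (3.4) of the proposal.)

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 5))     # 5 features squeezed into 2 hidden dimensions

# The "interaction" matrix for a given TMS weight matrix: diagonal entries
# are squared feature norms, off-diagonal entries are pairwise interference.
interactions = W.T @ W
print(np.round(interactions, 2))

# Generic SAE-style objective (reconstruction error + L1 sparsity on the
# activations), shown only as the kind of loss the last sentence alludes to.
def sae_loss(x, x_hat, acts, l1_coeff=1e-3):
    return ((x_hat - x) ** 2).mean() + l1_coeff * np.abs(acts).sum()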

A very concrete recommendation is to run some experiments! This would probably help resolve some of my confusion.

I think I might be missing something, in which case my apologies. Happy to hear more if you think you understand what's confusing me.

Cite this work

@misc{
  title={(HckPrj) Toy model of superposition control},
  author={Bartosz Rzepkowski},
  date={7/27/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even the frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and about one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.