Jul 27, 2025
Toy model of superposition control
Bartosz Rzepkowski
One of the most important open questions posed by the authors of the well-known "Toy Models of Superposition" paper is "how can we control whether superposition and polysemanticity occur?" In this research proposal, we present a novel approach that we hope will be a step toward answering this question. We operate under the assumption that each feature corresponds to a single neuron within a neural network layer, a restriction chosen because it is the most desirable one from the perspective of neural network interpretability. Under this framework, we demonstrate how to "handpick" the features we want to use in the next layer, and how to impose precise restrictions on how those features are superposed in the hidden space. In particular, this makes it possible to fully prevent superposition if desired.
We hope that the techniques shown in this proposal will allow developers to design blueprints that clearly define which features should be kept at any network depth and how those features should be superposed (if at all), and eventually to adjust this specification dynamically. This effectively allows developers to decide a priori which circuits are allowed to form in the network, rather than discovering them after training is finished. The presented supervised learning algorithm then forces the neural network to follow this blueprint. We show how the whole procedure might work on a simple multilayer feedforward toy model.
The proposed method incorporates tensor network formalism along with Riemannian optimization techniques, both concepts widely used in the field of computational physics.
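To make the idea of a superposition "blueprint" concrete, here is a minimal, hypothetical sketch in the spirit of the abstract. It is not the author's implementation: the proposal relies on tensor networks and Riemannian optimization, whereas the sketch below substitutes a plain penalty that pulls the Gram matrix W^T W of a toy-model weight matrix toward a hand-specified target, which is one way to read "imposing precise restrictions on how features are superposed". All sizes, names, and hyperparameters are illustrative assumptions.

# Minimal sketch (not the author's implementation): imposing a hand-specified
# interference pattern on a toy-model-of-superposition weight matrix via a
# soft Gram-matrix penalty. Everything below is illustrative.
import torch

torch.manual_seed(0)
n_features, n_hidden = 5, 3                      # more features than hidden dimensions

# "Blueprint": target Gram matrix G = W^T W encoding the allowed interference.
# Features 0/3 and 1/4 form antipodal superposed pairs; feature 2 stays
# monosemantic. Choosing G = diag(1, 1, 1, 0, 0) instead would forbid
# superposition entirely (only three features survive, each on its own axis).
G_target = torch.eye(n_features)
G_target[0, 3] = G_target[3, 0] = -1.0
G_target[1, 4] = G_target[4, 1] = -1.0

# Sparse synthetic features, as in the toy-models-of-superposition setup.
X = torch.rand(4096, n_features) * (torch.rand(4096, n_features) < 0.05).float()

W = torch.nn.Parameter(0.1 * torch.randn(n_hidden, n_features))
opt = torch.optim.Adam([W], lr=1e-2)
lam = 3.0                                        # strength of the blueprint constraint

for _ in range(3000):
    opt.zero_grad()
    x_hat = (X @ W.T) @ W                        # linear TMS-style reconstruction
    recon = ((x_hat - X) ** 2).mean()
    blueprint = ((W.T @ W - G_target) ** 2).sum()
    (recon + lam * blueprint).backward()
    opt.step()

print((W.T @ W).detach().round(decimals=2))      # should end up close to G_target

A Riemannian treatment, as the proposal suggests, would presumably constrain W to a manifold on which the target Gram structure holds exactly, rather than penalizing deviations softly as above.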
Lauren
The author explores some points about superposition and polysemanticity, and hopefully learned a lot in the process. A grounding example would have helped instead of only considering general feature matrices, as would more precisely defined terms and examples drawn from the literature (if it's meant as a review). The AI safety and physics context could also have been explored more. I would also point the reader to 'Towards Monosemanticity', a follow-up to 'Toy Models of Superposition' that addresses some of the issues brought up here.
Dmitry Vaintrob
This project does not seem to work in the typical superposition paradigm, but instead examines the non-orthogonality of weight columns. It is unclear what the motivation is or whether there are new results or questions; a clearer thesis and more explicit engagement with terms and ideas from the literature would help.
Logan Riggs Smith
This is a well-done version of a straightforward idea (i.e., controlling the superposition structure through optimization).
The main use case I can imagine is for more diverse toy models for automatic interpretability methods. For instance, if a new version of an SAE or parameter decomposition can find the original features in settings of varying superposition, then that's a good sign that it'll generalize to real models.
I disagree that controlling superposition using this method would be practical at all in real models (2. Motivation, bullet point 3). Ignoring the capabilities hit, in order to specify "this feature is here and will interfere destructively with this other feature", you need labels for the features. This is impractical for LLMs due to both the large number of potential features and the fact that we don't know what those features are (which is one reason people train NNs in the first place).
Ari Brill
This submission proposes a novel method for training monosemantic neural networks based on Riemannian optimization. The proposal is very interesting and clearly written. It’s great to see methods from physics being brought to bear on an AI safety problem in a novel way. To be sure, there are some major outstanding uncertainties that must be overcome, such as how to specify desired features and whether optimization is practical. I look forward to seeing the proposed method implemented and learning to what extent it can be used to create performant and interpretable architectures.
Jesse Hoogland
I'm having a bit of trouble following the thread here. I can get behind the specification-first approach, and I'm interested in hearing about what tensor network methods get us for interpretability.
I don't understand the sense in which (3.2) is a "user-friendly way of 'handcrafting' interactions". If I understand correctly, this is just a way to represent the interactions encoded by a given weight matrix in the toy model of superposition; it's a visualization of the interactions in a given TMS system. Yes, you can change interactions here, but the whole point of learning is not to have to do this! If you now want to enforce this in the model, (3.3) seems to me somewhat circular, since you already know what structure you want to put into it. It seems very important to me that, when we think about how to incorporate interpretability into the learning process, we don't give up on DL and go back to hand-writing kernels; somehow we need to find a happy medium between those extremes, and I don't see how that kind of balance shows up here. Isn't (3.4) just the kind of sparsity loss we end up with in SAEs?
A very concrete recommendation is to run some experiments! This would probably help resolve some of my confusion.
I think I might be missing something, in which case my apologies. Happy to hear more if you think you understand what's confusing me.
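For reference, the sparsity loss the review alludes to is the standard sparse-autoencoder objective from the SAE literature (e.g. 'Towards Monosemanticity'): reconstruction error plus an L1 penalty on feature activations. The sketch below is a generic textbook version, not code from the submission; all names and the penalty coefficient are illustrative.

# Generic SAE objective (a sketch for reference, not code from the submission):
# reconstruction error plus an L1 penalty pushing feature activations toward sparsity.
import torch

def sae_loss(x, W_enc, b_enc, W_dec, b_dec, l1_coeff=1e-3):
    f = torch.relu(x @ W_enc + b_enc)        # feature activations
    x_hat = f @ W_dec + b_dec                # reconstruction of the input
    recon = ((x_hat - x) ** 2).sum(dim=-1).mean()
    sparsity = f.abs().sum(dim=-1).mean()    # L1 sparsity penalty
    return recon + l1_coeff * sparsity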
Cite this work
@misc{rzepkowski2025superposition,
  title        = {(HckPrj) Toy model of superposition control},
  author       = {Bartosz Rzepkowski},
  date         = {2025-07-27},
  organization = {Apart Research},
  note         = {Research submission to the research sprint hosted by Apart.},
  howpublished = {https://apartresearch.com}
}