Jul 27, 2025

Toy model of superposition control

Bartosz Rzepkowski

One of the most important open questions posed by the authors of the well-known "Toy Models of Superposition" paper is "how can we control whether superposition and polysemanticity occur?" In this research proposal, we present a novel approach that we hope will be a step toward answering this question. We operate under the assumption that each feature corresponds to a single neuron within a neural network layer. This restriction was chosen because it is the most desirable one from the perspective of neural network interpretability. Under this framework, we demonstrate how to "handpick" the features we want to use in the next layer, and how to impose precise restrictions on how those features should be superposed in the hidden space. This also implies that superposition can be fully prevented if desired.

We hope that the techniques shown in this proposal will allow developers to design blueprints that clearly define which features should be kept at any network depth, how those features should be superposed (if at all), and eventually to adjust this specification dynamically. This effectively allows developers to decide a priori which circuits are allowed to form in the network, rather than discovering them after training is finished. The proposed supervised learning algorithm then forces the neural network to follow this blueprint. We show how the whole procedure might work on a simple multilayer feedforward toy model.

The proposed method incorporates tensor network formalism along with Riemannian optimization techniques, both concepts widely used in the field of computational physics.
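The proposal itself does not include code, but the core geometric idea can be illustrated. One standard Riemannian-optimization construction (a sketch under our own assumptions, not the author's implementation) is to constrain a layer's weight columns to the Stiefel manifold, i.e. to keep the feature directions exactly orthonormal, which by definition rules out superposition between them. A QR-based retraction is the simplest way to map an unconstrained update back onto that manifold:

```python
import numpy as np

def stiefel_retract(W):
    """Retract W onto the Stiefel manifold (orthonormal columns) via QR.
    Orthonormal columns mean no two feature directions overlap,
    i.e. zero superposition between the represented features."""
    Q, R = np.linalg.qr(W)
    # Flip column signs so the retraction is deterministic
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))       # 8-dim hidden space, 4 features
W_orth = stiefel_retract(W)       # e.g. after a gradient step on W

# Columns are now exactly orthonormal: W^T W = I
gram = W_orth.T @ W_orth
print(np.allclose(gram, np.eye(4)))  # True
```

In a training loop, one would take an ordinary gradient step on `W` and then retract; allowing a prescribed, nonzero Gram matrix instead of the identity would correspond to the "blueprinted" partial superposition described above.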

Reviewer's Comments


The author explores some points about superposition and polysemanticity, and hopefully learned a lot in the process. A grounding example would have helped, rather than considering general feature matrices, as would more precisely defining terms and drawing on examples from the literature (if it's a review). The AI safety and physics context could also have been explored more. I would also point the reader to 'Towards Monosemanticity', a follow-up to 'Toy Models of Superposition' that addresses some of the issues raised here.

This project does not seem to work in the typical superposition paradigm, but instead examines non-orthogonality of weight columns. It is unclear what the motivation is or whether there are new results or questions; a clearer thesis and more explicit engagement with terms and ideas from the literature would help.

This is a well-done version of a straightforward idea (i.e., to control the superposition structure through optimization).

The main use case I can imagine is for more diverse toy models for automatic interpretability methods. For instance, if a new version of an SAE or parameter decomposition can find the original features in settings of varying superposition, then that's a good sign that it'll generalize to real models.

I disagree on whether controlling superposition using this method would be practical (2. Motivation, bullet-point 3) at all in real models. Ignoring the capabilities hit, in order to specify "this feature is here and will interfere destructively with this other feature", you need labels to specify the features. This is impractical for LLMs due to both the large number of potential features as well as not knowing what those features are (which is one reason why people train NNs in the first place).

This submission proposes a novel method for training monosemantic neural networks based on Riemannian optimization. The proposal is very interesting and clearly written. It’s great to see methods from physics being brought to bear on an AI safety problem in a novel way. To be sure, there are some major outstanding uncertainties that must be overcome, such as how to specify desired features and whether optimization is practical. I look forward to seeing the proposed method implemented and learning to what extent it can be used to create performant and interpretable architectures.

I'm having a bit of trouble following the thread here. I can get behind the specification-first approach, and I'm interested in hearing about what tensor network methods get us for interpretability.

I don't understand the sense in which (3.2) is a "user-friendly way of 'handcrafting' interactions." If I understand correctly, this is just a way to represent the interactions encoded by a given weight matrix for the toy model of superposition; it's a visualization of the interactions in a given TMS system. Yes, you can change interactions here, but the whole point of learning is to not have to do this! If you now want to enforce this in the model, (3.3) seems somewhat circular, since you already know what structure you want to put into it. It seems very important to me that when we think about how to incorporate interpretability into the learning process, we don't give up on DL and go back to hand-writing kernels; somehow we need to find a happy medium between those extremes, and I don't see how that kind of balance shows up here. Isn't (3.4) just the kind of sparsity loss we end up with in SAEs?

A very concrete recommendation is to run some experiments! This would probably help resolve some of my confusion.

I think I might be missing something, in which case my apologies. Happy to hear more if you think you understand what's confusing me.

Cite this work

@misc{
  title={(HckPrj) Toy model of superposition control},
  author={Bartosz Rzepkowski},
  date={7/27/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.