Jul 27, 2025
Constrained Belief Updates and Geometric Structures in Transformer Representations for the RRXOR Process
Brianna Grado-White
In this work, we seek to further understand the computational structure formed by transformers trained to predict the next token of a dataset. In particular, we extend the work of Piotrowski et al. (2025) by analyzing whether and how these transformers implement constrained Bayesian updating, focusing on data generated by a Random-Random-XOR (RRXOR) process. Unlike the Mess3 processes originally studied by Piotrowski et al. (2025), the RRXOR process leads to slightly more nuanced theoretical and practical models. Nevertheless, we verify some of the main claims of that work: in particular, we find attention patterns that decay exponentially with distance. We further analyze the behavior of the OV vectors.
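As a minimal illustration (assuming the standard RRXOR definition, in which each emitted triple is two fair coin flips followed by their XOR), the sketch below shows how such data can be generated and how an optimal observer would update its belief over the hidden states of the generating HMM. The names and structure are illustrative rather than taken from the code used in this work.

```python
import numpy as np

# Sketch only, assuming the standard RRXOR definition:
# each triple is (a, b, a XOR b) with a, b fair coin flips.
def generate_rrxor(n_triples, seed=None):
    rng = np.random.default_rng(seed)
    tokens = []
    for _ in range(n_triples):
        a, b = rng.integers(0, 2, size=2)
        tokens.extend([a, b, a ^ b])
    return np.array(tokens)

# Optimal Bayesian belief update over the hidden states of the generating HMM,
# given labeled transition matrices T[x][i, j] = P(next state j, emit x | state i):
def belief_update(belief, token, T):
    unnormalized = belief @ T[token]
    return unnormalized / unnormalized.sum()
```

Running `belief_update` along a sampled sequence traces out the belief state (mixed-state) trajectory against which the transformer's internal activations are compared.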
Paul Riechers
Very nice work! With more time, it would be interesting to explore the attention pattern if the architecture is constrained to have fewer heads, and indeed to see how many heads are necessary (I assume at least two because of non-positive eigenvalues). Fewer heads may also reveal more interpretable OV vectors. Relaxing to more heads may then allow for a larger degenerate space of solutions.
Fig. 3 caption suggests that the rate of exponential decay is determined by the real part of the complex eigenvalues of the transition matrix of the HMM. Rather, the magnitude of this eigenvalue should determine the decay rate of an envelope, while there will be a decaying oscillation within this envelope as determined by the angle of this eigenvalue in the complex plane.
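To make this concrete (a standard spectral-decomposition argument, not taken from the paper): a complex-conjugate eigenvalue pair λ, λ̄ = r e^{±iθ} of the transition matrix contributes a term of the form

```latex
% Contribution of a complex eigenvalue pair \lambda = r e^{i\theta} at lag n,
% with coefficient c = |c| e^{i\phi}:
c\,\lambda^{n} + \bar{c}\,\bar{\lambda}^{n} \;=\; 2\,|c|\, r^{n}\cos(n\theta + \phi)
```

so the magnitude r = |λ| sets the exponential envelope, while the angle θ = arg λ sets the frequency of the decaying oscillation inside it.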
In follow-up errata, the author mentions that the transition matrix is diagonalizable but singular. This seems fine though, since this does not preclude a spectral analysis.
Andrew Mack
This is a well-scoped extension of Piotrowski et al.'s belief state geometry work to the RRXOR process. The authors got a lot up and running over the course of the hackathon.
Interestingly, they report that in order to extract RRXOR belief state geometry via PCA, they had to concatenate across layers. If the reasons behind this were better understood, perhaps RRXOR could serve as a tractable, minimal example of "depth separation" in transformers. This is exactly the sort of result one should aim for in a hackathon.
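For concreteness, one way to implement the cross-layer concatenation described here (function and variable names are illustrative, not the author's code):

```python
import numpy as np
from sklearn.decomposition import PCA

# acts_by_layer: list over layers of residual-stream activations,
# each of shape (n_tokens, d_model), collected at the same token positions.
def concat_layer_pca(acts_by_layer, n_components=3):
    X = np.concatenate(acts_by_layer, axis=-1)   # (n_tokens, n_layers * d_model)
    X = X - X.mean(axis=0, keepdims=True)        # center before fitting
    return PCA(n_components=n_components).fit_transform(X)
```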
Lauren
I like this project not because it made tremendous gains, but because it set out a clearly scoped and well-contextualized project that i) has a physics connection and an AI safety implication, ii) could be more-or-less completed within the hackathon period, and iii) demonstrates an understanding of its limitations and how these could be addressed. This project aims to reproduce Simplex’s work in a slightly more complex setting, namely the tension between the parallel computation of transformer attention and the belief states computed by optimal Bayesian updating. The plot produced by PCA does seem to have some structure, but this could have been explained a bit better. Overall, the author acknowledged gaps in the setup (e.g., differences between RRXOR and natural language data, the possibility of non-linear belief states, lack of layernorm), the analysis (e.g., the ‘exponential’ decay of the attention weight with distance was only approximate), and the results (e.g., a potential lack of structure in the OV circuit, in contrast with the original paper). All of this lends credibility to the author being able to carry out the future work as laid out.
Ari Brill
This project extends recent results in computational mechanics to find preliminary evidence that transformers trained on the RRXOR process perform constrained belief state updating. The empirical and theoretical results are novel and interesting. It would be great to see this project extended in future work.
Cite this work
@misc{gradowhite2025rrxor,
  title={(HckPrj) Constrained Belief Updates and Geometric Structures in Transformer Representations for the RRXOR Process},
  author={Brianna Grado-White},
  date={2025-07-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}