Jul 27, 2025

Constrained Belief Updates and Geometric Structures in Transformer Representations for the RRXOR Process

Brianna Grado-White


In this work, we seek to further understand the computational structure formed by transformers trained to predict the next token of a dataset. In particular, we extend the work of Piotrowski et al. (2025) by analyzing if and how these transformers implement constrained Bayesian updating, focusing on data generated by a Random-Random-XOR (RRXOR) process. Unlike the Mess3 processes originally studied by Piotrowski et al. (2025), the RRXOR process leads to slightly more nuanced theoretical and practical models. Nevertheless, we verify some of the main claims of that work: in particular, we find attention patterns that decay exponentially with distance. We further analyze the behavior of the OV vectors.
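For concreteness, the RRXOR data-generating process can be sketched as follows. This is an illustrative reconstruction of the process as it is usually defined (each triple is two uniform random bits followed by their XOR), not necessarily the exact generator used in the experiments:

```python
import random

def rrxor_sequence(n_triples, seed=None):
    """Generate tokens from the Random-Random-XOR process:
    each triple is (a, b, a XOR b) with a, b drawn uniformly from {0, 1}."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(n_triples):
        a = rng.randint(0, 1)
        b = rng.randint(0, 1)
        tokens.extend([a, b, a ^ b])
    return tokens

seq = rrxor_sequence(4, seed=0)
```

Because every third token is deterministic given the two before it, an optimal predictor must track where it is within a triple, which is what makes the belief-state geometry of this process richer than a memoryless one.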

Reviewer's Comments


Very nice work! With more time, it would be interesting to explore the attention pattern if the architecture is constrained to have fewer heads, and indeed to see how many heads are necessary (I assume at least two because of non-positive eigenvalues). Fewer heads may also reveal more interpretable OV vectors. Relaxing to more heads may then allow for a larger degenerate space of solutions.

Fig. 3 caption suggests that the rate of exponential decay is determined by the real part of the complex eigenvalues of the transition matrix of the HMM. Rather, the magnitude of this eigenvalue should determine the decay rate of an envelope, while there will be a decaying oscillation within this envelope as determined by the angle of this eigenvalue in the complex plane.
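The reviewer's point can be illustrated numerically with a hypothetical row-stochastic matrix (a stand-in, not the actual RRXOR transition matrix) that has a complex-conjugate eigenvalue pair: the mode's contribution after t steps decays like |λ|^t while oscillating with phase t·arg(λ), so the magnitude sets the envelope and the angle sets the oscillation:

```python
import numpy as np

# Hypothetical 3-state row-stochastic matrix with a complex eigenvalue
# pair; illustrative only, not the RRXOR HMM transition matrix.
T = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.25, 0.25],
])

eigvals, _ = np.linalg.eig(T)
sub = eigvals[np.argsort(np.abs(eigvals))[-2]]  # subdominant eigenvalue

t = np.arange(12)
envelope = np.abs(sub) ** t   # decay rate set by |lambda|
mode = np.real(sub ** t)      # decaying oscillation inside the envelope
assert np.all(np.abs(mode) <= envelope + 1e-12)
```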

In follow-up errata, the author mentions that the transition matrix is diagonalizable but singular. This seems fine though, since this does not preclude a spectral analysis.

This is a well-scoped extension of Piotrowski et al.'s belief state geometry work to the RRXOR process. The authors got a lot up and running over the course of the hackathon.

Interestingly, they report that in order to extract RRXOR belief state geometry via PCA, they had to concatenate across layers. If the reasons behind this were better understood, perhaps RRXOR could serve as a tractable, minimal example of "depth separation" in transformers. This is exactly the sort of result one should aim for in a hackathon.
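The concatenate-then-PCA step they describe can be sketched as follows, with placeholder random activations standing in for the transformer's residual-stream activations (the shapes and layer count are assumptions for illustration):

```python
import numpy as np

# Placeholder activations: acts[l] has shape (n_tokens, d_model) for each
# layer l. A real experiment would use the trained transformer's activations.
rng = np.random.default_rng(0)
n_layers, n_tokens, d_model = 2, 200, 16
acts = [rng.normal(size=(n_tokens, d_model)) for _ in range(n_layers)]

# Concatenate across layers before projecting, as the authors report
# was necessary to recover the belief-state geometry for RRXOR.
X = np.concatenate(acts, axis=1)        # (n_tokens, n_layers * d_model)
X = X - X.mean(axis=0, keepdims=True)   # center before PCA
_, _, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T                   # 2-D PCA projection
```

If the geometry only appears in the concatenated space, that would suggest no single layer linearly encodes the full belief state, which is what makes the depth-separation reading interesting.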

I like this project not because it made tremendous gains, but because it set out a clearly scoped and well-contextualized project that i) has a physics connection and an AI safety implication, ii) could be more or less completed within the hackathon period, and iii) demonstrates an understanding of limitations and how these could be addressed. This project aims to reproduce Simplex's work in a slightly more complex setting, namely the tension between the parallel distribution of transformer attention and the belief states computed by optimal Bayesian updating. The plot produced by PCA does seem to have some structure, but this could have been explained a bit better. Overall, the author acknowledged gaps in the setup (e.g., differences between RRXOR and natural-language data, the possibility of non-linear belief states, lack of layernorm), the analysis (e.g., the 'exponential' decay of the attention weight with distance was approximate), and the results (e.g., a potential lack of structure in the OV circuit, in contrast with the original paper). All of this lends credibility to the author being able to carry out the future work as laid out.

This project extends recent results in computational mechanics to find preliminary evidence that transformers trained on the RRXOR process perform constrained belief state updating. The empirical and theoretical results are novel and interesting. It would be great to see this project extended in future work.

Cite this work

@misc{
  title={(HckPrj) Constrained Belief Updates and Geometric Structures in Transformer Representations for the RRXOR Process},
  author={Brianna Grado-White},
  date={2025-07-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.