This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Computational Mechanics Hackathon!
June 3, 2024
Accepted at the Computational Mechanics Hackathon! research sprint.

Belief State Representations in Transformer Models on Nonergodic Data

We extend prior work finding representations of belief spaces in the activations of small transformer models, showing that the phenomenon also occurs when the training data is generated by Hidden Markov Models whose hidden states do not communicate at all. Our results suggest that Bayesian updating and internal belief-state representation arise even when they are not necessary to perform well on the prediction task, providing tentative evidence that large transformers likewise maintain a representation of their external world.
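The setting above can be made concrete with a small sketch. The following is a hypothetical illustration (not the authors' code, and all matrices are made up for the example): a nonergodic HMM is one whose transition matrix is block-diagonal, so the hidden-state groups never communicate, and a Bayesian observer tracks a belief distribution over hidden states by filtering on the emitted symbols.

```python
import numpy as np

# Block-diagonal transition matrix: states {0, 1} and {2, 3} form two
# components that never communicate (a nonergodic HMM).
T = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.4, 0.6],
])

# Emission probabilities over a binary alphabet {0, 1}; one row per hidden state.
E = np.array([
    [0.8, 0.2],
    [0.3, 0.7],
    [0.6, 0.4],
    [0.1, 0.9],
])

def update_belief(belief, symbol):
    """One step of Bayesian filtering: propagate the belief through the
    transition matrix, then reweight by the likelihood of the observed symbol."""
    predicted = belief @ T
    posterior = predicted * E[:, symbol]
    return posterior / posterior.sum()

# Start from a uniform prior over all four states and filter a short sequence.
belief = np.full(4, 0.25)
for sym in [0, 1, 1, 0, 1]:
    belief = update_belief(belief, sym)

# Because the two components never mix, evidence can only shift mass between
# them via the likelihoods; within the run, the belief geometry the paper
# probes for is the trajectory of this vector.
print(belief)
```

The claim under investigation is that a transformer trained only on next-token prediction over such sequences nonetheless linearly represents this belief vector in its activations, even though tracking the full belief state is unnecessary once the active component is identified.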

By Junfeng Feng, Wanjie Zhong, Doroteya Stoyanova, Lennart Finke
