Layerwise Development of Compositional Functional Representations Across Architectures

Jeyashree Krishnan, Ajay Mandyam Rangarajan

Understanding where and how neural networks represent internal computations is central to the goals of mechanistic interpretability and AI safety. We present a comprehensive empirical framework for studying circuit emergence, the process by which neural networks progressively encode and modularize functionally meaningful information across their depth and architecture. Using synthetic datasets and a hierarchy of function complexities, we investigate both MLPs and Transformers across seven axes of interpretability, including complexity scaling, modular composition, symmetry, phase transitions, layer-wise decoding, and grokking. Linear probes trained on hidden activations reveal the internal structure of concept learning, including decomposed representations for composite functions, invariance to symmetric inputs, and early indicators of generalization. We observe consistent evidence of phase transitions in concept decodability, modular emergence of inner subfunctions in deeper layers, and divergence in how MLPs and Transformers encode complexity. Position embeddings in Transformers emerge gradually, whereas other layers saturate almost immediately. However, our experiments reveal that while MLPs often exhibit modular, interpretable representations of composite functions, the Transformers we study fail to decompose composite functions into their constituent parts, highlighting architectural limits to spontaneous circuit emergence.
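The abstract leaves the probing setup unspecified; the following is a minimal sketch, assuming a small PyTorch MLP trained on y = sin(x^2), of how layer-wise ridge probes for the inner function g(x) = x^2 and the composite f(g(x)) might be fit to hidden activations. All architecture and hyperparameter choices here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): train a small MLP on the composite
# target y = sin(x^2), then fit a closed-form ridge probe to each hidden layer
# for the inner function g(x) = x^2 and for the composite f(g(x)) = sin(x^2).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-2.0, 2.0, 2048).unsqueeze(1)
g_true = x ** 2                 # inner function g(x)
y = torch.sin(g_true)           # composite target f(g(x))

width, depth = 64, 4            # illustrative architecture
dims = [1] + [width] * depth
blocks = nn.ModuleList(
    [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU()) for i in range(depth)]
)
head = nn.Linear(width, 1)

opt = torch.optim.Adam(list(blocks.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(2000):           # train on the composite task only
    h = x
    for blk in blocks:
        h = blk(h)
    loss = nn.functional.mse_loss(head(h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

def probe_r2(acts, target, lam=1e-3):
    """R^2 of a ridge regression probe from layer activations to a scalar target."""
    A = torch.cat([acts, torch.ones(len(acts), 1)], dim=1)
    w = torch.linalg.solve(A.T @ A + lam * torch.eye(A.shape[1]), A.T @ target)
    pred = A @ w
    ss_res = ((target - pred) ** 2).sum()
    ss_tot = ((target - target.mean()) ** 2).sum()
    return float(1.0 - ss_res / ss_tot)

with torch.no_grad():
    h = x
    for i, blk in enumerate(blocks):
        h = blk(h)
        print(f"layer {i}: R^2[g(x)] = {probe_r2(h, g_true):.3f}, "
              f"R^2[f(g(x))] = {probe_r2(h, y):.3f}")
```

Under this setup, the paper's headline claim would correspond to high probe R^2 for g(x) in early layers and high R^2 for f(g(x)) only in later layers.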

Reviewer's Comments

Andrew Mack

The authors run a suite of interesting experiments to look for "compositionality" in trained neural networks. Their most striking finding is that deep fully-connected MLPs appear to learn the function x -> sin(x^2) compositionally, with x^2 being linearly decodable in earlier layers but not sin(x^2). It seems the authors may not have had time to clarify some of their results; for example, my summary of the x -> sin(x^2) result is inferred indirectly from their prose, and the result is not clearly depicted in Figure 3.

Lauren

This project claims that its probes show that simple functions decompose across layers in an MLP but not in a Transformer. The premise (comparing architectures in this way, varying task complexity, and so on) is good for a hackathon project. However, the authors do not provide implementation details such as how their datasets are constructed or which architectures and datasets correspond to each plot. Similarly, the '7 axes of interpretability' (I'm unsure these are all of the same type signature) are only lightly evidenced. The phase-transition claim is unmotivated: the authors appear to eyeball a heat-map threshold where probe accuracy jumps but never quantify it, so it reads as though it was added only to make some connection to physics. Many plots also seem mislabeled, which adds to the confusion (I think Figure 2 should be something like layer index vs. function, but it is labeled complexity vs. width). Finally, on the AI safety framing: understanding modularity is good, but a model's tendency to form modular representations is not correlated with its performance or safety. The goal of interpretability is not necessarily to make models composable, but to understand models even when they are not.
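For concreteness, one way to quantify such a jump, rather than eyeballing a heat-map threshold, would be to fit a logistic curve to probe accuracy as a function of layer (or training step) and report the fitted midpoint and sharpness. The sketch below is purely illustrative, with made-up accuracy values; it is not taken from the authors' code.

```python
# Illustrative only: quantify a "phase transition" in probe accuracy by fitting
# a logistic curve and reporting its midpoint and sharpness.
import numpy as np
from scipy.optimize import curve_fit

layers = np.arange(8)
acc = np.array([0.12, 0.15, 0.18, 0.35, 0.82, 0.91, 0.93, 0.94])  # hypothetical values

def logistic(x, lo, hi, mid, k):
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - mid)))

(lo, hi, mid, k), _ = curve_fit(logistic, layers, acc, p0=[0.1, 0.9, 3.5, 2.0])
print(f"transition midpoint ~ layer {mid:.2f}, sharpness k ~ {k:.2f}")
```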

Lucas Teixeira

AI Safety Relevance: While interpretability is an important subfield of AI Safety, and the authors seem to make a sincere effort to contribute to the field, they fail to make contact with the existing interpretability literature and current techniques, which renders many of their results moot. There is also no engagement with physics.

Innovation & Originality: Training models on ground-truth composite functions and then training linear probes for each component function across model layers is a reasonable first thing to do if one is interested in interpretability. However, the field has progressed significantly beyond this, so the results, even if they are to be trusted, offer little insight.

Technical Rigor & Feasibility: Most of the paper's plots are shown with incomplete information, and at times with information that contradicts what is written in the text, making it difficult to evaluate the claims.

In Figure 2 it is unclear what the unit of complexity (y axis) is. It is also unclear what the x axis represents: Section 4.1 and the caption underneath the figure claim that it runs across layers, but the axis label reads model width, and the indices make sense for model width. There is also no mention of the ground-truth function being learned.

In Figure 3, lines are overlaid on top of one another, making it extremely difficult to understand what is being claimed. The favorable interpretation is that g(x) (blue) had a perfect fit at layer 0 and f(g(x)) (brown) had a perfect fit at layer 2, while g(x) had near-zero fit at layer 2 (green) and f(g(x)) had no fit at layer 0 (red). There is also no mention of how many layers the trained model has; without knowing this, it is unclear whether the results are trivial. Finally, there is no interesting variation across the x axis, so it is unclear why the authors chose to include it.

In Figure 4 there is no mention of the layers at which the linear-probe accuracy measurements are performed. Additionally, since there seems to be no interesting variation across training, it is unclear why the authors chose to include this figure.

Furthermore, the codebase the authors have provided is difficult to parse: there are about 50 Python scripts in a single folder. Granted, they do provide the names of the relevant files in the README, but the lack of internal organization makes quick sanity checks unnecessarily difficult.

My suggestion is for the authors to adopt cleaner code and paper-writing practices, and to engage more closely with the relevant mechanistic interpretability literature.

Ari Brill

The project uses linear probes to investigate how functional representations are distributed across layers for MLPs and transformers trained on composite function tasks. The basic idea makes sense, and is clearly relevant to AI safety. Regarding the empirical finding that transformers learn positional embeddings more slowly than later layers, the authors may be interested in theoretical work that predicts this, and recommends using a larger learning rate for the positional embedding layer: https://arxiv.org/abs/2304.02034
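As a minimal sketch of that recommendation (not the authors' setup; the module and attribute names below are illustrative), the positional-embedding parameters can be given their own, larger learning rate via PyTorch optimizer parameter groups:

```python
# Illustrative sketch: assign a larger learning rate to positional-embedding
# parameters using optimizer parameter groups. The toy model and the
# "pos_embedding" attribute name are assumptions, not the authors' code.
import torch
import torch.nn as nn

class TinyTransformer(nn.Module):
    """Toy stand-in for the paper's Transformer; dimensions are illustrative."""
    def __init__(self, d_model=32, seq_len=16, vocab=10):
        super().__init__()
        self.tok_embedding = nn.Embedding(vocab, d_model)
        self.pos_embedding = nn.Parameter(torch.zeros(seq_len, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):
        h = self.tok_embedding(tokens) + self.pos_embedding[: tokens.shape[1]]
        return self.head(self.encoder(h))

model = TinyTransformer()
pos_params = [p for n, p in model.named_parameters() if "pos_embedding" in n]
other_params = [p for n, p in model.named_parameters() if "pos_embedding" not in n]

optimizer = torch.optim.AdamW(
    [
        {"params": other_params, "lr": 1e-3},
        {"params": pos_params, "lr": 1e-2},  # e.g. 10x larger for the positional embedding
    ]
)
```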

However, I have a number of concerns about this project.

1) While this interpretability study is relevant to AI safety, the connection to physics is limited.

2) Several of the figures do not show what is claimed in the text or caption. For example, neither Fig. 7 nor Fig. 15 show any evidence of grokking, and Fig. 7 in fact shows hardly any learning at all. This makes me doubt the validity of the claimed results.

3) At 8 pages + appendices, the report exceeds the page limit of 6 pages + appendices.

Cite this work

@misc{
  title={(HckPrj) Layerwise Development of Compositional Functional Representations Across Architectures},
  author={Jeyashree Krishnan and Ajay Mandyam Rangarajan},
  date={7/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.