EWML: Explicit World Model Learning

Damoon Shahhosseini

This research develops a physics-based benchmark for evaluating AI systems' genuine understanding of physical principles as a proxy for safety and reliability in real-world applications.

Reviewer's Comments

Ari Brill

The project constructs a benchmark of simulations of simple physics scenarios, with the goal of testing the physics knowledge of AI systems. The implementation seems reasonable, and the report explains the idea well. A noticeable limitation is that very few evaluations are performed using the benchmark. However, that’s understandable in the context of a hackathon. At a higher level, while I understand the safety motivation to have an accurate world model for preventing accidents, it seems to me that this research direction would differentially enhance capabilities over alignment overall. I also worry that the benchmark primarily measures the ability to explicitly recognize and solve well-formulated physics problems, rather than capturing abstract representations of intuitive physics, as mentioned in the introduction. That said, both explicit/formal and implicit/intuitive physics might relate to “world models” in different ways, and thinking about the interplay between them may be fruitful for building on the results of this project.

Jennifer Lin

The author proposes to benchmark models’ physics understanding by having models ingest JSON frame-by-frame metadata of classical and quantum physics simulations, then evaluating their performance on classifying the scenario type and predicting future states. As motivation they point to a core conceptual problem in AI forecasting and safety: how can we differentiate whether a model possesses “genuine understanding” as opposed to superficial pattern recognition? They include a repository of scripts that generate benchmark-ready JSON for various scenarios, and report initial results from Grok and a smaller Qwen variant.
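For concreteness, here is a minimal sketch of the kind of frame-by-frame JSON record such a benchmark might emit and how it could be posed to a model. The field names and the `generate_frames` helper are hypothetical illustrations; the project's actual schema is defined by its generation scripts and may differ.

```python
import json

def generate_frames(x0=0.0, v0=2.0, dt=0.1, n_frames=5):
    """Hypothetical generator: constant-velocity motion sampled at fixed timesteps."""
    frames = []
    for i in range(n_frames):
        t = i * dt
        frames.append({
            "frame": i,
            "time": round(t, 3),
            "objects": [{"id": "ball", "position": [round(x0 + v0 * t, 3), 0.0]}],
        })
    return frames

scenario = {
    "scenario_type": "constant_velocity",   # the label the model must recover
    "metadata": {"dt": 0.1, "units": "SI"},
    "frames": generate_frames(),
}

# The model would receive this JSON (with the label withheld) and be asked to
# classify the scenario type and predict the state of the next frame.
print(json.dumps(scenario, indent=2))
```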

This project proposes a clear idea with AI safety relevance and is technically strong, with a substantial code base that more or less implements the idea. I do, however, have two critical comments. One is that the project could benefit from a literature review, since benchmarks for intuitive physics (including with next-state prediction) are a relatively popular subfield; it would be helpful to understand how the proposed idea goes beyond existing work. Another, more conceptual worry is that I'm not convinced this task can tease apart whether an AI system has "genuine physical understanding" or merely performs "superficial pattern recognition", and I think addressing this point would at minimum require a clear definition of what one means by each. For example, a model trained from scratch on a dataset of objects moving at constant velocity would probably become quite good at recognizing the pattern that objects advance by the same number of pixels in each frame, but should we say that it understands Newton's laws? One perspective is that physical laws are a consistent compression of real-world data that respects the law, and it's precisely when we go off-distribution from previously seen data that we discover whether a model learned that particular law or some other pattern that was consistent with the data. I do strongly agree with the author that the question of genuine understanding vs. pattern recognition is core to AI safety, and I think a benchmark that first clearly defines what we mean by each and then teases them apart could be an important contribution to the field.
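The reviewer's constant-velocity example can be made concrete with a small off-distribution probe. The sketch below is illustrative only and not part of the submission: a "pattern" model fit solely to uniform motion extrapolates linearly and fails the moment acceleration enters, which is one way to separate learning the law from matching the training distribution.

```python
import numpy as np

# In-distribution data: an object moving at constant velocity.
t_train = np.arange(0, 5, 0.1)
x_train = 2.0 * t_train  # x = v * t with v = 2

# A "pattern" model: the best straight line through what was seen.
slope, intercept = np.polyfit(t_train, x_train, deg=1)

# Off-distribution probe: the same object now under constant acceleration.
t_probe = np.arange(5, 7, 0.1)
x_true = 2.0 * t_probe + 0.5 * 9.8 * (t_probe - 5.0) ** 2  # gravity switched on
x_pred = slope * t_probe + intercept                       # linear extrapolation

# A large error here indicates the model captured the in-distribution pattern
# (uniform motion) rather than the underlying dynamics.
print("mean absolute error off-distribution:", np.mean(np.abs(x_pred - x_true)))
```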

Esben Kran

Super interesting idea - basically, we can evaluate world modeling on a sliding scale of complexity through various test environments and establish whether a model has an accurate model of physics. It's also great to test Grok, which is explicitly designed to be good at physics. It would be interesting to see similar single-prompt results for GPT-4o etc. to compare against models that aren't strictly designed to "figure out the fundamental truths of the universe."

This also seems generally extendable, and it's reasonable for a physicist to put together a range of different environments that test deeper and deeper understanding. Similar to the "grokking" work in the AI literature, where models suddenly learn the underlying algorithm instead of just memorizing (e.g., treating the Earth's gravitational 'constant' as "how physics works"), extensions could look at how models suddenly learn to complete these various physics domains and thereby establish a capabilities evaluation and demonstration of physics understanding.

Now, how relevant this is to AI safety is debatable: it mostly measures how well the models understand the world, not necessarily how safe they'll be in relation to that understanding. But it is indeed a relevant metric and can shed more light on the models. It also looks like a benchmark that won't be immediately saturated.

Great work!

Max Hennick

The paper says there is a benchmark, but all that is provided is a single screenshot. No model performance metrics are reported, and the experimental results don't align with what is described in the experimental design. While there could be interest in understanding whether a model understands physics, one should also define what is meant by this. There is an important distinction between understanding something like quantum chemistry and understanding the trajectory of a ball before you throw it.

Cite this work

@misc{
  title={(HckPrj) EWML: Explicit World Model Learning},
  author={Damoon Shahhosseini},
  date={7/27/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.

Read More

Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.