Jul 27, 2025
EWML: Explicit World Model Learning
Damoon Shahhosseini
This research develops a physics-based benchmark for evaluating AI systems' genuine understanding of physical principles as a proxy for safety and reliability in real-world applications.
Ari Brill
The project constructs a benchmark of simple physics simulations, with the goal of testing the physics knowledge of AI systems. The implementation seems reasonable, and the report explains the idea well. A noticeable limitation is that very few evaluations are performed using the benchmark; however, that's understandable in the context of a hackathon. At a higher level, while I understand the safety motivation for having an accurate world model to prevent accidents, it seems to me that this research direction would differentially enhance capabilities over alignment overall. I also worry that the benchmark primarily measures the ability to explicitly recognize and solve well-formulated physics problems, rather than capturing the abstract representations of intuitive physics mentioned in the introduction. That said, both explicit/formal and implicit/intuitive physics might relate to "world models" in different ways, and thinking about the interplay between them may be fruitful for building on the results of this project.
Jennifer Lin
The author proposes to benchmark models’ physics understanding by having models ingest JSON frame-by-frame metadata of classical and quantum physics simulations, then evaluating their performance on classifying the scenario type and predicting future states. As motivation they point to a core conceptual problem in AI forecasting and safety: how can we differentiate whether a model possesses “genuine understanding” as opposed to superficial pattern recognition? They include a repository of scripts that generate benchmark-ready JSON for various scenarios, and report initial results from Grok and a smaller Qwen variant.
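For concreteness, here is a minimal sketch of what a single benchmark-ready frame record might look like; the field names, units, and scenario label below are illustrative assumptions on my part rather than the project's actual schema.

```python
import json

# Hypothetical frame-by-frame record for a projectile-motion scenario.
# Field names (scenario, frame, dt, bodies, position, velocity) are
# illustrative assumptions, not necessarily the benchmark's real schema.
frame_record = {
    "scenario": "projectile_motion",   # label the model is asked to classify
    "frame": 12,
    "dt": 0.05,                        # seconds between frames
    "bodies": [
        {
            "id": "ball",
            "position": [3.0, 4.2],    # metres
            "velocity": [5.0, 1.1],    # metres per second
        }
    ],
}

# A benchmark prompt could serialize a window of such frames and ask the
# model to (a) name the scenario type and (b) predict the next frame's state.
prompt_payload = json.dumps([frame_record], indent=2)
print(prompt_payload)
```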
This project proposes a clear idea with AI safety relevance and is technically strong, with a substantial code base that more or less implements the idea. I do, however, have two critical comments. One is that the project could benefit from a literature review, since intuitive-physics benchmarking (including next-state prediction) is a relatively active subfield; it would be helpful to understand how the proposed idea goes beyond existing work. Another, more conceptual worry is that I'm not convinced this task can tease apart whether an AI system has "genuine physical understanding" or "superficial pattern recognition", and I think addressing this point would at minimum require a clear definition of what one means by each. For example, a model trained from scratch on a dataset of objects moving at constant velocity would probably become quite good at recognizing the pattern that objects advance by the same number of pixels in each frame, but should we say that it understands Newton's laws? One perspective is that physical laws are a consistent compression of real-world data that respects those laws, and it is precisely when we go off-distribution from previously seen data that we discover whether a model learned that particular law or some other pattern that happened to be consistent with the data. I do strongly agree with the author that the question of genuine understanding vs. pattern recognition is core to AI safety, and I think a benchmark that first clearly defines what we mean by each and then teases them apart could be an important contribution to the field.
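As an illustration of the off-distribution point, here is a minimal toy sketch (my own, not part of the project) of the kind of probe I have in mind: a predictor that has only learned the constant-velocity pattern is exact in distribution, but fails as soon as acceleration appears.

```python
import numpy as np

def constant_velocity_extrapolate(history: np.ndarray) -> np.ndarray:
    """Predict the next position by repeating the last observed displacement.

    This is the 'same shift every frame' pattern a model could learn from
    constant-velocity training data alone.
    """
    last_step = history[-1] - history[-2]
    return history[-1] + last_step

dt = 0.1
t = np.arange(0, 1.0, dt)

# In-distribution: uniform motion, x(t) = 2t. The pattern predictor is exact.
uniform = np.stack([2.0 * t, np.zeros_like(t)], axis=1)
pred_in = constant_velocity_extrapolate(uniform[:-1])
print("in-distribution error:", np.linalg.norm(pred_in - uniform[-1]))

# Off-distribution: projectile motion with gravity, y(t) = 3t - 0.5*9.8*t^2.
# The constant-velocity pattern now misses the acceleration term.
projectile = np.stack([2.0 * t, 3.0 * t - 0.5 * 9.8 * t**2], axis=1)
pred_out = constant_velocity_extrapolate(projectile[:-1])
print("off-distribution error:", np.linalg.norm(pred_out - projectile[-1]))
```

A benchmark that contrasts in-distribution and off-distribution prediction error in this way would come closer to separating law-learning from pattern-matching.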
Esben Kran
Super interesting idea: basically, we can evaluate world modeling on a sliding scale of complexity through various test environments and establish whether a model has an accurate model of physics. It's also great to test Grok, which is explicitly designed to be good at physics. It would be interesting to see similar single-prompt results for GPT-4o and other models that aren't strictly designed to "figure out the fundamental truths of the universe," for comparison.
This also seems generally extensible, and it's reasonable for a physicist to put together a range of environments that test progressively deeper understanding. Similar to the "grokking" work in the AI literature, where models suddenly learn the underlying algorithm instead of just memorizing (e.g. treating Earth's gravitational 'constant' as simply "how physics works"), extensions could look at how models suddenly learn to complete these various physics domains, and thereby establish a capabilities evaluation and demonstration of physics understanding.
Now, how relevant this is to AI safety is debatable: it mostly measures how well models understand the world, not necessarily how safe they'll be in relation to that understanding. But it is indeed a relevant metric, and it can shed more light on the models. It also looks like a benchmark that won't be immediately saturated.
Great work!
Max Hennick
The paper says there is a benchmark, but all that is provided is a single screenshot. No model performance metrics are reported, and the experimental results don't align with what is stated in the experimental design. While there could be interest in understanding whether a model understands physics, one should also define what is meant by this: there is an important distinction between understanding something like quantum chemistry and understanding the trajectory of a ball before you throw it.
Cite this work
@misc{shahhosseini2025ewml,
  title={(HckPrj) EWML: Explicit World Model Learning},
  author={Damoon Shahhosseini},
  date={2025-07-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}