Jul 27, 2025

EWML: Explicit World Model Learning

Damoon Shahhosseini

This research develops a physics-based benchmark for evaluating AI systems' genuine understanding of physical principles as a proxy for safety and reliability in real-world applications.

Reviewer's Comments

The project constructs a benchmark of simulations of simple physics scenarios, with the goal of testing the physics knowledge of AI systems. The implementation seems reasonable, and the report explains the idea well. A noticeable limitation is that very few evaluations are performed using the benchmark. However, that’s understandable in the context of a hackathon. At a higher level, while I understand the safety motivation to have an accurate world model for preventing accidents, it seems to me that this research direction would differentially enhance capabilities over alignment overall. I also worry that the benchmark primarily measures the ability to explicitly recognize and solve well-formulated physics problems, rather than capturing abstract representations of intuitive physics, as mentioned in the introduction. That said, both explicit/formal and implicit/intuitive physics might relate to “world models” in different ways, and thinking about the interplay between them may be fruitful for building on the results of this project.

The author proposes to benchmark models’ physics understanding by having models ingest JSON frame-by-frame metadata of classical and quantum physics simulations, then evaluating their performance on classifying the scenario type and predicting future states. As motivation they point to a core conceptual problem in AI forecasting and safety: how can we differentiate whether a model possesses “genuine understanding” as opposed to superficial pattern recognition? They include a repository of scripts that generate benchmark-ready JSON for various scenarios, and report initial results from Grok and a smaller Qwen variant.
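To make the setup concrete, here is a minimal sketch of what generating benchmark-ready, frame-by-frame JSON for one classical scenario might look like. The repository's actual schema and function names are not shown in the report, so the field names and structure below are illustrative assumptions, not the project's code.

```python
import json
import math

def simulate_projectile(v0=10.0, angle_deg=45.0, dt=0.1, g=9.81):
    """Generate frame-by-frame metadata for a 2D projectile.

    Illustrative only: the benchmark's real JSON schema may differ.
    Each frame records time, position, and velocity; a model is then
    asked to classify the scenario and predict future frames.
    """
    theta = math.radians(angle_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x = y = t = 0.0
    frames = []
    while y >= 0.0:
        frames.append({
            "t": round(t, 3),
            "position": [round(x, 4), round(y, 4)],
            "velocity": [round(vx, 4), round(vy, 4)],
        })
        # Simple Euler integration step.
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        t += dt
    return {"scenario": "projectile_motion", "dt": dt, "frames": frames}

# Serialize for a model prompt, withholding the last frames as targets.
data = simulate_projectile()
prompt_frames = json.dumps(data["frames"][:-3])
```

A next-state-prediction item would then show the model `prompt_frames` and score its prediction of the withheld frames against the simulator's ground truth.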

This project proposes a clear idea with AI safety relevance and is technically strong, with a substantial code base that more or less implements the idea. I do however have two critical comments. One is that the project could benefit from a literature review, since benchmarks for intuitive physics (including next-state prediction) are a relatively popular subfield; it would be helpful to understand how the proposed idea goes beyond existing work. The other, more conceptual worry is that I'm not convinced this task can tease apart whether an AI system has "genuine physical understanding" versus "superficial pattern recognition", and I think addressing this point would at minimum require a clear definition of what one means by each. For example, a model trained from scratch on a dataset of objects moving at constant velocity would probably become quite good at recognizing the pattern that objects advance by the same number of pixels in each frame, but should we say that it understands Newton's laws? One perspective is that physical laws are a consistent compression of real-world data that respects the law, and it's precisely when we go off-distribution from previously seen data that we discover whether a model learned that particular law or some other pattern that was consistent with the data. I do strongly agree with the author that the question of genuine understanding vs. pattern recognition is core to AI safety, and I think a benchmark that first clearly defines what we mean by each and then teases them apart could be an important contribution to the field.
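The constant-velocity example above can be made concrete with a toy sketch (hypothetical code, not from the project): a "pattern" model that learns only the average per-frame displacement succeeds in-distribution but fails as soon as the dynamics change.

```python
def fit_constant_step(positions):
    """'Superficial pattern' model: learn the average per-frame displacement."""
    steps = [b - a for a, b in zip(positions, positions[1:])]
    return sum(steps) / len(steps)

# In-distribution: uniform motion, 2 units per frame.
uniform = [0.0, 2.0, 4.0, 6.0, 8.0]
step = fit_constant_step(uniform)     # 2.0
prediction = uniform[-1] + step       # 10.0 -- correct next position

# Off-distribution: uniform acceleration, positions = t**2.
accel = [float(t**2) for t in range(5)]  # [0, 1, 4, 9, 16]
step = fit_constant_step(accel)          # 4.0 (average of 1, 3, 5, 7)
prediction = accel[-1] + step            # 20.0, but the true next position is 25.0
```

A model that had internalized the underlying law (constant acceleration) rather than the surface pattern (constant step) would extrapolate correctly here, which is the kind of off-distribution probe the benchmark would need to distinguish the two.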

Super interesting idea - basically, we can evaluate world modeling on a sliding scale of complexity through various test environments and establish whether a model has an accurate model of physics. It's also great to test Grok, which is explicitly designed to be good at physics. It would be interesting to see similar single-prompt results for GPT-4o and other models that aren't strictly designed to "figure out the fundamental truths of the universe," for comparison.

The approach also seems readily extensible: it's reasonable for a physicist to put together a range of environments that test progressively deeper understanding. Similar to the "grokking" work in the AI literature, where models suddenly learn the underlying algorithm instead of just memorizing (e.g., treating the Earth's gravitational 'constant' as "how physics works"), extensions could examine how models suddenly learn to complete these various physics domains, thereby establishing a capabilities evaluation and demonstration of physics understanding.

Now, the relevance to AI safety is debatable: this mostly measures how well models understand the world, not necessarily how safe they will be as a result. But it is a relevant metric that can shed more light on the models, and it also looks like a benchmark that won't be immediately saturated.

Great work!

The paper says there is a benchmark, but all that is provided is a single screenshot. There are no model performance metrics, and the experimental results don't align with what is described in the experimental design. While there could be interest in understanding whether a model understands physics, one should also define what is meant by this: there is an important distinction between understanding something like quantum chemistry and understanding the trajectory of a ball before you throw it.

Cite this work

@misc{
  title={(HckPrj) EWML: Explicit World Model Learning},
  author={Damoon Shahhosseini},
  date={7/27/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.