Evaluating the ability of LLMs to follow rules

Jasmina Nasufi, Einar Urdshals

In this report we study the ability of LLMs (GPT-3.5-Turbo and meta-llama-3-70b-instruct) to follow explicitly stated rules with no moral connotations in a simple single-shot and multiple choice prompt setup. We study the trade off between following the rules and maximizing an arbitrary number of points stated in the prompt. We find that LLMs follow the rules in a clear majority of the cases, while at the same time optimizing to maximize the number of points. Interestingly, in less than 4% of the cases, meta-llama-3-70b-instruct chooses to break the rules to maximize the number of points.

Reviewer's Comments

Reviewer's Comments

Arrow
Arrow
Arrow

Rudolf L

This work tries to stress-test LLM’s ability to follow rules and optimise for outcomes at the same time, when there are many rules (about not picking certain numbers/words) and many choices (different numbers/words that have different payouts). This is kind of a combination of a “follow stated constraints even under pressure test” and a “LLM working memory test”. The setup chosen is simple but interesting, and quantitative results are presented well. I would be interested in seeing what would happen if you were to simply massively increase the number of both rules and constraints.

Nora Petrova

Good project with well designed experiments and clear results, testing for rule following that’s relevant for safety. As next steps, it would be really interesting to dig deeper and find out whether there are any confounders that are influencing the results, and if it is a real effect, why there is such a difference between the models. Interesting questions to explore in future work:

  • Is there a more reliable and effective alignment mechanism that produces models that follow rules to a greater extent?

  • Could this be informative in developing better alignment methods?

  • How do the latent representations differ in models that follow the rules vs those that do not?

  • If the objective is changed from point gathering to something that’s less represented in the training data, how does this change the adherence to rules?

Minh Nguyen

Honestly, I didn’t expect this to have novel findings, but it did!

Mateusz Jurewicz

Fantastic work! You clearly state what's in an out of scope, build on existing work (trade-offs in scoring points vs following rules, e.g. Machiavelli). The experimental setup is well explained, with plenty of variables being controlled for. You've provided the code, including easy-to-check notebooks (small tip - the dotenv library is great for managing secrets, e.g. api keys). I only have minor suggestions - e.g. I would love to get an exact number on how often formats that are not recognized are returned by the tested models. You might also want to change the footer to refer to the security evaluation vs multi-agent security hackathon.

Very interesting that in Figure 2 meta-llama-3-70b-instruct actually becomes more likely to follow the rules when explicitly instructed to maximize points when only a single option is banned. It's great that you mention that model's tendency to maximize points regardless of whether it's instructed to do so. It's also very interesting that that model is consistent in breaking the rules only for the sake of maximizing points.

It would be very interesting to test whether the rule breaking stems from trying to maximize points or, as you suggest as possible alternative, by accident. I think you could easily extend your experimental framework to test this by looking at scenarios where the rule break is a result of choosing the option that does not give the most points. An interesting related paper to look up might be "Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI" by Ramanayake et. al (2023). Great work, hope you continue to build on it.

Jason Hoelscher-Obermaier

I liked the very clean setup of the experiment. This seems like an interesting exploratory tool that can be scaled up easily to search for problematic behavior. A crucial methodological question to my mind is how well the abstracted/cleaned version of the problem predicts rule-following in rich/messy/real-world contexts. Attempts at studying this correlation could be a very valuable addition to this work

Jacob Haimes

Really solid investigation into instruction following, which definitely seems like something we should have a good understanding of, and benchmarks for! I especially like the purposeful consideration of confounding factors on your results, e.g. analysis of GPT-3.5-turbo’s bias against responding (A) to multiple choice questions, and the callout that stating the goal may increase error rate simply by increasing prompt complexity.

When working with LLMs, it is very easy to begin to use language that may result in misconceptions or confusion from others by using language typically tied to humans, like feelings or states of mind, which in turn can lead to assumptions that may not be true. “Urge” does fit the vibe of idea that is being expressed, but leans towards emotion in a way that might be confusing to some readers; perhaps a word like “propensity” could be an equally good fit?

Esben Kran

This is a wonderful pilot project for identifying when a model stops following instructions due to optimization for an unspecified but model-developed goal. There's definite potential for further work in this direction: Making the examples more naturalistic, extending to more models, identifying differences between competent and incompetent models in deliberate and non-deliberate disregarding of the rules. A very clean idea tested with a clear toy experiment.

Cite this work

@misc {

title={

Evaluating the ability of LLMs to follow rules

},

author={

Jasmina Nasufi, Einar Urdshals

},

date={

5/26/24

},

organization={Apart Research},

note={Research submission to the research sprint hosted by Apart.},

howpublished={https://apartresearch.com}

}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.

Read More

Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.

Read More

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.

Read More

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.

Read More

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.