Jul 28, 2025
AI agentic system epidemiology
Valentina Schastlivaia, Aray Karjauv
As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.
In this work, we adapt classical epidemiological models (specifically SEIR compartment models) to describe the propagation of adversarial behavior through populations of AI agents.
By solving the governing systems of ODEs with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.
We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).
Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.
This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox for AI safety.
We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.
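The SEIR dynamics described in the abstract can be sketched numerically. The following is a minimal illustration of an SEIR system reinterpreted for an agent population, with purely hypothetical parameter values (not the paper's calibrated estimates): susceptible agents are exposed by compromised ones at rate beta, exposed agents become actively adversarial at rate sigma, and infected agents are detected and patched at rate gamma.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates (assumed, not the paper's fitted values):
beta, sigma, gamma = 0.4, 0.2, 0.1  # exposure, activation, detection/patch rates

def seir(t, y):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N              # susceptible agents contacted by infected ones
    dE = beta * S * I / N - sigma * E   # exposed agents turn actively adversarial
    dI = sigma * E - gamma * I          # adversarial agents detected and patched
    dR = gamma * I                      # removed (patched/quarantined) agents
    return [dS, dE, dI, dR]

y0 = [999, 0, 1, 0]  # one compromised agent in a population of 1000
sol = solve_ivp(seir, (0, 160), y0, dense_output=True)

R0 = beta / gamma  # basic reproduction number; R0 > 1 means an outbreak can spread
print(f"R0 = {R0:.1f}, peak infected = {sol.y[2].max():.0f}")
```

With R0 above 1, the infected compartment grows into an outbreak before detection drives it back down; interventions that push R0 below 1 (faster patching, lower exposure rate) stabilize the system.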
Lauren
This is a cool, cross-disciplinary project using PINNs and epidemiological models to study the propagation of adversarial behavior in multi-agent systems. The physics angle and potential AI safety impact are there, and it does seem like this could be modeled as a dynamical system. The states each agent can take, like ‘exposed’, ‘infected’, and ‘removed’, make sense, and the parameters in the equations of motion are reinterpreted in an AI context, but the paper is missing motivation for these equations and for the form of L_physics. Overall, the project reads like the description of a method, which is OK, but not like a proof of concept of adversarial spread in multi-agent systems (which is how the introduction reads). Walking through a concrete example would have helped, as it’s unclear what ‘adversarial’ means in this context or what was taken for the initial conditions (for example). Without comparing the modeled data to a ground truth, it’s hard to know if this would truly carry over to an AI setting. Perhaps it would have worked better as an exploration.
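For readers unfamiliar with the term raised in this review, L_physics in a PINN is conventionally the residual of the governing ODEs evaluated at collocation points. A minimal sketch for an SEIR system is below (in PyTorch; the function name, the exp positivity constraint, and the equal residual weighting are assumptions, not necessarily the authors' choices):

```python
import torch

def physics_loss(model, t, beta, sigma, gamma):
    # ODE-residual ("physics") loss for a PINN solving the SEIR system:
    # penalize deviations of the network's S, E, I, R outputs from the
    # governing equations at collocation times t.
    t = t.requires_grad_(True)
    # exp keeps compartment sizes positive (a modeling choice, not from the paper)
    S, E, I, R = model(t).exp().split(1, dim=-1)
    grad = lambda x: torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    dS, dE, dI, dR = grad(S), grad(E), grad(I), grad(R)
    N = S + E + I + R
    res_S = dS + beta * S * I / N
    res_E = dE - (beta * S * I / N - sigma * E)
    res_I = dI - (sigma * E - gamma * I)
    res_R = dR - gamma * I
    return sum((r ** 2).mean() for r in (res_S, res_E, res_I, res_R))

# Tiny untrained demo network mapping time -> (log S, log E, log I, log R)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))
t_colloc = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)
loss = physics_loss(net, t_colloc, beta=0.4, sigma=0.2, gamma=0.1)
```

In full training this residual term is typically added to data-fit and initial-condition losses; the review's request for motivation would amount to stating these terms and their weights explicitly.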
Ari Brill
This project’s application of epidemiological dynamical systems models to study population dynamics of multi-agent systems is very interesting, and highlights an innovative way in which methods from physics (PINNs) can be applied in AI safety. My main critique is that the setup seems somewhat contrived, with malicious agents “infecting” normal ones, and it’s not clear to me how realistic this model is. That said, I think this approach is worth exploring further.
Nikita Khomich
Relevant for multi-agent, decentralized AI deployments. Using a physics-grounded dynamical-systems lens to reason about systemic security risk is well motivated. However, the empirical grounding and calibration require substantial tightening; there are inconsistencies in the results presentation (e.g., risk labels vs. R_0 values) and several places where methodological choices are asserted but not validated (e.g., mapping from adversarial success rates to beta via “guestimation”). But it's a really interesting approach, and fairly new and relevant as multi-agent systems become more common.
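The beta mapping this review questions can be made concrete with the standard epidemiological identity beta = (contact rate) × (per-contact transmission probability). A hedged calibration sketch, with all numbers hypothetical, is:

```python
# Hypothetical calibration sketch (illustrative numbers, not the paper's data):
contacts_per_day = 12.0   # assumed avg. agent-to-agent interactions per day
p_success = 0.03          # assumed adversarial success rate per interaction
beta = contacts_per_day * p_success  # expected new exposures per infected agent/day

mean_time_to_patch = 14.0            # assumed detection + patching delay, in days
gamma = 1.0 / mean_time_to_patch     # removal rate

R0 = beta / gamma  # basic reproduction number implied by these assumptions
print(R0)
```

Validating such a mapping would mean checking each factor (contact rates from deployment telemetry, success rates from red-team benchmarks, patch delays from incident data) rather than asserting beta directly.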
Cite this work
@misc{
  title={(HckPrj) AI agentic system epidemiology},
  author={Valentina Schastlivaia and Aray Karjauv},
  date={7/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}