US-1: Full AI Nationalization can cause Misaligned Economic Incentives

Nikolay Radev

The escalating geostrategic importance of frontier AI development increases the likelihood of nationalization. While no explicit plans have emerged in the United States, such action would likely be swift and comprehensive. A government seizure of critical AI infrastructure would fundamentally transform the sector's economic foundation – shifting funding from traditional private sources to the American tax base, thereby repositioning AI as a public good. The objectives driving development would similarly pivot from user engagement to national security imperatives. Given the history of American adversaries pursuing intellectual property theft, this transition would likely establish a more restrictive diffusion model that prioritizes security over openness. By tightly controlling crucial elements of the AI stack, that approach risks diminishing the broader societal benefits that might otherwise emerge from AI advancement.

Reviewer's Comments

Joel Christoph

The paper offers a timely policy analysis of a full United States nationalization of frontier AI labs and argues that such a move could create misaligned economic incentives that slow diffusion and reduce overall welfare. It surveys historical precedents like the USRA, Manhattan Project, Apollo Program, MITI, and Korea's heavy-industry drive, then applies public-choice theories such as Niskanen's budget-maximizing bureaucracy and Kornai's soft budget constraint to foresee cost overruns and efficiency losses. The narrative is well structured and the prose is clear. The inclusion of concrete channels like talent retention, compute commandeering under the Defense Production Act, and security-driven restrictions on collaboration grounds the discussion in plausible mechanisms. The historical vignettes and theories are drawn together coherently, and the paper ends with pragmatic recommendations that treat nationalization as a last resort in favor of "soft" public-private control.

The contribution is mainly descriptive and lacks formal modeling or new empirical evidence. No quantitative framework is provided to compare nationalized and private incentive structures, nor are there back-of-the-envelope fiscal estimates beyond citing past GDP percentages for the Manhattan and Apollo projects. The historical cases are summarized but not tested for external validity in the AI context. Recent literature on compute governance, state capacity in technology races, and AI alignment economics is largely missing, so the intellectual foundation rests on a limited set of classic public-choice sources.
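To illustrate the kind of back-of-the-envelope fiscal estimate the reviewer has in mind, the sketch below scales the commonly cited peak GDP shares of the Manhattan and Apollo programs to a rough present-day GDP figure. The GDP value and the percentage shares are illustrative assumptions, not numbers taken from the paper.

# Illustrative back-of-the-envelope estimate: what would a Manhattan- or
# Apollo-scale public AI program cost at today's economic scale?
# All figures are rough, commonly cited approximations, not data from the paper.

US_GDP_USD = 29e12  # assumed current US GDP, roughly 29 trillion USD

# Approximate peak annual spending as a share of GDP (illustrative)
PROGRAM_GDP_SHARES = {
    "Manhattan Project (peak year)": 0.004,  # ~0.4% of GDP
    "Apollo Program (peak year)": 0.007,     # ~0.7% of GDP
}

for program, share in PROGRAM_GDP_SHARES.items():
    annual_cost = share * US_GDP_USD
    print(f"{program}: ~{share:.1%} of GDP -> ~${annual_cost / 1e9:.0f}B per year today")

Even this crude scaling would let the paper compare a nationalized program's likely fiscal footprint against current private frontier-lab spending, which is the comparison the review finds missing.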

AI safety relevance is present but indirect. The paper stresses that misaligned incentives under nationalization could hinder diffusion and perhaps heighten safety risks, yet it does not trace how a public monopoly would affect catastrophic misuse probabilities, alignment R&D funding, or global compute races. A more explicit mapping from ownership structure to safety outcomes would strengthen the impact.

Technical quality and documentation are modest. The essay is properly referenced and the parsed PDF contains tables and a Bloomberg chart, but no data, code, or appendices accompany the narrative, making replication or further analysis impossible. The policy recommendations are sensible yet untested and rely on qualitative reasoning alone.

Luke Drago

I'm grading this more as a policy paper than a quantitative economics paper.

I enjoyed reading this! This was well-situated in the nationalization debates. You've identified the core existing literature (mostly micro-sites, but that's the AI field for you). Good historical overview of nationalization case studies. I hadn't considered the USG recruiting tools at all. Your points on nationalization producing underperforming firms were also fascinating. These feel like novel arguments to me in the AI nationalization context. I thought your recommendations section was reasonably thorough. One place to expand could be why you prioritize catastrophic risk mitigation. I expect this is because you expect those risks to justify nationalization, but spelling that out would have been helpful.

Overall, well done!

Duncan McClements

The paper engages well with dynamics established during previous nationalisation attempts. However, it would benefit greatly from more cleanly specifying the gradient of possible nationalisation options and how the trade-offs between them differ (which, given some greater degree of government involvement on the security side, would be especially helpful for informing policymakers about how far to go). Additionally, while the paper highlights some trade-offs (loss of diffusion, talent flight, bureaucratic drag), it doesn't specifically quantify most of the benefits and costs, which would be vital for informing policymakers about the trade-offs of possible approaches and for designing mitigations to reduce their impact. Many industries have previously been nationalised (especially post-WW2 in Western Europe), so there is plenty of scope for work - and much pre-existing work - on the effect of this on innovation.

Cite this work

@misc{
  title={US-1: Full AI Nationalization can cause Misaligned Economic Incentives},
  author={Nikolay Radev},
  date={4/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at github.com/ilijalichkovski/apart-physics.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.
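For context on the modeling approach described above, here is a minimal sketch of a classical SEIR compartment model for compromised AI agents, solved with an off-the-shelf ODE integrator rather than the physics-informed neural networks used in the paper. The population size and rate parameters are illustrative placeholders, not the authors' estimates.

# Minimal SEIR sketch for adversarial behavior propagating through an agent
# population, solved with scipy's ODE integrator (not the paper's PINN approach).
# All parameters are illustrative placeholders, not the paper's fitted values.
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N              # susceptible agents exposed to adversarial behavior
    dE = beta * S * I / N - sigma * E   # exposed agents become actively compromised
    dI = sigma * E - gamma * I          # compromised agents are detected and patched
    dR = gamma * I                      # patched (recovered) agents
    return [dS, dE, dI, dR]

N = 10_000                              # hypothetical agent population in one sector
y0 = [N - 10, 10, 0, 0]                 # start with 10 exposed agents
params = (0.4, 1 / 2, 1 / 7, N)         # beta, sigma (1/incubation), gamma (1/patch delay)

sol = solve_ivp(seir, (0, 120), y0, args=params, t_eval=np.linspace(0, 120, 241))
print(f"Peak simultaneously compromised agents: {sol.y[2].max():.0f}")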

Read More

Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.