The Rate of AI Adoption and Its Implications for Economic Growth and Disparities.

Catarina Badi

This project examines the economic impacts of AI adoption, focusing on its potential to increase productivity while also widening income inequality and regional disparities. It explores the factors influencing adoption rates across industries and concludes with policy recommendations aimed at mitigating these disparities through targeted AI adoption incentives and workforce upskilling programs.

Reviewer's Comments


Joel Christoph

The submission combines the Solow growth model with Rogers diffusion theory to argue that varying rates of AI adoption drive both productivity gains and widening disparities. The conceptual extension is clearly written and the notation is consistent. Figures tracing hypothetical Gini paths help communicate the qualitative message. Yet the paper remains entirely theoretical. Every parameter in the production and diffusion functions is chosen for illustration and no industry data or adoption surveys are used for calibration, so the results restate known intuition rather than deliver new quantitative insight. The reference list is short and overlooks recent empirical work on AI diffusion, intangible capital spillovers, and regional inequality, which limits the intellectual grounding of the argument.

AI safety appears only through a brief claim that inequality might threaten stability. The manuscript does not articulate concrete links between adoption-driven disparities and safety mechanisms such as compute governance, alignment funding, or risk management incentives. Without explicit pathways, the contribution to the safety agenda is minimal.

The lack of data, code, or sensitivity analysis hinders technical quality. Several symbols in the equations lack units, and the logistic adoption curve is not illustrated with baseline values or comparative statics. The Google Drive link in the appendix is not integrated, so reproducibility is impossible. The authors themselves acknowledge these limitations and call for empirical follow-up.

Joel Christoph

The paper sets out to explain how differing rates of artificial intelligence adoption will shape productivity, inequality, and regional gaps. It extends the Solow growth framework with an automation variable and overlays Rogers diffusion theory, then sketches a logistic adoption process and an aggregate regional production function. The conceptual synthesis is logically organised and the notation is clear. Figures outlining Gini trajectories are helpful.

The contribution is limited by the absence of empirical grounding. All parameters in the models are hypothetical, and no real adoption or productivity data are used for calibration. As a result, the results section repeats the intuition that faster adoption raises output and widens gaps, but offers no quantification beyond stylised claims. The reference list omits key recent papers that estimate industry-level adoption curves, intangible capital spillovers, or distributional effects of large language models. The mechanisms that link AI adoption to regional inequality are described qualitatively and remain untested.

AI safety relevance is only implicit. The paper notes that widening disparities could undermine social stability but does not connect its framework to concrete safety levers such as compute governance, labour transition funds for alignment workers, or incentives for safer model deployment. Without tracing pathways from distributional outcomes to tail-risk mitigation, the impact on the safety agenda is weak.

Technical quality is hindered by the lack of data, code, or sensitivity tests. Several symbols in the equations are introduced without units. The logistic adoption curve is presented but no baseline or comparative static is shown. The Google Drive link mentioned in the appendix is not integrated into the submission so reproducibility is not possible.
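A minimal sketch of the kind of baseline the review asks for, with a logistic adoption path feeding a Solow-style production function, could look as follows. All parameter values here are hypothetical illustrations, not values taken from the submission:

```python
import math

def adoption_share(t, r=0.6, t_mid=8.0):
    """Logistic adoption share A(t) in [0, 1].
    r is diffusion speed, t_mid the inflection year (both hypothetical)."""
    return 1.0 / (1.0 + math.exp(-r * (t - t_mid)))

def output(t, K=100.0, L=50.0, alpha=0.33, tfp0=1.0, gamma=0.5):
    """Cobb-Douglas output with TFP augmented by AI adoption:
    Y(t) = TFP0 * (1 + gamma * A(t)) * K^alpha * L^(1 - alpha)."""
    A = adoption_share(t)
    return tfp0 * (1.0 + gamma * A) * K**alpha * L**(1.0 - alpha)

# One comparative static: faster diffusion (higher r) widens the gap
# between a fast-adopting and a slow-adopting region during the transition.
for t in (0, 5, 10, 15):
    gap = adoption_share(t, r=1.0) - adoption_share(t, r=0.3)
    print(t, round(gap, 3))
```

Even this toy version makes the qualitative claim checkable: the adoption gap, and hence the output gap, opens and then closes as both regions approach full diffusion.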

Future work would benefit from collecting cross-industry panel data on AI investment, estimating adoption rates, and embedding those estimates into the model. A calibrated simulation with Monte Carlo uncertainty would allow credible policy experiments. Engaging with current empirical studies and mapping specific safety channels, such as funding for red teaming, would strengthen both the foundation and the relevance of the work.

Luke Drago

I appreciate the effort, especially under short time constraints. Extending the Solow model to this topic was clever. I would have liked to see more engagement with the existing literature (e.g., Acemoglu has lots of relevant work here) and a much more thorough policy and/or recommendations section. I would also have liked to see you spell out in greater detail where you expect adoption to be faster and slower, and what downstream impacts this has that could be relevant to economists or policymakers. Additionally, as you acknowledge, the lack of empirical data made this challenging to evaluate.

Fabian Braesemann

Very interesting idea to combine traditional economic growth and innovation diffusion models to study the impact AI will have on growth and disparities. The report would have been more compelling if the results of the theoretical model had not been described in text only, but also illustrated with some simulations or, even better, contrasted with some data (even if those were only proxies).

Donghyun Suh

The rates of AI adoption across firms, industries, countries, etc. determine how the impact of AI plays out. It is therefore important to identify the determinants of AI adoption in order to understand the aggregate implications of AI and the heterogeneity across different segments of the economy. This is where this paper comes in: it attempts to combine a growth model with theories of technology adoption.

The idea could be advanced further by sharpening the role of the factors influencing adoption within the context of economic growth theory. Which factors will have first-order importance in determining the adoption pattern of AI? Which aggregate dynamics will they matter for? The authors could start by thinking about these questions to narrow down their contribution.

Cite this work

@misc{
  title={The Rate of AI Adoption and Its Implications for Economic Growth and Disparities.},
  author={Catarina Badi},
  date={4/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.

Read More

Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks.

In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents.

By solving systems of ODEs describing the systems with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions.

We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system.

This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox on AI safety.

We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.
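The SEIR adaptation described above can be sketched with a simple Euler integration over an agent population: S susceptible, E exposed (compromised but dormant), I actively adversarial, R patched. The rates below are illustrative placeholders, not the parameters estimated in the paper:

```python
def seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, dt=0.1):
    """One Euler step of the SEIR dynamics for an AI agent population.
    beta: contact/compromise rate, sigma: activation rate of dormant
    compromises, gamma: patching rate. All values are hypothetical."""
    n = s + e + i + r
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(steps=2000, s=990.0, e=0.0, i=10.0, r=0.0):
    """Integrate forward and track the peak of active adversarial agents."""
    peak_i = i
    for _ in range(steps):
        s, e, i, r = seir_step(s, e, i, r)
        peak_i = max(peak_i, i)
    return s, e, i, r, peak_i
```

With these placeholder rates the basic reproduction number beta/gamma is 3, so a small seeded compromise propagates through most of the population; lowering beta (hardening) or raising gamma (faster patching) is the kind of intervention lever the framework analyzes.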

Read More

Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.