The Rate of AI Adoption and Its Implications for Economic Growth and Disparities

Catarina Badi

This project examines the economic impacts of AI adoption, focusing on its potential to increase productivity while also widening income inequality and regional disparities. It explores the factors influencing adoption rates across industries and concludes with policy recommendations aimed at mitigating these disparities through targeted AI adoption incentives and workforce upskilling programs.

Reviewer's Comments

Joel Christoph

The submission combines the Solow growth model with Rogers diffusion theory to argue that varying rates of AI adoption drive both productivity gains and widening disparities. The conceptual extension is clearly written and the notation is consistent. Figures tracing hypothetical Gini paths help communicate the qualitative message. Yet the paper remains entirely theoretical. Every parameter in the production and diffusion functions is chosen for illustration and no industry data or adoption surveys are used for calibration, so the results restate known intuition rather than deliver new quantitative insight. The reference list is short and overlooks recent empirical work on AI diffusion, intangible capital spillovers, and regional inequality, which limits the intellectual grounding of the argument.
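To make the setup concrete, here is a minimal sketch of the kind of model the review describes: a logistic adoption share entering a Solow-style Cobb-Douglas production function as labour-augmenting technology. Every functional form and parameter value below is an illustrative assumption; none are taken from the submission.

```python
# Minimal sketch: logistic AI-adoption share feeding a Solow-style
# production function. All parameter values are illustrative
# assumptions, not the submission's calibration.
import numpy as np

def adoption(t, a_max=0.9, k=0.5, t0=10.0):
    """Logistic diffusion: share of firms using AI at time t."""
    return a_max / (1.0 + np.exp(-k * (t - t0)))

def output(t, adopt_params=None, alpha=0.33, g_ai=0.8, K=1.0, L=1.0):
    """Cobb-Douglas output Y = K^alpha (A L)^(1-alpha), where the
    adoption share scales labour-augmenting technology:
    A(t) = 1 + g_ai * a(t)."""
    a = adoption(t, **(adopt_params or {}))
    A = 1.0 + g_ai * a
    return K**alpha * (A * L) ** (1 - alpha)

t = np.linspace(0, 30, 301)
fast = output(t, {"k": 0.8, "t0": 8.0})   # early, steep diffusion
slow = output(t, {"k": 0.3, "t0": 15.0})  # late, shallow diffusion
gap = fast / slow - 1.0
print(f"peak output gap: {gap.max():.1%}")
print(f"output gap after 30 periods: {gap[-1]:.1%}")
```

Under these placeholder values the gap between fast and slow adopters peaks mid-transition and narrows as the laggard's adoption saturates; whether the submission's Gini paths behave the same way depends on its own functional forms.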

AI safety appears only through a brief claim that inequality might threaten stability. The manuscript does not articulate concrete links between adoption-driven disparities and safety mechanisms such as compute governance, funding for alignment work, or risk-management incentives. Without explicit pathways, the contribution to the safety agenda is minimal.

The lack of data, code, or sensitivity analysis hinders technical quality. Several symbols in the equations lack units, and the logistic adoption curve is not illustrated with baseline values or comparative statics. The Google Drive link in the appendix is not integrated, so reproducibility is impossible. The authors themselves acknowledge these limitations and call for empirical follow-up.

Joel Christoph

The paper sets out to explain how differing rates of artificial intelligence adoption will shape productivity, inequality, and regional gaps. It extends the Solow growth framework with an automation variable and overlays Rogers diffusion theory, then sketches a logistic adoption process and an aggregate regional production function. The conceptual synthesis is logically organised and the notation is clear. Figures outlining Gini trajectories are helpful.

The contribution is limited by the absence of empirical grounding. All parameters in the models are hypothetical, and no real adoption or productivity data are used for calibration. As a result, the results section repeats the intuition that faster adoption raises output and widens gaps but offers no quantification beyond stylised claims. The reference list omits key recent papers that estimate industry-level adoption curves, intangible capital spillovers, or distributional effects of large language models. The mechanisms that link AI adoption to regional inequality are described qualitatively and remain untested.

AI safety relevance is only implicit. The paper notes that widening disparities could undermine social stability but does not connect its framework to concrete safety levers such as compute governance, labour transition funds for alignment workers, or incentives for safer model deployment. Without tracing pathways from distributional outcomes to tail-risk mitigation, the impact on the safety agenda is weak.

Technical quality is hindered by the lack of data, code, or sensitivity tests. Several symbols in the equations are introduced without units. The logistic adoption curve is presented, but no baseline values or comparative statics are shown. The Google Drive link mentioned in the appendix is not integrated into the submission, so reproducibility is not possible.
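As a sketch of what the missing baseline and comparative statics might look like, one could vary the logistic steepness parameter k around an assumed baseline and report the adoption share reached at a fixed horizon. All numbers below are placeholders, not values from the submission.

```python
# Illustrative comparative statics on a logistic adoption curve,
# a(t) = a_max / (1 + exp(-k (t - t0))). Baseline values are assumed.
import numpy as np

def adoption(t, a_max=0.9, k=0.5, t0=10.0):
    return a_max / (1.0 + np.exp(-k * (t - t0)))

horizon = 20.0
for k in (0.25, 0.5, 1.0):  # slow / assumed baseline / fast diffusion
    print(f"k = {k:.2f}: adoption share at t = {horizon:.0f} "
          f"is {adoption(horizon, k=k):.1%}")
```

Even this two-parameter exercise would let readers see how sensitive the headline claims are to the assumed diffusion speed.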

Future work would benefit from collecting cross-industry panel data on AI investment, estimating adoption rates, and embedding those estimates into the model. A calibrated simulation with Monte Carlo uncertainty would allow credible policy experiments. Engaging with current empirical studies and mapping specific safety channels, such as funding for red teaming, would strengthen both the paper's foundation and its relevance.
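A hedged sketch of that suggested Monte Carlo exercise, with placeholder priors standing in for the estimates the review asks for:

```python
# Monte Carlo over assumed parameter priors (placeholders, not
# estimates): summarise the uncertainty in the fast-vs-slow output
# gap at a fixed horizon, using the same logistic-plus-Cobb-Douglas
# setup as above with K = L = 1.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
k_fast = rng.lognormal(mean=np.log(0.8), sigma=0.2, size=n)  # steep diffusion
k_slow = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=n)  # shallow diffusion
g_ai = rng.uniform(0.5, 1.1, size=n)  # productivity boost at full adoption
alpha, t, a_max, t0 = 0.33, 20.0, 0.9, 10.0

def y(k, g):
    """Output with K = L = 1: Y = (1 + g * a(t))^(1 - alpha)."""
    a = a_max / (1.0 + np.exp(-k * (t - t0)))
    return (1.0 + g * a) ** (1 - alpha)

gap = y(k_fast, g_ai) / y(k_slow, g_ai) - 1.0
lo, med, hi = np.percentile(gap, [5, 50, 95])
print(f"output gap at t = {t:.0f}: median {med:.1%}, "
      f"90% interval [{lo:.1%}, {hi:.1%}]")
```

Replacing the placeholder priors with distributions estimated from adoption surveys would turn this from an illustration into the credible policy experiment the review has in mind.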

Luke Drago

I appreciate the effort, especially under short time constraints. Extending the Solow model to this topic was clever. I would have liked to see more engagement with the existing literature (e.g., Acemoglu has lots of relevant work here) and a much more thorough policy and/or recommendations section. I would also have liked to see you spell out in greater detail where you expect adoption to be faster and slower, and what downstream impacts this has that could be relevant to economists or policymakers. Additionally, as you acknowledge, the lack of empirical data made this challenging to evaluate.

Fabian Braesemann

Very interesting idea to combine traditional economic growth and innovation diffusion models to study the impact AI will have on growth and disparities. The report would have been more compelling if the results of the theoretical model had not been described in text only but also illustrated with simulations or, even better, contrasted with data (even if only proxy data were available).

Donghyun Suh

The rates of AI adoption across firms, industries, and countries determine how the impact of AI plays out. It is therefore important to identify the determinants of AI adoption in order to understand both the aggregate implications of AI and the heterogeneity across different segments of the economy. This is where this paper comes in: it attempts to combine a growth model with theories of technology adoption.

The idea could be advanced further by highlighting more sharply, within the context of economic growth theory, the role of the factors that influence adoption. Which factors are of first-order importance in determining the adoption pattern of AI? Which aggregate dynamics do they matter for? The authors could start by thinking about these questions to narrow down their contributions.

Cite this work

@misc{badi2025aiadoption,
  title={The Rate of AI Adoption and Its Implications for Economic Growth and Disparities},
  author={Catarina Badi},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

May 20, 2025

EscalAtion: Assessing Multi-Agent Risks in Military Contexts

Our project investigates the potential risks and implications of integrating multiple autonomous AI agents within national defense strategies, exploring whether these agents tend to escalate or de-escalate conflict situations. Through a simulation that models real-world international relations scenarios, our preliminary results indicate that AI models exhibit a tendency to escalate conflicts, posing a significant threat to maintaining peace and preventing uncontrollable military confrontations. The experiment and subsequent evaluations are designed to reflect established international relations theories and frameworks, aiming to understand the implications of autonomous decision-making in military contexts comprehensively and without bias.

Apr 28, 2025

The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical, yet underexplored, factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the 'effective time' (a proxy for temporal coherence) needed for humans to complete remote O*NET tasks, the study reveals a non-linear link between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80-84% of the analyzed remote tasks.

Mar 31, 2025

Model Models: Simulating a Trusted Monitor

We offer initial investigations into whether the untrusted model can 'simulate' the trusted monitor: is U able to successfully guess what suspicion score T will assign in the APPS setting? We also offer a clean, modular codebase which we hope can be used to streamline future research into this question.

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.