Apr 28, 2025

The Rate of AI Adoption and Its Implications for Economic Growth and Disparities.

Catarina Badi

This project examines the economic impacts of AI adoption, focusing on its potential to increase productivity while also widening income inequality and regional disparities. It explores the factors influencing adoption rates across industries and concludes with policy recommendations aimed at mitigating these disparities through targeted AI adoption incentives and workforce upskilling programs.

Reviewer's Comments

The submission combines the Solow growth model with Rogers diffusion theory to argue that varying rates of AI adoption drive both productivity gains and widening disparities. The conceptual extension is clearly written and the notation is consistent. Figures tracing hypothetical Gini paths help communicate the qualitative message. Yet the paper remains entirely theoretical: every parameter in the production and diffusion functions is chosen for illustration, and no industry data or adoption surveys are used for calibration, so the results restate known intuition rather than deliver new quantitative insight. The reference list is short and overlooks recent empirical work on AI diffusion, intangible capital spillovers, and regional inequality, which limits the intellectual grounding of the argument.

AI safety appears only through a brief claim that inequality might threaten stability. The manuscript does not articulate concrete links between adoption-driven disparities and safety mechanisms such as compute governance, alignment funding, or risk management incentives. Without explicit pathways, the contribution to the safety agenda is minimal.

The lack of data, code, or sensitivity analysis hinders technical quality. Several symbols in the equations lack units, and the logistic adoption curve is not illustrated with baseline values or comparative statics. The Google Drive link in the appendix is not integrated, so reproducibility is impossible. The authors themselves acknowledge these limitations and call for empirical follow-up.

The paper sets out to explain how differing rates of artificial intelligence adoption will shape productivity, inequality, and regional gaps. It extends the Solow growth framework with an automation variable and overlays Rogers diffusion theory, then sketches a logistic adoption process and an aggregate regional production function. The conceptual synthesis is logically organised and the notation is clear. Figures outlining Gini trajectories are helpful.
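For concreteness, here is a minimal Python sketch of the kind of model the paper describes: a logistic (Rogers-style) adoption share feeding an automation term in a Solow production function. All parameter values are hypothetical and chosen purely for illustration; the submission itself publishes no code.

import numpy as np

# Logistic (Rogers-style) diffusion: adoption share a(t) in [0, cap].
# r (diffusion speed), t0 (inflection time), and cap (saturation) are
# hypothetical values for illustration, not calibrated estimates.
def adoption(t, r=0.4, t0=10.0, cap=0.9):
    return cap / (1.0 + np.exp(-r * (t - t0)))

# Solow-style output with an automation-augmented technology term:
# Y(t) = A * (1 + g_ai * a(t)) * K^alpha * L^(1 - alpha)
def output(t, K=100.0, L=50.0, A=1.0, alpha=0.33, g_ai=0.5):
    return A * (1.0 + g_ai * adoption(t)) * K**alpha * L**(1.0 - alpha)

t = np.arange(0, 21)
print(np.round(output(t), 1))  # output path as adoption diffuses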

The contribution is limited by the absence of empirical grounding. All parameters in the models are hypothetical, and no real adoption or productivity data are used for calibration. As a result, the results section repeats the intuition that faster adoption raises output and widens gaps, but offers no quantification beyond stylised claims. The reference list omits key recent papers that estimate industry-level adoption curves, intangible capital spillovers, or distributional effects of large language models. The mechanisms that link AI adoption to regional inequality are described qualitatively and remain untested.

AI safety relevance is only implicit. The paper notes that widening disparities could undermine social stability, but does not connect its framework to concrete safety levers such as compute governance, labour transition funds for alignment workers, or incentives for safer model deployment. Without tracing pathways from distributional outcomes to tail-risk mitigation, the impact on the safety agenda is weak.

Technical quality is hindered by the lack of data, code, or sensitivity tests. Several symbols in the equations are introduced without units. The logistic adoption curve is presented, but no baseline parameterisation or comparative statics are shown. The Google Drive link mentioned in the appendix is not integrated into the submission, so reproducibility is not possible.
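To illustrate what the reviewers are asking for, a comparative static on the logistic curve takes only a few lines: vary the diffusion-speed parameter and report the resulting adoption paths. The speeds below are invented for illustration, not taken from the paper.

import numpy as np

def adoption(t, r, t0=10.0, cap=0.9):
    # Logistic adoption share at time t for diffusion speed r (hypothetical form)
    return cap / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 21)
for r in (0.2, 0.4, 0.8):  # slow, baseline, and fast diffusion scenarios
    path = adoption(t, r)
    print(f"r={r}: adoption at t=5 is {path[5]:.2f}, at t=15 is {path[15]:.2f}")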

Future work would benefit from collecting cross-industry panel data on AI investment, estimating adoption rates, and embedding those estimates into the model. A calibrated simulation with Monte Carlo uncertainty would allow credible policy experiments. Engaging with current empirical studies and mapping specific safety channels, such as funding for red teaming, would strengthen both the foundations and the relevance of the work.
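As a sketch of the suggested direction, a Monte Carlo layer over the diffusion parameters might look like the following; the prior distributions here are invented placeholders that a real exercise would replace with estimates from adoption data.

import numpy as np

rng = np.random.default_rng(0)

def adoption(t, r, cap, t0=10.0):
    return cap / (1.0 + np.exp(-r * (t - t0)))

# Draw uncertain diffusion parameters from assumed (placeholder) priors
n = 10_000
r = rng.normal(0.4, 0.1, n).clip(0.05)   # diffusion speed
cap = rng.uniform(0.6, 1.0, n)           # saturation level

# Adoption gap at t=15 between a fast region and one diffusing half as fast,
# a crude proxy for the regional-disparity channel in the paper
gap = adoption(15, r, cap) - adoption(15, 0.5 * r, cap)
print(f"median gap: {np.median(gap):.2f}, "
      f"90% interval: [{np.quantile(gap, 0.05):.2f}, {np.quantile(gap, 0.95):.2f}]")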

I appreciate the effort, especially under the short time constraints. Extending the Solow model to this topic was clever. I would have liked to see more engagement with the existing literature (e.g. Acemoglu has lots of relevant work here) and a much more thorough policy and/or recommendations section. I would also have liked to see you spell out in greater detail where you expect adoption to be faster and slower, and what downstream impacts this has that could be relevant to economists or policymakers. Additionally, as you acknowledge, the lack of empirical data made this challenging to evaluate.

Very interesting idea to combine traditional economic growth and innovation diffusion models to study the impact AI will have on growth and disparities. The report would have been more compelling if the results of the theoretical model had not been described in text only, but also illustrated with some simulations or, even better, contrasted with some data (even if only proxies).

The rates of AI adoption across firms, industries, countries, and so on determine how the impact of AI plays out. It is therefore important to identify the determinants of AI adoption in order to understand the aggregate implications of AI and the heterogeneity across different segments of the economy. This is where this paper comes in: it attempts to combine a growth model with theories of technology adoption.

The idea could be advanced further by highlighting more sharply the role of the factors influencing adoption within the context of economic growth theory. Which factors will be of first-order importance in determining the adoption pattern of AI? Which aggregate dynamics will they matter for? The authors could start by thinking about these questions to narrow down their contribution.

Cite this work

@misc{badi2025rateofaiadoption,
  title={The Rate of AI Adoption and Its Implications for Economic Growth and Disparities},
  author={Catarina Badi},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.

Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.

Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and about one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings: some models complied with manipulative requests at much higher rates, and some readily followed operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.