Evaluating the risk of job displacement by transformative AI automation in developing countries: A case study on Brazil

Blessing Ajimoti, Vitor Tomaz, Hoda Maged, Mahi Shah, Abubakar Abdulfatah

In this paper, we introduce an empirical and reproducible approach to monitoring job displacement by TAI. We first classify occupations based on current prompting behavior from a novel dataset from Anthropic, linking 4 million Claude Sonnet 3.7 prompts to tasks from the O*NET occupation taxonomy. We then develop a seasonally-adjusted autoregressive model based on employment flow data from Brazil (CAGED) between 2021 and 2024 to analyze the effects of diverging prompting behavior on employment trends per occupation. We conclude that there is no statistically significant difference in net-job dynamics between the occupations whose tasks feature the highest frequency in prompts and the ones with the lowest frequency, indicating that current AI technology has not initiated job displacement in Brazil.
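The pipeline described above (seasonal adjustment, detrending, then an autoregressive fit on net-job flows) can be sketched on synthetic data. Everything below, from the series to the lag choice, is illustrative only and is not the authors' actual code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 42  # monthly observations, 2021 through mid-2024
t = np.arange(n)
# synthetic net hires: linear trend + annual seasonality + noise
net_jobs = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, n)

# 1) seasonal adjustment: subtract each calendar month's mean
month = t % 12
seasonal = np.array([net_jobs[month == m].mean() for m in range(12)])
adjusted = net_jobs - seasonal[month]

# 2) detrend with a linear fit
coef = np.polyfit(t, adjusted, 1)
resid = adjusted - np.polyval(coef, t)

# 3) fit an AR(1) to what remains, by least squares
X = np.column_stack([np.ones(n - 1), resid[:-1]])
phi, *_ = np.linalg.lstsq(X, resid[1:], rcond=None)
print("AR(1) coefficients (const, lag-1):", phi)
```

The paper uses STL rather than monthly-mean subtraction, but the logic is the same: the AR model is fit to whatever structure survives the seasonal and trend removal.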

Reviewer's Comments


Joseph Levine

1. Innovation & Literature Foundation

1. Score: 5

2. Engaging with exactly the right literature.

3. Also novel stuff! I bet that there's going to be a top-5 publication with this same type of analysis in the next year. Really good to get this out so fast.

4. I would look a bit deeper into the labor literature. The most relevant is this (brand new) paper: https://bfi.uchicago.edu/wp-content/uploads/2025/04/BFI_WP_2025-56-1.pdf

5. But there's other good stuff from the last couple of years. For developing country context, see Otis et al 2024.

2. Practical Impact on AI Risk Reduction

1. Score: 4

2. Economic data are suggesting a slow takeoff. This is an important consideration for AI safety, and under-discussed.

3. This work has nothing to say about capabilities (nor does it try to!). The economic response to novel capabilities is just as interesting.

4. A logical next step for this project: *why* is there low adoption in the T10 occupations? Why is there no displacement? Should we be reassured? You posit four hypotheses. What data would you collect (or experiments would you run) to measure the relative importance?

5. Policy recommendations are a bit premature/overconfident without a better understanding of the dynamics.

3. Methodological Rigor & Scientific Quality

1. Score: 5

2. Strong understanding of the data used. Well-explained. Your crosswalk wouldn't pass in an academic paper, but great for a sprint like this.

3. No code in the GitHub repo other than the SQL file; please provide the crosswalks and the prompts as well.

4. Good econometrics.

1. You could justify your aggregation and scaling choices better. Your interpretation using ADF tests feels muddled.

2. Failing to reject stationarity in residuals for T10/T10aut doesn't *strongly* support the "no divergence" claim, especially given the initial series were flows. It might just mean the STL + linear trend removed most structure, leaving noise best fit by an AR model.

3. Also, the mean-scaling of net jobs needs more justification – why not scale by initial employment or use growth rates? Feels a bit arbitrary.

4. These are all nitpicks! Great stuff.
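The stationarity caveat above can be illustrated with a toy Dickey-Fuller regression: pure white noise (which is roughly what STL plus a linear trend may leave behind) rejects a unit root just as readily as any meaningful series, so rejection alone does not establish "no divergence". The helper and the data below are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def df_tstat(y):
    """t-statistic from a simple Dickey-Fuller regression:
    delta y_t = alpha + rho * y_{t-1} + e_t (no lag augmentation)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

noise = rng.normal(size=42)            # white noise: stationary by construction
walk = np.cumsum(rng.normal(size=42))  # random walk: has a unit root

print(df_tstat(noise))  # strongly negative -> rejects unit root
print(df_tstat(walk))   # closer to zero -> typically fails to reject
```

A full ADF test adds lagged differences and uses Dickey-Fuller critical values, but the point stands: a very negative statistic on residuals is consistent with "the filtering already removed the structure", not only with "the series never diverged".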

Joel Christoph

The paper presents an inventive empirical pipeline that matches three very different datasets: four million Claude Sonnet 3.7 prompts mapped to O*NET tasks, a crosswalk from O*NET to Brazil's CBO occupation codes, and monthly employment flows from the CAGED register from 2021 to mid-2024. By grouping occupations into four exposure buckets and running seasonal and trend adjustments followed by simple autoregressive tests, the authors find no statistically significant divergence in net job creation between high and low prompt-exposed occupations.

Releasing the code and provisional crosswalk on GitHub is commendable, and the discussion section openly lists the main data and classification shortcomings. The study is a useful proof of concept for real-time labour-market monitoring in developing economies.

Innovation and literature depth are moderate. Linking real LLM usage to national employment data is a novel empirical step, but the conceptual framing relies mainly on Acemoglu and Restrepo’s task model and a single recent Anthropic paper. The review omits earlier occupation-level exposure measures and does not engage Brazilian labour-market studies, limiting its foundation.

The AI safety contribution is indirect. Monitoring displacement can inform distributional policy, yet the paper does not connect its findings to systemic safety issues such as social instability, race dynamics, or governance incentives that affect catastrophic risk. Adding a pathway from timely displacement signals to alignment or compute governance decisions would improve relevance.

Technical execution is mixed. Strengths include careful seasonality removal and candid presentation of ADF statistics. Weaknesses include heavy dependence on one week of prompt data, unverified LLM-generated crosswalks, absence of robustness checks, and small simulation sample size (five runs per scenario). Parameter choices for the AR models and lag selection are not justified, and no confidence bands are shown on the plots on pages 6 and 7. Without formal hypothesis tests comparing the four series, the “no difference” conclusion is tentative.

Suggestions for improvement

1. Expand the Anthropic dataset to multiple models and longer time windows, then rerun the analysis with rolling windows and placebo occupations.

2. Replace the LLM crosswalk with expert-validated mappings and report a sensitivity study to mapping uncertainty.

3. Use difference-in-differences or panel regressions with occupation fixed effects to test for differential shocks rather than relying on visual inspection and ADF tests.

4. Integrate policy scenarios that link early displacement signals to safety-relevant interventions such as workforce transition funds financed by windfall clauses.

5. Broaden the literature review to include empirical UBI pilots, Brazilian automation studies, and recent AI safety economics papers.
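Suggestion 3's difference-in-differences design could look like the following two-way fixed-effects sketch on synthetic occupation-by-month data. All names, magnitudes, and the -2.0 "true" effect are invented for illustration; a real implementation would use the CAGED panel and clustered standard errors.

```python
import numpy as np

rng = np.random.default_rng(2)
n_occ, n_t = 8, 24
occ = np.repeat(np.arange(n_occ), n_t)   # occupation index per row
t = np.tile(np.arange(n_t), n_occ)       # month index per row
treated = (occ < 4).astype(float)        # "high-exposure" occupations
post = (t >= 12).astype(float)           # after a hypothetical AI shock
did = treated * post

# synthetic outcome with a true DiD effect of -2.0
y = occ * 0.5 + t * 0.1 - 2.0 * did + rng.normal(0, 0.5, n_occ * n_t)

# design matrix: occupation dummies + time dummies (drop t=0) + interaction
X = np.column_stack([
    (occ[:, None] == np.arange(n_occ)).astype(float),  # occupation FE
    (t[:, None] == np.arange(1, n_t)).astype(float),   # time FE
    did,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated DiD effect:", beta[-1])  # close to the true -2.0
```

The coefficient on the interaction term is the differential post-shock shift for exposed occupations, which is a sharper test than visual inspection of the four series.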

Luke Drago

Really excellent work, especially given the time constraint. It's well situated in the literature and extends core economic arguments to SOTA AI work. Moreover, I like that the framework is reusable -- you could run it again with new data or if another lab released data on their use cases.

One concern I have with using Claude data is that Claude is not very representative. It's primarily popular with programmers, which is why I expect programming makes up the vast majority of tasks it completes. However, it's the best dataset you have access to, so I can't hold this against you. You correctly flag this as an issue in your limitations. This is another reason why it would be good for OpenAI and others to release similar information.

Within your discussion section, I expect point b is unlikely. There's a difference between opinion polling and salience (i.e. people can say lots of things, but only a few actually influence their behavior). However, I expect resistance could become meaningful in the future. Either way, I think the remaining explanations were compelling.

I would have liked more detail or novelty in your policy recommendations section, though I expect the analysis took up most of your time.

Fabian Braesemann

Creative research question and use of very interesting data (prompts mapped to the O*NET taxonomy). Results are not overwhelming, probably due to the relatively short time frame over which AI could have affected the labour market, but also the short amount of time available to work on the project. More inferential statistics would have been interesting. Still, very solid work!

Cite this work

@misc{
  title={Evaluating the risk of job displacement by transformative AI automation in developing countries: A case study on Brazil},
  author={Blessing Ajimoti and Vitor Tomaz and Hoda Maged and Mahi Shah and Abubakar Abdulfatah},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.