Apr 28, 2025

Evaluating the risk of job displacement by transformative AI automation in developing countries: A case study on Brazil

Blessing Ajimoti, Vitor Tomaz, Hoda Maged, Mahi Shah, Abubakar Abdulfatah

Details
In this paper, we introduce an empirical and reproducible approach to monitoring job displacement by transformative AI (TAI). We first classify occupations based on current prompting behavior using a novel dataset from Anthropic that links 4 million Claude Sonnet 3.7 prompts to tasks in the O*NET occupation taxonomy. We then develop a seasonally adjusted autoregressive model based on employment flow data from Brazil (CAGED) between 2021 and 2024 to analyze whether diverging prompting behavior is reflected in employment trends per occupation. We find no statistically significant difference in net job dynamics between the occupations whose tasks appear most frequently in prompts and those whose tasks appear least frequently, indicating that current AI technology has not initiated job displacement in Brazil.
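A minimal sketch of the classification step described above, assuming task-level prompt counts and a task-to-occupation crosswalk held in pandas DataFrames; the column names (onet_task_id, occupation_code, prompt_count) are illustrative placeholders, not the authors' schema.

```python
# Aggregate task-level prompt counts to occupations and split occupations into
# quantile-based exposure buckets by prompt frequency. Names are illustrative.
import pandas as pd

def exposure_groups(task_prompts: pd.DataFrame, task_to_occ: pd.DataFrame, n_groups: int = 4):
    """task_prompts: one row per O*NET task with a prompt_count column.
    task_to_occ: crosswalk from O*NET task IDs to occupation codes."""
    merged = task_prompts.merge(task_to_occ, on="onet_task_id", how="inner")
    occ_counts = merged.groupby("occupation_code")["prompt_count"].sum()
    # Rank first so ties do not break the quantile cut; the top bucket holds the
    # most prompt-exposed occupations, the bottom bucket the least exposed.
    buckets = pd.qcut(occ_counts.rank(method="first"), q=n_groups, labels=False)
    return occ_counts, buckets
```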

Cite this work:

@misc{ajimoti2025jobdisplacementbrazil,
  title={Evaluating the risk of job displacement by transformative AI automation in developing countries: A case study on Brazil},
  author={Blessing Ajimoti and Vitor Tomaz and Hoda Maged and Mahi Shah and Abubakar Abdulfatah},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Reviewer's Comments

Joseph Levine

1. Innovation & Literature Foundation

1. Score: 5

2. Engaging with exactly the right literature.

3. Also novel stuff! I bet that there's going to be a top-5 publication with this same type of analysis in the next year. Really good to get this out so fast.

4. I would look a bit deeper into the labor literature. The most relevant is this (brand new) paper: https://bfi.uchicago.edu/wp-content/uploads/2025/04/BFI_WP_2025-56-1.pdf

5. But there's other good work from the last couple of years. For developing-country context, see Otis et al. 2024.

2. Practical Impact on AI Risk Reduction

1. Score: 4

2. Economic data are suggesting a slow takeoff. This is an important consideration for AI safety, and under-discussed.

3. This work has nothing to say about capabilities (nor does it try to!). The economic response to novel capabilities is just as interesting.

4. A logical next step for this project: *why* is there low adoption in the T10 occupations? Why is there no displacement? Should we be reassured? You posit four hypotheses. What data would you collect (or experiments would you run) to measure the relative importance?

5. Policy recommendations are a bit premature/overconfident without a better understanding of the dynamics.

3. Methodological Rigor & Scientific Quality

1. Score: 5

2. Strong understanding of the data used. Well-explained. Your crosswalk wouldn't pass in an academic paper, but great for a sprint like this.

3. No code in the GitHub repository other than the SQL file; please provide the crosswalks and the prompts as well.

4. Good econometrics.

1. You could justify your aggregation and scaling choices better. Your interpretation using ADF tests feels muddled.

2. Failing to reject stationarity in residuals for T10/T10aut doesn't *strongly* support the "no divergence" claim, especially given the initial series were flows. It might just mean the STL + linear trend removed most structure, leaving noise best fit by an AR model (see the sketch after this list).

3. Also, the mean-scaling of net jobs needs more justification – why not scale by initial employment or use growth rates? Feels a bit arbitrary.

4. These are all nitpicks! Great stuff.
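A sketch of the stationarity check being discussed, assuming a monthly net-job series per exposure group; the series and variable names are placeholders, not the authors' code.

```python
# Deseasonalize with STL, remove a linear trend, then run an Augmented
# Dickey-Fuller test on the remainder. Names are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.stattools import adfuller

def residual_adf(net_jobs: pd.Series):
    """Return the ADF statistic and p-value for the detrended, deseasonalized series."""
    stl = STL(net_jobs, period=12, robust=True).fit()
    deseasonalized = net_jobs - stl.seasonal
    # Fit and subtract a linear trend, then test the remainder for a unit root.
    t = np.arange(len(deseasonalized))
    slope, intercept = np.polyfit(t, deseasonalized.values, 1)
    residuals = deseasonalized.values - (slope * t + intercept)
    adf_stat, p_value, *_ = adfuller(residuals)
    return adf_stat, p_value

# Note: rejecting the unit-root null says the remainder is stationary; it does not,
# by itself, establish that high- and low-exposure series have not diverged.
```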

Joel Christoph

The paper presents an inventive empirical pipeline that matches three very different datasets: four million Claude Sonnet 3.7 prompts mapped to O*NET tasks, a crosswalk from O*NET to Brazil's CBO occupation codes, and monthly employment flows from the CAGED register from 2021 to mid-2024. By grouping occupations into four exposure buckets and running seasonal and trend adjustments followed by simple autoregressive tests, the authors find no statistically significant divergence in net job creation between high and low prompt-exposed occupations. Releasing the code and provisional crosswalk on GitHub is commendable, and the discussion section openly lists the main data and classification shortcomings. The study is a useful proof of concept for real-time labour-market monitoring in developing economies.

Innovation and literature depth are moderate. Linking real LLM usage to national employment data is a novel empirical step, but the conceptual framing relies mainly on Acemoglu and Restrepo’s task model and a single recent Anthropic paper. The review omits earlier occupation-level exposure measures and does not engage Brazilian labour-market studies, limiting its foundation.

The AI safety contribution is indirect. Monitoring displacement can inform distributional policy, yet the paper does not connect its findings to systemic safety issues such as social instability, race dynamics, or governance incentives that affect catastrophic risk. Adding a pathway from timely displacement signals to alignment or compute governance decisions would improve relevance.

Technical execution is mixed. Strengths include careful seasonality removal and candid presentation of ADF statistics. Weaknesses include heavy dependence on one week of prompt data, unverified LLM-generated crosswalks, absence of robustness checks, and small simulation sample size (five runs per scenario). Parameter choices for the AR models and lag selection are not justified, and no confidence bands are shown on the plots on pages 6 and 7. Without formal hypothesis tests comparing the four series, the “no difference” conclusion is tentative.
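On the lag-selection point, a minimal sketch of choosing the AR order by information criterion and reporting coefficient confidence intervals, assuming a seasonally and trend-adjusted monthly series; names are placeholders, not the paper's code.

```python
# Select the AR order with an information criterion rather than fixing it by hand,
# then report confidence intervals for the fitted coefficients.
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

def fit_ar_with_selected_lags(remainder: pd.Series, max_lag: int = 12):
    """remainder: seasonally and trend-adjusted monthly series for one exposure group."""
    selection = ar_select_order(remainder, maxlag=max_lag, ic="aic")
    chosen_lags = selection.ar_lags or [1]  # fall back to AR(1) if no lag is selected
    fitted = AutoReg(remainder, lags=chosen_lags).fit()
    return chosen_lags, fitted.conf_int()  # 95% confidence intervals by default
```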

Suggestions for improvement

1. Expand the Anthropic dataset to multiple models and longer time windows, then rerun the analysis with rolling windows and placebo occupations.

2. Replace the LLM crosswalk with expert-validated mappings and report a sensitivity study to mapping uncertainty.

3. Use difference-in-differences or panel regressions with occupation fixed effects to test for differential shocks rather than relying on visual inspection and ADF tests; a sketch of such a specification follows this list.

4. Integrate policy scenarios that link early displacement signals to safety-relevant interventions such as workforce transition funds financed by windfall clauses.

5. Broaden the literature review to include empirical UBI pilots, Brazilian automation studies, and recent AI safety economics papers.
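A sketch of the difference-in-differences style specification proposed in point 3, assuming a long-format occupation-month panel; the column names (occupation_code, month, net_jobs, high_exposure, post) are illustrative assumptions, not the authors' data.

```python
# Regress monthly net job creation on an exposure indicator interacted with a
# post-adoption dummy, with occupation and month fixed effects. The coefficient
# on high_exposure:post is the difference-in-differences estimate.
import pandas as pd
import statsmodels.formula.api as smf

def did_regression(panel: pd.DataFrame):
    """panel: one row per occupation-month; high_exposure and post are 0/1 columns.
    Assumes no missing rows so the cluster groups align with the estimation sample."""
    model = smf.ols(
        "net_jobs ~ high_exposure:post + C(occupation_code) + C(month)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["occupation_code"]})
    return model
```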


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.