Impact of generative AI on tobacco, investment and tourism industry

jalokim

I examine how the introduction of generative AI (Gen AI) solutions at companies in the tourism, investment and tobacco sectors affects labour productivity, using a difference-in-differences (DiD) design. This falls under the Growth track, which studies how technological innovations drive economic growth. Economic theory (and intuition) suggests that AI-driven automation can boost output per worker. I use publicly available data, such as revenue, employee count and news about Gen AI adoption, to determine whether companies that adopted AI have increased their labour productivity since 2022, when ChatGPT was launched. Labour productivity is defined as revenue per employee.
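
For readers who want the estimator spelled out, the two-period design described above corresponds to a simple interaction regression. The following is a minimal sketch only, assuming a hypothetical firm-year table; the column names (rev_per_employee, adopted_ai, post) and file name are illustrative and not taken from the paper's repository.

# Minimal two-period difference-in-differences sketch (illustrative only).
# Hypothetical long-format panel: one row per firm and year with columns
#   rev_per_employee - labour productivity (revenue divided by employee count)
#   adopted_ai       - 1 if the firm adopted Gen AI after ChatGPT's launch, else 0
#   post             - 1 for the post-2022 observation, 0 for the pre-period
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")  # hypothetical input file

# The coefficient on adopted_ai:post is the DiD estimate of the change in
# revenue per employee attributable to Gen AI adoption.
model = smf.ols("rev_per_employee ~ adopted_ai + post + adopted_ai:post", data=df).fit()
print(model.summary())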

Reviewer's Comments

Axel Backlund

Great work on analyzing the real-world impact of AI in the economy! The article is well written and has super clear language, which I appreciate (particularly today, when many use LLMs that produce unnecessarily complex text). The method is well described and highlights the limitations of the approach well.

The article could have been further strengthened with references backing up claims, e.g. "Some claim that AI adoption is correlated with size". I do appreciate, however, that you included the prompts you used to get the findings – it is good inspiration on a meta level for researchers.

I also think the article would have benefitted from a longer discussion of the findings, to answer questions like why you think the investment sector sees a higher productivity boost from AI than the other sectors you looked at. But I also understand that the time and format constraints made this difficult – so overall, well-scoped research and a clearly written article.

Alex Foster

* There was little to no rationale provided for selecting the three completely unrelated industries; the paper would have been much better if it had focused on a single industry.

* It is fundamentally illogical to extrapolate metrics from the two or three largest companies across long-tail industries that are highly fragmented (like tourism).

* The paper makes many claims without any evidence or reasoning (why would generative AI increase the productivity of tobacco production?).

* The discussion section was just two short sentences; it should have been several pages.

* The appendix content was good but not well formatted.

* AI was clearly used in the writing of this paper.

* Poor grammar and use of prompting

* Not a single citation or reference?

Joel Christoph

The paper asks whether firms that adopted generative AI after 2022 experienced higher labour productivity, defined as revenue per employee, in the investment, tourism, and tobacco industries. Using revenue and headcount scraped from StockAnalysis and a hand-coded flag for AI adoption, the author implements a two-period difference-in-differences design covering 2022 to 2024. The results table suggests investment firms gained about 0.14 million dollars per worker, tourism firms about 0.09 million, and tobacco firms saw no effect. The inclusion of a GitHub link and an appendix listing the sampled companies is a positive step toward transparency.

Inference is weak because the sample is small and convenience-based, AI adoption dates are uncertain, and the two-period panel offers no way to test the common-trends assumption. Revenue per employee is a noisy proxy that ignores capital intensity, hours worked, and currency movements. The regression excludes control variables and firm fixed effects, and confidence intervals are bootstrapped on the same limited cross-section. Sector choice is not justified by theory, and the literature review overlooks recent work on firm-level AI productivity and task-exposure indices.

AI safety relevance is thin. Productivity effects matter for growth trajectories yet the paper does not connect its findings to distributional stability, governance incentives, or funding for alignment research.

Technical documentation is partial. Input numbers are listed but the scraping scripts, dummy construction, and regression code are missing, and robustness checks are absent.

The study would benefit from a larger balanced panel with multiple pre-treatment years, validated adoption dates, firm fixed effects with appropriate controls, a richer literature discussion, full code release, and an explicit link between sectoral productivity shifts and AI safety policy levers such as redistribution or compute taxation.
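
To make the fixed-effects suggestion concrete, a sketch of a two-way fixed-effects version of the regression is given below. It is illustrative only: the column names (firm, year, treat_post, log_assets) are hypothetical, and log_assets merely stands in for the kind of capital-intensity control the reviewer has in mind.

# Sketch of a DiD specification with firm and year fixed effects and
# standard errors clustered by firm (illustrative; not the paper's code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")  # hypothetical multi-year panel

# treat_post = adopted_ai * post; C(firm) and C(year) absorb firm-level
# differences and common yearly shocks; log_assets proxies capital intensity.
model = smf.ols(
    "rev_per_employee ~ treat_post + log_assets + C(firm) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.params["treat_post"], model.bse["treat_post"])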

Duncan McClements

A few points of improvement:

- The paper mostly doesn't engage with prior literature, which would have helped provide a theoretical grounding for which industries to analyse

- The paper uses revenue per employee as a proxy, but this is extremely noisy (for example, firms that are more capital-intensive could be more likely to use AI – as one of many forms of capital – without having higher total factor productivity, yet would still show higher revenue per employee)

- Sample selection was non-random; it would have benefitted from using a more systematic method (such as S&P 500 firms within each industry, or more granular industry data to ensure small as well as large firms were captured)

- The mean difference is below reporting accuracy and could vanish at larger sample sizes (so it would be good to collect a larger sample to check this)

- DiD implicitly assumes parallel trends – it would have been good to verify this with pre-2020 data (a minimal sketch of such a check is shown after this list)
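
As a concrete illustration of the last point, one simple parallel-trends check regresses productivity on an adopter-specific time trend using pre-treatment years only. The sketch below is hypothetical, assuming several years of pre-2020 data and illustrative column names (year, adopted_ai, rev_per_employee).

# Hypothetical parallel-trends check on pre-treatment data only
# (illustrative; assumes columns year, adopted_ai, rev_per_employee).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_panel.csv")
pre = df[df["year"] < 2020]  # pre-2020 window suggested by the reviewer

# Under parallel trends, eventual adopters and non-adopters should show
# similar pre-period slopes, so the interaction term should be near zero.
model = smf.ols("rev_per_employee ~ adopted_ai * year", data=pre).fit()
print(model.params["adopted_ai:year"], model.pvalues["adopted_ai:year"])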

Cite this work

@misc{jalokim2025impact,
  title={Impact of generative AI on tobacco, investment and tourism industry},
  author={jalokim},
  date={2025-04-27},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.