Economic Impact Analysis: The Impact of AI on the Indian IT Sector

Maimuna Zaheer, Alina Plyassulya

We studied AI’s impact on India’s IT sector. We modelled a 20% labour shock and proposed upskilling and insurance policies to reduce AI-driven job losses.

Reviewer's Comments

Joseph Levine

1. Innovation & Literature Foundation

1. Score: 2.5

2. Good knowledge of the problem.

3. I'd like to see more attention to literature on the policies you assessed. Your assumptions of their efficacy are crucial — so it's helpful to get them in the right ballpark.

4. Especially upskilling. Economists have studied that one to death. Especially in the US but there's also great work in Ethiopia (Girum Abebe) and South Asia.

2. Practical Impact on AI Risk Reduction

1. Score: 3

2. The policy problem is very clearly identified. Honestly, just showing that 20% of jobs in this sector (a not-unreasonable number) amounts to 675,000 jobs will make policymakers sit up and pay attention.

3. It's useful to talk about the high cost and low return of unemployment insurance. Jobs displaced by AI don't come back. In labor-econ lingo, those displaced need to turn to new tasks. Which is what up-skilling is for! So: well-chosen policies.

3. Methodological Rigor & Scientific Quality

1. Score: 3

2. I'm not sure I follow the assumptions, and I would love to see the code!

3. It seems that you assume a ±20% shock to the sector scales both employment and revenue by ±20% (Figs 1 and 2). It's definitely possible for AI to increase/decrease employment by 20%, and the same for revenue, but it's very unlikely that the two move in lockstep! For example, it seems more likely that AI would increase revenues while decreasing headcount! (A numeric sketch of this point follows this review.)

4. What are the assumptions used for the policy comparison analysis? You mention dummy data (which is very reasonable in a research sprint!). As I mentioned above, it might be helpful to calibrate these assumptions to the estimates from existing upskilling experiments. There's lots of work on the US, but more relevant to your context might be work in Ethiopia and Bangladesh.
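To make the arithmetic behind these comments concrete, here is a minimal back-of-envelope sketch in Python. It is not the authors' code: the ±20% shock, the roughly 675,000 jobs, the USD 8.4 billion tax effect, and the 25% effective tax rate are taken from the submission as summarised in the reviews; the implied baselines and the decoupled revenue-up/headcount-down scenario are illustrative assumptions.

# Back-of-envelope reconstruction of the headline numbers (illustrative only).
SHOCK = 0.20                 # ±20% labour-input shock (from the paper)
JOBS_AT_RISK = 675_000       # headline employment effect (from the paper)
TAX_EFFECT_BN = 8.4          # headline tax-revenue effect, USD bn (from the paper)
TAX_RATE = 0.25              # single effective tax rate assumed in the paper

# Baselines implied by the paper's proportional-scaling assumption.
baseline_jobs = JOBS_AT_RISK / SHOCK                        # ~3.375 million jobs
baseline_value_added_bn = TAX_EFFECT_BN / TAX_RATE / SHOCK  # ~USD 168 bn

print(f"Implied baseline employment: {baseline_jobs:,.0f} jobs")
print(f"Implied baseline value added: USD {baseline_value_added_bn:.0f} bn")

# Decoupled scenario raised in the review: AI raises revenue while cutting headcount.
jobs_lost = baseline_jobs * SHOCK                           # headcount falls 20%
tax_gain_bn = baseline_value_added_bn * SHOCK * TAX_RATE    # value added rises 20%
print(f"Decoupled scenario: {jobs_lost:,.0f} jobs lost while tax revenue rises by USD {tax_gain_bn:.1f} bn")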

Joel Christoph

The paper tackles an important question: how transformative AI might disrupt employment and public finances in India’s IT-BPM sector. It grounds the discussion in publicly available OECD TiVA data. The authors clearly present headline results: a ±20% labour-input shock translates into roughly ±675,000 jobs and about USD 8.4 billion in tax revenue. Two stylised policy responses (up-skilling vouchers and lay-off insurance) appear affordable relative to the taxes that would otherwise be lost. The write-up is concise, the causal chain is easy to follow, and the inclusion of tentative cost-benefit numbers offers a useful starting point for policy debate.

The analysis, however, remains exploratory. The 20 % shock is imposed exogenously with no justification from adoption curves or task-level automation risk estimates, so results could change materially under different assumptions. Treating employment as a fixed ratio of value added ignores capital deepening and substitution elasticities, while reliance on a single 25 % effective tax rate obscures India’s heterogeneous fiscal structure. The input-output framework is labelled “partial” but not specified, which prevents readers from replicating coefficient adjustments or inspecting sectoral knock-on effects. Literature coverage is thin; only a handful of general reports are cited, omitting recent empirical and theoretical work on AI labour substitution, skill-biased technical change, and AI safety-oriented governance mechanisms. AI safety relevance is indirect: the focus is economic displacement rather than mitigation of catastrophic or misuse risks, and links to alignment or systemic safety concerns are not developed. Finally, the methodology, code, and data cleaning steps are not documented in a repository, which limits transparency.

To strengthen the submission, the authors should (i) justify the shock magnitudes with evidence, (ii) perform sensitivity and scenario analysis, (iii) move toward a dynamic or general-equilibrium model that captures feedback effects, (iv) expand the literature review to situate the work in AI economics and AI safety debates, and (v) publish a reproducible notebook and data appendix. Clarifying how the proposed interventions align with broader AI safety objectives, such as reducing tail-risk incentives or supporting safe-development norms, would also raise the study’s impact.
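As a concrete starting point for item (ii), a sensitivity sweep could be as small as the hypothetical sketch below. The baselines are those implied by the headline figures; the alternative shock magnitudes and tax rates are placeholder values, not estimates.

# Hypothetical sensitivity sweep (illustrative, not the authors' model).
BASELINE_JOBS = 3_375_000        # implied by 675,000 jobs at a 20% shock
BASELINE_VALUE_ADDED_BN = 168.0  # implied by USD 8.4 bn tax at a 25% rate and a 20% shock

shocks = [0.05, 0.10, 0.20, 0.30]      # alternative labour-input shocks (placeholders)
tax_rates = [0.15, 0.20, 0.25, 0.30]   # alternative effective tax rates (placeholders)

print(f"{'shock':>6} {'tax rate':>9} {'jobs affected':>14} {'tax effect, USD bn':>19}")
for s in shocks:
    for t in tax_rates:
        jobs = BASELINE_JOBS * s
        tax_bn = BASELINE_VALUE_ADDED_BN * s * t
        print(f"{s:>6.0%} {t:>9.0%} {jobs:>14,.0f} {tax_bn:>19.1f}")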

Matt

The project tackles an important problem: how will low- and middle-income countries be affected by AI. The authors could have been more precise about which risks beyond inequality they are addressing. I would have liked to see the code and not only the prompt which created it (GPT sometimes hallucinates).

Fabian Braesemann

Interesting study on the impact of a labour input shock on the Indian IT sector. It would have been interesting to see a broader discussion of spillover effects through the network of the Indian economy. Overall, well executed.
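To illustrate the spillover point, the sketch below runs a tiny Leontief input-output exercise. The three sectors and the coefficient matrix are entirely hypothetical; a real analysis would draw them from Indian input-output or OECD TiVA tables.

# Hypothetical three-sector Leontief model (coefficients made up for illustration).
import numpy as np

sectors = ["IT-BPM", "Manufacturing", "Other services"]
A = np.array([                 # A[i, j]: input from sector i per unit of output of sector j
    [0.05, 0.10, 0.15],
    [0.10, 0.20, 0.10],
    [0.20, 0.15, 0.10],
])

leontief_inverse = np.linalg.inv(np.eye(3) - A)

delta_final_demand = np.array([-20.0, 0.0, 0.0])   # e.g. a USD 20 bn fall in IT-BPM demand
delta_output = leontief_inverse @ delta_final_demand

for name, change in zip(sectors, delta_output):
    print(f"{name:>14}: change in gross output {change:+.1f} USD bn")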

Cite this work

@misc{
  title={Economic Impact Analysis: The Impact of AI on the Indian IT Sector},
  author={Maimuna Zaheer and Alina Plyassulya},
  date={4/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.