Apr 28, 2025

Economic Impact Analysis: The Impact of AI on the Indian IT Sector

Maimuna Zaheer, Alina Plyassulya


Summary

We studied AI’s impact on India’s IT sector. We modelled a 20% labour shock and proposed upskilling and insurance policies to reduce AI-driven job losses.
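
A minimal sketch (Python) of the headline arithmetic behind these results, back-derived from the figures quoted in the reviews below (a 20% shock ≈ 675,000 jobs; a 25% effective tax rate ≈ USD 8.4 billion in tax revenue). The employment base and value-added figures are assumptions implied by those numbers, not values taken from the paper's data.

```python
# Back-of-the-envelope reconstruction of the headline numbers.
# All inputs are assumptions inferred from the reviews, not the paper's data.

SHOCK = 0.20                  # +/-20% labour-input shock
JOBS_AFFECTED = 675_000       # jobs implied by the shock (from the reviews)
EFFECTIVE_TAX_RATE = 0.25     # single effective tax rate assumed in the paper
TAX_IMPACT_USD_BN = 8.4       # stated fiscal impact (USD billions)

employment_base = JOBS_AFFECTED / SHOCK                         # ~3.375M modelled jobs
value_added_impact_bn = TAX_IMPACT_USD_BN / EFFECTIVE_TAX_RATE  # implied VA change

print(f"Implied modelled employment base: {employment_base:,.0f}")
print(f"Implied value-added impact: USD {value_added_impact_bn:.1f}bn")
```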

Cite this work:

@misc{zaheer2025economicimpact,
  title={Economic Impact Analysis: The Impact of AI on the Indian IT Sector},
  author={Maimuna Zaheer and Alina Plyassulya},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Reviewer's Comments

Joseph Levine

1. Innovation & Literature Foundation

Score: 2.5

Good knowledge of the problem. I'd like to see more attention to the literature on the policies you assessed; your assumptions about their efficacy are crucial, so it's helpful to get them in the right ballpark. Especially upskilling: economists have studied that one to death, especially in the US, but there's also great work in Ethiopia (Girum Abebe) and South Asia.

2. Practical Impact on AI Risk Reduction

Score: 3

The policy problem is very clearly identified. Honestly, just showing that 20% of jobs in this sector (a not-unreasonable shock) amounts to 675,000 jobs will make policymakers sit up and pay attention. It's also useful to talk about the high cost and low return of unemployment insurance: jobs displaced by AI don't come back. In labor-econ lingo, those displaced need to turn to new tasks, which is what upskilling is for. So: well-chosen policies.

3. Methodological Rigor & Scientific Quality

Score: 3

I'm not sure I follow the assumptions, and I would love to see the code!

It seems that you assume a ±20% shock to the sector scales both employment and revenue by ±20% (Figs. 1 and 2). It's definitely possible for AI to increase or decrease employment by 20%, and the same for revenue, but it's very unlikely that the two move in lockstep. For example, it seems more likely that AI would increase revenues while decreasing headcount (see the sketch below).

What are the assumptions behind the policy comparison analysis? You mention dummy data (which is very reasonable in a research sprint!). As I mentioned above, it might be helpful to calibrate these assumptions to estimates from existing upskilling experiments. There's lots of work on the US, but work in Ethiopia and Bangladesh may be more relevant to your context.
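
A minimal sketch of the decoupling suggested above, letting employment and revenue respond to the same AI-adoption shock through separate channels. The elasticity values are purely illustrative assumptions, not estimates from the paper or the literature.

```python
# Sketch: separate employment and revenue responses to one AI-adoption shock,
# instead of scaling both by +/-20% together.
# Elasticity values are illustrative assumptions, not estimates.

AI_SHOCK = 0.20                # intensity of the AI-adoption shock
EMPLOYMENT_ELASTICITY = -0.8   # headcount falls as AI substitutes for labour (assumed)
REVENUE_ELASTICITY = 0.5       # revenue rises with AI-driven productivity (assumed)

employment_change = EMPLOYMENT_ELASTICITY * AI_SHOCK   # -16%
revenue_change = REVENUE_ELASTICITY * AI_SHOCK         # +10%

print(f"Employment change: {employment_change:+.0%}")
print(f"Revenue change:    {revenue_change:+.0%}")
```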

Joel Christoph

The paper tackles an important question: how transformative AI might disrupt employment and public finances in India’s IT-BPM sector. It grounds the discussion in publicly available OECD TiVA data. The authors clearly present headline results: a ±20% labour-input shock translates into roughly ±675,000 jobs and about USD 8.4 billion in tax revenue. Two stylised policy responses (upskilling vouchers and lay-off insurance) appear affordable relative to the taxes that would otherwise be lost. The write-up is concise, the causal chain is easy to follow, and the inclusion of tentative cost-benefit numbers offers a useful starting point for policy debate.

The analysis, however, remains exploratory. The 20 % shock is imposed exogenously with no justification from adoption curves or task-level automation risk estimates, so results could change materially under different assumptions. Treating employment as a fixed ratio of value added ignores capital deepening and substitution elasticities, while reliance on a single 25 % effective tax rate obscures India’s heterogeneous fiscal structure. The input-output framework is labelled “partial” but not specified, which prevents readers from replicating coefficient adjustments or inspecting sectoral knock-on effects. Literature coverage is thin; only a handful of general reports are cited, omitting recent empirical and theoretical work on AI labour substitution, skill-biased technical change, and AI safety-oriented governance mechanisms. AI safety relevance is indirect: the focus is economic displacement rather than mitigation of catastrophic or misuse risks, and links to alignment or systemic safety concerns are not developed. Finally, the methodology, code, and data cleaning steps are not documented in a repository, which limits transparency.

To strengthen the submission, the authors should (i) justify the shock magnitudes with evidence, (ii) perform sensitivity and scenario analysis, (iii) move toward a dynamic or general-equilibrium model that captures feedback effects, (iv) expand the literature review to situate the work in AI economics and AI safety debates, and (v) publish a reproducible notebook and data appendix. Clarifying how the proposed interventions align with broader AI safety objectives, such as reducing tail-risk incentives or supporting safe-development norms, would also raise the study’s impact.
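
A minimal sketch of the sensitivity analysis suggested in point (ii), sweeping the shock magnitude and effective tax rate rather than fixing them at 20% and 25%. The employment base and value added per job are assumptions back-derived from the headline numbers above, not from the paper's data.

```python
# Sensitivity sweep over shock magnitude and effective tax rate.
# Base figures are back-derived from the headline numbers (assumptions):
# a 20% shock ~= 675,000 jobs, and USD 8.4bn tax at a 25% effective rate.

EMPLOYMENT_BASE = 675_000 / 0.20                # ~3.375M modelled jobs
VALUE_ADDED_PER_JOB = (8.4e9 / 0.25) / 675_000  # ~USD 49,800 per job

for shock in (0.05, 0.10, 0.20, 0.30):
    for tax_rate in (0.15, 0.25, 0.35):
        jobs = shock * EMPLOYMENT_BASE
        tax_impact_bn = jobs * VALUE_ADDED_PER_JOB * tax_rate / 1e9
        print(f"shock={shock:.0%} tax={tax_rate:.0%}: "
              f"{jobs:,.0f} jobs, USD {tax_impact_bn:.1f}bn tax")
```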

Mar 10, 2025

Attention Pattern Based Information Flow Visualization Tool

Understanding information flow in transformer-based language models is crucial for mechanistic interpretability. We introduce a visualization tool that extracts and represents attention patterns across model components, revealing how tokens influence each other during processing. Our tool automatically identifies and color-codes functional attention head types based on established taxonomies from recent research on indirect object identification (Wang et al., 2022), factual recall (Chughtai et al., 2024), and factual association retrieval (Geva et al., 2023). This interactive approach enables researchers to trace information propagation through transformer architectures, providing deeper insights into how these models implement reasoning and knowledge retrieval capabilities.
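
As a rough illustration of the extraction step a tool like this builds on, the sketch below pulls per-layer, per-head attention patterns from a small model via the Hugging Face transformers API. The model choice and prompt are illustrative, and the tool's actual head-classification logic is not reproduced here.

```python
# Sketch: extract per-layer, per-head attention patterns, the raw material
# a visualization tool like this works from. Model and prompt are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "When Mary and John went to the store, John gave a drink to"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer
for layer, attn in enumerate(outputs.attentions):
    final_token_attn = attn[0, :, -1, :]  # final token's attention, per head
    head = final_token_attn.max(dim=-1).values.argmax().item()
    print(f"layer {layer}: head {head} has the sharpest final-token attention")
```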

Read More

Jan 24, 2025

Safe AI

The rapid adoption of AI in critical industries like healthcare and legal services has highlighted the urgent need for robust risk mitigation mechanisms. While domain-specific AI agents offer efficiency, they often lack transparency and accountability, raising concerns about safety, reliability, and compliance. The stakes are high, as AI failures in these sectors can lead to catastrophic outcomes, including loss of life, legal repercussions, and significant financial and reputational damage. Current solutions, such as regulatory frameworks and quality assurance protocols, provide only partial protection against the multifaceted risks associated with AI deployment. This situation underscores the necessity for an innovative approach that combines comprehensive risk assessment with financial safeguards to ensure the responsible and secure implementation of AI technologies across high-stakes industries.

Read More

Jan 24, 2025

CoTEP: A Multi-Modal Chain of Thought Evaluation Platform for the Next Generation of SOTA AI Models

As advanced state-of-the-art models like OpenAI's o-1 series, the upcoming o-3 family, Gemini 2.0 Flash Thinking and DeepSeek display increasingly sophisticated chain-of-thought (CoT) capabilities, our safety evaluations have not yet caught up. We propose building a platform that allows us to gather systematic evaluations of AI reasoning processes to create comprehensive safety benchmarks. Our Chain of Thought Evaluation Platform (CoTEP) will help establish standards for assessing AI reasoning and ensure development of more robust, trustworthy AI systems through industry and government collaboration.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.