Apr 28, 2025

Economic Impact Analysis: The Impact of AI on the Indian IT Sector

Maimuna Zaheer, Alina Plyassulya

We studied AI’s impact on India’s IT sector. We modelled a 20% labour shock and proposed upskilling and insurance policies to reduce AI-driven job losses.

Reviewer's Comments


Joseph Levine

1. Innovation & Literature Foundation

1. Score: 2.5

2. Good knowledge of the problem.

3. I'd like to see more attention to literature on the policies you assessed. Your assumptions about their efficacy are crucial, so it's helpful to get them in the right ballpark.

4. Especially upskilling. Economists have studied that one to death, especially in the US, but there's also great work in Ethiopia (Girum Abebe) and South Asia.

2. Practical Impact on AI Risk Reduction

1. Score: 3

2. The policy problem is very clearly identified. Honestly, just showing that a 20% job loss in this sector (a not-unreasonable number) amounts to 675,000 jobs will make policymakers sit up and pay attention.

3. It's useful to talk about the high cost and low return of unemployment insurance. Jobs displaced by AI don't come back. In labor-econ lingo, those displaced need to turn to new tasks. Which is what up-skilling is for! So: well-chosen policies.


3. Methodological Rigor & Scientific Quality

1. Score: 3

2. I'm not sure I follow the assumptions, and I would love to see the code!

3. It seems that you assume a ±20% shock to the sector scales both employment and revenue by ±20% (Figs 1 and 2). It's definitely possible for AI to increase/decrease employment by 20%, and the same for revenue, but it's very unlikely that the two move in lockstep! For example, it seems more likely that AI would increase revenues while decreasing headcount! (A sketch of decoupled shocks follows this section.)

4. What are the assumptions used for the policy comparison analysis? You mention dummy data (which is very reasonable in a research sprint!). As I mentioned above, it might be helpful to calibrate these assumptions to the estimates from existing upskilling experiments. There's lots of work on the US, but more relevant to your context might be work in Ethiopia and Bangladesh.
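To make comment 3 concrete, here is a minimal Python sketch of what decoupled shocks could look like. The sector totals (roughly 3.4 million direct jobs and USD 168 billion in value added) are placeholders back-solved from the headline figures quoted in these reviews, not the paper's actual inputs:

# Decoupling the employment and revenue shocks (comment 3 above).
# Sector totals are placeholders implied by the headline review figures,
# not the paper's actual inputs.
sector_jobs = 3_375_000        # implied: 675,000 jobs is 20% of this
sector_value_added = 168e9     # USD; implied: 8.4bn tax = 25% of 20% of this
effective_tax_rate = 0.25

scenarios = {
    "paper's assumption": (-0.20, -0.20),  # jobs and revenue fall together
    "automation story":   (-0.20, +0.10),  # headcount falls, revenue rises
    "demand collapse":    (-0.20, -0.30),  # revenue falls faster than jobs
}
for name, (emp_shock, rev_shock) in scenarios.items():
    d_jobs = emp_shock * sector_jobs
    d_tax = rev_shock * sector_value_added * effective_tax_rate
    print(f"{name}: {d_jobs:+,.0f} jobs, {d_tax / 1e9:+.1f}bn USD tax")

Under the "automation story" row, the fiscal loss the paper projects would reverse sign even as the same number of workers is displaced, which is exactly why the correlation assumption matters.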

Joel Christoph

The paper tackles an important question: how transformative AI might disrupt employment and public finances in India’s IT-BPM sector. It grounds the discussion in publicly available OECD TiVA data. The authors clearly present headline results: a ±20% labour-input shock translates into roughly ±675,000 jobs and about USD 8.4 billion in tax revenue. Two stylised policy responses (up-skilling vouchers and lay-off insurance) appear affordable relative to the taxes that would otherwise be lost. The write-up is concise, the causal chain is easy to follow, and the inclusion of tentative cost-benefit numbers offers a useful starting point for policy debate.
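For readers who want to check that arithmetic, a minimal Python sketch reproduces the headline figures; the sector totals are back-solved from the numbers above and should be read as assumptions, not as the paper's calibrated inputs:

# Back-of-envelope check of the headline figures above. Sector totals
# are implied by those figures rather than taken from the paper.
sector_jobs = 3_375_000        # 675,000 = 20% of 3,375,000
sector_value_added = 168e9     # USD: 8.4bn = 25% x 20% x 168bn
shock = 0.20
effective_tax_rate = 0.25

jobs_affected = shock * sector_jobs
tax_at_risk = effective_tax_rate * shock * sector_value_added
print(f"Jobs affected: {jobs_affected:,.0f}")              # 675,000
print(f"Tax revenue at risk: ${tax_at_risk / 1e9:.1f}bn")  # $8.4bn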

The analysis, however, remains exploratory. The 20 % shock is imposed exogenously with no justification from adoption curves or task-level automation risk estimates, so results could change materially under different assumptions. Treating employment as a fixed ratio of value added ignores capital deepening and substitution elasticities, while reliance on a single 25 % effective tax rate obscures India’s heterogeneous fiscal structure. The input-output framework is labelled “partial” but not specified, which prevents readers from replicating coefficient adjustments or inspecting sectoral knock-on effects. Literature coverage is thin; only a handful of general reports are cited, omitting recent empirical and theoretical work on AI labour substitution, skill-biased technical change, and AI safety-oriented governance mechanisms. AI safety relevance is indirect: the focus is economic displacement rather than mitigation of catastrophic or misuse risks, and links to alignment or systemic safety concerns are not developed. Finally, the methodology, code, and data cleaning steps are not documented in a repository, which limits transparency.

To strengthen the submission, the authors should (i) justify the shock magnitudes with evidence, (ii) perform sensitivity and scenario analysis, (iii) move toward a dynamic or general-equilibrium model that captures feedback effects, (iv) expand the literature review to situate the work in AI economics and AI safety debates, and (v) publish a reproducible notebook and data appendix. Clarifying how the proposed interventions align with broader AI safety objectives, such as reducing tail-risk incentives or supporting safe-development norms, would also raise the study’s impact.
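As a concrete illustration of point (ii), even a coarse sensitivity grid over the two key parameters would show how much the fiscal estimate moves. A minimal sketch, again using the implied value-added total as a placeholder assumption:

# Coarse sensitivity grid for recommendation (ii): vary the shock size
# and the effective tax rate. Sector value added is the placeholder
# implied by the headline figures, not a calibrated input.
sector_value_added = 168e9  # USD, assumed

tax_rates = (0.15, 0.25, 0.35)
print("shock | " + " | ".join(f"tax={t:.0%}" for t in tax_rates))
for shock in (0.05, 0.10, 0.20, 0.30):
    cells = (shock * t * sector_value_added / 1e9 for t in tax_rates)
    print(f"{shock:5.0%} | " + " | ".join(f"${v:4.1f}bn" for v in cells))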

Matt

The project tackles an important problem: how will low- and middle-income countries be affected by AI. The authors could have been more precise about which risks beyond inequality they are addressing. I would have liked to see the code, and not only the prompt which created it (GPT sometimes hallucinates).

Fabian Braesemann

Interesting study on the impact of a labour input shock on the Indian IT sector. It would have been interesting to see a broader discussion of spillover effects through the network of the Indian economy. Overall, well executed.
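Spillovers of this kind are usually traced through a Leontief input-output multiplier. A minimal Python sketch with an invented three-sector coefficients matrix (illustrative only, not India's actual input-output table):

import numpy as np

# Minimal Leontief input-output sketch of spillovers through the wider
# economy. The three-sector technical-coefficients matrix is invented
# for illustration; it is not India's actual input-output table.
A = np.array([
    [0.10, 0.05, 0.15],  # IT inputs per unit of each sector's output
    [0.05, 0.20, 0.10],  # manufacturing inputs
    [0.20, 0.15, 0.10],  # services inputs
])
leontief_inverse = np.linalg.inv(np.eye(3) - A)

# A one-dollar drop in final demand for IT output propagates everywhere:
d_final_demand = np.array([-1.0, 0.0, 0.0])
d_output = leontief_inverse @ d_final_demand
print(dict(zip(["IT", "manufacturing", "services"], d_output.round(3))))

The off-diagonal responses are what the "partial" framework in the paper would need to capture: a shock to IT final demand also reduces output in manufacturing and services through intermediate-input linkages.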

Cite this work

@misc{zaheer2025economic,
  title={Economic Impact Analysis: The Impact of AI on the Indian IT Sector},
  author={Maimuna Zaheer and Alina Plyassulya},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.