Apr 28, 2025

Economic Impact Analysis: The Impact of AI on the Indian IT Sector

Maimuna Zaheer, Alina Plyassulya

We studied AI’s impact on India’s IT sector. We modelled a 20% labour shock and proposed upskilling and insurance policies to reduce AI-driven job losses.

Reviewer's Comments


1. Innovation & Literature Foundation

1. Score: 2.5

2. Good knowledge of the problem.

3. I'd like to see more attention to literature on the policies you assessed. Your assumptions of their efficacy are crucial — so it's helpful to get them in the right ballpark.

4. Especially upskilling: economists have studied that one to death, especially in the US, but there's also great work in Ethiopia (Girum Abebe) and South Asia.

2. Practical Impact on AI Risk Reduction

1. Score: 3

2. The policy problem is very clearly identified. Honestly, just showing that a 20% shock to jobs in this sector (a not-unreasonable number) comes to 675,000 jobs will make policymakers sit up and pay attention.

3. It's useful to talk about the high cost and low return of unemployment insurance. Jobs displaced by AI don't come back; in labor-econ lingo, those displaced need to turn to new tasks. Which is what up-skilling is for! So: well-chosen policies.

3. Methodological Rigor & Scientific Quality

1. Score: 3

2. I'm not sure I follow the assumptions, and I would love to see the code!

3. It seems that you assume a ±20% shock to the sector scales both employment and revenue by ±20% (Figs 1 and 2). It's definitely possible for AI to increase or decrease employment by 20%, and the same for revenue, but it's very unlikely that the two move in lockstep! For example, it seems more likely that AI would increase revenues while decreasing headcount.

4. What are the assumptions used for the policy comparison analysis? You mention dummy data (which is very reasonable in a research sprint!). As I mentioned above, it might be helpful to calibrate these assumptions to estimates from existing upskilling experiments. There's lots of work on the US, but more relevant to your context might be work in Ethiopia and Bangladesh.
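The reviewer's decoupling point can be made concrete with a minimal sketch. All baseline figures below are hypothetical placeholders (the headcount is only back-solved from the review's "20% = 675,000 jobs" remark; the revenue figure is invented): treat the employment and revenue shocks as independent parameters rather than a single lockstep ±20%.

```python
# Minimal sketch: treat employment and revenue shocks as independent inputs.
# Baseline figures are hypothetical placeholders, not the paper's data.
BASE_EMPLOYMENT = 3_375_000  # headcount implied by "20% = 675,000 jobs"
BASE_REVENUE_BN = 250.0      # hypothetical sector revenue, USD bn

def scenario(emp_shock: float, rev_shock: float) -> dict:
    """Scale employment and revenue by separate proportional shocks."""
    return {
        "employment": BASE_EMPLOYMENT * (1 + emp_shock),
        "revenue_bn": BASE_REVENUE_BN * (1 + rev_shock),
    }

# Lockstep assumption (as in the paper's Figs 1 and 2): both fall by 20%.
lockstep = scenario(-0.20, -0.20)

# The reviewer's alternative: AI raises revenue while cutting headcount.
divergent = scenario(-0.20, +0.10)

print(lockstep)
print(divergent)
```

Reporting results over such a grid of (employment shock, revenue shock) pairs, instead of along the diagonal only, would directly address the correlation concern.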

The paper tackles an important question: how transformative AI might disrupt employment and public finances in India’s IT-BPM sector. It grounds the discussion in publicly available OECD TiVA data. The authors clearly present headline results: a ±20% labour-input shock translates into roughly ±675,000 jobs and about USD 8.4 billion in tax revenue. Two stylised policy responses (up-skilling vouchers and lay-off insurance) appear affordable relative to the taxes that would otherwise be lost. The write-up is concise, the causal chain is easy to follow, and the inclusion of tentative cost-benefit numbers offers a useful starting point for policy debate.

The analysis, however, remains exploratory. The 20% shock is imposed exogenously with no justification from adoption curves or task-level automation risk estimates, so results could change materially under different assumptions. Treating employment as a fixed ratio of value added ignores capital deepening and substitution elasticities, while reliance on a single 25% effective tax rate obscures India’s heterogeneous fiscal structure. The input-output framework is labelled “partial” but not specified, which prevents readers from replicating coefficient adjustments or inspecting sectoral knock-on effects. Literature coverage is thin; only a handful of general reports are cited, omitting recent empirical and theoretical work on AI labour substitution, skill-biased technical change, and AI safety-oriented governance mechanisms. AI safety relevance is indirect: the focus is economic displacement rather than mitigation of catastrophic or misuse risks, and links to alignment or systemic safety concerns are not developed. Finally, the methodology, code, and data cleaning steps are not documented in a repository, which limits transparency.

To strengthen the submission, the authors should (i) justify the shock magnitudes with evidence, (ii) perform sensitivity and scenario analysis, (iii) move toward a dynamic or general-equilibrium model that captures feedback effects, (iv) expand the literature review to situate the work in AI economics and AI safety debates, and (v) publish a reproducible notebook and data appendix. Clarifying how the proposed interventions align with broader AI safety objectives, such as reducing tail-risk incentives or supporting safe-development norms, would also raise the study’s impact.
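As a starting point for recommendation (ii), even a one-page sensitivity sweep shows how the headline numbers move with the two key assumptions (shock size and effective tax rate). The baseline figures below are hypothetical: the employment base is only back-solved from the "20% = 675,000 jobs" figure, and the value-added number is invented for illustration, not taken from the paper.

```python
# Hypothetical sensitivity sweep over shock magnitude and effective tax rate.
# BASE_JOBS is implied by "20% = 675,000 jobs"; VALUE_ADDED_BN is an
# invented placeholder, not the paper's data.
BASE_JOBS = 3_375_000
VALUE_ADDED_BN = 130.0  # USD bn, illustrative only

def jobs_at_risk(shock: float) -> float:
    """Jobs lost if employment falls in proportion to the shock."""
    return BASE_JOBS * shock

def tax_at_risk(shock: float, tax_rate: float) -> float:
    """Tax revenue lost if taxable value added falls with the shock."""
    return VALUE_ADDED_BN * shock * tax_rate

for shock in (0.05, 0.10, 0.20, 0.30):
    for rate in (0.15, 0.25, 0.35):
        print(f"shock={shock:.0%}  rate={rate:.0%}  "
              f"jobs={jobs_at_risk(shock):>9,.0f}  "
              f"tax=${tax_at_risk(shock, rate):.2f}bn")
```

Publishing a notebook along these lines, with the placeholders replaced by the paper's actual inputs, would cover recommendations (ii) and (v) at once.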

The project tackles an important problem: how will low- and middle-income countries be affected by AI. The authors could have been more precise about which risks beyond inequality they are addressing. I would have liked to see the code, and not only the prompt that created it (GPT sometimes hallucinates).

Interesting study on the impact of a labour input shock on the Indian IT sector. It would have been interesting to see a broader discussion of spillover effects through the network of the Indian Economy. Overall, well executed.

Cite this work

@misc{zaheer2025economic,
  title={Economic Impact Analysis: The Impact of AI on the Indian IT Sector},
  author={Maimuna Zaheer and Alina Plyassulya},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects


Feb 2, 2026

Markov Chain Lock Watermarking: Provably Secure Authentication for LLM Outputs

We present Markov Chain Lock (MCL) watermarking, a cryptographically secure framework for authenticating LLM outputs. MCL constrains token generation to follow a secret Markov chain over SHA-256 vocabulary partitions. Using doubly stochastic transition matrices, we prove four theoretical guarantees: (1) exponentially decaying false positive rates via Hoeffding bounds, (2) graceful degradation under adversarial modification with closed-form expected scores, (3) information-theoretic security without key access, and (4) bounded quality loss via KL divergence. Experiments on 173 Wikipedia prompts using Llama-3.2-3B demonstrate that the optimal 7-state soft cycle configuration achieves 100\% detection, 0\% FPR, and perplexity 4.20. Robustness testing confirms detection above 96\% even with 30\% word replacement. The framework enables $O(n)$ model-free detection, addressing EU AI Act Article 50 requirements. Code available at \url{https://github.com/ChenghengLi/MCLW}

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification can not close. The device eliminates the need for any mutually trusted logic, while still meeting the security needs of the prover and verifier.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.