Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation

Anusha Asim, Haochen (Lucas) Tang, Jackson Paulson, Ivan Lee, Aneesh Karanam

This paper analyzes four interlinked fiscal measures proposed to fund a Universal High Income (UHI) system in response to large-scale technological automation: a unity wealth tax, an unused land and property tax, progressive income tax reform, and the Artificial Intelligence Dividend Income (AIDI) program. Using dynamic general equilibrium modelling, IS-MP-PC frameworks, and empirical elasticity estimates, we assess the macroeconomic impacts, revenue potential, and distributional consequences of each measure. Results indicate that the combined measures could generate 8–12% of GDP in annual revenue, sufficient to sustainably support a UHI framework even with 80–90% unemployment. The wealth tax and land tax enhance fiscal resilience while reducing inequality; the progressive income tax improves administrative efficiency and boosts aggregate consumption; and AIDI channels the productivity gains of automation directly back to displaced workers and the broader public. Nonetheless, each policy presents limitations, including vulnerability to capital flight, political resistance, behavioural tax avoidance, innovation slowdowns, and enforcement complexity. AIDI, in particular, offers a novel mechanism to maintain consumer demand while moderating excessive automation, but it demands careful regulatory oversight. Overall, the findings suggest that, if implemented carefully and coordinated globally, these measures provide a robust fiscal architecture to ensure equitable prosperity in a post-labour economy dominated by artificial intelligence. Strategic design and adaptive governance will be essential to maximize economic stability, technological innovation, and social welfare during this unprecedented economic transition.

Reviewer's Comments


Elvis Song

The research offers a solid, theoretically grounded macroeconomic analysis of four distinct tax proposals for funding UHI in a future automated economy. It effectively uses established economic models and provides quantitative estimates for potential revenue, distributional impacts, and behavioural responses for each measure. A key strength is the comprehensive approach covering both governmental and market-side effects, as well as identifying crucial implementation challenges like international coordination and tax avoidance. While the analysis of individual measures is thorough, a more integrated modelling of the combined effects and a deeper dive into practical implementation hurdles could enhance the research further.

Joel Christoph

The paper sets an ambitious goal of financing a universal high income in a world where artificial intelligence removes the majority of paid jobs. It gathers four revenue pillars: a unity wealth tax, a land and property tax on idle assets, a redesigned progressive income tax schedule, and an artificial intelligence dividend charged on the profits of highly automated firms. The abstract claims that the combined package can raise eight to twelve percent of gross domestic product, a figure the authors argue would cover transfers even if unemployment rises to ninety percent. The narrative is accessible, and the use of established macro frameworks such as the IS-MP Phillips curve for the wealth tax and the Diamond-Saez approach for income taxation shows familiarity with modern public finance. The inclusion of an original concept called AIDI brings a creative twist that aligns revenue with the pace of automation. Figures on pages five and six display the anticipated distributional gains; for example, the bar chart on page five estimates a fall in the Gini coefficient of roughly five one-hundredths under the wealth tax alone.

Despite this breadth, the study remains largely illustrative. All parameter values are hypothetical, and no calibration to existing national accounts or tax bases is attempted. The dynamic general equilibrium modelling is referenced, but no model equations beyond skeletal identities are shown, and the paper supplies no code or sensitivity analysis. Key assumptions, such as capital flight of thirty to forty percent under unilateral wealth taxation, are asserted without evidence. The land value tax results rely on external citations, but the authors do not produce their own simulations. As a result, the headline claim that the package funds two thousand dollars per adult per month is not verifiable. The reference list is extensive, yet recent quantitative work on automation-driven tax bases and optimal redistribution under artificial intelligence is missing, so the literature anchoring is only partial.
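
To make the verifiability point concrete, here is a minimal back-of-envelope check. The GDP and adult-population figures are rough US-scale assumptions introduced for illustration, not values taken from the paper:

```python
# Back-of-envelope check: does 8-12% of GDP cover a $2,000/month UHI?
# GDP and adult population are rough US-scale assumptions, not values
# taken from the paper.
GDP = 27e12          # annual GDP in dollars (assumption)
ADULTS = 260e6       # number of adults (assumption)
UHI_MONTHLY = 2_000  # dollars per adult per month (claim cited above)

annual_cost = ADULTS * UHI_MONTHLY * 12
for share in (0.08, 0.12):
    revenue = share * GDP
    print(f"{share:.0%} of GDP = ${revenue / 1e12:.2f}T vs "
          f"cost ${annual_cost / 1e12:.2f}T -> covers {revenue / annual_cost:.0%}")
```

Under these assumptions even the upper end of the claimed revenue range covers only about half of the transfer cost, which is exactly the kind of gap a calibrated model would need to address.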

The link to AI safety is acknowledged but indirect. The authors argue that maintaining consumer demand and curbing extreme inequality will support social stability during a high automation transition. They do not trace how the proposed taxes would influence alignment research incentives, catastrophic misuse risk, or international compute races. A deeper discussion of how large public transfers could be conditioned on safe development norms or how AIDI could internalise externalities from risky deployment would make the paper more relevant to safety.

Technical documentation is thin. Several variables in the model statements lack units, tables omit standard errors, and the Kaggle job threat dataset mentioned in methods is not integrated into the fiscal projections. The appendix points to a Google Drive folder that is not included, so the study cannot be replicated. The graphical results are clear but no underlying data are provided.

Duncan McClements

Thanks for the paper! It lays out several key considerations for designing tax systems in general, and discusses how some of them could relate to AI. Here are a few potential areas for improvement:

- IS-MP is normally used as a short run framework analysing the world given fixed inflation expectations, and has very little relevance to the long run impacts of tax policies, where other considerations (such as capital stock adjustment and labour supply) are more relevant

- It is unclear what the tax rates of some of the taxes in the paper are, especially for the income tax and land tax changes proposed, making it difficult to evaluate the feasibility of their contribution to the end revenue stream

- Additionally, it is unclear how behavioural response is governed, and what model drives the levels of relocation observed in response to the tax, or how the optimal level is computed

- Lastly, the parameter values used are not extrapolated from today's economic data, yet AI would likely change several of them (for example, a rise in the capital share would raise the revenue from a wealth tax and reduce the value of the income tax changes), so it would be desirable to extend the analysis to explore those shifts; a toy version is sketched below
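
A toy calculation of that last point, under stated assumptions (wealth approximated by capitalizing capital income at an assumed return, plus placeholder tax rates; none of these values come from the paper):

```python
# How a rising capital share (alpha) shifts revenue between a wealth
# tax and a labour income tax. All parameters are illustrative
# assumptions, not calibrated values from the paper.
WEALTH_TAX = 0.02   # assumed 2% annual wealth tax
INCOME_TAX = 0.30   # assumed average tax rate on labour income
R = 0.05            # assumed return used to capitalize capital income

for alpha in (0.35, 0.50, 0.70, 0.90):      # capital share rising with AI
    wealth_to_gdp = alpha / R               # wealth ~ capital income / r
    wealth_rev = WEALTH_TAX * wealth_to_gdp # revenue as a share of GDP
    labour_rev = INCOME_TAX * (1 - alpha)   # revenue as a share of GDP
    print(f"alpha={alpha:.2f}: wealth tax ~{wealth_rev:.1%} of GDP, "
          f"labour income tax ~{labour_rev:.1%} of GDP")
```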

Rubi Hudson

Comparing tax proposals to fund a UBI is a good start, but the details on the proposals were light. Along with providing these details, it would be helpful to compare them on a level playing field: for example, to raise 1% or 10% of global GDP under each of them, what would the distortionary effects be? (A crude version of this comparison is sketched below.)
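
One way to operationalize that level playing field is to fix a revenue target and compare approximate deadweight losses with the textbook Harberger approximation, DWL ≈ ½·ε·t²·B. The bases and elasticities below are placeholder guesses chosen only to illustrate the mechanics, not estimates from the paper:

```python
# Same-revenue comparison of the four instruments using the Harberger
# approximation DWL ~ 0.5 * elasticity * rate^2 * base.
# Bases (as shares of GDP) and elasticities are illustrative guesses.
taxes = {
    "wealth tax":         (5.00, 0.50),
    "land/property tax":  (2.00, 0.05),  # land supply is nearly inelastic
    "income tax":         (0.50, 0.30),
    "AIDI on AI profits": (0.10, 0.80),
}
for target in (0.01, 0.10):  # raise 1% and 10% of GDP, per the review
    print(f"\nrevenue target = {target:.0%} of GDP")
    for name, (base, eps) in taxes.items():
        rate = target / base              # flat rate needed on that base
        dwl = 0.5 * eps * rate**2 * base  # deadweight loss, share of GDP
        print(f"  {name:<19} rate={rate:7.1%}  DWL~{dwl:.3%} of GDP")
```

Even this crude exercise shows why base breadth matters: narrow bases such as AI profits require very high rates to hit large revenue targets, with sharply rising distortions.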

For the AIDI, which seems to be the centerpiece of this project, the distortionary effects on AI investment will be important and don't seem to be considered. It also appears to assume global coordination, which may be a major barrier in practice.

It's unclear how the AI Job Threat Index relates to the rest of the model.

Some points made, like "the Global North consolidates technological and economic power at the expense of the Global South" go against standard economic theory and would benefit from further justification.

Donghyun Suh

The paper offers a timely analysis of income guarantee programs and, importantly, how to finance such policies. This kind of analysis is what we need to make informed decisions about how to prepare for potential disruptions by AI.

To further advance the analysis, the authors could examine the potential costs of distortionary taxes. Importantly, distortions may differ across different stages of automation. For example, a progressive income tax may incur large distortions at low levels of automation, when labor is still a crucial input to production; as the economy approaches full automation, such distortions may decrease. The paper could then even discuss the optimal mix of different financing schemes as automation advances (a toy illustration follows).
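
That optimal-mix idea can be illustrated with a small numerical sketch: choose the split between a labour income tax and a capital levy that minimizes total Harberger-style deadweight loss at each stage of automation. The elasticities and revenue target are invented for illustration only:

```python
# Sketch of the "optimal mix across automation stages" idea: minimize
# total deadweight loss of a labour income tax plus a capital levy
# raising fixed revenue R, as the labour share shrinks with automation.
# All parameters are illustrative assumptions.
import numpy as np

R = 0.08                 # revenue target, as a share of GDP (assumption)
EPS_L, EPS_K = 0.4, 0.6  # assumed behavioural elasticities

def total_dwl(share_from_labour, labour_share):
    cap_share = 1 - labour_share
    t_l = (share_from_labour * R) / labour_share   # labour tax rate
    t_k = ((1 - share_from_labour) * R) / cap_share  # capital levy rate
    return (0.5 * EPS_L * t_l**2 * labour_share
            + 0.5 * EPS_K * t_k**2 * cap_share)

for labour_share in (0.6, 0.4, 0.2, 0.05):  # automation advancing
    grid = np.linspace(0, 1, 1001)
    best = grid[int(np.argmin([total_dwl(s, labour_share) for s in grid]))]
    print(f"labour share {labour_share:.0%}: raise ~{best:.0%} "
          f"of revenue from labour income")
```

As the labour share shrinks, the loss-minimizing split moves revenue away from labour income, matching the intuition that the least distortionary mix changes as automation advances.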

Lastly, it may be worth taking a step back and asking: Is it ideal to have everyone rely on the income provided by the government? Such centralization of power and resources could be a source of distinct risks that are not yet discussed much.

Cite this work

@misc{asim2025uhi,
  title={Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation},
  author={Anusha Asim and Haochen (Lucas) Tang and Jackson Paulson and Ivan Lee and Aneesh Karanam},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Jul 28, 2025

Local Learning Coefficients Predict Developmental Milestones During Group Relative Policy Optimization

In this work, we investigate the emergence of capabilities in reinforcement learning (RL) by framing them as developmental phase transitions. We propose that the individual components of the reward function can serve as direct observables for these transitions, avoiding the need for complex, derived metrics. To test this, we trained a language model on an arithmetic task using Group Relative Policy Optimization (GRPO) and analyzed its learning trajectory with the Local Learning Coefficient (LLC) from Singular Learning Theory. Our findings show a strong qualitative correlation between spikes in the LLC—indicating a phase transition—and significant shifts in the model's behavior, as reflected by changes in specific reward components for correctness and conciseness. This demonstrates a more direct and scalable method for monitoring capability acquisition, offering a valuable proof-of-concept for developmental interpretability and AI safety. To facilitate reproducibility, we make our code available at \url{github.com/ilijalichkovski/apart-physics}.
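
As a sketch of the monitoring idea, the snippet below flags spikes in a synthetic LLC trajectory using a simple moving-average residual; the series and the detection rule are stand-ins for illustration, not the paper's data or method:

```python
# Flag candidate phase transitions as spikes in an LLC trajectory.
# The trajectory here is synthetic; the threshold rule is an assumption.
import numpy as np

rng = np.random.default_rng(1)
steps = 500
llc = 1.0 + np.cumsum(rng.normal(0, 0.01, steps))  # synthetic LLC trace
llc[200:205] += np.linspace(0.1, 0.5, 5)           # injected "transition"

window = 25
baseline = np.convolve(llc, np.ones(window) / window, mode="same")
z = llc - baseline                  # deviation from local trend
z[:window] = z[-window:] = 0        # ignore boundary artefacts
spikes = np.where(z > 3 * z.std())[0]
print("candidate transition steps:", spikes)
```

In the paper's setting, the flagged steps would then be compared against shifts in the correctness and conciseness reward components.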


Jul 28, 2025

AI agentic system epidemiology

As AI systems scale into decentralized, multi-agent deployments, emergent vulnerabilities challenge our ability to evaluate and manage systemic risks. In this work, we adapt classical epidemiological modeling (specifically SEIR compartment models) to model adversarial behavior propagation in AI agents. By solving the governing systems of ODEs with physics-informed neural networks (PINNs), we analyze stable and unstable equilibria, bifurcation points, and the effectiveness of interventions. We estimate parameters from real-world data (e.g., adversarial success rates, detection latency, patching delays) and simulate attack propagation scenarios across 8 sectors (enterprise, retail, trading, development, customer service, academia, medical, and critical infrastructure AI tools).

Our results demonstrate how agent population dynamics interact with architectural and policy design interventions to stabilize the system. This framework bridges concepts from dynamical systems and cybersecurity to offer a proactive, quantitative toolbox for AI safety. We argue that epidemic-style monitoring and tools grounded in interpretable, physics-aligned dynamics can serve as early warning systems for cascading AI agentic failures.
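
For readers unfamiliar with SEIR dynamics, a minimal sketch follows. The parameter values are invented placeholders; the paper estimates its own from real-world data and solves the system with PINNs rather than a standard ODE integrator:

```python
# Minimal SEIR sketch of adversarial behaviour spreading through a
# population of AI agents. Rates below are illustrative assumptions.
from scipy.integrate import solve_ivp

BETA, SIGMA, GAMMA = 0.4, 0.2, 0.1  # transmission, activation, patch rates

def seir(t, y):
    s, e, i, r = y
    return [-BETA * s * i,             # susceptible agents get exposed
            BETA * s * i - SIGMA * e,  # exposed become actively compromised
            SIGMA * e - GAMMA * i,     # compromised agents are patched
            GAMMA * i]                 # patched / removed agents

sol = solve_ivp(seir, (0, 160), [0.99, 0.0, 0.01, 0.0], max_step=1.0)
print(f"peak compromised share: {sol.y[2].max():.1%}")
print(f"R0 = beta/gamma = {BETA / GAMMA:.1f}")
```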


Jul 28, 2025

Momentum–Point-Perplexity Mechanics in Large Language Models

This work analyzes the hidden states of twenty different open-source transformer language models, ranging from small to medium size and covering five major architectures. The key discovery is that these models show signs of "energy conservation" during inference—meaning a certain measure combining changes in hidden states and token unpredictability stays almost constant as the model processes text.

The authors developed a new framework inspired by physics to jointly analyze how hidden states and prediction confidence evolve over time. They propose that transformers' behavior can be understood as following certain mechanical principles, much like how physical systems follow rules like conservation of energy.

Their experiments show that this conserved quantity varies very little between tokens, especially in untrained (random-weight) models, where it's extremely stable. In pre-trained models, the average energy drops more due to training, but there are larger relative fluctuations from token to token.

They also introduce a new method, based on this framework, for controlling transformer outputs by "steering" the hidden states. This method achieves good results—producing completions rated as higher in semantic quality, while still maintaining the same kind of energy stability.

Overall, the findings suggest that viewing transformer models through the lens of physical mechanics gives new, principled ways to interpret and control their behavior. It also highlights a key difference: random models behave more like balanced systems, while trained models make quicker, more decisive state changes at the cost of less precise energy conservation.
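
The summary does not give the exact functional form of the conserved quantity; purely as an illustration, one might measure a kinetic-plus-potential style quantity per token, combining a hidden-state step size with a log-unpredictability term. The decomposition below is an assumption, not the paper's definition:

```python
# Illustrative per-token "energy": a kinetic term from hidden-state
# change plus a potential term from token unpredictability. The exact
# quantity used in the paper is not specified here; this form is an
# assumption, demonstrated on random stand-in data.
import numpy as np

def token_energies(hidden_states, neg_log_probs):
    """hidden_states: (T, D) array of h_t; neg_log_probs: (T,) of -log p."""
    kinetic = 0.5 * np.sum(np.diff(hidden_states, axis=0) ** 2, axis=1)
    kinetic /= hidden_states.shape[1]   # normalize by hidden dimension
    return kinetic + neg_log_probs[1:]  # one energy per token transition

rng = np.random.default_rng(0)
E = token_energies(rng.normal(size=(64, 256)), rng.gamma(2.0, 1.5, 64))
print(f"relative fluctuation of E: {E.std() / E.mean():.3f}")
```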


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.