Apr 28, 2025

Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation

Anusha Asim, Haochen (Lucas) Tang, Jackson Paulson, Ivan Lee, Aneesh Karanam

Summary


This paper analyzes four interlinked fiscal measures proposed to fund a Universal High Income (UHI) system in response to large-scale technological automation: a unity wealth tax, a tax on unused land and property, progressive income tax reform, and the Artificial Intelligence Dividend Income (AIDI) program. Using dynamic general equilibrium modelling, IS-MP-PC frameworks, and empirical elasticity estimates, we assess the macroeconomic impacts, revenue potential, and distributional consequences of each measure. Results indicate that the combined measures could generate 8–12% of GDP in annual revenue, sufficient to sustainably support a UHI framework even with 80–90% unemployment. The wealth tax and land tax enhance fiscal resilience while reducing inequality; the progressive income tax improves administrative efficiency and boosts aggregate consumption; and AIDI channels the productivity gains of automation directly back to displaced workers and the broader public. Nonetheless, each policy presents limitations, including vulnerability to capital flight, political resistance, behavioural tax avoidance, innovation slowdowns, and enforcement complexity. AIDI, in particular, offers a novel mechanism to maintain consumer demand while moderating excessive automation, but it demands careful regulatory oversight. Overall, the findings suggest that, if carefully implemented and globally coordinated, these measures provide a robust fiscal architecture to ensure equitable prosperity in a post-labour economy dominated by artificial intelligence. Strategic design and adaptive governance will be essential to maximize economic stability, technological innovation, and social welfare during this unprecedented economic transition.
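For readers unfamiliar with the IS-MP-PC framework referenced above, a standard textbook formulation (a generic sketch, not the authors' calibration, which is not reproduced here) links the output gap, the real interest rate, and inflation:

$$
\begin{aligned}
\text{IS:}\quad & \tilde{Y}_t = \bar{a} - \bar{b}\,(R_t - \bar{r}) \\
\text{MP:}\quad & R_t - \bar{r} = \bar{m}\,(\pi_t - \bar{\pi}) \\
\text{PC:}\quad & \pi_t = \pi_{t-1} + \bar{\nu}\,\tilde{Y}_t + \bar{o}_t
\end{aligned}
$$

Here $\tilde{Y}_t$ is output relative to potential, $R_t$ the real interest rate, $\bar{r}$ the natural rate, $\pi_t$ inflation, $\bar{\pi}$ the inflation target, $\bar{o}_t$ a price shock, and $\bar{a}$, $\bar{b}$, $\bar{m}$, $\bar{\nu}$ positive parameters; in this textbook version, fiscal measures such as the taxes above would shift aggregate demand through $\bar{a}$.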

Cite this work:

@misc{
  title        = {Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation},
  author       = {Anusha Asim and Haochen (Lucas) Tang and Jackson Paulson and Ivan Lee and Aneesh Karanam},
  date         = {2025-04-28},
  organization = {Apart Research},
  note         = {Research submission to the research sprint hosted by Apart.},
  howpublished = {https://apartresearch.com}
}

Reviewer's Comments


Joel Christoph

The paper sets an ambitious goal: financing a universal high income in a world where artificial intelligence removes the majority of paid jobs. It gathers four revenue pillars: a unity wealth tax, a land and property tax on idle assets, a redesigned progressive income tax schedule, and an artificial intelligence dividend charged on the profits of highly automated firms. The abstract claims that the combined package can raise eight to twelve percent of gross domestic product, a figure the authors argue would cover transfers even if unemployment rises to ninety percent. The narrative is accessible, and the use of established macro frameworks such as the IS-MP-Phillips curve for the wealth tax and the Diamond-Saez approach for income taxation shows familiarity with modern public finance. The inclusion of an original concept called AIDI brings a creative twist that aligns revenue with the pace of automation. Figures on pages five and six display the anticipated distributional gains; for example, the bar chart on page five estimates a fall in the Gini coefficient of roughly 0.05 under the wealth tax alone.
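The Diamond-Saez approach mentioned in the review is usually summarized by the revenue-maximizing top marginal rate; in its standard textbook form (which may differ from the calibration the authors actually use):

$$\tau^{*} = \frac{1}{1 + a\,e}$$

where $a$ is the Pareto parameter of the upper tail of the income distribution and $e$ is the elasticity of taxable income with respect to the net-of-tax rate. Illustrative values of $a \approx 1.5$ and $e \approx 0.25$ give $\tau^{*} \approx 0.73$.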

Despite this breadth, the study remains largely illustrative. All parameter values are hypothetical, and no calibration to existing national accounts or tax bases is attempted. The dynamic general equilibrium modelling is referenced, but no model equations beyond skeletal identities are shown, and the paper supplies no code or sensitivity analysis. Key assumptions, such as capital flight of thirty to forty percent under unilateral wealth taxation, are asserted without evidence. The land value tax results rely on external citations, but the authors do not produce their own simulations. As a result, the headline claim that the package funds two thousand dollars per adult per month is not verifiable. The reference list is extensive, yet recent quantitative work on automation-driven tax bases and optimal redistribution under artificial intelligence is missing, so the literature anchoring is only partial.
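The reviewer's verifiability concern can be made concrete with a quick feasibility check. The sketch below is not drawn from the paper; the GDP and adult-population figures are placeholder assumptions chosen only to show the order of magnitude involved.

# Back-of-envelope check: what revenue share of GDP would a flat
# $2,000 per adult per month transfer require?
# GDP and adult population are hypothetical placeholders, not values from the paper.

def required_revenue_share(monthly_transfer: float, adults: float, gdp: float) -> float:
    """Annual cost of a flat per-adult transfer, expressed as a share of GDP."""
    return monthly_transfer * 12 * adults / gdp

share = required_revenue_share(monthly_transfer=2_000, adults=250e6, gdp=25e12)
print(f"Required revenue: {share:.1%} of GDP")  # -> 24.0% of GDP

Under these placeholder aggregates the transfer would absorb roughly 24% of GDP, about double the upper end of the 8–12% range claimed in the abstract, which is precisely the kind of gap that calibration to real national accounts would need to close.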

The link to AI safety is acknowledged but indirect. The authors argue that maintaining consumer demand and curbing extreme inequality will support social stability during a high automation transition. They do not trace how the proposed taxes would influence alignment research incentives, catastrophic misuse risk, or international compute races. A deeper discussion of how large public transfers could be conditioned on safe development norms or how AIDI could internalise externalities from risky deployment would make the paper more relevant to safety.

Technical documentation is thin. Several variables in the model statements lack units, tables omit standard errors, and the Kaggle job threat dataset mentioned in methods is not integrated into the fiscal projections. The appendix points to a Google Drive folder that is not included, so the study cannot be replicated. The graphical results are clear but no underlying data are provided.

Mar 10, 2025

Attention Pattern Based Information Flow Visualization Tool

Understanding information flow in transformer-based language models is crucial for mechanistic interpretability. We introduce a visualization tool that extracts and represents attention patterns across model components, revealing how tokens influence each other during processing. Our tool automatically identifies and color-codes functional attention head types based on established taxonomies from recent research on indirect object identification (Wang et al., 2022), factual recall (Chughtai et al., 2024), and factual association retrieval (Geva et al., 2023). This interactive approach enables researchers to trace information propagation through transformer architectures, providing deeper insights into how these models implement reasoning and knowledge retrieval capabilities.

Jan 24, 2025

Safe ai

The rapid adoption of AI in critical industries like healthcare and legal services has highlighted the urgent need for robust risk mitigation mechanisms. While domain-specific AI agents offer efficiency, they often lack transparency and accountability, raising concerns about safety, reliability, and compliance. The stakes are high, as AI failures in these sectors can lead to catastrophic outcomes, including loss of life, legal repercussions, and significant financial and reputational damage. Current solutions, such as regulatory frameworks and quality assurance protocols, provide only partial protection against the multifaceted risks associated with AI deployment. This situation underscores the necessity for an innovative approach that combines comprehensive risk assessment with financial safeguards to ensure the responsible and secure implementation of AI technologies across high-stakes industries.

Jan 24, 2025

CoTEP: A Multi-Modal Chain of Thought Evaluation Platform for the Next Generation of SOTA AI Models

As advanced state-of-the-art models like OpenAI's o-1 series, the upcoming o-3 family, Gemini 2.0 Flash Thinking and DeepSeek display increasingly sophisticated chain-of-thought (CoT) capabilities, our safety evaluations have not yet caught up. We propose building a platform that allows us to gather systematic evaluations of AI reasoning processes to create comprehensive safety benchmarks. Our Chain of Thought Evaluation Platform (CoTEP) will help establish standards for assessing AI reasoning and ensure development of more robust, trustworthy AI systems through industry and government collaboration.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.