
Apr 28, 2025

Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation

Anusha Asim, Haochen (Lucas) Tang, Jackson Paulson, Ivan Lee, Aneesh Karanam


This paper analyzes four interlinked fiscal measures proposed to fund a Universal High Income (UHI) system in response to large-scale technological automation: a unity wealth tax, an unused land and property tax, progressive income tax reform, and the Artificial Intelligence Dividend Income (AIDI) program. Using dynamic general equilibrium modelling, IS-MP-PC frameworks, and empirical elasticity estimates, we assess the macroeconomic impacts, revenue potential, and distributional consequences of each measure. Results indicate that the combined measures could generate 8–12% of GDP in annual revenue, sufficient to sustainably support a UHI framework even with 80–90% unemployment. The wealth tax and land tax enhance fiscal resilience while reducing inequality; the progressive income tax improves administrative efficiency and boosts aggregate consumption; and AIDI channels the productivity gains of automation directly back to displaced workers and the broader public. Nonetheless, each policy presents limitations, including vulnerability to capital flight, political resistance, behavioural tax avoidance, innovation slowdowns, and enforcement complexity. AIDI, in particular, offers a novel mechanism to maintain consumer demand while moderating excessive automation, but it demands careful regulatory oversight. Overall, the findings suggest that, if carefully implemented and globally coordinated, these measures provide a robust fiscal architecture to ensure equitable prosperity in a post-labour economy dominated by artificial intelligence. Strategic design and adaptive governance will be essential to maximize economic stability, technological innovation, and social welfare during this unprecedented economic transition.
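The headline feasibility claim rests on simple fiscal arithmetic: revenue raised as a share of GDP must cover the annual transfer bill for the adult population. The sketch below works through that check; the GDP level, adult population, revenue share, and transfer amount are illustrative assumptions for a hypothetical economy, not calibrated values from the paper.

```python
# Back-of-envelope check of UHI funding arithmetic.
# All parameters are hypothetical placeholders, not values
# calibrated to the paper or to any national accounts.

def uhi_funding_gap(gdp, revenue_share, adults, monthly_transfer):
    """Return annual revenue, annual transfer cost, and the surplus
    (or shortfall) of revenue over cost."""
    revenue = gdp * revenue_share          # annual revenue raised
    cost = adults * monthly_transfer * 12  # annual transfer bill
    return revenue, cost, revenue - cost

# Hypothetical economy: $25T GDP, 250M adults, 10% of GDP raised
# (midpoint of the claimed 8-12% range), $700 per adult per month.
revenue, cost, gap = uhi_funding_gap(
    gdp=25e12, revenue_share=0.10, adults=250e6, monthly_transfer=700
)
print(f"revenue ${revenue/1e12:.2f}T, cost ${cost/1e12:.2f}T, "
      f"surplus ${gap/1e12:.2f}T")
```

Sweeping `revenue_share` across the claimed 8–12% range and varying `monthly_transfer` would give the kind of sensitivity analysis needed to make the revenue-sufficiency claim concrete for a given economy.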

Cite this work:

@misc{
  title={Economic Feasibility of Universal High Income (UHI) in an Age of Advanced Automation},
  author={Anusha Asim, Haochen (Lucas) Tang, Jackson Paulson, Ivan Lee, Aneesh Karanam},
  date={4/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Reviewer's Comments


Joel Christoph

The paper sets an ambitious goal of financing a universal high income in a world where artificial intelligence removes the majority of paid jobs. It gathers four revenue pillars: a unity wealth tax, a land and property tax on idle assets, a redesigned progressive income tax schedule, and an artificial intelligence dividend charged on the profits of highly automated firms. The abstract claims that the combined package can raise eight to twelve percent of gross domestic product, a figure the authors argue would cover transfers even if unemployment rises to ninety percent. The narrative is accessible, and the use of established macro frameworks such as the IS-MP-Phillips curve for the wealth tax and the Diamond-Saez approach for income taxation shows familiarity with modern public finance. The inclusion of an original concept called AIDI brings a creative twist that aligns revenue with the pace of automation. Figures on pages five and six display the anticipated distributional gains; for example, the bar chart on page five estimates a fall in the Gini coefficient of roughly 0.05 under the wealth tax alone.

Despite this breadth, the study remains largely illustrative. All parameter values are hypothetical, and no calibration to existing national accounts or tax bases is attempted. The dynamic general equilibrium modelling is referenced, but no model equations beyond skeletal identities are shown, and the paper supplies no code or sensitivity analysis. Key assumptions, such as capital flight of thirty to forty percent under unilateral wealth taxation, are asserted without evidence. The land value tax results rely on external citations, but the authors do not produce their own simulations. As a result, the headline claim that the package funds two thousand dollars per adult per month is not verifiable. The reference list is extensive, yet recent quantitative work on automation-driven tax bases and optimal redistribution under artificial intelligence is missing, so the literature anchoring is only partial.

The link to AI safety is acknowledged but indirect. The authors argue that maintaining consumer demand and curbing extreme inequality will support social stability during a high automation transition. They do not trace how the proposed taxes would influence alignment research incentives, catastrophic misuse risk, or international compute races. A deeper discussion of how large public transfers could be conditioned on safe development norms or how AIDI could internalise externalities from risky deployment would make the paper more relevant to safety.

Technical documentation is thin. Several variables in the model statements lack units, tables omit standard errors, and the Kaggle job threat dataset mentioned in methods is not integrated into the fiscal projections. The appendix points to a Google Drive folder that is not included, so the study cannot be replicated. The graphical results are clear but no underlying data are provided.

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.