Economics of TAI Sprint Submission: A Recursively-Inspired Framework and Simulation Results for Evaluating a Value-Based UBI Policy

Ram Potham

The potential for Transformative AI (TAI) to exacerbate inequality necessitates exploring novel redistributive policies. This document presents a comprehensive framework and reports results from a conceptual simulation study evaluating the economic implications of a Universal Basic Income (UBI) funded by a significant annual tax (e.g., 4%) on company value. Drawing insights from recursive alignment (RA) and functional models of intelligence (FMI), this framework emphasizes policy adaptability and potential failure modes arising from rigid design under TAI's deep uncertainty. Developed for the Economics of TAI Sprint (Tracks 3 & 5), this document details the core research questions, analytical focal points, conceptual model structure, RA-inspired simulation design, and interprets the simulation outputs. It serves as a proof-of-concept and guide for future research into designing resilient economic policies for the TAI era.

Reviewer's Comments


Joel Christoph

The paper proposes a value-based UBI funded by a four percent annual tax on company equity and evaluates it with an agent-based model inspired by recursive alignment ideas. The concept of viewing policy design through adaptability, rigidity, and failure modes is creative and fits the sprint's call for resilient governance tools. The author outlines a clear modular framework with heterogeneous households, strategic firms, and government rules, and implements five scenarios in Mesa, reporting time series for inequality, investment, and fiscal balance on page 6 and the triplet of panels in Figure 1 on page 8.

Nonetheless the contribution remains exploratory. Key parameters that drive the simulated paths, such as the elasticity of investment to after tax returns or the stochastic process for firm value manipulation, are only described qualitatively. No calibration to empirical data, robustness checks, or confidence bands accompany the headline Figure 1 results. The decision to average just five runs for each scenario limits statistical power, and the model horizon of thirty years may not capture longer term capital accumulation or demographic feedbacks. The literature engagement focuses on recursive alignment and functional models of intelligence but omits the extensive empirical and theoretical work on UBI finance, wealth taxes, and adaptive fiscal rules.

Links to AI safety are present but indirect. The paper claims that adaptive fiscal policy can mimic recursive alignment principles and avoid stability failures, yet it does not show how the tax design interacts with advanced AI incentives, ownership concentration of compute, or catastrophic misuse channels. Adding explicit pathways from tax funded redistribution to safety outcomes, for example reduced race dynamics or funding for alignment research, would raise the safety relevance.

Technical documentation is thin. The Mesa code, parameter file, and raw outputs are not shared, preventing replication. The framework is verbally specified but lacks formal equations or a flowchart. Tables summarising baseline values, shock sizes, and adaptive rule algorithms would help readers evaluate validity. A sensitivity sweep over tax rates and adaptive triggers, together with comparisons to simpler lump sum or payroll financed UBI baselines, would illustrate the marginal value of the proposed policy.
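To illustrate the kind of formal specification the review asks for, a firm's annual step under a value tax could be written out along these lines. This is a minimal, library-agnostic sketch; the function name, parameter values, and the 20% under-reporting stand-in for "value manipulation" are all hypothetical placeholders, not the paper's actual model:

```python
import random

def firm_step(value, tax_rate, elasticity=0.5, base_invest=0.10,
              growth=0.03, gaming_prob=0.0, rng=random):
    """One annual step for a firm under an annual tax on company value.

    Investment responds to the after-tax return with a constant
    elasticity; with probability `gaming_prob` the firm under-reports
    its value by 20% (a stand-in for value manipulation).
    All parameters are illustrative placeholders.
    """
    after_tax_return = growth * (1.0 - tax_rate)
    invest_rate = base_invest * (after_tax_return / growth) ** elasticity
    reported = value * (0.8 if rng.random() < gaming_prob else 1.0)
    tax_paid = tax_rate * reported
    value = value * (1.0 + growth) + invest_rate * value - tax_paid
    return value, tax_paid
```

Even a short specification like this pins down the two quantities the review flags as only qualitatively described: the investment elasticity and the gaming process.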

Suggested next steps

1. Release the model repository with a reproducible environment and add Monte Carlo sensitivity around growth, gaming probability, and tax elasticity.

2. Calibrate initial distributions and TAI productivity shocks to stylised facts from AI macro projections.

3. Expand the literature review to include recent papers on wealth taxation feasibility and dynamic fiscal rules.

4. Map the policy’s impact on safety relevant metrics such as concentration of AI profits or funding for public goods that reduce existential risk.

5. Provide formal notation for production, household, and government behaviour to allow integration with general equilibrium or optimal control extensions.
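Suggestion 1 could be implemented along these lines: run the model under many independent seeds and report a mean with a confidence band rather than a five-run average. The `run_model` body below is a toy stand-in (lognormal initial values, noisy growth net of the tax, Gini of final values), not the paper's Mesa model; only the Monte Carlo wrapper is the point:

```python
import random
import statistics

def run_model(tax_rate, growth=0.03, horizon=30, n_firms=100, seed=0):
    """Toy stand-in for one simulation run: firms start from a lognormal
    value distribution, grow with per-firm noise net of the tax, and we
    return the Gini coefficient of final values. Purely illustrative."""
    rng = random.Random(seed)
    values = [rng.lognormvariate(0.0, 1.0) for _ in range(n_firms)]
    for _ in range(horizon):
        values = [v * (1.0 + growth - tax_rate + 0.01 * rng.gauss(0.0, 1.0))
                  for v in values]
    total = sum(values)
    shares = sorted(v / total for v in values)
    n = len(shares)
    # Gini from sorted shares: sum of (2i + 1 - n) * share_i, divided by n
    return sum((2 * i + 1 - n) * s for i, s in enumerate(shares)) / n

def monte_carlo(tax_rate, n_runs=100):
    """Mean Gini across independent seeds with a 95% confidence band."""
    ginis = [run_model(tax_rate, seed=s) for s in range(n_runs)]
    mean = statistics.mean(ginis)
    se = statistics.stdev(ginis) / n_runs ** 0.5
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```

Sweeping `tax_rate` over a grid and plotting the resulting bands would give the sensitivity analysis and confidence intervals the review asks for around Figure 1.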

Cecilia Wood

Interesting approach with clear policy implication. I thought Section 5 was particularly strong and introduced some interesting and potentially important messages. I would have liked to have understood the methodology in a lot more detail - it's difficult to assess the validity of your conclusions without understanding how you generated each of the graphs. (There should be enough detail to essentially reproduce them)

Some more specific remarks below.

Define what you mean by tax margin – do you mean the marginal rate of taxation?

When you talk about incentives, I would talk about efficiency losses at the equilibrium (which implies strategic behaviour). This project felt very ambitious at first – it seemed like you were introducing a lot of non-standard assumptions, unless you were basing this on an existing model (this isn't my area of expertise). My guess is that including all of them at once would make it hard both to solve analytically and to draw conclusions about how each addition changes the dynamics, but a simulation-based approach is more tractable.

Explain recursive alignment and FMI (you define in introduction but don’t explain). I didn't quite understand how this impacted your approach, and how your approach differs from standard macro models.

Fiscal neutrality is a nice goal but hard in practice – do you mean scenario 2 as a benchmark, ie what’s achievable in an ideal scenario? Worth discussing limitations (e.g. governments might need to accurately understand productivity changes, which are hard to estimate even if you did have full access to corporate data)

Key parameters set to plausible values – ideally, find another well-known paper which you can point to, or otherwise justify why these values are plausible.

5 runs aren’t generally enough. It is also unclear where you introduce stochasticity in your modelling.
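The point about run counts can be made concrete: with independent seeds, the standard error of a scenario's mean outcome shrinks as 1/√n, so the run budget needed for a target precision follows directly (a generic calculation, not specific to the paper's model):

```python
import math

def runs_needed(run_std, target_se):
    """Independent runs required so the standard error of the mean,
    run_std / sqrt(n), falls at or below target_se."""
    return math.ceil((run_std / target_se) ** 2)
```

For example, if the across-run standard deviation of an outcome is ten times the precision you want to report, roughly a hundred runs are needed, not five.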

Section 5 has some thoughtful discussion of the results - negligible policy impact on investment rate is very interesting.

Yulu Pi

This paper investigates a UBI policy funded through a substantial annual tax on company value, as a potential response to the growing inequality that TAI might exacerbate. While the concepts of RA and FMI are mentioned frequently, they are never clearly defined, making it hard to fully understand the framework or assess the reasoning behind the simulations.

Your project rightly highlights the uncertainty around TAI development as a key factor in designing and evaluating UBI policy. This focus is well-placed and necessary. However, the simulation could do more to reflect that uncertainty. Expanding the scenarios to include not just different policy designs, but also how the same policy performs under varied TAI trajectories, would provide a more robust test of the model.

For readers who aren’t already familiar with the Mesa simulation library, the technical setup could use more clarity. It would be helpful to explain how key assumptions were made, how agent behavior was modeled, and what led to specific outcomes—such as the finding that the tax had little effect on investment.

This is a very timely contribution which tackles important issues TAI might raise. It asks important questions and presents a really creative modeling approach. Addressing the issues mentioned above would help the paper become more robust.

Rubi Hudson

This project could be improved by writing out the equations for the model and justifying why those modelling choices were made. The parameter values used for simulations should be specified.

Engaging with existing literature and citing it would help improve this work.

It's unclear how this model was inspired by recursive alignment and functional models of intelligence.

Strong research questions often involve a comparison along a single axis. What's the state of the art or default policy, and how does your work improve on it? Answering this question clearly would make for a stronger project.

Cite this work

@misc{
  title={Economics of TAI Sprint Submission: A Recursively-Inspired Framework and Simulation Results for Evaluating a Value-Based UBI Policy},
  author={Ram Potham},
  date={4/28/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}



This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.