The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

Félix Dorn, Xavier Ferrés, Elsie Jang, Valmik Nahata

🏆 1st place by peer review

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical yet underexplored factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the "effective time" (a proxy for temporal coherence) that humans need to complete remote O*NET tasks, we find a non-linear relationship between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80–84% of the analyzed remote tasks.
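To make the estimation pipeline concrete, the following is a minimal sketch of how an LLM-based "effective time" estimate could be produced and aggregated into an automation share. The prompt wording, model name, and parsing logic are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code): ask an LLM for the "effective
# time" a human would need per remote O*NET task, then compute the share of
# tasks that fit within a given coherence horizon (e.g. 8 hours).
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def estimate_effective_hours(task_description: str) -> float:
    """Ask the model for the effective time (in hours) a human would need."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": (
                "Estimate the effective time, in hours, a skilled remote "
                "worker would need to complete this task. Reply with a "
                f"single number only.\n\nTask: {task_description}"
            ),
        }],
    )
    match = re.search(r"\d+(\.\d+)?", response.choices[0].message.content)
    return float(match.group()) if match else float("nan")

def share_automatable(effective_hours: list[float], coherence_hours: float) -> float:
    """Fraction of tasks whose effective time fits within the coherence horizon."""
    valid = [h for h in effective_hours if h == h]  # drop NaN estimates
    return sum(h <= coherence_hours for h in valid) / len(valid)

# Example usage with an 8-hour coherence capability:
# hours = [estimate_effective_hours(t) for t in remote_onet_tasks]
# print(share_automatable(hours, coherence_hours=8))
```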

Reviewer's Comments


Yulu Pi

The idea of temporal coherence as a distinct axis of task difficulty is timely and well-framed. The authors do an excellent job grounding their work in current task-based economic models and recent literature on transformative AI. This creates a strong conceptual bridge to the new bottleneck they highlight. However, temporal coherence is a nuanced and novel concept, and it should be introduced more clearly and earlier in the abstract or introduction.

To strengthen the foundation further, the paper would benefit from engagement with cognitive science or psychology literature that explores the types of cognitive capacities required to perform long-horizon tasks. This could help clarify how temporal coherence maps onto real-world human capabilities and limitations.

The reliance on a single model and a relatively small validation set (45 tasks) raises some concerns about robustness. More detail is needed to establish why the manually annotated golden values are reliable enough to validate the model’s outputs. Perhaps the most critical shortcoming is that the system prompt asks the model to estimate only the “effective time” required for a human to complete the task, using that time as a proxy for coherence. This implicitly reduces temporal coherence to task duration, which appears to oversimplify the concept.

One final suggestion is to include some modeling of the cost of labor versus the cost of automation. A task being technically automatable does not necessarily make it economically viable to automate. Factoring in cost dynamics would make the projections more grounded.
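As a rough illustration of this suggestion, a first-pass viability check could simply compare labor cost with inference cost per task; the wage and cost figures below are hypothetical placeholders, not estimates from the paper.

```python
# Sketch of the reviewer's cost-dynamics suggestion (not from the paper):
# a task is only economically worth automating if running the AI for its
# effective time costs less than the human labor it replaces.

def economically_automatable(effective_hours: float,
                             hourly_wage: float = 30.0,
                             ai_cost_per_hour: float = 5.0) -> bool:
    """Compare human labor cost with AI inference cost for one task."""
    human_cost = effective_hours * hourly_wage
    ai_cost = effective_hours * ai_cost_per_hour
    return ai_cost < human_cost

# Example: an 8-hour task at an assumed $30/h wage vs. $5/h inference cost.
print(economically_automatable(8.0))  # True under these assumed costs
```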

I do like the idea behind the paper; it is certainly worth more future work.

Alex Foster

* Generally well written and well formatted

* Good use of citations and references, but would have benefited from a conventional journal format

* While I might disagree with the fundamental premise, it was generally well-founded and sufficiently supported.

* The greater problem was the logic of the approach:

- Temporal coherence may certainly be a key factor, but little evidence was provided to support the claim that it is the most important factor for the economic impact of TAI. For example, temporal coherence alone is of little value without the ability to act on it, leverage tools, and operate in novel environments (as well as appropriately delegate subtasks and coordinate with other agents).

- Further, the paper failed to recognize the multiple dimensions of temporal coherence, and moreover the safety risks that arise with long-term planning and coherence.

* Lastly, the paper would have benefited significantly from discussing specific measurements of temporal coherence.

* Best paper I've reviewed so far by a significant margin. Well done.

Rubi Hudson

I would say this project identifies a relevant problem and provides a good first-pass attempt at investigating it.

While I agree that temporal coherence is a current bottleneck to task completion, it doesn't seem to me like the time to complete tasks is getting at the key restriction. Tasks of the same length can require very different levels of coherence to complete, so I would encourage thinking further about the specifics of what the bottleneck is. Memory? Adaptability?

Validating the LLM estimates is important, but only 45 examples seems very low. Even with the time constraints of a hack-a-thon, additional examples would be very helpful for establishing that the LLM estimates can be trusted. The randomly drawn examples should also be chosen to be representative (e.g. randomly draw a couple tasks that the LLM says will take 10 years), to address concerns that LLMs may be systematically wrong about longer horizon tasks, tasks in particular industries, etc.
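One way to implement such a representative draw is to stratify the validation sample over the LLM's own estimates, as sketched below; the bin edges and per-bin sample sizes are arbitrary assumptions.

```python
# Minimal sketch of a stratified validation draw, as the reviewer suggests:
# bin tasks by the LLM's estimated effective time, then sample from every
# bin so long-horizon tasks are represented in the validation set.
import random

def stratified_validation_sample(estimates: dict[str, float],
                                 per_bin: int = 10,
                                 seed: int = 0) -> list[str]:
    """estimates maps task id -> LLM-estimated effective time in hours."""
    bins = {"<1h": [], "1h-8h": [], "8h-1mo": [], ">1mo": []}
    for task, hours in estimates.items():
        if hours < 1:
            bins["<1h"].append(task)
        elif hours <= 8:
            bins["1h-8h"].append(task)
        elif hours <= 160:  # roughly one working month
            bins["8h-1mo"].append(task)
        else:
            bins[">1mo"].append(task)
    rng = random.Random(seed)
    sample = []
    for tasks in bins.values():
        sample.extend(rng.sample(tasks, min(per_bin, len(tasks))))
    return sample
```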

It would be helpful to measure the consistency of the LLM against itself: if it's asked to label the same task multiple times, how frequently does the label change?
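A minimal version of that self-consistency check might look like the following; it reuses the hypothetical `estimate_effective_hours` helper from the earlier sketch.

```python
# Sketch of the self-consistency check the reviewer proposes: query the
# same task several times and look at the spread of the estimates.
import statistics

def label_consistency(task: str, n_trials: int = 5) -> dict[str, float]:
    labels = [estimate_effective_hours(task) for _ in range(n_trials)]
    return {
        "mean_hours": statistics.mean(labels),
        "stdev_hours": statistics.stdev(labels) if n_trials > 1 else 0.0,
        "distinct_labels": len(set(labels)),
    }
```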

This project takes the progression of the coherent time horizon as a given, but it would be helpful to introduce uncertainty there and plot the relevant curves under different scenarios.
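One simple way to introduce that uncertainty is to sweep a few assumed coherence-horizon trajectories and plot the resulting automatable share over time; every trajectory parameter below (starting horizon, doubling times, toy task data) is hypothetical.

```python
# Sketch of the scenario analysis the reviewer suggests: plot the share of
# tasks within the coherence horizon under fast / central / slow progress.
import numpy as np
import matplotlib.pyplot as plt

def coherence_horizon(years: np.ndarray, start_hours: float, doubling_years: float) -> np.ndarray:
    """Assumed exponential growth of the coherent time horizon."""
    return start_hours * 2 ** (years / doubling_years)

years = np.linspace(0, 10, 100)
scenarios = {"fast (0.5y doubling)": 0.5, "central (1y doubling)": 1.0, "slow (2y doubling)": 2.0}

# `hours` would be the per-task effective-time estimates; toy placeholder data here.
hours = np.array([0.5, 2, 8, 40, 200, 2000])

for name, doubling in scenarios.items():
    horizon = coherence_horizon(years, start_hours=1.0, doubling_years=doubling)
    share = [(hours <= h).mean() for h in horizon]
    plt.plot(years, share, label=name)

plt.xlabel("Years from now")
plt.ylabel("Share of tasks within coherence horizon")
plt.legend()
plt.show()
```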

Cite this work

@misc{dorn2025temporalcoherence,
  title        = {The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence},
  author       = {Félix Dorn and Xavier Ferrés and Elsie Jang and Valmik Nahata},
  date         = {2025-04-28},
  organization = {Apart Research},
  note         = {Research submission to the research sprint hosted by Apart.},
  howpublished = {https://apartresearch.com}
}


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.