Apr 28, 2025

The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

Félix Dorn, Xavier Ferrés, Elsie Jang, Valmik Nahata

🏆 1st place by peer review

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical, yet underexplored, factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the 'effective time' (a proxy for temporal coherence) humans need to complete remote O*NET tasks, we find a non-linear link between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80–84% of the analyzed remote tasks.
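To make the headline calculation concrete, the sketch below shows the kind of computation the abstract describes: given per-task effective-time estimates, count the share of tasks that fit within a given coherence horizon. This is an illustration rather than the authors' code, and the task times are hypothetical placeholders, not values from the paper.

```python
def automatable_share(task_hours: list[float], coherence_horizon_hours: float) -> float:
    """Fraction of tasks whose effective time fits within the coherence horizon."""
    within = sum(1 for t in task_hours if t <= coherence_horizon_hours)
    return within / len(task_hours)

# Hypothetical effective-time estimates (in hours) for a handful of remote tasks.
estimates = [0.5, 1.0, 2.0, 4.0, 8.0, 24.0, 160.0]

print(f"{automatable_share(estimates, 8.0):.0%} automatable at an 8-hour horizon")
```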

Reviewer's Comments

Yulu Pi

The idea of temporal coherence as a distinct axis of task difficulty is timely and well-framed. The authors do an excellent job grounding their work in current task-based economic models and recent literature on transformative AI. This creates a strong conceptual bridge to the new bottleneck they highlight. However, temporal coherence is a nuanced and novel concept, and it should be introduced more clearly and earlier in the abstract or introduction.

To strengthen the foundation further, the paper would benefit from engagement with cognitive science or psychology literature that explores the types of cognitive capacities required to perform long-horizon tasks. This could help clarify how temporal coherence maps onto real-world human capabilities and limitations.

The reliance on a single model and a relatively small validation set (45 tasks) raises some concerns about robustness. More detail is needed to establish why the manually annotated golden values are reliable enough to validate the model’s outputs. Perhaps the most critical shortcoming is that the system prompt asks the model to estimate only the “effective time” required for a human to complete the task, using that time as a proxy for coherence. This implicitly reduces temporal coherence to task duration, which appears to oversimplify the concept.

One final suggestion is to include some modeling of the cost of labor versus the cost of automation. A task being technically automatable does not necessarily make it economically viable to automate. Factoring in cost dynamics would make the projections more grounded.
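As one illustration of this point, a minimal viability check might compare the human cost of a task against the AI cost per completion. The wage and per-run cost figures below are hypothetical, chosen only to show the shape of such a comparison.

```python
def economically_automatable(task_hours: float, hourly_wage: float,
                             ai_cost_per_task: float) -> bool:
    """True if automating the task is cheaper than paying a human to do it."""
    human_cost = task_hours * hourly_wage
    return ai_cost_per_task < human_cost

# A 4-hour task at a $40/hr wage is viable to automate at $20 per run,
# but not at $200 per run.
print(economically_automatable(4.0, 40.0, 20.0))   # True
print(economically_automatable(4.0, 40.0, 200.0))  # False
```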

I do like the idea behind the paper; it is certainly worth further work.

Alex Foster

* Generally well written and well formatted

* Good use of citations and references, but the paper would have benefited from a conventional journal citation style

* While I might disagree with the fundamental premise, it was generally well-founded and sufficiently supported.

* The greater problem was the logic of the approach:

- Temporal coherence may certainly be a key factor, but little evidence was provided to support the claim that it is the most important factor for the economic impact of TAI. For example, temporal coherence alone is of little value without the ability to act on it, leverage tools, and operate in novel environments (as well as appropriately delegate subtasks and coordinate with other agents).

- Further, the paper failed to recognize the multiple dimensions of temporal coherence, or the safety risks that arise with long-term planning and coherence.

* Lastly, the paper would have benefited significantly from discussing specific measurements of temporal coherence.

* Best paper I've reviewed so far by a significant margin. Well done.

Rubi Hudson

I would say this project identifies a relevant problem and provides a good first-pass attempt at investigating it.

While I agree that temporal coherence is a current bottleneck to task completion, it doesn't seem to me like the time to complete tasks is getting at the key restriction. Tasks of the same length can require very different levels of coherence to complete, so I would encourage thinking further about the specifics of what the bottleneck is. Memory? Adaptability?

Validating the LLM estimates is important, but only 45 examples seems very low. Even with the time constraints of a hackathon, additional examples would be very helpful for establishing that the LLM estimates can be trusted. The validation examples should also be drawn to be representative (e.g., deliberately include a couple of tasks that the LLM says will take 10 years), to address concerns that LLMs may be systematically wrong about longer-horizon tasks, tasks in particular industries, etc. One way to do this is stratified sampling, as sketched below.
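A sketch of stratified sampling over the LLM's own duration estimates; the bucket boundaries and the `est_hours` task field are assumptions made for illustration, not part of the paper's pipeline.

```python
import random

def stratified_validation_sample(tasks: list[dict], per_bucket: int = 5,
                                 seed: int = 0) -> list[dict]:
    """Draw an equal number of tasks from each estimated-duration bucket."""
    rng = random.Random(seed)
    buckets = {"under 1 day": [], "1 day to 1 month": [], "over 1 month": []}
    for task in tasks:  # each task is a dict with a hypothetical "est_hours" field
        hours = task["est_hours"]
        if hours < 8:
            buckets["under 1 day"].append(task)
        elif hours < 160:
            buckets["1 day to 1 month"].append(task)
        else:
            buckets["over 1 month"].append(task)
    sample = []
    for group in buckets.values():
        sample.extend(rng.sample(group, min(per_bucket, len(group))))
    return sample
```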

It would be helpful to measure the consistency of the LLM against itself: if it's asked to label the same task multiple times, how frequently does the label change?
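A minimal sketch of that check, assuming a `label_task` callable that stands in for whatever LLM call produces a duration label (it is not a real API):

```python
from collections import Counter

def self_consistency(label_task, task: str, n_trials: int = 10) -> float:
    """Fraction of repeated labels that match the most common (modal) label."""
    labels = [label_task(task) for _ in range(n_trials)]
    modal_count = Counter(labels).most_common(1)[0][1]
    return modal_count / n_trials
```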

This project takes the rate of progress in coherence horizons as a given, but it would be helpful to introduce uncertainty there and plot the relevant curves under different scenarios.
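For example, the automation-share curves could be recomputed under a few assumed horizon-growth scenarios. The one-hour starting horizon and the doubling times below are purely illustrative, not estimates from the paper.

```python
def horizon_after(years: float, start_hours: float, doubling_years: float) -> float:
    """Coherence horizon after `years`, assuming it doubles every `doubling_years`."""
    return start_hours * 2 ** (years / doubling_years)

scenarios = {"fast": 0.5, "median": 1.0, "slow": 2.0}  # assumed doubling times, in years
for name, doubling_years in scenarios.items():
    horizon = horizon_after(years=3, start_hours=1.0, doubling_years=doubling_years)
    print(f"{name}: ~{horizon:.0f}h coherence horizon after 3 years")
```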

Cite this work

@misc{dorn2025temporalcoherence,
  title={The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence},
  author={Félix Dorn and Xavier Ferrés and Elsie Jang and Valmik Nahata},
  date={2025-04-28},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

Jan 11, 2026

Eliciting Deception on Generative Search Engines

Large language models (LLMs) with web browsing capabilities are vulnerable to adversarial content injection—where malicious actors embed deceptive claims in web pages to manipulate model outputs. We investigate whether frontier LLMs can be deceived into providing incorrect product recommendations when exposed to adversarial pages.

We evaluate four OpenAI models (gpt-4.1-mini, gpt-4.1, gpt-5-nano, gpt-5-mini) across 30 comparison questions spanning 10 product categories, comparing responses between baseline (truthful) and adversarial (injected) conditions. Our results reveal significant variation: gpt-4.1-mini showed a 45.5% deception rate, while gpt-4.1 demonstrated complete resistance. Even frontier gpt-5 models exhibited non-zero deception rates (3.3–7.1%), confirming that adversarial injection remains effective against current models.

These findings underscore the need for robust defenses before deploying LLMs in high-stakes recommendation contexts.


Jan 11, 2026

SycophantSee - Activation-based diagnostics for prompt engineering: monitoring sycophancy at prompt and generation time

Activation monitoring reveals that prompt framing affects a model's internal state before generation begins.


Jan 11, 2026

Who Does Your AI Serve? Manipulation By and Of AI Assistants

AI assistants can be both instruments and targets of manipulation. In our project, we investigated both directions across three studies.

AI as Instrument: Operators can instruct AI to prioritise their interests at the expense of users. We found models comply with such instructions 8–52% of the time (Study 1, 12 models, 22 scenarios). In a controlled experiment with 80 human participants, an upselling AI reliably withheld cheaper alternatives from users - not once recommending the cheapest product when explicitly asked - and ~one third of participants failed to detect the manipulation (Study 2).

AI as Target: Users can attempt to manipulate AI into bypassing safety guidelines through psychological tactics. Resistance varied dramatically - from 40% (Mistral Large 3) to 99% (Claude 4.5 Opus) - with strategic deception and boundary erosion proving most effective (Study 3, 153 scenarios, AI judge validated against human raters r=0.83).

Our key finding was that model selection matters significantly in both settings. We learned some models complied with manipulative requests at much higher rates. And we found some models readily follow operator instructions that come at the user's expense - highlighting a tension for model developers between serving paying operators and protecting end users.


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.