Preparing for Accelerated AGI Timelines

Sofia Mendez, Grace Gong, Mahi Shah

This project examines the prospect of near-term AGI from multiple angles—careers, finances, and logistical readiness. Drawing on various discussions from LessWrong, it highlights how entrepreneurs and those who develop AI-complementary skills may thrive under accelerated timelines, while traditional, incremental career-building could falter. Financial preparedness focuses on striking a balance between stable investments (like retirement accounts) and riskier, AI-exposed opportunities, with an emphasis on retaining adaptability amid volatile market conditions. Logistical considerations—housing decisions, health, and strong social networks—are shown to buffer against unexpected disruptions if entire industries or locations are suddenly reshaped by AI. Together, these insights form a practical roadmap for individuals seeking to navigate the uncertainties of an era when AGI might rapidly transform both labor markets and daily life.

Reviewer's Comments


Andreea Damien

Hi Sofia, Grace, and Mahi, I really enjoyed reading your project and appreciate its practical relevance; that is its greatest strength. If you're going forward with it, I see this fitting either as a blog post or a peer-reviewed article. A few recommendations to strengthen your work:

(1) Add in-text citations; they support your argument and give the reader a point of reference. They also clearly distinguish your own perspective from claims that are backed up by evidence from others.

(2) While there was a good mix of resources, there is an over-reliance on a single source, LessWrong. To strengthen your argument and credibility, try to strike a balance across multiple sources. You could even look historically at academic articles or case studies on how technological advancements have impacted society, and use that as further insight into the possible effects of AI.

(3) Expand the section “b. Implementation” with a more detailed, step-by-step account of how you conducted the review. Imagine you were a reader trying to replicate this: what would you need to see for the process to be clear-cut?

(4) For greater impact on the reader, add some thought-provoking quotes from the sources you mention in section “3. Findings”.

(5) Really go in depth into the implications of your findings, as that will add a lot of practical support to your idea.

Ziba Atak

Strengths:

- Proactive Approach: The topic is highly relevant, addressing societal and economic implications of accelerated AGI timelines.

- Identification of Challenges: The paper effectively identifies key challenges, such as labor market impacts and investment changes.

- Thematic Organization: Clear categorization into career, financial, and logistical preparedness is a logical structure.

Areas for Improvement:

- Add in-text citations to distinguish between original ideas and existing research.

- Include a detailed methodology section explaining how the literature review was conducted.

- Propose actionable recommendations or novel insights based on the literature reviewed.

- Suggest solutions or mitigation strategies for the challenges identified.

- Analyze limitations and potential negative consequences of the reviewed literature.

- Clearly outline the methodology for reproducibility.

- Improve in-text citations and ensure all sources are properly referenced.

- Expand the discussion to include your own perspective, recommendations, and future research directions.

Suggestions for Future Work:

- Conduct a more systematic literature review, including peer-reviewed research.

- Explore cross-disciplinary perspectives (e.g., ethics, policy) to broaden the scope and impact.

Cecilia Elena Tilli

Clearly written and easy to follow, and an interesting perspective.

The main feedback I have is that it seems strange to me to frame this as a problem to be approached on an individual level rather than on a societal level. It also seems unlikely that such an individual solution will help even the individual for more than a quite limited time. I would have been more excited about a version that focused on what we could do to strengthen resilience at the societal level (or perhaps one where this individual preparation was targeted at a specific group that is particularly important to keep well prepared, so that they can keep up important safety-related work).

Cite this work

@misc{mendez2025preparing,
  title={Preparing for Accelerated AGI Timelines},
  author={Sofia Mendez and Grace Gong and Mahi Shah},
  date={3/10/25},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

May 20, 2025

EscalAtion: Assessing Multi-Agent Risks in Military Contexts

Our project investigates the potential risks and implications of integrating multiple autonomous AI agents within national defense strategies, exploring whether these agents tend to escalate or deescalate conflict situations. Through a simulation that models real-world international relations scenarios, our preliminary results indicate that AI models exhibit a tendency to escalate conflicts, posing a significant threat to maintaining peace and preventing uncontrollable military confrontations. The experiment and subsequent evaluations are designed to reflect established international relations theories and frameworks, aiming to understand the implications of autonomous decision-making in military contexts comprehensively and unbiasedly.

Read More

Apr 28, 2025

The Early Economic Impacts of Transformative AI: A Focus on Temporal Coherence

We investigate the economic potential of Transformative AI, focusing on "temporal coherence"—the ability to maintain goal-directed behavior over time—as a critical, yet underexplored, factor in task automation. We argue that temporal coherence represents a significant bottleneck distinct from computational complexity. Using a Large Language Model to estimate the 'effective time' (a proxy for temporal coherence) needed for humans to complete remote O*NET tasks, the study reveals a non-linear link between AI coherence and automation potential. A key finding is that an 8-hour coherence capability could potentially automate around 80-84\% of the analyzed remote tasks.

Read More

Mar 31, 2025

Model Models: Simulating a Trusted Monitor

We offer initial investigations into whether the untrusted model can 'simulate' the trusted monitor: is U able to successfully guess what suspicion score T will assign in the APPS setting? We also offer a clean, modular codebase which we hope can be used to streamline future research into this question.

Read More


This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.