Feb 2, 2026

Red Lines Forecasting: When Will Frontier AI Cross Compute Thresholds?

Kwesi Amanfu

We forecast when frontier AI training runs will cross compute thresholds relevant to AI governance. Frontier models are those trained with the largest computational budgets each year; we measure training compute in FLOPs (floating-point operations, the total computational work used to train a model). Using Epoch AI data, we find that both policy-defined thresholds (EU AI Act 10²⁵ FLOP, US EO 10²⁶ FLOP) have already been crossed. Compute milestones at 10×, 100×, and approximately 3000× the US EO threshold are expected within one to three years. We validate forecasts with rolling-origin cross-validation and calibration correction. We distinguish policy thresholds from compute milestones: the latter are training-budget forecasts, not capability claims.
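For readers who want the mechanics, the core of the forecast reduces to a log-linear fit plus a crossing-time solve, validated with rolling-origin splits. The sketch below is a minimal illustration under assumed inputs: the years, log10-FLOP values, and helper names are invented for exposition and are not the paper's Epoch AI series or code.

```python
# Minimal sketch of the threshold-crossing forecast. The (year, log10 FLOP)
# pairs below are illustrative stand-ins for the largest training run each
# year, not the actual Epoch AI dataset.
import numpy as np

years = np.array([2018, 2019, 2020, 2021, 2022, 2023, 2024], dtype=float)
log10_flop = np.array([23.3, 23.7, 24.1, 24.5, 25.0, 25.4, 25.9])

def crossing_year(years, log10_flop, threshold_log10):
    """Fit log10(compute) ~ year by least squares and solve for the year
    the trend line reaches a threshold given in log10 FLOP."""
    slope, intercept = np.polyfit(years, log10_flop, 1)
    return (threshold_log10 - intercept) / slope

# Rolling-origin cross-validation: refit on growing prefixes of the series
# and score one-step-ahead predictions, mirroring the paper's validation.
errors = []
for k in range(3, len(years)):
    slope, intercept = np.polyfit(years[:k], log10_flop[:k], 1)
    errors.append(abs(slope * years[k] + intercept - log10_flop[k]))
print("mean one-step-ahead error (log10 FLOP):", np.mean(errors))

# Forecast crossings of the US EO threshold (1e26 FLOP) and the milestones
# discussed in the abstract (10x, 100x, ~3000x the EO threshold).
for label, t in [("US EO 1e26", 26.0), ("10x EO", 27.0),
                 ("100x EO", 28.0), ("~3000x EO", 29.5)]:
    print(label, "crossing year ~", round(crossing_year(years, log10_flop, t), 1))
```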

Reviewer's Comments

The elephant in the room is saturation modeling. The paper acknowledges the possibility of diminishing returns but doesn't attempt any S-curve or logistic alternatives, which would substantially strengthen the analysis given active discussion of data exhaustion, hardware limits, and algorithmic diminishing returns. The analysis also ignores algorithmic efficiency entirely: raw FLOP trends don't capture that each FLOP buys more capability over time. With only 14 frontier data points, the choice of the piecewise model over the more parsimonious linear model deserves a more explicit defense.
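For concreteness, a quick version of the sensitivity analysis the review suggests could look like the sketch below: fit a logistic (S-curve) in log10-FLOP space alongside the linear trend and compare their forecasts. The data, functional form, and initial guesses are illustrative assumptions, not the paper's.

```python
# Saturation sensitivity check: logistic vs linear trend in log10-FLOP
# space. All values below are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([2018, 2019, 2020, 2021, 2022, 2023, 2024], dtype=float)
log10_flop = np.array([23.3, 23.7, 24.1, 24.5, 25.0, 25.4, 25.9])

def logistic(t, base, height, midpoint, rate):
    # The forecast saturates at base + height as t grows large.
    return base + height / (1.0 + np.exp(-rate * (t - midpoint)))

p0 = [23.0, 6.0, 2026.0, 0.5]  # start near today's level, ~6 OoM headroom
bounds = ([20.0, 0.0, 2015.0, 0.0], [30.0, 20.0, 2060.0, 5.0])
params, _ = curve_fit(logistic, years, log10_flop, p0=p0, bounds=bounds,
                      maxfev=20000)

linear = np.polyfit(years, log10_flop, 1)
for horizon in (2026.0, 2028.0, 2030.0):
    print(f"{horizon:.0f}: linear {np.polyval(linear, horizon):.1f} "
          f"vs logistic {logistic(horizon, *params):.1f} log10 FLOP")
```

If the two curves diverge materially inside the forecast horizon, the crossing dates are sensitive to the saturation assumption; if not, the linear extrapolation is the safer parsimonious choice.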

Impact potential & innovation: 2.5

Plausibly useful infrastructure for policymakers thinking about timelines. But the backward-looking finding will not be surprising, and it would have been more helpful to offer policy recommendations or some more novel addition to the discourse.

Execution quality: 3.5

The validation framework is fairly rigorous, higher than the average hackathon submission. It would have been helpful to explore the saturation question, potentially with a quick sensitivity analysis.

Clarity: 3.5

The structure is logical and the writing accessible; I imagine this would be legible to a regular journalist or political staffer. There are small amounts of sloppiness, e.g. 'data sources' in Section 3.1 is empty, but that's forgivable given the hackathon setting. I am a bit confused by the methodology section claiming the selected model was piecewise while Figure 4 shows a linear fit. I doubt this matters much, but it adds a bit of confusion.

Cite this work

@misc{
  title={(HckPrj) Red Lines Forecasting: When Will Frontier AI Cross Compute Thresholds?},
  author={Kwesi Amanfu},
  date={2026-02-02},
  organization={Apart Research},
  note={Research submission to the research sprint hosted by Apart.},
  howpublished={https://apartresearch.com}
}

Recent Projects

View All

Feb 2, 2026

Prototyping an Embedded Off-Switch for AI Compute

This project prototypes an embedded off-switch for AI accelerators. The security block requires periodic cryptographic authorization to operate: the chip generates a nonce, an external authority signs it, and the chip verifies the signature before granting time-limited permission. Without valid authorization, outputs are gated to zero. The design was implemented in HardCaml and validated in simulation.
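The challenge-response loop the description outlines can be made concrete in a few lines. The sketch below is an illustration of the general protocol, assuming Ed25519 signatures and an invented `SecurityBlock` class with an illustrative lease length; it is not the project's HardCaml implementation.

```python
# Challenge-response authorization: the chip issues a nonce, an external
# authority signs it, and the chip verifies before granting a time-limited
# lease. Without a live lease, outputs are gated to zero.
import os, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

LEASE_SECONDS = 60  # illustrative permission window

authority_key = Ed25519PrivateKey.generate()   # held off-chip by the authority
authority_pub = authority_key.public_key()     # provisioned into the chip

class SecurityBlock:
    def __init__(self, pubkey):
        self.pubkey = pubkey
        self.lease_expiry = 0.0

    def challenge(self) -> bytes:
        self.nonce = os.urandom(32)            # fresh nonce per request
        return self.nonce

    def authorize(self, signature: bytes) -> None:
        self.pubkey.verify(signature, self.nonce)  # raises if invalid
        self.lease_expiry = time.time() + LEASE_SECONDS

    def gate(self, output):
        # Outputs pass only while a valid, unexpired lease is held.
        return output if time.time() < self.lease_expiry else 0

chip = SecurityBlock(authority_pub)
sig = authority_key.sign(chip.challenge())     # external authority signs nonce
chip.authorize(sig)
print(chip.gate(42))                           # 42 while the lease is live
```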

Read More

Feb 2, 2026

Fingerprinting All AI Cluster I/O Without Mutually Trusted Processors

We design and simulate a "border patrol" device for generating cryptographic evidence of data traffic entering and leaving an AI cluster, while eliminating the specific analog and steganographic side-channels that post-hoc verification cannot close. The device removes the need for any mutually trusted logic while still meeting the security needs of the prover and the verifier.
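One simple way to produce cryptographic evidence of cluster I/O is to hash-chain every observed message so a verifier can later check the full traffic transcript. The sketch below illustrates that general idea with an invented `TrafficLog` class; it is not the paper's device or its side-channel countermeasures.

```python
# Hash-chained traffic log: each observed message updates a running
# SHA-256 commitment over all prior traffic, in both directions.
import hashlib

class TrafficLog:
    def __init__(self):
        self.head = b"\x00" * 32  # genesis value of the hash chain

    def observe(self, direction: str, payload: bytes) -> bytes:
        # Commit to direction + payload, chained to all earlier messages.
        self.head = hashlib.sha256(
            self.head + direction.encode() + payload
        ).digest()
        return self.head  # publishable commitment after each message

log = TrafficLog()
log.observe("in", b"training batch 17")
evidence = log.observe("out", b"gradient update 17")
print(evidence.hex())  # verifier recomputes the chain from the transcript
```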

Read More

Feb 2, 2026

Modelling the impact of verification in cross-border AI training projects

This paper develops a stylized game-theoretic model of cross-border AI training projects in which multiple states jointly train frontier models while retaining national control over compute resources. We focus on decentralized coordination regimes, where actors publicly pledge compute contributions but privately choose actual delivery, creating incentives to free-ride on a shared public good. To address this, the model introduces explicit verification mechanisms, represented as a continuous monitoring intensity that improves the precision of noisy signals about each actor's true compute contribution. Our findings suggest that policymakers designing international AI governance institutions face a commitment problem: half-measures in verification are counterproductive, and effective regimes require either accepting some free-riding or investing substantially in monitoring infrastructure.
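The abstract's model class can be sketched numerically: states pledge compute, privately deliver some fraction, and a monitor of intensity m observes delivery plus noise whose precision grows with m. The functional forms, parameters, and names below are illustrative assumptions, not the paper's calibration.

```python
# Toy simulation of the verification game: compare the expected payoff of
# delivering the full pledge vs free-riding as monitoring intensity varies.
import numpy as np

rng = np.random.default_rng(0)
N, PLEDGE, PENALTY, BENEFIT, COST = 3, 1.0, 2.0, 1.5, 1.0

def signal(delivery, m):
    # Monitoring intensity m shrinks the noise on the observed delivery.
    return delivery + rng.normal(0.0, 1.0 / (1.0 + 5.0 * m))

def payoff(own, others_total, m):
    # Shared benefit from total compute, private cost of delivery, and an
    # expected penalty when the noisy signal falls short of the pledge.
    caught = signal(own, m) < PLEDGE - 0.1
    return BENEFIT * (own + others_total) / N - COST * own - PENALTY * caught

for m in [0.0, 0.5, 2.0]:
    full = np.mean([payoff(1.0, 2.0, m) for _ in range(10_000)])
    shirk = np.mean([payoff(0.3, 2.0, m) for _ in range(10_000)])
    print(f"m={m}: deliver {full:.2f} vs free-ride {shirk:.2f}")
```

In this toy version, low monitoring both fails to deter free-riding and penalizes honest actors through noisy false positives, which is one way to see the paper's point that half-measures in verification are counterproductive.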

Read More

This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.