
Situational Awareness: One Insider's Case for Fast AGI

AI · 1 year ago · source (situational-awareness.ai)

Leopold Aschenbrenner, who worked on OpenAI's superalignment team, wrote a long essay series in June 2024 arguing that AGI by roughly 2027 is, in his words, "strikingly plausible." The argument is an extrapolation of three trend lines rather than a single prediction: compute for frontier training is growing about half an order of magnitude per year, algorithmic efficiency is improving at a similar rate, and on top of that he adds gains from what he calls "unhobbling," the shift from raw chatbot to tool-using agent. Stack those together and he projects a jump comparable to the move from GPT-2 to GPT-4 happening again within a few years, then hundreds of millions of automated researchers compressing a decade of algorithmic progress into well under a year.

The essay does not stop at capability. Aschenbrenner argues the buildout implies trillion-dollar compute clusters, warns that frontier labs have almost no real security around model weights and algorithmic secrets, and predicts a nationalized, Manhattan Project-style effort by the late 2020s. You can read the full series at situational-awareness.ai.
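The trend-stacking arithmetic can be sketched in a few lines. The roughly 0.5 orders of magnitude (OOMs) per year for compute and for algorithmic efficiency come from the essay; the four-year window and the rough size of the GPT-2 to GPT-4 gap used here are illustrative assumptions, not figures from the piece:

```python
# Back-of-envelope sketch of the essay's extrapolation, treating the
# compute trend and the algorithmic-efficiency trend as additive in
# log space. The per-year rates follow the essay; the 4-year window
# and the ~5-OOM GPT-2 -> GPT-4 gap are illustrative assumptions.
COMPUTE_OOM_PER_YEAR = 0.5   # growth in frontier training compute
ALGO_OOM_PER_YEAR = 0.5      # gains from algorithmic efficiency
GPT2_TO_GPT4_OOMS = 5.0      # assumed rough size of the 2019->2023 jump

def effective_ooms(years: float) -> float:
    """Effective-compute gain, in orders of magnitude, over `years`."""
    return years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)

gain = effective_ooms(4)  # e.g. 2023 -> 2027
print(f"Base trends alone: {gain:.1f} effective OOMs in 4 years, "
      f"vs. an assumed ~{GPT2_TO_GPT4_OOMS:.0f}-OOM GPT-2 -> GPT-4 gap")
```

On these numbers the two base trends land close to, but slightly short of, another GPT-2-to-GPT-4-sized jump; in the essay it is the unhobbling gains stacked on top that close the remainder.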

Why it matters

This document shaped a lot of the 2024 to 2026 conversation about AGI timelines, datacenter capex, and AI as a national security problem. Whether or not you accept the timeline, investors, labs, and policymakers now argue using its framing, so it is worth reading the original rather than the summaries.

Forecasting · AGI