Jack Clark puts a number on AI that trains its own successor
In Import AI 455, Jack Clark puts a number on a question most people keep vague: he estimates roughly a 30% chance that AI systems train their successors with no human in the loop by the end of 2027, rising to better than 60% by the end of 2028. The interesting part is the evidence he stacks up, not the headline probability.
On coding, SWE-Bench went from roughly 2% with Claude 2 in late 2023 to 93.9% with Claude Mythos Preview, which effectively saturates the benchmark. METR's task-horizon measure grew from 30 seconds in 2022 to about 12 hours in 2026. On research skills the jumps are similar: CORE-Bench reproducibility moved from 21.5% for GPT-4o in late 2024 to 95.5% for Opus 4.5 by December 2025, and on training optimization Claude Mythos reportedly hit a 52x speedup where humans need hours for a 4x gain. AI systems are also starting to supervise sub-agents, acting as a synthetic team lead.
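The task-horizon figures above imply a striking growth rate. As a back-of-envelope sketch (my arithmetic, not Clark's; it assumes the growth from 30 seconds to ~12 hours was smoothly exponential over 2022–2026), the implied doubling time works out to under five months:

```python
import math

# Assumption: exponential growth between the two endpoints the article cites.
start_seconds = 30          # METR task horizon, 2022
end_seconds = 12 * 3600     # ~12 hours, 2026
years = 4                   # 2022 -> 2026

growth_factor = end_seconds / start_seconds       # 1440x overall
doublings = math.log2(growth_factor)              # ~10.5 doublings
doubling_time_months = years * 12 / doublings     # ~4.6 months per doubling

print(f"{growth_factor:.0f}x growth, doubling every {doubling_time_months:.1f} months")
# -> 1440x growth, doubling every 4.6 months
```

A doubling time that short is why the endpoint numbers matter more than any single benchmark score: a few more doublings covers the gap between "long shift" and "unsupervised project."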
Clark's reasoning is the unglamorous version of the argument. Recursive self-improvement does not require the model to invent something radical. Most progress is methodical scaling and engineering, the "99% perspiration," and that is exactly the part AI is getting reliably good at, while also becoming able to work unattended for long stretches. Read the full issue on Import AI.
Why it matters
If your planning horizon is two or three years, this is the assumption to stress-test. Whether or not the 2028 figure holds, the benchmark curves are public and checkable; track the curves themselves rather than the debate around them.