Nathan Lambert's counter-take: self-improvement will be lossy, not explosive
Nathan Lambert offers the deliberate counterweight to the recursive-self-improvement story. His term is lossy self-improvement: AI becomes central to building AI, but friction keeps the loop from compounding into a singularity-style takeoff. It reads as a direct reply to the "AI builds itself" thesis, and the disagreement is the point.
He names three bottlenecks. Automation is narrow, so AI optimizes single metrics well but struggles with the many-metrics-at-once judgment real research needs. Parallelization saturates fast: adding agents runs into Amdahl's law, where human intuition and experiment wall-clock time, not compute, are the serial constraint. And organizational friction, from resource fights to human oversight, does not automate away. His conclusion is that progress feels explosive at the bottom of the sigmoid but stays closer to linear, with AI superb at hill climbing and still unable to invent fundamentally new approaches on its own. Read the full essay on Interconnects.
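The Amdahl's law point is easy to make concrete. A minimal sketch (illustrative only, not from Lambert's essay): if a fraction p of the research loop parallelizes and the rest stays serial, speedup from n agents is capped at 1/(1-p) no matter how many you add.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup from n workers when only fraction p of the work parallelizes.

    The serial part (1 - p) is untouched by adding workers, so speedup
    is bounded above by 1 / (1 - p) as n grows.
    """
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical numbers: if half the loop (human judgment, experiment
# wall-clock time) stays serial, agent count saturates quickly.
for n in (1, 10, 100, 1000):
    print(f"{n:>5} agents -> {amdahl_speedup(0.5, n):.2f}x")
```

With p = 0.5, a thousand agents buy barely under 2x, which is the shape of the saturation argument: the bottleneck moves from compute to the serial, human-bound parts of the loop.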
Why it matters
If your plans assume a fast takeoff, this is the argument to test them against. Read it alongside the recursive-improvement case and the disagreement becomes concrete and checkable: watch whether AI starts handling many-metric research judgment, because that is the specific hinge both sides actually turn on.