Nathan Lambert bets open models win on economics, not benchmarks

AI · 1 month ago · source (interconnects.ai)

Nathan Lambert spends most of his time watching the open model field, and his mid-2026 forecast is less about capability than about money. Open weights now keep pace with closed labs on benchmarks, he writes, but the gap that still matters is the one that is hard to measure: how reliably a model helps a knowledge worker who keeps running into new, messy problems. Closed labs hold that edge partly because reinforcement learning from real user feedback pulls their training toward actual use rather than test sets.

His bets are specific. Chinese open-weight labs will feel funding pressure first, possibly before the end of this year. United States adoption of open models will begin a slow recovery in early 2027. Open weights will take over the repetitive automation work in API markets, where price per token decides almost everything. And attempts to ban open models, he argues, will be impossible to enforce in practice.

The piece is useful because it separates two questions that usually get mashed together: can open models match closed ones, and where does each actually get used? Lambert's answer is that the second question has a clear direction, and it points at cost.

Why it matters

If you pick models for a living, stop choosing on leaderboard scores alone: send high-volume repetitive calls to cheap open weights now, keep closed models where users face open-ended work, and expect the funding squeeze to hit Chinese open labs first.

Open Models · Forecasting