Why China's open-model lead is about process, not the models
Nathan Lambert's argument in Interconnects is that China's open-model lead is not really about the models. It is about the development process behind them. His starting point is a cost split: research from Ai2's Olmo 3 work and Epoch AI suggests roughly 80% of compute goes to R&D (the exploration and dead ends) and only about 20% to training the final model.
If most of the cost is exploration, then sharing what you explored is worth a lot. Lambert points out that Chinese labs publish unusually thorough technical reports, and those reports act as risk reduction for everyone else: a competitor can skip a research direction that already failed. Closed labs cannot get this benefit, because nobody tells them what did not work. He is careful about the limit of the analogy. Unlike open-source software, where users send back fixes, open models mostly lower future development cost for the ecosystem rather than improving the deployed model today.
The durable point is that the advantage compounds at the process level rather than the artifact level. Shared methods cut redundant exploration across labs, which lets the whole ecosystem stay at the frontier longer than a closed model would predict. The catch he names: it breaks if labs quietly fork open work into closed internal versions and stop sharing. Read the full piece on Interconnects.
Why it matters
If you plan model strategy or pick which ecosystem to build on, the number to internalize is the 80/20 R&D split. It reframes "open versus closed" as a question about who absorbs exploration cost, which is a more useful lens than benchmark scores when you are betting years ahead.