China is not out of the frontier AI race. Its open-weight models (models whose parameters are publicly released, allowing anyone to run or adapt them) remain genuinely competitive, and the overall capability lead has changed hands more than once since early 2025. But the more consequential story for anyone building on top of these models is where the gap is actually opening up. In coding agents (the AI tools most likely to reshape how software teams work in the near term), the U.S. advantage is large and shows no sign of closing.
What makes this structural rather than cyclical is that several drags on Chinese labs compound each other over time. Chip shortages slow iteration cycles, and iteration is where a significant share of model improvement actually comes from. Over-reliance on distillation (training on outputs from stronger Western models rather than investing in original data pipelines) caps how capable a Chinese model can ultimately become. Benchmark gaming ("benchmaxxing") pulls engineering effort toward test scores rather than real-world utility. And operating inside large consumer platforms creates pressure to ship products rather than pursue the open-ended research that generates genuine breakthroughs. Meanwhile, U.S. models benefit from a self-reinforcing loop: global users doing serious work generate rich failure data, which trains better models, which attract more serious users. That flywheel is difficult to replicate from a catch-up position.
