China rode that momentum hard. A year after DeepSeek’s release, there’s now a cohort of Chinese open-source giants following the same blueprint, including Z.ai (formerly Zhipu), Moonshot, Alibaba’s Qwen, and MiniMax. They’re all racing to release more capable models, and they are closing in on US rivals at a pace few anticipated.
That matters because AI hype is dying down, and companies are shifting focus from buzzy pilots to deployment and integration, where cheaper and more customizable tools tend to win. Chinese pricing means developers with limited budgets can experiment more, and open weights mean they can adapt models without asking for permission.
A study by researchers at MIT and Hugging Face found that Chinese open-weight models accounted for 17.1% of global AI model downloads over the year ending in August 2025. That narrowly surpassed the US share of 15.86%—the first time China had led in this metric. And Hugging Face data from last month shows that Alibaba’s models, including its Qwen family, now have the most user-generated variants—more than models from Google and Meta combined.
The open-source ideal, though, runs headlong into some hard realities. Chinese models carry the imprint of China's content moderation regime and are trained to avoid outputs that conflict with government policy. And in February, Anthropic accused several Chinese labs of illicitly extracting capabilities from Claude through distillation, a process in which one model's outputs are used to train another. Distillation itself is a standard industry practice, but top US firms like OpenAI and Anthropic claim that Chinese companies have used fraudulent methods to do it.