Cursor's Composer is built on Kimi K2.5, Moonshot AI's Chinese model. Shopify switched to Alibaba's Qwen and saved $5 million a year. Airbnb CEO Brian Chesky has said publicly: "We rely a lot on Qwen. It's very good, fast, and cheap." Cognition's SWE-1.6 model is likely post-trained on Zhipu's GLM. And last week Zhipu dropped GLM-5.1, an open-source model that benchmarks close to Claude Opus on coding tasks.
Meanwhile, the tech press is full of stories about OpenAI vs. Anthropic vs. Google. The narrative is still that American closed-lab models are the ones deployed in production. But what's actually running inside some of Silicon Valley's biggest products right now? Chinese open source.
These companies aren't making ideological choices. They're using Kimi and Qwen because they're fast, cheap, and accurate enough for their specific tasks. That's actually the most interesting part: it's a story about how well-optimized open source competes with frontier labs on real-world economics, not benchmarks. And it's happening faster than most people expected.
There's also a dimension that nobody wants to say out loud: users booking Airbnb trips are getting results from a model built in Shanghai. People using Cursor are getting code completions from a Chinese company's research. Most of them have no idea, and Airbnb didn't exactly put it in the changelog.
The question I'm genuinely uncertain about: does the model's origin actually matter once it's running in your infrastructure, if the data pipeline is controlled by the American company? Or does some structural difference remain (training data provenance, post-training alignment choices, the incentives of the organization that built it) that carries forward even when the weights are open source?