(By Chen Jishen, Editor: Zhang Guangkai)

In recent days, a growing number of American AI companies, including major AI giants, have begun to openly acknowledge that they are using Chinese large AI models.

On October 22, Airbnb CEO Brian Chesky said in an interview that Airbnb's customer-service AI is built on 13 models, and that the company "largely relies on Alibaba's Qwen, which is better and cheaper than OpenAI's products."

He also said that Airbnb has not yet integrated ChatGPT: "We also use the latest models from OpenAI, but usually do not use them extensively in production because there are faster and cheaper models available. OpenAI's connectivity capabilities are not fully ready yet."

Not coincidentally, Windsurf, a leading overseas AI programming product, recently launched a mysterious model that it claimed was purpose-built for speed and agentic tasks. Under the hood, however, this American-born company was using Zhipu AI's GLM-4.6 model.

Similarly, the U.S. cloud service platform Together AI officially announced in July that it had deployed Qwen3-Coder.

And just this month, Chamath Palihapitiya, the well-known Silicon Valley investor and founder of Social Capital, stated bluntly: "We have already started using Kimi-K2 on Groq. Although the models from OpenAI and Anthropic are quite good, they are too expensive."
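
To make the cost comparison concrete, the sketch below shows what "using Kimi-K2 on Groq" might look like in code: a call to a Groq-hosted model through Groq's OpenAI-compatible endpoint using the official openai Python SDK. This is a minimal, hedged illustration rather than any setup these companies have described; in particular, the model identifier moonshotai/kimi-k2-instruct and the GROQ_API_KEY variable name are assumptions that should be checked against Groq's current documentation.

```python
# Minimal sketch: querying a Groq-hosted open model through Groq's
# OpenAI-compatible API with the openai Python SDK.
# The model ID and environment variable below are assumptions for
# illustration; verify them against Groq's model catalog.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],         # assumed env var holding a Groq key
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-instruct",  # assumed Kimi K2 identifier on Groq
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, why can hosted open models cut inference costs?"},
    ],
    temperature=0.3,
)

print(response.choices[0].message.content)
```

Because such endpoints mirror OpenAI's Chat Completions API, swapping providers is largely a matter of changing the base URL, the API key, and the model string, which is what makes the price comparison Chamath describes so easy to act on.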

Faced with so many major American tech companies turning to Chinese large models, a well-known AI researcher asked pointedly, "Is Silicon Valley built on Qwen?" Behind that question is a simple fact: more and more American companies are no longer hiding that they use Chinese AI.

China's AI is Rapidly Taking Over the United States

The statement from Airbnb's CEO is the most representative: he pointed directly to the core advantage of Tongyi Qianwen (Qwen), "better and cheaper." That kind of recognition has already spread among Silicon Valley executives.

In May of this year, NVIDIA CEO Jensen Huang said explicitly on an earnings call that Alibaba's Tongyi Qianwen is the best of the open-source AI models. Twitter founder Jack Dorsey also praised Qwen3-Coder, the code model in the Qwen series. Even Elon Musk, after seeing images generated with Tongyi Wanxiang Wan2.2, said the results were "indistinguishable from reality."

This influence is quickly turning into real commercial adoption. Beyond the aforementioned Together AI, e-commerce giant Amazon is reported to be adopting Alibaba's Tongyi Qianwen model in its robotics control systems. Earlier, multiple reports also said Apple plans to bring the Tongyi Qianwen model to iPhones and other devices sold in the Chinese market to power its AI features.

Looking globally, the Japanese business daily Nikkei has reported that Tongyi Qianwen has become a foundation of AI development in Japan, and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in the UAE has built its presidentially backed K2 Think reasoning model on Tongyi Qianwen.

Behind this is Tongyi Qianwen's formidable open-source strength: it has open-sourced more than 300 models, which have spawned over 170,000 derivative models, forming the world's largest family of open-source models. On the latest leaderboard of Hugging Face, the world's largest AI open-source community, the Alibaba Tongyi series also "dominated," taking seven of the top ten spots.

Among developer tools and platforms, Zhipu AI's GLM is also spreading quickly. The AI programming product Windsurf "quietly" running GLM-4.6 is just the tip of the iceberg.

More importantly, Vercel, the U.S. company valued at 9.3 billion dollars, recently announced a partnership with Zhipu AI to provide API access to GLM-4.6 on its platform. Its CEO Guillermo Rauch also reposted and praised GLM-4.6, saying, "It's very good, ranked third on http://nextjs.org/evals, and is the only open-source model among the top five."

In addition, Cerebras, a U.S. AI inference provider known for building the world's largest AI chip, has already listed GLM-4.6 on its platform for developers worldwide. All of this suggests that Chinese large models are being woven into the U.S. AI development ecosystem as infrastructure.

If Qwen and GLM show breadth at the platform level, then Kimi has lit the fuse on cost-effectiveness, and its adoption is even being read as a signal of "defection" from the heart of Silicon Valley.

Chamath Palihapitiya's "defection" carries real weight because of his standing in the industry. He is not an ordinary commentator but a top-tier player worth billions. He once helped grow Facebook's user base from 45 million to 700 million, and the viral "People You May Know" feature was his team's work. His firm Social Capital made early, accurate bets on Slack, Box, and others. So when an investor this relentlessly commercial openly replaces OpenAI and Anthropic with Kimi K2, it becomes a strong leading indicator for the market.

Chamath's choice is rapidly becoming a group trend, as a series of platforms central to the U.S. developer ecosystem have quietly acted: Vercel, the top cloud development platform, has integrated the Kimi K2 API; Cursor, the star AI-native code editor, has added Kimi as a core option; and widely praised AI-native applications such as Perplexity, Genspark, and Youware have announced Kimi K2 integrations one after another.

Developers "voting" with code is more persuasive than any leaderboard. It marks Kimi's transition from a "model worth watching" to a practical production tool quietly woven into global developer workflows.

Leaving the Catch-Up Phase Behind: A Dual-Track AI World

By this point, the motivation for American companies to switch is clear. This is not just a "price war" but a profound paradigm shift, one that authoritative reports have confirmed.

Known as the annual "weathervane" of the global AI industry, the State of AI Report 2025 has for the first time elevated the "Chinese AI ecosystem" from "peripheral follower" to "parallel competitor," stating plainly: "In 2025, China is no longer a follower—it is setting the pace in open-source AI and commercial deployment."

This is not empty praise. When listing the year's most important technical advances, the report named only three representative large models: OpenAI's o1, and China's DeepSeek-V3 series and Kimi-K2 series. In the top ranks of global AI research, Chinese models now hold two of the three seats.

Behind this is the emergence of two development paradigms in global AI. The U.S. "technology peak-climbing" paradigm, represented by OpenAI and Anthropic, is anchored in academic and research labs and pours resources into the pursuit of AGI and absolute technical height. The Chinese "application prosperity" paradigm, represented by Moonshot AI (Kimi), DeepSeek, Tongyi Qianwen, and others, aims to build an open, thriving application ecosystem, empowering large numbers of developers through cost-effective, fast-iterating open models and letting innovation grow from the bottom up.

Seen this way, the "defections" of Vercel, Airbnb, and Chamath puncture the myth that only the strongest model matters: they are voting with their feet for cost-effective solutions under the "application prosperity" paradigm.

This marks the end of the "catching up" prelude for Chinese AI and the beginning of its "ecosystem breakout." A dual-track, parallel AI world order is gradually unfolding before our eyes.

Original article: https://www.toutiao.com/article/7566960651231429170/

Statement: This article represents the views of the author.