Following DeepSeek's victory over OpenAI, the China-US competition over AI computing power has entered a phase of close combat!

Following DeepSeek's victory over OpenAI, the focus of China-US AI competition has gradually shifted from the application layer to the AI infrastructure layer. Unlike breakthroughs in large-model applications, the challenges China's semiconductor industry faces in the computing-power race are world-class. Even so, we are fortunate to have enterprises like Huawei continuously solving problems and tackling the hardest tasks.

At the recently concluded Huawei Cloud Ecosystem Conference 2025, Huawei Cloud officially released the CloudMatrix384 super node, featuring "high density," "high speed," and "high efficiency." Its computing-power scale and inference performance comprehensively surpass those of NVIDIA's NVL72. In this computing-power race, although our single-chip performance still trails NVIDIA's, Huawei has improved server system architecture through engineering innovation, trading space and power resources for stronger system-level AI computing performance to compensate for the shortfall in single-chip capability. The process may be difficult, but the innovative outcomes are undoubtedly solid.

Unlike traditional single-server delivery models, CloudMatrix, built on super nodes, can provide dynamically composable computing-power slices, significantly improving resource utilization. Operational data shows that its continuous stable running time can reach 40 days, far exceeding the industry average of 2.8 days; with second-level fault monitoring and automatic recovery, training jobs can be restored within 10 minutes, versus an industry average of 60 minutes.

This underlying technical capability is gradually translating into real industrial momentum. At the beginning of this year, when the DeepSeek large model came under server pressure from a surge in users, Huawei Cloud and the SiliconFlow team worked urgently through the Spring Festival and, in just a few days, successfully deployed and launched the DeepSeek R1/V3 inference service on Huawei Cloud's Ascend AI cloud service on February 1st. This breakthrough not only verified the feasibility of deploying complex models on Ascend Cloud but also marked a leap in domestic AI computing power's ability to support large-scale inference applications, providing a replicable paradigm for deeper cooperation between AI companies and cloud service providers.

This collaboration also reflects the direction in which the AI industry ecosystem is evolving: from isolated single-point innovation toward multi-party collaborative construction. From the underlying computing-power foundation and middleware adaptation to industry application deployment, Huawei Cloud is pushing AI applications from "feasible" to "usable" and "scalable" through the systematic integration of technological capabilities and ecosystem resources.

As domestic AI infrastructure gradually matures, the way enterprises access intelligent resources will also change fundamentally. Large-model development and deployment, once a high-barrier endeavor, is now entering more industry scenarios at lower cost and higher efficiency through innovative infrastructure such as super nodes, helping form a more resilient and innovative AI industry ecosystem in China.

Original article: https://www.toutiao.com/article/1829087849032844/

Disclaimer: The article solely represents the author's personal views.