The Shanghai Artificial Intelligence Laboratory announced a landmark achievement yesterday: its DeepLink hyperscale cross-domain mixed training technology solution was successfully deployed on China Unicom's network, “stitching” two heterogeneous intelligent computing centers 1,500 kilometers apart into a single “supernode” and completing the training of a 100-billion-parameter AI model. This marks the first time in the world that heterogeneous intelligent computing power has been efficiently integrated over such long distances. The approach not only addresses the bottlenecks of unevenly distributed computing resources and low utilization rates across the country, but also reduces the AI industry's dependence on specific chips, providing an important safeguard against supply chain disruptions and the risk of being “held by the throat.” According to the Shanghai AI Laboratory, in February of this year it worked with more than 10 partners to build a prototype large-scale cross-domain mixed training cluster in Shanghai, which sustained 20 days of uninterrupted training of a 100-billion-parameter model. Building on this, the team integrated China Unicom's AINET computing power intelligent network, spanning 1,500 kilometers to connect intelligent computing centers in Shanghai and Jinan, and completed mixed training of the 100-billion-parameter large model.

Zhitongcaijing · 07/19/2025 23:41