
Zhitongcaijing · 20h ago
According to J.P. Morgan Chase, the release of DeepSeek V3.2 marks a second wave of "DeepSeek shock" in China's AI market: open-source reasoning capability close to frontier models is now available in China at moderate cost, which benefits most stakeholders in the Chinese AI ecosystem, including cloud operators, AI chip makers, AI server manufacturers, AI platforms, and SaaS developers. J.P. Morgan Chase analyst Alex Yao said in the report that DeepSeek has cut its model API prices by 30%-70%, while long-context inference could deliver roughly 6-10x workload savings. Beneficiaries include Alibaba, Tencent, Baidu, Zhongwei, NAURA, Huaqin Technology, and Inspur Information. Whereas the previous V3.1 model was optimized mainly for Nvidia CUDA, the new V3.2/V3.2-Exp provides Day-0 support for Huawei Ascend, Cambricon, and Hygon chips and ships ready-made kernels for inference frameworks such as SGLang and vLLM, marking a clear shift toward domestic hardware autonomy.