Amid the great AI wave, the "sovereign AI" buildout is in full swing: Musk's xAI becomes the Saudi data center's first customer

Zhitongcaijing · 11/20/2025 01:25

The Zhitong Finance App learned that Nvidia and xAI jointly announced that a large-scale AI data center facility under construction in Saudi Arabia will soon be equipped with hundreds of thousands of Nvidia high-performance AI chips, and that xAI, the AI super-unicorn founded by Tesla CEO Elon Musk, will become its first major customer. Moreover, beyond purchasing Nvidia AI GPU clusters, Humain, the company at the center of Saudi Arabia's "sovereign AI" push, will also spend heavily on AI chips and integrated hardware-software systems from AMD and Qualcomm.

xAI founder Musk and Nvidia CEO Jensen Huang attended the U.S.-Saudi Investment Forum in Washington, D.C., on Wednesday. The latest announcement builds on and accelerates the two sides' earlier cooperation: in May of this year, Nvidia said it would supply AI chip clusters consuming 500 megawatts of electricity to Saudi Arabia's Humain data centers.

Humain, an AI startup backed by Saudi Arabia's sovereign wealth fund, said the announcement is a further extension of the cooperation reached in May. On Wednesday US Eastern time, Humain said the project will include at least 600,000 Nvidia AI chips, mainly GPU clusters built on the Blackwell/Blackwell Ultra architecture.

Humain, much in the spotlight lately, was founded earlier this year and is owned by Saudi Arabia's sovereign wealth fund, the Public Investment Fund (PIF), making it one of the clearest examples of a "sovereign AI" buildout anywhere in the world. Huang first unveiled the outline of Saudi Arabia's AI ambitions, including this large-scale data center, in May while visiting Saudi Arabia with US President Donald Trump; Humain also plans to purchase AI chips from AMD and Qualcomm going forward.

"Can you imagine, a newly founded AI startup with essentially zero revenue is now building a large-scale AI data center for Musk," Jensen Huang told reporters in Washington.

The "sovereign AI" trend is coming

This facility is one of the most notable examples of what Nvidia calls "sovereign AI." Executives at Nvidia, the dominant AI chip maker, have publicly stated many times that governments will increasingly need to build national-level hyperscale AI data centers for AI training and inference to protect their security and culture. That also means the potential market for Nvidia's expensive AI chips is huge, extending well beyond a handful of hyperscale cloud computing vendors.

Nvidia CEO Jensen Huang has invoked "sovereign AI" repeatedly this year, pointing to surging demand for national-level artificial intelligence hardware. Beyond supplying AI GPUs to hyperscale cloud service providers such as Microsoft, Amazon, and Google, Nvidia is therefore striving to diversify its business, and "sovereign AI" is undoubtedly one of the areas Huang emphasizes most.

In an interview, Huang said that countries including India, Japan, France, and Canada are all talking about the importance of investing in "sovereign-level artificial intelligence systems." "From their natural resources to exclusive confidential databases, it is seen as something that should be refined and produced for their country. Recognition of sovereign AI capabilities is a global concept."

With its combination of the CUDA software-hardware platform, high-performance AI GPUs, and a formidable developer ecosystem, Nvidia's Blackwell/Blackwell Ultra architecture GPUs, currently the most in demand, have become the preferred underlying AI hardware for governments around the world.

According to the financial report Nvidia released on Wednesday, its revenue outlook for the next quarter exceeded Wall Street expectations, and total fiscal Q3 revenue surged 62% year over year to a record $57 billion. With the global AI buildout in full swing, Nvidia's data center business posted fiscal third-quarter revenue of $51.2 billion, up 66% year over year and 25% quarter over quarter. The results go a long way toward puncturing the "AI bubble" argument that has recently gained currency in the market, largely easing investors' worry that the unprecedented global AI spending boom is about to recede.
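The growth rates above can be cross-checked against the reported totals. A quick back-of-envelope calculation (rounding aside) recovers the implied year-ago quarters:

```python
# Back out the year-ago quarters implied by the reported growth rates.
q3_total = 57.0    # $B, reported total fiscal Q3 revenue
q3_dc = 51.2       # $B, reported data-center revenue
total_yoy = 0.62   # +62% year over year
dc_yoy = 0.66      # +66% year over year

implied_total = q3_total / (1 + total_yoy)
implied_dc = q3_dc / (1 + dc_yoy)
print(f"Implied year-ago total revenue:       ${implied_total:.1f}B")  # ~$35.2B
print(f"Implied year-ago data-center revenue: ${implied_dc:.1f}B")     # ~$30.8B
```

In other words, the data center segment alone now exceeds what the entire company earned a year earlier.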

Nvidia's strong results highlight that AI application leaders such as OpenAI, xAI, Anthropic, and Palantir, along with governments focused on "sovereign AI systems," continue to spend heavily on AI data centers. The frenzied wave of global AI buildout shows no sign of abating.

"There is a lot of talk about an AI bubble, but from our vantage point, I don't see a bubble. The reality of AI compute demand is completely different," Huang said on the earnings call. "Sales of the latest-generation Blackwell architecture AI GPUs far exceeded expectations, cloud GPUs are sold out, and compute demand for AI training and inference continues to grow exponentially. We have entered the virtuous cycle of the AI era."

Huang's appearance at a major investment event backed by President Trump further underscored the current administration's focus on AI. As Nvidia's management actively lobbies the White House for licenses to export AI chips to China, Huang has grown increasingly close to the president who returned to the White House.

When the agreement was announced, Musk, a key figure in the early days of Trump's second administration, briefly misstated the data center's size, mixing up megawatts and gigawatts. Soon after, he joked that a plan to make the data center 1,000 times larger "can only be discussed later." "That would cost about $8 trillion," Musk quipped.

Not just Nvidia! Humain will also join hands with other chip giants to build out its AI ambitions

Humain will not rely on Nvidia's high-performance AI GPUs alone. AMD, Nvidia's longtime strongest competitor, and Qualcomm will also sell AI chips and hardware-software systems to Humain at scale. AMD CEO Lisa Su and Qualcomm CEO Cristiano Amon both attended the Trump administration's state dinner for Saudi Crown Prince Mohammed bin Salman this week, after which the Crown Prince said Saudi investment in the US would increase from $600 billion to $1 trillion.

Humain said that by 2030 AMD is expected to supply AI compute clusters drawing as much as 1 gigawatt of electricity. AMD said it will provide its next-generation training/inference hardware in the form of Instinct MI450 AI GPU clusters, and added that Cisco will supply additional core infrastructure for the large data centers.

According to reports, compared with AMD's previous-generation MI300X/MI350X, the MI450 represents a "memory monster" leap in AI performance. It is built on TSMC's most advanced 2nm (N2) manufacturing process and the new CDNA 5 architecture. Versus the MI300X, memory bandwidth jumps from 5.3 TB/s to 19.6 TB/s, and memory capacity from 192 GB to 432 GB. For the largest players such as OpenAI, which train and serve ultra-large multimodal, ultra-long-context LLMs, very large memory and bandwidth per GPU is often more cost-effective than adding cards; for such AI developers, the MI450 opens a second AI GPU compute front, centered on "very large memory/bandwidth plus a more advanced process," alongside Nvidia's Blackwell systems.
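Taking the reported per-card specs at face value, the generational gap can be quantified in a few lines. This is only a sketch of the figures cited above, not a performance benchmark:

```python
# Reported per-card specs (from the article; treat as claims, not measurements).
mi300x = {"mem_bandwidth_tb_s": 5.3, "mem_capacity_gb": 192}
mi450 = {"mem_bandwidth_tb_s": 19.6, "mem_capacity_gb": 432}

bw_gain = mi450["mem_bandwidth_tb_s"] / mi300x["mem_bandwidth_tb_s"]
cap_gain = mi450["mem_capacity_gb"] / mi300x["mem_capacity_gb"]
print(f"Memory bandwidth gain: {bw_gain:.2f}x")  # ~3.70x
print(f"Memory capacity gain:  {cap_gain:.2f}x")  # 2.25x
```

A ~3.7x bandwidth jump alongside a 2.25x capacity jump is what makes the card attractive for long-context inference, where attention key-value caches are memory-bound.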

Qualcomm plans to sell Humain its new high-performance data center AI chips, the AI200 and AI250, which it first unveiled in October. Qualcomm said Humain will deploy its chips at a power scale of roughly 200 megawatts.

Qualcomm shipped the Cloud AI 100 series for data center and edge inference between 2019 and 2020 (followed by the Cloud AI 100 Ultra). The new AI200/AI250 is its first rack-scale solution, supporting up to 768 GB of LPDDR per card and focusing on energy-efficient inference for large AI models; Qualcomm claims that architectural choices such as near-memory computing will substantially reduce the total cost of ownership (TCO) of AI inference clusters for data center operators.

The recently announced AI200/AI250 inference accelerators, and the corresponding rack-level compute cluster solutions, are expected to make the data center chip business a new "super engine" of Qualcomm's growth starting in 2026. The AI200/AI250 targets inference clusters at data center scale and follows a purpose-built AI ASIC route comparable to Google's TPU; both belong to the same family of dedicated inference/training accelerators, though Qualcomm emphasizes inference and TCO/energy efficiency.