Tesla (TSLA.US) AI5 chip to be made by TSMC and Samsung in a dual-foundry arrangement for efficient AI inference in FSD vehicles

Zhitongcaijing · 10/23/2025 04:01

The Zhitong Finance App learned that Tesla CEO Elon Musk said on the company's latest earnings call that South Korean semiconductor giant Samsung Electronics is playing a more important role in Tesla's chip manufacturing. His latest remarks indicate that the South Korean chipmaker, which holds a central position in the Korean economy, is pushing into a market dominated by TSMC, the “king of chip foundry.” Musk said that AI5 will “be produced by TSMC and Samsung at the same time,” with the two companies sharing manufacturing at TSMC's Arizona plant and Samsung's Texas plant, respectively.

On the earnings call with Tesla investors, Musk said that Samsung and TSMC will jointly share large-scale manufacturing of the company's self-developed artificial intelligence chip, the Tesla AI5. By contrast, he had previously stated that the chip would be produced exclusively by TSMC, the world's largest chip foundry.

Tesla's self-developed AI5 chip departs from traditional processor designs such as Nvidia's AI GPUs, removing the ISP and other modules that Tesla considers redundant in order to improve efficiency. An AI inference accelerator that drops the ISP is positioned “narrower and deeper,” betting on end-to-end deep learning and in-vehicle FSD inference. AI5 may therefore hold a significant perf/W advantage on its target AI workloads.

“I need to clarify some of my previous public comments: we will actually have both TSMC and Samsung Electronics working on AI5 chip manufacturing at the same time when production starts,” Musk said during Tesla's third-quarter earnings call, held Wednesday US Eastern Time.

The latest news from Tesla indicates that AI5 will launch as a TSMC+Samsung “dual foundry” product, with the aim of locking in production capacity and supply security. “It makes perfect sense for both Samsung and TSMC to focus on AI5. Both sides will manufacture the chips in the US (TSMC's Arizona plant and Samsung's Texas plant), and the company hopes there will be an oversupply of AI5 from the beginning,” Musk said on the call.

In the global chip foundry industry, Samsung still lags far behind TSMC in advanced-process yield and foundry scale. However, Samsung is scaling up investment in a chip manufacturing center near Austin, where Tesla is headquartered, in response to the Trump administration's push to “return chip manufacturing to the US.”

In the chip supply chain, TSMC remains the benchmark that “never goes out of style”: the AI GPUs and AI ASICs now in extremely high demand are all inseparable from it. With decades of accumulated manufacturing expertise and a long record at the cutting edge of process innovation, TSMC has long dominated the vast majority of global foundry orders, especially those on the most advanced processes at 5nm and below, on the strength of its advanced processes, packaging technology, and high yields.

Compared with Nvidia's AI GPUs, AI5 pursues higher performance per watt and lower latency

Musk said, both on the earnings call and in a post on the X platform, that Tesla's self-developed AI5 chip was not designed as a graphics processor along the traditional AI-compute route: it eliminates image signal processing to save die space and maximize cost efficiency and energy efficiency. Musk has also said that AI5's performance is 40 times that of AI4.

Tesla relies on large computing clusters built from these AI chips, including self-developed AI5 clusters and Nvidia AI GPU training clusters, to supply the powerful AI compute behind Tesla's Full Self-Driving (FSD) feature and its humanoid robot line, which is still in its early stages. The company is also striving to combine its self-developed AI hardware with flagship compute hardware from Nvidia, the leader in artificial intelligence processors.

In July of this year, Musk said that Tesla will use Samsung's advanced manufacturing process for the next-generation AI6 chip, and that the two companies had signed a cooperation agreement worth 16.5 billion US dollars, seen as a major victory for Samsung's foundry business, i.e. the business of producing chips for external customers. This also means AI6 will be produced solely by Samsung rather than dual-sourced like AI5.

At the time, Musk said that Samsung had produced the previous AI4 chip, while Tesla relied on TSMC for the version it was then developing.

According to current public information, AI5 is a dedicated AI chip (ASIC) for real-time inference on the vehicle side. Musk stated plainly that it is not a traditional GPU design: the image signal processing (ISP) module was removed to free up transistor budget for neural-network computation and local storage, pursuing better performance per watt (perf/W) and lower latency than a GPU.

The biggest difference from Nvidia's AI GPU architecture is that AI5 cuts out hardware modules that are irrelevant to vehicle-side inference or offer low marginal benefit (Musk named the ISP), which saves a significant number of transistors along with static and dynamic power, and concentrates the area and power budget on matrix/attention units, on-chip SRAM, and high-speed chip-to-chip interconnect. In principle, such a task-specific AI ASIC can deliver higher perf/W and more deterministic latency.
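The perf/W argument above can be illustrated with a minimal back-of-envelope sketch. All numbers below are hypothetical placeholders for illustration only; neither Tesla nor Nvidia has published these figures:

```python
# Illustrative perf/W comparison. The figures are invented assumptions,
# not published Tesla or Nvidia specifications.

def perf_per_watt(tops: float, watts: float) -> float:
    """Performance per watt, in TOPS/W (tera-operations per second per watt)."""
    return tops / watts

# Hypothetical general-purpose accelerator: part of its area and power
# budget goes to blocks (ISP, graphics pipeline) unused during driving
# inference, so some of the power draw does no useful inference work.
gpu_tops, gpu_watts = 500.0, 250.0

# Hypothetical task-specific ASIC: the same inference throughput, but
# with unused blocks removed and the budget reallocated to matrix units
# and on-chip SRAM, the power needed for that throughput is lower.
asic_tops, asic_watts = 500.0, 125.0

print(f"GPU : {perf_per_watt(gpu_tops, gpu_watts):.1f} TOPS/W")
print(f"ASIC: {perf_per_watt(asic_tops, asic_watts):.1f} TOPS/W")
```

Under these assumed numbers the ASIC delivers twice the perf/W at equal throughput, which is the shape of the trade-off the article describes, not a measurement of AI5.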

It is worth noting that Tesla's latest messaging centers on inference rather than self-developed AI training chips (especially since the Dojo supercomputing team was shut down, Tesla's compute roadmap has consolidated around AI5/AI6). The goal is to run multi-modal perception and planning models efficiently at low-to-medium power on the vehicle. This differs from Nvidia's general-purpose approach spanning data-center training and broad edge deployment, and it explains why Tesla continues to purchase Nvidia AI GPUs while developing its own chips: most likely to build data-center training clusters for its large autonomous-driving AI models.