The Zhitong Finance App learned that Broadcom (AVGO.US), one of the biggest winners of the global AI craze, announced financial results for the third quarter of fiscal 2025 (ended August 3) on the morning of September 5, Beijing time. Broadcom is one of the core chip suppliers for Apple and other major technology companies. It is also the core supplier of high-performance Ethernet switch chips for large-scale AI data centers around the world, as well as of AI ASICs, the customized AI chips that are essential for AI training and inference. After Broadcom announced its earnings, the stock rose nearly 5% in after-hours US trading, single-handedly reviving the “AI faith” that had recently been flagging and proving to investors that tech giants such as Google and Meta, as well as AI leaders such as OpenAI, are still spending heavily on artificial intelligence computing-power infrastructure.
Broadcom's strong results and future outlook comprehensively revived US technology investors' belief in artificial intelligence and drove chip stocks to regain upward momentum after hours, something even the “AI chip hegemon” Nvidia (NVDA.US) failed to achieve at the end of August. The mixed results of Salesforce (CRM.US), Marvell Technology (MRVL.US), and Nvidia had caused some investors who are cautious about AI monetization paths to heavily sell popular technology stocks. They generally believe the AI investment frenzy has inflated a bubble in technology stocks, compounded by rising expectations that the US economy will fall into “stagflation” against the backdrop of Trump-led tariffs, causing US technology stocks to fall steadily since the beginning of September.
However, Broadcom, the strongest force in the AI ASIC field with a market capitalization of about $1.4 trillion, told investors through its strong results and optimistic outlook that demand for AI computing power is still growing explosively. In particular, demand for AI ASICs and high-performance Ethernet chips is growing at a rate comparable to the unprecedented surge in Nvidia's data-center AI GPU demand in 2023-2024. Broadcom CEO Hock Tan said on a conference call with Wall Street analysts that the chip giant's AI-linked revenue prospects for fiscal 2026 will expand “significantly,” an optimistic forecast that went a long way toward easing market concerns about a slowdown in AI computing-power growth.
In a conference call after the quarterly results were announced, Tan said the chip company is working with more potential big customers to develop AI training/inference acceleration chips. This market is currently dominated by Nvidia AI GPUs, but the AI ASIC route led by Broadcom is beginning to see a surge in market size in AI training and inference. Broadcom's stock price has repeatedly reached record highs this year, driven by an unprecedented boom in AI investment, joining Nvidia and TSMC in pushing the entire AI computing-power industry chain along a hot bullish curve.

“In the last fiscal quarter, one of our potential customers placed large-scale mass-production orders related to AI infrastructure with Broadcom,” Tan told analysts, without revealing the customer's name. “We now expect revenue prospects related to AI infrastructure construction in FY2026 to expand significantly from the already strong growth rate we described last quarter.”
On the previous earnings call, Tan had said the 2026 AI-related revenue outlook would show a growth trajectory similar to this year's, that is, roughly 50% to 60%. Now, with the addition of a new major customer whose “demand is timely and huge in scale,” Tan said, AI-related revenue growth will be upgraded in a “quite substantial and impressive” manner.
Broadcom said in the report released on the morning of the 5th, Beijing time, that management expects overall revenue for the fourth fiscal quarter (ending in October) of about $17.4 billion, above the Wall Street analyst consensus of $17.05 billion and implying year-on-year growth of about 25%.
Before the report was released, the market's expectations for Broadcom's results and outlook were very high, so beating them greatly boosted investors' bullish sentiment toward Broadcom and the AI computing-power industry chain as a whole. Since its April low for the year, Broadcom's stock price has more than doubled, adding about $730 billion to the company's market capitalization and making it the third-best performer in the Nasdaq 100 Index, a stronger gain over that stretch than Nvidia's.
Investors have recently been looking for signs that AI computing-power spending remains strong. Last week, Nvidia gave mixed performance guidance, raising market concerns that the AI industry bubble could burst.
Although Broadcom has not seen an expansion as dramatic as Nvidia's (Nvidia's market capitalization has grown by more than $3 trillion since the start of 2023), it is still regarded by the market as a core beneficiary of the AI boom. Hyperscale customers that develop and operate continuously iterated large AI models, such as Google and Facebook parent Meta, rely heavily on Broadcom's custom-designed AI ASIC chips and high-performance networking devices to handle huge AI workloads.
On their own earnings calls, Google's Sundar Pichai and Meta's Mark Zuckerberg both said they would step up efforts to roll out self-developed AI ASICs with Broadcom, the leader in customized chips and the AI ASIC technology partner of both giants. The TPU (Tensor Processing Unit) that Google built with Broadcom is the most typical AI ASIC.
During the call, Tan also said that he and the board of directors have agreed that he will serve as CEO of Broadcom until at least 2030.
Results show that in the third fiscal quarter ended August 3, Broadcom's overall revenue rose 22% to nearly $16 billion, above the Wall Street analyst consensus of about $15.8 billion. Excluding certain items, adjusted profit was $1.69 per share, above the consensus of $1.67 per share; both estimates had been raised repeatedly in recent weeks.
Broadcom's AI-infrastructure semiconductor revenue for the third fiscal quarter was about $5.2 billion, up 63% year on year and above the Wall Street consensus of $5.11 billion. Management expects this category to reach about $6.2 billion in the fourth fiscal quarter, above analysts' prior expectation of $5.82 billion.
Other chipmakers focused on AI computing-power infrastructure have fared less well recently. Marvell Technology, one of Broadcom's rivals in the customized-semiconductor market, plummeted 19% on Friday after its data-center revenue fell short of expectations.
In addition to collaborating with major customers such as Google to develop customized AI accelerators, the AI ASIC chips, Broadcom has also been upgrading its high-performance networking equipment to better move data between the AI server systems at the core of AI data centers. As Tan's latest comments suggest, Broadcom continues to make progress in signing large customers to whom it can supply high-performance hardware for heavy AI training/inference workloads.
Tan has turned Broadcom into a giant spanning software and hardware through years of mergers and acquisitions. In addition to the semiconductor business closely tied to AI infrastructure, the chip giant, headquartered in Palo Alto, California, also supplies core wireless-connectivity components for Apple's iPhone devices.
The “AI ASIC Super Wave” led by Google and Meta is coming
As US tech giants commit to investing heavily in artificial intelligence, the biggest winners include not only Nvidia but also AI ASIC players such as Broadcom, Marvell Technology, and Taiwan's Alchip. Microsoft, Amazon, Google, Meta, and even generative-AI leader OpenAI are all teaming up with Broadcom or other ASIC giants to iterate AI ASIC chips for massive inference-side computing-power deployments. The future market-share expansion of AI ASICs is therefore expected to be significantly stronger than that of AI GPUs, trending toward roughly equal shares rather than today's situation in which Nvidia AI GPUs stand alone with up to 90% of the AI chip market.
With absolute technological leadership in chip-to-chip interconnect communication and high-speed data transmission, Broadcom is currently the most important participant in customized ASIC chips for AI. Google's self-developed server AI chip, the TPU accelerator, is the clearest example: Broadcom co-develops the TPU with the Google team, and beyond chip design it provides Google with key intellectual property for chip-to-chip interconnect communication and is responsible for manufacturing, testing, and packaging new chips, underpinning Google's build-out of new AI data centers.
Broadcom's high-performance Ethernet switch chips are mainly used in data centers and server clusters, where they process and transmit data streams efficiently and at high speed. These chips are essential for building AI hardware infrastructure because they ensure high-speed data transmission between GPU processors, storage systems, and networks, which is extremely important for generative AI such as ChatGPT, and especially for applications that require large-scale data input and real-time processing, such as the DALL-E text-to-image and Sora text-to-video models.
Building on its unique chip-to-chip communication technology and many data-transmission patents, Broadcom has become the most important participant in the AI ASIC chip market. Not only does Google continue to partner with Broadcom on customized AI ASIC design, but giants such as Apple and Meta, along with more data-center operators, are expected to join forces with Broadcom over the long term to build high-performance AI ASICs. At its earnings conference at the beginning of the year, Broadcom management predicted that the potential market for the AI components it builds for global data-center operators (Ethernet chips plus AI ASICs) will reach $60 billion to $90 billion by fiscal 2027.
Google, one of Broadcom's major customers, recently revealed the latest details of Ironwood, its newest TPU, showing remarkable performance improvements. Compared with the TPU v5p, Ironwood's peak FLOPS are 10 times higher and its performance-per-watt is 5.6 times higher. Compared with the TPU v4 that Google launched in 2022, Ironwood's single-chip computing power has increased by more than 16 times.
The data Google released clearly shows the evolution of its TPU platform. Ironwood's single-chip peak computing power reaches 4,614 TFLOPS, with 192 GB of HBM and bandwidth of up to 7.4 TB/s. By comparison, the TPU v4 released in 2022 delivers 275 TFLOPS per chip with 32 GB of HBM and 1.2 TB/s of bandwidth, while the TPU v5p, launched in 2023, delivers 459 TFLOPS per chip with 95 GB of HBM and 2.8 TB/s of bandwidth.
The comparison shows that Ironwood's efficiency of 4.2 TFLOPS/W is only slightly below the 4.5 TFLOPS/W of Nvidia's B200/B300 GPUs. J.P. Morgan commented that this performance data highlights how AI ASIC chips dedicated to advanced AI are rapidly narrowing the gap with market-leading AI GPUs, driving hyperscale cloud providers to increase investment in more cost-effective customized ASIC projects.
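Taking the figures quoted above at face value, the generational multiples and the efficiency gap can be sanity-checked with a few lines of arithmetic (a quick sketch; the spec numbers are simply those cited in this article, not independently verified):

```python
# Per-chip specs as quoted above: peak TFLOPS, HBM capacity (GB), bandwidth (TB/s)
specs = {
    "TPU v4 (2022)":  {"tflops": 275,  "hbm_gb": 32,  "bw_tbs": 1.2},
    "TPU v5p (2023)": {"tflops": 459,  "hbm_gb": 95,  "bw_tbs": 2.8},
    "Ironwood":       {"tflops": 4614, "hbm_gb": 192, "bw_tbs": 7.4},
}

ironwood = specs["Ironwood"]["tflops"]
# Generational peak-FLOPS multiples
print(f"Ironwood vs TPU v5p: {ironwood / specs['TPU v5p (2023)']['tflops']:.1f}x")  # ~10.1x
print(f"Ironwood vs TPU v4:  {ironwood / specs['TPU v4 (2022)']['tflops']:.1f}x")   # ~16.8x, i.e. "more than 16 times"

# Efficiency gap vs the Nvidia B200/B300 figure cited by J.P. Morgan (TFLOPS/W)
print(f"Ironwood reaches {4.2 / 4.5:.0%} of the GPU figure")                        # 93%
```

The output matches the article's claims: roughly 10x over the v5p, just under 17x over the v4, and an efficiency within about 7% of the cited GPU figure.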
According to the latest forecast from Wall Street giant J.P. Morgan, the chip, built with Broadcom on an advanced 3nm process and set for large-scale mass production in the second half of 2025, is expected to bring Broadcom about $10 billion in revenue over the next 6 to 7 months.
Notably, according to media reports, Google recently approached some cloud service providers that mainly rent out Nvidia AI GPU server clusters, hoping their data centers could also deploy Google TPU computing clusters. Representatives of companies involved in the talks privately told the media that Google has reached an agreement with at least one cloud provider: London-based Fluidstack will deploy a Google TPU computing cluster at a data center in New York.
Gil Luria, a well-known analyst at Wall Street investment firm D.A. Davidson, said that more and more cloud service providers and large AI application developers are interested in Google's TPU and hope to use it to reduce their dependence on Nvidia. After speaking with researchers and engineers at various cutting-edge artificial intelligence laboratories, D.A. Davidson found that the engineers rated Google's custom AI training/inference accelerator very positively.
After DeepSeek shocked the world, Broadcom's stock has risen more strongly than Nvidia's! Wall Street is optimistic that Broadcom's shares will continue to reach new highs
Global demand for AI computing power continues to boom, AI infrastructure investment programs led by the US government keep getting bigger, and tech giants continue to pour huge sums into large-scale data centers. For investors who have long favored Nvidia and the AI computing-power industry chain, this largely means that the “super-catalysis” the worldwide “AI belief” provides to computing-power leaders' stock prices is far from over. They are betting that the shares of AI computing-power companies led by Nvidia, TSMC, and Broadcom will continue to trace a “bull curve” and push the global stock market to extend its bull run.
Driven by the epic stock-price gains and continued strong results of AI computing-power leaders such as Nvidia, Google, TSMC, and Broadcom, an unprecedented AI investment boom has swept the US and global stock markets, driving the MSCI global equity benchmark index sharply higher since April to a string of recent record highs.
After DeepSeek R1 came out of nowhere at the end of January, shocking Silicon Valley and Wall Street and sending the US AI computing-power sector to a record one-day drop, Broadcom's stock has since risen more strongly than that of Nvidia, the dominant AI chip maker.
As DeepSeek completely sets off an “efficiency revolution” at the level of AI training and inference, and pushes future AI model development to focus on “low cost” and “high performance,” AI ASICs are entering a stronger demand expansion trajectory than the 2023-2024 AI boom period against the backdrop of a sharp increase in demand for AI inference in the cloud. In the future, major customers such as Google, OpenAI, and Meta are expected to continue to invest heavily in developing AI ASIC chips with Broadcom.
As large-model architectures gradually converge on a few mature paradigms (such as standardized Transformer decoders and diffusion-model pipelines), more cost-effective AI ASICs can more easily absorb the computing load of mainstream inference. Moreover, some cloud providers and industry giants will deeply couple their software stacks, make their ASICs compatible with common neural-network operators, and provide excellent developer tools, accelerating the spread of ASIC inference in standardized, high-volume scenarios. Nvidia AI GPUs may then focus more on large-scale frontier exploratory training, rapid testing of fast-changing multimodal or novel architectures, and general-purpose computing such as HPC, graphics rendering, and visual analytics.
As demand for Broadcom's Ethernet switch chips and AI ASIC chips continues to grow rapidly, Wall Street is broadly bullish on Broadcom's stock and optimistic that it will keep hitting new highs. Evercore recently raised its 12-month target price on Broadcom from $304 to $342, and Morgan Stanley sharply raised its target from $338 to $357.
Furthermore, “silicon photon technology” is expected to be an important catalyst for Broadcom's stock price to move towards a new round of bull market curves. The “silicon photon technology” wave led by global chip giants such as Nvidia, TSMC, and Broadcom is about to evolve into an unprecedented revolution — the “silicon photon revolution” that will sweep the entire AI computing power industry chain. This also means that CPO and optical I/O technology routes will soon accelerate penetration from cutting-edge laboratories to global applications.
On one hand, Broadcom is developing its own CPO high-performance switch-chip solutions (combined with its flagship Tomahawk series of switch chips); on the other, it has accumulated technology in optical interconnection through mergers and acquisitions (including its earlier acquisition of Brocade). With an extensive global cloud-vendor customer base and a mature switch-ASIC business, Broadcom's large-scale introduction of CPO technology will undoubtedly greatly enhance the competitiveness of its switching systems.