Demand for AI ASICs continues to skyrocket! Marvell (MRVL.US) net profit surged 876% as it spends US$3.25 billion on optical interconnect

Zhitongcaijing · 12/03/2025 00:33

The Zhitong Finance App learned that on Wednesday morning Beijing time, Marvell Technology (MRVL.US), which focuses on customized AI chips (AI ASICs) for large-scale AI data centers and is one of Amazon's biggest partners on the AWS Trainium series of AI ASICs, announced results for the third quarter of fiscal year 2026, ended November 1. The latest figures show that both the company's performance and its outlook beat the average expectations of Wall Street analysts, highlighting strong demand for AI ASIC computing clusters, the technology route that most directly challenges Nvidia's AI GPU computing clusters.

In addition to strong Q3 results and a markedly expanded outlook for the current quarter, the chip company also disclosed that it will spend US$3.25 billion to acquire Celestial AI, a chip startup focused on optical interconnect I/O, to strengthen its networking portfolio. Together, these announcements sent Marvell's US-listed shares up more than 16% in after-hours trading. Even so, with the long-standing "AI ASIC hegemon," US chip giant Broadcom, intensifying competition in customized data-center AI chips and high-performance network infrastructure, and with Wall Street's recent worries about a potential AI bubble bursting, Marvell's stock has fallen more than 15% year to date.

On the market-focused outlook, Marvell Technology's management expects fourth-fiscal-quarter revenue of about US$2.2 billion, plus or minus 5%. The midpoint is notably above the Wall Street consensus of roughly US$2.18 billion, and it is worth noting that consensus itself has been repeatedly raised since tech giants such as Google, Amazon, and Nvidia reported strong results at the end of October. That Marvell's official guidance still beats these upwardly revised estimates shows just how explosively demand for AI computing infrastructure on the AI ASIC route has grown; Marvell and ASIC-field leader Broadcom (AVGO.US) have remained on a steep growth trajectory since 2023.
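As a quick illustration of the guidance arithmetic above (a sketch only, using the round figures in the article):

```python
# Illustrative arithmetic: the +/-5% band around Marvell's guided
# Q4 revenue midpoint, versus the Wall Street consensus cited above.
midpoint = 2.20e9          # guided midpoint, ~US$2.2 billion
band = 0.05                # +/-5% fluctuation range

low, high = midpoint * (1 - band), midpoint * (1 + band)
street = 2.18e9            # Wall Street consensus, ~US$2.18 billion

print(f"Guidance range: ${low/1e9:.2f}B - ${high/1e9:.2f}B")
print(f"Midpoint exceeds consensus by ~${(midpoint - street)/1e6:.0f}M")
```

So even the bottom of the guided range (about US$2.09 billion) sits within striking distance of consensus, while the midpoint beats it by roughly US$20 million.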

The company guided to non-GAAP earnings per share of $0.74 to $0.84, a range whose midpoint was well above Wall Street expectations, with an expected non-GAAP gross margin of 58.5% to 59.5%. On the earnings call, management projected total revenue of approximately $10 billion for the next fiscal year, including roughly 25% growth in data-center revenue.

According to the financial data, for the third fiscal quarter ended November 1, Marvell Technology's total revenue rose 37% year over year to US$2.07 billion, slightly above the Wall Street consensus of US$2.05 billion. Adjusted (non-GAAP) earnings per share were approximately US$0.76, above the Wall Street average forecast of US$0.74 and up from US$0.43 in the same period last year.

Under GAAP, Marvell Technology's Q3 net profit was about US$1.9 billion, up 876% from the previous quarter's net profit of US$194.8 million and compared with a net loss of US$676 million in the same period last year; GAAP diluted earnings per share were US$2.20, versus US$0.22 in the previous quarter and a loss of US$0.78 per share a year earlier.

The generative AI boom sweeping the world has accelerated AI chip development at cloud computing and chip giants, which are racing to design the fastest and most energy-efficient AI computing clusters for advanced large-scale AI data centers. Marvell and its biggest competitor Broadcom focus mainly on partnering with cloud giants such as Amazon and Google to build AI ASIC computing clusters tailored to the specific needs of their AI data centers, and this ASIC business has grown into a critical one for both companies. The TPU computing clusters that Broadcom builds with Google are the most typical example of the AI ASIC technology route.

A major takeover in the chip industry! Marvell Technology swallows up optical interconnect leader Celestial AI

Matt Murphy, CEO of Marvell Technology, said on the earnings call that the company expects revenue from its customized chip business to keep growing strongly next year, by about 20%. According to a research report by Harlan Sur, a senior analyst at J.P. Morgan, in addition to partnering with Amazon on the Trainium series of AI chips, the Santa Clara, California-based chipmaker is helping another big tech giant, Microsoft, build its first customized AI ASIC computing cluster.

The acquisition will let Marvell Technology draw on the startup's research in silicon photonic interconnect chip technology. Optical interconnect technology uses optical rather than electrical signals to establish photonic-level connections between AI chips and in-package components such as memory, a huge leap over traditional electrical signaling in terms of AI model performance and AI data-center energy efficiency. In this cutting-edge field, competition among Marvell Technology, Broadcom, and Nvidia (NVDA.US), the world's most valuable company by market capitalization, is already in full swing.

In a media interview, Murphy said Celestial's technology will be used in Marvell Technology's next-generation silicon photonics infrastructure hardware, and that these products will open an additional market expected to reach US$10 billion in size.

“When the dust settles, we will have created a 'super powerhouse player' in silicon photonics within Marvell,” he said in the interview.

Murphy said large cloud computing leaders such as Amazon AWS will begin large-scale deployment of silicon photonic interconnect technology in 2027 or 2028, and that the technology will ultimately be adopted extremely widely around the world.

Under the terms of the deal, Celestial AI will receive $1 billion in cash and 27.2 million Marvell shares worth approximately $2.25 billion.
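A back-of-the-envelope check of the deal terms just described (all inputs are the article's reported figures; the per-share value is simply implied by them):

```python
# Arithmetic check of the reported Celestial AI deal consideration.
cash = 1.00e9              # US$1 billion in cash
shares = 27.2e6            # 27.2 million Marvell shares
stock_value = 2.25e9       # reported value of the stock component

total = cash + stock_value               # headline deal value
implied_price = stock_value / shares     # implied value per Marvell share

print(f"Total consideration: ${total/1e9:.2f}B")
print(f"Implied per-share value: ${implied_price:.2f}")
```

The components sum to the US$3.25 billion headline price cited earlier, and the stock portion values Marvell shares at roughly $82.7 each.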

Murphy and other senior executives also said Marvell expects meaningful revenue contributions from the Celestial AI business starting in the second half of fiscal 2028, an annualized revenue run rate of about US$500 million by the fourth quarter of fiscal 2028, and a doubling of that figure to US$1 billion by the fourth quarter of fiscal 2029.

Marvell said the deal is expected to close in the first quarter of calendar year 2026.

In connection with the Celestial acquisition, Marvell also issued a warrant on its shares to its long-time major customer Amazon. The agreement allows Amazon to buy Marvell shares in proportion to its purchases of silicon photonic optical interconnect products through the end of 2030.

According to Marvell's latest disclosure, Amazon, its largest ASIC customer, can subscribe for up to US$90 million worth of Marvell shares, equivalent to 1 million shares, at an exercise price of about US$87 per share. Marvell's stock closed at $92.89 on Tuesday.

Celestial AI positions itself as the creator of the Photonic Fabric silicon photonic interconnect platform. It focuses on optical interconnect technology for AI compute and memory, using light to connect XPUs, switch chips, and HBM. The startup acquired Rockley Photonics' silicon photonics patent portfolio, after which its holdings in optical compute interconnect exceeded 200 silicon photonics IPs, which it bills as "one of the strongest IP portfolios in silicon photonic optical compute interconnect." Celestial AI's current silicon photonics focus is best understood as the "optical interconnect I/O (optical I/O)" direction.

Demand for AI computing power from the inference side is vast, and is expected to keep the AI computing infrastructure market on an exponential growth path; Nvidia CEO Jensen Huang has likewise pointed to AI inference systems as the largest source of the company's future revenue. Growing demand for AI compute inevitably brings enormous demand for optical interconnect, so silicon photonics has considerable potential in large-scale scenarios with high bandwidth and low power/thermal requirements, such as high-speed data communication and data-center interconnect. As cloud AI compute services and ChatGPT-style generative AI applications built on AI training/inference systems penetrate further and demand for AI compute surges, silicon photonics will play an increasingly important role.

As Moore's Law approaches its limits, performance gains in traditional electronic chips have slowed, and silicon photonics-based chip packaging offers an optics-based path to keep scaling performance as nanometer process advances stall. Silicon photonics integrates optical components such as lasers with silicon-based integrated circuits, using optical rather than electrical signals to achieve higher-speed data transmission, longer reach, and lower power consumption. Compared with ordinary electrical-signal chips, silicon photonic chips also offer much lower latency.

Within the silicon photonics landscape, co-packaged optics (CPO) and optical I/O form two complementary but very different approaches: the former targets the power-consumption and faceplate-density bottlenecks of rack-level switch ASIC interfaces, while the latter builds around optical transceiver engines, positioning them as next-generation off-chip buses between compute chips such as CPUs, GPUs, and NPUs.

Many Wall Street analysts believe optical I/O will become the mainstream route for silicon photonics, with a future market far larger than CPO's. Simply put, CPO mainly targets the immediate power- and density-related bottlenecks of high-performance network switch ASICs, such as Nvidia's InfiniBand and Broadcom's Ethernet switch chips, while optical I/O targets the future bandwidth, energy-efficiency, and pin-count "wall" challenges of heterogeneous computing. From an advanced packaging perspective, CPO and optical I/O are very likely to end up integrated in one package: advanced 2.5D/3D packages (CoWoS, InFO, EMIB, etc.) allow a switch ASIC plus CPO optical engine and several compute cores with optical I/O to sit within one large system-in-package (SiP), forming a heterogeneous advanced package with separate domains and thermal-zone isolation. The two routes therefore evolve independently yet are highly complementary at the package-system level.

The heyday of the AI ASIC technology route has arrived

Facebook parent Meta is reportedly considering spending billions of dollars to buy Google's TPU computing infrastructure in 2027, including for a huge new Meta AI data center, and Salesforce CEO Marc Benioff recently said the company will move off OpenAI's large models to Gemini 3, Google's newly released AI model. On top of these headlines, Anthropic, recently dubbed the "OpenAI rival," plans to spend tens of billions of dollars to buy 1 million TPU chips. Under these strong catalysts, the so-called "Google AI ecosystem" has grown ever hotter: the stock prices of almost all participants in the ecosystem have surged recently, and the TPU in particular, the most typical embodiment of the AI ASIC route, is winning ever wider recognition in the technology sector.

As DeepSeek sets off an "efficiency revolution" in AI training and inference, pushing future AI model development toward "low cost" and "high performance," AI ASICs, which are more cost-effective than Nvidia's AI GPU route, are expected to enter an even stronger demand-expansion trajectory than during the 2023-2024 AI boom. Major customers such as Google, OpenAI, and Meta are expected to keep investing heavily in developing AI ASIC chips with Broadcom.

According to SemiAnalysis calculations, Google's latest TPU v7 (Ironwood) shows a striking generational leap: its BF16 compute reaches 4614 TFLOPS, versus only 459 TFLOPS for the widely deployed previous-generation TPU v5p, a full order-of-magnitude improvement. The TPU v7's memory also directly matches Nvidia's Blackwell-architecture B200. For specific workloads, AI ASICs, which are architecturally more cost-effective and energy-efficient, can more easily absorb mainstream inference loads; Google's latest TPU cluster, for example, can deliver up to 1.4 times the performance per dollar of Nvidia Blackwell.
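The generational leap above can be checked directly from the cited figures:

```python
# Generational-leap arithmetic from the SemiAnalysis BF16 figures
# cited above (throughput in TFLOPS).
tpu_v7_bf16 = 4614.0   # TPU v7 (Ironwood)
tpu_v5p_bf16 = 459.0   # TPU v5p, previous generation

leap = tpu_v7_bf16 / tpu_v5p_bf16
print(f"TPU v7 delivers ~{leap:.1f}x the BF16 compute of TPU v5p")
```

The ratio works out to roughly 10x, consistent with the "order of magnitude" characterization.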

Google's recently released series of AI products built on Gemini 3 drove such huge AI token-processing volumes that Google had to limit daily content generation, further confirming Wall Street's view that "the AI boom is still in the early, supply-constrained stage of compute infrastructure build-out." Add to that the fact that Berkshire Hathaway, the firm of "stock god" Warren Buffett, now counts Google parent Alphabet (GOOGL.US) among its ten largest holdings, and Google has comprehensively strengthened the "AI bull market narrative" of late, forcefully rebutting some investors' "AI bubble" anxiety.

Recently, Wall Street's bullishness on the stock prices of the two AI ASIC leaders, Broadcom and Marvell, has kept building. UBS Group, for example, raised its 12-month price target on Broadcom from $415 to $472, and Bank of America sharply raised its Broadcom target from $400 to $460; Broadcom's stock closed around $381 on Tuesday. Oppenheimer and UBS Group, meanwhile, expect Marvell's stock to reach $110 over the next 12 months.

Although about 95% of Meta's current huge AI computing infrastructure is based on AI GPUs, that is, large-scale GPU clusters from leaders Nvidia and AMD, more and more technology companies are turning to TPU-style AI ASIC clusters, and the new orders this brings are undoubtedly very positive for Broadcom and Marvell. Even eroding just 10% of the GPU cluster share in Meta's AI infrastructure would pose a mild challenge to Nvidia and AMD.

According to a recent Morgan Stanley research report, actual production of Google's TPU AI chips will reach 5 million and 7 million units in 2027 and 2028 respectively, increases of 67% and 120% from the bank's previous estimates. This production surge suggests Google will begin selling TPU AI chips externally. More fundamentally, Morgan Stanley estimates that every 500,000 TPUs sold externally would bring Google roughly US$13 billion of additional revenue and up to US$0.40 of incremental earnings per share.
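The unit economics implied by that estimate are straightforward to derive (a sketch only, using the article's round figures):

```python
# Implied average revenue per TPU from the Morgan Stanley estimate
# cited above: 500,000 external units -> ~US$13 billion of revenue.
units = 500_000
added_revenue = 13e9

asp = added_revenue / units   # implied average selling price per TPU
print(f"Implied revenue per TPU: ${asp:,.0f}")
```

That is, the estimate implies an average of about US$26,000 of revenue per TPU sold externally.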