The Zhitong Finance App learned that Wall Street financial giant Morgan Stanley recently released a research report saying that the "king of chip foundry" TSMC (TSM.US) has begun construction of a 310 mm × 310 mm panel-level chiplet advanced packaging pilot line (the CoPoS advanced packaging system), and has prompted advanced packaging giants such as ASE to simultaneously scale FOPLP to 300/310 mm panels. This means that the iteration from wafer-level CoWoS to panel-level CoPoS advanced packaging has officially entered the investment and initial trial-production period.
The CoPoS pilot line launched by TSMC means this chip manufacturing giant has officially kicked off an "advanced packaging revolution" spanning the upstream and downstream of the chip industry chain. Going forward, CoPoS will mainly be used to relieve CoWoS capacity bottlenecks and high initial packaging costs at scale. For next-generation AI training/inference GPUs and AI ASICs, it pursues packaging larger chiplet dies and more HBM stacks at a time to achieve exponential performance improvement, and is expected to lower the cost of capacity expansion compared with CoWoS.
According to Morgan Stanley's global chip industry chain research, TSMC has invested in a 310 mm × 310 mm CoPoS pilot line, and ASE released its 2.3D packaging technology (FoCoS-Bridge) on 300 mm panels at almost the same time, indicating that the advanced packaging industry is accelerating the transition to the panel level. In June 2025, a large amount of PLP/CoPoS-related semiconductor equipment and materials also appeared at the Japan Institute of Electronics Packaging (JIEP) seminar. According to the Morgan Stanley report, the industry expects medium- to large-scale delivery, installation, and commissioning of CoPoS-related equipment in 2026, process bring-up in 2027, and the large-scale equipment investment decision period, along with initial production, in mid-2027.
The CoPoS advanced packaging system inherits CoWoS's silicon-interposer technology stack, but makes system-level adjustments in substrate form factor, the high-end semiconductor equipment chain, and yield bottlenecks, offering a higher performance ceiling and more easily scaled capacity to meet the world's growing demand for AI computing power on a broader scale.
For AI/HPC super customers such as Nvidia, AMD, Broadcom, and Marvell Technology, CoPoS provides more package-level I/O and HBM stacks, greatly mitigating the shortage of advanced packaging capacity and the high initial production costs of wafers and chips. From the perspective of the performance ceiling, CoPoS's combination of panel-level area and HBM stacking can deliver greater bandwidth/capacity expansion than current CoWoS packages, so it offers a higher performance ceiling for AI chips aimed at very large model training/inference systems.
From the perspective of earnings growth and valuation expansion, the entire chip industry chain is expected to see significant gains. For Nvidia, AMD, and the three major EDA giants, supply-side product iteration is expected to drive larger end demand, especially for the AI chip hegemon Nvidia, which will be able to satisfy AI computing power demand to a greater extent. Because of CoPoS panelization, the high-end semiconductor equipment and materials supply chains will soon see a new round of large-scale capital expenditure, especially in laser dicing, panel lithography, vacuum bonding, and dry-film packaging; the key equipment is panel-level direct-write lithography, laser dicing, and panel-level die bonders.
From wafers to panels: TSMC leads the "CoPoS revolution"
The advanced CoWoS (Chip-on-Wafer-on-Substrate) packaging process first completes wiring and TSVs on a 300 mm silicon interposer, then attaches the logic/memory dies and mounts the assembly on a BT/ABF organic substrate. Since the usable area of the wafer is very limited, once a large compute die plus multiple HBM stacks are placed, a single wafer yields only 3-4 packages, and yield declines as package area grows. The result is high unit cost, long-constrained capacity, and a performance ceiling that is beginning to be reached.
The CoPoS (Chip-on-Panel-on-Substrate) process moves the silicon interposer or redistribution layer onto a rectangular panel (PLP, typically 310 mm × 310 mm): it first forms a large-area RDL with embedded silicon bridges, then attaches the chiplet dies/HBM, and finally assembles the result with an organic substrate. CoPoS aims to package more chiplets at once and raise the performance of next-generation AI chips built on ultra-advanced process nodes of 1 nm and below. However, warpage and coating uniformity at the panel corners are new challenges.
Therefore, CoPoS panels have high area utilization: the usable area of a single panel is about 3-5 times that of a wafer, potential capacity per substrate rises by roughly 2-3×, and cost per unit area falls by about 20-30%, though the semiconductor equipment chain may need to be re-adapted (mainly large-format laser dicing, direct-imaging lithography, and vacuum die bonders).
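The panel-versus-wafer capacity arithmetic above can be sketched numerically. The snippet below is illustrative only: it assumes a hypothetical large-package footprint of 120 × 150 mm (the CoWoS-S interposer size limit cited elsewhere in the article) and a naive same-orientation grid packing, which is cruder than real placement tools.

```python
import math

WAFER_D = 300.0              # mm, standard wafer diameter
PANEL_W = PANEL_H = 310.0    # mm, CoPoS panel size from the report
PKG_W, PKG_H = 120.0, 150.0  # mm, assumed large-package footprint (illustrative)

def count_on_panel(pw, ph, W, H):
    """Same-orientation grid packing on a rectangular panel."""
    return int(W // pw) * int(H // ph)

def count_on_wafer(pw, ph, d, step=5.0):
    """Best same-orientation grid packing on a circular wafer, searching over
    grid offsets; a cell counts only if all four corners lie inside the circle."""
    r = d / 2.0
    best = 0
    ox = 0.0
    while ox < pw:
        oy = 0.0
        while oy < ph:
            n = 0
            x = -r + ox - pw
            while x < r:
                y = -r + oy - ph
                while y < r:
                    corners = ((x, y), (x + pw, y), (x, y + ph), (x + pw, y + ph))
                    if all(cx * cx + cy * cy <= r * r for cx, cy in corners):
                        n += 1
                    y += ph
                x += pw
            best = max(best, n)
            oy += step
        ox += step
    return best

wafer_n = count_on_wafer(PKG_W, PKG_H, WAFER_D)
panel_n = count_on_panel(PKG_W, PKG_H, PANEL_W, PANEL_H)
print(f"packages per wafer: {wafer_n}, per panel: {panel_n}, "
      f"ratio: {panel_n / wafer_n:.1f}x")
```

Under these assumptions the panel roughly doubles per-substrate output, at the low end of the report's 2-3× estimate; smaller package footprints or smarter placement shift the numbers upward.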
Morgan Stanley said that for the chip industry chain, moving from 12-inch wafer-level equipment to PLP-related materials and equipment is a new ultra-large-scale CAPEX cycle, and semiconductor equipment giants (such as DISCO, ULVAC, SCREEN Holdings, and Canon) are expected to receive incremental orders, a major structural growth opportunity.
CoPoS and AI computing power
With ChatGPT popular around the world, the Sora text-to-video large model taking off, and Nvidia, the "shovel seller" of the AI field, delivering unparalleled results for many consecutive quarters, human society has clearly entered the AI era. On Nvidia's earnings call at the end of May, Jensen Huang was extremely optimistic that the Blackwell series would set the record as the best-selling AI chips in history, driving the AI computing infrastructure market to "show exponential growth." "Today, every country sees AI at the core of the next industrial revolution — an emerging industry that continuously produces intelligence and critical infrastructure for every economy in the world," Huang said in the earnings discussion with analysts.
The demand for AI computing power brought by the inference side can be called a "sea of stars," and is expected to keep the AI computing infrastructure market growing exponentially. In Jensen Huang's view, AI inference systems are also the largest source of Nvidia's future revenue.
In the current "bandwidth plus computing power" AI infrastructure competition centered on AI chips, wafer-level CoWoS has pushed the Nvidia AI GPU package to its limit: at most 6 HBM stacks with a total bandwidth of 3.9 to 4.8 TB/s. For example, CoWoS-S limits the silicon interposer to a size within roughly 120 × 150 mm.
By increasing the carrier area to a typical 310 × 310 mm, panel-level CoPoS can accommodate up to 10-12 next-generation HBM4 stacks and more chiplet dies, with theoretical peak bandwidth expected to exceed 13-15 TB/s and storage capacity at least doubled. Larger panels also allow GPU/CPU chiplets, optical I/O dies, and dedicated AI acceleration IP to be packaged and integrated at a larger scale, dramatically shortening interconnect distances and greatly reducing overall latency and power consumption. In terms of next-generation AI chip performance, therefore, CoPoS provides a much higher "performance ceiling" to meet computing power requirements on a broader scale.
In other words, as AI computing power requirements and AI model parameter scales continue to explode, and even as HBM counts climb to 10 or more per package, CoPoS will fully unleash the advantage of panel area, enabling larger AI chips and lower cost per unit of computing power across AI infrastructure. For example, when the usable area of a CoPoS panel reaches more than 5 times that of a single CoWoS interposer and is paired with HBM4 (1.6 TB/s per stack, 2,048-bit bus), 12 stacks can achieve a peak above 19 TB/s, more than 4 times the theoretical bandwidth ceiling of current CoWoS.
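The bandwidth comparison above is simple multiplication; the sketch below just makes the article's estimates explicit. The per-stack figures are the article's projections, not vendor specifications.

```python
# Per-stack bandwidth estimates taken from the article (TB/s); these are
# the report's projections, not vendor specifications.
HBM3E_PER_STACK = 0.8   # upper end: 4.8 TB/s total across 6 stacks on CoWoS
HBM4_PER_STACK = 1.6    # article's HBM4 estimate (2,048-bit bus)

COWOS_STACKS = 6        # current wafer-level CoWoS package limit cited above
COPOS_STACKS = 12       # panel-level CoPoS target cited above

cowos_bw = COWOS_STACKS * HBM3E_PER_STACK   # ~4.8 TB/s
copos_bw = COPOS_STACKS * HBM4_PER_STACK    # ~19.2 TB/s
print(f"CoWoS: {cowos_bw:.1f} TB/s, CoPoS: {copos_bw:.1f} TB/s, "
      f"uplift: {copos_bw / cowos_bw:.1f}x")
```

This reproduces the article's ">19 TB/s" figure and its "more than 4×" uplift over the current CoWoS ceiling.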