A few weeks ago, we wrote about how Eliyan's NuLink PHY could eliminate silicon interposers and integrate everything into a single, elegant package. How, in essence, the socket could become the motherboard.
At the recent 30th Annual North American Technology Symposium, Taiwan Semiconductor Manufacturing Company (TSMC) revealed plans to build a version of its chip-on-wafer-on-substrate (CoWoS) packaging technology that could lead to systems-in-packages (SiPs) more than double the size of the current largest.
“With System-on-Wafer, TSMC offers a revolutionary new option to enable a large variety of dies on a 300mm wafer, delivering more computing power while taking up much less space in the data center and increasing performance per watt by orders of magnitude,” the company said.
A huge amount of power
TSMC's first SoW offering, a logic-only wafer based on Integrated Fan-Out (InFO) technology, is now in production.
A chip-on-wafer version that uses CoWoS technology is expected to arrive in 2027 and will allow the “integration of SoIC, HBM and other components to create a powerful wafer-level system with computing power comparable to that of a server rack of a data center, or even an entire server.”
Reporting on the move, Tom's Hardware expands on this, saying: “One of the designs TSMC envisions is based on four stacked SoICs coupled with 12 HBM4 memory stacks and additional I/O dies. Such a giant will surely consume an enormous amount of energy: we are talking thousands of watts and will need very sophisticated cooling technology. TSMC also expects such solutions to use a 120x120mm substrate.”
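To put that 120x120mm substrate in perspective, a quick back-of-the-envelope calculation compares it to a single lithography reticle field. The ~858 mm² reticle figure (a 26mm x 33mm exposure field) is our assumption for context, not a number from TSMC's announcement:

```python
# Compare the rumored 120 x 120 mm SoW substrate to one reticle field.
# Assumption: a standard maximum reticle field of 26 mm x 33 mm (~858 mm^2).
SUBSTRATE_MM = 120
RETICLE_MM2 = 26 * 33  # assumed maximum exposure field, mm^2

substrate_area = SUBSTRATE_MM ** 2          # 14,400 mm^2 of substrate
reticle_multiples = substrate_area / RETICLE_MM2

print(f"Substrate area: {substrate_area} mm^2")
print(f"Roughly {reticle_multiples:.1f}x the area of a single reticle field")
```

Under that assumption, the substrate offers on the order of 17 reticle fields' worth of area, which is why designs of this scale must be stitched together from multiple dies rather than built as one monolithic chip.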
However, TSMC's ambitious quest to create giant chips is being overshadowed by Cerebras Systems' new Wafer Scale Engine 3 (WSE-3), dubbed “the world's fastest AI chip.” The WSE-3 has four trillion transistors and is twice as powerful as its predecessor, the WSE-2, while maintaining the same power consumption and price. This new chip, built on a 5nm TSMC process, provides an astonishing maximum AI performance of 125 petaflops, which is equivalent to 62 Nvidia H100 GPUs.
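The 62-GPU equivalence can be sanity-checked with simple division. The per-H100 throughput figure below is our assumption (roughly 1.98 petaflops of peak sparse FP8 compute per card, per Nvidia's published specs), not a number from the article:

```python
# Sanity-check the "equivalent to 62 Nvidia H100 GPUs" claim.
# Assumption: one H100 peaks at ~1.98 PFLOPS of sparse FP8 AI compute.
WSE3_PFLOPS = 125    # Cerebras's stated peak AI performance
H100_PFLOPS = 1.98   # assumed peak sparse FP8 throughput per H100

equivalent_h100s = WSE3_PFLOPS / H100_PFLOPS
print(f"~{equivalent_h100s:.0f} H100-equivalents")
```

Under that assumption the division lands in the low sixties, consistent with the comparison Cerebras is drawing.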