TSMC: due to tight CoWoS packaging production capacity, NVIDIA AI GPU supply shortage may last until 2025

In recent months, artificial intelligence tools, led predominantly by ChatGPT, have ignited global fervor, substantially amplifying demand for data center GPUs such as NVIDIA's A100 and H100. This surging demand has strained the advanced packaging capacity of Taiwan Semiconductor Manufacturing Company (TSMC), prompting it to urgently order new equipment, with plans to expand its 2.5D packaging capacity by over 40% to meet NVIDIA's burgeoning needs.

According to a report by Nikkei Asia, TSMC's Chairman, Mark Liu, recently acknowledged at a public event that the rise of artificial intelligence (AI) requires vast computational power, driving a surge in GPU demand and a consequent strain on CoWoS packaging capacity. At present, TSMC cannot fully meet customer demand, fulfilling only around 80% of the orders it receives.

Liu views the shortfall in CoWoS packaging capacity as a temporary setback. With TSMC's planned expansion in this area, he expects the situation to ease within the next 18 months. This implies that NVIDIA's data center GPUs could remain in short supply well into 2025, with no immediate resolution in sight.

NVIDIA's pressing requirements, combined with TSMC's inability to quickly scale up its packaging capacity, have created opportunities for other industry players. Samsung previously proposed that NVIDIA have its chips fabricated at TSMC, procure HBM3 from Samsung's memory business division, and use Samsung's I-Cube 2.5D packaging for the subsequent assembly steps.

Recent reports suggest that Samsung has signed a deal with NVIDIA to supply HBM3 chips, with shipments beginning as early as October 2023. By 2024, Samsung is anticipated to secure up to 30% of NVIDIA's HBM3 orders, and it hopes to capture a larger share of the 2.5D packaging orders as well.