NVIDIA expects Blackwell architecture GPU supply to remain limited

Recent reports have highlighted that, driven by surging demand for artificial intelligence (AI), the market is in dire need of more powerful solutions. In response, NVIDIA has accelerated the launch of its next-generation Blackwell architecture GB100 GPU from the fourth quarter of 2024 to the end of the second quarter, aiming to extend its dominance over competitors in the data center market. NVIDIA has also forged an agreement with SK Hynix to use the latter’s cutting-edge, AI-oriented, ultra-high-performance DRAM product, HBM3E, in its forthcoming B100 compute cards.

According to Seeking Alpha, while delivery times for the current AI- and HPC-oriented H100 compute cards have shortened significantly, the supply outlook for the new Blackwell architecture-based products remains pessimistic. During an earnings call with financial analysts and investors, NVIDIA’s Chief Financial Officer, Colette Kress, indicated that supply of the next-generation products is expected to be constrained, with demand far outstripping supply.

There are rumors that some of NVIDIA’s clients have already placed orders for a small batch of B100 compute cards. The question remains: once the products officially launch, how quickly can NVIDIA ramp up production of the new B100 SXM and B100 PCIe components, along with the associated DGX servers? Should market demand prove overwhelming, the initial delivery delays experienced with the H100 could well be repeated.

The Blackwell architecture-based GB100 GPU uses a chiplet design with Multi-Chip Module (MCM) packaging, which makes it easier to extend the chip’s capabilities. However, the multi-chip approach may complicate the later stages of the packaging process. Beyond the B100, NVIDIA has also prepared the B40 for enterprise and training applications, the GB200, which combines the B100 with a Grace CPU, and the GB200 NVL, designed specifically for training large language models.