NVIDIA B100 will reportedly come with 192GB of 8-layer stacked HBM3E

The GTC 2024 conference is slated to take place from March 18th to 21st, 2024, at the San Jose Convention Center in California, USA, with an online conference available simultaneously. This iteration will see NVIDIA placing heavy emphasis on Artificial Intelligence (AI), the industry's focal point over the past year. NVIDIA's next-generation Blackwell architecture, aimed at server products, is set to make its debut as the company looks to extend its dominance of the data center market.

The keynote, billed as "The Premier AI Summit for Developers," will be delivered by NVIDIA founder and CEO Jensen Huang. As the event draws near, rumors have surfaced that NVIDIA will unveil the Blackwell-based B100, which combines two chips using TSMC's CoWoS-L packaging technology and pairs them with 8-layer stacked HBM3E memory for a total capacity of 192GB. NVIDIA is then expected to introduce the B200 roughly a year later, moving to a 12-layer stacked HBM solution and extending capacity to 288GB.
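The rumored figures line up with straightforward stack arithmetic. The sketch below is a rough sanity check only; it assumes eight HBM stacks per package and 24Gb (3GB) DRAM dies, the common HBM3E die density, neither of which is confirmed for Blackwell.

```python
# Rough sanity check on the rumored capacities. Assumptions (not
# confirmed for Blackwell): eight HBM stacks per package and 24Gb
# (3 GB) DRAM dies, the common HBM3E die density.

GB_PER_DIE = 3          # one 24Gb HBM3E DRAM die
STACKS_PER_PACKAGE = 8  # assumed stack count per package

def package_capacity_gb(layers_per_stack: int) -> int:
    """Total HBM capacity for a given stack height (e.g. 8-Hi, 12-Hi)."""
    return GB_PER_DIE * layers_per_stack * STACKS_PER_PACKAGE

print(package_capacity_gb(8))   # B100 rumor, 8-layer stacks  -> 192
print(package_capacity_gb(12))  # B200 rumor, 12-layer stacks -> 288
```

Under those assumptions, the 8-layer and 12-layer configurations work out to exactly the rumored 192GB and 288GB totals.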

This means the B100's memory capacity would match that of AMD's Instinct MI300X. Both would use eight memory stacks, but the B100's HBM3E is a step up from the MI300X's HBM3. CoWoS-L, a packaging technology TSMC introduced last year, enables the creation of larger interposers; its mass production was originally targeted for 2025, but that timeline appears to have been pulled forward.

It remains unclear whether next year's B200, in moving to 12-layer stacked HBM, will use HBM3E or HBM4. AMD is also rumored to be planning an upgrade of its Instinct MI300 series from HBM3 to HBM3E to improve performance in certain workloads.