NVIDIA's next-gen H100 Hopper GPU supports 6 stacks of high-bandwidth memory
According to VideoCardz, the leaked pictures show that the H100 Hopper GPU supports up to 6 high-bandwidth memory stacks and still uses a monolithic die, though its exact specifications remain uncertain. Perhaps, as rumored, the GH202 will use a multi-chip module (MCM) design and advanced CoWoS packaging. NVIDIA is expected to launch a number of GH100-based products, including an SXM-based H100 card for the DGX mainboard, a DGX H100 station, and even a DGX H100 SuperPod.
Some users have collated the information leaked by hackers with previously circulated details to summarize what is known about the H100 Hopper GPU. The GH100 is said to be manufactured on TSMC's 5nm process and to carry 48MB of L2 cache, an improvement over the 40MB of the Ampere-architecture GA100 and three times that of the AMD Instinct MI250 (16MB). However, compared with the 96MB of AD102, the flagship chip of the Ada (Lovelace) architecture, the GH100's L2 cache is only half as large.