NVIDIA releases H200 and GH200 product lines
NVIDIA has announced its most powerful accelerators to date, the H200 and GH200 product lines, built on the existing Hopper architecture. The new parts pair Hopper's compute engines with substantially larger and faster memory, targeting the next generation of artificial intelligence supercomputers.
The H200 is equipped with 141GB of HBM3e memory running at approximately 6.25 Gbps per pin across six HBM3e stacks, giving each GPU a total bandwidth of 4.8 TB/s. That is a significant step up from the original H100, which offered 80GB of HBM3 and 3.35 TB/s of total bandwidth. Compared with the H100 SXM, the H200 SXM variant delivers 76% more memory capacity and 43% more bandwidth. Raw computational power, however, is essentially unchanged; the H200's gains come from the enhanced memory configuration, which mainly benefits memory-bound workloads.
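As a rough sanity check, the quoted figures hang together arithmetically. The sketch below assumes a 1024-bit interface per HBM stack (the standard width for HBM generations so far, not stated in the announcement) and simply reproduces the bandwidth and the percentage gains cited above:

```python
# Back-of-envelope check of the H200 memory figures quoted above.
# Assumption (not from the announcement): each HBM3e stack exposes a
# 1024-bit interface, as in prior HBM generations.

pin_rate_gbps = 6.25      # per-pin data rate quoted above (Gbps)
stacks = 6                # HBM3e stacks per GPU
bus_width_bits = 1024     # assumed width per stack

bandwidth_tb_s = pin_rate_gbps * bus_width_bits * stacks / 8 / 1000
print(f"H200 bandwidth: {bandwidth_tb_s:.1f} TB/s")          # ~4.8 TB/s

# Relative gains over the H100 SXM figures cited above.
h100_mem_gb, h200_mem_gb = 80, 141
h100_bw_tb_s, h200_bw_tb_s = 3.35, 4.8
print(f"memory:    +{(h200_mem_gb / h100_mem_gb - 1) * 100:.0f}%")   # ~76%
print(f"bandwidth: +{(h200_bw_tb_s / h100_bw_tb_s - 1) * 100:.0f}%") # ~43%
```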
NVIDIA has also introduced the GH200, which pairs the H200 GPU with the Arm-based Grace CPU, connecting the two over NVLink-C2C. Each Grace Hopper Superchip carries 624GB of memory in total: 144GB of HBM3e plus 480GB of LPDDR5X.
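A minimal sketch of the per-superchip memory pool implied by those numbers; the split between GPU-attached and CPU-attached memory is taken directly from the figures above:

```python
# GH200 per-superchip memory, as quoted above.
hbm3e_gb = 144      # GPU-attached HBM3e
lpddr5x_gb = 480    # CPU-attached LPDDR5X

total_gb = hbm3e_gb + lpddr5x_gb
print(f"GH200 total memory: {total_gb} GB")                 # 624 GB
print(f"CPU-attached share: {lpddr5x_gb / total_gb:.0%}")   # ~77%
```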
The Swiss National Supercomputing Centre’s Alps supercomputer is likely to be among the first to deploy Grace Hopper superchips next year, albeit still using the GH100. The first GH200 system in the United States will be the Venado supercomputer at Los Alamos National Laboratory, and the Texas Advanced Computing Center’s (TACC) Vista system will also use the Grace Hopper Superchip, though it is unclear whether that includes the GH200.
The largest supercomputer known to be adopting the GH200 so far is the Jülich Supercomputing Centre’s Jupiter, set to house nearly 24,000 GH200 chips. It is slated to deliver 93 ExaFLOPS of artificial intelligence compute, alongside 1 ExaFLOPS of traditional FP64 performance.
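Those headline numbers are roughly consistent with per-GPU throughput in the class NVIDIA quotes for Hopper SXM parts (on the order of 4 PFLOPS of sparse FP8). The back-of-envelope below is an assumption-laden sketch, not a figure from the announcement:

```python
# Implied per-chip AI throughput for Jupiter, from the figures quoted above.
chips = 24_000        # approximate GH200 count
ai_exaflops = 93      # quoted AI performance

per_chip_pflops = ai_exaflops * 1000 / chips
print(f"implied per-chip AI throughput: {per_chip_pflops:.1f} PFLOPS")  # ~3.9 PFLOPS
```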