NVIDIA delivers the first DGX H200 to OpenAI

Recently, NVIDIA delivered the world’s first DGX H200 supercomputer to OpenAI. Greg Brockman, President and Co-founder of OpenAI, shared a photo with Jensen Huang on Twitter, writing, “First @NVIDIA DGX H200 in the world, hand-delivered to OpenAI and dedicated by Jensen ‘to advance AI, computing, and humanity.’”

The photograph shows the substantial size of the DGX H200, which is adorned with handwritten slogans and personally signed by Jensen Huang. NVIDIA officially launched the H200 and GH200 product lines at the end of last year, building on the existing Hopper architecture to enhance memory and computing power. The H200 carries 141GB of HBM3e memory running at approximately 6.25 Gbps per pin; its six HBM3e stacks deliver a total bandwidth of 4.8 TB/s per GPU. Compared with the SXM version of the H100, the SXM H200 increases memory capacity by 76% and total bandwidth by 43%. Its raw computing power, however, is not significantly higher than the H100’s, so the gains are confined to applications that benefit from the larger memory configuration.
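As a quick sanity check, the quoted figures are self-consistent. The sketch below (which assumes the standard 1024-bit interface per HBM stack, a detail the article does not state) reproduces the 4.8 TB/s total and the 76%/43% deltas against the 80GB, 3.35 TB/s SXM H100:

```python
# Worked check of the H200 memory figures quoted above.
# Assumption: 1024-bit interface per HBM3e stack (standard for HBM, not in the article).

pin_rate_gbps = 6.25    # per-pin data rate, Gbps
bits_per_stack = 1024   # interface width per HBM3e stack
stacks = 6

total_gbps = pin_rate_gbps * bits_per_stack * stacks  # gigabits per second
total_tbs = total_gbps / 8 / 1000                     # terabytes per second
print(f"{total_tbs:.1f} TB/s")                        # 4.8 TB/s

# Increases versus the SXM H100 (80 GB, 3.35 TB/s):
print(f"capacity: +{141 / 80 - 1:.0%}")               # +76%
print(f"bandwidth: +{4.8 / 3.35 - 1:.0%}")            # +43%
```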

The DGX GH200 supercomputer delivers AI performance at the 1-exaflop level. It contains 256 GH200 Grace Hopper superchips sharing 144TB of memory, and introduces a new NVLink Switch topology to build the cluster. This structure offers higher bandwidth than previous generations: GPU-to-GPU interconnect bandwidth is ten times higher and CPU-to-GPU seven times higher, with interconnect energy efficiency five times that of competing solutions.

The Grace Hopper superchip pairs a Hopper-architecture GPU with an Arm-based Grace CPU, connected via NVLink-C2C. It features 72 Armv9 CPU cores and 16,896 FP32 CUDA cores, along with 96GB of HBM3 and 480GB of LPDDR5X memory. This configuration allows workloads to be distributed between the CPU and GPU wherever each runs best in high-performance computing or AI applications, achieving peak operational efficiency.
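These per-chip figures also line up with the 144TB cluster total cited earlier. A minimal check, assuming NVIDIA's datasheet figure of 480GB LPDDR5X per superchip and its binary terabyte convention:

```python
# Cross-check of the DGX GH200's 144TB shared memory: 256 Grace Hopper
# superchips, each combining 96GB HBM3 with 480GB LPDDR5X (datasheet figure).
per_chip_gb = 96 + 480        # 576 GB of addressable memory per superchip
total_gb = per_chip_gb * 256  # 147,456 GB across the cluster
total_tb = total_gb / 1024    # binary-TB convention
print(total_tb)               # 144.0
```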