Nvidia launches Grace CPU Superchip

At this year’s GTC 2022, NVIDIA further expanded on the original Grace CPU: two Grace dies were packaged together to create the Grace CPU Superchip, with a total of 144 Armv9 CPU cores, 396 MB of on-chip cache, and ECC-protected LPDDR5x memory delivering 1 TB/s of bandwidth. The Grace CPU Superchip is built on Arm’s Neoverse N2 platform and is also the first NVIDIA product to use the latest Armv9 architecture, which means it can support features such as PCIe 5.0, DDR5, HBM3, CCIX 2.0, and CXL 2.0.

According to NVIDIA, the Grace CPU Superchip has a TDP of 500 W and an estimated score of over 740 on SPECrate 2017 Integer, which the company says is twice the performance per watt of today’s leading CPUs. Designed for AI and HPC applications, the Grace CPU Superchip can run all NVIDIA software stacks and platforms, including NVIDIA RTX, HPC, NVIDIA AI, and NVIDIA Omniverse.
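To make the efficiency claim concrete, here is a minimal sketch of the arithmetic implied by the article’s figures. The 740 benchmark score and 500 W TDP come from NVIDIA’s claims; the "implied baseline" is simply derived from the "twice the performance per watt" statement, not a measured figure.

```python
# Performance-per-watt arithmetic from the article's stated figures.
superchip_score = 740   # estimated SPECrate 2017 Integer score (NVIDIA claim)
superchip_tdp_w = 500   # TDP in watts (NVIDIA claim)

perf_per_watt = superchip_score / superchip_tdp_w
print(f"Grace CPU Superchip: {perf_per_watt:.2f} points per watt")

# "Twice the performance per watt of today's CPUs" would imply a baseline of:
implied_baseline = perf_per_watt / 2
print(f"Implied baseline CPU: {implied_baseline:.2f} points per watt")
```

This works out to roughly 1.48 points per watt for the Superchip, implying a baseline of about 0.74 points per watt for the comparison CPUs NVIDIA had in mind.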
The two Grace CPUs in the Superchip are connected via NVIDIA’s latest NVLink-C2C interconnect, which provides 900 GB/s of bandwidth, ensures low latency and coherency across the chip-to-chip link, and allows the connected devices to work on the same memory pool. Compared with PCIe 5.0, NVLink-C2C is 25 times more energy efficient and 90 times more area efficient, meaning it can provide higher transfer efficiency with a smaller footprint and lower power consumption. NVLink-C2C also makes it possible to build integrated products from different types of chiplets, such as CPUs, GPUs, DPUs, NICs, and SoCs, helping NVIDIA continue to advance its three-chip CPU, GPU, and DPU strategy.
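For scale, the 900 GB/s NVLink-C2C figure can be compared against a standard PCIe 5.0 x16 link. The sketch below is an assumption-laden back-of-the-envelope calculation: the NVLink-C2C number is NVIDIA’s stated figure, while the PCIe numbers are derived from the PCIe 5.0 signaling rate of 32 GT/s per lane with 128b/130b encoding.

```python
# Rough bandwidth comparison: NVLink-C2C vs. a PCIe 5.0 x16 link.
nvlink_c2c_gb_s = 900                      # NVIDIA's stated figure

# PCIe 5.0: 32 GT/s per lane, 16 lanes, 128b/130b encoding overhead.
pcie5_x16_per_dir = 32 * 16 / 8 * 128 / 130   # ~63 GB/s per direction
pcie5_x16_bidir = 2 * pcie5_x16_per_dir       # ~126 GB/s both directions

print(f"PCIe 5.0 x16, bidirectional: {pcie5_x16_bidir:.0f} GB/s")
print(f"NVLink-C2C advantage: ~{nvlink_c2c_gb_s / pcie5_x16_bidir:.1f}x")
```

On these assumptions, NVLink-C2C offers on the order of 7x the raw bandwidth of a single PCIe 5.0 x16 link, separate from the 25x energy-efficiency and 90x area-efficiency claims, which concern the interconnect circuitry rather than throughput.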

Nvidia said NVLink-C2C also supports Arm’s AMBA CHI (Coherent Hub Interface) protocol, and the two companies are working closely to further enhance it to support fully coherent and secure accelerators interconnected with other processors. Nvidia also confirmed that it will support the just-announced UCIe specification, so its custom chips will be able to choose either UCIe or NVLink-C2C for interconnection in the future.