Nvidia to launch a liquid-cooled version of the A100 computing card
At GTC 2020, NVIDIA launched the A100 computing card based on the new-generation Ampere architecture; its GA100 core, with an area of up to 826mm², is manufactured on TSMC's 7nm process. The A100 comes in two forms: the SXM4 version and the general-purpose PCIe version, the latter limited to interconnecting two GPUs through an NVLink bridge.
According to VideoCardz, NVIDIA is preparing to launch a liquid-cooled version of the A100 computing card, most likely based on the A100 PCIe card released in June last year, which is equipped with 80GB of HBM2e memory. Converting a data center GPU to liquid cooling is nothing unusual: third-party liquid-cooling kits for the A100 already exist and are widely used, but this time the design should be NVIDIA's own official one. Although NVIDIA has already announced the new-generation H100 computing card based on the Hopper architecture, market demand for the A100 remains substantial.
As can be seen from the picture, the liquid-cooled A100 is a single-slot card, with the liquid-pipe connectors at the rear of the card next to the 8-pin external power connector. The original passively cooled version is a dual-slot design; its cooling performance is not ideal in some workstations, and retrofitting a liquid-cooling module yourself carries a certain risk and adds an extra step to deployment. An official liquid-cooled version from NVIDIA therefore makes the purchase easier for such users.
Compared with the 40GB HBM2 version of the A100, the 80GB HBM2e version's memory data rate has been raised from 2.4Gbps to 3.2Gbps per pin, lifting memory bandwidth from 1.6TB/s to 2TB/s. Other specifications remain largely unchanged, including 19.5 TFLOPS of single-precision and 9.7 TFLOPS of double-precision performance. According to NVIDIA, the larger-memory A100 is ideal for a variety of data-hungry applications, such as AI training, weather forecasting, and quantum chemistry, and the company calls it the world's fastest data center GPU.
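The bandwidth figures above follow directly from the per-pin data rate and the memory bus width. As a rough sanity check, the sketch below assumes the A100's 5120-bit bus (five 1024-bit HBM stacks), a spec not stated in the article:

```python
# Sketch: HBM bandwidth = per-pin data rate x bus width / 8 bits per byte.
# The 5120-bit bus width is an assumption based on the A100's five
# 1024-bit HBM stacks; it is not given in the article above.

def hbm_bandwidth_tbps(data_rate_gbps: float, bus_width_bits: int = 5120) -> float:
    """Return memory bandwidth in TB/s for a given per-pin data rate in Gbps."""
    return data_rate_gbps * bus_width_bits / 8 / 1000

print(hbm_bandwidth_tbps(2.4))  # 40GB HBM2 version: 1.536, i.e. ~1.6 TB/s
print(hbm_bandwidth_tbps(3.2))  # 80GB HBM2e version: 2.048, i.e. ~2 TB/s
```

The small gap between the computed 1.536 TB/s and the quoted "1.6TB/s" reflects the article's rounding of the 40GB model's figure.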