NVIDIA released the new HGX A100 system

NVIDIA announced that it is bolstering its HGX AI supercomputing platform by fusing artificial intelligence with high-performance computing, with the goal of bringing supercomputing to more industries.

To accelerate the arrival of the industrial AI and HPC era, NVIDIA has added three key technologies to its HGX platform: the NVIDIA A100 80GB PCIe GPU, NVIDIA NDR 400G InfiniBand networking, and NVIDIA Magnum IO GPUDirect Storage software. Together, the three deliver the performance needed to drive industrial HPC innovation, said Jensen Huang, CEO of NVIDIA:
“The HPC revolution started in academia and is rapidly extending across a broad range of industries. Key dynamics are driving super-exponential, super-Moore’s law advances that have made HPC a useful tool for industries. NVIDIA’s HGX platform gives researchers unparalleled high performance computing acceleration to tackle the toughest problems industries face.”

Image: Nvidia

This also marks the official launch of the A100 80GB compute card with a standard PCIe interface. Apart from the different interface and a lower TDP, the product offers exactly the same feature set as the SXM4 version of the A100 80GB card released last November. The larger memory capacity and higher memory bandwidth allow more data and larger neural networks to be held on a single GPU, minimizing inter-node communication and energy consumption.
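
To illustrate the point about fitting a larger working set on one card, here is a minimal CUDA sketch that checks whether a hypothetical model buffer fits in a single GPU's free memory before falling back to sharding it across nodes. The 60 GB figure is purely illustrative and not from NVIDIA's announcement.

```cpp
// Minimal sketch: test whether a large working set fits on one GPU.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // Query free and total memory on the current device.
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }
    printf("GPU memory: %.1f GB free of %.1f GB total\n",
           free_bytes / 1e9, total_bytes / 1e9);

    size_t model_bytes = 60ull << 30;  // hypothetical 60 GB working set
    if (model_bytes > free_bytes) {
        printf("Does not fit on one GPU; would need to shard across nodes\n");
        return 0;
    }
    void* d_model = nullptr;
    if (cudaMalloc(&d_model, model_bytes) == cudaSuccess) {
        printf("Allocated %.1f GB on a single GPU\n", model_bytes / 1e9);
        cudaFree(d_model);
    }
    return 0;
}
```

Keeping the whole working set resident on one 80GB card is what avoids the inter-node traffic the announcement alludes to.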

In addition, NVIDIA’s Magnum IO GPUDirect Storage has some similarities with Microsoft’s DirectStorage technology.

In the consumer sector, Microsoft’s technology provides fast access to NVMe storage to improve loading efficiency for certain workloads. NVIDIA’s technology appears to target a similar kind of access, creating a direct data path between NVMe storage and GPU memory rather than staging data through the CPU.
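
As a rough sketch of what that direct path looks like in practice, the snippet below reads a file straight into GPU memory using the cuFile API that ships with GPUDirect Storage. Error handling is abbreviated, and the file path and transfer size are illustrative, not taken from NVIDIA's materials.

```cpp
// Sketch: read from NVMe directly into GPU memory via cuFile (GPUDirect Storage).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cuda_runtime.h>
#include <cufile.h>

int main() {
    const char* path = "/mnt/nvme/dataset.bin";  // hypothetical NVMe-backed file
    const size_t nbytes = 1 << 26;               // 64 MB read, for illustration

    cuFileDriverOpen();                          // initialize the GDS driver

    int fd = open(path, O_RDONLY | O_DIRECT);    // O_DIRECT is required for GDS
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    void* d_buf = nullptr;
    cudaMalloc(&d_buf, nbytes);
    cuFileBufRegister(d_buf, nbytes, 0);         // pin the GPU buffer for DMA

    // Data moves NVMe -> GPU memory without a CPU bounce buffer in between.
    ssize_t n = cuFileRead(handle, d_buf, nbytes, /*file_offset=*/0, /*buf_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(d_buf);
    cudaFree(d_buf);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

The appeal in both cases is the same: cutting the CPU copy out of the storage-to-accelerator path so that loading large datasets keeps pace with the compute that consumes them.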