Nvidia warms up for GTC 2022

GTC 2022 will be held from March 21st to 24th. Jensen Huang, CEO of NVIDIA, will deliver a keynote on March 22nd discussing the next generation of AI technology alongside other AI leaders. Attendees from industry, research, and academia will cover accelerated computing, deep learning, data science, quantum computing, and data center, cloud, and edge computing. Nvidia has also published a preview to build anticipation for GTC 2022.

Nvidia’s next-generation GPU

At this GTC conference, Nvidia is expected to introduce its new Hopper architecture. It is rumored to be Nvidia’s first GPU based on a multi-chip module (MCM) design, manufactured on TSMC’s 5nm process with CoWoS advanced packaging, supporting HBM2e memory and other connectivity features, and targeting data centers.

It is rumored that the GH100, based on the Hopper architecture, will reach 140 billion transistors, roughly 2.6 times the current Ampere-based GA100 (54.2 billion) and about 2.4 times AMD’s CDNA 2-based Instinct MI200 series (58 billion). The GH100 die is said to be close to 900mm², smaller than the 1,000mm² rumored earlier but larger than the GA100 (826mm²) and the Instinct MI200 series (about 790mm²). The GH100 is also rumored to carry a total of 288 SMs, which could deliver three times the performance of the current A100 compute card.
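As a quick sanity check on those ratios, here is a minimal Python sketch using only the rumored transistor counts quoted above (illustrative values, not confirmed specifications):

```python
# Transistor counts cited in this article (the GH100 figure is a rumor).
ga100_transistors = 54.2e9   # Nvidia GA100 (Ampere)
mi200_transistors = 58e9     # AMD Instinct MI200 series (CDNA 2)
gh100_transistors = 140e9    # Nvidia GH100 (Hopper, rumored)

# Ratio of the rumored GH100 count to each existing chip.
print(f"GH100 vs GA100: {gh100_transistors / ga100_transistors:.1f}x")  # ~2.6x
print(f"GH100 vs MI200: {gh100_transistors / mi200_transistors:.1f}x")  # ~2.4x
```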

In addition, Nvidia may have a secret weapon: a COPA (Composable On-Package Architecture) design based on Hopper. NVIDIA researchers previously published a paper detailing how the company is exploring multi-chip designs for future products, and it described such a scheme. According to the paper, Nvidia could build two designs on the same architecture, one for the high-performance computing (HPC) segment and one for deep learning (DL): the HPC variant would use the standard configuration, while the DL variant would pair the GPU with a huge independent cache.

NVIDIA CEO Jensen Huang’s keynote takes place at 8:00 am Pacific time on March 22, and it can be watched without registering for the conference.