NVIDIA details how it uses AI-assisted development to design GPUs

At GTC 2022, NVIDIA’s chief scientist and senior vice president of research, Bill Dally, shared details of the company’s research and development work, which applies machine learning (ML) and artificial intelligence (AI) techniques to develop, improve, and accelerate GPU designs. Over the past few years NVIDIA has invested heavily in AI and ML, and its GPUs have become the first choice for many data centers and HPC systems.

Currently, NVIDIA designs its GPUs primarily with state-of-the-art EDA (Electronic Design Automation) tools. It also uses an AI model called PrefixRL, which applies deep reinforcement learning to optimize parallel prefix circuits (such as adders), allowing NVIDIA to design circuits with a smaller area while delivering similar or better performance.
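
For context, a parallel prefix adder computes its carries by combining per-bit generate/propagate pairs with an associative operator. The Python sketch below performs that combination serially for clarity; real hardware arranges the same operator into a parallel tree, and it is the shape of that tree that a tool like PrefixRL explores. This is purely an illustration of the underlying arithmetic, not NVIDIA's code.

```python
def prefix_combine(hi, lo):
    """Associative operator on (generate, propagate) pairs; `hi` covers the more
    significant bit span, `lo` the less significant one."""
    g_hi, p_hi = hi
    g_lo, p_lo = lo
    return (g_hi | (p_hi & g_lo), p_hi & p_lo)

def prefix_adder(a: int, b: int, width: int = 64) -> int:
    """Add two width-bit integers via a prefix scan of (g, p) pairs.
    The scan here is serial; a hardware design evaluates it as a parallel tree."""
    mask = (1 << width) - 1
    a, b = a & mask, b & mask
    g = [(a >> i) & (b >> i) & 1 for i in range(width)]    # generate bits
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]  # propagate bits
    carries = [0] * (width + 1)   # carries[i] is the carry into bit i
    acc = None
    for i in range(width):
        acc = (g[i], p[i]) if acc is None else prefix_combine((g[i], p[i]), acc)
        carries[i + 1] = acc[0]   # group generate over [0..i] = carry out of bit i
    total = 0
    for i in range(width):
        total |= (p[i] ^ carries[i]) << i
    return total

assert prefix_adder(12345, 67890) == 12345 + 67890
assert prefix_adder(2**63, 2**63) == 0   # wraps around at 64 bits
```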

According to NVIDIA, the latest Hopper architecture GPU contains nearly 13,000 instances of circuits designed entirely by AI. In a comparison chart shown by NVIDIA, a 64-bit adder circuit designed with PrefixRL is 25% smaller than one produced by a traditional EDA tool, while remaining functionally equivalent and just as fast.
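
NVIDIA has not published PrefixRL's implementation, but the setup it describes, an agent that modifies a prefix-circuit structure and is rewarded for lower area at comparable speed, can be framed roughly as below. The PrefixCircuitEnv class, its placeholder cost model, the reward weighting, and the random policy are all illustrative assumptions, not the actual system.

```python
import random

class PrefixCircuitEnv:
    """Toy stand-in: the state is a set of (lo, hi) prefix nodes over `width` bits."""

    def __init__(self, width: int = 64):
        self.width = width
        self.reset()

    def reset(self):
        # Required output nodes (0, i): each carry over bits [0..i] must be produced.
        self.required = {(0, i) for i in range(1, self.width)}
        self.nodes = set(self.required)
        return frozenset(self.nodes)

    def _cost(self):
        # Placeholder cost model, not a synthesis estimate: "area" tracks node count
        # and the "delay" term is simply the widest span present in the graph.
        area = len(self.nodes)
        delay = max(hi - lo + 1 for lo, hi in self.nodes)
        return area, delay

    def step(self, action):
        op, span = action                      # op: "add" or "remove"; span: (lo, hi)
        if op == "add":
            self.nodes.add(span)
        elif op == "remove" and span not in self.required:
            self.nodes.discard(span)           # never drop the required outputs
        area, delay = self._cost()
        reward = -(area + 4 * delay)           # assumed weighting of area vs. delay
        return frozenset(self.nodes), reward

# Usage with a random policy standing in for the learned agent.
env = PrefixCircuitEnv(width=8)
state = env.reset()
for _ in range(5):
    lo, hi = sorted(random.sample(range(8), 2))
    state, reward = env.step((random.choice(["add", "remove"]), (lo, hi)))
    print(len(state), reward)
```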

Training a model like PrefixRL is computationally demanding: physical simulation of the circuits required 256 CPUs for each GPU, and training the 64-bit case took over 32,000 GPU hours. To handle this, NVIDIA developed Raptor, an in-house distributed reinforcement learning platform that takes particular advantage of NVIDIA hardware for this kind of workload.
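
Raptor itself is internal to NVIDIA and its design is not public; the snippet below is only a generic actor/learner sketch of the kind of distributed reinforcement-learning pattern such a platform coordinates, with CPU worker processes producing experience (standing in for circuit simulation) and a learner process consuming it in batches. All names and numbers are illustrative.

```python
import multiprocessing as mp
import random
import time

def actor(actor_id: int, queue: mp.Queue, steps: int) -> None:
    """CPU-side worker: generates fake transitions in place of circuit simulation."""
    for step in range(steps):
        transition = (actor_id, step, random.random())
        queue.put(transition)
        time.sleep(0.01)

def learner(queue: mp.Queue, total: int) -> None:
    """Learner process: drains the queue and 'updates' on fixed-size batches."""
    batch, seen = [], 0
    while seen < total:
        batch.append(queue.get())
        seen += 1
        if len(batch) == 8:
            # Stand-in for a GPU gradient update on a batch of experience.
            print(f"update on batch of {len(batch)} transitions")
            batch.clear()

if __name__ == "__main__":
    q = mp.Queue()
    actors = [mp.Process(target=actor, args=(i, q, 16)) for i in range(4)]
    learn = mp.Process(target=learner, args=(q, 4 * 16))
    for proc in actors + [learn]:
        proc.start()
    for proc in actors + [learn]:
        proc.join()
```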

NVIDIA says this is, to its knowledge, the first method to use a deep reinforcement learning agent to design arithmetic circuits, and it hopes the approach can serve as a blueprint for applying AI to real-world circuit design problems.