AMD previously updated its RDNA/CDNA architecture roadmap, confirming that the Instinct MI300 series will come in multiple configurations, one of which is an APU that packages Zen 4 CPU chiplets, CDNA 3 GPU chiplets, and HBM memory stacks together. According to HPCwire, Lawrence Livermore National Laboratory’s (LLNL) Terri Quinn revealed in a presentation to the 79th HPC User Forum at Oak Ridge National Laboratory (ORNL) that the ExaFLOP-class El Capitan supercomputer, slated for installation at LLNL in late 2023, will use the Instinct MI300 APU.
The El Capitan supercomputer will consist of multiple nodes, each carrying several Instinct MI300 APUs. It was previously rumored that the Instinct MI300 APU will use the new SH5 socket (LGA 6096). El Capitan’s peak FP64 compute performance is reported to reach 2 ExaFLOPs, roughly 10 times that of Sierra, which has been in operation since 2018 and pairs IBM’s Power9 CPUs with NVIDIA’s Volta-architecture GPUs.
It is understood that the El Capitan supercomputer will be built by HPE, using the Slingshot-11 interconnect to link its HPE Cray EX racks. As the first supercomputer built around an APU, it will pave the way for AMD’s future Exascale APU designs. The Instinct MI300 APU will be manufactured on a 5nm process, equipped with next-generation Infinity Cache, and use the fourth-generation Infinity architecture, which supports the CXL 3.0 ecosystem.