AMD unveils the new Instinct MI300 series compute cards: the Instinct MI300A and the Instinct MI300X

On December 6, 2023, at 10:00 AM Pacific Standard Time, AMD hosted an event titled “Advancing AI,” at which it unveiled the new Instinct MI300 series of compute cards for data centers. AMD said the new products deliver industry-leading memory bandwidth for generative AI, with superior performance in training and inference for large language models (LLMs). The series combines the latest CDNA 3 and Zen 4 architectures, offering groundbreaking performance for HPC and AI workloads.

The inaugural Instinct MI300 series introduced two models: the Instinct MI300A (CPU+GPU) and the Instinct MI300X (GPU only).

The Instinct MI300X features a chiplet design, mixing 5nm and 6nm processes for a total of 153 billion transistors. It employs the fourth-generation Infinity Fabric interconnect and comprises 28 chiplets, including eight HBM stacks and four compute dies. Each compute die carries two GCDs based on the CDNA 3 architecture, each with 40 compute units, for 80 compute units per die and 320 in total, equivalent to 20,480 stream processors. To improve yields, AMD disables some of those compute units, leaving 304 active compute units and 19,456 stream processors. The HBM3 capacity reaches 192GB, delivering 5.3TB/s of memory bandwidth alongside 896GB/s of Infinity Fabric bandwidth.
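The compute-unit totals above are straightforward chiplet arithmetic. As a minimal sanity check, the sketch below assumes 64 stream processors per CDNA 3 compute unit and 38 of 40 CUs enabled per GCD; neither figure is stated in the article, but both follow from the published totals (304 CUs across 8 GCDs, 19,456 stream processors).

```python
# Sanity check of the MI300X figures quoted above.
# Assumptions not in the article: 64 stream processors per CDNA 3
# compute unit, and 38 of 40 CUs enabled per GCD (inferred from 304/8).
SP_PER_CU = 64

gcds = 4 * 2                     # 4 compute dies x 2 GCDs each
full_cus = gcds * 40             # 40 CUs per GCD on a full die
assert full_cus == 320
assert full_cus * SP_PER_CU == 20_480

enabled_cus = gcds * 38          # some CUs disabled for yield
assert enabled_cus == 304
assert enabled_cus * SP_PER_CU == 19_456

hbm_per_stack_gb = 192 / 8       # 192GB spread over 8 HBM3 stacks
print(full_cus, enabled_cus, hbm_per_stack_gb)
```

Running it confirms the article's 320/304 CU and 20,480/19,456 stream-processor counts are internally consistent.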

At the event, AMD also announced that the Instinct MI300A has entered mass production. This model, an APU design, pairs 24 CPU cores based on the Zen 4 architecture with one fewer compute die than the MI300X, reducing the count to 228 compute units, or 14,592 stream processors.
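The MI300A numbers follow the same per-CU arithmetic; a short check, again assuming 64 stream processors per CDNA 3 compute unit and 38 enabled CUs per GCD (assumptions inferred from the totals, not stated in the article):

```python
# MI300A: one fewer compute die than the MI300X (3 instead of 4).
# Assumptions as before: 64 SPs per CU, 38 enabled CUs per GCD.
SP_PER_CU = 64

gcds = 3 * 2                     # 3 compute dies x 2 GCDs each
enabled_cus = gcds * 38
assert enabled_cus == 228
assert enabled_cus * SP_PER_CU == 14_592
print(enabled_cus, enabled_cus * SP_PER_CU)
```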

It is reported that expected shipments of the Instinct MI300 series next year are between 300,000 and 400,000 units, with Google and Microsoft as the largest customers.