Data Center Heavyweight: Microsoft Is the Biggest Buyer of AMD's Instinct MI300X for AI

On December 6, 2023, AMD hosted an event titled “Advancing AI,” during which it unveiled the next generation of Instinct MI300 series compute cards designed for data centers. The Instinct MI300X, a purely GPU-based design utilizing the CDNA 3 architecture, delivers groundbreaking performance for HPC and AI workloads.

According to a report by Seeking Alpha, based on an analysis by Citigroup, Microsoft’s data center division is the largest purchaser of the Instinct MI300X, already deploying it for large language models (LLMs) such as GPT-4. Dr. Lisa Su, AMD’s CEO, has stated that AMD’s sales revenue from AI chips is expected to reach $3.5 billion in 2024, surpassing the earlier forecast of $2 billion.

Although AMD has not disclosed pricing for the Instinct MI300X, insiders have revealed that each card sells for roughly $15,000, making it a more affordable alternative to Nvidia’s offerings. The market price for Nvidia’s H100 PCIe 80GB HBM2E version currently ranges from $30,000 to $40,000, with the more powerful H100 SXM5 80GB HBM3 version commanding even higher prices. Citigroup analysts suggest the Instinct MI300X could be priced at merely a quarter of the latter.
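The price figures above can be put side by side in a quick sketch; all of these are reported or estimated prices from the article, not official list prices, and the implied SXM5 figure is only an inference from Citigroup's "a quarter" claim.

```python
# Rough per-card price comparison using the figures cited in the article.
mi300x = 15_000                    # insider-reported MI300X price (USD)
h100_pcie = (30_000, 40_000)       # reported market range, H100 PCIe 80GB

ratio_low = mi300x / h100_pcie[1]  # vs. top of the PCIe range
ratio_high = mi300x / h100_pcie[0] # vs. bottom of the PCIe range
print(f"MI300X at {ratio_low:.0%}-{ratio_high:.0%} of an H100 PCIe")

# Citigroup's "a quarter" refers to the pricier H100 SXM5, which would
# imply an SXM5 street price in the neighborhood of:
implied_sxm5 = mi300x / 0.25
print(f"Implied H100 SXM5 price: ${implied_sxm5:,.0f}")
```

Even against the cheaper PCIe variant, the MI300X comes in at well under half the price on these reported figures.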

The Instinct MI300X features a chiplet design built on 5nm and 6nm processes, with a transistor count of 153 billion. It uses the fourth-generation Infinity Fabric interconnect across 28 chiplets, including 8 HBM stacks and 4 compute dies. Each compute die carries two GPU chiplets (GCDs) based on the CDNA 3 architecture with 40 compute units each, for 80 compute units per die and a total of 320 compute units and 20,480 stream processors. To improve yields, AMD disables some of these, leaving 304 operational compute units and 19,456 stream processors. Additionally, the HBM3 capacity reaches 192GB, offering 5.3TB/s of memory bandwidth and 896GB/s of Infinity Fabric bandwidth.
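The compute-unit and stream-processor totals above can be cross-checked with a little arithmetic. A minimal sketch, assuming the standard CDNA figure of 64 stream processors per compute unit and inferring from the quoted totals that two compute units per GPU chiplet are fused off for yield:

```python
# Sanity-check the MI300X figures quoted above.
GCDS = 4 * 2          # 4 compute dies x 2 GPU chiplets each
CUS_PER_GCD = 40      # physical compute units per chiplet
SPS_PER_CU = 64       # stream processors per CDNA 3 compute unit

total_cus = GCDS * CUS_PER_GCD         # 320 physical compute units
active_cus = total_cus - 2 * GCDS      # 2 CUs per chiplet disabled -> 304
print(total_cus, total_cus * SPS_PER_CU)    # physical: 320 CUs, 20,480 SPs
print(active_cus, active_cus * SPS_PER_CU)  # active:   304 CUs, 19,456 SPs

# 192GB of HBM3 across 8 stacks works out to 24GB per stack.
print(192 // 8)
```

The numbers line up: 320 physical compute units yield 20,480 stream processors, and the yield-harvested 304 compute units give exactly the 19,456 stream processors AMD quotes.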