Micron has provided HBM3 Gen2 samples to customers

In July of this year, Micron Technology unveiled its HBM3 Gen2 high-bandwidth memory, an industry first, delivering more than 1.2TB/s of bandwidth, per-pin speeds exceeding 9.2Gb/s, and an 8-high vertical stack with 24GB of capacity, roughly a 50% improvement over currently shipping HBM3 solutions.
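As a rough sanity check, the quoted per-stack bandwidth follows from the pin speed if one assumes the standard 1024-bit-wide HBM interface (the article itself does not state the bus width); a minimal sketch:

```python
# Back-of-the-envelope check of the quoted HBM3 Gen2 bandwidth.
# Assumption: the standard 1024-bit HBM interface per stack; the exact
# effective pin rate behind the ">1.2 TB/s" figure is not given here.
BUS_WIDTH_BITS = 1024      # data pins per HBM stack (assumed standard width)
PIN_SPEED_GBPS = 9.2       # quoted per-pin data rate, in Gb/s

bandwidth_gb_per_s = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8   # bits -> bytes
print(f"~{bandwidth_gb_per_s / 1000:.2f} TB/s per stack")
# ~1.18 TB/s at exactly 9.2 Gb/s; pin speeds slightly above that
# push the per-stack figure past 1.2 TB/s.
```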

Reuters reports that, on a recent earnings call, Micron disclosed that it has begun shipping HBM3 Gen2 samples to select customers. The part's higher performance and lower power consumption have exceeded expectations; the figures were strong enough that some customers were skeptical before testing the samples themselves, and were surprised by the size of the gap once they compared Micron's samples against competitors' parts.

HBM3 Gen2

Micron President and CEO Sanjay Mehrotra said he expects the booming artificial intelligence (AI) market to bring in billions of dollars in revenue, potentially helping Micron out of its current financial slump. With NVIDIA as a close partner, Micron also expects HBM3 Gen2 to debut on NVIDIA's upcoming GPUs for AI and high-performance computing (HPC) in 2024.

According to Micron, HBM3 Gen2 delivers 2.5 times the performance per watt of previous generations, setting new benchmarks for performance, capacity, and power efficiency in AI data centers. These gains can shorten training times for large language models such as GPT-4 while offering an excellent total cost of ownership (TCO). Micron's HBM3 Gen2 solution is built on its 1β (1-beta) process node, which allows 24Gb DRAM dies to be stacked into an 8-high cube within industry-standard package dimensions.

Furthermore, Micron is preparing a 12-high vertical stack with 36GB of capacity per stack. Compared with existing competing solutions, it offers 50% more capacity for a given stack height.
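For reference, both capacity figures line up with the per-die density implied above, here assumed to be 24Gb per 1β DRAM die; a minimal sketch of the arithmetic:

```python
# Stack capacity from per-die density and stack height.
# Assumption: 24Gb (3GB) per 1β DRAM die, as described for HBM3 Gen2.
DIE_DENSITY_GBIT = 24

for stack_height in (8, 12):
    capacity_gb = stack_height * DIE_DENSITY_GBIT / 8   # gigabits -> gigabytes
    print(f"{stack_height}-high stack: {capacity_gb:.0f} GB")
# 8-high stack: 24 GB
# 12-high stack: 36 GB
```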