HBM4 memory chips may undergo major changes

In recent years, Artificial Intelligence (AI), High-Performance Computing (HPC), and the PC market have been the main drivers of high-performance DRAM development, and demand for HBM-class DRAM is growing rapidly. Last month, SK Hynix unveiled its AI-oriented high-performance DRAM, HBM3E, and began shipping samples to customers for performance validation. Shortly after, Micron announced an industry first: HBM3 Gen2 memory with bandwidth above 1.2TB/s, pin speeds exceeding 9.2Gb/s, and an 8-high stack offering 24GB of capacity, a 50% increase over existing HBM3 products.


HBM (High Bandwidth Memory) stacks multiple DRAM dies vertically to deliver very high bandwidth, making it a high-value, high-speed product. Although it has been on the market for less than a decade, it has made significant strides. According to DigiTimes, the upcoming HBM4 design will bring a fundamental change: each memory stack will adopt a 2048-bit interface.

Since 2015, every HBM stack has used a 1024-bit interface, so doubling the bus width would be the most significant change since the technology's introduction. HBM4 pin speeds have not yet been disclosed, but if they stay at the current level of around 9Gb/s, per-stack bandwidth would jump from HBM3E's roughly 1.15TB/s to about 2.3TB/s, a substantial improvement.
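These bandwidth figures follow directly from bus width times pin speed. A quick sketch of the arithmetic (the 9Gb/s HBM4 pin speed is an assumption carried over from current HBM3E, not an announced figure):

```python
def stack_bandwidth_tbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in TB/s.

    bus_width_bits * pin_speed_gbps gives Gb/s of raw transfer;
    divide by 8 to convert bits to bytes, then by 1000 for GB -> TB.
    """
    return bus_width_bits * pin_speed_gbps / 8 / 1000

# HBM3E today: 1024-bit interface at ~9 Gb/s per pin
print(stack_bandwidth_tbs(1024, 9.0))  # -> 1.152 (TB/s)

# Hypothetical HBM4: 2048-bit interface at the same pin speed
print(stack_bandwidth_tbs(2048, 9.0))  # -> 2.304 (TB/s)
```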

Current products such as NVIDIA's H100 compute card use six HBM3/HBM3E stacks, each with a 1024-bit interface, for a total of 6144 bits. With the per-stack interface width doubled, it remains an open question whether manufacturers will keep using the same number of HBM stacks or reduce the count to cut costs.
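The trade-off described above can be made concrete: a wider per-stack interface lets a designer reach the same total bus width with fewer stacks. The three-stack HBM4 configuration below is purely illustrative, not an announced product:

```python
def total_bus_width(num_stacks: int, bits_per_stack: int) -> int:
    """Total memory interface width, in bits, across all HBM stacks."""
    return num_stacks * bits_per_stack

# H100-style layout: six 1024-bit HBM3/HBM3E stacks
print(total_bus_width(6, 1024))  # -> 6144

# Hypothetical HBM4 layout: three 2048-bit stacks yield the same width
print(total_bus_width(3, 2048))  # -> 6144
```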

Some worry that doubling the bus width could reduce production capacity, since each stack requires thousands of Through-Silicon Vias (TSVs). Nonetheless, recent reports suggest that both Samsung and SK Hynix remain optimistic, believing HBM4 can maintain current output levels.