Samsung is actively promoting the 5th-gen HBM3E

Driven by surging demand for artificial intelligence (AI) and the market's appetite for more powerful solutions, NVIDIA has moved the launch of its next-generation Blackwell-architecture GB100 GPU forward from the fourth quarter of 2024 to the end of the second quarter. At the same time, NVIDIA has partnered with SK Hynix to use the latter's ultra-high-performance HBM3E DRAM, designed for AI workloads, in its new B100 compute card.

According to BusinessKorea, Samsung, another giant in the memory market, has accelerated the development and sales timeline of its 5th-gen HBM3E, named "Shinebolt," aiming to follow closely behind SK Hynix. Preliminary testing suggests that Shinebolt will deliver a data transfer rate of about 1.228 TB/s, edging out SK Hynix's HBM3E, which peaks at 1.15 TB/s. Shinebolt also adopts a 12-layer vertical stacking design, raising the capacity of a single HBM3E package to 36 GB, compared with 24 GB for the earlier 8-layer design. Samsung's pace in HBM development and production still trails SK Hynix, and Micron has already shipped test samples of its comparable HBM3 Gen2 to customers; Samsung Electronics is nonetheless making strategic moves to reclaim its leading position in advanced memory chip production.
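The figures above are internally consistent with the standard 1024-bit HBM interface and 3 GB (24 Gb) DRAM dies. As a rough sanity check (a sketch, not vendor-confirmed specs; the per-pin rates below are inferred from the quoted aggregate bandwidths):

```python
# Sanity-check the bandwidth and capacity figures quoted in the article.
# Assumptions (not from the article): a 1024-bit HBM interface per stack,
# and 3 GB (24 Gb) DRAM dies; per-pin rates are back-calculated from the
# stated aggregate bandwidths rather than taken from spec sheets.

def hbm_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Aggregate bandwidth of one HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

def stack_capacity_gb(layers: int, die_capacity_gb: int = 3) -> int:
    """Stack capacity given the number of stacked DRAM dies."""
    return layers * die_capacity_gb

# ~1.228 TB/s quoted for Shinebolt corresponds to ~9.6 Gbps per pin
print(hbm_bandwidth_gbs(9.6))   # 1228.8 GB/s
# ~1.15 TB/s quoted for SK Hynix's HBM3E corresponds to ~9.0 Gbps per pin
print(hbm_bandwidth_gbs(9.0))   # 1152.0 GB/s
# 12-layer vs 8-layer stacks of 3 GB dies
print(stack_capacity_gb(12), stack_capacity_gb(8))  # 36 24
```

This matches the article's numbers: 12 layers of 3 GB dies give 36 GB per package versus 24 GB for 8 layers, and the ~7% bandwidth edge over SK Hynix comes down to per-pin signaling speed on the same-width interface.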

High Bandwidth Memory (HBM) is a high-value, high-performance memory built by vertically stacking and interconnecting multiple DRAM dies, delivering markedly higher data rates than conventional DRAM. The HBM lineage runs HBM (first generation), HBM2 (second generation), HBM2E (third generation), HBM3 (fourth generation), and HBM3E (fifth generation), with HBM3E being an extended version of HBM3. HBM is widely regarded as the defining DRAM of the AI era.