SK Hynix announces mass production of HBM4 in 2026 to prepare for next-gen AI GPUs
High Bandwidth Memory (HBM) products are regarded as one of the pillars of artificial intelligence (AI) computing, and the industry has developed rapidly over the past two years. Driven by advances in artificial intelligence and high-performance computing, the HBM market has given memory manufacturers a new growth engine and propelled significant revenue gains. As a high bandwidth memory partner of NVIDIA, SK Hynix currently holds a leadership position in the HBM market.
According to Business Korea, Vice President Chun-hwan Kim of SK Hynix, in his keynote speech at SEMICON Korea 2024, stated that the generative AI market is expected to grow at an annual rate of 35%, with SK Hynix poised to mass-produce the next-generation HBM4 by 2026.
The evolution of HBM products has progressed through several generations: HBM (first generation), HBM2 (second generation), HBM2E (third generation), HBM3 (fourth generation), and HBM3E (fifth generation), with HBM3E being an extended version of HBM3. HBM4, the sixth-generation product, will replace the 1024-bit interface that has been in place since 2015 with a 2048-bit interface, marking the most significant change since HBM memory technology was introduced.
Currently, a single HBM3E stack has a data transfer rate of 9.6GT/s and a theoretical peak bandwidth of about 1.2TB/s, while a memory subsystem composed of six stacks can reach up to 7.2TB/s. For reasons of reliability and power consumption, actual speeds generally fall short of the theoretical maximum, as seen with the H200's peak bandwidth of 4.8TB/s. According to Micron's earlier statements, the peak bandwidth of a single HBM3E stack is expected to rise to 1.5TB/s. Given the complexity of the wiring required on integrated circuits for the 2048-bit interface, HBM4 is expected to cost more than HBM3 and HBM3E.
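The bandwidth figures above follow from a simple relationship: peak bandwidth equals the per-pin data rate multiplied by the interface width in bits, divided by 8 bits per byte. A minimal sketch of the arithmetic (the function name is ours; the article's 7.2TB/s figure reflects rounding 1.2TB/s per stack before multiplying by six, so the formula yields a slightly higher value):

```python
def peak_bandwidth_tbs(data_rate_gts: float, bus_width_bits: int, stacks: int = 1) -> float:
    """Theoretical peak bandwidth in TB/s.

    data_rate_gts:  per-pin transfer rate in GT/s
    bus_width_bits: interface width per stack (1024 for HBM3E, 2048 planned for HBM4)
    stacks:         number of stacks in the memory subsystem
    """
    gb_per_s = data_rate_gts * bus_width_bits / 8  # bits -> bytes
    return gb_per_s * stacks / 1000                # GB/s -> TB/s

# HBM3E: 9.6 GT/s on a 1024-bit interface -> ~1.23 TB/s per stack
single_stack = peak_bandwidth_tbs(9.6, 1024)

# Six HBM3E stacks -> ~7.4 TB/s theoretical (the article rounds to 7.2 TB/s)
six_stacks = peak_bandwidth_tbs(9.6, 1024, stacks=6)

print(f"{single_stack:.2f} TB/s per stack, {six_stacks:.2f} TB/s for six stacks")
```

The same formula shows why HBM4's 2048-bit interface matters: at an unchanged per-pin data rate, doubling the bus width doubles the peak bandwidth per stack.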
HBM4 will also change the number of stack layers: the initial release will feature 12-layer vertical stacking, and memory manufacturers are expected to introduce 16-layer vertical stacking by 2027. Furthermore, HBM technology will evolve toward greater customization, not only sitting alongside the SoC's main die but eventually being stacked directly on top of it.