Samsung has developed the industry’s first CXL memory module supporting CXL 2.0

Samsung has announced the industry's first 128GB CXL DRAM to support CXL 2.0, building on its first-generation CXL DRAM, which was based on Compute Express Link (CXL) 1.1 and launched in May 2022. The development is expected to accelerate the commercialization of next-generation memory solutions.

The new CXL 2.0 memory module uses a PCIe 5.0 x8 interface and delivers up to 35GB/s of bandwidth. Samsung plans to begin production of CXL 2.0 memory modules later this year and intends to introduce derivative products in a range of capacities to meet the demands of future computing applications.

Samsung has stated that it remains at the forefront of CXL technology and is committed to expanding the CXL ecosystem through collaboration with data center, server, and chipset companies across the industry. The milestone, achieved in collaboration with Intel on its Xeon server platform, is expected to spur the application and development of CXL products throughout the industry.

At the end of 2020, the CXL Consortium released the CXL 2.0 specification, which builds on the physical and electrical interface of the PCIe 5.0 standard. It primarily adds support for memory pooling to maximize memory utilization and provides standardized management of persistent memory, allowing it to run alongside DDR and thereby free up DDR for other uses.
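From the host's point of view, memory assigned to it from a CXL pool or expander is typically exposed by Linux as a CPU-less NUMA node once it has been onlined as system RAM, so ordinary NUMA APIs can target it. The fragment below is a minimal sketch using libnuma; it assumes the CXL memory appears as the highest-numbered NUMA node, which is a common but not guaranteed configuration, so the node number should really be verified (for example with `numactl -H`) rather than assumed.

```c
/* Minimal sketch: allocate a buffer on a CXL-attached NUMA node via libnuma.
 * Assumption: the CXL memory has been onlined as system RAM and is exposed
 * as the highest-numbered (CPU-less) NUMA node; verify with `numactl -H`.
 * Build: gcc cxl_alloc.c -lnuma -o cxl_alloc
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = numa_max_node();       /* assumed: CXL DRAM is the last node */
    size_t len = 256UL * 1024 * 1024;     /* 256 MiB test buffer */

    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);                  /* touch pages so they are actually placed */
    printf("Allocated %zu bytes on NUMA node %d\n", len, cxl_node);

    numa_free(buf, len);
    return 0;
}
```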

As an open interconnect protocol, CXL provides high bandwidth and enables high-speed, efficient interconnection between CPUs and GPUs, FPGAs, or other accelerators while maintaining memory coherence, meeting the requirements of contemporary high-performance heterogeneous computing. Used alongside main DRAM as a next-generation interface, it can expand both memory bandwidth and capacity. This is expected to have a significant impact on the next-generation computing market, where the need for high-speed data processing in core technologies such as artificial intelligence (AI) and machine learning (ML) is growing rapidly.
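One way to picture the bandwidth-and-capacity expansion is to interleave an allocation across conventional DDR and the CXL-attached node, so that memory traffic is spread over both paths. The sketch below uses libnuma's interleaving support; the node numbering (node 0 as local DDR, node 1 as CXL memory) is an assumption for illustration only and will differ between systems.

```c
/* Sketch: interleave pages across local DDR and CXL memory to spread traffic
 * over both memory paths. Node numbers (0 = DDR, 1 = CXL) are assumptions
 * for illustration; check the real topology with `numactl -H`.
 * Build: gcc interleave.c -lnuma -o interleave
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    struct bitmask *nodes = numa_parse_nodestring("0,1"); /* assumed DDR + CXL nodes */
    size_t len = 512UL * 1024 * 1024;                     /* 512 MiB buffer */

    /* Pages are placed round-robin across the listed nodes. */
    void *buf = numa_alloc_interleaved_subset(len, nodes);
    if (!buf) {
        perror("numa_alloc_interleaved_subset");
        return 1;
    }

    memset(buf, 0, len);   /* touch pages so the interleaved placement takes effect */
    printf("Interleaved %zu bytes across nodes 0 and 1\n", len);

    numa_free(buf, len);
    numa_bitmask_free(nodes);
    return 0;
}
```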