CXL Consortium Releases Compute Express Link 3.0 Specification

The CXL Consortium announced the release of the Compute Express Link (CXL) 3.0 specification, which expands on previous generations of the technology, improves scalability, and optimizes system-level data flows through advanced fabric capabilities, efficient peer-to-peer communication, and fine-grained resource sharing across multiple compute domains.

As an open interconnect protocol, CXL enables high-speed, efficient connections between the CPU and GPUs, FPGAs, or other accelerators, meeting the requirements of today’s high-performance heterogeneous computing while providing higher bandwidth and better memory coherency.

The CXL 3.0 specification introduces fabric capabilities and management, improved memory pooling, enhanced coherency, and peer-to-peer communication. The data transfer rate doubles to 64 GT/s with no added latency over CXL 2.0, and the specification remains backward compatible with CXL 2.0, CXL 1.1, and CXL 1.0.
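To put the 64 GT/s figure in perspective, the short Python sketch below works out the raw per-direction bandwidth of a link at that rate. The x16 lane width is an assumption for illustration, and real-world throughput is somewhat lower once FLIT and protocol overhead are accounted for.

```python
# Back-of-the-envelope throughput for a CXL 3.0 link at 64 GT/s.
# The lane count and the raw-rate-only simplification are assumptions
# for illustration; actual links lose some bandwidth to protocol overhead.

GT_PER_SEC = 64          # CXL 3.0 raw signaling rate per lane (gigatransfers/s)
BITS_PER_TRANSFER = 1    # one bit per lane per transfer
LANES = 16               # assumed x16 link width

raw_gbps = GT_PER_SEC * BITS_PER_TRANSFER * LANES   # gigabits/s, one direction
raw_gBps = raw_gbps / 8                             # gigabytes/s, one direction

print(f"x{LANES} link at {GT_PER_SEC} GT/s ~ {raw_gBps:.0f} GB/s per direction (raw)")
# -> x16 link at 64 GT/s ~ 128 GB/s per direction (raw)
```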

The CXL Consortium was launched in 2019, founded by Intel together with Alibaba, Dell EMC, Facebook, Google, HPE, Huawei, and Microsoft, with AMD and Arm joining later. Late last year, the Gen-Z Consortium confirmed the transfer of all of its technical specifications and assets to the CXL Consortium, leaving the CXL protocol to move forward as the sole industry standard.

At its inception, the CXL Consortium released the CXL 1.0 specification, followed by an improved CXL 1.1. The CXL 2.0 specification, announced in late 2020, builds on the physical and electrical interfaces of the PCIe 5.0 standard and mainly adds support for memory pooling to maximize memory utilization, along with standardized management of persistent memory that can operate alongside DDR, freeing DDR capacity for other uses.
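As a rough illustration of the memory-pooling idea, the sketch below shows several hosts borrowing and returning capacity from one shared pool instead of each being provisioned with worst-case local DRAM. This is a conceptual model only, not the CXL interface; the class and method names are hypothetical.

```python
# Conceptual illustration of memory pooling (not the CXL API): several hosts
# draw capacity from one shared pool, so unused capacity on one host can be
# reassigned to another. Class and method names are hypothetical.

class MemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations = {}          # host name -> GB currently assigned

    def allocate(self, host: str, size_gb: int) -> bool:
        """Assign size_gb from the pool to a host if capacity remains."""
        used = sum(self.allocations.values())
        if used + size_gb > self.capacity_gb:
            return False
        self.allocations[host] = self.allocations.get(host, 0) + size_gb
        return True

    def release(self, host: str) -> None:
        """Return a host's capacity to the pool for other hosts to use."""
        self.allocations.pop(host, None)


pool = MemoryPool(capacity_gb=1024)
pool.allocate("host-a", 256)           # host A borrows 256 GB for a large job
pool.allocate("host-b", 512)           # host B borrows 512 GB
pool.release("host-a")                 # A's job ends; its capacity is reusable
print(pool.allocations)                # {'host-b': 512}
```

The point of the model is utilization: capacity released by one host immediately becomes available to others, which is what pooling over a CXL fabric aims to achieve at the hardware level.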