AMD introduces future chip design

ISSCC 2023 is being held in San Francisco, USA from February 19th to 23rd, 2023. Industry giant AMD made an appearance at the conference, and its keynote detailed how data centers can improve energy efficiency and keep pace with Moore's Law even as the advance of semiconductor process nodes slows down.

According to Planet3DNow.de, AMD’s most notable prediction for server processors and HPC accelerators is multi-layer stacked DRAM.

For some time now, AMD has been building logic products, such as GPUs, with co-packaged HBM stacks. These are multi-chip modules (MCMs), in which the logic die and the HBM stacks sit side by side on a silicon interposer; the interposer is essentially a silicon die that provides fine-pitch wiring between the chips placed on top of it. While this approach saves PCB space compared to standalone DRAM chips or modules, it makes inefficient use of the substrate area.

AMD envisions a near future in which high-density server processors have multiple layers of DRAM stacked directly on top of the logic chips. This stacking approach saves space on PCBs and substrates, allowing chip designers to fit more cores and more DRAM into each socket.

AMD also sees a larger role for in-memory computing, in which simple calculations and data-movement functions are performed directly in memory, eliminating the round trip to and from the processor. The company further discussed the possibility of co-packaging an optical PHY, which would simplify networking infrastructure.
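To make the in-memory computing idea concrete, here is a minimal sketch in C contrasting a conventional reduction, where every element crosses the memory bus into the CPU, with a hypothetical processing-in-memory offload. The `pim_reduce_sum` function and its interface are purely illustrative assumptions, not a real AMD or vendor API; here it only simulates the result that compute-capable memory would return.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Conventional path: every element is read across the memory bus
 * into the CPU just to be added up once. */
static int64_t cpu_reduce_sum(const int32_t *data, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];              /* each access is a DRAM-to-CPU transfer */
    return sum;
}

/* Hypothetical PIM offload: with compute-capable memory, the same
 * reduction would execute inside the DRAM device and only the 8-byte
 * result would cross the bus. This is an illustrative stand-in, not
 * a real API; it simply simulates the in-memory result. */
static int64_t pim_reduce_sum(const int32_t *data, size_t n) {
    return cpu_reduce_sum(data, n);  /* placeholder for in-memory execution */
}

int main(void) {
    size_t n = 1 << 20;
    int32_t *data = malloc(n * sizeof *data);
    if (!data) return 1;
    for (size_t i = 0; i < n; i++)
        data[i] = (int32_t)(i & 0xFF);

    printf("CPU sum: %lld\n", (long long)cpu_reduce_sum(data, n));
    printf("PIM sum: %lld\n", (long long)pim_reduce_sum(data, n));

    free(data);
    return 0;
}
```

The point of the contrast is bandwidth and energy: in the conventional loop the cost scales with the amount of data moved, while a genuine in-memory reduction would move only the final scalar back to the processor.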

AMD said back in 2021 that the future belongs to modular (chiplet) design and closely matched, coordinated packaging. As through-silicon via (TSV) density increases, AMD will focus on more complex 3D stacking technologies, such as stacking cores on cores, IP blocks on IP blocks, and even macro blocks in 3D. Eventually, TSVs will be spaced so closely that splitting and folding modules, and even splitting individual circuits across layers, will become possible, revolutionizing how processors are perceived today.