Last year, Samsung announced a new HBM2 memory with an integrated AI processor that provides up to 1.2 TFLOPS of embedded computing power, allowing the memory chip itself to perform some of the operations normally handled by CPUs, GPUs, ASICs, or FPGAs. The new HBM-PIM (processing-in-memory) chip places an AI processor inside each memory module, offloading processing work to the HBM itself and reducing the burden of transferring data between memory and a general-purpose processor.
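To make the data-movement argument concrete, here is a minimal toy model, not Samsung's design: it compares how many bytes must cross the memory bus when a reduction runs on the host versus inside the memory banks. The function names, bank count, and cost model are all illustrative assumptions.

```python
# Toy model of why processing-in-memory (PIM) reduces bus traffic.
# All numbers and names here are illustrative assumptions, not
# Samsung specifications.

def host_reduce_bytes_moved(n_elems: int, elem_size: int = 4) -> int:
    # Conventional path: every element crosses the memory bus to the
    # CPU/GPU before it can be summed.
    return n_elems * elem_size

def pim_reduce_bytes_moved(n_banks: int, elem_size: int = 4) -> int:
    # PIM path: each memory bank reduces its own slice locally, so only
    # the per-bank partial sums cross the bus.
    return n_banks * elem_size

n = 1 << 26   # 64 Mi fp32 values (hypothetical workload size)
banks = 16    # hypothetical number of in-memory compute units

print(host_reduce_bytes_moved(n))      # 268,435,456 bytes over the bus
print(pim_reduce_bytes_moved(banks))   # 64 bytes over the bus
```

The point of the sketch is only that reductions and similar memory-bound operations shrink their bus traffic dramatically when the first stage of computation happens where the data already lives.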
According to Business Korea, Samsung modified AMD's Instinct MI100 compute card, which is based on the CDNA architecture, by adding HBM-PIM chips, and then built a large-scale computing system from 96 of the modified cards. When training the T5 language model, the new system delivered 2.5 times the performance of the original while cutting power consumption to 1/2.67 of the original level, greatly improving the efficiency of running AI algorithms on GPUs.
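Taken together, the two reported figures imply a combined efficiency gain. A quick back-of-envelope check, assuming the performance and power numbers are directly comparable:

```python
# Combine the two figures reported by Business Korea into a single
# performance-per-watt ratio (assumption: the metrics compose linearly).
speedup = 2.5            # reported performance gain vs. the original system
power_ratio = 1 / 2.67   # reported power relative to the original system

perf_per_watt_gain = speedup / power_ratio
print(f"{perf_per_watt_gain:.1f}x")  # ~6.7x performance per watt
```

In other words, the reported numbers amount to roughly a 6.7-fold improvement in performance per watt on this workload.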
The head of the Samsung Artificial Intelligence Research Center said that the use of AI in the semiconductor production process will keep increasing, and that PIM technology appears to be an effective way to accelerate AI workflows, positioning Samsung as the semiconductor company that applies AI better than any other.
“The current method of checking yields has its limits, as testing can only be done every three to six months when wafers are put into and taken out of the fab. We need to move on to a stage where yields can be predicted using AI sensors and inspection data,” Choi said.