It was previously reported that once TSMC completed research and development and initial trial production at the 3nm process node, production capacity would ramp significantly in the second half of the third quarter of this year, with the N3 process officially entering mass production. In the end, however, it was delayed for various reasons: Intel is rumored to have pushed back its orders, and Apple is reportedly not satisfied with the existing N3 process.
According to TSMC’s plan, N3, N3E, N3P, N3X, and other processes will launch successively from 2022 to 2025, with an optimized N3S process to follow. Together they cover the requirements of different platforms such as smartphones, the Internet of Things, automotive chips, and HPC. TSMC still uses FinFET transistors at the 3nm node, but FINFLEX technology expands the process’s range of performance, power, and density, further improving PPA by letting chip designers use the same design toolset to select the best option for each key functional block on the same chip.
Every new process node is expected to improve performance, reduce power consumption, and increase transistor density. While logic circuits still see solid gains at new nodes, SRAM scaling has lagged behind, and at TSMC’s latest 3nm node it has all but stagnated. According to a report from WikiChip, the rate at which TSMC shrinks SRAM bit cells has slowed considerably.
TSMC has stated that, compared with N5, the N3 process is expected to deliver a 10% to 15% performance improvement (at the same power and complexity) or 25% to 30% lower power consumption (at the same frequency and transistor count), while increasing logic density by about 1.6 times. N3E is TSMC’s second-generation 3nm process: compared with N5, it improves performance by about 18% or reduces power consumption by 34%, and increases logic density by about 1.7 times.
Recently, in a paper published at the IEDM 2022 conference, TSMC disclosed that its SRAM bit cell measures 0.0199 μm² on N3 versus 0.021 μm² on N5, a reduction of only about 5%. N3E is even worse: its bit cell stays at essentially 0.021 μm², meaning virtually no reduction compared with N5. For comparison, Intel’s SRAM bit cell is 0.0312 μm² on the Intel 7 process and 0.024 μm² on the upcoming Intel 4 process. TSMC’s density-optimized N3S process, expected in 2024, may do better, but a real breakthrough will have to wait for the future 2nm node, which is still a few years away.
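The scaling stagnation is easy to see from the reported bit-cell sizes themselves. A quick sketch of the arithmetic (using only the figures quoted above from the IEDM 2022 paper):

```python
# Bit-cell sizes reported in TSMC's IEDM 2022 paper (µm² per SRAM bit cell)
n5 = 0.021
n3 = 0.0199
n3e = 0.021

def shrink_pct(new, old):
    """Percentage area reduction of the new cell versus the old one."""
    return (1 - new / old) * 100

print(f"N3 vs N5:  {shrink_pct(n3, n5):.1f}% smaller")   # ≈ 5.2%
print(f"N3E vs N5: {shrink_pct(n3e, n5):.1f}% smaller")  # 0.0% — no shrink at all
```

By contrast, logic density improves by roughly 1.6x on N3, so logic area shrinks to about 62% of its N5 size while the SRAM cell barely moves.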
Modern CPUs, GPUs, and SoCs all rely on SRAM for their various caches, and large-capacity caches have become the trend, especially for artificial intelligence (AI) and machine learning (ML) workloads. Demand for cache will only grow, yet moving to the 3nm node will not shrink the chip area occupied by SRAM, while the process costs more than the existing 5nm node. In other words, high-performance chips get both larger and more expensive. This also helps explain why TSMC introduced FINFLEX technology at the 3nm node to mitigate the SRAM problem.
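To get a feel for why this matters for die size, here is a back-of-envelope estimate of the raw bit-cell array area for a hypothetical 64 MiB cache (a capacity chosen purely for illustration). It counts only the bit cells; real SRAM macros add substantial overhead for sense amplifiers, decoders, and redundancy:

```python
# Raw bit-cell array area for a hypothetical cache, ignoring peripheral circuitry.
BITCELL_UM2 = 0.021          # N5/N3E bit-cell size from the IEDM 2022 paper
CACHE_MIB = 64               # hypothetical cache capacity, for illustration only

bits = CACHE_MIB * 1024 * 1024 * 8       # one SRAM bit cell stores one bit
area_mm2 = bits * BITCELL_UM2 / 1e6      # 1 mm² = 1e6 µm²
print(f"{CACHE_MIB} MiB at {BITCELL_UM2} µm²/bit ≈ {area_mm2:.1f} mm²")  # ≈ 11.3 mm²
```

Since the bit cell is the same size on N3E as on N5, those square millimeters cost more on the newer, pricier node without getting any smaller.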
A more realistic solution is a chiplet design that splits the large-capacity cache off onto separate dies built on a cheaper process. This is one reason AMD has focused on 3D V-Cache technology over the past two years. In its recently released RDNA 3 architecture GPUs, AMD uses different processes for the GCD and the MCDs: the N6 process used by the latter is much cheaper than the N5 process used by the former. Another approach is to use alternative memory technologies, such as eDRAM or FeRAM, for caches.
It is foreseeable that over the next few years, the slowing shrinkage of SRAM bit cells will hold back chips built on new process nodes, and this will be a main challenge facing designers.