Tesla's D1 chip has 50 billion transistors
At Tesla's recent AI Day event, Elon Musk and several engineers presented the progress of Tesla's vision-only FSD system, its neural-network training for Autopilot, the D1 chip, and the Dojo supercomputer. Among these, the D1, an AI training chip developed in-house by Tesla, drew particular interest. The chip will power the supercomputer Tesla is currently building and is designed to deliver higher performance with lower power consumption and a smaller footprint.
Deploying 120 training tiles (containing 3,000 D1 chips) across several cabinets forms an ExaPOD, which Tesla claims will be the world's fastest AI training supercomputer: more than 1 million training nodes, with peak BF16/CFP8 compute of 1.1 ExaFLOPS. Compared with Tesla's current NVIDIA-based supercomputer, at the same cost the ExaPOD is said to offer 4 times the performance and 1.3 times the performance per watt while occupying only one-fifth of the floor space.
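The quoted aggregate figures can be sanity-checked with simple arithmetic. The per-chip numbers below (354 training nodes and 362 BF16/CFP8 TFLOPS per D1) are Tesla's reported specifications from the same AI Day presentation, not figures stated in this article, so treat them as assumptions in this sketch:

```python
# Back-of-the-envelope check of the ExaPOD figures quoted above.
# Per-chip specs are assumptions taken from Tesla's AI Day slides.
D1_CHIPS = 3000          # 120 training tiles x 25 D1 chips per tile
NODES_PER_CHIP = 354     # training nodes per D1 chip (assumed)
TFLOPS_PER_CHIP = 362    # peak BF16/CFP8 TFLOPS per D1 chip (assumed)

total_nodes = D1_CHIPS * NODES_PER_CHIP
peak_exaflops = D1_CHIPS * TFLOPS_PER_CHIP / 1e6  # TFLOPS -> ExaFLOPS

print(total_nodes)               # 1062000 -> "more than 1 million" nodes
print(round(peak_exaflops, 2))   # 1.09    -> quoted as 1.1 ExaFLOPS peak
```

Under these assumptions the math lines up: 3,000 chips yield about 1.06 million training nodes and roughly 1.09 ExaFLOPS of peak BF16/CFP8 compute, matching the rounded figures in the announcement.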