Intel publishes performance comparison between Sapphire Rapids and EPYC Genoa

Following the formal launch of the fourth-generation Intel Xeon Scalable processors in January 2023, Intel has made substantial advancements in performance through its industry-leading accelerator engine, bolstering performance-per-watt in key workload areas such as AI, data analytics, and high-performance computing (HPC). The rapid rollout, broad adoption by global customers, and superior performance demonstrated by the fourth-generation Xeon across diverse key workloads in numerous commercial applications have garnered considerable attention from the industry.

After several weeks of stringent and comprehensive comparative testing, Intel has unveiled a more exhaustive performance comparison between the fourth-generation Xeon Scalable processors and AMD’s EPYC Genoa. The most frequently deployed solutions on the market are built around mid-core-count processors, where per-core performance, power consumption, and throughput are the crucial indicators. Accordingly, Intel compared its 32-core fourth-generation Xeon Platinum 8462Y+ processor against AMD’s mainstream 32-core EPYC 9354 processor.

In AI performance benchmarks, Intel asserts that the Xeon Platinum 8462Y+ outperforms the AMD EPYC 9354 by a factor of 7.11. Across all of these benchmarks, the Xeon Platinum 8462Y+ not only leads in overall performance but also leads in performance-per-watt. These workloads use Intel Advanced Matrix Extensions (AMX) on Sapphire Rapids, which accelerate specific AI tasks such as classification, natural language processing, recommendation, and detection.
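AMX accelerates low-precision matrix math (bfloat16 and int8) in hardware, and frameworks typically reach it through oneDNN rather than hand-written intrinsics. The sketch below shows one common way to enable a bfloat16 inference path that can dispatch to AMX on a fourth-generation Xeon, using Intel Extension for PyTorch; the ResNet-50 model and input shape are illustrative placeholders, not part of Intel’s published benchmark suite.

```python
# Minimal sketch, assuming Intel Extension for PyTorch (ipex) and torchvision
# are installed on an AMX-capable CPU (4th-gen Xeon). The model and input are
# placeholders for illustration only.
import torch
import intel_extension_for_pytorch as ipex
import torchvision.models as models

model = models.resnet50(weights=None).eval()          # example classification model
model = ipex.optimize(model, dtype=torch.bfloat16)    # fuse ops, prepare bf16 path

x = torch.randn(1, 3, 224, 224)                       # dummy input batch
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(x)                                    # bf16 matmuls/convs may use AMX tiles via oneDNN
print(out.shape)
```

Whether AMX is actually used depends on the CPU, the oneDNN build, and the operator shapes; the framework falls back to AVX-512 paths otherwise.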

Next, across a broader range of workloads, from SPECint to MySQL, Cassandra, and MongoDB, and including workloads that use Intel’s accelerator engines, such as Microsoft SQL Server, GROMACS, LAMMPS, NAMD, and Monte Carlo simulation, the performance improvement over AMD’s fourth-generation EPYC Genoa can reach up to 2.52 times, with a performance-per-watt advantage of up to 251%. The largest gains appear in storage and HPC-specific benchmarks. While general-purpose compute performance trails EPYC Genoa, microservices and data-services workloads exceed the competition by 20% to 30%.

In terms of total cost of ownership (TCO), Intel suggests it can match the performance of 40 of its competitor’s servers with fewer servers, resulting in lower energy consumption and lower overall cost. In a PostgreSQL database workload, savings can reach up to 8%, while a Microsoft SQL Server 2022 backup workload accelerated with QAT can save up to 35%. Savings reach 38% in a Black-Scholes HPC workload, 61% in a DLRM AI recommendation scenario, and up to 79% in BERT-Large natural language processing.

Lastly, Intel compared its flagship 56-core Xeon Max 9480, which integrates HBM (high-bandwidth memory), against AMD’s strongest 96-core EPYC 9654 in pure HPC workloads. The Xeon Max 9480 showed a performance advantage of more than 40% over its competitor in earth-system modeling, energy, and manufacturing workloads.