Intel Xeon 6 Delivers up to 17x AI Performance Gains over 4 Years of MLPerf Results
Today, MLCommons published results of its industry-standard AI performance benchmark suite, MLPerf Inference v4.1. Intel submitted results across six MLPerf benchmarks for 5th Gen Intel® Xeon® Scalable processors and, for the first time, Intel® Xeon® 6 processors with Performance-cores (P-cores). Intel Xeon 6 processors with P-cores achieved a geometric mean (geomean) AI performance improvement of about 1.9x compared with 5th Gen Xeon processors.
“The newest MLPerf results show how continued investment and resourcing are critical for improving AI performance. Over the past four years, we have raised the bar for AI performance on Intel Xeon processors by up to 17x based on MLPerf. As we near general availability later this year, we look forward to ramping Xeon 6 with our customers and partners.”
–Pallavi Mahajan, Intel corporate vice president and general manager of Data Center and AI Software
Why MLPerf Results Matter: CPUs are a critical component of AI systems, needed to deploy solutions across a variety of scenarios. Intel Xeon provides a strong solution for AI inference, including classical machine learning and vector search embedding.
With MLPerf Inference v4.1, Intel submitted 5th Gen Intel Xeon processors and Xeon 6 processors with P-cores on ResNet50, RetinaNet, 3DUNet, BERT, DLRM v2 and GPT-J. Compared with 5th Gen Intel Xeon, Xeon 6 delivers a geomean of about 1.9x better AI inference performance across these six benchmarks. Intel continues to be the only server processor vendor to submit CPU results to MLPerf.
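For context, the 1.9x figure is a geometric mean of the per-benchmark speedups, which weights each workload equally regardless of its absolute throughput. A minimal sketch of that calculation in Python, using illustrative speedup values (the per-benchmark numbers below are assumptions for demonstration, not Intel's published results):

```python
from statistics import geometric_mean

# Hypothetical Xeon 6 vs. 5th Gen Xeon per-benchmark speedups.
# Illustrative values only -- not Intel's published figures.
speedups = {
    "ResNet50": 1.8,
    "RetinaNet": 2.0,
    "3DUNet": 1.7,
    "BERT": 2.1,
    "DLRM v2": 1.9,
    "GPT-J": 1.9,
}

# Geomean = nth root of the product of n speedup ratios.
geomean = geometric_mean(speedups.values())
print(f"Geomean speedup across {len(speedups)} benchmarks: {geomean:.2f}x")
```

The geomean is the standard way to summarize benchmark ratios, since an arithmetic mean would let a single outlier workload dominate the headline number.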
Over the past four years, Intel has made significant gains in AI performance with CPUs since it first submitted MLPerf results. Compared with 3rd Gen Intel® Xeon® Scalable processors in 2021, Xeon 6 performs up to 17x better on natural language processing (BERT) and up to 15x better on computer vision (ResNet50) workloads. Intel continues to invest in AI for its CPU roadmap. As an example, it continues to innovate with Intel® Advanced Matrix Extensions (AMX) through new data types and increased efficiency.
How Intel Supports Its AI Customers: The latest MLCommons benchmarks highlight how Xeon processors deliver strong CPU AI server solutions to original equipment manufacturers (OEMs). As the need for AI compute grows and many customers run AI workloads alongside their enterprise workloads, OEMs are prioritizing MLPerf submissions to ensure they deliver highly performant Xeon systems optimized for AI workloads to customers.
Intel supported five OEM partners – Cisco, Dell Technologies, HPE, Quanta and Supermicro – with their MLPerf submissions in this round. Each partner submitted MLPerf results with 5th Gen Xeon Scalable processors, demonstrating their systems’ support for a variety of AI workloads and deployments.
What’s Next: Intel will deliver more information about Xeon 6 processors with P-cores during a launch event in September.