| Xeon Platinum 8490H | |
|---|---|
| Name | Xeon Platinum 8490H |
| Code name | Sapphire Rapids HBM |
| Designed by | Intel |
| Produced by | Intel |
| Instruction set | x86-64 |
| Socket | LGA 4677 |
| Fabrication process | Intel 7 |
| Cores | 60 |
| Threads | 120 |
| Cache | L1: 4.5 MB, L2: 60 MB, L3: 112.5 MB |
| Memory support | HBM2e, DDR5 |
| TDP | 350 W |
| Predecessor | Xeon Platinum 8380 |
Xeon Platinum 8490H is a high-performance server processor from Intel's Xeon Scalable processor family, based on the Sapphire Rapids microarchitecture. It is distinguished by its integration of High Bandwidth Memory 2e (HBM2e) on-package, a feature designed to accelerate memory-intensive workloads in high-performance computing and artificial intelligence. The processor is manufactured on the Intel 7 process node and is intended for deployment in multi-socket servers for enterprise and scientific data centers.
The processor features 60 physical cores and 120 threads, with a base clock rate of 1.9 GHz and a maximum turbo frequency of 3.5 GHz. It utilizes the LGA 4677 socket, also known as Socket E, and has a default thermal design power of 350 watts. Its cache hierarchy includes 4.5 MB of L1 cache, 60 MB of L2 cache, and 112.5 MB of shared L3 cache. Memory support is dual-mode, with four stacks of on-package HBM2e providing 64 GB of capacity and eight-channel DDR5 support for traditional DRAM expansion, compatible with Intel Xeon Max Series platforms.
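The theoretical peak bandwidth of the eight-channel DDR5 subsystem follows from simple arithmetic. A minimal sketch, assuming DDR5-4800 (4800 MT/s, the speed commonly associated with Sapphire Rapids platforms; the article itself does not state a supported transfer rate):

```python
# Hedged sketch: theoretical peak DDR5 bandwidth for an 8-channel platform.
# DDR5-4800 is an assumed transfer rate, not stated in the article.

def peak_ddr_bandwidth_gbs(channels: int, megatransfers: int,
                           bus_width_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: channels x MT/s x bytes per transfer / 1000."""
    return channels * megatransfers * bus_width_bytes / 1000.0

# Eight channels of DDR5-4800, 64-bit (8-byte) data bus per channel.
print(peak_ddr_bandwidth_gbs(8, 4800))  # 307.2 GB/s
```

On-package HBM2e raises this ceiling substantially, which is why the dual-mode memory design targets bandwidth-bound workloads.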
The core design is based on the Golden Cove microarchitecture, which is part of the wider Sapphire Rapids platform. A key innovation is the inclusion of on-package HBM2e, which operates as a cache or as a flat memory space to provide extremely high bandwidth for workloads like computational fluid dynamics and molecular modeling. The chip includes Intel Advanced Matrix Extensions (AMX) to accelerate deep learning inference and training tasks. Other integrated technologies include the Data Streaming Accelerator (DSA), QuickAssist Technology (QAT) for cryptographic acceleration, and support for Compute Express Link (CXL) and PCIe 5.0 for high-speed interconnects and storage.
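On Linux, AMX availability can be checked from the CPU feature flags. A minimal sketch, assuming the flag names (`amx_tile`, `amx_bf16`, `amx_int8`) that the Linux kernel exposes in `/proc/cpuinfo` for AMX-capable parts:

```python
# Hedged sketch: detecting AMX support from a Linux /proc/cpuinfo dump.
# Flag names and the /proc/cpuinfo format are assumptions about the
# deployment environment, not details taken from the article.

def has_amx(cpuinfo_text: str) -> bool:
    """Return True if all three AMX feature flags appear in the flags line."""
    required = {"amx_tile", "amx_bf16", "amx_int8"}
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return required <= flags
    return False

sample = "flags\t\t: fpu sse2 avx512f amx_bf16 amx_tile amx_int8"
print(has_amx(sample))  # True
```

In practice the check would read the real file, e.g. `has_amx(open("/proc/cpuinfo").read())`.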
In benchmark tests, the processor demonstrates significant advantages in memory-bound applications common in high-performance computing environments, such as weather forecasting simulations and genome sequencing analysis, due to the high bandwidth of its integrated HBM2e. Its AMX instructions provide substantial performance gains for artificial intelligence frameworks like TensorFlow and PyTorch compared to previous-generation Xeon processors without this feature. Performance in traditional enterprise workloads, such as large database management on Microsoft SQL Server or virtualization on VMware vSphere, is also robust, benefiting from the high core count and support for DDR5 memory.
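Memory-bound performance of the kind described above is typically measured with bandwidth kernels such as STREAM. A crude, illustrative probe in pure Python, in the spirit of STREAM's Copy kernel (real HPC measurements use the C benchmark with OpenMP; the figures here are machine-dependent and far below hardware peak):

```python
# Hedged sketch: a rough memory-copy bandwidth probe, loosely modeled on
# the STREAM Copy kernel. Illustrative only; not a substitute for STREAM.
import time
from array import array

def copy_bandwidth_gbs(n_doubles: int = 2_000_000, repeats: int = 5) -> float:
    src = array("d", [1.0]) * n_doubles      # 8 bytes per element
    dst = array("d", bytes(8 * n_doubles))   # zero-filled destination
    start = time.perf_counter()
    for _ in range(repeats):
        dst[:] = src                          # bulk slice copy
    elapsed = time.perf_counter() - start
    # Count bytes read plus bytes written, as STREAM does for Copy.
    return 2 * 8 * n_doubles * repeats / elapsed / 1e9

print(f"{copy_bandwidth_gbs():.1f} GB/s")  # result is machine-dependent
```

On an HBM-backed system, running such a kernel pinned to HBM versus DDR5 NUMA nodes is how the bandwidth advantage would show up in practice.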
The processor is positioned at the extreme high end of the server market, competing directly with AMD's EPYC processors with 3D V-Cache technology in technical computing segments. Its primary use cases are in exascale computing projects, national laboratories such as Lawrence Livermore National Laboratory, and commercial cloud providers running demanding analytics and AI-as-a-service platforms. It is also targeted at original equipment manufacturers and system integrators building servers for financial modeling and computational chemistry applications where memory bandwidth is a critical bottleneck.
The processor was formally launched by Intel in January 2023 as part of the fourth-generation Xeon Scalable family. It is fabricated on the Intel 7 process node at Intel's advanced manufacturing facilities. The platform support is provided by the Eagle Stream platform, with server designs from partners like Hewlett Packard Enterprise, Dell Technologies, and Supermicro. Its release followed the broader unveiling of the Sapphire Rapids microarchitecture at events like the Intel Innovation conference.
Category:Intel microprocessors
Category:Xeon microprocessors
Category:Server microprocessors