| Intel Omni-Path | |
|---|---|
| Name | Intel Omni-Path |
| Developer | Intel Corporation |
| Type | High-performance network |
| Introduced | 2015 |
| Discontinued | 2019 (development ended; business later divested) |
| Data rate | 100 Gbit/s (per port) |
| Topology | Fat-tree and other switched-fabric topologies |
| Software interfaces | PSM2, OpenFabrics (OFA) interfaces, MPI |
Intel Omni-Path was a high-performance, low-latency interconnect product line developed by Intel Corporation for supercomputing and cluster computing. Designed to compete in the high-performance computing market with technologies from Mellanox Technologies, it targeted large-scale systems deployed at research centers such as Argonne National Laboratory, Lawrence Livermore National Laboratory, and national supercomputing centers connected through projects like PRACE and XSEDE. The product aimed to serve workloads running on platforms from vendors like Cray Inc., Hewlett Packard Enterprise, and Dell Technologies.
Omni-Path was announced by Intel Corporation in 2015 as part of an effort to enter an interconnect market dominated by Mellanox Technologies and by InfiniBand fabrics standardized by the InfiniBand Trade Association. The technology evolved from earlier Intel networking and silicon initiatives, including interconnect assets acquired from QLogic and Cray, and was brought to market through partnerships with system integrators such as Cray Inc. and Hewlett Packard Enterprise. Omni-Path components shipped in systems used by programs funded by agencies such as the U.S. Department of Energy and by European initiatives like PRACE. In 2019 Intel announced it would not develop a second-generation Omni-Path product, and it subsequently divested the business (to Cornelis Networks) and redirected investment toward other projects, echoing consolidation in the interconnect market such as NVIDIA Corporation's acquisition of Mellanox Technologies.
Omni-Path's architecture centered on a high-radix switch fabric supporting 100 Gbit/s per port and topologies such as fat-tree and torus variants used in installations at sites like Argonne National Laboratory and Oak Ridge National Laboratory. The silicon was fabricated in Intel's own fabs and drew on packet-routing concepts explored in earlier interconnect research at laboratories such as Lawrence Berkeley National Laboratory and Los Alamos National Laboratory. Hosts attached to the fabric through Host Fabric Interface (HFI) adapters installed in PCIe expansion slots on servers from vendors such as Dell Technologies, Hewlett Packard Enterprise, and Supermicro. The design emphasized low latency, protocol offload, and congestion management, with interoperability through interfaces maintained by the OpenFabrics Alliance and MPI implementations from the Open MPI Project and MPICH.
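As a back-of-envelope illustration of what the nominal per-port figure implies (assuming the 100 Gbit/s rating refers to payload data in one direction of a link):

```latex
% Rough conversion of the nominal per-port rate into byte-oriented bandwidth.
\[
  100\ \text{Gbit/s} \times \frac{1\ \text{byte}}{8\ \text{bits}}
  = 12.5\ \text{GB/s per direction}
  \quad\Longrightarrow\quad
  2 \times 12.5\ \text{GB/s} = 25\ \text{GB/s bidirectional per port}.
\]
```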
Omni-Path targeted performance metrics relevant to large-scale scientific workloads run on supercomputers such as those at Sandia National Laboratories and NERSC (the National Energy Research Scientific Computing Center). Benchmarks compared latency and bandwidth against interconnects used in systems such as Summit and in installations built on InfiniBand from Mellanox Technologies. Scalability claims were validated on clusters of thousands of nodes running applications from domains represented by centers like CERN, NASA, and the European Centre for Medium-Range Weather Forecasts. Message-passing workloads built on MPI, including applications such as LAMMPS, GROMACS, and Quantum ESPRESSO, were used to assess performance in production environments.
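Such comparisons commonly rest on simple message-passing micro-benchmarks. Below is a minimal, hypothetical MPI ping-pong sketch of the kind used for these measurements (illustrative, not a reproduction of any published benchmark): rank 0 bounces a fixed-size buffer off rank 1, and the averaged round-trip time gives an estimate of one-way latency and effective bandwidth.

```c
/* Minimal MPI ping-pong sketch (hypothetical example).
 * Rank 0 sends a buffer to rank 1 and waits for it to come back;
 * half of the average round trip approximates one-way latency, and
 * bytes moved per round trip divided by time gives effective bandwidth. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;
    const int bytes = 1 << 20;              /* 1 MiB message */
    char *buf = malloc(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / iters;          /* seconds per round trip */
        double bw  = (2.0 * bytes) / rtt / 1e9;  /* GB/s, both directions */
        printf("one-way latency: %.1f us, bandwidth: %.2f GB/s\n",
               rtt / 2 * 1e6, bw);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```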
The Omni-Path software stack integrated with ecosystems maintained by the OpenFabrics Alliance and with management tools from partners such as Cray Inc. and Hewlett Packard Enterprise. Host drivers and firmware updates were distributed through enterprise Linux channels from vendors such as Red Hat and SUSE, and administrators managed clusters with orchestration tools such as the Slurm Workload Manager and provisioning systems from Bright Computing. Interoperability testing covered MPI implementations from the Open MPI Project and MPICH, while performance analysis relied on profiling tools developed at Lawrence Livermore National Laboratory and in collaborations with centers like NERSC.
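On the host side, applications typically reached Omni-Path through Intel's PSM2 library or through the OpenFabrics Interfaces (libfabric), where the provider conventionally named "psm2" maps to Omni-Path hardware. The following is a minimal sketch, assuming a host with libfabric installed, of how an application might enumerate the fabric providers visible on a node; the psm2 check is illustrative rather than prescriptive.

```c
/* Hedged sketch: enumerate libfabric (OFI) providers visible on a host and
 * flag the "psm2" provider conventionally associated with Omni-Path hardware.
 * Assumes libfabric headers and library are installed; link with -lfabric. */
#include <rdma/fabric.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct fi_info *info = NULL;

    /* Ask libfabric for every provider/endpoint combination it knows about. */
    int ret = fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        return 1;
    }

    /* One provider may appear several times, once per endpoint/domain type. */
    for (struct fi_info *cur = info; cur; cur = cur->next) {
        const char *prov = cur->fabric_attr->prov_name;
        printf("provider: %-12s fabric: %s%s\n",
               prov,
               cur->fabric_attr->name,
               strcmp(prov, "psm2") == 0 ? "   <- Omni-Path (PSM2)" : "");
    }

    fi_freeinfo(info);
    return 0;
}
```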
Adoption occurred among national laboratories and research centers including Argonne National Laboratory, Lawrence Livermore National Laboratory, and other facilities participating in U.S. Department of Energy supercomputing procurement programs. Commercial systems from Cray Inc., Hewlett Packard Enterprise, and Dell Technologies offered Omni-Path as an option for HPC customers. Some academic consortia and supercomputing centers associated with PRACE and XSEDE evaluated or deployed Omni-Path in production clusters for workloads spanning climate modeling at ECMWF-affiliated centers, computational chemistry at Max Planck Society laboratories, and particle physics collaborations connected to CERN.
Primary alternatives included InfiniBand fabrics from Mellanox Technologies (now part of NVIDIA Corporation) and Ethernet-based high-speed fabrics offered by vendors such as Cisco Systems and Arista Networks. Comparisons focused on latency, bandwidth, congestion control, and integration with the software ecosystem, including MPI middleware and management frameworks used by Cray Inc. and Hewlett Packard Enterprise. Procurement decisions at institutions like Oak Ridge National Laboratory and Argonne National Laboratory weighed trade-offs similar to those documented in procurements by the U.S. Department of Energy and European research infrastructures such as PRACE.
Category:Supercomputer interconnects