LLMpedia: The first transparent, open encyclopedia generated by LLMs

InfiniBand

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: 81 entities extracted → 0 after deduplication → 0 after NER → 0 enqueued
InfiniBand
Name: InfiniBand
Developer: InfiniBand Trade Association
Introduced: 2000
Industry: High-performance computing, data center
Related: Ethernet, Fibre Channel, PCI Express

InfiniBand is a high-speed, low-latency computer networking standard used primarily in high-performance computing and enterprise data center environments. It was developed by the InfiniBand Trade Association, a consortium of major industry players including Intel, IBM, and Sun Microsystems. InfiniBand operates over both copper cable and optical fiber, providing a switched fabric architecture that connects processors, storage area networks, and input/output devices.

Overview

The creation of InfiniBand was driven by the need for a superior interconnect to overcome bottlenecks in traditional shared-bus architectures such as Peripheral Component Interconnect (PCI). Initially envisioned as a replacement for both internal system buses and external networks, its design emphasizes quality of service and remote direct memory access (RDMA) capabilities. While it did not displace PCI Express as an internal bus, it became the dominant interconnect for large-scale supercomputer installations and high-end storage systems. Key competitors in the interconnect space have historically included Myrinet and Quadrics, though Ethernet has evolved with technologies like RoCE to compete in similar domains.

Architecture

The architecture is based on a point-to-point, switched network topology rather than a shared bus. Fundamental components include Host Channel Adapters, which interface with a central processing unit, and Target Channel Adapters for connecting to peripherals like storage devices. These are interconnected by InfiniBand switches within a subnet, managed by a Subnet Manager typically running on a dedicated appliance or switch module. Communication is channel-based, utilizing Queue Pairs for data transfer and Completion Queues for signaling operation finalization. The layered protocol stack includes physical, link, network, and transport layers, supporting both reliable connection and unreliable datagram services.
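The queue-pair model described above can be illustrated with a minimal conceptual sketch. This is not the real verbs API (which is the C library libibverbs); the class and method names here are hypothetical, and the "adapter" is simulated in software, but the flow matches the text: work requests are posted to a send queue, the adapter delivers the payload to the peer, and a completion event is signaled on the completion queue.

```python
from collections import deque

class CompletionQueue:
    """Collects completion events once the (simulated) adapter finishes a work request."""
    def __init__(self):
        self._events = deque()

    def post(self, wr_id, status):
        self._events.append((wr_id, status))

    def poll(self):
        """Return the next completion event, or None if the queue is empty."""
        return self._events.popleft() if self._events else None

class QueuePair:
    """A send/receive queue pair bound to a completion queue."""
    def __init__(self, cq):
        self.send_queue = deque()
        self.recv_queue = deque()
        self.cq = cq

    def post_send(self, wr_id, payload):
        self.send_queue.append((wr_id, payload))

    def process(self, peer):
        """Toy stand-in for the channel adapter: deliver payloads to the
        peer's receive queue and signal completion on the local CQ."""
        while self.send_queue:
            wr_id, payload = self.send_queue.popleft()
            peer.recv_queue.append(payload)
            self.cq.post(wr_id, "success")

# Two endpoints, each with its own completion queue
cq_a, cq_b = CompletionQueue(), CompletionQueue()
qp_a, qp_b = QueuePair(cq_a), QueuePair(cq_b)

qp_a.post_send(wr_id=1, payload=b"hello")
qp_a.process(peer=qp_b)

print(cq_a.poll())         # (1, 'success')
print(qp_b.recv_queue[0])  # b'hello'
```

The key design point mirrored here is asynchrony: the application posts work and later polls for completions, rather than blocking inside the operating system for each transfer.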

Performance and features

Performance is characterized by extremely high data rates and very low latency, measured in microseconds. Successive generations have increased speeds, from Single Data Rate (SDR) through Quad Data Rate (QDR) to High Data Rate (HDR) specifications, supporting hundreds of gigabits per second per port. A defining feature is its native support for RDMA, allowing data to move directly between application memory spaces without involving the operating system or consuming CPU resources. This enables efficient bulk data transfer and is crucial for message passing interface libraries like Open MPI and MVAPICH. Other advanced capabilities include adaptive routing, congestion control, and partitioning for traffic isolation.
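The relationship between a generation's raw signaling rate and its usable bandwidth can be sketched as below. The per-lane rates and line encodings are nominal figures assumed for illustration; later generations (HDR and beyond) also carry forward error correction overheads that this simple model does not capture.

```python
# Illustrative per-lane signaling rates (Gb/s) and line encodings for
# several InfiniBand generations; figures are nominal, assumed here.
GENERATIONS = {
    "SDR": (2.5,      (8, 10)),   # 8b/10b encoding
    "DDR": (5.0,      (8, 10)),
    "QDR": (10.0,     (8, 10)),
    "FDR": (14.0625,  (64, 66)),  # 64b/66b encoding
    "EDR": (25.78125, (64, 66)),
}

def effective_gbps(signal_rate, encoding, lanes=4):
    """Usable data rate of a link: raw signaling rate scaled by the
    encoding efficiency, summed over the lanes (4x is the common width)."""
    data_bits, line_bits = encoding
    return signal_rate * data_bits / line_bits * lanes

for gen, (rate, enc) in GENERATIONS.items():
    print(f"{gen}: {effective_gbps(rate, enc):.0f} Gb/s over a 4x link")
```

Under this model a 4x SDR link yields 8 Gb/s of usable bandwidth and a 4x EDR link yields 100 Gb/s, showing why the move from 8b/10b to 64b/66b encoding mattered as much as raising the signaling rate.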

Applications

The primary application domain is high-performance computing, where it forms the backbone interconnect for many top systems on the TOP500 list, such as those built by Cray Inc. and Fujitsu. It is extensively used in large scientific computing clusters for simulations in fields like computational fluid dynamics and climate modeling. Within enterprise data centers, it is deployed for building high-speed storage networks, often connecting servers to network-attached storage or storage area network arrays from vendors like Dell EMC and NetApp. The technology also underpins many hyper-converged infrastructure solutions and is integral to cloud computing platforms like the Oracle Exadata Database Machine.

Standards and development

The standard is developed and maintained by the InfiniBand Trade Association, with key specifications ratified by the International Committee for Information Technology Standards. Development is closely aligned with the PCI-SIG to ensure compatibility with prevailing expansion card form factors. Major revisions have expanded speed, enhanced management via the Baseboard Management Controller interface, and improved integration with software-defined networking paradigms. The ecosystem is supported by semiconductor companies such as Mellanox Technologies (now part of NVIDIA) and Broadcom Inc., which manufacture the switch silicon and host adapters. Ongoing work focuses on co-design with emerging accelerator technologies and maintaining a roadmap beyond HDR InfiniBand.