| Multus | |
|---|---|
| Name | Multus |
| Developer | Intel, Red Hat, Kubernetes Network Plumbing Working Group |
| Initial release | 2016 |
| Programming language | Go |
| Repository | GitHub |
| License | Apache License 2.0 |
| Platform | Kubernetes, Linux |
Multus
Multus is a Container Network Interface (CNI) plugin that enables multiple network interfaces per pod in Kubernetes clusters. It acts as a meta-plugin, orchestrating other CNI plugins such as Flannel, Calico, Weave Net, Cilium, and SR-IOV drivers to attach supplemental networks to workloads. Multus was originally developed at Intel and is now maintained by the Kubernetes Network Plumbing Working Group, with significant contributions from Red Hat; it is widely used in environments that require advanced networking integration with platforms such as OpenShift, KubeVirt, and Knative.
Multus provides a mechanism to associate multiple distinct network attachments with a single pod, allowing separate interfaces for data plane, management, storage, or telemetry. It interoperates with orchestration systems and networking projects including Kubernetes NetworkPolicy, Istio, and Prometheus, enabling separation of concerns between control-plane traffic and application data paths. Common deployment contexts include Network Functions Virtualization (NFV), edge computing clusters, and machine learning training platforms where specialized connectivity to DPDK-accelerated interfaces, SR-IOV Virtual Functions, or overlay networks is required.
Multus operates as a delegating CNI plugin: the Multus binary is installed as the primary CNI plugin in each node's CNI configuration directory and dispatches network setup and teardown calls to one or more underlying CNI plugins such as bridge, macvlan, host-device, or vlan (802.1Q). Key components include:
- The Multus CNI shim, which parses pod annotations (e.g., `k8s.v1.cni.cncf.io/networks`) that reference NetworkAttachmentDefinition custom resources served through the Kubernetes CustomResourceDefinition API.
- NetworkAttachmentDefinition custom resources that describe configurations for secondary networks and reference specific CNI binaries or configuration blocks, following the Container Network Interface specification.
- Delegated CNI plugins such as Calico, Flannel, Cilium, Weave Net, SR-IOV, macvlan, and hardware-specific drivers from vendors such as Mellanox Technologies.
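To make the NetworkAttachmentDefinition concept concrete, the following sketch shows a minimal custom resource delegating to the macvlan plugin; the resource name `macvlan-conf`, the master interface `eth1`, and the address range are illustrative assumptions, not values prescribed by Multus:

```yaml
# Hypothetical NetworkAttachmentDefinition delegating to the macvlan CNI plugin.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf        # referenced later from pod annotations (name is an assumption)
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24",
      "rangeStart": "192.168.1.200",
      "rangeEnd": "192.168.1.216"
    }
  }'
```

The embedded `config` string is an ordinary CNI configuration, so any delegated plugin that accepts a CNI JSON document can be substituted for macvlan here.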
Multus also supports chaining and ordering of CNI calls, interacting with system components such as the kubelet, kube-apiserver, and etcd for persistence of cluster state. Integration points include CRI-O, containerd, and Docker runtimes.
Installation typically involves deploying Multus as a DaemonSet using manifests compatible with Kubernetes distributions like OpenShift or k3s. Administrators create NetworkAttachmentDefinition CRs to expose underlying plugin configurations stored in ConfigMaps or directly embedded in the CRD. Configuration steps commonly reference:
- CNI binary placement under /opt/cni/bin and integration with the container runtime via kubelet flags.
- Creating ClusterRole and ClusterRoleBinding entries to grant RBAC access to the Multus DaemonSet's service accounts, interacting with Role-Based Access Control APIs and admission controllers.
- Configuring fallback networks and default CNI chains so that pod networking remains intact when secondary attachments fail, which requires coordination with kube-proxy settings and CoreDNS.
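As an illustration of the default-chain configuration described above, a node-level Multus configuration might resemble the following sketch (the file path, network names, and the Flannel delegate are assumptions for illustration; JSON does not permit comments, so caveats are noted here instead):

```json
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "default-cni-network",
      "type": "flannel",
      "delegate": { "isDefaultGateway": true }
    }
  ]
}
```

In this layout the first entry in `delegates` serves as the cluster's default network, while secondary attachments are resolved at pod creation time from NetworkAttachmentDefinition resources.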
Vendors ship curated manifests and operators—such as those from Red Hat for OpenShift Container Platform—that automate common configurations, while community repositories on GitHub provide examples for raw Kubernetes installations.
Multus is frequently used in scenarios requiring distinct traffic segregation or hardware acceleration:
- NFV and telecom workloads integrating with DPDK and SR-IOV to deliver low-latency forwarding for virtualized network functions, often orchestrated alongside OpenStack or ONAP.
- Virtual machine networking with KubeVirt, where guest VMs require multiple NICs attached to separate networks for management, storage, or tenant isolation.
- High-performance storage and backup systems connecting to dedicated storage networks such as iSCSI or NFS over separate interfaces, coexisting with overlay networks such as VXLAN.
- Edge and IoT gateways that present a control interface to Prometheus-monitored telemetry networks while exposing segregated application networks to external systems such as MQTT brokers.
Example deployments reference annotations in pod specs that list NetworkAttachmentDefinition names, enabling automatic binding of secondary interfaces during pod lifecycle events managed by kubelet.
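Such an annotation-driven attachment can be sketched as follows; the pod name, container image, and the referenced attachment name `macvlan-conf` are illustrative assumptions, while the annotation key `k8s.v1.cni.cncf.io/networks` is the one defined by the Network Plumbing Working Group:

```yaml
# Hypothetical pod requesting a secondary interface via a Multus annotation.
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod                              # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf   # NetworkAttachmentDefinition name (assumed)
spec:
  containers:
  - name: app
    image: busybox                              # placeholder image
    command: ["sleep", "infinity"]
```

When the kubelet creates this pod, Multus sets up the default network on eth0 and asks the delegated plugin for an additional interface (conventionally net1) bound to the named attachment.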
Performance characteristics depend on delegated CNI plugins and underlying hardware. When paired with hardware offloads such as SR-IOV or DPDK, Multus enables near line-rate throughput and minimal CPU overhead. However, performance can be constrained by:
- Multiplexing overhead when chaining many plugins, such as macvlan plus overlays like Flannel or Calico.
- Resource management complexity on hosts with heavy NUMA considerations or when scheduling across heterogeneous nodes managed by the Kubernetes scheduler.
- Troubleshooting challenges related to transient states between the Multus shim, the kubelet, and CRD reconciliation in the kube-apiserver.
Scaling considerations often require careful tuning of CNI plugin parameters, host kernel settings, and coordination with cluster-level components such as etcd and CoreDNS.
Multus inherits security responsibilities from Kubernetes and its delegated CNIs. Best practices include limiting RBAC privileges for Multus service accounts, auditing CRD changes via Kubernetes audit logging, and enforcing network policies with projects such as Calico or Cilium. When using hardware NIC passthrough such as SR-IOV or PCI passthrough, administrators must reconcile host trust boundaries and supply-chain considerations from vendors such as Broadcom or Mellanox Technologies. Compliance programs often map Multus network isolation capabilities to regulatory controls in frameworks such as PCI DSS, HIPAA, or SOC 2 by documenting segregation of sensitive traffic and implementing network-level access controls.
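A least-privilege RBAC posture of the kind recommended above might be sketched as the following ClusterRole; the role name is an assumption, and the exact verb set a given Multus version needs should be taken from its shipped manifests rather than this illustration:

```yaml
# Hypothetical least-privilege ClusterRole for a Multus service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: multus-minimal          # illustrative name
rules:
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get", "list", "watch"]   # read-only access to attachment definitions
- apiGroups: [""]
  resources: ["pods", "pods/status"]
  verbs: ["get", "update"]          # needed to record per-pod network status
```

Scoping the role to these resources, rather than granting cluster-admin, narrows the blast radius if the Multus service account's credentials are compromised.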
Category:Container networking