| Container Network Interface | |
|---|---|
| Name | Container Network Interface |
Container Network Interface
Container Network Interface (CNI) is a specification and set of conventions for configuring network interfaces in Linux containers and other lightweight virtualization environments. It defines a standardized API and lifecycle for network plugins that integrate with container runtimes, enabling interoperability among projects such as Kubernetes, Apache Mesos, OpenShift, and Cloud Foundry. The project emerged from collaboration between engineers at organizations including CoreOS, Google, and Red Hat to address fragmentation in container networking.
The specification provides a minimal contract for how network configuration is passed to plugins and how plugins report results back to container runtimes. It decouples network orchestration from container lifecycle management through a JSON-based interface and environment-variable conventions used by runtimes such as containerd and CRI-O. The design supports common networking models, including overlay networks such as Weave Net, underlay strategies used by Calico, and hybrid approaches found in Flannel and Cilium.
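A minimal network configuration of the kind described above, as it might be consumed by a bridge-type plugin, is sketched below. The field names follow the CNI configuration format; the network name, bridge name, and subnet are illustrative values, not defaults:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/24"
  }
}
```

The `type` field names the plugin executable the runtime will invoke, and the nested `ipam` section delegates address assignment to a separate IPAM plugin.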
CNI architecture separates responsibilities across a small set of roles: the runtime (the caller), the plugin (the CNI module), and higher-level network controllers, such as those in Kubernetes, that orchestrate policy. Core components include the CNI specification, executable plugins conforming to the spec, and the caller libraries that container runtimes embed. Typical plugins implement operations invoked during the add and delete phases of a container's lifecycle; these map to kernel features exposed by Linux subsystems such as Netlink, iptables, and network namespaces. Plugins often interact with dataplane technologies like eBPF, VXLAN, IPsec, and SR-IOV to provide forwarding, isolation, and performance features. Integration points include the Container Runtime Interface ecosystem and orchestration layers such as the kubelet and OpenStack Neutron.
A rich ecosystem of plugins implements the CNI spec, ranging from simple bridge plugins to complex policy engines. Notable vendors and projects providing implementations include Calico, Cilium, Weave Net, Flannel, Multus, Canal, and Kube-router. Specialized plugins enable advanced capabilities: Multus supports attaching multiple interfaces per pod, as used by NFV deployments and Open vSwitch integrations; Cilium leverages eBPF for L7-aware policies; Calico focuses on scalable BGP-based routing and policy enforcement, often deployed alongside the BIRD routing daemon. Hardware offload and SR-IOV support is available through plugins developed by vendors such as Mellanox Technologies and integrations with Intel networking stacks.
Configuration uses small JSON files or runtime-provided configuration blobs that specify the plugin type, IPAM settings, and namespace mappings. Runtimes pass context to plugin executables via environment variables, with the configuration itself delivered on standard input during the add and delete operations; results returned to the caller include IP addresses, routes, and interface names. Network administrators use cluster-level tooling such as Kubernetes CustomResourceDefinitions, the OpenShift network operator, or calicoctl to provision and manage plugin configurations. IP address management is commonly provided by integrated IPAM plugins or by external controllers such as MetalLB, which handles load-balancer addressing in bare-metal clusters.
Security features depend on plugin capabilities and kernel mechanisms such as network namespaces, Linux capabilities, and SELinux or AppArmor profiles. Plugins implement policy enforcement and microsegmentation through mechanisms provided by projects like Calico and Cilium, integrating with identity systems such as SPIFFE and service meshes like Istio for workload identity and L7 controls. Isolation boundaries leverage network namespaces and virtual Ethernet (veth) pairs, while encryption of traffic in transit can be provided by IPsec- or WireGuard-based datapaths. Admission controllers in orchestration layers such as Kubernetes can enforce plugin allowlists and configuration constraints.
Performance characteristics vary widely among plugin designs: in-kernel datapaths using eBPF typically provide lower latency and higher packet throughput than user-space datapaths or overlay tunneling with VXLAN. Scalability considerations include control-plane state distribution, such as the BGP-based approach implemented by Calico and the use of distributed stores like etcd, which affect achievable cluster size and churn handling. Benchmarks often compare CPU utilization, packets-per-second, and connection setup times across implementations such as Cilium, Weave Net, and Flannel under workloads modeled on real-world deployments.
CNI is widely adopted across cloud providers, distributions, and orchestration platforms. Major cloud vendors and managed Kubernetes services integrate CNI-compatible plugins and tooling, and open-source platforms such as Kubernetes, OpenShift, Rancher, and Cloud Foundry rely on CNI for networking. The specification's vendor-neutral design enabled contributions from organizations including Google, Red Hat, CoreOS, Intel, and VMware, fostering interoperability among projects such as containerd, CRI-O, the now-retired rkt, and KubeVirt. The ecosystem includes testing suites, certification efforts, and community governance through working groups associated with the Cloud Native Computing Foundation.
Category:Computer networking