| CNI Plugin | |
|---|---|
| Name | CNI Plugin |
| Type | Software component |
| Developer | Various open-source projects |
| Released | 2015 |
| License | MIT, Apache-2.0, BSD variants |
CNI Plugin
CNI Plugin is a specification and an ecosystem of implementations for container networking used by projects such as Kubernetes, Docker, Mesos, Nomad, and OpenShift. It standardizes how network interfaces are provisioned for containers across networking backends such as Calico, Flannel, Weave Net, and Cilium. Plugins interact with orchestration components including CRI-O, containerd, and the kubelet to attach, detach, and configure container interfaces and routes.
CNI defines a minimal JSON-based plugin contract that allows orchestration systems such as Kubernetes, Apache Mesos, HashiCorp Nomad, OpenShift, and Cloud Foundry to request network attachment and teardown from networking providers such as Calico, Cilium, Weave Net, Flannel, and Multus CNI. The specification focuses on portability between runtimes including containerd, rkt, CRI-O, and Docker Engine while leveraging Linux kernel facilities such as network namespaces, netlink, iptables, and iproute2. Key adopters include managed cloud platforms such as Google Kubernetes Engine, Amazon EKS, and Azure Kubernetes Service, and distributions such as Red Hat OpenShift.
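The JSON contract above is easiest to see in a concrete network configuration. The sketch below writes a minimal configuration for the reference bridge plugin with host-local IPAM; the field names follow the CNI specification, while the network name, bridge name, subnet, and file path are illustrative values, not defaults.

```shell
# Minimal CNI network configuration: the "bridge" plugin creates a Linux
# bridge and veth pairs, and delegates address assignment to the
# "host-local" IPAM plugin. The path and values here are illustrative.
cat > /tmp/10-demo-bridge.conf <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF

# Confirm the file is well-formed JSON and print the plugin type.
python3 -c 'import json; print(json.load(open("/tmp/10-demo-bridge.conf"))["type"])'
# → bridge
```

The `type` field is how the runtime locates the plugin: it must match the name of an executable on the plugin search path.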
The CNI architecture separates concerns among the orchestration layer (for example, the Kubernetes kubelet), the plugin binary executed by the runtime, and network backend agents such as those of Calico or Cilium. Components include:
- The CNI specification and JSON network configuration files used by the kubelet or containerd to invoke plugin binaries.
- Plugin executables that implement the ADD, DEL, CHECK, and VERSION operations; reference plugins such as bridge, macvlan, and ipvlan are maintained in the containernetworking/plugins repository, while projects like Calico and Cilium ship their own binaries.
- IP Address Management (IPAM) modules such as host-local, sometimes backed by stores like etcd or Consul, alongside related controllers such as MetalLB for load-balancer address allocation.
Interactions rely on Linux kernel subsystems and userland utilities such as iproute2 and iptables; advanced datapath implementations may use eBPF programs and integrate with XDP.
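The execution contract between runtime and plugin can be sketched with a stand-in plugin. The `CNI_*` environment variable names and the stdin/stdout JSON exchange follow the CNI specification; the fake plugin, its path, and the netns path are illustrative only, so the exchange can be shown without root privileges or a real network namespace.

```shell
# A stand-in CNI plugin: a real plugin would configure interfaces inside
# the namespace named by $CNI_NETNS; this one only echoes a spec-shaped
# ADD result so the calling convention is visible.
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/fake-plugin <<'EOF'
#!/bin/sh
echo '{ "cniVersion": "1.0.0", "interfaces": [ { "name": "'"$CNI_IFNAME"'" } ] }'
EOF
chmod +x /tmp/cni-demo/fake-plugin

# The runtime sets CNI_* variables, pipes the network configuration JSON
# to the plugin's stdin, and reads the JSON result from its stdout.
CNI_COMMAND=ADD \
CNI_CONTAINERID=demo123 \
CNI_NETNS=/var/run/netns/demo \
CNI_IFNAME=eth0 \
CNI_PATH=/tmp/cni-demo \
  /tmp/cni-demo/fake-plugin < /dev/null
# → { "cniVersion": "1.0.0", "interfaces": [ { "name": "eth0" } ] }
```

On DEL the runtime repeats the call with `CNI_COMMAND=DEL`, and the plugin is expected to tear down whatever ADD created.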
Installation typically involves placing plugin binaries into the runtime's CNI directory (commonly /opt/cni/bin) and providing a network configuration JSON in the directory read by the kubelet or containerd (for example, /etc/cni/net.d). Providers such as Calico, Cilium, Weave Net, Flannel, Antrea, and Multus CNI offer manifests for Kubernetes and installers for distributions such as Red Hat Enterprise Linux and Ubuntu. Configuration options often map to components such as IPAM backends (for example, etcd or Consul), policy engines (such as OPA (Open Policy Agent) or Kubernetes NetworkPolicy implementations), and routing integrations based on BGP, as used by Calico with the BIRD Internet Routing Daemon.
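A common installation detail is that several plugins can be chained in a single `.conflist` file, which runtimes execute in order for ADD and in reverse order for DEL. The sketch below uses a temporary directory in place of /etc/cni/net.d so the commands run unprivileged; the network name and subnet are illustrative.

```shell
# Sketch of a chained plugin configuration: "bridge" sets up connectivity,
# then "portmap" handles host port mappings. On a real node this file would
# live in /etc/cni/net.d, with binaries under /opt/cni/bin.
CONF_DIR=/tmp/demo-net.d
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/10-demo.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF

# Show the order in which the runtime would invoke the chained plugins.
python3 -c 'import json; c = json.load(open("/tmp/demo-net.d/10-demo.conflist")); print(" -> ".join(p["type"] for p in c["plugins"]))'
# → bridge -> portmap
```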
Popular CNI providers include Calico, Cilium, Flannel, Weave Net, Antrea, Multus CNI, and vendor offerings embedded in Amazon EKS, Google Kubernetes Engine, and Azure Kubernetes Service. Reference plugins implement bridge, macvlan, ipvlan, and host-local IPAM behavior and are maintained under the Cloud Native Computing Foundation's containernetworking organization. Advanced implementations such as Cilium leverage eBPF and integrate with observability stacks like Prometheus, Grafana, and tracing systems such as Jaeger.
Security considerations include isolation boundaries provided by Linux kernel namespaces, enforcement via iptables and nftables, and policy enforcement through Kubernetes NetworkPolicy, OPA (Open Policy Agent), and service meshes such as Istio. Datapath security benefits from technologies such as eBPF and XDP for filtering and telemetry, while control-plane security involves mutual TLS and secrets managed by systems such as Vault or Kubernetes Secrets. Integration with cloud provider networking (for instance, Amazon VPC, Google VPC, or Azure Virtual Network) and routing protocols (for example, BGP as used by Calico) can surface identity and tenancy concerns addressed by projects such as SPIFFE and SPIRE.
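NetworkPolicy enforcement depends entirely on the CNI provider: the API object exists in any cluster, but traffic is only restricted when a policy-capable backend (such as Calico or Cilium) implements it. A minimal default-deny manifest illustrates the shape; the namespace name here is illustrative.

```yaml
# Illustrative Kubernetes NetworkPolicy: denies all ingress traffic to every
# Pod in the "demo" namespace (an empty podSelector matches all Pods).
# Enforcement requires a CNI provider with NetworkPolicy support.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```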
Debugging common failures calls for examining logs from orchestration components such as the kubelet, containerd, and CRI-O, and from plugin agents such as those of Calico, Cilium, or Flannel. Useful tools include the iproute2 utilities, tcpdump, ss (socket statistics), bpftrace, and observability tools such as Prometheus and Grafana. Network policy and connectivity checks often use kubectl to inspect Pod status, NetworkPolicy objects, and CNI configuration files in system directories; resolving IPAM conflicts may require inspecting backing stores such as etcd or Consul.
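A first-pass diagnostic for the failure modes above usually checks, in order, that a configuration exists, that plugin binaries are installed, and what the node's interfaces and routes look like. The paths below are the conventional defaults and may differ per distribution; the commands are guarded so the sketch runs even on machines without a CNI setup.

```shell
# 1. Is there a CNI configuration for the runtime to find?
ls /etc/cni/net.d 2>/dev/null || echo "no CNI config directory"

# 2. Are the plugin binaries installed on the conventional path?
ls /opt/cni/bin 2>/dev/null || echo "no CNI plugin binaries"

# 3. Inspect node interfaces and routes (iproute2); guarded in case
#    the `ip` utility is unavailable on this machine.
ip -brief addr 2>/dev/null || true
ip route 2>/dev/null || true
```

From there, `kubectl describe pod` surfaces CNI errors reported by the kubelet, and `tcpdump` on the bridge or veth interfaces narrows down where traffic is dropped.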
Category:Container networking