LLMpedia: The first transparent, open encyclopedia generated by LLMs

NetworkPolicy (Kubernetes)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Weave Net (hop 5)
Expansion funnel: extracted 64 → after dedup 0 → after NER 0 → enqueued 0
NetworkPolicy (Kubernetes)
Name: NetworkPolicy
System: Kubernetes
Introduced: 2016
Language: YAML
License: Apache-2.0

NetworkPolicy (Kubernetes) is a Kubernetes resource for specifying network traffic rules for Kubernetes Pods. It enables declarative control over ingress and egress connectivity among Pods, Services, and external endpoints within a cluster managed by the kube-apiserver. Originating from community proposals in the CNCF ecosystem and evolving through contributions from vendors such as Google, Red Hat, and IBM, NetworkPolicy integrates with container networking projects to provide namespace-scoped access controls.

Overview

NetworkPolicy defines selective packet filtering through label-based selectors and protocol/port matching. It operates in the context of a Namespace and leverages pod labels, namespace labels, and CIDR blocks to express allowed traffic flows. Implementations rely on Container Network Interface (CNI) plugins and ecosystem projects such as Calico (software), Cilium, Weave Net, and Kube-router to program datapath enforcement. Adoption is common in environments where workloads run under kubelets and policies are managed through kubectl or GitOps systems like Argo CD and Flux.

Concepts and components

Key concepts include podSelector, namespaceSelector, ingress, egress, and policyTypes. The podSelector field is analogous to the label selection used by controllers like Deployment, DaemonSet, and StatefulSet to target sets of Pods. The namespaceSelector field allows cross-namespace relationships, loosely analogous to role bindings in RBAC systems extended by Open Policy Agent integrations. Ingress and egress rules echo firewall paradigms found in iptables and nftables and map to network policies in SDN products from vendors such as Cisco and Juniper Networks. CIDR blocks enable addressing compatible with the IPv4 and IPv6 stacks used by MetalLB and cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
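As a sketch of how these selectors combine (the policy name, namespace, and labels below are hypothetical), the following manifest allows ingress to Pods labeled app: backend only from Pods in the same namespace labeled role: frontend:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend          # hypothetical policy name
  namespace: demo               # hypothetical namespace
spec:
  podSelector:                  # the Pods this policy applies to
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:          # peers: same-namespace Pods with this label
            matchLabels:
              role: frontend
```

Because only Ingress appears in policyTypes, egress from the selected Pods is unaffected by this policy.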

Policy specification and fields

A NetworkPolicy manifest is declared in YAML using apiVersion and kind fields consistent with other Kubernetes resources like ConfigMap and Secret. Essential fields include podSelector, policyTypes, ingress, and egress entries. Each rule may specify ports (TCP/UDP/SCTP) and peer selectors referencing podSelector, namespaceSelector, or ipBlock entries for CIDR ranges. Ports and protocols mirror nomenclature used by Istio and Envoy (software) for service meshes, while ipBlock CIDR semantics align with standards overseen by IETF working groups. Policies are additive: if no policy selects a Pod, all traffic to and from it is allowed; once any policy selects a Pod for a given direction, only traffic explicitly allowed by some policy is permitted, a default-deny posture comparable to cloud security groups in Amazon EC2.
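A fuller manifest exercising the main fields might look as follows (names, labels, and CIDRs are hypothetical); it combines a namespaceSelector peer, an ipBlock with an exception, and per-rule ports:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy              # hypothetical policy name
  namespace: prod               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:    # any Pod in namespaces labeled team=web
            matchLabels:
              team: web
      ports:
        - protocol: TCP
          port: 8443
  egress:
    - to:
        - ipBlock:              # external CIDR, minus a carved-out subnet
            cidr: 10.0.0.0/8
            except:
              - 10.10.0.0/16
      ports:
        - protocol: TCP
          port: 5432
```

Note that entries under a single from/to list are ORed together, while the from/to list and its ports list within one rule are ANDed.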

Use cases and examples

Common use cases include microsegmentation for applications like Prometheus monitoring stacks, restricting egress to external APIs such as those provided by Stripe, isolating multi-tenant workloads in OpenShift clusters, and implementing zero-trust patterns inspired by BeyondCorp. Examples include permitting only HTTP(S) traffic between frontend Deployments and backend StatefulSet databases like PostgreSQL or MySQL, blocking lateral movement in breach scenarios observed in incidents involving SolarWinds-style supply chain attacks, and constraining egress to egress proxies such as Squid or HAProxy. In GitOps workflows driven by Helm charts and CI systems like Jenkins or GitLab, NetworkPolicy manifests are versioned and audited alongside application manifests.
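The frontend-to-database case mentioned above can be sketched as a policy on the database Pods (labels and the PostgreSQL port are illustrative) that admits only frontend-tier peers:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db    # hypothetical policy name
spec:
  podSelector:                  # applies to the PostgreSQL Pods
    matchLabels:
      app: postgresql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:          # only frontend-tier Pods may connect
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 5432            # standard PostgreSQL port
```

All other ingress to the database Pods, including lateral traffic from compromised workloads in the same namespace, is denied once this policy selects them.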

Implementation and enforcement

Enforcement depends on the chosen CNI plugin, which programs datapath elements such as iptables rules, eBPF programs, or VXLAN routes. Projects like Calico (software) use BGP control planes and eBPF integrations; Cilium leverages eBPF and XDP for high-performance filtering; Weave Net provides simpler overlay enforcement, while Flannel does not enforce NetworkPolicy on its own and is commonly paired with Calico. The kube-proxy component handles Service routing, while policy enforcement typically occurs in the host network stack or kernel hooks. Observability tools such as Prometheus, Grafana, and Jaeger complement policy telemetry; troubleshooting often uses kubectl together with network diagnostics tools like tcpdump and netstat executed within BusyBox or ephemeral debug containers.

Limitations and best practices

NetworkPolicy has limitations: it is namespace-scoped, label-dependent, and its behavior varies by CNI implementation; it does not natively offer the L7 inspection or application-aware rules found in Envoy (software) or Istio service meshes. Best practices include adopting a default-deny posture per namespace, using stable label schemes consistent with conventions from Kubernetes SIG Architecture, combining NetworkPolicy with RBAC for administrative control, versioning manifests in repositories hosted on GitHub or GitLab, and testing rules in staging clusters provisioned via Terraform or Pulumi. For advanced scenarios, integrate with a service mesh or policy engines such as Open Policy Agent, or with network observability platforms like Cilium's Hubble. Consider vendor-specific features, multi-cluster patterns in Kubernetes Federation, and cloud provider network policy offerings in Google Kubernetes Engine and Azure Kubernetes Service when designing production deployments.
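The default-deny posture recommended above is conventionally expressed as a policy with an empty podSelector and no rules (the policy name is arbitrary); applied per namespace, it blocks all traffic until further policies allow specific flows:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all        # conventional, but arbitrary, name
spec:
  podSelector: {}               # empty selector: every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress or egress rules are listed, so nothing is allowed
```

Because policies are additive, later allow policies in the same namespace punch holes in this baseline without needing to reference it.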

Category:Kubernetes