| Kubernetes NetworkPolicy | |
|---|---|
| Name | Kubernetes NetworkPolicy |
| Developer | Kubernetes contributors (originally Google); hosted by the Cloud Native Computing Foundation |
| Initial release | 2016 (beta, Kubernetes 1.3); stable in 2017 (Kubernetes 1.7) |
| Repository | github.com/kubernetes/kubernetes |
| Programming language | Go |
| License | Apache License 2.0 |
Kubernetes NetworkPolicy
Kubernetes NetworkPolicy is a native Kubernetes API resource that defines network traffic rules for pods in a Kubernetes cluster. It provides declarative, namespace-scoped controls over ingress and egress flows between workloads, services, and endpoints managed by Kubernetes, and it is enforced by the cluster's underlying networking solution. Rules are additive allow-lists: once a policy selects a pod, that pod is isolated and only explicitly allowed traffic is permitted. NetworkPolicy plays a central role in zero-trust microservices architectures and cloud-native security models used by organizations such as Spotify, Airbnb, Pinterest, Salesforce, and Snap Inc.
NetworkPolicy emerged from design discussions at Google and evolved through implementation and standardization within the Kubernetes project, now hosted by the Cloud Native Computing Foundation. It provides a Kubernetes-native way to express connectivity intent, comparable to firewall rules and security groups in Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Enforcement is delegated to Container Network Interface (CNI) implementations such as Calico, Weave Net, and Cilium; some CNIs, such as Flannel on its own, do not implement NetworkPolicy, in which case policies are accepted by the API server but silently unenforced. Large-scale adopters like Booking.com, Reddit, Zalando, and The New York Times use NetworkPolicy alongside service meshes such as Istio, Linkerd, and Consul to achieve layered security.
NetworkPolicy resources target pods using label selectors that reference labels applied by a Deployment, DaemonSet, or StatefulSet. Policies are namespace-scoped but can select peers in other namespaces, enabling interactions similar to cross-project permissions in systems like OpenStack and VMware vSphere. The spec's top-level fields are podSelector, policyTypes, ingress, and egress; peers within ingress and egress rules are selected with podSelector, namespaceSelector, or ipBlock. Depending on the CNI, these rules map to underlying kernel constructs such as iptables chains or eBPF programs. The resource model is defined in the Kubernetes API (networking.k8s.io/v1), which is maintained by contributors from companies including Red Hat, IBM, Huawei, VMware, and Cisco Systems.
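A minimal manifest can illustrate this structure: a top-level podSelector choosing the governed pods, policyTypes, and an ingress rule whose peer is selected by another podSelector. All names, namespaces, labels, and the port below are hypothetical:

```yaml
# Sketch: allow ingress to pods labeled app=api in the "shop" namespace,
# but only from pods labeled app=frontend in the same namespace, on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: shop
spec:
  podSelector:              # pods this policy applies to
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:      # peer selector: source pods in the same namespace
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy selects the app=api pods, they become isolated for ingress: traffic not matched by this (or another) allow rule is dropped.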
NetworkPolicy supports two explicit policyTypes: Ingress and Egress. Ingress rules permit traffic from selected sources, while Egress rules permit outbound connections to selected destinations. Selectors use Kubernetes label syntax consistent with tooling like Helm, Kustomize, Flux, and Argo CD. NamespaceSelectors enable boundary patterns similar to organization and project roles in GitHub, GitLab, and Bitbucket, while podSelectors mirror workload scoping found in Apache Kafka deployments managed by operators from Confluent. Rules can reference ipBlock entries to allow CIDR ranges, comparable to AWS security group usage at companies like Netflix, and can match ports and protocols as defined by IANA standards.
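As a sketch of Egress policyTypes combined with ipBlock and port matching, the following hypothetical policy restricts outbound traffic from a set of pods to one external CIDR on HTTPS while still permitting DNS. The names, labels, and namespace are illustrative, and the CIDR is from the IPv4 documentation range rather than any real provider:

```yaml
# Sketch: egress allow-list for pods labeled app=payments.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-egress-allowlist
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # stand-in for an external API's address range
      ports:
        - protocol: TCP
          port: 443
    - to:                          # allow DNS resolution to any namespace
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
```

Note that forgetting the DNS rule is a common pitfall: once Egress is listed in policyTypes, all unmatched outbound traffic, including name resolution, is dropped.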
Enforcement depends on the cluster's CNI plugin and the node networking stack. Solutions like Calico implement policy using routing and BPF for scalable enforcement, while Cilium uses eBPF to attach filtering logic to sockets and packets. Other implementations rely on iptables rules programmed by CNI agents; kube-proxy manages iptables for Services, not for NetworkPolicy. Kubernetes distributions such as OpenShift, Rancher, GKE, EKS, and AKS ship with defaults or integrations that affect NetworkPolicy behavior. Policy intent is stored in the Kubernetes control plane and reconciled by controllers, following the same watch-and-reconcile pattern used by etcd-backed operators such as those for Prometheus; data plane enforcement happens on the nodes and may interact with cloud-native proxies and load balancers like NGINX, Envoy, and HAProxy.
Common use cases include microservice isolation, default-deny network segmentation, and compliance enforcement in regulated environments served by companies like Stripe, Square, and PayPal. Examples include:
- Default deny-all ingress for a namespace with explicit allow rules for service frontends, analogous to perimeter policies used in Equinix Metal deployments.
- Egress restrictions that limit access to external APIs (e.g., Twilio, Stripe) using ipBlock CIDRs and port restrictions.
- Allow-listing for monitoring and logging stacks like Prometheus, Grafana, and the ELK Stack to permit scrapers and forwarders.
- Multi-tenant isolation in platforms like Heroku and Platform.sh, where namespaces map to tenant boundaries.
These scenarios are often combined with admission controllers such as Open Policy Agent or with GitOps flows using Flux and Argo CD.
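The monitoring allow-list scenario above can be sketched as a cross-namespace ingress rule. The namespace names and metrics port here are assumptions, and the kubernetes.io/metadata.name label is only populated automatically on namespaces in recent Kubernetes versions (1.21+):

```yaml
# Sketch: let Prometheus in a "monitoring" namespace scrape all pods
# in the "shop" namespace on an assumed metrics port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: shop
spec:
  podSelector: {}                 # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 9090              # assumed metrics port
```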
Best practices include defining a default deny policy for production namespaces, using least-privilege podSelectors, and testing rules with tools maintained by Kubernetes SIGs and vendors. Combine NetworkPolicy with identity-aware proxies like Istio or Linkerd for layered security, and use observability tools such as Jaeger, Zipkin, Prometheus, and Grafana to correlate connectivity issues. Troubleshooting typically involves checking CNI logs, kube-proxy state, and node firewall settings on systems like Ubuntu, Debian, or Red Hat Enterprise Linux, and validating policies with utilities provided by Calico, Cilium, or kubectl. For incident response, integrate NetworkPolicy changes with audit systems like Kubernetes Audit and with CI workflows in Jenkins, CircleCI, or GitHub Actions.
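A default deny policy of the kind recommended above can be as small as the following sketch (the namespace name is assumed). The empty podSelector matches every pod in the namespace, and listing both policyTypes with no ingress or egress rules drops all traffic not explicitly allowed by other policies:

```yaml
# Sketch: default-deny baseline for a production namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production     # assumed namespace name
spec:
  podSelector: {}           # selects all pods, isolating them
  policyTypes:
    - Ingress
    - Egress
```

Because policies are additive, targeted allow policies (such as the frontend or monitoring examples discussed earlier) can then be layered on top of this baseline without modifying it.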