| PersistentVolumeClaim | |
|---|---|
| Name | PersistentVolumeClaim |
| Type | Kubernetes resource |
| Introduced | 2014 |
| Purpose | storage abstraction for pods |
# PersistentVolumeClaim
A PersistentVolumeClaim (PVC) is a Kubernetes API object used to request persistent storage for containers; it is acted on by control-plane components such as the API server, the kube-controller-manager, the scheduler, and the kubelet. It is commonly used with cluster-level resources such as PersistentVolume, StorageClass, and the Container Storage Interface (CSI), and it integrates with external storage systems from providers like Amazon Web Services, Google Cloud Platform, Microsoft Azure, OpenStack, VMware, and NetApp. Administrators and developers reference a PersistentVolumeClaim in manifests alongside workload objects like Pod, Deployment, StatefulSet, DaemonSet, and ReplicaSet.
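As a minimal sketch, a Pod manifest consumes a claim by name through a `persistentVolumeClaim` volume source (the names `web`, `app-data`, and `my-claim` here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: app-data            # must match a volume name below
          mountPath: /usr/share/nginx/html
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: my-claim         # an existing PVC in the same namespace
```

The Pod does not name the backing PersistentVolume directly; it only references the claim, which is what keeps the workload portable across storage backends.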
A PersistentVolumeClaim represents a user's request for storage in a cluster and is satisfied by a PersistentVolume, either pre-provisioned by an administrator or created on demand by the kube-controller-manager or by dynamic provisioners from projects and vendors such as Rook, Ceph, Portworx, Longhorn, and StorageOS. The claim abstracts the details of backend systems including Amazon EBS, Google Persistent Disk, Azure Disk, NFS, iSCSI, GlusterFS, and CIFS, so workloads created via Pod, Job, CronJob, or StatefulSet can consume durable storage independent of the node lifecycle. Claims are defined in YAML or JSON manifests and are subject to cluster authorization via Role-Based Access Control (RBAC), admission controllers, and API versioning managed by Kubernetes SIG Storage.
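A minimal claim manifest might look like the following (the name `my-claim` and the class name `standard` are illustrative; the class must exist in the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce          # single-node read-write attachment
  resources:
    requests:
      storage: 10Gi          # minimum capacity requested
  storageClassName: standard # selects which provisioner serves the claim
```

Applying this with `kubectl apply -f` creates the claim; whether it binds immediately depends on whether a matching PersistentVolume exists or a dynamic provisioner creates one.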
The PersistentVolumeClaim API is part of the core (v1) Kubernetes API group served by the API server and tracked across releases by Kubernetes SIG Release and the CNCF. The spec includes fields such as accessModes, resources.requests.storage, storageClassName, and selector, which coordinate binding behavior with PersistentVolume, StorageClass, VolumeSnapshot, and VolumeSnapshotClass objects and with dynamic provisioners implementing the Container Storage Interface (CSI). A claim's status conditions and phase are updated by controllers in the kube-controller-manager and can be observed with tools such as kubectl, Kustomize, Helm, Flux, and Argo CD.
Binding of a PersistentVolumeClaim to a PersistentVolume is performed by a controller loop in the kube-controller-manager using matching rules based on capacity, access modes, storageClassName, and label selectors, analogous to the label-selector matching used by Services and Endpoints. A claim's lifecycle phases are Pending, Bound, and Lost; transitions generate events visible via kubectl describe and can be monitored by platforms like Prometheus, Grafana, and Datadog. Reclaim policies (Retain, Delete, and the deprecated Recycle), defined on PersistentVolume objects, coordinate with provisioners such as the AWS EBS CSI Driver, the GCE PD CSI Driver, and the Azure Disk CSI Driver.
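For static provisioning, a PersistentVolume that can bind the example claim above must match it on capacity, access modes, and storageClassName. A sketch, assuming an illustrative NFS server name and export path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-0001
spec:
  capacity:
    storage: 10Gi                  # must be >= the claim's request
  accessModes:
    - ReadWriteOnce                # must cover the claim's access modes
  storageClassName: standard       # must equal the claim's storageClassName
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com        # illustrative backend endpoint
    path: /exports/data
```

If any of the three matching fields disagree, the claim stays Pending and the mismatch shows up in `kubectl describe pvc` events.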
Access modes for claims include ReadWriteOnce, ReadOnlyMany, ReadWriteMany, and (in newer releases) ReadWriteOncePod, which correspond to capabilities of backends such as NFS, CephFS, GlusterFS, and distributed block systems from Dell EMC, NetApp, Pure Storage, and IBM Spectrum Scale. StorageClass objects define a provisioner, parameters, and a reclaimPolicy, and integrate with dynamic provisioners such as CSI drivers and out-of-tree drivers maintained by vendors like Red Hat, Canonical, and SUSE, and by community projects like rook/rook. Administrators typically mark a default class with the storageclass.kubernetes.io/is-default-class annotation; defaults are also influenced by installers such as kubeadm and kops and by managed services from Amazon EKS, Google GKE, and Azure AKS.
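A StorageClass ties these pieces together. A sketch using the AWS EBS CSI driver as the provisioner (the class name `fast-ssd` is illustrative, and the available `parameters` depend on the driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com       # CSI driver that creates the volumes
parameters:
  type: gp3                        # driver-specific volume type
reclaimPolicy: Delete              # delete backend volume when the PV is released
volumeBindingMode: WaitForFirstConsumer  # defer provisioning until a Pod schedules
```

`WaitForFirstConsumer` is a common choice for zonal block storage because it lets the scheduler pick the node (and therefore the zone) before the volume is created.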
A claim's storage request is expressed under resources.requests (the storage field) and is enforced by the control plane and by quota mechanisms such as ResourceQuota and admission controllers maintained by Kubernetes SIG API Machinery. Capacity accounting appears in dashboards from tools like Lens, Octant, Rancher, and the OpenShift Console, in monitoring backends like Prometheus, and in billing and cost tools from Kubecost, CloudHealth, AWS Cost Explorer, and Google Cloud Billing when cloud-backed volumes are used. Storage quotas, limits, and reclaim policies are especially important when provisioning via external systems like OpenEBS and Longhorn or enterprise arrays from HPE and Dell EMC.
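A ResourceQuota can cap both total requested storage and the number of claims in a namespace, including per-StorageClass limits. A sketch (namespace and class names are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a
spec:
  hard:
    requests.storage: 500Gi        # total storage requested across all PVCs
    persistentvolumeclaims: "10"   # maximum number of claims in the namespace
    # per-class cap, keyed as <class>.storageclass.storage.k8s.io/requests.storage
    fast-ssd.storageclass.storage.k8s.io/requests.storage: 200Gi
```

A claim that would exceed any of these limits is rejected at admission time rather than left Pending.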
Common patterns include: static provisioning, where an operator creates PersistentVolume objects referencing backend storage such as NFS or iSCSI and workloads bind to them through a PersistentVolumeClaim used by Deployment or StatefulSet manifests; dynamic provisioning, where a StorageClass and a CSI provisioner create volumes automatically for new claims; and snapshot and cloning workflows using VolumeSnapshot objects and snapshot controllers, integrated with tools like Velero for backup and Kasten for disaster recovery. Typical examples appear in the Kubernetes documentation, community guides from the Cloud Native Computing Foundation, books by Kelsey Hightower, Brendan Burns, and Joe Beda, and vendor documentation from AWS, Google Cloud, and Azure.
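The snapshot-and-restore workflow can be sketched as a VolumeSnapshot of an existing claim followed by a new claim that uses the snapshot as its dataSource (all names here are illustrative, and a snapshot-capable CSI driver plus the snapshot CRDs must be installed):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-claim-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative snapshot class
  source:
    persistentVolumeClaimName: my-claim    # the claim to snapshot
---
# Restore: a new PVC populated from the snapshot
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim-restored
spec:
  storageClassName: standard
  dataSource:
    name: my-claim-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                        # must be >= the snapshot's size
```

Cloning follows the same shape, with `dataSource` pointing at a PersistentVolumeClaim instead of a VolumeSnapshot.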
When a claim remains Pending, administrators inspect events via kubectl describe, controller logs in the kube-controller-manager, and provisioner logs from CSI drivers and vendor operators like Rook, Longhorn, and Portworx. Best practices include defining appropriate StorageClass policies, setting quotas with ResourceQuota, using StatefulSet for workloads that need stable per-replica storage, enabling snapshots via VolumeSnapshot, and integrating backups with tools like Velero and Restic. Security practices include enforcing Pod Security Standards (the successor to the deprecated PodSecurityPolicy), restricting access with RBAC, and encrypting volumes with provider features such as AWS KMS, Google Cloud KMS, and Azure Key Vault.