| CSI (Container Storage Interface) | |
|---|---|
| Name | CSI (Container Storage Interface) |
| Developer | Cloud Native Computing Foundation |
| Initial release | 2017 |
| Written in | Go |
| Stable release | 1.5 |
| License | Apache License 2.0 |
CSI (Container Storage Interface) is an industry-standard specification that defines a plugin interface for exposing arbitrary storage systems to container orchestration platforms. It decouples Kubernetes and other container orchestrators such as Apache Mesos and HashiCorp Nomad from storage implementation details such as block devices, file systems, and object stores. The project is hosted under the Cloud Native Computing Foundation and has been adopted by major vendors including Red Hat, Google, Microsoft, Amazon, and VMware.
CSI standardizes how orchestrators and their node agents, such as the kubelet, request volumes, attach and mount storage, take snapshots, and perform cloning. The interface enables storage vendors such as NetApp, Pure Storage, Dell Technologies, Hitachi, and IBM to deliver drivers that work across multiple orchestrators without vendor-specific integration. By defining RPC semantics, CSI reduces coupling between orchestration projects like Kubernetes and storage solutions including Ceph, GlusterFS, Rook, and OpenEBS. The specification also informs cloud providers such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure on how to implement persistent storage offerings.
CSI specifies a modular architecture comprising controller-side and node-side RPCs, with well-defined roles for components like the CSI driver, external provisioner, and kubelet plugins. Key components include:

- CSI driver binaries, implemented by vendors such as Canonical and SUSE, that run on the controller and node planes.
- Controller services, which integrate with control planes in Kubernetes and with APIs from storage backends such as iSCSI appliances or NVMe over Fabrics systems from Intel or Broadcom.
- Node services, which interact with local OS kernel subsystems in Linux distributions like Red Hat Enterprise Linux and Ubuntu as well as Windows Server.
- Sidecar containers (external-provisioner, external-attacher, external-snapshotter), developed by projects within the Cloud Native Computing Foundation ecosystem, that mediate between orchestration controllers and driver binaries.

The architecture accommodates capabilities such as volume lifecycle management, topology awareness, and controller-side volume expansion while interoperating with tools like kubectl and Helm.
The CSI specification defines a protobuf-based gRPC API with three services: Identity, Controller, and Node. The spec outlines semantics for CreateVolume, DeleteVolume, ControllerPublishVolume, NodeStageVolume, NodePublishVolume, ControllerExpandVolume, and snapshot operations such as CreateSnapshot and DeleteSnapshot. It also specifies capability negotiation and versioning practices influenced by governance patterns in The Linux Foundation. The API complements the Container Runtime Interface used between the kubelet and container runtimes, and follows gRPC conventions common to cloud-native projects such as Envoy. Release artifacts and change management reflect processes used by Kubernetes SIG Storage and other special interest groups.
There are numerous vendor and open-source implementations. Notable drivers and projects include:

- Cloud vendor drivers: AWS Elastic Block Store, Azure Disk Storage, and Google Persistent Disk drivers provided by their respective cloud providers.
- Storage vendor drivers: drivers from NetApp, Pure Storage, Dell EMC, VMware vSphere, and Hitachi.
- Open-source drivers: Rook, OpenEBS, Ceph CSI, and Longhorn.
- Commercial ecosystems and orchestration platforms: Red Hat OpenShift and VMware Tanzu integrate CSI drivers to expose underlying SAN, NAS, and distributed file systems.

Each implementation targets specific volume modes (block vs. filesystem), access modes (single-writer vs. multi-writer), and topology constraints used by the Kubernetes scheduler and cloud provider control planes.
CSI supports persistent volume provisioning, dynamic provisioning, snapshotting, cloning, resizing, and topology-aware scheduling. Typical workflows include:

- Dynamic provisioning triggered by PersistentVolumeClaim objects in Kubernetes, where external provisioners create volumes on backends such as Ceph, NetApp ONTAP, or VMware vSAN.
- Orchestration of stateful workloads such as PostgreSQL, MySQL, Apache Kafka, and Elasticsearch that require durable block or filesystem storage.
- Backup and restore integrations with data-protection platforms from Veeam and Commvault that use the CSI snapshot and clone APIs.
- Hybrid cloud migrations orchestrated with tools from HashiCorp and with Velero, leveraging CSI for volume mobility across datacenter and cloud environments.
CSI drivers must interact securely with storage backends using authentication methods that include mutual TLS, token exchange, role-based access via OAuth 2.0 providers, and secrets management with HashiCorp Vault or Kubernetes Secrets. The spec does not mandate a single authentication scheme, so implementations integrate with identity providers like Active Directory and cloud IAM systems such as AWS Identity and Access Management and Azure Active Directory. Best practices involve isolating driver privileges using mechanisms such as AppArmor, SELinux, and container runtime sandboxes, and observing driver behavior with monitoring stacks like Prometheus and Grafana.
Development is coordinated through the Cloud Native Computing Foundation with input from a broad community including Red Hat, Google, Microsoft, Amazon, VMware, and independent contributors. Governance follows CNCF project policies and typical open-source workflows involving GitHub repositories, issue trackers, and design proposals reviewed by groups such as Kubernetes SIG Storage. Adoption spans hyperscalers, telecom operators like AT&T and Verizon, and enterprises using platforms from IBM and Oracle. The ecosystem includes certification programs and interoperability tests run in continuous integration across projects like Tekton and Jenkins to ensure driver compatibility and platform stability.