The Container Storage Interface (CSI) allows OKD to consume storage from storage back ends that implement the CSI interface as persistent storage.
OKD 4.17 supports version 1.6.0 of the CSI specification.
CSI drivers are typically shipped as container images. These containers are not aware of the OKD cluster where they run. To use a CSI-compatible storage back end in OKD, the cluster administrator must deploy several components that serve as a bridge between OKD and the storage driver.
The following diagram provides a high-level overview of the components running in pods in the OKD cluster.
It is possible to run multiple CSI drivers for different storage back ends. Each driver needs its own external controllers deployment and daemon set with the driver and CSI registrar.
External CSI controllers is a deployment that runs one or more pods with five containers:
The snapshotter container watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects (see the example after this list).
The resizer container is a sidecar container that watches for PersistentVolumeClaim updates and triggers ControllerExpandVolume operations against a CSI endpoint if you request more storage on the PersistentVolumeClaim object.
An external CSI attacher container translates attach and detach calls from OKD to the respective ControllerPublish and ControllerUnpublish calls to the CSI driver.
An external CSI provisioner container translates provision and delete calls from OKD to the respective CreateVolume and DeleteVolume calls to the CSI driver.
A CSI driver container.
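For illustration, the following is a minimal sketch of a VolumeSnapshot object of the kind the snapshotter container watches; the PVC name mysql and the class name <volume-snapshot-class> are placeholders, and the referenced VolumeSnapshotClass must point at your installed CSI driver:

# oc create -f - << EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot
spec:
  volumeSnapshotClassName: <volume-snapshot-class>
  source:
    persistentVolumeClaimName: mysql
EOF

Likewise, the resizer container reacts when the requested storage of a PVC grows. Assuming the storage class allows volume expansion, you can trigger this with a patch such as:

# oc patch pvc mysql -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'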
The CSI attacher and CSI provisioner containers communicate with the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
The external attacher must also run for CSI drivers that do not support third-party attach and detach operations.
The CSI driver daemon set runs a pod on every node that allows OKD to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:
A CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.
A CSI driver.
The CSI driver deployed on the node should have as few credentials to the storage back end as possible. OKD only uses the node plugin set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage, if these calls are implemented.
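If you want to confirm which CSI drivers have registered on a particular node, one way is to inspect that node's CSINode object; <node-name> is a placeholder:

# oc get csinode <node-name> -o yaml

The spec.drivers list in the output contains one entry for each CSI driver registered on that node.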
OKD installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.
To create CSI-provisioned persistent volumes that mount to these supported storage assets, OKD installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see Setting up AWS Elastic File Service CSI Driver Operator. For instructions on installing the GCP Filestore CSI driver, see Google Compute Platform Filestore CSI Driver Operator.
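To see which CSI Driver Operators manage drivers in your cluster, you can list the ClusterCSIDriver objects; the exact set returned depends on your platform:

# oc get clustercsidriver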
The following table describes the CSI drivers that are installed with OKD and the CSI features they support, such as volume snapshots and resize.
If your CSI driver is not listed in the following table, you must follow the installation instructions provided by your CSI storage vendor to use their supported CSI features.
CSI driver | CSI volume snapshots | CSI cloning | CSI resize | Inline ephemeral volumes |
---|---|---|---|---|
AWS EBS | ✅ | | ✅ | |
AWS EFS | | | | |
Google Compute Platform (GCP) persistent disk (PD) | ✅ | ✅ | ✅ | |
GCP Filestore | ✅ | | ✅ | |
IBM Power® Virtual Server Block | | | ✅ | |
IBM Cloud® Block | ✅[3] | | ✅[3] | |
LVM Storage | ✅ | ✅ | ✅ | |
Microsoft Azure Disk | ✅ | ✅ | ✅ | |
Microsoft Azure Stack Hub | ✅ | ✅ | ✅ | |
Microsoft Azure File | ✅[4] | ✅[4] | ✅ | ✅ |
OpenStack Cinder | ✅ | ✅ | ✅ | |
OpenShift Data Foundation | ✅ | ✅ | ✅ | |
OpenStack Manila | ✅ | | | |
Shared Resource | | | | ✅ |
CIFS/SMB | ✅ | | | |
VMware vSphere | ✅[1] | | ✅[2] | |
1. Requires vSphere version 7.0 Update 3 or later for both vCenter Server and ESXi. Does not support fileshare volumes.
2. Offline volume expansion: minimum required vSphere version is 6.7 Update 3 P06. Online volume expansion: minimum required vSphere version is 7.0 Update 2.
3. Does not support offline snapshots or resize. The volume must be attached to a running pod.
4. Azure File cloning does not support the NFS protocol. It supports the azurefile-csi storage class, which uses the SMB protocol. Azure File cloning and snapshots are Technology Preview features:
Azure File CSI cloning and snapshots are Technology Preview features only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage back end. The provider of the CSI driver should document how to create a storage class in OKD and the parameters available for configuration.
The created storage class can be configured to enable dynamic provisioning.
Create a default storage class that ensures all PVCs that do not require any special storage class are provisioned by the installed CSI driver.
# oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class> (1)
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <provisioner-name> (2)
parameters:
EOF
1 | The name of the storage class that will be created. |
2 | The name of the CSI driver that has been installed. |
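As a quick check, assuming the command above succeeded, you can verify that the new class is the default; the names in the output depend on what you created:

# oc get storageclass

The default storage class appears with (default) next to its name in the output.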
The following example installs a default MySQL template without any changes to the template.
The CSI driver has been deployed.
A storage class has been created for dynamic provisioning.
Create the MySQL template:
# oc new-app mysql-persistent
--> Deploying template "openshift/mysql-persistent" to project default
...
# oc get pvc
NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql     Bound     kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
Volume populators use the datasource field in a persistent volume claim (PVC) spec to create pre-populated volumes.
Volume population is currently enabled and supported as a Technology Preview feature. However, OKD does not ship with any volume populators.
Volume populators are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
For more information about volume populators, see Kubernetes volume populators.
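As an illustration only, a populator consumes a PVC whose dataSourceRef field (which Kubernetes keeps in sync with datasource) points at a custom resource. The ExamplePopulator kind, the populators.example.com API group, and the name sample-data below are hypothetical; a real populator defines its own kind:

# oc create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSourceRef:
    apiGroup: populators.example.com # hypothetical API group
    kind: ExamplePopulator # hypothetical populator kind
    name: sample-data # hypothetical source object
EOF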