OKD is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Microsoft Azure Disk Storage.
To create CSI-provisioned PVs that mount to Azure Disk storage assets, OKD installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver by default in the openshift-cluster-csi-drivers namespace.
The Azure Disk CSI Driver Operator provides a storage class named
managed-csi that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage.
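For example, a PVC that provisions a volume through the default managed-csi storage class might look like the following. The claim name and requested size here are illustrative, not defaults:

```yaml
# Hypothetical PVC using the default managed-csi storage class;
# the name and requested size are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-azure-disk-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
```

Because the storage class supports dynamic provisioning, creating this claim is enough to have a matching Azure Disk volume created on demand.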
The Azure Disk CSI driver enables you to create and mount Azure Disk PVs.
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
OKD defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage.
In future OKD versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration.
After full migration, in-tree plugins will eventually be removed in later versions of OKD.
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that is intended for use with the most demanding data workloads.
Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
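As a sketch of the data disk approach, the machine set's provider spec can declare an ultra disk directly. The fragment below is illustrative: the field values are examples, and the exact schema can vary by OKD version:

```yaml
# Illustrative fragment of a MachineSet providerSpec that attaches an
# ultra disk as a data disk (field values are examples, not defaults).
providerSpec:
  value:
    ultraSSDCapability: Enabled
    dataDisks:
    - nameSuffix: ultrassd
      lun: 0
      diskSizeGB: 4
      deletionPolicy: Delete
      cachingType: None
      managedDisk:
        storageAccountType: UltraSSD_LRS
```

With this approach the disk is attached to the machine at provisioning time, so no PVC is involved.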
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Have an existing Microsoft Azure cluster.
Copy an existing Azure
MachineSet custom resource (CR) and edit it by running the following command:
$ oc edit machineset <machine-set-name>
<machine-set-name> is the machine set that you want to provision machines with ultra disks.
Add the following lines in the positions indicated:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd (1)
      providerSpec:
        value:
          ultraSSDCapability: Enabled (2)
|1||Specify a label to use to select a node that is created by this machine set. This procedure uses disk: ultrassd for this value.|
|2||These lines enable the use of ultra disks.|
Create a machine set using the updated configuration by running the following command:
$ oc create -f <machine-set-name>.yaml
Create a storage class that contains the following YAML definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk-sc (1)
parameters:
  cachingMode: None
  diskIopsReadWrite: "2000" (2)
  diskMbpsReadWrite: "320" (3)
  kind: managed
  skuname: UltraSSD_LRS
provisioner: disk.csi.azure.com (4)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer (5)
|1||Specify the name of the storage class. This procedure uses ultra-disk-sc.|
|2||Specify the number of IOPS for the storage class.|
|3||Specify the throughput in MBps for the storage class.|
|4||For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com.|
|5||Optional: Specify this parameter to wait for the creation of the pod that will use the disk.|
Create a persistent volume claim (PVC) that references the ultra-disk-sc storage class by using the following YAML definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk (1)
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ultra-disk-sc (2)
  resources:
    requests:
      storage: 4Gi (3)
|1||Specify the name of the PVC. This procedure uses ultra-disk.|
|2||This PVC references the ultra-disk-sc storage class.|
|3||Specify the size for the storage class. The minimum value is 4Gi.|
Create a pod that contains the following YAML definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ultra
spec:
  nodeSelector:
    disk: ultrassd (1)
  containers:
  - name: nginx-ultra
    image: alpine:latest
    command:
    - "sleep"
    - "infinity"
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: ultra-disk (2)
|1||Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk: ultrassd.|
|2||This pod references the ultra-disk PVC.|
Validate that the machines are created by running the following command:
$ oc get machines
The machines should be in the Running state.
For a machine that is running and has a node attached, validate the partition by running the following command:
$ oc debug node/<node-name> -- chroot /host lsblk
In this command,
oc debug node/<node-name> starts a debugging shell on the node
<node-name> and passes a command with
--. The passed command
chroot /host provides access to the underlying host OS binaries, and
lsblk shows the block devices that are attached to the host OS machine.
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
  - name: ssd-benchmark1
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - name: lun0p1
      mountPath: "/tmp"
  volumes:
  - name: lun0p1
    hostPath:
      path: /var/lib/lun0p1
      type: DirectoryOrCreate
  nodeSelector:
    disktype: ultrassd
Use the information in this section to understand and recover from issues you might encounter.
If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the
ContainerCreating state and an alert is triggered.
For example, if the
additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, describe the pod by running the following command:
$ oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>