You can use a storage checkup to verify that the cluster storage is optimally configured for OKD Virtualization.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating Role and RoleBinding objects for the service account, granting the checkup the required permissions, and creating the input config map and the checkup job. You can run a checkup multiple times.
The predefined storage checkup includes the skipTeardown configuration option, which controls resource cleanup after a storage checkup runs.
By default, the skipTeardown field value is Never, which means that the checkup always performs teardown steps and deletes all resources after it runs.
To retain resources for further inspection when a failure occurs, set the skipTeardown field to onfailure.
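As an alternative to editing the config map interactively, a minimal sketch of setting this option with `oc patch`, assuming the storage-checkup-config config map already exists in your checkup namespace:

```shell
# Merge-patch the config map so that resources are kept when the checkup fails.
# Replace <checkup_namespace> with your checkup namespace.
oc patch configmap storage-checkup-config -n <checkup_namespace> \
  --type merge -p '{"data":{"spec.param.skipTeardown":"onfailure"}}'
```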
You have installed the OpenShift CLI (oc).
Run the following command to edit the storage-checkup-config config map:
$ oc edit configmap storage-checkup-config -n <checkup_namespace>
Set the skipTeardown field to the onfailure value by modifying the storage-checkup-config config map, stored in the storage_checkup.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  namespace: <checkup_namespace>
data:
  spec.param.skipTeardown: onfailure
# ...
Reapply the storage-checkup-config config map by running the following command:
$ oc apply -f storage_checkup.yaml -n <checkup_namespace>
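To confirm that the change was applied, you can read the field back with a JSONPath query; the backslashes escape the literal dots inside the key name:

```shell
# Prints the current skipTeardown value from the config map data.
oc get configmap storage-checkup-config -n <checkup_namespace> \
  -o jsonpath='{.data.spec\.param\.skipTeardown}'
```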
You can run a storage checkup to validate that storage is working correctly for virtual machines.
Navigate to Virtualization → Checkups in the web console.
Click the Storage tab.
Click Install permissions.
Click Run checkup.
Enter a name for the checkup in the Name field.
Enter a timeout value for the checkup in the Timeout (minutes) field.
Click Run.
You can view the status of the storage checkup in the Checkups list on the Storage tab. Click on the name of the checkup for more details.
Use a predefined checkup to verify that the OKD cluster storage is configured optimally to run OKD Virtualization workloads.
You have installed the OpenShift CLI (oc).
The cluster administrator has created the required cluster-reader permissions for the storage checkup service account and namespace, such as in the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-storage-checkup-clustereader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- kind: ServiceAccount
  name: storage-checkup-sa
  namespace: <target_namespace> (1)
| 1 | The namespace where the checkup is to be run. |
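If your cluster administrator prefers a single command over a manifest, an equivalent binding can be created imperatively. This sketch assumes the service account name and target namespace used elsewhere in this procedure:

```shell
# Bind the built-in cluster-reader ClusterRole to the checkup service account.
oc create clusterrolebinding kubevirt-storage-checkup-clustereader \
  --clusterrole=cluster-reader \
  --serviceaccount=<target_namespace>:storage-checkup-sa
```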
Create a ServiceAccount, Role, and RoleBinding manifest file for the storage checkup.
Example ServiceAccount, Role, and RoleBinding manifest:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: storage-checkup-role
rules:
- apiGroups: [ "" ]
  resources: [ "configmaps" ]
  verbs: [ "get", "update" ]
- apiGroups: [ "kubevirt.io" ]
  resources: [ "virtualmachines" ]
  verbs: [ "create", "delete" ]
- apiGroups: [ "kubevirt.io" ]
  resources: [ "virtualmachineinstances" ]
  verbs: [ "get" ]
- apiGroups: [ "subresources.kubevirt.io" ]
  resources: [ "virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume" ]
  verbs: [ "update" ]
- apiGroups: [ "kubevirt.io" ]
  resources: [ "virtualmachineinstancemigrations" ]
  verbs: [ "create" ]
- apiGroups: [ "cdi.kubevirt.io" ]
  resources: [ "datavolumes" ]
  verbs: [ "create", "delete" ]
- apiGroups: [ "" ]
  resources: [ "persistentvolumeclaims" ]
  verbs: [ "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-checkup-role
subjects:
- kind: ServiceAccount
  name: storage-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: storage-checkup-role
Apply the ServiceAccount, Role, and RoleBinding manifest in the target namespace:
$ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
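Optionally, you can verify that the permissions took effect by impersonating the service account; `oc auth can-i` prints yes or no:

```shell
# Check that the checkup service account may create VMs in the target namespace.
oc auth can-i create virtualmachines.kubevirt.io \
  --as=system:serviceaccount:<target_namespace>:storage-checkup-sa \
  -n <target_namespace>
```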
Create a ConfigMap and Job manifest file. The config map contains the input parameters for the checkup job.
Example input config map and job manifest:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  namespace: $CHECKUP_NAMESPACE
data:
  spec.timeout: 10m
  spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization
  spec.param.vmiTimeout: 3m
---
apiVersion: batch/v1
kind: Job
metadata:
  name: storage-checkup
  namespace: $CHECKUP_NAMESPACE
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccount: storage-checkup-sa
      restartPolicy: Never
      containers:
      - name: storage-checkup
        image: quay.io/kiagnose/kubevirt-storage-checkup:main
        imagePullPolicy: Always
        env:
        - name: CONFIGMAP_NAMESPACE
          value: $CHECKUP_NAMESPACE
        - name: CONFIGMAP_NAME
          value: storage-checkup-config
Apply the ConfigMap and Job manifest file in the target namespace to run the checkup:
$ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
Wait for the job to complete:
$ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m
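While waiting, or if the job does not complete within the timeout, the checkup container logs are often the quickest way to see progress; this assumes the job name from the manifest above:

```shell
# Stream logs from the pod created by the storage-checkup job.
oc logs -n <target_namespace> job/storage-checkup -f
```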
Review the results of the checkup by running the following command:
$ oc get configmap storage-checkup-config -n <target_namespace> -o yaml
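To check only the overall result rather than reading the full config map, a JSONPath query can extract the status.succeeded key; the backslash escapes the dot inside the key name:

```shell
# Prints "true" if the checkup succeeded, "false" otherwise.
oc get configmap storage-checkup-config -n <target_namespace> \
  -o jsonpath='{.data.status\.succeeded}'
```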
Example output config map (success):
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-storage
data:
  spec.timeout: 10m
  status.succeeded: "true" (1)
  status.failureReason: "" (2)
  status.startTimestamp: "2023-07-31T13:14:38Z" (3)
  status.completionTimestamp: "2023-07-31T13:19:41Z" (4)
  status.result.cnvVersion: 4.21.2 (5)
  status.result.defaultStorageClass: trident-nfs (6)
  status.result.goldenImagesNoDataSource: <data_import_cron_list> (7)
  status.result.goldenImagesNotUpToDate: <data_import_cron_list> (8)
  status.result.ocpVersion: 4.21.0 (9)
  status.result.pvcBound: "true" (10)
  status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> (11)
  status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> (12)
  status.result.storageProfilesWithSmartClone: <storage_profile_list> (13)
  status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> (14)
  status.result.storageProfilesWithRWX: |-
    ocs-storagecluster-ceph-rbd
    ocs-storagecluster-ceph-rbd-virtualization
    ocs-storagecluster-cephfs
    trident-iscsi
    trident-minio
    trident-nfs
    windows-vms
  status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted
  status.result.vmHotplugVolume: |-
    VMI "vmi-under-test-dhkb8" hotplug volume ready
    VMI "vmi-under-test-dhkb8" hotplug volume removed
  status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed
  status.result.vmVolumeClone: 'DV cloneType: "csi-clone"'
  status.result.vmsWithNonVirtRbdStorageClass: <vm_list> (15)
  status.result.vmsWithUnsetEfsStorageClass: <vm_list> (16)
| 1 | Specifies if the checkup is successful (true) or not (false). |
| 2 | The reason for failure if the checkup fails. |
| 3 | The time when the checkup started, in RFC 3339 time format. |
| 4 | The time when the checkup completed, in RFC 3339 time format. |
| 5 | The OKD Virtualization version. |
| 6 | The default storage class, if one is configured. |
| 7 | The list of golden images whose data source is not ready. |
| 8 | The list of golden images whose data import cron is not up-to-date. |
| 9 | The OKD version. |
| 10 | Specifies if a PVC of 10Mi has been created and bound by the provisioner. |
| 11 | The list of storage profiles using snapshot-based clone but missing VolumeSnapshotClass. |
| 12 | The list of storage profiles with empty claimPropertySets, which indicates an unknown provisioner. |
| 13 | The list of storage profiles with smart clone support (CSI/snapshot). |
| 14 | The list of storage profiles with spec-overridden claimPropertySets. |
| 15 | The list of virtual machines that use the Ceph RBD storage class when the virtualization storage class exists. |
| 16 | The list of virtual machines that use an Elastic File Store (EFS) storage class where the GID and UID are not set in the storage class. |
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> storage-checkup
$ oc delete configmap -n <target_namespace> storage-checkup-config
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <storage_sa_roles_rolebinding>.yaml
If a storage checkup fails, there are steps that you can take to identify the reason for failure.
You have installed the OpenShift CLI (oc).
You have downloaded the directory provided by the must-gather tool.
Review the status.failureReason field in the storage-checkup-config config map by running the following command and observing the output:
$ oc get configmap storage-checkup-config -n <namespace> -o yaml
Example output config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: storage-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-storage
data:
  spec.timeout: 10m
  status.succeeded: "false"
  status.failureReason: "ErrNoDefaultStorageClass"
# ...
If the checkup has failed, the status.succeeded value is false.
If the checkup has failed, the status.failureReason field contains an error message. In this example output, the ErrNoDefaultStorageClass error message means that no default storage class is configured.
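For example, to confirm whether a default storage class exists, list the storage classes; the default class, if any, is marked (default) next to its name:

```shell
# List storage classes; the default one carries the
# storageclass.kubernetes.io/is-default-class=true annotation.
oc get storageclass
```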
Search the directory provided by the must-gather tool for logs, events, or terms related to the error in the data.status.failureReason field value.
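A recursive grep over the must-gather directory is a simple way to do this; the directory name is whatever the must-gather tool produced on your workstation:

```shell
# Search the collected must-gather output for the failure reason string.
grep -r "ErrNoDefaultStorageClass" <must_gather_directory>/
```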
The following error codes might appear in the storage-checkup-config config map after a storage checkup fails.
| Error code | Meaning |
|---|---|
| ErrNoDefaultStorageClass | No default storage class is configured. |
| ErrPvcNotBound | One or more persistent volume claims (PVCs) failed to bind. |
| ErrMultipleDefaultStorageClasses | Multiple default storage classes are configured. |
| ErrEmptyClaimPropertySets | There are storage profiles with empty claimPropertySets, which indicates an unknown provisioner. |
| ErrVMsWithUnsetEfsStorageClass | There are VMs using Elastic File System (EFS) storage classes, where the GID and UID are not set in the storage class. |
| ErrGoldenImageNoDataSource | One or more golden images has a data source that is not ready. |
| ErrGoldenImagesNotUpToDate | The data import cron of one or more golden images is not up to date. |
| ErrVMsBootFailed | Some VMs failed to boot within the expected time. |