You can manage automatic updates for system-defined boot sources and custom boot sources.
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.
You can opt out of automatic updates for all system-defined boot sources by disabling the enableCommonBootImageImport feature gate. If you disable this feature gate, all DataImportCron objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
When the enableCommonBootImageImport feature gate is disabled, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, then populating it with an operating system image.
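For illustration, a minimal sketch of a DataSource that points to a manually populated PVC is shown below. The rhel8 name, the kubevirt-os-images namespace, and the PVC name are assumptions; they must match the objects that exist in your cluster:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: rhel8
  namespace: kubevirt-os-images
spec:
  source:
    pvc:
      name: <manually_populated_pvc> # PVC that you populated with the operating system image
      namespace: kubevirt-os-images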
Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs.
To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false. Setting this value to true re-enables the feature gate and turns automatic updates back on.
Custom boot sources are not affected by this setting.
Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR).
To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false. For example:
$ oc patch hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged \
--type json -p '[{"op": "replace", "path": \
"/spec/featureGates/enableCommonBootImageImport", \
"value": false}]'
To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true. For example:
$ oc patch hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged \
--type json -p '[{"op": "replace", "path": \
"/spec/featureGates/enableCommonBootImageImport", \
"value": true}]'
Custom boot sources that are not provided by OKD Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR).
You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details.
You can override the default storage class by editing the HyperConverged custom resource (CR).
Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
Define a new storage class by entering a value in the storageClassName field:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel8-image-cron
    spec:
      template:
        spec:
          storage:
            storageClassName: <new_storage_class> (1)
      schedule: "0 */12 * * *" (2)
      managedDataSource: <data_source> (3)
# ...
1 | Define the storage class. |
2 | Required: Schedule for the job specified in cron format. |
3 | Required: The data source to use. |
For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.
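For reference, the matching sourceRef in a VM template looks similar to the following fragment. The disk name, storage size, and namespace placeholders are illustrative assumptions, not a complete VirtualMachine manifest:
dataVolumeTemplates:
- metadata:
    name: <vm_name>-rootdisk
  spec:
    sourceRef:
      kind: DataSource
      name: <data_source> # must match the managedDataSource in the dataImportCronTemplates entry
      namespace: <golden_images_namespace>
    storage:
      resources:
        requests:
          storage: 10Gi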
Remove the storageclass.kubernetes.io/is-default-class annotation from the current default storage class.
Retrieve the name of the current default storage class by running the following command:
$ oc get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-manila-ceph manila.csi.openstack.org Delete Immediate false 11d
hostpath-csi-basic (default) kubevirt.io.hostpath-provisioner Delete WaitForFirstConsumer false 11d (1)
1 | In this example, the current default storage class is named hostpath-csi-basic. |
Remove the annotation from the current default storage class by running the following command:
$ oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' (1)
1 | Replace <current_default_storage_class> with the storageClassName value of the default storage class. |
Set the new storage class as the default by running the following command:
$ oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' (1)
1 | Replace <new_storage_class> with the storageClassName value that you added to the HyperConverged CR. |
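Optionally, confirm the change by listing the storage classes again:
$ oc get storageclass
The new storage class appears with (default) next to its name.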
OKD Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR).
The cluster has a default storage class.
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: centos7-image-cron
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true" (1)
    spec:
      schedule: "0 */12 * * *" (2)
      template:
        spec:
          source:
            registry: (3)
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 10Gi
      managedDataSource: centos7 (4)
      retentionPolicy: "None" (5)
1 | This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer. |
2 | Schedule for the job specified in cron format. |
3 | Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it. |
4 | For the custom image to be detected as an available boot source, the name of the image’s managedDataSource must match the name of the template’s DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file. |
5 | Use All to retain data volumes and data sources when the cron job is deleted. Use None to delete data volumes and data sources when the cron job is deleted. |
Save the file.
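After you save the file, the Operator creates a corresponding DataImportCron for the new template. As an optional way to watch progress (namespaces and object names depend on your configuration), you can list the cron jobs and data sources across all namespaces:
$ oc get dataimportcron -A
$ oc get datasource -A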
Enable volume snapshot boot sources by setting the dataImportCronSourceFormat parameter in the StorageProfile associated with the storage class that stores operating system base images. Although DataImportCron was originally designed to maintain only PVC sources, VolumeSnapshot sources scale better than PVC sources for certain storage types.
Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot.
You must have access to a volume snapshot with the operating system image.
The storage must support snapshotting.
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:
$ oc edit storageprofile <storage_class>
Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether the boot source uses a PVC or a volume snapshot by default.
Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot.
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
# ...
spec:
  dataImportCronSourceFormat: snapshot
Open the storage profile object that corresponds to the storage class used to provision boot sources.
$ oc get storageprofile <storage_class> -oyaml
Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to snapshot, and that any DataSource objects that the DataImportCron points to now reference volume snapshots.
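For example, one way to inspect the source of a DataSource directly; the name and namespace below are placeholders for the values used in your cluster:
$ oc get datasource <data_source> -n <golden_images_namespace> -o jsonpath='{.spec.source}'
If the output contains a snapshot entry rather than a pvc entry, the boot source references a volume snapshot.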
You can now use these boot sources to create virtual machines.
You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR).
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field.
For a custom boot source, remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default.
For a system-defined boot source, add the boot source to spec.dataImportCronTemplates and set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false'.
Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.
For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      annotations:
        dataimportcrontemplate.kubevirt.io/enable: 'false'
      name: rhel8-image-cron
# ...
Save the file.
You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR).
View the contents of the HyperConverged CR by running the following command:
$ oc get hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged -o yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
# ...
status:
# ...
  dataImportCronTemplates:
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: centos-7-image-cron
    spec:
      garbageCollect: Outdated
      managedDataSource: centos7
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status:
      commonTemplate: true (1)
# ...
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: user-defined-dic
    spec:
      garbageCollect: Outdated
      managedDataSource: user-defined-centos-stream8
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              pullMethod: node
              url: docker://quay.io/containerdisks/centos-stream:8
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status: {} (2)
# ...
1 | Indicates a system-defined boot source. |
2 | Indicates a custom boot source. |
Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field.
If the field contains commonTemplate: true, it is a system-defined boot source.
If the status.dataImportCronTemplates.status field has the value {}, it is a custom boot source.
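As an alternative to reading the full YAML, a jsonpath query along the following lines can list each boot source together with its commonTemplate flag; an empty second column indicates a custom boot source. This is an optional shortcut, not part of the documented procedure:
$ oc get hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged \
  -o jsonpath='{range .status.dataImportCronTemplates[*]}{.metadata.name}{"\t"}{.status.commonTemplate}{"\n"}{end}'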