You can configure local storage for your virtual machines by using the hostpath provisioner feature.
The hostpath provisioner is a local storage provisioner designed for OKD Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the OKD Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
Configure SELinux:
If you use Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node. Otherwise, apply the SELinux label container_file_t to the persistent volume (PV) backing directory on each node.
Create a HostPathProvisioner custom resource.
Create a StorageClass object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the persistent volumes that the hostpath provisioner creates.
You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node.
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
The backing directory must not be located in the filesystem's root directory because the / partition is mounted read-only on Fedora CoreOS (FCOS).
If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node might become non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system.
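For example, assuming a dedicated partition is mounted at /var/hpvolumes (a hypothetical path; substitute your own), you could create the backing directory on each node:
$ sudo mkdir -p /var/hpvolumes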
Create the MachineConfig file. For example:
$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux chcon for hostpath provisioner
            Before=kubelet.service

            [Service]
            ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> (1)

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: hostpath-provisioner.service
1 | Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/). |
Create the MachineConfig object:
$ oc create -f machineconfig.yaml -n <namespace>
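Optionally, verify that the object exists. For example:
$ oc get machineconfig 50-set-selinux-for-hostpath-provisioner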
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
The backing directory must not be located in the filesystem's root directory because the / partition is mounted read-only on Fedora CoreOS (FCOS).
If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node might become non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system.
Apply the SELinux context container_file_t to the PV backing directory on each node. For example:
$ sudo chcon -t container_file_t -R <backing_directory_path>
If you use Fedora CoreOS (FCOS) 8 workers, you must configure SELinux by using a MachineConfig object instead.
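You can confirm that the label was applied by displaying the directory's SELinux context. For example:
$ ls -dZ <backing_directory_path>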
Create the HostPathProvisioner custom resource file. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "<backing_directory_path>" (1)
    useNamingPrefix: false (2)
  workload: (3)
1 | Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/). |
2 | Change this value to true if you want to use the name of the persistent volume claim (PVC) that is bound to the created PV as the prefix of the directory name. |
3 | Optional: You can use the spec.workload field to configure node placement rules for the hostpath provisioner. |
If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context to the backing directory, this can cause Permission denied errors.
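As a minimal sketch of callout 3, assuming that spec.workload accepts a standard Kubernetes nodeSelector, node placement might look like the following:
  workload:
    nodeSelector:
      kubernetes.io/os: linux  # assumption: restrict provisioner pods to Linux nodes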
Create the custom resource in the openshift-cnv namespace:
$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
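As noted earlier, the Operator deploys the provisioner as a DaemonSet on each node when the custom resource is created. You can check that the DaemonSet is running; for example:
$ oc get daemonset -n openshift-cnv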
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.
When using OKD Virtualization with OKD Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block.
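For illustration, a minimal PVC sketch that requests a block mode volume might look like the following (the PVC name and size are hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-pvc  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi  # hypothetical size
  storageClassName: ocs-storagecluster-ceph-rbd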
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner (1)
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete (2)
volumeBindingMode: WaitForFirstConsumer (3)
1 | You can optionally rename the storage class by changing this value. |
2 | The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete. |
3 | The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements. |
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using the WaitForFirstConsumer setting in the volumeBindingMode parameter, the binding and provisioning of the PV is delayed until a pod is created that uses the PVC.
Create the StorageClass object:
$ oc create -f storageclass.yaml
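A PVC can then request storage from this class by name. For example, a minimal sketch (the PVC name and size are hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi  # hypothetical size
  storageClassName: hostpath-provisioner
Because the storage class uses WaitForFirstConsumer, this PVC remains unbound until a pod that uses it is scheduled.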