After you deploy a hosted cluster on OKD Virtualization, you can manage the cluster by completing the following procedures.
You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command line interface to generate a kubeconfig file.
To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.
The secret name formats are as follows:
- kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig (for example, clusters-hypershift-demo-admin-kubeconfig)
- kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password (for example, clusters-hypershift-demo-kubeadmin-password)
The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
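For example, with the secret names shown above (hosted cluster namespace clusters, hosted cluster name hypershift-demo), you can retrieve and decode both secrets as follows. The kubeconfig data key is stated above; the password data key is an assumption based on common convention, so verify it against your secret:

# decode the kubeconfig field into a file
$ oc get secret -n clusters clusters-hypershift-demo-admin-kubeconfig \
-o jsonpath='{.data.kubeconfig}' | base64 -d > hypershift-demo.kubeconfig

# decode the kubeadmin password (the "password" data key is an assumption)
$ oc get secret -n clusters clusters-hypershift-demo-kubeadmin-password \
-o jsonpath='{.data.password}' | base64 -d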
To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps:
1. Generate the kubeconfig file by entering the following command:
$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
2. After you save the kubeconfig file, you can access the hosted cluster by entering the following example command:
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
If you do not provide any advanced storage configuration, the default storage class is used for the KubeVirt virtual machine (VM) images, the KubeVirt Container Storage Interface (CSI) mapping, and the etcd volumes.
The following table lists the capabilities that the infrastructure must provide to support persistent storage in a hosted cluster:
Infrastructure CSI provider | Hosted cluster CSI provider | Hosted cluster capabilities | Notes |
---|---|---|---|
Any RWX | KubeVirt CSI | Basic: RWO | Recommended |
Any RWX | Red Hat OpenShift Data Foundation external mode | Red Hat OpenShift Data Foundation feature set | |
Any RWX | Red Hat OpenShift Data Foundation internal mode | Red Hat OpenShift Data Foundation feature set | Do not use |
KubeVirt CSI supports mapping an infrastructure storage class that is capable of ReadWriteMany (RWX) access. You can map the infrastructure storage class to a hosted storage class during cluster creation.
To map the infrastructure storage class to the hosted storage class, use the --infra-storage-class-mapping argument by running the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> (6)
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Replace <infrastructure_storage_class> with the infrastructure storage class name and <hosted_storage_class> with the hosted cluster storage class name. You can use the --infra-storage-class-mapping argument multiple times within the hcp create cluster command. |
After you create the hosted cluster, the infrastructure storage class is visible within the hosted cluster. When you create a Persistent Volume Claim (PVC) within the hosted cluster that uses one of those storage classes, KubeVirt CSI provisions that volume by using the infrastructure storage class mapping that you configured during cluster creation.
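For example, a minimal PVC sketch that requests a volume from a mapped hosted storage class might look as follows. The PVC name and size are placeholders, and the RWO access mode matches the basic capability that KubeVirt CSI provides in the hosted cluster:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce        # KubeVirt CSI provisions RWO volumes in the hosted cluster
  resources:
    requests:
      storage: 10Gi        # placeholder size
  storageClassName: <hosted_storage_class>
EOF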
KubeVirt CSI supports mapping only an infrastructure storage class that is capable of RWX access.
The following table shows how volume and access mode capabilities map to KubeVirt CSI storage classes:
Infrastructure CSI capability | Hosted cluster CSI capability | VM live migration support | Notes |
---|---|---|---|
RWX: Block or Filesystem | RWO Block or Filesystem; RWX Block | Supported | Use Block mode where possible, because it improves live migration performance. |
RWO Block storage | RWO Block storage | Not supported | Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. |
RWO FileSystem storage | RWO Block or Filesystem | Not supported | Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. Use of the infrastructure FileSystem mode results in degraded hosted Block mode performance. |
You can expose your infrastructure volume snapshot class to the hosted cluster by using KubeVirt CSI.
To map your volume snapshot class to the hosted cluster, use the --infra-volumesnapshot-class-mapping argument when creating a hosted cluster. Run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ (6)
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class> (7)
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. |
7 | Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. |
If you do not use the --infra-volumesnapshot-class-mapping argument, the default volume snapshot class is used.
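As a sketch, after the mapping is in place, a workload in the hosted cluster can reference the hosted volume snapshot class from a VolumeSnapshot resource. The snapshot name and the source PVC name are placeholders:

$ oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot   # placeholder name
spec:
  volumeSnapshotClassName: <hosted_volume_snapshot_class>
  source:
    persistentVolumeClaimName: example-pvc   # an existing PVC in the hosted cluster
EOF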
You can map multiple volume snapshot classes to the hosted cluster by assigning them to a specific group. The infrastructure storage class and the volume snapshot class are compatible with each other only if they belong to the same group.
To map multiple volume snapshot classes to the hosted cluster, use the group option when creating a hosted cluster. Run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ (6)
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> \ (7)
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name>
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. Replace <group_name> with the group name. For example, infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup and infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap. |
7 | Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. For example, infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup and infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap. |
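Putting the example values from the preceding callouts together, a concrete invocation might look like the following. The cluster name, replica count, pull secret path, and resource values are placeholders reused from the earlier examples:

$ hcp create cluster kubevirt \
--name example \
--node-pool-replicas 2 \
--pull-secret /user/name/pullsecret \
--memory 8Gi \
--cores 2 \
--infra-storage-class-mapping=infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup \
--infra-storage-class-mapping=infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap \
--infra-volumesnapshot-class-mapping=infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup \
--infra-volumesnapshot-class-mapping=infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap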
At cluster creation time, you can configure the storage class that is used to host the KubeVirt VM root volumes by using the --root-volume-storage-class argument.
To set a custom storage class and volume size for KubeVirt VMs, run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--root-volume-storage-class <root_volume_storage_class> \ (6)
--root-volume-size <volume_size> (7)
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Specify a name of the storage class to host the KubeVirt VM root volumes, for example, ocs-storagecluster-ceph-rbd. |
7 | Specify the volume size, for example, 64. |
As a result, the hosted cluster is created with its VMs hosted on PVCs that use the specified storage class and volume size.
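To confirm the result, you can list the VM root-volume PVCs and their storage classes. This sketch assumes the hosted control plane namespace follows the <hosted_cluster_namespace>-<name> pattern used in the secret names above, for example clusters-hypershift-demo:

# list PVCs with their storage class and size (namespace name is an assumption)
$ oc get pvc -n clusters-hypershift-demo \
-o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,SIZE:.spec.resources.requests.storage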
You can use KubeVirt VM image caching to optimize both cluster startup time and storage usage. KubeVirt VM image caching supports the use of a storage class that is capable of smart cloning and the ReadWriteMany access mode. For more information about smart cloning, see Cloning a data volume using smart-cloning.
Image caching works as follows:
1. The VM image is imported to a PVC that is associated with the hosted cluster.
2. A unique clone of that PVC is created for every KubeVirt VM that is added as a worker node to the cluster.
Image caching reduces VM startup time by requiring only a single image import. It can further reduce overall cluster storage usage when the storage class supports copy-on-write cloning.
To enable image caching during cluster creation, use the --root-volume-cache-strategy=PVC argument by running the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--root-volume-cache-strategy=PVC (6)
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Specify a strategy for image caching, for example, PVC. |
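To verify the caching behavior, you can inspect the PVCs in the hosted control plane namespace (namespace name assumed as above). With the PVC strategy, you would expect one imported image PVC plus a clone for each worker VM:

# namespace name is an assumption based on the naming convention above
$ oc get pvc -n clusters-hypershift-demo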
At cluster creation time, you can configure the storage class that is used to host etcd data by using the --etcd-storage-class argument.
To configure a storage class for etcd, run the following command:
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ (1)
--node-pool-replicas <worker_node_count> \ (2)
--pull-secret <path_to_pull_secret> \ (3)
--memory <memory> \ (4)
--cores <cpu> \ (5)
--etcd-storage-class=<etcd_storage_class_name> (6)
1 | Specify the name of your hosted cluster, for instance, example. |
2 | Specify the worker count, for example, 2. |
3 | Specify the path to your pull secret, for example, /user/name/pullsecret. |
4 | Specify a value for memory, for example, 8Gi. |
5 | Specify a value for CPU, for example, 2. |
6 | Specify the etcd storage class name, for example, lvm-storageclass. If you do not provide an --etcd-storage-class argument, the default storage class is used. |
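To verify the configuration, you can check which storage class the etcd PVCs use in the hosted control plane namespace (namespace name assumed as above; the etcd name filter is illustrative):

# namespace name is an assumption based on the naming convention above
$ oc get pvc -n clusters-hypershift-demo | grep etcd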