The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OKD Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement rules for some components after installing OKD Virtualization, but no virtual machines can be present when you configure node placement rules for workloads.
You can use node placement rules for the following tasks:
Deploy virtual machines only on nodes intended for virtualization workloads.
Deploy Operators only on infrastructure nodes.
Maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:
nodeSelector
Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
tolerations
Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
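The labels and taints that these rules match against are set on nodes with standard `oc` commands. A minimal sketch, assuming a hypothetical node named `worker-1`:

```shell
# Label the node so that a nodeSelector or affinity rule can match it
oc label node worker-1 example.io/example-infra-key=example-infra-value

# Taint the node so that only pods with a matching toleration are scheduled on it
oc adm taint nodes worker-1 key=virtualization:NoSchedule
```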
You can apply node placement rules by editing a Subscription, HyperConverged, or HostPathProvisioner object using the command line.
The oc CLI tool is installed.
You are logged in with cluster administrator permissions.
Edit the object in your default editor by running the following command:
$ oc edit <resource_type> <resource_name> -n {CNVNamespace}
Save the file to apply the changes.
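To confirm that the change took effect, you can check where the affected pods are running; the NODE column of the wide output shows pod placement:

```shell
# Verify pod placement after the change; the NODE column shows where each pod runs
oc get pods -n kubevirt-hyperconverged -o wide
```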
You can specify node placement rules for an OKD Virtualization component by editing a Subscription, HyperConverged, or HostPathProvisioner object.
To specify the nodes where OLM deploys the OKD Virtualization Operators, edit the Subscription object during OKD Virtualization installation.
Currently, you cannot configure node placement rules for the Subscription object by using the web console.
The Subscription object does not support the affinity node placement rule.
Subscription object with nodeSelector rule

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: kubevirt-hyperconverged
spec:
  source: community-operators
  sourceNamespace: openshift-marketplace
  name: community-kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.16.4
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value (1)
(1) OLM deploys the OKD Virtualization Operators on nodes labeled example.io/example-infra-key = example-infra-value.
Subscription object with tolerations rule

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: kubevirt-hyperconverged
spec:
  source: community-operators
  sourceNamespace: openshift-marketplace
  name: community-kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.16.4
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization" (1)
      effect: "NoSchedule"
(1) OLM deploys the OKD Virtualization Operators on nodes that have the key = virtualization:NoSchedule taint. Only pods with a matching toleration are scheduled on these nodes.
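The toleration in the example matches a taint that you apply to the reserved nodes yourself. A sketch, with the node name as a placeholder:

```shell
# Apply the taint that the toleration in the example matches
oc adm taint nodes <node_name> key=virtualization:NoSchedule

# To remove the taint later, append a hyphen to the taint specification
oc adm taint nodes <node_name> key=virtualization:NoSchedule-
```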
To specify the nodes where OKD Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) file that you create during OKD Virtualization installation.
HyperConverged object with nodeSelector rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value (1)
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value (2)
(1) Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
(2) Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
HyperConverged object with affinity rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value (1)
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key (2)
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8 (3)
(1) Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
(2) Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
(3) Nodes that have more than eight CPUs are preferred for workloads, but if such nodes are not available, pods are still scheduled.
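The Gt operator in the preference rule compares the label value as an integer, so the nodes must carry a numeric value under that label. A sketch, with the node name as a placeholder:

```shell
# Label a node with its CPU count so the Gt preference rule can compare it.
# The Gt operator parses the label value as an integer.
oc label node <node_name> example.io/num-cpus=16
```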
HyperConverged object with tolerations rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  workloads:
    nodePlacement:
      tolerations: (1)
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
(1) Nodes reserved for OKD Virtualization components have the key = virtualization:NoSchedule taint applied. Only pods with a matching toleration are scheduled on reserved nodes.
You can edit the HostPathProvisioner object directly or by using the web console.
You must schedule the hostpath provisioner and the OKD Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run, and you cannot run virtual machines.
After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM.
You can configure node placement rules by specifying nodeSelector, affinity, or tolerations for the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.
HostPathProvisioner object with nodeSelector rule

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value (1)
(1) Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
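To confirm that the hostpath provisioner pods were scheduled alongside the virtualization workloads, you can check their node placement; filtering by the pod name is an assumption, as the exact pod labels can vary by version:

```shell
# Confirm that hostpath provisioner pods run on the labeled workload nodes
oc get pods -A -o wide | grep hostpath-provisioner
```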