Before you get started with hosted control planes for OKD, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:
To ensure high availability and proper workload deployment. For example, you can set the node-role.kubernetes.io/infra label to avoid having the control-plane workload count toward your OKD subscription.
To ensure that control plane workloads are separate from other workloads in the management cluster.
Do not use the management cluster for your workload. Workloads must not run on nodes where control planes run.
Proper node labeling is a prerequisite to deploying hosted control planes.
As a management cluster administrator, you use the following labels and taints in management cluster nodes to schedule a control plane workload:
hypershift.openshift.io/control-plane: true
: Use this label and taint to dedicate a node to running hosted control plane workloads. By setting a value of true, you avoid sharing the control plane nodes with other components, for example, the infrastructure components of the management cluster or any other mistakenly deployed workload. An example of applying this label and taint is shown after this list.
hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}
: Use this label and taint when you want to dedicate a node to a single hosted cluster.
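For example, the following commands are one way to apply the control-plane label and taint to a node. The node name worker-1a and the NoSchedule effect are illustrative assumptions, not values that this documentation mandates:
$ oc label node/worker-1a hypershift.openshift.io/control-plane=true
$ oc adm taint nodes worker-1a hypershift.openshift.io/control-plane=true:NoSchedule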
Apply the following labels on the nodes that host control-plane pods:
node-role.kubernetes.io/infra
: Use this label to avoid having the control-plane workload count toward your subscription. An example command follows this list.
topology.kubernetes.io/zone
: Use this label on the management cluster nodes to deploy highly available clusters across failure domains. The zone might be a location, rack name, or the hostname of the node where the zone is set. For example, a management cluster has the following nodes: worker-1a, worker-1b, worker-2a, and worker-2b. The worker-1a and worker-1b nodes are in rack1, and the worker-2a and worker-2b nodes are in rack2. To use each rack as an availability zone, enter the following commands:
$ oc label node/worker-1a node/worker-1b topology.kubernetes.io/zone=rack1
$ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
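You can apply the node-role.kubernetes.io/infra label in the same way. For example, the following command labels the example nodes as infrastructure nodes; the node names are the same illustrative names used above, so adjust them to match your management cluster:
$ oc label node/worker-1a node/worker-1b node/worker-2a node/worker-2b node-role.kubernetes.io/infra=""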
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. Pods tolerate the hypershift.openshift.io/control-plane and hypershift.openshift.io/cluster taints. The scheduler prioritizes scheduling those pods onto nodes that are labeled with hypershift.openshift.io/control-plane and hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}.
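As an illustration only, a toleration for the control-plane taint in a hosted control plane pod specification might look like the following sketch. The NoSchedule effect and the exact field values are assumptions for this example, not values confirmed by this documentation:
tolerations:
- key: hypershift.openshift.io/control-plane
  operator: Equal
  value: "true"
  effect: NoSchedule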
For the ControllerAvailabilityPolicy option, use HighlyAvailable, which is the default value that the hosted control planes command-line interface, hcp, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting topology.kubernetes.io/zone as the topology key. Control planes that are not highly available are not supported.
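For reference, the following sketch shows where this option might be set in the HostedCluster resource. The controllerAvailabilityPolicy field name is an assumption based on the option name; verify it against the HostedCluster API version that you use:
spec:
  controllerAvailabilityPolicy: HighlyAvailable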
To enable a hosted cluster to require its pods to be scheduled into infrastructure nodes, set HostedCluster.spec.nodeSelector, as shown in the following example:
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
This way, hosted control planes for each hosted cluster are eligible infrastructure node workloads, and you do not need to entitle the underlying OKD nodes.
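To confirm that the hosted control plane pods are scheduled onto the labeled infrastructure nodes, you can list the pods in the hosted control plane namespace together with their node assignments. The namespace name clusters-example is a placeholder; substitute your hosted control plane namespace:
$ oc get pods -n clusters-example -o wide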
Four built-in priority classes influence the priority and preemption of the hosted cluster pods. Pods in the management cluster are created with one of the following priority classes, listed from highest to lowest priority:
hypershift-operator
: HyperShift Operator pods.
hypershift-etcd
: Pods for etcd.
hypershift-api-critical
: Pods that are required for API calls and resource admission to succeed. These pods include kube-apiserver pods, aggregated API servers, and webhooks.
hypershift-control-plane
: Pods in the control plane that are not API-critical but still need elevated priority, such as the Cluster Version Operator.
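To review how these priority classes are defined on your management cluster, you can list them by name; the exact output depends on your HyperShift version:
$ oc get priorityclass hypershift-operator hypershift-etcd hypershift-api-critical hypershift-control-plane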