The control plane, which is composed of control plane machines (also known as the master machines), manages the OKD cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator, the Machine Config Operator, and a set of individual Operators.
Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads.
By default, there are two MCPs created by the cluster when it is installed: `master` and `worker`. Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP upgrades. You can create additional MCPs, or custom pools, to manage nodes that have custom use cases that extend outside of the default node types.
Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO.
A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like `worker,infra`, it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool.
It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an `infra` role label to a worker node so it has the `worker,infra` dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the `worker` label from a node and apply the `infra` label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster.
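As an illustration of the pattern described above, the following is a minimal sketch of a custom infra machine config pool; the pool name and the assumption that the target nodes carry the `node-role.kubernetes.io/infra` label are illustrative:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  # Select machine configs targeted at either the worker or the infra role,
  # so this custom pool inherits the worker configuration as described above.
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, infra]
  # Manage only nodes that carry the infra role label.
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
```

Because the `machineConfigSelector` matches both the `worker` and `infra` roles, any machine config applied to the worker pool also flows to this custom pool, while configs targeted only at `infra` affect just these nodes.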
The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes.
OKD assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types.
The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation.
The OKD version must match between control plane host and node host. For example, in a 4.9 cluster, all control plane hosts must be 4.9 and all nodes must be 4.9.
Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from OKD 4.8 to 4.9, some nodes will upgrade to 4.9 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible.
The `kubelet` service must not be newer than `kube-apiserver`, and can be up to two minor versions older depending on whether your OKD version is odd or even. The table below shows the appropriate version compatibility:
OKD version | Supported kubelet skew |
---|---|
Odd OKD minor versions [1] | Up to one version older |
Even OKD minor versions [2] | Up to two versions older |

[1] For example, OKD 4.5, 4.7, 4.9.

[2] For example, OKD 4.6, 4.8, 4.10.
In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes users run and are managed. The worker nodes advertise their capacity and the scheduler, which is part of the master services, determines on which nodes to start containers and pods. Important services run on each worker node, including CRI-O, which is the container engine, Kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads, and a service proxy, which manages communication for pods across workers.
In OKD, machine sets control the worker machines. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OKD has the capacity to support multiple machine types, the worker machines are classed as compute machines. In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OKD, different types of compute machines, such as infrastructure machines, might be used by default.
Machine sets are groupings of machine resources under the `machine-api` namespace.
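For orientation, the following is a trimmed sketch of a machine set manifest; the name, replica count, and labels are hypothetical, and the provider-specific `providerSpec` contents (which differ per cloud platform) are omitted:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-worker-a                                  # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 2                                             # desired number of machines
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: example-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: example-worker-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: ""              # resulting nodes join the worker pool
      providerSpec: {}                                    # cloud-provider-specific settings omitted
```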
In a Kubernetes cluster, the control plane nodes (also known as the master nodes) run services that are required to control the Kubernetes cluster. In OKD, the control plane machines are the control plane. They contain more than just the Kubernetes services for managing the OKD cluster. Because all of the machines with the control plane role are control plane machines, the terms master and control plane are used interchangeably to describe them. Instead of being grouped into a machine set, control plane machines are defined by a series of standalone machine API resources. Extra controls apply to control plane machines to prevent you from deleting all control plane machines and breaking your cluster.
Exactly three control plane nodes must be used for all production deployments.
Services that fall under the Kubernetes category on the master include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler.
Component | Description |
---|---|
Kubernetes API server | The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster. |
etcd | etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the specified state. |
Kubernetes controller manager | The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time. |
Kubernetes scheduler | The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod. |
There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server.
Component | Description |
---|---|
OpenShift API server | The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator. |
OpenShift controller manager | The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator. |
OpenShift OAuth API server | The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator. |
OpenShift OAuth server | Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator. |
Some of these services on the control plane machines run as systemd services, while others run as static pods.
Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as:
- The CRI-O container engine (crio), which runs and manages the containers. OKD 4.6 uses CRI-O instead of the Docker Container Engine.
- Kubelet (kubelet), which accepts requests for managing containers on the machine from master services.
CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers.
The `installer-*` and `revision-pruner-*` control plane pods must run with root permissions because they write to the `/etc/kubernetes` directory, which is owned by the root user. These pods are in the following namespaces:
- `openshift-etcd`
- `openshift-kube-apiserver`
- `openshift-kube-controller-manager`
- `openshift-kube-scheduler`
In OKD, Operators are the preferred method of packaging, deploying, and managing services on the control plane. They also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as `kubectl` and `oc` commands. They provide the means of watching over an application, performing health checks, managing over-the-air updates, and ensuring that the applications remain in your specified state.
Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Operators are among the most important components of OKD 4.6. Components that are added to the control plane by using Operators include critical networking and credential services.
The Operator that manages the other Operators in an OKD cluster is the Cluster Version Operator.
OKD 4.6 uses different classes of Operators to perform cluster operations and run services on the cluster for your applications to use.
In OKD 4.6, all cluster functions are divided into a series of platform Operators. Platform Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system.
Each Operator provides you with a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions. Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file.
The Operator Lifecycle Manager (OLM) component manages Operators that are available for use in applications. It does not manage the Operators that comprise OKD. OLM is a framework that manages Kubernetes-native applications as Operators. Instead of managing Kubernetes manifests, it manages Kubernetes Operators. OLM manages two classes of Operators: Red Hat Operators and certified Operators.
Some Red Hat Operators drive the cluster functions, like the scheduler and problem detectors. Others are provided for you to manage yourself and use in your applications, like etcd. OKD also offers certified Operators, which the community builds and maintains. These certified Operators provide an API layer to traditional applications so you can manage the application through Kubernetes constructs.
The OpenShift Update Service (OSUS) provides over-the-air updates to OKD, including Fedora CoreOS (FCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components.
The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the release image for that update to update your cluster. The release artifacts are hosted in Quay as container images.
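To make the update flow concrete, below is a minimal sketch of the cluster-scoped ClusterVersion resource that the CVO reconciles; the channel and version values are hypothetical placeholders, not recommendations:

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4        # hypothetical update channel
  desiredUpdate:
    version: 4.6.0         # hypothetical target version taken from the update graph
```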
To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available.
The OpenShift Update Service displays all recommended updates for your current cluster. If an upgrade path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release.
Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available.
Only upgrading to a newer version is supported. Reverting or rolling back your cluster to a previous version is not supported. If your update fails, contact Red Hat support.
During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes as specified by the `maxUnavailable` field on the machine configuration pool and marks them as unavailable. By default, this value is set to `1`. The MCO then applies the new configuration and reboots the machine.
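A minimal sketch of how this field appears on a machine config pool, assuming you want to allow two worker nodes to update at a time (the value is illustrative):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  maxUnavailable: 2   # up to two nodes in this pool may be cordoned and updated at once
```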
If you use Fedora machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first.
With the specification for the new version applied to the old kubelet, the Fedora machine cannot return to the `Ready` state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service.
The OpenShift Update Service is composed of an Operator and one or more application instances.
OKD 4.6 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Fedora CoreOS (FCOS) on cluster nodes, OKD provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades.
OKD employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include:
- The `machine-config-controller`, which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates.
- The `machine-config-daemon` daemon set, which runs on each node in the cluster and updates a machine to the configuration defined by machine configs, as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OKD and FCOS updates together.
- The `machine-config-server` daemon set, which provides the Ignition config files to control plane nodes as they join the cluster.
The machine configuration is a subset of the Ignition configuration. The `machine-config-daemon` reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OKD configuration.
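As a concrete illustration of a machine configuration that the daemon applies, here is a minimal sketch of a MachineConfig that writes a single file to worker nodes; the object name, file path, and file contents are hypothetical:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file                        # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker    # targets the worker pool
spec:
  config:
    ignition:
      version: 3.2.0                                  # Ignition spec version; must match your OKD release
    storage:
      files:
        - path: /etc/example.conf                     # hypothetical file written on each node
          mode: 0644
          contents:
            source: data:,example%20setting%0A        # URL-encoded file contents
```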
When you perform node management operations, you create or modify a `KubeletConfig` custom resource (CR).
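For example, a minimal sketch of a `KubeletConfig` CR that raises the maximum pods per node; the CR name, the `custom-kubelet: enabled` pool label, and the `maxPods` value are assumptions for illustration, and the target machine config pool must carry the matching label:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                    # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled           # hypothetical label added to the target machine config pool
  kubeletConfig:
    maxPods: 500                        # example kubelet setting to apply to nodes in that pool
```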
When changes are made to a machine configuration, the Machine Config Operator automatically reboots all corresponding nodes in order for the changes to take effect. To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the `spec.paused` field to `true` in the relevant machine config pool.
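A minimal sketch of pausing the worker pool in this way, assuming the `spec.paused` field described above; set it back to `false` once you are ready for the nodes to apply the pending changes:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  paused: true   # hold machine config rollouts; no node reboots until set back to false
```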
For information on preventing the control plane machines from rebooting after the Machine Config Operator makes changes to the machine config, see Disabling Machine Config Operator from automatically rebooting.