The control plane, which is composed of control plane machines, manages the OKD cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator (CVO), the Machine Config Operator, and a set of individual Operators.
Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads.
By default, there are two MCPs created by the cluster when it is installed: master and worker. Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP updates.
For worker nodes, you can create additional MCPs, or custom pools, to manage nodes with custom use cases that extend outside of the default node types. Custom MCPs for the control plane nodes are not supported.
Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO.
A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra, it is managed by the infra custom pool, not the worker pool.
It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster.
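As an illustration only (the pool name and labels below follow common conventions and are not the only valid choices), a custom infra MCP selects nodes by the infra role label and pulls in machine configs that target either the worker or the infra role, so the pool inherits the worker configuration:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      # Apply machine configs labeled for either the worker or the infra role,
      # so this pool inherits the worker pool configuration.
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, infra]
  nodeSelector:
    matchLabels:
      # Manage any node that carries the infra node role label.
      node-role.kubernetes.io/infra: ""
```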
Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions.
The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes.
There might be situations where the configuration on a node does not fully match what the currently applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.
OKD assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types.
The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation.
The OKD version must match between control plane hosts and node hosts. For example, in a 4.15 cluster, all control plane hosts must be 4.15 and all nodes must be 4.15.
Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from the previous OKD version to 4.15, some nodes will upgrade to 4.15 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible.
The kubelet service must not be newer than kube-apiserver, and can be up to two minor versions older depending on whether your OKD version is odd or even. The table below shows the appropriate version compatibility:

OKD version | Supported kubelet skew
---|---
Odd OKD minor versions [1] | Up to one version older
Even OKD minor versions [2] | Up to two versions older
[1] For example, OKD 4.11, 4.13.
[2] For example, OKD 4.10, 4.12.
In a Kubernetes cluster, worker nodes run and manage the actual workloads requested by Kubernetes users. The worker nodes advertise their capacity and the scheduler, which is a control plane service, determines on which nodes to start pods and containers. The following important services run on each worker node:
CRI-O, which is the container engine.
kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads.
A service proxy, which manages communication for pods across workers.
The runC or crun low-level container runtime, which creates and runs containers.
For information about how to enable crun instead of the default runC, see the documentation for creating a ContainerRuntimeConfig CR.
In OKD, compute machine sets control the compute machines, which are assigned the worker machine role. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OKD has the capacity to support multiple machine types, the machines with the worker role are classed as compute machines. In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OKD, different types of compute machines, such as infrastructure machines, might be used by default.
Compute machine sets are groupings of compute machine resources under the machine-api namespace. Compute machine sets are configurations that are designed to start new compute machines on a specific cloud provider.
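As a rough sketch only (the names are hypothetical and the provider-specific fields are omitted, because they differ per cloud provider), a compute machine set declares a desired number of replicas and a machine template:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-worker-a               # hypothetical machine set name
  namespace: openshift-machine-api
spec:
  replicas: 2                          # desired number of compute machines
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: example-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: example-worker-a
    spec:
      providerSpec: {}                 # provider-specific machine configuration goes here
```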
In a Kubernetes cluster, the master nodes run services that are required to control the Kubernetes cluster. In OKD, the control plane is composed of control plane machines that have the master machine role. They contain more than just the Kubernetes services for managing the OKD cluster.
For most OKD clusters, control plane machines are defined by a series of standalone machine API resources. For supported cloud provider and OKD version combinations, control planes can be managed with control plane machine sets. Extra controls apply to control plane machines to prevent you from deleting all of the control plane machines and breaking your cluster.
Exactly three control plane nodes must be used for all production deployments.
Services that fall under the Kubernetes category on the control plane include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler.
Component | Description
---|---
Kubernetes API server | The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster.
etcd | etcd stores the persistent control plane state while other components watch etcd for changes to bring themselves into the specified state.
Kubernetes controller manager | The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time.
Kubernetes scheduler | The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod.
There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server.
Component | Description
---|---
OpenShift API server | The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator.
OpenShift controller manager | The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator.
OpenShift OAuth API server | The OpenShift OAuth API server validates and configures the data to authenticate to OKD, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator.
OpenShift OAuth server | Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator.
Some of these services on the control plane machines run as systemd services, while others run as static pods.
Systemd services are appropriate for services that must always come up on a particular system shortly after it starts. For control plane machines, these include sshd, which allows remote login, as well as services such as:
The CRI-O container engine (crio), which runs and manages the containers. OKD 4.15 uses CRI-O instead of the Docker Container Engine.
Kubelet (kubelet), which accepts requests for managing containers on the machine from control plane services.
CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers.
The installer-* and revision-pruner-* control plane pods must run with root permissions because they write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following namespaces:
openshift-etcd
openshift-kube-apiserver
openshift-kube-controller-manager
openshift-kube-scheduler
Operators are among the most important components of OKD. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run.
Operators integrate with Kubernetes APIs and CLI tools such as the kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state.
Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file.
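For example, cluster-wide proxy settings are declared on a cluster-scoped API object that an Operator reconciles, rather than in a configuration file on each host. The following is a minimal sketch; the proxy URLs and exclusion list are placeholder values:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster                               # the single cluster-wide proxy object
spec:
  httpProxy: http://proxy.example.com:3128    # placeholder egress proxy
  httpsProxy: http://proxy.example.com:3128   # placeholder egress proxy
  noProxy: .internal.example.com              # placeholder exclusion list
```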
Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Components that are added to the control plane by using Operators include critical networking and credential services.
While both follow similar Operator concepts and goals, Operators in OKD are managed by two different systems, depending on their purpose:
Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions.
Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications.
In OKD, all cluster functions are divided into a series of default cluster Operators. Cluster Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system.
Cluster Operators are represented by a ClusterOperator object, which cluster administrators can view in the OKD web console from the Administration → Cluster Settings page. Each cluster Operator provides a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions.
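For example, each ClusterOperator object reports standard conditions that summarize the health of the component it manages. The following abbreviated object is illustrative; the Operator name and condition values vary by cluster:

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: kube-apiserver        # one of the default cluster Operators
status:
  conditions:
  - type: Available           # the operand is functional
    status: "True"
  - type: Progressing         # an update or change is being rolled out
    status: "False"
  - type: Degraded            # the Operator hit an error it cannot resolve on its own
    status: "False"
```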
Operator Lifecycle Manager (OLM) and OperatorHub are default components in OKD that help manage Kubernetes-native applications as Operators. Together they provide the system for discovering, installing, and managing the optional add-on Operators available on the cluster.
Using OperatorHub in the OKD web console, cluster administrators and authorized users can select Operators to install from catalogs of Operators. After installing an Operator from OperatorHub, it can be made available globally or in specific namespaces to run in user applications.
Default catalog sources are available that include Red Hat Operators, certified Operators, and community Operators. Cluster administrators can also add their own custom catalog sources, which can contain a custom set of Operators.
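For illustration, installing an add-on Operator is expressed as a Subscription object that points at a catalog source. The following is a minimal sketch; the package and channel names are hypothetical:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator               # hypothetical Operator package
  namespace: openshift-operators       # makes the Operator available cluster-wide
spec:
  channel: stable                      # hypothetical update channel
  name: example-operator               # package name as listed in the catalog
  source: community-operators          # one of the default catalog sources
  sourceNamespace: openshift-marketplace
```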
Developers can use the Operator SDK to help author custom Operators that take advantage of OLM features, as well. Their Operator can then be bundled and added to a custom catalog source, which can be added to a cluster and made available to users.
OLM does not manage the cluster Operators that comprise the OKD architecture.
For more details on running add-on Operators in OKD, see the Operators guide sections on Operator Lifecycle Manager (OLM) and OperatorHub.
For more details on the Operator SDK, see Developing Operators.
The platform Operator type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Operator Lifecycle Manager (OLM) introduces a new type of Operator called platform Operators. A platform Operator is an OLM-based Operator that can be installed during or after an OKD cluster’s Day 0 operations and participates in the cluster’s lifecycle. As a cluster administrator, you can use platform Operators to further customize your OKD installation to meet your requirements and use cases.
Using the existing cluster capabilities feature in OKD, cluster administrators can already disable a subset of Cluster Version Operator (CVO)-based components considered non-essential to the initial payload prior to cluster installation. Platform Operators iterate on this model by providing additional customization options. Through the platform Operator mechanism, which relies on resources from the RukPak component, OLM-based Operators can now be installed at cluster installation time and can block cluster rollout if the Operator fails to install successfully.
In OKD 4.15, this Technology Preview release focuses on the basic platform Operator mechanism and builds a foundation for expanding the concept in upcoming releases. You can use the cluster-wide PlatformOperator API to configure Operators before or after cluster creation on clusters that have enabled the TechPreviewNoUpgrade feature set.
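Because the feature is a Technology Preview, the API is subject to change; the following PlatformOperator object is only an illustrative sketch, and the package name is an assumption:

```yaml
apiVersion: platform.openshift.io/v1alpha1
kind: PlatformOperator
metadata:
  name: example-platform-operator      # hypothetical name
spec:
  package:
    name: example-operator-package     # hypothetical package from a supported catalog
```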
OKD 4.15 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Fedora CoreOS (FCOS) on cluster nodes, OKD provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades.
OKD employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include:
The machine-config-controller, which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates.
The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to the configuration defined by the machine config, as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OKD and FCOS updates together.
The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster.
The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OKD configuration.
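For illustration, a machine config targets a pool through its role label and carries an Ignition payload. The following is a minimal sketch; the object name, file path, and contents are placeholder values:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker    # applies to the worker pool
  name: 99-worker-example-file                        # hypothetical name
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example.conf                       # placeholder file to write on each node
        mode: 0644
        contents:
          source: data:,example%20setting%0A          # URL-encoded file contents
```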
When you perform node management operations, you create or modify a KubeletConfig custom resource (CR).
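For example, a KubeletConfig CR selects a machine config pool and overrides kubelet settings for the nodes in that pool. The following is a minimal sketch; the pool label and values are assumptions for this example:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                   # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled          # assumes the target MCP carries this label
  kubeletConfig:
    maxPods: 500                       # example kubelet setting override
```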
When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect. To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool.
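As a minimal sketch, pausing looks like the following fragment of a machine config pool; while paused is true, the MCO does not roll out new machine configs to the pool or reboot its nodes:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker        # pool whose automatic updates and reboots you want to pause
spec:
  paused: true        # set back to false to resume updates
```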
There might be situations where the configuration on a node does not fully match what the currently applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.
For more information about detecting configuration drift, see Understanding configuration drift detection.
For information about preventing the control plane machines from rebooting after the Machine Config Operator makes changes to the machine configuration, see Disabling Machine Config Operator from automatically rebooting.
etcd is a consistent, distributed key-value store that holds small amounts of data that can fit entirely in memory. Although etcd is a core component of many projects, it is the primary data store for Kubernetes, which is the standard system for container orchestration.
By using etcd, you can benefit in several ways:
Maintain consistent uptime for your cloud-native applications, and keep them working even if individual servers fail
Store and replicate all cluster states for Kubernetes
Distribute configuration data to provide redundancy and resiliency for the configuration of nodes
To ensure a reliable approach to cluster configuration and management, etcd uses the etcd Operator. The Operator simplifies the use of etcd on a Kubernetes container platform like OKD. With the etcd Operator, you can create or delete etcd members, resize clusters, perform backups, and upgrade etcd.
The etcd Operator observes, analyzes, and acts:
It observes the cluster state by using the Kubernetes API.
It analyzes differences between the current state and the state that you want.
It fixes the differences through the etcd cluster management APIs, the Kubernetes API, or both.
etcd holds the cluster state, which is constantly updated. This state is continuously persisted, which leads to a high number of small changes at high frequency. As a result, it is critical to back the etcd cluster member with fast, low-latency I/O. For more information about best practices for etcd, see "Recommended etcd practices".
You can use hosted control planes for OKD to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications.
Hosted control planes is available by using the multicluster engine for Kubernetes Operator version 2.0 or later on the following platforms:
Bare metal by using the Agent provider
OpenShift Virtualization, as a Generally Available feature in connected environments and a Technology Preview feature in disconnected environments
Amazon Web Services (AWS), as a Technology Preview feature
IBM Z, as a Technology Preview feature
IBM Power, as a Technology Preview feature
OKD is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run.
The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster’s control plane, machine management APIs, and other components that contribute to the state of a cluster.
Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload.
With hosted control planes for OKD, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits.
The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure.
With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable.
Because the control planes consist of pods that are launched on OKD, control planes start quickly. The same principles apply to control planes and workloads, such as monitoring, logging, and auto-scaling.
From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant’s cloud provider account, isolating usage to the tenant.
From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity.
When you use hosted control planes for OKD, it is important to understand its key concepts and the personas that are involved.
Hosted cluster: An OKD cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane.
Hosted cluster infrastructure: Network, compute, and storage resources that exist in the tenant or end-user cloud account.
Hosted control plane: An OKD control plane that runs on the management cluster, which is exposed by the API endpoint of a hosted cluster. The components of a control plane include etcd, the Kubernetes API server, the Kubernetes controller manager, and a VPN.
Hosting cluster: See management cluster.
Managed cluster: A cluster that the hub cluster manages. This term is specific to the cluster lifecycle that the multicluster engine for Kubernetes Operator manages in Red Hat Advanced Cluster Management. A managed cluster is not the same thing as a management cluster. For more information, see Managed cluster.
Management cluster: An OKD cluster where the HyperShift Operator is deployed and where the control planes for hosted clusters are hosted. The management cluster is synonymous with the hosting cluster.
Management cluster infrastructure: Network, compute, and storage resources of the management cluster.
Node pool: A resource that contains the compute nodes. The control plane contains node pools. The compute nodes run applications and workloads.
Cluster instance administrator: Users who assume this role are the equivalent of administrators in standalone OKD. This user has the cluster-admin role in the provisioned cluster, but might not have power over when or how the cluster is updated or configured. This user might have read-only access to see some configuration projected into the cluster.
Cluster instance user: Users who assume this role are the equivalent of developers in standalone OKD. This user does not have a view into OperatorHub or machines.
Cluster service consumer: Users who assume this role can request control planes and worker nodes, drive updates, or modify externalized configurations. Typically, this user does not manage or access cloud credentials or infrastructure encryption keys. The cluster service consumer persona can request hosted clusters and interact with node pools. Users who assume this role have RBAC to create, read, update, or delete hosted clusters and node pools within a logical boundary.
Cluster service provider: Users who assume this role typically have the cluster-admin role on the management cluster and have RBAC to monitor and own the availability of the HyperShift Operator as well as the control planes for the tenant’s hosted clusters. The cluster service provider persona is responsible for several activities, including the following examples:
Owning service-level objects for control plane availability, uptime, and stability
Configuring the cloud account for the management cluster to host control planes
Configuring the user-provisioned infrastructure, which includes the host awareness of available compute resources
With each major, minor, or patch version release of OKD, two components of hosted control planes are released:
The HyperShift Operator
The hcp command-line interface (CLI)
The HyperShift Operator manages the lifecycle of hosted clusters that are represented by the HostedCluster API resources. The HyperShift Operator is released with each OKD release. The HyperShift Operator creates the supported-versions config map in the hypershift namespace. The config map contains the supported hosted cluster versions.
You can host different versions of control planes on the same management cluster.
Example supported-versions config map object:

apiVersion: v1
data:
  supported-versions: '{"versions":["4.15"]}'
kind: ConfigMap
metadata:
  labels:
    hypershift.openshift.io/supported-versions: "true"
  name: supported-versions
  namespace: hypershift
You can use the hcp CLI to create hosted clusters.
You can use the hypershift.openshift.io API resources, such as HostedCluster and NodePool, to create and manage OKD clusters at scale. A HostedCluster resource contains the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes that is attached to a HostedCluster resource.
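As a rough sketch only (the API version, field names, and values shown are assumptions that can change between releases), a NodePool references its HostedCluster and declares a replica count:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-nodepool               # hypothetical name
  namespace: clusters                  # hypothetical namespace that holds the HostedCluster
spec:
  clusterName: example-hosted-cluster  # the HostedCluster this pool attaches to
  replicas: 2                          # desired number of worker nodes
  platform:
    type: AWS                          # platform of the hosted cluster
```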
The API version policy generally aligns with the policy for Kubernetes API versioning.