You can update an OKD 4 cluster with a single operation by using the web console or the OpenShift CLI (oc).
About the OpenShift Update Service: For clusters with internet access, Red Hat provides over-the-air updates by using an OKD update service as a hosted service located behind public APIs.
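As a quick way to see what the update service is recommending, you can inspect the cluster's view of available updates with oc. This is a sketch that assumes a connected cluster and a logged-in session with cluster-admin access:

```shell
# Show the current version, channel, and the updates the update
# service currently recommends for this cluster.
oc adm upgrade

# Inspect the raw ClusterVersion resource, including available updates.
oc get clusterversion version -o yaml
```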
Upgrade channels and releases: With upgrade channels, you can choose an upgrade strategy. Upgrade channels are specific to a minor version of OKD. Upgrade channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OKD always installs that minor version. For more information, see the following:
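The channel selection is stored on the ClusterVersion resource, so you can view or change it with oc. A minimal sketch, assuming cluster-admin access and using stable-4.11 as an example channel name:

```shell
# Print the channel the cluster currently follows.
oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'

# Switch to a different upgrade channel; "stable-4.11" is an example
# name, substitute a channel that matches your target minor version.
oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4.11"}}'
```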
Preparing to perform an EUS-to-EUS update: Due to fundamental Kubernetes design, all OKD updates between minor versions must be serialized. You must update from OKD 4.9 to 4.10, and then to 4.11. You cannot update from OKD 4.8 to 4.10 directly. However, if you want to update between two Extended Update Support (EUS) versions, you can do so by incurring only a single reboot of non-control plane hosts. For more information, see the following:
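The single reboot of non-control plane hosts comes from pausing the worker machine config pools for the duration of the two consecutive updates. A minimal sketch of the pause and unpause steps, assuming the default worker pool:

```shell
# Pause the worker machine config pool so worker nodes do not reboot
# for each intermediate update.
oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'

# After the cluster reaches the target EUS version, unpause the pool so
# workers pick up the accumulated machine configs with a single reboot.
oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'
```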
Updating a cluster using the web console: You can update an OKD cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions.
Updating a cluster using the CLI: You can update an OKD cluster within a minor version by using the OpenShift CLI (oc). You can use the same instructions for updating a cluster between minor versions.
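The CLI flow can be sketched as follows, assuming a logged-in oc session with cluster-admin access:

```shell
# Review the updates available in the cluster's current channel.
oc adm upgrade

# Update to the latest recommended version in the channel; you can
# instead pin a specific version with --to=<version>.
oc adm upgrade --to-latest=true

# Watch the ClusterVersion status until the update completes.
oc get clusterversion -w
```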
Performing a canary rollout update: By controlling the rollout of an update to the worker nodes, you can ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. This is referred to as a canary update. Alternatively, you might want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. You can perform the following procedures:
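One common way to carve out a canary subset is a custom machine config pool that inherits the worker configuration but targets only nodes you label. A sketch, using workerpool-canary as an example pool name and node label:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: workerpool-canary
spec:
  # Inherit the worker machine configs in addition to any
  # canary-specific ones.
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - workerpool-canary
  # Only nodes carrying this label join the canary pool.
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/workerpool-canary: ""
```

Pausing this pool holds back the update on the canary nodes; unpausing it lets them update while the main worker pool stays paused, or vice versa, depending on which subset you want to update first.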
Updating a cluster that includes Fedora compute machines: If your cluster contains Fedora machines, you must perform additional steps to update those machines. You can perform the following procedures:
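Because these machines are not updated by the Machine Config Operator alone, the additional steps typically involve running the upgrade playbook from the openshift-ansible tooling against those hosts. A hedged sketch; the inventory path is a placeholder and the playbook location may differ in your environment:

```shell
# Run the upgrade playbook against the inventory that lists your
# Fedora compute hosts (paths here are placeholders, not verified values).
ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml
```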
Updating a restricted network cluster: If your mirror host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment. You can then bring that host or removable media across that gap. If the local container registry and the cluster are both connected to the mirror host, you can directly push the release images to the local registry.
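The mirroring step itself is typically performed with oc adm release mirror. A sketch, where the environment variables are placeholders for your pull secret, the source release image, and your local registry details:

```shell
# Mirror the release image payload to a local mirror registry.
# All variables below are placeholders; substitute your own values.
oc adm release mirror \
  -a ${LOCAL_SECRET_JSON} \
  --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${RELEASE_VERSION} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${RELEASE_VERSION}
```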
Updating hardware on vSphere: You must ensure that your nodes running in vSphere are running on the hardware version supported by OKD. Currently, hardware version 13 or later is supported for vSphere virtual machines in a cluster. For more information, see the following:
Updating a cluster that includes the Special Resource Operator: When updating a cluster that includes the Special Resource Operator (SRO), it is important to consider whether the new kernel module version is compatible with the kernel modules currently loaded by the SRO. You can run a preflight check to confirm if the SRO will be able to upgrade the kernel modules.
Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. This version is still fully supported, but support will be removed in a future version of OKD. Hardware version 15 is now the default for vSphere virtual machines in OKD.
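You can check the current virtual hardware version of a node VM from outside the cluster with VMware's govc CLI. A sketch; the VM inventory path is an example:

```shell
# Print VM details, including the virtual hardware (HW) version; the
# inventory path below is an example, substitute your own VM path.
govc vm.info /dc1/vm/okd-worker-0
```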