OKD installation overview

The OKD installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains, or to deploy a cluster on infrastructure that you prepare and maintain.

These two basic types of OKD clusters are frequently called installer-provisioned infrastructure clusters and user-provisioned infrastructure clusters.

Both types of clusters have the following characteristics:

  • Highly available infrastructure with no single points of failure is available by default

  • Administrators maintain control over what updates are applied and when

You use the same installation program to deploy both types of clusters. The main assets generated by the installation program are the Ignition config files for the bootstrap, master, and worker machines. With these three configurations and correctly configured infrastructure, you can start an OKD cluster.
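For example, running the installation program with the create ignition-configs target produces exactly these assets. The following invocation is illustrative; the asset directory path is a placeholder:

    $ openshift-install create ignition-configs --dir=<installation_directory>
    $ ls <installation_directory>
    Example output
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign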

The OKD installation program uses a set of targets and dependencies to manage cluster installation. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel. The ultimate target is a running cluster. By meeting dependencies instead of running commands, the installation program is able to recognize and use existing components instead of running the commands to create them again.

The following diagram shows a subset of the installation targets and dependencies:

Figure 1. OKD installation targets and dependencies

After installation, each cluster machine uses Fedora CoreOS (FCOS) as the operating system. FCOS is the immutable container host version of Fedora and features a Fedora kernel with SELinux enabled by default. It includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.

Every control plane machine in an OKD 4.9 cluster must use FCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. Operating system updates are delivered as an Atomic OSTree repository that is embedded in a container image that is rolled out across the cluster by an Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OKD to manage the operating system like it manages any other application on the cluster, via in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams.
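For instance, you can confirm the rpm-ostree deployment state on any cluster node from a debug pod; the node name is a placeholder, and the output depends on your release:

    $ oc debug node/<node_name> -- chroot /host rpm-ostree status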

If you use FCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
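As an illustration of the Machine Config Operator at work, you can list the machine configs that it renders and the pools that it rolls them out to:

    $ oc get machineconfig
    $ oc get machineconfigpool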

Installation process

When you install an OKD cluster, you download the installation program from the OKD releases page on GitHub.

In OKD 4.9, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type.

  • For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster.

  • If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.

You use three sets of files during installation: an installation configuration file that is named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.

It is possible to modify the Kubernetes manifests and the Ignition config files that control the underlying FCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying the Kubernetes manifests and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support.

The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster.

The installation configuration files are all pruned when you run the installation program, so be sure to back up all configuration files that you want to use again.
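A sketch of that transformation chain and the recommended backup, where each target consumes the assets of the previous one; the directory path and backup file name are placeholders:

    $ openshift-install create install-config --dir=<installation_directory>
    $ cp <installation_directory>/install-config.yaml install-config.yaml.backup
    $ openshift-install create manifests --dir=<installation_directory>
    $ openshift-install create ignition-configs --dir=<installation_directory>

Because the create manifests and create ignition-configs targets consume install-config.yaml, the copy in the second command preserves it for later reuse.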

You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.

The installation process with installer-provisioned infrastructure

The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster.

You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network.
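The following truncated install-config.yaml excerpt is illustrative only; all field values are placeholders, the platform stanza assumes AWS, and required fields such as pullSecret and sshKey are omitted for brevity:

    $ cat <installation_directory>/install-config.yaml
    Example output
    apiVersion: v1
    baseDomain: example.com
    controlPlane:
      name: master
      replicas: 3
    compute:
    - name: worker
      replicas: 3
    metadata:
      name: mycluster
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      aws:
        region: us-east-1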

If possible, use installer-provisioned infrastructure to avoid having to provision and maintain the cluster infrastructure yourself. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure.

With installer-provisioned infrastructure clusters, OKD manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.

The installation process with user-provisioned infrastructure

You can also install OKD on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.

If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself, including:

  • The underlying infrastructure for the control plane and compute machines that make up the cluster

  • Load balancers

  • Cluster networking, including the DNS records and required subnets (see the example after this list)

  • Storage for the cluster infrastructure and applications
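
For example, a user-provisioned installation requires DNS records for the external API, the internal API, and the Ingress wildcard; you can spot-check them from the provisioning host, where the cluster name and base domain are placeholders:

    $ dig +short api.<cluster_name>.<base_domain>
    $ dig +short api-int.<cluster_name>.<base_domain>
    $ dig +short test.apps.<cluster_name>.<base_domain>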

If your cluster uses user-provisioned infrastructure, you have the option of adding Fedora compute machines to your cluster.

Installation process details

Because each machine in the cluster requires information about the cluster when it is provisioned, OKD uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:

Figure 2. Creating the bootstrap, control plane, and compute machines

After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually.

  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates, as shown in the example after this list. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
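
A minimal sketch of the manual CSR approval mentioned in the first note; the CSR name is a placeholder taken from the output of the first command:

    $ oc get csr
    $ oc adm certificate approve <csr_name>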

Bootstrapping a cluster involves the following steps:

  1. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)

  2. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.

  3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)

  4. The temporary control plane schedules the production control plane to the production control plane machines.

  5. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes.

  6. The temporary control plane shuts down and passes control to the production control plane.

  7. The bootstrap machine injects OKD components into the production control plane.

  8. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)

  9. The control plane sets up the compute nodes.

  10. The control plane installs additional services in the form of a set of Operators.

The result of this bootstrapping process is a running OKD cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments.
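If you provision the infrastructure yourself, you can track these stages with the installation program itself; the directory path is a placeholder:

    $ openshift-install wait-for bootstrap-complete --dir=<installation_directory> --log-level=info
    $ openshift-install wait-for install-complete --dir=<installation_directory>

After the first command returns, it is safe to shut down the bootstrap machine and remove it from your load balancer configuration.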

Verifying node state after installation

The OKD installation completes when the following installation health checks are successful:

  • The provisioning host can access the OKD web console.

  • All control plane nodes are ready.

  • All cluster Operators are available.

After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. It can take some time before all worker nodes report as READY. For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes.

After your installation completes, you can continue to monitor the condition of the nodes in your cluster using the following steps.

Prerequisites
  • The installation program completed successfully in the terminal.

Procedure
  1. Show the status of all worker nodes:

    $ oc get nodes
    Example output
    NAME                           STATUS   ROLES    AGE   VERSION
    example-compute1.example.com   Ready    worker   13m   v1.21.6+bb8d50a
    example-compute2.example.com   Ready    worker   13m   v1.21.6+bb8d50a
    example-compute4.example.com   Ready    worker   14m   v1.21.6+bb8d50a
    example-control1.example.com   Ready    master   52m   v1.21.6+bb8d50a
    example-control2.example.com   Ready    master   55m   v1.21.6+bb8d50a
    example-control3.example.com   Ready    master   55m   v1.21.6+bb8d50a
  2. Show the phase of all worker machine nodes:

    $ oc get machines -A
    Example output
    NAMESPACE               NAME                           PHASE         TYPE   REGION   ZONE   AGE
    openshift-machine-api   example-zbbt6-master-0         Running                              95m
    openshift-machine-api   example-zbbt6-master-1         Running                              95m
    openshift-machine-api   example-zbbt6-master-2         Running                              95m
    openshift-machine-api   example-zbbt6-worker-0-25bhp   Running                              49m
    openshift-machine-api   example-zbbt6-worker-0-8b4c2   Running                              49m
    openshift-machine-api   example-zbbt6-worker-0-jkbqt   Running                              49m
    openshift-machine-api   example-zbbt6-worker-0-qrl5b   Running                              49m
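  3. Show the status of all cluster Operators:

    $ oc get clusteroperators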

Installation scope

The scope of the OKD installation program is intentionally narrow. It is designed for simplicity and to ensure success. You can complete many more configuration tasks after installation completes.

Supported platforms for OKD clusters

In OKD 4.9, you can install a cluster that uses installer-provisioned infrastructure on the following platforms:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

  • OpenStack versions 16.1 and 16.2

    • The latest OKD release supports both the latest OpenStack long-life release and intermediate release. For complete OpenStack release compatibility, see the OKD on OpenStack support matrix.

  • oVirt

  • VMware vSphere

  • VMware Cloud (VMC) on AWS

  • Bare metal

For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.

After installation, the following changes are not supported:

  • Mixing cloud provider platforms

  • Mixing cloud provider components, such as using a persistent storage framework from a differing platform than what the cluster is installed on

In OKD 4.9, you can install a cluster that uses user-provisioned infrastructure on the following platforms:

  • AWS

  • Azure

  • Azure Stack Hub

  • GCP

  • OpenStack versions 16.1 and 16.2

  • oVirt

  • VMware vSphere

  • VMware Cloud on AWS

  • Bare metal

  • IBM Z or LinuxONE

  • IBM Power

Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access, place your cluster behind a proxy, or perform a restricted network installation. In a restricted network installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a restricted network installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access.
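As a sketch of that mirroring step, assuming the oc CLI and a reachable mirror registry, you can mirror a release with oc adm release mirror; the registry host, repository, release tag, and pull secret path are placeholders:

    $ oc adm release mirror -a <pull_secret_file> \
        --from=quay.io/openshift/okd:<release_tag> \
        --to=<mirror_registry>/<repository> \
        --to-release-image=<mirror_registry>/<repository>:<release_tag>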

The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms.
