Installer-provisioned installation of OKD requires:

  1. One provisioner node with Fedora CoreOS (FCOS) installed.

  2. Three control plane nodes.

  3. Baseboard Management Controller (BMC) access to each node.

  4. At least two networks:

    1. One required routable network

    2. One required network for provisioning nodes; and

    3. One optional management network.

Before starting an installer-provisioned installation of OKD, ensure the hardware environment meets the following requirements.

Node requirements

Installer-provisioned installation involves a number of hardware node requirements:

  • CPU architecture: All nodes must use x86_64 CPU architecture.

  • Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.

  • Intelligent Platform Management Interface (IPMI): Installer-provisioned installation requires IPMI enabled on each node.

  • Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, Fedora CoreOS (FCOS) ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support FCOS on the provisioner node as well as on the control plane and worker nodes.

  • Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.

  • Provisioner node: Installer-provisioned installation requires one provisioner node.

  • Control plane: Installer-provisioned installation requires three control plane nodes for high availability.

  • Worker nodes: While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.

  • Network interfaces: Each node must have at least one 10 Gb network interface for the routable baremetal network. Each node must also have one 10 Gb network interface for the provisioning network when deploying with the provisioning network, which is the default configuration. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as eth0 or eno1, must be the same on all of the other nodes. The same principle applies to the remaining NICs on each node.

Firmware requirements for installing with virtual media

The installer for installer-provisioned OKD clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists supported firmware for installer-provisioned OKD clusters deployed with Redfish virtual media.

Table 1. Firmware compatibility for Redfish virtual media

Hardware   Model             Management   Firmware Versions
HP         10th Generation   iLO5         N/A
HP         9th Generation    iLO4         N/A
Dell       14th Generation   iDRAC 9      v4.20.20.20 - 04.40.00.00
Dell       13th Generation   iDRAC 8      v2.75.75.75+

Refer to the hardware documentation for the nodes or contact the hardware vendor for information on updating the firmware.

There are no known firmware limitations for HP servers.

For Dell servers, ensure the OKD cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plug-in defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

The installer will not initiate installation on a node if the node firmware is below the versions listed in the preceding table when installing with virtual media.
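
To confirm a node's current BMC firmware version before installing, you can query the BMC's Redfish API. The following is a minimal sketch assuming a Dell iDRAC that exposes its manager resource at the common iDRAC.Embedded.1 path and that jq is available; the address, credentials, and resource path are placeholders and vary by vendor and model.

# Query the BMC over Redfish and print the manager firmware version (resource path varies by vendor).
curl -sk -u <bmc-user>:<bmc-password> \
  https://<out-of-band-ip>/redfish/v1/Managers/iDRAC.Embedded.1 | jq '.FirmwareVersion'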

Network requirements

Installer-provisioned installation of OKD involves several network requirements. First, installer-provisioned installation uses an optional non-routable provisioning network for provisioning the operating system on each bare metal node. Second, it requires a routable baremetal network.

Configuring NICs

OKD deploys with two networks:

  • provisioning: The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OKD cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot.

    In OKD 4.3, when deploying using the provisioning network, the first NIC on each node, such as eth0 or eno1, must interface with the provisioning network.

    In OKD 4.4 and later releases, you can specify the provisioning network NIC with the provisioningNetworkInterface configuration setting.

  • baremetal: The baremetal network is a routable network.

    In OKD 4.3, when deploying using the provisioning network, the second NIC on each node, such as eth1 or eno2, must interface with the baremetal network.

    In OKD 4.4 and later releases, you can use any NIC order to interface with the baremetal network, provided it is the same NIC order across worker and control plane nodes and not the NIC specified in the provisioningNetworkInterface configuration setting for the provisioning network.

Ensure that all cluster nodes use the same NIC ordering. NICs must use the same naming convention, such as eth0 or eno1, on every node.

When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.
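
In OKD 4.4 and later, the provisioning NIC is selected with the provisioningNetworkInterface setting in install-config.yaml. The following fragment is a minimal sketch, assuming a NIC named eno1 and placeholder virtual IP addresses; adjust the interface name and addresses to match your environment.

platform:
  baremetal:
    # NIC on the cluster nodes that is connected to the provisioning network (OKD 4.4+).
    provisioningNetworkInterface: eno1
    # Virtual IP addresses on the routable baremetal network.
    apiVIP: <ip>
    ingressVIP: <ip>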

Configuring the DNS server

Clients access the OKD cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.

<cluster-name>.<domain-name>

For example:

test-cluster.example.com
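
For example, a fragment of a BIND-style zone for this subdomain might look like the following; this is only a sketch, the ns1 host and addresses are placeholders, and other DNS implementations will differ.

$ORIGIN test-cluster.example.com.
$TTL 300
@          IN NS  ns1.test-cluster.example.com.
ns1        IN A   <ip>
api        IN A   <ip>
*.apps     IN A   <ip>
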
Dynamic Host Configuration Protocol (DHCP) requirements

By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed, which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file.
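
For example, a minimal sketch of the relevant install-config.yaml fragment might look like the following; the accepted value casing can vary by release, so check the configuration reference for your version.

platform:
  baremetal:
    # Keep ironic-dnsmasq from serving DHCP because an existing DHCP server owns the provisioning network.
    provisioningNetwork: unmanaged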

Network administrators must reserve IP addresses for each node in the OKD cluster for the baremetal network on an external DHCP server.

Reserving IP addresses for nodes with the DHCP server

For the baremetal network, a network administrator must reserve a number of IP addresses, including:

  1. Three virtual IP addresses

    • One IP address for the API endpoint

    • One IP address for the wildcard ingress endpoint

    • One IP address for the name server

  2. One IP address for the provisioner node.

  3. One IP address for each control plane (master) node.

  4. One IP address for each worker node, if applicable.

The following table provides an example of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.

Usage               Host Name                                     IP
API                 api.<cluster-name>.<domain>                   <ip>
Ingress LB (apps)   *.apps.<cluster-name>.<domain>                <ip>
Nameserver          ns1.<cluster-name>.<domain>                   <ip>
Provisioner node    provisioner.<cluster-name>.<domain>           <ip>
Master-0            openshift-master-0.<cluster-name>.<domain>    <ip>
Master-1            openshift-master-1.<cluster-name>.<domain>    <ip>
Master-2            openshift-master-2.<cluster-name>.<domain>    <ip>
Worker-0            openshift-worker-0.<cluster-name>.<domain>    <ip>
Worker-1            openshift-worker-1.<cluster-name>.<domain>    <ip>
Worker-n            openshift-worker-n.<cluster-name>.<domain>    <ip>
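
On an ISC DHCP server, these reservations can be expressed as host entries keyed on each node's baremetal NIC MAC address. The following dhcpd.conf fragment is only a sketch; the MAC addresses, IP addresses, and host names are placeholders for your own values.

# One static reservation per cluster node on the baremetal network.
host openshift-master-0 {
  hardware ethernet <baremetal-nic-mac>;
  fixed-address <ip>;
  option host-name "openshift-master-0.<cluster-name>.<domain>";
}
host openshift-worker-0 {
  hardware ethernet <baremetal-nic-mac>;
  fixed-address <ip>;
  option host-name "openshift-worker-0.<cluster-name>.<domain>";
}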

Network Time Protocol (NTP)

Each OKD node in the cluster must have access to an NTP server. OKD nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.

Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.

You may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
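
With chrony, for example, this can be sketched as follows: the control plane nodes sync to an upstream source when one is reachable and serve time to the baremetal subnet, while the worker nodes point at the control plane hosts. The upstream server, subnet, and host names below are placeholders, and on OKD nodes such configuration is normally delivered through MachineConfig resources rather than edited in place.

# chrony.conf fragment for control plane nodes acting as NTP servers.
server <upstream-ntp-server> iburst
allow <baremetal-subnet-cidr>
local stratum 10

# chrony.conf fragment for worker nodes retrieving time from the control plane.
server openshift-master-0.<cluster-name>.<domain> iburst
server openshift-master-1.<cluster-name>.<domain> iburst
server openshift-master-2.<cluster-name>.<domain> iburst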

Configuring nodes

Configuring nodes when using the provisioning network

Each node in the cluster requires the following configuration for proper installation.

A mismatch between nodes will cause an installation failure.

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:

NIC    Network        VLAN
NIC1   provisioning   <provisioning-vlan>
NIC2   baremetal      <baremetal-vlan>

NIC1 is a non-routable network (provisioning) that is only used for the installation of the OKD cluster.

The Fedora CoreOS (FCOS) installation process on the provisioner node might vary. To install FCOS using a local Satellite server or a PXE server, PXE-enable NIC2.
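
If you install FCOS on the provisioner node from live media instead of PXE, a minimal sketch with coreos-installer might look like the following; the target device and Ignition config are placeholders. If instead you PXE-boot the provisioner from a local Satellite or PXE server, use the boot order below.

# Run from the FCOS live environment to install to disk with a placeholder Ignition config.
sudo coreos-installer install /dev/sda --ignition-file provisioner.ign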

PXE                                                Boot order
NIC1 PXE-enabled provisioning network              1
NIC2 baremetal network. PXE-enabled is optional.   2

Ensure PXE is disabled on all other NICs.

Configure the control plane and worker nodes as follows:

PXE                                                Boot order
NIC1 PXE-enabled (provisioning network)            1

Out-of-band management

Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.

Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OKD 4 installation.

The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options.

Required data for installation

Prior to the installation of the OKD cluster, gather the following information from all cluster nodes:

  • Out-of-band management IP

    • Examples

      • Dell (iDRAC) IP

      • HP (iLO) IP

      • Fujitsu (iRMC) IP

When using the provisioning network
  • NIC1 (provisioning) MAC address

  • NIC2 (baremetal) MAC address

When omitting the provisioning network
  • NICx (baremetal) MAC address
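
This data is entered in the hosts section of install-config.yaml. The following is a minimal sketch of a single host entry, assuming IPMI out-of-band management and a deployment that uses the provisioning network; the name, credentials, address, and MAC value are placeholders, and virtual media deployments use a different BMC address scheme (for example, a Redfish virtual media URL).

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          # Out-of-band management endpoint and credentials gathered above.
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
        # MAC address of NIC1 (provisioning) when using the provisioning network.
        bootMACAddress: <nic1-mac-address>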

Validation checklist for nodes

When using the provisioning network
  • NIC1 VLAN is configured for the provisioning network. (optional)

  • NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes when using a provisioning network.

  • NIC2 VLAN is configured for the baremetal network.

  • PXE has been disabled on all other NICs.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created. (optional)

  • Required data for installation has been gathered.

When omitting the provisioning network
  • NICx VLAN is configured for the baremetal network.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created. (optional)

  • Required data for installation has been gathered.