Installing OKD with Installer Provisioned Infrastructure (IPI) requires:

  1. One provisioner node with RHEL 8.1 installed.

  2. Three Control Plane nodes.

  3. At least two worker nodes.

  4. Baseboard Management Controller (BMC) access to each node.

  5. At least one network:

    1. One required routable network

    2. One optional provisioning network

    3. One optional management network

Before installing OKD with IPI, ensure the hardware environment meets the following requirements.

Node requirements

IPI installation involves a number of hardware node requirements:

  • CPU architecture: All nodes must use x86_64 CPU architecture.

  • Similar nodes: Nodes must have an identical configuration per role. That is, control plane nodes must be the same brand and model with the same CPU, RAM and storage configuration. Worker nodes must be identical.

  • Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OKD cluster node. You may use IPMI, Redfish, or a proprietary protocol.

  • Latest generation: Nodes must be of the most recent generation. IPI installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and Fedora CoreOS (FCOS) for the control plane and worker nodes.

  • Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.

  • Provisioner node: IPI installation requires one provisioner node.

  • Control plane: IPI installation requires three control plane nodes for high availability.

  • Worker nodes: A typical production cluster will have many worker nodes. IPI installation in a high availability environment requires at least two worker nodes in an initial cluster.

  • Network interfaces: Each node must have at least one 10 Gb network interface for the routable baremetal network. Each node must also have one 10 Gb network interface for the provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Network interface names must follow the same naming convention across all nodes. For example, the first NIC name on a node, such as eth0 or eno1, must be the same name on all of the other nodes. The same principle applies to the remaining NICs on each node.

  • Unified Extensible Firmware Interface (UEFI): Installation requires UEFI boot on all OKD nodes when using IPv6 addressing on the provisioning network. In addition, the UEFI device PXE settings must be set to use the IPv6 protocol on the provisioning network NIC. Omitting the provisioning network removes this requirement.

Network requirements

IPI installation involves several network requirements by default. First, IPI installation involves a non-routable provisioning network for provisioning the OS on each bare metal node and a routable baremetal network. Since IPI installation deploys ironic-dnsmasq, the networks should have no other DHCP servers running on the same broadcast domain. Network administrators must reserve IP addresses for each node in the OKD cluster.

Network Time Protocol (NTP)

Each OKD node in the cluster must have access to an NTP server.
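As an illustration, on nodes that use chrony for time synchronization, the NTP source is set in /etc/chrony.conf. The server hostname below is a placeholder; substitute your own NTP server:

```
# /etc/chrony.conf (excerpt) -- hostname is a placeholder
server ntp.example.com iburst
```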

Configuring NICs

OKD deploys with two networks:

  • provisioning: The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OKD cluster. When deploying using the provisioning network, the first NIC on each node, such as eth0 or eno1, must interface with the provisioning network.

  • baremetal: The baremetal network is a routable network. When deploying using the provisioning network, the second NIC on each node, such as eth1 or eno2, must interface with the baremetal network. When deploying without a provisioning network, you can use any NIC on each node to interface with the baremetal network.

Each NIC should be on a separate VLAN corresponding to the appropriate network.

Configuring the DNS server

Clients access the OKD cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.

<cluster-name>.<domain-name>

For example:

test-cluster.example.com
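As a sketch, the API and wildcard ingress records for the example cluster above could be expressed in a BIND zone file for the example.com zone as follows. The record layout is illustrative; <ip> stands for the reserved addresses described in the next section:

```
; Zone file excerpt for test-cluster.example.com (illustrative)
api.test-cluster      IN A    <ip>   ; API endpoint
*.apps.test-cluster   IN A    <ip>   ; wildcard ingress endpoint
```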
Reserving IP Addresses for Nodes with the DHCP Server

For the baremetal network, a network administrator must reserve a number of IP addresses, including:

  1. Two virtual IP addresses.

    • One IP address for the API endpoint

    • One IP address for the wildcard ingress endpoint

  2. One IP Address for the provisioner node.

  3. One IP address for each Control Plane (Master) node.

  4. One IP address for each worker node.
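The arithmetic above can be sketched as a quick calculation. For a minimal high-availability cluster with three control plane nodes and two worker nodes, it yields eight reserved addresses. This helper is purely illustrative and is not part of any OKD tooling:

```python
def reserved_ip_count(control_plane_nodes: int, worker_nodes: int) -> int:
    """Return the number of baremetal-network IPs to reserve:
    two virtual IPs (API endpoint and wildcard ingress endpoint),
    one for the provisioner node, and one per cluster node."""
    virtual_ips = 2  # API endpoint + wildcard ingress endpoint
    provisioner = 1
    return virtual_ips + provisioner + control_plane_nodes + worker_nodes

# Minimal HA cluster: three control plane nodes and two workers.
print(reserved_ip_count(3, 2))  # → 8
```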

The following table provides an example of hostnames for each node in the OKD cluster.

Usage               Hostname                                      IP

API                 api.<cluster-name>.<domain>                   <ip>

Ingress LB (apps)   *.apps.<cluster-name>.<domain>                <ip>

Provisioner node    provisioner.<cluster-name>.<domain>           <ip>

Master-0            openshift-master-0.<cluster-name>.<domain>    <ip>

Master-1            openshift-master-1.<cluster-name>.<domain>    <ip>

Master-2            openshift-master-2.<cluster-name>.<domain>    <ip>

Worker-0            openshift-worker-0.<cluster-name>.<domain>    <ip>

Worker-1            openshift-worker-1.<cluster-name>.<domain>    <ip>

Worker-n            openshift-worker-n.<cluster-name>.<domain>    <ip>
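A sketch of how such reservations might look in an ISC dhcpd configuration serving the baremetal network (MAC addresses, IPs, and the host block shown are placeholders; remember that no other DHCP server may run on the provisioning network's broadcast domain):

```
# /etc/dhcp/dhcpd.conf (excerpt) -- values are placeholders
host openshift-master-0 {
  hardware ethernet 52:54:00:aa:bb:01;
  fixed-address 192.0.2.10;
  option host-name "openshift-master-0.test-cluster.example.com";
}
```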

Additional requirements with no provisioning network

All IPI installations require a baremetal network. The baremetal network is a routable network used for external network access to the outside world. In addition to the IP addresses supplied to the OKD cluster nodes, installations without a provisioning network require the following:

  • Setting an available IP address from the baremetal network to the bootstrapProvisioningIP configuration setting within the install-config.yaml configuration file.

  • Setting an available IP address from the baremetal network to the provisioningHostIP configuration setting within the install-config.yaml configuration file.

  • Deploying the OKD cluster using Redfish Virtual Media/iDRAC Virtual Media.

Configuring additional IP addresses for bootstrapProvisioningIP and provisioningHostIP is not required when using a provisioning network.
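An illustrative install-config.yaml excerpt with these two settings; the addresses are placeholders drawn from the baremetal network, and the surrounding structure assumes the baremetal platform section of the configuration file:

```yaml
# install-config.yaml (excerpt) -- addresses are placeholders
platform:
  baremetal:
    bootstrapProvisioningIP: 192.0.2.50
    provisioningHostIP: 192.0.2.51
```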

Configuring nodes

Configuring nodes when using the provisioning network

Each node in the cluster requires the following configuration for proper installation.

A mismatch between nodes will cause an installation failure.

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:

NIC    Network        VLAN

NIC1   provisioning   <provisioning-vlan>

NIC2   baremetal      <baremetal-vlan>

NIC1 is a non-routable network (provisioning) that is only used for the installation of the OKD cluster.

The RHEL 8.x installation process on the provisioner node might vary. To install RHEL 8.x using a local Satellite server or a PXE server, PXE-enable NIC2.

Configure the provisioner node's boot order as follows:

PXE                                               Boot order

NIC1 PXE-enabled (provisioning network)           1

NIC2 baremetal network (PXE-enabled is optional)  2

Ensure PXE is disabled on all other NICs.

Configure the control plane and worker nodes as follows:

PXE                                       Boot order

NIC1 PXE-enabled (provisioning network)   1

Configuring nodes without the provisioning network

The installation process requires one NIC:

NIC    Network     VLAN

NICx   baremetal   <baremetal-vlan>

NICx is a routable network (baremetal) that is used for the installation of the OKD cluster, and routable to the internet.

Out-of-band management

Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.

Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OKD 4 installation.

The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network are valid options.
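Before installation it is worth confirming that each BMC responds from the provisioner node. A small helper like the following could generate ipmitool power-status invocations for an inventory of out-of-band management addresses. The ipmitool flags shown are standard (-E reads the password from the IPMI_PASSWORD environment variable), but the inventory, user name, and the helper itself are illustrative:

```python
def bmc_check_commands(bmc_hosts, user="admin"):
    """Build ipmitool 'power status' commands for each BMC address.

    Passwords are deliberately omitted from the command line; the -E
    flag tells ipmitool to read them from the IPMI_PASSWORD variable.
    """
    return [
        f"ipmitool -I lanplus -H {host} -U {user} -E power status"
        for host in bmc_hosts
    ]

# Example inventory (placeholder out-of-band management IPs).
for cmd in bmc_check_commands(["192.0.2.101", "192.0.2.102"]):
    print(cmd)
```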

Required data for installation

Prior to the installation of the OKD cluster, gather the following information from all cluster nodes:

  • Out-of-band management IP

    • Examples

      • Dell (iDRAC) IP

      • HP (iLO) IP

When using the provisioning network
  • NIC1 (provisioning) MAC address

  • NIC2 (baremetal) MAC address

When omitting the provisioning network
  • NICx (baremetal) MAC address

Validation checklist for nodes

When using the provisioning network
  • NIC1 VLAN is configured for the provisioning network.

  • NIC2 VLAN is configured for the baremetal network.

  • NIC1 is PXE-enabled on the provisioner, Control Plane (master), and worker nodes.

  • PXE has been disabled on all other NICs.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created. (optional)

  • The required data for installation has been gathered.

When omitting the provisioning network
  • NICx VLAN is configured for the baremetal network.

  • Control plane and worker nodes are configured.

  • All nodes are accessible via out-of-band management.

  • A separate management network has been created. (optional)

  • The required data for installation has been gathered.