
In OKD version 4, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

OKD supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported.

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OKD. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods.

Prerequisites

  • You reviewed details about the OKD installation and update processes.

  • You read the documentation on selecting a cluster installation method and preparing it for users.

  • Completing the installation requires that you upload the Fedora CoreOS (FCOS) OVA to vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible.

  • If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. A connectivity check sketch follows this list.

  • If you use a firewall, you configured it to allow the sites that your cluster requires access to.
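For example, you can check reachability of port 443 from the machine that you run the installation from. This is a minimal sketch; vcenter.example.com and esxi01.example.com are placeholder hostnames for your vCenter and ESXi hosts:

$ curl -k -s -o /dev/null -w '%{http_code}\n' https://vcenter.example.com/
$ timeout 5 bash -c '</dev/tcp/esxi01.example.com/443' && echo "port 443 open"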

VMware vSphere infrastructure requirements

You must install the OKD cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use.

Table 1. Version requirements for vSphere virtual environments
Virtual environment product Required version

VMware virtual hardware

15 or later

vSphere ESXi hosts

7.0.2 or later

vCenter host

7.0.2 or later

Installing a cluster on VMware vSphere versions 7.0.0 and 7.0.1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OKD requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section.

Table 2. Minimum supported vSphere version for VMware components
Component Minimum supported versions Description

Hypervisor

vSphere 7.0.2 and later with virtual hardware version 15

This version is the minimum version that Fedora CoreOS (FCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list.

Storage with in-tree drivers

vSphere 7.0.2 and later

This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OKD.

Optional: Networking (NSX-T)

vSphere 7.0.2 and later

vSphere 7.0.2 is required for OKD. VMware’s NSX Container Plugin (NCP) is certified with OKD 4.6 and NSX-T 3.x+.

You must ensure that the time on your ESXi hosts is synchronized before you install OKD. See Edit Time Configuration for a Host in the VMware documentation.

VMware vSphere CSI Driver Operator requirements

To install the CSI Driver Operator, the following requirements must be met:

  • VMware vSphere version 7.0.2 or later

  • vCenter 7.0.2 or later

  • Virtual machines of hardware version 15 or later

  • No third-party CSI driver already installed in the cluster

If a third-party CSI driver is present in the cluster, OKD does not overwrite it. The presence of a third-party CSI driver prevents OKD from upgrading to OKD 4.13 or later.


Requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.

This section describes the requirements for deploying OKD on user-provisioned infrastructure.

Required machines for cluster installation

The smallest OKD clusters require the following hosts:

Table 3. Minimum required hosts
Hosts Description

One temporary bootstrap machine

The cluster requires the bootstrap machine to deploy the OKD cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.

Three control plane machines

The control plane machines run the Kubernetes and OKD services that form the control plane.

At least two compute machines, which are also known as worker machines.

The workloads requested by OKD users run on the compute machines.

To maintain high availability of your cluster, use separate physical hosts for these cluster machines.

The bootstrap and control plane machines must use Fedora CoreOS (FCOS) as the operating system. However, the compute machines can use Fedora CoreOS (FCOS), Fedora 8.4, or Fedora 8.5.

Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 4. Minimum resource requirements
Machine        Operating System  vCPU [1]  Virtual RAM  Storage  IOPS [2]

Bootstrap      FCOS              4         16 GB        100 GB   300

Control plane  FCOS              4         16 GB        100 GB   300

Compute        FCOS              2         8 GB         100 GB   300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.

  2. OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. A disk benchmark sketch follows these notes.

  3. As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of Fedora 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
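The following is a minimal sketch of one common way to measure fsync latency on a disk intended for etcd, assuming the fio package is installed and /var/lib/etcd-test is a scratch directory on that disk. Check that the reported 99th percentile fdatasync latency is below 10 ms:

$ sudo mkdir -p /var/lib/etcd-test
$ sudo fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-perf
$ sudo rm -rf /var/lib/etcd-test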

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OKD.

Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
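For example, after installation you can list pending CSRs and approve them after you verify that they were issued by your nodes. A minimal sketch, assuming the oc client is configured to access the cluster:

$ oc get csr
$ oc adm certificate approve <csr_name>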

Networking requirements for user-provisioned infrastructure

All the Fedora CoreOS (FCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.

During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.

If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at FCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing FCOS and starting the OKD bootstrap process section for more information about static IP provisioning and advanced networking options.

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

Setting the cluster node hostnames through DHCP

On Fedora CoreOS (FCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or configured by another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.

Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
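The following is a minimal sketch of an ISC dhcpd configuration that pins a persistent IP address and hostname for one control plane node; the subnet, MAC address, IP address, and hostname are placeholders for your environment:

subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.5;
}

host master0 {
  hardware ethernet 00:50:56:aa:bb:01;  # must be within a VMware OUI range
  fixed-address 192.168.1.97;
  option host-name "master0.ocp4.example.com";
}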

Network connectivity requirements

You must configure the network connectivity between machines to allow OKD cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.

This section provides details about the ports that are required.

In connected OKD environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Table 5. Ports used for all-machine to all-machine communications
Protocol   Port          Description

ICMP       N/A           Network reachability tests

TCP        1936          Metrics
           9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
           10250-10259   The default ports that Kubernetes reserves
           10256         openshift-sdn

UDP        4789          VXLAN
           6081          Geneve
           9000-9999     Host level services, including the node exporter on ports 9100-9101.
           500           IPsec IKE packets
           4500          IPsec NAT-T packets

TCP/UDP    30000-32767   Kubernetes node port

ESP        N/A           IPsec Encapsulating Security Payload (ESP)

Table 6. Ports used for all-machine to control plane communications
Protocol   Port   Description

TCP        6443   Kubernetes API

Table 7. Ports used for control plane machine to control plane machine communications
Protocol   Port        Description

TCP        2379-2380   etcd server and peer ports
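As an informal spot check after the machines are running, you can probe a few of these ports from one node to another; a minimal sketch, assuming 192.168.1.97 is the IP address of one of your control plane nodes:

$ timeout 5 bash -c '</dev/tcp/192.168.1.97/6443' && echo "6443 reachable"
$ timeout 5 bash -c '</dev/tcp/192.168.1.97/10250' && echo "10250 reachable"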

Ethernet adaptor hardware address requirements

When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges:

  • 00:05:69:00:00:00 to 00:05:69:FF:FF:FF

  • 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF

  • 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF

  • 00:50:56:00:00:00 to 00:50:56:3F:FF:FF

If a MAC address outside the VMware OUI is used, the cluster installation will not succeed.
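For example, from inside a provisioned VM you can print the MAC address of the primary interface and confirm that its prefix falls within one of the ranges above; a minimal sketch, assuming the interface is named ens192:

$ cat /sys/class/net/ens192/address
00:50:56:ab:cd:ef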

NTP configuration for user-provisioned infrastructure

OKD clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service.

If a DHCP server provides NTP server information, the chrony time service on the Fedora CoreOS (FCOS) machines reads the information and can sync the clock with the NTP servers.


User-provisioned DNS requirements

In OKD deployments, DNS name resolution is required for the following components:

  • The Kubernetes API

  • The OKD application wildcard

  • The bootstrap, control plane, and compute machines

Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.

DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Fedora CoreOS (FCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OKD needs to operate.

It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information.

The following DNS records are required for a user-provisioned OKD cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>..

Table 8. Required DNS records
Component Record Description

Kubernetes API

api.<cluster_name>.<base_domain>.

A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>.

A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.

The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods.

Routes

*.apps.<cluster_name>.<base_domain>.

A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OKD console.

Bootstrap machine

bootstrap.<cluster_name>.<base_domain>.

A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster.

Control plane machines

<master><n>.<cluster_name>.<base_domain>.

DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster.

Compute machines

<worker><n>.<cluster_name>.<base_domain>.

DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster.

In OKD 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration.

You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps.

Example DNS configuration for user-provisioned clusters

This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OKD on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another.

In the examples, the cluster name is ocp4 and the base domain is example.com.

Example DNS A record configuration for a user-provisioned cluster

The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster.

Sample DNS zone database
$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX 10	smtp.example.com.
;
;
ns1.example.com.		IN	A	192.168.1.5
smtp.example.com.		IN	A	192.168.1.5
;
helper.example.com.		IN	A	192.168.1.5
helper.ocp4.example.com.	IN	A	192.168.1.5
;
api.ocp4.example.com.		IN	A	192.168.1.5 (1)
api-int.ocp4.example.com.	IN	A	192.168.1.5 (2)
;
*.apps.ocp4.example.com.	IN	A	192.168.1.5 (3)
;
bootstrap.ocp4.example.com.	IN	A	192.168.1.96 (4)
;
master0.ocp4.example.com.	IN	A	192.168.1.97 (5)
master1.ocp4.example.com.	IN	A	192.168.1.98 (5)
master2.ocp4.example.com.	IN	A	192.168.1.99 (5)
;
worker0.ocp4.example.com.	IN	A	192.168.1.11 (6)
worker1.ocp4.example.com.	IN	A	192.168.1.7 (6)
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer.
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications.
3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

4 Provides name resolution for the bootstrap machine.
5 Provides name resolution for the control plane machines.
6 Provides name resolution for the compute machines.
Example DNS PTR record configuration for a user-provisioned cluster

The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster.

Sample DNS zone database for reverse records
$TTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
5.1.168.192.in-addr.arpa.	IN	PTR	api.ocp4.example.com. (1)
5.1.168.192.in-addr.arpa.	IN	PTR	api-int.ocp4.example.com. (2)
;
96.1.168.192.in-addr.arpa.	IN	PTR	bootstrap.ocp4.example.com. (3)
;
97.1.168.192.in-addr.arpa.	IN	PTR	master0.ocp4.example.com. (4)
98.1.168.192.in-addr.arpa.	IN	PTR	master1.ocp4.example.com. (4)
99.1.168.192.in-addr.arpa.	IN	PTR	master2.ocp4.example.com. (4)
;
11.1.168.192.in-addr.arpa.	IN	PTR	worker0.ocp4.example.com. (5)
7.1.168.192.in-addr.arpa.	IN	PTR	worker1.ocp4.example.com. (5)
;
;EOF
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer.
2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications.
3 Provides reverse DNS resolution for the bootstrap machine.
4 Provides reverse DNS resolution for the control plane machines.
5 Provides reverse DNS resolution for the compute machines.

A PTR record is not required for the OKD application wildcard.

Load balancing requirements for user-provisioned infrastructure

Before you install OKD, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

If you want to deploy the API and application ingress load balancers with a Fedora instance, you must purchase the Fedora subscription separately.

The load balancing infrastructure must meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.

    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.

    Do not configure session persistence for an API load balancer.

    Configure the following ports on both the front and back of the load balancers:

    Table 9. API load balancer

    Port 6443 (internal and external): Kubernetes API server. Back-end machines (pool members): bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.

    Port 22623 (internal only): Machine config server. Back-end machines (pool members): bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration.

  2. Application ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes.

    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

    Configure the following ports on both the front and back of the load balancers:

    Table 10. Application ingress load balancer

    Port 443 (internal and external): HTTPS traffic. Back-end machines (pool members): the machines that run the Ingress Controller pods, compute, or worker, by default.

    Port 80 (internal and external): HTTP traffic. Back-end machines (pool members): the machines that run the Ingress Controller pods, compute, or worker, by default.

    Port 1936 (internal and external): HTTP traffic. Back-end machines (pool members): the worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe.

If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

A working configuration for the Ingress router is required for an OKD cluster. You must configure the Ingress router after the control plane initializes.

Example load balancer configuration for user-provisioned clusters

This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

Sample API and application ingress load balancer configuration
global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
frontend stats
  bind *:1936
  mode            http
  log             global
  maxconn 10
  stats enable
  stats hide-version
  stats refresh 30s
  stats show-node
  stats show-desc Stats for ocp4 cluster (1)
  stats auth admin:ocp4
  stats uri /stats
listen api-server-6443 (2)
  bind *:6443
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup (3)
  server master0 master0.ocp4.example.com:6443 check inter 1s
  server master1 master1.ocp4.example.com:6443 check inter 1s
  server master2 master2.ocp4.example.com:6443 check inter 1s
listen machine-config-server-22623 (4)
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup (3)
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 (5)
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 (6)
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s
1 In the example, the cluster name is ocp4.
2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
3 The bootstrap entries must be in place before the OKD cluster installation and they must be removed after the bootstrap process is complete.
4 Port 22623 handles the machine config server traffic and points to the control plane machines.
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the HAProxy node.

If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.
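For example, on the HAProxy node you might set the SELinux boolean, start the service, and confirm the listeners. This is a minimal sketch of those checks:

$ sudo setsebool -P haproxy_connect_any=1
$ sudo systemctl enable --now haproxy
$ sudo netstat -nltupe | grep -E ':(80|443|6443|22623)[[:space:]]'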

Preparing the user-provisioned infrastructure

Before you install OKD on user-provisioned infrastructure, you must prepare the underlying infrastructure.

This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OKD installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure.

After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section.

Prerequisites
  • You have reviewed the OKD 4.x Tested Integrations page.

  • You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section.

Procedure
  1. If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service.

    1. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node.

    2. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration.

      If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at FCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing FCOS and starting the OKD bootstrap process section for more information about static IP provisioning and advanced networking options.

    3. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations.

      If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup.

  2. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements.

  3. Configure your firewall to enable the ports required for the OKD cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required.

  4. Set up the required DNS infrastructure for your cluster.

    1. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines.

    2. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines.

      See the User-provisioned DNS requirements section for more information about the OKD DNS requirements.

  5. Validate your DNS configuration.

    1. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components.

    2. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components.

      See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps.

  6. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements.

Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized.

Validating DNS resolution for user-provisioned infrastructure

You can validate your DNS configuration before installing OKD on user-provisioned infrastructure.

The validation steps detailed in this section must succeed before you install your cluster.

Prerequisites
  • You have configured the required DNS records for your user-provisioned infrastructure.

Procedure
  1. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components.

    1. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer:

      $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> (1)
      1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name.
      Example output
      api.ocp4.example.com.		0	IN	A	192.168.1.5
    2. Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer:

      $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>
      Example output
      api-int.ocp4.example.com.		0	IN	A	192.168.1.5
    3. Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer:

      $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>
      Example output
      random.apps.ocp4.example.com.		0	IN	A	192.168.1.5

      In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

      You can replace random with another wildcard value. For example, you can query the route to the OKD console:

      $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
      Example output
      console-openshift-console.apps.ocp4.example.com. 0 IN	A 192.168.1.5
    4. Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node:

      $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>
      Example output
      bootstrap.ocp4.example.com.		0	IN	A	192.168.1.96
    5. Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node.

  2. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components.

    1. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API:

      $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5
      Example output
      5.1.168.192.in-addr.arpa. 0	IN	PTR	api-int.ocp4.example.com. (1)
      5.1.168.192.in-addr.arpa. 0	IN	PTR	api.ocp4.example.com. (2)
      
      1 Provides the record name for the Kubernetes internal API.
      2 Provides the record name for the Kubernetes API.

      A PTR record is not required for the OKD application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer.

    2. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node:

      $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96
      Example output
      96.1.168.192.in-addr.arpa. 0	IN	PTR	bootstrap.ocp4.example.com.
    3. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node.

Generating a key pair for cluster node SSH access

During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. However, the Machine Config Operator manages SSH keys in the /home/core/.ssh/authorized_keys file and configures sshd to ignore the /home/core/.ssh/authorized_keys.d/core file. As a result, newly provisioned OKD nodes are not accessible using SSH until the Machine Config Operator reconciles the machine configs with the authorized_keys file. After you can access the nodes using SSH, you can delete the /home/core/.ssh/authorized_keys.d/core file.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.

    If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OKD, provide the SSH public key to the installation program.

Obtaining the installation program

Before you install OKD, download the installation file on the host you are using for installation.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Download the installation program from https://github.com/openshift/okd/releases

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.

  2. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  3. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.

    Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation. If you do not use the pull secret from the Red Hat OpenShift Cluster Manager, the following limitations apply:

    • Red Hat Operators are not available.

    • The Telemetry and Insights operators do not send data to Red Hat.

    • Content from the Red Hat Ecosystem Catalog Container images registry, such as image streams and Operators, are not available.

Manually creating the installation configuration file

For user-provisioned installations of OKD, you manually generate your installation configuration file.

Prerequisites
  • You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.

  • You have obtained the OKD installation program and the pull secret for your cluster.

Procedure
  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>

    You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.

  2. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

    You must name this configuration file install-config.yaml.

    For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
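    For example, a simple way to keep a reusable copy outside the installation directory; the backup path is a placeholder:

    $ cp <installation_directory>/install-config.yaml ~/install-config-vsphere.yaml.bak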

Sample install-config.yaml file for VMware vSphere

You can customize the install-config.yaml file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- hyperthreading: Enabled (3)
  name: worker
  replicas: 0 (4)
controlPlane: (2)
  hyperthreading: Enabled (3)
  name: master
  replicas: 3 (5)
metadata:
  name: test (6)
platform:
  vsphere:
    vcenter: your.vcenter.server (7)
    username: username (8)
    password: password (9)
    datacenter: datacenter (10)
    defaultDatastore: datastore (11)
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" (12)
    resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" (13)
    diskType: thin (14)
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' (15)
sshKey: 'ssh-ed25519 AAAA...' (16)
additionalTrustBundle: | (17)
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: (18)
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OKD will support defining multiple compute pools during installation. Only one control plane pool is used.
3 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.

4 You must set the value of the replicas parameter to 0. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OKD.
5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
6 The cluster name that you specified in your DNS records.
7 The fully-qualified hostname or IP address of the vCenter server.
8 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
9 The password associated with the vSphere user.
10 The vSphere datacenter.
11 The default vSphere datastore to use.
12 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>. If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter.
13 Optional: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name>. If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources.
14 The vSphere disk provisioning method.
15 For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
16 The public portion of the default SSH key for the core user in Fedora CoreOS (FCOS).

For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

17 Provide the contents of the certificate file that you used for your mirror registry.
18 Provide the imageContentSources section from the output of the command to mirror the repository.

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites
  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the FCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle.
    5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.

    The installation program does not support the proxy readinessEndpoints field.

  2. Save the file and reference it when installing OKD.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.
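After the cluster is installed, you can inspect the resulting cluster-wide proxy configuration, for example:

$ oc get proxy/cluster -o yaml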

Network configuration phases

There are two phases prior to OKD installation where you can customize the network configuration.

Phase 1

You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:

  • networking.networkType

  • networking.clusterNetwork

  • networking.serviceNetwork

  • networking.machineNetwork

    For more information on these fields, refer to Installation configuration parameters. A sample networking stanza is shown after these notes.

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

    The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
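    The following is a minimal sketch of a networking stanza in install-config.yaml. The clusterNetwork and serviceNetwork values shown are common defaults, and the machineNetwork CIDR is a placeholder that must match the subnet where your nodes reside:

    networking:
      networkType: OVNKubernetes
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      machineNetwork:
      - cidr: 192.168.1.0/24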

Phase 2

After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.

You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2.

Specifying advanced network configuration

You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.

Customizing your network configuration by modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.

Prerequisites
  • You have created the install-config.yaml file and completed any modifications to it.

Procedure
  1. Change to the directory that contains the installation program and create the manifests:

    $ ./openshift-install create manifests --dir <installation_directory> (1)
    1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
  3. Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

    Specify a different VXLAN port for the OpenShift SDN network provider
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        openshiftSDNConfig:
          vxlanPort: 4800
    Enable IPsec for the OVN-Kubernetes network provider
    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      defaultNetwork:
        ovnKubernetesConfig:
          ipsecConfig: {}
  4. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.

  5. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets:

    $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Because you create and manage these resources yourself, you do not have to initialize them.

    • You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment.

Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:

clusterNetwork

IP address pools from which pod IP addresses are allocated.

serviceNetwork

IP address pool for services.

defaultNetwork.type

Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.

You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.

Cluster Network Operator configuration object

The fields for the Cluster Network Operator (CNO) are described in the following table:

Table 11. Cluster Network Operator configuration object
Field Type Description

metadata.name

string

The name of the CNO object. This name is always cluster.

spec.clusterNetwork

array

A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file.

spec.serviceNetwork

array

A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file.

spec.defaultNetwork

object

Configures the network plugin for the cluster network.

spec.kubeProxyConfig

object

The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
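
As an illustration only, the following sketch shows a kubeProxyConfig stanza that adjusts the iptables refresh behavior when the OpenShift SDN plugin is in use. The iptablesSyncPeriod and proxyArguments field names are assumptions about the kube-proxy configuration surface, shown here as an example rather than a recommended setting:

spec:
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s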

defaultNetwork object configuration

The values for the defaultNetwork object are defined in the following table:

Table 12. defaultNetwork object
Field Type Description

type

string

Either OpenShiftSDN or OVNKubernetes. The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation.

OKD uses the OVN-Kubernetes network plugin by default.

openshiftSDNConfig

object

This object is only valid for the OpenShift SDN network plugin.

ovnKubernetesConfig

object

This object is only valid for the OVN-Kubernetes network plugin.

Configuration for the OpenShift SDN network plugin

The following table describes the configuration fields for the OpenShift SDN network plugin:

Table 13. openshiftSDNConfig object
Field Type Description

mode

string

Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy.

The values Multitenant and Subnet are available for backwards compatibility with OKD 3.x but are not recommended. This value cannot be changed after cluster installation.

mtu

integer

The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

This value cannot be changed after cluster installation.

vxlanPort

integer

The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation.

If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.

Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes network plugin

The following table describes the configuration fields for the OVN-Kubernetes network plugin:

Table 14. ovnKubernetesConfig object
Field Type Description

mtu

integer

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

genevePort

integer

The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation.

ipsecConfig

object

Specify an empty object to enable IPsec encryption.

policyAuditConfig

object

Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.

gatewayConfig

object

Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.

While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.

v4InternalSubnet

If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster.

For example, if the clusterNetwork.cidr is 10.128.0.0/14 and the clusterNetwork.hostPrefix is /23, then the maximum number of nodes is 2^(23-14)=512. An IP address is also required for the gateway, network, and broadcast addresses. Therefore, in this example, the internal IP address range must be at least a /22.

This field cannot be changed after installation.

The default value is 100.64.0.0/16.

v6InternalSubnet

If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster.

This field cannot be changed after installation.

The default value is fd98::/48.
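
For comparison with the OpenShift SDN example earlier, a minimal OVN-Kubernetes configuration that uses the fields in this table might look like the following sketch. The mtu value is an illustrative override, genevePort is the default, and the empty ipsecConfig object enables IPsec as described above:

Example OVN-Kubernetes configuration
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}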

Table 15. policyAuditConfig object
Field Type Description

rateLimit

integer

The maximum number of messages to generate every second per node. The default value is 20 messages per second.

maxFileSize

integer

The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB.

destination

string

One of the following additional audit log targets:

libc

The libc syslog() function of the journald process on the host.

udp:<host>:<port>

A syslog server. Replace <host>:<port> with the host and port of the syslog server.

unix:<file>

A Unix Domain Socket file specified by <file>.

null

Do not send the audit logs to any additional target.

syslogFacility

string

The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
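
As a sketch of how these fields fit together, the following policyAuditConfig stanza keeps the default rate limit and sends audit messages to a syslog server as an additional target. Replace <host>:<port> with your syslog server; the values shown are examples, not recommendations:

defaultNetwork:
  ovnKubernetesConfig:
    policyAuditConfig:
      rateLimit: 20
      destination: "udp:<host>:<port>"
      syslogFacility: local0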

Table 16. gatewayConfig object