Prerequisites

Azure government regions

OKD supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization.

Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment.

The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud, based on the region specified.
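
For example, an install-config.yaml excerpt along the following lines sets the dedicated cloud instance and a government region. This is a minimal sketch; usgovvirginia is only an illustrative placeholder, so substitute the MAG region that you are targeting:

platform:
  azure:
    cloudName: AzureUSGovernmentCloud   # dedicated cloud instance for MAG
    region: usgovvirginia               # placeholder; use your MAG region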

Private clusters

You can deploy a private OKD cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.

By default, OKD is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
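
One common way to keep such a service internal on Azure is the internal load balancer annotation on the Service object. The following sketch uses a hypothetical Service named example-internal-service with placeholder selector and ports:

apiVersion: v1
kind: Service
metadata:
  name: example-internal-service   # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # request an internal Azure load balancer
spec:
  type: LoadBalancer
  selector:
    app: example                   # placeholder selector
  ports:
  - port: 443                      # placeholder ports
    targetPort: 8443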

To deploy a private cluster, you must:

  • Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

  • Deploy from a machine that has access to:

    • The API services for the cloud to which you provision.

    • The hosts on the network that you provision.

    • The internet to obtain installation media.

You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
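
For reference, the relevant install-config.yaml parameters are publish, networkResourceGroupName, virtualNetwork, controlPlaneSubnet, and computeSubnet. The following excerpt is a minimal sketch with placeholder resource names:

platform:
  azure:
    networkResourceGroupName: example-vnet-rg    # placeholder; resource group that contains the existing VNet
    virtualNetwork: example-vnet                 # placeholder; existing VNet name
    controlPlaneSubnet: example-master-subnet    # placeholder; existing subnet for control plane machines
    computeSubnet: example-worker-subnet         # placeholder; existing subnet for compute machines
publish: Internal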

Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.

The cluster still requires access to the internet to access the Azure APIs.

The following items are not required or created when you install a private cluster:

  • A BaseDomainResourceGroup, since the cluster does not create public records

  • Public IP addresses

  • Public DNS records

  • Public endpoints

    The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.

User-defined outbound routing

In OKD, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer.

You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
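
For example, the following install-config.yaml excerpt enables user-defined routing for a private cluster. It is a minimal sketch and assumes that the existing VNet and subnet parameters are set elsewhere in the file:

platform:
  azure:
    outboundType: UserDefinedRouting   # skip public IPs and outbound rules for the load balancer
publish: Internal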

When configuring a cluster to use user-defined routing, the installation program does not create the following resources:

  • Outbound rules for access to the internet.

  • Public IPs for the public load balancer.

  • Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.

You must ensure the following items are available before setting user-defined routing:

  • Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror.

  • The cluster can access Azure APIs.

  • Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.

There are several pre-existing networking setups that are supported for internet access using user-defined routing.

Private cluster with network address translation

You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.

When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.
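
As an illustrative sketch only, creating a NAT gateway with the Azure CLI and attaching it to an existing subnet might look similar to the following. The resource names are placeholders, and the Azure documentation referenced above is the authoritative source for the full procedure:

$ az network public-ip create --resource-group example-rg --name example-nat-ip --sku Standard
$ az network nat gateway create --resource-group example-rg --name example-natgw \
    --public-ip-addresses example-nat-ip --idle-timeout 10
$ az network vnet subnet update --resource-group example-rg --vnet-name example-vnet \
    --name example-worker-subnet --nat-gateway example-natgw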

Private cluster with Azure Firewall

You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with a proxy configuration

You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.

When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.
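
The proxy itself is configured through the proxy fields of install-config.yaml. The following excerpt is a sketch; the proxy URLs and noProxy entries are placeholders for your environment:

proxy:
  httpProxy: http://proxy.example.com:3128    # placeholder proxy URL
  httpsProxy: http://proxy.example.com:3128   # placeholder proxy URL
  noProxy: .example.com,10.0.0.0/16           # placeholder hosts and CIDRs that bypass the proxy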

Private cluster with no internet access

You can install a private cluster that restricts all access to the internet, except access to the Azure APIs. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:

  • An OpenShift image registry mirror that allows for pulling container images

  • Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
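
For example, after you mirror the release images, the install-config.yaml file points the cluster at the mirror through imageContentSources entries. The repository values below are placeholders; in practice, copy the imageContentSources output that the mirroring procedure generates:

imageContentSources:
- mirrors:
  - mirror.example.com:5000/okd/release   # placeholder mirror repository
  source: quay.io/openshift/okd           # placeholder source repository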

About reusing a VNet for your OKD cluster

In OKD 4.11, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.

By deploying OKD into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

  • Subnets

  • Route tables

  • VNets

  • Network Security Groups

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.

The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

  • The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

  • The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.
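
For example, the machine pools can be pinned to specific availability zones with the zones parameter; the zone values shown are placeholders and depend on the zones available in your region:

controlPlane:
  platform:
    azure:
      zones:     # placeholder zones
      - "1"
      - "2"
      - "3"
compute:
- platform:
    azure:
      zones:     # placeholder zones
      - "1"
      - "2"
      - "3"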

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the specified subnets exist.

  • There are two private subnets, one for the control plane machines and one for the compute machines.

  • The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

Table 1. Required ports
Port | Description | Control plane | Compute
80 | Allows HTTP traffic | | x
443 | Allows HTTPS traffic | | x
6443 | Allows communication to the control plane machines | x |
22623 | Allows internal communication to the machine config server for provisioning machines | x |

Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.

To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.
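
To add the required rules to an existing network security group with the Azure CLI, commands similar to the following can be used. This is a sketch only: the resource group and network security group names are placeholders, and the priorities and source prefixes must follow your own security policy:

$ az network nsg rule create --resource-group example-rg --nsg-name example-nsg \
    --name allow-api --priority 100 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 6443
$ az network nsg rule create --resource-group example-rg --nsg-name example-nsg \
    --name allow-mcs --priority 110 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 22623
$ az network nsg rule create --resource-group example-rg --nsg-name example-nsg \
    --name allow-ingress --priority 120 --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 80 443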

Because cluster components must not modify the user-provided network security groups, which the Kubernetes controllers would otherwise update, a pseudo-network security group is created for the Kubernetes controllers to modify without impacting the rest of the environment.

Division of permissions

Starting with OKD 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

Generating a key pair for cluster node SSH access

During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. However, the Machine Config Operator manages SSH keys in the /home/core/.ssh/authorized_keys file and configures sshd to ignore the /home/core/.ssh/authorized_keys.d/core file. As a result, newly provisioned OKD nodes are not accessible using SSH until the Machine Config Operator reconciles the machine configs with the authorized_keys file. After you can access the nodes using SSH, you can delete the /home/core/.ssh/authorized_keys.d/core file.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that its public key is in your ~/.ssh directory.

    If you plan to install an OKD cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OKD, provide the SSH public key to the installation program.

Obtaining the installation program

Before you install OKD, download the installation file on a local computer.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Download the installation program from https://github.com/openshift/okd/releases

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.

  2. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  3. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.

    Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.

    If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:

    • Red Hat Operators are not available.

    • The Telemetry and Insights operators do not send data to Red Hat.

    • Content from the Red Hat Container Catalog registry, such as image streams and Operators, are not available.

Manually creating the installation configuration file

When installing OKD on Microsoft Azure into a government region, you must manually generate your installation configuration file.

Prerequisites
  • You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.

  • You have obtained the OKD installation program and the pull secret for your cluster.

Procedure
  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>

    You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.

  2. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

    You must name this configuration file install-config.yaml.
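
    As a rough sketch only, a manually created install-config.yaml file for a government region might start from a skeleton like the following. Every value shown is a placeholder, the compute, controlPlane, and networking stanzas are omitted, and the supported parameters are described in the tables later in this section:

    apiVersion: v1
    baseDomain: example.com              # placeholder base domain
    metadata:
      name: usgov-cluster                # placeholder cluster name
    platform:
      azure:
        cloudName: AzureUSGovernmentCloud
        region: usgovvirginia            # placeholder government region
    publish: Internal
    pullSecret: '{"auths": ...}'         # placeholder; paste your pull secret
    sshKey: ssh-ed25519 AAAA...          # placeholder; paste your SSH public key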

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

Installation configuration parameters

Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

After installation, you cannot modify these parameters in the install-config.yaml file.

Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 2. Required parameters
Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 3. Network parameters
Parameter Description Values

networking

The configuration for the cluster network.

Object

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The Container Network Interface (CNI) cluster network provider to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
  - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 4. Optional parameters
Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

capabilities

Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components.

String array

capabilities.baselineCapabilitySet

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11 and vCurrent. v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OKD. The default value is vCurrent.

String

capabilities.additionalEnabledCapabilities

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. Valid values are baremetal, marketplace and openshift-samples. You may specify multiple capabilities in this parameter.

String array

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OKD process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OKD cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String