
About Kuryr SDN

Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia OpenStack services to provide networking for pods and Services.

Kuryr and OKD integration is primarily designed for OKD clusters running on OpenStack VMs. Kuryr improves the network performance by plugging OKD pods into OpenStack SDN. In addition, it provides interconnectivity between pods and OpenStack virtual instances.

Kuryr components are installed as pods in OKD using the openshift-kuryr namespace:

  • kuryr-controller - a single service instance installed on a master node. This is modeled in OKD as a Deployment object.

  • kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OKD node. This is modeled in OKD as a DaemonSet object.

The Kuryr controller watches the OKD API server for pod, service, and namespace create, update, and delete events. It maps the OKD API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OKD via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs.
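
A quick way to confirm that these components are running is to list the pods in the openshift-kuryr namespace:

$ oc get pods -n openshift-kuryr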

Kuryr is recommended for OKD deployments on encapsulated OpenStack tenant networks to avoid double encapsulation, such as running an encapsulated OKD SDN over an OpenStack network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

  • The OpenStack version is less than 16.

  • The deployment uses UDP services, or a large number of TCP services on few hypervisors.

Kuryr is also not recommended in deployments where all of the following criteria are true:

  • The ovn-octavia Octavia driver is disabled.

  • The deployment uses a large number of TCP services on few hypervisors.

Resource guidelines for installing OKD on OpenStack with Kuryr

When you use Kuryr SDN, pods, services, namespaces, and network policies consume resources from the OpenStack quota, which increases the minimum requirements. Kuryr also has some additional requirements on top of what a default installation requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 1. Recommended resources for a default OKD cluster on OpenStack with Kuryr

Resource                  Value
------------------------  ---------------------------------------------------------------------
Floating IP addresses     3, plus the expected number of Services of LoadBalancer type
Ports                     1500; 1 needed per pod
Routers                   1
Subnets                   250; 1 needed per namespace/project
Networks                  250; 1 needed per namespace/project
RAM                       112 GB
vCPUs                     28
Volume storage            275 GB
Instances                 7
Security groups           250; 1 needed per Service and per NetworkPolicy
Security group rules      1000
Server groups             2, plus 1 for each additional availability zone in each machine pool
Load balancers            100; 1 needed per Service
Load balancer listeners   500; 1 needed per Service-exposed port
Load balancer pools       500; 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

If OpenStack object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OKD image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

If you are using OpenStack version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects.

Take the following notes into consideration when setting resources:

  • The number of ports that are required is larger than the number of pods. Kuryr uses port pools, which keep pre-created ports ready for pods to use and speed up pod startup time.

  • Each network policy is mapped into an OpenStack security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group (see the example after this list).

  • Each service is mapped to an OpenStack load balancer. Consider this requirement when estimating the number of security groups required for the quota.

    If you are using OpenStack version 15 or earlier, or the ovn-octavia driver, each load balancer has a security group in the user project.

  • The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the OpenStack deployment’s size. The default installation has more than 50 load balancers; the deployment must be able to accommodate them.

    If you are using OpenStack version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows.
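
As a sketch of the network policy mapping described above, a minimal NetworkPolicy such as the following (all names are placeholders) results in one security group whose rules mirror the ingress section:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-tcp-8080
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: demo
  ingress:
  - ports:
    - protocol: TCP
      port: 8080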

An OKD deployment comprises control plane machines, compute machines, and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

  • Run OpenStack 13+.

  • Have Overcloud with Octavia.

  • Use the Neutron trunk ports extension (a quick check follows this list).

  • Use the openvswitch firewall driver, instead of ovs-hybrid, if the ML2/OVS Neutron driver is used.
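
You can check that the trunk ports extension is enabled before you continue; the grep filter is only illustrative:

$ openstack extension list --network -c Alias | grep trunk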

Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the OpenStack resources used by pods, services, namespaces, and network policies.

Procedure
  • Increase the quotas for a project by running the following command:

    $ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
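
    You can verify the new values afterward. For example:

    $ openstack quota show <project>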

Configuring Neutron

Kuryr CNI leverages the Neutron trunks extension to plug containers into the OpenStack SDN, so the trunks extension must be enabled for Kuryr to work properly.

In addition, if you leverage the default ML2/OVS Neutron driver, the firewall driver must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies.
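
On deployments that are not managed by director, this setting typically lives in the ML2/OVS agent configuration. The following snippet is a sketch; the file path is a common default and can vary by distribution:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (path may vary)
[securitygroup]
firewall_driver = openvswitch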

Configuring Octavia

Kuryr SDN uses OpenStack’s Octavia LBaaS to implement OKD services. Thus, you must install and configure Octavia components in OpenStack to use Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the OpenStack Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply both to a clean install of the Overcloud and to an Overcloud update.

The following steps only capture the key pieces required during the deployment of OpenStack when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure
  1. If you are using the local registry, create a template to upload the images to the registry. For example:

    (undercloud) $ openstack overcloud container image prepare \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
    --namespace=registry.access.redhat.com/rhosp13 \
    --push-destination=<local-ip-from-undercloud.conf>:8787 \
    --prefix=openstack- \
    --tag-from-label {version}-{product-version} \
    --output-env-file=/home/stack/templates/overcloud_images.yaml \
    --output-images-file /home/stack/local_registry_images.yaml
  2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

    ...
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
      push_destination: <local-ip-from-undercloud.conf>:8787

    The Octavia container versions vary depending upon the specific OpenStack release installed.

  3. Pull the container images from registry.redhat.io to the Undercloud node:

    (undercloud) $ sudo openstack overcloud container image upload \
      --config-file  /home/stack/local_registry_images.yaml \
      --verbose

    This may take some time depending on the speed of your network and Undercloud disk.

  4. Because an Octavia load balancer is used to access the OKD API, you must increase the default timeouts for its listeners' connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command:

    (undercloud) $ cat octavia_timeouts.yaml
    parameter_defaults:
      OctaviaTimeoutClientData: 1200000
      OctaviaTimeoutMemberData: 1200000

    This is not needed for OpenStack 13.0.13+.

  5. Install or update your Overcloud environment with Octavia:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
      -e octavia_timeouts.yaml

    This command only includes the files associated with Octavia; it varies based on your specific installation of OpenStack. See the OpenStack documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director.

    When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN.

  6. In OpenStack versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project.

    • To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project.

      This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation.

      This task is unnecessary in OpenStack version 13.0.13 or later.

      Octavia implements a new ACL API that restricts access to the load balancer VIP.

      1. Get the project ID:

        $ openstack project show <project>
        Example output
        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description |                                  |
        | domain_id   | default                          |
        | enabled     | True                             |
        | id          | PROJECT_ID                       |
        | is_domain   | False                            |
        | name        | <project>                        |
        | parent_id   | default                          |
        | tags        | []                               |
        +-------------+----------------------------------+
      2. Add the project ID to octavia.conf for the controllers.

        1. Source the stackrc file:

          $ source stackrc  # Undercloud credentials
        2. List the Overcloud controllers:

          $ openstack server list
          Example output
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | ID                                   | Name         | Status | Networks              | Image          | Flavor     |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
          | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
          | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
          +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
        3. SSH into the controller(s).

          $ ssh heat-admin@192.168.24.8
        4. Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user’s account.

          # List of project IDs that are allowed to have Load balancer security groups
          # belonging to them.
          amp_secgroup_allowed_projects = PROJECT_ID
      3. Restart the Octavia worker so the new configuration loads.

        controller-0$ sudo docker restart octavia_worker

Depending on your OpenStack environment, Octavia might not support UDP listeners. If you use Kuryr SDN on OpenStack version 13.0.13 or earlier, UDP services are not supported. OpenStack versions 16 and later support UDP.

The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list
Example output
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

Beginning with OpenStack version 16, the Octavia OVN provider driver (ovn) is supported on OKD on OpenStack deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:

  • Decreased resource requirements. Kuryr does not require a load balancer VM for each service.

  • Reduced network latency.

  • Increased service creation speed by using OpenFlow rules instead of a VM for each service.

  • Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

You can configure your cluster to use the Octavia OVN driver after your OpenStack cloud is upgraded from version 13 to version 16.
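
To confirm that the ovn provider is functional in your cloud before you rely on it, you can create a test load balancer with it; the load balancer name and subnet are placeholders:

$ openstack loadbalancer create --name test-ovn-lb --provider ovn --vip-subnet-id <subnet>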

Known limitations of installing with Kuryr

Using OKD with Kuryr SDN has several known limitations.

OpenStack general limitations

Using OKD with Kuryr SDN has several limitations that apply to all versions and environments:

  • Service objects with the NodePort type are not supported.

  • Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods.

  • If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.

  • Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting.

OpenStack version limitations

Using OKD with Kuryr SDN has several limitations that depend on the OpenStack version.

  • OpenStack versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OKD service. Creating too many services can cause you to run out of resources.

    Deployments of later versions of OpenStack that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of OpenStack.

  • Octavia OpenStack versions before 13.0.13 do not support UDP listeners. Therefore, OKD UDP services are not supported.

  • Octavia OpenStack versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported.

  • Kuryr SDN does not support automatic unidling by a service.

OpenStack environment limitations

There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the OpenStack version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.

To ensure that TCP forcing is allowed, compile applications with the environment variable CGO_ENABLED set to 1 (CGO_ENABLED=1), or ensure that the variable is absent.
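
For example, a build command that keeps CGO enabled, and therefore uses the C resolver that honors use-vc (the binary name is a placeholder):

$ CGO_ENABLED=1 go build -o myapp .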

In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.

musl-based containers, including Alpine-based containers, do not support the use-vc option.
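
For reference, TCP-forced resolution appears in a pod's resolv.conf as the use-vc option; the values below are illustrative:

search demo.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.30.0.10
options use-vc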

OpenStack upgrade limitations

As a result of the OpenStack upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the OpenStack operator can handle existing load balancer VMs in two ways:

  • Upgrade each VM by triggering a load balancer failover.

  • Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.

If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features.

If OKD detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing.

The recreation causes approximately one minute of downtime for the DNS service.

Control plane machines

By default, the OKD installation process creates three control plane machines.

Each machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 16 GB memory and 4 vCPUs

  • At least 100 GB storage space from the OpenStack quota

Compute machines

By default, the OKD installation process creates three compute machines.

Each machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 8 GB memory and 2 vCPUs

  • At least 100 GB storage space from the OpenStack quota

Compute machines host the applications that you run on OKD; aim to run as many as you can.

Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 16 GB memory and 4 vCPUs

  • At least 100 GB storage space from the OpenStack quota

Enabling Swift on OpenStack

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

If the OpenStack object storage service, commonly known as Swift, is available, OKD uses it as the image registry storage. If it is unavailable, the installation program relies on the OpenStack block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

OpenStack 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OKD registry. You must set the value of rgw_max_attr_size to at least 1024 characters.

Before installation, check if your OpenStack deployment is affected by this problem. If it is, reconfigure Ceph RGW.
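
On Ceph deployments that support centralized configuration, one possible way to raise the limit is shown below; confirm the correct procedure for your Ceph environment before applying it:

$ ceph config set client.rgw rgw_max_attr_size 1024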

Prerequisites
  • You have an OpenStack administrator account on the target environment.

  • The Swift service is installed.

  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on OpenStack:

  1. As an administrator in the OpenStack CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator
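
    You can confirm the role assignment afterward. For example:

    $ openstack role assignment list --user <user> --project <project> --names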

Your OpenStack deployment can now use Swift for the image registry.

Verifying external network access

The OKD installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in OpenStack.

Procedure
  1. Using the OpenStack CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"
    Example output
    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If none does, see Creating a default floating IP network and Creating a default provider network.

If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.

The default network ranges are:

Network          Range
---------------  -------------
machineNetwork   10.0.0.0/16
serviceNetwork   172.30.0.0/16
clusterNetwork   10.128.0.0/14
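
If a range overlaps, you can override it in the install-config.yaml file before you start the installation; the CIDR below is only an example:

networking:
  machineNetwork:
  - cidr: 192.168.25.0/24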

If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in OpenStack.

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

Defining parameters for the installation program

The OKD installation program relies on a file that is called clouds.yaml. The file describes OpenStack configuration parameters, including the project name, login information, and authorization service URLs.

Procedure
  1. Create the clouds.yaml file:

    • If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml. OKD does not support application credentials.

    • If your OpenStack distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the OpenStack documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your OpenStack installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.

    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable

    2. The current directory

    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml

    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
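
      For example, to point the installation program at a clouds.yaml file outside the default search locations, set the environment variable; the path is a placeholder:

      $ export OS_CLIENT_CONFIG_FILE=/home/user/openstack/clouds.yaml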

Setting cloud provider options

Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OKD interacts with OpenStack.

For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation.

Procedure
  1. If you have not already generated manifest files for your cluster, generate them by running the following command:

    $ openshift-install --dir <destination_directory> create manifests
  2. In a text editor, open the cloud-provider configuration manifest file. For example:

    $ vi openshift/manifests/cloud-provider-config.yaml
  3. Modify the options based on the cloud configuration specification.

    Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

    #...
    [LoadBalancer]
    use-octavia=true (1)
    lb-provider = "amphora" (2)
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" (3)
    create-monitor = True (4)
    monitor-delay = 10s (5)
    monitor-timeout = 10s (6)
    monitor-max-retries = 1 (7)
    #...
    1 This property enables Octavia integration.
    2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
    3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
    4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of OpenStack 16.1 and 16.2, this feature is only available for the Amphora provider.
    5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.

    Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.

    You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in OpenStack 16.1 and 16.2 does not support health monitors. Therefore, services that have externalTrafficPolicy set to Local might not respond when the lb-provider value is set to "ovn".

    For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.

  4. Save the changes to the file and proceed with installation.

    You can update your cloud provider configuration after you run the installer. On a command line, run:

    $ oc edit configmap -n openshift-config cloud-provider-config

    After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.

External load balancers that use pre-defined floating IP addresses

Commonly, OpenStack deployments disallow non-administrator users from creating specific floating IP addresses. If such a policy is in place and you use a floating IP address in your service specification, the cloud provider will fail to handle IP address assignment to load balancers.

If you use an external cloud provider, you can avoid this problem by pre-creating a floating IP address and specifying it in your service specification. The in-tree cloud provider does not support this method.
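
With an external cloud provider, a Service that pins a pre-created floating IP might look like the following sketch; the name, selector, and address are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 8080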

Obtaining the installation program

Before you install OKD, download the installation file on a local computer.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Download the installation program from https://github.com/openshift/okd/releases.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.

  2. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  3. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.

    Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation. If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:

    • Red Hat Operators are not available.

    • The Telemetry and Insights operators do not send data to Red Hat.

    • Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.

Creating the installation configuration file

You can customize the OKD cluster you install on OpenStack.

Prerequisites
  • Obtain the OKD installation program and the pull secret for your cluster.

Procedure
  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.

      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.

      3. Specify the OpenStack external network name to use for installing the cluster.

      4. Specify the floating IP address to use for external access to the OpenShift API.

      5. Specify an OpenStack flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.

      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.

      7. Enter a name for your cluster. The name must be 14 or fewer characters long.

      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.

  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Kuryr installations default to HTTP proxies.

Prerequisites
  • For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:

    $ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
  • The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates.

  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle.

    The installation program does not support the proxy readinessEndpoints field.

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OKD.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.

Installation configuration parameters

Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

After installation, you cannot modify these parameters in the install-config.yaml file.

Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 2. Required parameters
Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 3. Network parameters
Parameter Description Values

networking

The configuration for the cluster network.

Object

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The Container Network Interface (CNI) cluster network provider to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 4. Optional parameters
Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

capabilities

Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components.

String array

capabilities.baselineCapabilitySet

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, and vCurrent. v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OKD. The default value is vCurrent.

String

capabilities.additionalEnabledCapabilities

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. Valid values are baremetal, marketplace, and openshift-samples. You may specify multiple capabilities in this parameter.

String array

cgroupsV2

Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OKD process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OKD cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.

true

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String