install-config.yaml file for OpenStack with Kuryr
Kuryr is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes. |
In OKD version 4.13, you can install a customized cluster on OpenStack that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You verified that OKD 4.13 is compatible with your OpenStack version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OKD on OpenStack support matrix.
You have a storage service installed in OpenStack, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OKD registry cluster deployment. For more information, see Optimizing storage.
You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster.
Kuryr is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes. |
Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia OpenStack services to provide networking for pods and Services.
Kuryr and OKD integration is primarily designed for OKD clusters running on OpenStack VMs. Kuryr improves the network performance by plugging OKD pods into OpenStack SDN. In addition, it provides interconnectivity between pods and OpenStack virtual instances.
Kuryr components are installed as pods in OKD using the openshift-kuryr namespace:
kuryr-controller - a single service instance installed on a master node. This is modeled in OKD as a Deployment object.
kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OKD node. This is modeled in OKD as a DaemonSet object.
The Kuryr controller watches the OKD API server for pod, service, and namespace create, update, and delete events. It maps the OKD API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OKD via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs.
Kuryr is recommended for OKD deployments on encapsulated OpenStack tenant networks to avoid double encapsulation, such as running an encapsulated OKD SDN over an OpenStack network.
If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial.
Kuryr is not recommended in deployments where all of the following criteria are true:
The OpenStack version is less than 16.
The deployment uses UDP services, or a large number of TCP services on few hypervisors.
Kuryr is also not recommended in deployments where both of the following criteria are true:
The ovn-octavia Octavia driver is disabled.
The deployment uses a large number of TCP services on few hypervisors.
When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the OpenStack quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires.
Use the following quota to satisfy a default cluster’s minimum requirements:
Resource | Value |
---|---|
Floating IP addresses |
3 - plus the expected number of Services of LoadBalancer type |
Ports |
1500 - 1 needed per Pod |
Routers |
1 |
Subnets |
250 - 1 needed per Namespace/Project |
Networks |
250 - 1 needed per Namespace/Project |
RAM |
112 GB |
vCPUs |
28 |
Volume storage |
275 GB |
Instances |
7 |
Security groups |
250 - 1 needed per Service and per NetworkPolicy |
Security group rules |
1000 |
Server groups |
2 - plus 1 for each additional availability zone in each machine pool |
Load balancers |
100 - 1 needed per Service |
Load balancer listeners |
500 - 1 needed per Service-exposed port |
Load balancer pools |
500 - 1 needed per Service-exposed port |
A cluster might function with fewer than recommended resources, but its performance is not guaranteed.
If OpenStack object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OKD image registry. |
If you are using OpenStack version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. |
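To compare these requirements against your project's current limits, you can query the existing quota with the standard OpenStack CLI. For example, where the project name is a placeholder:
$ openstack quota show <project>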
Take the following notes into consideration when setting resources:
The number of ports that are required is larger than the number of pods. Kuryr uses port pools to have pre-created ports ready to be used by pods and to speed up pod boot times.
Each network policy is mapped into an OpenStack security group, and
depending on the NetworkPolicy
spec, one or more rules are added to the
security group.
Each service is mapped to an OpenStack load balancer. Consider this requirement when estimating the number of security groups required for the quota.
If you are using OpenStack version 15 or earlier, or the ovn-octavia driver, each load balancer has a security group with the user project.
The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the OpenStack deployment’s size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them.
If you are using OpenStack version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows.
An OKD deployment comprises control plane machines, compute machines, and a bootstrap machine.
To enable Kuryr SDN, your environment must meet the following requirements:
Run OpenStack 13+.
Have Overcloud with Octavia.
Use the Neutron Trunk ports extension.
Use the openvswitch firewall driver instead of ovs-hybrid if the ML2/OVS Neutron driver is used.
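You can check whether the trunk extension is available before you start. For example, assuming a configured OpenStack CLI, a command along these lines lists the loaded networking extensions and filters for trunks:
$ openstack extension list --network -c Name -c Alias | grep -i trunk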
When using Kuryr SDN, you must increase quotas to satisfy the OpenStack resources used by pods, services, namespaces, and network policies.
Increase the quotas for a project by running the following command:
$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
Kuryr CNI leverages the Neutron Trunks extension to plug containers into the OpenStack SDN, so you must use the trunks extension for Kuryr to properly work.
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies.
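On Director-based deployments, one way to set the firewall driver is through a Heat environment file. The following is a minimal sketch that assumes the NeutronOVSFirewallDriver parameter of your tripleo-heat-templates version; verify the parameter name against your OpenStack release before using it:
parameter_defaults:
  # Enforce security groups on trunk subports, which Kuryr needs for network policies
  NeutronOVSFirewallDriver: openvswitch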
Kuryr SDN uses OpenStack’s Octavia LBaaS to implement OKD services. Thus, you must install and configure Octavia components in OpenStack to use Kuryr SDN.
To enable Octavia, you must include the Octavia service during the installation of the OpenStack Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update.
The following steps only capture the key pieces required during the deployment of OpenStack when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. |
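The deployment command later in this procedure references an octavia_timeouts.yaml environment file whose contents are not shown here. A minimal sketch, assuming the OctaviaTimeoutClientData and OctaviaTimeoutMemberData Heat parameters, might look like the following (values are in milliseconds):
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000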
If you are using the local registry, create a template to upload the images to the registry. For example:
(undercloud) $ openstack overcloud container image prepare \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
--namespace=registry.access.redhat.com/rhosp13 \
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{product-version} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml
Verify that the local_registry_images.yaml
file contains the Octavia images.
For example:
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
push_destination: <local-ip-from-undercloud.conf>:8787
The Octavia container versions vary depending upon the specific OpenStack release installed. |
Pull the container images from registry.redhat.io
to the Undercloud node:
(undercloud) $ sudo openstack overcloud container image upload \
--config-file /home/stack/local_registry_images.yaml \
--verbose
This may take some time depending on the speed of your network and Undercloud disk.
Install or update your Overcloud environment with Octavia:
$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
-e octavia_timeouts.yaml
This command only includes the files associated with Octavia; it varies based on your specific installation of OpenStack. See the OpenStack documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director. |
When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. |
Octavia supports multiple provider drivers through the Octavia API.
To see all available Octavia provider drivers, on a command line, enter:
$ openstack loadbalancer provider list
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+
Beginning with OpenStack version 16, the Octavia OVN provider driver (ovn) is supported on OKD on OpenStack deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2.
The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.
If Kuryr uses ovn instead of Amphora, it offers the following benefits:
Decreased resource requirements. Kuryr does not require a load balancer VM for each service.
Reduced network latency.
Increased service creation speed by using OpenFlow rules instead of a VM for each service.
Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.
You can configure your cluster to use the Octavia OVN driver after your OpenStack cloud is upgraded from version 13 to version 16.
Using OKD with Kuryr SDN has several known limitations.
Using OKD with Kuryr SDN has several limitations that apply to all versions and environments:
Service objects with the NodePort type are not supported.
Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods.
If the subnet on which machines are created is not connected to a router, or if the subnet is connected but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.
Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting.
Using OKD with Kuryr SDN has several limitations that depend on the OpenStack version.
OpenStack versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OKD service. Creating too many services can cause you to run out of resources.
Deployments of later versions of OpenStack that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of OpenStack.
Kuryr SDN does not support automatic unidling by a service.
As a result of the OpenStack upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required.
You can address API changes on an individual basis.
If the Amphora image is upgraded, the OpenStack operator can handle existing load balancer VMs in two ways:
Upgrade each VM by triggering a load balancer failover.
Leave responsibility for upgrading the VMs to users.
If the operator takes the first option, there might be short downtimes during failovers.
If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features.
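If the operator chooses to upgrade the load balancer VMs, a failover can be triggered per load balancer. For example, assuming the Octavia CLI plugin is installed, the following command fails over a single load balancer (the ID is a placeholder):
$ openstack loadbalancer failover <loadbalancer_id>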
By default, the OKD installation process creates three control plane machines.
Each machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the OpenStack quota
By default, the OKD installation process creates three compute machines.
Each machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 8 GB memory and 2 vCPUs
At least 100 GB storage space from the OpenStack quota
Compute machines host the applications that you run on OKD; aim to run as many as you can. |
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.
The bootstrap machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the OpenStack quota
Deployment with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
Before you install OKD, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you want to deploy the API and application Ingress load balancers with a Fedora instance, you must purchase the Fedora subscription separately. |
The load balancing infrastructure must meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A stateless load balancing algorithm. The options vary based on the load balancer implementation.
Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OKD cluster and the Kubernetes API that runs inside the cluster. |
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine config server |
The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. |
Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OKD cluster.
Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. |
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress Controller pods, compute, or worker, by default. | X | X | HTTP traffic |
If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. |
This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg
configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.
In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
If you are using HAProxy as a load balancer and SELinux is set to enforcing, you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1. |
global
log 127.0.0.1 local2
pidfile /var/run/haproxy.pid
maxconn 4000
daemon
defaults
mode http
log global
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen api-server-6443 (1)
bind *:6443
mode tcp
option httpchk GET /readyz HTTP/1.0
option log-health-checks
balance roundrobin
server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup (2)
server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 (3)
bind *:22623
mode tcp
server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup (2)
server master0 master0.ocp4.example.com:22623 check inter 1s
server master1 master1.ocp4.example.com:22623 check inter 1s
server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 (4)
bind *:443
mode tcp
balance source
server worker0 worker0.ocp4.example.com:443 check inter 1s
server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 (5)
bind *:80
mode tcp
balance source
server worker0 worker0.ocp4.example.com:80 check inter 1s
server worker1 worker1.ocp4.example.com:80 check inter 1s
1 | Port 6443 handles the Kubernetes API traffic and points to the control plane machines. |
2 | The bootstrap entries must be in place before the OKD cluster installation and they must be removed after the bootstrap process is complete. |
3 | Port 22623 handles the machine config server traffic and points to the control plane machines. |
4 | Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. |
5 | Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. |
If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running a command such as netstat or ss on the HAProxy node. |
Swift is operated by a user account with the swiftoperator
role. Add the role to an account before you run the installation program.
If the OpenStack object storage service, commonly known as Swift, is available, OKD uses it as the image registry storage. If it is unavailable, the installation program relies on the OpenStack block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. |
OpenStack 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters, which can cause problems with uploading large images to the OKD registry. Before installation, check if your OpenStack deployment is affected by this problem. If it is, reconfigure Ceph RGW. |
You have an OpenStack administrator account on the target environment.
The Swift service is installed.
On Ceph RGW, the account in url
option is enabled.
To enable Swift on OpenStack:
As an administrator in the OpenStack CLI, add the swiftoperator
role to the account that will access Swift:
$ openstack role add --user <user> --project <project> swiftoperator
Your OpenStack deployment can now use Swift for the image registry.
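To confirm that the role was assigned, you can list role assignments for the account. For example:
$ openstack role assignment list --user <user> --project <project> --names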
The OKD installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in OpenStack.
Using the OpenStack CLI, verify the name and ID of the 'External' network:
$ openstack network list --long -c ID -c Name -c "Router Type"
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.
If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: machineNetwork 10.0.0.0/16, serviceNetwork 172.30.0.0/16, and clusterNetwork 10.128.0.0/14. |
If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in OpenStack. |
If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port. |
The OKD installation program relies on a file that is called clouds.yaml. The file describes OpenStack configuration parameters, including the project name, login information, and authorization service URLs.
Create the clouds.yaml
file:
If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml
file in it.
Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml. |
If your OpenStack distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml
, see Config files in the OpenStack documentation.
clouds:
shiftstack:
auth:
auth_url: http://10.10.14.42:5000/v3
project_name: shiftstack
username: <username>
password: <password>
user_domain_name: Default
project_domain_name: Default
dev-env:
region_name: RegionOne
auth:
username: <username>
password: <password>
project_name: 'devonly'
auth_url: 'https://10.10.14.22:5001/v2.0'
If your OpenStack installation uses self-signed certificate authority (CA) certificates for endpoint authentication:
Copy the certificate authority file to your machine.
Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
shiftstack:
...
cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. |
Place the clouds.yaml
file in one of the following locations:
The value of the OS_CLIENT_CONFIG_FILE
environment variable
The current directory
A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml
The installation program searches for clouds.yaml
in that order.
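For example, to point the installation program at a clouds.yaml file that is stored outside the default search paths, you can export the environment variable before running the installer (the path below is a placeholder):
$ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml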
Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OKD interacts with OpenStack.
For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation.
If you have not already generated manifest files for your cluster, generate them by running the following command:
$ openshift-install --dir <destination_directory> create manifests
In a text editor, open the cloud-provider configuration manifest file. For example:
$ vi openshift/manifests/cloud-provider-config.yaml
Modify the options according to the CCM reference guide.
Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:
#...
[LoadBalancer]
use-octavia=true (1)
lb-provider = "amphora" (2)
floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" (3)
create-monitor = True (4)
monitor-delay = 10s (5)
monitor-timeout = 10s (6)
monitor-max-retries = 1 (7)
#...
1 | This property enables Octavia integration. |
2 | This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . |
3 | This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. |
4 | This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of OpenStack 16.2, this feature is only available for the Amphora provider. |
5 | This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . |
6 | This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . |
7 | This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . |
Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. |
You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. |
For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. |
Save the changes to the file and proceed with installation.
You can update your cloud provider configuration after you run the installer. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. |
Before you install OKD, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Download the installation program from https://github.com/openshift/okd/releases
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, are not available.
You can customize the OKD cluster you install on OpenStack.
Obtain the OKD installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory>, specify the directory name to store the files that the installation program creates. |
When specifying the directory:
Verify that the directory has the execute
permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
Always delete the
|
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select openstack as the platform to target.
Specify the OpenStack external network name to use for installing the cluster.
Specify the floating IP address to use for external access to the OpenShift API.
Specify an OpenStack flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
Enter a name for your cluster. The name must be 14 or fewer characters long.
Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
Kuryr installations default to HTTP proxies. |
For Kuryr installations on restricted networks that use the Proxy
object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:
$ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
The restricted subnet must have a gateway that is defined and available to be linked to the Router
resource that Kuryr creates.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer, for example, ./openshift-install wait-for install-complete --log-level debug. |
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
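After installation, you can inspect the resulting cluster-wide proxy object. For example, assuming a configured oc client:
$ oc get proxy/cluster -o yaml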
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file. |
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
The API version for the |
String |
|
The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
Kubernetes resource |
Object |
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
The configuration for the specific platform upon which to perform the installation: |
Object |
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. |
Parameter | Description | Values | ||
---|---|---|---|---|
|
The configuration for the cluster network. |
Object
|
||
|
The Red Hat OpenShift Networking network plugin to install. |
Either |
||
|
The IP address blocks for pods. The default value is If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation.
The prefix length for an IPv4 block is between |
||
|
The subnet prefix length to assign to each individual node. For example, if |
A subnet prefix. The default value is |
||
|
The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. |
An array with an IP address block in CIDR format. For example:
|
||
|
The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use |
An IP network block in CIDR notation. For example,
|
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||||
---|---|---|---|---|---|---|
|
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. |
String |
||||
|
Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. |
String array |
||||
|
Selects an initial set of optional capabilities to enable. Valid values are |
String |
||||
|
Extends the set of optional capabilities beyond what you specify in |
String array |
||||
|
Enables workload partitioning, which isolates OKD services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. |
|
||||
|
The configuration for the machines that comprise the compute nodes. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
compute: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
||||
|
Enables the cluster for a feature set. A feature set is a collection of OKD features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". |
String. The name of the feature set to enable, such as |
||||
|
The configuration for the machines that comprise the control plane. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
controlPlane: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of control plane machines to provision. |
The only supported value is |
||||
|
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
|
|
||||
|
Sources and repositories for the release-image content. |
Array of objects. Includes a |
||||
|
Required if you use |
String |
||||
|
Specify one or more repositories that may also contain the same images. |
Array of strings |
||||
|
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
Setting this field to
|
||||
|
The SSH key to authenticate access to your cluster machines.
|
For example, |
Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
Additional OpenStack configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
|
For compute machines, the root volume’s type. |
String, for example |
|
For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
|
For control plane machines, the root volume’s type. |
String, for example |
|
The name of the OpenStack cloud to use from the list of clouds in the
|
String, for example |
|
The OpenStack external network name to be used for installation. |
String, for example |
|
The OpenStack flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the |
String, for example |
Optional OpenStack configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. |
A list of one or more UUIDs as strings. For example, |
|
Additional security groups that are associated with compute machines. |
A list of one or more UUIDs as strings. For example, |
|
OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the OpenStack administrator configured. On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OKD services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
|
For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. |
A list of strings, for example |
|
Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include An If you use a strict |
A server group policy to apply to the machine pool. For example, |
|
Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. |
A list of one or more UUIDs as strings. For example, |
|
Additional security groups that are associated with control plane machines. |
A list of one or more UUIDs as strings. For example, |
|
OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the OpenStack administrator configured. On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OKD services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
|
For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. |
A list of strings, for example |
|
Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include An If you use a strict |
A server group policy to apply to the machine pool. For example, |
|
The location from which the installation program downloads the FCOS image. You must set this parameter to perform an installation in a restricted network. |
An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, |
|
Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if You can use this property to exceed the default persistent volume (PV) limit for OpenStack of 26 PVs per node. To exceed the limit, set the You can also use this property to enable the QEMU guest agent by including the |
A list of key-value string pairs. For example, |
|
The default machine pool platform configuration. |
|
|
An existing floating IP address to associate with the Ingress port. To use this property, you must also define the |
An IP address, for example |
|
An existing floating IP address to associate with the API load balancer. To use this property, you must also define the |
An IP address, for example |
|
IP addresses for external DNS servers that cluster instances use for DNS resolution. |
A list of IP addresses as strings. For example, |
|
Whether or not to use the default, internal load balancer. If the value is set to |
|
|
The UUID of an OpenStack subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OKD installer. Instead, add DNS to the subnet in OpenStack. |
A UUID as a string. For example, |
OpenStack failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
OpenStack deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder.
Beginning with OKD 4.13, there is a unified definition of failure domains for OpenStack deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place.
In OpenStack, a port describes a network connection and maps to an interface inside a compute machine. A port also:
Is defined by a network or by one or more subnets
Connects a machine to one or more subnets
Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to:
The portTarget
object with the ID control-plane
while that object exists.
All non-control-plane portTarget
objects within its own failure domain.
All networks in the machine pool’s additionalNetworkIDs
list.
To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains
.
Parameter | Description | Values |
---|---|---|
|
An availability zone for the server. If not specified, the cluster default is used. |
The name of the availability zone. For example, |
|
An availability zone for the root volume. If not specified, the cluster default is used. |
The name of the availability zone. For example, |
|
A list of |
A list of |
|
The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to |
|
|
Required. The name or ID of the network to attach to machines in the failure domain. |
A
or:
|
|
Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port. |
A list of |
You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset.
|
Optionally, you can deploy a cluster on an OpenStack subnet of your choice. The subnet’s GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.
This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different OpenStack subnet by setting the value of the platform.openstack.machinesSubnet
property to the subnet’s UUID.
Before you run the OKD installer with a custom subnet, verify that your configuration meets the following requirements:
The subnet that is used by platform.openstack.machinesSubnet
has DHCP enabled.
The CIDR of platform.openstack.machinesSubnet
matches the CIDR of networking.machineNetwork
.
The installation program user has permission to create ports on this network, including ports with fixed IP addresses.
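You can verify the DHCP and CIDR requirements listed above with the OpenStack CLI. For example, where <subnet_UUID> is a placeholder for your subnet:
$ openstack subnet show <subnet_UUID> -c enable_dhcp -c cidr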
Clusters that use custom subnets have the following limitations:
If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet
subnet must be attached to a router that is connected to the externalNetwork
network.
If the platform.openstack.machinesSubnet
value is set in the install-config.yaml
file, the installation program does not create a private network or subnet for your OpenStack machines.
You cannot use the platform.openstack.externalDNS
property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the OpenStack network.
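The relevant part of an install-config.yaml file for a custom subnet might look like the following sketch, where the UUID and CIDR are placeholders that must match your OpenStack subnet:
networking:
  machineNetwork:
  - cidr: <subnet_CIDR>
platform:
  openstack:
    machinesSubnet: <subnet_UUID>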
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. |
The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. |
install-config.yaml file for OpenStack with Kuryr
To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType.
This sample install-config.yaml file demonstrates all of the possible OpenStack customization options.
This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. |
apiVersion: v1
baseDomain: example.com
controlPlane:
name: master
platform: {}
replicas: 3
compute:
- name: worker
platform:
openstack:
type: ml.large
replicas: 3
metadata:
name: example
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
serviceNetwork:
- 172.30.0.0/16 (1)
networkType: Kuryr (2)
platform:
openstack:
cloud: mycloud
externalNetwork: external
computeFlavor: m1.xlarge
apiFloatingIP: 128.0.0.1
trunkSupport: true (3)
octaviaSupport: true (3)
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1 | The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. |
2 | The cluster network plugin to install. The supported values are Kuryr, OVNKubernetes, and OpenShiftSDN. The default value is OVNKubernetes. |
3 | Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the OpenStack network and Octavia is required to create the OKD services. |
OpenStack failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
The following section of an install-config.yaml
file demonstrates the use of failure domains in a cluster to deploy on OpenStack:
# ...
controlPlane:
name: master
platform:
openstack:
type: m1.large
failureDomains:
- computeAvailabilityZone: 'nova-1'
storageAvailabilityZone: 'cinder-1'
portTargets:
- id: control-plane
network:
id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6
- computeAvailabilityZone: 'nova-2'
storageAvailabilityZone: 'cinder-2'
portTargets:
- id: control-plane
network:
id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1
- computeAvailabilityZone: 'nova-3'
storageAvailabilityZone: 'cinder-3'
portTargets:
- id: control-plane
network:
id: 8e4b4e0d-3865-4a9b-a769-559270271242
featureSet: TechPreviewNoUpgrade
# ...
Deployment on OpenStack with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
The following example install-config.yaml
file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer.
apiVersion: v1
baseDomain: mydomain.test
compute:
- name: worker
platform:
openstack:
type: m1.xlarge
replicas: 3
controlPlane:
name: master
platform:
openstack:
type: m1.xlarge
replicas: 3
metadata:
name: mycluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 192.168.10.0/24
platform:
openstack:
cloud: mycloud
machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a (1)
apiVIPs:
- 192.168.10.5
ingressVIPs:
- 192.168.10.7
loadBalancer:
type: UserManaged (2)
featureSet: TechPreviewNoUpgrade (3)
1 | Regardless of which load balancer you use, the load balancer is deployed to this subnet. |
2 | The UserManaged value indicates that you are using a user-managed load balancer. |
3 | Because user-managed load balancers are in Technology Preview, you must include the TechPreviewNoUpgrade value to deploy a cluster that uses a user-managed load balancer. |
You can deploy your OKD clusters on OpenStack with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.
OpenStack provider networks map directly to an existing physical network in the data center. An OpenStack administrator must create them.
For example, OKD workloads can be connected to a data center by using a provider network.
OKD clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.
Example provider network types include flat (untagged) and VLAN (802.1Q tagged).
A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. |
You can learn more about provider and tenant networks in the OpenStack documentation.
Before you install an OKD cluster, your OpenStack deployment and provider network must meet a number of conditions:
The OpenStack networking service (Neutron) is enabled and accessible through the OpenStack networking API.
The OpenStack networking service has the port security and allowed address pairs extensions enabled.
The provider network can be shared with other tenants.
Use the openstack network create --share flag to create a network that can be shared. |
The OpenStack project that you use to install the cluster must own the provider network, as well as an appropriate subnet.
To learn more about creating networks on OpenStack, read the provider networks documentation. |
If the cluster is owned by the admin
user, you must run the installer as that user to create ports on the network.
Provider networks must be owned by the OpenStack project that is used to create the cluster. If they are not, the OpenStack Compute service (Nova) cannot request a port from that network. |
Verify that the provider network can reach the OpenStack metadata service IP address, which is 169.254.169.254
by default.
Depending on your OpenStack SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:
$ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
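One possible way to create such a rule is shown in the following command, where <project_id> and <provider_network_name> are placeholders for values from your environment:
$ openstack network rbac create --type network --target-project <project_id> --action access_as_shared <provider_network_name>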
You can deploy an OKD cluster that has its primary network interface on an OpenStack provider network.
Your OpenStack deployment is configured as described by "OpenStack provider network requirements for cluster installation".
In a text editor, open the install-config.yaml
file.
Set the value of the platform.openstack.apiVIPs
property to the IP address for the API VIP.
Set the value of the platform.openstack.ingressVIPs
property to the IP address for the Ingress VIP.
Set the value of the platform.openstack.machinesSubnet
property to the UUID of the provider network subnet.
Set the value of the networking.machineNetwork.cidr
property to the CIDR block of the provider network subnet.
The platform.openstack.apiVIPs and platform.openstack.ingressVIPs parameters must be unassigned IP addresses from the machine network CIDR block. |
...
platform:
openstack:
apiVIPs: (1)
- 192.0.2.13
ingressVIPs: (1)
- 192.0.2.23
machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
# ...
networking:
machineNetwork:
- cidr: 192.0.2.0/24
1 | In OKD 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. |
You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. |
When you deploy the cluster, the installer uses the install-config.yaml
file to deploy the cluster on the provider network.
You can add additional networks, including provider networks, to your cluster configuration. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks. |
A Kuryr ports pool maintains a number of ports on standby for pod creation.
Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.
The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OKD cluster nodes.
Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.
Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml
manifest file to configure ports pool behavior:
The enablePortPoolsPrepopulation
parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false
.
The poolMinPorts
parameter is the minimum number of free ports that are kept in the pool. The default value is 1
.
The poolMaxPorts
parameter is the maximum number of free ports that are kept in the pool. A value of 0
disables that upper bound. This is the default setting.
If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.
The poolBatchPorts
parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3
.
During installation, you can configure how Kuryr manages OpenStack Neutron ports to control the speed and efficiency of pod creation.
Create and modify the install-config.yaml
file.
From a command line, create the manifest files:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | For <installation_directory> , specify the name of the directory that
contains the install-config.yaml file for your cluster. |
Create a file that is named cluster-network-03-config.yml
in the
<installation_directory>/manifests/
directory:
$ touch <installation_directory>/manifests/cluster-network-03-config.yml (1)
1 | For <installation_directory> , specify the directory name that contains the
manifests/ directory for your cluster. |
After you create the file, several network configuration files are in the
manifests/
directory, as shown:
$ ls <installation_directory>/manifests/cluster-network-*
cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml
Open the cluster-network-03-config.yml
file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want:
$ oc edit networks.operator.openshift.io cluster
Edit the settings to meet your requirements. The following file is provided as an example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork:
- 172.30.0.0/16
defaultNetwork:
type: Kuryr
kuryrConfig:
enablePortPoolsPrepopulation: false (1)
poolMinPorts: 1 (2)
poolBatchPorts: 3 (3)
poolMaxPorts: 5 (4)
      openStackServiceNetwork: 172.30.0.0/15 (5)
1 | Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . |
2 | Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . |
3 | poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . |
4 | If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . |
5 | The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to OpenStack Octavia’s LoadBalancers. |
If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OKD and the other for VRRP connections. Because these IP addresses are managed by OKD and Neutron respectively, they must come from different pools.
Therefore, the value of openStackServiceNetwork
must be at least twice the size of the value of serviceNetwork
, and the value of serviceNetwork
must overlap entirely with the range that is defined by openStackServiceNetwork
.
The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork
parameter.
If this parameter is not set, the CNO uses an expanded value of serviceNetwork
that is determined by decrementing the prefix size by 1.
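For example, with the serviceNetwork value of 172.30.0.0/16 that is shown in the preceding file, the expanded default is 172.30.0.0/15, which contains the entire service range and provides twice as many addresses.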
Save the cluster-network-03-config.yml
file, and exit the text editor.
Optional: Back up the manifests/cluster-network-03-config.yml
file. The installation program deletes the manifests/
directory while creating the cluster.
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
At deployment, all OKD machines are created in an OpenStack tenant network. Therefore, they are not accessible directly in most OpenStack deployments.
You can configure OKD API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OKD API and cluster applications.
Using the OpenStack CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the OpenStack CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
If you do not control the DNS server, you can access the cluster by adding the cluster domain names, such as the API and wildcard application names, to your /etc/hosts file.
The cluster domain names in the /etc/hosts file grant access to the cluster only from your local machine, which is not suitable for production deployment but does allow installation for development and testing. |
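As an illustration only, the /etc/hosts entries might look like the following, where <API_FIP> and <apps_FIP> are the floating IP addresses that you created and the application host names depend on the routes that your cluster exposes:
<API_FIP> api.<cluster_name>.<base_domain>
<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>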
Add the FIPs to the
install-config.yaml
file as the values of the following
parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork
parameter in the install-config.yaml
file.
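Taken together, the relevant portion of an install-config.yaml file might look similar to the following sketch, where the floating IP addresses and the external network name are placeholders for values from your environment:
platform:
  openstack:
    externalNetwork: <external_network>
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>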
You can make OKD resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. |
You can install OKD on OpenStack without providing floating IP addresses.
In the
install-config.yaml
file, do not define the following
parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork
, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.
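As an illustration only, if another network in your deployment does provide outbound routing, one possible way to set up that connectivity yourself is to create a router and attach the machines subnet to it before you run the installer; every name in the following commands is a placeholder:
$ openstack router create <router_name>
$ openstack router set --external-gateway <routable_network> <router_name>
$ openstack router add subnet <router_name> <machines_subnet>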
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
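api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
In these illustrative records, <api_port_IP> and <ingress_port_IP> stand for the IP addresses that are assigned to the API and Ingress ports on the machine network.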
If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. |
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
Obtain the OKD installation program and the pull secret for your cluster.
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can verify your OKD cluster’s status during or after installation.
In the cluster environment, export the administrator’s kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored the installation files in. |
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster’s version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
See Accessing the web console for more details about accessing and understanding the OKD web console.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.
If you need to enable external access to node ports, configure ingress cluster traffic by using a node port.
If you did not configure OpenStack to accept application traffic over floating IP addresses, configure OpenStack access with floating IP addresses.