In OKD version 4.13, you can install a cluster on VMware vSphere infrastructure in a restricted network by deploying it to VMware Cloud (VMC) on AWS.
Once you configure your VMC environment for OKD deployment, you use the OKD installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OKD cluster.
OKD supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. |
You can install OKD on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premises and off-premises, across the hybrid cloud.
You must configure several options in your VMC environment prior to installing OKD on VMware vSphere. Ensure your VMC environment has the following prerequisites:
Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OKD deployment.
Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records.
A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address.
A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address.
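For example, with the cluster name vmc-prod-1 and base domain companyname.com used elsewhere in this topic, the records might look like the following BIND zone entries; the IP addresses are placeholders for the two addresses that you allocate outside the DHCP range:

api.vmc-prod-1.companyname.com.     IN A 192.168.1.5
*.apps.vmc-prod-1.companyname.com.  IN A 192.168.1.6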
Configure the following firewall rules:
An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Fedora CoreOS (FCOS) OVA during deployment.
An HTTPS firewall rule between the OKD compute network and vCenter. This connection allows OKD to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources.
You must have the following information to deploy OKD:
The OKD cluster name, such as vmc-prod-1.
The base DNS name, such as companyname.com.
If you are not using the defaults, you must identify the pod network CIDR and services network CIDR, which are set by default to 10.128.0.0/14 and 172.30.0.0/16, respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization.
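These values are set in the networking section of the install-config.yaml file. The following minimal excerpt shows the default blocks as a sketch you can adapt to your environment:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16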
The following vCenter information:
vCenter hostname, username, and password
Datacenter name, such as SDDC-Datacenter
Cluster name, such as Cluster-1
Network name
Datastore name, such as WorkloadDatastore
It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool. |
A Linux-based host deployed to VMC as a bastion.
The bastion host can be Fedora or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts.
Download and install the OpenShift CLI tools to the bastion host.
The openshift-install installation program
The OpenShift CLI (oc) tool
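For example, you might download the tools from your mirror host and place them on the bastion's PATH. The following commands are a sketch; the mirror URL and archive names are placeholders for the artifacts that your mirror host serves:

$ curl -L -o openshift-install-linux.tar.gz https://<mirror_host>/<path>/openshift-install-linux.tar.gz
$ curl -L -o openshift-client-linux.tar.gz https://<mirror_host>/<path>/openshift-client-linux.tar.gz
$ sudo tar xvf openshift-install-linux.tar.gz -C /usr/local/bin openshift-install
$ sudo tar xvf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl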
You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OKD. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OKD deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OKD cluster and between the bastion host and the VMC vSphere hosts. |
VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure that runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single-tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure.
To determine this, VMware provides the VMC on AWS Sizer. With this tool, you can define the resources you intend to host on VMC:
Types of workloads
Total number of virtual machines
Specification information such as:
Storage requirements
vCPUs
vRAM
Overcommit ratios
With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You created a registry on your mirror host and obtained the imageContentSources data for your version of OKD.
Because the installation media is on the mirror host, you can use that computer to complete all installation steps. |
You provisioned block registry storage. For more information on persistent storage, see Understanding persistent storage.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
If you are configuring a proxy, be sure to also review this site list. |
In OKD 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
Clusters in restricted networks have the following additional limitations and restrictions:
The ClusterVersion status includes an Unable to retrieve available updates error.
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
You must install an OKD cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use:
Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later
Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later
You can host the VMware vSphere infrastructure on-premises or on a VMware Cloud Verified provider that meets the requirements outlined in the following table:
Virtual environment product | Required version |
---|---|
VMware virtual hardware | 15 or later |
vSphere ESXi hosts | 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later |
vCenter host | 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later |
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 | This hypervisor version is the minimum version that Fedora CoreOS (FCOS) supports. For more information about supported hardware on the latest version of Fedora that is compatible with FCOS, see Hardware on the Red Hat Customer Portal. |
Storage with in-tree drivers | vSphere 7.0 Update 2 and later; 8.0 Update 1 or later | This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OKD. |
CPU micro-architecture | x86-64-v2 or higher | OpenShift 4.13 and later are based on the RHEL 9.2 host operating system, which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation. You can verify compatibility by following the procedures outlined in this KCS article. |
You must ensure that the time on your ESXi hosts is synchronized before you install OKD. See Edit Time Configuration for a Host in the VMware documentation. |
For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator.
You must configure the network connectivity between machines to allow OKD cluster components to communicate.
Review the following details about the required network ports.
Protocol | Port | Description |
---|---|---|
VRRP | N/A | Required for keepalived |
ICMP | N/A | Network reachability tests |
TCP | 1936 | Metrics |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099. |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | virtual extensible LAN (VXLAN) |
UDP | 6081 | Geneve |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101. |
UDP | 500 | IPsec IKE packets |
UDP | 4500 | IPsec NAT-T packets |
TCP/UDP | 30000-32767 | Kubernetes node port |
ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API |
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server and peer ports |
To install the vSphere CSI Driver Operator, the following requirements must be met:
VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later
vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later
Virtual machines of hardware version 15 or later
No third-party vSphere CSI driver already installed in the cluster
If a third-party vSphere CSI driver is present in the cluster, OKD does not overwrite it. The presence of a third-party vSphere CSI driver prevents OKD from updating to OKD 4.13 or later.
The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. |
To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver.
To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere.
Before you install an OKD cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.
To install an OKD cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.
If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OKD cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OKD cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.
An additional role is required if the installation program is to create a vSphere virtual machine folder.
vSphere object for role | When required | Required privileges in vSphere API |
---|---|---|
vSphere vCenter |
Always |
|
vSphere vCenter Cluster |
If VMs will be created in the cluster root |
|
vSphere vCenter Resource Pool |
If an existing resource pool is provided |
|
vSphere Datastore |
Always |
|
vSphere Port Group |
Always |
|
Virtual Machine Folder |
Always |
|
vSphere vCenter Datacenter |
If the installation program creates the virtual machine folder. For UPI, |
|
vSphere object for role | When required | Required privileges in vCenter GUI |
---|---|---|
vSphere vCenter |
Always |
|
vSphere vCenter Cluster |
If VMs will be created in the cluster root |
|
vSphere vCenter Resource Pool |
If an existing resource pool is provided |
|
vSphere Datastore |
Always |
|
vSphere Port Group |
Always |
|
Virtual Machine Folder |
Always |
|
vSphere vCenter Datacenter |
If the installation program creates the virtual machine folder. For UPI, |
|
Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder.
vSphere object | When required | Propagate to children | Permissions required |
---|---|---|---|
vSphere vCenter | Always | False | Listed required privileges |
vSphere vCenter Datacenter | Existing folder | False | ReadOnly permissions |
| Installation program creates the folder | True | Listed required privileges |
vSphere vCenter Cluster | Existing resource pool | False | ReadOnly permissions |
| VMs in cluster root | True | Listed required privileges |
vSphere vCenter Datastore | Always | False | Listed required privileges |
vSphere Switch | Always | False | ReadOnly permissions |
vSphere Port Group | Always | False | Listed required privileges |
vSphere vCenter Virtual Machine Folder | Existing folder | True | Listed required privileges |
vSphere vCenter Resource Pool | Existing resource pool | True | Listed required privileges |
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
If you intend to use vMotion in your vSphere environment, consider the following before installing an OKD cluster.
Using Storage vMotion can cause issues and is not supported.
Using VMware compute vMotion to migrate the workloads for both OKD compute machines and control plane machines is generally supported, provided that you meet all VMware best practices for vMotion.
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OKD during maintenance or hardware issues.
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules.
If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OKD persistent volume (PV) objects that can result in data loss.
OKD does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.
When you deploy an OKD cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.
A standard OKD installation creates the following vCenter resources:
1 Folder
1 Tag category
1 Tag
Virtual machines:
1 template
1 temporary bootstrap node
3 control plane nodes
3 compute machines
Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster.
If you deploy more compute machines, the OKD cluster will use more storage.
Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.
You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. You must configure the DHCP server to provide the default gateway in the DHCP lease. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. The VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Additionally, you must create the following networking resources before you install the OKD cluster:
It is recommended that each OKD node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which the NTP server prevents. |
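For reference, the following dnsmasq excerpt sketches one way to meet these DHCP requirements. dnsmasq is used here only as an example DHCP server, and all addresses and MAC addresses are hypothetical:

# Reserve a fixed address for each cluster machine so leases are persistent.
dhcp-range=192.168.1.20,192.168.1.100,12h
dhcp-host=00:50:56:aa:bb:01,192.168.1.21,infinite
# Advertise the default gateway in the lease.
dhcp-option=option:router,192.168.1.1
# Advertise an NTP server that nodes can discover via DHCP.
dhcp-option=option:ntp-server,192.168.1.2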
An installer-provisioned vSphere installation requires two static IP addresses:
The API address is used to access the cluster API.
The Ingress address is used for cluster ingress traffic.
You must provide these IP addresses to the installation program when you install the OKD cluster.
You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OKD cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
API VIP | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Ingress VIP | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
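Before you start the installation, one quick check is to confirm that both records resolve, for example with dig; any label under the wildcard record should return the Ingress address:

$ dig +short api.<cluster_name>.<base_domain>
$ dig +short test.apps.<cluster_name>.<base_domain>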
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OKD cluster.
From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip
file downloads.
Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:
certs
├── lin
│   ├── 108f4d17.0
│   ├── 108f4d17.r1
│   ├── 7e757f6a.0
│   ├── 8e4f8471.0
│   └── 8e4f8471.r0
├── mac
│   ├── 108f4d17.0
│   ├── 108f4d17.r1
│   ├── 7e757f6a.0
│   ├── 8e4f8471.0
│   └── 8e4f8471.r0
└── win
    ├── 108f4d17.0.crt
    ├── 108f4d17.r1.crl
    ├── 7e757f6a.0.crt
    ├── 8e4f8471.0.crt
    └── 8e4f8471.r0.crl

3 directories, 15 files
Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:
# cp certs/lin/* /etc/pki/ca-trust/source/anchors
Update your system trust. For example, on a Fedora operating system, run the following command:
# update-ca-trust extract
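Optionally, confirm that the certificates are present in the system trust. This check assumes the p11-kit trust tool is available, and the grep pattern is a placeholder for your vCenter CA's label:

# trust list | grep -i "<vcenter_ca_label>"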
Download the Fedora CoreOS (FCOS) image to install OKD on a restricted network VMware vSphere environment.
Obtain the OKD installation program. For a restricted network installation, the program is on your mirror registry host.
Log in to the Red Hat Customer Portal’s Product Downloads page.
Under Version, select the most recent release of OKD 4.13 for RHEL 8.
The FCOS images might not change with every release of OKD. You must download images with the highest version that is less than or equal to the OKD version that you install. Use the image versions that match your OKD version if they are available. |
Download the Fedora CoreOS (FCOS) - vSphere image.
Upload the image you downloaded to a location that is accessible from the bastion server.
The image is now available for a restricted installation. Note the image name or location for use in OKD deployment.
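One simple approach is to serve the image over HTTP from the bastion host, which later lets you reference it in the clusterOSImage parameter. The following sketch assumes an Apache web server on the bastion and uses a placeholder file name:

# mkdir -p /var/www/html/images
# cp rhcos-vmware.x86_64.ova /var/www/html/images/
# sha256sum /var/www/html/images/rhcos-vmware.x86_64.ova

The checksum output can be appended to the image URL as the ?sha256= query parameter shown later in this topic.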
You can deploy an OKD cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OKD cluster.
The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. A cluster that was upgraded from a previous release defaults to using the in-tree vSphere driver, so you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. |
The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature.
The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OKD cluster. You can leave these fields blank if you want to install an OKD cluster in a vSphere environment that consists of a single datacenter.
The following list describes terms associated with defining zones and regions for your cluster:
Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore
object. A failure domain defines the vCenter location for OKD cluster nodes.
Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region
tag category.
Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone
tag category.
If you plan on specifying more than one failure domain in your |
You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster.
The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter.
Datacenter (region) | Cluster (zone) | Tags |
---|---|---|
us-east | us-east-1 | us-east-1a, us-east-1b |
| us-east-2 | us-east-2a, us-east-2b |
us-west | us-west-1 | us-west-1a, us-west-1b |
| us-west-2 | us-west-2a, us-west-2b |
You can customize the OKD cluster you install on VMware vSphere.
Obtain the OKD installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
Have the imageContentSources
values that were generated during mirror registry creation.
Obtain the contents of the certificate for your mirror registry.
Retrieve a Fedora CoreOS (FCOS) image and upload it to an accessible location.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory> , specify the directory name to store the
files that the installation program creates. |
When specifying the directory:
Verify that the directory has the execute
permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
Always delete the
|
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select vsphere as the platform to target.
Specify the name of your vCenter instance.
Specify the user name and password for the vCenter account that has the required permissions to create the cluster.
The installation program connects to your vCenter instance.
Select the data center in your vCenter instance to connect to.
After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OKD cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement. |
Select the default vCenter datastore to use.
Select the vCenter cluster to install the OKD cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
Enter the virtual IP address that you configured for control plane API access.
Enter the virtual IP address that you configured for cluster ingress.
Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records.
Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.
In the install-config.yaml
file, set the value of platform.vsphere.clusterOSImage
to the image location or name. For example:
platform:
vsphere:
clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
Edit the install-config.yaml
file to give the additional information that
is required for an installation in a restricted network.
Update the pullSecret
value to contain the authentication information for
your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
For <mirror_host_name>
, specify the registry domain name
that you specified in the certificate for your mirror registry, and for
<credentials>
, specify the base64-encoded user name and password for
your mirror registry.
Add the additionalTrustBundle
parameter and value.
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
- <mirror_host_name>:5000/<repo_name>/release
source: quay.io/openshift-release-dev/ocp-release
- mirrors:
- <mirror_host_name>:5000/<repo_name>/release
source: registry.redhat.io/ocp/release
For these values, use the imageContentSources
that you recorded during mirror registry creation.
Make any other modifications to the install-config.yaml
file that you require. You can find more information about
the available parameters in the Installation configuration parameters section.
Back up the install-config.yaml
file so that you can use
it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file. |
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters and hyphens (-), such as dev. |
platform | The configuration for the specific platform upon which to perform the installation, such as vsphere. | Object |
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. |
Parameter | Description | Values |
---|---|---|
networking | The configuration for the cluster network. | Object. You cannot modify parameters specified by the networking object after installation. |
networking.networkType | The Red Hat OpenShift Networking network plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes. |
networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 |
networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 |
networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. |
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||||
---|---|---|---|---|---|---|
|
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. |
String |
||||
|
Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. |
String array |
||||
|
Selects an initial set of optional capabilities to enable. Valid values are |
String |
||||
|
Extends the set of optional capabilities beyond what you specify in |
String array |
||||
|
Enables workload partitioning, which isolates OKD services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. |
|
||||
|
The configuration for the machines that comprise the compute nodes. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
compute: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to 2. The default value is 3. |
||||
|
Enables the cluster for a feature set. A feature set is a collection of OKD features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". |
String. The name of the feature set to enable, such as |
||||
|
The configuration for the machines that comprise the control plane. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
controlPlane: hyperthreading: |
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of control plane machines to provision. |
The only supported value is 3, which is the default value. |
||||
|
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
|
|
||||
|
Sources and repositories for the release-image content. |
Array of objects. Includes a |
||||
|
Required if you use |
String |
||||
|
Specify one or more repositories that may also contain the same images. |
Array of strings |
||||
|
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
Setting this field to
|
||||
|
The SSH key to authenticate access to your cluster machines.
|
For example, |
Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
Additional VMware vSphere configuration parameters are described in the following table.
The |
Parameter | Description | Values | ||
---|---|---|---|---|
platform: vsphere: |
Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OKD cluster. |
A dictionary of vSphere configuration objects |
||
platform: vsphere: apiVIPs: |
Virtual IP (VIP) addresses that you configured for control plane API access.
|
Multiple IP addresses |
||
platform: vsphere: diskType: |
Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. |
Valid values are thin, thick, or eagerZeroedThick. |
||
platform: vsphere: failureDomains: region: |
If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the |
String |
||
platform: vsphere: failureDomains: server: |
Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the |
String |
||
platform: vsphere: failureDomains: zone: |
If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the |
String |
||
platform: vsphere: failureDomains: topology: datacenter: |
Lists and defines the datacenters where OKD virtual machines (VMs) operate.
The list of datacenters must match the list of datacenters specified in the |
String |
||
platform: vsphere: failureDomains: topology: datastore: |
Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the |
String |
||
platform: vsphere: failureDomains: topology: folder: |
Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, |
String |
||
platform: vsphere: failureDomains: topology: networks: |
Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. |
String |
||
platform: vsphere: failureDomains: topology: resourcePool: |
Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, |
String |
||
platform: vsphere: ingressVIPs: |
Virtual IP (VIP) addresses that you configured for cluster Ingress.
|
Multiple IP addresses |
||
platform: vsphere: vcenters: |
Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. |
An array of vCenter configuration objects. |
||
platform: vsphere: vcenters: datacenters: |
Lists and defines the datacenters where OKD virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the |
String |
||
platform: vsphere: vcenters: password: |
The password associated with the vSphere user. |
String |
||
platform: vsphere: vcenters: port: |
The port number used to communicate with the vCenter server. |
Integer |
||
platform: vsphere: vcenters: server: |
The fully qualified host name (FQHN) or IP address of the vCenter server. |
String |
||
platform: vsphere: vcenters: user: |
The username associated with the vSphere user. |
String |
In OKD 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml
file.
The following table lists each deprecated vSphere configuration parameter.
The |
Parameter | Description | Values | ||
---|---|---|---|---|
platform: vsphere: apiVIP: |
The virtual IP (VIP) address that you configured for control plane API access.
|
An IP address, for example |
||
platform: vsphere: cluster: |
The vCenter cluster to install the OKD cluster in. |
String |
||
platform: vsphere: datacenter: |
Defines the datacenter where OKD virtual machines (VMs) operate. |
String |
||
platform: vsphere: defaultDatastore: |
The name of the default datastore to use for provisioning volumes. |
String |
||
platform: vsphere: folder: |
Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. |
String, for example, |
||
platform: vsphere: ingressVIP: |
Virtual IP (VIP) addresses that you configured for cluster Ingress.
|
An IP address, for example |
||
platform: vsphere: network: |
The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. |
String |
||
platform: vsphere: password: |
The password for the vCenter user name. |
String |
||
platform: vsphere: resourcePool: |
Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under |
String, for example, |
||
platform: vsphere: username: |
The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. |
String |
||
platform: vsphere: vCenter: |
The fully-qualified hostname or IP address of a vCenter server. |
String |
Optional VMware vSphere machine pool configuration parameters are described in the following table.
The |
Parameter | Description | Values |
---|---|---|
|
The location from which the installation program downloads the FCOS image. You must set this parameter to perform an installation in a restricted network. |
An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. |
|
The size of the disk in gigabytes. |
Integer |
|
The total number of virtual processor cores to assign a virtual machine. The value of |
Integer |
|
The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is |
Integer |
|
The size of a virtual machine’s memory in megabytes. |
Integer |
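As a sketch of how these machine pool parameters might appear in the install-config.yaml file, the following excerpt sizes a compute pool; all values are illustrative:

compute:
- name: worker
  replicas: 3
  platform:
    vsphere:
      cpus: 8
      coresPerSocket: 4
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120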
You can customize the install-config.yaml
file to specify more details about
your OKD cluster’s platform or modify the values of the required
parameters.
apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- architecture: amd64
name: <worker_node>
platform: {}
replicas: 3
controlPlane: (2)
architecture: amd64
name: <parent_node>
platform: {}
replicas: 3
metadata:
creationTimestamp: null
name: test (3)
platform:
vsphere: (4)
apiVIPs:
- 10.0.0.1
failureDomains: (5)
- name: <failure_domain_name>
region: <default_region_name>
server: <fully_qualified_domain_name>
topology:
computeCluster: "/<datacenter>/host/<cluster>"
datacenter: <datacenter>
datastore: "/<datacenter>/datastore/<datastore>" (6)
networks:
- <VM_Network_name>
resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" (7)
folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>"
zone: <default_zone_name>
ingressVIPs:
- 10.0.0.2
vcenters:
- datacenters:
- <datacenter>
password: <password>
port: 443
server: <fully_qualified_domain_name>
user: administrator@vsphere.local
diskType: thin (8)
clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova (9)
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' (10)
sshKey: 'ssh-ed25519 AAAA...'
additionalTrustBundle: | (11)
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
imageContentSources: (12)
- mirrors:
- <mirror_host_name>:<mirror_port>/<repo_name>/release
source: <source_image_1>
- mirrors:
- <mirror_host_name>:<mirror_port>/<repo_name>/release-images
source: <source_image_2>
1 | The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. | ||
2 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. |
||
3 | The cluster name that you specified in your DNS records. | ||
4 | Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. | ||
5 | Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OKD cluster nodes. |
||
6 | The path to the vSphere datastore that holds virtual machine files, templates, and ISO images.
|
||
7 | Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. | ||
8 | The vSphere disk provisioning method. | ||
9 | The location of the Fedora CoreOS (FCOS) image that is accessible from the bastion server. | ||
10 | For <local_registry> , specify the registry domain name, and optionally the
port, that your mirror registry uses to serve content. For example
registry.example.com or registry.example.com:5000 . For <credentials> ,
specify the base64-encoded user name and password for your mirror registry. |
||
11 | Provide the contents of the certificate file that you used for your mirror registry. | ||
12 | Provide the imageContentSources section from the output of the command to mirror the repository. |
In OKD 4.12 and later, the |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations.
You must include vCenter’s IP address and the IP range that you use for its machines. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace that contains one or more additional CA
certificates that are required for proxying HTTPS connections. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges these contents
with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the FCOS trust
bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program. |
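For example, assuming the default installation directory, a typical invocation might look like the following command:

$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug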
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
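After the cluster is up, you can inspect the resulting proxy configuration; this check assumes a working kubeconfig for the new cluster:

$ oc get proxy cluster -o yaml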
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
You can modify the default installation configuration file, so that you can deploy an OKD cluster to multiple vSphere datacenters that run in a single VMware vCenter.
The default install-config.yaml
file configuration from the previous release of OKD is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer
will prompt you with a warning message that indicates the use of deprecated fields in the configuration file.
The example uses the govc utility. The govc utility is an open source command available from VMware; it is not available from Red Hat. |
You have an existing install-config.yaml
installation configuration file.
You must specify at least one failure domain for your OKD cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OKD cluster. |
Enter the following govc
command-line tool commands to create the openshift-region
and openshift-zone
vCenter tag categories:
If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OKD cluster fails. |
$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone
To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal:
$ govc tags.create -c <region_tag_category> <region_tag>
To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command:
$ govc tags.create -c <zone_tag_category> <zone_tag>
Attach region tags to each vCenter datacenter object by entering the following command:
$ govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>
Attach the zone tags to each vCenter datacenter object by entering the following command:
$ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1
Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
install-config.yaml file with multiple datacenters defined in a vSphere vCenter
---
compute:
---
vsphere:
zones:
- "<machine_pool_zone_1>"
- "<machine_pool_zone_2>"
---
controlPlane:
---
vsphere:
zones:
- "<machine_pool_zone_1>"
- "<machine_pool_zone_2>"
---
platform:
vsphere:
vcenters:
---
datacenters:
- <datacenter1_name>
- <datacenter2_name>
failureDomains:
- name: <machine_pool_zone_1>
region: <region_tag_1>
zone: <zone_tag_1>
server: <fully_qualified_domain_name>
topology:
datacenter: <datacenter1>
computeCluster: "/<datacenter1>/host/<cluster1>"
networks:
- <VM_Network1_name>
datastore: "/<datacenter1>/datastore/<datastore1>"
resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>"
folder: "/<datacenter1>/vm/<folder1>"
- name: <machine_pool_zone_2>
region: <region_tag_2>
zone: <zone_tag_2>
server: <fully_qualified_domain_name>
topology:
datacenter: <datacenter2>
computeCluster: "/<datacenter2>/host/<cluster2>"
networks:
- <VM_Network2_name>
datastore: "/<datacenter2>/datastore/<datastore2>"
resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>"
folder: "/<datacenter2>/vm/<folder2>"
---
You can install OKD on a compatible cloud platform.
When you have configured your VMC environment for OKD deployment, you use the OKD installation program from the bastion management host that is co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OKD cluster.
You can run the create cluster command of the installation program only once, during initial installation. |
Configure an account with the cloud platform that hosts your cluster.
Obtain the OKD installation program and the pull secret for your cluster.
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can install the OpenShift CLI (oc
) to interact with
OKD
from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.13. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip
.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc CLI.
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory>, specify the path to the directory that you stored the installation files in. |
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
system:admin
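As an additional check, listing the cluster nodes confirms that the exported kubeconfig authenticates against the correct API server:
$ oc get nodes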
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OKD installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
-p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. |
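To confirm that the default sources are disabled, you can list the catalog sources in the openshift-marketplace namespace; if you have not added custom sources, the command should return no resources:
$ oc get catalogsource -n openshift-marketplace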
After you install the cluster, you must create storage for the Registry Operator.
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed. When this has completed, you must configure storage.
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
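For example, one way to switch the Image Registry Operator out of the Removed state is with a merge patch. This is a sketch only; the registry does not become available until you also configure storage as described in the following sections:
$ oc patch configs.imageregistry.operator.openshift.io cluster \
    --type merge -p '{"spec":{"managementState":"Managed"}}'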
As a cluster administrator, you must configure your registry to use storage after installation.
Cluster administrator permissions.
A cluster on VMware vSphere.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.
OKD supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. |
Must have "100Gi" capacity.
Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OKD core components. |
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.
When you use shared storage, review your security settings to prevent outside access. |
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry -l docker-registry=default
No resources found in openshift-image-registry namespace
If you do have a registry pod in your output, you do not need to continue with this procedure. |
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
storage:
pvc:
claim: (1)
1 | Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. |
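If you prefer to create the claim yourself instead of leaving the field blank, the following sketch creates a PVC with the name that the Operator expects; <rwx_storage_class> is a placeholder for an RWX-capable storage class that is available in your cluster:
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage   # the claim name the Image Registry Operator expects
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany                # required to run more than one registry replica
  resources:
    requests:
      storage: 100Gi
  storageClassName: <rwx_storage_class>  # replace with an RWX-capable class in your cluster
EOF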
Check the clusteroperator status:
$ oc get clusteroperator image-registry
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
image-registry 4.7 True False False 6h50m
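If you script this check, oc wait can block until the Operator reports itself as available; the timeout value here is an assumption:
$ oc wait clusteroperator/image-registry --for=condition=Available=True --timeout=10m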
See About remote health monitoring for more information about the Telemetry service.
You can configure an OKD cluster to use an external load balancer in place of the default load balancer.
Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. |
Read the following prerequisites that apply to the service that you want to configure for your external load balancer.
MetalLB, which runs on a cluster, functions as an external load balancer. |
You defined a front-end IP address.
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
Port 6443 provides access to the OpenShift API service.
Port 22623 can provide ignition startup configurations to nodes.
The front-end IP address and port 6443 are reachable by all users of your system from a location external to your OKD cluster.
The front-end IP address and port 22623 are reachable only by OKD nodes.
The load balancer backend can communicate with OKD control plane nodes on ports 6443 and 22623.
You defined a front-end IP address.
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
The front-end IP address, port 80, and port 443 are reachable by all users of your system from a location external to your OKD cluster.
The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OKD cluster.
The load balancer backend can communicate with OKD nodes that run the Ingress Controller on ports 80, 443, and 1936.
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OKD provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services:
Example of a Kubernetes API health check specification:
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification:
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification:
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
The following example HAProxy configuration enables access to the cluster from your load balancer on ports 6443, 22623, 443, and 80:
#...
listen my-cluster-api-6443
bind 192.168.1.100:6443
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /readyz
http-check expect status 200
server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2
listen my-cluster-machine-config-api-22623
bind 192.168.1.100:22623
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz
http-check expect status 200
server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2
listen my-cluster-apps-443
bind 192.168.1.100:443
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz/ready
http-check expect status 200
server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
listen my-cluster-apps-80
bind 192.168.1.100:80
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz/ready
http-check expect status 200
server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
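After you edit the configuration, you can validate the syntax and reload HAProxy; this sketch assumes the default configuration path on a systemd-based host:
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl reload haproxy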
Use the curl CLI command to verify that the external load balancer and its resources are operational:
Verify that the Kubernetes API server resource is accessible by running the following command and observing the response:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Verify that the machine config server resource is accessible by running the following command and observing the output:
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output:
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output:
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server so that the cluster API and applications resolve to the load balancer.
Examples of modified DNS records:
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. |
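One way to confirm that a record has propagated is with the dig command; this is a sketch, and both queries should return the load balancer front-end IP address:
$ dig +short api.<cluster_name>.<base_domain>
$ dig +short apps.<cluster_name>.<base_domain>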
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational:
Verify that you can access the cluster API, by running the following command and observing the output:
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Verify that you can access the cluster machine configuration, by running the following command and observing the output:
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that you can access each cluster application on port 80 by running the following command and observing the output:
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Verify that you can access each cluster application on port 443, by running the following command and observing the output:
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Configure image streams for the Cluster Samples Operator and the must-gather tool.
Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
If necessary, you can opt out of remote health reporting.