In OKD version 4, you can install a cluster with a
customized network configuration on infrastructure that the installation program provisions on IBM Cloud®. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy
configuration parameters in a running cluster.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an IBM Cloud® account to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
You configured the ccoctl
utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud®.
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
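For example, if the bootstrap process fails, you can run a command of the following form from the installation host to collect debugging data over SSH (a sketch for installer-provisioned infrastructure; exact flags can vary by installer version):
$ ./openshift-install gather bootstrap --dir <installation_directory>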
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OKD cluster that uses the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
Before you install OKD, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Download the installation program from https://github.com/openshift/okd/releases.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation.
If you do not use the pull secret from Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Ecosystem Catalog Container images registry, such as image streams and Operators, are not available.
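If you choose the placeholder pull secret described above, you can, for example, save it to a local file and paste its contents when the installation program prompts you for a pull secret (a minimal sketch; the file name is arbitrary):
$ echo '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}' > pull-secret.json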
You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key.
You have created either a user API key or service ID API key for your IBM Cloud® account.
Export your API key for your account as a global variable:
$ export IC_API_KEY=<api_key>
You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. |
You can customize the OKD cluster you install on IBM Cloud®.
You have the OKD installation program and the pull secret for your cluster.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory> , specify the directory name to store the
files that the installation program creates. |
When specifying the directory:
Verify that the directory has the execute
permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
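For example, you can create a fresh installation directory and confirm its permissions before running the installation program (a sketch; the directory name is arbitrary):
$ mkdir -p <installation_directory>
$ ls -ld <installation_directory>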
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select ibmcloud as the platform to target.
Select the region to deploy the cluster to.
Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
Enter a descriptive name for your cluster.
Modify the install-config.yaml
file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml
file so that you can use
it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
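For example, a simple copy preserves the file before the installation program consumes it (a sketch; choose any backup location):
$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.bak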
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU | Virtual RAM | Storage | Input/Output Per Second (IOPS)
---|---|---|---|---|---
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300
Control plane | FCOS | 4 | 16 GB | 100 GB | 300
Compute | FCOS | 2 | 8 GB | 100 GB | 300
As of OKD version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements for each supported CPU architecture. For more information about the minimum instruction set architectures (ISA) that each architecture requires, see RHEL Architectures. |
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OKD.
The following IBM Cloud® instance types have been tested with OKD.
bx2-8x32
bx2d-4x16
bx3d-4x20
bx3dc-8x40
cx2-8x16
cx2d-4x8
cx3d-8x20
cx3dc-4x10
gx2-8x64x1v100
gx3-16x80x1l4
mx2-8x64
mx2d-4x32
mx3d-4x40
ox2-8x64
ux2d-2x56
vx2d-4x56
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. |
apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2) (3)
hyperthreading: Enabled (4)
name: master
platform:
ibmcloud: {}
replicas: 3
compute: (2) (3)
- hyperthreading: Enabled (4)
name: worker
platform:
ibmcloud: {}
replicas: 3
metadata:
name: test-cluster (1)
networking: (2)
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (5)
serviceNetwork:
- 172.30.0.0/16
platform:
ibmcloud:
region: us-south (1)
credentialsMode: Manual
publish: External
pullSecret: '{"auths": ...}' (1)
sshKey: ssh-ed25519 AAAA... (6)
1 | Required. The installation program prompts you for this value. |
2 | If you do not provide these parameters and values, the installation program provides the default value. |
3 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. |
4 | Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. |
5 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value. |
6 | Optional: provide the sshKey value that you use to access the machines in your cluster. |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace to hold the additional CA
certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter
with the FCOS trust bundle. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the FCOS trust
bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program. |
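For example, a command of the following form resumes waiting for the installation to complete (a sketch; adjust the log level as needed):
$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug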
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
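After the cluster is running, you can inspect the resulting configuration by viewing the cluster Proxy object, assuming the oc CLI is installed and your kubeconfig is exported:
$ oc get proxy cluster -o yaml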
Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider.
You can use the Cloud Credential Operator (CCO) utility (ccoctl
) to create the required IBM Cloud® resources.
You have configured the ccoctl
binary.
You have an existing install-config.yaml
file.
Edit the install-config.yaml
configuration file so that it contains the credentialsMode
parameter set to Manual
.
install-config.yaml configuration file
apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual (1)
compute:
- architecture: amd64
hyperthreading: Enabled
1 | This line is added to set the credentialsMode parameter to Manual . |
To generate the manifests, run the following command from the directory that contains the installation program:
$ ./openshift-install create manifests --dir <installation_directory>
From the directory that contains the installation program, set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
custom resources (CRs) from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command creates a YAML file for each CredentialsRequest
object.
CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
labels:
controller-tools.k8s.io: "1.0"
name: openshift-image-registry-ibmcos
namespace: openshift-cloud-credential-operator
spec:
secretRef:
name: installer-cloud-credentials
namespace: openshift-image-registry
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: IBMCloudProviderSpec
policies:
- attributes:
- name: serviceName
value: cloud-object-storage
roles:
- crn:v1:bluemix:public:iam::::role:Viewer
- crn:v1:bluemix:public:iam::::role:Operator
- crn:v1:bluemix:public:iam::::role:Editor
- crn:v1:bluemix:public:iam::::serviceRole:Reader
- crn:v1:bluemix:public:iam::::serviceRole:Writer
- attributes:
- name: resourceType
value: resource-group
roles:
- crn:v1:bluemix:public:iam::::role:Viewer
Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret:
$ ccoctl ibmcloud create-service-id \
--credentials-requests-dir=<path_to_credential_requests_directory> \(1)
--name=<cluster_name> \(2)
--output-dir=<installation_directory> \(3)
--resource-group-name=<resource_group_name> (4)
1 | Specify the directory containing the files for the component CredentialsRequest objects. |
2 | Specify the name of the OKD cluster. |
3 | Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. |
4 | Optional: Specify the name of the resource group used for scoping the access policies. |
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. |
If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, you can check the cluster infrastructure manifest in your installation directory, as shown in the example that follows. |
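One way to find the resource group name that the installer recorded, assuming you have already created the manifests, is to search the cluster infrastructure manifest (a sketch; the manifest file name can vary by release):
$ grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml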
Ensure that the appropriate secrets were generated in your cluster’s manifests
directory.
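For example, listing the manifests directory should show one secret manifest per component credentials request (a sketch; exact file names depend on the release image):
$ ls <installation_directory>/manifests/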
There are two phases prior to OKD installation where you can customize the network configuration.
You can customize the following network-related fields in the install-config.yaml
file before you create the manifest files:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information, see "Installation configuration parameters".
Set the networking.networkType parameter to OVNKubernetes. |
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. |
After creating the manifest files by running openshift-install create manifests
, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml
file. However, you can customize the network plugin during phase 2.
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment.
You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. |
You have created the install-config.yaml
file and completed any modifications to it.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. |
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml
in the <installation_directory>/manifests/
directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml
file, such as in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
ipsecConfig:
mode: Full
Optional: Back up the manifests/cluster-network-03-config.yml
file. The
installation program consumes the manifests/
directory when you create the
Ignition config files.
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin. OVNKubernetes
is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork
object in the CNO object named cluster
.
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description
---|---|---
metadata.name | string | The name of the CNO object. This name is always cluster.
spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster.
spec.serviceNetwork | array | A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. You can customize this field only in the install-config.yaml file before you create the manifests.
spec.defaultNetwork | object | Configures the network plugin for the cluster network.
spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description
---|---|---
type | string | The cluster network plugin to install. OVNKubernetes is the only supported plugin during installation.
ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes network plugin.
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description
---|---|---
mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster.
genevePort | integer | The port to use for all Geneve packets. The default value is 6081.
ipsecConfig | object | Specify a configuration object for customizing the IPsec configuration.
ipv4 | object | Specifies a configuration object for IPv4 settings.
ipv6 | object | Specifies a configuration object for IPv6 settings.
policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.
gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
Field | Type | Description
---|---|---
internalTransitSwitchSubnet | string | If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is 100.88.0.0/16.
internalJoinSubnet | string | If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is 100.64.0.0/16.
Field | Type | Description
---|---|---
internalTransitSwitchSubnet | string | If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is fd97::/64.
internalJoinSubnet | string | If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is fd98::/64.
Field | Type | Description
---|---|---
rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second.
maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB.
maxLogFiles | integer | The maximum number of log files that are retained.
destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (do not send the audit logs to an additional target).
syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
Field | Type | Description
---|---|---
routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
ipForwarding | object | You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes-related traffic. Specify Global to allow forwarding of all IP traffic. The default value is Restricted.
ipv4 | object | Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses.
ipv6 | object | Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses.
Field | Type | Description
---|---|---
internalMasqueradeSubnet | string | The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29.
Field | Type | Description
---|---|---
internalMasqueradeSubnet | string | The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125.
Field | Type | Description
---|---|---
mode | string | Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled (IPsec is not enabled on cluster nodes), External (IPsec is enabled for network traffic with external hosts), or Full (IPsec is enabled for pod-to-pod traffic and network traffic with external hosts).
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig:
mode: Full
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
You have configured an account with the cloud platform that hosts your cluster.
You have the OKD installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can install the OpenShift CLI (oc
) to interact with
OKD
from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
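For example, on many Linux systems you can copy the binary to /usr/local/bin, which is typically already on your PATH (adjust the destination for your system):
$ sudo mv oc /usr/local/bin/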
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip
.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
Verify your installation by using an oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
If necessary, you can opt out of remote health reporting.