# Setting up the environment for an OKD installation

With the configuration of the prerequisites complete, the next step is to install Fedora 35 on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OKD cluster. For the purposes of this document, installing Fedora on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.
Perform the following steps to prepare the environment.
Log in to the provisioner node via `ssh`.
Create a non-root user (`kni`) and provide that user with `sudo` privileges:
# useradd kni
# passwd kni
# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
# chmod 0440 /etc/sudoers.d/kni
Create an `ssh` key for the new user:
# su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
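The public half of this key pair provides the `sshKey` value in the `install-config.yaml` file later in this procedure. As a quick check, you can print it now; this assumes the key pair was written to the path given in the `-f` flag above:
$ cat /home/kni/.ssh/id_rsa.pub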
Log in as the new user on the provisioner node:
# su - kni
Install the following packages:
$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
Modify the user to add the `libvirt` group to the newly created user:
$ sudo usermod --append --groups libvirt <user>
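For the `kni` user created earlier, for example:
$ sudo usermod --append --groups libvirt kni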
Restart `firewalld` and enable the `http` service:
$ sudo systemctl start firewalld
$ sudo firewall-cmd --zone=public --add-service=http --permanent
$ sudo firewall-cmd --reload
Start and enable the `libvirtd` service:
$ sudo systemctl enable libvirtd --now
Create the `default` storage pool and start it:
$ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
$ sudo virsh pool-start default
$ sudo virsh pool-autostart default
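To confirm that the pool is active and marked for autostart, you can list all storage pools with the same `virsh` client:
$ sudo virsh pool-list --all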
Create a `pull-secret.txt` file:
$ vim pull-secret.txt
In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure. Click **Copy pull secret**. Paste the contents into the `pull-secret.txt` file and save the contents in the `kni` user's home directory.
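The pull secret must be valid JSON. As a quick sanity check, you can parse it with the `jq` tool installed earlier; this assumes the file was saved as `/home/kni/pull-secret.txt`:
$ jq . /home/kni/pull-secret.txt > /dev/null && echo "pull secret parses as valid JSON"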
## Configuring networking

Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a `baremetal` bridge and network, and an optional `provisioning` bridge and network.
You can also configure networking from the web console.
Export the `baremetal` network NIC name:
$ export PUB_CONN=<baremetal_nic_name>
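If you are unsure of the NIC name, you can list the available devices first; `nmcli device status` shows each device, its type, and the connection it currently belongs to:
$ sudo nmcli device status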
Configure the `baremetal` network:
$ sudo nohup bash -c "
nmcli con down \"$PUB_CONN\"
nmcli con delete \"$PUB_CONN\"
# RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
nmcli con down \"System $PUB_CONN\"
nmcli con delete \"System $PUB_CONN\"
nmcli connection add ifname baremetal type bridge con-name baremetal
nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
pkill dhclient;dhclient baremetal
"
The ssh connection might disconnect after executing these steps.
Optional: If you are deploying with a `provisioning` network, export the `provisioning` network NIC name:
$ export PROV_CONN=<prov_nic_name>
Optional: If you are deploying with a `provisioning` network, configure the `provisioning` network:
$ sudo nohup bash -c "
nmcli con down \"$PROV_CONN\"
nmcli con delete \"$PROV_CONN\"
nmcli connection add ifname provisioning type bridge con-name provisioning
nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
nmcli con down provisioning
nmcli con up provisioning
"
The ssh connection might disconnect after executing these steps. The IPv6 address can be any address as long as it is not routable via the `baremetal` network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.
Optional: If you are deploying with a `provisioning` network, configure the IPv4 address on the `provisioning` network connection:
$ nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual
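To confirm that the IPv4 and IPv6 addresses were applied, you can inspect the `provisioning` bridge:
$ ip addr show dev provisioning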
`ssh` back into the `provisioner` node (if required):
# ssh kni@provisioner.<cluster-name>.<domain>
Verify the connection bridges have been properly created:
$ sudo nmcli con show
NAME UUID TYPE DEVICE
baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal
provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning
virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0
bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1
bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2
## Retrieving the OKD installer

Use the `stable-4.x` version of the installation program and your selected architecture to deploy the generally available stable version of OKD:
$ export VERSION=stable-4.11
$ export RELEASE_ARCH=<architecture>
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/$RELEASE_ARCH/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
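Before continuing, it is worth confirming that the release image was resolved; if the `curl` or `grep` step failed, the variable is empty:
$ echo ${RELEASE_IMAGE}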
## Extracting the OKD installer

After retrieving the installer, the next step is to extract it.
Set the environment variables:
$ export cmd=openshift-baremetal-install
$ export pullsecret_file=~/pull-secret.txt
$ export extract_dir=$(pwd)
Get the `oc` binary:
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
Extract the installer:
$ sudo cp oc /usr/local/bin
$ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
$ sudo cp openshift-baremetal-install /usr/local/bin
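To verify that the extracted binary runs, you can print its version with the installer's `version` subcommand:
$ openshift-baremetal-install version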
## Creating an FCOS images cache

To employ image caching, you must download the Fedora CoreOS (FCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth.
The installation program no longer needs the `clusterOSImage` FCOS image because the correct image is in the release payload.
If you are running the installation program on a network with limited bandwidth and the FCOS image download takes more than 15 to 20 minutes, the installation program will time out. Caching images on a web server will help in such scenarios.
If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OKD hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
Install a container that contains the images.
Install `podman`:
$ sudo dnf install -y podman
Open firewall port `8080` to be used for FCOS image caching:
$ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
Create a directory to store the `bootstrapOSImage`:
$ mkdir /home/kni/rhcos_image_cache
Set the appropriate SELinux context for the newly created directory:
$ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
$ sudo restorecon -Rv /home/kni/rhcos_image_cache/
Get the URI for the FCOS image that the installation program will deploy on the bootstrap VM:
$ export RHCOS_QEMU_URI=$(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "$(arch)" '.architectures[$ARCH].artifacts.qemu.formats["qcow2.gz"].disk.location')
Get the name of the image that the installation program will deploy on the bootstrap VM:
$ export RHCOS_QEMU_NAME=${RHCOS_QEMU_URI##*/}
Get the SHA hash for the FCOS image that will be deployed on the bootstrap VM:
$ export RHCOS_QEMU_UNCOMPRESSED_SHA256=$(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "$(arch)" '.architectures[$ARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]')
Download the image and place it in the `/home/kni/rhcos_image_cache` directory:
$ curl -L ${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_NAME}
Confirm that the SELinux type of the new file is `httpd_sys_content_t`:
$ ls -Z /home/kni/rhcos_image_cache
Create the pod:
$ podman run -d --name rhcos_image_cache \ (1)
-v /home/kni/rhcos_image_cache:/var/www/html \
-p 8080:8080/tcp \
quay.io/centos7/httpd-24-centos7:latest
(1) Creates a caching webserver with the name `rhcos_image_cache`. This pod serves the `bootstrapOSImage` image in the `install-config.yaml` file for deployment.
Generate the `bootstrapOSImage` configuration:
$ export BAREMETAL_IP=$(ip addr show dev baremetal | awk '/inet /{print $2}' | cut -d"/" -f1)
$ export BOOTSTRAP_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_QEMU_NAME}?sha256=${RHCOS_QEMU_UNCOMPRESSED_SHA256}"
$ echo " bootstrapOSImage=${BOOTSTRAP_OS_IMAGE}"
Add the required configuration to the `install-config.yaml` file under `platform.baremetal`:
platform:
  baremetal:
    bootstrapOSImage: <bootstrap_os_image> (1)
(1) Replace `<bootstrap_os_image>` with the value of `$BOOTSTRAP_OS_IMAGE`.
See the "Configuring the install-config.yaml file" section for additional details.
## Configuring the install-config.yaml file

The `install-config.yaml` file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it.
The installation program no longer needs the `clusterOSImage` FCOS image because the correct image is in the release payload.
Configure `install-config.yaml`. Change the appropriate variables to match the environment, including `pullSecret` and `sshKey`:
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster_name>
networking:
  machineNetwork:
  - cidr: <public_cidr>
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 2 (1)
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: <api_ip>
    ingressVIP: <wildcard_ip>
    provisioningNetworkCIDR: <CIDR>
    bootstrapExternalStaticIP: <bootstrap_static_ip_address> (2)
    bootstrapExternalStaticGateway: <bootstrap_static_gateway> (3)
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<out_of_band_ip> (4)
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>
        rootDeviceHints:
          deviceName: "/dev/disk/by-id/<disk_id>" (5)
      - name: <openshift_master_1>
        role: master
        bmc:
          address: ipmi://<out_of_band_ip> (4)
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>
        rootDeviceHints:
          deviceName: "/dev/disk/by-id/<disk_id>" (5)
      - name: <openshift_master_2>
        role: master
        bmc:
          address: ipmi://<out_of_band_ip> (4)
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>
        rootDeviceHints:
          deviceName: "/dev/disk/by-id/<disk_id>" (5)
      - name: <openshift_worker_0>
        role: worker
        bmc:
          address: ipmi://<out_of_band_ip> (4)
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>
      - name: <openshift_worker_1>
        role: worker
        bmc:
          address: ipmi://<out_of_band_ip>
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>
        rootDeviceHints:
          deviceName: "/dev/disk/by-id/<disk_id>" (5)
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
(1) Scale the worker machines based on the number of worker nodes that are part of the OKD cluster. Valid options for the `replicas` value are `0` and integers greater than or equal to `2`. Set the number of replicas to `0` to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker.
(2) When deploying a cluster with static IP addresses, you must set the `bootstrapExternalStaticIP` configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the `baremetal` network.
(3) When deploying a cluster with static IP addresses, you must set the `bootstrapExternalStaticGateway` configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the `baremetal` network.
(4) See the BMC addressing sections for more options.
(5) Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
Create a directory to store the cluster configuration:
$ mkdir ~/clusterconfigs
Copy the `install-config.yaml` file to the new directory:
$ cp install-config.yaml ~/clusterconfigs
Ensure all bare metal nodes are powered off prior to installing the OKD cluster:
$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
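To confirm that a node has actually powered off, you can query its power state with the same tool; `power status` is a standard `ipmitool` chassis command:
$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power status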
Remove old bootstrap resources if any are left over from a previous deployment attempt:
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
do
sudo virsh destroy $i;
sudo virsh undefine $i;
sudo virsh vol-delete $i --pool $i;
sudo virsh vol-delete $i.ign --pool $i;
sudo virsh pool-destroy $i;
sudo virsh pool-undefine $i;
done
## Additional install-config parameters

See the following tables for the required parameters, the `hosts` parameter, and the `bmc` parameter for the `install-config.yaml` file.
**Required parameters**

| Parameters | Default | Description |
|---|---|---|
| `baseDomain` | | The domain name for the cluster. For example, `example.com`. |
| `bootMode` | `UEFI` | The boot mode for a node. Options are `legacy`, `UEFI`, and `UEFISecureBoot`. If `bootMode` is not set, Ironic sets it while inspecting the node. |
| `bootstrapExternalStaticIP` | | The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the `baremetal` network. |
| `bootstrapExternalStaticGateway` | | The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the `baremetal` network. |
| `sshKey` | | The `sshKey` configuration setting contains the key in the `~/.ssh/id_rsa.pub` file required to access the control plane nodes and worker nodes. |
| `pullSecret` | | The `pullSecret` configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. |
| `metadata: name:` | | The name to be given to the OKD cluster. For example, `openshift`. |
| `networking: machineNetwork: - cidr:` | | The public CIDR (Classless Inter-Domain Routing) of the external network. For example, `10.0.0.0/24`. |
| `compute: - name: worker` | | The OKD cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. |
| `compute: replicas: 2` | | Replicas sets the number of worker (or compute) nodes in the OKD cluster. |
| `controlPlane: name: master` | | The OKD cluster requires a name for control plane (master) nodes. |
| `controlPlane: replicas: 3` | | Replicas sets the number of control plane (master) nodes included as part of the OKD cluster. |
| `provisioningNetworkInterface` | | The name of the network interface on nodes connected to the `provisioning` network. |
| `defaultMachinePlatform` | | The default configuration used for machine pools without a platform configuration. |
| `apiVIP` | | (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. |
| `disableCertificateVerification` | `False` | `redfish` and `redfish-virtualmedia` need this parameter to manage BMC addresses. The value should be `True` when using a self-signed certificate for BMC addresses. |
| `ingressVIP` | | (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the `install-config.yaml` file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. |
**Optional parameters**

| Parameters | Default | Description |
|---|---|---|
| `provisioningDHCPRange` | `172.22.0.10,172.22.0.100` | Defines the IP range for nodes on the `provisioning` network. |
| `provisioningNetworkCIDR` | `172.22.0.0/24` | The CIDR for the network to use for provisioning. This option is required when not using the default address range on the `provisioning` network. |
| `clusterProvisioningIP` | The third IP address of the `provisioningNetworkCIDR`. | The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the `provisioning` subnet. For example, `172.22.0.3`. |
| `bootstrapProvisioningIP` | The second IP address of the `provisioningNetworkCIDR`. | The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the `provisioning` subnet. For example, `172.22.0.2`. |
| `externalBridge` | `baremetal` | The name of the `baremetal` bridge of the hypervisor attached to the `baremetal` network. |
| `provisioningBridge` | `provisioning` | The name of the `provisioning` bridge on the provisioner host attached to the `provisioning` network. |
| `architecture` | | Defines the host architecture for your cluster. Valid values are `amd64` or `arm64`. |
| `defaultMachinePlatform` | | The default configuration used for machine pools without a platform configuration. |
| `bootstrapOSImage` | | A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: `https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256>`. |
| `provisioningNetwork` | `Managed` | The `provisioningNetwork` configuration setting determines whether the cluster uses the `provisioning` network, and if so, whether the cluster manages it. |
| `httpProxy` | | Set this parameter to the appropriate HTTP proxy used within your environment. |
| `httpsProxy` | | Set this parameter to the appropriate HTTPS proxy used within your environment. |
| `noProxy` | | Set this parameter to the appropriate list of exclusions for proxy usage within your environment. |
The `hosts` parameter is a list of separate bare metal assets used to build the cluster.

| Name | Default | Description |
|---|---|---|