For OKD platforms that do not support automatic boot image updating, or for clusters with the boot image management feature disabled, you can manually update the boot image used by the compute nodes in your cluster. By updating the boot image, you ensure that newly scaled-up nodes can use the latest Fedora CoreOS (FCOS) version and join the cluster.
|
Red Hat does not support manually updating the boot image in control plane nodes. |
For an IBM Cloud cluster, you can manually update the boot image for the compute nodes in your cluster by configuring your machine sets to use the latest OKD image as the boot image to help ensure any new nodes can scale up properly.
|
The standard boot image management feature is not supported for IBM Cloud clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain IBM Cloud authentication credentials, download a boot image, upload that image to the IBM Cloud image service, and modify your compute machine sets to use the new boot image.
This procedure uses the default IBM Cloud Cloud Object Storage (COS) bucket in your cluster, which was created during cluster installation. Each COS bucket has a specific Cloud Resource Name (CRN), which the IBM Cloud CLI uses to select the correct COS bucket. The following procedure shows how to obtain the CRN for the default COS bucket. For more information on the CRN, see Cloud Resource Names in the IBM Cloud documentation.
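CRNs are colon-delimited strings. The following sketch, using a hypothetical sample CRN for illustration only, shows how the segments break down; the fifth field names the service:

```shell
# Hypothetical sample CRN for illustration only; obtain real CRNs from
# the ibmcloud CLI as shown later in this procedure.
CRN="crn:v1:bluemix:public:cloud-object-storage:global:a/0123456789:12345678-abcd::"
# CRN segments are colon-delimited; field 5 identifies the service
echo "${CRN}" | cut -d: -f5
```

For a COS instance CRN, this prints cloud-object-storage.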
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have the OpenShift CLI (oc) installed.
You have the IBM Cloud CLI installed.
You have installed the IBM Cloud Virtual Private Cloud (VPC) CLI plugin.
You have installed the IBM Cloud Object Storage plugin.
Obtain the resource group and region from the infrastructure object and set the values in an environment variable by running the following commands:
$ export RESOURCE_GROUP=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.ibmcloud.location}')
Generate an IBM Cloud API key and log in to your IBM Cloud:
Follow the instructions in Creating your IBM Cloud API key in the IBM Cloud documentation to generate the API key.
To ensure that the key has the appropriate permissions, you must use the same IBM Cloud account used to create the OKD cluster when generating the key.
Set the API key in an environment variable by running the following command:
$ export IBM_API_KEY=<Your_IBM_Cloud_API_Key>
Log in to your IBM Cloud by running the following command:
$ ibmcloud login --apikey ${IBM_API_KEY} -r ${REGION} -g ${RESOURCE_GROUP}
IBM_API_KEY, REGION, and RESOURCE_GROUP are environment variables you created in previous steps.
API endpoint: https://cloud.ibm.com
Authenticating...
Retrieving API key token...
OK
Targeted account OpenShift-QE (xxxxxxxxxxxxxxxx) <-> xxxxxx
Targeted resource group xxxxxxx-ibm3h-9pbgg
Targeted region eu-gb
API endpoint: https://cloud.ibm.com
Region: eu-gb
User: xxxxx
Account: xxxxx
Resource group: xxxxx
Obtain the URL of the FCOS image to use as the boot image and set the location in an environment variable by running one of the following commands, based on your cluster architecture:
Linux (x86_64, amd64):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.s390x.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
Obtain the boot image:
Download the image by using the following command:
$ curl -L -o /tmp/rhcos-new.qcow2.gz "${RHCOS_URL}"
RHCOS_URL is the environment variable you created in a previous step.
Decompress the downloaded image by running the following command:
$ gunzip /tmp/rhcos-new.qcow2.gz
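If you want to confirm the download is intact before decompressing, a quick sketch using gzip's built-in integrity test (this assumes the download path used above):

```shell
# Test the archive before decompressing; gzip -t exits non-zero if the
# file is truncated or corrupt.
gzip -t /tmp/rhcos-new.qcow2.gz && echo "archive OK"
```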
Upload the boot image to the default IBM Cloud Cloud Object Storage (COS) bucket:
Obtain the CRN for your COS bucket and set the CRN in an environment variable by running the following command:
$ export COS_CRN=$(ibmcloud resource service-instance "${RESOURCE_GROUP}-cos" --output json | jq -r '.[0].crn')
Optional: Check that the CRN is correct by running the following command:
$ echo ${COS_CRN}
Configure the default COS bucket with the CRN by running the following command:
$ ibmcloud cos config crn --crn "${COS_CRN}"
COS_CRN is the environment variable you created in a previous step.
Upload the boot image to the COS bucket by running the following command:
$ ibmcloud cos object-put --bucket "${RESOURCE_GROUP}-vsi-image" --key "rhcos-new.qcow2" --body /tmp/rhcos-new.qcow2 --region "${REGION}"
RESOURCE_GROUP and REGION are environment variables you created in previous steps.
Optional: Check that the image was uploaded to the COS bucket by running the following command:
$ ibmcloud cos objects --bucket "${RESOURCE_GROUP}-vsi-image" --region "${REGION}"
RESOURCE_GROUP and REGION are environment variables you created in previous steps.
OK
Found 2 objects in bucket 'xxxxxx-ibm3h-9pbgg-vsi-image':
Set an environment variable to create a descriptive name for your boot image by running the following command:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
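For example, one possible naming scheme (an assumption, not a requirement) combines the FCOS version with the current date. Hyphens are used rather than dots because IBM Cloud VPC resource names are typically limited to lowercase letters, digits, and hyphens:

```shell
# Hypothetical naming scheme: FCOS major.minor plus today's date,
# hyphen-separated to satisfy typical VPC naming rules.
export IMAGE_NAME="rhcos-9-6-$(date +%Y%m%d)"
echo "${IMAGE_NAME}"
```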
Create a custom image for your IBM Cloud Virtual Private Cloud (VPC) from the uploaded boot image by running one of the following commands, based on your cluster architecture:
Linux (x86_64, amd64):
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name rhel-coreos-stable-amd64 --resource-group-name "${RESOURCE_GROUP}"
You must set the --os-name argument to rhel-coreos-stable-amd64 as shown. This parameter configures several Fedora CoreOS (FCOS) default values that are required.
RESOURCE_GROUP, IMAGE_NAME, and REGION are environment variables you created in previous steps.
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name red-8-s390x-byol --resource-group-name "${RESOURCE_GROUP}"
You must set the --os-name argument to red-8-s390x-byol as shown. This parameter configures several Fedora CoreOS (FCOS) default values that are required.
RESOURCE_GROUP, IMAGE_NAME, and REGION are environment variables you created in previous steps.
Optional: Observe the new image being uploaded until its status changes from pending to available by running the following command:
$ watch ibmcloud is image "${RESOURCE_GROUP}-${IMAGE_NAME}"
RESOURCE_GROUP and IMAGE_NAME are environment variables you created in previous steps.
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME                                 DESIRED   CURRENT   READY   AVAILABLE   AGE
rhhdrbk-b5564-4pcm9-worker-0         3         3         3       3           123m
ci-ln-xj96skb-72292-48nm5-worker-d   1         1         1       1           27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge \
-p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${RESOURCE_GROUP}'-'${IMAGE_NAME}'"}}}}}}'
Replace <machineset_name> with the name of your machine set.
IMAGE_NAME is the environment variable you created in a previous step.
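The inline JSON in the patch is easy to mistype. As an optional sketch (the values below are placeholders), you can build the merge-patch body with jq and pass it to oc patch:

```shell
# Placeholder values; in the procedure these come from earlier steps
RESOURCE_GROUP="mygroup"
IMAGE_NAME="rhcos-9-6-20251212"
# jq guarantees a well-formed merge-patch document
PATCH=$(jq -n --arg img "${RESOURCE_GROUP}-${IMAGE_NAME}" \
  '{"spec": {"template": {"spec": {"providerSpec": {"value": {"image": $img}}}}}}')
echo "${PATCH}"
# Then: oc patch machineset <machineset_name> -n openshift-machine-api \
#   --type merge -p "${PATCH}"
```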
If boot image skew enforcement in your cluster is set to the manual mode, update the version of the new boot image in the MachineConfiguration object as described in "Updating the boot image skew enforcement version".
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
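If you want to script this check, the version field can be extracted with jq. A sketch using a sample document that matches the example output above:

```shell
# Sample aleph JSON matching the example output above
ALEPH='{"ref":"docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive","version":"9.6.20251212-1"}'
echo "${ALEPH}" | jq -r '.version'
```

On a live node, you would pipe the oc debug output into jq instead of the sample string.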
After you migrate all machine sets to the new boot image, the old boot image is no longer needed. You can remove the old boot image from your COS bucket.
You can manually update the boot image for your Nutanix cluster by configuring your machine sets to use the latest OKD image as the boot image to ensure that new nodes can scale up properly.
|
The standard boot image management feature is not supported for Nutanix clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain Nutanix authentication credentials, download a boot image, upload that image to the Nutanix Prism Central, and modify your compute machine sets to use the new boot image.
This procedure requires Nutanix authentication credentials, which you need to access Prism Central. If you need to recover your credentials, you can get them from an OKD secret, the name of which you can find in the default compute machine set. You can decrypt this secret and export the credentials as environment variables, as described in the following procedure.
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the OpenShift CLI (oc).
You have installed the jq program.
If you need to recover your Nutanix authentication credentials, perform the following steps:
Obtain the name of the secret that contains your credentials by running the following command:
$ oc get machineset -n openshift-machine-api -o yaml | grep credentialsSecret -A 1
credentialsSecret:
name: nutanix-credentials
Decrypt the secret by running the following command:
$ oc get secret <secret_name> -n openshift-machine-api -o jsonpath='{.data.credentials}' | base64 -d
Replace <secret_name> with the name of the secret, which you obtained in the previous step.
[{"type":"basic_auth","data":{"prismCentral":{"username":"","password":""},"prismElements":null}}]
Set an environment variable for the Nutanix username by running the following command:
$ export USER="<username>"
Set an environment variable for the Nutanix password by running the following command:
$ export PASS="<password>"
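Alternatively, the two exports can be derived directly from the decrypted secret with jq. A sketch using a sample credentials document in the format shown above (the username and password values are placeholders):

```shell
# Sample credentials JSON in the format produced by decrypting the secret
CREDS='[{"type":"basic_auth","data":{"prismCentral":{"username":"admin","password":"example-pass"},"prismElements":null}}]'
export USER=$(echo "${CREDS}" | jq -r '.[0].data.prismCentral.username')
export PASS=$(echo "${CREDS}" | jq -r '.[0].data.prismCentral.password')
echo "${USER}"
```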
If you need to recover your IP address for Prism Central, run the following command:
$ oc get configmap cloud-provider-config -n openshift-config -o jsonpath='{.data.config}' | grep prismCentral -A 8
"prismCentral": {
"address": "",
"port": 9440,
"credentialRef": {
"kind": "Secret",
"name": "nutanix-credentials",
"namespace": "openshift-cloud-controller-manager"
}
},
where:
prismCentral.address: Specifies the Prism Central IP address.
Set an environment variable for the Prism Central IP address by running the following command:
$ export PC_IP="<prism_central_ip_address>"
Obtain the boot image and upload the image to Prism Central:
Obtain the URL of the FCOS image you want to use as the boot image and set the location in an environment variable by running the following command:
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.nutanix.formats.qcow2.disk.location')
Set an environment variable to create a descriptive name for your boot image in Prism Central by running the following command:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image in Prism Central, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
$ export IMAGE_NAME="rhcos-9.6-boot-image"
Upload the image to Prism Central by running the following command:
$ curl -k -u "$USER:$PASS" \
-X POST "https://$PC_IP:9440/api/nutanix/v3/images" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"name": "'"$IMAGE_NAME"'",
"resources": {
"image_type": "DISK_IMAGE",
"source_uri": "'"$RHCOS_URL"'"
}
},
"metadata": {
"kind": "image"
}
}'
USER, PASS, IMAGE_NAME, and RHCOS_URL are environment variables you created in previous steps.
Optional: Verify that the image is uploaded by running the following command:
$ curl -k -u "$USER:$PASS" \
-X POST "https://$PC_IP:9440/api/nutanix/v3/images/list" \
-H "Content-Type: application/json" \
-d '{
"kind": "image",
"filter": "name=='"$IMAGE_NAME"'"
}'
{
"name": "<image-name>",
"state": "COMPLETE"
}
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME                                 DESIRED   CURRENT   READY   AVAILABLE   AGE
rhhdrbk-b5564-4pcm9-worker-0         3         3         3       3           123m
ci-ln-xj96skb-72292-48nm5-worker-d   1         1         1       1           27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":{"type":"name","name":"'${IMAGE_NAME}'"}}}}}}}'
Replace <machineset_name> with the name of your machine set.
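The Nutanix provider expects an image object rather than a plain string. As an optional sketch (the image name below is a placeholder), you can build the merge-patch body with jq instead of quoting the JSON inline:

```shell
IMAGE_NAME="rhcos-9.6-boot-image"  # placeholder value
# Build the image object the Nutanix provider expects
PATCH=$(jq -n --arg name "${IMAGE_NAME}" \
  '{"spec": {"template": {"spec": {"providerSpec": {"value": {"image": {"type": "name", "name": $name}}}}}}}')
echo "${PATCH}"
```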
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
Verify that the boot image is the same version as the image you uploaded in a previous step by running the following command:
$ echo ${RHCOS_URL}
https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20251212-1/x86_64/rhcos-9.6.20251212-1-nutanix.x86_64.qcow2
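To compare versions without reading the URL by eye, a sketch that extracts the version from the image file name, assuming the rhcos-<version>-<platform> naming shown in the example URL above:

```shell
# Sample URL matching the example output above
RHCOS_URL="https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20251212-1/x86_64/rhcos-9.6.20251212-1-nutanix.x86_64.qcow2"
# Strip the directory, then keep the version between "rhcos-" and the
# platform suffix
basename "${RHCOS_URL}" | sed -E 's/^rhcos-([0-9.]+-[0-9]+)-.*/\1/'
```

For the sample URL, this prints 9.6.20251212-1, which should match the version field reported by the node.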
After you migrate all machine sets to the new boot image, you can remove the old boot image from Prism Central.
For an OpenStack cluster, you can manually update the boot image for your cluster by configuring your machine sets to use the latest OKD image as the boot image to help ensure any new nodes can scale up properly.
|
The standard boot image management feature is not supported for OpenStack clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain OpenStack authentication credentials, download a boot image, upload that image to the OpenStack image service (Glance), and modify your worker machine sets to use the new boot image.
This procedure requires the clouds.yaml file, which is needed by the OpenStackClient CLI to connect to your OpenStack cloud. If you need to re-create this file, you can get the OpenStack credentials from an OKD secret, the name of which you can find in the default compute machine set. You can decrypt this secret and export the credentials to create the clouds.yaml file, as described in the following procedure.
|
Updating control plane machine sets is not supported in OpenStack. |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the OpenShift CLI (oc).
You have installed the OpenStackClient CLI.
You have installed the jq program.
If you need to re-create the clouds.yaml file, perform the following steps:
Obtain the name of the secret that contains your credentials by running the following command:
$ oc get machineset -n openshift-machine-api -o yaml | grep cloudsSecret -A 1
cloudsSecret:
name: openstack-cloud-credentials
Decrypt the secret and add the contents to the clouds.yaml file by running the following command:
$ oc get secret <secret_name> -n openshift-machine-api -o jsonpath='{.data.clouds\.yaml}' | base64 -d > <file_path>/clouds.yaml
Replace <secret_name> with the name of the secret, which you obtained in the previous step, and <file_path> with the path to the clouds.yaml file.
Optional: Verify the contents of the clouds.yaml file by running the following command:
$ cat <file_path>/clouds.yaml
Replace <file_path> with the path to the clouds.yaml file.
clouds:
openstack:
auth:
auth_url: https://your-openstack-url:13000
username: "your-username"
password: "your-password"
project_name: "your-project"
user_domain_name: "Default"
project_domain_name: "Default"
Set an environment variable for the location of the clouds.yaml file by running the following command:
$ export OS_CLIENT_CONFIG_FILE=<file_path>/clouds.yaml
Replace <file_path> with the path to the clouds.yaml file.
The OpenStackClient CLI uses this environment variable to locate the clouds.yaml file.
Obtain the name of your OpenStack cloud from the default compute machine set and set the name in an environment variable by running the following command:
$ export CLOUD_NAME=$(oc get machineset -n openshift-machine-api -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.cloudName}')
Obtain the URL of the FCOS image you want to use as the boot image and set the location in an environment variable by running one of the following commands, based on cluster architecture:
Linux (x86_64, amd64):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.x86_64.artifacts.openstack.formats."qcow2.gz".disk.location')
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.s390x.artifacts.openstack.formats."qcow2.gz".disk.location')
Linux on ARM (aarch64, arm64)
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.aarch64.artifacts.openstack.formats."qcow2.gz".disk.location')
Obtain the boot image and upload the image to the OpenStack image service (Glance):
Download the image by using the following command:
$ curl -L -o <file_path>/rhcos-new.qcow2.gz "${RHCOS_URL}"
Replace <file_path> with the path to the location for the image. RHCOS_URL is the environment variable you created in a previous step.
Decompress the downloaded image by using the following command:
$ gunzip <file_path>/rhcos-new.qcow2.gz
Replace <file_path> with the path to the location for the image.
Set an environment variable to create a descriptive name for your boot image in Glance by running the following command:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
$ export IMAGE_NAME="rhcos 9.6 boot image"
Upload the image to Glance by using the following command:
$ openstack --os-cloud "${CLOUD_NAME}" image create "${IMAGE_NAME}" \
--disk-format qcow2 \
--container-format bare \
--file <file_path>/rhcos-new.qcow2 \
--property os_type=linux \
--property os_distro=rhcos
Replace <file_path> with the path to the location for the image.
CLOUD_NAME and IMAGE_NAME are environment variables you created in previous steps.
It might take several minutes for the image to upload. When the upload is complete, details about the image are displayed, similar to the following example:
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| checksum    | 469fa549f706617ff15b41bd2a919679 |
| # ...       |                                  |
| disk_format | qcow2                            |
| # ...       |                                  |
| name        | rhcos 9.6 boot image             |
+-------------+----------------------------------+
Optional: Verify that the image has uploaded and is in active state by running the following command:
$ openstack --os-cloud "${CLOUD_NAME}" image show "${IMAGE_NAME}" -f json | jq '{name: .name, status: .status}'
{
"name": "rhcos 9.6 boot image",
"status": "active"
}
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME                                 DESIRED   CURRENT   READY   AVAILABLE   AGE
rhhdrbk-b5564-4pcm9-worker-0         3         3         3       3           123m
ci-ln-xj96skb-72292-48nm5-worker-d   1         1         1       1           27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge -p \
'{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${IMAGE_NAME}'"}}}}}}'
Replace <machineset_name> with the name of your machine set.
IMAGE_NAME is the environment variable you created in a previous step.
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
After you migrate all machine sets to the new boot image, you can remove the old boot image from Glance.