For OKD platforms that do not support automatic boot image updating or for clusters configured with the boot image management feature disabled, you can manually update the boot image used by the compute nodes in your cluster. By updating the boot image, you can ensure that newly scaled up nodes are able to successfully use the latest Fedora CoreOS (FCOS) version and join the cluster.
|
Red Hat does not support manually updating the boot image in control plane nodes. |
You can manually update the boot image for your Microsoft Azure cluster by configuring your machine sets to use the latest OKD image as the boot image to ensure that new nodes can scale up properly.
|
Boot image updates are not supported for Azure confidential virtual machines and Azure Stack Hub clusters. Contact Red Hat Support for these cases. |
Use the following procedure to create environment variables that facilitate running the required commands, identify the correct boot image to use as the new boot image, and modify your compute machine sets to use that image.
The process requires you to determine the product variant and Hyper-V generation of your Azure boot image. The following procedure helps determine both values, which you need in order to look up the target image.
|
For clusters that use a default Fedora CoreOS (FCOS), Azure Red Hat OpenShift (ARO), or Azure Marketplace image, you can configure the cluster to automatically update the boot image each time the cluster is updated. If you are using the following procedure, ensure that automatic boot image updates are disabled and skew enforcement is in manual mode. For more information, see "Boot image management" and "Boot image skew enforcement". |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have installed the OpenShift CLI (oc).
You have set boot image skew enforcement to the manual or none mode. For more information, see "Configuring boot image skew enforcement".
You have disabled boot image management for the cluster. For more information, see "Disabling boot image management".
You have downloaded the latest version of the OKD installation program from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the jq program.
Set an environment variable with your cluster architecture by running the following command:
$ export ARCH=<architecture_type>
Replace <architecture_type> with one of the following values:
Use aarch64 for the AArch64 or ARM64 architecture.
Use x86_64 for the x86_64 or AMD64 architecture.
You can find the architecture as a label in any MachineSet object.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
annotations:
capacity.cluster-autoscaler.kubernetes.io/labels: kubernetes.io/arch=amd64
# ...
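The annotation value uses the Kubernetes architecture names (amd64, arm64), while the stream metadata uses x86_64 and aarch64. The following sketch shows the mapping, using a hardcoded sample label in place of live cluster output:

```shell
# Hypothetical sketch: map the label value from the machine set annotation
# (amd64 or arm64) to the architecture name used by the stream metadata.
label="kubernetes.io/arch=amd64"   # sample annotation value, not live output
case "${label##*=}" in
  amd64) ARCH=x86_64 ;;
  arm64) ARCH=aarch64 ;;
  *)     ARCH="${label##*=}" ;;
esac
echo "$ARCH"
```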
Determine your Azure image variant and Hyper-V generation:
Obtain the required values from your machine set by running the following command:
$ oc get machineset <machineset-name> -n openshift-machine-api \
-o jsonpath='{.spec.template.spec.providerSpec.value.image}'
{"offer":"rh-ocp-worker","publisher":"redhat","resourceID":"","sku":"rh-ocp-worker","type":"MarketplaceWithPlan","version":"4.16.20231023"}
Determine your image variant by comparing the output to the entries in the following table:
| Output parameters | Variant |
|---|---|
Make note of the variant for later use.
Determine your image Hyper-V generation by comparing the output to the entries in the following table:
| Output | Image type | Hyper-V generation |
|---|---|---|
| | Legacy uploaded | |
| | Unpaid marketplace | |
| | Paid marketplace | |
Make note of the generation for later use.
Optional: You can compare the output of the version parameter against the output of the following command to determine if your boot image needs updating.
$ openshift-install coreos print-stream-json | jq '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"'
ARCH is the environment variable you created in a previous step.
In the output of the command, locate your variant and generation as shown in the following example:
"ocp": {
  # ...
  "hyperVGen2": {
    "publisher": "redhat",
    "offer": "rh-ocp-worker",
    "sku": "rh-ocp-worker",
    "version": "4.18.2025031114"
  }
}
If the boot image referenced in the version parameter of your machine set matches or is later than the version in this output, no further action on your part is required to update the boot image. If not, continue with this procedure.
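One way to compare the two version strings is a natural version sort, where the newer build identifier sorts last. This is a sketch with hardcoded sample values, not a supported command:

```shell
# Hedged sketch: sort -V orders the dotted build identifiers numerically,
# so the newer of the two versions sorts last. Sample values only.
current="4.16.20231023"    # sample value from the machine set image field
latest="4.18.2025031114"   # sample value from the stream metadata
newest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n 1)
if [ "$newest" = "$current" ]; then
  echo "boot image is current"
else
  echo "boot image needs updating"
fi
```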
Obtain the values needed to identify the new boot image and set the values as environment variables:
Obtain the values required for the new boot image by running the following command:
$ openshift-install coreos print-stream-json | jq '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"'
ARCH is the environment variable you created in a previous step.
In the output of the command, locate your variant and generation as shown in the following example:
"ocp": {
  # ...
  "hyperVGen2": {
    "publisher": "redhat",
    "offer": "rh-ocp-worker",
    "sku": "rh-ocp-worker",
    "version": "9.6.20251015"
  }
}
Set an environment variable with your image variant by running the following command:
$ export VARIANT=<variant>
Replace <variant> with the variant of your image, one of the following values: no-purchase-plan, ocp, opp, oke, ocp-emea, opp-emea, or oke-emea.
Set an environment variable with your image generation by running the following command:
$ export GEN=<generation>
Replace <generation> with the generation of your image, one of the following values: hyperVGen1 or hyperVGen2.
Set environment variables for the publisher, offer, sku, and version fields based on the openshift-install output for your variant and generation by running the following commands:
$ export PUBLISHER=$(openshift-install coreos print-stream-json | jq -r '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"."'"${VARIANT}"'"."'"${GEN}"'".publisher')
ARCH, VARIANT, and GEN are environment variables you created in a previous step.
$ export OFFER=$(openshift-install coreos print-stream-json | jq -r '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"."'"${VARIANT}"'"."'"${GEN}"'".offer')
$ export SKU=$(openshift-install coreos print-stream-json | jq -r '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"."'"${VARIANT}"'"."'"${GEN}"'".sku')
$ export VERSION=$(openshift-install coreos print-stream-json | jq -r '.architectures."'"${ARCH}"'"."rhel-coreos-extensions"."marketplace"."azure"."'"${VARIANT}"'"."'"${GEN}"'".version')
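The four pipelines above share one jq path, so they can be exercised offline against a minimal hand-written stream fragment. The following sketch is illustrative only; the hardcoded sample payload stands in for the real openshift-install output:

```shell
# Hedged sketch: a minimal, hand-written stream fragment standing in for
# the output of "openshift-install coreos print-stream-json".
stream='{"architectures":{"x86_64":{"rhel-coreos-extensions":{"marketplace":{"azure":{"ocp":{"hyperVGen2":{"publisher":"redhat","offer":"rh-ocp-worker","sku":"rh-ocp-worker","version":"9.6.20251015"}}}}}}}}'
ARCH=x86_64 VARIANT=ocp GEN=hyperVGen2
# One jq call extracts all four fields at once; --arg passes the shell
# variables into the filter safely.
out=$(printf '%s' "$stream" | jq -r --arg a "$ARCH" --arg v "$VARIANT" --arg g "$GEN" \
  '.architectures[$a]."rhel-coreos-extensions".marketplace.azure[$v][$g] | "\(.publisher) \(.offer) \(.sku) \(.version)"')
set -- $out
PUBLISHER=$1 OFFER=$2 SKU=$3 VERSION=$4
echo "$PUBLISHER $OFFER $SKU $VERSION"
```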
Obtain the FCOS version by running the following command:
$ echo $VERSION
9.6.20251015
Make note of the FCOS version for later use.
Set an environment variable with the type of your image by running the following command:
$ export IMAGE_TYPE=<image_type>
Replace <image_type> with one of the following values based on the variant of your image:
For the no-purchase-plan variant, use MarketplaceNoPlan.
For all other variants, use MarketplaceWithPlan.
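The mapping above can be expressed as a single branch on the variant name. This is a sketch with a sample value, not a required step:

```shell
# Hedged sketch: derive IMAGE_TYPE from the variant; only the
# no-purchase-plan variant maps to MarketplaceNoPlan.
VARIANT=no-purchase-plan   # sample value
if [ "$VARIANT" = "no-purchase-plan" ]; then
  IMAGE_TYPE=MarketplaceNoPlan
else
  IMAGE_TYPE=MarketplaceWithPlan
fi
echo "$IMAGE_TYPE"
```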
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
ci-ln-lbf9h9k-1d09d-fwh4l-worker-eastus21 1 1 1 1 135m
ci-ln-lbf9h9k-1d09d-fwh4l-worker-eastus22 1 1 1 1 135m
ci-ln-lbf9h9k-1d09d-fwh4l-worker-eastus23 1 1 1 1 135m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset-name> -n openshift-machine-api --type merge \
-p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":{"publisher":"'${PUBLISHER}'","offer":"'${OFFER}'","sku":"'${SKU}'","version":"'${VERSION}'","resourceID":"","type":"'${IMAGE_TYPE}'"}}}}}}}'
PUBLISHER, OFFER, SKU, VERSION, and IMAGE_TYPE are environment variables you created in previous steps.
If boot image skew enforcement in your cluster is set to the manual mode, update the version of the new boot image in the MachineConfiguration object as described in "Updating the boot image skew enforcement version".
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251015-ostree.x86_64.ociarchive",
"version": "9.6.20251015"
}
where:
version: Specifies the boot image version.
Verify that the boot image is the same FCOS version as the image you noted in a previous step by running the following command:
$ echo $VERSION
9.6.20251015
You can manually update the boot image for your Amazon Web Services (AWS) cluster by configuring your machine sets to use the latest OKD image as the boot image to ensure that new nodes can scale up properly.
Use the following procedure to create environment variables that facilitate running the required commands, identify the correct Amazon Machine Image (AMI) to use as the new boot image, and modify your compute machine sets to use that image.
The process differs for clusters that use a default Fedora CoreOS (FCOS) image and clusters that use a custom FCOS image from the AWS Marketplace. The following procedure helps determine which type of image you use.
|
For clusters that use a default FCOS image, you can configure the cluster to automatically update the boot image each time the cluster is updated. If you are using the following procedure, ensure that automatic boot image updates are disabled and skew enforcement is in manual mode. For more information, see "Boot image management" and "Boot image skew enforcement". |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have installed the OpenShift CLI (oc).
You have set boot image skew enforcement to the manual or none mode. For more information, see "Configuring boot image skew enforcement".
You have disabled boot image management for the cluster. For more information, see "Disabling boot image management".
You have installed the AWS CLI.
You configured an AWS account to host the cluster. For information, see "Configuring an AWS account".
For a cluster that uses a default FCOS image, ensure you have met the following additional prerequisites:
You have downloaded the latest version of the OKD installation program from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
For a cluster that uses a default FCOS image, you have installed the jq program.
Determine if your cluster uses a default FCOS image or a custom FCOS image from the AWS Marketplace image:
Obtain the current AWS region where the cluster is installed and set the value in an environment variable by running the following command:
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}')
Obtain the current Amazon Machine Image (AMI) ID for your region and set the value in an environment variable by running the following command:
$ export CURRENT_AMI=$(oc get machineset -n openshift-machine-api -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.ami.id}')
Obtain the product ID for your AMI and set the value in an environment variable by running the following command:
$ export PRODUCT_ID=$(aws ec2 describe-images --image-ids "$CURRENT_AMI" --region "$REGION" \
--query 'Images[0].Name' --output text | \
grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
CURRENT_AMI and REGION are environment variables you created in previous steps.
Display the contents of the PRODUCT_ID environment variable by running the following command:
$ echo $PRODUCT_ID
If the output for the PRODUCT_ID environment variable is empty, your cluster uses a default FCOS image.
If the output for the PRODUCT_ID environment variable is not empty, as shown in the following example, your cluster uses an AWS Marketplace image.
59ead7de-2540-4653-a8b0-fa7926d5c845
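The grep pattern in the earlier command isolates the UUID-shaped product ID embedded in the AMI name. The following sketch demonstrates this against a fabricated sample name, not a real AMI:

```shell
# Hypothetical sketch: isolate a UUID-shaped product ID from an AMI name.
# The name below is a fabricated example, not a real AMI.
ami_name="sample-worker-59ead7de-2540-4653-a8b0-fa7926d5c845-hvm"
product_id=$(echo "$ami_name" | \
  grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')
echo "$product_id"
```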
If the command returns an error, and you are unable to determine your cluster variant, contact Red Hat Support. If Red Hat Support determines that your cluster uses an AWS Marketplace image, you can set the PRODUCT_ID environment variable with the appropriate product ID from the following table.
$ export PRODUCT_ID=<Product_ID_from_table>
| Variant | Product ID |
|---|---|
Determine the AMI for the new boot image by using one of the following steps, depending upon the type of image used in your cluster:
For a cluster that uses a default FCOS image, perform the following steps:
Set an environment variable with your cluster architecture by running the following command:
$ export ARCH=<architecture_type>
Replace <architecture_type> with one of the following values:
Specify aarch64 for the AArch64 or ARM64 architecture.
Specify ppc64le for the IBM Power® (ppc64le) architecture.
Specify s390x for the IBM Z® and IBM® LinuxONE (s390x) architecture.
Specify x86_64 for the x86_64 or AMD64 architecture.
You can find the architecture as a label in any MachineSet object.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
annotations:
capacity.cluster-autoscaler.kubernetes.io/labels: kubernetes.io/arch=amd64
# ...
Obtain the AMI for the new boot image and set an environment variable with the AMI by running the following command:
$ export AMI_ID=$(openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.aws.regions.\"${REGION}\".image")
ARCH and REGION are environment variables you created in previous steps.
View the FCOS version of the new boot image by running the following command:
$ openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.aws.regions.\"${REGION}\".release"
9.6.20251212-1
Make note of the FCOS version for later use.
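The AMI lookup can also be exercised offline against a minimal, hand-written stream fragment. The sketch below is illustrative only; the hardcoded sample payload and AMI ID stand in for the real openshift-install output:

```shell
# Hedged sketch: a minimal, hand-written stream fragment standing in for
# the output of "openshift-install coreos print-stream-json".
stream='{"architectures":{"x86_64":{"images":{"aws":{"regions":{"us-east-1":{"image":"ami-0123456789abcdef0","release":"9.6.20251212-1"}}}}}}}'
ARCH=x86_64 REGION=us-east-1
# Same jq filter as the real command, applied to the sample payload.
AMI_ID=$(printf '%s' "$stream" | jq -r ".architectures.\"${ARCH}\".images.aws.regions.\"${REGION}\".image")
echo "$AMI_ID"
```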
For a cluster that uses a custom FCOS image, perform the following steps:
Obtain a list of valid AMI images by running the following command:
$ aws ec2 describe-images --region "${REGION}" --filters "Name=name,Values=*${PRODUCT_ID}*" \
--query 'reverse(sort_by(Images, &CreationDate))[].[CreationDate,ImageId,Name]' --output table
REGION and PRODUCT_ID are environment variables you created in previous steps.
This command returns the AMIs ordered by creation date, with the latest images first. The FCOS version of each AMI is contained in the AMI name. Choose the latest image version available.
Make note of the Fedora CoreOS (FCOS) version for later use.
Set an environment variable with the AMI of the new boot image by running the following command:
$ export AMI_ID=<ami-value>
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
rhhdrbk-b5564-4pcm9-worker-0 3 3 3 3 123m
ci-ln-xj96skb-72292-48nm5-worker-d 1 1 1 1 27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"ami":{"id":"'${AMI_ID}'"}}}}}}}'
Replace <machineset_name> with the name of your machine set.
AMI_ID is the environment variable you created in a previous step.
If boot image skew enforcement in your cluster is set to the manual mode, update the boot image version in the MachineConfiguration object as described in "Updating the boot image skew enforcement version."
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
You can manually update the boot image for your Google Cloud cluster by configuring your machine sets to use the latest OKD image as the boot image to ensure that new nodes can scale up properly.
Use the following procedure to create environment variables that facilitate running the required commands, identify the correct boot image to use as the new boot image, and modify your machine sets to use that image.
The process differs for clusters that use a default Fedora CoreOS (FCOS) image, clusters that use a custom Fedora CoreOS (FCOS) image from the Google Cloud Marketplace, and user-provisioned infrastructure clusters. The following procedure helps determine which type of cluster you have.
For user-provisioned infrastructure Google Cloud clusters, which typically have no Machine API compute machine sets, you can provision new nodes based on the new boot image by updating the underlying Google Cloud infrastructure with the new boot image, such as instance templates, Deployment Manager templates, or Terraform configuration. For more information, see "Creating additional worker machines in Google Cloud".
|
For clusters that use a default Fedora CoreOS (FCOS) image, you can configure the cluster to automatically update the boot image each time the cluster is updated. If you are using the following procedure, ensure that automatic boot image updates are disabled and skew enforcement is in manual mode. For more information, see "Boot image management" and "Boot image skew enforcement". |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have installed the OpenShift CLI (oc).
You have set boot image skew enforcement to the manual or none mode. For more information, see "Configuring boot image skew enforcement".
You have disabled boot image management for the cluster. For more information, see "Disabling boot image management".
For a cluster that uses a default FCOS image, ensure that your cluster meets the following additional prerequisites:
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the jq program.
For a user-provisioned infrastructure cluster, ensure that your cluster meets the following additional prerequisites:
You have downloaded the latest version of the OKD installation program from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the Google Cloud CLI.
You have created a Google Cloud service account.
Determine which image in the machine set is the boot image and set the value in an environment variable:
Set the boot image value in an environment variable by running the following command:
$ export BOOT_DISK_INDEX=$(oc get machineset -n openshift-machine-api -o json | \
jq '.items[0].spec.template.spec.providerSpec.value.disks | map(.boot == true) | index(true)')
Display the contents of the BOOT_DISK_INDEX environment variable by running the following command:
$ echo $BOOT_DISK_INDEX
0
If the output for the BOOT_DISK_INDEX environment variable is null, none of the disks in the machine set has the boot field explicitly set. In this case, the boot disk is typically the first disk.
null
If the BOOT_DISK_INDEX output is null, set the boot image to the first image by running the following command:
$ export BOOT_DISK_INDEX=0
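The null handling above amounts to a simple default. This sketch shows the branch with a sample value standing in for the jq output:

```shell
# Hedged sketch: default the boot-disk index to 0 when the jq query
# reports null (no disk has the boot field explicitly set).
BOOT_DISK_INDEX="null"   # sample value from the earlier query
if [ "$BOOT_DISK_INDEX" = "null" ]; then
  BOOT_DISK_INDEX=0
fi
echo "$BOOT_DISK_INDEX"
```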
Determine if your cluster uses a default FCOS image or a GCP Marketplace FCOS image, or is a user-provisioned infrastructure cluster:
Obtain the name of the current boot image and set the name as an environment variable by running the following command:
$ export CURRENT_IMAGE=$(oc get machineset -n openshift-machine-api -o json | \
jq -r ".items[0].spec.template.spec.providerSpec.value.disks[${BOOT_DISK_INDEX}].image")
BOOT_DISK_INDEX is the environment variable you created in a previous step.
View the name of the image by running the following command:
$ echo $CURRENT_IMAGE
projects/rhcos-cloud/global/images/rhcos-416-94-202510081640-0-gcp-x86-64
Compare the prefix of the image name to the entries in the following table:
| Current image prefix | Variant |
|---|---|
| rhcos- | Default |
| redhat-coreos- | GCP Marketplace FCOS image |
| No machine set present or custom prefix | User-provisioned infrastructure |
Default FCOS clusters use images from the rhcos-cloud project in the rhcos-<version>-<platform>-<arch> format.
GCP Marketplace FCOS clusters use images from the redhat-marketplace-public project in the redhat-coreos-<offering>-<version>-<arch>-<date> format.
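The prefix comparison can be expressed as a single case statement. This sketch uses the sample image name from the step above; the classification labels are illustrative:

```shell
# Hypothetical sketch: classify the cluster by the image path prefix,
# using the sample image name from the earlier step.
CURRENT_IMAGE="projects/rhcos-cloud/global/images/rhcos-416-94-202510081640-0-gcp-x86-64"
case "$CURRENT_IMAGE" in
  projects/rhcos-cloud/*)               variant="Default" ;;
  projects/redhat-marketplace-public/*) variant="GCP Marketplace FCOS image" ;;
  *)                                    variant="User-provisioned infrastructure" ;;
esac
echo "$variant"
```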
|
The following images are the latest Google Cloud Marketplace images for OKD:
Red Hat has not published Marketplace images for OKD later than these OKD 4.13 images. If the current boot image in your cluster matches one of the listed images, no further action is necessary. |
Obtain the name of the new boot image by using one of the following steps, depending upon your cluster:
For a cluster that uses a default FCOS image, perform the following steps:
Set an environment variable with your cluster architecture by running the following command:
$ export ARCH=<architecture_type>
Replace <architecture_type> with one of the following values:
Specify aarch64 for the AArch64 or ARM64 architecture.
Specify ppc64le for the IBM Power® (ppc64le) architecture.
Specify s390x for the IBM Z® and IBM® LinuxONE (s390x) architecture.
Specify x86_64 for the x86_64 or AMD64 architecture.
You can find the architecture as a label in any MachineSet object.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
annotations:
capacity.cluster-autoscaler.kubernetes.io/labels: kubernetes.io/arch=amd64
# ...
Set an environment variable with the name of the new boot image by running the following command:
$ export GCP_IMAGE=$(openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.gcp.name")
ARCH is the environment variable you created in a previous step.
Set an environment variable with the Google Cloud project of the new boot image by running the following command:
$ export GCP_PROJECT=$(openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.gcp.project")
ARCH is the environment variable you created in a previous step.
View the Fedora CoreOS (FCOS) version of the new boot image by running the following command:
$ openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.gcp.release"
9.6.20251212-1
Make note of the FCOS version for later use.
For a cluster that uses a GCP Marketplace FCOS image that is earlier than the 4.13 images listed above, perform the following steps:
Set an environment variable with the name of the new boot image by running the following command:
$ export GCP_IMAGE=<image_name>
Replace <image_name> with one of the following values:
Specify redhat-coreos-ocp-413-x86-64-202305021736 for an OKD cluster.
Specify redhat-coreos-opp-413-x86-64-202305021736 for an OpenShift Platform Plus cluster.
Specify redhat-coreos-oke-413-x86-64-202305021736 for an OpenShift Kubernetes Engine cluster.
Set an environment variable with the Google Cloud project of the new boot image by running the following command:
$ export GCP_PROJECT=redhat-marketplace-public
For a user-provisioned infrastructure cluster, perform the following steps:
Set an environment variable with your cluster architecture by running the following command:
$ export ARCH=<architecture_type>
Replace <architecture_type> with one of the following values:
Specify aarch64 for the AArch64 or ARM64 architecture.
Specify ppc64le for the IBM Power® (ppc64le) architecture.
Specify s390x for the IBM Z® and IBM® LinuxONE (s390x) architecture.
Specify x86_64 for the x86_64 or AMD64 architecture.
Set an environment variable with the name of the new boot image by running the following command:
$ export GCP_IMAGE=$(openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.gcp.name")
ARCH is the environment variable you created in a previous step.
Set an environment variable with the Google Cloud project of the new boot image in your cluster by running the following command:
$ export GCP_PROJECT=$(openshift-install coreos print-stream-json | jq -r ".architectures.\"${ARCH}\".images.gcp.project")
ARCH is the environment variable you created in a previous step.
If the default FCOS image is not accessible in your environment, for example in a restricted or disconnected environment, you can download the new boot image tar file and upload it as a custom image to your own Google Cloud project before updating your Google Cloud instance templates.
Update your Google Cloud instance templates to reference the new image, and then create new instances from the updated templates. The exact steps depend on how your infrastructure was provisioned. For more information, see "Creating additional worker machines in Google Cloud".
After creating the new instances, you can proceed to the verification steps, unless your user-provisioned infrastructure cluster has any Machine API machine sets, such as for Day-2 scaling. You can update those machine sets as described in the following steps.
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
ci-ln-xw7zmyt-72292-x7nqv-worker-a 1 1 1 1 53m
ci-ln-xw7zmyt-72292-x7nqv-worker-b 1 1 1 1 53m
ci-ln-xw7zmyt-72292-x7nqv-worker-c 1 1 1 1 53m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type json \
-p '[{"op": "replace", "path": "/spec/template/spec/providerSpec/value/disks/'${BOOT_DISK_INDEX}'/image", "value": "projects/'${GCP_PROJECT}'/global/images/'${GCP_IMAGE}'"}]'
Replace <machineset_name> with the name of your machine set.
BOOT_DISK_INDEX, GCP_PROJECT, and GCP_IMAGE are environment variables you created in previous steps.
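The patch writes the image as a full Google Cloud resource path built from the project and image name. The following sketch assembles that path from the sample values used earlier in this procedure:

```shell
# Hedged sketch: build the full resource path that the patch writes into
# the machine set. Both values are the sample names from earlier steps.
GCP_PROJECT=rhcos-cloud
GCP_IMAGE=rhcos-416-94-202510081640-0-gcp-x86-64
image_path="projects/${GCP_PROJECT}/global/images/${GCP_IMAGE}"
echo "$image_path"
```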
If boot image skew enforcement in your cluster is set to the manual mode, update the version of the new boot image in the MachineConfiguration object as described in "Updating the boot image skew enforcement version".
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
Verify that the boot image is the same FCOS version as the image you noted in a previous step by running the following command:
$ echo $GCP_IMAGE
GCP_IMAGE is the environment variable you created in a previous step.
For a bare-metal cluster that was installed with OKD version 4.9 or earlier, you need to change how the cluster provisions new nodes in order to update the boot image used with those nodes. Using an up-to-date boot image ensures that any new nodes can scale up properly.
|
The standard boot image management feature is not supported for bare-metal clusters. |
If your bare-metal cluster was installed with OKD version 4.10 or later, boot images are kept current by the Cluster Version Operator (CVO) and are not at risk of boot image skew. Skew enforcement is disabled for the cluster by default. No further action on your part is required to maintain the boot image versioning.
If your bare-metal cluster was installed with OKD version 4.9 or earlier, the cluster is using the legacy qcow2-based provisioning method. Boot images in these clusters are not managed by the CVO and could be significantly out of date. Follow the steps below to migrate the cluster to use the machine-os-images provisioning method, which was introduced in OKD 4.10. This migration ensures that the cluster always uses the release version as the boot image when a scale-up is taking place.
Use the following procedure to enable the install_coreos deployment method and disable the qcow2 image cache. With these changes, the Cluster Baremetal Operator (CBO) uses the machine-os-images container from the release payload to provision new nodes. The cluster then has no skew risk, the same as a cluster installed at version 4.10 or later. Skew enforcement is automatically disabled after the migration is complete.
|
Boot image updates are not required for Agent-based Installer clusters. The boot image for Agent-based Installer nodes is generated from the current release payload. |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have the OpenShift CLI (oc) installed.
A new physical host must be registered and in the available state, and an associated BareMetalHost object must be present in the openshift-machine-api namespace, so that you can scale up a new machine to verify the procedure.
Check whether your cluster is using the legacy boot image provisioning path by running the following command:
$ oc get provisioning provisioning-configuration \
-o jsonpath='{.spec.provisioningOSDownloadURL}'
If the output is non-empty, your cluster was installed with OKD version 4.9 or earlier. Boot images are not managed by the Cluster Version Operator (CVO) and could be significantly out of date. Follow the steps in this procedure to migrate to the current provisioning path.
If the output is empty, your cluster was installed with OKD version 4.10 or later. Boot images are kept current by the Cluster Version Operator (CVO) and are not at risk of skew. Skew enforcement is disabled for this cluster. No further action on your part is required to maintain the boot image versioning.
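This decision can be sketched as a small shell branch. The sample URL below is a stand-in for whatever your cluster returns; on a live cluster, populate the variable from the oc command shown above:

```shell
# Sample value for illustration; on a live cluster, set it with:
#   url=$(oc get provisioning provisioning-configuration \
#     -o jsonpath='{.spec.provisioningOSDownloadURL}')
url="http://203.0.113.10/images/rhcos-4.9.qcow2"

if [ -n "$url" ]; then
  mode="legacy"    # installed with 4.9 or earlier; migration required
else
  mode="current"   # installed with 4.10 or later; no action needed
fi
echo "$mode provisioning path"
```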
Clear the legacy image fields and enable the install_coreos deployment method:
Migrate each machine set to the machine-os-images provisioning path by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge \
-p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"customDeploy":{"method":"install_coreos"},"image":{"url":"","checksum":""}}}}}}}'
Replace <machineset_name> with the name of your machine set.
Clear the legacy download URL by running the following command:
$ oc patch provisioning provisioning-configuration --type=merge -p '{"spec":{"provisioningOSDownloadURL":""}}'
This process migrates the cluster to the machine-os-images provisioning method, which ensures that the latest boot image is used for scaling nodes.
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
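If you want just the version string for scripting, you can extract it from the aleph file with sed; a minimal sketch on a sample payload (on a live cluster, pipe the oc debug output above instead):

```shell
# Sample aleph content; on a cluster, produce it with the oc debug command above.
aleph='{ "ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive", "version": "9.6.20251212-1" }'

# Pull out the "version" field without needing jq.
version=$(printf '%s' "$aleph" | sed -n 's/.*"version": *"\([^"]*\)".*/\1/p')
echo "$version"
```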
For an IBM Cloud cluster, you can manually update the boot image for the compute nodes in your cluster by configuring your machine sets to use the latest OKD image as the boot image to help ensure any new nodes can scale up properly.
|
The standard boot image management feature is not supported for IBM Cloud clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain IBM Cloud authentication credentials, download a boot image, upload that image to the IBM Cloud image service, and modify your compute machine sets to use the new boot image.
This procedure uses the default IBM Cloud Object Storage (COS) bucket in your cluster, which was created during cluster installation. Each COS bucket has a specific Cloud Resource Name (CRN), which the IBM Cloud CLI uses to select the correct COS bucket. The following procedure shows how to obtain the CRN for the default COS bucket. For more information on the CRN, see Cloud Resource Names in the IBM Cloud documentation.
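As an illustration of the CRN format (the values below are made up), a CRN is a colon-delimited identifier whose fifth field names the service:

```shell
# Hypothetical CRN for illustration only; your real CRN comes from the
# ibmcloud resource service-instance command later in this procedure.
crn="crn:v1:bluemix:public:cloud-object-storage:global:a/0123456789abcdef:11111111-2222-3333-4444-555555555555::"

# Field 5 of the colon-delimited CRN is the service name.
service=$(printf '%s' "$crn" | cut -d: -f5)
echo "$service"
```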
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have the OpenShift CLI (oc) installed.
You have the IBM Cloud CLI installed.
You have installed the IBM Cloud Virtual Private Cloud (VPC) CLI plugin.
You have installed the IBM Cloud Object Storage plugin.
Obtain the resource group and region from the infrastructure object and set the values in environment variables by running the following commands:
$ export RESOURCE_GROUP=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.ibmcloud.location}')
Generate an IBM Cloud API key and log in to your IBM Cloud:
Follow the instructions in Creating your IBM Cloud API key in the IBM Cloud documentation to generate the API key.
To ensure that the key has the appropriate permissions, you must use the same IBM Cloud account used to create the OKD cluster when generating the key.
Set the API key in an environment variable by running the following command:
$ export IBM_API_KEY=<Your_IBM_Cloud_API_Key>
Log in to your IBM Cloud by running the following command:
$ ibmcloud login --apikey ${IBM_API_KEY} -r ${REGION} -g ${RESOURCE_GROUP}
IBM_API_KEY, REGION, and RESOURCE_GROUP are environment variables you created in previous steps.
API endpoint: https://cloud.ibm.com
Authenticating...
Retrieving API key token...
OK
Targeted account OpenShift-QE (xxxxxxxxxxxxxxxx) <-> xxxxxx
Targeted resource group xxxxxxx-ibm3h-9pbgg
Targeted region eu-gb
API endpoint: https://cloud.ibm.com
Region: eu-gb
User: xxxxx
Account: xxxxx
Resource group: xxxxx
Obtain the URL of the FCOS image to use as the boot image and set the location in an environment variable by running one of the following commands, based on your cluster architecture:
Linux (x86_64, amd64):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.s390x.artifacts.ibmcloud.formats["qcow2.gz"].disk.location')
Obtain the boot image:
Download the image by using the following command:
$ curl -L -o /tmp/rhcos-new.qcow2.gz "${RHCOS_URL}"
RHCOS_URL is the environment variable you created in a previous step.
Decompress the downloaded image by running the following command:
$ gunzip /tmp/rhcos-new.qcow2.gz
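Before decompressing a large download, you can check archive integrity with gunzip -t, which exits non-zero on a corrupt or truncated file. A quick sketch, demonstrated on a small sample archive so it runs anywhere:

```shell
# Create a small sample archive so the check can be demonstrated anywhere;
# for the real image, run: gunzip -t /tmp/rhcos-new.qcow2.gz
sample=$(mktemp)
printf 'demo payload' | gzip > "$sample"

if gunzip -t "$sample"; then
  status="ok"
else
  status="corrupt"
fi
rm -f "$sample"
echo "archive integrity: $status"
```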
Upload the boot image to the default IBM Cloud Object Storage (COS) bucket:
Obtain the CRN for your COS bucket and set the CRN in an environment variable by running the following command:
$ export COS_CRN=$(ibmcloud resource service-instance "${RESOURCE_GROUP}-cos" --output json | jq -r '.[0].crn')
Optional: Check that the CRN is correct by running the following command:
$ echo ${COS_CRN}
Configure the default COS bucket with the CRN by running the following command:
$ ibmcloud cos config crn --crn "${COS_CRN}"
COS_CRN is the environment variable you created in a previous step.
Upload the boot image to the COS bucket by running the following command:
$ ibmcloud cos object-put --bucket "${RESOURCE_GROUP}-vsi-image" --key "rhcos-new.qcow2" --body /tmp/rhcos-new.qcow2 --region "${REGION}"
RESOURCE_GROUP and REGION are environment variables you created in previous steps.
Optional: Check that the image was uploaded to the COS bucket by running the following command:
$ ibmcloud cos objects --bucket "${RESOURCE_GROUP}-vsi-image" --region "${REGION}"
RESOURCE_GROUP and REGION are environment variables you created in previous steps.
OK
Found 2 objects in bucket 'xxxxxx-ibm3h-9pbgg-vsi-image':
Set an environment variable to create a descriptive name for your boot image:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
Create a custom image for your IBM Cloud Virtual Private Cloud (VPC) from the uploaded boot image by running one of the following commands, based on your cluster architecture:
Linux (x86_64, amd64):
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name rhel-coreos-stable-amd64 --resource-group-name "${RESOURCE_GROUP}"
You must set the --os-name argument to rhel-coreos-stable-amd64 as shown. This parameter configures several Fedora CoreOS (FCOS) default values that are required.
RESOURCE_GROUP, IMAGE_NAME, and REGION are environment variables you created in previous steps.
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ ibmcloud is image-create "${RESOURCE_GROUP}-${IMAGE_NAME}" --file "cos://${REGION}/${RESOURCE_GROUP}-vsi-image/rhcos-new.qcow2" --os-name red-8-s390x-byol --resource-group-name "${RESOURCE_GROUP}"
You must set the --os-name argument to red-8-s390x-byol as shown. This parameter configures several Fedora CoreOS (FCOS) default values that are required.
RESOURCE_GROUP, IMAGE_NAME, and REGION are environment variables you created in previous steps.
Optional: Observe the new image until its status changes from pending to available by running the following command:
$ watch ibmcloud is image "${RESOURCE_GROUP}-${IMAGE_NAME}"
RESOURCE_GROUP and IMAGE_NAME are environment variables you created in previous steps.
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
rhhdrbk-b5564-4pcm9-worker-0 3 3 3 3 123m
ci-ln-xj96skb-72292-48nm5-worker-d 1 1 1 1 27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge \
-p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${RESOURCE_GROUP}'-'${IMAGE_NAME}'"}}}}}}'
Replace <machineset_name> with the name of your machine set.
IMAGE_NAME is the environment variable you created in a previous step.
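The single-quote gymnastics in the patch string are easy to get wrong: the single-quoted JSON is closed and reopened around each variable so the shell can expand it. A sketch that builds the patch first so you can inspect the result before applying it (sample variable values shown):

```shell
# Sample values for illustration; in the procedure these come from earlier steps.
RESOURCE_GROUP="demo-rg"
IMAGE_NAME="rhcos-9.6-boot-image"

# The single quotes are closed before each variable and reopened after it,
# so ${RESOURCE_GROUP} and ${IMAGE_NAME} are expanded by the shell.
patch='{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${RESOURCE_GROUP}'-'${IMAGE_NAME}'"}}}}}}'
echo "$patch"
```

You can then pass the inspected string to oc patch with -p "$patch" instead of inlining it.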
If boot image skew enforcement in your cluster is set to the manual mode, update the version of the new boot image in the MachineConfiguration object as described in "Updating the boot image skew enforcement version".
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
After you migrate all machine sets to the new boot image, the old boot image is no longer needed. You can remove the old boot image from your COS bucket.
You can manually update the boot image for your Nutanix cluster by configuring your machine sets to use the latest OKD image as the boot image to ensure that new nodes can scale up properly.
|
The standard boot image management feature is not supported for Nutanix clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain Nutanix authentication credentials, import a boot image into Nutanix Prism Central, and modify your compute machine sets to use the new boot image.
This procedure requires Nutanix authentication credentials, which you need to access Prism Central. If you need to recover your credentials, you can get them from an OKD secret, the name of which you can find in the default compute machine set. You can decrypt this secret and export the credentials to create the clouds.yaml file, as described in the following procedure.
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the OpenShift CLI (oc).
You have installed the jq program.
If you need to recover your Nutanix authentication credentials, perform the following steps:
Obtain the name of the secret that contains your credentials by running the following command:
$ oc get machineset -n openshift-machine-api -o yaml | grep credentialsSecret -A 1
credentialsSecret:
name: nutanix-credentials
Decrypt the secret by running the following command:
$ oc get secret <secret_name> -n openshift-machine-api -o jsonpath='{.data.credentials}' | base64 -d
Replace <secret_name> with the name of the secret, which you obtained in the previous step.
[{"type":"basic_auth","data":{"prismCentral":{"username":"","password":""},"prismElements":null}}]
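Because jq is already a prerequisite for this procedure, you can pull the username and password fields straight out of the decoded secret instead of copying them by hand. A sketch on a sample payload (real values come from the oc command above; the credentials shown are placeholders):

```shell
# Sample decoded credentials; on a cluster, substitute the output of the
# oc get secret ... | base64 -d command shown above.
creds='[{"type":"basic_auth","data":{"prismCentral":{"username":"admin","password":"example-pass"},"prismElements":null}}]'

USER=$(printf '%s' "$creds" | jq -r '.[0].data.prismCentral.username')
PASS=$(printf '%s' "$creds" | jq -r '.[0].data.prismCentral.password')
echo "$USER"
```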
Set an environment variable for the Nutanix username by running the following command:
$ export USER="<username>"
Set an environment variable for the Nutanix password by running the following command:
$ export PASS="<password>"
If you need to recover your IP address for Prism Central, run the following command:
$ oc get configmap cloud-provider-config -n openshift-config -o jsonpath='{.data.config}' | grep prismCentral -A 8
"prismCentral": {
"address": "",
"port": 9440,
"credentialRef": {
"kind": "Secret",
"name": "nutanix-credentials",
"namespace": "openshift-cloud-controller-manager"
}
},
where:
prismCentral.address: Specifies the Prism Central IP address.
Set an environment variable for the Prism Central IP address by running the following command:
$ export PC_IP="<prism_central_ip_address>"
Obtain the boot image URL and upload the image to Prism Central:
Obtain the URL of the FCOS image you want to use as the boot image and set the location in an environment variable by running the following command:
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.nutanix.formats.qcow2.disk.location')
Set an environment variable to create a descriptive name for your boot image in Prism Central by running the following command:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image in Prism Central, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
$ export IMAGE_NAME="rhcos-9.6-boot-image"
Upload the image to Prism Central by running the following command:
$ curl -k -u "$USER:$PASS" \
-X POST "https://$PC_IP:9440/api/nutanix/v3/images" \
-H "Content-Type: application/json" \
-d '{
"spec": {
"name": "'"$IMAGE_NAME"'",
"resources": {
"image_type": "DISK_IMAGE",
"source_uri": "'"$RHCOS_URL"'"
}
},
"metadata": {
"kind": "image"
}
}'
USER, PASS, PC_IP, IMAGE_NAME, and RHCOS_URL are environment variables you created in previous steps.
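The same quoting pattern applies to the request body: assembling it in a variable first lets you validate the JSON with jq before POSTing it. A sketch with sample values (the name and URL below are placeholders for the environment variables set earlier):

```shell
# Sample values for illustration; in the procedure these are set in earlier steps.
IMAGE_NAME="rhcos-9.6-boot-image"
RHCOS_URL="https://example.com/rhcos-nutanix.qcow2"

# Build the request body; the '"$VAR"' sequences close the single-quoted
# string so the shell can expand each variable inside double quotes.
body='{
  "spec": {
    "name": "'"$IMAGE_NAME"'",
    "resources": {
      "image_type": "DISK_IMAGE",
      "source_uri": "'"$RHCOS_URL"'"
    }
  },
  "metadata": { "kind": "image" }
}'

# Validate and pretty-print before sending with curl -d "$body".
printf '%s' "$body" | jq .
```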
Optional: Verify that the image is uploaded by running the following command:
$ curl -k -u "$USER:$PASS" \
-X POST "https://$PC_IP:9440/api/nutanix/v3/images/list" \
-H "Content-Type: application/json" \
-d '{
"kind": "image",
"filter": "name=='"$IMAGE_NAME"'"
}'
{
"name": "<image-name>",
"state": "COMPLETE"
}
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
rhhdrbk-b5564-4pcm9-worker-0 3 3 3 3 123m
ci-ln-xj96skb-72292-48nm5-worker-d 1 1 1 1 27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":{"type":"name","name":"'${IMAGE_NAME}'"}}}}}}}'
Replace <machineset_name> with the name of your machine set. IMAGE_NAME is the environment variable you created in a previous step.
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
Verify that the boot image is the same version as the image you uploaded in a previous step by running the following command:
$ echo ${RHCOS_URL}
https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20251212-1/x86_64/rhcos-9.6.20251212-1-nutanix.x86_64.qcow2
After you migrate all machine sets to the new boot image, you can remove the old boot image from Prism Central.
For an OpenStack cluster, you can manually update the boot image by configuring your machine sets to use the latest OKD image as the boot image to help ensure that any new nodes can scale up properly.
|
The standard boot image management feature is not supported for OpenStack clusters. |
The following procedure, which includes steps to create environment variables that facilitate running the required commands, shows how to obtain OpenStack authentication credentials, download a boot image, upload that image to the OpenStack image service (Glance), and modify your compute machine sets to use the new boot image.
This procedure requires the clouds.yaml file, which is needed by the OpenStackClient CLI to connect to your OpenStack cloud. If you need to re-create this file, you can get the OpenStack credentials from an OKD secret, the name of which you can find in the default compute machine set. You can decrypt this secret and export the credentials to create the clouds.yaml file, as described in the following procedure.
|
Updating control plane machine sets is not supported in OpenStack. |
You have completed the general boot image prerequisites as described in the "Prerequisites" section of the OKD Boot Image Updates knowledgebase article.
You have downloaded the latest version of the OKD installation program, openshift-install, from the OpenShift Cluster Manager. For more information, see "Obtaining the installation program."
You have installed the OpenShift CLI (oc).
You have installed the OpenStackClient CLI. For more information, see the FCOS documentation.
You have installed the jq program.
If you need to re-create the clouds.yaml file, perform the following steps:
Obtain the name of the secret that contains your credentials by running the following command:
$ oc get machineset -n openshift-machine-api -o yaml | grep cloudsSecret -A 1
cloudsSecret:
name: openstack-cloud-credentials
Decrypt the secret and add the contents to the clouds.yaml file by running the following command:
$ oc get secret <secret_name> -n openshift-machine-api -o jsonpath='{.data.clouds\.yaml}' | base64 -d > <file_path>/clouds.yaml
Replace <secret_name> with the name of the secret, which you obtained in the previous step, and <file_path> with the path to the clouds.yaml file.
Optional: Verify the contents of the clouds.yaml file by running the following command:
$ cat <file_path>/clouds.yaml
Replace <file_path> with the path to the clouds.yaml file.
clouds:
openstack:
auth:
auth_url: https://your-openstack-url:13000
username: "your-username"
password: "your-password"
project_name: "your-project"
user_domain_name: "Default"
project_domain_name: "Default"
Set an environment variable for the location of the clouds.yaml file by running the following command:
$ export OS_CLIENT_CONFIG_FILE=<file_path>/clouds.yaml
Replace <file_path> with the path to the clouds.yaml file.
The OpenStackClient CLI uses this environment variable to locate the clouds.yaml file.
Obtain the name of your OpenStack cloud from the default compute machine set and set the name in an environment variable by running the following command:
$ export CLOUD_NAME=$(oc get machineset -n openshift-machine-api -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.cloudName}')
Obtain the URL of the FCOS image you want to use as the boot image and set the location in an environment variable by running one of the following commands, based on cluster architecture:
Linux (x86_64, amd64):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.x86_64.artifacts.openstack.formats."qcow2.gz".disk.location')
Linux on IBM Z® and IBM® LinuxONE (s390x):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.s390x.artifacts.openstack.formats."qcow2.gz".disk.location')
Linux on ARM (aarch64, arm64):
$ export RHCOS_URL=$(openshift-install coreos print-stream-json | jq -r \
'.architectures.aarch64.artifacts.openstack.formats."qcow2.gz".disk.location')
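The three variants differ only in the architecture key passed to jq. A small sketch of the mapping; the hard-coded sample arch stands in for your cluster's node architecture, which you could read with a command such as oc get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}':

```shell
# Sample architecture for illustration; Kubernetes reports GOARCH-style
# names (amd64, arm64, s390x), while the stream JSON uses uname-style keys.
arch="arm64"

case "$arch" in
  x86_64|amd64)   key="x86_64" ;;
  s390x)          key="s390x" ;;
  aarch64|arm64)  key="aarch64" ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "$key"
```

The resulting key replaces the architecture segment in the jq filter, for example .architectures.aarch64.artifacts.openstack.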
Obtain the boot image and upload the image to the OpenStack image service (Glance):
Download the image by using the following command:
$ curl -L -o /tmp/rhcos-new.qcow2.gz "${RHCOS_URL}"
RHCOS_URL is the environment variable you created in a previous step.
Decompress the downloaded image by using the following command:
$ gunzip <file_path>/rhcos-new.qcow2.gz
Replace <file_path> with the path to the location for the image.
Set an environment variable to create a descriptive name for your boot image in Glance by running the following command:
$ export IMAGE_NAME="<descriptive_image_name>"
Setting a descriptive name for your boot image, such as using the Fedora CoreOS (FCOS) version number in the image name, makes it easier to track which version is currently deployed if you update the cluster in the future.
$ export IMAGE_NAME="rhcos 9.6 boot image"
Upload the image to Glance by using the following command:
$ openstack --os-cloud "${CLOUD_NAME}" image create "${IMAGE_NAME}" \
--disk-format qcow2 \
--container-format bare \
--file <file_path>/rhcos-new.qcow2 \
--property os_type=linux \
--property os_distro=rhcos
Replace <file_path> with the path to the location for the image.
CLOUD_NAME and IMAGE_NAME are environment variables you created in previous steps.
It might take several minutes for the image to upload. When the upload is complete, details about the image are displayed, similar to the following example:
+------------------+----------------------------------+
| Field            | Value                            |
+------------------+----------------------------------+
| checksum         | 469fa549f706617ff15b41bd2a919679 |
# ...
| disk_format      | qcow2                            |
# ...
| name             | rhcos 9.6 boot image             |
Optional: Verify that the image has uploaded and is in active state by running the following command:
$ openstack --os-cloud "${CLOUD_NAME}" image show "${IMAGE_NAME}" -f json | jq '{name: .name, status: .status}'
{
"name": "rhcos 9.6 boot image",
"status": "active"
}
Update each of your compute machine sets to include the new boot image:
Obtain the name of your machine sets for use in the following step by running the following command:
$ oc get machineset -n openshift-machine-api
NAME DESIRED CURRENT READY AVAILABLE AGE
rhhdrbk-b5564-4pcm9-worker-0 3 3 3 3 123m
ci-ln-xj96skb-72292-48nm5-worker-d 1 1 1 1 27m
Edit a machine set to update the image field in the providerSpec stanza to add your boot image by running the following command:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type merge -p \
'{"spec":{"template":{"spec":{"providerSpec":{"value":{"image":"'${IMAGE_NAME}'"}}}}}}'
Replace <machineset_name> with the name of your machine set.
IMAGE_NAME is the environment variable you created in a previous step.
Scale up a machine set to check that the new node is using the new boot image:
Increase the machine set replicas by one to trigger a new machine by running the following command:
$ oc scale --replicas=<count> machineset <machineset_name> -n openshift-machine-api
where:
<count>: Specifies the total number of replicas, including any existing replicas, that you want for this machine set.
<machineset_name>: Specifies the name of the machine set to scale.
Optional: View the status of the machine set as it provisions by running the following command:
$ oc get machines.machine.openshift.io -n openshift-machine-api -w
It can take several minutes for the machine set to achieve the Running state.
Verify that the new node has been created and is in the Ready state by running the following command:
$ oc get nodes
Verify that the new node is using the new boot image by running the following command:
$ oc debug node/<new_node> -- chroot /host cat /sysroot/.coreos-aleph-version.json
Replace <new_node> with the name of your new node.
{
# ...
"ref": "docker://ostree-image-signed:oci-archive:/rhcos-9.6.20251212-1-ostree.x86_64.ociarchive",
"version": "9.6.20251212-1"
}
where:
version: Specifies the boot image version.
After you migrate all machine sets to the new boot image, you can remove the old boot image from Glance.