| 
 You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
 | 
You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
Each self-managed Red Hat OpenShift subscription includes entitlements for OKD and other OpenShift-related components. These entitlements are included for running OKD control plane and infrastructure workloads and do not need to be accounted for during sizing.
To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components:
Kubernetes and OKD control plane services
The default router
The integrated container image registry
The HAProxy-based Ingress Controller
The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
Cluster aggregated logging
Red Hat Quay
Red Hat OpenShift Data Foundation
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Security for Kubernetes
Red Hat OpenShift GitOps
Red Hat OpenShift Pipelines
Red Hat OpenShift Service Mesh
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document.
To create an infrastructure node, you can use a machine set, label the node, or use a machine config pool.
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
Use the sample compute machine set for your cloud.
The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-infra-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra (3)
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 (4)
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> (5)
            region: <region> (6)
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-node
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-lb
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-<zone> (7)
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id>
              value: owned
            - name: <custom_tag_name> (8)
              value: <custom_tag_value>
          userDataSecret:
            name: worker-user-data
      taints: (9)
        - key: node-role.kubernetes.io/infra
          effect: NoSchedule
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster | 
| 2 | Specify the infrastructure ID, infra role node label, and zone. | 
| 3 | Specify the infra role node label. | 
| 4 | Specify a valid Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region. To reuse the AMI ID from an existing compute machine set, see the example query after this table. | 
| 5 | Specify the zone name, for example, us-east-1a. | 
| 6 | Specify the region, for example, us-east-1. | 
| 7 | Specify the infrastructure ID and zone. | 
| 8 | Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com. | 
| 9 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
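For example, one way to look up the AMI ID that an existing compute machine set in your cluster uses is a jsonpath query like the following. The machine set name <infrastructure_id>-worker-<zone> is an assumption based on the default worker machine set naming; adjust it to match a name returned by oc get machinesets -n openshift-machine-api.
$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
  get machineset/<infrastructure_id>-worker-<zone>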
Machine sets running on AWS support non-guaranteed Spot Instances. You can save on costs by using Spot Instances at a lower price compared to
On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file.
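For illustration, a minimal sketch of where the field sits in the machine set YAML (not a complete manifest; the surrounding fields are the ones shown in the sample above):
spec:
  template:
    spec:
      providerSpec:
        value:
          # ... existing provider fields ...
          spotMarketOptions: {} # request Spot Instances; optionally set maxPrice to cap the hourly price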
This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: infra (2)
    machine.openshift.io/cluster-api-machine-type: infra
  name: <infrastructure_id>-infra-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name>
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image: (4)
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest (5)
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> (6)
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          tags:
            - name: <custom_tag_name> (7)
              value: <custom_tag_value>
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet
          zone: "1" (8)
      taints: (9)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
You can obtain the subnet by querying an existing compute machine set, for example:
$ oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' get machineset/<machineset_name>
You can obtain the vnet by querying an existing compute machine set, for example:
$ oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' get machineset/<machineset_name> | 
| 2 | Specify the infra node label. | 
| 3 | Specify the infrastructure ID, infra node label, and region. | 
| 4 | Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". | 
| 5 | Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. | 
| 6 | Specify the region to place machines on. | 
| 7 | Optional: Specify custom tags in your machine set. Provide the tag name in the <custom_tag_name> field and the corresponding tag value in the <custom_tag_value> field. | 
| 8 | Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. | 
| 9 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
Machine sets running on Azure support non-guaranteed Spot VMs. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file.
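For illustration, a minimal sketch of where the field sits in the machine set YAML (not a complete manifest; the surrounding fields are the ones shown in the sample above):
spec:
  template:
    spec:
      providerSpec:
        value:
          # ... existing provider fields ...
          spotVMOptions: {} # request Spot VMs; optionally set maxPrice to cap the hourly price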
This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure Stack Hub region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra> (2)
  name: <infrastructure_id>-infra-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: "" (2)
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          availabilitySet: <availability_set> (6)
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (1)
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> (5)
          managedIdentity: <infrastructure_id>-identity (1)
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg (1)
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructure_id>-<role>-subnet  (1) (2)
          userDataSecret:
            name: worker-user-data (2)
          vmSize: Standard_DS4_v2
          vnet: <infrastructure_id>-vnet (1)
          zone: "1" (7)
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
You can obtain the subnet by querying an existing compute machine set, for example:
$ oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' get machineset/<machineset_name>
You can obtain the vnet by querying an existing compute machine set, for example:
$ oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' get machineset/<machineset_name> | 
| 2 | Specify the <infra> node label. | 
| 3 | Specify the infrastructure ID, <infra> node label, and region. | 
| 4 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
| 5 | Specify the region to place machines on. | 
| 6 | Specify the availability set for the cluster. | 
| 7 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. | 
| 
 Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs.  | 
This sample YAML defines a compute machine set that runs in a specified IBM Cloud® zone in a region and creates nodes that are labeled with
node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and
<infra>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra> (2)
  name: <infrastructure_id>-<infra>-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: ibmcloud-credentials
          image: <infrastructure_id>-rhcos (4)
          kind: IBMCloudMachineProviderSpec
          primaryNetworkInterface:
              securityGroups:
              - <infrastructure_id>-sg-cluster-wide
              - <infrastructure_id>-sg-openshift-net
              subnet: <infrastructure_id>-subnet-compute-<zone> (5)
          profile: <instance_profile> (6)
          region: <region> (7)
          resourceGroup: <resource_group> (8)
          userDataSecret:
              name: <role>-user-data (2)
          vpc: <vpc_name> (9)
          zone: <zone> (10)
      taints: (11)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster | 
| 2 | Specify the <infra> node label. | 
| 3 | Specify the infrastructure ID, <infra> node label, and region. | 
| 4 | Specify the custom Fedora CoreOS (FCOS) image that was used for cluster installation. | 
| 5 | Specify the infrastructure ID and the zone within your region to place machines on. Be sure that your region supports the zone that you specify. | 
| 6 | Specify the IBM Cloud® instance profile. | 
| 7 | Specify the region to place machines on. | 
| 8 | Specify the resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. | 
| 9 | Specify the VPC name. | 
| 10 | Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. | 
| 11 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
This sample YAML defines a compute machine set that runs in Google Cloud and creates nodes that are labeled with
node-role.kubernetes.io/infra: "",
where
infra
is the node label to add.
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
  get machineset/<infrastructure_id>-worker-a
MachineSet values
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image> (3)
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata: (4)
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name> (5)
          region: us-central1
          serviceAccounts: (6)
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
            - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
      taints: (7)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
| 1 | For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. | 
| 2 | For <infra>, specify the <infra> node label. | 
| 3 | Specify the path to the image that is used in current compute machine sets. To use a Google Cloud Marketplace image, specify the offer to use. | 
| 4 | Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the Google Cloud documentation for setting custom metadata. | 
| 5 | For <project_name>, specify the name of the Google Cloud project that you use for your cluster. | 
| 6 | Specifies a single service account. Multiple service accounts are not supported. | 
| 7 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
Machine sets running on Google Cloud support non-guaranteed preemptible VM instances. You can save on costs by using preemptible VM instances at a lower price
compared to normal instances on Google Cloud. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file.
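For illustration, a minimal sketch of where the field sits in the machine set YAML (not a complete manifest; the surrounding fields are the ones shown in the sample above):
spec:
  template:
    spec:
      providerSpec:
        value:
          # ... existing provider fields ...
          preemptible: true # request preemptible VM instances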
This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with
node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and
<infra>
is the node label to add.
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI (oc).
The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra>
  name: <infrastructure_id>-<infra>-<zone> (3)
  namespace: openshift-machine-api
  annotations: (4)
    machine.openshift.io/memoryMb: "16384"
    machine.openshift.io/vCPU: "4"
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <infra>
        machine.openshift.io/cluster-api-machine-type: <infra>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          bootType: "" (5)
          categories: (6)
          - key: <category_name>
            value: <category_value>
          cluster: (7)
            type: uuid
            uuid: <cluster_uuid>
          credentialsSecret:
            name: nutanix-credentials
          image:
            name: <infrastructure_id>-rhcos (8)
            type: name
          kind: NutanixMachineProviderConfig
          memorySize: 16Gi (9)
          project: (10)
            type: name
            name: <project_name>
          subnets: (11)
          - type: uuid
            uuid: <subnet_uuid>
          systemDiskSize: 120Gi (12)
          userDataSecret:
            name: <user_data_secret> (13)
          vcpuSockets: 4 (14)
          vcpusPerSocket: 1 (15)
      taints: (16)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
| 1 | For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. | 
| 2 | Specify the <infra> node label. | 
| 3 | Specify the infrastructure ID, <infra> node label, and zone. | 
| 4 | Annotations for the cluster autoscaler. | 
| 5 | Specify the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy. | 
| 6 | Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management. | 
| 7 | Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza. | 
| 8 | Specify the image to use. Use an image from an existing default compute machine set for the cluster. | 
| 9 | Specify the amount of memory for the cluster in Gi. | 
| 10 | Specify the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza. | 
| 11 | Specify one or more UUIDs for the Prism Element subnet object. The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the OKD cluster uses. A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported. All subnet UUID values must be unique. | 
| 12 | Specify the size of the system disk in Gi. | 
| 13 | Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installation program populates in the default compute machine set. | 
| 14 | Specify the number of vCPU sockets. | 
| 15 | Specify the number of vCPUs per socket. | 
| 16 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
This sample YAML defines a compute machine set that runs on OpenStack and creates nodes that are labeled with
node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and
<infra>
is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra> (2)
  name: <infrastructure_id>-infra (3)
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group> (5)
          kind: OpenstackProviderSpec
          networks: (6)
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_id> (1)
          primarySubnet: <rhosp_subnet_UUID> (7)
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-worker (1)
          serverMetadata:
            Name: <infrastructure_id>-worker (1)
            openshiftClusterID: <infrastructure_id> (1)
          tags:
          - openshiftClusterID=<infrastructure_id> (1)
          trunk: true
          userDataSecret:
            name: worker-user-data (2)
          availabilityZone: <optional_openstack_availability_zone>
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster | 
| 2 | Specify the <infra> node label. | 
| 3 | Specify the infrastructure ID and <infra> node label. | 
| 4 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
| 5 | To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended. | 
| 6 | Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. | 
| 7 | Specify the OpenStack subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. | 
This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with
node-role.kubernetes.io/infra: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and
infra
is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-infra (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: infra (3)
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          dataDisks: (4)
          - name: "<disk_name>"
            provisioningMode: "<mode>"
            sizeGiB: 20
          diskGiB: 120
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: "<vm_network_name>" (5)
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: <vm_template_name> (6)
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: <vcenter_data_center_name> (7)
            datastore: <vcenter_datastore_name> (8)
            folder: <vcenter_vm_folder_path> (9)
            resourcepool: <vsphere_resource_pool> (10)
            server: <vcenter_server_ip> (11)
      taints: (12)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster | 
| 2 | Specify the infrastructure ID and infra node label. | 
| 3 | Specify the infra node label. | 
| 4 | Specify one or more data disk definitions. For more information, see "Configuring data disks by using machine sets". | 
| 5 | Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. | 
| 6 | Specify the vSphere VM template to use, such as user-5ddjd-rhcos. | 
| 7 | Specify the vCenter datacenter to deploy the compute machine set on. | 
| 8 | Specify the vCenter datastore to deploy the compute machine set on. | 
| 9 | Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd. | 
| 10 | Specify the vSphere resource pool for your VMs. | 
| 11 | Specify the vCenter server IP or fully qualified domain name. | 
| 12 | Specify a taint to prevent user workloads from being scheduled on infra nodes. | 
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
  -n openshift-machine-api -o yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: (3)
        ...
| 1 | The cluster infrastructure ID. | 
| 2 | A default node label. | 
| 3 | The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. | 
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
| 
 See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.  | 
Requirements of the cluster dictate that infrastructure (infra) nodes be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes through labeling. You can then use taints and tolerations to move appropriate workloads to the infrastructure nodes. For more information, see "Moving resources to infrastructure machine sets".
You can optionally create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces and creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.
| 
 If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra: "", can cause pods that must run on other node roles to become unschedulable. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts.  | 
Add a label to the worker nodes that you want to act as infrastructure nodes:
$ oc label node <node-name> node-role.kubernetes.io/infra=""
Check to see if applicable nodes now have the infra role:
$ oc get nodes
Optional: Create a default cluster-wide node selector:
Edit the Scheduler object:
$ oc edit scheduler cluster
Add the defaultNodeSelector field with the appropriate node selector:
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: node-role.kubernetes.io/infra="" (1)
# ...
| 1 | This example node selector deploys pods on infrastructure nodes by default. | 
Save the file to apply the changes.
You can now move infrastructure resources to the new infrastructure nodes. Also, remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OKD infrastructure components".
If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
| 
 Creating a custom machine configuration pool overrides default worker pool configurations if they refer to the same file or unit.  | 
Add a label to the node you want to assign as the infra node with a specific label:
$ oc label node <node_name> <label>
$ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
Create a machine config pool that contains both the worker role and your custom role as machine config selector:
$ cat infra.mcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: "" (2)
| 1 | Add the worker role and your custom role. | 
| 2 | Add the label you added to the node as a nodeSelector. | 
| 
 Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.  | 
After you have the YAML file, you can create the machine config pool:
$ oc create -f infra.mcp.yaml
Check the machine configs to ensure that the infrastructure configuration rendered successfully:
$ oc get machineconfig
NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
00-master                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
00-worker                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
01-master-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
01-master-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
01-worker-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
01-worker-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
99-master-ssh                                                                                          3.2.0             31d
99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             31d
99-worker-ssh                                                                                          3.2.0             31d
rendered-infra-4e48906dca84ee702959c71a53ee80e7             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             23m
rendered-master-072d4b2da7f88162636902b074e9e28e            5b6fb8349a29735e48446d435962dec4547d3090   3.5.0             31d
rendered-master-3e88ec72aed3886dec061df60d16d1af            02c07496ba0417b3e12b78fb32baf6293d314f79   3.5.0             31d
rendered-master-419bee7de96134963a15fdf9dd473b25            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             17d
rendered-master-53f5c91c7661708adce18739cc0f40fb            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             13d
rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             7d3h
rendered-master-dc7f874ec77fc4b969674204332da037            5b6fb8349a29735e48446d435962dec4547d3090   3.5.0             31d
rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d            5b6fb8349a29735e48446d435962dec4547d3090   3.5.0             31d
rendered-worker-2640531be11ba43c61d72e82dc634ce6            5b6fb8349a29735e48446d435962dec4547d3090   3.5.0             31d
rendered-worker-4e48906dca84ee702959c71a53ee80e7            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             7d3h
rendered-worker-4f110718fe88e5f349987854a1147755            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             17d
rendered-worker-afc758e194d6188677eb837842d3b379            02c07496ba0417b3e12b78fb32baf6293d314f79   3.5.0             31d
rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.5.0             13d
You should see a new machine config, with the rendered-infra-* prefix.
Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.
| 
 After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.  | 
Create a machine config:
$ cat infra.mc.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 51-infra
  labels:
    machineconfiguration.openshift.io/role: infra (1)
spec:
  config:
    ignition:
      version: 3.5.0
    storage:
      files:
      - path: /etc/infratest
        mode: 0644
        contents:
          source: data:,infra
| 1 | Add the role label that matches the machineConfigSelector of the infra machine config pool. | 
Apply the machine config to the infra-labeled nodes:
$ oc create -f infra.mc.yaml
Confirm that your new machine config pool is available:
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m
In this example, a worker node was changed to an infra node.
See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool.
After you create an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.
However, because an infra node also carries the worker role, user workloads can inadvertently be assigned to it. To avoid this, apply a taint to the infra node and tolerations to the pods that you want to control.
If you have an infrastructure node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.
| 
 It is recommended that you preserve the dual infra,worker label and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it.  | 
Configure additional MachineSet objects in your OKD cluster.
Add a taint to the infrastructure node to prevent scheduling user workloads on it:
Determine if the node has the taint:
$ oc describe nodes <node_name>
oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Name:               ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
Roles:              worker
 ...
Taints:             node-role.kubernetes.io/infra=reserved:NoSchedule
 ...
This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.
If the node does not already have a taint, add one to prevent scheduling user workloads on it:
$ oc adm taint nodes <node_name> <key>=<value>:<effect>
For example:
$ oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule
| 
 You can alternatively add the taint to the compute machine set specification so that new machines are created with the taint already applied, as shown in the sketch after the next paragraph.  | 
These examples place a taint on node1 that has the node-role.kubernetes.io/infra key and the NoSchedule taint effect. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.
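For reference, a minimal sketch of carrying the same taint in a compute machine set (an illustration only; the taints stanza matches the machine set samples earlier in this section):
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      taints:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule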
If you added a NoSchedule taint to the infrastructure node, any pods that are controlled by a daemon set on that node are marked as misscheduled. You must either delete the pods or add a toleration to the pods as shown in the Red Hat Knowledgebase solution add toleration on misscheduled DNS pods. Note that you cannot add a toleration to a daemon set object that is managed by an operator.
| 
 If a descheduler is used, pods violating node taints could be evicted from the cluster.  | 
Add tolerations to the pods that you want to schedule on the infrastructure node, such as the router, registry, and monitoring workloads. Referencing the previous examples, add the following tolerations to the Pod object specification:
apiVersion: v1
kind: Pod
metadata:
  annotations:
# ...
spec:
# ...
  tolerations:
    - key: node-role.kubernetes.io/infra (1)
      value: reserved (2)
      effect: NoSchedule (3)
      operator: Equal (4)
| 1 | Specify the key that you added to the node. | 
| 2 | Specify the value of the key-value pair taint that you added to the node. | 
| 3 | Specify the effect that you added to the node. | 
| 4 | Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. | 
This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infrastructure node.
| 
 Moving pods for an Operator installed via OLM to an infrastructure node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.  | 
Schedule the pod to the infrastructure node by using a scheduler. See the documentation for "Controlling pod placement using the scheduler" for details.
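For example, a minimal sketch that combines a node selector with the toleration from the previous step so that the pod is scheduled on infra nodes. This illustrates one scheduling approach; the pod name, container name, and <image> value are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  tolerations:
  - key: node-role.kubernetes.io/infra
    value: reserved
    effect: NoSchedule
    operator: Equal
  containers:
  - name: example
    image: <image>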
Remove any workloads that you do not want, or that do not belong, on the new infrastructure node. See the list of workloads supported for use on infrastructure nodes in "OKD infrastructure components".
See Controlling pod placement using the scheduler for general information on scheduling a pod to a node.
See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes.
See Understanding taints and tolerations for more details about different effects of taints.
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
# ...
spec:
  nodePlacement: (1)
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
    - effect: NoExecute
      key: node-role.kubernetes.io/infra
      value: reserved
| 1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. | 
Applying a specific node selector to all infrastructure components causes OKD to schedule those workloads on nodes with that label.
You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node.
Configure additional compute machine sets in your OKD cluster.
View the IngressController custom resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: 2019-04-18T12:35:39Z
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 1
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "11341"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-04-18T12:36:15Z
    status: "True"
    type: Available
  domain: apps.<cluster>.example.com
  endpointPublishingStrategy:
    type: LoadBalancerService
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: "2025-03-26T21:15:43Z"
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 1
  name: default
# ...
spec:
  nodePlacement:
    nodeSelector: (1)
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
# ...
| 1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. | 
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name> (1)
| 1 | Specify the <node_name> that you obtained from the pod list. | 
NAME                          STATUS  ROLES         AGE   VERSION
ip-10-0-217-226.ec2.internal  Ready   infra,worker  17h   v1.33.4
Because the role list includes infra, the pod is running on the correct node.
You configure the registry Operator to deploy its pods to different nodes.
Configure additional compute machine sets in your OKD cluster.
View the config/instance object:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: 2019-02-05T13:52:05Z
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 1
  name: cluster
  resourceVersion: "56174"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
  httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
  logging: 2
  managementState: Managed
  proxy: {}
  replicas: 1
  requests:
    read: {}
    write: {}
  storage:
    s3:
      bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      region: us-east-1
status:
...
Edit the config/instance object:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
# ...
spec:
  logLevel: Normal
  managementState: Managed
  nodeSelector: (1)
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
| 1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. | 
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
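Alternatively, assuming a standard shell with grep available, you can list the node labels directly and filter for the infra role:
$ oc get node <node_name> --show-labels | grep node-role.kubernetes.io/infra
If the label is present, the command prints the node line; if it prints nothing, the label is missing.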
The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
You have access to the cluster as a user with the cluster-admin cluster role.
You have created the cluster-monitoring-config ConfigMap object.
You have installed the OpenShift CLI (oc).
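If you have not yet created the cluster-monitoring-config config map from the prerequisites, a minimal sketch is an object with an empty config.yaml key, saved to a file such as cluster-monitoring-config.yaml (a hypothetical file name for this sketch):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
Apply it with:
$ oc apply -f cluster-monitoring-config.yaml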
Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:
$ oc edit configmap cluster-monitoring-config -n openshift-monitoring
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    metricsServer:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
    monitoringPlugin:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
| 1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector parameter in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. | 
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod that runs this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
The Vertical Pod Autoscaler Operator (VPA) consists of three components: the recommender, updater, and admission controller. The Operator and each component have their own pods in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR.
The following example shows the default deployment of the VPA pods to the control plane nodes.
NAME                                                READY   STATUS    RESTARTS   AGE     IP            NODE                  NOMINATED NODE   READINESS GATES
vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z   1/1     Running   0          7m59s   10.128.2.24   c416-tfsbj-master-1   <none>           <none>
vpa-admission-plugin-default-6cb78d6f8b-rpcrj       1/1     Running   0          5m37s   10.129.2.22   c416-tfsbj-master-1   <none>           <none>
vpa-recommender-default-66846bd94c-dsmpp            1/1     Running   0          5m37s   10.129.2.20   c416-tfsbj-master-0   <none>           <none>
vpa-updater-default-db8b58df-2nkvf                  1/1     Running   0          5m37s   10.129.2.21   c416-tfsbj-master-1   <none>           <none>
Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator:
Edit the CR:
$ oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler
Add a node selector to match the node role label on the infra node:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: ""
  name: vertical-pod-autoscaler
# ...
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: "" (1)
| 1 | Specifies the node role of an infra node. | 
| 
 If the infra node uses taints, you need to add a toleration to the Subscription CR, as in the example after this note. 
 | 
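A minimal sketch, assuming the infra nodes carry the node-role.kubernetes.io/infra taint with the value reserved and the NoSchedule effect used earlier in this document; the toleration goes under spec.config next to the node selector:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: vertical-pod-autoscaler
# ...
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations: (1)
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved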
| 1 | Specifies a toleration for a taint on the infra node. | 
Move each VPA component by adding node selectors to the VerticalPodAutoscalerController custom resource (CR):
Edit the CR:
$ oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler
Add node selectors to match the node role label on the infra node:
apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
# ...
spec:
  deploymentOverrides:
    admission:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: "" (1)
    recommender:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: "" (2)
    updater:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: "" (3)
| 1 | Optional: Specifies the node role for the VPA admission pod. | 
| 2 | Optional: Specifies the node role for the VPA recommender pod. | 
| 3 | Optional: Specifies the node role for the VPA updater pod. | 
| 
 If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR, as in the example after this note. 
 | 
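A minimal sketch of the deploymentOverrides stanza with tolerations, assuming the same node-role.kubernetes.io/infra taint with the value reserved and the NoSchedule effect used earlier in this document, and assuming each tolerations field sits alongside the nodeSelector for that component:
apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
# ...
spec:
  deploymentOverrides:
    admission:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations: (1)
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
    recommender:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations: (2)
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
    updater:
      container:
        resources: {}
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations: (3)
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved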
| 1 | Specifies a toleration for the admission controller pod for a taint on the infra node. | 
| 2 | Specifies a toleration for the recommender pod for a taint on the infra node. | 
| 3 | Specifies a toleration for the updater pod for a taint on the infra node. | 
You can verify the pods have moved by using the following command:
$ oc get pods -n openshift-vertical-pod-autoscaler -o wide
The pods are no longer deployed to the control plane nodes. In the following example output, the nodes are now infra nodes, not control plane nodes.
NAME                                                READY   STATUS    RESTARTS   AGE     IP            NODE                              NOMINATED NODE   READINESS GATES
vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z   1/1     Running   0          7m59s   10.128.2.24   c416-tfsbj-infra-eastus3-2bndt   <none>           <none>
vpa-admission-plugin-default-6cb78d6f8b-rpcrj       1/1     Running   0          5m37s   10.129.2.22   c416-tfsbj-infra-eastus1-lrgj8   <none>           <none>
vpa-recommender-default-66846bd94c-dsmpp            1/1     Running   0          5m37s   10.129.2.20   c416-tfsbj-infra-eastus1-lrgj8   <none>           <none>
vpa-updater-default-db8b58df-2nkvf                  1/1     Running   0          5m37s   10.129.2.21   c416-tfsbj-infra-eastus1-lrgj8   <none>           <none>
By default, the Cluster Resource Override Operator installation process creates an Operator pod and two Cluster Resource Override pods in the clusterresourceoverride-operator namespace. You can move these pods to other nodes, such as infrastructure nodes, as needed.
The following example shows that the Cluster Resource Override pods are deployed to control plane nodes and that the Cluster Resource Override Operator pod is deployed to a worker node.
NAME                                                READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
clusterresourceoverride-786b8c898c-9wrdq            1/1     Running   0          23s   10.128.2.32   ip-10-0-14-183.us-west-2.compute.internal   <none>           <none>
clusterresourceoverride-786b8c898c-vn2lf            1/1     Running   0          26s   10.130.2.10   ip-10-0-20-140.us-west-2.compute.internal   <none>           <none>
clusterresourceoverride-operator-6b8b8b656b-lvr62   1/1     Running   0          56m   10.131.0.33   ip-10-0-2-39.us-west-2.compute.internal     <none>           <none>
NAME                                        STATUS   ROLES                  AGE   VERSION
ip-10-0-14-183.us-west-2.compute.internal   Ready    control-plane,master   65m   v1.33.4
ip-10-0-2-39.us-west-2.compute.internal     Ready    worker                 58m   v1.33.4
ip-10-0-20-140.us-west-2.compute.internal   Ready    control-plane,master   65m   v1.33.4
ip-10-0-23-244.us-west-2.compute.internal   Ready    infra                  55m   v1.33.4
ip-10-0-77-153.us-west-2.compute.internal   Ready    control-plane,master   65m   v1.33.4
ip-10-0-99-108.us-west-2.compute.internal   Ready    worker                 24m   v1.33.4
ip-10-0-24-233.us-west-2.compute.internal   Ready    infra                  55m   v1.33.4
ip-10-0-88-109.us-west-2.compute.internal   Ready    worker                 24m   v1.33.4
ip-10-0-67-453.us-west-2.compute.internal   Ready    infra                  55m   v1.33.4
Move the Cluster Resource Override Operator pod by adding a node selector to the Subscription custom resource (CR) for the Cluster Resource Override Operator.
Edit the CR:
$ oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride
Add a node selector to match the node role label on the node where you want to install the Cluster Resource Override Operator pod:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: clusterresourceoverride
  namespace: clusterresourceoverride-operator
# ...
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: "" (1)
# ...
| 1 | Specify the role of the node where you want to deploy the Cluster Resource Override Operator pod. | 
| 
 If the infra node uses taints, you need to add a toleration to the Subscription CR, as in the example after this note. 
  | 
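A minimal sketch, assuming the same node-role.kubernetes.io/infra taint with the value reserved and the NoSchedule effect used earlier in this document; the toleration goes under spec.config next to the node selector:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: clusterresourceoverride
  namespace: clusterresourceoverride-operator
# ...
spec:
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved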
Move the Cluster Resource Override pods by adding a node selector to the ClusterResourceOverride custom resource (CR):
Edit the CR:
$ oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator
Add a node selector to match the node role label on the infra node:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
  resourceVersion: "37952"
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
  deploymentOverrides:
    replicas: 1 (1)
    nodeSelector:
      node-role.kubernetes.io/infra: "" (2)
# ...
| 1 | Optional: Specify the number of Cluster Resource Override pods to deploy. The default is 2. Only one pod is allowed per node. | 
| 2 | Optional: Specify the role of the node where you want to deploy the Cluster Resource Override pods. | 
| 
 If the infra node uses taints, you need to add a toleration to the ClusterResourceOverride CR, as in the example after this note. 
  | 
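A minimal sketch, assuming the tolerations field sits alongside the nodeSelector under deploymentOverrides and that the infra nodes carry the same node-role.kubernetes.io/infra taint with the value reserved and the NoSchedule effect used earlier in this document:
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200
      memoryRequestToLimitPercent: 50
  deploymentOverrides:
    replicas: 1
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
# ...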
You can verify that the pods have moved by using the following command:
$ oc get pods -n clusterresourceoverride-operator -o wide
The Cluster Resource Override pods are now deployed to the infra nodes.
NAME                                                READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
clusterresourceoverride-786b8c898c-9wrdq            1/1     Running   0          23s   10.127.2.25   ip-10-0-23-244.us-west-2.compute.internal   <none>           <none>
clusterresourceoverride-786b8c898c-vn2lf            1/1     Running   0          26s   10.128.0.80   ip-10-0-24-233.us-west-2.compute.internal   <none>           <none>
clusterresourceoverride-operator-6b8b8b656b-lvr62   1/1     Running   0          56m   10.129.0.71   ip-10-0-67-453.us-west-2.compute.internal   <none>           <none>