Sample YAML for a control plane machine set custom resource

The base of the ControlPlaneMachineSet CR is structured the same way for all platforms.

Sample ControlPlaneMachineSet CR YAML file
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster (1)
  namespace: openshift-machine-api
spec:
  replicas: 3 (2)
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster_id> (3)
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  state: Active (4)
  strategy:
    type: RollingUpdate (5)
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: <platform> (6)
        <platform_failure_domains> (7)
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <cluster_id>
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
      spec:
        providerSpec:
          value:
            <platform_provider_spec> (8)
1 Specifies the name of the ControlPlaneMachineSet CR, which is cluster. Do not change this value.
2 Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3. Horizontal scaling is not supported. Do not change this value.
3 Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
4 Specifies the state of the Operator. When the state is Inactive, the Operator is not operational. You can activate the Operator by setting the value to Active.

Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets".

5 Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate. The default value is RollingUpdate. For more information about update strategies, see "Updating the control plane configuration".
6 Specifies the cloud provider platform name. Do not change this value.
7 Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider.
8 Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider.
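
As noted in callout 4, activate the Operator only after you verify that the configuration is correct for your cluster. As an example, one way to change the state value from Inactive to Active is to edit the CR directly:

$ oc edit controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api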

Provider-specific configuration

The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set resources are provider-specific. Refer to the example YAML for your cluster:

Sample YAML for configuring Amazon Web Services clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an Amazon Web Services (AWS) cluster.

Sample AWS provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
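Example output, where the value shown is hypothetical and reflects the cluster name that you set when you provisioned the cluster:

mycluster-7j4pw
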
Sample AWS providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            ami:
              id: ami-<ami_id_string> (1)
            apiVersion: machine.openshift.io/v1beta1
            blockDevices:
            - ebs: (2)
                encrypted: true
                iops: 0
                kmsKey:
                  arn: ""
                volumeSize: 120
                volumeType: gp3
            credentialsSecret:
              name: aws-cloud-credentials (3)
            deviceIndex: 0
            iamInstanceProfile:
              id: <cluster_id>-master-profile (4)
            instanceType: m6i.xlarge (5)
            kind: AWSMachineProviderConfig (6)
            loadBalancers: (7)
            - name: <cluster_id>-int
              type: network
            - name: <cluster_id>-ext
              type: network
            metadata:
              creationTimestamp: null
            metadataServiceOptions: {}
            placement: (8)
              region: <region> (9)
              availabilityZone: "" (10)
              tenancy: (11)
            securityGroups:
            - filters:
              - name: tag:Name
                values:
                - <cluster_id>-master-sg (12)
            subnet: {} (13)
            userDataSecret:
              name: master-user-data (14)
1 Specifies the Fedora CoreOS (FCOS) Amazon Machine Image (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region.
2 Specifies the configuration of an encrypted EBS volume.
3 Specifies the secret name for the cluster. Do not change this value.
4 Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value.
5 Specifies the AWS instance type for the control plane.
6 Specifies the cloud provider platform type. Do not change this value.
7 Specifies the internal (int) and external (ext) load balancers for the cluster.
8 Specifies where to create the control plane instance in AWS.
9 Specifies the AWS region for the cluster.
10 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.
11 Specifies the AWS Dedicated Instance configuration for the control plane. For more information, see AWS documentation about Dedicated Instances. The following values are valid:
  • default: The Dedicated Instance runs on shared hardware.

  • dedicated: The Dedicated Instance runs on single-tenant hardware.

  • host: The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control.

12 Specifies the control plane machines security group.
13 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.

If the failure domain configuration does not specify a value, the value in the provider specification is used. Configuring a subnet in the failure domain overwrites the subnet value in the provider specification.

14 Specifies the control plane user data secret. Do not change this value.
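
To confirm that these values match the configuration that the installation program created, you can inspect an existing control plane machine. The following command is a sketch that relies on the control plane role label shown in the ControlPlaneMachineSet selector:

$ oc get machines -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machine-role=master \
  -o yaml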

Sample AWS failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone (AZ). The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.

Sample AWS failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        aws:
        - placement:
            availabilityZone: <aws_zone_a> (1)
          subnet: (2)
            filters:
            - name: tag:Name
              values:
              - <cluster_id>-private-<aws_zone_a> (3)
            type: Filters (4)
        - placement:
            availabilityZone: <aws_zone_b> (5)
          subnet:
            filters:
            - name: tag:Name
              values:
              - <cluster_id>-private-<aws_zone_b> (6)
            type: Filters
        platform: AWS (7)
# ...
1 Specifies an AWS availability zone for the first failure domain.
2 Specifies a subnet configuration. In this example, the subnet type is Filters, so there is a filters stanza.
3 Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone.
4 Specifies the subnet type. The allowed values are ARN, Filters, and ID. The default value is Filters.
5 Specifies an AWS availability zone for an additional failure domain.
6 Specifies the subnet name for the additional failure domain, using the infrastructure ID and the AWS availability zone.
7 Specifies the cloud provider platform name. Do not change this value.
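
As callout 4 notes, ARN and ID are also allowed subnet types. For example, a failure domain that references its subnet directly by ID rather than by tag filter might look like the following sketch, in which <aws_zone_a> and <subnet_id> are placeholders:

      failureDomains:
        aws:
        - placement:
            availabilityZone: <aws_zone_a>
          subnet:
            type: ID
            id: <subnet_id>
        platform: AWS
# ...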

Sample YAML for configuring Google Cloud Platform clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for a Google Cloud Platform (GCP) cluster.

Sample GCP provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Image path

The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \
  get ControlPlaneMachineSet/cluster
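Example output, where the image path is hypothetical and varies by release version and architecture:

projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64
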
Sample GCP providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1beta1
            canIPForward: false
            credentialsSecret:
              name: gcp-cloud-credentials (1)
            deletionProtection: false
            disks:
            - autoDelete: true
              boot: true
              image: <path_to_image> (2)
              labels: null
              sizeGb: 200
              type: pd-ssd
            kind: GCPMachineProviderSpec (3)
            machineType: e2-standard-4
            metadata:
              creationTimestamp: null
            metadataServiceOptions: {}
            networkInterfaces:
            - network: <cluster_id>-network
              subnetwork: <cluster_id>-master-subnet
            projectID: <project_name> (4)
            region: <region> (5)
            serviceAccounts:
            - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com
              scopes:
              - https://www.googleapis.com/auth/cloud-platform
            shieldedInstanceConfig: {}
            tags:
            - <cluster_id>-master
            targetPools:
            - <cluster_id>-api
            userDataSecret:
              name: master-user-data (6)
            zone: "" (7)
1 Specifies the secret name for the cluster. Do not change this value.
2 Specifies the path to the image that was used to create the disk.

To use a GCP Marketplace image, specify the offer to use:

  • OKD: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736

  • OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736

  • OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736

3 Specifies the cloud provider platform type. Do not change this value.
4 Specifies the name of the GCP project that you use for your cluster.
5 Specifies the GCP region for the cluster.
6 Specifies the control plane user data secret. Do not change this value.
7 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.

Sample GCP failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use.

Sample GCP failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        gcp:
        - zone: <gcp_zone_a> (1)
        - zone: <gcp_zone_b> (2)
        - zone: <gcp_zone_c>
        - zone: <gcp_zone_d>
        platform: GCP (3)
# ...
1 Specifies a GCP zone for the first failure domain.
2 Specifies an additional failure domain. Further failure domains are added the same way.
3 Specifies the cloud provider platform name. Do not change this value.

Sample YAML for configuring Microsoft Azure clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster.

Sample Azure provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Sample Azure providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            acceleratedNetworking: true
            apiVersion: machine.openshift.io/v1beta1
            credentialsSecret:
              name: azure-cloud-credentials (1)
              namespace: openshift-machine-api
            diagnostics: {}
            image: (2)
              offer: ""
              publisher: ""
              resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 (3)
              sku: ""
              version: ""
            internalLoadBalancer: <cluster_id>-internal (4)
            kind: AzureMachineProviderSpec (5)
            location: <region> (6)
            managedIdentity: <cluster_id>-identity
            metadata:
              creationTimestamp: null
              name: <cluster_id>
            networkResourceGroup: <cluster_id>-rg
            osDisk: (7)
              diskSettings: {}
              diskSizeGB: 1024
              managedDisk:
                storageAccountType: Premium_LRS
              osType: Linux
            publicIP: false
            publicLoadBalancer: <cluster_id> (8)
            resourceGroup: <cluster_id>-rg
            subnet: <cluster_id>-master-subnet (9)
            userDataSecret:
              name: master-user-data (10)
            vmSize: Standard_D8s_v3
            vnet: <cluster_id>-vnet
            zone: "1" (11)
1 Specifies the secret name for the cluster. Do not change this value.
2 Specifies the image details for your control plane machine set.
3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs.
5 Specifies the cloud provider platform type. Do not change this value.
6 Specifies the region in which to place control plane machines.
7 Specifies the disk configuration for the control plane.
8 Specifies the public load balancer for the control plane.
9 Specifies the subnet for the control plane.
10 Specifies the control plane user data secret. Do not change this value.
11 Specifies the zone configuration for clusters that use a single zone for all failure domains.

If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification, the Control Plane Machine Set Operator ignores it.

Sample Azure failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing Azure concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones.

Sample Azure failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        azure:
        - zone: "1" (1)
        - zone: "2"
        - zone: "3"
        platform: Azure (2)
# ...
1 Each instance of zone specifies an Azure availability zone for a failure domain.

If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration.

2 Specifies the cloud provider platform name. Do not change this value.

Sample YAML for configuring Nutanix clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippet shows a provider specification configuration for a Nutanix cluster.

Sample Nutanix provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Sample Nutanix providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1
            bootType: "" (1)
            categories: (2)
            - key: <category_name>
              value: <category_value>
            cluster: (3)
              type: uuid
              uuid: <cluster_uuid>
            credentialsSecret:
              name: nutanix-credentials (4)
            image: (5)
              name: <cluster_id>-rhcos
              type: name
            kind: NutanixMachineProviderConfig (6)
            memorySize: 16Gi (7)
            metadata:
              creationTimestamp: null
            project: (8)
              type: name
              name: <project_name>
            subnets: (9)
            - type: uuid
              uuid: <subnet_uuid>
            systemDiskSize: 120Gi (10)
            userDataSecret:
              name: master-user-data (11)
            vcpuSockets: 8 (12)
            vcpusPerSocket: 1 (13)
1 Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.

You must use the Legacy boot type in OKD 4.

2 Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management.
3 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.

Clusters that use OKD version 4.15 or later can use failure domain configurations.

For clusters that use failure domains, this value is specified in a failure domain configuration. If you specify this value in the provider specification, the Control Plane Machine Set Operator ignores it.

4 Specifies the secret name for the cluster. Do not change this value.
5 Specifies the image that was used to create the disk.
6 Specifies the cloud provider platform type. Do not change this value.
7 Specifies the memory allocated for the control plane machines.
8 Specifies the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza.
9 Specifies a subnet configuration. In this example, the subnet type is uuid, so there is a uuid stanza.

Clusters that use OKD version 4.15 or later can use failure domain configurations.

For clusters that use failure domains, this value is specified in a failure domain configuration. If you specify this value in the provider specification, the Control Plane Machine Set Operator ignores it.

10 Specifies the VM disk size for the control plane machines.
11 Specifies the control plane user data secret. Do not change this value.
12 Specifies the number of vCPU sockets allocated for the control plane machines.
13 Specifies the number of vCPUs for each control plane vCPU socket.

Failure domains for Nutanix clusters

To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required:

  1. Modify the cluster infrastructure custom resource (CR).

  2. Modify the cluster control plane machine set CR.

  3. Modify or replace the compute machine set CRs.

For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content.

Sample YAML for configuring VMware vSphere clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippet shows a provider specification configuration for a VMware vSphere cluster.

Sample VMware vSphere provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample vSphere providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1beta1
            credentialsSecret:
              name: vsphere-cloud-credentials (1)
            diskGiB: 120 (2)
            kind: VSphereMachineProviderSpec (3)
            memoryMiB: 16384 (4)
            metadata:
              creationTimestamp: null
            network: (5)
              devices:
              - networkName: <vm_network_name>
            numCPUs: 4 (6)
            numCoresPerSocket: 4 (7)
            snapshot: ""
            template: <vm_template_name> (8)
            userDataSecret:
              name: master-user-data (9)
            workspace:
              datacenter: <vcenter_datacenter_name> (10)
              datastore: <vcenter_datastore_name> (11)
              folder: <path_to_vcenter_vm_folder> (12)
              resourcePool: <vsphere_resource_pool> (13)
              server: <vcenter_server_ip> (14)
1 Specifies the secret name for the cluster. Do not change this value.
2 Specifies the VM disk size for the control plane machines.
3 Specifies the cloud provider platform type. Do not change this value.
4 Specifies the memory allocated for the control plane machines.
5 Specifies the network on which the control plane is deployed.
6 Specifies the number of CPUs allocated for the control plane machines.
7 Specifies the number of cores for each control plane CPU.
8 Specifies the vSphere VM template to use, such as user-5ddjd-rhcos.
9 Specifies the control plane user data secret. Do not change this value.
10 Specifies the vCenter Datacenter for the control plane.
11 Specifies the vCenter Datastore for the control plane.
12 Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
13 Specifies the vSphere resource pool for your VMs.
14 Specifies the vCenter server IP or fully qualified domain name.

Sample VMware vSphere failure domain configuration

On VMware vSphere infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), infrastructures.config.openshift.io, defines failure domains for your cluster. The providerSpec in the ControlPlaneMachineSet custom resource (CR) specifies names for failure domains. A failure domain is an infrastructure resource that comprises a control plane machine set, a vCenter datacenter, a vCenter datastore, and a network.

By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on hardware that is separate from the primary VMware vSphere infrastructure. A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure.

If you modify the providerSpec configuration in the ControlPlaneMachineSet CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure.

Defining a failure domain for a control plane machine set is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Sample vSphere failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains: (1)
        platform: VSphere
        vsphere: (2)
        - name: <failure_domain_name1>
        - name: <failure_domain_name2>
# ...
1 A failure domain defines the vCenter location for OKD cluster nodes.
2 Defines failure domains by name for the control plane machine set.

Each failureDomains.vsphere.name field value in the ControlPlaneMachineSet CR must match the corresponding value defined in the failureDomains.name field of the cluster-wide infrastructure CRD. Currently, the vsphere.name field is the only supported failure domain field that you can specify in the ControlPlaneMachineSet CR.
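
For reference, the corresponding failure domain definitions in the cluster-wide infrastructure CR take a shape similar to the following sketch. The values are placeholders, and the topology stanza depends on your vCenter layout:

apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
spec:
  platformSpec:
    vsphere:
      failureDomains:
      - name: <failure_domain_name1>
        region: <region_tag>
        zone: <zone_tag>
        server: <vcenter_server>
        topology:
          datacenter: <vcenter_datacenter_name>
          computeCluster: <path_to_compute_cluster>
          datastore: <vcenter_datastore_name>
          networks:
          - <vm_network_name>
# ...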

Sample YAML for configuring OpenStack clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an OpenStack cluster.

Sample OpenStack provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample OpenStack providerSpec values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
      spec:
        providerSpec:
          value:
            apiVersion: machine.openshift.io/v1alpha1
            cloudName: openstack
            cloudsSecret:
              name: openstack-cloud-credentials (1)
              namespace: openshift-machine-api
            flavor: m1.xlarge (2)
            image: ocp1-2g2xs-rhcos
            kind: OpenstackProviderSpec (3)
            metadata:
              creationTimestamp: null
            networks:
            - filter: {}
              subnets:
              - filter:
                  name: ocp1-2g2xs-nodes
                  tags: openshiftClusterID=ocp1-2g2xs
            securityGroups:
            - filter: {}
              name: ocp1-2g2xs-master (4)
            serverGroupName: ocp1-2g2xs-master
            serverMetadata:
              Name: ocp1-2g2xs-master
              openshiftClusterID: ocp1-2g2xs
            tags:
            - openshiftClusterID=ocp1-2g2xs
            trunk: true
            userDataSecret:
              name: master-user-data
1 The secret name for the cluster. Do not change this value.
2 The OpenStack flavor type for the control plane.
3 The OpenStack cloud provider platform type. Do not change this value.
4 The control plane machines security group.

Sample OpenStack failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing OpenStack concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones.

Sample OpenStack failure domain values
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
# ...
  template:
# ...
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: OpenStack
        openstack:
        - availabilityZone: nova-az0
          rootVolume:
            availabilityZone: cinder-az0
        - availabilityZone: nova-az1
          rootVolume:
            availabilityZone: cinder-az1
        - availabilityZone: nova-az2
          rootVolume:
            availabilityZone: cinder-az2
# ...
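
If the control plane machines do not use dedicated root volumes, you can omit the rootVolume stanzas and list only the Nova availability zones. The following fragment is a sketch of that variant:

    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: OpenStack
        openstack:
        - availabilityZone: nova-az0
        - availabilityZone: nova-az1
        - availabilityZone: nova-az2
# ...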