# ...
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-ext (1)
      type: network (2)
    - name: lk4pj-int
      type: network
# ...
You can enable or change the configuration of features for your control plane machines by editing values in the control plane machine set specification.
When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy. For more information, see "Updating the control plane configuration".
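For example, you can open the control plane machine set CR for editing with the OpenShift CLI (oc). The CR is named cluster and resides in the openshift-machine-api namespace:
$ oc edit controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api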
After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone.
Install the OpenShift CLI (oc).
Have access to the web console as a user with admin privileges.
In the web portal or console for your cloud provider, take the following actions:
Locate and delete the appropriate load balancer component:
AWS clusters: Delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
Delete the api.$clustername.$yourdomain DNS entry in the public zone.
Remove the external load balancer by deleting the following indicated lines in the control plane machine set custom resource:
# ...
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-ext (1)
      type: network (2)
    - name: lk4pj-int
      type: network
# ...
| 1 | Delete the name value for the external load balancer, which ends in -ext. |
| 2 | Delete the type value for the external load balancer. |
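After you delete these lines, the loadBalancers list in the control plane machine set CR should contain only the internal load balancer, similar to the following:
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-int
      type: network
# ...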
You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR).
Your AWS cluster uses a control plane machine set.
Edit the following line under the providerSpec field:
providerSpec:
  value:
    # ...
    instanceType: <compatible_aws_instance_type> (1)
| 1 | Specify a larger AWS instance type with the same base as the previous selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge. |
Save your changes.
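The Operator then replaces the control plane machines according to your update strategy. One way to monitor the rollout is to watch the control plane machines and confirm that the replacement machines report the new instance type in the TYPE column, for example:
$ oc get machine \
  -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machine-role=master -w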
You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group.
EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group.
You created a placement group in the AWS console.
|
Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. The control plane machine set spreads the control plane machines across multiple failure domains when possible. To use placement groups for the control plane, you must use a placement group type that can span multiple Availability Zones. |
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following lines under the providerSpec field:
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
# ...
spec:
  template:
    machines_v1beta1_machine_openshift_io:
      spec:
        providerSpec:
          value:
            instanceType: <supported_instance_type> (1)
            networkInterfaceType: EFA (2)
            placement:
              availabilityZone: <zone> (3)
              region: <region> (4)
            placementGroupName: <placement_group> (5)
            placementGroupPartition: <placement_group_partition_number> (6)
# ...
| 1 | Specify an instance type that supports EFAs. |
| 2 | Specify the EFA network interface type. |
| 3 | Specify the zone, for example, us-east-1a. |
| 4 | Specify the region, for example, us-east-1. |
| 5 | Specify the name of the existing AWS placement group to deploy machines in. |
| 6 | Optional: Specify the partition number of the existing AWS placement group to deploy machines in. |
In the AWS console, find a machine that the machine set created and verify the following in the machine properties:
The placement group field has the value that you specified for the placementGroupName parameter in the machine set.
The partition number field has the value that you specified for the placementGroupPartition parameter in the machine set.
The interface type field indicates that it uses an EFA.
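If you prefer the command line, you can view the same properties with the AWS CLI. The following sketch assumes that the AWS CLI is configured for your account and that <instance_id> is the EC2 instance ID of one of the machines:
$ aws ec2 describe-instances --instance-ids <instance_id> \
  --query 'Reservations[].Instances[].{Placement: Placement, InterfaceTypes: NetworkInterfaces[].InterfaceType}'
The Placement output includes the placement group name and partition number, and the InterfaceTypes output reports efa for EFA interfaces.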
You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.
|
To use IMDSv2 on AWS clusters that were created with OKD version 4.6 or earlier, you must update your boot image. For more information, see "Boot image management". |
|
Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. |
You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.
To use IMDSv2, your AWS cluster must have been created with OKD version 4.7 or later, or you must have updated your boot image as described in "Boot image management".
Add or edit the following lines under the providerSpec field:
providerSpec:
  value:
    metadataServiceOptions:
      authentication: Required (1)
| 1 | To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed. |
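To confirm the resulting setting on a machine that the machine set created, you can inspect that machine's provider specification. In the following sketch, <machine_name> stands for the name of one of the machines:
$ oc get machine <machine_name> \
  -n openshift-machine-api \
  -o jsonpath='{.spec.providerSpec.value.metadataServiceOptions.authentication}'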
You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.
Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.
You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.
Specify a dedicated tenancy under the providerSpec field:
providerSpec:
  value:
    placement:
      tenancy: dedicated
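To verify the tenancy of a resulting instance, you can check its placement details in the AWS console or with the AWS CLI. The following sketch assumes that <instance_id> is the EC2 instance ID of the machine; the query returns dedicated for a Dedicated Instance:
$ aws ec2 describe-instances --instance-ids <instance_id> \
  --query 'Reservations[].Instances[].Placement.Tenancy'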
OKD version 4.20 and later supports Capacity Reservations on Amazon Web Services clusters, including On-Demand Capacity Reservations and Capacity Blocks for ML.
You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the instance type, region, and number of instances that you want to reserve. If your Capacity Reservation can accommodate the capacity request, the deployment succeeds.
For more information, including limitations and suggested use cases for this AWS offering, see On-Demand Capacity Reservations and Capacity Blocks for ML in the AWS documentation.
You have access to the cluster with cluster-admin privileges.
You installed the OpenShift CLI (oc).
You purchased an On-Demand Capacity Reservation or Capacity Block for ML. For more information, see On-Demand Capacity Reservations and Capacity Blocks for ML in the AWS documentation.
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field:
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
# ...
spec:
  template:
    machines_v1beta1_machine_openshift_io:
      spec:
        providerSpec:
          value:
            capacityReservationId: <capacity_reservation> (1)
            marketType: <market_type> (2)
# ...
| 1 | Specify the ID of the Capacity Block for ML or On-Demand Capacity Reservation that you want the machine set to deploy machines on. |
| 2 | Specify the market type to use. The following values are valid: CapacityBlock, which you use with Capacity Blocks for ML, and OnDemand, which you use with On-Demand Capacity Reservations. |
To verify machine deployment, list the machines that the machine set created by running the following command:
$ oc get machine \
-n openshift-machine-api \
-l machine.openshift.io/cluster-api-machine-role=master
In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation.
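You can also confirm that a machine references the expected reservation by checking its provider specification. The following sketch assumes that <machine_name> is one of the machines from the previous output:
$ oc get machine <machine_name> \
  -n openshift-machine-api \
  -o jsonpath='{.spec.providerSpec.value.capacityReservationId}'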