OKD provides methods for communicating from outside the cluster with services running in the cluster. This method uses load balancers on Amazon Web Services (AWS), specifically a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). Both types of load balancers can forward the IP address of the client to the node, but a CLB requires proxy protocol support, which OKD automatically enables.
There are two ways to configure an Ingress Controller to use an NLB:
By force replacing the Ingress Controller that currently uses a CLB. This method deletes the IngressController object, and an outage occurs while the new DNS records propagate and the NLB is provisioned.
By editing an existing Ingress Controller that uses a CLB to then use an NLB. This changes the load balancer without having to delete and recreate the IngressController object.
Both methods can also be used to switch from an NLB back to a CLB.
You can configure these load balancers on a new or existing AWS cluster.
To prevent connection drops for long-running processes in OKD, configure custom timeout periods for specific routes or Ingress Controllers.
Ensure these settings account for the Amazon Web Services Classic Load Balancer (CLB) default timeout of 60 seconds to maintain stable network traffic.
If the timeout period of the CLB is shorter than the route timeout or Ingress Controller timeout, the load balancer can prematurely terminate the connection. You can prevent this problem by increasing both the timeout period of the route and CLB.
You can configure the default timeouts for an existing route when you have services that need a low timeout, as required for Service Level Agreement (SLA) purposes, or a high timeout, for cases with a slow back end.
|
If you configured a user-managed external load balancer in front of your OKD cluster, ensure that the timeout value for the user-managed external load balancer is higher than the timeout value for the route. This configuration prevents network congestion issues over the network that your cluster uses. |
You deployed an Ingress Controller on a running cluster.
Using the oc annotate command, add the timeout to the route:
$ oc annotate route <route_name> \
--overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit>
<timeout>: Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d).
The following example sets a timeout of two seconds on a route named myroute:
$ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s
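As an illustrative sketch only (the helper names are hypothetical, not part of OKD), the unit suffixes accepted by the timeout annotation can be converted to seconds so you can check a route timeout against the 60-second CLB default before applying it:

```python
# Unit suffixes accepted by the haproxy.router.openshift.io/timeout
# annotation, mapped to seconds.
UNIT_SECONDS = {"us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0, "d": 86400.0}

def to_seconds(timeout: str) -> float:
    """Convert a value such as '120s' or '5m' to seconds."""
    for unit in ("us", "ms", "s", "m", "h", "d"):  # check two-letter units first
        if timeout.endswith(unit):
            return float(timeout[: -len(unit)]) * UNIT_SECONDS[unit]
    raise ValueError(f"unsupported time unit in {timeout!r}")

def clb_may_drop(route_timeout: str, clb_idle_timeout: str = "60s") -> bool:
    """True if the load balancer idle timeout is shorter than the route timeout."""
    return to_seconds(clb_idle_timeout) < to_seconds(route_timeout)

print(clb_may_drop("120s"))  # True: the 60s CLB default would cut this route short
print(clb_may_drop("2s"))    # False: the 2s route timeout fires first
```

If the first check returns True for one of your routes, raise the CLB idle timeout as described in the next procedure.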
You can configure the default timeouts for a Classic Load Balancer (CLB) to extend idle connections.
You must have a deployed Ingress Controller on a running cluster.
Set an Amazon Web Services connection idle timeout of five minutes for the default ingresscontroller by running the following command:
$ oc -n openshift-ingress-operator patch ingresscontroller/default \
  --type=merge \
  --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External","providerParameters":{"type":"AWS","aws":{"type":"Classic","classicLoadBalancer":{"connectionIdleTimeout":"5m"}}}}}}}'
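The merge patch nests several levels deep and is easy to mistype. As a sketch (the helper function is hypothetical, not part of the product tooling), the same patch document can be built programmatically and serialized before passing it to oc patch:

```python
import json

# Build the JSON merge patch for the CLB connection idle timeout. The
# field names mirror the IngressController endpointPublishingStrategy
# and AWS providerParameters shown in the command above; the timeout
# value is the only part you would vary.
def clb_idle_timeout_patch(timeout: str) -> str:
    patch = {
        "spec": {
            "endpointPublishingStrategy": {
                "type": "LoadBalancerService",
                "loadBalancer": {
                    "scope": "External",
                    "providerParameters": {
                        "type": "AWS",
                        "aws": {
                            "type": "Classic",
                            "classicLoadBalancer": {"connectionIdleTimeout": timeout},
                        },
                    },
                },
            }
        }
    }
    return json.dumps(patch, separators=(",", ":"))

print(clb_idle_timeout_patch("5m"))
```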
Optional: Restore the default value of the timeout by running the following command:
$ oc -n openshift-ingress-operator patch ingresscontroller/default \
  --type=merge \
  --patch='{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer":{"connectionIdleTimeout":null}}}}}}}'
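Patching connectionIdleTimeout to null restores the default because of JSON Merge Patch semantics (RFC 7386): a null value deletes the key rather than storing null, so the operator falls back to its built-in default. A minimal sketch of that merge rule:

```python
# Minimal JSON Merge Patch (RFC 7386): in a patch, a null value deletes
# the corresponding key from the target; any other value replaces or
# recursively merges it.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

spec = {"classicLoadBalancer": {"connectionIdleTimeout": "5m"}}
print(merge_patch(spec, {"classicLoadBalancer": {"connectionIdleTimeout": None}}))
# {'classicLoadBalancer': {}} -- the field is gone, so the default applies again
```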
|
You must specify the |
To enable high-performance communication between external services and your OKD cluster, configure an Amazon Web Services Network Load Balancer (NLB). You can set up an NLB on a new or existing AWS cluster to manage ingress traffic with low latency.
To improve performance and reduce latency for cluster traffic in OKD on Amazon Web Services, switch an Ingress Controller using a Classic Load Balancer (CLB) to one that uses a Network Load Balancer (NLB).
Switching between these load balancers does not delete the IngressController object.
|
This procedure might cause the following issues:
|
Modify the existing Ingress Controller that you want to switch to using an NLB. This example assumes that your default Ingress Controller has an External scope and no other customizations:
ingresscontroller.yaml file
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService
|
If you do not specify a value for the |
|
If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. |
Apply the changes to the Ingress Controller YAML file by running the following command:
$ oc apply -f ingresscontroller.yaml
Expect several minutes of outages while the Ingress Controller updates.
To support specific networking configurations in OKD on Amazon Web Services, switch an Ingress Controller using a Network Load Balancer (NLB) to one that uses a Classic Load Balancer (CLB).
Switching between these load balancers does not delete the IngressController object.
|
This procedure might cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. |
Modify the existing Ingress Controller that you want to switch to using a CLB. This example assumes that your default Ingress Controller has an External scope and no other customizations:
ingresscontroller.yaml file
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic
    type: LoadBalancerService
|
If you do not specify a value for the |
|
If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. |
Apply the changes to the Ingress Controller YAML file by running the following command:
$ oc apply -f ingresscontroller.yaml
Expect several minutes of outages while the Ingress Controller updates.
To improve performance and reduce latency for traffic in OKD on Amazon Web Services, replace an Ingress Controller using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB).
|
This procedure might cause the following issues:
|
Create a file with a new default Ingress Controller. The following example assumes that your default Ingress Controller has an External scope and no other customizations:
ingresscontroller.yml file
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService
If your default Ingress Controller has other customizations, ensure that you modify the file accordingly.
|
If your Ingress Controller has no other customizations and you are only updating the load balancer type, consider following the procedure detailed in "Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer". |
Force replace the Ingress Controller YAML file:
$ oc replace --force --wait -f ingresscontroller.yml
Wait until the Ingress Controller is replaced. Expect several minutes of outages.
To improve performance for high-traffic workloads in OKD, you can create an Ingress Controller backed by an Amazon Web Services Network Load Balancer (NLB) on an existing cluster.
You installed an AWS cluster.
PlatformStatus of the infrastructure resource must be AWS.
To verify that the PlatformStatus is AWS, run the following command:
$ oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}'
AWS
Create the Ingress Controller manifest:
$ cat ingresscontroller-aws-nlb.yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <ingress_controller_name>
  namespace: openshift-ingress-operator
spec:
  domain: <unique_ingress_domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
where:
<ingress_controller_name>: Specifies a unique name for the Ingress Controller.
<unique_ingress_domain>: Specifies a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain>.
scope: Specifies the scope of the NLB, either External to use an external NLB or Internal to use an internal NLB.
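The subdomain requirement for the Ingress Controller domain can be sketched as a simple check (the function and domain names are hypothetical, for illustration only):

```python
# An Ingress Controller domain must be a proper subdomain of the
# cluster DNS name <clustername>.<domain>.
def is_valid_ingress_domain(ingress_domain: str, cluster_dns: str) -> bool:
    return ingress_domain.endswith("." + cluster_dns)

print(is_valid_ingress_domain("apps2.mycluster.example.com", "mycluster.example.com"))  # True
print(is_valid_ingress_domain("apps.other.example.com", "mycluster.example.com"))       # False
```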
Create the resource in the cluster:
$ oc create -f ingresscontroller-aws-nlb.yaml
|
Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the creating the installation configuration file procedure. For more information, see "Creating the installation configuration file". |
You can create an Ingress Controller backed by an Amazon Web Services Network Load Balancer (NLB) on a new cluster in situations where you need more transparent networking capabilities.
Create and edit the install-config.yaml file. For instructions, see "Creating the installation configuration file" in the Additional resources section.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
<installation_directory>: Specifies the directory name that contains the manifests/ directory for your cluster.
Verify that the cluster-ingress-default-ingresscontroller.yaml file exists in the manifests/ directory by entering the following command:
$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
cluster-ingress-default-ingresscontroller.yaml
Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService
Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.
Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file because the installation program deletes the manifests/ directory during cluster creation.
To manually control network placement for Ingress Controllers in an existing cluster, specify the load balancer subnets in your configuration. This method provides precise control over your infrastructure by overriding the default automatic subnet discovery method used by Amazon Web Services.
You must have an installed AWS cluster.
You must know the names or IDs of the subnets to which you intend to map your IngressController.
Create a custom resource (CR) YAML file, such as sample-ingress.yaml, and specify the following content for the file:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name>
spec:
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      dnsManagementPolicy: Managed
# ...
Add subnets to the CR file:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <name>
  namespace: openshift-ingress-operator
spec:
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic
          classicLoadBalancer:
            subnets:
              ids:
              - <subnet>
              - <subnet>
              - <subnet>
      dnsManagementPolicy: Managed
where:
name: Specifies a name for the IngressController.
domain: Specifies the DNS name serviced by the IngressController.
classicLoadBalancer: Specifies the load balancer settings. Use the classicLoadBalancer field if you are using a CLB or the networkLoadBalancer field if you are using an NLB.
ids: Specifies the subnets by ID. Optionally, you can specify subnets by name by using the names field instead.
<subnet>: Specifies the subnet IDs (or names if you are using the names field).
|
You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers and private subnets for internal Ingress Controllers. |
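The one-subnet-per-availability-zone constraint can be checked before applying the CR. The following sketch uses a hypothetical subnet-to-zone mapping; in practice you would look the zones up with the AWS CLI:

```python
# Flag availability zones that were assigned more than one subnet,
# which violates the Ingress Controller subnet constraint.
def zones_with_multiple_subnets(subnet_azs: dict) -> list:
    seen = set()
    duplicates = []
    for subnet, az in subnet_azs.items():
        if az in seen and az not in duplicates:
            duplicates.append(az)
        seen.add(az)
    return duplicates

print(zones_with_multiple_subnets({"subnet-a": "us-east-1a", "subnet-b": "us-east-1b"}))  # []
print(zones_with_multiple_subnets({"subnet-a": "us-east-1a", "subnet-c": "us-east-1a"}))  # ['us-east-1a']
```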
Save and apply the CR file by using the OpenShift CLI (oc):
$ oc apply -f sample-ingress.yaml
Confirm that the load balancer was provisioned successfully by checking the IngressController conditions. Run the following command:
$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
You can update an IngressController with manually specified load balancer subnets in OKD to avoid any disruptions, to maintain the stability of your services, and to ensure that your network configuration aligns with your specific requirements.
The example in the procedure shows you how to select and apply new subnets, verify the configuration changes, and confirm successful load balancer provisioning.
|
This procedure may cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. |
Modify the existing IngressController by specifying the new subnets:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: <name>
  namespace: openshift-ingress-operator
spec:
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: Classic
          classicLoadBalancer:
            subnets:
              ids:
              - <updated_subnet>
              - <updated_subnet>
              - <updated_subnet>
# ...
where:
<name>: Specifies a name for the IngressController.
<domain>: Specifies the DNS name serviced by the IngressController.
classicLoadBalancer: Specifies the load balancer settings. You can use the networkLoadBalancer field instead if you are using an NLB.
ids: Specifies the subnets by ID. Optionally, you can specify subnets by name by using the names field instead.
<updated_subnet>: Specifies the updated subnet IDs (or names if you are using the names field).
|
You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers and private subnets for internal Ingress Controllers. |
Examine the Progressing condition on the IngressController for instructions on how to apply the subnet updates by running the following command:
$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions[?(@.type==\"Progressing\")]}" | yq -PC
lastTransitionTime: "2024-11-25T20:19:31Z"
message: 'One or more status conditions indicate progressing: LoadBalancerProgressing=True (OperandsProgressing: One or more managed resources are progressing: The IngressController subnets were changed from [...] to [...]. To effectuate this change, you must delete the service: `oc -n openshift-ingress delete svc/router-<name>`; the service load-balancer will then be deprovisioned and a new one created. This will most likely cause the new load-balancer to have a different host name and IP address and cause disruption. To return to the previous state, you can revert the change to the IngressController: [...]'
reason: IngressControllerProgressing
status: "True"
type: Progressing
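The jsonpath filter in the command above selects the condition whose type is Progressing. The equivalent selection can be sketched in Python against a trimmed, hypothetical conditions list:

```python
# Return the first status condition matching the requested type, or
# None if the IngressController does not report it.
def get_condition(conditions, cond_type):
    return next((c for c in conditions if c.get("type") == cond_type), None)

conditions = [
    {"type": "Available", "status": "True"},
    {"type": "Progressing", "status": "True", "reason": "IngressControllerProgressing"},
]
progressing = get_condition(conditions, "Progressing")
print(progressing["status"])  # a True status means the subnet change is still pending
```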
To apply the update, delete the service associated with the Ingress controller by running the following command:
$ oc -n openshift-ingress delete svc/router-<name>
To confirm that the load balancer was provisioned successfully, check the IngressController conditions by running the following command:
$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC
You can specify static IP addresses, also known as elastic IP (EIP) addresses, for the Network Load Balancer (NLB) in the Ingress Controller. This is useful in situations where you want to configure appropriate firewall rules for your cluster network.
You must have an installed Amazon Web Services cluster.
You must know the names or IDs of the subnets to which you intend to map your IngressController.
Create a YAML file, such as sample-ingress.yaml, that contains the following example content:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name>
spec:
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
          networkLoadBalancer:
            subnets:
              ids:
              - <subnet_ID>
              names:
              - <subnet_A>
              - <subnet_B>
            eipAllocations:
            - <eipalloc_A>
            - <eipalloc_B>
            - <eipalloc_C>
where:
<name>: Specifies a name for the Ingress Controller.
<domain>: Specifies the DNS name serviced by the Ingress Controller.
scope: Specifies the scope for the EIPs. The scope must be set to the value External and be Internet-facing in order to allocate EIPs.
subnets: Specifies the IDs and names of your subnets. The total number of IDs and names must equal the number of allocated EIPs.
eipAllocations: Specifies the EIP addresses.
|
You can specify a maximum of one subnet per availability zone. Only provide public subnets for external Ingress Controllers. You can associate one EIP address per subnet. |
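The EIP count rule can be sketched as a quick check before applying the CR (hypothetical helper with placeholder values):

```python
# The number of EIP allocations must equal the total number of subnets
# (IDs plus names), with one EIP associated per subnet.
def eips_match_subnets(subnet_ids, subnet_names, eip_allocations):
    return len(subnet_ids) + len(subnet_names) == len(eip_allocations)

print(eips_match_subnets(["subnet-0aa1"], ["subnet-A", "subnet-B"],
                         ["eipalloc-1", "eipalloc-2", "eipalloc-3"]))  # True
```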
Save and apply the CR file by entering the following command:
$ oc apply -f sample-ingress.yaml
Confirm the load balancer was provisioned successfully by checking the IngressController conditions by running the following command:
$ oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath="{.status.conditions}" | yq -PC