The AWS Load Balancer Operator deploys and manages the AWS Load Balancer Controller. You can install the AWS Load Balancer Operator from OperatorHub by using the OKD web console or the CLI.
You can install the AWS Load Balancer Operator by using the web console.
You have logged in to the OKD web console as a user with cluster-admin permissions.
Your cluster is configured with AWS as the platform type and cloud provider.
If you are using a security token service (STS) or user-provisioned infrastructure, follow the related preparation steps. For example, if you are using AWS Security Token Service, see "Preparing for the AWS Load Balancer Operator on a cluster using the AWS Security Token Service (STS)".
Navigate to Operators → OperatorHub in the OKD web console.
Select the AWS Load Balancer Operator. You can use the Filter by keyword text box or use the filter list to search for the AWS Load Balancer Operator from the list of Operators.
Select the aws-load-balancer-operator namespace.
On the Install Operator page, select the following options:
Update channel as stable-v1.
Installation mode as All namespaces on the cluster (default).
Installed Namespace as aws-load-balancer-operator. If the aws-load-balancer-operator namespace does not exist, it gets created during the Operator installation.
Select Update approval as Automatic or Manual. By default, Update approval is set to Automatic. If you select automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select manual updates, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to the new version.
Click Install.
Verify that the AWS Load Balancer Operator shows the Status as Succeeded on the Installed Operators dashboard.
You can install the AWS Load Balancer Operator by using the CLI.
You are logged in to the OKD web console as a user with cluster-admin permissions.
Your cluster is configured with AWS as the platform type and cloud provider.
You are logged in to the OpenShift CLI (oc).
Create a Namespace object:
Create a YAML file that defines the Namespace object:
Example namespace.yaml file
apiVersion: v1
kind: Namespace
metadata:
  name: aws-load-balancer-operator
Create the Namespace object by running the following command:
$ oc apply -f namespace.yaml
Create an OperatorGroup object:
Create a YAML file that defines the OperatorGroup object:
Example operatorgroup.yaml file
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: aws-lb-operatorgroup
  namespace: aws-load-balancer-operator
spec:
  upgradeStrategy: Default
Create the OperatorGroup object by running the following command:
$ oc apply -f operatorgroup.yaml
Create a Subscription object:
Create a YAML file that defines the Subscription object:
Example subscription.yaml file
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create the Subscription object by running the following command:
$ oc apply -f subscription.yaml
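If you prefer to approve each upgrade yourself, you can create the subscription with manual approval instead. This is a sketch of the same file with only the installPlanApproval field changed:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: aws-load-balancer-operator
  namespace: aws-load-balancer-operator
spec:
  channel: stable-v1
  installPlanApproval: Manual   # OLM waits for approval before installing or upgrading
  name: aws-load-balancer-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

With Manual approval, OLM still creates the install plan, but does not run it until you approve it, for example by setting the InstallPlan's spec.approved field to true.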
Get the name of the install plan from the subscription:
$ oc -n aws-load-balancer-operator \
get subscription aws-load-balancer-operator \
--template='{{.status.installplan.name}}{{"\n"}}'
Check the status of the install plan:
$ oc -n aws-load-balancer-operator \
get ip <install_plan_name> \
--template='{{.status.phase}}{{"\n"}}'
The output must be Complete.
You can install only a single instance of the AWSLoadBalancerController object in a cluster. You can create the AWS Load Balancer Controller by using the CLI. The AWS Load Balancer Operator reconciles only the resource named cluster.
You have created the echoserver namespace.
You have access to the OpenShift CLI (oc).
Create a YAML file that defines the AWSLoadBalancerController object:
Example sample-aws-lb.yaml file
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController (1)
metadata:
  name: cluster (2)
spec:
  subnetTagging: Auto (3)
  additionalResourceTags: (4)
  - key: example.org/security-scope
    value: staging
  ingressClass: alb (5)
  config:
    replicas: 2 (6)
  enabledAddons: (7)
  - AWSWAFv2 (8)
1. Defines the AWSLoadBalancerController object.
2. Defines the AWS Load Balancer Controller name. This instance name gets added as a suffix to all related resources.
3. Configures the subnet tagging method for the AWS Load Balancer Controller. The valid values are Auto, where the Operator determines the subnets that belong to the cluster and tags them, and Manual, where you tag the subnets yourself.
4. Defines the tags used by the AWS Load Balancer Controller when it provisions AWS resources.
5. Defines the ingress class name. The default value is alb.
6. Specifies the number of replicas of the AWS Load Balancer Controller.
7. Specifies the add-ons to enable for the AWS Load Balancer Controller.
8. Enables the alb.ingress.kubernetes.io/wafv2-acl-arn annotation.
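With the AWSWAFv2 add-on enabled, an Ingress can attach an AWS WAFv2 web ACL through the annotation that the add-on unlocks. This is a sketch of the relevant Ingress metadata only; the ACL ARN is a placeholder, not a real resource:

```yaml
metadata:
  annotations:
    # Placeholder ARN: replace with the ARN of your own WAFv2 web ACL
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:111122223333:regional/webacl/example-acl/a1b2c3d4
```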
Create the AWSLoadBalancerController object by running the following command:
$ oc create -f sample-aws-lb.yaml
Create a YAML file that defines the Deployment resource:
Example sample-aws-lb.yaml file
apiVersion: apps/v1
kind: Deployment (1)
metadata:
  name: <echoserver> (2)
  namespace: echoserver
spec:
  selector:
    matchLabels:
      app: echoserver
  replicas: 3 (3)
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: openshift/origin-node
        command:
        - "/bin/socat"
        args:
        - TCP4-LISTEN:8080,reuseaddr,fork
        - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"'
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
1. Defines the Deployment resource.
2. Specifies the deployment name.
3. Specifies the number of replicas of the deployment.
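The socat command in the container above implements a minimal HTTP echo server: for each connection it prints an HTTP/1.0 200 OK status line and then echoes the request headers back until the blank line that terminates them. You can exercise the same per-connection handler locally, assuming bash and sed are available, by piping a sample request into it:

```shell
# Feed a sample HTTP request into the command that socat would EXEC for
# each connection: it prints the status line, then echoes the request
# headers up to (and including) the blank CRLF line that ends them.
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | \
  /bin/bash -c "printf \"HTTP/1.0 200 OK\r\n\r\n\"; sed -e \"/^\r/q\""
```

In the cluster, socat forks this handler for every TCP connection on port 8080, which is why the curl verification at the end of this procedure returns the request's own headers.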
Create a YAML file that defines the Service resource:
Example service-albo.yaml file
apiVersion: v1
kind: Service (1)
metadata:
  name: <echoserver> (2)
  namespace: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: NodePort
  selector:
    app: echoserver
1. Defines the Service resource.
2. Specifies the service name.
Create a YAML file that defines the Ingress resource:
Example ingress-albo.yaml file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name> (1)
  namespace: echoserver
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: <echoserver> (2)
            port:
              number: 80
1. Specifies the name of the Ingress resource.
2. Specifies the service name.
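The target-type: instance annotation above makes the load balancer route traffic to node ports, which is why the backing Service uses type NodePort. Assuming the standard AWS Load Balancer Controller annotation semantics, the alternative target type is ip, which routes directly to pod IPs; in that case the Service can be of type ClusterIP. A sketch of the changed annotation only:

```yaml
metadata:
  annotations:
    # Route directly to pod IPs instead of node ports
    alb.ingress.kubernetes.io/target-type: ip
```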
Save the status of the Ingress resource in the HOST variable by running the following command:
$ HOST=$(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}')
Verify the status of the Ingress resource by running the following command:
$ curl $HOST