
Overview

OKD can be configured to access an AWS EC2 infrastructure, including using AWS volumes as persistent storage for application data. After AWS is configured properly, some additional configurations will need to be completed on the OKD hosts.

Permissions

Configuring AWS for OKD requires the following permissions:

Table 1. Master Permissions

Elastic Compute Cloud (EC2)

ec2:DescribeVolume, ec2:CreateVolume, ec2:CreateTags, ec2:DescribeInstance, ec2:AttachVolume, ec2:DetachVolume, ec2:DeleteVolume, ec2:DescribeSubnets, ec2:CreateSecurityGroup, ec2:DescribeSecurityGroups, ec2:DescribeRouteTables, ec2:AuthorizeSecurityGroupIngress, ec2:RevokeSecurityGroupIngress, ec2:DeleteSecurityGroup

Elastic Load Balancing

elasticloadbalancing:DescribeTags, elasticloadbalancing:CreateLoadBalancerListeners, elasticloadbalancing:ConfigureHealthCheck, elasticloadbalancing:DeleteLoadBalancerListeners, elasticloadbalancing:RegisterInstancesWithLoadBalancer, elasticloadbalancing:DescribeLoadBalancers, elasticloadbalancing:CreateLoadBalancer, elasticloadbalancing:DeleteLoadBalancer, elasticloadbalancing:ModifyLoadBalancerAttributes, elasticloadbalancing:DescribeLoadBalancerAttributes

Table 2. Node Permissions

Elastic Compute Cloud (EC2)

ec2:DescribeInstance*

  • Every master, node, and subnet must have the KubernetesCluster tag set to a common value (see the example command after this list).

  • One security group, preferably the one linked to the nodes, must have the same KubernetesCluster tag.

    • Do not tag all security groups with this tag, or Elastic Load Balancing (ELB) will not be able to create a load balancer.
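For example, assuming the AWS CLI is installed and configured, a tag could be applied to an instance and a subnet with a command like the following. The resource IDs and tag value are placeholders; substitute your own:

$ aws ec2 create-tags --resources <instance_id> <subnet_id> \
    --tags Key=KubernetesCluster,Value=<cluster_name>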

Configuring a Security Group

When installing OKD on AWS, ensure that you set up the appropriate security groups.

These are some of the ports that you must have in your security groups; without them, the installation fails. You might need more ports depending on the cluster configuration that you want to install. For more information and to adjust your security groups accordingly, see Required Ports.

All OKD Hosts

  • tcp/22 from host running the installer/Ansible

etcd Security Group

  • tcp/2379 from masters

  • tcp/2380 from etcd hosts

Master Security Group

  • tcp/8443 from 0.0.0.0/0

  • tcp/53 from all OKD hosts for environments installed prior to or upgraded to 1.2

  • udp/53 from all OKD hosts for environments installed prior to or upgraded to 1.2

  • tcp/8053 from all OKD hosts for new environments installed with 1.2

  • udp/8053 from all OKD hosts for new environments installed with 1.2

Node Security Group

  • tcp/10250 from masters

  • udp/4789 from nodes

Infrastructure Nodes (ones that can host the OKD router)

  • tcp/443 from 0.0.0.0/0

  • tcp/80 from 0.0.0.0/0

If configuring external load-balancers (ELBs) for load balancing the masters and/or routers, you also need to configure Ingress and Egress security groups for the ELBs appropriately.
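As an illustration, assuming the AWS CLI and an existing security group, a rule such as the master API port above could be opened with a command like the following (the group ID is a placeholder):

$ aws ec2 authorize-security-group-ingress --group-id <master_security_group_id> \
    --protocol tcp --port 8443 --cidr 0.0.0.0/0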

Overriding Detected IP Addresses and Host Names

In AWS, situations that require overriding the variables include:

Table 3. Variable Usage

hostname

The user is installing in a VPC that is not configured for both DNS hostnames and DNS resolution.

ip

You have multiple network interfaces configured and want to use one other than the default. You must also set the openshift_set_node_ip parameter to True; otherwise, the SDN attempts to use the hostname setting or tries to resolve the host name for the IP address.

public_hostname

  • A master instance where the VPC subnet is not configured for Auto-assign Public IP. For external access to this master, you need to have an ELB or other load balancer configured that would provide the external access needed, or you need to connect over a VPN connection to the internal name of the host.

  • A master instance where metadata is disabled.

  • This value is not actually used by the nodes.

public_ip

  • A master instance where the VPC subnet is not configured for Auto-assign Public IP.

  • A master instance where metadata is disabled.

  • This value is not actually used by the nodes.

If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for the provider no longer works.

EC2 hosts in particular must be deployed in a VPC that has both DNS host names and DNS resolution enabled, and openshift_hostname should not be overridden.
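For example, the overrides can be set per host in the Ansible inventory. The host name and addresses below are placeholders; a sketch of such an entry might look like this:

[masters]
master.example.com openshift_public_hostname=<public_host_name> openshift_public_ip=<public_ip> openshift_ip=<private_ip> openshift_set_node_ip=True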

Configuring AWS Variables

To set the required AWS variables, create a /etc/aws/aws.conf file with the following contents on all of your OKD hosts, both masters and nodes:

[Global]
Zone = us-east-1c (1)
1 This is the Availability Zone of your AWS Instance and where your EBS Volume resides; this information is obtained from the AWS Management Console.
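If you are not sure of the zone, one way to confirm it, assuming the instance metadata service is reachable from the host, is:

$ curl http://169.254.169.254/latest/meta-data/placement/availability-zone
us-east-1c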

Configuring OKD Masters for AWS

You can set the AWS configuration on your OKD master hosts in two ways:

  • using Ansible, as described in Configuring OKD for AWS with Ansible

  • manually, as described in Manually Configuring OKD Masters for AWS

Configuring OKD for AWS with Ansible

Example AWS Configuration with Ansible
# Cloud Provider Configuration
#
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
#openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
#openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
#
# AWS (Using API Credentials)
#openshift_cloudprovider_kind=aws
#openshift_cloudprovider_aws_access_key=aws_access_key_id
#openshift_cloudprovider_aws_secret_key=aws_secret_access_key
#
# AWS (Using IAM Profiles)
#openshift_cloudprovider_kind=aws
# Note: IAM roles must exist before launching the instances.
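If you use the environment variable lookups shown in the example, export the credentials in the shell that runs the Ansible playbook, for example:

$ export AWS_ACCESS_KEY_ID=<key_ID>
$ export AWS_SECRET_ACCESS_KEY=<secret_key>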

When Ansible configures AWS, the following files are created for you:

  • /etc/aws/aws.conf

  • /etc/origin/master/master-config.yaml

  • /etc/origin/node/node-config.yaml

  • /etc/sysconfig/origin-master-api

  • /etc/sysconfig/origin-master-controllers

  • /etc/sysconfig/origin-node
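After the playbook completes, you can spot-check the generated configuration, for example:

# cat /etc/aws/aws.conf
# grep -A2 cloud-provider /etc/origin/master/master-config.yaml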

Manually Configuring OKD Masters for AWS

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:

kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/aws/aws.conf"
  controllerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/aws/aws.conf"

Currently, the nodeName must match the instance name in AWS in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.

When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted into the master and node containers. Therefore, place aws.conf in /etc/origin/ instead of /etc/.
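In that case, a minimal sketch of the adjusted master configuration, with only the cloud-config path changed, might look like this:

kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/aws.conf"
  controllerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/origin/aws.conf"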

Manually Configuring OKD Nodes for AWS

Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:

kubeletArguments:
  cloud-provider:
    - "aws"
  cloud-config:
    - "/etc/aws/aws.conf"

When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted into the master and node containers. Therefore, place aws.conf in /etc/origin/ instead of /etc/.

Setting Key Value Access Pairs

Make sure the following environment variables are set in the /etc/sysconfig/origin-master-api and /etc/sysconfig/origin-master-controllers files on masters and in the /etc/sysconfig/origin-node file on nodes:

AWS_ACCESS_KEY_ID=<key_ID>
AWS_SECRET_ACCESS_KEY=<secret_key>

Access keys are obtained when setting up your AWS IAM user.
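For example, one way to append the variables to a file on a master is a sketch like the following; repeat it for /etc/sysconfig/origin-master-controllers on masters and /etc/sysconfig/origin-node on nodes, substituting your own values for the placeholders:

# cat >> /etc/sysconfig/origin-master-api <<EOF
AWS_ACCESS_KEY_ID=<key_ID>
AWS_SECRET_ACCESS_KEY=<secret_key>
EOF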

Applying Configuration Changes

To apply your configuration changes, start or restart OKD services on all master and node hosts. For details, see Restarting OKD services:

# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node
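After the services restart, one way to confirm that the hosts have re-registered with the cluster is:

$ oc get nodes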

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider causes OKD to attempt to delete the node, because the node switches from using the host name as the externalID (which was the case when no cloud provider was used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OKD service.

    # systemctl restart origin-node
  5. Add back any labels that each node previously had, as shown in the example below.
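For example, to restore a single label recorded in step 2 (the node name, key, and value are placeholders):

$ oc label node <node_name> <key>=<value>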