Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
Other infrastructure components
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:
Option 1: Create VPC endpoints
Create VPC endpoints and attach them to the subnets that the cluster uses. Name the endpoints as follows:
- ec2.<aws_region>.amazonaws.com
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com
With this option, network traffic remains private between your VPC and the required AWS services.
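For example, a minimal CloudFormation sketch of an interface endpoint for the EC2 service might look like the following; the region, the VpcId parameter, and the PrivateSubnetA resource are assumptions, and the S3 endpoint would typically be a Gateway endpoint that references route tables rather than subnets:

  # Hypothetical interface endpoint for the EC2 service in us-east-1.
  # VpcId and PrivateSubnetA are assumed to be defined elsewhere in the template.
  Ec2VpcEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: com.amazonaws.us-east-1.ec2
      VpcEndpointType: Interface
      VpcId: !Ref VpcId
      SubnetIds:
        - !Ref PrivateSubnetA
      PrivateDnsEnabled: true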
Option 2: Create a proxy without VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.
Option 3: Create a proxy with VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create VPC endpoints and attach them to the subnets that the cluster uses. Name the endpoints as follows:
- ec2.<aws_region>.amazonaws.com
- elasticloadbalancing.<aws_region>.amazonaws.com
- s3.<aws_region>.amazonaws.com
When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services.
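For example, the proxy stanza in install-config.yaml might look like the following sketch; the proxy URL and the us-east-1 region are placeholders for your own values:

  # Hypothetical values; replace the proxy URL and region with your own.
  proxy:
    httpProxy: http://<proxy_host>:<port>
    httpsProxy: https://<proxy_host>:<port>
    noProxy: ec2.us-east-1.amazonaws.com,elasticloadbalancing.us-east-1.amazonaws.com,s3.us-east-1.amazonaws.com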
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your
machines.
Component | AWS type | Description
VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.
Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.
Internet gateways | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.
Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the following ports: 80 (inbound HTTP traffic), 443 (inbound HTTPS traffic), 22 (inbound SSH traffic), 1024 - 65535 (inbound ephemeral traffic), and 0 - 65535 (outbound ephemeral traffic).
Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
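As a sketch of how the NAT gateway pieces in the table fit together, one per-AZ NAT gateway, its EIP, and the default route of the matching private route table might be declared as follows; the resource and parameter names are assumptions, not the names used in the provided templates:

  NatGatewayEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatGatewayEIP.AllocationId
      SubnetId: !Ref PublicSubnetA              # assumed public subnet
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTableA     # assumed private route table
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway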
Required DNS and load balancing components
Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.
The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets are the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
Component | AWS type | Description
DNS | AWS::Route53::HostedZone | The hosted zone for your internal DNS.
Public load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your public subnets.
External API server record | AWS::Route53::RecordSetGroup | Alias records for the external API server.
External listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the external load balancer.
External target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the external load balancer.
Private load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your private subnets.
Internal API server record | AWS::Route53::RecordSetGroup | Alias records for the internal API server.
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 22623 for the internal load balancer.
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer.
Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the internal load balancer.
Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer.
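For illustration, the alias record for the external API server and its port 6443 listener might be expressed as in the following sketch; the hosted zone parameter, load balancer, target group, and domain names are assumptions:

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      HostedZoneId: !Ref PublicHostedZoneId               # assumed parameter
      RecordSets:
        - Name: api.mycluster.example.com.                # api.<cluster_name>.<domain>
          Type: A
          AliasTarget:
            HostedZoneId: !GetAtt ExtApiLoadBalancer.CanonicalHostedZoneID
            DNSName: !GetAtt ExtApiLoadBalancer.DNSName
  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ExtApiLoadBalancer            # assumed load balancer
      Port: 6443
      Protocol: TCP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ExternalApiTargetGroup     # assumed target group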
Security groups
The control plane and worker machines require access to the following ports:
Group | Type | IP Protocol | Port range
MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0
 | | tcp | 22
 | | tcp | 6443
 | | tcp | 22623
WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0
 | | tcp | 22
BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22
 | | tcp | 19531
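A minimal sketch of one of these groups, assuming a VpcId parameter and an illustrative VPC CIDR of 10.0.0.0/16, might look like the following:

  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster control plane security group
      VpcId: !Ref VpcId                    # assumed parameter
      SecurityGroupIngress:
        - IpProtocol: icmp                 # icmp type 0, per the table
          FromPort: 0
          ToPort: 0
          CidrIp: 10.0.0.0/16              # illustrative VPC CIDR
        - IpProtocol: tcp                  # Kubernetes API
          FromPort: 6443
          ToPort: 6443
          CidrIp: 0.0.0.0/0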
Control plane Ingress
The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
Ingress group | Description | IP protocol | Port range
MasterIngressEtcd | etcd | tcp | 2379-2380
MasterIngressVxlan | Vxlan packets | udp | 4789
MasterIngressWorkerVxlan | Vxlan packets | udp | 4789
MasterIngressInternal | Internal cluster communication and Kubernetes proxy metrics | tcp | 9000-9999
MasterIngressWorkerInternal | Internal cluster communication | tcp | 9000-9999
MasterIngressKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250-10259
MasterIngressWorkerKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250-10259
MasterIngressIngressServices | Kubernetes Ingress services | tcp | 30000-32767
MasterIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000-32767
MasterIngressGeneve | Geneve packets | udp | 6081
MasterIngressWorkerGeneve | Geneve packets | udp | 6081
MasterIngressIpsecIke | IPsec IKE packets | udp | 500
MasterIngressWorkerIpsecIke | IPsec IKE packets | udp | 500
MasterIngressIpsecNat | IPsec NAT-T packets | udp | 4500
MasterIngressWorkerIpsecNat | IPsec NAT-T packets | udp | 4500
MasterIngressIpsecEsp | IPsec ESP packets | 50 | All
MasterIngressWorkerIpsecEsp | IPsec ESP packets | 50 | All
MasterIngressInternalUDP | Internal cluster communication | udp | 9000-9999
MasterIngressWorkerInternalUDP | Internal cluster communication | udp | 9000-9999
MasterIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000-32767
MasterIngressWorkerIngressServicesUDP | Kubernetes Ingress services | udp | 30000-32767
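As an illustration, the MasterIngressWorkerInternal rule from the table could be expressed roughly as follows; the security group resource names are assumptions:

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId                # assumed resource name
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId  # assumed resource name
      Description: Internal cluster communication
      IpProtocol: tcp
      FromPort: 9000
      ToPort: 9999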
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
Ingress group | Description | IP protocol | Port range
WorkerIngressVxlan | Vxlan packets | udp | 4789
WorkerIngressWorkerVxlan | Vxlan packets | udp | 4789
WorkerIngressInternal | Internal cluster communication | tcp | 9000-9999
WorkerIngressWorkerInternal | Internal cluster communication | tcp | 9000-9999
WorkerIngressKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250
WorkerIngressWorkerKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250
WorkerIngressIngressServices | Kubernetes Ingress services | tcp | 30000-32767
WorkerIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000-32767
WorkerIngressGeneve | Geneve packets | udp | 6081
WorkerIngressMasterGeneve | Geneve packets | udp | 6081
WorkerIngressIpsecIke | IPsec IKE packets | udp | 500
WorkerIngressMasterIpsecIke | IPsec IKE packets | udp | 500
WorkerIngressIpsecNat | IPsec NAT-T packets | udp | 4500
WorkerIngressMasterIpsecNat | IPsec NAT-T packets | udp | 4500
WorkerIngressIpsecEsp | IPsec ESP packets | 50 | All
WorkerIngressMasterIpsecEsp | IPsec ESP packets | 50 | All
WorkerIngressInternalUDP | Internal cluster communication | udp | 9000-9999
WorkerIngressMasterInternalUDP | Internal cluster communication | udp | 9000-9999
WorkerIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000-32767
WorkerIngressMasterIngressServicesUDP | Kubernetes Ingress services | udp | 30000-32767
Roles and instance profiles
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.
Role | Effect | Action | Resource
Master | Allow | ec2:* | *
 | Allow | elasticloadbalancing:* | *
 | Allow | iam:PassRole | *
 | Allow | s3:GetObject | *
Worker | Allow | ec2:Describe* | *
Bootstrap | Allow | ec2:Describe* | *
 | Allow | ec2:AttachVolume | *
 | Allow | ec2:DetachVolume | *
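A rough sketch of the worker role and its instance profile, using the ec2:Describe* permission from the table, might look like the following; the resource and policy names are assumptions:

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: worker-policy        # assumed policy name
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: "ec2:Describe*"
                Resource: "*"
  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref WorkerIamRole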