In OKD version 4.6, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed.
A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation.
The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods.
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OKD. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. |
Review details about the OKD installation and update processes.
If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to.
If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.
Be sure to also review this site list if you are configuring a proxy. |
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager
only approves the kubelet client CSRs. The machine-approver
cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
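For example, after the cluster is running, you can list pending requests and approve them with the OpenShift CLI. This is a minimal sketch only; you must still apply your own verification process before approving serving certificate requests:

$ oc get csr

$ oc adm certificate approve <csr_name>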
Before you can install OKD, you must configure a Google Cloud Platform (GCP) project to host it.
To install OKD, you must create a project in your Google Cloud Platform (GCP) account to host the cluster.
Create a project to host your OKD cluster. See Creating and Managing Projects in the GCP documentation.
Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; internal load balancing requires the Premium Tier. |
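If you prefer the CLI to the console, you can create the project with gcloud. This is a hedged example; <project_id> and the display name are placeholders:

$ gcloud projects create <project_id> --name="okd-cluster-host"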
Your Google Cloud Platform (GCP) project requires access to several API services to complete OKD installation.
You created a project to host your cluster.
Enable the following required API services in the project that hosts your cluster. See Enabling services in the GCP documentation.
API service | Console service name |
---|---|
Cloud Deployment Manager V2 API | deploymentmanager.googleapis.com |
Compute Engine API | compute.googleapis.com |
Google Cloud APIs | cloudapis.googleapis.com |
Cloud Resource Manager API | cloudresourcemanager.googleapis.com |
Google DNS API | dns.googleapis.com |
IAM Service Account Credentials API | iamcredentials.googleapis.com |
Identity and Access Management (IAM) API | iam.googleapis.com |
Service Management API | servicemanagement.googleapis.com |
Service Usage API | serviceusage.googleapis.com |
Google Cloud Storage JSON API | storage-api.googleapis.com |
Cloud Storage | storage-component.googleapis.com |
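You can also enable services from the command line with gcloud instead of the console. The following sketch enables a subset of the services listed above; repeat for the remaining service names:

$ gcloud services enable deploymentmanager.googleapis.com compute.googleapis.com dns.googleapis.com --project <project_id>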
The OKD cluster uses a number of Google Cloud Platform (GCP) components, but the default quotas do not affect your ability to install a default OKD cluster.
A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys.
Service | Component | Location | Total resources required | Resources removed after bootstrap |
---|---|---|---|---|
Service account | IAM | Global | 5 | 0 |
Firewall rules | Networking | Global | 11 | 1 |
Forwarding rules | Compute | Global | 2 | 0 |
Health checks | Compute | Global | 2 | 0 |
Images | Compute | Global | 1 | 0 |
Networks | Networking | Global | 1 | 0 |
Routers | Networking | Global | 1 | 0 |
Routes | Networking | Global | 2 | 0 |
Subnetworks | Compute | Global | 2 | 0 |
Target pools | Networking | Global | 2 | 0 |
If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. |
Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient.
If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit:
asia-east2
asia-northeast2
asia-south1
australia-southeast1
europe-north1
europe-west2
europe-west3
europe-west6
northamerica-northeast1
southamerica-east1
us-west2
You can increase resource quotas from the GCP console, but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OKD cluster.
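As a starting point for capacity planning, you can inspect the current usage and limits for a region with gcloud; this is an illustrative command only:

$ gcloud compute regions describe <region> --format=json | jq -r '.quotas[] | "\(.metric): \(.usage)/\(.limit)"'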
OKD requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one.
You created a project to host your cluster.
Create a service account in the project that you use to host your OKD cluster. See Creating a service account in the GCP documentation.
Grant the service account the appropriate permissions. You can either
grant the individual permissions that follow or assign the Owner
role to it.
See Granting roles to a service account for specific resources.
While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. |
Create the service account key in JSON format. See Creating service account keys in the GCP documentation.
The service account key is required to create a cluster.
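The following gcloud sketch shows one way to complete these steps from the command line. The account name, key file name, and the choice of the Owner role are illustrative placeholders; use the individual roles listed below if you do not want to grant Owner:

$ gcloud iam service-accounts create okd-installer --display-name="okd-installer" --project <project_id>

$ gcloud projects add-iam-policy-binding <project_id> --member "serviceAccount:okd-installer@<project_id>.iam.gserviceaccount.com" --role "roles/owner"

$ gcloud iam service-accounts keys create installer-key.json --iam-account okd-installer@<project_id>.iam.gserviceaccount.com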
When you attach the Owner
role to the service account that you create, you
grant that service account all permissions, including those that are required to
install OKD. To deploy an OKD cluster, the service
account requires the following permissions. If you deploy your cluster into an existing VPC, the service account does not require certain networking permissions, which are noted in the following lists:
Compute Admin
Security Admin
Service Account Admin
Service Account User
Storage Admin
DNS Administrator
Deployment Manager Editor
Service Account Key Admin
For the cluster to create new limited credentials for its Operators, add the following role:
Service Account Key Admin
The roles are applied to the service accounts that the control plane and compute machines use:
Account | Roles |
---|---|
Control Plane | roles/compute.instanceAdmin |
 | roles/compute.networkAdmin |
 | roles/compute.securityAdmin |
 | roles/storage.admin |
 | roles/iam.serviceAccountUser |
Compute | roles/compute.viewer |
 | roles/storage.admin |
You can deploy an OKD cluster to the following Google Cloud Platform (GCP) regions:
asia-east1
(Changhua County, Taiwan)
asia-east2
(Hong Kong)
asia-northeast1
(Tokyo, Japan)
asia-northeast2
(Osaka, Japan)
asia-northeast3
(Seoul, South Korea)
asia-south1
(Mumbai, India)
asia-southeast1
(Jurong West, Singapore)
asia-southeast2
(Jakarta, Indonesia)
australia-southeast1
(Sydney, Australia)
europe-north1
(Hamina, Finland)
europe-west1
(St. Ghislain, Belgium)
europe-west2
(London, England, UK)
europe-west3
(Frankfurt, Germany)
europe-west4
(Eemshaven, Netherlands)
europe-west6
(Zürich, Switzerland)
northamerica-northeast1
(Montréal, Québec, Canada)
southamerica-east1
(São Paulo, Brazil)
us-central1
(Council Bluffs, Iowa, USA)
us-east1
(Moncks Corner, South Carolina, USA)
us-east4
(Ashburn, Northern Virginia, USA)
us-west1
(The Dalles, Oregon, USA)
us-west2
(Los Angeles, California, USA)
us-west3
(Salt Lake City, Utah, USA)
us-west4
(Las Vegas, Nevada, USA)
To install OKD on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP.
You created a project to host your cluster.
You created a service account and granted it the required permissions.
Install the following binaries in $PATH:
gcloud
gsutil
See Install the latest Cloud SDK version in the GCP documentation.
Authenticate using the gcloud
tool with your configured service account.
See Authorizing with a service account in the GCP documentation.
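For example, assuming <service_account_key_file> is the JSON key that you created earlier, you can authenticate and set the default project as follows:

$ gcloud auth activate-service-account --key-file=<service_account_key_file>

$ gcloud config set project <project_id>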
If you use a shared Virtual Private Cloud (VPC) to host your OKD cluster in Google Cloud Platform (GCP), you must configure the project that hosts it.
If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install an OKD cluster. |
Create a project to host the shared VPC for your OKD cluster. See Creating and Managing Projects in the GCP documentation.
Create a service account in the project that hosts your shared VPC. See Creating a service account in the GCP documentation.
Grant the service account the appropriate permissions. You can either
grant the individual permissions that follow or assign the Owner
role to it.
See Granting roles to a service account for specific resources.
While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. The service account for the project that hosts the shared VPC network requires the following roles:
|
To install OKD, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the project that hosts the shared VPC that you install the cluster into. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster.
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source.
If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains. |
Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation.
Use an appropriate root domain, such as openshiftcorp.com
, or subdomain,
such as clusters.openshiftcorp.com
.
Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation.
You typically have four name servers.
Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers.
If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation.
If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain. This process might include a request to your company’s IT department or the division that controls the root domain and DNS services for your company.
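A hedged gcloud sketch of creating the public zone and looking up its name servers, assuming <zone_name> is a name of your choosing and the domain matches the earlier example:

$ gcloud dns managed-zones create <zone_name> --dns-name=clusters.openshiftcorp.com. --description="OKD public zone" --visibility=public

$ gcloud dns managed-zones describe <zone_name> --format="value(nameServers)"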
You must create a VPC in Google Cloud Platform (GCP) for your OKD cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Copy the template from the Deployment Manager template for the VPC
section of this topic and save it as 01_vpc.py
on your computer. This template
describes the VPC that your cluster requires.
Export the following variables required by the resource definition:
Export the control plane CIDR:
$ export MASTER_SUBNET_CIDR='10.0.0.0/19'
Export the compute CIDR:
$ export WORKER_SUBNET_CIDR='10.0.32.0/19'
Export the region to deploy the VPC network and cluster to:
$ export REGION='<region>'
Export the variable for the ID of the project that hosts the shared VPC:
$ export HOST_PROJECT=<host_project>
Export the variable for the email of the service account that belongs to the host project:
$ export HOST_PROJECT_ACCOUNT=<host_service_account_email>
Create a 01_vpc.yaml
resource definition file:
$ cat <<EOF >01_vpc.yaml
imports:
- path: 01_vpc.py
resources:
- name: cluster-vpc
type: 01_vpc.py
properties:
infra_id: '<prefix>' (1)
region: '${REGION}' (2)
master_subnet_cidr: '${MASTER_SUBNET_CIDR}' (3)
worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' (4)
EOF
1 | infra_id is the prefix of the network name. |
2 | region is the region to deploy the cluster into, for example us-central1 . |
3 | master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/19 . |
4 | worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.32.0/19 . |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} (1)
1 | For <vpc_deployment_name> , specify the name of the VPC to deploy. |
Export the VPC variable that other components require:
Export the name of the host project network:
$ export HOST_PROJECT_NETWORK=<vpc_network>
Export the name of the host project control plane subnet:
$ export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>
Export the name of the host project compute subnet:
$ export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>
Set up the shared VPC. See Setting up Shared VPC in the GCP documentation.
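As an example, you can enable shared VPC on the host project and attach the service project where you deploy the cluster with gcloud; <service_project_id> is a placeholder for your cluster project:

$ gcloud compute shared-vpc enable ${HOST_PROJECT}

$ gcloud compute shared-vpc associated-projects add <service_project_id> --host-project ${HOST_PROJECT}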
You can use the following Deployment Manager template to deploy the VPC that you need for your OKD cluster:
01_vpc.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-network',
'type': 'compute.v1.network',
'properties': {
'region': context.properties['region'],
'autoCreateSubnetworks': False
}
}, {
'name': context.properties['infra_id'] + '-master-subnet',
'type': 'compute.v1.subnetwork',
'properties': {
'region': context.properties['region'],
'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
'ipCidrRange': context.properties['master_subnet_cidr']
}
}, {
'name': context.properties['infra_id'] + '-worker-subnet',
'type': 'compute.v1.subnetwork',
'properties': {
'region': context.properties['region'],
'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
'ipCidrRange': context.properties['worker_subnet_cidr']
}
}, {
'name': context.properties['infra_id'] + '-router',
'type': 'compute.v1.router',
'properties': {
'region': context.properties['region'],
'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
'nats': [{
'name': context.properties['infra_id'] + '-nat-master',
'natIpAllocateOption': 'AUTO_ONLY',
'minPortsPerVm': 7168,
'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
'subnetworks': [{
'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)',
'sourceIpRangesToNat': ['ALL_IP_RANGES']
}]
}, {
'name': context.properties['infra_id'] + '-nat-worker',
'natIpAllocateOption': 'AUTO_ONLY',
'minPortsPerVm': 512,
'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
'subnetworks': [{
'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)',
'sourceIpRangesToNat': ['ALL_IP_RANGES']
}]
}]
}
}]
return {'resources': resources}
To install OKD on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml
file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var
partition during the preparation phases of installation.
For installations of OKD that use user-provisioned infrastructure, you manually generate your installation configuration file.
Obtain the OKD installation program and the access token for your cluster.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version. |
Customize the following install-config.yaml
file template and save
it in the <installation_directory>
.
You must name this configuration file install-config.yaml. |
Back up the install-config.yaml
file so that you can use it to install
multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
install-config.yaml
file for GCP
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. |
apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2)
hyperthreading: Enabled (3) (4)
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
replicas: 3
compute: (2)
- hyperthreading: Enabled (3)
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
replicas: 0
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production (5)
region: us-central1 (6)
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA... (7)
publish: Internal (8)
1 | Specify the public DNS on the host project. | ||
2 | If you do not provide these parameters and values, the installation program provides the default value. | ||
3 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OKD will support defining multiple compute pools during installation. Only one control plane pool is used. |
||
4 | Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
|
||
5 | Specify the main project where the VM instances reside. | ||
6 | Specify the region that your VPC network is in. | ||
7 | You can optionally provide the sshKey value that you use to access the machines in your cluster.
|
||
8 | How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External .
To use a shared VPC in a cluster that uses infrastructure that you provision, you must set publish to Internal . The installation program will no longer be able to access the public DNS zone for the base domain in the host project. |
Production environments can deny direct access to the Internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or
other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace to hold the additional CA
certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter
with the FCOS trust bundle. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the FCOS trust
bundle. |
The installation program does not support the proxy readinessEndpoints field. |
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.
You obtained the OKD installation program.
You created the install-config.yaml
installation configuration file.
Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | For <installation_directory> , specify the installation directory that
contains the install-config.yaml file you created. |
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane machines.
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Check that the mastersSchedulable
parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml
Kubernetes manifest file is set to false
. This setting prevents pods from being scheduled on the control plane machines:
Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml
file.
Locate the mastersSchedulable
parameter and ensure that it is set to false
.
Save and exit the file.
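For reference, the relevant portion of the cluster-scheduler-02-config.yml file resembles the following example; additional fields that the installation program generates are omitted here:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false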
Remove the privateZone
sections from the <installation_directory>/manifests/cluster-dns-02-config.yml
DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: null
name: cluster
spec:
baseDomain: example.openshift.com
privateZone: (1)
id: mycluster-100419-private-zone
status: {}
1 | Remove this section completely. |
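After you remove the privateZone section, the remaining contents of the file resemble the following example:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
status: {}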
Configure the cloud provider for your VPC.
Open the <installation_directory>/manifests/cloud-provider-config.yaml
file.
Add the network-project-id
parameter and set its value to the ID of project that hosts the shared VPC network.
Add the network-name
parameter and set its value to the name of the shared VPC network that hosts the OKD cluster.
Replace the value of the subnetwork-name
parameter with the value of the shared VPC subnet that hosts your compute machines.
The contents of the <installation_directory>/manifests/cloud-provider-config.yaml
resemble the following example:
config: |+
[global]
project-id = example-project
regional = true
multizone = true
node-tags = opensh-ptzzx-master
node-tags = opensh-ptzzx-worker
node-instance-prefix = opensh-ptzzx
external-instance-groups-prefix = opensh-ptzzx
network-project-id = example-shared-vpc
network-name = example-network
subnetwork-name = example-worker-subnet
If you deploy a cluster that is not on a private network, open the <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
file and replace the value of the scope
parameter with External
. The contents of the file resemble the following example:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
creationTimestamp: null
name: default
namespace: openshift-ingress-operator
spec:
endpointPublishingStrategy:
loadBalancer:
scope: External
type: LoadBalancerService
status:
availableReplicas: 0
domain: ''
selector: ''
To create the Ignition configuration files, run the following command from the directory that contains the installation program:
$ ./openshift-install create ignition-configs --dir <installation_directory> (1)
1 | For <installation_directory> , specify the same installation directory. |
The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OKD installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it.
You obtained the OKD installation program and the pull secret for your cluster.
You generated the Ignition config files for your cluster.
You installed the jq
package.
To extract and view the infrastructure name from the Ignition config file metadata, run the following command:
$ jq -r .infraID <installation_directory>/metadata.json (1)
1 | For <installation_directory> , specify the path to the directory that you stored the
installation files in. |
openshift-vw9j6 (1)
1 | The output of this command is your cluster name and a random string. |
You must export a common set of variables that are used with the provided Deployment Manager templates to assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP).
Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. |
Obtain the OKD installation program and the pull secret for your cluster.
Generate the Ignition config files for your cluster.
Install the jq
package.
Export the following common variables to be used by the provided Deployment Manager templates:
$ export BASE_DOMAIN='<base_domain>' (1)
$ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' (1)
$ export NETWORK_CIDR='10.0.0.0/16'
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (2)
$ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json`
$ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
$ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`
1 | Supply the values for the host project. |
2 | For <installation_directory> , specify the path to the directory that you stored the installation files in. |
All the Fedora CoreOS (FCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files from the machine config server.
You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.
Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 1936 | Metrics |
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
 | 10250-10259 | The default ports that Kubernetes reserves |
 | 10256 | openshift-sdn |
UDP | 4789 | VXLAN and Geneve |
 | 6081 | VXLAN and Geneve |
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
TCP/UDP | 30000-32767 | Kubernetes node port |
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API |
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server and peer ports |
The infrastructure that you provision for your cluster must meet the following network topology requirements.
OKD requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. |
Before you install OKD, you must provision two load balancers that meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.
A stateless load balancing algorithm. The options vary based on the load balancer implementation.
Do not configure session persistence for an API load balancer. |
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine config server |
The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. |
Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Configure the following ports on both the front and back of the load balancers:
Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic |
If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. |
A working configuration for the Ingress router is required for an OKD cluster. You must configure the Ingress router after the control plane initializes. |
You must configure load balancers in Google Cloud Platform (GCP) for your OKD cluster to use. One way to create these components is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Copy the template from the Deployment Manager template for the internal load balancer
section of this topic and save it as 02_lb_int.py
on your computer. This
template describes the internal load balancing objects that your cluster
requires.
For an external cluster, also copy the template from the Deployment Manager template for the external load balancer
section of this topic and save it as 02_lb_ext.py
on your computer. This
template describes the external load balancing objects that your cluster
requires.
Export the variables that the deployment template uses:
Export the cluster network location:
$ export CLUSTER_NETWORK=(`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)
Export the control plane subnet location:
$ export CONTROL_SUBNET=(`gcloud compute networks subnets describe ${HOST_PROJECT_CONTROL_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)
Export the three zones that the cluster uses:
$ export ZONE_0=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`)
$ export ZONE_1=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`)
$ export ZONE_2=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`)
Create a 02_infra.yaml
resource definition file:
$ cat <<EOF >02_infra.yaml
imports:
- path: 02_lb_ext.py
- path: 02_lb_int.py (1)
resources:
- name: cluster-lb-ext (1)
type: 02_lb_ext.py
properties:
infra_id: '${INFRA_ID}' (2)
region: '${REGION}' (3)
- name: cluster-lb-int
type: 02_lb_int.py
properties:
cluster_network: '${CLUSTER_NETWORK}'
control_subnet: '${CONTROL_SUBNET}' (4)
infra_id: '${INFRA_ID}'
region: '${REGION}'
zones: (5)
- '${ZONE_0}'
- '${ZONE_1}'
- '${ZONE_2}'
EOF
1 | Required only when deploying an external cluster. |
2 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
3 | region is the region to deploy the cluster into, for example us-central1 . |
4 | control_subnet is the URI to the control subnet. |
5 | zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml
Export the cluster IP address:
$ export CLUSTER_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address`)
For an external cluster, also export the cluster public IP address:
$ export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`)
You can use the following Deployment Manager template to deploy the external load balancer that you need for your OKD cluster:
02_lb_ext.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-cluster-public-ip',
'type': 'compute.v1.address',
'properties': {
'region': context.properties['region']
}
}, {
# Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver
'name': context.properties['infra_id'] + '-api-http-health-check',
'type': 'compute.v1.httpHealthCheck',
'properties': {
'port': 6080,
'requestPath': '/readyz'
}
}, {
'name': context.properties['infra_id'] + '-api-target-pool',
'type': 'compute.v1.targetPool',
'properties': {
'region': context.properties['region'],
'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'],
'instances': []
}
}, {
'name': context.properties['infra_id'] + '-api-forwarding-rule',
'type': 'compute.v1.forwardingRule',
'properties': {
'region': context.properties['region'],
'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)',
'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)',
'portRange': '6443'
}
}]
return {'resources': resources}
You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OKD cluster:
02_lb_int.py
Deployment Manager template
def GenerateConfig(context):
backends = []
for zone in context.properties['zones']:
backends.append({
'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)'
})
resources = [{
'name': context.properties['infra_id'] + '-cluster-ip',
'type': 'compute.v1.address',
'properties': {
'addressType': 'INTERNAL',
'region': context.properties['region'],
'subnetwork': context.properties['control_subnet']
}
}, {
# Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver
'name': context.properties['infra_id'] + '-api-internal-health-check',
'type': 'compute.v1.healthCheck',
'properties': {
'httpsHealthCheck': {
'port': 6443,
'requestPath': '/readyz'
},
'type': "HTTPS"
}
}, {
'name': context.properties['infra_id'] + '-api-internal-backend-service',
'type': 'compute.v1.regionBackendService',
'properties': {
'backends': backends,
'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'],
'loadBalancingScheme': 'INTERNAL',
'region': context.properties['region'],
'protocol': 'TCP',
'timeoutSec': 120
}
}, {
'name': context.properties['infra_id'] + '-api-internal-forwarding-rule',
'type': 'compute.v1.forwardingRule',
'properties': {
'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)',
'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)',
'loadBalancingScheme': 'INTERNAL',
'ports': ['6443','22623'],
'region': context.properties['region'],
'subnetwork': context.properties['control_subnet']
}
}]
for zone in context.properties['zones']:
resources.append({
'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group',
'type': 'compute.v1.instanceGroup',
'properties': {
'namedPorts': [
{
'name': 'ignition',
'port': 22623
}, {
'name': 'https',
'port': 6443
}
],
'network': context.properties['cluster_network'],
'zone': zone
}
})
return {'resources': resources}
You will need this template in addition to the 02_lb_ext.py
template when you create an external cluster.
You must configure a private DNS zone in Google Cloud Platform (GCP) for your OKD cluster to use. One way to create this component is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Copy the template from the Deployment Manager template for the private DNS
section of this topic and save it as 02_dns.py
on your computer. This
template describes the private DNS objects that your cluster
requires.
Create a 02_dns.yaml
resource definition file:
$ cat <<EOF >02_dns.yaml
imports:
- path: 02_dns.py
resources:
- name: cluster-dns
type: 02_dns.py
properties:
infra_id: '${INFRA_ID}' (1)
cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' (2)
cluster_network: '${CLUSTER_NETWORK}' (3)
EOF
1 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
2 | cluster_domain is the domain for the cluster, for example openshift.example.com . |
3 | cluster_network is the selfLink URL to the cluster network. |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually:
Add the internal DNS entries:
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
For an external cluster, also add the external DNS entries:
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
You can use the following Deployment Manager template to deploy the private DNS that you need for your OKD cluster:
02_dns.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-private-zone',
'type': 'dns.v1.managedZone',
'properties': {
'description': '',
'dnsName': context.properties['cluster_domain'] + '.',
'visibility': 'private',
'privateVisibilityConfig': {
'networks': [{
'networkUrl': context.properties['cluster_network']
}]
}
}
}]
return {'resources': resources}
You must create firewall rules in Google Cloud Platform (GCP) for your OKD cluster to use. One way to create these components is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Copy the template from the
Deployment Manager template for firewall rules
section of this topic and save it as 03_firewall.py
on your computer. This
template describes the security groups that your cluster requires.
Create a 03_firewall.yaml
resource definition file:
$ cat <<EOF >03_firewall.yaml
imports:
- path: 03_firewall.py
resources:
- name: cluster-firewall
type: 03_firewall.py
properties:
allowed_external_cidr: '0.0.0.0/0' (1)
infra_id: '${INFRA_ID}' (2)
cluster_network: '${CLUSTER_NETWORK}' (3)
network_cidr: '${NETWORK_CIDR}' (4)
EOF
1 | allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to ${NETWORK_CIDR} . |
2 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
3 | cluster_network is the selfLink URL to the cluster network. |
4 | network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
You can use the following Deployment Manager template to deploy the firewall rules that you need for your OKD cluster:
03_firewall.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-bootstrap-in-ssh',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['22']
}],
'sourceRanges': [context.properties['allowed_external_cidr']],
'targetTags': [context.properties['infra_id'] + '-bootstrap']
}
}, {
'name': context.properties['infra_id'] + '-api',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['6443']
}],
'sourceRanges': [context.properties['allowed_external_cidr']],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
'name': context.properties['infra_id'] + '-health-checks',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['6080', '6443', '22624']
}],
'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
'name': context.properties['infra_id'] + '-etcd',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['2379-2380']
}],
'sourceTags': [context.properties['infra_id'] + '-master'],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
'name': context.properties['infra_id'] + '-control-plane',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['10257']
},{
'IPProtocol': 'tcp',
'ports': ['10259']
},{
'IPProtocol': 'tcp',
'ports': ['22623']
}],
'sourceTags': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-worker'
],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
'name': context.properties['infra_id'] + '-internal-network',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'icmp'
},{
'IPProtocol': 'tcp',
'ports': ['22']
}],
'sourceRanges': [context.properties['network_cidr']],
'targetTags': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-worker'
]
}
}, {
'name': context.properties['infra_id'] + '-internal-cluster',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'udp',
'ports': ['4789', '6081']
},{
'IPProtocol': 'tcp',
'ports': ['9000-9999']
},{
'IPProtocol': 'udp',
'ports': ['9000-9999']
},{
'IPProtocol': 'tcp',
'ports': ['10250']
},{
'IPProtocol': 'tcp',
'ports': ['30000-32767']
},{
'IPProtocol': 'udp',
'ports': ['30000-32767']
}],
'sourceTags': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-worker'
],
'targetTags': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-worker'
]
}
}]
return {'resources': resources}
You must create IAM roles in Google Cloud Platform (GCP) for your OKD cluster to use. One way to create these components is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Copy the template from the
Deployment Manager template for IAM roles
section of this topic and save it as 03_iam.py
on your computer. This
template describes the IAM roles that your cluster requires.
Create a 03_iam.yaml
resource definition file:
$ cat <<EOF >03_iam.yaml
imports:
- path: 03_iam.py
resources:
- name: cluster-iam
type: 03_iam.py
properties:
infra_id: '${INFRA_ID}' (1)
EOF
1 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml
Export the variable for the master service account:
$ export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-m@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
Export the variable for the worker service account:
$ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets:
Grant the networkViewer
role of the project that hosts your shared VPC to the master service account:
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} projects add-iam-policy-binding ${HOST_PROJECT} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer"
Grant the networkUser
role to the master service account for the control plane subnet:
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION}
Grant the networkUser
role to the worker service account for the control plane subnet:
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION}
Grant the networkUser
role to the master service account for the compute subnet:
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION}
Grant the networkUser
role to the worker service account for the compute subnet:
$ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION}
The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually:
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer"
$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
Create a service account key and store it locally for later use:
$ gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}
You can use the following Deployment Manager template to deploy the IAM roles that you need for your OKD cluster:
03_iam.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-master-node-sa',
'type': 'iam.v1.serviceAccount',
'properties': {
'accountId': context.properties['infra_id'] + '-m',
'displayName': context.properties['infra_id'] + '-master-node'
}
}, {
'name': context.properties['infra_id'] + '-worker-node-sa',
'type': 'iam.v1.serviceAccount',
'properties': {
'accountId': context.properties['infra_id'] + '-w',
'displayName': context.properties['infra_id'] + '-worker-node'
}
}]
return {'resources': resources}
You must use a valid Fedora CoreOS (FCOS) image for Google Cloud Platform (GCP) for your OKD nodes.
Obtain the FCOS image from the FCOS Downloads page.
Create the Google storage bucket:
$ gsutil mb gs://<bucket_name>
Upload the FCOS image to the Google storage bucket:
$ gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>
Export the uploaded FCOS image location as a variable:
$ export IMAGE_SOURCE="gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz"
Create the cluster image:
$ gcloud compute images create "${INFRA_ID}-rhcos-image" \
--source-uri="${IMAGE_SOURCE}"
You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OKD cluster initialization. One way to create this machine is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Ensure pyOpenSSL is installed.
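The gsutil signurl command that you use later in this procedure to create a signed URL depends on pyOpenSSL. If it is not already present, you can typically install it with pip, for example:

$ python3 -m pip install --user pyopenssl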
Copy the template from the Deployment Manager template for the bootstrap machine
section of this topic and save it as 04_bootstrap.py
on your computer. This
template describes the bootstrap machine that your cluster requires.
Export the location of the Fedora CoreOS (FCOS) image that the installation program requires:
$ export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)
Create a bucket and upload the bootstrap.ign
file:
$ gsutil mb gs://${INFRA_ID}-bootstrap-ignition
$ gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable:
$ export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'`
Create a 04_bootstrap.yaml
resource definition file:
$ cat <<EOF >04_bootstrap.yaml
imports:
- path: 04_bootstrap.py
resources:
- name: cluster-bootstrap
type: 04_bootstrap.py
properties:
infra_id: '${INFRA_ID}' (1)
region: '${REGION}' (2)
zone: '${ZONE_0}' (3)
cluster_network: '${CLUSTER_NETWORK}' (4)
control_subnet: '${CONTROL_SUBNET}' (5)
image: '${CLUSTER_IMAGE}' (6)
machine_type: 'n1-standard-4' (7)
root_volume_size: '128' (8)
bootstrap_ign: '${BOOTSTRAP_IGN}' (9)
EOF
1 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
2 | region is the region to deploy the cluster into, for example us-central1 . |
3 | zone is the zone to deploy the bootstrap instance into, for example us-central1-b . |
4 | cluster_network is the selfLink URL to the cluster network. |
5 | control_subnet is the selfLink URL to the control subnet. |
6 | image is the selfLink URL to the FCOS image. |
7 | machine_type is the machine type of the instance, for example n1-standard-4 . |
8 | root_volume_size is the boot disk size for the bootstrap machine. |
9 | bootstrap_ign is the URL output when creating a signed URL. |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
Add the bootstrap instance to the internal load balancer instance group:
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
Add the bootstrap instance group to the internal load balancer backend service:
$ gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OKD cluster:
04_bootstrap.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-bootstrap-public-ip',
'type': 'compute.v1.address',
'properties': {
'region': context.properties['region']
}
}, {
'name': context.properties['infra_id'] + '-bootstrap',
'type': 'compute.v1.instance',
'properties': {
'disks': [{
'autoDelete': True,
'boot': True,
'initializeParams': {
'diskSizeGb': context.properties['root_volume_size'],
'sourceImage': context.properties['image']
}
}],
'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
'metadata': {
'items': [{
'key': 'user-data',
'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.1.0"}}',
}]
},
'networkInterfaces': [{
'subnetwork': context.properties['control_subnet'],
'accessConfigs': [{
'natIP': '$(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)'
}]
}],
'tags': {
'items': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-bootstrap'
]
},
'zone': context.properties['zone']
}
}, {
'name': context.properties['infra_id'] + '-bootstrap-instance-group',
'type': 'compute.v1.instanceGroup',
'properties': {
'namedPorts': [
{
'name': 'ignition',
'port': 22623
}, {
'name': 'https',
'port': 6443
}
],
'network': context.properties['cluster_network'],
'zone': context.properties['zone']
}
}]
return {'resources': resources}
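Because a Deployment Manager template is plain Python, you can sanity-check it locally before you deploy it. The following sketch loads 04_bootstrap.py from the current directory, calls GenerateConfig with a stand-in context object, and prints the resource names it would create; all property values here are placeholders rather than real selfLink URLs.

#!/usr/bin/env python3
# Minimal sketch: render 04_bootstrap.py locally with a fake context to
# inspect the resources it would create. All property values are placeholders.
import importlib.util
import json

class FakeContext:
    """Mimics the 'context' object that Deployment Manager passes to templates."""
    def __init__(self, properties):
        self.properties = properties
        self.env = {"name": "cluster-bootstrap"}

spec = importlib.util.spec_from_file_location("bootstrap_tpl", "04_bootstrap.py")
tpl = importlib.util.module_from_spec(spec)
spec.loader.exec_module(tpl)

config = tpl.GenerateConfig(FakeContext({
    "infra_id": "example-cluster-abcde",
    "region": "us-central1",
    "zone": "us-central1-b",
    "cluster_network": "<cluster_network_selfLink>",
    "control_subnet": "<control_subnet_selfLink>",
    "image": "<image_selfLink>",
    "machine_type": "n1-standard-4",
    "root_volume_size": "128",
    "bootstrap_ign": "<signed_bootstrap_ignition_url>",
}))

print(json.dumps([r["name"] for r in config["resources"]], indent=2))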
You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template.
If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires.
Export the following variable required by the resource definition:
$ export MASTER_IGNITION=`cat <installation_directory>/master.ign`
Create a 05_control_plane.yaml
resource definition file:
$ cat <<EOF >05_control_plane.yaml
imports:
- path: 05_control_plane.py
resources:
- name: cluster-control-plane
type: 05_control_plane.py
properties:
infra_id: '${INFRA_ID}' (1)
zones: (2)
- '${ZONE_0}'
- '${ZONE_1}'
- '${ZONE_2}'
control_subnet: '${CONTROL_SUBNET}' (3)
image: '${CLUSTER_IMAGE}' (4)
machine_type: 'n1-standard-4' (5)
root_volume_size: '128'
service_account_email: '${MASTER_SERVICE_ACCOUNT}' (6)
ignition: '${MASTER_IGNITION}' (7)
EOF
1 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
2 | zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . |
3 | control_subnet is the selfLink URL to the control subnet. |
4 | image is the selfLink URL to the FCOS image. |
5 | machine_type is the machine type of the instance, for example n1-standard-4 . |
6 | service_account_email is the email address for the master service account that you created. |
7 | ignition is the contents of the master.ign file. |
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml
The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually.
Run the following commands to add the control plane machines to the appropriate instance groups:
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-master-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-master-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-master-2
For an external cluster, you must also run the following commands to add the control plane machines to the target pools:
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-master-0
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-master-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-master-2
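Because the three add-instances commands, and the three target-pool commands for an external cluster, differ only in the zone and the instance index, you can also run them from a short loop. The following sketch shells out to the same gcloud commands; the INFRA_ID and zone values are placeholders for the variables you already exported, and gcloud is assumed to be authenticated.

#!/usr/bin/env python3
# Minimal sketch: add the three control plane machines to their per-zone
# instance groups and, for an external cluster, to the API target pool.
# Assumes gcloud is authenticated; names mirror the exported variables.
import subprocess

INFRA_ID = "example-cluster-abcde"                            # placeholder INFRA_ID
ZONES = ["us-central1-a", "us-central1-b", "us-central1-c"]   # ZONE_0..ZONE_2
EXTERNAL_CLUSTER = True

for index, zone in enumerate(ZONES):
    instance = f"{INFRA_ID}-master-{index}"
    subprocess.run([
        "gcloud", "compute", "instance-groups", "unmanaged", "add-instances",
        f"{INFRA_ID}-master-{zone}-instance-group",
        f"--zone={zone}", f"--instances={instance}"], check=True)
    if EXTERNAL_CLUSTER:
        subprocess.run([
            "gcloud", "compute", "target-pools", "add-instances",
            f"{INFRA_ID}-api-target-pool",
            f"--instances-zone={zone}", f"--instances={instance}"], check=True)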
You can use the following Deployment Manager template to deploy the control plane machines that you need for your OKD cluster:
05_control_plane.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-master-0',
'type': 'compute.v1.instance',
'properties': {
'disks': [{
'autoDelete': True,
'boot': True,
'initializeParams': {
'diskSizeGb': context.properties['root_volume_size'],
'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd',
'sourceImage': context.properties['image']
}
}],
'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'],
'metadata': {
'items': [{
'key': 'user-data',
'value': context.properties['ignition']
}]
},
'networkInterfaces': [{
'subnetwork': context.properties['control_subnet']
}],
'serviceAccounts': [{
'email': context.properties['service_account_email'],
'scopes': ['https://www.googleapis.com/auth/cloud-platform']
}],
'tags': {
'items': [
context.properties['infra_id'] + '-master',
]
},
'zone': context.properties['zones'][0]
}
}, {
'name': context.properties['infra_id'] + '-master-1',
'type': 'compute.v1.instance',
'properties': {
'disks': [{
'autoDelete': True,
'boot': True,
'initializeParams': {
'diskSizeGb': context.properties['root_volume_size'],
'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd',
'sourceImage': context.properties['image']
}
}],
'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'],
'metadata': {
'items': [{
'key': 'user-data',
'value': context.properties['ignition']
}]
},
'networkInterfaces': [{
'subnetwork': context.properties['control_subnet']
}],
'serviceAccounts': [{
'email': context.properties['service_account_email'],
'scopes': ['https://www.googleapis.com/auth/cloud-platform']
}],
'tags': {
'items': [
context.properties['infra_id'] + '-master',
]
},
'zone': context.properties['zones'][1]
}
}, {
'name': context.properties['infra_id'] + '-master-2',
'type': 'compute.v1.instance',
'properties': {
'disks': [{
'autoDelete': True,
'boot': True,
'initializeParams': {
'diskSizeGb': context.properties['root_volume_size'],
'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd',
'sourceImage': context.properties['image']
}
}],
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'],
'metadata': {
'items': [{
'key': 'user-data',
'value': context.properties['ignition']
}]
},
'networkInterfaces': [{
'subnetwork': context.properties['control_subnet']
}],
'serviceAccounts': [{
'email': context.properties['service_account_email'],
'scopes': ['https://www.googleapis.com/auth/cloud-platform']
}],
'tags': {
'items': [
context.properties['infra_id'] + '-master',
]
},
'zone': context.properties['zones'][2]
}
}]
return {'resources': resources}
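The three master resources above are identical apart from the zone and the index, so the template can also be written as a loop. The following variant is a sketch that generates the same resources from the same properties; verify its rendered output before using it in place of the original template.

def GenerateConfig(context):
    # Loop-based variant of 05_control_plane.py: one resource per zone,
    # otherwise identical to the three hand-written master blocks above.
    resources = []
    for index, zone in enumerate(context.properties['zones']):
        resources.append({
            'name': context.properties['infra_id'] + '-master-' + str(index),
            'type': 'compute.v1.instance',
            'properties': {
                'disks': [{
                    'autoDelete': True,
                    'boot': True,
                    'initializeParams': {
                        'diskSizeGb': context.properties['root_volume_size'],
                        'diskType': 'zones/' + zone + '/diskTypes/pd-ssd',
                        'sourceImage': context.properties['image']
                    }
                }],
                'machineType': 'zones/' + zone + '/machineTypes/' + context.properties['machine_type'],
                'metadata': {
                    'items': [{
                        'key': 'user-data',
                        'value': context.properties['ignition']
                    }]
                },
                'networkInterfaces': [{
                    'subnetwork': context.properties['control_subnet']
                }],
                'serviceAccounts': [{
                    'email': context.properties['service_account_email'],
                    'scopes': ['https://www.googleapis.com/auth/cloud-platform']
                }],
                'tags': {
                    'items': [context.properties['infra_id'] + '-master']
                },
                'zone': zone
            }
        })
    return {'resources': resources}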
After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ (1)
--log-level info (2)
1 | For <installation_directory> , specify the path to the directory that you
stored the installation files in. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the command exits without a FATAL warning, your production control plane has initialized.
Delete the bootstrap resources:
$ gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
$ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
$ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
$ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OKD.
In this example, you manually launch two instances by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file.
If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires.
Export the variables that the resource definition uses.
Export the subnet that hosts the compute machines:
$ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${HOST_PROJECT_COMPUTE_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)
Export the email address for your service account:
$ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
Export the location of the compute machine Ignition config file:
$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign`
Create a 06_worker.yaml
resource definition file:
$ cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py
resources:
- name: 'worker-0' (1)
type: 06_worker.py
properties:
infra_id: '${INFRA_ID}' (2)
zone: '${ZONE_0}' (3)
compute_subnet: '${COMPUTE_SUBNET}' (4)
image: '${CLUSTER_IMAGE}' (5)
machine_type: 'n1-standard-4' (6)
root_volume_size: '128'
service_account_email: '${WORKER_SERVICE_ACCOUNT}' (7)
ignition: '${WORKER_IGNITION}' (8)
- name: 'worker-1'
type: 06_worker.py
properties:
infra_id: '${INFRA_ID}' (2)
zone: '${ZONE_1}' (3)
compute_subnet: '${COMPUTE_SUBNET}' (4)
image: '${CLUSTER_IMAGE}' (5)
machine_type: 'n1-standard-4' (6)
root_volume_size: '128'
service_account_email: '${WORKER_SERVICE_ACCOUNT}' (7)
ignition: '${WORKER_IGNITION}' (8)
EOF
1 | name is the name of the worker machine, for example worker-0 . |
2 | infra_id is the INFRA_ID infrastructure name from the extraction step. |
3 | zone is the zone to deploy the worker machine into, for example us-central1-a . |
4 | compute_subnet is the selfLink URL to the compute subnet. |
5 | image is the selfLink URL to the FCOS image. |
6 | machine_type is the machine type of the instance, for example n1-standard-4 . |
7 | service_account_email is the email address for the worker service account that you created. |
8 | ignition is the contents of the worker.ign file. |
Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file.
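If you want more than a handful of workers, you can generate the resource definition file instead of editing it by hand. The following sketch writes a 06_worker.yaml that repeats the same properties for a chosen number of workers, cycling them across your zones; the INFRA_ID, subnet, image, and service account values are placeholders for the variables you exported earlier, and worker.ign is assumed to be in the current directory.

#!/usr/bin/env python3
# Minimal sketch: generate 06_worker.yaml with one resource per worker,
# cycling the workers across the available zones. All values below are
# placeholders for the environment variables exported in this procedure.
WORKER_COUNT = 3
INFRA_ID = "example-cluster-abcde"
ZONES = ["us-central1-a", "us-central1-b", "us-central1-c"]
COMPUTE_SUBNET = "<compute_subnet_selfLink>"
CLUSTER_IMAGE = "<image_selfLink>"
WORKER_SERVICE_ACCOUNT = "<worker_service_account_email>"
WORKER_IGNITION = open("worker.ign").read().strip()   # contents of worker.ign

lines = ["imports:", "- path: 06_worker.py", "resources:"]
for index in range(WORKER_COUNT):
    lines += [
        f"- name: 'worker-{index}'",
        "  type: 06_worker.py",
        "  properties:",
        f"    infra_id: '{INFRA_ID}'",
        f"    zone: '{ZONES[index % len(ZONES)]}'",
        f"    compute_subnet: '{COMPUTE_SUBNET}'",
        f"    image: '{CLUSTER_IMAGE}'",
        "    machine_type: 'n1-standard-4'",
        "    root_volume_size: '128'",
        f"    service_account_email: '{WORKER_SERVICE_ACCOUNT}'",
        f"    ignition: '{WORKER_IGNITION}'",
    ]

with open("06_worker.yaml", "w") as f:
    f.write("\n".join(lines) + "\n")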
Create the deployment by using the gcloud
CLI:
$ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
You can use the following Deployment Manager template to deploy the worker machines that you need for your OKD cluster:
06_worker.py
Deployment Manager template
def GenerateConfig(context):
resources = [{
'name': context.properties['infra_id'] + '-' + context.env['name'],
'type': 'compute.v1.instance',
'properties': {
'disks': [{
'autoDelete': True,
'boot': True,
'initializeParams': {
'diskSizeGb': context.properties['root_volume_size'],
'sourceImage': context.properties['image']
}
}],
'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
'metadata': {
'items': [{
'key': 'user-data',
'value': context.properties['ignition']
}]
},
'networkInterfaces': [{
'subnetwork': context.properties['compute_subnet']
}],
'serviceAccounts': [{
'email': context.properties['service_account_email'],
'scopes': ['https://www.googleapis.com/auth/cloud-platform']
}],
'tags': {
'items': [
context.properties['infra_id'] + '-worker',
]
},
'zone': context.properties['zone']
}
}]
return {'resources': resources}
You can install the OpenShift CLI (oc
) in order to interact with OKD from a
command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack the archive:
$ tar xvzf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip
.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack and extract the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
You added machines to your cluster.
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master 63m v1.19.0
master-1 Ready master 63m v1.19.0
master-2 Ready master 64m v1.19.0
The output lists all of the machines that you created.
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. |
Review the pending CSRs and ensure that you see the client requests with the Pending
or Approved
status for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, then after all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the |
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the |
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Some Operators might not become available until some CSRs are approved. |
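The go-template one-liner above selects every CSR that has no status, that is, no approval decision yet, and pipes the names to oc adm certificate approve. The same logic expressed in Python against oc's JSON output looks like the following sketch; it assumes oc is logged in with cluster-admin privileges.

#!/usr/bin/env python3
# Minimal sketch: approve every CSR that does not yet have a status,
# mirroring the go-template/xargs one-liner above. Assumes `oc` is logged in.
import json
import subprocess

csrs = json.loads(subprocess.run(
    ["oc", "get", "csr", "-o", "json"],
    check=True, capture_output=True, text=True).stdout)

pending = [item["metadata"]["name"] for item in csrs["items"]
           if not item.get("status")]          # no status means not yet decided

for name in pending:
    subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)

print(f"approved {len(pending)} CSRs")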
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
...
If the remaining CSRs are not approved, and are in the Pending
status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready
status. Verify this by running the following command:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master 73m v1.20.0
master-1 Ready master 73m v1.20.0
master-2 Ready master 74m v1.20.0
worker-0 Ready worker 11m v1.20.0
worker-1 Ready worker 11m v1.20.0
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. |
For more information on CSRs, see Certificate Signing Requests.
DNS zone configuration is removed when creating Kubernetes manifests and generating Ignition configs. You must manually create DNS records that point at the ingress load balancer. You can create either a wildcard
*.apps.{baseDomain}.
or specific records. You can use A, CNAME, and other records per your requirements.
Configure a GCP account.
Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Create the worker machines.
Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP
field:
$ oc -n openshift-ingress get service router-default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98
Add the A record to your zones:
To use A records:
Export the variable for the router IP address:
$ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
Add the A record to the private zones:
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
For an external cluster, also add the A record to the public zones:
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
To add explicit domains instead of using a wildcard, create entries for each of the cluster’s current routes:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
grafana-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
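As a sketch of how those per-route entries could be created, the following loop reads the route hosts shown above and adds one A record per host to the private zone, using the same gcloud dns transaction flow as the earlier commands. The router IP, zone name, project, and account values are placeholders for the variables you already exported; for an external cluster you would repeat the loop against the public zone.

#!/usr/bin/env python3
# Minimal sketch: create one A record per cluster route instead of a wildcard
# record. Assumes `oc` and `gcloud` are authenticated and that no
# transaction.yaml file is left over from a previous transaction.
import json
import subprocess

ROUTER_IP = "35.233.157.184"                    # EXTERNAL-IP of router-default
ZONE = "example-cluster-abcde-private-zone"     # ${INFRA_ID}-private-zone
PROJECT = "example-host-project"                # ${HOST_PROJECT}
ACCOUNT = "example-account@example.com"         # ${HOST_PROJECT_ACCOUNT}

def dns(*args):
    subprocess.run(["gcloud", "dns", "record-sets", "transaction", *args,
                    "--zone", ZONE, "--project", PROJECT,
                    "--account", ACCOUNT], check=True)

routes = json.loads(subprocess.run(
    ["oc", "get", "routes", "--all-namespaces", "-o", "json"],
    check=True, capture_output=True, text=True).stdout)

hosts = sorted({ingress["host"]
                for item in routes["items"]
                for ingress in item.get("status", {}).get("ingress", [])})

dns("start")
for host in hosts:
    dns("add", ROUTER_IP, "--name", host + ".", "--ttl", "300", "--type", "A")
dns("execute")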
The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the ingress controller via the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters.
If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required:
$ oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange"
Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
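If you follow the event-driven approach, a small helper can list those LoadBalancerManualChange events and print the firewall command that each one suggests, so that you can review it before running it. The following sketch assumes oc is logged in; it only prints the suggested commands and does not execute them.

#!/usr/bin/env python3
# Minimal sketch: list the firewall rule commands suggested by
# LoadBalancerManualChange events in the openshift-ingress namespace.
# Assumes `oc` is logged in; the commands are printed, not executed.
import json
import subprocess

events = json.loads(subprocess.run(
    ["oc", "get", "events", "-n", "openshift-ingress",
     "--field-selector=reason=LoadBalancerManualChange", "-o", "json"],
    check=True, capture_output=True, text=True).stdout)

for event in events["items"]:
    # The suggested `gcloud compute firewall-rules create ...` command is
    # embedded in the event message; print it for review before running it.
    print(event.get("message", ""))
    print("-" * 40)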
If you encounter issues when creating these rules based on events, you can configure the cluster-wide firewall rules while your cluster is running.
You can create cluster-wide firewall rules to allow the access that the OKD cluster requires.
If you do not choose to create firewall rules based on cluster events, you must create cluster-wide firewall rules. |
You exported the variables that the Deployment Manager templates require to deploy your cluster.
You created the networking and load balancing components in GCP that your cluster requires.
Add a single firewall rule to allow Google Cloud health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances.
$ gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="${CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress-hc --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
Add a single firewall rule to allow access to all cluster services:
For an external cluster:
$ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
For a private cluster:
$ gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="${CLUSTER_NETWORK}" --source-ranges=${NETWORK_CIDR} --target-tags="${INFRA_ID}-master,${INFRA_ID}-worker" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}
Because this rule only allows traffic on TCP ports 80
and 443
, ensure that you add all the ports that your services use.
After you start the OKD installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.
Deploy the bootstrap machine for an OKD cluster on user-provisioned GCP infrastructure.
Install the oc
CLI and log in.
Complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete (1)
INFO Waiting up to 30m0s for the cluster to initialize...
1 | For <installation_directory> , specify the path to the directory that you
stored the installation files in. |
Observe the running state of your cluster.
Run the following command to view the current cluster version and status:
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version False True 24m Working towards 4.5.4: 99% complete
Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO):
$ oc get clusteroperators
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.5.4 True False False 7m56s
cloud-credential 4.5.4 True False False 31m
cluster-autoscaler 4.5.4 True False False 16m
console 4.5.4 True False False 10m
csi-snapshot-controller 4.5.4 True False False 16m
dns 4.5.4 True False False 22m
etcd 4.5.4 False False False 25s
image-registry 4.5.4 True False False 16m
ingress 4.5.4 True False False 16m
insights 4.5.4 True False False 17m
kube-apiserver 4.5.4 True False False 19m
kube-controller-manager 4.5.4 True False False 20m
kube-scheduler 4.5.4 True False False 20m
kube-storage-version-migrator 4.5.4 True False False 16m
machine-api 4.5.4 True False False 22m
machine-config 4.5.4 True False False 22m
marketplace 4.5.4 True False False 16m
monitoring 4.5.4 True False False 10m
network 4.5.4 True False False 23m
node-tuning 4.5.4 True False False 23m
openshift-apiserver 4.5.4 True False False 17m
openshift-controller-manager 4.5.4 True False False 15m
openshift-samples 4.5.4 True False False 16m
operator-lifecycle-manager 4.5.4 True False False 22m
operator-lifecycle-manager-catalog 4.5.4 True False False 22m
operator-lifecycle-manager-packageserver 4.5.4 True False False 18m
service-ca 4.5.4 True False False 23m
service-catalog-apiserver 4.5.4 True False False 23m
service-catalog-controller-manager 4.5.4 True False False 23m
storage 4.5.4 True False False 17m
Run the following command to view your cluster pods:
$ oc get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m
kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m
kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m
openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m
openshift-apiserver apiserver-fm48r 1/1 Running 0 30m
openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m
openshift-apiserver apiserver-q85nm 1/1 Running 0 29m
...
openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m
openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m
openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m
openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m
openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m
openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m
When the current cluster version is AVAILABLE
, the installation is complete.
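If you want to script the final wait instead of re-running oc get clusterversion by hand, the following sketch polls the ClusterVersion object until its Available condition reports True. It assumes oc is logged in and gives up after 30 minutes.

#!/usr/bin/env python3
# Minimal sketch: poll the ClusterVersion object until the Available
# condition is True. Assumes `oc` is logged in; gives up after 30 minutes.
import json
import subprocess
import time

DEADLINE = time.time() + 30 * 60

while time.time() < DEADLINE:
    cv = json.loads(subprocess.run(
        ["oc", "get", "clusterversion", "version", "-o", "json"],
        check=True, capture_output=True, text=True).stdout)
    conditions = {c["type"]: c["status"]
                  for c in cv.get("status", {}).get("conditions", [])}
    if conditions.get("Available") == "True":
        print("cluster version is Available; installation is complete")
        break
    print("still progressing:", conditions.get("Progressing"), "- waiting 60s")
    time.sleep(60)
else:
    raise SystemExit("timed out waiting for the cluster version to become Available")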
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.