In OKD version 4.16, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If you use customer-managed encryption keys, you prepared your Azure environment for encryption.
You can deploy a private OKD cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OKD is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. |
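For illustration, the standard Kubernetes Azure cloud-provider annotation for requesting an internal load balancer is shown in the following sketch; the Service name, selector, and ports are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service (1)
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true" (2)
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
1 | A hypothetical Service name. |
2 | This annotation instructs the Azure cloud provider to provision an internal load balancer instead of a public one. |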
To deploy a private cluster, you must:
Use existing networking that meets your requirements. Your cluster resources might be shared with other clusters on the network.
Deploy from a machine that has access to:
The API services for the cloud to which you provision.
The hosts on the network that you provision.
The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16
internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.
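For example, after you configure forwarding, you can verify from your network that the cluster’s private records resolve through the forwarder. In the following sketch, the host name and the forwarder VM IP address 10.0.0.4 are placeholders:
$ dig +short api.mycluster.example.com @10.0.0.4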
The cluster still requires internet access to reach the Azure APIs.
The following items are not required or created when you install a private cluster:
A BaseDomainResourceGroup
, since the cluster does not create public records
Public IP addresses
Public DNS records
Public endpoints
The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
In OKD, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer.
You can configure user-defined routing by modifying parameters in the install-config.yaml
file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
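For reference, the relevant install-config.yaml stanza resembles the following excerpt; the resource group, VNet, and subnet names are placeholders, and the complete sample file later in this section shows these parameters in context:
platform:
  azure:
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: UserDefinedRouting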
When configuring a cluster to use user-defined routing, the installation program does not create the following resources:
Outbound rules for access to the internet.
Public IPs for the public load balancer.
Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.
You must ensure the following items are available before setting user-defined routing:
Egress to the internet to pull container images is possible, unless you are using an OpenShift image registry mirror.
The cluster can access Azure APIs.
Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.
There are several pre-existing networking setups that are supported for internet access using user-defined routing.
You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.
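As a sketch of what that setup involves, the following Azure CLI commands create a NAT gateway and attach it to an existing subnet. The resource names are placeholders; see the linked Azure documentation for the authoritative procedure:
$ az network public-ip create --resource-group my-rg --name my-nat-ip \
    --sku Standard --allocation-method Static
$ az network nat gateway create --resource-group my-rg --name my-nat-gateway \
    --public-ip-addresses my-nat-ip --idle-timeout 10
$ az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name my-subnet --nat-gateway my-nat-gateway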
When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.
You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.
When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.
You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.
When using the default route table for subnets, with 0.0.0.0/0
populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.
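For example, an outbound rule that permits HTTPS traffic to the Azure APIs by using the AzureCloud service tag might look like the following Azure CLI sketch; the resource names and priority are placeholders:
$ az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
    --name allow-azure-apis --priority 100 --direction Outbound --access Allow \
    --protocol Tcp --destination-address-prefixes AzureCloud --destination-port-ranges 443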
You can install a private cluster on a network that restricts all access to the internet, except for the Azure APIs. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:
An OpenShift image registry mirror that allows for pulling container images
Access to Azure APIs
With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
In OKD 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OKD into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
Subnets
Route tables
VNets
Network Security Groups
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. |
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches network interfaces (NICs) for the virtual machines that it creates to subnets from the networking resource group.
Your VNet must meet the following characteristics:
The VNet’s CIDR block must contain the Networking.MachineCIDR
range, which is the IP address pool for cluster machines.
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region. |
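If you still need to create the VNet and subnets, a minimal Azure CLI sketch follows. The names and CIDR ranges are placeholders; the subnet ranges must fall within the machine CIDR described above:
$ az network vnet create --resource-group my-rg --name my-vnet \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name control-plane-subnet --subnet-prefixes 10.0.0.0/24
$ az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
    --name compute-subnet --address-prefixes 10.0.1.0/24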
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
All the specified subnets exist.
There are two private subnets, one for the control plane machines and one for the compute machines.
The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VNet, the VNet is not deleted. |
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. |
Port | Description | Control plane | Compute |
---|---|---|---|
80 | Allows HTTP traffic | | x |
443 | Allows HTTPS traffic | | x |
6443 | Allows communication to the control plane machines | x | |
22623 | Allows internal communication to the machine config server for provisioning machines | x | |
If you are using Azure Firewall to restrict internet access, then you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed.
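For example, a rule that allows API traffic on port 6443 from within the VNet might look like the following Azure CLI sketch; the resource names, priority, and source prefix are placeholders:
$ az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
    --name allow-openshift-api --priority 200 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes VirtualNetwork --destination-port-ranges 6443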
Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. |
Because cluster components must not modify the user-provided network security groups, which the Kubernetes controllers would otherwise update, a pseudo network security group is created for the Kubernetes controllers to modify without impacting the rest of the environment.
Ports used for all-machine to all-machine communications:
Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 1936 | Metrics |
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
 | 10250-10259 | The default ports that Kubernetes reserves |
UDP | 4789 | VXLAN |
 | 6081 | Geneve |
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
 | 500 | IPsec IKE packets |
 | 4500 | IPsec NAT-T packets |
 | 123 | Network Time Protocol (NTP) on UDP port 123. If you configure an external NTP time server, you must open UDP port 123. |
TCP/UDP | 30000-32767 | Kubernetes node port |
ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
Ports used for control plane machine to control plane machine communications:
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server and peer ports |
Starting with OKD 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
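For example, to collect debugging data from a failed bootstrap with the key in place, you would run a command of the following form, where the directory is the one you used for the installation:
$ ./openshift-install gather bootstrap --dir <installation_directory>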
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OKD cluster that uses the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
Before you install OKD, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation.
If you do not use the pull secret from Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
Installing the cluster requires that you manually create the installation configuration file.
You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OKD installation program and the pull secret for your cluster.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version. |
Customize the sample install-config.yaml
file template that is provided and save
it in the <installation_directory>
.
You must name this configuration file install-config.yaml. |
Back up the install-config.yaml
file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2] |
---|---|---|---|---|---|
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | FCOS | 4 | 16 GB | 100 GB | 300 |
Compute | FCOS | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
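For example, with SMT enabled, a single-socket VM with 8 cores provides (2 threads per core × 8 cores) × 1 socket = 16 vCPUs.
One common way to check whether a disk meets the 10 ms p99 fsync target is the sequential-write test recommended by the upstream etcd project, assuming the fio utility is installed; the directory and sizes below are illustrative:
$ fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-fsync-test
In the fio output, check the fdatasync duration percentiles; the 99th percentile should be 10 ms or less.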
As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of Fedora 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
As of OKD version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
x86-64-v2 ISA for the x86_64 architecture
ARMv8.0-A ISA for the 64-bit ARM (aarch64) architecture
Power 9 ISA for the IBM Power (ppc64le) architecture
z14 ISA for the IBM Z (s390x) architecture
For more information, see RHEL Architectures. |
You are required to use Azure virtual machines that have the premiumIO parameter set to true. |
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OKD.
The following Microsoft Azure instance types have been tested with OKD.
standardBasv2Family
standardBSFamily
standardBsv2Family
standardDADSv5Family
standardDASv4Family
standardDASv5Family
standardDCACCV5Family
standardDCADCCV5Family
standardDCADSv5Family
standardDCASv5Family
standardDCSv3Family
standardDCSv2Family
standardDDCSv3Family
standardDDSv4Family
standardDDSv5Family
standardDLDSv5Family
standardDLSv5Family
standardDSFamily
standardDSv2Family
standardDSv2PromoFamily
standardDSv3Family
standardDSv4Family
standardDSv5Family
standardEADSv5Family
standardEASv4Family
standardEASv5Family
standardEBDSv5Family
standardEBSv5Family
standardECACCV5Family
standardECADCCV5Family
standardECADSv5Family
standardECASv5Family
standardEDSv4Family
standardEDSv5Family
standardEIADSv5Family
standardEIASv4Family
standardEIASv5Family
standardEIBDSv5Family
standardEIBSv5Family
standardEIDSv5Family
standardEISv3Family
standardEISv5Family
standardESv3Family
standardESv4Family
standardESv5Family
standardFXMDVSFamily
standardFSFamily
standardFSv2Family
standardGSFamily
standardHBrsv2Family
standardHBSFamily
standardHBv4Family
standardHCSFamily
standardHXFamily
standardLASv3Family
standardLSFamily
standardLSv2Family
standardLSv3Family
standardMDSHighMemoryv3Family
standardMDSMediumMemoryv2Family
standardMDSMediumMemoryv3Family
standardMIDSHighMemoryv3Family
standardMIDSMediumMemoryv2Family
standardMISHighMemoryv3Family
standardMISMediumMemoryv2Family
standardMSFamily
standardMSHighMemoryv3Family
standardMSMediumMemoryv2Family
standardMSMediumMemoryv3Family
StandardNCADSA100v4Family
Standard NCASv3_T4 Family
standardNCSv3Family
standardNDSv2Family
StandardNGADSV620v1Family
standardNPSFamily
StandardNVADSA10v5Family
standardNVSv3Family
standardXEISv4Family
The following Microsoft Azure ARM64 instance types have been tested with OKD.
standardBpsv2Family
standardDPSv5Family
standardDPDSv5Family
standardDPLDSv5Family
standardDPLSv5Family
standardEPSv5Family
standardEPDSv5Family
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features.
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You have created an install-config.yaml
file.
Use a text editor to edit the install-config.yaml
file prior to deploying your cluster and add the following stanza:
controlPlane: (1)
platform:
azure:
settings:
securityType: TrustedLaunch (2)
trustedLaunch:
uefiSettings:
secureBoot: Enabled (3)
virtualizedTrustedPlatformModule: Enabled (4)
1 | Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. |
2 | Enable trusted launch features. |
3 | Enable secure boot. For more information, see the Azure documentation about secure boot. |
4 | Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules. |
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You can use confidential VMs with the following VM sizes:
DCasv5-series
DCadsv5-series
ECasv5-series
ECadsv5-series
Confidential VMs are currently not supported on 64-bit ARM architectures. |
You have created an install-config.yaml
file.
Use a text editor to edit the install-config.yaml
file prior to deploying your cluster and add the following stanza:
controlPlane: (1)
platform:
azure:
settings:
securityType: ConfidentialVM (2)
confidentialVM:
uefiSettings:
secureBoot: Enabled (3)
virtualizedTrustedPlatformModule: Enabled (4)
osDisk:
securityProfile:
securityEncryptionType: VMGuestStateOnly (5)
1 | Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. |
2 | Enable confidential VMs. |
3 | Enable secure boot. For more information, see the Azure documentation about secure boot. |
4 | Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules. |
5 | Specify VMGuestStateOnly to encrypt the VM guest state. |
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. |
apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2)
hyperthreading: Enabled (3) (4)
name: master
platform:
azure:
encryptionAtHost: true
ultraSSDCapability: Enabled
osDisk:
diskSizeGB: 1024 (5)
diskType: Premium_LRS
diskEncryptionSet:
resourceGroup: disk_encryption_set_resource_group
name: disk_encryption_set_name
subscriptionId: secondary_subscription_id
osImage:
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
type: Standard_D8s_v3
replicas: 3
compute: (2)
- hyperthreading: Enabled (3)
name: worker
platform:
azure:
ultraSSDCapability: Enabled
type: Standard_D2s_v3
encryptionAtHost: true
osDisk:
diskSizeGB: 512 (5)
diskType: Standard_LRS
diskEncryptionSet:
resourceGroup: disk_encryption_set_resource_group
name: disk_encryption_set_name
subscriptionId: secondary_subscription_id
osImage:
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
zones: (6)
- "1"
- "2"
- "3"
replicas: 5
metadata:
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (7)
serviceNetwork:
- 172.30.0.0/16
platform:
azure:
defaultMachinePlatform:
osImage: (8)
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
ultraSSDCapability: Enabled
baseDomainResourceGroupName: resource_group (9)
region: centralus (1)
resourceGroupName: existing_resource_group (10)
networkResourceGroupName: vnet_resource_group (11)
virtualNetwork: vnet (12)
controlPlaneSubnet: control_plane_subnet (13)
computeSubnet: compute_subnet (14)
outboundType: UserDefinedRouting (15)
cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' (1)
sshKey: ssh-ed25519 AAAA... (16)
publish: Internal (17)
1 | Required. The installation program prompts you for this value. |
2 | If you do not provide these parameters and values, the installation program provides the default value. |
3 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used. |
4 | Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. If you disable simultaneous multithreading, use a larger virtual machine type, such as Standard_D8s_v3, for your machines. |
5 | You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB. |
6 | Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. |
7 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value. |
8 | Optional: A custom Fedora CoreOS (FCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. |
9 | Specify the name of the resource group that contains the DNS zone for your base domain. |
10 | Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. |
11 | If you use an existing VNet, specify the name of the resource group that contains it. |
12 | If you use an existing VNet, specify its name. |
13 | If you use an existing VNet, specify the name of the subnet to host the control plane machines. |
14 | If you use an existing VNet, specify the name of the subnet to host the compute machines. |
15 | You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. |
16 | You can optionally provide the sshKey value that you use to access the machines in your cluster. For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
17 | How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External. |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace that contains one or more additional CA
certificates that are required for proxying HTTPS connections. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges these contents
with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the FCOS trust
bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer, for example ./openshift-install wait-for install-complete --log-level debug. |
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
You can install the OpenShift CLI (oc
) to interact with
OKD
from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.16. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip
.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
Verify your installation by using an oc
command:
$ oc <command>
By default, administrator secrets are stored in the kube-system
project. If you configured the credentialsMode
parameter in the install-config.yaml
file to Manual
, you must use one of the following alternatives:
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials.
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.
If you did not set the credentialsMode
parameter in the install-config.yaml
configuration file to Manual
, modify the value as shown:
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory>
is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
custom resources (CRs) from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command creates a YAML file for each CredentialsRequest
object.
CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AzureProviderSpec
roleBindings:
- role: Contributor
...
Create YAML files for secrets in the openshift-install
manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef
for each CredentialsRequest
object.
CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AzureProviderSpec
roleBindings:
- role: Contributor
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
Secret object
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
azure_subscription_id: <base64_encoded_azure_subscription_id>
azure_client_id: <base64_encoded_azure_client_id>
azure_client_secret: <base64_encoded_azure_client_secret>
azure_tenant_id: <base64_encoded_azure_tenant_id>
azure_resource_prefix: <base64_encoded_azure_resource_prefix>
azure_resourcegroup: <base64_encoded_azure_resourcegroup>
azure_region: <base64_encoded_azure_region>
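Each data value must be base64-encoded. For example, to encode a value without a trailing newline on a Linux host:
$ echo -n '<azure_subscription_id>' | base64 -w0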
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. |
To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl
) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment. |
You have access to an OKD account with cluster administrator access.
You have installed the OpenShift CLI (oc
).
You have created a global Microsoft Azure account for the ccoctl
utility to use with the following permissions:
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/delete
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleAssignments/delete
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleDefinitions/read
Microsoft.Authorization/roleDefinitions/write
Microsoft.Authorization/roleDefinitions/delete
Microsoft.Storage/storageAccounts/listkeys/action
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.Storage/storageAccounts/blobServices/containers/delete
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.ManagedIdentity/userAssignedIdentities/delete
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedIdentity/userAssignedIdentities/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
Microsoft.Storage/register/action
Microsoft.ManagedIdentity/register/action
Set a variable for the OKD release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OKD release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. |
Extract the ccoctl
binary from the CCO container image within the OKD release image by running the following command:
$ oc image extract $CCO_IMAGE \
--file="/usr/bin/ccoctl.<rhel_version>" \(1)
-a ~/.pull-secret
1 | For <rhel_version>, specify the value that corresponds to the version of Fedora that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 and rhel9. |
Change the permissions to make ccoctl
executable by running the following command:
$ chmod 775 ccoctl.<rhel_version>
To verify that ccoctl
is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
OpenShift credentials provisioning tool
Usage:
ccoctl [command]
Available Commands:
aws Manage credentials objects for AWS cloud
azure Manage credentials objects for Azure
gcp Manage credentials objects for Google cloud
help Help about any command
ibmcloud Manage credentials objects for IBM Cloud
nutanix Manage credentials objects for Nutanix
Flags:
-h, --help help for ccoctl
Use "ccoctl [command] --help" for more information about a command.
You can use the ccoctl azure create-all
command to automate the creation of Azure resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. |
You must have:
Extracted and prepared the ccoctl
binary.
Access to your Microsoft Azure account by using the Azure CLI.
Set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
objects from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command might take a few moments to run. |
To enable the ccoctl
utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:
$ az login
Use the ccoctl
tool to process all CredentialsRequest
objects by running the following command:
$ ccoctl azure create-all \
--name=<azure_infra_name> \(1)
--output-dir=<ccoctl_output_dir> \(2)
--region=<azure_region> \(3)
--subscription-id=<azure_subscription_id> \(4)
--credentials-requests-dir=<path_to_credentials_requests_directory> \(5)
--dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \(6)
--tenant-id=<azure_tenant_id> (7)
1 | Specify the user-defined name for all created Azure resources used for tracking. |
2 | Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. |
3 | Specify the Azure region in which cloud resources will be created. |
4 | Specify the Azure subscription ID to use. |
5 | Specify the directory containing the files for the component CredentialsRequest objects. |
6 | Specify the name of the resource group containing the cluster’s base domain Azure DNS zone. |
7 | Specify the Azure tenant ID to use. |
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. |
To verify that the OKD secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests
directory:
$ ls <path_to_ccoctl_output_dir>/manifests
azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml
You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.
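For example, one way to spot-check the user-assigned identities that ccoctl created is to list them with the Azure CLI; this sketch assumes the resource group is named after the --name value that you passed to ccoctl:
$ az identity list --resource-group <azure_infra_name> -o table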
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl
) created to the correct directories for the installation program.
You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility (ccoctl
).
You have created the cloud provider resources that are required for your cluster with the ccoctl
utility.
If you did not set the credentialsMode
parameter in the install-config.yaml
configuration file to Manual
, modify the value as shown:
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you used the ccoctl
utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName
parameter in the install-config.yaml
as shown:
apiVersion: v1
baseDomain: example.com
# ...
platform:
azure:
resourceGroupName: <azure_infra_name> (1)
# ...
1 | This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. |
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory>
is the directory in which the installation program creates files.
Copy the manifests that the ccoctl
utility generated to the manifests
directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls
directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
By installing a private image registry on a private Microsoft Azure cluster, you can create private storage endpoints. Private storage endpoints disable public facing endpoints to the registry’s storage account, adding an extra layer of security to your OKD deployment.
Do not install a private image registry on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state. |
Use the following guide to prepare your private Microsoft Azure cluster for installation with a private image registry.
You have access to an OKD account with cluster administrator access.
You have installed the OpenShift CLI (oc).
You have prepared an install-config.yaml
that includes the following information:
The publish
field is set to Internal
You have set the permissions for creating a private storage endpoint. For more information, see "Azure permissions for installer-provisioned infrastructure".
If you have not previously created installation manifest files, do so by running the following command:
$ ./openshift-install create manifests --dir <installation_directory>
This command displays the following messages:
INFO Consuming Install Config from target directory
INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift
Create an image registry configuration object and pass in the networkResourceGroupName
, subnetName
, and vnetName
provided by Microsoft Azure. For example:
$ touch imageregistry-config.yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
name: cluster
spec:
managementState: "Managed"
replicas: 2
rolloutStrategy: RollingUpdate
storage:
azure:
networkAccess:
internal:
networkResourceGroupName: <vnet_resource_group> (1)
subnetName: <subnet_name> (2)
vnetName: <vnet_name> (3)
type: Internal
1 | Optional. If you have an existing VNet and subnet setup, replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). |
2 | Optional. If you have an existing VNet and subnet setup, replace <subnet_name> with the name of the existing compute subnet within the specified resource group. |
3 | Optional. If you have an existing VNet and subnet setup, replace <vnet_name> with the name of the existing virtual network (VNet) in the specified resource group. |
The imageregistry-config.yaml file is consumed during the installation process. If needed, back it up before installation. |
Move the imageregistry-config.yaml
file to the <installation_directory/manifests>
folder by running the following command:
$ mv imageregistry-config.yaml <installation_directory/manifests/>
After you have moved the imageregistry-config.yaml
file to the <installation_directory/manifests>
folder and set the required permissions, proceed to "Deploying the cluster".
For the list of permissions needed to create a private storage endpoint, see Required Azure permissions for installer-provisioned infrastructure.
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
You have configured an account with the cloud platform that hosts your cluster.
You have the OKD installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
If you are installing the cluster using a service principal, you have its application ID and password.
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from.
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/
directory and delete the osServicePrincipal.json
configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
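For example:
$ rm ~/.azure/osServicePrincipal.json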
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the installation program cannot locate the osServicePrincipal.json
configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
Enter the following Azure parameter values for your subscription:
azure subscription id: Enter the subscription ID to use for the cluster.
azure tenant id: Enter the tenant ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
If you are using a service principal, enter its application ID.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, specify its client ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
If you are using a service principal, enter its password.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, leave this value blank.
If previously not detected, the installation program creates an osServicePrincipal.json
configuration file and stores this file in the ~/.azure/
directory on your computer. This ensures that the installation program can load the profile when it is creating an OKD cluster on the target platform.
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
See Accessing the web console for more details about accessing and understanding the OKD web console.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.