
In the context of hosted control planes, a management cluster is an OKD cluster where the HyperShift Operator is deployed and where the control planes for hosted clusters are hosted.

The control plane is associated with a hosted cluster and runs as pods in a single namespace. When the cluster service consumer creates a hosted cluster, it creates worker nodes that are independent of the control plane.

You can run both the management cluster and the worker nodes on-premise, such as on a bare-metal platform or on OKD Virtualization. In addition, you can run both the management cluster and the worker nodes on cloud infrastructure, such as Amazon Web Services (AWS). If you use a mixed infrastructure, such as running the management cluster on AWS and your worker nodes on-premise, or running your worker nodes on AWS and your management cluster on-premise, you must use the PublicAndPrivate publishing strategy and follow the latency requirements in the support matrix.
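The mixed-infrastructure rule above can be sketched as a small helper. This is an illustrative function, not part of any OKD or HyperShift tooling; the platform names and the function itself are assumptions chosen for the example.

```python
# Hypothetical helper (not an OKD API): decides whether a deployment mixes
# on-premise and cloud infrastructure, in which case the PublicAndPrivate
# publishing strategy is required.

ON_PREMISE = {"baremetal", "kubevirt"}  # e.g. bare metal, OKD Virtualization
CLOUD = {"aws"}                         # e.g. Amazon Web Services

def requires_public_and_private(management_platform: str, worker_platform: str) -> bool:
    """Return True when the management cluster and the worker nodes sit on
    different infrastructure classes (one on-premise, one cloud)."""
    platforms = {management_platform, worker_platform}
    return bool(platforms & ON_PREMISE) and bool(platforms & CLOUD)
```

For example, a management cluster on AWS with bare-metal worker nodes mixes infrastructure classes and therefore requires PublicAndPrivate, while an all-AWS deployment does not.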

In Bare Metal Host (BMH) deployments, where the Bare Metal Operator starts machines, the hosted control plane must be able to reach baseboard management controllers (BMCs). If your security profile does not permit the Cluster Baremetal Operator to access the network where the BMHs have their BMCs in order to enable Redfish automation, you can use BYO ISO support. However, in BYO mode, OKD cannot automate the powering on of BMHs.

Support matrix for hosted control planes

Because multicluster engine for Kubernetes Operator includes the HyperShift Operator, releases of hosted control planes align with releases of multicluster engine Operator. For more information, see OpenShift Operator Life Cycles.

Management cluster support

Any supported standalone OKD cluster can be a management cluster.

A single-node OKD cluster is not supported as a management cluster. If you have resource constraints, you can share infrastructure between a standalone OKD control plane and hosted control planes. For more information, see "Shared infrastructure between hosted and standalone control planes".

The following table maps multicluster engine Operator versions to the management cluster versions that support them:

Table 1. Supported multicluster engine Operator versions for OKD management clusters
Management cluster version    Supported multicluster engine Operator version
4.14 - 4.15                   2.4
4.14 - 4.16                   2.5
4.14 - 4.17                   2.6
4.15 - 4.17                   2.7
4.16 - 4.18                   2.8
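Table 1 can be encoded as data for a programmatic compatibility check. This is a minimal sketch only; the dictionary and function names are invented for illustration and mirror the table above.

```python
# Sketch: Table 1 as data, mapping each multicluster engine Operator version
# to the inclusive range of OKD management cluster versions that support it.

MCE_SUPPORTED_MGMT = {
    "2.4": ("4.14", "4.15"),
    "2.5": ("4.14", "4.16"),
    "2.6": ("4.14", "4.17"),
    "2.7": ("4.15", "4.17"),
    "2.8": ("4.16", "4.18"),
}

def _parse(version: str) -> tuple[int, int]:
    """Split a "major.minor" version string into a comparable tuple."""
    major, minor = version.split(".")
    return int(major), int(minor)

def mgmt_supports_mce(mgmt_version: str, mce_version: str) -> bool:
    """Return True if the management cluster version falls in the supported
    range for the given multicluster engine Operator version."""
    low, high = MCE_SUPPORTED_MGMT[mce_version]
    return _parse(low) <= _parse(mgmt_version) <= _parse(high)
```

For example, `mgmt_supports_mce("4.16", "2.8")` is true, while `mgmt_supports_mce("4.14", "2.8")` is false.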

Hosted cluster support

For hosted clusters, no direct relationship exists between the management cluster version and the hosted cluster version. The hosted cluster version depends on the HyperShift Operator that is included with your multicluster engine Operator version.

Ensure a maximum latency of 200 ms between the management cluster and hosted clusters. This requirement is especially important for mixed infrastructure deployments, such as when your management cluster is on AWS and your worker nodes are on-premise.

The following table maps multicluster engine Operator versions to the hosted cluster versions that you can create by using the HyperShift Operator that is associated with that version of multicluster engine Operator:

Although the HyperShift Operator supports the hosted cluster versions in the following table, multicluster engine Operator supports only as far back as 2 versions earlier than the current version. For example, if the current hosted cluster version is 4.18, multicluster engine Operator supports as far back as version 4.16. If you want to use a hosted cluster version that is earlier than one of the versions that multicluster engine Operator supports, you can detach your hosted clusters from multicluster engine Operator to be unmanaged, or you can use an earlier version of multicluster engine Operator. For more information, see The multicluster engine for Kubernetes operator 2.8 Support Matrix.

Table 2. Hosted cluster versions that can be created by multicluster engine Operator versions
Hosted cluster version    multicluster engine Operator version
                          2.4    2.5    2.6    2.7    2.8
4.14                      Yes    Yes    Yes    Yes    Yes
4.15                      No     Yes    Yes    Yes    Yes
4.16                      No     No     Yes    Yes    Yes
4.17                      No     No     No     Yes    Yes
4.18                      No     No     No     No     Yes
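The version-skew rule described earlier, that multicluster engine Operator supports hosted cluster versions only as far back as two minor versions earlier than the current version, can be sketched as a short calculation. The function name is illustrative, not part of any tooling.

```python
# Sketch of the version-skew rule: multicluster engine Operator supports
# hosted cluster versions back to two minor versions earlier than the
# current hosted cluster version.

def earliest_supported_hosted_version(current: str, skew: int = 2) -> str:
    """Return the oldest hosted cluster "major.minor" version supported,
    given the current version and a minor-version skew window."""
    major, minor = (int(part) for part in current.split("."))
    return f"{major}.{max(minor - skew, 0)}"
```

This reproduces the example in the text: for a current hosted cluster version of 4.18, the oldest supported version is 4.16.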

Hosted cluster platform support

A hosted cluster supports only one infrastructure platform. For example, you cannot create multiple node pools on different infrastructure platforms.
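The single-platform constraint can be expressed as a simple validation over a cluster's node pools. This is an illustrative check, not an OKD API; the dictionary shape is an assumption for the example.

```python
# Illustrative check (not an OKD API): verify that every node pool in a
# hosted cluster declares the same platform type, since a hosted cluster
# supports only one infrastructure platform.

def single_platform(node_pools: list[dict]) -> bool:
    """node_pools: dicts shaped like {"name": ..., "platform": ...}.
    Returns True when zero or one distinct platform type is present."""
    return len({pool["platform"] for pool in node_pools}) <= 1
```

For example, two node pools on AWS pass the check, while one AWS node pool plus one OKD Virtualization node pool would violate the constraint.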

The following table indicates which OKD versions are supported for each platform of hosted control planes.

For IBM Power and IBM Z, you must run the control plane on machine types based on 64-bit x86 architecture, and node pools on IBM Power or IBM Z.

In the following table, the management cluster version is the OKD version where the multicluster engine Operator is enabled:

Table 3. Required OKD versions for platforms
Hosted cluster platform                              Management cluster version    Hosted cluster version
Amazon Web Services                                  4.16 - 4.18                   4.16 - 4.18
IBM Power                                            4.17 - 4.18                   4.17 - 4.18
IBM Z                                                4.17 - 4.18                   4.17 - 4.18
OKD Virtualization                                   4.14 - 4.18                   4.14 - 4.18
Bare metal                                           4.14 - 4.18                   4.14 - 4.18
Non-bare-metal agent machines (Technology Preview)   4.16 - 4.18                   4.16 - 4.18

Updates of multicluster engine Operator

When you update to another version of the multicluster engine Operator, your hosted cluster can continue to run if the HyperShift Operator that is included in the version of multicluster engine Operator supports the hosted cluster version. The following table shows which hosted cluster versions are supported on which updated multicluster engine Operator versions.

Although the HyperShift Operator supports the hosted cluster versions in the following table, multicluster engine Operator supports only as far back as 2 versions earlier than the current version. For example, if the current hosted cluster version is 4.18, multicluster engine Operator supports as far back as version 4.16. If you want to use a hosted cluster version that is earlier than one of the versions that multicluster engine Operator supports, you can detach your hosted clusters from multicluster engine Operator to be unmanaged, or you can use an earlier version of multicluster engine Operator. For more information, see The multicluster engine for Kubernetes operator 2.8 Support Matrix.

Table 4. Updated multicluster engine Operator version support for hosted clusters
Updated multicluster engine Operator version    Supported hosted cluster version
Updated from 2.4 to 2.5                         OKD 4.14
Updated from 2.5 to 2.6                         OKD 4.14 - 4.15
Updated from 2.6 to 2.7                         OKD 4.14 - 4.16
Updated from 2.7 to 2.8                         OKD 4.14 - 4.17

For example, if you have an OKD 4.14 hosted cluster on the management cluster and you update from multicluster engine Operator 2.4 to 2.5, the hosted cluster can continue to run.

Technology Preview features

The following list indicates Technology Preview features for this release:

  • Hosted control planes on IBM Z in a disconnected environment

  • Custom taints and tolerations for hosted control planes on OKD Virtualization

  • NVIDIA GPU devices on hosted control planes for OKD Virtualization

FIPS-enabled hosted clusters

The binaries for hosted control planes are FIPS-compliant, with the exception of the hosted control planes command-line interface, hcp.

If you want to deploy a FIPS-enabled hosted cluster, you must use a FIPS-enabled management cluster. To enable FIPS mode for your management cluster, you must run the installation program from a Fedora computer configured to operate in FIPS mode. For more information about configuring FIPS mode on Fedora, see Switching Fedora to FIPS mode.
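On Fedora-family hosts, FIPS mode is reflected in the kernel flag file /proc/sys/crypto/fips_enabled, which contains "1" when FIPS mode is active. The following is a hedged sketch for checking that flag on the computer where you run the installation program; the helper names are invented for the example.

```python
# Sketch: check whether the local Linux host is booted in FIPS mode by
# reading /proc/sys/crypto/fips_enabled ("1" when FIPS mode is active).

from pathlib import Path

FIPS_FLAG = Path("/proc/sys/crypto/fips_enabled")  # standard Linux kernel flag

def fips_enabled(content: str) -> bool:
    """Interpret the content of the fips_enabled flag file."""
    return content.strip() == "1"

def host_fips_enabled(flag_path: Path = FIPS_FLAG) -> bool:
    """Return True if this host is running in FIPS mode; False if the flag
    file is absent (for example, on a non-Linux host)."""
    try:
        return fips_enabled(flag_path.read_text())
    except OSError:
        return False
```

Alternatively, on Fedora you can run `fips-mode-setup --check` to report the current FIPS mode status.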

When running Fedora or Fedora CoreOS (FCOS) booted in FIPS mode, OKD core components use the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

After you set up your management cluster in FIPS mode, the hosted cluster creation process runs on that management cluster.