
You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an OKD cluster where the control planes are hosted. The hosting cluster is also known as the management cluster.

The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages.

The hosted control planes feature is enabled by default.
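
If you want to confirm that the feature is enabled on the multicluster engine Operator, you can query the MultiClusterEngine resource as shown in the following example. The mce short name, the multiclusterengine resource name, and the hypershift component name are assumptions based on a typical multicluster engine Operator installation; an output of true indicates that the feature is enabled:

    $ oc get mce multiclusterengine -o jsonpath='{.spec.overrides.components[?(@.name=="hypershift")].enabled}'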

The multicluster engine Operator supports only the default local-cluster managed hub cluster. On Red Hat Advanced Cluster Management (RHACM) 2.10, you can use the local-cluster managed hub cluster as the hosting cluster.

A hosted cluster is an OKD cluster with its API endpoint and control plane that are hosted on the hosting cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hcp command-line interface (CLI) to create a hosted cluster.

The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".

Preparing to deploy hosted control planes on non-bare metal agent machines

As you prepare to deploy hosted control planes on non-bare metal agent machines, consider the following information:

  • You can add agent machines to a hosted cluster as worker nodes by using the Agent platform. An agent machine represents a host that is booted with a Discovery Image and is ready to be provisioned as an OKD node. The Agent platform is part of the central infrastructure management service. For more information, see Enabling the central infrastructure management service.

  • All hosts that are not bare metal require a manual boot with a Discovery Image ISO that the central infrastructure management service provides.

  • When you scale up the node pool, a machine is created for every replica. For every machine, the Cluster API provider finds and installs an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions, as shown in the example commands after this list.

  • When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image.

  • When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage" in the OKD documentation.
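
For the Agent monitoring step in the preceding list, you can list the Agent resources and inspect the status conditions of a specific Agent by using commands similar to the following example. The <agent_namespace> and <agent_name> values are placeholders for the namespace that contains your Agent resources and the name of an individual Agent:

    $ oc get agent -n <agent_namespace>

    $ oc describe agent <agent_name> -n <agent_namespace>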

Prerequisites for deploying hosted control planes on non-bare metal agent machines

You must review the following prerequisites before deploying hosted control planes on non-bare metal agent machines:

  • You need the multicluster engine for Kubernetes Operator, version 2.5 or later, installed on an OKD cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). You can also install the multicluster engine Operator without RHACM as an Operator from the OKD OperatorHub.

  • You have at least one managed OKD cluster for the multicluster engine Operator. The local-cluster managed hub cluster is automatically imported. See Advanced configuration for more information about the local-cluster. You can check the status of your hub cluster by running the following command; example output follows this list:

    $ oc get managedclusters local-cluster
  • You enabled central infrastructure management. For more information, see Enabling the central infrastructure management service.

  • You installed the hcp command-line interface.

  • Your hosted cluster has a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, the multicluster engine Operator cannot manage it.

  • You run the hub cluster and workers on the same platform for hosted control planes.
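
For the hub cluster status check in the preceding list, the output of the oc get managedclusters local-cluster command is similar to the following example. The URL and age values are illustrative:

    Example output
    NAME            HUB ACCEPTED   MANAGED CLUSTER URLS            JOINED   AVAILABLE   AGE
    local-cluster   true           https://<api_server_url>:6443   True     True        77m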

Firewall, port, and service requirements for non-bare metal agent machines

You must meet the firewall and port requirements so that the management cluster, the control plane, and the hosted clusters can communicate.

Services run on their default ports. However, if you use the NodePort publishing strategy, services run on the port that is assigned by the NodePort service.

Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address.

A hosted control plane exposes the following services on non-bare metal agent machines:

  • APIServer

    • The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components.

    • If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.

  • OAuthServer

    • The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service.

    • If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service.

  • Konnectivity

    • The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service.

    • The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort.

    • If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.

    • If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.

  • Ignition

    • The Ignition service runs on port 443 by default when you use the route and ingress to expose the service.

    • If you use the NodePort publishing strategy, use a firewall rule for the Ignition service.

You do not need the following services on non-bare metal agent machines:

  • OVNSbDb

  • OIDC
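
To confirm which ports and publishing strategies are in use for a particular hosted cluster, you can list the services in its hosted control plane namespace, as shown in the following example. The namespace is a placeholder, and the exact service names, such as kube-apiserver and ignition-server, can vary by version:

    $ oc get services -n <hosted_control_plane_namespace>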

Infrastructure requirements for non-bare metal agent machines

The Agent platform does not create any infrastructure, but it has the following infrastructure requirements:

  • Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OKD node.

  • DNS: The API and ingress endpoints must be routable.

Configuring DNS on non-bare metal agent machines

The API server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<basedomain> that points to the destination where the API server can be reached.

The DNS entry can be as simple as a record that points to one of the nodes in the management cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.

  • If you are configuring DNS for a connected environment on an IPv4 network, see the following example DNS configuration:

    api-int.example.krnl.es.    IN A 192.168.122.22
    *.apps.example.krnl.es.     IN A 192.168.122.23
  • If you are configuring DNS for a disconnected environment on an IPv6 network, see the following example DNS configuration:

    api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::7
    *.apps.example.krnl.es.     IN AAAA 2620:52:0:1306::10
  • If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6. See the following example DNS configuration:

    host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2
    address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3
    dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5]
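
After you create the DNS entries, you can optionally confirm that they resolve from the network where the agent machines run. The host names in the following command are placeholders that follow the pattern described earlier in this section, and the command must return the node IP address or the load balancer IP address that you configured:

    $ dig +short api.<hosted_cluster_name>.<basedomain>

If you also created a wildcard entry for applications, you can check any host name under the apps subdomain in the same way.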

Creating a hosted cluster on non-bare metal agent machines by using the CLI

When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. You can create a hosted cluster on non-bare metal agent machines or import one.

As you create a hosted cluster, review the following guidelines:

  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, the multicluster engine Operator cannot manage it.

  • Do not use the word clusters as a hosted cluster name.

  • A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.

Procedure
  1. Create the hosted control plane namespace by entering the following command:

    $ oc create ns <hosted_cluster_namespace>-<hosted_cluster_name> (1)
    1 Replace <hosted_cluster_namespace> with your hosted cluster namespace name, for example, clusters. Replace <hosted_cluster_name> with your hosted cluster name.
  2. Create a hosted cluster by entering the following command:

    $ hcp create cluster agent \
      --name=<hosted_cluster_name> \(1)
      --pull-secret=<path_to_pull_secret> \(2)
      --agent-namespace=<hosted_control_plane_namespace> \(3)
      --base-domain=<basedomain> \(4)
      --api-server-address=api.<hosted_cluster_name>.<basedomain> \(5)
      --etcd-storage-class=<etcd_storage_class> \(6)
      --ssh-key <path_to_ssh_key> \(7)
      --namespace <hosted_cluster_namespace> \(8)
      --control-plane-availability-policy SingleReplica \
      --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release> (9)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the path to your pull secret, for example, /user/name/pullsecret.
    3 Specify your hosted control plane namespace, for example, clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command.
    4 Specify your base domain, for example, krnl.es.
    5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Verify that you have a default storage class configured for your cluster. Otherwise, you might end up with pending PVCs. Specify the etcd storage class name, for example, lvm-storageclass.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the supported OKD version that you want to use, for example, 4.14.0-x86_64.
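
While the hosted cluster is being created, you can optionally list the NodePool resources that are associated with it in the management cluster. This supplementary check is not required by the procedure, and the namespace is a placeholder:

    $ oc get nodepools -n <hosted_cluster_namespace>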
Verification
  • After a few moments, verify that your hosted control plane pods are up and running by entering the following command:

    $ oc -n <hosted_control_plane_namespace> get pods
    Example output
    NAME                                             READY   STATUS    RESTARTS   AGE
    catalog-operator-6cd867cc7-phb2q                 2/2     Running   0          2m50s
    control-plane-operator-f6b4c8465-4k5dh           1/1     Running   0          4m32s
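
In addition to checking the pods, you can check the status of the HostedCluster resource itself. The namespace is a placeholder and the exact columns can vary by version; the PROGRESS column typically reports Completed when the control plane is fully rolled out:

    $ oc get hostedclusters -n <hosted_cluster_namespace>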

Creating a hosted cluster on non-bare metal agent machines by using the web console

You can create a hosted cluster on non-bare metal agent machines by using the OKD web console.

Prerequisites
  • You have access to the cluster with cluster-admin privileges.

  • You have access to the OKD web console.

Procedure
  1. Open the OKD web console and log in by entering your administrator credentials.

  2. In the console header, select All Clusters.

  3. Click Infrastructure → Clusters.

  4. Click Create cluster → Host inventory → Hosted control plane.

    The Create cluster page is displayed.

  5. On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation.

As you enter details about the cluster, you might find the following tips useful:

  • If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see Creating a credential for an on-premises environment.

  • On the Cluster details page, the pull secret is your OKD pull secret that you use to access OKD resources. If you selected a host inventory credential, the pull secret is automatically populated.

  • On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace.

  • On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.<hosted_cluster_name>.<basedomain> setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods.

  6. Review your entries and click Create.

    The Hosted cluster view is displayed.

  7. Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name. Wait until the control plane components are ready. This process can take a few minutes.

  8. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.


Creating a hosted cluster on bare metal by using a mirror registry

You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources flag in the hcp create cluster command.

Procedure
  1. Create a YAML file to define Image Content Source Policies (ICSP). See the following example:

    - mirrors:
      - brew.registry.redhat.io
      source: registry.redhat.io
    - mirrors:
      - brew.registry.redhat.io
      source: registry.stage.redhat.io
    - mirrors:
      - brew.registry.redhat.io
      source: registry-proxy.engineering.redhat.com
  2. Save the file as icsp.yaml. This file contains your mirror registries.

  3. To create a hosted cluster by using your mirror registries, run the following command:

    $ hcp create cluster agent \
        --name=<hosted_cluster_name> \(1)
        --pull-secret=<path_to_pull_secret> \(2)
        --agent-namespace=<hosted_control_plane_namespace> \(3)
        --base-domain=<basedomain> \(4)
        --api-server-address=api.<hosted_cluster_name>.<basedomain> \(5)
        --image-content-sources icsp.yaml \(6)
        --ssh-key <path_to_ssh_key> \(7)
        --namespace <hosted_cluster_namespace> \(8)
        --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> (9)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the path to your pull secret, for example, /user/name/pullsecret.
    3 Specify your hosted control plane namespace, for example, clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command.
    4 Specify your base domain, for example, krnl.es.
    5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Specify the icsp.yaml file that defines ICSP and your mirror registries.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the supported OKD version that you want to use, for example, 4.14.0-x86_64. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OKD release image digest, see Extracting the OKD release image digest.
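
If you are working in a disconnected environment and need the release image digest that callout 9 mentions, one approach is to query the release image with the oc adm release info command. The release tag is an example, and the jsonpath output option is an assumption that depends on your oc client version:

    $ oc adm release info quay.io/openshift-release-dev/ocp-release:<ocp_release> -o jsonpath='{.digest}'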

Verifying hosted cluster creation on non-bare metal agent machines

After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster.

Procedure
  1. Obtain the kubeconfig file for your new hosted cluster by entering the following command:

    $ oc extract -n <hosted_cluster_namespace> secret/<hosted_cluster_name>-admin-kubeconfig --to=- > kubeconfig-<hosted_cluster_name>
  2. Use the kubeconfig file to view the cluster Operators of the hosted cluster. Enter the following command:

    $ oc get co --kubeconfig=kubeconfig-<hosted_cluster_name>
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.10.26   True        False         False      2m38s
    csi-snapshot-controller                    4.10.26   True        False         False      4m3s
    dns                                        4.10.26   True        False         False      2m52s
  3. View the running pods on your hosted cluster by entering the following command:

    $ oc get pods -A --kubeconfig=kubeconfig-<hosted_cluster_name>
    Example output
    NAMESPACE                                          NAME                                                      READY   STATUS             RESTARTS        AGE
    kube-system                                        konnectivity-agent-khlqv                                  0/1     Running            0               3m52s
    openshift-cluster-samples-operator                 cluster-samples-operator-6b5bcb9dff-kpnbc                 2/2     Running            0               20m
    openshift-monitoring                               alertmanager-main-0                                       6/6     Running            0               100s
    openshift-monitoring                               openshift-state-metrics-677b9fb74f-qqp6g                  3/3     Running            0               104s
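
As a final check, you can confirm that the agent machines joined the hosted cluster as nodes and report the Ready status. The kubeconfig file name follows the extraction step earlier in this procedure:

    $ oc get nodes --kubeconfig=kubeconfig-<hosted_cluster_name>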