Configuring image mirroring for hosted control planes in a disconnected environment

Image mirroring is the process of fetching images from external registries, such as registry.redhat.io or quay.io, and storing them in your private registry.

The following procedures use the oc-mirror tool, a binary that consumes an ImageSetConfiguration file. In that file, you can specify the following information:

  • The OKD versions to mirror. The versions are in quay.io.

  • The additional Operators to mirror. Select packages individually.

  • The extra images that you want to add to the repository.

Prerequisites

Ensure that the registry server is running before you start the mirroring process.

Procedure

To configure image mirroring, complete the following steps:

  1. Ensure that your ${HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to.
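
    The following sketch shows a minimal ${HOME}/.docker/config.json; the registry entries and base64-encoded credentials are placeholders for your environment:

    {
      "auths": {
        "registry.<dns.base.domain.name>:5000": {
          "auth": "<base64_credentials>"
        },
        "quay.io": {
          "auth": "<base64_credentials>"
        },
        "registry.redhat.io": {
          "auth": "<base64_credentials>"
        }
      }
    }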

  2. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment:

    apiVersion: mirror.openshift.io/v1alpha2
    kind: ImageSetConfiguration
    storageConfig:
      registry:
        imageURL: registry.<dns.base.domain.name>:5000/openshift/release/metadata:latest (1)
    mirror:
      platform:
        channels:
        - name: candidate-4.17
          minVersion: 4.x.y-build  (2)
          maxVersion: 4.x.y-build (2)
          type: ocp
        kubeVirtContainer: true (3)
        graph: true
      additionalImages:
      - name: quay.io/karmab/origin-keepalived-ipfailover:latest
      - name: quay.io/karmab/kubectl:latest
      - name: quay.io/karmab/haproxy:latest
      - name: quay.io/karmab/mdns-publisher:latest
      - name: quay.io/karmab/origin-coredns:latest
      - name: quay.io/karmab/curl:latest
      - name: quay.io/karmab/kcli:latest
      - name: quay.io/user-name/trbsht:latest
      - name: quay.io/user-name/hypershift:BMSelfManage-v4.17
      - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
        packages:
        - name: lvms-operator
        - name: local-storage-operator
        - name: odf-csi-addons-operator
        - name: odf-operator
        - name: mcg-operator
        - name: ocs-operator
        - name: metallb-operator
        - name: kubevirt-hyperconverged (4)
    1 Replace <dns.base.domain.name> with the DNS base domain name.
    2 Replace 4.x.y-build with the supported OKD version you want to use.
    3 Set this optional flag to true if you want to also mirror the container disk image for the Fedora CoreOS (FCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
    4 For deployments that use the KubeVirt provider, include this line.
  3. Start the mirroring process by entering the following command:

    $ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}

    After the mirroring process is finished, you have a new folder named oc-mirror-workspace/results-XXXXXX/, which contains the IDMS and the catalog sources to apply on the hosted cluster.
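
    For example, a listing of the results folder might resemble the following sketch; the exact file names vary for each run:

    $ ls oc-mirror-workspace/results-XXXXXX/
    catalogSource-XXXXXXXX-index.yaml  imageContentSourcePolicy.yaml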

  4. Mirror the nightly or CI versions of OKD by configuring the imagesetconfig.yaml file as follows:

    apiVersion: mirror.openshift.io/v2alpha1
    kind: ImageSetConfiguration
    mirror:
      platform:
        graph: true
        release: registry.ci.openshift.org/ocp/release:4.x.y-build (1)
        kubeVirtContainer: true (2)
    # ...
    1 Replace 4.x.y-build with the supported OKD version you want to use.
    2 Set this optional flag to true if you want to also mirror the container disk image for the Fedora CoreOS (FCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
  5. Apply the changes to the file by entering the following command:

    $ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
  6. Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks.

Applying objects in the management cluster

After the mirroring process is complete, you need to apply two objects in the management cluster:

  • ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS)

  • Catalog sources

When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/.

The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY, you need to apply the newly generated catalog sources.
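
The following sketch shows the general shape of a generated ImageDigestMirrorSet; the repository values are hypothetical placeholders, so apply the generated file rather than writing one by hand:

    apiVersion: config.openshift.io/v1
    kind: ImageDigestMirrorSet
    metadata:
      name: <idms_name>
    spec:
      imageDigestMirrors:
      - source: quay.io/<source_repository>
        mirrors:
        - registry.<dns.base.domain.name>:5000/<mirrored_repository>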

The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image.
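
Similarly, the following sketch shows the general shape of a generated CatalogSource; the image reference is a hypothetical mirror location, so apply the generated file from the results folder:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: <catalog_source_name>
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: registry.<dns.base.domain.name>:5000/redhat/redhat-operator-index:v4.17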

Procedure
  1. To check the new sources, run the following command by using the new CatalogSource as a source:

    $ oc get packagemanifest
  2. To apply the artifacts, complete the following steps:

    1. Create the ICSP or IDMS artifacts by entering the following command:

      $ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml
    2. Wait for the nodes to become ready, and then enter the following command:

      $ oc apply -f catalogSource-XXXXXXXX-index.yaml
  3. Mirror the OLM catalogs and configure the hosted cluster to point to the mirror.

    When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster.

    1. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2", where the source registry string is a key and the destination registry is a value.
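
      For example, the following command is a sketch that assumes a single override from registry.redhat.io to your internal registry:

      $ oc annotate hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> \
        hypershift.openshift.io/olm-catalogs-is-registry-overrides="registry.redhat.io=registry.<dns.base.domain.name>:5000"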

    2. To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs:

      • hypershift.openshift.io/certified-operators-catalog-image

      • hypershift.openshift.io/community-operators-catalog-image

      • hypershift.openshift.io/redhat-marketplace-catalog-image

      • hypershift.openshift.io/redhat-operators-catalog-image

In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates.
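
The following HostedCluster snippet is a sketch of how the four annotations might look; the catalog image references are hypothetical and must point to the catalogs in your mirror registry:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: <hosted_cluster_name>
      annotations:
        hypershift.openshift.io/certified-operators-catalog-image: registry.<dns.base.domain.name>:5000/redhat/certified-operator-index:v4.17
        hypershift.openshift.io/community-operators-catalog-image: registry.<dns.base.domain.name>:5000/redhat/community-operator-index:v4.17
        hypershift.openshift.io/redhat-marketplace-catalog-image: registry.<dns.base.domain.name>:5000/redhat/redhat-marketplace-index:v4.17
        hypershift.openshift.io/redhat-operators-catalog-image: registry.<dns.base.domain.name>:5000/redhat/redhat-operator-index:v4.17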

Next steps

Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes.

Deploying multicluster engine Operator for a disconnected installation of hosted control planes

The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the following documentation to understand the prerequisites and steps to install it:

Configuring TLS certificates for a disconnected installation of hosted control planes

To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster.

Adding the registry CA to the management cluster

To add the registry CA to the management cluster, complete the following steps.

Procedure
  1. Create a config map that resembles the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <config_map_name> (1)
      namespace: <config_map_namespace> (2)
    data: (3)
      <registry_name>..<port>: | (4)
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      <registry_name>..<port>: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      <registry_name>..<port>: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    1 Specify the name of the config map.
    2 Specify the namespace for the config map.
    3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000.
    4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | -. If you use other methods, issues can occur when the pod reads the certificates.
  2. Patch the cluster-wide image.config.openshift.io object to include the following specification:

    spec:
      additionalTrustedCA:
        name: registry-config

    As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OKD payload for hosted cluster deployments.

    The process to patch the object might take several minutes to complete.
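
    For example, a merge patch similar to the following sketch applies that specification; it assumes that the config map from the previous step is named registry-config:

    $ oc patch image.config.openshift.io/cluster --type=merge \
      -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'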

Adding the registry CA to the worker nodes for the hosted cluster

For the data plane workers in the hosted cluster to retrieve images from the private registry, you must add the registry CA to the worker nodes.

Procedure
  1. In the HostedCluster resource, add the following specification to the hc.spec.additionalTrustBundle field:

    spec:
      additionalTrustBundle:
        name: user-ca-bundle (1)
    1 The user-ca-bundle entry is a config map that you create in the next step.
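
    For example, you can set this field with a merge patch similar to the following sketch, where <hosted_cluster_namespace> is the namespace that contains the HostedCluster resource:

    $ oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type=merge \
      -p '{"spec":{"additionalTrustBundle":{"name":"user-ca-bundle"}}}'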
  2. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example:

    apiVersion: v1
    data:
      ca-bundle.crt: |
        // Registry1 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
        // Registry2 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
        // Registry3 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
    kind: ConfigMap
    metadata:
      name: user-ca-bundle
      namespace: <hosted_cluster_namespace> (1)
    1 Specify the namespace where the HostedCluster object is created.

Creating a hosted cluster on OKD Virtualization

A hosted cluster is an OKD cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane.

Requirements to deploy hosted control planes on OKD Virtualization

As you prepare to deploy hosted control planes on OKD Virtualization, consider the following information:

  • Run the hub cluster and workers on the same platform for hosted control planes.

  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, multicluster engine Operator cannot manage it.

  • Do not use the name clusters as a hosted cluster name.

  • A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.

  • When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage".

Creating a hosted cluster with the KubeVirt platform by using the CLI

To create a hosted cluster, you can use the hosted control plane command line interface, hcp.

Procedure
  1. Enter the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <value_for_memory> \ (4)
      --cores <value_for_cpu> \ (5)
      --etcd-storage-class=<etcd_storage_class> (6)
    
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the worker count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify a value for memory, for example, 6Gi.
    5 Specify a value for CPU, for example, 2.
    6 Specify the etcd storage class name, for example, lvm-storageclass.

    You can use the --release-image flag to set up the hosted cluster with a specific OKD release.
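
    For example, the following sketch shows the flag usage, where <release_image> is a placeholder for the full release image pull spec in your registry:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <value_for_memory> \
      --cores <value_for_cpu> \
      --release-image <release_image>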

    A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag.

  2. After a few moments, verify that the hosted control plane pods are running by entering the following command:

    $ oc -n clusters-<hosted_cluster_name> get pods
    Example output
    NAME                                                  READY   STATUS    RESTARTS   AGE
    capi-provider-5cc7b74f47-n5gkr                        1/1     Running   0          3m
    catalog-operator-5f799567b7-fd6jw                     2/2     Running   0          69s
    certified-operators-catalog-784b9899f9-mrp6p          1/1     Running   0          66s
    cluster-api-6bbc867966-l4dwl                          1/1     Running   0          66s
    .
    .
    .
    redhat-operators-catalog-9d5fd4d44-z8qqk              1/1     Running   0          66s

    A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned.

  3. To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command:

    $ oc get --namespace clusters hostedclusters

    See the following example output, which illustrates a fully provisioned HostedCluster object:

    NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    clusters    example   4.x.0     example-admin-kubeconfig   Completed   True        False         The hosted control plane is available

    Replace 4.x.0 with the supported OKD version that you want to use.

Configuring the default ingress and DNS for hosted control planes on OKD Virtualization

Every OKD cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OKD cluster that the KubeVirt virtual machines run on.

For example, your OKD cluster might have the following default ingress DNS entry:

*.apps.mgmt-cluster.example.com

As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OKD cluster has the following default ingress:

*.apps.guest.apps.mgmt-cluster.example.com
Procedure

For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes.

  • You can configure this behavior by entering the following command:

    $ oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'

When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior.

Customizing ingress and DNS behavior

If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.

Deploying a hosted cluster that specifies the base domain

To create a hosted cluster that specifies a base domain, complete the following steps.

Procedure
  1. Enter the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <value_for_memory> \ (4)
      --cores <value_for_cpu> \ (5)
      --base-domain <basedomain> (6)
    
    1 Specify the name of your hosted cluster.
    2 Specify the worker count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify a value for memory, for example, 6Gi.
    5 Specify a value for CPU, for example, 2.
    6 Specify the base domain, for example, hypershift.lab.

    As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, *.apps.example.hypershift.lab. The hosted cluster remains in the Partial status because, after you create a hosted cluster with a unique base domain, you must configure the required DNS records and load balancer.

  2. View the status of your hosted cluster by entering the following command:

    $ oc get --namespace clusters hostedclusters
    Example output
    NAME            VERSION   KUBECONFIG                       PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
    example                   example-admin-kubeconfig         Partial    True        False         The hosted control plane is available
  3. Access the cluster by entering the following commands:

    $ hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig
    $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get co
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.x.0     False       False         False      30m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host
    ingress                                    4.x.0     True        False         True       28m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)

    Replace 4.x.0 with the supported OKD version that you want to use.

Next steps

To fix the errors in the output, complete the steps in Setting up the load balancer and Setting up a wildcard DNS.

If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see Optional: Configuring MetalLB.

Setting up the load balancer

Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address.

Procedure
  1. A NodePort service that exposes the hosted cluster ingress already exists. You can get the node port values and create a load balancer service that targets those ports.

    1. Get the HTTP node port by entering the following command:

      $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'

      Note the HTTP node port value to use in the next step.

    2. Get the HTTPS node port by entering the following command:

      $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'

      Note the HTTPS node port value to use in the next step.

  2. Create the load balancer service by entering the following command:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: <hosted_cluster_name>
      name: <hosted_cluster_name>-apps
      namespace: clusters-<hosted_cluster_name>
    spec:
      ports:
      - name: https-443
        port: 443
        protocol: TCP
        targetPort: <https_node_port> (1)
      - name: http-80
        port: 80
        protocol: TCP
        targetPort: <http_node_port> (2)
      selector:
        kubevirt.io: virt-launcher
      type: LoadBalancer
    EOF
    1 Specify the HTTPS node port value that you noted in the previous step.
    2 Specify the HTTP node port value that you noted in the previous step.

Setting up a wildcard DNS

Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service.

Procedure
  1. Get the external IP address by entering the following command:

    $ oc -n clusters-<hosted_cluster_name> get service <hosted_cluster_name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    Example output
    192.168.20.30
  2. Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry:

    *.apps.<hosted_cluster_name>.<base_domain>.

    The DNS entry must be able to route inside and outside of the cluster.

    Example DNS resolution
    $ dig +short test.apps.example.hypershift.lab
    
    192.168.20.30
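
    For example, a BIND zone file entry that resolves the wildcard to the load balancer IP address from the previous step might resemble the following sketch:

    *.apps.example.hypershift.lab. IN A 192.168.20.30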
  3. Check that hosted cluster status has moved from Partial to Completed by entering the following command:

    $ oc get --namespace clusters hostedclusters
    Example output
    NAME            VERSION   KUBECONFIG                       PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    example         4.x.0     example-admin-kubeconfig         Completed   True        False         The hosted control plane is available

    Replace 4.x.0 with the supported OKD version that you want to use.

Finishing the deployment

You can monitor the deployment of a hosted cluster from two perspectives: the control plane and the data plane.

Monitoring the control plane

While the deployment proceeds, you can monitor the control plane by gathering information about the following artifacts:

  • The HyperShift Operator

  • The HostedControlPlane pod

  • The bare metal hosts

  • The agents

  • The InfraEnv resource

  • The HostedCluster and NodePool resources

Procedure
  • Enter the following commands to monitor the control plane:

    $ export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig
    $ watch "oc get pod -n hypershift;echo;echo;oc get pod -n clusters-hosted-ipv4;echo;echo;oc get bmh -A;echo;echo;oc get agent -A;echo;echo;oc get infraenv -A;echo;echo;oc get hostedcluster -A;echo;echo;oc get nodepool -A;echo;echo;"

Monitoring the data plane

While the deployment proceeds, you can monitor the data plane by gathering information about the following artifacts:

  • The cluster version

  • The nodes, specifically, about whether the nodes joined the cluster

  • The cluster Operators

Procedure
  • Enter the following commands:

    $ oc get secret -n clusters-hosted-ipv4 admin-kubeconfig -o jsonpath='{.data.kubeconfig}' |base64 -d > /root/hc_admin_kubeconfig.yaml
    $ export KUBECONFIG=/root/hc_admin_kubeconfig.yaml
    $ watch "oc get clusterversion,nodes,co"