
You can install the Migration Toolkit for Containers (MTC) on OKD 4 in a restricted network environment by performing the following procedures:

  1. Create a mirrored Operator catalog.

    This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OKD 4.2 to 4.5 source cluster.

  2. Install the Migration Toolkit for Containers Operator on the OKD 4.16 target cluster by using Operator Lifecycle Manager.

    By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the MigrationController custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster, as shown in the sketch after this list.

  3. Install the Migration Toolkit for Containers Operator on the source cluster:

    • OKD 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager.

    • OKD 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface.

  4. Configure object storage to use as a replication repository.
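
The following is a minimal sketch of the MigrationController fields referenced in step 2 for choosing where the controller and web console run. The migration_controller and migration_ui field names are assumptions based on a typical controller.yml; verify them against the manifest shipped with your MTC version:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Run the MTC controller and web console on this cluster; set these to false
  # on clusters that should act only as remote clusters.
  migration_controller: true
  migration_ui: true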

Compatibility guidelines

You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OKD version.

Definitions
legacy platform

OKD 4.5 and earlier.

modern platform

OKD 4.6 and later.

legacy operator

The MTC Operator designed for legacy platforms.

modern operator

The MTC Operator designed for modern platforms.

control cluster

The cluster that runs the MTC controller and GUI.

remote cluster

A source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters via the Velero API to drive migrations.

You must use the compatible MTC version for migrating your OKD clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC.

MTC 1.7 supports migrations from OKD 3.11 to 4.9.

MTC 1.8 only supports migrations from OKD 4.10 and later.

Table 1. MTC compatibility: Migrating from a legacy or a modern platform

  • OKD 3.11 and OKD 4.0 to 4.5

    Stable MTC version: MTC v.1.7.z

    Installation: Install the legacy MTC v.1.7.z Operator manually with the operator.yml file.

    [IMPORTANT] This cluster cannot be the control cluster.

  • OKD 4.6 to 4.9

    Stable MTC version: MTC v.1.7.z

    Installation: Install with OLM, release channel release-v1.7.

  • OKD 4.10 or later

    Stable MTC version: MTC v.1.8.z

    Installation: Install with OLM, release channel release-v1.8.

Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when you migrate from an on-premises OKD 3.11 cluster to a modern OKD cluster in the cloud, the modern cluster might not be able to connect to the OKD 3.11 cluster.

With MTC v.1.7.z, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command.
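
For reference, a typical crane tunnel-api invocation takes the kubeconfig contexts of both clusters. The flags below are a sketch based on the crane documentation and should be verified with crane tunnel-api --help for your version:

$ crane tunnel-api [--namespace <namespace>] \
      --destination-context <destination_cluster_context> \
      --source-context <source_cluster_context>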

With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster.

Installing the Migration Toolkit for Containers Operator on OKD 4.16

You install the Migration Toolkit for Containers Operator on OKD 4.16 by using the Operator Lifecycle Manager.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges on all clusters.

  • You must create an Operator catalog from a mirror image in a local registry.

Procedure
  1. In the OKD web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.

  3. Select the Migration Toolkit for Containers Operator and click Install.

  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.

  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.

  7. Click Create.

  8. Click Workloads → Pods to verify that the MTC pods are running.
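
    If you prefer the command line, you can run the same check from a terminal; all pods in the openshift-migration namespace should report a Running status:

    $ oc get pods -n openshift-migration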

Installing the legacy Migration Toolkit for Containers Operator on OKD 4.2 to 4.5

You can install the legacy Migration Toolkit for Containers Operator manually on OKD versions 4.2 to 4.5.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges on all clusters.

  • You must have access to registry.redhat.io.

  • You must have podman installed.

  • You must have a Linux workstation with network access in order to download files from registry.redhat.io.

  • You must create a mirror image of the Operator catalog.

  • You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OKD 4.16.

Procedure
  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ podman login registry.redhat.io
  2. Download the operator.yml file by entering the following command:

    $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./
  3. Download the controller.yml file by entering the following command:

    $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./
  4. Obtain the Operator image mapping by running the following command:

    $ grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc

    The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image.

    Example output
    registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator
  5. Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file:

    containers:
      - name: ansible
        image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
    ...
      - name: operator
        image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
    ...
        env:
        - name: REGISTRY
          value: <registry.apps.example.com> (2)
    1 Specify your mirror registry and the sha256 value of the Operator image.
    2 Specify your mirror registry.
  6. Log in to your OKD source cluster.

  7. Create the Migration Toolkit for Containers Operator object:

    $ oc create -f operator.yml
    Example output
    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
    1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OKD 4 that are provided in later releases.
  8. Create the MigrationController object:

    $ oc create -f controller.yml
  9. Verify that the MTC pods are running:

    $ oc get pods -n openshift-migration

Proxy configuration

For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OKD 4.2 to 4.16, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.

Direct volume migration

Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.

If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy.

TCP proxy setup for DVM

You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port

Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.

Why use a TCP proxy instead of an HTTP/HTTPS proxy?

You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.

Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.

Known issue

Migration fails with error Upgrade request required

The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands and the migration fails with the error message Upgrade request required. Workaround: Use a proxy that supports the SPDY protocol.

In addition to supporting the SPDY protocol, the proxy or firewall must also pass the Upgrade HTTP header to the API server. The client uses this header to open a WebSocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required. Workaround: Ensure that the proxy forwards the Upgrade header.
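
Because oc exec uses the same SPDY upgrade path as the migration controller, running any exec command against a pod on the remote cluster through the proxy is a quick way to check whether the Upgrade header is being stripped. The namespace and pod name below are placeholders:

$ oc exec -n openshift-migration <any_running_pod> -- ls /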

Tuning network policies for migrations

OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration.

Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.

NetworkPolicy configuration

Egress traffic from Rsync pods

You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress
Ingress traffic to Rsync pods

Similarly, the following policy allows all ingress traffic to Rsync pods in the namespace if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress

EgressNetworkPolicy configuration

The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster.

Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters.

Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: test-egress-policy
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <cidr_of_source_or_target_cluster>
    type: Allow

Choosing alternate endpoints for data transfer

By default, DVM uses an OKD route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow.

You can configure an endpoint for each cluster by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  rsync_endpoint_type: [NodePort|ClusterIP|Route]

Configuring supplemental groups for Rsync pods

When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access it:

Table 2. Supplementary groups for Rsync pods

Variable                     Type     Default   Description
src_supplemental_groups      string   Not set   Comma-separated list of supplemental groups for source Rsync pods
target_supplemental_groups   string   Not set   Comma-separated list of supplemental groups for target Rsync pods

Example usage

The MigrationController CR can be updated to set values for these supplemental groups:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"

Configuring proxies

Prerequisites
  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure
  1. Get the MigrationController CR manifest:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration
  2. Update the proxy parameters:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: <migration_controller>
      namespace: openshift-migration
    ...
    spec:
      stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
      noProxy: example.com (2)
    1 Stunnel proxy URL for direct volume migration.
    2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

  3. Save the manifest as migration-controller.yaml.

  4. Apply the updated manifest:

    $ oc replace -f migration-controller.yaml -n openshift-migration
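
To confirm that the new values were applied, you can read the proxy parameters back from the updated CR. This is a minimal check that assumes the CR name used in the procedure:

$ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml | grep -E 'stunnel_tcp_proxy|noProxy'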

For more information, see Configuring the cluster-wide proxy.

Running Rsync as either root or non-root

This section applies only when you are working with the OpenShift API, not the web console.

OpenShift environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged, Baseline or Restricted. Every cluster has its own default policy set.

To guarantee successful data transfer in all environments, MTC 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible.

Manually overriding default non-root operation for data transfer

Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer:

  • Configure an Rsync pod to run as root on the destination cluster for all migrations.

  • Run an Rsync pod as root on the destination cluster per migration.

In both cases, before migration you must set the enforce, audit, and warn pod security admission labels on the source side of any namespaces that run workloads with higher privileges.

To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization.
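
For example, the following is a minimal sketch of setting the three pod security admission labels on a source namespace; the privileged value is an assumption, so use the Pod Security Standard level that your workloads actually require:

$ oc label namespace <source_namespace> \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/audit=privileged \
    pod-security.kubernetes.io/warn=privileged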

Configuring the MigrationController CR as root or non-root for all migrations

By default, Rsync runs as non-root.

On the destination cluster, you can configure the MigrationController CR to run Rsync as root.

Procedure
  • Configure the MigrationController CR as follows:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      [...]
      migration_rsync_privileged: true

    This configuration will apply to all future migrations.

Configuring the MigMigration CR as root or non-root per migration

On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options:

  • As a specific user ID (UID)

  • As a specific group ID (GID)

Procedure
  1. To run Rsync as root, configure the MigMigration CR according to this example:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigMigration
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      [...]
      runAsRoot: true
  2. To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigMigration
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      [...]
      runAsUser: 10010001
      runAsGroup: 3

Configuring a replication repository

The Multicloud Object Gateway is the only supported option for a restricted network environment.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

Prerequisites

  • All clusters must have uninterrupted network access to the replication repository.

  • If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.

Retrieving Multicloud Object Gateway credentials

Although the MCG Operator is deprecated, the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system.


Uninstalling MTC and deleting resources

You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.

Deleting the velero CRDs removes Velero from the cluster.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. Delete the MigrationController custom resource (CR) on all clusters:

    $ oc delete migrationcontroller <migration_controller>
  2. Uninstall the Migration Toolkit for Containers Operator on OKD 4 by using the Operator Lifecycle Manager. A CLI sketch for this step follows the procedure.

  3. Delete cluster-scoped resources on all clusters by running the following commands:

    • migration custom resource definitions (CRDs):

      $ oc delete $(oc get crds -o name | grep 'migration.openshift.io')
    • velero CRDs:

      $ oc delete $(oc get crds -o name | grep 'velero')
    • migration cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
    • migration-operator cluster role:

      $ oc delete clusterrole migration-operator
    • velero cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'velero')
    • migration cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
    • migration-operator cluster role bindings:

      $ oc delete clusterrolebindings migration-operator
    • velero cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'velero')
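
If you uninstall the Operator from the CLI instead of the web console (step 2 above), a minimal sketch looks like the following; the Subscription and ClusterServiceVersion names vary by installation, so list them first rather than assuming specific names:

$ oc get subscriptions,clusterserviceversions -n openshift-migration
$ oc delete subscription <mtc_subscription> -n openshift-migration
$ oc delete clusterserviceversion <mtc_csv> -n openshift-migration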