You can migrate application workloads by adding your clusters and replication repository to the CAM web console. Then, you can create and run a migration plan.

If a cluster or replication repository is secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.

Creating a CA certificate bundle file

If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: Certificate signed by unknown authority.

You can create a custom CA certificate bundle file and upload it in the CAM web console when you add a cluster or a replication repository.

Procedure

Download a CA certificate from a remote endpoint and save it as a CA bundle file:

$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ (1)
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> (2)
1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443.
2 Specify the name of the CA bundle file.
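
For example, to capture the certificate of a cluster API server at api.my-cluster.example.com:6443 (a hypothetical endpoint) and confirm that the resulting bundle contains a certificate, you might run the following commands. The second command is only an optional sanity check that prints the certificate's subject, issuer, and validity dates:

$ echo -n | openssl s_client -connect api.my-cluster.example.com:6443 \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ca_bundle.cert
$ openssl x509 -in ca_bundle.cert -noout -subject -issuer -dates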

Configuring a migration plan

You can configure a migration plan to suit your needs by increasing the limits on the number of migrated objects or by excluding resources from the migration.

Increasing Migration Controller limits for large migrations

You can increase the Migration Controller limits on migration objects and container resources for large migrations.

You must test these changes before you perform a migration in a production environment.

Procedure
  1. Edit the Migration Controller manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" (1)
    mig_controller_limits_memory: "10Gi" (2)
    ...
    mig_controller_requests_cpu: "100m" (3)
    mig_controller_requests_memory: "350Mi" (4)
    ...
    mig_pv_limit: 100 (5)
    mig_pod_limit: 100 (6)
    mig_namespace_limit: 10 (7)
    ...
    1 Specifies the number of CPUs available to the Migration Controller.
    2 Specifies the amount of memory available to the Migration Controller.
    3 Specifies the number of CPU units available for Migration Controller requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4 Specifies the amount of memory available for Migration Controller requests.
    5 Specifies the number of PVs that can be migrated.
    6 Specifies the number of Pods that can be migrated.
    7 Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the Migration Controller limits, the CAM console displays a warning message when you save the migration plan.
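
Optionally, you can confirm from the command line that the operator propagated the new resource settings to the migration-controller Deployment. This is a quick sanity check rather than part of the documented procedure, and it assumes the Deployment name shown in the verification steps later in this section:

$ oc get deployment migration-controller -n openshift-migration -o yaml | grep -A6 'resources:'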

Excluding resources from a migration plan

You can exclude resources, for example, imagestreams, persistent volumes (PVs), or subscriptions, from a migration plan.

Procedure
  1. Edit the Migration Controller CR:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the spec section by adding a parameter that excludes specific resources or, if a resource does not have its own exclusion parameter, by adding the resource to the excluded_resources list:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true (1)
      disable_pv_migration: true (2)
      ...
      excluded_resources: (3)
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
    1 Add disable_image_migration: true to exclude imagestreams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the Migration Controller Pod restarts.
    2 Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the Migration Controller Pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3 You can add OKD resources to the excluded_resources list. Do not delete any of the default excluded resources. These resources are known to be problematic for migration.
  3. Wait two minutes for the Migration Controller Pod to restart so that the changes are applied.

  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources, as shown in the following example:

        - name: EXCLUDED_RESOURCES
          value:
          imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
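
As an optional check for step 3, you can confirm that the Migration Controller Pod restarted with the new configuration by listing the Pods and checking that the Pod's age has reset:

$ oc get pods -n openshift-migration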

Adding a cluster to the CAM web console

You can add a cluster to the CAM web console.

Prerequisites

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.

  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure
  1. Log in to the cluster.

  2. Obtain the service account token:

    $ oc sa get-token migration-controller -n openshift-migration
    Example output
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
  3. Log in to the CAM web console.

  4. In the Clusters section, click Add cluster.

  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.

    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.

    • Service account token: The token that you obtained in step 2.

    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.

    • Azure resource group: This field appears if Azure cluster is checked.

    • If you use a custom CA bundle, click Browse and browse to the CA bundle file.

  6. Click Add cluster.

    The cluster appears in the Clusters section.
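
The CAM web console stores each cluster you add as a MigCluster custom resource in the openshift-migration namespace. As an optional CLI-side check, you can list the registered clusters; the exact status columns depend on your CAM version:

$ oc get migcluster -n openshift-migration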

Adding a replication repository to the CAM web console

You can add an object storage bucket as a replication repository to the CAM web console.

Prerequisites
  • You must configure an object storage bucket for migrating the data.

Procedure
  1. Log in to the CAM web console.

  2. In the Replication repositories section, click Add repository.

  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • S3 bucket name: Specify the name of the S3 bucket you created.

      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.

      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.

      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.

      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.

      • Require SSL verification: Clear this check box if you are using a generic S3 provider.

      • If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.

    • GCP:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • GCP bucket name: Specify the name of the GCP bucket.

      • GCP credential JSON blob: Specify the string in the credentials-velero file.

    • Azure:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • Azure resource group: Specify the resource group of the Azure Blob storage.

      • Azure storage account name: Specify the Azure Blob storage account name.

      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.

  4. Click Add repository and wait for connection validation.

  5. Click Close.

    The new repository appears in the Replication repositories section.
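
The CAM web console stores the repository as a MigStorage custom resource in the openshift-migration namespace. As an optional CLI-side check after the connection is validated, you can list the repositories; the exact status columns depend on your CAM version:

$ oc get migstorage -n openshift-migration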

Creating a migration plan in the CAM web console

You can create a migration plan in the CAM web console.

Prerequisites
  • The CAM web console must contain the following:

    • Source cluster

    • Target cluster, which is added automatically during the CAM tool installation

    • Replication repository

  • The source and target clusters must have network access to each other and to the replication repository.

  • If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and in the same region.

Procedure
  1. Log in to the CAM web console.

  2. In the Plans section, click Add plan.

  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.

  5. Select a Target cluster.

  6. Select a Replication repository.

  7. Select the projects to be migrated and click Next.

  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

      Optional: You can verify data copied with the filesystem method by selecting Verify copy. This option, which generates a checksum for each source file and checks it after restoration, significantly reduces performance.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

  9. Click Next.

  10. Select a Copy method for the PVs:

    • Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.

      The storage and clusters must be in the same region and the storage class must be compatible.

    • Filesystem copies the data files from the source disk to a newly created target disk.

  11. Select a Storage class for the PVs.

    If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  12. Click Next.

  13. If you want to add a migration hook, click Add Hook and perform the following steps:

    1. Specify the name of the hook.

    2. Select Ansible playbook to use your own playbook, or select Custom container image for a hook written in another language. A minimal example playbook is shown after this procedure.

    3. Click Browse to upload the playbook.

    4. Optional: If you are not using the default Ansible runtime image, specify your custom Ansible image.

    5. Specify the cluster on which you want the hook to run.

    6. Specify the service account name.

    7. Specify the namespace.

    8. Select the migration step at which you want the hook to run:

      • PreBackup: Before backup tasks are started on the source cluster

      • PostBackup: After backup tasks are complete on the source cluster

      • PreRestore: Before restore tasks are started on the target cluster

      • PostRestore: After restore tasks are complete on the target cluster

  14. Click Add.

    You can add up to four hooks to a migration plan, assigning each hook to a different migration step.

  15. Click Finish.

  16. Click Close.

    The migration plan appears in the Plans section.
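
If you select the Ansible playbook hook option, the playbook can be as simple or as involved as your migration requires. The following is a deliberately minimal sketch of a PreBackup hook playbook, assuming only core Ansible modules are available in the default runtime image; it prints a marker so that the hook run is visible in the hook Pod logs:

# Minimal illustrative hook playbook; adapt the tasks to your workload.
- name: Example PreBackup hook
  hosts: localhost
  gather_facts: false
  tasks:
  - name: Print a marker that appears in the hook Pod logs
    debug:
      msg: "PreBackup hook is running"

The CAM web console stores the plan as a MigPlan custom resource in the openshift-migration namespace. As an optional CLI-side check, you can confirm that the plan validated successfully by inspecting its conditions; replace <plan_name> with the name of your plan:

$ oc get migplan -n openshift-migration
$ oc describe migplan <plan_name> -n openshift-migration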

Running a migration plan in the CAM web console

You can stage or migrate applications and data with the migration plan you created in the CAM web console.

Prerequisites

The CAM web console must contain the following:

  • Source cluster

  • Target cluster

  • Replication repository

  • Valid migration plan

Procedure
  1. Log in to the source cluster.

  2. Delete old images:

    $ oc adm prune images
  3. Log in to the CAM web console.

  4. Select a migration plan.

  5. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  6. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  7. Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.

  8. Click Migrate.

  9. Optional: To stop a migration in progress, click the Options menu and select Cancel.

  10. When the migration is complete, verify that the application migrated successfully in the OKD web console:

    1. Click Home → Projects.

    2. Click the migrated project to view its status.

    3. In the Routes section, click Location to verify that the application is functioning, if applicable.

    4. Click Workloads → Pods to verify that the Pods are running in the migrated namespace.

    5. Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.
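
If you prefer the command line to the web console for these checks, the same information is available with standard oc commands. This is a sketch; replace <namespace> with the migrated project. The first command assumes that the CAM custom resources, including the MigMigration resources that track migration progress, live in the openshift-migration namespace:

$ oc get migmigration -n openshift-migration
$ oc get pods -n <namespace>
$ oc get pvc -n <namespace>
$ oc get routes -n <namespace>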