Launching the CAM web console

You can launch the CAM web console that is installed on the target cluster.

Procedure
  1. Log in to the target cluster.

  2. Obtain the CAM web console URL:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}{{ println }}'
  3. Launch a browser and navigate to the CAM web console.

    If you try to access the CAM web console immediately after installing the Cluster Application Migration Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page will guide you through the process of accepting the remaining certificates.

  5. Log in with your OKD username and password.

Adding a cluster to the CAM web console

You can add a source cluster to the CAM web console.

Prerequisites for Azure

If you are using Azure snapshots to copy data:

  • You must provide the Azure resource group name when you add the source cluster.

  • The source and target clusters must be in the same Azure resource group and in the same location.

Procedure
  1. Log in to the source cluster.

  2. Obtain the service account token:

    $ oc sa get-token mig -n openshift-migration
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
  3. Log in to the CAM web console.

  4. In the Clusters section, click Add cluster.

  5. Fill in the following fields:

    • Cluster name: May contain lower-case letters (a-z) and numbers (0-9). Must not contain spaces or international characters.

    • Url: URL of the cluster’s API server, for example, https://<master1.example.com>:8443.

    • Service account token: String that you obtained from the source cluster.

    • Azure cluster: Optional. Select it if you are using Azure snapshots to copy your data.

    • Azure resource group: This field appears if Azure cluster is checked.

  6. Click Add cluster.

    The cluster appears in the Clusters section.
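Before pasting the service account token into the console, you can optionally decode its payload locally to confirm it belongs to the mig service account in the openshift-migration namespace. This is a sanity-check sketch that assumes a GNU userland (base64 -d); the token is the example output from step 2:

```shell
# Example service account token from step 2; substitute your own.
TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ'

# Extract the payload (second dot-separated segment) of the JWT.
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2)

# Convert base64url to standard base64 and restore stripped '=' padding.
PAYLOAD=$(printf '%s' "$PAYLOAD" | tr '_-' '/+')
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac

# Decode and print the claims; look for "sub":"system:serviceaccount:mig:mig".
DECODED=$(printf '%s' "$PAYLOAD" | base64 -d)
printf '%s\n' "$DECODED"
```

The sub claim identifies the service account; a token whose sub is not system:serviceaccount:mig:mig was taken from the wrong account or namespace.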

Adding a replication repository to the CAM web console

You can add an object storage bucket as a replication repository to the CAM web console.

Prerequisites
  • You must configure an object storage bucket for migrating the data.

Procedure
  1. Log in to the CAM web console.

  2. In the Replication repositories section, click Add repository.

  3. Select a Storage provider type and fill in the following fields:

    • AWS for AWS S3, MCG, and generic S3 providers:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • S3 bucket name: Specify the name of the S3 bucket you created.

      • S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.

      • S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.

      • S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.

      • S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.

      • Require SSL verification: Clear this check box if you are using a generic S3 provider.

    • GCP:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • GCP bucket name: Specify the name of the GCP bucket.

      • GCP credential JSON blob: Specify the string in the credentials-velero file.

    • Azure:

      • Replication repository name: Specify the replication repository name in the CAM web console.

      • Azure resource group: Specify the resource group of the Azure Blob storage.

      • Azure storage account name: Specify the Azure Blob storage account name.

      • Azure credentials - INI file contents: Specify the string in the credentials-velero file.

  4. Click Add repository and wait for connection validation.

  5. Click Close.

    The new repository appears in the Replication repositories section.

Changing migration plan limits for large migrations

You can change the migration plan limits for large migrations.

Changes should first be tested in your environment to avoid a failed migration.

A single migration plan has the following default limits:

  • 10 namespaces

    If this limit is exceeded, the CAM web console displays a Namespace limit exceeded error and you cannot create a migration plan.

  • 100 Pods

    If the Pod limit is exceeded, the CAM web console displays a warning message similar to the following example: Plan has been validated with warning condition(s). See warning message. Pod limit: 100 exceeded, found: 104.

  • 100 persistent volumes

    If the persistent volume limit is exceeded, the CAM web console displays a similar warning message.
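The warning behavior described above amounts to a simple comparison of a counted resource against its limit. The following is an illustrative sketch that reproduces the documented message format; it is not the migration controller's actual validation code:

```shell
# Sketch of the limit check behind the CAM web console warnings.
# Limits and message wording follow the defaults documented above.
check_limit() {
  resource=$1 limit=$2 found=$3
  if [ "$found" -gt "$limit" ]; then
    echo "${resource} limit: ${limit} exceeded, found: ${found}."
  fi
}

check_limit "Pod" 100 104        # exceeds the default Pod limit, prints a warning
check_limit "Namespace" 10 8     # within the limit, prints nothing
```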

Procedure
  1. Edit the Migration controller CR:

    $ oc get migrationcontroller -n openshift-migration
    NAME                   AGE
    migration-controller   5d19h
    
    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    [...]
    migration_controller: true
    
    # This configuration is loaded into mig-controller, and should be set on the
    # cluster where `migration_controller: true`
    mig_pv_limit: 100
    mig_pod_limit: 100
    mig_namespace_limit: 10
    [...]
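As an alternative to editing the CR interactively, the same parameters can be applied with a merge patch. This is a sketch: the parameter names come from the CR fragment above, but verify where they sit in your MigrationController CR (for example, under .spec) with oc get migrationcontroller -o yaml before patching:

```shell
# Build a merge patch raising the default migration plan limits.
# The nesting under "spec" is an assumption; confirm it against your CR.
PATCH='{"spec":{"mig_namespace_limit":20,"mig_pod_limit":200,"mig_pv_limit":200}}'
echo "$PATCH"

# Apply it on the cluster where migration_controller is true:
# oc patch migrationcontroller migration-controller -n openshift-migration \
#   --type=merge -p "$PATCH"
```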

Creating a migration plan in the CAM web console

You can create a migration plan in the CAM web console.

Prerequisites
  • The CAM web console must contain the following:

    • Source cluster

    • Target cluster, which is added automatically during the CAM tool installation

    • Replication repository

  • If you want to copy your data by using snapshots, the source and target clusters must be running on the same cloud provider (AWS, GCP, or Azure) and in the same region.

Procedure
  1. Log in to the CAM web console.

  2. In the Plans section, click Add plan.

  3. Enter the Plan name and click Next.

    The Plan name can contain up to 253 lower-case alphanumeric characters (a-z, 0-9). It must not contain spaces or underscores (_).

  4. Select a Source cluster.

  5. Select a Target cluster.

  6. Select a Replication repository.

  7. Select the projects to be migrated and click Next.

  8. Select Copy or Move for the PVs:

    • Copy copies the data in a source cluster’s PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.

    • Move unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.

  9. Click Next.

  10. Select a Copy method for the PVs:

    • Snapshot backs up and restores the disk using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem.

      The storage and clusters must be in the same region and the storage class must be compatible.

    • Filesystem copies the data files from the source disk to a newly created target disk.

  11. Select a Storage class for the PVs.

    If you selected the Filesystem copy method, you can change the storage class during migration, for example, from Red Hat Gluster Storage or NFS storage to Red Hat Ceph Storage.

  12. Click Finish.

  13. Click Close.

    The migration plan appears in the Plans section.
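The Plan name rules from step 3 (up to 253 lower-case alphanumeric characters, no spaces or underscores) can be checked locally before you fill in the form. This is an illustrative sketch of the documented constraints; the console performs its own validation:

```shell
# Validate a plan name against the rules stated in step 3:
# 1-253 characters, lower-case a-z and 0-9 only.
valid_plan_name() {
  name=$1
  [ ${#name} -ge 1 ] && [ ${#name} -le 253 ] || return 1
  case "$name" in
    *[!a-z0-9]*) return 1 ;;   # reject spaces, underscores, upper case, etc.
  esac
  return 0
}

valid_plan_name "myplan01" && echo "myplan01: valid"
valid_plan_name "My_Plan" || echo "My_Plan: invalid"
```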

Running a migration plan in the CAM web console

You can stage or migrate applications and data with the migration plan you created in the CAM web console.

Prerequisites

The CAM web console must contain the following:

  • Source cluster

  • Target cluster, which is added automatically during the CAM tool installation

  • Replication repository

  • Valid migration plan

Procedure
  1. Log in to the CAM web console on the target cluster.

  2. Select a migration plan.

  3. Click Stage to copy data from the source cluster to the target cluster without stopping the application.

    You can run Stage multiple times to reduce the actual migration time.

  4. When you are ready to migrate the application workload, click Migrate.

    Migrate stops the application workload on the source cluster and recreates its resources on the target cluster.

  5. Optionally, in the Migrate window, you can select Do not stop applications on the source cluster during migration.

  6. Click Migrate.

  7. When the migration is complete, verify that the application migrated successfully in the OKD web console:

    1. Click Home → Projects.

    2. Click the migrated project to view its status.

    3. In the Routes section, click Location to verify that the application is functioning, if applicable.

    4. Click Workloads → Pods to verify that the Pods are running in the migrated namespace.

    5. Click Storage → Persistent Volumes to verify that the migrated persistent volume is correctly provisioned.