
To migrate a virtual machine (VM) across OKD clusters, you must configure an OKD provider for each cluster that you are including in the migration. If the Migration Toolkit for Virtualization (MTV) is already installed on a cluster, a local provider already exists.

Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Configuring the root certificate authority for providers

You must configure an OKD provider for each cluster that you include in the migration, and each provider requires a certificate authority (CA) for its cluster. Configure the root CA for the entire cluster to avoid CA expiration, which causes the provider to fail.

Procedure
  1. Run the following command against the cluster for which you are creating the provider:

    $ oc get cm kube-root-ca.crt -o jsonpath='{.data.ca\.crt}'
  2. Copy the printed certificate.

  3. In the Migration Toolkit for Virtualization (MTV) web console, create a provider and select OpenShift Virtualization.

  4. Paste the certificate into the CA certificate field, as shown in the following example:

    -----BEGIN CERTIFICATE-----
    <CA_certificate_content>
    -----END CERTIFICATE-----
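Steps 1 and 2 above can be scripted. The following sketch assumes you are logged in to the cluster with `oc`; the output file name is illustrative:

```shell
#!/bin/sh
# Sketch: save the cluster root CA to a local file for pasting into the
# MTV provider form, then verify that the file is PEM-encoded.
if command -v oc >/dev/null 2>&1; then
    # The dot in the key "ca.crt" must be escaped in the JSONPath expression.
    oc get cm kube-root-ca.crt -o jsonpath='{.data.ca\.crt}' > cluster-root-ca.crt
fi

# Fail early if the saved file does not look like a PEM certificate.
if [ -f cluster-root-ca.crt ] && \
   ! grep -q -- '-----BEGIN CERTIFICATE-----' cluster-root-ca.crt; then
    echo "cluster-root-ca.crt does not look like a PEM certificate" >&2
    exit 1
fi
```

The PEM check catches the common mistake of saving an empty or JSON-wrapped value instead of the certificate itself.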

Creating the long-lived service account and token to use with MTV providers

When you register an OpenShift Virtualization provider in the Migration Toolkit for Virtualization (MTV) web console, you must supply credentials that allow MTV to interact with the cluster. Creating a long-lived service account and cluster role binding gives MTV persistent permissions to read and manage virtual machine resources during migration.

Procedure
  1. Save the following cluster role manifest as a YAML file:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: live-migration-role
    rules:
      - apiGroups:
          - forklift.konveyor.io
        resources:
          - '*'
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - secrets
          - namespaces
          - configmaps
          - persistentvolumes
          - persistentvolumeclaims
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - k8s.cni.cncf.io
        resources:
          - network-attachment-definitions
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - storage.k8s.io
        resources:
          - storageclasses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - kubevirt.io
        resources:
          - virtualmachines
          - virtualmachines/finalizers
          - virtualmachineinstancemigrations
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - kubevirt.io
        resources:
          - kubevirts
          - virtualmachineinstances
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - cdi.kubevirt.io
        resources:
          - datavolumes
          - datavolumes/finalizers
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - apps
        resources:
          - deployments
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
      - apiGroups:
          - instancetype.kubevirt.io
        resources:
          - virtualmachineclusterpreferences
          - virtualmachineclusterinstancetypes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - instancetype.kubevirt.io
        resources:
          - virtualmachinepreferences
          - virtualmachineinstancetypes
        verbs:
          - get
          - list
          - watch
          - create
          - update
          - patch
          - delete
  2. Create the cluster role by running the following command:

    $ oc create -f <filename>.yaml
  3. Create a service account by running the following command:

    $ oc create serviceaccount <service_account_name> -n <service_account_namespace>
  4. Create a cluster role binding that links the service account to the cluster role, by running the following command:

    $ oc create clusterrolebinding <service_account_name> --clusterrole=<cluster_role_name> --serviceaccount=<service_account_namespace>:<service_account_name>
  5. Create a secret to hold the token by saving the following manifest as a YAML file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <name_of_secret>
      namespace: <service_account_namespace>
      annotations:
        kubernetes.io/service-account.name: <service_account_name>
    type: kubernetes.io/service-account-token
  6. Apply the manifest by running the following command:

    $ oc apply -f <filename>.yaml
  7. After the secret is populated, run the following command to get the service account bearer token:

    $ TOKEN_BASE64=$(oc get secret "<name_of_secret>" -n "<service_account_namespace>" -o jsonpath='{.data.token}')
    $ TOKEN=$(echo "$TOKEN_BASE64" | base64 --decode)
    $ echo "$TOKEN"
  8. Copy the printed token.

  9. In the Migration Toolkit for Virtualization (MTV) web console, when you create a provider and select OpenShift Virtualization, paste the token into the Service account bearer token field.
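The steps above can be sketched as a single script. All resource names below are illustrative placeholders, and the cluster role manifest from step 1 is assumed to be saved locally as live-migration-role.yaml:

```shell
#!/bin/sh
# Sketch: create the cluster role, service account, binding, and
# long-lived token secret, then print the decoded bearer token.
SA_NAME=mtv-migration-sa      # illustrative service account name
SA_NS=openshift-mtv           # illustrative namespace
SECRET=mtv-migration-token    # illustrative secret name

if command -v oc >/dev/null 2>&1; then
    oc create -f live-migration-role.yaml
    oc create serviceaccount "$SA_NAME" -n "$SA_NS"
    oc create clusterrolebinding "$SA_NAME" \
        --clusterrole=live-migration-role \
        --serviceaccount="$SA_NS:$SA_NAME"

    # Request a long-lived token by creating a service-account-token secret.
    cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: $SECRET
  namespace: $SA_NS
  annotations:
    kubernetes.io/service-account.name: $SA_NAME
type: kubernetes.io/service-account-token
EOF

    # The token is stored base64-encoded in the secret; decode it before
    # pasting it into the MTV web console.
    oc get secret "$SECRET" -n "$SA_NS" -o jsonpath='{.data.token}' | base64 --decode
fi
```

Because the secret is of type `kubernetes.io/service-account-token`, the control plane populates its `token` field shortly after creation, so the final command might need to be retried once on a slow cluster.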