Cross-cluster live migration requires that the clusters be connected in the same network. Specifically, virt-handler pods must be able to communicate.
Cross-cluster live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following object describes the configuration parameters for the Bridge CNI plugin:
| Field | Type | Description |
|---|---|---|
| cniVersion | string | The CNI specification version. The 0.3.1 value is required. |
| name | string | The value for the name parameter that you provided previously for the CNO configuration. |
| type | string | The name of the CNI plugin to configure: bridge. |
| ipam | object | The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. |
| bridge | string | Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0. |
| ipMasq | boolean | Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false. |
| isGateway | boolean | Optional: Set to true to assign an IP address to the bridge. The default value is false. |
| isDefaultGateway | boolean | Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, isGateway is also set to true. |
| forceAddress | boolean | Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false. |
| hairpinMode | boolean | Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false. |
| promiscMode | boolean | Optional: Set to true to enable promiscuous mode on the bridge. The default value is false. |
| vlan | string | Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. |
| preserveDefaultVlan | string | Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. |
| vlanTrunk | list | Optional: Assign a VLAN trunk tag. The default value is none. |
| mtu | string | Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. |
| enabledad | boolean | Optional: Enables duplicate address detection for the container side veth. The default value is false. |
| macspoofchk | boolean | Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false. |
The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.
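For example, a minimal configuration object for a bridge attachment might look like the following sketch. The network name bridge-network, the bridge name br1, the VLAN tag 100, and the IPAM range are illustrative values, and the example assumes that the whereabouts IPAM plugin is available, as in the live migration example later in this section:

{
  "cniVersion": "0.3.1",
  "name": "bridge-network",
  "type": "bridge",
  "bridge": "br1",
  "vlan": 100,
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.0/24"
  }
}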
To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command:

$ bridge vlan add vid VLAN_ID dev DEV
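For example, to allow VLAN 100 on an uplink interface named eno1, where both values are illustrative:

$ bridge vlan add vid 100 dev eno1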
To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).
You installed the OpenShift CLI (oc).
You logged in to the cluster as a user with the cluster-admin role.
Each node has at least two Network Interface Cards (NICs).
The NICs for live migration are connected to the same VLAN.
Create a NetworkAttachmentDefinition manifest according to the following example:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: my-secondary-network (1)
  namespace: kubevirt-hyperconverged
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1", (2)
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts", (3)
      "range": "10.200.5.0/24" (4)
    }
  }'
| 1 | Specify the name of the NetworkAttachmentDefinition object. |
| 2 | Specify the name of the NIC to be used for live migration. |
| 3 | Specify the name of the CNI plugin that provides the network for the NAD. |
| 4 | Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. |
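If you saved the manifest to a file, you can create the NAD by applying the file with the CLI. The file name in this example is illustrative:

$ oc apply -f my-secondary-network.yaml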
Open the HyperConverged CR in your default editor by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:
HyperConverged manifest:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    network: <network> (1)
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150
# ...
| 1 | Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. |
Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.
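Optionally, you can watch the virt-handler pods as they restart. The following command assumes the standard kubevirt.io=virt-handler pod label:

$ oc get pods -n kubevirt-hyperconverged -l kubevirt.io=virt-handler -w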
When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.
$ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
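If the migration used the dedicated network, the reported target address falls within the IP address range that you configured in the NAD, for example (illustrative output):

10.200.5.71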