You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
Traffic on the virtual Network Interface Cards (vNICs) that are attached to the default pod network is interrupted during live migration.
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
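For example, IPv4 DHCP can be enabled in the guest through cloud-init network data. The following is a minimal sketch of that mechanism, which the dual-stack example later in this section also uses; the interface name eth0 is an assumption and might differ in your guest image:
volumes:
  - cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth0:
            dhcp4: true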
Edit the interfaces spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {} (1)
          ports: (2)
            - port: 80
  networks:
    - name: default
      pod: {}
1 Connect using masquerade mode.
2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
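After the virtual machine starts, you can check which address its interface reports; with masquerade binding this is typically the IP address of the virt-launcher pod rather than the guest's internal DHCP address. A minimal sketch, assuming the virtual machine instance has the same name as the virtual machine:
$ oc get vmi <vm-name> -o jsonpath="{.status.interfaces[*].ipAddress}"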
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
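If the default address block does not fit your network, you can override it on the pod network entry of the configuration. The following is a minimal sketch; the value shown is the documented default, which you can replace with a block that suits your environment:
networks:
  - name: default
    pod:
      vmIPv6NetworkCIDR: "fd10:0:2::2/120" # default value; edit to match your network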
The OKD cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.
In a new virtual machine configuration, include an interface with masquerade mode and configure the IPv6 address and default gateway by using cloud-init.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
# ...
          interfaces:
            - name: default
              masquerade: {} (1)
              ports:
                - port: 80 (2)
      networks:
        - name: default
          pod: {}
      volumes:
        - cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth0:
                  dhcp4: true
                  addresses: [ fd10:0:2::2/120 ] (3)
                  gateway6: fd10:0:2::1 (4)
1 Connect using masquerade mode.
2 Allows incoming traffic on port 80 to the virtual machine.
3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
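You can also confirm the address from inside the guest. A minimal sketch, assuming the virtctl client is installed and the guest's primary interface is named eth0:
$ virtctl console <vmi-name>
Then, inside the guest:
$ ip -6 addr show dev eth0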
When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.
The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:
libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
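To confirm that a Linux guest picked up the cluster MTU, you can compare the cluster-wide value with what the guest interface reports. A minimal sketch; the clusterNetworkMTU status field and the eth0 interface name are assumptions that might differ in your environment:
$ oc get network.config.openshift.io cluster -o jsonpath='{.status.clusterNetworkMTU}'
Inside the guest:
$ ip link show eth0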
For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using