To connect virtual machines (VMs) to cluster networks, configure default and user-defined networking options in OKD Virtualization.
OKD Virtualization support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
Deploying OKD Virtualization on a single-stack IPv6 cluster is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The following figure illustrates the typical network setup of OKD Virtualization. Other configurations are also possible.
Pods and VMs run on the same network infrastructure, which allows you to easily connect your containerized and virtualized workloads.
You can connect VMs to the default pod network and to any number of secondary networks.
The default pod network provides connectivity between all its members, service abstraction, IP management, microsegmentation, and other functionality.
Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.
The default pod network is overlay-based, tunneled through the underlying machine network.
The machine network can be defined over a selected set of network interface controllers (NICs).
Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation. It is also possible to create virtual overlay networks for secondary networks.
Connecting VMs directly to the underlay network is not supported on Red Hat OpenShift Service on AWS, Azure, OpenShift Dedicated, Google Cloud, or Oracle® Cloud Infrastructure (OCI).
|
Connecting VMs to user-defined networks with the |
Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 1, or they can use the machine network.
The following terms are used throughout OKD Virtualization documentation.
Container Network Interface (CNI): A Cloud Native Computing Foundation project, focused on container network connectivity. OKD Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
Multus: A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD): A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
NetworkAttachmentDefinition: A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
UserDefinedNetwork: A namespace-scoped CRD introduced by the user-defined network (UDN) API that can be used to create a tenant network that isolates the tenant namespace from other namespaces.
ClusterUserDefinedNetwork: A cluster-scoped CRD introduced by the user-defined network API that cluster administrators can use to create a shared network across multiple namespaces.
NodeNetworkConfigurationPolicy: A CRD introduced by the nmstate project, describing the requested network configuration on nodes.
You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
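As a minimal sketch of such a manifest, the following NodeNetworkConfigurationPolicy creates a Linux bridge on worker nodes; the policy, bridge, and NIC names (br1, eth1) are illustrative assumptions, not values taken from this document:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy            # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # apply only to worker nodes
  desiredState:
    interfaces:
      - name: br1                  # bridge to create on each matching node
        type: linux-bridge
        state: up
        ipv4:
          enabled: false
        bridge:
          port:
            - name: eth1           # assumed physical NIC attached to the bridge
```

You would apply the policy with `oc apply -f` and can watch its rollout status with `oc get nncp`.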
To ensure your virtual machines (VMs) connect reliably by using the standard OKD networking model, configure the default pod network for cluster-wide connectivity.
Overlay networks provide a flexible, software-defined layer of connectivity on top of a physical network, enabling services like network segmentation, custom routing, and simplified management without altering the underlying hardware.
Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
You can access a virtual machine (VM) that is connected to the default internal pod network at a stable fully qualified domain name (FQDN) by using headless services.
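A headless service of this kind might look as follows; the service name, namespace, and selector label are assumptions for illustration, and the VM's `spec.subdomain` must match the service name for the FQDN to resolve:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysubdomain          # hypothetical; set spec.subdomain on the VM to match
  namespace: my-namespace
spec:
  clusterIP: None            # headless: DNS records only, no cluster IP
  selector:
    expose: fqdn             # assumed label on the VM pod template
  ports:
    - port: 22
      protocol: TCP
```

With this in place, the VM would be reachable at a name of the form `<vm-hostname>.mysubdomain.my-namespace.svc.cluster.local`.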
Configure a primary user-defined network (UDN) that supports multi-namespace connectivity to provide isolated and flexible traffic paths for your workloads.
Cluster administrators can configure a primary UserDefinedNetwork CRD to create a tenant network that isolates the tenant namespace from other namespaces without requiring network policies. Additionally, cluster administrators can use the ClusterUserDefinedNetwork CRD to create a shared OVN layer 2 network across multiple namespaces.
User-defined networks with the layer 2 overlay topology are useful for VM workloads, and a good alternative to secondary networks in environments where physical network access is limited, such as the public cloud. The layer 2 topology enables seamless migration of VMs without the need for Network Address Translation (NAT), and also provides persistent IP addresses that are preserved between reboots and during live migration.
Configure a secondary UDN with layer 2 topology to create a private isolated communication channel between a group of VMs across different nodes. A layer 2 topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes CNI plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
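A layer 2 UDN of the kind described above can be sketched as follows; the network name, namespace, and subnet are illustrative assumptions, and the exact schema can vary between versions:

```yaml
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-l2                # hypothetical network name
  namespace: my-namespace
spec:
  topology: Layer2
  layer2:
    role: Secondary           # or Primary for a tenant's primary network
    subnets:
      - "10.100.0.0/16"       # assumed subnet for the overlay
    ipam:
      lifecycle: Persistent   # preserve IPs across reboots and live migration
```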
You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OKD web console or the CLI.
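As a sketch of the load-balancing approach, the following Service exposes SSH on a VM through an external IP assigned by MetalLB; the service name and selector label are assumptions, and the label must be set on the VM pod template:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm-ssh-lb        # hypothetical service name
  namespace: my-namespace
spec:
  type: LoadBalancer     # MetalLB assigns the external IP on bare metal
  selector:
    app: my-vm           # assumed label on the VM pod template
  ports:
    - port: 22
      targetPort: 22
      protocol: TCP
```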
OKD Virtualization is integrated with Red Hat OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines on the default pod network with IPv4.
To give virtual machines (VMs) access to the internet or other physical devices, you configure the node network, define the secondary network, and attach the VM to the secondary network.
You can connect a VM to the physical network infrastructure by configuring an OVN-Kubernetes secondary user-defined network (UDN) with the localnet topology.
A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.
Cluster administrators can use the following steps to configure the localnet UDN:
Install the Kubernetes NMState Operator, which provides a state-driven network configuration across cluster nodes.
Use the NodeNetworkConfigurationPolicy custom resource (CR) to configure OVS bridges and add the appropriate bridge mappings on the nodes.
Use the ClusterUserDefinedNetwork CR from the UDN API to attach workloads to the underlay network through the OVS bridges configured in the previous step.
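The steps above might look as follows. The bridge, NIC, mapping, and network names are illustrative assumptions, and the exact field layout can vary between versions:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-policy          # hypothetical policy name
spec:
  desiredState:
    interfaces:
      - name: ovs-br1           # OVS bridge attached to a physical NIC
        type: ovs-bridge
        state: up
        bridge:
          port:
            - name: eth2        # assumed physical NIC
    ovn:
      bridge-mappings:          # map the localnet name to the OVS bridge
        - localnet: physnet1
          bridge: ovs-br1
          state: present
---
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: physnet1
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-namespace
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: physnet1   # must match the bridge mapping above
```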
Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bonding for your secondary networks. The OVN-Kubernetes localnet topology is the recommended way of connecting a VM to the underlying physical network, but OKD Virtualization also supports Linux bridge networks.
You cannot directly attach to the default machine network when using Linux bridge networks.
You can create a Linux bridge network and attach a VM to the network by performing the following steps:
Prepare the node network by creating a Linux bridge node network configuration policy (NNCP).
Define the secondary Linux bridge network by creating a network attachment definition (NAD).
Attach the VM to the Linux bridge network.
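A NAD for the second step might look like the following sketch; the network name, namespace, and bridge name are assumptions, and the bridge must already exist on the nodes (for example, created by an NNCP):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network          # hypothetical NAD name
  namespace: my-namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "bridge-network",
      "type": "bridge",
      "bridge": "br1",
      "macspoofchk": true
    }
```

The VM then references the NAD by name in its `spec.template.spec.networks` list with a `multus.networkName` entry, paired with a `bridge: {}` interface in `spec.template.spec.domain.devices.interfaces`.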
You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OKD cluster installed on bare metal or OpenStack infrastructure for applications that require high bandwidth or low latency.
You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments.
You can connect a VM to an SR-IOV network by performing the following steps:
Configure an SR-IOV physical network device by creating a SriovNetworkNodePolicy CR.
Define the SR-IOV secondary network by creating an SriovNetwork object.
Connect the VM to the SR-IOV network by including the network details in the VM configuration.
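As a sketch of the second step, the following SriovNetwork object defines the secondary network; the names, namespace, and VLAN are assumptions, and `resourceName` must match an existing SriovNetworkNodePolicy:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-net                         # hypothetical network name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic                  # matches a SriovNetworkNodePolicy
  networkNamespace: my-namespace          # namespace for the generated NAD
  vlan: 100                               # optional VLAN tag, assumed
```

The VM connects to it by declaring an `sriov: {}` interface and a matching `multus.networkName` entry in its network list.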
The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks by performing the following steps:
Configure the node hardware.
Configure the VM namespace for DPDK.
Configure the VM and guest OS to run DPDK applications.
The following table provides a comparison of features available when using the Linux bridge CNI compared to the localnet topology for an OVN-Kubernetes plugin.
| Feature | Available on Linux bridge CNI | Available on OVN-Kubernetes localnet |
|---|---|---|
| Layer 2 access to the underlay native network | Only on secondary network interface controllers (NICs) | Yes |
| Layer 2 access to underlay VLANs | Yes | Yes |
| Layer 2 trunk access | Yes | No |
| Network policies | No | Yes |
| MAC spoof filtering | Yes | Yes (always on) |
Manage virtual machine (VM) network configuration to scale connectivity without incurring application downtime, troubleshoot network latency, define and automate management of MAC address pools, configure IP addresses, and isolate live migration traffic.
You can add or remove secondary network interfaces without stopping your VM. OKD Virtualization supports hot plugging and hot unplugging for secondary interfaces that use bridge binding and the VirtIO device driver. OKD Virtualization also supports hot plugging secondary interfaces that use the SR-IOV binding.
You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). To connect to a VM by using its external FQDN, you must configure the DNS server, retrieve the cluster FQDN, and then connect to the VM by using the ssh command.
You can manage the link state of a primary or secondary VM network interface by using the OKD web console or the command line. By specifying the link state, you can logically connect or disconnect the virtual network interface controller (vNIC) from a network.
OKD Virtualization does not support link state management for Single Root I/O Virtualization (SR-IOV) secondary network interfaces, and their link states are not reported.
You can configure the IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OKD web console or the command line. The network information is collected by the QEMU guest agent.
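A cloud-init volume of this kind can be sketched as follows inside the VM spec; the volume name, guest interface name, and address are assumptions for illustration:

```yaml
# Entry in spec.template.spec.volumes of a VirtualMachine
- name: cloudinitdisk
  cloudInitNoCloud:
    networkData: |
      version: 2
      ethernets:
        eth1:                    # assumed secondary interface inside the guest
          addresses:
            - 10.10.10.14/24     # assumed static IP on the secondary network
```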
The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
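As a hedged sketch, the dedicated network is typically defined as a NetworkAttachmentDefinition in the OKD Virtualization namespace; the NAD name, NIC, and IP range here are assumptions:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-network        # hypothetical NAD name
  namespace: openshift-cnv
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "10.200.5.0/24"
      }
    }
```

The NAD is then referenced by name from the live migration configuration of the HyperConverged custom resource so that migration traffic uses this network instead of the tenant network.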
You can use SSH to securely access your virtual machines (VMs) from the command line.
To set up your SSH configuration, use one of the following methods:
virtctl ssh command: You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.
You can add public SSH keys to Fedora VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.
virtctl port-forward command: You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.
Service: You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
Secondary network: You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
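The port-forward method can be sketched as the following `.ssh/config` fragment; the `vm/` host pattern and the VM and namespace names are placeholders:

```text
# ~/.ssh/config snippet: route hosts of the form vm/<name>.<namespace>
# through virtctl port-forward over standard input/output
Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
```

You could then connect with, for example, `ssh user@vm/myvm.my-namespace`, or use the direct method instead: `virtctl -n my-namespace ssh user@myvm -i ~/.ssh/id_rsa`.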
Connect a virtual machine to a custom primary overlay network
Connect a VM to the physical network by using an Open vSwitch bridge
Connect a virtual machine to the physical network by using a Linux bridge
Connect a VM to the physical network by using an SR-IOV device
Connect a VM to the physical network by using DPDK drivers with SR-IOV hardware