
OKD Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OKD networking and its ecosystem.

You cannot run OKD Virtualization on a single-stack IPv6 cluster.

The following figure illustrates the typical network setup of OKD Virtualization. Other configurations are also possible.

Figure 1. OKD Virtualization networking overview

  • Pods and VMs run on the same network infrastructure, which allows you to easily connect your containerized and virtualized workloads.

  • You can connect VMs to the default pod network and to any number of secondary networks.

  • The default pod network provides connectivity between all its members, service abstraction, IP management, microsegmentation, and other functionality.

  • Multus is a "meta" CNI plugin that enables a pod or virtual machine to connect to additional network interfaces by using other compatible CNI plugins.

  • The default pod network is overlay-based, tunneled through the underlying machine network.

  • The machine network can be defined over a selected set of network interface controllers (NICs).

  • Secondary VM networks are typically bridged directly to a physical network, with or without VLAN encapsulation.

  • Secondary VM networks can be defined on a dedicated set of NICs, as shown in Figure 1, or they can use the machine network.

OKD Virtualization networking glossary

The following terms are used throughout OKD Virtualization documentation:

Container Network Interface (CNI)

A Cloud Native Computing Foundation project, focused on container network connectivity. OKD Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.

Multus

A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.

Custom resource definition (CRD)

A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.

Network attachment definition (NAD)

A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.

Node network configuration policy (NNCP)

A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

Using the default pod network

Connecting a virtual machine to the default pod network

Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification.
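For example, the default interface appears in the VM specification as follows. This is a minimal sketch; the VM name is illustrative, and the masquerade binding shown is the typical choice for the default pod network:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm              # illustrative name
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - name: default
                  masquerade: {}    # NAT (masquerade) binding to the pod network
          networks:
            - name: default
              pod: {}               # the default internal pod network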

Exposing a virtual machine as a service

You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OKD web console or the CLI.
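A minimal Service sketch is shown below, assuming the VM template carries the label app: example-vm and that a load balancer such as MetalLB serves type LoadBalancer on your cluster; all names and ports are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-vm-ssh
    spec:
      type: LoadBalancer            # use ClusterIP to expose the VM inside the cluster only
      selector:
        app: example-vm             # must match a label on the VM pod template
      ports:
        - protocol: TCP
          port: 22                  # port exposed by the service
          targetPort: 22            # port the VM listens on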

Configuring VM secondary network interfaces

You can connect a virtual machine to a secondary network by using the Linux bridge, SR-IOV, or OVN-Kubernetes CNI plugins. You can list multiple secondary networks and interfaces in the VM specification. You do not need to specify the primary pod network in the VM specification when connecting to a secondary network interface.

Connecting a virtual machine to a Linux bridge network

Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bondings for your secondary networks.

You can create a Linux bridge network and attach a VM to the network by performing the following steps (a combined sketch follows the list):

  1. Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy manifest.

  2. Configure a Linux bridge network by creating a NetworkAttachmentDefinition manifest.

  3. Connect the VM to the Linux bridge network by including the network details in the VM configuration.
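The following combined sketch illustrates the three steps, assuming a spare node NIC named eth1; the policy, bridge, and network names are illustrative, and the CNI configuration follows the upstream bridge plugin:

    # 1. Bridge device on the nodes
    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy
    spec:
      desiredState:
        interfaces:
          - name: br1
            type: linux-bridge
            state: up
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: eth1        # node NIC attached to the bridge
    ---
    # 2. Network attachment definition that points at the bridge
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "bridge-network",
          "type": "bridge",
          "bridge": "br1"
        }
    ---
    # 3. Fragment of the VM specification (under spec.template.spec)
    domain:
      devices:
        interfaces:
          - name: bridge-net
            bridge: {}              # bridge binding to the secondary network
    networks:
      - name: bridge-net
        multus:
          networkName: bridge-network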

Connecting a virtual machine to an SR-IOV network

You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OKD cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency.

You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments.

You can connect a VM to an SR-IOV network by performing the following steps (a sketch follows the list):

  1. Configure an SR-IOV network device by creating an SriovNetworkNodePolicy object.

  2. Configure an SR-IOV network by creating an SriovNetwork object.

  3. Connect the VM to the SR-IOV network by including the network details in the VM configuration.
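The following sketch illustrates the steps, assuming the SR-IOV Network Operator runs in the openshift-sriov-network-operator namespace and the nodes expose a physical function named ens1f0; all names and counts are illustrative:

    # 1. Partition the SR-IOV device into virtual functions
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: sriov-policy
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: sriovnic        # resource name referenced by the SriovNetwork
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      numVfs: 4                     # number of virtual functions to create
      nicSelector:
        pfNames: ["ens1f0"]         # physical function to partition
      deviceType: vfio-pci          # userspace driver, typical for VM passthrough
    ---
    # 2. Define the SR-IOV network; a NAD is generated in networkNamespace
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: sriov-net
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: sriovnic        # must match the node policy above
      networkNamespace: default     # namespace that receives the generated NAD
    ---
    # 3. Fragment of the VM specification (under spec.template.spec)
    domain:
      devices:
        interfaces:
          - name: sriov-net
            sriov: {}               # SR-IOV binding
    networks:
      - name: sriov-net
        multus:
          networkName: default/sriov-net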

Connecting a virtual machine to an OVN-Kubernetes secondary network

You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. OKD Virtualization supports the layer 2 and localnet topologies for OVN-Kubernetes.

  • A layer 2 topology connects workloads through a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.

  • A localnet topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster, but it requires additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.

To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps (a sketch follows the list):

  1. Configure an OVN-Kubernetes secondary network by creating a network attachment definition (NAD).

    For localnet topology, you must configure an OVS bridge by creating a NodeNetworkConfigurationPolicy object before creating the NAD.

  2. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification.
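The following sketch shows a layer 2 network attachment definition and the matching VM fragment; the namespace and network names are illustrative:

    # 1. Layer 2 overlay network
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: l2-network
      namespace: my-namespace
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "my-namespace-l2-network",
          "type": "ovn-k8s-cni-overlay",
          "topology": "layer2",
          "netAttachDefName": "my-namespace/l2-network"
        }
    ---
    # 2. Fragment of the VM specification (under spec.template.spec)
    domain:
      devices:
        interfaces:
          - name: secondary
            bridge: {}              # bridge binding is used for OVN-Kubernetes secondary networks
    networks:
      - name: secondary
        multus:
          networkName: my-namespace/l2-network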

Comparing Linux bridge CNI and OVN-Kubernetes localnet topology

The following table provides a comparison of features available when using the Linux bridge CNI versus the localnet topology of the OVN-Kubernetes plugin:

Table 1. Linux bridge CNI compared to an OVN-Kubernetes localnet topology

    Feature                                        Linux bridge CNI         OVN-Kubernetes localnet
    Layer 2 access to the underlay native network  Only on secondary NICs   Yes
    Layer 2 access to underlay VLANs               Yes                      Yes
    Network policies                               No                       Yes
    Managed IP pools                               No                       No
    MAC spoof filtering                            Yes                      Yes

Hot plugging secondary network interfaces

You can add or remove secondary network interfaces without stopping your VM. OKD Virtualization supports hot plugging and hot unplugging for Linux bridge interfaces that use the VirtIO device driver.

Using DPDK with SR-IOV

The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks.

Configuring a dedicated network for live migration

You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
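For example, the dedicated network is selected in the HyperConverged custom resource. This is a minimal sketch, assuming a Multus network attachment definition named migration-network already exists in the Operator's namespace; the namespace shown is the usual OKD Virtualization install location, so adjust it to your cluster:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      liveMigrationConfig:
        network: migration-network  # NAD dedicated to live migration traffic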

Accessing a virtual machine by using the cluster FQDN

You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN).

Configuring and viewing IP addresses

You can configure the IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OKD web console or the command line. The network information is collected by the QEMU guest agent.
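For example, a static address can be set through the cloud-init network data of the VM; the interface name and address below are illustrative:

    # Fragment of a cloudInitNoCloud volume in the VM specification
    networkData: |
      version: 2
      ethernets:
        eth1:                       # secondary interface inside the guest
          addresses:
            - 10.10.10.14/24        # static IP provisioned at first boot

To view the addresses that the QEMU guest agent reports, you can query the instance status, for example:

    oc get vmi example-vm -o jsonpath='{.status.interfaces}'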

Integrating with OpenShift Service Mesh

Connecting a virtual machine to a service mesh

OKD Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines.
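Sidecar injection is requested with a pod annotation on the VM template; a minimal fragment might look like the following sketch. Note that service mesh works with VMs attached to the default pod network through the masquerade binding:

    # Fragment of the VM specification: the annotation is set on the pod
    # template so the mesh proxy is injected into the virt-launcher pod
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"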

Managing MAC address pools

Managing MAC address pools for network interfaces

The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots.
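KubeMacPool manages namespaces by label. As a sketch, assuming the default behavior where the pool applies cluster-wide, you can exclude a namespace as follows (the namespace name is illustrative):

    # Opt the namespace out of KubeMacPool MAC allocation
    oc label namespace my-namespace mutatevirtualmachines.kubemacpool.io=ignore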

Configuring SSH access

Configuring SSH access to virtual machines

You can configure SSH access to VMs by using the following methods (examples of the virtctl methods follow the list):

  • virtctl ssh command

    You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.

    You can add public SSH keys to Fedora 9 VMs at runtime, or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.

  • virtctl port-forward command

    You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.

  • Service

    You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.

  • Secondary network

    You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address.
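The following sketch shows the two virtctl-based methods; the namespace, user, VM, and key names are illustrative:

    # virtctl ssh: connect directly with a private key
    virtctl -n my-namespace ssh fedora@example-vm -i ~/.ssh/id_rsa

    # virtctl port-forward: add a ProxyCommand stanza to ~/.ssh/config ...
    Host vm/*
      ProxyCommand virtctl port-forward --stdio=true %h %p

    # ... then connect with plain OpenSSH
    ssh fedora@vm/example-vm.my-namespace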