Verifies that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
The OKD Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
About the OKD Virtualization cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
Verify that the checkup image is from a trustworthy source before applying it.
Review the checkup permissions before creating the Role and RoleBinding objects.
Running a latency checkup
You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.
You run a latency checkup by performing the following steps:
Create a service account, role, and role binding to provide cluster access permissions to the latency checkup.
Create a config map to provide the input to run the checkup and to store the results.
Create a job to run the checkup.
Review the results in the config map.
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
When you are finished, delete the latency checkup resources.
Prerequisites
You have installed the OpenShift CLI (oc).
The cluster has at least two worker nodes.
You configured a network attachment definition for a namespace.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:
<target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
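A minimal sketch of such a manifest follows. The object names and the exact RBAC rules are assumptions based on the upstream kiagnose kubevirt-vm-latency checkup project, not a verified manifest; review the permissions before applying it, as noted above.

```yaml
# Sketch only: names and rules are assumptions, not the verified manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
  # The checkup creates two VMs, measures latency between them, and cleans up.
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachineinstances"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachineinstances/console"]
    verbs: ["get"]
  - apiGroups: ["k8s.cni.cncf.io"]
    resources: ["network-attachment-definitions"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
  - kind: ServiceAccount
    name: vm-latency-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-vm-latency-checker
```

In practice the checkup also needs a role that grants get and update access to its input config map; consult the checkup's documentation for the complete rule set.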
Create a ConfigMap manifest that contains the input parameters for the checkup:
The name of the NetworkAttachmentDefinition object.
Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
Optional: The duration of the latency check, in seconds.
Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
Optional: When specified, latency is measured from the source node to this node.
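Assembled from the parameter descriptions above, an input config map might look like the following sketch. The spec.param key names other than maxDesiredLatencyMilliseconds and targetNode (which appear in the surrounding text) are assumptions; verify them against the checkup's documentation.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  namespace: <target_namespace>
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionName: <nad_name>   # name of the NetworkAttachmentDefinition
  spec.param.maxDesiredLatencyMilliseconds: "10"           # checkup fails above this latency
  spec.param.sampleDurationSeconds: "5"                    # duration of the latency check
  spec.param.sourceNode: <node_1>                          # optional source node
  spec.param.targetNode: <node_2>                          # required if sourceNode is set
```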
Apply the config map manifest in the target namespace:
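For example, with oc apply -n <target_namespace> -f <latency_config_map>.yaml. The job that runs the checkup, the next step in the procedure, might then look like the following sketch; the container image location and environment variable names are assumptions based on the upstream kiagnose project.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
  namespace: <target_namespace>
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
        - name: vm-latency-checkup
          image: quay.io/kiagnose/kubevirt-vm-latency-checkup:main  # assumed image location
          env:
            - name: CONFIGMAP_NAMESPACE   # points the checkup at its input config map
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: kubevirt-vm-latency-checkup-config
```

The checkup writes its results back into the same config map, which is why the job needs no output volume.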
Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.
$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
Optional: If you do not plan to run another checkup, delete the roles manifest:
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
DPDK checkup
Use a predefined checkup to verify that your OKD cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
Create a service account, role, and role bindings for the DPDK checkup.
Create a config map to provide the input to run the checkup and to store the results.
Create a job to run the checkup.
Review the results in the config map.
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
When you are finished, delete the DPDK checkup resources.
Prerequisites
You have installed the OpenShift CLI (oc).
The cluster is configured to run DPDK applications.
The project is configured to run DPDK applications.
Procedure
Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:
Example service account, role, and role binding manifest file
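A minimal sketch of such a manifest follows. The object names and RBAC rules are assumptions based on the upstream kubevirt-dpdk-checkup project, not a verified manifest; review the permissions before applying it.

```yaml
# Sketch only: names and rules are assumptions, not the verified manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
  # The checkup reads its input from, and writes its results to, a config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
  - kind: ServiceAccount
    name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
```

A second role granting create, get, and delete access to KubeVirt virtualmachineinstances resources is also required so the checkup can run the traffic generator and test VMs; consult the checkup's documentation for the exact rules.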
Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:
Table 1. DPDK checkup config map input parameters

| Parameter | Description | Is Mandatory |
| --- | --- | --- |
| spec.timeout | The time, in minutes, before the checkup fails. | True |
| spec.param.networkAttachmentDefinitionName | The name of the NetworkAttachmentDefinition object of the connected SR-IOV NICs. | True |
| spec.param.trafficGenContainerDiskImage | The container disk image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main. | False |
| spec.param.trafficGenTargetNodeName | The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.trafficGenPacketsPerSecond | The number of packets per second, in kilo (k) or million (m). The default value is 8m. | False |
| spec.param.vmUnderTestContainerDiskImage | The container disk image for the VM under test. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main. | False |
| spec.param.vmUnderTestTargetNodeName | The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| spec.param.testDuration | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| spec.param.portBandwidthGbps | The maximum bandwidth of the SR-IOV NIC. The default value is 10 Gbps. | False |
| spec.param.verbose | When set to true, it increases the verbosity of the checkup log. The default value is false. | False |
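As an illustration of the table, a minimal input config map might look like the following sketch. The config map name is an assumption; only the two mandatory parameters must be set, and the optional parameters shown use their documented defaults.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config          # assumed name
  namespace: <target_namespace>
data:
  spec.timeout: 10m                                       # mandatory
  spec.param.networkAttachmentDefinitionName: <dpdk_nad>  # mandatory
  spec.param.trafficGenPacketsPerSecond: 8m               # optional, default shown
  spec.param.testDuration: "5"                            # optional, minutes
  spec.param.portBandwidthGbps: "10"                      # optional, default shown
```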
Building a container disk image for RHEL virtual machines
You can build a custom RHEL 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmUnderTestContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.
You have installed the image builder tool and its CLI (composer-cli) on the VM.
You have installed the virt-customize tool:
# dnf install libguestfs-tools
You have installed the Podman CLI tool (podman).
Procedure
Verify that you can build a RHEL 8.7 image:
# composer-cli distros list
To run the composer-cli commands as non-root, add your user to the weldr or root groups:
# usermod -a -G weldr user
$ newgrp weldr
Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"
[[packages]]
name = "dpdk"
[[packages]]
name = "dpdk-tools"
[[packages]]
name = "driverctl"
[[packages]]
name = "tuned-profiles-cpu-partitioning"
[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"
[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
Push the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.toml
Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
# composer-cli compose start dpdk_image qcow2
Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.
# composer-cli compose status
Enter the following command to download the qcow2 image file by specifying its UUID:
# composer-cli compose image <UUID>
Create the customization scripts by running the following commands: