SR-IOV network node configuration object

You specify the SR-IOV network device configuration for a node by creating an SriovNetworkNodePolicy object. The object is part of the sriovnetwork.openshift.io API group.

The following YAML describes an SriovNetworkNodePolicy object:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name> (1)
  namespace: openshift-sriov-network-operator (2)
spec:
  resourceName: <sriov_resource_name> (3)
  nodeSelector: (4)
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: <priority> (5)
  mtu: <mtu> (6)
  numVfs: <num> (7)
  nicSelector: (8)
    vendor: "<vendor_code>" (9)
    deviceID: "<device_id>" (10)
    pfNames: ["<pf_name>", ...] (11)
    rootDevices: ["<pci_bus_id>", "..."] (12)
  deviceType: <device_type> (13)
  isRdma: false (14)
  linkType: <link_type> (15)
1 The name for the CR object.
2 The namespace where the SR-IOV Operator is installed.
3 The resource name of the SR-IOV device plug-in. You can create multiple SriovNetworkNodePolicy objects for a resource name.
4 The node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed on only selected nodes.
5 Optional: An integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
6 Optional: The maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
7 The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel Network Interface Card (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128.
8 The nicSelector mapping selects the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to the same device.
9 Optional: The vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
10 Optional: The device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
11 Optional: An array of one or more physical function (PF) names for the device.
12 Optional: An array of one or more PCI bus addresses for the PF of the device. Provide the address in the following format: 0000:02:00.1.
13 Optional: The driver type for the virtual functions. The only allowed values are netdevice and vfio-pci. The default value is netdevice.

For a Mellanox card to work in Data Plane Development Kit (DPDK) mode on bare metal nodes, use the netdevice driver type and set isRdma to true.

14 Optional: Whether to enable remote direct memory access (RDMA) mode. The default value is false.

If the isRdma parameter is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.

15 Optional: The link type for the VFs. You can specify one of the following values: eth or ib. eth is Ethernet and ib is InfiniBand. The default value is eth if it is not explicitly set. When linkType is set to ib, the SR-IOV Network Operator webhook automatically sets isRdma to true. When linkType is set to ib, deviceType must not be set to vfio-pci.
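As an example, the note above about running a Mellanox card in DPDK mode on bare metal translates into a policy that combines the netdevice driver type with isRdma: true. This is only a sketch: the object name, resource name, and PCI address are placeholders that you must replace with values for your hardware.

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-mlx-dpdk                # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxdpdk                # placeholder resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    vendor: "15b3"                     # Mellanox vendor hex code
    rootDevices:
      - "0000:02:00.1"                 # placeholder PCI bus address
  deviceType: netdevice                # DPDK mode on Mellanox uses netdevice ...
  isRdma: true                         # ... together with isRdma: true
```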

SR-IOV network node configuration examples

The following example describes the configuration for an IB device:

Example configuration for an IB device
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-ib-net-1
  namespace: openshift-sriov-network-operator
spec:
  resourceName: ibnic1
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    vendor: "15b3"
    deviceID: "101b"
    rootDevices:
      - "0000:19:00.0"
  linkType: ib
  isRdma: true

Virtual function (VF) partitioning for SR-IOV devices

In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into multiple resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs to load with the vfio-pci driver. In such a deployment, the pfNames selector in your SriovNetworkNodePolicy custom resource (CR) can be used to specify a range of VFs for a pool using the following format: <pfname>#<first_vf>-<last_vf>.

For example, the following YAML shows the selector for an interface named netpf0 with VF 2 through 7:

pfNames: ["netpf0#2-7"]
  • netpf0 is the PF interface name.

  • 2 is the first VF index (0-based) that is included in the range.

  • 7 is the last VF index (0-based) that is included in the range.
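By contrast, a pfNames entry without the #<first_vf>-<last_vf> suffix selects all VFs on that PF. A minimal sketch, reusing the interface name netpf0 from the example above:

```yaml
pfNames: ["netpf0"]
```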

You can select VFs from the same PF by using different policy CRs if the following requirements are met:

  • The numVfs value must be identical for policies that select the same PF.

  • The VF index must be in the range of 0 to <numVfs>-1. For example, if you have a policy with numVfs set to 8, then the <first_vf> value must not be smaller than 0, and the <last_vf> must not be larger than 7.

  • The VFs ranges in different policies must not overlap.

  • The <first_vf> must not be larger than the <last_vf>.
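The four requirements above can be expressed as a short validation sketch. The helper below is illustrative only; it is not part of the SR-IOV Network Operator. Each policy is modeled as a (numVfs, pfNames selector) pair.

```python
def parse_pf_selector(selector):
    """Split a pfNames entry like 'netpf0#2-7' into (pf, first_vf, last_vf).

    An entry without '#' selects every VF on the PF, which this sketch
    models as the full range 0..numVfs-1.
    """
    if "#" not in selector:
        return selector, None, None
    pf, _, vf_range = selector.partition("#")
    first, last = (int(n) for n in vf_range.split("-"))
    return pf, first, last


def validate_policies(policies):
    """Check the partitioning rules for policies that select the same PF."""
    by_pf = {}
    for num_vfs, selector in policies:
        pf, first, last = parse_pf_selector(selector)
        if first is None:
            first, last = 0, num_vfs - 1
        # <first_vf> must not be larger than <last_vf>.
        if first > last:
            raise ValueError(f"{selector}: first VF is larger than last VF")
        # The VF index must be in the range 0 to numVfs-1.
        if first < 0 or last > num_vfs - 1:
            raise ValueError(f"{selector}: VF index outside 0..{num_vfs - 1}")
        by_pf.setdefault(pf, []).append((num_vfs, first, last))
    for pf, entries in by_pf.items():
        # numVfs must be identical for policies that select the same PF.
        if len({num_vfs for num_vfs, _, _ in entries}) > 1:
            raise ValueError(f"{pf}: numVfs differs between policies")
        # VF ranges in different policies must not overlap.
        ranges = sorted((first, last) for _, first, last in entries)
        for (_, prev_last), (next_first, _) in zip(ranges, ranges[1:]):
            if next_first <= prev_last:
                raise ValueError(f"{pf}: VF ranges overlap")
    return True


# The two partitioning policies shown in the next example pass the checks.
print(validate_policies([(16, "netpf0#0-0"), (16, "netpf0#8-15")]))  # prints: True
```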

The following example illustrates NIC partitioning for an SR-IOV device.

The policy policy-net-1 defines a resource pool net1 that contains VF 0 of PF netpf0 with the default VF driver. The policy policy-net-1-dpdk defines a resource pool net1dpdk that contains VFs 8 to 15 of PF netpf0 with the vfio-pci VF driver.

Policy policy-net-1:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-1
  namespace: openshift-sriov-network-operator
spec:
  resourceName: net1
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 16
  nicSelector:
    pfNames: ["netpf0#0-0"]
  deviceType: netdevice

Policy policy-net-1-dpdk:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net-1-dpdk
  namespace: openshift-sriov-network-operator
spec:
  resourceName: net1dpdk
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 16
  nicSelector:
    pfNames: ["netpf0#8-15"]
  deviceType: vfio-pci

Configuring SR-IOV network devices

The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OKD. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.

It might take several minutes for a configuration change to apply.

Prerequisites

  • You installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the SR-IOV Network Operator.

  • You have enough available nodes in your cluster to handle the evicted workload from drained nodes.

  • You have not selected any control plane nodes for SR-IOV network device configuration.

Procedure

  1. Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

  2. Create the SriovNetworkNodePolicy object:

    $ oc create -f <name>-sriov-node-network.yaml

    where <name> specifies the name for this configuration.

    After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

  3. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

    $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'

Troubleshooting SR-IOV configuration

After you follow the procedure to configure an SR-IOV network device, the following sections address some error conditions.

To display the state of nodes, run the following command:

$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>

where <node_name> specifies the name of a node with an SR-IOV network device.

Error output: Cannot allocate memory
"lastSyncError": "write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory"

When a node indicates that it cannot allocate memory, check the following items:

  • Confirm that global SR-IOV settings are enabled in the BIOS for the node.

  • Confirm that VT-d is enabled in the BIOS for the node.