CNI VRF plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Assigning a secondary network to a VRF

As a cluster administrator, you can configure an additional network for your virtual routing and forwarding (VRF) domain by using the CNI VRF plug-in. The virtual network created by this plug-in is associated with a physical interface that you specify.

Applications that use VRFs must bind to a specific device. The common approach is to set the SO_BINDTODEVICE option on a socket. SO_BINDTODEVICE binds the socket to the device named by the passed interface name, for example, eth1. To set SO_BINDTODEVICE, the application must have CAP_NET_RAW capabilities.
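
The following is a minimal sketch, in C, of how an application might set SO_BINDTODEVICE on a socket. The interface name eth1 and the UDP socket type are assumptions for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        const char *ifname = "eth1";  /* assumed interface name */

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* Bind the socket to the named device. This call requires
           the CAP_NET_RAW capability. */
        if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                       ifname, strlen(ifname)) < 0) {
            perror("setsockopt(SO_BINDTODEVICE)");
            close(fd);
            return 1;
        }

        /* Traffic on this socket is now confined to eth1. */
        close(fd);
        return 0;
    }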

Creating an additional network attachment with the CNI VRF plug-in

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically.

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

To create an additional network attachment with the CNI VRF plug-in, perform the following procedure.

Prerequisites

  • Install the OKD CLI (oc).

  • Log in to the cluster as a user with cluster-admin privileges.

Procedure

  1. Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks:
        - name: test-network-1
          namespace: additional-network-1
          type: Raw
          rawCNIConfig: '{
            "cniVersion": "0.3.1",
            "name": "macvlan-vrf",
            "plugins": [  (1)
              {
                "type": "macvlan",
                "master": "eth1",
                "ipam": {
                  "type": "static",
                  "addresses": [
                    {
                      "address": ""
                    }
                  ]
                }
              },
              {
                "type": "vrf",  (2)
                "vrfname": "example-vrf-name",  (3)
                "table": 1001  (4)
              }
            ]
          }'
    1 plugins must be a list. The first item in the list must be the secondary network that underpins the VRF network. The second item in the list is the VRF plug-in configuration.
    2 type must be set to vrf.
    3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created.
    4 Optional. table is the routing table ID. If it is not specified, the CNI assigns a free routing table ID to the VRF.

    VRF functions correctly only when the resource is of type netdevice.

  2. Create the Network resource:

    $ oc create -f additional-network-attachment.yaml
  3. Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1.

    $ oc get network-attachment-definitions -n <namespace>
    Example output
    NAME                       AGE
test-network-1             14m

    There might be a delay before the CNO creates the CR.
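
    For reference, the generated NetworkAttachmentDefinition CR might look similar to the following sketch. The exact contents of spec.config are produced by the CNO from the rawCNIConfig shown earlier, so treat this as illustrative rather than authoritative.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: test-network-1
      namespace: additional-network-1
    spec:
      config: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ ... ] }'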

Verifying that the additional VRF network attachment is successful

To verify that the VRF CNI is correctly configured and that the additional network is attached, perform the following steps:

  1. Create a network that uses the VRF CNI.

  2. Assign the network to a pod, for example, by using the k8s.v1.cni.cncf.io/networks annotation, as shown in the sketch after this procedure.

  3. Verify that the pod network attachment is connected to the VRF additional network. Open a remote shell to the pod and run the following command:

    $ ip vrf show
    Example output
    Name              Table
    red                 10
  4. Confirm that the VRF interface is the master of the secondary interface:

    $ ip link
    Example output
    5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
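
One possible way to complete step 2 of this procedure is to reference the additional network in the k8s.v1.cni.cncf.io/networks annotation of the pod, as in the following sketch. The pod name and container image are assumptions for illustration.

    apiVersion: v1
    kind: Pod
    metadata:
      name: vrf-test-pod  # hypothetical pod name
      namespace: additional-network-1
      annotations:
        k8s.v1.cni.cncf.io/networks: test-network-1
    spec:
      containers:
        - name: test
          image: registry.access.redhat.com/ubi8/ubi-minimal  # illustrative image
          command: ["sleep", "infinity"]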