You can use Container Network Interface (CNI) plugin chaining to enable advanced multi-networking use cases for your pods.

About CNI chaining

CNI plugin chaining allows pods to use multiple network interfaces. This enables advanced configurations such as traffic isolation and prioritized routing through granular traffic policies.

CNI plugin chaining lets you isolate different types of traffic to meet performance, security, and compliance requirements, giving you greater flexibility in network design and traffic management.

Some scenarios where this might be useful include:

  • Multi-network topologies: Enables you to attach pods to multiple networks, each with its own traffic policy where relevant.

  • Traffic isolation: Provides separate networks for management, storage, and application traffic to ensure each has the appropriate security and QoS settings.

  • Custom routing rules: Ensures that specific traffic, for example SIP traffic, always uses a designated network interface, while other traffic follows the default network.

  • Enhanced network performance: Allows you to prioritize certain traffic types or manage congestion by directing traffic through dedicated network interfaces.

Configuring plugin chaining with the route-override CNI plugin

Plugin chaining applies multiple CNI plugins sequentially to the same network interface: each plugin in the chain processes the result of the previous one.

When you define a NetworkAttachmentDefinition (NAD) with a plugins array, the first plugin can create the interface, and a second plugin can modify its routing configuration.

The route-override CNI plugin is commonly used as the second plugin in a chain to modify the routing configuration of an interface created by the first plugin. It supports the following operations:

  • addroutes: Add static routes to direct traffic for specific destination networks through the interface.

  • delroutes: Remove specific routes from the interface.

  • flushroutes: Remove all routes from the interface.

  • flushgateway: Remove the default gateway route from the interface.
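For example, a route-override entry that first clears the routes the interface inherited and then installs a single static route could combine the flushroutes and addroutes operations. This fragment is a sketch based on the operations listed above, not part of the procedure that follows:

```json
{
  "type": "route-override",
  "flushroutes": true,
  "addroutes": [
    { "dst": "10.0.0.0/8", "gw": "192.168.100.1" }
  ]
}
```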

The following example demonstrates plugin chaining by configuring a pod with two additional network interfaces, each on a separate VLAN with custom routing:

  • eth1 on the 192.168.100.0/24 network (VLAN 100), with a static route directing 10.0.0.0/8 traffic through this interface.

  • eth2 on the 192.168.200.0/24 network (VLAN 200), with a static route directing 172.16.0.0/12 traffic through this interface.

Each interface uses a chain of two plugins: macvlan to create the interface on a VLAN, and route-override to add static routes that direct specific traffic through that interface.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have access to an account with cluster-admin privileges.

Procedure
  1. Create a namespace for the example by running the following command:

    $ oc create namespace chain-example
  2. Create the first NetworkAttachmentDefinition (NAD) with a chained plugin configuration.

    1. Create a YAML file, such as management.yaml, to define a NAD that configures a new interface, eth1, on VLAN 100 with the following configuration:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: management-net
        namespace: chain-example
      spec:
        config: '{
          "cniVersion": "1.0.0",
          "name": "management-net",
          "plugins": [
            {
              "type": "macvlan",
              "master": "br-ex",
              "vlan": 100,
              "mode": "bridge",
              "ipam": {
                "type": "static",
                "addresses": [
                  {
                    "address": "192.168.100.10/24",
                    "gateway": "192.168.100.1"
                  }
                ]
              }
            },
            {
              "type": "route-override",
              "addroutes": [
                {
                  "dst": "10.0.0.0/8",
                  "gw": "192.168.100.1"
                }
              ]
            }
          ]
        }'
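    Because the spec.config value is a JSON string embedded in YAML, a misplaced quote or comma is easy to introduce. As an optional local check (a sketch, assuming python3 is available on your workstation; the inlined config mirrors the management-net example), you can pipe the config through python3 -m json.tool before applying the NAD:

```shell
# Optional check: spec.config must parse as valid JSON.
# python3 -m json.tool exits non-zero on malformed input, so a syntax
# error is caught before the NAD reaches the cluster.
config='{
  "cniVersion": "1.0.0",
  "name": "management-net",
  "plugins": [
    {"type": "macvlan", "master": "br-ex", "vlan": 100, "mode": "bridge",
     "ipam": {"type": "static",
              "addresses": [{"address": "192.168.100.10/24",
                             "gateway": "192.168.100.1"}]}},
    {"type": "route-override",
     "addroutes": [{"dst": "10.0.0.0/8", "gw": "192.168.100.1"}]}
  ]
}'
result=$(printf '%s' "$config" | python3 -m json.tool > /dev/null && echo valid)
echo "$result"
```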
  3. Create the NAD by running the following command:

    $ oc apply -f management.yaml
  4. Create the second NAD with a chained plugin configuration.

    1. Create a YAML file, such as sip.yaml, to define a NAD that configures a new interface, eth2, on VLAN 200 with the following configuration:

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: sip-net
        namespace: chain-example
      spec:
        config: '{
          "cniVersion": "1.0.0",
          "name": "sip-net",
          "plugins": [
            {
              "type": "macvlan",
              "master": "br-ex",
              "vlan": 200,
              "mode": "bridge",
              "ipam": {
                "type": "static",
                "addresses": [
                  {
                    "address": "192.168.200.10/24",
                    "gateway": "192.168.200.1"
                  }
                ]
              }
            },
            {
              "type": "route-override",
              "addroutes": [
                {
                  "dst": "172.16.0.0/12",
                  "gw": "192.168.200.1"
                }
              ]
            }
          ]
        }'
  5. Create the NAD by running the following command:

    $ oc apply -f sip.yaml
  6. Attach the NetworkAttachmentDefinition resources to a pod by creating a pod definition file, such as pod.yaml, with the following configuration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: chain-test-pod
      namespace: chain-example
      labels:
        app: chain-test
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
          { "name": "management-net", "interface": "eth1" },
          { "name": "sip-net", "interface": "eth2" }
        ]'
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: test-container
        image: registry.access.redhat.com/ubi9/ubi:latest
        command: ["sleep", "infinity"]
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
  7. Create the pod by running the following command:

    $ oc apply -f pod.yaml
  8. Verify that the pod reaches the Ready state by running the following command:

    $ oc wait --for=condition=Ready pod/chain-test-pod -n chain-example --timeout=120s

    Example output:

    pod/chain-test-pod condition met
Verification
  1. Run the following command to list all network interfaces and their assigned IP addresses inside the pod. This verifies that the pod has the additional interfaces configured by plugin chaining:

    $ oc exec chain-test-pod -n chain-example -- ip a

    Example output:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP
        link/ether 0a:58:0a:83:02:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.131.2.25/23 brd 10.131.3.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::858:aff:fe83:219/64 scope link
           valid_lft forever preferred_lft forever
    3: eth1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP qlen 1000
        link/ether aa:25:73:ff:a7:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 192.168.100.10/24 brd 192.168.100.255 scope global eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::a825:73ff:feff:a700/64 scope link
           valid_lft forever preferred_lft forever
    4: eth2@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP qlen 1000
        link/ether aa:a4:6c:4e:e8:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 192.168.200.10/24 brd 192.168.200.255 scope global eth2
           valid_lft forever preferred_lft forever
        inet6 fe80::a8a4:6cff:fe4e:e897/64 scope link
           valid_lft forever preferred_lft forever

    This output shows that the pod has three network interfaces:

    • eth0: The default interface, connected to the cluster network.

    • eth1: The first additional interface from management-net, with IP 192.168.100.10.

    • eth2: The second additional interface from sip-net, with IP 192.168.200.10.

  2. Run the following command to verify that the route-override plugin added the expected static routes:

    $ oc exec chain-test-pod -n chain-example -- ip route

    Example output:

    default via 10.131.2.1 dev eth0
    10.0.0.0/8 via 192.168.100.1 dev eth1
    10.128.0.0/14 via 10.131.2.1 dev eth0
    10.131.2.0/23 dev eth0 proto kernel scope link src 10.131.2.25
    100.64.0.0/16 via 10.131.2.1 dev eth0
    169.254.0.5 via 10.131.2.1 dev eth0
    172.16.0.0/12 via 192.168.200.1 dev eth2
    172.30.0.0/16 via 10.131.2.1 dev eth0
    192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.10
    192.168.200.0/24 dev eth2 proto kernel scope link src 192.168.200.10

    This output confirms that the route-override plugin in each chain added the expected static routes:

    • For 10.0.0.0/8 via 192.168.100.1 dev eth1, traffic destined for 10.0.0.0/8 is routed through eth1 via the management-net gateway. This route was added by the route-override plugin in the management-net chain.

    • For 172.16.0.0/12 via 192.168.200.1 dev eth2, traffic destined for 172.16.0.0/12 is routed through eth2 via the sip-net gateway. This route was added by the route-override plugin in the sip-net chain.

    • The connected subnet routes (192.168.100.0/24 and 192.168.200.0/24) were created by the macvlan plugin, while the default route uses eth0, the cluster network interface.
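    Note that the kernel selects among overlapping routes by longest-prefix match, so the 172.16.0.0/12 route on eth2 does not capture cluster service traffic: the more specific 172.30.0.0/16 route keeps that traffic on eth0. The following local sketch (assuming python3 is available; prefixes taken from the example output above) illustrates the selection:

```shell
# Longest-prefix match: 172.30.0.10 (a service IP) falls inside both
# 172.16.0.0/12 (route-override route on eth2) and 172.30.0.0/16
# (cluster route on eth0); the /16 route wins because it is more specific.
match=$(python3 -c '
import ipaddress
dst = ipaddress.ip_address("172.30.0.10")
routes = {"172.16.0.0/12": "eth2", "172.30.0.0/16": "eth0"}
best = max((p for p in routes if dst in ipaddress.ip_network(p)),
           key=lambda p: ipaddress.ip_network(p).prefixlen)
print(routes[best])')
echo "$match"
```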