As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address.

The egress router implementation uses the egress router Container Network Interface (CNI) plug-in.

The egress router CNI plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Network attachment definition for an egress router in redirect mode

Before a pod can act as an egress router, you must specify the network interface configuration as a NetworkAttachmentDefinition object. The object specifies information such as the IP address to attach to the egress router pod, the network destinations, and a network gateway. When the egress router pod starts, Multus uses the network attachment definition to add a network interface with the specified properties to the pod.

Example network attachment definition
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: egress-router-redirect  (1)
spec:
  config: '{
    "cniVersion": "0.4.0",
    "type": "egress-router",
    "name": "egress-router",
    "ip": {
      "addresses": [
        "192.168.12.99/24"  (2)
        ],
      "destinations": [
        "192.168.12.91/32"  (3)
        ],
      "gateway": "192.168.12.1"  (4)
      }
    }'
1 The name of the network attachment definition is used later in the specification for the egress router pod.
2 The addresses key specifies the reserved source IP address to use with the additional network interface. Specify a single IP address in CIDR notation, such as 192.168.12.99/24.
3 The destinations key specifies a single IP address in CIDR notation that the egress router sends packets to. The network address translation (NAT) tables for the egress router pod are configured so that connections to the cluster IP address of the pod are redirected to the same port on the destination IP address. In this example, connections to the pod are redirected to 192.168.12.91, with a source IP address of 192.168.12.99. A conceptual sketch of this NAT behavior follows the list.
4 The gateway key specifies the IP address for the network gateway.
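
Conceptually, the redirect behaves like a pair of NAT rules: destination NAT on inbound traffic and source NAT on outbound traffic. The following iptables rules are an illustrative sketch only, not the exact configuration that the egress router CNI plug-in generates:

# Sketch only: redirect connections that arrive on the cluster network
# interface to the destination IP address
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 192.168.12.91
# Sketch only: rewrite outgoing traffic so that it originates from the
# reserved source IP address
iptables -t nat -A POSTROUTING -o net1 -j SNAT --to-source 192.168.12.99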

Egress router pod specification for redirect mode

After you create a network attachment definition, you add a pod that references the definition.

Example egress router pod specification
apiVersion: v1
kind: Pod
metadata:
  name: egress-router-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: egress-router-redirect  (1)
spec:
  containers:
    - name: egress-router-pod
      image: quay.io/openshift/origin-pod
1 The specified network must match the name of the network attachment definition. You can specify a namespace, interface name, or both, by replacing the values in the following pattern: <namespace>/<network>@<interface>. By default, Multus adds a secondary network interface to the pod with a name such as net1, net2, and so on.
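
For example, the following annotation value shows the full pattern. The default namespace and the explicit net1 interface name are illustrative values:

k8s.v1.cni.cncf.io/networks: default/egress-router-redirect@net1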

Deploying an egress router pod in redirect mode

You can deploy an egress router pod to redirect traffic from its own reserved source IP address to one or more destination IP addresses.

After you add an egress router pod, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP address.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure
  1. Create a network attachment definition.
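
    For example, if you saved the network attachment definition from the previous section in a file named egress-router-redirect.yaml (the file name is arbitrary), enter the following command:

    $ oc apply -f egress-router-redirect.yaml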

  2. Create an egress router pod.
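
    For example, if the pod specification from the previous section is saved as egress-router-pod.yaml (again, an arbitrary file name), enter the following command:

    $ oc apply -f egress-router-pod.yaml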

  3. To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router pod, as in the following example:

    apiVersion: v1
    kind: Service
    metadata:
      name: egress-1
    spec:
      ports:
      - name: database
        protocol: TCP
        port: 3306
      type: ClusterIP
      selector:
        name: egress-router-pod

    After you create the service, your pods can connect to the service. The egress router pod redirects the connection to the corresponding port on the destination IP address. The connections originate from the reserved source IP address.
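
    For example, a client pod in the same namespace can use the service name as its database host. The following connection attempt is an illustrative check that assumes a client pod named client-pod exists and that its image includes a MySQL client:

    $ oc rsh client-pod
    sh-4.4$ mysql -h egress-1 -P 3306 -u <user> -p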

Verification

To verify that the egress router pod started and has the secondary network interface, complete the following procedure:

  1. View the events for the egress router pod:

    $ oc get events --field-selector involvedObject.name=egress-router-pod

    If the pod references the network attachment definition, the previous command returns output that is similar to the following:

    Example output
    LAST SEEN   TYPE     REASON           OBJECT                  MESSAGE
    5m4s        Normal   Scheduled        pod/egress-router-pod   Successfully assigned default/egress-router-pod to ci-ln-9x2bnsk-f76d1-j2v6g-worker-c-24g65
    5m3s        Normal   AddedInterface   pod/egress-router-pod   Add eth0 [10.129.2.31/23]
    5m3s        Normal   AddedInterface   pod/egress-router-pod   Add net1 [192.168.12.99/24] from default/egress-router-redirect
  2. Optional: View the routing table for the egress router pod.

    1. Get the node name for the egress router pod:

      $ POD_NODENAME=$(oc get pod egress-router-pod -o jsonpath="{.spec.nodeName}")
    2. Start a debug session on the target node. This step instantiates a debug pod called <node_name>-debug:

      $ oc debug node/$POD_NODENAME
    3. Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host, you can run binaries from the executable paths of the host:

      # chroot /host
    4. From within the chroot environment console, get the container ID:

      # crictl ps --name egress-router-pod | awk '{print $1}'
      Example output
      CONTAINER
      bac9fae69ddb6
    5. Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6:

      # crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print $2}'
      Example output
      68857
    6. Enter the network namespace of the container:

      # nsenter -n -t 68857
    7. Display the routing table:

      # ip route

      In the following example output, the net1 network interface is the default route. Traffic for the cluster network uses the eth0 network interface. Traffic for the 192.168.12.0/24 network uses the net1 network interface and originates from the reserved source IP address 192.168.12.99. The pod routes all other traffic to the gateway at IP address 192.168.12.1. Routing for the service network is not shown.

      Example output
      default via 192.168.12.1 dev net1
      10.129.2.0/23 dev eth0 proto kernel scope link src 10.129.2.31
      192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99
      192.168.12.1 dev net1
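
      Optionally, from the same network namespace, you can confirm which source address the kernel selects for an address in the destinations list. This supplemental check is not part of the documented procedure; output similar to the following indicates that traffic to 192.168.12.91 leaves the net1 interface from the reserved source IP address:

      # ip route get 192.168.12.91

      Example output
      192.168.12.91 dev net1 src 192.168.12.99 uid 0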