With IPsec enabled, you can encrypt both internal pod-to-pod traffic between nodes on the cluster network and external traffic between pods and IPsec endpoints outside your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.

IPsec is disabled by default. You can enable it either during or after installing the cluster. For information about cluster installation, see OKD installation overview. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP header.

IPsec on IBM Cloud® supports only NAT-T. Using ESP is not supported.

The following support limitations exist for IPsec on an OKD cluster:

  • You must disable IPsec before updating to OKD 4.15. After disabling IPsec, you must also delete the associated IPsec daemonsets. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. (OCPBUGS-43323)

Use the procedures in the following documentation to:

  • Enable and disable IPsec after cluster installation

  • Configure support for external IPsec endpoints outside the cluster

  • Verify that IPsec encrypts traffic between pods on different nodes

Prerequisites

  • You have decreased the size of the cluster MTU by 46 bytes to allow for the additional overhead of the IPsec ESP header. For more information on resizing the MTU that your cluster uses, see Changing the MTU for the cluster network.
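
    The MTU change itself is a separate procedure that is described in the linked documentation. As an illustrative sketch only, you can check the current cluster network MTU and start the MTU migration with a patch similar to the following. The from, to, and machine values are assumptions for a default 1400-byte overlay MTU on 1500-byte hardware; substitute the values for your environment and complete the full MTU change procedure:

    $ oc describe network.config cluster | grep -i mtu

    # Example values only; replace with the MTU values for your cluster.
    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{"spec":{"migration":{"mtu":{"network":{"from":1400,"to":1354},"machine":{"to":1500}}}}}'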

Network connectivity requirements when IPsec is enabled

You must configure the network connectivity between machines to allow OKD cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.

Table 1. Ports used for all-machine to all-machine communications

Protocol   Port   Description
UDP        500    IPsec IKE packets
           4500   IPsec NAT-T packets
ESP        N/A    IPsec Encapsulating Security Payload (ESP)

IPsec encryption for pod-to-pod traffic

OKD supports IPsec encryption for network traffic between pods.

Types of network traffic flows encrypted by pod-to-pod IPsec

With IPsec enabled, only the following network traffic flows between pods are encrypted:

  • Traffic between pods on different nodes on the cluster network

  • Traffic from a pod on the host network to a pod on the cluster network

The following traffic flows are not encrypted:

  • Traffic between pods on the same node on the cluster network

  • Traffic between pods on the host network

  • Traffic from a pod on the cluster network to a pod on the host network

The encrypted and unencrypted flows are illustrated in the following diagram:

IPsec encrypted and unencrypted traffic flows

Encryption protocol and IPsec mode

The encryption cipher used is AES-GCM-16-256. The integrity check value (ICV) is 16 bytes. The key length is 256 bits.

The IPsec mode used is Transport mode, a mode that encrypts end-to-end communication by adding an Encapsulating Security Payload (ESP) header to the IP header of the original packet and encrypting the packet data. OKD does not currently use or support IPsec Tunnel mode for pod-to-pod communication.
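
If you want to confirm the negotiated algorithm on a node, one optional approach, which is a sketch and not part of the official procedures in this document, is to inspect the kernel IPsec state from a debug pod. Replace <node_name> with the name of a node in your cluster:

    $ oc debug node/<node_name> -- chroot /host ip xfrm state

Established security associations are listed together with the AEAD algorithm in use.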

Security certificate generation and rotation

The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO.

The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse.
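
To inspect the certificates that IPsec uses on a node, you can list the contents of the node NSS database from a debug pod. This is an optional sketch rather than part of the rotation mechanism, and it assumes that the certutil tool is available on the node. Replace <node_name> with the name of a node in your cluster:

    # Assumes certutil is installed on the node.
    $ oc debug node/<node_name> -- chroot /host certutil -L -d /var/lib/ipsec/nss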

Enabling pod-to-pod IPsec encryption

As a cluster administrator, you can enable pod-to-pod IPsec encryption after cluster installation.

Prerequisites
  • Install the OpenShift CLI (oc).

  • You are logged in to the cluster as a user with cluster-admin privileges.

  • You have reduced the size of your cluster’s maximum transmission unit (MTU) by 46 bytes to allow for the overhead of the IPsec ESP header.

Procedure
  • To enable IPsec encryption, enter the following command:

    $ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}'
Verification
  1. To find the names of the OVN-Kubernetes data plane pods, enter the following command:

    $ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node
    Example output
    ovnkube-node-5xqbf                       8/8     Running   0              28m
    ovnkube-node-6mwcx                       8/8     Running   0              29m
    ovnkube-node-ck5fr                       8/8     Running   0              31m
    ovnkube-node-fr4ld                       8/8     Running   0              26m
    ovnkube-node-wgs4l                       8/8     Running   0              33m
    ovnkube-node-zfvcl                       8/8     Running   0              34m
  2. Verify that IPsec is enabled on your cluster by entering the following command. The command output must state true to indicate that the node has IPsec enabled.

    $ oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec (1)
    1 Replace <pod_number_sequence> with the random character sequence, such as 5xqbf, from the name of a data plane pod in the previous step.
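
    To check every data plane pod in one pass, you can wrap the same command in a loop. This is a convenience sketch that relies only on the commands shown above:

    $ for pod in $(oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node -o name); do
        oc -n openshift-ovn-kubernetes rsh $pod ovn-nbctl --no-leader-only get nb_global . ipsec
      done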

Disabling IPsec encryption

As a cluster administrator, you can disable IPsec encryption only if you enabled IPsec after cluster installation.

To avoid issues with your installed cluster, ensure that after you disable IPsec, you also delete the associated IPsec daemonset pods.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in to the cluster with a user with cluster-admin privileges.

Procedure
  1. To disable IPsec encryption, enter the following command:

    $ oc patch networks.operator.openshift.io/cluster --type=json \
      -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'
  2. To find the names of the OVN-Kubernetes data plane pods that exist on a node in your cluster, enter the following command:

    $ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node
    Example output
    ovnkube-node-5xqbf                       8/8     Running   0              28m
    ovnkube-node-6mwcx                       8/8     Running   0              29m
    ovnkube-node-ck5fr                       8/8     Running   0              31m
    ...
  3. To check if a node in your cluster has IPsec disabled, enter the following command. Ensure that you enter this command for each node that exists in your cluster. The command output must state false to indicate that the node has IPsec disabled.

    $ oc -n openshift-ovn-kubernetes rsh ovnkube-node-<pod_number_sequence> ovn-nbctl --no-leader-only get nb_global . ipsec (1)
    1 Replace <pod_number_sequence> with the random character sequence, such as 5xqbf, from the name of a data plane pod in the previous step.
  4. To remove the IPsec ovn-ipsec-host daemonset from the openshift-ovn-kubernetes namespace, enter the following command:

    $ oc delete daemonset ovn-ipsec-host -n openshift-ovn-kubernetes (1)
    1 The ovn-ipsec-host daemonset pod configures IPsec connections for east-west traffic on a node.
  5. To remove the IPsec ovn-ipsec-containerized daemonset from the openshift-ovn-kubernetes namespace, enter the following command:

    $ oc delete daemonset ovn-ipsec-containerized -n openshift-ovn-kubernetes (1)
    1 The ovn-ipsec-containerized daemonset pod configures IPsec connections for east-west traffic on a node.
  6. Verify that the ovn-ipsec-host and ovn-ipsec-containerized daemonset pods were removed from all the nodes in your cluster by entering the following command. If the command output does not list the pods, the removal operation is successful.

    $ oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec

    You might need to re-run the oc delete command for a pod if the initial attempt does not delete it.

  7. Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets.
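
    Increasing the MTU uses the same cluster MTU change procedure that is referenced earlier in this document. As a sketch only, with values that are assumptions for a default 1500-byte hardware MTU, the initial migration patch reverses the earlier reduction:

    # Example values only; replace with the MTU values for your cluster.
    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{"spec":{"migration":{"mtu":{"network":{"from":1354,"to":1400},"machine":{"to":1500}}}}}'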

IPsec encryption for external traffic

OKD supports IPsec encryption for traffic to external hosts.

You must supply a custom IPsec configuration, which includes the IPsec configuration file itself and TLS certificates.

Ensure that you observe the following restrictions:

  • The custom IPsec configuration must not include any connection specifications that might interfere with the cluster’s pod-to-pod IPsec configuration.

  • Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can collide with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node.

IPsec support for external endpoints is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Enabling IPsec encryption for external IPsec endpoints

As a cluster administrator, you can enable IPsec encryption between the cluster and external IPsec endpoints. Because this procedure uses Butane to create machine configs, you must have the butane command installed.

After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to roll out the new machine config.

Prerequisites
  • Install the OpenShift CLI (oc).

  • You are logged in to the cluster as a user with cluster-admin privileges.

  • You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header.

  • You have installed the butane utility.

  • You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
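
    If you need to create the PKCS#12 bundle from an existing certificate and private key, a typical openssl invocation looks like the following sketch. The file names are placeholders, and the empty export password matches the pk12util -W "" import performed later in this procedure:

    # Placeholder file names; adjust for your environment.
    $ openssl pkcs12 -export -in endpoint_cert.pem -inkey endpoint_key.pem \
      -certfile ca.pem -name left_server -passout pass: -out left_server.p12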

Procedure

  1. Create an IPsec configuration file named ipsec-endpoint-config.conf. This file is added to the machine config that you create later in this procedure. For more information, see Libreswan as an IPsec VPN implementation.
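
    The contents of the file depend on your endpoint. The following is only an illustrative sketch of a certificate-based Libreswan connection. The connection name, addresses, and subnet are placeholders, and left_server is the certificate nickname that is imported into the NSS database later in this procedure:

    # Illustrative sketch only; adjust all values for your endpoint.
    conn external-endpoint
        left=<cluster_node_ip>
        leftcert=left_server
        leftid=%fromcert
        leftrsasigkey=%cert
        right=<external_endpoint_ip>
        rightid=%fromcert
        rightrsasigkey=%cert
        rightsubnet=<external_subnet_cidr>
        ikev2=insist
        type=tunnel
        auto=start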

  2. Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps.

    • left_server.p12: The certificate bundle for the IPsec endpoints

    • ca.pem: The certificate authority that you signed your certificates with

  3. Create a machine config to apply the IPsec configuration to your cluster by using the following two steps:

    1. To add the IPsec configuration, create Butane config files for the control plane and worker nodes with the following contents:

      $ for role in master worker; do
        cat >> "99-ipsec-${role}-endpoint-config.bu" <<-EOF
        variant: openshift
        version: 4.14.0
        metadata:
          name: 99-${role}-import-certs-enable-svc-os-ext
          labels:
            machineconfiguration.openshift.io/role: $role
        openshift:
          extensions:
            - ipsec
        systemd:
          units:
          - name: ipsec-import.service
            enabled: true
            contents: |
              [Unit]
              Description=Import external certs into ipsec NSS
              Before=ipsec.service
      
              [Service]
              Type=oneshot
              ExecStart=/usr/local/bin/ipsec-addcert.sh
              RemainAfterExit=false
              StandardOutput=journal
      
              [Install]
              WantedBy=multi-user.target
          - name: ipsecenabler.service
            enabled: true
            contents: |
              [Service]
              Type=oneshot
              ExecStart=systemctl enable --now ipsec.service
      
              [Install]
              WantedBy=multi-user.target
        storage:
          files:
          - path: /etc/ipsec.d/ipsec-endpoint-config.conf
            mode: 0400
            overwrite: true
            contents:
              local: ipsec-endpoint-config.conf
          - path: /etc/pki/certs/ca.pem
            mode: 0400
            overwrite: true
            contents:
              local: ca.pem
          - path: /etc/pki/certs/left_server.p12
            mode: 0400
            overwrite: true
            contents:
              local: left_server.p12
          - path: /usr/local/bin/ipsec-addcert.sh
            mode: 0740
            overwrite: true
            contents:
              inline: |
                #!/bin/bash -e
                echo "importing cert to NSS"
                certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem
                pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/
                certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/
      EOF
      done
    2. To transform the Butane files that you created in the previous step into machine configs, enter the following command:

      $ for role in master worker; do
        butane -d . 99-ipsec-${role}-endpoint-config.bu -o ./99-ipsec-$role-endpoint-config.yaml
      done
  4. To apply the machine configs to your cluster, enter the following command:

    $ for role in master worker; do
      oc apply -f 99-ipsec-${role}-endpoint-config.yaml
    done

    As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available.

  5. Check the machine config pool status by entering the following command:

    $ oc get mcp

    A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

    By default, the MCO updates one machine per pool at a time, so the total time the update takes increases with the size of the cluster.
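
    After the pools finish updating, you can optionally confirm that the ipsec service enabled by the machine config is active on a node. This is a sketch; replace <node_name> with the name of a rebooted node:

    $ oc debug node/<node_name> -- chroot /host systemctl is-active ipsec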