You can extend the networking capabilities of your cluster beyond the default primary network by configuring additional networks. You can create, edit, view, and delete these networks as needed.
You configure a primary network by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group.
The configuration for the API is described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name for the primary network. |
| metadata.namespace | string | The namespace that the object is associated with. |
| spec.config | string | The CNI plugin configuration in JSON format. |
You can manage the life cycle of a primary network created by a NAD CR through the Cluster Network Operator (CNO) or a YAML manifest. Using the CNO provides automated management of the network resource, while applying a YAML manifest allows for direct control over the network configuration.
With this method, the CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP daemon is available for a primary network that uses a DHCP-assigned IP address.
With this method, you can manage the primary network directly by creating a NetworkAttachmentDefinition object. This approach allows for the invocation of multiple CNI plugins in order to attach primary network interfaces in a pod.
Each approach is mutually exclusive and you can only use one approach for managing a primary network at a time. For either approach, the primary network is managed by a Container Network Interface (CNI) plugin that you configure.
Note: When deploying OKD nodes with multiple network interfaces on OpenStack with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:
$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
The Cluster Network Operator (CNO) manages additional network definitions. When you specify a primary network to create, the CNO creates the NetworkAttachmentDefinition object automatically and manages it.
Important: Do not edit the NetworkAttachmentDefinition objects that the CNO manages. Doing so might disrupt network traffic on your additional networks.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Optional: Create the namespace for the additional networks:
$ oc create namespace <namespace_name>
To edit the CNO configuration, enter the following command:
$ oc edit networks.operator.openshift.io cluster
Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  # ...
  additionalNetworks:
  - name: tertiary-net
    namespace: namespace2
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "tertiary-net",
        "type": "ipvlan",
        "master": "eth1",
        "mode": "l2",
        "ipam": {
          "type": "static",
          "addresses": [
            {
              "address": "192.168.1.23/24"
            }
          ]
        }
      }
Save your changes and quit the text editor to commit your changes.
Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. There might be a delay before the CNO creates the CRD. The expected output shows the name of the NAD CRD and its age in minutes.
$ oc get network-attachment-definitions -n <namespace>
where:
<namespace>: Specifies the namespace for the network attachment that you added to the CNO configuration.
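For example, if you created the tertiary-net network in the namespace2 namespace shown earlier, the output resembles the following. The age value is illustrative:
NAME           AGE
tertiary-net   14m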
Create a primary network attachment by directly applying a NetworkAttachmentDefinition YAML manifest. This gives you full control over the network configuration without relying on the Cluster Network Operator to manage the resource automatically.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create a YAML file with your additional network configuration, such as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: next-net
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "work-network",
      "type": "host-device",
      "device": "eth1",
      "ipam": {
        "type": "dhcp"
      }
    }
To create the additional network, enter the following command:
$ oc apply -f <file>.yaml
where:
<file>: Specifies the name of the file that contains the YAML manifest.
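Optional: Confirm that the object was created, mirroring the verification step in the CNO-managed approach. The next-net name matches the example manifest in this procedure:
$ oc get network-attachment-definitions next-net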
For additional networks, you can assign IP addresses by using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment.
The DHCP IPAM CNI plugin, which is responsible for the dynamic assignment of IP addresses, operates with two distinct components:
CNI Plugin: Responsible for integrating with the Kubernetes networking stack to request and release IP addresses.
DHCP IPAM CNI Daemon: A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself.
For networks requiring type: dhcp in their IPAM configuration, ensure the following:
A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer’s existing network infrastructure.
The DHCP server is appropriately configured to serve IP addresses to the nodes.
In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server.
Note: Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations.
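For example, the host-device configuration shown earlier in this guide could use Whereabouts instead of DHCP by swapping the ipam stanza. This is a minimal sketch; the range value is illustrative and must be chosen to fit your network:
{
  "cniVersion": "0.3.1",
  "name": "work-network",
  "type": "host-device",
  "device": "eth1",
  "ipam": {
    "type": "whereabouts",
    "range": "192.0.2.192/27"
  }
}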
A DHCP lease must be periodically renewed throughout the container’s lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup.
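One common way to trigger the daemon deployment is to define an additional network in the CNO configuration whose IPAM type is dhcp. The following is a sketch; the dhcp-shim name and the default namespace are placeholders:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "ipam": {
          "type": "dhcp"
        }
      }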
The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD).
Note: Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated NetworkAttachmentDefinition object. You must install the NMState Operator before you use the OVN-Kubernetes network plugin to configure secondary network interfaces.
You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies.
A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network.
A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks.
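For example, the following JSON sketches a layer 2 configuration for a network named l2-network in the ns1 namespace; the subnet, MTU, and excluded-subnet values are illustrative:
{
  "cniVersion": "0.3.1",
  "name": "l2-network",
  "type": "ovn-k8s-cni-overlay",
  "topology": "layer2",
  "subnets": "10.100.200.0/24",
  "mtu": 1300,
  "netAttachDefName": "ns1/l2-network",
  "excludeSubnets": "10.100.200.0/29"
}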
Note: Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported.
You can use an OVN-Kubernetes additional network with the following supported platforms:
Bare metal
IBM Power®
IBM Z®
IBM® LinuxONE
VMware vSphere
OpenStack
The OVN-Kubernetes network plugin JSON configuration object describes the configuration parameters for the OVN-Kubernetes CNI network plugin. The following table details these parameters:
| Field | Type | Description |
|---|---|---|
| cniVersion | string | The CNI specification version. The required value is 0.3.1. |
| name | string | The name of the network. These networks are not namespaced. For example, a network named l2-network can be referenced by NetworkAttachmentDefinition CRDs that exist in different namespaces, and pods that use those CRDs communicate over the same secondary network. |
| type | string | The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay. |
| topology | string | The topological configuration for the network. Must be one of layer2 or localnet. |
| subnets | string | The subnet to use for the network across the cluster. For "topology":"layer2" deployments, IPv6 and dual-stack subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. |
| mtu | integer | The maximum transmission unit (MTU). If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, such as the Geneve (Generic Network Virtualization Encapsulation) overlay, and the byte capacity of any enabled features, such as IPsec. |
| netAttachDefName | string | The metadata namespace and name of the network attachment definition object where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition object named l2-network in the ns1 namespace, set this value to ns1/l2-network. |
| excludeSubnets | string | A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. |
| vlanID | integer | If topology is set to localnet, the VLAN tag to assign to traffic on this network. The default is to not assign a VLAN tag. |
When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field.
The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network.
The following table details the supported multi-network policy selectors based on the subnets CNI configuration:
| subnets field specified | Allowed multi-network policy selectors |
|---|---|
| Yes | podSelector, namespaceSelector, and ipBlock |
| No | ipBlock only |
You can use the k8s.v1.cni.cncf.io/policy-for annotation on a MultiNetworkPolicy object to point to a NetworkAttachmentDefinition (NAD) custom resource (CR). The NAD CR defines the network to which the policy applies. The following example multi-network policy, which uses a pod selector, is valid only if the subnets field is defined in the CNI configuration for the secondary network named blue2:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-same-namespace
  annotations:
    k8s.v1.cni.cncf.io/policy-for: blue2
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
The following example uses an ipBlock selector, which is always valid for an OVN-Kubernetes secondary network:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: ingress-ipblock
  annotations:
    k8s.v1.cni.cncf.io/policy-for: default/flatl2net
spec:
  podSelector:
    matchLabels:
      name: access-control
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.200.0.0/30
The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network.
You must map a secondary network to the OVS bridge to use it as an OVN-Kubernetes secondary network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS).
You can create a NodeNetworkConfigurationPolicy (NNCP) object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: ''. With this declarative approach, the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently.
When attaching an additional network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. Consider the following approaches:
If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network stops working correctly.
If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides for traffic isolation from your primary cluster network.
Important: You cannot make configuration changes to the network interface that OVN-Kubernetes owns and manages on the br-ex bridge. Making such changes can disrupt your cluster network.
In the following example, which shares the existing bridge, the localnet1 network is mapped to the br-ex bridge:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1
        bridge: br-ex
        state: present
where:
metadata.name:: The name for the configuration object.
spec.nodeSelector.node-role.kubernetes.io/worker:: A node selector that specifies the nodes to apply the node network configuration policy to.
spec.desiredState.ovn.bridge-mappings.localnet:: The name for the secondary network from which traffic is forwarded to the OVS bridge. This name must match the value of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes secondary network.
spec.desiredState.ovn.bridge-mappings.bridge:: The name of the OVS bridge on the node. This value is required only if you specify state: present.
spec.desiredState.ovn.bridge-mappings.state:: The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present.
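After you define the policy, apply it and confirm that it reconciles. The mapping.yaml file name is illustrative:
$ oc apply -f mapping.yaml
$ oc get nncp mapping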
The following JSON example configures a localnet secondary network that is named localnet1. Note that the value for the mtu parameter must match the MTU value that was set for the secondary network interface that is mapped to the br-ex bridge interface.
{
  "cniVersion": "0.3.1",
  "name": "localnet1",
  "type": "ovn-k8s-cni-overlay",
  "topology": "localnet",
  "physicalNetworkName": "localnet1",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "mtu": 1500,
  "netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "10.100.200.0/29"
}
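This CNI configuration is embedded in the spec.config field of a NetworkAttachmentDefinition object. The following is a minimal sketch; note that the netAttachDefName value, ns1/localnet-network, must match the namespace and name of the object:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: ns1
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "localnet1",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "physicalNetworkName": "localnet1",
      "subnets": "202.10.130.112/28",
      "vlanID": 33,
      "mtu": 1500,
      "netAttachDefName": "ns1/localnet-network",
      "excludeSubnets": "10.100.200.0/29"
    }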
In the following multiple interfaces example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as a secondary network.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-br1-multiple-networks
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
    - name: ovs-br1
      description: |-
        A dedicated OVS bridge with eth1 as a port
        allowing all VLANs and untagged traffic
      type: ovs-bridge
      state: up
      bridge:
        allow-extra-patch-ports: true
        options:
          stp: false
          mcast-snooping-enable: true
        port:
        - name: eth1
    ovn:
      bridge-mappings:
      - localnet: localnet2
        bridge: ovs-br1
        state: present
where:
metadata.name:: Specifies the name of the configuration object.
node-role.kubernetes.io/worker:: Specifies a node selector that identifies the nodes to which the node network configuration policy applies.
desiredState.interfaces.name:: Specifies a new OVS bridge that operates separately from the default bridge used by OVN-Kubernetes for cluster traffic.
options.mcast-snooping-enable:: Specifies whether to enable multicast snooping. When enabled, multicast snooping prevents network devices from flooding multicast traffic to all network members. By default, an OVS bridge does not enable multicast snooping. The default value is false.
bridge.port.name:: Specifies the network device on the host system to associate with the new OVS bridge.
ovn.bridge-mappings.localnet:: Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the spec.config.name field in the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes secondary network.
ovn.bridge-mappings.bridge:: Specifies the name of the OVS bridge on the node. The value is required only when state: present is set.
ovn.bridge-mappings.state:: Specifies the state of the mapping. Valid values are present to add the bridge or absent to remove the bridge. The default value is present.
The following JSON example configures a localnet secondary network that is named localnet2. Note that the value for the mtu parameter must match the MTU value that was set for the eth1 secondary network interface.
{
  "cniVersion": "0.3.1",
  "name": "localnet2",
  "type": "ovn-k8s-cni-overlay",
  "topology": "localnet",
  "physicalNetworkName": "localnet2",
  "subnets": "202.10.130.112/28",
  "vlanID": 33,
  "mtu": 1500,
  "netAttachDefName": "ns1/localnet-network",
  "excludeSubnets": "10.100.200.0/29"
}
You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation.
The following example provisions a pod with a secondary attachment to the l2-network attachment configuration presented in this guide:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: l2-network
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
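After the pod is running, one way to inspect the attachment is to read the network-status annotation that Multus writes back to the pod, for example:
$ oc get pod tinypod -n ns1 -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'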
You can configure pods with a static IP address, as in the following example:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "l2-network",
        "mac": "02:03:04:05:06:07",
        "interface": "myiface1",
        "ips": [
          "192.0.2.20/24"
        ]
      }
    ]'
  name: tinypod
  namespace: ns1
spec:
  containers:
  - args:
    - pause
    image: k8s.gcr.io/e2e-test-images/agnhost:2.36
    imagePullPolicy: IfNotPresent
    name: agnhost-container
where:
k8s.v1.cni.cncf.io/networks.name: The name of the network. This value must be unique across all NetworkAttachmentDefinition CRDs.
k8s.v1.cni.cncf.io/networks.mac: The MAC address to assign to the interface.
k8s.v1.cni.cncf.io/networks.interface: The name of the network interface to create for the pod.
k8s.v1.cni.cncf.io/networks.ips: The IP addresses to assign to the network interface.
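To confirm the static address assignment, you might exec into the pod, assuming the ip utility is available in the container image:
$ oc exec -n ns1 tinypod -- ip addr show myiface1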