You can use the Cluster Network Operator (CNO) to deploy and manage cluster network components on an OKD cluster, including the Container Network Interface (CNI) network plugin selected for the cluster during installation.
The Cluster Network Operator implements the network API from the operator.openshift.io API group. The Operator deploys the OVN-Kubernetes network plugin, or the network provider plugin that you selected during cluster installation, by using a daemon set.
The Cluster Network Operator is deployed during installation as a Kubernetes Deployment.
Run the following command to view the Deployment status:
$ oc get -n openshift-network-operator deployment/network-operator
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
network-operator   1/1     1            1           56m
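If the Operator deployment is still rolling out, you can wait for it to complete. The following command is a minimal sketch that assumes the default openshift-network-operator namespace and blocks until the Deployment reports success:
$ oc rollout status -n openshift-network-operator deployment/network-operator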
Run the following command to view the state of the Cluster Network Operator:
$ oc get clusteroperator/network
NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
network   4.5.4     True        False         False      50m
The following fields provide information about the status of the Operator: AVAILABLE, PROGRESSING, and DEGRADED. The AVAILABLE field is True when the Cluster Network Operator reports an available status condition.
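If you need a script-friendly check, one option is to read a single condition with a JSONPath expression. The following sketch queries the Available condition and prints True or False:
$ oc get clusteroperator/network -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'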
Every new OKD installation has a network.config object named cluster.
Use the oc describe command to view the cluster network configuration:
$ oc describe network.config/cluster
Name:         cluster
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         Network
Metadata:
  Self Link:  /apis/config.openshift.io/v1/networks/cluster
Spec: (1)
  Cluster Network:
    Cidr:         10.128.0.0/14
    Host Prefix:  23
  Network Type:   OpenShiftSDN
  Service Network:
    172.30.0.0/16
Status: (2)
  Cluster Network:
    Cidr:               10.128.0.0/14
    Host Prefix:        23
  Cluster Network MTU:  8951
  Network Type:         OpenShiftSDN
  Service Network:
    172.30.0.0/16
Events: <none>
(1) The Spec field displays the configured state of the cluster network.
(2) The Status field displays the current state of the cluster network configuration.
You can inspect the status and view the details of the Cluster Network Operator by using the oc describe command.
Run the following command to view the status of the Cluster Network Operator:
$ oc describe clusteroperators/network
From OKD 4.14 onward, global IP address forwarding is disabled on OVN-Kubernetes based cluster deployments to prevent undesirable effects for cluster administrators with nodes acting as routers. However, in some cases where an administrator expects traffic to be forwarded, a new configuration parameter, ipForwarding, is available to allow forwarding of all IP traffic.
To re-enable IP forwarding for all traffic on OVN-Kubernetes managed interfaces, set the gatewayConfig.ipForwarding specification in the Cluster Network Operator to Global by following this procedure:
Back up the existing network configuration by running the following command:
$ oc get network.operator cluster -o yaml > network-config-backup.yaml
Run the following command to modify the existing network configuration:
$ oc edit network.operator cluster
Add or update the following block under spec, as illustrated in the following example:
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
  clusterNetworkMTU: 8900
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        ipForwarding: Global
Save and close the file.
After you save the changes, the Cluster Network Operator (CNO) applies the update across the cluster. You can monitor the progress by using the following command:
$ oc get clusteroperators network
The status should eventually report Available=True, Progressing=False, and Degraded=False.
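To confirm that the new value was persisted, one option is to read the field back from the Operator configuration. The following sketch assumes the ipForwarding field was set as shown above and prints Global when the change is in place:
$ oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}'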
Alternatively, you can enable IP forwarding globally by running the following command:
$ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}'
The other valid option for this parameter is Restricted, which is the default and limits IP forwarding to Kubernetes-related traffic.
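If you later need to restore the default behavior, a similar patch can set the parameter back to Restricted. The following command is a sketch of the reverse operation, not a required step:
$ oc patch network.operator cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Restricted"}}}}}'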
You can view Cluster Network Operator logs by using the oc logs command.
Run the following command to view the logs of the Cluster Network Operator:
$ oc logs --namespace=openshift-network-operator deployment/network-operator
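To stream new log entries as they are written, or to limit output to recent entries, you can combine the standard oc logs flags with the same deployment reference. For example, the following sketch follows the log and restricts it to the last hour:
$ oc logs -f --since=1h --namespace=openshift-network-operator deployment/network-operator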
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
After cluster installation, you can only modify the clusterNetwork IP address range.
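Because these values are inherited from the cluster-scoped network configuration, you can compare them against the source object. The following sketch reads the corresponding fields from the network.config object named cluster:
$ oc get network.config cluster -o jsonpath='{.spec.clusterNetwork}{"\n"}{.spec.serviceNetwork}{"\n"}{.spec.networkType}{"\n"}'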
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description
---|---|---
metadata.name | string | The name of the CNO object. This name is always cluster.
spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example, a cidr of 10.128.0.0/14 with a hostPrefix of 23.
spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example, 172.30.0.0/16. This value is read-only and inherited from the Network.config.openshift.io object named cluster during cluster installation.
spec.defaultNetwork | object | Configures the network plugin for the cluster network.
spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
The values for the defaultNetwork object are defined in the following table:
Field | Type | Description
---|---|---
type | string | Either OpenShiftSDN or OVNKubernetes. The cluster network plugin is selected during installation and cannot be changed after cluster installation.
openshiftSDNConfig | object | This object is only valid for the OpenShift SDN network plugin.
ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes network plugin.
The following table describes the configuration fields for the OpenShift SDN network plugin:
Field | Type | Description
---|---|---
mode | string | The network isolation mode for OpenShift SDN.
mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally configured automatically.
vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789.
Example OpenShift SDN configuration:
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description
---|---|---
mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This value is normally configured automatically.
genevePort | integer | The UDP port for the Geneve overlay network.
ipsecConfig | object | If the field is present, IPsec is enabled for the cluster.
policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.
gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
v4InternalSubnet | string | If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 address block, you can specify a different IP address range for internal use by OVN-Kubernetes. This field cannot be changed after installation. The default value is 100.64.0.0/16.
v6InternalSubnet | string | If your existing network infrastructure overlaps with the fd98::/48 IPv6 address block, you can specify a different IP address range for internal use by OVN-Kubernetes. This field cannot be changed after installation. The default value is fd98::/48.
The values for the policyAuditConfig object are defined in the following table:
Field | Type | Description
---|---|---
rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second.
maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB.
maxLogFiles | integer | The maximum number of log files that are retained.
destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (do not send the audit logs to any additional target).
syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0.
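As a sketch of how the audit logging fields fit together, the following fragment shows a possible policyAuditConfig block inside ovnKubernetesConfig. The values are illustrative, not recommendations:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    policyAuditConfig:
      rateLimit: 20
      maxLogFiles: 5
      destination: "null"
      syslogFacility: local0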
The values for the gatewayConfig object are defined in the following table:
Field | Type | Description
---|---|---
routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
ipForwarding | object | You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification. Specify Restricted to only allow IP forwarding for Kubernetes-related traffic, or Global to allow forwarding of all IP traffic. The default value is Restricted.
Example OVN-Kubernetes configuration with IPsec enabled:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
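Similarly, the gatewayConfig fields slot into the same ovnKubernetesConfig object. The following fragment is a sketch that routes egress traffic through the host networking stack and leaves IP forwarding at its restricted default; adjust the values for your environment:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true
      ipForwarding: Restricted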
The values for the kubeProxyConfig object are defined in the following table:
Field | Type | Description
---|---|---
iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h.
proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h.
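For clusters that use the OpenShift SDN plugin, where the kube-proxy configuration takes effect, these fields look like the following sketch. The durations shown are illustrative defaults:
kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s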
A complete CNO configuration is specified in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
  clusterNetworkMTU: 8900