As a cluster administrator, you can roll back to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin if the migration to OVN-Kubernetes is unsuccessful.
To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin.
As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.
While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.
You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode.
You installed the OpenShift CLI (oc).
You have access to the cluster as a user with the cluster-admin role.
You have a recent backup of the etcd database.
You can manually reboot each node.
You checked that your cluster is in a known good state without any errors.
You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms.
You set all timeouts for webhooks to 3 seconds or removed the webhooks.
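To review the current webhook timeouts before you begin, a check similar to the following can help. This is a minimal sketch that assumes the jq utility is installed; webhooks that print null use the API default of 10 seconds:
$ oc get validatingwebhookconfigurations,mutatingwebhookconfigurations -o json | \
    jq -r '.items[] | .metadata.name as $config | .webhooks[]? | "\($config) \(.name) timeoutSeconds=\(.timeoutSeconds)"'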
To back up the configuration for the cluster network, enter the following command:
$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml
Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command:
#!/bin/bash
if [ -n "$OVN_SDN_MIGRATION_TIMEOUT" ] && [ "$OVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then
unset OVN_SDN_MIGRATION_TIMEOUT
fi
# Loops the timeout command of the script to repeatedly check the cluster Operators until all are available.
co_timeout=${OVN_SDN_MIGRATION_TIMEOUT:-1200s}
timeout "$co_timeout" bash <<EOT
until
oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \
oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \
oc wait co --all --for='condition=DEGRADED=False' --timeout=10s;
do
sleep 10
echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False";
done
EOT
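If the wait loop times out, inspecting the conditions of a single cluster Operator can help pinpoint the problem. For example, to print the conditions of the network Operator:
$ oc get co network -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'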
Remove any existing migration configuration from the Cluster Network Operator (CNO) configuration object by running the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
--patch '{"spec":{"migration":null}}'
Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps:
Check that the existing NNCP CR bonded the primary network interface to your cluster by entering the following command:
$ oc get nncp
NAME STATUS REASON
bondmaster0 Available SuccessfullyConfigured
NetworkManager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. A sketch for inspecting this path on a node is shown after the removal step.
Remove the NNCP from your cluster:
$ oc delete nncp <nncp_manifest_filename>
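Optionally, to inspect the connection profiles that NetworkManager stores in the previously noted system path, you can start a debug pod on a node. This is a sketch; <node_name> is a placeholder for one of your node names:
$ oc debug node/<node_name> -- chroot /host ls /etc/NetworkManager/system-connections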
To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
--patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }'
This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment.
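Optionally, confirm that the migration field is set. The expected output of the following command is OVNKubernetes:
$ oc get Network.operator.openshift.io cluster -o jsonpath='{.spec.migration.networkType}{"\n"}'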
Check that the reboot is finished by running the following command:
$ oc get mcp
Check that all cluster Operators are available by running the following command:
$ oc get co
Optional: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents:
Egress IPs
Egress firewall
Multicast
To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
--patch '{
"spec": {
"migration": {
"networkType": "OVNKubernetes",
"features": {
"egressIP": <bool>,
"egressFirewall": <bool>,
"multicast": <bool>
}
}
}
}'
where:
bool: Specifies whether to enable migration of the feature. The default is true.
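For example, to migrate to OVN-Kubernetes while keeping egress IP configuration from being migrated automatically, you might run a patch similar to the following:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
  --patch '{
    "spec": {
      "migration": {
        "networkType": "OVNKubernetes",
        "features": {
          "egressIP": false
        }
      }
    }
  }'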
Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements:
Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step:
If you use the default MTU, and you want to keep the default MTU during migration, you can skip this step.
If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step.
This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step.
OpenShift SDN and OVN-Kubernetes have different overlay overhead. Select MTU values by following the guidelines found on the "MTU value selection" page.
Geneve (Generic Network Virtualization Encapsulation) overlay network port
OVN-Kubernetes IPv4 internal subnet
To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch.
$ oc patch Network.operator.openshift.io cluster --type=merge \
--patch '{
"spec":{
"defaultNetwork":{
"ovnKubernetesConfig":{
"mtu":<mtu>,
"genevePort":<port>,
"v4InternalSubnet":"<ipv4_subnet>",
}}}}'
where:
mtu: The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.
port: The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789.
ipv4_subnet: An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16.
Example patch command to update the mtu field:
$ oc patch Network.operator.openshift.io cluster --type=merge \
--patch '{
"spec":{
"defaultNetwork":{
"ovnKubernetesConfig":{
"mtu":1200
}}}}'
As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get mcp
A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
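To follow the rollout as it progresses, you can watch the machine config pools instead of polling:
$ oc get mcp -w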
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"
kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done
Verify that the following statements are true:
The value of the machineconfiguration.openshift.io/state field is Done.
The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.
To list the pods, enter the following command:
$ oc get pod -n openshift-machine-config-operator
NAME READY STATUS RESTARTS AGE
machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m
machine-config-daemon-5cf4b 2/2 Running 0 43h
machine-config-daemon-7wzcd 2/2 Running 0 43h
machine-config-daemon-fc946 2/2 Running 0 43h
machine-config-daemon-g2v28 2/2 Running 0 43h
machine-config-daemon-gcl4f 2/2 Running 0 43h
machine-config-daemon-l5tnv 2/2 Running 0 43h
machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m
machine-config-server-bsc8h 1/1 Running 0 43h
machine-config-server-hklrm 1/1 Running 0 43h
machine-config-server-k9rtx 1/1 Running 0 43h
The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five-character alphanumeric sequence.
Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:
$ oc logs <pod> -n openshift-machine-config-operator
where <pod> is the name of a machine config daemon pod.
Resolve any errors in the logs shown by the output from the previous command.
To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:
To specify the network provider without changing the cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
--type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'
To specify a different cluster network IP address block, enter the following command:
$ oc patch Network.config.openshift.io cluster \
--type='merge' --patch '{
"spec": {
"clusterNetwork": [
{
"cidr": "<cidr>",
"hostPrefix": <prefix>
}
],
"networkType": "OVNKubernetes"
}
}'
where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.
You cannot change the service network address block during the migration.
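For example, the following sketch keeps the common default cluster network CIDR of 10.128.0.0/14 with a /23 slice per node. The values are illustrative only; substitute ranges that match your environment:
$ oc patch Network.config.openshift.io cluster \
    --type='merge' --patch '{
      "spec": {
        "clusterNetwork": [
          {
            "cidr": "10.128.0.0/14",
            "hostPrefix": 23
          }
        ],
        "networkType": "OVNKubernetes"
      }
    }'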
Verify that the Multus daemon set rollout is complete before continuing with subsequent steps:
$ oc -n openshift-multus rollout status daemonset/multus
The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.
Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
...
Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
daemon set "multus" successfully rolled out
To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches:
The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes.
With the oc rsh command, you can use a bash script similar to the following:
#!/bin/bash
readarray -t POD_NODES <<< "$(oc get pod -n openshift-machine-config-operator -o wide | grep daemon | awk '{print $1" "$7}')"
for i in "${POD_NODES[@]}"
do
read -r POD NODE <<< "$i"
until oc rsh -n openshift-machine-config-operator "$POD" chroot /rootfs shutdown -r +1
do
echo "cannot reboot node $NODE, retry" && sleep 3
done
done
With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password.
#!/bin/bash
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
echo "reboot node $ip"
ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
Confirm that the migration succeeded:
To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes.
$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
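Example output:
OVNKubernetes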
To confirm that the cluster nodes are in the Ready state, enter the following command:
$ oc get nodes
To confirm that your pods are not in an error state, enter the following command:
$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'
If pods on a node are in an error state, reboot that node.
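To narrow the output to pods that are not in the Running phase, a field selector can help. Note that this also lists pods that completed successfully, such as finished jobs:
$ oc get pods --all-namespaces --field-selector=status.phase!=Running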
To confirm that no cluster Operator is in an abnormal state, enter the following command:
$ oc get co
The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or is degraded, check the logs for the cluster Operator for more information.
Complete the following steps only if the migration succeeds and your cluster is in a good state:
To remove the migration configuration from the CNO configuration object, enter the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
--patch '{ "spec": { "migration": null } }'
To remove custom configuration for the OpenShift SDN network provider, enter the following command:
$ oc patch Network.operator.openshift.io cluster --type='merge' \
--patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }'
To remove the OpenShift SDN network provider namespace, enter the following command:
$ oc delete namespace openshift-sdn
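To verify that the namespace was removed, the following command must return a NotFound error:
$ oc get namespace openshift-sdn
Error from server (NotFound): namespaces "openshift-sdn" not found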
Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking".