Updating the operating system (OS) on a host, by either upgrading across major releases or updating the system software for a minor release, can impact the OKD software running on those machines. In particular, these updates can affect the iptables rules or ovs flows that OKD requires to operate.
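Before beginning, it can be useful to snapshot the current iptables rules and ovs flows so they can be compared after the upgrade. A minimal sketch, assuming the default OpenShift SDN bridge name br0:
# iptables-save > /tmp/iptables.before
# ovs-ofctl -O OpenFlow13 dump-flows br0 > /tmp/ovs-flows.before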
To safely upgrade the OS on a host:
Drain the node in preparation for maintenance:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
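To confirm that the node is cordoned and its pods have been evacuated, list the node and any pods still scheduled to it. A sketch, assuming a client that supports --field-selector:
$ oc get node <node_name>
$ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>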
To protect sensitive packages that do not need to be updated, apply the exclude rules to the host:
# atomic-openshift-docker-excluder exclude
# atomic-openshift-excluder exclude
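The excluders work by adding exclude entries to the yum configuration. To verify that the rules were applied, you can inspect that configuration; a sketch, assuming the excluders write to /etc/yum.conf:
# grep ^exclude /etc/yum.conf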
Update the host packages and reboot the host. A reboot ensures that the host is running the newest versions, and that the container engine and OKD processes have been restarted, which forces them to check that all of the rules in other services are correct.
# yum update
# reboot
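After the reboot, wait for the node to report Ready before proceeding. One way to poll for this, sketched here:
$ until oc get node <node_name> | grep -w Ready; do sleep 5; done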
However, instead of rebooting a node host, you can restart the services that are affected or preserve the iptables state. Both processes are described in the OKD iptables topic. The ovs flow rules do not need to be saved, but restarting the OKD node software fixes the flow rules.
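For example, restarting the affected services on a node might look like the following; a sketch, assuming the docker container engine and the origin-node service unit:
# systemctl restart docker
# systemctl restart origin-node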
Configure the host to be schedulable again:
$ oc adm uncordon <node_name>
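You can verify that the node is schedulable again; its STATUS should show Ready without SchedulingDisabled:
$ oc get node <node_name>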
If using OpenShift Container Storage, upgrade the OKD nodes running OpenShift Container Storage one at a time.
To begin, recall the project in which OpenShift Container Storage was deployed.
Confirm the node and pod selectors configured on the service’s daemonset.
$ oc get daemonset -n <project_name> -o wide
These selectors are found under the NODE-SELECTOR and SELECTOR columns, respectively. The example commands below use glusterfs=storage-host and glusterfs=storage-pod, respectively.
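Alternatively, both selectors can be read directly from the daemonset object; a sketch, assuming a daemonset named glusterfs-storage:
$ oc get daemonset glusterfs-storage -n <project_name> -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}{.spec.selector.matchLabels}{"\n"}'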
Given the daemonset’s node selector, confirm which hosts have the label, and hence are running pods from the daemonset:
$ oc get nodes --selector=glusterfs=storage-host
Choose a node which will have its operating system upgraded.
Remove the daemonset label from the node:
$ oc label node <node_name> glusterfs-
This will cause the OpenShift Container Storage pod to terminate on that node.
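You can watch the pod terminate using the daemonset's pod selector; for example:
$ oc get pod -n <project_name> --selector=glusterfs=storage-pod -o wide -w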
The node can now have its OS upgraded as described above.
To restart an OpenShift Container Storage pod on the node, relabel the node with the daemonset label:
$ oc label node <node_name> glusterfs=storage-host
Wait for the OpenShift Container Storage pod to respawn and appear.
Given the daemonset’s pod selector, determine the name of the newly spawned pod by searching for a pod running on the node whose OS you upgraded:
$ oc get pod -n <project_name> --selector=glusterfs=storage-pod -o wide
Use oc rsh to enter the gluster pod and check the volume heal status:
$ oc rsh <pod_name>
$ for vol in `gluster volume list`; do gluster volume heal $vol info; done
$ exit
Ensure all of the volumes are healed and there are no outstanding tasks. The heal info command lists all pending entries for a given volume's heal process. A volume is considered healed when Number of entries for that volume is 0.
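To scan every volume at once for pending entries, the heal output can be filtered down to the entry counts; a sketch, run from the same pod shell as above:
$ for vol in `gluster volume list`; do gluster volume heal $vol info | grep 'Number of entries'; done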
Use gluster volume status <volume_name> for additional details about the volume. The Online state should be marked Y for all bricks.
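The per-brick Online state can also be checked with the detail form of the status command; for example:
$ gluster volume status <volume_name> detail | grep Online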