As the key-value store for OKD, etcd persists the state of all resource objects.
Back up the etcd data for your cluster regularly and store it in a secure location, ideally outside the OKD environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup before you update your cluster. This is important because, when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.17.5 cluster must use an etcd backup that was taken from 4.17.5.
Back up your cluster’s etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host.
After you have an etcd backup, you can restore to a previous cluster state.
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.
You have access to the cluster as a user with the cluster-admin role.
You have checked whether the cluster-wide proxy is enabled.
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
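For reference, a minimal sketch of such a check follows; the fields print as empty values when no cluster-wide proxy is configured:
$ oc get proxy cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}{.spec.noProxy}{"\n"}'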
Start a debug session as root for a control plane node:
$ oc debug --as-root node/<node_name>
Change your root directory to /host in the debug shell:
sh-4.4# chroot /host
If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands:
$ export HTTP_PROXY=http://<your_proxy.example.com>:8080
$ export HTTPS_PROXY=https://<your_proxy.example.com>:8080
$ export NO_PROXY=<example.com>
Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to.
The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3
ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1
etcdctl version: 3.4.14
API version: 3.4
{"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"}
{"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459}
{"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db
{"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336}
snapshot db and kube resources are successfully saved to /home/core/assets/backup
In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:
snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.
static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.
If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
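As an optional sanity check, you can inspect the saved snapshot with etcdctl from the same debug shell. This is an illustrative sketch rather than a documented step, and the hash, revision, and size values depend on your cluster:
sh-4.4# etcdctl snapshot status /home/core/assets/backup/snapshot_<datetimestamp>.db -w table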
The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers.
Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Follow these steps to enable automated backups for etcd.
Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. Do not enable this feature set on production clusters.
You have access to the cluster as a user with the cluster-admin role.
You have access to the OpenShift CLI (oc).
Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents:
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
name: cluster
spec:
featureSet: TechPreviewNoUpgrade
Apply the CR and enable automated backups:
$ oc apply -f enable-tech-preview-no-upgrade.yaml
It takes time to enable the related APIs. Verify the creation of the custom resource definition (CRD) by running the following command:
$ oc get crd | grep backup
backups.config.openshift.io 2023-10-25T13:32:43Z
etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z
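You can also confirm that the feature set is active by reading it back from the FeatureGate object. This is an optional, illustrative check; the output follows from the CR applied above:
$ oc get featuregate cluster -o jsonpath='{.spec.featureSet}{"\n"}'
TechPreviewNoUpgrade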
Follow these steps to create a single etcd backup by creating and applying a custom resource (CR).
You have access to the cluster as a user with the cluster-admin role.
You have access to the OpenShift CLI (oc).
If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup:
Create a persistent volume claim (PVC) file named etcd-backup-pvc.yaml with contents such as the following example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: etcd-backup-pvc
namespace: openshift-etcd
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi (1)
volumeMode: Filesystem
(1) The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Verify the creation of the PVC by running the following command:
$ oc get pvc
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Pending                                      51s
Dynamic PVCs stay in the Pending state until they are mounted.
Create a CR file named etcd-single-backup.yaml with contents such as the following example:
apiVersion: operator.openshift.io/v1alpha1
kind: EtcdBackup
metadata:
name: etcd-single-backup
namespace: openshift-etcd
spec:
pvcName: etcd-backup-pvc (1)
(1) The name of the PVC to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:
$ oc apply -f etcd-single-backup.yaml
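To see whether the backup ran, one option is to read back the status of the EtcdBackup CR. This is a hedged sketch: reading the CR is safe, but the exact status fields are owned by the Technology Preview backup controller and can change:
$ oc get etcdbackup etcd-single-backup -n openshift-etcd -o yaml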
If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup:
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Apply the StorageClass CR by running the following command:
$ oc apply -f etcd-backup-local-storage.yaml
Create a persistent volume (PV) file named etcd-backup-pv-fs.yaml with contents such as the following example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: etcd-backup-pv-fs
spec:
capacity:
storage: 100Gi (1)
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: etcd-backup-local-storage
local:
path: /mnt
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <example_master_node> (2)
(1) The amount of storage available to the PV. Adjust this value for your requirements.
(2) Replace this value with the node to attach this PV to.
Verify the creation of the PV by running the following command:
$ oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s
Create a PVC file named etcd-backup-pvc.yaml with contents such as the following example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: etcd-backup-pvc
namespace: openshift-etcd
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Gi (1)
(1) The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Create a CR file named etcd-single-backup.yaml with contents such as the following example:
apiVersion: operator.openshift.io/v1alpha1
kind: EtcdBackup
metadata:
name: etcd-single-backup
namespace: openshift-etcd
spec:
pvcName: etcd-backup-pvc (1)
(1) The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:
$ oc apply -f etcd-single-backup.yaml
Follow these steps to create automated recurring backups of etcd.
Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible.
You have access to the cluster as a user with the cluster-admin role.
You have access to the OpenShift CLI (oc).
If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups:
Create a persistent volume claim (PVC) file named etcd-backup-pvc.yaml with contents such as the following example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: etcd-backup-pvc
namespace: openshift-etcd
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi (1)
volumeMode: Filesystem
storageClassName: etcd-backup-local-storage
(1) The amount of storage available to the PVC. Adjust this value for your requirements.
Depending on your storage provider, you might need to change the accessModes and storageClassName keys in this PVC. Adjust these values for your provider.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Verify the creation of the PVC by running the following command:
$ oc get pvc
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Pending                                      51s
Dynamic PVCs stay in the Pending state until they are mounted.
If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps:
If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data.
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Apply the StorageClass CR by running the following command:
$ oc apply -f etcd-backup-local-storage.yaml
Create a persistent volume (PV) file named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: etcd-backup-pv-fs
spec:
capacity:
storage: 100Gi (1)
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Delete
storageClassName: etcd-backup-local-storage
local:
path: /mnt/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <example_master_node> (2)
(1) The amount of storage available to the PV. Adjust this value for your requirements.
(2) Replace this value with the name of the control plane node to attach this PV to.
Run the following command to list the available nodes:
$ oc get nodes
Verify the creation of the PV by running the following command:
$ oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s
Create a PVC file named etcd-backup-pvc.yaml with contents such as the following example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: etcd-backup-pvc
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 10Gi (1)
storageClassName: etcd-backup-local-storage
(1) The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Create a custom resource (CR) file named etcd-recurring-backup.yaml. The contents of the created CR define the schedule and retention type of automated backups.
For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example:
apiVersion: config.openshift.io/v1alpha1
kind: Backup
metadata:
name: etcd-recurring-backup
spec:
etcd:
schedule: "20 4 * * *" (1)
timeZone: "UTC"
pvcName: etcd-backup-pvc
(1) The cron schedule for recurring backups. In this example, a backup starts daily at 04:20 UTC. Adjust this value for your needs.
To use retention based on the maximum number of backups, add the following key-value pairs to the etcd key:
spec:
etcd:
retentionPolicy:
retentionType: RetentionNumber (1)
retentionNumber:
maxNumberOfBackups: 5 (2)
(1) The retention type. Defaults to RetentionNumber if unspecified.
(2) The maximum number of backups to retain. Adjust this value for your needs. Defaults to 15 backups if unspecified.
A known issue causes the number of retained backups to be one greater than the configured value.
For retention based on the file size of backups, use the following:
spec:
etcd:
retentionPolicy:
retentionType: RetentionSize
retentionSize:
maxSizeOfBackupsGb: 20 (1)
(1) The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified.
A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value.
Create the cron job defined by the CR by running the following command:
$ oc create -f etcd-recurring-backup.yaml
To find the created cron job, run the following command:
$ oc get cronjob -n openshift-etcd
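Illustrative output, under the assumption that the controller names the cron job after the Backup CR; the actual name on your cluster can differ:
NAME                    SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
etcd-recurring-backup   20 4 * * *   False     0        <none>          3m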
The process to replace a single unhealthy etcd member depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or because the etcd pod is crashlooping.
If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure. If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
You can identify if your cluster has an unhealthy etcd member.
You have access to the cluster as a user with the cluster-admin role.
You have taken an etcd backup. For more information, see "Backing up etcd data".
Check the status of the EtcdMembersAvailable status condition using the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
Review the output:
2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy
This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
The machine is not running or the node is not ready
The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
You have access to the cluster as a user with the cluster-admin role.
You have identified an unhealthy etcd member.
Determine if the machine is not running:
$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running
ip-10-0-131-183.ec2.internal stopped (1)
(1) This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
If either of the following scenarios are true, then the node is not ready.
If the machine is running, then check whether the node is unreachable:
$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable
ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable (1)
(1) If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:
$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
ip-10-0-131-183.ec2.internal NotReady master 122m v1.32.3 (1)
(1) If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
Verify that all control plane nodes are listed as Ready:
$ oc get nodes -l node-role.kubernetes.io/master
NAME STATUS ROLES AGE VERSION
ip-10-0-131-183.ec2.internal Ready master 6h13m v1.32.3
ip-10-0-164-97.ec2.internal Ready master 6h13m v1.32.3
ip-10-0-154-204.ec2.internal Ready master 6h13m v1.32.3
Check whether the status of an etcd pod is either Error or CrashLoopBackOff:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m (1)
etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m
etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
(1) Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure.
You have identified the unhealthy etcd member.
You have verified that either the machine is not running or the node is not ready.
If you power off other control plane nodes, they must remain powered off until the replacement of the unhealthy etcd member is complete.
You have access to the cluster as a user with the cluster-admin role.
You have taken an etcd backup.
Before you perform this procedure, take an etcd backup so that you can restore your cluster if you experience any issues.
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m
etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m
etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
Take note of the ID and the name of the unhealthy etcd member because these values are needed later in the procedure. The etcdctl endpoint health command lists the removed member until the replacement procedure is finished and a new member is added.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 6fc1e7c9db35841d
Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change.
etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure.
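While the quorum guard is off and the static pods roll out, an optional way to keep an eye on the etcd cluster Operator is the standard cluster Operator listing. This is an illustrative check, not a required step:
$ oc get clusteroperator etcd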
Delete the affected node by running the following command:
$ oc delete node <node_name>
For example:
$ oc delete node ip-10-0-131-183.ec2.internal
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed:
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal (1)
(1) Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
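As an optional check, and an illustrative sketch rather than a documented step, confirm that no secrets for the removed member remain; the command should return no output:
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal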
Check whether a control plane machine set exists by entering the following command:
$ oc -n openshift-machine-api get controlplanemachineset
If the control plane machine set exists, delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. For more information, see "Replacing an unhealthy etcd member whose machine is not running or whose node is not ready".
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane by using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
(1) This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
(1) Specify the name of the control plane machine for the unhealthy node.
A new machine is automatically provisioned after deleting the machine of the unhealthy member.
Verify that a new machine was created:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
(1) The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator automatically syncs when the machine or node returns to a healthy state.
Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone.
If the control plane machine set does not exist, delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane by using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
(1) This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \ (1)
-n openshift-machine-api \
-o yaml \
> new-master-machine.yaml
(1) Specify the name of the control plane machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.
Remove the entire status section:
status:
addresses:
- address: 10.0.131.183
type: InternalIP
- address: ip-10-0-131-183.ec2.internal
type: InternalDNS
- address: ip-10-0-131-183.ec2.internal
type: Hostname
lastUpdated: "2020-04-20T17:44:29Z"
nodeRef:
kind: Node
name: ip-10-0-131-183.ec2.internal
uid: acca4411-af0d-4387-b73e-52b2484295ad
phase: Running
providerStatus:
apiVersion: awsproviderconfig.openshift.io/v1beta1
conditions:
- lastProbeTime: "2020-04-20T16:53:50Z"
lastTransitionTime: "2020-04-20T16:53:50Z"
message: machine successfully created
reason: MachineCreationSucceeded
status: "True"
type: MachineCreation
instanceId: i-0fdb85790d76d0c3f
instanceState: stopped
kind: AWSMachineProviderStatus
Change the metadata.name field to a new name.
Keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.
For example:
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
...
name: clustername-8qw5l-master-3
...
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
(1) Specify the name of the control plane machine for the unhealthy node.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
Create the new machine by using the new-master-machine.yaml file:
$ oc apply -f new-master-machine.yaml
Verify that the new machine was created:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
(1) The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator automatically syncs when the machine or node returns to a healthy state.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:
$ oc get etcd/cluster -oyaml
If you are using single-node OpenShift, restart the node. Otherwise, you might experience the following error in the etcd cluster Operator:
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s
etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m
etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m
If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
(1) The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
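To watch the redeployment progress, a jsonpath query against the etcd cluster Operator status can help. This is an optional sketch that reads the NodeInstallerProgressing condition:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'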
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
You have identified the unhealthy etcd member.
You have verified that the etcd pod is crashlooping.
You have access to the cluster as a user with the cluster-admin role.
You have taken an etcd backup.
It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc debug node/ip-10-0-131-183.ec2.internal (1)
(1) Replace this with the name of the unhealthy node.
Change your root directory to /host:
sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:
sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:
sh-4.2# mv /var/lib/etcd/ /tmp
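Before exiting, you can optionally confirm that the etcd containers on this node have stopped. This check is a sketch borrowed from related recovery procedures, and the output should be empty:
sh-4.2# crictl ps | grep etcd | egrep -v "operator|etcd-guard"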
You can now exit the node shell.
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m
etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m
etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 62bcf33650a7170a
Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed:
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal (1)
(1) Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
(1) The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:
$ oc get etcd/cluster -oyaml
If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
Verify that all members are healthy:
sh-4.2# etcdctl endpoint health
https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane node by using the same method that was used to originally create it.
You have identified the unhealthy bare metal etcd member.
You have verified that either the machine is not running or the node is not ready.
You have access to the cluster as a user with the cluster-admin role.
You have taken an etcd backup.
You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Verify and remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none>
etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none>
etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0
View the member list:
sh-4.2# etcdctl member list -w table
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |           NAME            |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
| 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 |      false |
| cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380  | https://192.168.10.9:2379  |      false |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
sh-4.2# etcdctl member remove 7a8197040a5126c8
Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |           NAME            |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 |      false |
| cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380  | https://192.168.10.9:2379  |      false |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
You can now exit the node shell.
After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
Remove the old secrets for the unhealthy etcd member that was removed by running the following commands.
List the secrets for the unhealthy etcd member that was removed:
$ oc get secrets -n openshift-etcd | grep openshift-control-plane-2 (1)
(1) Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m
etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m
etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd
secret "etcd-peer-openshift-control-plane-2" deleted
Delete the serving secret:
$ oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd
secret "etcd-serving-openshift-control-plane-2" deleted
Delete the metrics secret:
$ oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd
secret "etcd-serving-metrics-openshift-control-plane-2" deleted
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
examplecluster-control-plane-0   Running                                              3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned
examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned
examplecluster-control-plane-2   Running                                              3h11m   openshift-control-plane-2   baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135   externally provisioned (1)
examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned
examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned
(1) This is the control plane machine for the unhealthy node, openshift-control-plane-2.
Ensure that the Bare Metal Operator is available by running the following command:
$ oc get clusteroperator baremetal
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
baremetal 4.0 True False False 3d15h
Remove the old BareMetalHost object by running the following command:
$ oc delete bmh openshift-control-plane-2 -n openshift-machine-api
baremetalhost.metal3.io "openshift-control-plane-2" deleted
Delete the machine of the unhealthy member by running the following command:
$ oc delete machine -n openshift-machine-api examplecluster-control-plane-2
After you remove the BareMetalHost and Machine objects, the Machine controller automatically deletes the Node object.
If deletion of the machine is delayed for any reason or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field.
Do not interrupt machine deletion by pressing Ctrl+C. You must allow the command to proceed to completion.
A new machine is automatically provisioned after deleting the machine of the unhealthy member.
Edit the machine configuration by running the following command:
$ oc edit machine -n openshift-machine-api examplecluster-control-plane-2
Delete the following fields in the Machine
custom resource, and then save the updated file:
finalizers:
- machine.machine.openshift.io
machine.machine.openshift.io/examplecluster-control-plane-2 edited
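If you prefer to remove the finalizer non-interactively, a patch such as the following is a hedged alternative to oc edit; the JSON patch path assumes that finalizers is the only field blocking deletion:
$ oc patch machine examplecluster-control-plane-2 -n openshift-machine-api --type=json -p '[{"op": "remove", "path": "/metadata/finalizers"}]'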
Verify that the machine was deleted by running the following command:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned
examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned
examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned
examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned
Verify that the node has been deleted by running the following command:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
openshift-control-plane-0 Ready master 3h24m v1.32.3
openshift-control-plane-1 Ready master 3h24m v1.32.3
openshift-compute-0 Ready worker 176m v1.32.3
openshift-compute-1 Ready worker 176m v1.32.3
Create the new BareMetalHost object and the secret to store the BMC credentials:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
name: openshift-control-plane-2-bmc-secret
namespace: openshift-machine-api
data:
password: <password>
username: <username>
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
name: openshift-control-plane-2
namespace: openshift-machine-api
spec:
automatedCleaningMode: disabled
bmc:
address: redfish://10.46.61.18:443/redfish/v1/Systems/1
credentialsName: openshift-control-plane-2-bmc-secret
disableCertificateVerification: true
bootMACAddress: 48:df:37:b0:8a:a0
bootMode: UEFI
externallyProvisioned: false
online: true
rootDeviceHints:
deviceName: /dev/disk/by-id/scsi-<serial_number>
userData:
name: master-user-data-managed
namespace: openshift-machine-api
EOF
The username and password can be found from the other bare metal host’s secrets. The protocol to use in bmc:address can be taken from other bmh objects.
If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true. Existing control plane BareMetalHost objects might have the externallyProvisioned flag set to true if they were provisioned by the installation program.
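Note that values under data in a Secret must be base64-encoded. For example, you might encode each credential before pasting it into the manifest; this is a generic Kubernetes convention rather than an OKD-specific step:
$ echo -n '<password>' | base64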
After the inspection is complete, the BareMetalHost object is created and available to be provisioned.
Verify the creation process using available BareMetalHost objects:
$ oc get bmh -n openshift-machine-api
NAME STATE CONSUMER ONLINE ERROR AGE
openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m
openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m
openshift-control-plane-2 available examplecluster-control-plane-3 true 47m
openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m
openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m
Verify that a new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
examplecluster-control-plane-0   Running                                              3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned
examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned
examplecluster-control-plane-2   Running                                              3h11m   openshift-control-plane-2   baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135   externally provisioned (1)
examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned
examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned
(1) The new machine, examplecluster-control-plane-2, is being created and is ready after the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator automatically syncs when the machine or node returns to a healthy state.
Verify that the bare metal host is provisioned and that no errors are reported by running the following command:
$ oc get bmh -n openshift-machine-api
NAME                        STATE                    CONSUMER                         ONLINE   ERROR   AGE
openshift-control-plane-0   externally provisioned   examplecluster-control-plane-0   true             4h48m
openshift-control-plane-1   externally provisioned   examplecluster-control-plane-1   true             4h48m
openshift-control-plane-2   provisioned              examplecluster-control-plane-3   true             47m
openshift-compute-0         provisioned              examplecluster-compute-0         true             4h48m
openshift-compute-1         provisioned              examplecluster-compute-1         true             4h48m
Verify that the new node is added and in a ready state by running this command:
$ oc get nodes
NAME                        STATUS   ROLES    AGE     VERSION
openshift-control-plane-0   Ready    master   4h26m   v1.32.3
openshift-control-plane-1   Ready    master   4h26m   v1.32.3
openshift-control-plane-2   Ready    master   12m     v1.32.3
openshift-compute-0         Ready    worker   3h58m   v1.32.3
openshift-compute-1         Ready    worker   3h58m   v1.32.3
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides
section is removed from the object by entering this command:
$ oc get etcd/cluster -oyaml
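For a quicker check, you can print only that field; empty output means the override was removed:
$ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'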
If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin
user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
NAME                             READY   STATUS    RESTARTS   AGE
etcd-openshift-control-plane-0   5/5     Running   0          105m
etcd-openshift-control-plane-1   5/5     Running   0          107m
etcd-openshift-control-plane-2   5/5     Running   0          103m
If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin
user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
1 | The forceRedeploymentReason value must be unique, which is why a timestamp is appended. |
To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin
user, run the following command:
$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0
View the member list:
sh-4.2# etcdctl member list -w table
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |           NAME            |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
| 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 |      false |
| cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380  | https://192.168.10.9:2379  |      false |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member. |
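As a hedged sketch of that removal, from the same etcdctl shell you would pass the ID shown in the member list; <member_id> is a placeholder:
sh-4.2# etcdctl member remove <member_id>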
Verify that all etcd members are healthy by running the following command:
# etcdctl endpoint health --cluster
https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms
https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms
https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms
Validate that all nodes are at the latest revision by running the following command:
$ oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
AllNodesAtLatestRevision
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OKD cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
Disaster recovery requires you to have at least one healthy control plane host. |
This solution handles situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. This solution does not require an etcd backup.
If you have a majority of your control plane nodes still available and have an etcd quorum, replace a single unhealthy etcd member. |
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. If you have taken an etcd backup, you can restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort. Before performing a restore, see Restoring a cluster state for more information on the impact to the cluster. |
This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
Testing the restore procedure is important to ensure that your automation and workload handle the new cluster state gracefully. Due to the complex nature of etcd quorum and the etcd Operator attempting to mend automatically, it is often difficult to correctly bring your cluster into a broken enough state that it can be restored.
You must have SSH access to the cluster. Without SSH access, your cluster might be entirely lost. |
You have SSH access to control plane hosts.
You have installed the OpenShift CLI (oc
).
Use SSH to connect to each of your nonrecovery nodes and run the following commands to disable etcd and the kubelet
service:
Disable etcd by running the following command:
$ sudo /usr/local/bin/disable-etcd.sh
Delete variable data for etcd by running the following command:
$ sudo rm -rf /var/lib/etcd
Disable the kubelet
service by running the following command:
$ sudo systemctl disable kubelet.service
Exit every SSH session.
Run the following command to ensure that your nonrecovery nodes are in the NotReady state:
$ oc get nodes
Follow the steps in "Restoring to a previous cluster state" to restore your cluster.
After you restore the cluster and the API responds, use SSH to connect to each nonrecovery node and enable the kubelet
service:
$ sudo systemctl enable kubelet.service
Exit every SSH session.
Run the following command to observe your nodes coming back into the Ready state:
$ oc get nodes
Run the following command to verify that etcd is available:
$ oc get pods -n openshift-etcd
You can use the quorum-restore.sh
script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the OKD API becomes read-only. After quorum is restored, the OKD API returns to read/write mode.
The quorum-restore.sh
script instantly brings back a new single-member etcd cluster based on its local data directory and marks all other members as invalid by retiring the previous cluster identifier. No prior backup is required to restore the control plane.
For high availability (HA) clusters, a three-node HA cluster requires you to shut down etcd on two hosts to avoid a cluster split. On four-node and five-node HA clusters, you must shut down three hosts. Quorum requires a simple majority of nodes. The minimum number of nodes required for quorum on a three-node HA cluster is two. On four-node and five-node HA clusters, the minimum number of nodes required for quorum is three. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service.
You might experience data loss if the host that runs the restoration does not have all data replicated to it. |
Quorum restoration should not be used to decrease the number of nodes outside of the restoration process. Decreasing the number of nodes results in an unsupported cluster configuration. |
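The majority arithmetic behind these quorum sizes can be sanity-checked in any shell; this calculation is illustrative only:
$ n=3; echo $(( n / 2 + 1 ))
2
$ n=5; echo $(( n / 2 + 1 ))
3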
You have SSH access to the node used to restore quorum.
Select a control plane host to use as the recovery host. You run the restore operation on this host.
List the running etcd pods by running the following command:
$ oc get pods -n openshift-etcd -l app=etcd --field-selector="status.phase==Running"
Choose a pod and run the following command to obtain its IP address:
$ oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl endpoint status -w table
Note the IP address of a member that is not a learner and has the highest Raft index.
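If you prefer to select the member programmatically, the JSON output can be filtered with jq. This is a hedged sketch: it assumes jq is installed on your workstation and that your etcdctl version emits the isLearner and raftIndex fields:
$ oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl endpoint status -w json \
  | jq -r 'map(select(.Status.isLearner | not)) | max_by(.Status.raftIndex) | .Endpoint'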
Run the following command and note the node name that corresponds to the IP address of the chosen etcd member:
$ oc get nodes -o jsonpath='{range .items[*]}[{.metadata.name},{.status.addresses[?(@.type=="InternalIP")].address}]{end}'
Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum:
$ sudo -E /usr/local/bin/quorum-restore.sh
After a few minutes, the nodes that went down are automatically synchronized with the node that the recovery script was run on, and any remaining online nodes automatically rejoin the new etcd cluster created by the quorum-restore.sh script.
Exit the SSH session.
Return to a three-node configuration if any nodes are offline. Repeat the following steps for each node that is offline to delete and re-create them. After the machines are re-created, a new revision is forced and etcd automatically scales up.
If you use a user-provisioned bare-metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".
Do not delete and re-create the machine for the recovery host. |
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:
Do not delete and re-create the machine for the recovery host. For bare-metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". |
Obtain the machine for one of the offline nodes.
In a terminal that has access to the cluster as a cluster-admin
user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 | This is the control plane machine for the offline node, ip-10-0-131-183.ec2.internal . |
Delete the machine of the offline node by running:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
1 | Specify the name of the control plane machine for the offline node. |
A new machine is automatically provisioned after deleting the machine of the offline node.
Verify that a new machine has been created by running:
$ oc get machines -n openshift-machine-api -o wide
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 | The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running. |
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically synchronize when the machine or node returns to a healthy state.
Repeat these steps for each node that is offline.
Wait until the control plane recovers by running the following command:
$ oc adm wait-for-stable-cluster
It can take up to 15 minutes for the control plane to recover. |
If you see no progress rolling out the etcd static pods, you can force redeployment from the etcd cluster Operator by running the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
To restore the cluster to a previous state, you must have previously backed up the etcd
data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".
You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:
The cluster has lost the majority of control plane hosts (quorum loss).
An administrator has deleted something critical and must restore to recover the cluster.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. |
Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OKD Operators, including the network Operator.
It can cause Operator churn when the content in etcd does not match the actual content on disk, and Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd can get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues.
In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
You can use a saved etcd backup to restore a previous cluster state on a single node.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. |
Access to the cluster as a user with the cluster-admin
role through a certificate-based kubeconfig
file, like the one that was used during installation.
You have SSH access to control plane hosts.
A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db
and static_kuberesources_<datetimestamp>.tar.gz
.
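For example, a valid backup directory created by the cluster-backup.sh script might contain the following; the timestamps are illustrative:
$ ls /home/core/assets/backup
snapshot_2021-06-25_190035.db  static_kuberesources_2021-06-25_190035.tar.gz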
Use SSH to connect to the single node and copy the etcd backup to the /home/core
directory by running the following command:
$ cp -r <etcd_backup_directory> /home/core
Run the following command in the single node to restore the cluster from a previous backup:
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd_backup_directory>
Exit the SSH session.
Monitor the recovery progress of the control plane by running the following command:
$ oc adm wait-for-stable-cluster
It can take up to 15 minutes for the control plane to recover. |
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
For high availability (HA) clusters, a three-node HA cluster requires you to shut down etcd on two hosts to avoid a cluster split. On four-node and five-node HA clusters, you must shut down three hosts. Quorum requires a simple majority of nodes. The minimum number of nodes required for quorum on a three-node HA cluster is two. On four-node and five-node HA clusters, the minimum number of nodes required for quorum is three. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service.
If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. For OKD on a single node, see "Restoring to a previous cluster state for a single node". |
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. |
Access to the cluster as a user with the cluster-admin
role through a certificate-based kubeconfig
file, like the one that was used during installation.
A healthy control plane host to use as the recovery host.
You have SSH access to control plane hosts.
A backup directory containing both the etcd
snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db
and static_kuberesources_<datetimestamp>.tar.gz
.
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and re-create the other non-recovery control plane machines one by one. |
Select a control plane host to use as the recovery host. This is the host that you run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
kube-apiserver
becomes inaccessible after the restore process starts, so you can no longer reach the control plane nodes through the API. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. |
Using SSH, connect to each control plane node and run the following command to disable etcd:
$ sudo -E /usr/local/bin/disable-etcd.sh
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup
directory containing the etcd snapshot and the resources for the static pods to the /home/core/
directory of your recovery control plane host.
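For example, from the machine that holds the backup, you might copy it with scp; the host name is a placeholder:
$ scp -r ./backup core@<recovery_host>:/home/core/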
Use SSH to connect to the recovery host and restore the cluster from a previous backup by running the following command:
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory>
Exit the SSH session.
Once the API responds, turn off the etcd Operator quorum guard by running the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
Monitor the recovery progress of the control plane by running the following command:
$ oc adm wait-for-stable-cluster
It can take up to 15 minutes for the control plane to recover. |
Once recovered, enable the quorum guard by running the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
If you see no progress rolling out the etcd static pods, you can force redeployment from the cluster-etcd-operator
by running the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
The restore procedure described in the section "Restoring to a previous cluster state":
Requires the complete re-creation of two control plane nodes, which might be a complex procedure for clusters installed with the user-provisioned infrastructure (UPI) installation method, because a UPI installation does not create any Machine or ControlPlaneMachineSet objects for the control plane nodes.
Uses the script /usr/local/bin/cluster-restore.sh, which starts a new single-member etcd cluster and then scales it to three members.
In contrast, this procedure:
Does not require recreating any control plane nodes.
Directly starts a three-member etcd cluster.
If the cluster uses a ControlPlaneMachineSet for the control plane, it is suggested that you follow "Restoring to a previous cluster state" for a simpler etcd recovery procedure.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Access to the cluster as a user with the cluster-admin
role; for example, the kubeadmin
user.
SSH access to all control plane hosts, with a host user allowed to become root
; for example, the default core
host user.
A backup directory containing both a previous etcd snapshot and the resources for the static pods from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db
and static_kuberesources_<datetimestamp>.tar.gz
.
Use SSH to connect to each of the control plane nodes.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to open an SSH connection to each control plane host in a separate terminal.
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. |
Copy the etcd backup directory to each control plane host.
This procedure assumes that you copied the backup
directory containing the etcd snapshot and the resources for the static pods to the /home/core/assets
directory of each control plane host. You might need to create the assets directory if it does not exist yet.
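As a hedged sketch, assuming the backup is on your workstation and the control plane hosts are reachable over SSH as the core user, you could distribute it in one loop; the host names are placeholders:
$ for host in <host_1> <host_2> <host_3>; do
      ssh core@${host} 'mkdir -p /home/core/assets'
      scp -r ./backup core@${host}:/home/core/assets/
  done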
Stop the static pods on all the control plane nodes, one host at a time.
Move the existing Kubernetes API Server static pod manifest out of the kubelet manifest directory.
$ mkdir -p /root/manifests-backup
$ mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /root/manifests-backup/
Verify that the Kubernetes API Server containers have stopped with the command:
$ crictl ps | grep kube-apiserver | grep -E -v "operator|guard"
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
If the Kubernetes API Server containers are still running, terminate them manually with the following command:
$ crictl stop <container_id>
Repeat the same steps for kube-controller-manager-pod.yaml, kube-scheduler-pod.yaml, and finally etcd-pod.yaml.
Stop the kube-controller-manager
pod with the following command:
$ mv /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /root/manifests-backup/
Check if the containers are stopped using the following command:
$ crictl ps | grep kube-controller-manager | grep -E -v "operator|guard"
Stop the kube-scheduler
pod using the following command:
$ mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /root/manifests-backup/
Check if the containers are stopped using the following command:
$ crictl ps | grep kube-scheduler | grep -E -v "operator|guard"
Stop the etcd
pod using the following command:
$ mv /etc/kubernetes/manifests/etcd-pod.yaml /root/manifests-backup/
Check if the containers are stopped using the following command:
$ crictl ps | grep etcd | grep -E -v "operator|guard"
On each control plane host, save the current etcd data by moving it into the backup folder:
$ mkdir /home/core/assets/old-member-data
$ mv /var/lib/etcd/member /home/core/assets/old-member-data
This data will be useful in case the etcd
backup restore does not work and the etcd
cluster must be restored to the current state.
Find the correct etcd parameters for each control plane host.
The value for <ETCD_NAME>
is unique for each control plane host, and it is equal to the value of the ETCD_NAME
variable in the manifest file /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml
on the specific control plane host. You can find it with the following command:
RESTORE_ETCD_POD_YAML="/etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml"
cat $RESTORE_ETCD_POD_YAML | \
grep -A 1 $(cat $RESTORE_ETCD_POD_YAML | grep 'export ETCD_NAME' | grep -Eo 'NODE_.+_ETCD_NAME') | \
grep -Po '(?<=value: ").+(?=")'
The value for <UUID>
can be generated in a control plane host with the command:
$ uuidgen
The value for <UUID> must be generated only once. After you generate a UUID on one control plane host, do not generate it again on the others; the same UUID is used in the next steps on all control plane hosts. |
The value for ETCD_NODE_PEER_URL
should be set like the following example:
https://<IP_CURRENT_HOST>:2380
You can find the correct IP address from the <ETCD_NAME>
of the specific control plane host with the following command:
$ echo <ETCD_NAME> | \
sed -E 's/[.-]/_/g' | \
xargs -I {} grep {} /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/etcd-scripts/etcd.env | \
grep "IP" | grep -Po '(?<=").+(?=")'
The value for <ETCD_INITIAL_CLUSTER>
should be set like the following, where <ETCD_NAME_n>
is the <ETCD_NAME>
of each control plane host.
The port used must be 2380, not 2379. Port 2379 is used for etcd client communication and is configured directly in the etcd start command in the container. |
<ETCD_NAME_0>=<ETCD_NODE_PEER_URL_0>,<ETCD_NAME_1>=<ETCD_NODE_PEER_URL_1>,<ETCD_NAME_2>=<ETCD_NODE_PEER_URL_2> (1)
1 | Specifies the ETCD_NODE_PEER_URL values from each control plane host. |
The <ETCD_INITIAL_CLUSTER>
value remains the same across all control plane hosts. The same value is required in the next steps on every control plane host.
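For example, assuming the <ETCD_NAME> values match the node names shown in the member list earlier in this document, the value would look like the following:
openshift-control-plane-0=https://192.168.10.9:2380,openshift-control-plane-1=https://192.168.10.10:2380,openshift-control-plane-2=https://192.168.10.11:2380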
Regenerate the etcd database from the backup.
This operation must be performed on each control plane host.
Copy the etcd
backup to the /var/lib/etcd
directory with the command:
$ cp /home/core/assets/backup/<snapshot_yyyy-mm-dd_hhmmss>.db /var/lib/etcd
Identify the correct etcdctl
image before proceeding. Use the following command to retrieve the image from the backup of the pod manifest:
$ jq -r '.spec.containers[]|select(.name=="etcdctl")|.image' /root/manifests-backup/etcd-pod.yaml
Then start a container from that image, mounting the etcd data directory:
$ podman run --rm -it --entrypoint="/bin/bash" -v /var/lib/etcd:/var/lib/etcd:z <image-hash>
Check that the version of the etcdctl
tool matches the version of the etcd
server where the backup was created:
$ etcdctl version
Run the following command to regenerate the etcd
database, using the correct values for the current host:
$ ETCDCTL_API=3 /usr/bin/etcdctl snapshot restore /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db \
--name "<ETCD_NAME>" \
--initial-cluster="<ETCD_INITIAL_CLUSTER>" \
--initial-cluster-token "openshift-etcd-<UUID>" \
--initial-advertise-peer-urls "<ETCD_NODE_PEER_URL>" \
--data-dir="/var/lib/etcd/restore-<UUID>" \
--skip-hash-check=true
The quotes are mandatory when regenerating the etcd database. |
Record the values printed in the added member
logs; for example:
2022-06-28T19:52:43Z    info    membership/cluster.go:421    added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "56cd73b614699e7", "added-peer-peer-urls": ["https://10.0.91.5:2380"], "added-peer-is-learner": false}
2022-06-28T19:52:43Z    info    membership/cluster.go:421    added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "1f63d01b31bb9a9e", "added-peer-peer-urls": ["https://10.0.90.221:2380"], "added-peer-is-learner": false}
2022-06-28T19:52:43Z    info    membership/cluster.go:421    added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "fdc2725b3b70127c", "added-peer-peer-urls": ["https://10.0.94.214:2380"], "added-peer-is-learner": false}
Exit from the container.
Repeat these steps on the other control plane hosts, checking that the values printed in the added member
logs are the same for all control plane hosts.
Move the regenerated etcd
database to the default location.
This operation must be performed on each control plane host.
Move the regenerated database (the member
folder created by the previous etcdctl snapshot restore
command) to the default etcd location /var/lib/etcd
:
$ mv /var/lib/etcd/restore-<UUID>/member /var/lib/etcd
Restore the SELinux context for the /var/lib/etcd
directory and its contents:
$ restorecon -vR /var/lib/etcd/
Remove the leftover files and directories:
$ rm -rf /var/lib/etcd/restore-<UUID>
$ rm /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db
When you are finished the |
Repeat these steps on the other control plane hosts.
Restart the etcd cluster.
The following steps must be executed on all control plane hosts, but one host at a time.
Move the etcd
static pod manifest back to the kubelet manifest directory to make the kubelet start the related containers:
$ mv /root/manifests-backup/etcd-pod.yaml /etc/kubernetes/manifests
Verify that all the etcd
containers have started:
$ crictl ps | grep etcd | grep -v operator
38c814767ad983 f79db5a8799fd2c08960ad9ee22f784b9fbe23babe008e8a3bf68323f004c840 28 seconds ago Running etcd-health-monitor 2 fe4b9c3d6483c
e1646b15207c6 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcd-metrics 0 fe4b9c3d6483c
08ba29b1f58a7 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcd 0 fe4b9c3d6483c
2ddc9eda16f53 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcdctl
If the output of this command is empty, wait a few minutes and check again.
Check the status of the etcd
cluster.
On any of the control plane hosts, check the status of the etcd
cluster with the following command:
$ crictl exec -it $(crictl ps | grep etcdctl | awk '{print $1}') etcdctl endpoint status -w table
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.89.133:2379 | 682e4a83a0cec6c0 | 3.5.0 | 67 MB | true | false | 2 | 218 | 218 | |
| https://10.0.92.74:2379 | 450bcf6999538512 | 3.5.0 | 67 MB | false | false | 2 | 218 | 218 | |
| https://10.0.93.129:2379 | 358efa9c1d91c3d6 | 3.5.0 | 67 MB | false | false | 2 | 218 | 218 | |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Restart the other static pods.
The following steps must be executed on all control plane hosts, but one host at a time.
Move the Kubernetes API Server static pod manifest back to the kubelet manifest directory to make the kubelet start the related containers, with the command:
$ mv /root/manifests-backup/kube-apiserver-pod.yaml /etc/kubernetes/manifests
Verify that all the Kubernetes API Server containers have started:
$ crictl ps | grep kube-apiserver | grep -v operator
If the output of the previous command is empty, wait a few minutes and check again. |
Repeat the same steps for kube-controller-manager-pod.yaml
and kube-scheduler-pod.yaml
files.
Restart the kubelet on all nodes by using the following command:
$ systemctl restart kubelet
If any remaining control plane pod manifests are still in the backup directory, move them back by using the following command:
$ mv /root/manifests-backup/kube-* /etc/kubernetes/manifests/
Check that the kube-apiserver, kube-scheduler, and kube-controller-manager pods start correctly:
$ crictl ps | grep -E 'kube-(apiserver|scheduler|controller-manager)' | grep -v -E 'operator|guard'
Wipe the OVN databases using the following commands:
for NODE in $(oc get node -o name | sed 's:node/::g')
do
oc debug node/${NODE} -- chroot /host /bin/bash -c 'rm -f /var/lib/ovn-ic/etc/ovn*.db && systemctl restart ovs-vswitchd ovsdb-server'
oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName=${NODE} --wait
oc -n openshift-ovn-kubernetes wait pod -l app=ovnkube-node --field-selector=spec.nodeName=${NODE} --for condition=ContainersReady --timeout=600s
done
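Afterward, you can confirm that the OVN-Kubernetes pods are healthy again; this is a general verification, not part of the documented procedure:
$ oc get pods -n openshift-ovn-kubernetes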
If your OKD cluster uses persistent storage of any form, part of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet
object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa. |
The following are some example scenarios that produce an out-of-date status:
A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id
or /dev
directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must:
Manually remove the PVs with invalid devices.
Remove symlinks from respective nodes.
Delete LocalVolume
or LocalVolumeSet
objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
The cluster can automatically recover from expired control plane certificates.
However, you must manually approve the pending node-bootstrapper
certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs.
Use the following steps to approve the pending CSRs:
Get the list of current CSRs:
$ oc get csr
NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
1 | A pending kubelet serving CSR (for user-provisioned installations). |
2 | A pending node-bootstrapper CSR. |
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
Approve each valid node-bootstrapper
CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet serving CSR:
$ oc adm certificate approve <csr_name>
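If there are many pending CSRs, a common pattern is to approve every CSR that has no status yet in bulk; review the list first, because this sketch approves them indiscriminately:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve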