After updating the OKD cluster from 4.6 to 4.7, you can update from cluster logging 4.6 to Red Hat OpenShift Logging 5.0 by changing the subscriptions for the Elasticsearch Operator and the Cluster Logging Operator.
When you update:
You must update the Elasticsearch Operator before updating the Cluster Logging Operator.
You must update both the Elasticsearch Operator and the Cluster Logging Operator.
Kibana is unusable from the time you update the Elasticsearch Operator until you also update the Cluster Logging Operator.
If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod, as shown in the example after this list. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
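A minimal sketch of that workaround from the CLI, assuming the operator pod carries the standard name=cluster-logging-operator label (verify the label on your cluster if the selector matches no pods):
$ oc delete pod -n openshift-logging --selector name=cluster-logging-operator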
Upgrade your cluster logging version to 4.6 before updating to Red Hat OpenShift Logging 5.0.
Update the OKD cluster from 4.6 to 4.7.
Make sure the OpenShift Logging status is healthy:
All pods are ready.
The Elasticsearch cluster is healthy.
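For example, you can confirm from the CLI that all pods in the openshift-logging project are ready; the Elasticsearch health check shown in the verification steps later in this procedure applies here as well:
$ oc get pods -n openshift-logging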
Back up your Elasticsearch and Kibana data.
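The backup procedure depends on your storage configuration and is outside the scope of this procedure. As a minimal sketch, assuming you have already registered an Elasticsearch snapshot repository (logging-backup is a hypothetical name) and that the es_util helper in the Elasticsearch container passes extra options through to curl, you could take a snapshot from inside an Elasticsearch pod:
$ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- \
    es_util --query="_snapshot/logging-backup/pre-upgrade?wait_for_completion=true" -XPUT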
Update the Elasticsearch Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-operators-redhat project.
Click the Elasticsearch Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 5.0 and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the Elasticsearch Operator version is 5.0.x.
Wait for the Status field to report Succeeded.
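You can also confirm the installed version from the CLI by listing the ClusterServiceVersion objects in the openshift-operators-redhat project; the exact name suffix varies with the patch release:
$ oc get csv -n openshift-operators-redhat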
Update the Cluster Logging Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-logging project.
Click the Cluster Logging Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 5.0 and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
Verify that the Cluster Logging Operator version is 5.0.x.
Wait for the Status field to report Succeeded.
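As with the Elasticsearch Operator, the installed version is also visible from the CLI:
$ oc get csv -n openshift-logging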
Check the logging components:
Ensure that all Elasticsearch pods are in the Ready status:
$ oc get pod -n openshift-logging --selector component=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m
Ensure that the Elasticsearch cluster is healthy:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  ...
}
Ensure that the Elasticsearch cron jobs are created:
$ oc project openshift-logging
$ oc get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
elasticsearch-delete-app */15 * * * * False 0 <none> 27s
elasticsearch-delete-audit */15 * * * * False 0 <none> 27s
elasticsearch-delete-infra */15 * * * * False 0 <none> 27s
elasticsearch-rollover-app */15 * * * * False 0 <none> 27s
elasticsearch-rollover-audit */15 * * * * False 0 <none> 27s
elasticsearch-rollover-infra */15 * * * * False 0 <none> 27s
Verify that the log store is updated to 5.0 and the indices are green:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.
Tue Jun 30 14:30:54 UTC 2020
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144
green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148
green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147
green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0
green open .operations.2020.06.30 aOHMYOa3S_69NJFh2t3yrQ 3 1 4206118 0 8998 4499
green open project.local-storage.d5c8a3d6-30a3-4512-96df-67c537540072.2020.06.30 O_Uldg2wS5K_L6FyqWxOZg 3 1 91052 0 135 67
green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158
green open .searchguard rQhAbWuLQ9iuTsZeHi_2ew 1 1 5 64 0 0
green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168
green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146
green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145
green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0
green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148
green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148
green open .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac l911Z8dSI23py6GDtyJrA 1 1 5 4 0 0
green open project.ui.29cb9680-864d-43b2-a6cf-134c837d6f0c.2020.06.30 5A_YdRlAT3m1Z-vbqBuGWA 3 1 24 0 0 0
green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147
green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0
green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0
green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147
green open project.nw.6233ad57-aff0-4d5a-976f-370636f47b11.2020.06.30 dtc6J-nLSCC59EygeV41RQ 3 1 10 0 0 0
green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220
green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0
green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146
green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57
green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9
green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148
green open project.qt.2c05acbd-bc12-4275-91ab-84d180b53505.2020.06.30 MUm3eFJjSPKQOJWoHskKqw 3 1 12262 0 14 7
green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148
green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0
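Scanning a table this long by eye is error-prone. As a quick alternative, the Elasticsearch _cat API accepts a health filter, so you can list only the indices that are not green; no output from either query below means every index is green. This sketch assumes the es_util helper available in the Elasticsearch container:
$ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- \
    es_util --query="_cat/indices?health=red"
$ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- \
    es_util --query="_cat/indices?health=yellow"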
Verify that the log collector is updated to 5.0:
$ oc get ds fluentd -o json | grep fluentd-init
Verify that the output includes a fluentd-init container:
"containerName": "fluentd-init"
Verify that the log visualizer is updated to 5.0 using the Kibana custom resource (CR):
$ oc get kibana kibana -o json
Verify that the output includes a Kibana pod with the ready status:
[
  {
    "clusterCondition": {
      "kibana-5fdd766ffd-nb2jj": [
        {
          "lastTransitionTime": "2020-06-30T14:11:07Z",
          "reason": "ContainerCreating",
          "status": "True",
          "type": ""
        },
        {
          "lastTransitionTime": "2020-06-30T14:11:07Z",
          "reason": "ContainerCreating",
          "status": "True",
          "type": ""
        }
      ]
    },
    "deployment": "kibana",
    "pods": {
      "failed": [],
      "notReady": [],
      "ready": []
    },
    "replicaSets": [
      "kibana-5fdd766ffd"
    ],
    "replicas": 1
  }
]
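For a quicker check, you can also list the Kibana pods directly and confirm they report Running and ready, assuming the component=kibana label, which follows the same convention as the Elasticsearch pod selector used earlier:
$ oc get pods -n openshift-logging --selector component=kibana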
Verify that the Curator is updated to 5.0:
$ oc get cronjob -o name
cronjob.batch/curator
cronjob.batch/elasticsearch-delete-app
cronjob.batch/elasticsearch-delete-audit
cronjob.batch/elasticsearch-delete-infra
cronjob.batch/elasticsearch-rollover-app
cronjob.batch/elasticsearch-rollover-audit
cronjob.batch/elasticsearch-rollover-infra
Verify that the output includes the elasticsearch-delete-* and elasticsearch-rollover-* cron jobs.