OKD Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs.
To get started with logging, you must install the following Operators:
Loki Operator to manage your log store.
Red Hat OpenShift Logging Operator to manage log collection and forwarding.
Cluster Observability Operator (COO) to manage visualization.
You can use either the OKD web console or the OKD CLI to install or configure logging.
You must configure the Red Hat OpenShift Logging Operator after the Loki Operator.
You have downloaded the pull secret from Red Hat OpenShift Cluster Manager as shown in "Obtaining the installation program" in the installation documentation for your platform.
If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in "Configuring OKD to use Red Hat Operators".
The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI.
Install the Loki Operator on your OKD cluster to manage the Loki log store by using the OKD command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the LokiStack resource with the Loki Operator.
You have administrator permissions.
You installed the OpenShift CLI (oc).
You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.
Create a Namespace object for the Loki Operator:

Example Namespace object:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat (1)
  labels:
    openshift.io/cluster-monitoring: "true" (2)
1 | You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an OKD metric, causing conflicts. |
2 | A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. |
Apply the Namespace object by running the following command:
$ oc apply -f <filename>.yaml
Create an OperatorGroup object:

Example OperatorGroup object:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat (1)
spec:
  upgradeStrategy: Default
1 | You must specify openshift-operators-redhat as the namespace. |
Apply the OperatorGroup object by running the following command:
$ oc apply -f <filename>.yaml
Create a Subscription object for the Loki Operator:

Example Subscription object:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat (1)
spec:
  channel: stable-6.<y> (2)
  installPlanApproval: Automatic (3)
  name: loki-operator
  source: redhat-operators (4)
  sourceNamespace: openshift-marketplace
1 | You must specify openshift-operators-redhat as the namespace. |
2 | Specify stable-6.<y> as the channel. |
3 | If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. |
4 | Specify redhat-operators as the value. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). |
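For a disconnected cluster, the source field in callout 4 points at your own catalog. The following is only a sketch of such a CatalogSource object; the name and image are hypothetical placeholders that you must replace with the values from your OLM mirroring setup:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-mirrored-catalog   # hypothetical: use the CatalogSource name you created for OLM
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: mirror.example.com/redhat/redhat-operator-index:v4.17   # hypothetical mirror registry image
  displayName: Mirrored Red Hat Operators
```

In that case, set source: my-mirrored-catalog in the Subscription object instead of redhat-operators.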
Apply the Subscription object by running the following command:
$ oc apply -f <filename>.yaml
Create a Namespace object for deploying the LokiStack:

Example Namespace object:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging (1)
  labels:
    openshift.io/cluster-monitoring: "true" (2)
1 | The openshift-logging namespace is dedicated to all logging workloads. |
2 | A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. |
Apply the Namespace object by running the following command:
$ oc apply -f <filename>.yaml
Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) S3.
Example Secret object:

apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3 (1)
  namespace: openshift-logging
stringData: (2)
  access_key_id: <access_key_id>
  access_key_secret: <access_secret>
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
1 | Use the name logging-loki-s3 to match the name used in LokiStack. |
2 | For the contents of the secret see the Loki object storage section. |
If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage.
Apply the Secret object by running the following command:
$ oc apply -f <filename>.yaml
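The Secret above stores credentials under stringData, which accepts plain-text values. If you define the secret under the data field instead, every value must be base64-encoded first, for example:

```shell
# Values under a Secret's "data" field must be base64-encoded;
# "stringData" accepts the same values as plain text.
echo -n '<access_key_id>' | base64
```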
Create a LokiStack CR:

Example LokiStack CR:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki (1)
  namespace: openshift-logging (2)
spec:
  size: 1x.small (3)
  storage:
    schemas:
    - version: v13
      effectiveDate: "<yyyy>-<mm>-<dd>" (4)
    secret:
      name: logging-loki-s3 (5)
      type: s3 (6)
  storageClassName: <storage_class_name> (7)
  tenants:
    mode: openshift-logging (8)
1 | Use the name logging-loki. |
2 | You must specify openshift-logging as the namespace. |
3 | Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. |
4 | For new installations, set this date to the equivalent of "yesterday", because this is the date from which the schema takes effect. |
5 | Specify the name of your log store secret. |
6 | Specify the corresponding storage type. |
7 | Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. |
8 | The openshift-logging mode is the default tenancy mode, where a tenant is created for log types such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. |
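Callout 4 requires a concrete date in <yyyy>-<mm>-<dd> form. Assuming a Linux workstation with GNU date, you can generate "yesterday" in the required format:

```shell
# Print yesterday's UTC date in the YYYY-MM-DD form expected by effectiveDate.
# GNU date (Linux); on macOS/BSD, use: date -u -v-1d +%Y-%m-%d
date -u -d "yesterday" +%Y-%m-%d
```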
Apply the LokiStack CR by running the following command:
$ oc apply -f <filename>.yaml
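As noted earlier, logs stay in the object store indefinitely unless a retention period is configured. The following is a hedged sketch of adding a global retention period to this LokiStack CR, assuming the Loki Operator limits API; the 7-day value is only an example:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  # ...size, storage, storageClassName, and tenants as shown above...
  limits:
    global:
      retention:
        days: 7   # example value: prune stored logs after 7 days
```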
Verify the installation by running the following command:
$ oc get pods -n openshift-logging
Example output:
NAME READY STATUS RESTARTS AGE
logging-loki-compactor-0 1/1 Running 0 42m
logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m
logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m
logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m
logging-loki-index-gateway-0 1/1 Running 0 42m
logging-loki-ingester-0 1/1 Running 0 42m
logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m
logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m
Install the Red Hat OpenShift Logging Operator on your OKD cluster to collect and forward logs to a log store by using the OpenShift CLI (oc).
You have administrator permissions.
You installed the OpenShift CLI (oc).
You installed and configured Loki Operator.
You have created the openshift-logging namespace.
Create an OperatorGroup object:

Example OperatorGroup object:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging (1)
spec:
  upgradeStrategy: Default
1 | You must specify openshift-logging as the namespace. |
Apply the OperatorGroup object by running the following command:
$ oc apply -f <filename>.yaml
Create a Subscription object for the Red Hat OpenShift Logging Operator:

Example Subscription object:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging (1)
spec:
  channel: stable-6.<y> (2)
  installPlanApproval: Automatic (3)
  name: cluster-logging
  source: redhat-operators (4)
  sourceNamespace: openshift-marketplace
1 | You must specify openshift-logging as the namespace. |
2 | Specify stable-6.<y> as the channel. |
3 | If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates. |
4 | Specify redhat-operators as the value. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). |
Apply the Subscription object by running the following command:
$ oc apply -f <filename>.yaml
Create a service account to be used by the log collector:
$ oc create sa logging-collector -n openshift-logging
Assign the necessary permissions to the service account so that the collector can collect and forward logs. In this example, the collector is granted permissions to collect both infrastructure and application logs.
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging
Create a ClusterLogForwarder CR:

Example ClusterLogForwarder CR:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging (1)
spec:
  serviceAccount:
    name: logging-collector (2)
  outputs:
  - name: lokistack-out
    type: lokiStack (3)
    lokiStack:
      target: (4)
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: infra-app-logs
    inputRefs: (5)
    - application
    - infrastructure
    outputRefs:
    - lokistack-out
1 | You must specify the openshift-logging namespace. |
2 | Specify the name of the service account that you created earlier. |
3 | Select the lokiStack output type to send logs to the LokiStack instance. |
4 | Point the ClusterLogForwarder to the LokiStack instance created earlier. |
5 | Select the log input types that you want to send to the LokiStack instance. |
Apply the ClusterLogForwarder CR by running the following command:
$ oc apply -f <filename>.yaml
Verify the installation by running the following command:
$ oc get pods -n openshift-logging
Example output:
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m
instance-222js 2/2 Running 0 18m
instance-g9ddv 2/2 Running 0 18m
instance-hfqq8 2/2 Running 0 18m
instance-sphwg 2/2 Running 0 18m
instance-vv7zn 2/2 Running 0 18m
instance-wk5zz 2/2 Running 0 18m
logging-loki-compactor-0 1/1 Running 0 42m
logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m
logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m
logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m
logging-loki-index-gateway-0 1/1 Running 0 42m
logging-loki-ingester-0 1/1 Running 0 42m
logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m
logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m
The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console.
Install the Loki Operator on your OKD cluster to manage the Loki log store from the OperatorHub by using the OKD web console. You can deploy and configure the Loki log store by reconciling the LokiStack resource with the Loki Operator.
You have administrator permissions.
You have access to the OKD web console.
You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).
In the OKD web console Administrator perspective, go to Operators → OperatorHub.
Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install.
The Community Loki Operator is not supported by Red Hat.
Select stable-x.y as the Update channel.
The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you.
Select Enable Operator-recommended cluster monitoring on this namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
For Update approval select Automatic, then click Install.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
An Operator might display a Failed status before the installation finishes. If the Operator install then completes with an InstallSucceeded message, refresh the page.
While the Operator installs, create the namespace to which the log store will be deployed.
Click + in the top right of the screen to access the Import YAML page.
Add the YAML definition for the openshift-logging namespace:

Example Namespace object:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging (1)
  labels:
    openshift.io/cluster-monitoring: "true" (2)
1 | The openshift-logging namespace is dedicated to all logging workloads. |
2 | A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. |
Click Create.
Create a secret with the credentials to access the object storage.
Click + in the top right of the screen to access the Import YAML page.
Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) S3:
Example Secret object:

apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3 (1)
  namespace: openshift-logging (2)
stringData: (3)
  access_key_id: <access_key_id>
  access_key_secret: <access_key>
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
1 | Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. |
2 | Set the namespace to openshift-logging because that is the namespace used to deploy LokiStack. |
3 | For the contents of the secret, see the Loki object storage section. |
If there is no retention period defined on the S3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the S3 bucket forever, which might fill up the S3 storage.
Click Create.
Navigate to the Installed Operators page and select the Loki Operator. Under Provided APIs, find the LokiStack resource and click Create Instance.
Select YAML view, and then use the following template to create a LokiStack CR:

Example LokiStack CR:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki (1)
  namespace: openshift-logging (2)
spec:
  size: 1x.small (3)
  storage:
    schemas:
    - version: v13
      effectiveDate: "<yyyy>-<mm>-<dd>"
    secret:
      name: logging-loki-s3 (4)
      type: s3 (5)
  storageClassName: <storage_class_name> (6)
  tenants:
    mode: openshift-logging (7)
1 | Use the name logging-loki. |
2 | You must specify openshift-logging as the namespace. |
3 | Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium. Additionally, 1x.pico is supported starting with logging 6.1. |
4 | Specify the name of your log store secret. |
5 | Specify the corresponding storage type. |
6 | Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. |
7 | The openshift-logging mode is the default tenancy mode, where a tenant is created for log types such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. |
Click Create.
In the LokiStack tab, verify that you see your LokiStack instance.
In the Status column, verify that you see the message Condition: Ready with a green checkmark.
Install Red Hat OpenShift Logging Operator on your OKD cluster to collect and forward logs to a log store from the OperatorHub by using the OKD web console.
You have administrator permissions.
You have access to the OKD web console.
You installed and configured Loki Operator.
In the OKD web console Administrator perspective, go to Operators → OperatorHub.
Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install.
Select stable-x.y as the Update channel. The latest version is already selected in the Version field.
The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you.
Select Enable Operator-recommended cluster monitoring on this namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace.
For Update approval select Automatic, then click Install.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
An Operator might display a Failed status before the installation finishes. If the Operator install then completes with an InstallSucceeded message, refresh the page.
While the operator installs, create the service account that will be used by the log collector to collect the logs.
Click the + in the top right of the screen to access the Import YAML page.
Enter the YAML definition for the service account:

Example ServiceAccount object:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-collector (1)
  namespace: openshift-logging (2)
1 | Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. |
2 | Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. |
Click the Create button.
Create the ClusterRoleBinding objects to grant the log collector the necessary permissions to access the logs that you want to collect and to write to the log store, for example infrastructure and application logs.
Click the + in the top right of the screen to access the Import YAML page.
Enter the YAML definition for the ClusterRoleBinding resources:
Example ClusterRoleBinding resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-collector:write-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: logging-collector-logs-writer (1)
subjects:
- kind: ServiceAccount
  name: logging-collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-collector:collect-application
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-application-logs (2)
subjects:
- kind: ServiceAccount
  name: logging-collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-collector:collect-infrastructure
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-infrastructure-logs (3)
subjects:
- kind: ServiceAccount
  name: logging-collector
  namespace: openshift-logging
1 | The cluster role to allow the log collector to write logs to LokiStack. |
2 | The cluster role to allow the log collector to collect logs from applications. |
3 | The cluster role to allow the log collector to collect logs from infrastructure. |
Click the Create button.
After granting the necessary permissions to the service account, go to the Operators → Installed Operators page and select the Red Hat OpenShift Logging Operator. Under Provided APIs, find the ClusterLogForwarder resource and click Create Instance.
Select YAML view, and then use the following template to create a ClusterLogForwarder CR:

Example ClusterLogForwarder CR:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging (1)
spec:
  serviceAccount:
    name: logging-collector (2)
  outputs:
  - name: lokistack-out
    type: lokiStack (3)
    lokiStack:
      target: (4)
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
      tls:
        ca:
          key: service-ca.crt
          configMapName: openshift-service-ca.crt
  pipelines:
  - name: infra-app-logs
    inputRefs: (5)
    - application
    - infrastructure
    outputRefs:
    - lokistack-out
1 | You must specify openshift-logging as the namespace. |
2 | Specify the name of the service account created earlier. |
3 | Select the lokiStack output type to send logs to the LokiStack instance. |
4 | Point the ClusterLogForwarder to the LokiStack instance created earlier. |
5 | Select the log input types that you want to send to the LokiStack instance. |
Click Create.
In the ClusterLogForwarder tab, verify that you see your ClusterLogForwarder instance.
In the Status column, verify that you see the following messages:
Condition: observability.openshift.io/Authorized
observability.openshift.io/Valid
Ready