Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OKD. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The following advisories are available for version 5.4 of the logging subsystem: Logging subsystem for Red Hat OpenShift Release 5.4
Before this update, the cluster-logging-operator utilized cluster-scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were created only when deploying the Operator using the console interface, but were missing when deploying from the command line. This update fixes the issue by making the roles and bindings namespace-scoped. (LOG-2286)
Before this update, a prior change to fix dashboard reconciliation introduced an ownerReferences field to the resource across namespaces. As a result, neither the config map nor the dashboard was created in the namespace. With this update, removing the ownerReferences field resolves the issue, and the OpenShift Logging dashboard is available in the console. (LOG-2163)
Before this update, changes to the metrics dashboards did not deploy because the cluster-logging-operator did not correctly compare the existing and desired config maps containing the dashboard. With this update, the addition of a unique hash value to object labels resolves the issue. (LOG-2071)
Before this update, the OpenShift Logging dashboard did not correctly display the pods and namespaces in the table, which displays the top producing containers collected over the last 24 hours. With this update, the pods and namespaces are displayed correctly. (LOG-2069)
Before this update, when the ClusterLogForwarder was set up with an Elasticsearch OutputDefault and the Elasticsearch outputs did not have structured keys, the generated configuration contained incorrect values for authentication. This update corrects the secret and certificates used. (LOG-2056)
Before this update, the OpenShift Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the correct data point has been selected, resolving the issue. (LOG-2026)
Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. (LOG-1927)
Before this update, a name change of the deployed collector in the 5.3 release caused the logging collector to generate the FluentdNodeDown alert. This update resolves the issue by fixing the job name for the Prometheus alert. (LOG-1918)
Before this update, the log collector collected its own logs because of a component name change introduced during refactoring. This could lead to a feedback loop in which the collector processed its own logs, potentially causing memory and log message size issues. This update resolves the issue by excluding the collector logs from collection. (LOG-1774)
Before this update, Elasticsearch generated the error "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota" if the PVC already existed. With this update, Elasticsearch checks for existing PVCs, resolving the issue. (LOG-2131)
Before this update, Elasticsearch was unable to return to the ready state when the `elasticsearch-signing` secret was removed. With this update, Elasticsearch can return to the ready state after that secret is removed. (LOG-2171)
Before this update, the change of the path from which the collector reads container logs caused the collector to forward some records to the wrong indices. With this update, the collector now uses the correct configuration to resolve the issue. (LOG-2160)
Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. (LOG-1899)
Before this update, the OpenShift Logging dashboard displayed the number of shards at 'x' times the actual value when Elasticsearch had 'x' nodes. This happened because the dashboard summed the primary shard count reported by each Elasticsearch pod, even though each pod reports the count for the whole cluster. With this update, the calculation has been corrected. (LOG-2156)
Before this update, the secrets "kibana" and "kibana-proxy" were not recreated if they were deleted manually. With this update, the elasticsearch-operator watches these resources and automatically recreates them if they are deleted. (LOG-2250)
Before this update, tuning the buffer chunk size could cause the collector to generate a warning about the chunk size exceeding the byte limit for the event stream. With this update, you can also tune the read line limit, resolving the issue (see the tuning sketch after this list of fixes). (LOG-2379)
Before this update, the logging console link in the OpenShift web console was not removed when the ClusterLogging CR was deleted. With this update, deleting the CR or uninstalling the Cluster Logging Operator removes the link. (LOG-2373)
Before this update, a change to the container logs path caused the metric for collected logs to always report zero for older releases configured with the original path. With this update, the plugin that exposes metrics about collected logs supports reading from either path, resolving the issue. (LOG-2462)
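For reference, the chunk size and read line limits mentioned in LOG-2379 are tuned through the ClusterLogging custom resource. The following is a minimal sketch, assuming the Fluentd collector; the field names follow the forwarder tuning options and the values are illustrative only, so verify them against your installed release:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m       # illustrative chunk size
      inFile:
        readLinesLimit: 1024     # illustrative read line limit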
Vector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Vector is a log collector offered as a Technology Preview alternative to the current default collector for the logging subsystem.
The following outputs are supported (an example ClusterLogForwarder configuration follows this list):
elasticsearch. An external Elasticsearch instance. The elasticsearch output can use a TLS connection.
kafka. A Kafka broker. The kafka output can use an unsecured or TLS connection.
loki. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
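The following is a minimal ClusterLogForwarder sketch that uses these output types. The output names, URLs, secret name, and Kafka topic are placeholders for illustration:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
  - name: external-es            # placeholder name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: es-tls               # placeholder secret holding TLS material
  - name: external-kafka         # placeholder name
    type: kafka
    url: tls://kafka.example.com:9093/app-topic
  - name: external-loki          # placeholder name
    type: loki
    url: https://loki.example.com:3100
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs:
    - external-es
    - external-kafka
    - external-loki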
Vector is not enabled by default. Use the following steps to enable Vector on your OKD cluster.
Vector does not support FIPS-enabled clusters.
OKD: 4.10
Logging subsystem for Red Hat OpenShift: 5.4
FIPS disabled
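You can confirm the version prerequisites from the command line before you begin; a minimal sketch, and output formats vary by release:

$ oc get clusterversion
$ oc -n openshift-logging get csv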
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc -n openshift-logging edit ClusterLogging instance
Add a logging.openshift.io/preview-vector-collector: enabled annotation to the ClusterLogging custom resource (CR).
Add vector as a collection type to the ClusterLogging custom resource (CR).
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
annotations:
logging.openshift.io/preview-vector-collector: enabled
spec:
collection:
logs:
type: "vector"
vector: {}
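After saving the CR, you can confirm that the collector pods are redeployed with the Vector image. A minimal sketch; the DaemonSet name collector is an assumption and may differ in your deployment:

$ oc -n openshift-logging get pods
$ oc -n openshift-logging get daemonset collector -o yaml | grep -i vector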
Loki Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem.
You can use the OKD web console to install the Loki Operator.
OKD: 4.10
Logging subsystem for Red Hat OpenShift: 5.4
To install the Loki Operator using the OKD web console:
Install the Loki Operator:
In the OKD web console, click Operators → OperatorHub.
Choose Loki Operator from the list of available Operators, and click Install.
Under Installation Mode, select All namespaces on the cluster.
Under Installed Namespace, select openshift-operators-redhat.
You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OKD metric, which would cause conflicts.
Select Enable operator recommended cluster monitoring on this namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
Select an Approval Strategy.
The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
The Manual strategy requires a user with appropriate credentials to approve the Operator update.
Click Install.
Verify that you installed the Loki Operator. Visit the Operators → Installed Operators page and look for "Loki Operator."
Ensure that Loki Operator is listed in all the projects and that its Status is Succeeded. For reference, a command-line sketch of the equivalent OLM objects follows.
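The console steps above correspond roughly to creating the following OLM objects from the command line. This is a minimal sketch: the package name, channel, and catalog source in the Subscription are assumptions and may differ for your catalog and release:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  labels:
    openshift.io/cluster-monitoring: "true"    # enables cluster monitoring for this namespace
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}                                       # an empty spec targets all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  name: loki-operator                          # package name (assumption)
  channel: stable                              # update channel (assumption)
  source: redhat-operators                     # catalog source (assumption)
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic               # or Manual

Apply the objects with oc apply -f <filename>, then check the installation status with oc get csv -n openshift-operators-redhat.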