The cluster logging Log Forwarding feature enables you to configure custom pipelines that send container and node logs to specific endpoints within or outside of your cluster. You can send logs by type to the internal OKD Elasticsearch instance, to remote destinations not managed by OKD cluster logging, or to both. Remote destinations include your existing logging service, an external Elasticsearch cluster, external log aggregation solutions, or a Security Information and Event Management (SIEM) system.

Log Forwarding is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Log Forwarding provides an easier way to forward logs to specific endpoints inside or outside your OKD cluster than using the Fluentd plugins.

The Log Forwarding feature is optional. If you do not want to forward logs and use only the internal OKD Elasticsearch instance, do not configure the Log Forwarding feature.

You can send different types of logs to different systems allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.

Understanding cluster log forwarding

The OKD cluster log forwarding feature uses a combination of outputs and pipelines defined in the Log Forwarding Custom Resource to send logs to specific endpoints inside and outside of your OKD cluster.

If you want to use only the default internal OKD Elasticsearch instance, do not configure any outputs and pipelines.

An output is the destination for log data, and a pipeline defines simple routing from one source to one or more outputs.

An output can be either:

  • elasticsearch to forward logs to an external Elasticsearch v5.x cluster, specified by server name or FQDN, to the internal OKD Elasticsearch instance, or to both.

  • forward to forward logs to an external log aggregation solution. This option uses the Fluentd forward plug-in.

A pipeline associates the type of data to an output. A type of data you can forward is one of the following:

  • logs.app - Container logs generated by user applications running in the cluster, except infrastructure container applications.

  • logs.infra - Logs generated by both infrastructure components running in the cluster and OKD nodes, such as journal logs. Infrastructure components are pods that run in the openshift*, kube*, or default projects.

  • logs.audit - Logs generated by the node audit system (auditd), which are stored in the /var/log/audit/audit.log file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

Note the following:

  • The internal OKD Elasticsearch instance does not provide secure storage for audit logs. Ensure that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. OKD cluster logging does not comply with those regulations.

  • An output supports TLS communication using a secret. The secret must have keys of tls.crt, tls.key, and ca-bundle.crt that point to the certificates they represent. To use the forward output in a secure manner, the secret must also have a shared_key key.

  • You are responsible for creating and maintaining any additional configurations that external destinations require, such as keys and secrets, service accounts, port openings, or global proxy configuration.
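As a sketch, a TLS secret for an output could look like the following manifest. The secret name, key contents, and placeholder certificate bodies are assumptions for illustration; the secret must exist in the openshift-logging project and be referenced by the output's secret.name field:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secureforward          # hypothetical name; referenced by an output's secret.name
  namespace: openshift-logging # secrets for outputs must live in this project
type: Opaque
stringData:
  tls.crt: |                   # client certificate, PEM format
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  tls.key: |                   # private key for the client certificate
    -----BEGIN PRIVATE KEY-----
    ...
    -----END PRIVATE KEY-----
  ca-bundle.crt: |             # CA bundle used to verify the endpoint's certificate
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  shared_key: changeme         # required when using the forward output securely
```

The ellipses stand in for real PEM data; supply your own certificates when creating the secret.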

The following example creates three outputs:

  • the internal OKD Elasticsearch instance,

  • an unsecured externally-managed Elasticsearch instance,

  • a secured external log aggregator using the forward plug-in.

Three pipelines send:

  • the application logs to the internal OKD Elasticsearch,

  • the infrastructure logs to an external Elasticsearch instance,

  • the audit logs to the secured device over the forward plug-in.

Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1alpha1"
kind: "LogForwarding"
metadata:
  name: instance (1)
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true (2)
  outputs: (3)
   - name: elasticsearch (4)
     type: "elasticsearch"  (5)
     endpoint: elasticsearch.openshift-logging.svc:9200 (6)
     secret: (7)
        name: fluentd
   - name: elasticsearch-insecure
     type: "elasticsearch"
     endpoint: elasticsearch-insecure.svc.messaging.cluster.local
     insecure: true (8)
   - name: secureforward-offcluster
     type: "forward"
     endpoint: https://secureforward.offcluster.com:24224
     secret:
        name: secureforward
  pipelines: (9)
   - name: container-logs (10)
     inputSource: logs.app (11)
     outputRefs: (12)
     - elasticsearch
     - secureforward-offcluster
   - name: infra-logs
     inputSource: logs.infra
     outputRefs:
     - elasticsearch-insecure
   - name: audit-logs
     inputSource: logs.audit
     outputRefs:
     - secureforward-offcluster
1 The name of the log forwarding CR must be instance.
2 Parameter to enable log forwarding. Set to true to disable the default forwarding behavior and route logs through the pipelines that you define.
3 Configuration for the outputs.
4 A name to describe the output.
5 The type of output, either elasticsearch or forward.
6 The log forwarding endpoint, either the server name or FQDN. For the internal OKD Elasticsearch instance, specify elasticsearch.openshift-logging.svc:9200.
7 Optional name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
8 Optional setting if the endpoint does not use a secret, resulting in insecure communication.
9 Configuration for the pipelines.
10 A name to describe the pipeline.
11 The source type, logs.app, logs.infra, or logs.audit.
12 The name of one or more outputs configured in the CR.

Configuring the Log Forwarding feature

To configure Log Forwarding, edit the Cluster Logging Custom Resource (CR) to add the clusterlogging.openshift.io/logforwardingtechpreview: enabled annotation, then create a Log Forwarding Custom Resource that specifies the outputs and pipelines and enables log forwarding.

If you enable Log Forwarding, you should define a pipeline for all three source types: logs.app, logs.infra, and logs.audit. The logs from any undefined source type are dropped. For example, if you specify pipelines for the logs.app and logs.audit types, but do not specify a pipeline for the logs.infra type, logs.infra logs are dropped.

Procedure

To configure the log forwarding feature:

  1. Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
  2. Add the clusterlogging.openshift.io/logforwardingtechpreview annotation and set to enabled:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      annotations:
        clusterlogging.openshift.io/logforwardingtechpreview: enabled (1)
      name: "instance"
      namespace: "openshift-logging"
    spec:
    
    ...
    
      collection: (2)
        logs:
          type: "fluentd"
          fluentd: {}
    1 Enables or disables the Log Forwarding feature. Set to enabled to use log forwarding. To use only the internal OKD Elasticsearch instance, set to disabled or do not add the annotation.
    2 The spec.collection block must be defined in the Cluster Logging CR for log forwarding to work.
  3. Create the Log Forwarding Custom Resource:

    apiVersion: "logging.openshift.io/v1alpha1"
    kind: "LogForwarding"
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      disableDefaultForwarding: true (3)
      outputs: (4)
       - name: elasticsearch
         type: "elasticsearch"
         endpoint: elasticsearch.openshift-logging.svc:9200
         secret:
            name: elasticsearch
       - name: elasticsearch-insecure
         type: "elasticsearch"
         endpoint: elasticsearch-insecure.svc.messaging.cluster.local
         insecure: true
       - name: secureforward-offcluster
         type: "forward"
         endpoint: https://secureforward.offcluster.com:24224
         secret:
            name: secureforward
      pipelines: (5)
       - name: container-logs
         inputSource: logs.app
         outputRefs:
         - elasticsearch
         - secureforward-offcluster
       - name: infra-logs
         inputSource: logs.infra
         outputRefs:
         - elasticsearch-insecure
       - name: audit-logs
         inputSource: logs.audit
         outputRefs:
         - secureforward-offcluster
    1 The name of the log forwarding CR must be instance.
    2 The namespace for the log forwarding CR must be openshift-logging.
    3 Set to true to enable log forwarding.
    4 Add one or more endpoints:
    • Specify the type of output, either elasticsearch or forward.

    • Enter a name for the output.

    • Enter the endpoint, either the server name or FQDN.

    • Optionally, enter the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.

    • Specify insecure: true if the endpoint does not use a secret, resulting in insecure communication.

    5 Add one or more pipelines:
    • Enter a name for the pipeline.

    • Specify the source type: logs.app, logs.infra, or logs.audit.

    • Specify the name of one or more outputs configured in the CR.

      If you set disableDefaultForwarding: true, you must configure a pipeline and output for all three types of logs: application, infrastructure, and audit. If you do not specify a pipeline and output for a log type, those logs are not stored and will be lost.

Example log forwarding custom resources

A typical Log Forwarding configuration would be similar to the following examples.

The following Log Forwarding custom resource sends all logs to a secured external Elasticsearch instance:

Sample custom resource to forward to an Elasticsearch instance
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: user-created-es
      type: elasticsearch
      endpoint: 'elasticsearch-server.openshift-logging.svc:9200'
      secret:
        name: piplinesecret
  pipelines:
    - name: app-pipeline
      inputSource: logs.app
      outputRefs:
        - user-created-es
    - name: infra-pipeline
      inputSource: logs.infra
      outputRefs:
        - user-created-es
    - name: audit-pipeline
      inputSource: logs.audit
      outputRefs:
        - user-created-es

The following Log Forwarding custom resource sends all logs to a secured Fluentd instance using the Fluentd out_forward plug-in.

Sample custom resource to use the out_forward plug-in
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: fluentd-created-by-user
      type: forward
      endpoint: 'fluentdserver.openshift-logging.svc:24224'
      secret:
        name: fluentdserver
  pipelines:
    - name: app-pipeline
      inputSource: logs.app
      outputRefs:
        - fluentd-created-by-user
    - name: infra-pipeline
      inputSource: logs.infra
      outputRefs:
        - fluentd-created-by-user
    - name: clo-default-audit-pipeline
      inputSource: logs.audit
      outputRefs:
        - fluentd-created-by-user

The following Log Forwarding custom resource sends all logs to the internal OKD Elasticsearch instance, which is the default log forwarding method.

Sample custom resource to use the default log forwarding
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: false

Additional resources

Alternatively, you can use Fluentd plugins to forward logs. For more information, see Sending logs to external devices using Fluentd plugins.