
About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:

  • Resource definitions

  • Service logs

By default, the oc adm must-gather command uses the default plugin image and writes the output into a directory whose name starts with ./must-gather.local.
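
For example, a minimal invocation that collects only the default data set, run from the directory where you want the output to be created:

$ oc adm must-gather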

Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:

  • To collect data related to one or more specific features, use the --image argument with one of the images listed in the following section.

    For example:

    $ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0
  • To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.

    For example:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs

    Audit logs are not collected as part of the default set of information to reduce the size of the files.

When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

For example, running oc get pods --all-namespaces while the collection is in progress shows output similar to the following:

NAMESPACE                      NAME                 READY   STATUS      RESTARTS      AGE
...
openshift-must-gather-5drcj    must-gather-bklx4    2/2     Running     0             72s
openshift-must-gather-5drcj    must-gather-s8sdh    2/2     Running     0             72s
...

Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

Table 1. Available must-gather images

Image                                                 Purpose
quay.io/kubevirt/must-gather                          Data collection for KubeVirt.
quay.io/openshift-knative/must-gather                 Data collection for Knative.
docker.io/maistra/istio-must-gather                   Data collection for service mesh.
quay.io/konveyor/must-gather                          Data collection for migration-related information.
quay.io/ocs-dev/ocs-must-gather                       Data collection for OpenShift Container Storage.
quay.io/openshift/origin-cluster-logging-operator     Data collection for OpenShift Logging.
quay.io/openshift/origin-local-storage-mustgather     Data collection for Local Storage Operator.

To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

  • The OKD CLI (oc) installed.

Procedure
  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:

    $ oc adm must-gather \
     --image-stream=openshift/must-gather \ (1)
     --image=quay.io/kubevirt/must-gather (2)
    
    1 The default OKD must-gather image
    2 The must-gather image for KubeVirt
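
When the command completes, you can create a compressed archive of the output directory, for example to attach the data to a support case. The directory name suffix varies per run; substitute the actual directory created in your working directory:

$ tar czf must-gather.tar.gz must-gather.local.<suffix>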

Gathering audit logs

You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:

  • etcd server

  • Kubernetes API server

  • OpenShift OAuth API server

  • OpenShift API server

Procedure
  1. Run the oc adm must-gather command with the -- /usr/bin/gather_audit_logs flag:

    $ oc adm must-gather -- /usr/bin/gather_audit_logs

Querying bootstrap node journal logs

If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites
  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

Procedure
  1. Query bootkube.service journald unit logs from a bootstrap node during OKD installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  2. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'

Querying cluster node journal logs

You can gather journald unit logs and other logs within /var/log on individual cluster nodes.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • Your API service is still functional.

  • You have installed the OKD CLI (oc).

  • You have SSH access to your hosts.

Procedure
  1. Query kubelet journald unit logs from OKD cluster nodes. The following example queries control plane nodes (also known as the master nodes) only:

    $ oc adm node-logs --role=master -u kubelet (1)
    1 Replace kubelet as appropriate to query other unit logs, as shown in the example that follows.
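
    For example, the following illustrative variant, assuming the nodes use CRI-O as the container runtime, queries the crio unit logs on the control plane nodes:

    $ oc adm node-logs --role=master -u crio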
  2. Collect logs from specific subdirectories under /var/log/ on cluster nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log

      OKD 4.8 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

About toolbox

toolbox is a tool that starts a container on a Fedora CoreOS (FCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run your favorite debugging or admin tools.
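
For example, the typical flow, which the following procedures describe in detail, is to open a debug shell on a node, change root into the host file system, and then start the toolbox container:

$ oc debug node/<node_name>
# chroot /host
# toolbox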

Installing packages to a toolbox container

By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.

Prerequisites
  • You have accessed a node with the oc debug node/<node_name> command.

Procedure
  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Start the toolbox container:

    # toolbox
  3. Install the additional package that you need:

    # dnf install -y <package_name>
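
    For example, to install wget:

    # dnf install -y wget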

Starting an alternative image with toolbox

By default, running the toolbox command starts a container with the quay.io/fedora/fedora:36 image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.

Prerequisites
  • You have accessed a node with the oc debug node/<node_name> command.

Procedure
  1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:

    # chroot /host
  2. Create a .toolboxrc file in the home directory for the root user ID:

    # vi ~/.toolboxrc
    REGISTRY=quay.io                (1)
    IMAGE=fedora/fedora:33-x86_64   (2)
    TOOLBOX_NAME=toolbox-fedora-33  (3)
    1 Optional: Specify an alternative container registry.
    2 Specify an alternative image to start.
    3 Optional: Specify an alternative name for the toolbox container.
  3. Start a toolbox container with the alternative image:

    # toolbox

    If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start…. Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.