
Determining where installation issues occur

When troubleshooting OKD installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.

OKD installation proceeds through the following stages:

  1. Ignition configuration files are created.

  2. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot.

  3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting.

  4. The control plane machines use the bootstrap machine to form an etcd cluster.

  5. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.

  6. The temporary control plane schedules the production control plane to the control plane machines.

  7. The temporary control plane shuts down and passes control to the production control plane.

  8. The bootstrap machine adds OKD components into the production control plane.

  9. The installation program shuts down the bootstrap machine.

  10. The control plane sets up the worker nodes.

  11. The control plane installs additional services in the form of a set of Operators.

  12. The cluster downloads and configures remaining components needed for day-to-day operation, including the creation of worker machines in supported environments.
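
For example, you can observe the bootstrap and installation stage boundaries without restarting the installation by rerunning the installer's wait-for subcommands against the same installation directory. This is a minimal sketch that uses the same placeholder conventions as the commands later in this section:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug
    $ ./openshift-install --dir <installation_directory> wait-for install-complete --log-level debug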

User-provisioned infrastructure installation considerations

The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OKD manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure.

You can alternatively install OKD 4.17 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation:

  • Determine the level of Fedora CoreOS (FCOS) support provided for your chosen server hardware or virtualization technology.

  • Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.

  • Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling.

    It is not possible to enable cloud provider integration in OKD environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration.

  • A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OKD cluster nodes.

  • Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed.

  • Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured.

  • A load balancer is required to distribute API requests across all control plane nodes in highly available OKD environments. You can use any TCP-based load balancing solution that meets OKD DNS routing and port requirements.
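
The following fragment sketches what such a TCP load balancer configuration can look like with HAProxy, which is also the example used in the next section. It is illustrative only: the backend names and server addresses are placeholders that you must replace with your own bootstrap and control plane hosts, and ports 80 and 443 for ingress traffic are configured in the same way:

    # TCP pass-through for the Kubernetes API server (6443) and the
    # machine config server (22623); remove the bootstrap entries after
    # the bootstrap process completes.
    frontend api
        bind *:6443
        mode tcp
        default_backend api_backend

    backend api_backend
        mode tcp
        balance roundrobin
        server bootstrap <bootstrap_ip>:6443 check
        server master0 <master0_ip>:6443 check
        server master1 <master1_ip>:6443 check
        server master2 <master2_ip>:6443 check

    frontend machine_config
        bind *:22623
        mode tcp
        default_backend machine_config_backend

    backend machine_config_backend
        mode tcp
        balance roundrobin
        server bootstrap <bootstrap_ip>:22623 check
        server master0 <master0_ip>:22623 check
        server master1 <master1_ip>:22623 check
        server master2 <master2_ip>:22623 check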

Checking a load balancer configuration before OKD installation

Check your load balancer configuration prior to starting an OKD installation.

Prerequisites
  • You have configured an external load balancer of your choosing, in preparation for an OKD installation. The following example is based on a Fedora host using HAProxy to provide load balancing services to a cluster.

  • You have configured DNS in preparation for an OKD installation.

  • You have SSH access to your load balancer.

Procedure
  1. Check that the haproxy systemd service is active:

    $ ssh <user_name>@<load_balancer> systemctl status haproxy
  2. Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.

    • If your load balancer host provides the legacy netstat command, verify port status by using netstat:

      $ ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'
    • Otherwise, verify port status by using the ss command:

      $ ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'

      On current Fedora and Red Hat Enterprise Linux releases, the ss command, provided by the iproute package, replaces netstat and is the recommended tool for inspecting listening sockets.

  3. Check that the wildcard DNS record resolves to the load balancer:

    $ dig <wildcard_fqdn> @<dns_server>
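
    In addition to the wildcard record, you can confirm that the API and internal API names resolve to the load balancer. These names follow the conventions used elsewhere in this section; substitute your own cluster name, base domain, and DNS server:

    $ dig +short api.<cluster_name>.<base_domain> @<dns_server>
    $ dig +short api-int.<cluster_name>.<base_domain> @<dns_server>
    $ dig +short <any_name>.apps.<cluster_name>.<base_domain> @<dns_server>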

Specifying OKD installer log levels

By default, the OKD installer log level is set to info. If more detailed logging is required when diagnosing a failed OKD installation, you can increase the openshift-install log level to debug when starting the installation again.

Prerequisites
  • You have access to the installation host.

Procedure
  • Set the installation log level to debug when initiating the installation:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug  (1)
    1 Possible log levels include info, warn, error, and debug.
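
    After rerunning the installer at the debug level, you can scan the resulting log file for failures. The following sketch assumes the default .openshift_install.log location in the installation directory, which is also used in the monitoring procedure later in this section:

    $ grep -iE 'error|fatal' <installation_directory>/.openshift_install.log | tail -n 20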

Troubleshooting openshift-install command issues

If you experience issues running the openshift-install command, check the following:

  • The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run:

    $ ./openshift-install create ignition-configs --dir=./install_dir
  • The install-config.yaml file is in the same directory as the installer. If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory.
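
  For example, you can check how old the generated Ignition files are; the files embed certificates that expire 24 hours after creation, so older files must be regenerated before retrying the installation. This sketch assumes the ./install_dir directory from the previous command:

    $ ls -l ./install_dir/*.ign
    $ find ./install_dir -name '*.ign' -mmin +1440  # lists Ignition files older than 24 hours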

Monitoring installation progress

You can monitor high-level installation, bootstrap, and control plane logs as an OKD installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and control plane nodes.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.
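
    Once the API is reachable, you can also run oc commands from the installation host by using the kubeconfig that the installer writes alongside the kubeadmin password. A minimal sketch, assuming the default auth directory:

    $ export KUBECONFIG=<install_directory>/auth/kubeconfig
    $ oc whoami
    $ oc get nodes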

Procedure
  1. Watch the installation log as the installation progresses:

    $ tail -f ~/<installation_directory>/.openshift_install.log
  2. Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  3. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
  4. Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u crio
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service

Gathering bootstrap node diagnostic data

When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites
  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

Procedure
  1. If you have access to the bootstrap node’s console, monitor the console until the node reaches the login prompt.

  2. Verify the Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command:

        $ grep -is 'bootstrap.ign' /var/log/httpd/access_log

        If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the bootstrap node’s console to determine if the mechanism is injecting the bootstrap node Ignition file correctly.

  3. Verify the availability of the bootstrap node’s assigned storage device.

  4. Verify that the bootstrap node has been assigned an IP address from the DHCP server.

  5. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  6. Collect logs from the bootstrap node containers.

    1. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
  7. If the bootstrap process fails, verify the following.

    • You can resolve api.<cluster_name>.<base_domain> from the installation host.

    • The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OKD installation requirements.
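
    A quick way to check both items from the installation host is sketched below; the /version endpoint responds without authentication once a control plane is serving the API, and the -k option skips TLS verification because the installation certificates are not trusted by the host:

      $ dig +short api.<cluster_name>.<base_domain>
      $ curl -k https://api.<cluster_name>.<base_domain>:6443/version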

Investigating control plane node installation issues

If you experience control plane node installation issues, determine the control plane node OKD software-defined network (SDN) and network Operator status. Collect kubelet.service and crio.service journald unit logs and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and control plane nodes.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.

Procedure
  1. If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.

  2. Verify Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/master.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the control plane node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:

        $ grep -is 'master.ign' /var/log/httpd/access_log

        If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly.

  3. Check the availability of the storage device assigned to the control plane node.

  4. Verify that the control plane node has been assigned an IP address from the DHCP server.

  5. Determine control plane node status.

    1. Query control plane node status:

      $ oc get nodes
    2. If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description:

      $ oc describe node <master_node>

      It is not possible to run oc commands if an installation issue prevents the OKD API from running or if the kubelet is not running yet on each node.

  6. Determine OVN-Kubernetes status.

    1. Review ovnkube-node daemon set status, in the openshift-ovn-kubernetes namespace:

      $ oc get daemonsets -n openshift-ovn-kubernetes
    2. If those resources are listed as Not found, review pods in the openshift-ovn-kubernetes namespace:

      $ oc get pods -n openshift-ovn-kubernetes
    3. Review logs relating to failed OKD OVN-Kubernetes pods in the openshift-ovn-kubernetes namespace:

      $ oc logs <ovn-k_pod> -n openshift-ovn-kubernetes
  7. Determine cluster network configuration status.

    1. Review whether the cluster’s network configuration exists:

      $ oc get network.config.openshift.io cluster -o yaml
    2. If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output:

      $ ./openshift-install create manifests
    3. Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running:

      $ oc get pods -n openshift-network-operator
    4. Gather network Operator pod logs from the openshift-network-operator namespace:

      $ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
  8. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OKD 4.17 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  9. Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u crio
    2. If the API is not functional, review the logs using SSH instead:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
  10. Collect logs from specific subdirectories under /var/log/ on control plane nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
  11. Review control plane node container logs using SSH.

    1. List the containers:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a
    2. Retrieve a container’s logs using crictl:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
  12. If you experience control plane node configuration issues, verify that the Machine Config Operator (MCO), the MCO endpoint, and its DNS record are functioning. The MCO manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.

    1. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:

      $ curl https://api-int.<cluster_name>:22623/config/master
    2. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.

    3. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.

      1. Run a DNS lookup for the defined MCO endpoint name:

        $ dig api-int.<cluster_name> @<dns_server>
      2. Run a reverse lookup to the assigned MCO IP address on the load balancer:

        $ dig -x <load_balancer_mco_ip_address> @<dns_server>
    4. Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master
    5. System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:

      $ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
    6. Review certificate validity:

      $ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
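
      To focus on the validity window only, you can narrow the output of the previous command to the certificate dates. A sketch, using the same endpoint:

      $ openssl s_client -connect api-int.<cluster_name>:22623 </dev/null 2>/dev/null | openssl x509 -noout -dates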

Investigating etcd installation issues

If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the control plane nodes.

Procedure
  1. Check the status of etcd pods.

    1. Review the status of pods in the openshift-etcd namespace:

      $ oc get pods -n openshift-etcd
    2. Review the status of pods in the openshift-etcd-operator namespace:

      $ oc get pods -n openshift-etcd-operator
  2. If any of the pods listed by the previous commands are not showing a Running or a Completed status, gather diagnostic information for the pod.

    1. Review events for the pod:

      $ oc describe pod/<pod_name> -n <namespace>
    2. Inspect the pod’s logs:

      $ oc logs pod/<pod_name> -n <namespace>
    3. If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container:

      $ oc logs pod/<pod_name> -c <container_name> -n <namespace>
  3. If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

    1. List etcd pods on each control plane node:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-
    2. For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod’s ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>
    3. List containers related to a pod:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'
    4. For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
    5. Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>

      OKD 4.17 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  4. Validate primary and secondary DNS server connectivity from control plane nodes.
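
    The previous step does not include a command. A minimal sketch of such a check reviews the resolver configuration on a control plane node and resolves the internal API name through it; getent is used because it is part of the base operating system:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> cat /etc/resolv.conf
    $ ssh core@<master-node>.<cluster_name>.<base_domain> getent hosts api-int.<cluster_name>.<base_domain>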

Investigating control plane node kubelet and API server issues

To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the control plane nodes.

Procedure
  1. Verify that the API server’s DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443. Ensure that the record references the load balancer.

  2. Ensure that the load balancer’s port 6443 definition references each control plane node.

  3. Check that unique control plane node hostnames have been provided by DHCP.

  4. Inspect the kubelet.service journald unit logs on each control plane node.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OKD 4.17 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  5. Check for certificate expiration messages in the control plane node kubelet logs.

    1. Retrieve the log using oc:

      $ oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service  | grep -is 'x509: certificate has expired'
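
      If the logs do report expired certificates, you can inspect the certificate dates directly. The first command below checks the serving certificate presented on the internal API endpoint; the second checks the kubelet client certificate on a node and assumes the default kubelet certificate path:

      $ openssl s_client -connect api-int.<cluster_name>.<base_domain>:6443 </dev/null 2>/dev/null | openssl x509 -noout -dates
      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo openssl x509 -noout -dates -in /var/lib/kubelet/pki/kubelet-client-current.pem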

Investigating worker node installation issues

If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service and crio.service journald unit logs and worker node container logs for visibility into the worker node agent, CRI-O container runtime, and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node postinstallation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and worker nodes.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.

Procedure
  1. If you have access to the worker node’s console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.

  2. Verify Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/worker.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files:

        $ grep -is 'worker.ign' /var/log/httpd/access_log

        If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the worker node’s console to determine if the mechanism is injecting the worker node Ignition file correctly.

  3. Check the availability of the worker node’s assigned storage device.

  4. Verify that the worker node has been assigned an IP address from the DHCP server.

  5. Determine worker node status.

    1. Query node status:

      $ oc get nodes
    2. Retrieve a detailed node description for any worker nodes not showing a Ready status:

      $ oc describe node <worker_node>

      It is not possible to run oc commands if an installation issue prevents the OKD API from running or if the kubelet is not running yet on each node.

  6. Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator, and see the machine resource listing example after this procedure.

    1. Review Machine API Operator pod status:

      $ oc get pods -n openshift-machine-api
    2. If the Machine API Operator pod does not have a Ready status, detail the pod’s events:

      $ oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api
    3. Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod:

      $ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator
    4. Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod:

      $ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy
  7. Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=worker -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OKD 4.17 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OKD API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  8. Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=worker -u crio
    2. If the API is not functional, review the logs using SSH instead:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
  9. Collect logs from specific subdirectories under /var/log/ on worker nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes:

      $ oc adm node-logs --role=worker --path=sssd
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes:

      $ oc adm node-logs --role=worker --path=sssd/sssd.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log
  10. Review worker node container logs using SSH.

    1. List the containers:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a
    2. Retrieve a container’s logs using crictl:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
  11. If you experience worker node configuration issues, verify that the Machine Config Operator (MCO), the MCO endpoint, and its DNS record are functioning. The MCO manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.

    1. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:

      $ curl https://api-int.<cluster_name>:22623/config/worker
    2. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.

    3. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.

      1. Run a DNS lookup for the defined MCO endpoint name:

        $ dig api-int.<cluster_name> @<dns_server>
      2. Run a reverse lookup to the assigned MCO IP address on the load balancer:

        $ dig -x <load_balancer_mco_ip_address> @<dns_server>
    4. Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker
    5. System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:

      $ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
    6. Review certificate validity:

      $ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
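
As referenced in the Machine API Operator step earlier in this procedure, on platforms where the Machine API is enabled you can also confirm that the expected machine resources exist and are bound to nodes. A sketch, assuming the standard openshift-machine-api namespace:

    $ oc get machinesets -n openshift-machine-api
    $ oc get machines -n openshift-machine-api -o wide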

Querying Operator status after installation

You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Check that cluster Operators are all available at the end of an installation.

    $ oc get clusteroperators
  2. Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs.

    1. Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster:

      $ oc get csr
      Example output
      NAME        AGE     REQUESTOR                                                                   CONDITION
      csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (1)
      csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
      csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending (2)
      csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
      ...
      1 A client request CSR.
      2 A server request CSR.

      In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

    2. If the CSRs were not approved, approve the CSRs for your cluster machines after all of the pending CSRs for the machines that you added are in Pending status:

      Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

      For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

      • To approve them individually, run the following command for each valid CSR:

        $ oc adm certificate approve <csr_name> (1)
        1 <csr_name> is the name of a CSR from the list of current CSRs.
      • To approve all pending CSRs, run the following command:

        $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  3. View Operator events:

    $ oc describe clusteroperator <operator_name>
  4. Review Operator pod status within the Operator’s namespace:

    $ oc get pods -n <operator_namespace>
  5. Obtain a detailed description for pods that do not have Running status:

    $ oc describe pod/<operator_pod_name> -n <operator_namespace>
  6. Inspect pod logs:

    $ oc logs pod/<operator_pod_name> -n <operator_namespace>
  7. When experiencing pod base image related issues, review base image status.

    1. Obtain details of the base image used by a problematic pod:

      $ oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace>
    2. List base image release information:

      $ oc adm release info <image_path>:<tag> --commits

Gathering logs from a failed installation

If you gave an SSH key to your installation program, you can gather data about your failed installation.

You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command.

Prerequisites
  • Your OKD installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH.

  • The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program.

  • If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes.

Procedure
  1. Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines:

    • If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command:

      $ ./openshift-install gather bootstrap --dir <installation_directory> (1)
      1 installation_directory is the directory you specified when you ran ./openshift-install create cluster. This directory contains the OKD definition files that the installation program creates.

      For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses.

    • If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command:

      $ ./openshift-install gather bootstrap --dir <installation_directory> \ (1)
          --bootstrap <bootstrap_address> \ (2)
          --master <master_1_address> \ (3)
          --master <master_2_address> \ (3)
          --master <master_3_address> (3)
      
      1 For installation_directory, specify the same directory you specified when you ran ./openshift-install create cluster. This directory contains the OKD definition files that the installation program creates.
      2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster’s bootstrap machine.
      3 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address.

      A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses.

    Example output
    INFO Pulling debug logs from the bootstrap machine
    INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz"

    If you open a Red Hat support case about your installation failure, include the compressed logs in the case.
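
    The gathered bundle is a regular compressed tar archive; you can extract it locally to review the bootstrap and control plane logs before opening a case. A minimal sketch using the file name pattern from the example output:

      $ tar xzf <installation_directory>/log-bundle-<timestamp>.tar.gz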
