
There are times when you need to make changes to the operating systems running on OKD nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way.

Aside from a few specialized features, most changes to operating systems on OKD nodes can be made by creating MachineConfig objects that are managed by the Machine Config Operator. For example, you can use the Machine Config Operator (MCO) and machine configs to manage updates to systemd, CRI-O and kubelet, the kernel, NetworkManager, and other system features.

Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OKD nodes.
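For illustration, the following is a minimal sketch of a MachineConfig object that writes a configuration file onto worker nodes. The file path, name, and contents are placeholders for this example rather than part of any shipped configuration:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-file
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example.conf
        mode: 0644
        overwrite: true
        contents:
          source: data:,example_setting%3Dtrue

Applying an object like this, for example with oc create -f, causes the MCO to render a new configuration for the worker pool and roll it out to the affected nodes.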

NetworkManager stores new network configurations in /etc/NetworkManager/system-connections/ in a key file format.

Previously, NetworkManager stored new network configurations in /etc/sysconfig/network-scripts/ in the ifcfg format. Starting with RHEL 9.0, RHEL stores new network configurations in /etc/NetworkManager/system-connections/ in a key file format. Connection configurations stored in /etc/sysconfig/network-scripts/ in the old format continue to work uninterrupted. Modifications to existing profiles continue to update the older files.

About the Machine Config Operator

OKD 4.16 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Fedora CoreOS (FCOS) on cluster nodes, OKD provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades.

OKD employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include:

  • The machine-config-controller, which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates.

  • The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to the configuration defined by the machine config, as instructed by the MachineConfigController. When the node detects a change, it drains its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control the kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OKD and FCOS updates together.

  • The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster.

The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OKD configuration.

When you perform node management operations, you create or modify a KubeletConfig custom resource (CR).

When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect.

You can mitigate the disruption caused by some machine config changes by using a node disruption policy. See Understanding node restart behaviors after machine config changes.

Alternatively, you can prevent the nodes from automatically rebooting after machine configuration changes before making the changes. Pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused field to false and the nodes have rebooted into the new configuration.
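For example, assuming the default worker pool, you can pause and later resume automatic reboots with commands similar to the following:

$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused": true}}'

$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused": false}}'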

The following modifications do not trigger a node reboot:

  • When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:

    • Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config (see the example machine config after this list).

    • Changes to the global pull secret or pull secret in the openshift-config namespace.

    • Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator.

  • When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet, ImageTagMirrorSet, or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:

    • The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror.

    • The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry.

    • The addition of items to the unqualified-search-registries list.
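For example, the SSH key change in the first item of this list is typically delivered with a machine config similar to the following sketch; the object name and the key value are placeholders:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-ssh-example
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-ed25519 AAAAexamplepublickey user@example.com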

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.

Machine config overview

The Machine Config Operator (MCO) manages updates to systemd, CRI-O and kubelet, the kernel, NetworkManager, and other system features. It also offers a MachineConfig CRD that can write configuration files onto the host (see machine-config-operator). Understanding what the MCO does and how it interacts with other components is critical to making advanced, system-level changes to an OKD cluster. Here are some things you should know about the MCO, machine configs, and how they are used:

  • Machine configs are processed alphabetically, in lexicographically increasing order of their names. The render controller uses the first machine config in the list as the base and appends the rest to the base machine config.

  • A machine config can make a specific change to a file or service on the operating system of each machine in a pool of OKD nodes.

  • MCO applies changes to operating systems in pools of machines. All OKD clusters start with worker and control plane node pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types.

    A node can have multiple labels applied that indicate its type, such as master or worker; however, it can be a member of only a single machine config pool.

  • After a machine config change, the MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are upgraded by age, with the oldest nodes updated first. At any given time, the MCO updates no more nodes than the number specified by the maxUnavailable field on the machine config pool.

  • Some machine configuration must be in place before OKD is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OKD installer process, instead of running as a postinstallation machine config. In other cases, you might need to do bare metal installation where you pass kernel arguments at OKD installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning.

  • The MCO manages items that are set in machine configs. Manual changes that you make to your systems are not overwritten by the MCO, unless the MCO is explicitly told to manage a conflicting file. In other words, the MCO makes only the specific updates that you request; it does not claim control over the whole node.

  • Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost.

  • The MCO is supported for writing to files only in the /etc and /var directories. Some other directories, such as /opt and /usr/local, are writable through this mechanism because they are symbolic links into one of those areas.

  • Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details.

  • Although Ignition config settings can be delivered directly at OKD installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them.

  • When a file managed by the MCO changes outside of the MCO, the Machine Config Daemon (MCD) sets the node as degraded. The MCD does not overwrite the offending file, however, and the node continues to operate in a degraded state.

  • A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OKD cluster. The machine-api-operator provisions a new machine and MCO configures it.

MCO uses Ignition as the configuration format. OKD 4.6 moved from Ignition config specification version 2 to version 3.

What can you change with machine configs?

The kinds of components that MCO can change include:

  • config: Create Ignition config objects (see the Ignition configuration specification) to do things like modify files, systemd services, and other features on OKD machines, including:

    • Configuration files: Create or overwrite files in the /var or /etc directory.

    • systemd units: Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings.

    • users and groups: Change SSH keys in the passwd section postinstallation.

      • Changing SSH keys by using a machine config is supported only for the core user.

      • Adding new users by using a machine config is not supported.

  • kernelArguments: Add arguments to the kernel command line when OKD nodes boot. See the example machine config after this list.

  • kernelType: Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms. Use the 64k-pages parameter to enable the 64k page size kernel. This setting is exclusive to machines with 64-bit ARM architectures.

  • extensions: Extend FCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules.

  • Custom resources (for ContainerRuntime and Kubelet): Outside of machine configs, MCO manages two special custom resources for modifying CRI-O container runtime settings (ContainerRuntime CR) and the Kubelet service (Kubelet CR).
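As a sketch of the kernelArguments item in the list above, the following machine config appends an argument to the kernel command line of worker nodes. The loglevel=7 value is illustrative only; substitute the arguments that your environment requires:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-example
spec:
  kernelArguments:
  - loglevel=7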

The MCO is not the only Operator that can change operating system components on OKD nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles.

Tasks for the MCO configuration that can be done after installation are included in the following procedures. See descriptions of FCOS bare metal installation for system configuration tasks that must be done during or before OKD installation. By default, many of the changes you make with the MCO require a reboot.

The following modifications do not trigger a node reboot:

  • When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:

    • Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config.

    • Changes to the global pull secret or pull secret in the openshift-config namespace.

    • Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator.

  • When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet, ImageTagMirrorSet, or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:

    • The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror.

    • The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry.

    • The addition of items to the unqualified-search-registries list.

In other cases, you can mitigate the disruption to your workload when the MCO makes changes by using node disruption policies. For information, see Understanding node restart behaviors after machine config changes.

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection.

Node configuration management with machine config pools

Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads.

By default, there are two MCPs created by the cluster when it is installed: master and worker. Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP updates.

For worker nodes, you can create additional MCPs, or custom pools, to manage nodes with custom use cases that extend outside of the default node types. Custom MCPs for the control plane nodes are not supported.

Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO.

A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra, it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool.

It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster.
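A minimal sketch of such a custom infra MCP follows. It assumes the nodes carry the node-role.kubernetes.io/infra label, and it selects machine configs for both the worker and infra roles so that the pool inherits the worker configuration:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values:
      - worker
      - infra
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""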

Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. The MCP managing an infra node is mutually exclusive from how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads.

The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes.

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.

Understanding the Machine Config Operator node drain behavior

When you use a machine config to change a system feature, such as adding new config files, modifying systemd units or kernel arguments, or updating SSH keys, the Machine Config Operator (MCO) applies those changes and ensures that each node is in the desired configuration state.

After you make the changes, the MCO generates a new rendered machine config. In the majority of cases, when applying the new rendered machine config, the Operator performs the following steps on each affected node until all of the affected nodes have the updated configuration:

  1. Cordon. The MCO marks the node as not schedulable for additional workloads.

  2. Drain. The MCO terminates all running workloads on the node, causing the workloads to be rescheduled onto other nodes.

  3. Apply. The MCO writes the new configuration to the nodes as needed.

  4. Reboot. The MCO restarts the node.

  5. Uncordon. The MCO marks the node as schedulable for workloads.

Throughout this process, the MCO maintains the required number of available nodes based on the maxUnavailable value set in the machine config pool.
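For example, assuming the default worker pool, you could allow up to two worker nodes to be unavailable for updates at the same time with a command similar to the following:

$ oc patch machineconfigpool/worker --type merge --patch '{"spec":{"maxUnavailable": 2}}'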

There are conditions that can prevent the MCO from draining a node. If the MCO fails to drain a node, the Operator cannot reboot the node, which prevents any changes made through a machine config from being applied to that node. For more information and mitigation steps, see the MCCDrainError runbook.
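When investigating a stuck drain, in addition to the runbook, you can inspect the machine-config-daemon pod logs. The following sketch assumes the daemon pods carry the k8s-app=machine-config-daemon label:

$ oc logs -n openshift-machine-config-operator -l k8s-app=machine-config-daemon -c machine-config-daemon --tail=50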

If the MCO drains pods on the master node, note the following conditions:

  • In single-node OpenShift clusters, the MCO skips the drain operation.

  • The MCO does not drain static pods in order to prevent interference with services, such as etcd.

In certain cases the nodes are not drained. For more information, see "About the Machine Config Operator."

There are ways to mitigate the disruption caused by drain and reboot cycles by using node disruption policies or disabling control plane reboots. For more information, see "Understanding node restart behaviors after machine config changes" and "Disabling the Machine Config Operator from automatically rebooting."

Understanding configuration drift detection

There might be situations when the on-disk state of a node differs from what is configured in the machine config. This is known as configuration drift. For example, a cluster admin might manually modify a file, a systemd unit file, or a file permission that was configured through a machine config. This causes configuration drift. Configuration drift can cause problems between nodes in a Machine Config Pool or when the machine configs are updated.

The Machine Config Operator (MCO) uses the Machine Config Daemon (MCD) to check nodes for configuration drift on a regular basis. If detected, the MCO sets the node and the machine config pool (MCP) to Degraded and reports the error. A degraded node is online and operational, but it cannot be updated.

The MCD performs configuration drift detection upon each of the following conditions:

  • When a node boots.

  • After any of the files (Ignition files and systemd drop-in units) specified in the machine config are modified outside of the machine config.

  • Before a new machine config is applied.

    If you apply a new machine config to the nodes, the MCD temporarily shuts down configuration drift detection. This shutdown is needed because the new machine config necessarily differs from the machine config on the nodes. After the new machine config is applied, the MCD restarts detecting configuration drift using the new machine config.

When performing configuration drift detection, the MCD validates that the file contents and permissions fully match what the currently-applied machine config specifies. Typically, the MCD detects configuration drift in less than a second after the detection is triggered.

If the MCD detects configuration drift, the MCD performs the following tasks:

  • Emits an error to the console logs

  • Emits a Kubernetes event

  • Stops further detection on the node

  • Sets the node and MCP to degraded

You can check if you have a degraded node by listing the MCPs:

$ oc get mcp worker

If you have a degraded MCP, the DEGRADEDMACHINECOUNT field is non-zero, similar to the following output:

Example output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-404caf3180818d8ac1f50c32f14b57c3   False     True       True       2              1                   1                     1                      5h51m

You can determine if the problem is caused by configuration drift by examining the machine config pool:

$ oc describe mcp worker
Example output
 ...
    Last Transition Time:  2021-12-20T18:54:00Z
    Message:               Node ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: "content mismatch for file \"/etc/mco-test-file\"" (1)
    Reason:                1 nodes are reporting degraded status on sync
    Status:                True
    Type:                  NodeDegraded (2)
 ...
1 This message shows that a node’s /etc/mco-test-file file, which was added by the machine config, has changed outside of the machine config.
2 The state of the node is NodeDegraded.

Or, if you know which node is degraded, examine that node:

$ oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
Example output
 ...

Annotations:        cloud.network.openshift.io/egress-ipconfig: [{"interface":"nic0","ifaddr":{"ipv4":"10.0.128.0/17"},"capacity":{"ip":10}}]
                    csi.volume.kubernetes.io/nodeid:
                      {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4"}
                    machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
                    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
                    machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9
                    machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9
                    machineconfiguration.openshift.io/reason: content mismatch for file "/etc/mco-test-file" (1)
                    machineconfiguration.openshift.io/state: Degraded (2)
 ...
1 The error message indicating that configuration drift was detected between the node and the listed machine config. Here the error message indicates that the contents of the /etc/mco-test-file file, which was added by the machine config, have changed outside of the machine config.
2 The state of the node is Degraded.

You can correct configuration drift and return the node to the Ready state by performing one of the following remediations:

  • Ensure that the contents and file permissions of the files on the node match what is configured in the machine config. You can manually rewrite the file contents or change the file permissions.

  • Generate a force file on the degraded node. The force file causes the MCD to bypass the usual configuration drift detection and reapply the current machine config.

    Generating a force file on a node causes that node to reboot.
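    One way to create the force file, assuming the conventional /run/machine-config-daemon-force path used by the MCD, is from a debug shell on the degraded node:

    $ oc debug node/<node_name>
    sh-5.1# chroot /host
    sh-5.1# touch /run/machine-config-daemon-force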

Checking machine config pool status

To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands:

Procedure
  1. To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command:

    $ oc get machineconfigpool
    Example output
    NAME      CONFIG                    UPDATED  UPDATING   DEGRADED  MACHINECOUNT  READYMACHINECOUNT  UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT  AGE
    master    rendered-master-06c9c4…   True     False      False     3             3                  3                   0                     4h42m
    worker    rendered-worker-f4b64…    False    True       False     3             2                  2                   0                     4h42m

    where:

    UPDATED

    The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the CONFIG field of the oc get mcp output. The False status indicates that a node in the MCP is updating.

    UPDATING

    The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated.

    DEGRADED

    A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready.

    MACHINECOUNT

    Indicates the total number of machines in that MCP.

    READYMACHINECOUNT

    Indicates the total number of machines in that MCP that are ready for scheduling.

    UPDATEDMACHINECOUNT

    Indicates the total number of machines in that MCP that have the current machine config.

    DEGRADEDMACHINECOUNT

    Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable.

    In the previous output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2. There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False.

    While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to.

    If a node is being cordoned, that node is not included in the READYMACHINECOUNT, but is included in the MACHINECOUNT. Also, the MCP status is set to UPDATING. Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total:

    Example output
    NAME      CONFIG                    UPDATED  UPDATING   DEGRADED  MACHINECOUNT  READYMACHINECOUNT  UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT  AGE
    master    rendered-master-06c9c4…   True     False      False     3             3                  3                   0                     4h42m
    worker    rendered-worker-c1b41a…   False    True       False     3             2                  3                   0                     4h42m
  2. To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command:

    $ oc describe mcp worker
    Example output
    ...
      Degraded Machine Count:     0
      Machine Count:              3
      Observed Generation:        2
      Ready Machine Count:        3
      Unavailable Machine Count:  0
      Updated Machine Count:      3
    Events:                       <none>

    If a node is being cordoned, the node is not included in the Ready Machine Count. It is included in the Unavailable Machine Count:

    Example output
    ...
      Degraded Machine Count:     0
      Machine Count:              3
      Observed Generation:        2
      Ready Machine Count:        2
      Unavailable Machine Count:  1
      Updated Machine Count:      3
  3. To see each existing MachineConfig object, run the following command:

    $ oc get machineconfigs
    Example output
    NAME                             GENERATEDBYCONTROLLER          IGNITIONVERSION  AGE
    00-master                        2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    00-worker                        2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    01-master-container-runtime      2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    01-master-kubelet                2c9371fbb673b97a6fe8b1c52…     3.2.0            5h18m
    ...
    rendered-master-dde...           2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    rendered-worker-fde...           2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m

    Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted.

  4. To view the contents of a particular machine config (in this case, 01-master-kubelet), run the following command:

    $ oc describe machineconfigs 01-master-kubelet

    The output from the command shows that this MachineConfig object contains both configuration files (cloud.conf and kubelet.conf) and a systemd service (Kubernetes Kubelet):

    Example output
    Name:         01-master-kubelet
    ...
    Spec:
      Config:
        Ignition:
          Version:  3.2.0
        Storage:
          Files:
            Contents:
              Source:   data:,
            Mode:       420
            Overwrite:  true
            Path:       /etc/kubernetes/cloud.conf
            Contents:
              Source:   data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous...
            Mode:       420
            Overwrite:  true
            Path:       /etc/kubernetes/kubelet.conf
        Systemd:
          Units:
            Contents:  [Unit]
    Description=Kubernetes Kubelet
    Wants=rpc-statd.service network-online.target crio.service
    After=network-online.target crio.service
    
    ExecStart=/usr/bin/hyperkube \
        kubelet \
          --config=/etc/kubernetes/kubelet.conf \ ...

If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command:

$ oc delete -f ./myconfig.yaml

Deleting the machine config causes the rendered configuration to roll back to its previously rendered state. If that was the only problem, the nodes in the affected pool should return to a non-degraded state.

If you add your own machine configs to your cluster, you can use the commands shown in the previous example to check their status and the related status of the pool to which they are applied.

Checking machine config node status

During updates you might want to monitor the progress of individual nodes in case issues arise and you need to troubleshoot a node.

To see the status of the Machine Config Operator (MCO) updates to your cluster, use the following oc commands:

Improved MCO state reporting is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites
  • You set featureSet: TechPreviewNoUpgrade in your FeatureGate custom resource (CR) to enable the feature set capability for your cluster.
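    For reference, the feature set is enabled on the cluster-scoped FeatureGate object, similar to the following sketch. Note that enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates:

    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster
    spec:
      featureSet: TechPreviewNoUpgrade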

Procedure
  1. Get a summary of update statuses for all nodes in all machine config pools by running the following command:

    $ oc get machineconfignodes
    Example output
    NAME                          UPDATED   UPDATEPREPARED   UPDATEEXECUTED   UPDATEPOSTACTIONCOMPLETED   UPDATECOMPLETED   RESUMED
    ip-10-0-12-194.ec2.internal   True      False             False              False                    False              False
    ip-10-0-17-102.ec2.internal   False     True              False              False                    False              False
    ip-10-0-2-232.ec2.internal    False     False             True               False                    False              False
    ip-10-0-59-251.ec2.internal   False     False             False              True                     False              False
    ip-10-0-59-56.ec2.internal    False     False             False              False                    True               True
    ip-10-0-6-214.ec2.internal    False     False             Unknown            False                    False              False

    where:

    UPDATED

    The True status indicates that the MCO has applied the current machine config to that particular node. The False status indicates that the node is currently updating. The Unknown status means the operation is processing.

    UPDATEPREPARED

    The False status indicates that the MCO has not started reconciling the new machine configs to be distributed. The True status indicates that the MCO has completed this phase of the update. The Unknown status means the operation is processing.

    UPDATEEXECUTED

    The False status indicates that the MCO has not started cordoning and draining the node. It also indicates that the disk state and operating system have not started updating. The True status indicates that the MCO has completed this phase of the update. The Unknown status means the operation is processing.

    UPDATEPOSTACTIONCOMPLETED

    The False status indicates that the MCO has not started rebooting the node or closing the daemon. The True status indicates that the MCO has completed reboot and updating the node status. The Unknown status indicates either that an error has occurred during the update process at this phase, or that the MCO is currently applying the update.

    UPDATECOMPLETED

    The False status indicates that the MCO has not started uncordoning the node and updating the node state and metrics. The True status indicates that the MCO has finished updating the node state and available metrics.

    RESUMED

    The False status indicates that the MCO has not started the config drift monitor. The True status indicates that the node has resumed operation. The Unknown status means the operation is processing.

    Within the primary phases previously described, there are secondary phases that you can use to see the update progression in more detail. You can get more information that includes the secondary phases of updates by using the -o wide option of the preceding command. This provides the additional UPDATECOMPATIBLE, UPDATEFILESANDOS, DRAINEDNODE, CORDONEDNODE, REBOOTNODE, RELOADEDCRIO, and UNCORDONED columns. These secondary phases do not always occur and depend on the type of update you want to apply.
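    For example, to include the secondary-phase columns, run the following command:

    $ oc get machineconfignodes -o wide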

  2. Check the update status of nodes in a specific machine config pool by running the following command:

    $ oc get machineconfignodes $(oc get machineconfignodes -o json | jq -r '.items[]|select(.spec.pool.name=="<pool_name>")|.metadata.name') (1)
    1 The name of the pool is the MachineConfigPool object name.
    Example output
    NAME                          UPDATED   UPDATEPREPARED   UPDATEEXECUTED   UPDATEPOSTACTIONCOMPLETE   UPDATECOMPLETE   RESUMED
    ip-10-0-48-226.ec2.internal   True      False            False            False                      False            False
    ip-10-0-5-241.ec2.internal    True      False            False            False                      False            False
    ip-10-0-74-108.ec2.internal   True      False            False            False                      False            False
  3. Check the update status of an individual node by running the following command:

    $ oc describe machineconfignode/<node_name> (1)
    1 The name of the node is the MachineConfigNode object name.
    Example output
    Name:         <node_name>
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  machineconfiguration.openshift.io/v1alpha1
    Kind:         MachineConfigNode
    Metadata:
      Creation Timestamp:  2023-10-17T13:08:58Z
      Generation:          1
      Resource Version:    49443
      UID:                 4bd758ab-2187-413c-ac42-882e61761b1d
    Spec:
      Node Ref:
        Name:         <node_name>
      Pool:
        Name:         master
      ConfigVersion:
        Desired: rendered-worker-823ff8dc2b33bf444709ed7cd2b9855b (1)
    Status:
      Conditions:
        Last Transition Time:  2023-10-17T13:09:02Z
        Message:               Node has completed update to config rendered-master-cf99e619747ab19165f11e3546c71f1e
        Reason:                NodeUpgradeComplete
        Status:                True
        Type:                  Updated
        Last Transition Time:  2023-10-17T13:09:02Z
        Message:               This node has not yet entered the UpdatePreparing phase
        Reason:                NotYetOccured
        Status:                False
      Config Version:
        Current:            rendered-worker-823ff8dc2b33bf444709ed7cd2b9855b
        Desired:            rendered-worker-823ff8dc2b33bf444709ed7cd2b9855b (2)
      Health:               Healthy
      Most Recent Error:
      Observed Generation:  3
    1 The desired configuration specified in the spec.configversion.desired field updates immediately when a new configuration is detected on the node.
    2 The desired configuration specified in the status.configversion.desired field updates only when the new configuration is validated by the Machine Config Daemon (MCD). The MCD performs validation by checking the current phase of the update. If the update successfully passes the UPDATEPREPARED phase, then the status adds the new configuration.

Understanding Machine Config Operator certificates

Machine Config Operator certificates are used to secure connections between the Red Hat Enterprise Linux CoreOS (RHCOS) nodes and the Machine Config Server. For more information, see Machine Config Operator certificates.

Viewing and interacting with certificates

The following certificates are handled in the cluster by the Machine Config Controller (MCC) and can be found in the ControllerConfig resource:

  • /etc/kubernetes/kubelet-ca.crt

  • /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem

  • /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt

The MCC also handles the image registry certificates and its associated user bundle certificate.

You can get information about the listed certificates, including the underlying bundle that the certificate comes from, and the signing and subject data.

Prerequisites
  • This procedure contains optional steps that require that the python-yq RPM package is installed.

Procedure
  • Get detailed certificate information by running the following command:

    $ oc get controllerconfig/machine-config-controller -o yaml | yq -y '.status.controllerCertificates'
    Example output
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2034-10-23T13:13:02Z'
      notBefore: '2024-10-25T13:13:02Z'
      signer: CN=admin-kubeconfig-signer,OU=openshift
      subject: CN=admin-kubeconfig-signer,OU=openshift
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2024-10-26T13:13:05Z'
      notBefore: '2024-10-25T13:27:14Z'
      signer: CN=kubelet-signer,OU=openshift
      subject: CN=kube-csr-signer_@1729862835
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2024-10-26T13:13:05Z'
      notBefore: '2024-10-25T13:13:05Z'
      signer: CN=kubelet-signer,OU=openshift
      subject: CN=kubelet-signer,OU=openshift
    # ...
  • Get a simpler version of the information found in the ControllerConfig resource by checking the machine config pool status using the following command:

    $ oc get mcp master -o yaml | yq -y '.status.certExpirys'
    Example output
    - bundle: KubeAPIServerServingCAData
      expiry: '2034-10-23T13:13:02Z'
      subject: CN=admin-kubeconfig-signer,OU=openshift
    - bundle: KubeAPIServerServingCAData
      expiry: '2024-10-26T13:13:05Z'
      subject: CN=kube-csr-signer_@1729862835
    - bundle: KubeAPIServerServingCAData
      expiry: '2024-10-26T13:13:05Z'
      subject: CN=kubelet-signer,OU=openshift
    - bundle: KubeAPIServerServingCAData
      expiry: '2025-10-25T13:13:05Z'
      subject: CN=kube-apiserver-to-kubelet-signer,OU=openshift
    # ...

    This method is meant for OKD applications that already consume machine config pool information.

  • Check which image registry certificates are on the nodes:

    1. Log in to a node:

      $ oc debug node/<node_name>
    2. Set /host as the root directory within the debug shell:

      sh-5.1# chroot /host
    3. Look at the contents of the /etc/docker/certs.d directory:

      sh-5.1# ls /etc/docker/certs.d
      Example output
      image-registry.openshift-image-registry.svc.cluster.local:5000
      image-registry.openshift-image-registry.svc:5000