You can use the procedures in these runbooks to diagnose and resolve issues that trigger OKD Virtualization alerts.
OKD Virtualization alerts are displayed on the Virtualization → Overview → Overview tab in the web console.
This alert fires when DataImportCron
cannot poll or import the latest disk
image versions.
DataImportCron
polls disk images, checking for the latest versions, and
imports the images as persistent volume claims (PVCs). This process ensures
that PVCs are updated to the latest version so that they can be used as
reliable clone sources or golden images for virtual machines (VMs).
For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.
VMs might be created from outdated disk images.
VMs might fail to start because no source PVC is available for cloning.
Check the cluster for a default storage class:
$ oc get sc
The output displays the storage classes with (default)
beside the name
of the default storage class. You must set a default storage class, either on
the cluster or in the DataImportCron
specification, in order for the
DataImportCron
to poll and import golden images. If no storage class is
defined, the DataVolume controller fails to create PVCs and the following
event is displayed: DataVolume.storage spec is missing accessMode and no storageClass to choose profile.
Obtain the DataImportCron
namespace and name:
$ oc get dataimportcron -A -o json | \
jq -r '.items[] | select(.status.conditions[] | select(.type == "UpToDate" and .status == "False")) | .metadata.namespace + "/" + .metadata.name'
If a default storage class is not defined on the cluster, check the
DataImportCron
specification for a default storage class:
$ oc get dataimportcron <dataimportcron> -o yaml | \
grep -B 5 storageClassName
      url: docker://.../cdi-func-test-tinycore
    storage:
      resources:
        requests:
          storage: 5Gi
      storageClassName: rook-ceph-block
Obtain the name of the DataVolume
associated with the DataImportCron
object:
$ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
jq .status.lastImportedPVC.name
Check the DataVolume object for error conditions and messages:
$ oc -n <namespace> get dv <datavolume> -o yaml
Set the CDI_NAMESPACE
environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | \
grep cdi-operator | awk '{print $1}')"
Check the cdi-deployment
log for error messages:
$ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment
Set a default storage class, either on the cluster or in the DataImportCron
specification, to poll and import golden images. The updated Containerized Data
Importer (CDI) will resolve the issue within a few seconds.
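If the cluster has no default storage class, one way to set one is to add the standard Kubernetes default-class annotation to an existing storage class. This is a hedged example; <storage_class> is a placeholder for a storage class that already exists in your cluster:
$ oc patch storageclass <storage_class> --type merge \
-p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'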
If the issue does not resolve itself, delete the data volumes associated
with the affected DataImportCron
objects. The CDI will recreate the data
volumes with the default storage class.
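For example, using the namespace and name obtained during the diagnosis procedure, you can delete the affected data volume as follows; the CDI then recreates it by using the default storage class:
$ oc -n <namespace> delete datavolume <datavolume>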
If your cluster is installed in a restricted network environment, disable
the enableCommonBootImageImport
feature gate in order to opt out of automatic
updates:
$ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
-p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a DataVolume
object restarts more than three
times.
Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.
Obtain the name and namespace of the data volume:
$ oc get dv -A -o json | \
jq -r '.items[] | select(.status.restartCount>3)' | \
jq '.metadata.name, .metadata.namespace'
Check the status of the pods associated with the data volume:
$ oc get pods -n <namespace> -o json | \
jq -r '.items[] | select(.metadata.ownerReferences[] | select(.name=="<dv_name>")).metadata.name'
Obtain the details of the pods:
$ oc -n <namespace> describe pods <pod>
Check the pod logs for error messages:
$ oc -n <namespace> logs <pod>
Delete the data volume, resolve the issue, and create a new data volume.
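The following is a minimal sketch of recreating the data volume after the underlying problem is fixed, assuming an HTTP import source; the URL, size, and names are placeholders for your environment:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume>
  namespace: <namespace>
spec:
  source:
    http:
      url: "https://<server>/<disk_image>.qcow2"
  storage:
    resources:
      requests:
        storage: 5Gi
Apply the manifest with oc apply -f <file> and then monitor the import progress in the data volume status.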
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the Containerized Data Importer (CDI) is in a degraded state:
Not progressing
Not available to use
CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.
Set the CDI_NAMESPACE
environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | \
grep cdi-operator | awk '{print $1}')"
Check the CDI deployment for components that are not ready:
$ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
Check the details of the failing pod:
$ oc -n $CDI_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $CDI_NAMESPACE logs <pod>
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.
The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.
Set the CDI_NAMESPACE
environment variable:
$ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
awk '{print $1}')"
Check whether the cdi-operator
pod is currently running:
$ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
Obtain the details of the cdi-operator
pod:
$ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator
Check the log of the cdi-operator
pod for errors:
$ oc -n $CDI_NAMESPACE logs -l name=cdi-operator
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.
If a storage profile is incomplete, the CDI cannot infer persistent volume claim
(PVC) fields, such as volumeMode
and accessModes
, which are required to
create a virtual machine (VM) disk.
The CDI cannot create a VM disk on the PVC.
Identify the incomplete storage profile:
$ oc get storageprofile <storage_class>
Add the missing storage profile information as in the following example:
$ oc patch storageprofile local --type=merge \
-p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.
If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | \
grep cluster-network-addons-operator | awk '{print $1}')"
Check the status of the cluster-network-addons-operator
pod:
$ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator
Check the cluster-network-addons-operator
logs for error messages:
$ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator
Obtain the details of the cluster-network-addons-operator
pods:
$ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.
The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
HPP is not usable. Its components are not ready and they are not progressing towards a ready state.
Set the HPP_NAMESPACE
environment variable:
$ export HPP_NAMESPACE="$(oc get deployment -A | \
grep hostpath-provisioner-operator | awk '{print $1}')"
Check for HPP components that are currently not ready:
$ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
Obtain the details of the failing pod:
$ oc -n $HPP_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $HPP_NAMESPACE logs <pod>
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the hostpath provisioner (HPP) Operator is down.
The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.
The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.
Configure the HPP_NAMESPACE
environment variable:
$ HPP_NAMESPACE="$(oc get deployment -A | grep \
hostpath-provisioner-operator | awk '{print $1}')"
Check whether the hostpath-provisioner-operator
pod is currently running:
$ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator
Obtain the details of the hostpath-provisioner-operator
pod:
$ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator
Check the log of the hostpath-provisioner-operator
pod for errors:
$ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the hostpath provisioner (HPP) shares a file
system with other critical components, such as kubelet
or the operating
system (OS).
HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.
Configure the HPP_NAMESPACE
environment variable:
$ export HPP_NAMESPACE="$(oc get deployment -A | \
grep hostpath-provisioner-operator | awk '{print $1}')"
Obtain the status of the hostpath-provisioner-csi
daemon set
pods:
$ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi
Check the hostpath-provisioner-csi
logs to identify the shared
pool and path:
$ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner
I0208 15:21:03.769731 1 utils.go:221] pool (<legacy, csi-data-dir>/csi),
shares path with OS which can lead to node disk pressure
Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.
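If you manage the HPP through the CSI storage pools API, the pool path is defined in the HostPathProvisioner custom resource. The following is a hedged sketch of the relevant stanza; the path is an example and should point to a dedicated mount rather than to a directory on the OS or kubelet file system:
spec:
  storagePools:
  - name: local
    path: /var/hpvolumes   # mount a dedicated device or partition here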
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
KubeMacPool
is down. KubeMacPool
is responsible for allocating MAC
addresses and preventing MAC address conflicts.
If KubeMacPool
is down, VirtualMachine
objects cannot be created.
Set the KMP_NAMESPACE
environment variable:
$ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
control-plane=mac-controller-manager | awk '{print $1}')"
Set the KMP_NAME
environment variable:
$ export KMP_NAME="$(oc get pod -A --no-headers -l \
control-plane=mac-controller-manager | awk '{print $2}')"
Obtain the KubeMacPool-manager
pod details:
$ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
Check the KubeMacPool-manager
logs for error messages:
$ oc logs -n $KMP_NAMESPACE $KMP_NAME
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when KubeMacPool
detects duplicate MAC addresses.
KubeMacPool
is responsible for allocating MAC addresses and preventing MAC
address conflicts. When KubeMacPool
starts, it scans the cluster for the MAC
addresses of virtual machines (VMs) in managed namespaces.
Duplicate MAC addresses on the same LAN might cause network issues.
Obtain the namespace and the name of the kubemacpool-mac-controller
pod:
$ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
-o custom-columns=":metadata.namespace,:metadata.name"
Obtain the duplicate MAC addresses from the kubemacpool-mac-controller
logs:
$ oc logs -n <namespace> <kubemacpool_mac_controller> | \
grep "already allocated"
mac address 02:00:ff:ff:ff:ff already allocated to
vm/kubemacpool-test/testvm, br1,
conflict with: vm/kubemacpool-test/testvm2, br1
Update the VMs to remove the duplicate MAC addresses.
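For example, you can edit one of the conflicting VMs (oc -n kubemacpool-test edit vm testvm2, using the names reported in the log output above) and change or delete the hard-coded macAddress value so that KubeMacPool allocates a unique address. A hedged sketch of the relevant stanza in the VirtualMachine specification:
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: br1
            bridge: {}
            macAddress: "02:00:ff:ff:ff:ff"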
Restart the kubemacpool-mac-controller
pod:
$ oc delete pod -n <namespace> <kubemacpool_mac_controller>
This alert fires when a component’s CPU usage exceeds its CPU request.
Usage of CPU resources is not optimal and the node might be overloaded.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the component’s CPU request:
$ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
Check the actual CPU usage by using a PromQL query:
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
{namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
Update the CPU request in the HyperConverged Cluster Operator (HCO) custom resource.
This alert fires when a component’s memory usage exceeds its memory request.
Usage of memory resources is not optimal and the node might be overloaded.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the component’s memory request:
$ oc -n $NAMESPACE get deployment <component> -o yaml | \
grep requests: -A 2
Check the actual memory usage by using a PromQL query:
container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
Update the memory request in the HCO custom resource.
This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.
HCO configures OKD Virtualization and its supporting operators in an
opinionated way and overwrites its operands when there is an unexpected change
to them. Users must not modify the operands directly. The HyperConverged
custom resource is the source of truth for the configuration.
Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.
Check the component_name
value in the alert details to determine the operand
kind (kubevirt
) and the operand name (kubevirt-kubevirt-hyperconverged
)
that are being changed:
Labels
alertname=KubevirtHyperconvergedClusterOperatorCRModification
component_name=kubevirt/kubevirt-kubevirt-hyperconverged
severity=warning
Do not change the HCO operands directly. Use HyperConverged
objects to configure
the cluster.
The alert resolves itself after 10 minutes if the operands are not changed manually.
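For example, instead of editing an operand directly, make the change through the supported path by editing the HyperConverged custom resource; this example assumes the default name and namespace used elsewhere in these runbooks:
$ oc edit hco kubevirt-hyperconverged -n kubevirt-hyperconverged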
This alert fires when the HyperConverged Cluster Operator (HCO) runs for
more than an hour without a HyperConverged
custom resource (CR).
This alert has the following causes:
During the installation process, you installed the HCO but you did not
create the HyperConverged
CR.
During the uninstall process, you removed the HyperConverged
CR before
uninstalling the HCO and the HCO is still running.
The mitigation depends on whether you are installing or uninstalling the HCO:
Complete the installation by creating a HyperConverged
CR with its
default values:
$ cat <<EOF | oc apply -f -
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}
EOF
Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.
This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).
HCO configures OKD Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.
However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.
Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.
Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.
Check the annotation_name
in the alert details to identify the JSON
Patch annotation:
Labels
alertname=KubevirtHyperconvergedClusterOperatorUSModification
annotation_name=kubevirt.kubevirt.io/jsonpatch
severity=info
It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.
Remove JSON Patch annotations before upgrade to avoid potential issues.
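For example, assuming the annotation shown in the alert labels above was set on the HyperConverged custom resource, you can remove it by using the standard trailing-dash syntax; adjust the resource name and namespace to your environment:
$ oc annotate hco kubevirt-hyperconverged -n kubevirt-hyperconverged \
kubevirt.kubevirt.io/jsonpatch-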
This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.
The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.
Obtain the virt-launcher
pod details:
$ oc get pod <virt-launcher> -o yaml
Identify compute
container processes with high memory usage in the
virt-launcher
pod:
$ oc exec -it <virt-launcher> -c compute -- top
Increase the memory limit in the VirtualMachine
specification as in
the following example:
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-name
    spec:
      domain:
        resources:
          limits:
            memory: 200Mi
          requests:
            memory: 128Mi
This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.
This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.
A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.
Verify that the worker node has sufficient resources:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq .items[].status.allocatable
{
  "cpu": "3500m",
  "devices.kubevirt.io/kvm": "1k",
  "devices.kubevirt.io/sev": "0",
  "devices.kubevirt.io/tun": "1k",
  "devices.kubevirt.io/vhost-net": "1k",
  "ephemeral-storage": "38161122446",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "7000128Ki",
  "pods": "250"
}
Check the status of the worker node:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
jq .items[].status.conditions
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has sufficient memory available",
  "reason": "KubeletHasSufficientMemory",
  "status": "False",
  "type": "MemoryPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has no disk pressure",
  "reason": "KubeletHasNoDiskPressure",
  "status": "False",
  "type": "DiskPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has sufficient PID available",
  "reason": "KubeletHasSufficientPID",
  "status": "False",
  "type": "PIDPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:24:15Z",
  "message": "kubelet is posting ready status",
  "reason": "KubeletReady",
  "status": "True",
  "type": "Ready"
}
Log in to the worker node and verify that the kubelet
service is running:
$ systemctl status kubelet
Check the kubelet
journal log for error messages:
$ journalctl -r -u kubelet
Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.
If the problem persists, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when fewer than two nodes in the cluster have KVM resources.
The cluster must have at least two nodes with KVM resources for live migration.
Virtual machines cannot be scheduled or run if no nodes have KVM resources.
Identify the nodes with KVM resources:
$ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
grep devices.kubevirt.io/kvm
Install KVM on the nodes without KVM resources.
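You can also confirm whether the KVM device is exposed on a given node, for example by using a debug pod; this is a hedged check that requires cluster-admin permissions:
$ oc debug node/<node> -- chroot /host ls -l /dev/kvm
If the device is missing, verify that hardware virtualization is enabled for the node, for example in the BIOS or in the hypervisor settings when using nested virtualization.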
This alert fires when one or more virt-controller
pods are running, but
none of these pods has been in the Ready
state for the past 5 minutes.
A virt-controller
device monitors the custom resource definitions (CRDs)
of a virtual machine instance (VMI) and manages the associated pods. The
device creates pods for VMIs and manages their lifecycle. The device is
critical for cluster-wide virtualization functionality.
This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify a virt-controller
device is available:
$ oc get deployment -n $NAMESPACE virt-controller \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller
deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller
deployment to check for
status conditions, such as crashing pods or failures to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Check if any problems occurred with the nodes. For example, they might
be in a NotReady
state:
$ oc get nodes
This alert can have multiple causes, including the following:
The cluster has insufficient memory.
The nodes are down.
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
There are network issues.
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when one or more virt-operator
pods are running, but
none of these pods has been in a Ready
state for the last 10 minutes.
The virt-operator
is the first Operator to start in a cluster. The virt-operator
deployment has a default replica of two virt-operator
pods.
Its primary responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller
,
virt-handler
, virt-launcher
, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
A cluster-level failure might occur. Critical cluster-wide management
functionalities, such as certification rotation, upgrade, and reconciliation of
controllers, might become unavailable. Such a state also triggers the
NoReadyVirtOperator
alert.
The virt-operator
is not directly responsible for virtual machines (VMs)
in the cluster. Therefore, its temporary unavailability does not significantly
affect VM workloads.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator
deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator
deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady
state:
$ oc get nodes
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when only one available virt-api
pod is detected during a
60-minute period, although at least two nodes are available for scheduling.
An API call outage might occur during node eviction because the virt-api
pod
becomes a single point of failure.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the number of available virt-api
pods:
$ oc get deployment -n $NAMESPACE virt-api \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-api
deployment for error conditions:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the nodes for issues such as nodes in a NotReady
state:
$ oc get nodes
Try to identify the root cause and to resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a low number of virt-controller
pods is detected. At
least one virt-controller
pod must be available in order to ensure high
availability. The default number of replicas is 2.
A virt-controller
device monitors the custom resource definitions (CRDs) of a
virtual machine instance (VMI) and manages the associated pods. The device
creates pods for VMIs and manages the lifecycle of the pods. The device is
critical for cluster-wide virtualization functionality.
The responsiveness of OKD Virtualization might become negatively affected. For example, certain requests might be missed.
In addition, if another virt-launcher
instance terminates unexpectedly,
OKD Virtualization might become completely unresponsive.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify that running virt-controller
pods are available:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
Check the virt-launcher
logs for error messages:
$ oc -n $NAMESPACE logs <virt-launcher>
Obtain the details of the virt-launcher
pod to check for status conditions
such as unexpected termination or a NotReady
state.
$ oc -n $NAMESPACE describe pod/<virt-launcher>
This alert can have a variety of causes, including:
Not enough memory on the cluster
Nodes are down
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
Networking issues
Identify the root cause and fix it, if possible.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when only one virt-operator
pod in a Ready
state has
been running for the last 60 minutes.
The virt-operator
is the first Operator to start in a cluster. Its primary
responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller
,
virt-handler
, virt-launcher
, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator
cannot provide high availability (HA) for the deployment.
HA requires two or more virt-operator
pods in a Ready
state. The default
deployment is two pods.
The virt-operator
is not directly responsible for virtual machines (VMs)
in the cluster. Therefore, its decreased availability does not significantly
affect VM workloads.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the states of the virt-operator
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Review the logs of the affected virt-operator
pods:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the affected virt-operator
pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the NetworkAddonsConfig
custom resource (CR) of the
Cluster Network Addons Operator (CNAO) is not ready.
CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.
Network functionality is affected.
Check the status conditions of the NetworkAddonsConfig
CR to identify the
deployment or daemon set that is not ready:
$ oc get networkaddonsconfig \
-o custom-columns="":.status.conditions[*].message
DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...
Check the component’s daemon set for errors:
$ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
Check the component’s logs:
$ oc -n cluster-network-addons logs <pod>
Check the component’s details for error conditions:
$ oc -n cluster-network-addons describe pod <pod>
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when no virt-operator
pod with a leader lease has been detected
for 10 minutes, although the virt-operator
pods are in a Ready
state. The
alert indicates that no leader pod is available.
The virt-operator
is the first Operator to start in a cluster. Its primary
responsibilities include the following:
Installing, live updating, and live upgrading a cluster
Monitoring the lifecycle of top-level controllers, such as virt-controller
,
virt-handler
, virt-launcher
, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator
deployment has a default replica of 2 pods, with one pod
holding a leader lease.
This alert indicates a failure at the level of the cluster. As a result, critical cluster-wide management functionalities, such as certification rotation, upgrade, and reconciliation of controllers, might not be available.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A -o \
custom-columns="":.metadata.namespace)"
Obtain the status of the virt-operator
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator
pod logs to determine the leader status:
$ oc -n $NAMESPACE logs <virt-operator> | grep lead
Leader pod example:
{"component":"virt-operator","level":"info","msg":"Attempting to acquire
leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire
leader lease <namespace>/virt-operator...
I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired
lease <namespace>/virt-operator
{"component":"virt-operator","level":"info","msg":"Started leading",
"pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}
Non-leader pod example:
{"component":"virt-operator","level":"info","msg":"Attempting to acquire
leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire
leader lease <namespace>/virt-operator...
Obtain the details of the affected virt-operator
pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when no available virt-controller
devices have been
detected for 5 minutes.
The virt-controller
devices monitor the custom resource definitions of
virtual machine instances (VMIs) and manage the associated pods. The devices
create pods for VMIs and manage the lifecycle of the pods.
Therefore, virt-controller
devices are critical for all cluster-wide
virtualization functionality.
Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Verify the number of virt-controller
devices:
$ oc get deployment -n $NAMESPACE virt-controller \
-o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller
deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller
deployment to check for
status conditions such as crashing pods or failure to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Obtain the details of the virt-controller
pods:
$ oc get pods -n $NAMESPACE | grep virt-controller
Check the logs of the virt-controller
pods for error messages:
$ oc logs -n $NAMESPACE <virt-controller>
Check the nodes for problems, such as a NotReady
state:
$ oc get nodes
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when no virt-operator
pod in a Ready
state has been
detected for 10 minutes.
The virt-operator
is the first Operator to start in a cluster. Its primary
responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the life cycle of top-level controllers, such as virt-controller
,
virt-handler
, virt-launcher
, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The default deployment is two virt-operator
pods.
This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certification rotation, upgrade, and reconciliation of controllers, might not be available.
The virt-operator
is not directly responsible for virtual machines in
the cluster. Therefore, its temporary unavailability does not significantly
affect workloads.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator
deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Generate the description of the virt-operator
deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady
state:
$ oc get nodes
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a virtual machine instance (VMI), or virt-launcher
pod, runs on a node that does not have a running virt-handler
pod.
Such a VMI is called orphaned.
Orphaned VMIs cannot be managed.
Check the status of the virt-handler
pods to view the nodes on
which they are running:
$ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
Check the status of the VMIs to identify VMIs running on nodes
that do not have a running virt-handler
pod:
$ oc get vmis --all-namespaces
Check the status of the virt-handler
daemon:
$ oc get daemonset virt-handler --all-namespaces
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE ...
virt-handler 2 2 2 2 2 ...
The daemon set is considered healthy if the Desired
, Ready
,
and Available
columns contain the same value.
If the virt-handler
daemon set is not healthy, check the virt-handler
daemon set for pod deployment issues:
$ oc get daemonset virt-handler --all-namespaces -o yaml | jq .status
Check the nodes for issues such as a NotReady
status:
$ oc get nodes
Check the spec.workloads
stanza of the KubeVirt
custom resource
(CR) for a workloads placement policy:
$ oc get kubevirt --all-namespaces -o yaml
If a workloads placement policy is configured, add the node with the VMI to the policy.
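For example, if the policy uses a node selector, the node that runs the VMI must carry the selected label. The following is a hedged sketch of the spec.workloads stanza; the label key and value are hypothetical:
spec:
  workloads:
    nodePlacement:
      nodeSelector:
        vm-workloads: "true"
In that case, label the node accordingly, for example: oc label node <node> vm-workloads=true.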
Possible causes for the removal of a virt-handler
pod from a node
include changes to the node’s taints and tolerations or to a pod’s
scheduling rules.
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when running virtual machine instances (VMIs) in
outdated virt-launcher
pods are detected 24 hours after the OKD Virtualization control plane has been updated.
Outdated VMIs might not have access to new OKD Virtualization features.
Outdated VMIs will not receive the security fixes associated with
the virt-launcher
pod update.
Identify the outdated VMIs:
$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Check the KubeVirt
custom resource (CR) to determine whether
workloadUpdateMethods
is configured in the workloadUpdateStrategy
stanza:
$ oc get kubevirt --all-namespaces -o yaml
Check each outdated VMI to determine whether it is live-migratable:
$ oc get vmi <vmi> -o yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
# ...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect
      to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
Update the HyperConverged
CR to enable automatic workload updates.
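A hedged sketch of the relevant stanza in the HyperConverged CR follows; the method list and batch settings are examples that you should adjust to your update policy:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate
    - Evict
    batchEvictionSize: 10
    batchEvictionInterval: "1m"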
If a VMI is not live-migratable and if runStrategy: always
is
set in the corresponding VirtualMachine
object, you can update the
VMI by manually stopping the virtual machine (VM):
$ virtctl stop --namespace <namespace> <vm>
A new VMI spins up immediately in an updated virt-launcher
pod to
replace the stopped VMI. This is the equivalent of a restart action.
Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.
If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration
object that targets a specific running VMI. The VMI is migrated into
an updated virt-launcher
pod.
Create a VirtualMachineInstanceMigration
manifest and save it
as migration.yaml
:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
  namespace: <namespace>
spec:
  vmiName: <vmi_name>
Create a VirtualMachineInstanceMigration
object to trigger the
migration:
$ oc create -f migration.yaml
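You can then follow the progress of the migration, for example by inspecting the migration object or the VMI status; this is a hedged example:
$ oc -n <namespace> get virtualmachineinstancemigration <migration_name> -o yaml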
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.
The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.
Changes to common templates are overwritten.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the ssp-operator
logs for templates with reverted changes:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
grep 'common template' -C 3
Try to identify and resolve the cause of the changes.
Ensure that changes are made only to copies of templates, and not to the templates themselves.
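For example, one hedged way to create an editable copy is to export a common template, rename it, and create it in your own namespace; common templates are typically provided in the openshift namespace, and the names here are placeholders:
$ oc get template <template> -n openshift -o yaml > my-template.yaml
Edit metadata.name (and remove cluster-specific metadata) in my-template.yaml, and then create the copy in your own namespace:
$ oc create -f my-template.yaml -n <your_namespace>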
This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates and/or the Template Validator might not be updated or reset if they fail.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the status of the ssp-operator
pods:
$ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
Obtain the details of the ssp-operator
pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator
logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.
Export the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Obtain the details of the ssp-operator
pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator
logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Obtain the status of the virt-template-validator
pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator
pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator
logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs), using an invalid configuration.
The VMs are not created or modified. As a result, the environment might not behave as expected.
Export the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Check the virt-template-validator
logs for errors that might indicate the
cause:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
{"component":"kubevirt-template-validator","level":"info","msg":"evalution
summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL,
value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false",
"pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when all the Template Validator pods are down.
The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.
VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
awk '{print $1}')"
Obtain the status of the virt-template-validator
pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator
pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator
logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when all the API Server pods are down.
OKD Virtualization objects cannot send API calls.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-api
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the status of the virt-api
deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the virt-api
deployment details for issues such as crashing pods or
image pull failures:
$ oc -n $NAMESPACE describe deploy virt-api
Check for issues such as nodes in a NotReady
state:
$ oc get nodes
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
More than 80% of REST calls have failed in the virt-api
pods in the last
5 minutes.
A very high rate of failed REST calls to virt-api
might lead to slow
response and execution of API calls, and potentially to API calls being
completely dismissed.
However, currently running virtual machine workloads are not likely to be affected.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Obtain the list of virt-api
pods on your deployment:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api
logs for error messages:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api
pods:
$ oc -n $NAMESPACE describe pod <virt-api>
Check if any problems occurred with the nodes. For example, they might
be in a NotReady
state:
$ oc get nodes
Check the status of the virt-api
deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api
deployment:
$ oc -n $NAMESPACE describe deploy virt-api
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
More than 5% of REST calls have failed in the virt-api
pods in the last 60 minutes.
A high rate of failed REST calls to virt-api
might lead to slow response and
execution of API calls.
However, currently running virtual machine workloads are not likely to be affected.
Set the NAMESPACE
environment variable as follows:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-api
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api
logs:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api
pods:
$ oc -n $NAMESPACE describe pod <virt-api>
Check if any problems occurred with the nodes. For example, they might be in
a NotReady
state:
$ oc get nodes
Check the status of the virt-api
deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api
deployment:
$ oc -n $NAMESPACE describe deploy virt-api
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
No running virt-controller
pod has been detected for 5 minutes.
Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-controller
deployment:
$ oc get deployment -n $NAMESPACE virt-controller -o yaml
Review the logs of the virt-controller
pod:
$ oc -n $NAMESPACE logs <virt-controller>
This alert can have a variety of causes, including the following:
Node resource exhaustion
Not enough memory on the cluster
Nodes are down
The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
Networking issues
Identify the root cause and fix it, if possible.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
More than 80% of REST calls in virt-controller
pods failed in the last 5
minutes.
The virt-controller
has likely fully lost the connection to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-controller
pod cannot reach the API server. This is commonly
caused by DNS issues on the node and networking connectivity issues.
Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
List the available virt-controller
pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller
logs for error messages when connecting to the
API server:
$ oc logs -n $NAMESPACE <virt-controller>
If the virt-controller
pod cannot connect to the API server, delete the
pod to force a restart:
$ oc -n $NAMESPACE delete pod <virt-controller>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
More than 5% of REST calls failed in virt-controller
in the last 60 minutes.
This is most likely because virt-controller
has partially lost connection
to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-controller
pod cannot reach the API server. This is commonly
caused by DNS issues on the node and networking connectivity issues.
Node-related actions, such as starting and migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
List the available virt-controller
pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller
logs for error messages when connecting
to the API server:
$ oc logs -n $NAMESPACE <virt-controller>
If the virt-controller
pod cannot connect to the API server, delete
the pod to force a restart:
$ oc -n $NAMESPACE delete pod <virt-controller>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
The virt-handler
daemon set has failed to deploy on one or more worker
nodes after 15 minutes.
This alert is a warning. It does not indicate that all virt-handler
daemon
sets have failed to deploy. Therefore, the normal lifecycle of virtual
machines is not affected unless the cluster is overloaded.
Identify worker nodes that do not have a running virt-handler
pod:
Export the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler
pods to identify pods that have
not deployed:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Obtain the name of the worker node of the virt-handler
pod:
$ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'
If the virt-handler
pods failed to deploy because of insufficient resources,
you can delete other pods on the affected worker node.
More than 80% of REST calls failed in virt-handler
in the last 5 minutes.
This alert usually indicates that the virt-handler
pods cannot connect
to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-handler
pod cannot reach the API server. This is commonly
caused by DNS issues on the node and networking connectivity issues.
Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler
pod:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler
logs for error messages when connecting to
the API server:
$ oc logs -n $NAMESPACE <virt-handler>
If the virt-handler
cannot connect to the API server, delete the pod
to force a restart:
$ oc -n $NAMESPACE delete pod <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
More than 5% of REST calls failed in virt-handler
in the last 60 minutes.
This alert usually indicates that the virt-handler
pods have partially
lost connection to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-handler
pod cannot reach the API server. This is commonly
caused by DNS issues on the node and networking connectivity issues.
Node-related actions, such as starting and migrating workloads, are delayed
on the node that virt-handler
is running on. Running workloads are not
affected, but reporting their current status might be delayed.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler
pod:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler
logs for error messages when connecting to
the API server:
$ oc logs -n $NAMESPACE <virt-handler>
If the virt-handler
cannot connect to the API server, delete the pod
to force a restart:
$ oc -n $NAMESPACE delete pod <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when no virt-operator
pod in the Running
state has
been detected for 10 minutes.
The virt-operator
is the first Operator to start in a cluster. Its primary
responsibilities include the following:
Installing, live-updating, and live-upgrading a cluster
Monitoring the life cycle of top-level controllers, such as virt-controller
,
virt-handler
, virt-launcher
, and managing their reconciliation
Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator
deployment has a default replica of 2 pods.
This alert indicates a failure at the level of the cluster. Critical cluster-wide management functionalities, such as certification rotation, upgrade, and reconciliation of controllers, might not be available.
The virt-operator
is not directly responsible for virtual machines (VMs)
in the cluster. Therefore, its temporary unavailability does not significantly
affect VM workloads.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator
deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator
deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check the status of the virt-operator
pods:
$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
Check for node issues, such as a NotReady
state:
$ oc get nodes
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when more than 80% of the REST calls in the virt-operator
pods failed in the last 5 minutes. This usually indicates that the virt-operator
pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-operator
pod cannot reach the API server. This is commonly caused
by DNS issues on the node and networking connectivity issues.
Cluster-level actions, such as upgrading and controller reconciliation, might not be available.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator
logs for error messages when connecting to the
API server:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator
pod:
$ oc -n $NAMESPACE describe pod <virt-operator>
If the virt-operator
pod cannot connect to the API server, delete the pod
to force a restart:
$ oc -n $NAMESPACE delete pod <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when more than 5% of the REST calls in virt-operator
pods
failed in the last 60 minutes. This usually indicates the virt-operator
pods
cannot connect to the API server.
This error is frequently caused by one of the following problems:
The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
The virt-operator
pod cannot reach the API server. This is commonly caused
by DNS issues on the node and networking connectivity issues.
Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Set the NAMESPACE
environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
-o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator
pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator
logs for error messages when connecting to the
API server:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator
pod:
$ oc -n $NAMESPACE describe pod <virt-operator>
If the virt-operator
pod cannot connect to the API server, delete the pod
to force a restart:
$ oc -n $NAMESPACE delete pod <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
This alert fires when the eviction strategy of a virtual machine (VM) is set
to LiveMigrate
but the VM is not migratable.
Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.
Check the VMI configuration to determine whether the value of
evictionStrategy
is LiveMigrate
:
$ oc get vmis -o yaml
Check for a False
status in the LIVE-MIGRATABLE
column to identify VMIs
that are not migratable:
$ oc get vmis -o wide
Obtain the details of the VMI and check status.conditions
to identify the
issue:
$ oc get vmi <vmi> -o yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect
      to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
Set the evictionStrategy
of the VMI to shutdown
or resolve the issue that
prevents the VMI from migrating.
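If the blocker is the InterfaceNotLiveMigratable condition shown in the example above, one hedged way to make the VMI migratable is to connect its primary interface to the pod network in masquerade mode. The following is a sketch of the relevant VirtualMachine stanza; the interface and network names are examples:
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {}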