
CPU requirements

OKD Virtualization requires CPUs supported by Fedora 9 with specific virtualization extensions enabled.

CPU requirements for OKD Virtualization

If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines.

For more information, see "Configuring a required node affinity rule" in the Additional resources section.

  • Supports AMD64, Intel 64-bit (x86-64-v2), IBM Z® (s390x), or ARM64-based (arm64 or aarch64) architectures and their respective CPU extensions.

  • Intel VT-x, AMD-V, or ARM virtualization extensions are enabled, or s390x virtualization support is enabled.

  • NX (no execute) flag is enabled.

  • If you use s390x architecture, the default CPU model is set to gen15b. For more information, see "Configuring the default CPU model" in the Additional resources section.
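
    For illustration, the default CPU model can be set in the HyperConverged custom resource. The following minimal sketch assumes the spec.defaultCPUModel field and the default kubevirt-hyperconverged namespace used by OKD Virtualization; see "Configuring the default CPU model" for the full procedure.

      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: kubevirt-hyperconverged
      spec:
        defaultCPUModel: gen15b   # assumed field; sets the cluster-wide default CPU model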

Operating system requirements

OKD Virtualization requires Fedora CoreOS (FCOS) on worker nodes. Fedora worker nodes are not supported.

For more information, see "About FCOS" in the Additional resources section.

Storage requirements

OKD Virtualization requires OKD-supported storage with specific configuration for VM workloads and snapshots.

Storage requirements for OKD Virtualization
  • Storage must be supported by OKD. For more information, see "Optimizing storage" in the Additional resources section.

  • You must create a default OKD Virtualization or OKD storage class. A default storage class addresses the unique storage needs of VM workloads and provides optimized performance, reliability, and user experience. If both OKD Virtualization and OKD default storage classes exist, the OKD Virtualization class takes precedence when creating VM disks.

    To mark a storage class as the default for virtualization workloads, set the annotation storageclass.kubevirt.io/is-default-virt-class to "true", as shown in the example after this list.

  • If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class.
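
For example, the following minimal sketch marks a storage class as the default for virtualization workloads and associates a VolumeSnapshotClass with it. The resource names and the provisioner and driver values are placeholders; the driver of the VolumeSnapshotClass must match the CSI provisioner of the storage class.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: <storage_class_name>
    annotations:
      storageclass.kubevirt.io/is-default-virt-class: "true"   # default storage class for VM disks
  provisioner: <csi_provisioner>
  ---
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: <volume_snapshot_class_name>
  driver: <csi_provisioner>       # must match the provisioner of the default storage class
  deletionPolicy: Delete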

About volume and access modes for virtual machine disks

If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode, as shown in the example after the following list.

For a list of known storage providers for OKD Virtualization, see the Red Hat Ecosystem Catalog.

For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:

  • ReadWriteMany (RWX) access mode is required for live migration.

  • The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

    For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
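
If your storage class does not have a storage profile, you can set the access and volume modes explicitly when creating a VM disk. The following DataVolume sketch is illustrative only; the name, storage class, and disk size are placeholders, and the blank source is just one of several possible disk sources.

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: <datavolume_name>
  spec:
    source:
      blank: {}                    # create an empty disk image
    pvc:
      storageClassName: <storage_class_name>
      accessModes:
        - ReadWriteMany            # RWX is required for live migration
      volumeMode: Block            # Block avoids the extra file system and disk image layers
      resources:
        requests:
          storage: 30Gi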

You cannot live migrate virtual machines with the following configurations:

  • Storage volume with ReadWriteOnce (RWO) access mode

  • Passthrough features such as GPUs

Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots.
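
For example, a minimal sketch of the relevant part of a VirtualMachine manifest with the eviction strategy disabled (the VM name is a placeholder and the manifest is abridged):

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: <vm_name>
  spec:
    template:
      spec:
        evictionStrategy: None   # power down this VM during node drain instead of live migrating it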

Physical resource overhead requirements

OKD Virtualization is an add-on to OKD and imposes additional overhead that you must account for when planning a cluster.

Each cluster machine must accommodate the following overhead requirements in addition to the OKD requirements. Oversubscribing the physical resources in a cluster can affect performance.

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own setup and environment.

Memory overhead

Calculate the memory overhead values for OKD Virtualization by using the equations below.

Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB

Additionally, OKD Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
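
For example, a cluster with 3 infrastructure nodes and 5 worker nodes (node counts chosen for illustration only) has an approximate cluster memory overhead of:

(3 × 150 MiB) + (5 × 360 MiB) + 2179 MiB ≈ 4429 MiB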

Virtual machine memory overhead
Memory overhead per virtual machine ≈ (0.002 × requested memory) \
              + 218 MiB \
              + 8 MiB × (number of vCPUs) \
              + 16 MiB × (number of graphics devices) \
              + (additional memory overhead)
  • 218 MiB is required for the processes that run in the virt-launcher pod.

  • 8 MiB × (number of vCPUs) refers to the number of virtual CPUs requested by the virtual machine.

  • 16 MiB × (number of graphics devices) refers to the number of virtual graphics cards requested by the virtual machine.

  • Additional memory overhead:

    • If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.

    • If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.

    • If Trusted Platform Module (TPM) is enabled, add 53 MiB.
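
For example, a virtual machine that requests 8 GiB of memory, 4 vCPUs, one graphics device, and one SR-IOV network device (values chosen for illustration only) has an approximate memory overhead of:

(0.002 × 8192 MiB) + 218 MiB + (8 MiB × 4) + (16 MiB × 1) + 1024 MiB ≈ 1306 MiB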

CPU overhead

Calculate the cluster processor overhead requirements for OKD Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores

OKD Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OKD Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead

If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.

Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OKD Virtualization environment.

Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OKD Virtualization.

Virtual machine storage overhead

Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OKD Virtualization does not currently allocate any additional ephemeral storage for the running container itself.

Example

As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

Single-node OpenShift differences

You can install OKD Virtualization on single-node OpenShift.

However, single-node OpenShift does not support the following features:

  • High availability

  • Pod disruption

  • Live migration

  • Virtual machines or templates that have an eviction strategy configured

Object maximums

Consider tested object maximums for both OKD and OKD Virtualization when planning your cluster.

OKD

See "OKD object maximums" in the Additional resources section.

OKD Virtualization

See "OKD Virtualization supported limits" in the Additional resources section.

Live migration requirements

Live migration requires shared storage, sufficient resources, and compatible CPUs across nodes.

Live migration requirements
  • Shared storage with ReadWriteMany (RWX) access mode.

  • Sufficient RAM and network bandwidth.

    You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

    Required spare memory ≈ (maximum number of nodes that can drain in parallel) × (highest total VM memory request allocations across nodes)

    A worked example follows this list.

    The default number of migrations that can run in parallel in the cluster is 5. For more information, see "Configuring live migration" in the Additional resources section.

  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
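
As a worked example of the spare memory calculation, if at most 2 nodes can drain in parallel and the highest total VM memory request allocations on any single node is 64 GiB, the cluster needs approximately 2 × 64 GiB = 128 GiB of spare memory request capacity. The node count and memory figures are illustrative only.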

A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. For more information, see "Using a dedicated network for live migration" in the Additional resources section.

Cluster high availability options

Configure high availability (HA) for your cluster by using installer-provisioned infrastructure (IPI), the Node Health Check Operator, or manual monitoring.

Methods of configuring HA
  • Automatic high availability for installer-provisioned infrastructure is available by deploying machine health checks. For more information, see "Installer-provisioned infrastructure installation overview" and "About machine health checks" in the Additional resources section.

    In OKD clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See the "Run strategies" documentation for more detailed information about the potential outcomes and how run strategies affect those outcomes.

    Currently, installer-provisioned infrastructure is not supported on IBM Z®.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OKD cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.

    Fence Agents Remediation uses supported fencing agents to reset failed nodes faster than the Self Node Remediation Operator. This improves overall virtual machine high availability. For more information, see the OKD Virtualization - Fencing and VM High Availability Guide knowledgebase article.

  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.