You can use the Application-Aware Quota (AAQ) Operator to customize and manage resource quotas for individual components in an OKD cluster.
The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native ResourceQuota object in the OKD platform.
In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native ResourceQuota object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for OKD Virtualization workloads.
OKD Virtualization requires significant compute resource allocation to handle virtual machine (VM) live migrations and manage VM infrastructure overhead. When upgrading OKD Virtualization, you must migrate VMs to upgrade the virt-launcher pod. However, migrating a VM in the presence of a resource quota can cause the migration, and subsequently the upgrade, to fail.
With AAQ, you can allocate resources for VMs without interfering with cluster-level activities such as upgrades and node maintenance. The AAQ Operator also supports non-compute resources, which eliminates the need to manage the native resource quota and AAQ API objects separately.
The AAQ Operator introduces two new API objects defined as custom resource definitions (CRDs) for managing alternative quota implementations across multiple namespaces:
ApplicationAwareResourceQuota
: Sets aggregate quota restrictions enforced per namespace. The ApplicationAwareResourceQuota API is compatible with the native ResourceQuota object and shares the same specification and status definitions.
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareResourceQuota
metadata:
  name: example-resource-quota
spec:
  hard:
    requests.memory: 1Gi
    limits.memory: 1Gi
    requests.cpu/vmi: "1" (1)
    requests.memory/vmi: 1Gi (2)
# ...
(1) The maximum amount of CPU that is allowed for VM workloads in the default namespace.
(2) The maximum amount of RAM that is allowed for VM workloads in the default namespace.
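As a usage sketch, you can save the manifest to a file, create the quota with the CLI, and then read the object back to see how much of the quota is consumed. The file name is a placeholder; the resource name and namespace match the example above:
$ oc create -f example-resource-quota.yaml -n default
$ oc get applicationawareresourcequota example-resource-quota -n default -o yaml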
ApplicationAwareClusterResourceQuota
: Mirrors the ApplicationAwareResourceQuota object at the cluster scope. It is compatible with the native ClusterResourceQuota API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the spec.selector.labels or spec.selector.annotations fields.
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareClusterResourceQuota (1)
metadata:
  name: example-resource-quota
spec:
  quota:
    hard:
      requests.memory: 1Gi
      limits.memory: 1Gi
      requests.cpu/vmi: "1"
      requests.memory/vmi: 1Gi
  selector:
    annotations: null
    labels:
      matchLabels:
        kubernetes.io/metadata.name: default
# ...
(1) You can only create an ApplicationAwareClusterResourceQuota object if the spec.applicationAwareConfig.allowApplicationAwareClusterResourceQuota field in the HyperConverged custom resource (CR) is set to true.
If both the spec.selector.labels and spec.selector.annotations fields are set, only namespaces that match both selectors are included in the cluster quota.
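For annotation-based selection, the selector takes simple key/value pairs, mirroring the native ClusterResourceQuota selector format. The following is a minimal sketch only; the annotation key, its value, and the quota limits are illustrative assumptions:
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareClusterResourceQuota
metadata:
  name: example-annotation-quota
spec:
  quota:
    hard:
      requests.memory/vmi: 4Gi
  selector:
    annotations:
      openshift.io/requester: example-user
    labels: null
# ...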
The AAQ controller uses a scheduling gate mechanism to evaluate whether there is enough of a resource available to run a workload. If so, the scheduling gate is removed from the pod and it is considered ready for scheduling. The quota usage status is updated to indicate how much of the quota is used.
If the CPU and memory requests and limits for the workload exceed the enforced quota usage limit, the pod remains in SchedulingGated status until there is enough quota available. The AAQ controller creates an event of type Warning with details on why the quota was exceeded. You can view the event details by using the oc get events command.
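For example, you can list only warning events in the namespace where the pod is gated; the namespace name is a placeholder:
$ oc get events -n <namespace> --field-selector type=Warning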
To deploy the AAQ Operator, set the enableApplicationAwareQuota feature gate to true in the HyperConverged custom resource (CR).
You have access to the cluster as a user with cluster-admin privileges.
You have installed the OpenShift CLI (oc).
Set the enableApplicationAwareQuota feature gate to true in the HyperConverged CR by running the following command:
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
--type json -p '[{"op": "add", "path": "/spec/featureGates/enableApplicationAwareQuota", "value": true}]'
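To confirm that the feature gate is set, one option is to read the value back from the HyperConverged CR, for example:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.featureGates.enableApplicationAwareQuota}'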
You can configure the AAQ Operator by specifying the fields of the spec.applicationAwareConfig object in the HyperConverged custom resource (CR).
You have access to the cluster as a user with cluster-admin privileges.
You have installed the OpenShift CLI (oc).
Update the HyperConverged CR by running the following command:
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{
  "spec": {
    "applicationAwareConfig": {
      "vmiCalcConfigName": "DedicatedVirtualResources",
      "namespaceSelector": {
        "matchLabels": {
          "app": "my-app"
        }
      },
      "allowApplicationAwareClusterResourceQuota": true
    }
  }
}'
where:
vmiCalcConfigName
Specifies how resource counting is managed for pods that run virtual machine (VM) workloads. Possible values are:
VmiPodUsage
: Counts compute resources for pods associated with VMs in the same way as native resource quotas and excludes migration-related resources.
VirtualResources
: Counts compute resources based on the VM specifications, using the VM RAM size for memory and virtual CPUs for processing.
DedicatedVirtualResources (default)
: Similar to VirtualResources, but separates resource tracking for pods associated with VMs by adding a /vmi suffix to CPU and memory resource names. For example, requests.cpu/vmi and requests.memory/vmi.
namespaceSelector
Determines the namespaces for which an AAQ scheduling gate is added to pods when they are created. If a namespace selector is not defined, the AAQ Operator targets namespaces that have the application-aware-quota/enable-gating label by default, as shown in the example after this list.
allowApplicationAwareClusterResourceQuota
If set to true, you can create and manage the ApplicationAwareClusterResourceQuota object. Setting this attribute to true can increase scheduling time.
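As a sketch of that default gating behavior, you could opt a namespace in by applying the label yourself; the namespace name is a placeholder and the label value shown is an assumption:
$ oc label namespace <namespace> application-aware-quota/enable-gating=true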