Tune hosted control planes for low latency by applying a performance profile. With the performance profile, you can restrict CPUs for infrastructure and application containers and configure huge pages, Hyper-Threading, and CPU partitions for latency-sensitive processes.
You can create a cluster performance profile by using the Performance Profile Creator (PPC) tool. The PPC is a function of the Node Tuning Operator.
The PPC combines information about your cluster with user-supplied configurations to generate a performance profile that is appropriate for your hardware, topology, and use case.
The following is a high-level workflow for creating and applying a performance profile in your cluster:
Gather information about your cluster by using the must-gather command.
Use the PPC tool to create a performance profile.
Apply the performance profile to your cluster.
The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster.
You have cluster-admin role access to the management cluster.
You installed the OpenShift CLI (oc).
Export the management cluster kubeconfig file by running the following command:
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
List all node pools across all namespaces by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
clusters democluster-us-east-1a democluster 1 1 False False 4.17.0 False True
The output shows the namespace clusters in the management cluster where the NodePool resource is defined.
The name of the NodePool resource, for example democluster-us-east-1a.
The HostedCluster that this NodePool belongs to, for example democluster.
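If you plan to script the remaining steps, you can capture the node pool name with a JSONPath query instead of copying it from the table. This is an optional convenience, and it assumes a single node pool in the clusters namespace:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -n clusters -o jsonpath='{.items[0].metadata.name}'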
On the management cluster, run the following command to list available secrets:
$ oc get secrets -n clusters
NAME TYPE DATA AGE
builder-dockercfg-25qpp kubernetes.io/dockercfg 1 128m
default-dockercfg-mkvlz kubernetes.io/dockercfg 1 128m
democluster-admin-kubeconfig Opaque 1 127m
democluster-etcd-encryption-key Opaque 1 128m
democluster-kubeadmin-password Opaque 1 126m
democluster-pull-secret Opaque 1 128m
deployer-dockercfg-8lfpd kubernetes.io/dockercfg 1 128m
Extract the kubeconfig file for the hosted cluster by running the following command:
$ oc get secret <secret_name> -n <cluster_namespace> -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
For example:
$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
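Optionally, you can confirm that the extracted kubeconfig file works by listing the nodes of the hosted cluster. This check is illustrative and is not required for the rest of the procedure:
$ oc --kubeconfig=hosted-cluster-kubeconfig get nodes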
To create a must-gather bundle for the hosted cluster, open a separate terminal window and run the following commands:
Export the hosted cluster kubeconfig file:
$ export HC_KUBECONFIG=<path_to_hosted_cluster_kubeconfig>
For example:
$ export HC_KUBECONFIG=~/hostedcpkube/hosted-cluster-kubeconfig
Navigate to the directory where you want to store the must-gather data.
Gather the troubleshooting data for your hosted cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" adm must-gather
Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -czvf must-gather.tar.gz must-gather.local.1203869488012141147
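Optionally, you can verify the archive by listing its first few entries. This check is illustrative; the directory name inside the archive matches the must-gather directory on your system:
$ tar -tzf must-gather.tar.gz | head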
As a cluster administrator, you can use Podman with the Performance Profile Creator (PPC) tool to create a performance profile.
For more information about PPC arguments, see "Performance Profile Creator arguments".
The PPC tool is designed to be hosted-cluster aware. When it detects a hosted cluster from the must-gather data, it automatically takes the following actions:
Recognizes that there is no machine config pool (MCP).
Uses node pools as the source of truth for compute node configurations instead of MCPs.
Does not require you to specify the node-pool-name value explicitly unless you want to target a specific pool.
Access to the cluster as a user with the cluster-admin role.
A hosted cluster is installed.
Installation of Podman and the OpenShift CLI (oc).
Access to the Node Tuning Operator image.
Access to the must-gather data for your cluster.
On the hosted cluster, use Podman to authenticate to registry.redhat.io by running the following command:
$ podman login registry.redhat.io
Username: <user_name>
Password: <password>
Create a performance profile on the hosted cluster by running the following command. The example uses sample PPC arguments and values:
$ podman run --entrypoint performance-profile-creator \
-v /path/to/must-gather:/must-gather:z \(1)
registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4 \
--must-gather-dir-path /must-gather \
--reserved-cpu-count=2 \(2)
--rt-kernel=false \(3)
--split-reserved-cpus-across-numa=false \(4)
--topology-manager-policy=single-numa-node \(5)
--node-pool-name=democluster-us-east-1a \
--power-consumption-mode=ultra-low-latency \(6)
--offlined-cpu-count=1 \(7)
> my-hosted-cp-performance-profile.yaml
1 Mounts the local directory where the output of an oc adm must-gather was created into the container.
2 Specifies two reserved CPUs.
3 Disables the real-time kernel.
4 Disables splitting the reserved CPUs across NUMA nodes.
5 Specifies the NUMA topology policy. If you install the NUMA Resources Operator, you must set this value to single-numa-node.
6 Specifies minimal latency at the cost of increased power consumption.
7 Specifies one offlined CPU.
level=info msg="Nodes names targeted by democluster-us-east-1a pool are: ip-10-0-129-110.ec2.internal "
level=info msg="NUMA cell(s): 1"
level=info msg="NUMA cell 0 : [0 2 1 3]"
level=info msg="CPU(s): 4"
level=info msg="2 reserved CPUs allocated: 0,2 "
level=info msg="1 isolated CPUs allocated: 1"
level=info msg="Additional Kernel Args based on configuration: []
Review the created YAML file by running the following command:
$ cat my-hosted-cp-performance-profile.yaml
---
apiVersion: v1
data:
tuning: |
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
creationTimestamp: null
name: performance
spec:
cpu:
isolated: "1"
offlined: "3"
reserved: 0,2
net:
userLevelNetworking: false
nodeSelector:
node-role.kubernetes.io/worker: ""
numa:
topologyPolicy: single-numa-node
realTimeKernel:
enabled: false
workloadHints:
highPowerConsumption: true
perPodPowerManagement: false
realTime: true
status: {}
kind: ConfigMap
metadata:
name: performance
namespace: clusters
To set low latency with the performance profile on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure low-latency tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. The Tuned object in this case is a PerformanceProfile object that defines the performance profile that you want to apply to the nodes in a node pool.
Export the management cluster kubeconfig file by running the following command:
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
Create the ConfigMap object in the management cluster by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" apply -f my-hosted-cp-performance-profile.yaml
Edit the NodePool object in the clusters namespace by running the following command. Add the spec.tuningConfig field and set it to the name of the created performance profile:
$ oc --kubeconfig="$MGMT_KUBECONFIG" edit np -n clusters
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
annotations:
hypershift.openshift.io/nodePoolCurrentConfig: 2f752a2c
hypershift.openshift.io/nodePoolCurrentConfigVersion: 998aa3ce
hypershift.openshift.io/nodePoolPlatformMachineTemplate: democluster-us-east-1a-3dff55ec
creationTimestamp: "2025-04-09T09:41:55Z"
finalizers:
- hypershift.openshift.io/finalizer
generation: 1
labels:
hypershift.openshift.io/auto-created-for-infra: democluster
name: democluster-us-east-1a
namespace: clusters
ownerReferences:
- apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
name: democluster
uid: af77e390-c289-433c-9d29-3aee8e5dc76f
resourceVersion: "53056"
uid: 11efa47c-5a7b-476c-85cf-a274f748a868
spec:
tuningConfig:
- name: performance
arch: amd64
clusterName: democluster
management:
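If you prefer a non-interactive alternative to oc edit, you can set the field with a single oc patch command. This is a sketch that assumes the node pool name from this example; note that a merge patch replaces any existing tuningConfig list:
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch np democluster-us-east-1a -n clusters \
--type merge -p '{"spec":{"tuningConfig":[{"name":"performance"}]}}'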
You can reference the same profile in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them.
List all node pools across all namespaces by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
clusters democluster-us-east-1a democluster 1 1 False False 4.17.0 False True
The UPDATINGCONFIG field shows True while the node pool applies the updated tuning configuration. The field changes back to False when the update is complete.
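To follow the rollout without rerunning the command, you can watch the node pools until the UPDATINGCONFIG field returns to False:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A -w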
List all config maps in the clusters-democluster namespace by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get cm -n clusters-democluster
NAME DATA AGE
aggregator-client-ca 1 69m
auth-config 1 68m
aws-cloud-config 1 68m
aws-ebs-csi-driver-trusted-ca-bundle 1 66m
... 1 67m
kubelet-client-ca 1 69m
kubeletconfig-performance-democluster-us-east-1a 1 22m
...
ovnkube-identity-cm 2 66m
performance-democluster-us-east-1a 1 22m
...
tuned-performance-democluster-us-east-1a 1 22m
The output shows that a kubelet configuration, kubeletconfig-performance-democluster-us-east-1a, and a performance profile config map, performance-democluster-us-east-1a, have been created. The Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which profiles are applied to each node.
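For example, assuming the HC_KUBECONFIG variable from the earlier procedure still points to the hosted cluster kubeconfig file, you can list the synced Tuned objects and the per-node Profile objects in the Node Tuning Operator namespace:
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator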
List available secrets on the management cluster by running the following command:
$ oc get secrets -n clusters
NAME TYPE DATA AGE
builder-dockercfg-25qpp kubernetes.io/dockercfg 1 128m
default-dockercfg-mkvlz kubernetes.io/dockercfg 1 128m
democluster-admin-kubeconfig Opaque 1 127m
democluster-etcd-encryption-key Opaque 1 128m
democluster-kubeadmin-password Opaque 1 126m
democluster-pull-secret Opaque 1 128m
deployer-dockercfg-8lfpd kubernetes.io/dockercfg 1 128m
Extract the kubeconfig file for the hosted cluster by running the following command:
$ oc get secret <secret_name> -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
For example:
$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
Export the hosted cluster kubeconfig file by running the following command:
$ export HC_KUBECONFIG=<path_to_hosted_cluster_kubeconfig>
Verify that the kubeletconfig is mirrored in the hosted cluster by running the following command:
$ oc --kubeconfig="$HC_KUBECONFIG" get cm -n openshift-config-managed | grep kubelet
kubelet-serving-ca 1 79m
kubeletconfig-performance-democluster-us-east-1a 1 15m
Verify that the single-numa-node policy is set on the hosted cluster by running the following command:
$ oc --kubeconfig="$HC_KUBECONFIG" get cm kubeletconfig-performance-democluster-us-east-1a -o yaml -n openshift-config-managed | grep single
topologyManagerPolicy: single-numa-node
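As a final spot check, you can inspect the kernel boot arguments on the tuned node to confirm that the CPU isolation settings from the profile took effect. The node name below comes from the PPC output in this example and differs in your cluster:
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/ip-10-0-129-110.ec2.internal -- chroot /host cat /proc/cmdline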