You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters.
If you require cluster configuration changes outside of the base GitOps ZTP pipeline configuration, there are three options:
- Apply the additional configuration after the GitOps ZTP pipeline is complete. When the pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or the allocated CPU budget.
- Add new content to the GitOps ZTP pipeline. The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required.
- Create extra manifests for the cluster installation. Extra manifests are applied during installation and make the installation process more efficient.
Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OKD.
See Customizing extra installation manifests in the ZTP GitOps pipeline for information about adding extra manifests.
PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or to overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
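For example, the following sketch shows a sourceFiles entry that overlays only the CPU fields of the base PerformanceProfile.yaml source CR. The CPU ranges are hypothetical placeholders for your hardware, and every field that is not listed keeps the value defined in the base source CR:

- fileName: PerformanceProfile.yaml
  policyName: "config-policy"
  spec:
    cpu:
      isolated: "2-19,22-39" # hypothetical isolated CPU set for your hardware
      reserved: "0-1,20-21" # hypothetical reserved CPU set for your hardware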
The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements.
Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD.
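As a sketch, the source stanza of the Argo CD Application that references such a repository looks like the following. The repository URL, branch, and path are hypothetical values that you replace with your own, and the surrounding Application manifest is normally taken from the reference configuration described in Configuring the hub cluster with ArgoCD:

spec:
  source:
    repoURL: https://git.example.com/your-org/ztp-site-configs.git # hypothetical repository URL
    targetRevision: main # hypothetical branch
    path: policygentemplates # hypothetical path to your PolicyGenTemplate CRs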
Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the zero touch provisioning (ZTP) container.
Create an /out folder:
$ mkdir -p ./out
Extract the source CRs:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version}.1 extract /home/ztp --tar | tar x -C ./out
Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml:
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: $name
  annotations:
    ran.openshift.io/ztp-deploy-wave: "10"
spec:
  additionalKernelArgs:
  - "idle=poll"
  - "rcupdate.rcu_normal_after_boot=0"
  cpu:
    isolated: $isolated
    reserved: $reserved
  hugepages:
    defaultHugepagesSize: $defaultHugepagesSize
    pages:
      - size: $size
        count: $count
        node: $node
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/$mcp: ""
  net:
    userLevelNetworking: true
  nodeSelector:
    node-role.kubernetes.io/$mcp: ''
  numa:
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: true
Any fields in the source CR that contain $... placeholders are removed from the generated CR if they are not provided in the PolicyGenTemplate CR.
Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false.
- fileName: PerformanceProfile.yaml
  policyName: "config-policy"
  metadata:
    name: openshift-node-performance-profile
  spec:
    cpu:
      # These must be tailored for the specific hardware platform
      isolated: "2-19,22-39"
      reserved: "0-1,20-21"
    hugepages:
      defaultHugepagesSize: 1G
      pages:
        - size: 1G
          count: 10
    globallyDisableIrqLoadBalancing: false
Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
The ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content:
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  additionalKernelArgs:
    - idle=poll
    - rcupdate.rcu_normal_after_boot=0
  cpu:
    isolated: 2-19,22-39
    reserved: 0-1,20-21
  globallyDisableIrqLoadBalancing: false
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 10
        size: 1G
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  net:
    userLevelNetworking: true
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: restricted
  realTimeKernel:
    enabled: true
An exception to this is the $mcp variable: fields that use $mcp are not removed, but have $mcp substituted with the value that you specify for mcp in the PolicyGenTemplate CR, for example master in the machineConfigPoolSelector and nodeSelector fields of the generated CR shown above.
The source CRs in the GitOps ZTP site generator container provide a set of critical features and node tuning settings for RAN Distributed Unit (DU) applications. These are applied to the clusters that you deploy with ZTP. To add or modify existing source CRs in the ztp-site-generate container, rebuild the ztp-site-generate container and make it available to the hub cluster, typically from the disconnected registry associated with the hub cluster. Any valid OKD CR can be added.
Perform the following procedure to add new content to the ZTP pipeline.
Create a directory containing a Containerfile and the source CR YAML files that you want to include in the updated ztp-site-generate container, for example:
ztp-update/
├── example-cr1.yaml
├── example-cr2.yaml
└── ztp-update.in
Add the following content to the ztp-update.in Containerfile:
FROM registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10
ADD example-cr2.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
ADD example-cr1.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/
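For illustration, a minimal sketch of what example-cr1.yaml might contain follows. The ConfigMap kind, name, namespace, and data are hypothetical; any valid OKD CR can be used, and the ran.openshift.io/ztp-deploy-wave annotation follows the same pattern as the built-in source CRs to control the wave in which the generated policy is applied:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-custom-config # hypothetical name
  namespace: example-namespace # hypothetical namespace
  annotations:
    ran.openshift.io/ztp-deploy-wave: "10" # deploy wave annotation, as used by the built-in source CRs
data:
  example.property: "value" # hypothetical configuration data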
Open a terminal at the ztp-update/ folder and rebuild the container:
$ podman build -t ztp-site-generate-rhel8-custom:v4.10-custom-1 -f ztp-update.in .
Push the built container image to your disconnected registry, for example:
$ podman push localhost/ztp-site-generate-rhel8-custom:v4.10-custom-1 registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1
Patch the Argo CD instance on the hub cluster to point to the newly built container image:
$ oc patch -n openshift-gitops argocd openshift-gitops --type=json -p '[{"op": "replace", "path":"/spec/repo/initContainers/0/image", "value": "registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1"} ]'
When the Argo CD instance is patched, the openshift-gitops-repo-server pod automatically restarts.
Verify that the new openshift-gitops-repo-server pod has completed initialization and that the previous repo pod is terminated:
$ oc get pods -n openshift-gitops | grep openshift-gitops-repo-server
openshift-gitops-repo-server-7df86f9774-db682          1/1     Running   1          28s
You must wait until the new openshift-gitops-repo-server pod has completed initialization and the previous pod is terminated before the newly added container image content is available.
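If you prefer to wait from a script rather than polling oc get pods, the following sketch shows one way to do it. It assumes that the repo server pod carries the app.kubernetes.io/name=openshift-gitops-repo-server label, which is typically applied by the operator; verify the label on your hub cluster before relying on it:

$ oc wait pod -n openshift-gitops \
    -l app.kubernetes.io/name=openshift-gitops-repo-server \
    --for=condition=Ready --timeout=300s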
Alternatively, you can patch the ArgoCD instance as described in Configuring the hub cluster with ArgoCD by modifying argocd-openshift-gitops-patch.json with an updated initContainer image before applying the patch file.
Create a validator inform policy that signals when the zero touch provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters.
Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml. You only need one standalone PolicyGenTemplate CR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-du-sno-validator" (1)
  namespace: "ztp-group" (2)
spec:
  bindingRules:
    group-du-sno: "" (3)
  bindingExcludedRules:
    ztp-done: "" (4)
  mcp: "master" (5)
  sourceFiles:
    - fileName: validatorCRs/informDuValidator.yaml
      remediationAction: inform (6)
      policyName: "du-policy" (7)
1 | The name of the PolicyGenTemplate object. This name is also used as part of the names for the placementBinding, placementRule, and policy that are created in the requested namespace. |
2 | This value should match the namespace used in the group PolicyGenTemplate CRs. |
3 | The group-du-* label defined in bindingRules must exist in the SiteConfig files. |
4 | The label defined in bindingExcludedRules must be ztp-done:. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager. |
5 | mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml. It should be master for single-node and three-node cluster deployments and worker for standard cluster deployments. |
6 | Optional. The default value is inform. |
7 | This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single-node example is group-du-sno-validator-du-policy. |
Commit the PolicyGenTemplate CR file in your Git repository and push the changes.
You can configure PTP fast events for vRAN clusters that are deployed using the GitOps Zero Touch Provisioning (ZTP) pipeline. Use PolicyGenTemplate custom resources (CRs) as the basis to create a hierarchy of configuration files tailored to your specific site requirements.
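For orientation, a typical hierarchy in the site configuration repository looks like the following sketch. The folder name is hypothetical; the file names match the common, group, and site files used in the steps below:

policygentemplates/ # hypothetical folder in your site configuration repository
├── common-ranGen.yaml # common configuration applied to all clusters
├── group-du-sno-ranGen.yaml # group-level configuration, for example for single-node DU clusters
└── example-sno-site.yaml # site-specific configuration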
Create a Git repository where you manage your custom site configuration data.
Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQ Interconnect Operator:
# AMQ interconnect operator for fast events
- fileName: AmqSubscriptionNS.yaml
  policyName: "subscriptions-policy"
- fileName: AmqSubscriptionOperGroup.yaml
  policyName: "subscriptions-policy"
- fileName: AmqSubscription.yaml
  policyName: "subscriptions-policy"
Apply the following PolicyGenTemplate changes to the group-du-3node-ranGen.yaml, group-du-sno-ranGen.yaml, or group-du-standard-ranGen.yaml files according to your requirements:
In .sourceFiles, add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy:
- fileName: PtpOperatorConfigForEvent.yaml
  policyName: "config-policy"
Configure the linuxptp and phc2sys services for the PTP clock type and interface. For example, add the following stanza into .sourceFiles:
- fileName: PtpConfigSlave.yaml (1)
  policyName: "config-policy"
  metadata:
    name: "du-ptp-slave"
  spec:
    profile:
      - name: "slave"
        interface: "ens5f1" (2)
        ptp4lOpts: "-2 -s --summary_interval -4" (3)
        phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (4)
    ptpClockThreshold: (5)
      holdOverTimeout: 30 #secs
      maxOffsetThreshold: 100 #nano secs
      minOffsetThreshold: -100 #nano secs
1 | Can be one of PtpConfigMaster.yaml, PtpConfigSlave.yaml, or PtpConfigSlaveCvl.yaml depending on your requirements. PtpConfigSlaveCvl.yaml configures linuxptp services for an Intel E810 Columbiaville NIC. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml, use PtpConfigSlave.yaml. |
2 | Device-specific interface name. |
3 | You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events. |
4 | Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. |
5 | Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows the default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or the master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED. |
Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml:
In .sourceFiles, add the Interconnect CR file that configures the AMQ router to the config-policy:
- fileName: AmqInstance.yaml
  policyName: "config-policy"
Merge any other required changes and files with your custom site repository.
Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
For more information about how to install the AMQ Interconnect Operator, see Installing the AMQ messaging bus.
You can configure bare-metal hardware events for vRAN clusters that are deployed using the GitOps Zero Touch Provisioning (ZTP) pipeline.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create a Git repository where you manage your custom site configuration data.
To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file:
# AMQ interconnect operator for fast events
- fileName: AmqSubscriptionNS.yaml
  policyName: "subscriptions-policy"
- fileName: AmqSubscriptionOperGroup.yaml
  policyName: "subscriptions-policy"
- fileName: AmqSubscription.yaml
  policyName: "subscriptions-policy"
# Bare Metal Event Relay operator
- fileName: BareMetalEventRelaySubscriptionNS.yaml
  policyName: "subscriptions-policy"
- fileName: BareMetalEventRelaySubscriptionOperGroup.yaml
  policyName: "subscriptions-policy"
- fileName: BareMetalEventRelaySubscription.yaml
  policyName: "subscriptions-policy"
Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file:
- fileName: AmqInstance.yaml
  policyName: "config-policy"
Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file:
- fileName: HardwareEvent.yaml
  policyName: "config-policy"
  spec:
    nodeSelector: {}
    transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local" (1)
    logLevel: "info"
1 | The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace. For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local", the AMQ Interconnect name and namespace are both set to amq-router. |
Each baseboard management controller (BMC) requires a single HardwareEvent resource only.
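To make the relationship concrete, the following sketch shows the metadata of the Interconnect CR that the example transportHost above resolves to, assuming the interconnectedcloud.github.io/v1alpha1 API served by the AMQ Interconnect Operator. The name and namespace mirror the amq-router example; the AmqInstance.yaml source CR supplies the full specification:

apiVersion: interconnectedcloud.github.io/v1alpha1
kind: Interconnect
metadata:
  name: amq-router # matches <amq_interconnect_name> in the transportHost URL
  namespace: amq-router # matches <amq_interconnect_namespace> in the transportHost URL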
Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP.
Create the Redfish Secret by running the following command:
$ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \
--from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \
--from-literal=hostaddr="<bmc_host_ip_addr>"
For more information about how to install the Bare Metal Event Relay, see Installing the Bare Metal Event Relay using the CLI.
For more information about how to create the username, password, and host IP address for the BMC secret, see Creating the bare-metal event and Secret CRs.