You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time.

You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the SiteConfig CRs and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient.
Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
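For reference, the following is a minimal sketch of how such a repository might be registered as an Argo CD application source. The repoURL, path, and application name are placeholders, not values taken from this procedure:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clusters  # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/site-configs.git  # placeholder repository URL
    targetRevision: main
    path: siteconfig  # directory containing the SiteConfig CRs
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}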
Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs.
In your custom /siteconfig directory, create a subdirectory /custom-manifest for your extra manifests. The following example illustrates a sample /siteconfig directory with a /custom-manifest folder:
siteconfig
├── site1-sno-du.yaml
├── site2-standard-du.yaml
├── extra-manifest/
└── custom-manifest
└── 01-example-machine-config.yaml
The subdirectory names /custom-manifest and /extra-manifest used in these steps are example names only. There is no requirement to use these names and no restriction on how you name the subdirectories.
Add your custom extra manifest CRs to the siteconfig/custom-manifest directory; a sample manifest is sketched below.
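The following is a minimal sketch of what a custom manifest such as 01-example-machine-config.yaml might contain. The file path and contents here are hypothetical and are not part of the ZTP defaults:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 01-example-machine-config
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,ZXhhbXBsZQo=  # base64 for "example"
        mode: 420
        overwrite: true
        path: /etc/example.conf  # hypothetical file written to each node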
In your SiteConfig CR, enter the directory name in the extraManifests.searchPaths field, for example:
clusters:
- clusterName: "example-sno"
  networkType: "OVNKubernetes"
  extraManifests:
    searchPaths:
    - extra-manifest/ (1)
    - custom-manifest/ (2)
1. Folder for manifests copied from the ztp-site-generate container.
2. Folder for custom manifests.
Save the SiteConfig, /extra-manifest, and /custom-manifest CRs, and push them to the site configuration repository.

During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the /custom-manifest directory to the default set of extra manifests stored in extra-manifest/.
As of version 4.14, extraManifestPath is subject to a deprecation warning. While extraManifestPath is still supported, we recommend that you use extraManifests.searchPaths instead. If you define extraManifests.searchPaths in the SiteConfig file, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation. If you define both extraManifestPath and extraManifests.searchPaths in the SiteConfig file, the setting defined for extraManifests.searchPaths takes precedence. It is strongly recommended that you extract the contents of /extra-manifest from the ztp-site-generate container and push them to the Git repository.
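For example, one possible way to extract the /extra-manifest content from the ztp-site-generate container and copy it into the Git repository; the image tag and the target path are placeholders for your environment:

$ mkdir -p ./out
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out
$ cp -r ./out/extra-manifest <path_to_your_repo>/siteconfig/extra-manifest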
By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline.

You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. Setting inclusionDefault to include makes the GitOps ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite.
You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time. Some additional optional filtering scenarios are also described.
You configured the hub cluster for generating the required installation and policy CRs.
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
To prevent the GitOps ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR:
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4"
  sshPublicKey: "<ssh_public_key>"
  clusters:
  - clusterName: "site1-sno-du"
    extraManifests:
      filter:
        exclude:
        - 03-sctp-machine-config-worker.yaml
The GitOps ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied.
Save the SiteConfig CR and push the changes to the site configuration repository.

The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions.
Optional: To prevent the GitOps ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR:
- clusterName: "site1-sno-du"
  extraManifests:
    filter:
      inclusionDefault: exclude
Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example:
clusters:
- clusterName: "site1-sno-du"
  extraManifestPath: "<custom_manifest_folder>" (1)
  extraManifests:
    filter:
      inclusionDefault: exclude (2)
      include:
      - custom-sctp-machine-config-worker.yaml
1. Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/.
2. Set inclusionDefault to exclude to prevent the GitOps ZTP pipeline from applying the files in /source-crs/extra-manifest during installation.
The following example illustrates the custom folder structure:
siteconfig
├── site1-sno-du.yaml
└── user-custom-manifest
└── custom-sctp-machine-config-worker.yaml
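For illustration, custom-sctp-machine-config-worker.yaml might look similar to the following sketch, which loads the sctp kernel module on worker nodes; treat this as an example modeled on the default SCTP CR, not a copy of it:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: custom-load-sctp-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8,sctp  # writes "sctp" so the module loads at boot
        mode: 420
        overwrite: true
        path: /etc/modules-load.d/sctp-load.conf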
By using a SiteConfig custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node.
You have configured the hub cluster to generate the required installation and policy CRs.
You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application.
Update the SiteConfig CR to include the bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true annotation and push the changes to the Git repository:
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "cnfdf20"
  namespace: "cnfdf20"
spec:
  clusters:
  - nodes:
    - hostName: node6
      role: "worker"
      crAnnotations:
        add:
          BareMetalHost:
            bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true
# ...
Verify that the BareMetalHost object is annotated by running the following command:

$ oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]'

Example output:

true
Suppress the generation of the BareMetalHost CR by updating the SiteConfig CR to include the crSuppression.BareMetalHost annotation:
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "cnfdf20"
  namespace: "cnfdf20"
spec:
  clusters:
  - nodes:
    - hostName: node6
      role: "worker"
      crSuppression:
      - BareMetalHost
# ...
Push the changes to the Git repository and wait for deprovisioning to start.
The status of the BareMetalHost CR should change to deprovisioning. Wait for the BareMetalHost to finish deprovisioning and be fully deleted.
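For example, you can check progress with a command like the following; the field shown is the standard BareMetalHost provisioning state and the names are placeholders:

$ oc get bmh -n <managed-cluster-namespace> <bmh-object> -o jsonpath='{.status.provisioning.state}'

The value reports deprovisioning while cleanup is in progress.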
Verify that the BareMetalHost and Agent CRs for the worker node have been deleted from the hub cluster by running the following commands:
$ oc get bmh -n <cluster-ns>
$ oc get agent -n <cluster-ns>
Verify that the node record has been deleted from the spoke cluster by running the following command:
$ oc get nodes
If you are working with secrets, deleting a secret too early can cause an issue because Argo CD needs the secret to complete resynchronization after deletion. Delete the secret only after the node cleanup, when the current Argo CD synchronization is complete.
To reprovision a node, delete the changes previously added to the SiteConfig CR, push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the BareMetalHost CR of the worker node and triggers the re-install of the node.