- 50 GB disk for /var/lib/etcd
- 2.9 TB disk for /var/lib/containers
To use RHACM in a disconnected environment, create a mirror registry that mirrors the OKD release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the FCOS ISO and RootFS disk images that are used to provision the bare-metal hosts.
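For example, you can mirror a release to the disconnected registry with the oc adm release mirror command. This is a minimal sketch; the release image, registry host, and repository are placeholders for your own values:
$ oc adm release mirror -a <pull_secret.json> \
    --from=<release_image> \
    --to=<mirror_registry>/<repository> \
    --to-release-image=<mirror_registry>/<repository>:<tag>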
The Red Hat telco RAN DU 4.17 solution has been validated using the following Red Hat software products for OKD managed clusters and hub clusters.
| Component | Software version |
|---|---|
| Managed cluster version | 4.17 |
| Cluster Logging Operator | 6.0 |
| Local Storage Operator | 4.17 |
| OpenShift API for Data Protection (OADP) | 1.4.1 |
| PTP Operator | 4.17 |
| SRIOV Operator | 4.17 |
| SRIOV-FEC Operator | 2.9 |
| Lifecycle Agent | 4.17 |
| Component | Software version |
|---|---|
| Hub cluster version | 4.17 |
| Red Hat Advanced Cluster Management (RHACM) | 2.11 |
| GitOps ZTP plugin | 4.17 |
| Red Hat OpenShift GitOps | 1.13 |
| Topology Aware Lifecycle Manager (TALM) | 4.17 |
With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment.
In real-world situations, the scaling limit for the number of clusters that you can manage varies depending on several factors affecting the hub cluster. For example:
Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate.
The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage.
Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters.
The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster.
The number of policies managed by the hub cluster, multiplied by the number of managed clusters bound to those policies, is an important factor that determines how many clusters can be managed.
RHACM continuously monitors and manages the managed clusters. The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters.
Different versions of RHACM can have varying performance characteristics and resource requirements. Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster.
Use the following representative configuration and network specifications to develop your own hub cluster and network specifications.
The following guidelines are based on internal lab benchmark testing only and do not represent complete bare-metal host specifications.
| Requirement | Description |
|---|---|
| OKD | version 4.13 |
| RHACM | version 2.7 |
| Topology Aware Lifecycle Manager (TALM) | version 4.13 |
| Server hardware | 3 x Dell PowerEdge R650 rack servers |
| NVMe hard disks | |
| SSD hard disks | |
| Number of applied DU profile policies | 5 |
The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing.
| Specification | Description |
|---|---|
| Round-trip time (RTT) latency | 50 ms |
| Packet loss | 0.02% packet loss |
| Network bandwidth limit | 20 Mbps |
Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters.
You have installed the OKD CLI (oc).
You have logged in as a user with cluster-admin privileges.
You have configured a disconnected mirror registry for use in the cluster.
The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry.
Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment.
Install GitOps and TALM in the hub cluster.
Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Fedora CoreOS (FCOS) images for it to use. Use a disconnected mirror to host the FCOS images.
Deploy and configure an HTTP server to host the FCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.
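For example, one way to serve the images is with a containerized Apache HTTP Server. The image and port mapping below are illustrative assumptions, not a required configuration:
$ sudo podman run -d --name fcos-http-server \
    -p 80:8080 \
    -v /var/www/html:/var/www/html:Z \
    registry.access.redhat.com/ubi9/httpd-24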
The FCOS images might not change with every release of OKD. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OKD version if they are available. You require ISO and RootFS images to install FCOS on the hosts. FCOS QCOW2 images are not supported for this installation type.
Log in to the mirror host.
Obtain the FCOS ISO and RootFS images from mirror.openshift.com, for example:
Export the required image names and OKD version as environment variables:
$ export ISO_IMAGE_NAME=<iso_image_name> (1)
$ export ROOTFS_IMAGE_NAME=<rootfs_image_name> (2)
$ export OCP_VERSION=<ocp_version> (3)
1 | ISO image name, for example, rhcos-4.17.1-x86_64-live.x86_64.iso |
2 | RootFS image name, for example, rhcos-4.17.1-x86_64-live-rootfs.x86_64.img |
3 | OKD version, for example, 4.17.1 |
Download the required images:
$ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.17/${OCP_VERSION}/${ISO_IMAGE_NAME} -O /var/www/html/${ISO_IMAGE_NAME}
$ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.17/${OCP_VERSION}/${ROOTFS_IMAGE_NAME} -O /var/www/html/${ROOTFS_IMAGE_NAME}
Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example:
$ wget http://$(hostname)/${ISO_IMAGE_NAME}
Saving to: rhcos-4.17.1-x86_64-live.x86_64.iso
rhcos-4.17.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s
Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OKD clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on RHACM. After that, configure the Provisioning resource to watch all namespaces and update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have RHACM with MultiClusterHub enabled.
Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the central infrastructure management service.
Update the AgentServiceConfig CR by running the following command:
$ oc edit AgentServiceConfig
Add the following entry to the items.spec.osImages field in the CR:
- cpuArchitecture: x86_64
  openshiftVersion: "4.17"
  rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img
  url: https://<host>/<path>/rhcos-live.x86_64.iso
where:
<host>
Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server.
<path>
Is the path to the image on the target mirror registry.
Save and quit the editor to apply the changes.
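You can optionally confirm that the entry was saved, for example:
$ oc get agentserviceconfig agent -o jsonpath='{.spec.osImages}'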
You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment.
You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.11 installed.
You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository.
If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OKD hub and managed clusters and the HTTP server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
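If you are unsure whether the chain is trusted, you can inspect the certificates that the HTTP server presents, for example with openssl; the host name here is a placeholder:
$ openssl s_client -connect <http_server_fqdn>:443 -showcerts </dev/null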
Create a ConfigMap containing the mirror registry config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: assisted-installer-mirror-config
  namespace: multicluster-engine (1)
  labels:
    app: assisted-service
data:
  ca-bundle.crt: | (2)
    -----BEGIN CERTIFICATE-----
    <certificate_contents>
    -----END CERTIFICATE-----
  registries.conf: | (3)
    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

    [[registry]]
    prefix = ""
    location = "quay.io/example-repository" (4)
    mirror-by-digest-only = true

    [[registry.mirror]]
    location = "mirror1.registry.corp.com:5000/example-repository" (5)
1 | The ConfigMap namespace must be set to multicluster-engine . |
2 | The mirror registry’s certificate that is used when creating the mirror registry. |
3 | The configuration file for the mirror registry. The mirror registry configuration adds mirror information to the /etc/containers/registries.conf file in the discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry. |
4 | The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section. |
5 | The registries defined in the registries.conf file must be scoped by repository, not by registry. In this example, both the quay.io/example-repository and the mirror1.registry.corp.com:5000/example-repository repositories are scoped by the example-repository repository. |
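After saving the ConfigMap to a file, apply it to the hub cluster; the file name here is an illustrative assumption:
$ oc apply -f assisted-installer-mirror-config.yaml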
This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below:
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
  namespace: multicluster-engine (1)
spec:
  databaseStorage:
    volumeName: <db_pv_name>
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <db_storage_size>
  filesystemStorage:
    volumeName: <fs_pv_name>
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <fs_storage_size>
  mirrorRegistryRef:
    name: assisted-installer-mirror-config (2)
  osImages:
  - openshiftVersion: <ocp_version>
    url: <iso_url> (3)
1 | Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace. |
2 | Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR. |
3 | Set the URL for the ISO hosted on the HTTP server. |
A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network.
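One way to check NTP reachability from an installed cluster node is to query chrony from a debug shell; this sketch assumes the default chrony time service is running on the node:
$ oc debug node/<node_name> -- chroot /host chronyc sources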
You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries do not require authentication to access and download images.
You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
You have installed the OpenShift Container Platform CLI (oc).
You have logged in as a user with cluster-admin privileges.
You have configured an unauthenticated registry for use with the hub cluster.
Update the AgentServiceConfig custom resource (CR) by running the following command:
$ oc edit AgentServiceConfig agent
Add the unauthenticatedRegistries field in the CR:
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  unauthenticatedRegistries:
  - example.registry.com
  - example.registry2.com
...
Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation.
Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries.
Verify that you can access the newly added registry from the hub cluster by running the following commands:
Open a debug shell prompt to the hub cluster:
$ oc debug node/<node_name>
Test access to the unauthenticated registry by running the following command:
sh-4.4# podman login -u kubeadmin -p $(oc whoami -t) <unauthenticated_registry>
where:
<unauthenticated_registry>
Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000.
Login Succeeded!
You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP).
Red Hat Advanced Cluster Management (RHACM) uses |
You have an OKD hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed.
You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure.
Prepare the ArgoCD pipeline configuration:
Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository".
Configure access to the repository using the ArgoCD UI. Under Settings, configure the following:
Repositories - Add the connection information. The URL must end in .git, for example, https://repo.example.com/repo.git, and supply the credentials.
Certificates - Add the public certificate for the repository, if needed.
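As an alternative to the UI, you can declare the repository connection as a Secret that ArgoCD recognizes by its label; the Secret name, URL, and credentials below are placeholder assumptions:
apiVersion: v1
kind: Secret
metadata:
  name: ztp-repo
  namespace: openshift-gitops
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  url: https://repo.example.com/repo.git
  username: <username>
  password: <token>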
Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml, based on your Git repository:
Update the URL to point to the Git repository. The URL must end with .git, for example, https://repo.example.com/repo.git.
The targetRevision field indicates which Git repository branch to monitor.
The path field specifies the path to the SiteConfig and PolicyGenerator or PolicyGenTemplate CRs, respectively.
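For example, after editing, the relevant fields of clusters-app.yaml might look like the following; the branch and path values are illustrative:
spec:
  source:
    repoURL: https://repo.example.com/repo.git
    targetRevision: main
    path: siteconfig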
To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image.
Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version.
| OKD version | RHACM version | MCE version | MCE RHEL version | MCE image |
|---|---|---|---|---|
| 4.14, 4.15, 4.16 | 2.8, 2.9 | 2.8, 2.9 | RHEL 8 | |
| 4.14, 4.15, 4.16 | 2.10 | 2.10 | RHEL 9 | |
The version of the multicluster-operators-subscription image must match the RHACM version.
Add the following configuration to the out/argocd/deployment/argocd-openshift-gitops-patch.json file:
{
"args": [
"-c",
"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" (1)
],
"command": [
"/bin/bash"
],
"image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", (2) (3)
"name": "policy-generator-install",
"imagePullPolicy": "Always",
"volumeMounts": [
{
"mountPath": "/.config",
"name": "kustomize"
}
]
}
1 | Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. |
2 | Match the multicluster-operators-subscription image to the RHACM version. |
3 | In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. |
Patch the ArgoCD instance. Run the following command:
$ oc patch argocd openshift-gitops \
-n openshift-gitops --type=merge \
--patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json
In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed cluster pods that are responsible for this add-on. Run the following command:
$ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json
Apply the pipeline configuration to your hub cluster by running the following command:
$ oc apply -k out/argocd/deployment
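You can verify that the pipeline applications were created, for example:
$ oc get applications.argoproj.io -n openshift-gitops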
Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data.
You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs).
You have deployed the managed clusters using GitOps ZTP.
Create a directory structure with separate paths for the SiteConfig and PolicyGenerator or PolicyGenTemplate CRs.
Keep SiteConfig and PolicyGenerator or PolicyGenTemplate CRs in separate directories. Both directories must contain a kustomization.yaml file that explicitly includes the files in that directory.
Export the argocd directory from the ztp-site-generate container image using the following commands:
$ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17
$ mkdir -p ./out
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 extract /home/ztp --tar | tar x -C ./out
Check that the out directory contains the following subdirectories:
out/extra-manifest contains the source CR files that SiteConfig uses to generate the extra manifest configMap.
out/source-crs contains the source CR files that PolicyGenerator uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.
out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.
out/argocd/example contains the examples for SiteConfig and PolicyGenerator or PolicyGenTemplate files that represent the recommended configuration.
Copy the out/source-crs folder and contents to the PolicyGenerator or PolicyGenTemplate directory.
The out/extra-manifests directory contains the reference manifests for a RAN DU cluster. Copy the out/extra-manifests directory into the SiteConfig folder. This directory should contain CRs from the ztp-site-generate container only. Do not add user-provided CRs here. If you want to work with user-provided CRs, you must create another directory for that content. For example:
example/
├── acmpolicygenerator
│ ├── kustomization.yaml
│ └── source-crs/
├── policygentemplates (1)
│ ├── kustomization.yaml
│ └── source-crs/
└── siteconfig
├── extra-manifests
└── kustomization.yaml
1 | Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in a future OKD release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. |
Commit the directory structure and the kustomization.yaml files and push to your Git repository. The initial push to Git should include the kustomization.yaml files.
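For example, assuming your branch is named main:
$ git add .
$ git commit -m "Add GitOps ZTP site configuration"
$ git push origin main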
You can use the directory structure under out/argocd/example as a reference for the structure and content of your Git repository. That structure includes SiteConfig and PolicyGenerator or PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using.
For all cluster types, you must:
Add the source-crs subdirectory to the acmpolicygenerator or policygentemplates directory.
Add the extra-manifests directory to the siteconfig directory.
The following example describes a set of CRs for a network of single-node clusters:
example/
├── acmpolicygenerator
│ ├── acm-common-ranGen.yaml
│ ├── acm-example-sno-site.yaml
│ ├── acm-group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ ├── kustomization.yaml
│ ├── source-crs/
│ └── ns.yaml
└── siteconfig
├── example-sno.yaml
├── extra-manifests/ (1)
├── custom-manifests/ (2)
├── KlusterletAddonConfigOverride.yaml
└── kustomization.yaml
1 | Contains reference manifests from the ztp-site-generate container. |
2 | Contains custom manifests. |
Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in a future OKD release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the Red Hat Advanced Cluster Management (RHACM) documentation.
You can use GitOps ZTP to manage source custom resources (CRs) for managed clusters that are running different versions of OKD. This means that the version of OKD running on the hub cluster can be independent of the version running on the managed clusters.
The following procedure assumes you are using PolicyGenerator resources instead of PolicyGenTemplate resources for cluster policies configuration.
You have installed the OpenShift CLI (oc).
You have logged in as a user with cluster-admin privileges.
Create a directory structure with separate paths for the SiteConfig and PolicyGenerator CRs.
Within the PolicyGenerator directory, create a directory for each OKD version you want to make available. For each version, create the following resources, as shown in the sketch after this list:
kustomization.yaml file that explicitly includes the files in that directory
source-crs directory to contain reference CR configuration files from the ztp-site-generate container
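A minimal version-specific kustomization.yaml might look like the following sketch; the file names are illustrative, and the generators entries assume the PolicyGenerator kustomize plugin that you installed when patching the ArgoCD instance:
generators:
- common-ranGen.yaml
- group-du-sno-ranGen.yaml
resources:
- ns.yaml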
If you want to work with user-provided CRs, you must create a separate directory for them.
In the /siteconfig directory, create a subdirectory for each OKD version you want to make available. For each version, create at least one directory for reference CRs to be copied from the container. There is no restriction on the naming of directories or on the number of reference directories. If you want to work with custom manifests, you must create a separate directory for them.
The following example describes a structure using user-provided manifests and CRs for different versions of OKD:
├── acmpolicygenerator
│ ├── kustomization.yaml (1)
│ ├── version_4.13 (2)
│ │ ├── common-ranGen.yaml
│ │ ├── group-du-sno-ranGen.yaml
│ │ ├── group-du-sno-validator-ranGen.yaml
│ │ ├── helix56-v413.yaml
│ │ ├── kustomization.yaml (3)
│ │ ├── ns.yaml
│ │ └── source-crs/ (4)
│ │ └── reference-crs/ (5)
│ │ └── custom-crs/ (6)
│ └── version_4.14 (2)
│ ├── common-ranGen.yaml
│ ├── group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ ├── helix56-v414.yaml
│ ├── kustomization.yaml (3)
│ ├── ns.yaml
│ └── source-crs/ (4)
│ └── reference-crs/ (5)
│ └── custom-crs/ (6)
└── siteconfig
├── kustomization.yaml
├── version_4.13
│ ├── helix56-v413.yaml
│ ├── kustomization.yaml
│ ├── extra-manifest/ (7)
│ └── custom-manifest/ (8)
└── version_4.14
├── helix57-v414.yaml
├── kustomization.yaml
├── extra-manifest/ (7)
└── custom-manifest/ (8)
1 | Create a top-level kustomization YAML file. |
2 | Create the version-specific directories within the custom /acmpolicygenerator directory. |
3 | Create a kustomization.yaml file for each version. |
4 | Create a source-crs directory for each version to contain reference CRs from the ztp-site-generate container. |
5 | Create the reference-crs directory for policy CRs that are extracted from the ZTP container. |
6 | Optional: Create a custom-crs directory for user-provided CRs. |
7 | Create a directory within the custom /siteconfig directory to contain extra manifests from the ztp-site-generate container. |
8 | Create a folder to hold user-provided manifests. |
In the previous example, each version subdirectory in the custom |
Edit the SiteConfig CR to include the search paths of any directories you have created. The first directory that is listed under extraManifests.searchPaths must be the directory containing the reference manifests. Consider the order in which the directories are listed. In cases where directories contain files with the same name, the file in the final directory takes precedence.
extraManifests:
  searchPaths:
    - extra-manifest/ (1)
    - custom-manifest/ (2)
1 | The directory containing the reference manifests must be listed first under extraManifests.searchPaths . |
2 | If you are using user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing those user-provided CRs. |
Edit the top-level kustomization.yaml file to control which OKD versions are active. The following is an example of a kustomization.yaml file at the top level:
resources:
- version_4.13 (1)
#- version_4.14 (2)
1 | Activate version 4.13. |
2 | Use comments to deactivate a version. |