Use zero touch provisioning (ZTP) to provision distributed units at new edge sites in a disconnected environment. The workflow starts when the site is connected to the network and ends with the CNF workload deployed and running on the site nodes.
ZTP for RAN deployments is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Telco edge computing presents extraordinary challenges with managing hundreds to tens of thousands of clusters in hundreds of thousands of locations. These challenges require fully automated management solutions with as little human interaction as possible.
Zero touch provisioning (ZTP) allows you to provision new edge sites with declarative configurations of bare-metal equipment at remote sites. Template or overlay configurations install OKD features that are required for CNF workloads. End-to-end functional test suites are used to verify CNF related features. All configurations are declarative in nature.
You start the workflow by creating declarative configurations for ISO images that are delivered to the edge nodes to begin the installation process. The images are used to repeatedly provision large numbers of nodes efficiently and quickly, allowing you to keep up with requirements from the field for far edge nodes.
Service providers are deploying a more distributed mobile network architecture, enabled by the modular functional framework defined for 5G. This allows service providers to move from appliance-based radio access networks (RAN) to an open cloud RAN architecture, gaining flexibility and agility in delivering services to end users.
The following diagram shows how ZTP works within a far edge framework.
ZTP uses the GitOps set of practices for infrastructure deployment, which allows developers to perform tasks that would otherwise fall under the purview of IT operations. GitOps achieves these tasks using declarative specifications stored in Git repositories, such as YAML files and other defined patterns, that provide a framework for deploying the infrastructure. The declarative output is leveraged by the Open Cluster Manager for multisite deployment.
One of the motivators for a GitOps approach is the requirement for reliability at scale. This is a significant challenge that GitOps helps solve.
GitOps addresses the reliability issue by providing traceability, RBAC, and a single source of truth for the desired state of each site. Scale issues are addressed by GitOps providing structure, tooling, and event-driven operations through webhooks.
You can install a distributed unit (DU) on a single node at scale with Red Hat Advanced Cluster Management (ACM) using the assisted installer (AI) and the policy generator with core-reduction technology enabled. The DU installation is done using zero touch provisioning (ZTP) in a disconnected environment.
ACM manages clusters in a hub and spoke architecture, where a single hub cluster manages many spoke clusters. ACM applies radio access network (RAN) policies from predefined custom resources (CRs). Hub clusters running ACM provision and deploy the spoke clusters using ZTP and AI. DU installation follows the AI installation of OKD on a single node.
The AI service handles provisioning of OKD on single nodes running on bare metal. ACM ships with and deploys the assisted installer when the MultiClusterHub custom resource is installed.
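For reference, a minimal MultiClusterHub custom resource is sketched below. This is an illustrative example only; the name and namespace shown assume the default ACM installation in the open-cluster-management namespace:

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub               # assumed default name
  namespace: open-cluster-management  # assumes ACM is installed in this namespace
spec: {}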
With ZTP and AI, you can provision OKD single nodes to run your DUs at scale. A high level overview of ZTP for distributed units in a disconnected environment is as follows:
A hub cluster running ACM manages a disconnected internal registry that mirrors the OKD release images. The internal registry is used to provision the spoke single nodes.
You manage the bare-metal host machines for your DUs in an inventory file that uses YAML for formatting. You store the inventory file in a Git repository.
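The structure of the inventory file is up to you. The following sketch shows one hypothetical entry with made-up field names, purely to illustrate the kind of per-host data that is typically tracked:

# Hypothetical inventory entry; all field names and values are illustrative.
- hostName: du-site-001
  site: pool-east-1
  bmcAddress: redfish-virtualmedia://10.0.0.10/redfish/v1/Systems/1
  bmcCredentialsName: du-site-001-bmc-secret
  bootMACAddress: "AA:BB:CC:DD:EE:FF"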
You install the DU bare-metal host machines on site, and make the hosts ready for provisioning. To be ready for provisioning, the following is required for each bare-metal host:
Network connectivity - including DNS for your network. Hosts must be reachable from the hub and managed spoke clusters. Ensure there is layer 3 connectivity between the hub and the host where you want to install your spoke cluster.
Baseboard Management Controller (BMC) details for each host - ZTP uses the BMC URL and credentials to connect to and manage the BMC.

Create spoke cluster definition CRs. These define the relevant elements for the managed clusters. Required CRs are as follows:
Custom Resource | Description
---|---
Namespace | Namespace for the managed single-node cluster.
BMCSecret CR | Credentials for the host BMC.
Image Pull Secret CR | Pull secret for the disconnected registry.
AgentClusterInstall | Specifies the single-node cluster's configuration, such as networking, number of supervisor (control plane) nodes, and so on.
ClusterDeployment | Defines the cluster name, domain, and other details.
KlusterletAddonConfig | Manages installation and termination of add-ons on the ManagedCluster for ACM.
ManagedCluster | Describes the managed cluster for ACM.
InfraEnv | Describes the installation ISO to be mounted on the destination node that the assisted installer service creates. This is the final step of the manifest creation phase.
BareMetalHost | Describes the details of the bare-metal host, including BMC and credentials details.
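As an illustration, the Namespace and the BMC credentials Secret for one spoke cluster might look similar to the following sketch. The cluster name du-site-001 and the base64 values are placeholders:

apiVersion: v1
kind: Namespace
metadata:
  name: du-site-001                  # one namespace per managed single-node cluster
  labels:
    name: du-site-001
---
apiVersion: v1
kind: Secret
metadata:
  name: du-site-001-bmc-secret       # referenced by the BareMetalHost CR
  namespace: du-site-001
type: Opaque
data:
  username: <base64_bmc_user_name>   # placeholder: base64-encoded BMC user name
  password: <base64_bmc_password>    # placeholder: base64-encoded BMC password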
When a change is detected in the host inventory repository, a host management event is triggered to provision the new or updated host.
The host is provisioned. When the host is provisioned and successfully rebooted, the host agent reports Ready status to the hub cluster.
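One way to follow this from the hub cluster, assuming the spoke cluster CRs were created in a namespace named after the cluster (du-site-001 is a placeholder), is to query the relevant resources:

$ oc get baremetalhost -n du-site-001        # provisioning state of the host
$ oc get agentclusterinstall -n du-site-001  # installation progress reported by the assisted installer
$ oc get managedcluster du-site-001          # reports Available once the spoke cluster is ready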
ACM deploys single-node OpenShift, which is OKD installed on single nodes, leveraging zero touch provisioning (ZTP). The initial site plan is broken down into smaller components and initial configuration data is stored in a Git repository. Zero touch provisioning uses a declarative GitOps approach to deploy these nodes. The deployment of the nodes includes:
Installing the host operating system (FCOS) on a blank server.
Deploying OKD on single nodes.
Creating cluster policies and site subscriptions.
Leveraging a GitOps deployment topology for a develop once, deploy anywhere model.
Making the necessary network configurations to the server operating system.
Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV.
Downloading images needed to run workloads (CNFs).
You use zero touch provisioning (ZTP) to deploy single-node OpenShift clusters to run distributed units (DUs) on small hardware footprints at disconnected far edge sites. A single-node cluster runs OKD on top of a single bare-metal host. Edge servers contain a single node that runs both supervisor (control plane) and worker functions on the same host, and are deployed at low-bandwidth or disconnected edge sites.
OKD is configured on the single node to use workload partitioning. Workload partitioning separates cluster management workloads from user workloads and can run the cluster management workloads on a reserved set of CPUs. Workload partitioning is useful for resource-constrained environments, such as single-node production deployments, where you want to reserve most of the CPU resources for user workloads and configure OKD to use fewer CPU resources within the host.
A single-node cluster hosting a DU application on a node is divided into the following configuration categories:
Common - Values are the same for all single-node cluster sites managed by a hub cluster.
Pools of sites - Common across a pool of sites where a pool size can be 1 to n.
Site specific - Likely specific to a site with no overlap with other sites, for example, a VLAN.
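One possible, purely illustrative way to reflect these categories in the Git repository is to keep common, pool, and site-specific configuration in separate directories:

site-configs/
  common/         # configuration shared by all managed single-node clusters
  pool-east-1/    # configuration shared by one pool of sites
  du-site-001/    # site-specific values, for example the VLAN for this site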
Site planning for distributed units (DU) deployments is complex. The following is an overview of the tasks that you complete before the DU hosts are brought online in the production environment.
Develop a network model. The network model depends on various factors such as the size of the area of coverage, number of hosts, projected traffic load, DNS, and DHCP requirements.
Decide how many DU radio nodes are required to provide sufficient coverage and redundancy for your network.
Develop mechanical and electrical specifications for the DU host hardware.
Develop a construction plan for individual DU site installations.
Tune host BIOS settings for production, and deploy the BIOS configuration to the hosts.
Install the equipment on-site, connect hosts to the network, and apply power.
Configure on-site switches and routers.
Perform basic connectivity tests for the host machines.
Establish production network connectivity, and verify host connections to the network.
Provision and deploy on-site DU hosts at scale.
Test and verify on-site operations, performing load and scale testing of the DU hosts before finally bringing the DU infrastructure online in the live production environment.
Low latency is an integral part of the development of 5G networks. Telecommunications networks require as little signal delay as possible to ensure quality of service in a variety of critical use cases.
Low latency processing is essential for any communication with timing constraints that affect functionality and security. For example, 5G Telco applications require a guaranteed one millisecond one-way latency to meet Internet of Things (IoT) requirements. Low latency is also critical for the future development of autonomous vehicles, smart factories, and online gaming. Networks in these environments require almost a real-time flow of data.
Low latency systems are about guarantees with regards to response and processing times. This includes keeping a communication protocol running smoothly, ensuring device security with fast responses to error conditions, or just making sure a system is not lagging behind when receiving a lot of data. Low latency is key for optimal synchronization of radio transmissions.
OKD enables low latency processing for DUs running on COTS hardware by using a number of technologies and specialized hardware devices:
Real-time kernel - Ensures workloads are handled with a high degree of process determinism.

CPU isolation - Avoids CPU scheduling delays and ensures CPU capacity is available consistently.

NUMA awareness - Aligns memory and huge pages with CPU and PCI devices to pin guaranteed container memory and huge pages to the NUMA node. This decreases latency and improves performance of the node.

Huge pages - Using huge page sizes improves system performance by reducing the amount of system resources required to access page tables.

Precision Time Protocol (PTP) - Allows synchronization between nodes in the network with sub-microsecond accuracy.
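Several of these settings are typically expressed in a single PerformanceProfile custom resource. The following is a minimal sketch only; the CPU ranges, huge page count, and node selector are placeholders that depend on your hardware:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: du-performance-profile          # placeholder name
spec:
  cpu:
    reserved: "0-1"                     # CPUs reserved for cluster management workloads
    isolated: "2-31"                    # CPUs dedicated to latency-sensitive DU workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 16                         # placeholder count; size this for your CNF
  realTimeKernel:
    enabled: true                       # use the real-time kernel on the node
  numa:
    topologyPolicy: restricted          # align CPU and device allocation to NUMA nodes
  nodeSelector:
    node-role.kubernetes.io/master: ""  # single-node clusters label the node as a control plane node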
Distributed unit (DU) hosts require the BIOS to be configured before the host can be provisioned. The BIOS configuration is dependent on the specific hardware that runs your DUs and the particular requirements of your installation.
In this Developer Preview release, configuration and tuning of BIOS for DU bare-metal host machines is the responsibility of the customer. Automatic setting of BIOS is not handled by the zero touch provisioning workflow.
Set the UEFI/BIOS Boot Mode to UEFI.
In the host boot sequence order, set Hard drive first.
Apply the specific BIOS configuration for your hardware. The following table describes a representative BIOS configuration for an Intel Xeon Skylake or Intel Cascade Lake server, based on the Intel FlexRAN 4G and 5G baseband PHY reference design.
The exact BIOS configuration depends on your specific hardware and network requirements. The following sample configuration is for illustrative purposes only.
BIOS Setting | Configuration
---|---
CPU Power and Performance Policy | Performance
Uncore Frequency Scaling | Disabled
Performance P-limit | Disabled
Enhanced Intel SpeedStep® Tech | Enabled
Intel Configurable TDP | Enabled
Configurable TDP Level | Level 2
Intel® Turbo Boost Technology | Enabled
Energy Efficient Turbo | Disabled
Hardware P-States | Disabled
Package C-State | C0/C1 state
C1E | Disabled
Processor C6 | Disabled
Enable global SR-IOV and VT-d settings in the BIOS for the host. These settings are relevant to bare-metal environments.
Before you can provision distributed units (DU) at scale, you must install Red Hat Advanced Cluster Management (RHACM), which handles the provisioning of the DUs.
RHACM is deployed as an Operator on the OKD hub cluster. It controls clusters and applications from a single console with built-in security policies. RHACM provisions and manages your DU hosts. To install RHACM in a disconnected environment, you create a mirror registry that mirrors the Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster.
You also use a disconnected mirror host to serve the FCOS ISO and RootFS disk images that provision the DU bare-metal host operating system.
Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. You can also use this procedure in unrestricted networks to ensure your clusters only use container images that have satisfied your organizational controls on external content.
You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the disconnected procedure to copy images to a device that you can move across network boundaries.
You must have a container image registry that supports Docker v2-2 in the location that will host the OKD cluster, such as one of the following registries:
If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support.
Red Hat does not test third party registries with OKD.
You can mirror the images that are required for OKD installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OKD subscriptions.
You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The internal registry of the OKD cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If you choose a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OKD clusters.
When you populate your mirror registry with OKD images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
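After the mirroring completes, clusters are typically pointed at the mirror registry with an ImageContentSourcePolicy resource. The following is a sketch only; the mirror host name is a placeholder, and the exact source and mirror repositories to use are printed by the mirroring command:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-okd-release            # placeholder name
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.example.com:5000/okd     # placeholder mirror repository
    source: quay.io/openshift/okd     # placeholder source; use the value from the mirror command output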
For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source.
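For example, one way to search the CRI-O logs on a node is to use node logs from a host with cluster access; the node name is a placeholder:

$ oc adm node-logs <node_name> -u crio | grep "Trying to access"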
Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location.
You can install the OpenShift CLI (oc) to interact with OKD from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD. Download and install the new version of oc.
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
You configured a mirror registry to use in your disconnected environment.
Complete the following steps on the installation host:
Generate the base64-encoded user name and password or token for your mirror registry:
$ echo -n '<user_name>:<password>' | base64 -w0 (1)
BGVtbYk3ZHAtqXs=
(1) For <user_name> and <password>, specify the user name and password that you configured for your registry.
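The generated value is then added to your registry credentials file. A minimal sketch of such a file, with a placeholder registry host name, credentials, and email address:

{
  "auths": {
    "mirror.example.com:5000": {
      "auth": "<base64_credentials>",
      "email": "registry@example.com"
    }
  }
}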