You can deploy a managed single-node OpenShift cluster by using Red Hat Advanced Cluster Management (RHACM) and the assisted service.
If you are creating multiple managed clusters, deploy them with the GitOps ZTP pipeline as described in "Deploying far edge sites with GitOps ZTP".
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in "Recommended cluster configuration for vDU application workloads".
Use the generator entrypoint for the ztp-site-generate container to generate the site installation and configuration custom resources (CRs) for a cluster based on SiteConfig and PolicyGenerator CRs.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
Create an output folder by running the following command:
$ mkdir -p ./out
Export the argocd directory from the ztp-site-generate container image:
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 extract /home/ztp --tar | tar x -C ./out
The ./out directory has the reference PolicyGenerator and SiteConfig CRs in the out/argocd/example/ folder.
out
└── argocd
└── example
├── acmpolicygenerator
│ ├── {policy-prefix}common-ranGen.yaml
│ ├── {policy-prefix}example-sno-site.yaml
│ ├── {policy-prefix}group-du-sno-ranGen.yaml
│ ├── {policy-prefix}group-du-sno-validator-ranGen.yaml
│ ├── ...
│ ├── kustomization.yaml
│ └── ns.yaml
└── siteconfig
├── example-sno.yaml
├── KlusterletAddonConfigOverride.yaml
└── kustomization.yaml
Create an output folder for the site installation CRs:
$ mkdir -p ./site-install
Modify the example SiteConfig CR for the cluster type that you want to install. Copy example-sno.yaml to site-1-sno.yaml (a sketch of the copy command follows the example CR) and modify the CR to match the details of the site and bare-metal host that you want to install, for example:
# example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno
---
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.16"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
  - clusterName: "example-sno"
    networkType: "OVNKubernetes"
    # installConfigOverrides is a generic way of passing install-config
    # parameters through the siteConfig. The 'capabilities' field configures
    # the composable openshift feature. In this 'capabilities' setting, we
    # remove all the optional set of components.
    # Notes:
    # - OperatorLifecycleManager is needed for 4.15 and later
    # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
    # - Ingress is needed for 4.16 and later
    installConfigOverrides: |
      {
        "capabilities": {
          "baselineCapabilitySet": "None",
          "additionalEnabledCapabilities": [
            "NodeTuning",
            "OperatorLifecycleManager",
            "Ingress"
          ]
        }
      }
    # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+.
    # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo, i.e. sno-extra-manifest.
    # extraManifestPath: sno-extra-manifest
    clusterLabels:
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
      du-profile: "latest"
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
      # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
      common: true
      # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
      group-du-sno: ""
      # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
      # Normally this should match or contain the cluster name so it only applies to a single cluster
      sites: "example-sno"
    clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
    machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
    serviceNetwork:
    - 1001:2::/112
    additionalNTPSources:
    - 1111:2222:3333:4444::2
    # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
    # please see Workload Partitioning Feature for a complete guide.
    cpuPartitioningMode: AllNodes
    # Optionally, this can be used to override the KlusterletAddonConfig that is created for this cluster:
    # crTemplates:
    #   KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
    nodes:
    - hostName: "example-node1.example.com"
      role: "master"
      # Optionally, this can be used to configure desired BIOS setting on a host:
      # biosConfigRef:
      #   filePath: "example-hw.profile"
      bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "example-node1-bmh-secret"
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      # Use UEFISecureBoot to enable secure boot.
      bootMode: "UEFISecureBoot"
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
      # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details
      ignitionConfigOverride: |
        {
          "ignition": {
            "version": "3.2.0"
          },
          "storage": {
            "disks": [
              {
                "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62",
                "partitions": [
                  {
                    "label": "var-lib-containers",
                    "sizeMiB": 0,
                    "startMiB": 250000
                  }
                ],
                "wipeTable": false
              }
            ],
            "filesystems": [
              {
                "device": "/dev/disk/by-partlabel/var-lib-containers",
                "format": "xfs",
                "mountOptions": [
                  "defaults",
                  "prjquota"
                ],
                "path": "/var/lib/containers",
                "wipeFilesystem": true
              }
            ]
          },
          "systemd": {
            "units": [
              {
                "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                "enabled": true,
                "name": "var-lib-containers.mount"
              }
            ]
          }
        }
      nodeNetwork:
        interfaces:
        - name: eno1
          macAddress: "AA:BB:CC:DD:EE:11"
        config:
          interfaces:
          - name: eno1
            type: ethernet
            state: up
            ipv4:
              enabled: false
            ipv6:
              enabled: true
              address:
              # For SNO sites with static IP addresses, the node-specific,
              # API and Ingress IPs should all be the same and configured on
              # the interface
              - ip: 1111:2222:3333:4444::aaaa:1
                prefix-length: 64
          dns-resolver:
            config:
              search:
              - example.com
              server:
              - 1111:2222:3333:4444::2
          routes:
            config:
            - destination: ::/0
              next-hop-interface: eno1
              next-hop-address: 1111:2222:3333:4444::1
              table-id: 254
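The generator command in the next step mounts out/argocd/example/siteconfig into the container as /resources, so it is simplest to keep the modified SiteConfig CR in that directory. A minimal sketch of the copy step under that assumption; adjust the paths to match your layout:
$ cp ./out/argocd/example/siteconfig/example-sno.yaml ./out/argocd/example/siteconfig/site-1-sno.yaml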
Once you have extracted reference CR configuration files from the out/extra-manifest directory of the ztp-site-generate container, you can use extraManifests.searchPaths to include the path to the git directory containing those files. This allows the GitOps ZTP pipeline to apply those CR files during cluster installation. If you configure a searchPaths directory, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation.
Generate the Day 0 installation CRs by processing the modified SiteConfig CR, site-1-sno.yaml, with the following command:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 generator install site-1-sno.yaml /output
site-install
└── site-1-sno
├── site-1-sno_agentclusterinstall_example-sno.yaml
├── site-1-sno_baremetalhost_example-node1.example.com.yaml
├── site-1-sno_clusterdeployment_example-sno.yaml
├── site-1-sno_configmap_example-sno.yaml
├── site-1-sno_infraenv_example-sno.yaml
├── site-1-sno_klusterletaddonconfig_example-sno.yaml
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
├── site-1-sno_managedcluster_example-sno.yaml
├── site-1-sno_namespace_example-sno.yaml
└── site-1-sno_nmstateconfig_example-node1.example.com.yaml
Optional: Generate just the Day 0 MachineConfig installation CRs for a particular cluster type by processing the reference SiteConfig CR with the -E option. For example, run the following commands:
Create an output folder for the MachineConfig CRs:
$ mkdir -p ./site-machineconfig
Generate the MachineConfig installation CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 generator install -E site-1-sno.yaml /output
site-machineconfig
└── site-1-sno
├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml
├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml
└── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml
Generate and export the Day 2 configuration CRs using the reference PolicyGenerator CRs from the previous step. Run the following commands:
Create an output folder for the Day 2 CRs:
$ mkdir -p ./ref
Generate and export the Day 2 configuration CRs:
$ podman run -it --rm -v `pwd`/out/argocd/example/acmpolicygenerator:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.17 generator config -N . /output
The command generates example group and site-specific PolicyGenerator CRs for single-node OpenShift, three-node clusters, and standard clusters in the ./ref folder.
ref
└── customResource
├── common
├── example-multinode-site
├── example-sno
├── group-du-3node
├── group-du-3node-validator
│ └── Multiple-validatorCRs
├── group-du-sno
├── group-du-sno-validator
├── group-du-standard
└── group-du-standard-validator
└── Multiple-validatorCRs
Use the generated CRs as the basis for the CRs that you use to install the cluster. You apply the installation CRs to the hub cluster as described in "Installing a single managed cluster". The configuration CRs can be applied to the cluster after cluster installation is complete.
Verify that the custom roles and labels are applied after the node is deployed:
$ oc describe node example-node.example.com
Name: example-node.example.com
Roles: control-plane,example-label,master,worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
custom-label/parameter1=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=example-node.example.com
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/example-label= (1)
node-role.kubernetes.io/master=
node-role.kubernetes.io/worker=
node.openshift.io/os_id=rhcos
(1) The custom label is applied to the node.
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace.
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
Save the following YAML as the file example-sno-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno (1)
data: (2)
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno (3)
data:
  .dockerconfigjson: <pull_secret> (4)
type: kubernetes.io/dockerconfigjson
(1) Must match the namespace configured in the related SiteConfig CR.
(2) Base64-encoded values for password and username.
(3) Must match the namespace configured in the related SiteConfig CR.
(4) Base64-encoded pull secret.
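If you need to produce the base64-encoded values for the BMC secret, one way is the base64 utility; the credential strings shown here are placeholders:
$ echo -n '<bmc_username>' | base64
$ echo -n '<bmc_password>' | base64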
Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
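For example, a minimal kustomization.yaml sketch, assuming the secret file and the SiteConfig file site-1-sno.yaml sit in the same directory; adjust the entries to match your repository layout:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- site-1-sno.yaml
resources:
- example-sno-secret.yaml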
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OKD installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
In OKD 4.17, you can only add kernel arguments. You cannot replace or delete kernel arguments.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have manually generated the installation and configuration custom resources (CRs).
Edit the spec.kernelArguments specification in the InfraEnv CR to configure kernel arguments:
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <cluster_name>
  namespace: <cluster_name>
spec:
  kernelArguments:
    - operation: append (1)
      value: audit=0 (2)
    - operation: append
      value: trace=1
  clusterRef:
    name: <cluster_name>
    namespace: <cluster_name>
  pullSecretRef:
    name: pull-secret
(1) Specify the append operation to add a kernel argument.
(2) Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument.
The SiteConfig CR generates the InfraEnv resource as part of the day-0 installation CRs.
To verify that the kernel arguments are applied, you can open an SSH session to the target host after the Discovery image verifies that OKD is ready for installation, but before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
Begin an SSH session with the target host:
$ ssh -i /path/to/privatekey core@<host_name>
View the system’s kernel arguments by using the following command:
$ cat /proc/cmdline
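For example, to confirm just the arguments added in the earlier InfraEnv example (audit=0 and trace=1 are assumed values from that example), you can filter the output:
$ grep -o -e 'audit=0' -e 'trace=1' /proc/cmdline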
You can manually deploy a single managed cluster using the assisted service and Red Hat Advanced Cluster Management (RHACM).
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have created the baseboard management controller (BMC) Secret and the image pull-secret Secret custom resources (CRs). See "Creating the managed bare-metal host secrets" for details.
Your target bare-metal host meets the networking and hardware requirements for managed clusters.
Create a ClusterImageSet for each specific cluster version to be deployed, for example clusterImageSet-4.17.yaml. A ClusterImageSet has the following format:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: openshift-4.17.0 (1)
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64 (2)
(1) The descriptive version that you want to deploy.
(2) Specifies the releaseImage to deploy and determines the operating system image version. The discovery ISO is based on the image version as set by releaseImage, or the latest version if the exact version is unavailable.
Apply the ClusterImageSet CR:
$ oc apply -f clusterImageSet-4.17.yaml
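Optionally, confirm that the ClusterImageSet is registered on the hub; the name shown here matches the example above:
$ oc get clusterimageset openshift-4.17.0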
Create the Namespace CR in the cluster-namespace.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: <cluster_name> (1)
  labels:
    name: <cluster_name> (1)
(1) The name of the managed cluster to provision.
Apply the Namespace CR by running the following command:
$ oc apply -f cluster-namespace.yaml
Apply the generated day-0 CRs that you extracted from the ztp-site-generate container and customized to meet your requirements:
$ oc apply -R ./site-install/site-1-sno
Ensure that cluster provisioning was successful by checking the cluster status.
All of the custom resources have been configured and provisioned, and the Agent custom resource is created on the hub for the managed cluster.
Check the status of the managed cluster:
$ oc get managedcluster
True indicates the managed cluster is ready.
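If you prefer to check the condition directly instead of scanning the table output, the availability is reported through the ManagedClusterConditionAvailable condition; a sketch:
$ oc get managedcluster <cluster_name> -o jsonpath='{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}'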
Check the agent status:
$ oc get agent -n <cluster_name>
Use the describe command to provide an in-depth description of the agent’s condition. Statuses to be aware of include BackendError, InputError, ValidationsFailing, InstallationFailed, and AgentIsConnected. These statuses are relevant to the Agent and AgentClusterInstall custom resources.
$ oc describe agent -n <cluster_name>
Check the cluster provisioning status:
$ oc get agentclusterinstall -n <cluster_name>
Use the describe command to provide an in-depth description of the cluster provisioning status:
$ oc describe agentclusterinstall -n <cluster_name>
Check the status of the managed cluster’s add-on services:
$ oc get managedclusteraddon -n <cluster_name>
Retrieve the authentication information of the kubeconfig file for the managed cluster:
$ oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig
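For example, to confirm that the retrieved file works, point oc at it and query the managed cluster; <directory> and <cluster_name> are the same placeholders used above:
$ export KUBECONFIG=<directory>/<cluster_name>-kubeconfig
$ oc get nodes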
Use this procedure to diagnose any installation issues that might occur with the managed cluster.
Check the status of the managed cluster:
$ oc get managedcluster
NAME          HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
SNO-cluster   true                                  True     True        2d19h
If the status in the AVAILABLE column is True, the managed cluster is being managed by the hub.
If the status in the AVAILABLE column is Unknown, the managed cluster is not being managed by the hub. Continue with the following checks to get more information.
Check the AgentClusterInstall install status:
$ oc get clusterdeployment -n <cluster_name>
NAME      PLATFORM          REGION   CLUSTERTYPE   INSTALLED   INFRAID   VERSION   POWERSTATE    AGE
Sno0026   agent-baremetal                          false                           Initialized   2d14h
If the status in the INSTALLED column is false, the installation was unsuccessful.
If the installation failed, enter the following command to review the status of the AgentClusterInstall resource:
$ oc describe agentclusterinstall -n <cluster_name> <cluster_name>
Resolve the errors and reset the cluster:
Remove the cluster’s managed cluster resource:
$ oc delete managedcluster <cluster_name>
Remove the cluster’s namespace:
$ oc delete namespace <cluster_name>
This deletes all of the namespace-scoped custom resources created for this cluster. You must wait for the ManagedCluster CR deletion to complete before proceeding.
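One way to wait for the deletion to finish is with oc wait; the timeout value here is only an example, adjust it as needed:
$ oc wait --for=delete managedcluster/<cluster_name> --timeout=10m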
Recreate the custom resources for the managed cluster.
Red Hat Advanced Cluster Management (RHACM) supports deploying OKD on single-node clusters, three-node clusters, and standard clusters with a specific set of installation custom resources (CRs) that you generate using SiteConfig CRs for each site.
Every managed cluster has its own namespace, and all of the installation CRs except for ManagedCluster and ClusterImageSet are under that namespace. ManagedCluster and ClusterImageSet are cluster-scoped, not namespace-scoped. The namespace and the CR names match the cluster name.
The following table lists the installation CRs that are automatically applied by the RHACM assisted service when it installs clusters using the SiteConfig CRs that you configure.
CR | Description | Usage
---|---|---
BareMetalHost | Contains the connection information for the Baseboard Management Controller (BMC) of the target bare-metal host. | Provides access to the BMC to load and start the discovery image on the target server by using the Redfish protocol.
InfraEnv | Contains information for installing OKD on the target bare-metal host. | Used with ClusterDeployment to generate the discovery ISO for the managed cluster.
AgentClusterInstall | Specifies details of the managed cluster configuration such as networking and the number of control plane nodes. Displays the cluster kubeconfig and credentials when the installation is complete. | Specifies the managed cluster configuration information and provides status during the installation of the cluster.
ClusterDeployment | References the AgentClusterInstall CR to use. | Used with InfraEnv to generate the discovery ISO for the managed cluster.
NMStateConfig | Provides network configuration information such as MAC address to IP mapping, DNS server, default route, and other network settings. | Sets up a static IP address for the managed cluster’s Kube API server.
Agent | Contains hardware information about the target bare-metal host. | Created automatically on the hub when the target machine’s discovery image boots.
ManagedCluster | When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface. | The hub uses this resource to manage and show the status of managed clusters.
KlusterletAddonConfig | Contains the list of services provided by the hub to be deployed to the ManagedCluster resource. | Tells the hub which addon services to deploy to the ManagedCluster resource.
Namespace | Logical space for ManagedCluster resources existing on the hub. Unique per site. | Propagates resources to the ManagedCluster.
Secret | Two CRs are created: the BMC Secret and the image pull Secret. | The BMC Secret authenticates into the target bare-metal host by using its username and password. The image pull Secret contains authentication information for the OKD image installed on the target bare-metal host.
ClusterImageSet | Contains OKD image information such as the repository and image name. | Passed into resources to provide OKD images.