Prepare your clusters for the upgrade by installing the Lifecycle Agent and the OADP Operator.
To install the OADP Operator with the non-GitOps method, see "Installing the OADP Operator".
You can use the OpenShift CLI (oc) to install the Lifecycle Agent.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
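If you want to confirm that your current session has the required privileges, you can run, for example:
$ oc auth can-i '*' '*' --all-namespaces
A cluster-admin session returns yes.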
Create a Namespace object YAML file for the Lifecycle Agent, for example lcao-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lifecycle-agent
  annotations:
    workload.openshift.io/allowed: management
Create the Namespace CR by running the following command:
$ oc create -f lcao-namespace.yaml
Create an OperatorGroup object YAML file for the Lifecycle Agent, for example lcao-operatorgroup.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lifecycle-agent
  namespace: openshift-lifecycle-agent
spec:
  targetNamespaces:
  - openshift-lifecycle-agent
Create the OperatorGroup CR by running the following command:
$ oc create -f lcao-operatorgroup.yaml
Create a Subscription CR, for example lcao-subscription.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-lifecycle-agent-subscription
  namespace: openshift-lifecycle-agent
spec:
  channel: "stable"
  name: lifecycle-agent
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create the Subscription CR by running the following command:
$ oc create -f lcao-subscription.yaml
To verify that the installation succeeded, inspect the CSV resource by running the following command:
$ oc get csv -n openshift-lifecycle-agent
NAME                      DISPLAY                     VERSION   REPLACES   PHASE
lifecycle-agent.v4.16.0   Openshift Lifecycle Agent   4.16.0               Succeeded
Verify that the Lifecycle Agent is up and running by running the following command:
$ oc get deploy -n openshift-lifecycle-agent
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
lifecycle-agent-controller-manager   1/1     1            1           14s
You can use the OKD web console to install the Lifecycle Agent.
Log in as a user with cluster-admin privileges.
In the OKD web console, navigate to Operators → OperatorHub.
Search for the Lifecycle Agent from the list of available Operators, and then click Install.
On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.
Click Install.
To confirm that the installation is successful:
Click Operators → Installed Operators.
Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded.
During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator is not installed successfully:
Click Operators → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Click Workloads → Pods, and check the logs for pods in the openshift-lifecycle-agent project.
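If you prefer troubleshooting from the CLI, the same information is available with commands such as the following:
$ oc get subscriptions,installplans,csv -n openshift-lifecycle-agent
$ oc get pods -n openshift-lifecycle-agent
$ oc logs deployment/lifecycle-agent-controller-manager -n openshift-lifecycle-agent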
Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to perform an image-based upgrade.
Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:
LcaSubscriptionNS.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lifecycle-agent
  annotations:
    workload.openshift.io/allowed: management
    ran.openshift.io/ztp-deploy-wave: "2"
  labels:
    kubernetes.io/metadata.name: openshift-lifecycle-agent
LcaSubscriptionOperGroup.yaml file:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: lifecycle-agent-operatorgroup
  namespace: openshift-lifecycle-agent
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
spec:
  targetNamespaces:
  - openshift-lifecycle-agent
LcaSubscription.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lifecycle-agent
  namespace: openshift-lifecycle-agent
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
spec:
  channel: "stable"
  name: lifecycle-agent
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown
The resulting directory structure is similar to the following:
├── kustomization.yaml
├── sno
│ ├── example-cnf.yaml
│ ├── common-ranGen.yaml
│ ├── group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ └── ns.yaml
├── source-crs
│ ├── LcaSubscriptionNS.yaml
│ ├── LcaSubscriptionOperGroup.yaml
│ ├── LcaSubscription.yaml
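If you use this PolicyGenTemplate-based layout, the kustomization.yaml file next to the sno and source-crs directories typically lists the PolicyGenTemplate files as kustomize generators. A minimal sketch, assuming the file names and layout shown above:
generators:
  - sno/common-ranGen.yaml
  - sno/group-du-sno-ranGen.yaml
  - sno/group-du-sno-validator-ranGen.yaml
  - sno/example-cnf.yaml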
Add the CRs to your common PolicyGenTemplate:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "example-common-latest"
  namespace: "ztp-common"
spec:
  bindingRules:
    common: "true"
    du-profile: "latest"
  sourceFiles:
    - fileName: LcaSubscriptionNS.yaml
      policyName: "subscriptions-policy"
    - fileName: LcaSubscriptionOperGroup.yaml
      policyName: "subscriptions-policy"
    - fileName: LcaSubscription.yaml
      policyName: "subscriptions-policy"
[...]
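After your GitOps pipeline syncs the change, you can check on the hub cluster that the generated subscriptions-policy exists and eventually becomes compliant, for example (assuming the ztp-common namespace used in this example):
$ oc get policies -n ztp-common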
Install and configure the OADP Operator with GitOps ZTP before starting the upgrade.
Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:
OadpSubscriptionNS.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
  labels:
    kubernetes.io/metadata.name: openshift-adp
OadpSubscriptionOperGroup.yaml file:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
spec:
  targetNamespaces:
  - openshift-adp
OadpSubscription.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
spec:
  channel: stable-1.4
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
status:
  state: AtLatestKnown
OadpOperatorStatus.yaml file:
apiVersion: operators.coreos.com/v1
kind: Operator
metadata:
  name: redhat-oadp-operator.openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "2"
status:
  components:
    refs:
    - kind: Subscription
      namespace: openshift-adp
      conditions:
      - type: CatalogSourcesUnhealthy
        status: "False"
    - kind: InstallPlan
      namespace: openshift-adp
      conditions:
      - type: Installed
        status: "True"
    - kind: ClusterServiceVersion
      namespace: openshift-adp
      conditions:
      - type: Succeeded
        status: "True"
        reason: InstallSucceeded
The resulting directory structure is similar to the following:
├── kustomization.yaml
├── sno
│ ├── example-cnf.yaml
│ ├── common-ranGen.yaml
│ ├── group-du-sno-ranGen.yaml
│ ├── group-du-sno-validator-ranGen.yaml
│ └── ns.yaml
├── source-crs
│ ├── OadpSubscriptionNS.yaml
│ ├── OadpSubscriptionOperGroup.yaml
│ ├── OadpSubscription.yaml
│ ├── OadpOperatorStatus.yaml
Add the CRs to your common PolicyGenTemplate:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "example-common-latest"
  namespace: "ztp-common"
spec:
  bindingRules:
    common: "true"
    du-profile: "latest"
  sourceFiles:
    - fileName: OadpSubscriptionNS.yaml
      policyName: "subscriptions-policy"
    - fileName: OadpSubscriptionOperGroup.yaml
      policyName: "subscriptions-policy"
    - fileName: OadpSubscription.yaml
      policyName: "subscriptions-policy"
    - fileName: OadpOperatorStatus.yaml
      policyName: "subscriptions-policy"
[...]
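Once the policies are enforced, you can confirm the Operator installations directly on the target cluster, for example:
$ oc get csv -n openshift-lifecycle-agent
$ oc get csv -n openshift-adp
Both commands should eventually report a PHASE of Succeeded.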
Create the DataProtectionApplication CR and the S3 secret only for the target cluster:
Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:
DataProtectionApplication.yaml file:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dataprotectionapplication
  namespace: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "100"
spec:
  configuration:
    restic:
      enable: false (1)
    velero:
      defaultPlugins:
        - aws
        - openshift
      resourceTimeout: 10m
  backupLocations:
    - velero:
        config:
          profile: "default"
          region: minio
          s3Url: $url
          insecureSkipTLSVerify: "true"
          s3ForcePathStyle: "true"
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: $bucketName (2)
          prefix: $prefixName (2)
status:
  conditions:
  - reason: Complete
    status: "True"
    type: Reconciled
1. The spec.configuration.restic.enable field must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.
2. The bucket defines the bucket name that is created in the S3 backend. The prefix defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
OadpSecret.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "100"
type: Opaque
OadpBackupStorageLocationStatus.yaml file:
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  namespace: openshift-adp
  annotations:
    ran.openshift.io/ztp-deploy-wave: "100"
status:
  phase: Available
The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP.
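To check the same condition directly on the target cluster after the policies are applied, you can run, for example:
$ oc get backupstoragelocations.velero.io -n openshift-adp
The PHASE column should report Available.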
Add the CRs to your site PolicyGenTemplate with overrides:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "example-cnf"
  namespace: "ztp-site"
spec:
  bindingRules:
    sites: "example-cnf"
    du-profile: "latest"
  mcp: "master"
  sourceFiles:
    ...
    - fileName: OadpSecret.yaml
      policyName: "config-policy"
      data:
        cloud: <your_credentials> (1)
    - fileName: DataProtectionApplication.yaml
      policyName: "config-policy"
      spec:
        backupLocations:
          - velero:
              config:
                region: minio
                s3Url: <your_S3_URL> (2)
                profile: "default"
                insecureSkipTLSVerify: "true"
                s3ForcePathStyle: "true"
              provider: aws
              default: true
              credential:
                key: cloud
                name: cloud-credentials
              objectStorage:
                bucket: <your_bucket_name> (3)
                prefix: <cluster_name> (3)
    - fileName: OadpBackupStorageLocationStatus.yaml
      policyName: "config-policy"
1. Specify your credentials for your S3 storage backend.
2. Specify the URL for your S3-compatible bucket.
3. The bucket defines the bucket name that is created in the S3 backend. The prefix defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
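For reference, the value that replaces <your_credentials> is typically the content of an AWS-style shared credentials file, base64-encoded because Kubernetes stores Secret data fields as base64-encoded values. A minimal sketch, with placeholder keys:
[default]
aws_access_key_id=<access_key_id>
aws_secret_access_key=<secret_access_key>
Assuming the file is saved as credentials-velero, you can generate the encoded value with:
$ base64 -w0 credentials-velero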