
Installing the Lifecycle Agent by using the CLI

You can use the OpenShift CLI (oc) to install the Lifecycle Agent.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. Create a Namespace object YAML file for the Lifecycle Agent:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
    1. Create the Namespace CR by running the following command:

      $ oc create -f <namespace_filename>.yaml
  2. Create an OperatorGroup object YAML file for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-lifecycle-agent
      namespace: openshift-lifecycle-agent
    spec:
      targetNamespaces:
      - openshift-lifecycle-agent
    1. Create the OperatorGroup CR by running the following command:

      $ oc create -f <operatorgroup_filename>.yaml
  3. Create a Subscription CR for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-lifecycle-agent-subscription
      namespace: openshift-lifecycle-agent
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    1. Create the Subscription CR by running the following command:

      $ oc create -f <subscription_filename>.yaml
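Optionally, before you run the verification steps, you can confirm that Operator Lifecycle Manager has processed the Subscription and generated an install plan. The following command is a minimal sketch; it assumes the namespace used in this procedure:

    $ oc get subscription,installplan -n openshift-lifecycle-agent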
Verification
  1. To verify that the installation succeeded, inspect the CSV resource by running the following command:

    $ oc get csv -n openshift-lifecycle-agent
    Example output
    NAME                   DISPLAY                     VERSION   REPLACES   PHASE
    lifecycle-agent.v4.0   Openshift Lifecycle Agent   4.0                  Succeeded
  2. Verify that the Lifecycle Agent is up and running by running the following command:

    $ oc get deploy -n openshift-lifecycle-agent
    Example output
    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    lifecycle-agent-controller-manager   1/1     1            1           14s
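If you script the installation, you can also wait for the Operator deployment to report that it is available instead of polling manually. The following command is a minimal sketch; it assumes the deployment name shown in the example output above:

    $ oc wait deployment/lifecycle-agent-controller-manager \
        -n openshift-lifecycle-agent \
        --for=condition=Available \
        --timeout=300s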

Installing the Lifecycle Agent by using the web console

You can use the OKD web console to install the Lifecycle Agent.

Prerequisites
  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. In the OKD web console, navigate to Ecosystem → Software Catalog.

  2. Search for the Lifecycle Agent from the list of available Operators, and then click Install.

  3. On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.

  4. Click Install.

Verification
  1. To confirm that the installation is successful:

    1. Click Ecosystem → Installed Operators.

    2. Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

If the Operator is not installed successfully:

  1. Click Ecosystem → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.

  2. Click Workloads → Pods, and check the logs for pods in the openshift-lifecycle-agent project.
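If you prefer to run the same checks from a terminal, the following commands are a minimal sketch of the equivalent CLI steps; the deployment name matches the example output earlier in this document:

    $ oc get pods -n openshift-lifecycle-agent
    $ oc logs -n openshift-lifecycle-agent deployment/lifecycle-agent-controller-manager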

Installing the Lifecycle Agent with GitOps ZTP

Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to perform an image-based upgrade.

Procedure
  1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory (see the extraction sketch after this procedure):

    Example LcaSubscriptionNS.yaml file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-lifecycle-agent

    Example LcaSubscriptionOperGroup.yaml file:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: lifecycle-agent-operatorgroup
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
        - openshift-lifecycle-agent

    Example LcaSubscription.yaml file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lifecycle-agent
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown

    Example directory structure:

    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── LcaSubscriptionNS.yaml
    │   ├── LcaSubscriptionOperGroup.yaml
    │   ├── LcaSubscription.yaml
  2. Add the CRs to your common PolicyGenerator:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: common-latest
    placementBindingDefaults:
      name: common-placement-binding
    policyDefaults:
      namespace: ztp-common
      placement:
        labelSelector:
          common: "true"
          du-profile: "latest"
      remediationAction: inform
      severity: low
      namespaceSelector:
        exclude:
          - kube-*
        include:
          - '*'
      evaluationInterval:
        compliant: 10m
        noncompliant: 10s
    policies:
    - name: common-latest-subscriptions-policy
      policyAnnotations:
        ran.openshift.io/ztp-deploy-wave: "2"
      manifests:
        - path: source-crs/LcaSubscriptionNS.yaml
        - path: source-crs/LcaSubscriptionOperGroup.yaml
        - path: source-crs/LcaSubscription.yaml
    [...]
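The CRs referenced in step 1 ship inside the ztp-site-generate container image. The following commands are a minimal sketch of one way to extract them into a local directory before copying them into the source-crs directory of your GitOps repository; the registry path, image tag, and in-image path are examples and depend on your release:

    $ mkdir -p ./out
    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out
    $ cp ./out/source-crs/LcaSubscription*.yaml <path_to_gitops_repo>/source-crs/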

Installing and configuring the OADP Operator with GitOps ZTP

Install and configure the OADP Operator with GitOps ZTP before starting the upgrade.

Procedure
  1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:

    Example OadpSubscriptionNS.yaml file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-adp

    Example OadpSubscriptionOperGroup.yaml file:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
      - openshift-adp

    Example OadpSubscription.yaml file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: stable-1.4
      name: redhat-oadp-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown

    Example OadpOperatorStatus.yaml file:

    apiVersion: operators.coreos.com/v1
    kind: Operator
    metadata:
      name: redhat-oadp-operator.openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    status:
      components:
        refs:
        - kind: Subscription
          namespace: openshift-adp
          conditions:
          - type: CatalogSourcesUnhealthy
            status: "False"
        - kind: InstallPlan
          namespace: openshift-adp
          conditions:
          - type: Installed
            status: "True"
        - kind: ClusterServiceVersion
          namespace: openshift-adp
          conditions:
          - type: Succeeded
            status: "True"
            reason: InstallSucceeded

    Example directory structure:

    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── OadpSubscriptionNS.yaml
    │   ├── OadpSubscriptionOperGroup.yaml
    │   ├── OadpSubscription.yaml
    │   ├── OadpOperatorStatus.yaml
  2. Add the CRs to your common PolicyGenTemplate:

    apiVersion: ran.openshift.io/v1
    kind: PolicyGenTemplate
    metadata:
      name: "example-common-latest"
      namespace: "ztp-common"
    spec:
      bindingRules:
        common: "true"
        du-profile: "latest"
      sourceFiles:
        - fileName: OadpSubscriptionNS.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpSubscriptionOperGroup.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpSubscription.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpOperatorStatus.yaml
          policyName: "subscriptions-policy"
    [...]
  3. Create the DataProtectionApplication CR and the S3 secret only for the target cluster:

    1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:

      Example OadpDataProtectionApplication.yaml file:

      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: dataprotectionapplication
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        configuration:
          restic:
            enable: false
          velero:
            defaultPlugins:
              - aws
              - openshift
            resourceTimeout: 10m
        backupLocations:
          - velero:
              config:
                profile: "default"
                region: minio
                s3Url: $url
                insecureSkipTLSVerify: "true"
                s3ForcePathStyle: "true"
              provider: aws
              default: true
              credential:
                key: cloud
                name: cloud-credentials
              objectStorage:
                bucket: $bucketName
                prefix: $prefixName
      status:
        conditions:
        - reason: Complete
          status: "True"
          type: Reconciled
      • spec.configuration.restic.enable must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.

      • bucket defines the bucket name that is created in the S3 backend. prefix defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the Red Hat Advanced Cluster Management hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.

      Example OadpSecret.yaml file:

      apiVersion: v1
      kind: Secret
      metadata:
        name: cloud-credentials
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      type: Opaque

      Example OadpBackupStorageLocationStatus.yaml file:

      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        name: dataprotectionapplication-1
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      status:
        phase: Available

      The name value in the BackupStorageLocation resource must follow a specific naming convention that aligns with the corresponding DataProtectionApplication resource.

      • The name must use the <DataProtectionApplication.metadata.name>-<index> pattern.

      • The <index> represents the position of the corresponding entry in the spec.backupLocations field in the DataProtectionApplication resource. The position starts at 1.

      • If you change the metadata.name value of the DataProtectionApplication resource in the OadpDataProtectionApplication.yaml file, you must also update the metadata.name field in the BackupStorageLocation resource to match the new value.

      The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP.

    2. Add the CRs to your site PolicyGenTemplate with overrides:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "example-cnf"
        namespace: "ztp-site"
      spec:
        bindingRules:
          sites: "example-cnf"
          du-profile: "latest"
        mcp: "master"
        sourceFiles:
          ...
          - fileName: OadpSecret.yaml
            policyName: "config-policy"
            data:
              cloud: <your_credentials>
          - fileName: OadpDataProtectionApplication.yaml
            policyName: "config-policy"
            spec:
              backupLocations:
                - velero:
                    config:
                      region: minio
                      s3Url: <your_S3_URL>
                      profile: "default"
                      insecureSkipTLSVerify: "true"
                      s3ForcePathStyle: "true"
                    provider: aws
                    default: true
                    credential:
                      key: cloud
                      name: cloud-credentials
                    objectStorage:
                      bucket: <your_bucket_name>
                      prefix: <cluster_name>
          - fileName: OadpBackupStorageLocationStatus.yaml
            policyName: "config-policy"

where:

your_credentials

Specifies your credentials for your S3 storage backend.

OadpDataProtectionApplication.yaml

If more than one backupLocations entry is defined in the OadpDataProtectionApplication CR, ensure that each location has a corresponding OadpBackupStorageLocationStatus CR added for status tracking. Ensure that the name of each additional OadpBackupStorageLocationStatus CR is overridden with the correct index as described in the example OadpBackupStorageLocationStatus.yaml file.

your_S3_URL

Specifies the URL for your S3-compatible bucket.

bucket and prefix

The bucket defines the bucket name that is created in the S3 backend. The prefix defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the Red Hat Advanced Cluster Management hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
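The value that replaces <your_credentials> in the OadpSecret.yaml override must be base64-encoded, because it is set under the data field of the Secret. For an S3 backend used through the velero aws plugin, the decoded value is typically an AWS-style credentials file. The following is a minimal sketch; the file name and access key values are placeholders:

    Example credentials file:

    [default]
    aws_access_key_id=<access_key_id>
    aws_secret_access_key=<secret_access_key>

    Generate the value for the cloud key by base64-encoding the file, for example:

    $ base64 -w 0 <credentials_filename>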