
Logical Volume Manager (LVM) Storage uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.

LVM Storage creates thin-provisioned volumes by using Logical Volume Manager and provides dynamic provisioning of block storage on single-node OpenShift clusters with limited resources.

You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.

Logical Volume Manager Storage installation

You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads.

You can deploy LVM Storage on single-node OpenShift clusters by using the OKD CLI (oc), OKD web console, or Red Hat Advanced Cluster Management (RHACM).

Prerequisites to install LVM Storage

The prerequisites to install LVM Storage are as follows:

  • Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.

  • Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them (see the example command after this list).

  • Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.

    You cannot wipe the disks that are in use.

  • If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OKD cluster. See the Installing LVM Storage using RHACM section.
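
For example, a minimal way to wipe an unused disk from a debug shell on the node is shown below. This is a sketch: <node_name> and <device> are placeholders for your node and the unused disk, and the command destroys any existing signatures on that disk.

  $ oc debug node/<node_name> -- chroot /host wipefs --all --force /dev/<device>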

Installing LVM Storage with the CLI

As a cluster administrator, you can install Logical Volume Manager (LVM) Storage by using the CLI.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. Create a namespace for the LVM Storage Operator.

    1. Save the following YAML in the lvms-namespace.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        labels:
          openshift.io/cluster-monitoring: "true"
          pod-security.kubernetes.io/enforce: privileged
          pod-security.kubernetes.io/audit: privileged
          pod-security.kubernetes.io/warn: privileged
        name: openshift-storage
    2. Create the Namespace CR:

      $ oc create -f lvms-namespace.yaml
  2. Create an Operator group for the LVM Storage Operator.

    1. Save the following YAML in the lvms-operatorgroup.yaml file:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-storage-operatorgroup
        namespace: openshift-storage
      spec:
        targetNamespaces:
        - openshift-storage
    2. Create the OperatorGroup CR:

      $ oc create -f lvms-operatorgroup.yaml
  3. Subscribe to the LVM Storage Operator.

    1. Save the following YAML in the lvms-sub.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: lvms
        namespace: openshift-storage
      spec:
        installPlanApproval: Automatic
        name: lvms-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
    2. Create the Subscription CR:

      $ oc create -f lvms-sub.yaml
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase
    Example output
    Name                         Phase
    4.13.0-202301261535          Succeeded

Installing LVM Storage with the web console

You can install Logical Volume Manager (LVM) Storage by using the OKD OperatorHub.

Prerequisites
  • You have access to the single-node OpenShift cluster.

  • You are using an account with the cluster-admin and Operator installation permissions.

Procedure
  1. Log in to the OKD Web Console.

  2. Click Operators → OperatorHub.

  3. Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.

  4. Click Install.

  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.13.

    2. Installation Mode as A specific namespace on the cluster.

    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.

    4. Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. A CLI sketch for approving a pending install plan follows the verification steps.

  6. Click Install.

Verification steps
  • Verify that LVM Storage shows a green tick, indicating successful installation.
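
If you selected the Manual approval strategy, the installation waits until you approve the pending install plan. As a minimal sketch, you can also approve it from the CLI; <install_plan_name> is a placeholder that you can find in the output of the first command:

  $ oc get installplan -n openshift-storage
  $ oc patch installplan <install_plan_name> -n openshift-storage --type merge --patch '{"spec":{"approved":true}}'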

Uninstalling LVM Storage by using the CLI

You can uninstall LVM Storage by using the OpenShift CLI (oc).

Prerequisites
  • You have logged in to oc as a user with cluster-admin permissions.

  • You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.

  • You deleted the LVMCluster custom resource (CR).

Procedure
  1. Get the currentCSV value for the LVM Storage Operator by running the following command:

    $ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV
    Example output
    currentCSV: lvms-operator.v4.15.3
  2. Delete the subscription by running the following command:

    $ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>
    Example output
    subscription.operators.coreos.com "lvms-operator" deleted
  3. Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:

    $ oc delete clusterserviceversion <currentCSV> -n <namespace> (1)
    1 Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.
    Example output
    clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted
Verification
  • To verify that the LVM Storage Operator is uninstalled, run the following command:

    $ oc get csv -n <namespace>

    If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.

Uninstalling LVM Storage installed using the OpenShift Web Console

You can uninstall LVM Storage using the Red Hat OpenShift Container Platform Web Console.

Prerequisites
  • You deleted all the applications on the clusters that are using the storage provisioned by LVM Storage.

  • You deleted the persistent volume claims (PVCs) and persistent volumes (PVs) provisioned using LVM Storage.

  • You deleted all volume snapshots provisioned by LVM Storage.

  • You verified that no logical volume resources exist by using the oc get logicalvolume command.

  • You have access to the single-node OpenShift cluster using an account with cluster-admin permissions.

Procedure
  1. From the Operators → Installed Operators page, scroll to LVM Storage or type LVM Storage into the Filter by name box to find it, and then click LVM Storage.

  2. Click on the LVMCluster tab.

  3. On the right-hand side of the LVMCluster page, select Delete LVMCluster from the Actions drop-down menu.

  4. Click on the Details tab.

  5. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.

  6. Select Remove. LVM Storage stops running and is completely removed.

Installing LVM Storage in a disconnected environment

You can install LVM Storage on OKD 4.13 in a disconnected environment. All sections referenced in this procedure are linked in Additional resources.

Prerequisites
  • You read the About disconnected installation mirroring section.

  • You have access to the OKD image repository.

  • You created a mirror registry.

Procedure
  1. Follow the steps in the Creating the image set configuration procedure. To create an ImageSetConfiguration resource for LVM Storage, you can use the following example YAML file:

    Example ImageSetConfiguration file for LVM Storage
    kind: ImageSetConfiguration
    apiVersion: mirror.openshift.io/v1alpha2
    archiveSize: 4 (1)
    storageConfig: (2)
      registry:
        imageURL: example.com/mirror/oc-mirror-metadata (3)
        skipTLS: false
    mirror:
      platform:
        channels:
        - name: stable-4.13 (4)
          type: ocp
        graph: true (5)
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 (6)
        packages:
        - name: lvms-operator (7)
          channels:
          - name: stable (8)
      additionalImages:
      - name: registry.redhat.io/ubi9/ubi:latest (9)
      helm: {}
    1 Add archiveSize to set the maximum size, in GiB, of each file within the image set.
    2 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values, unless you are using the Technology Preview OCI feature.
    3 Set the registry URL for the storage backend.
    4 Set the channel to retrieve the OKD images from.
    5 Add graph: true to generate the OpenShift Update Service (OSUS) graph image to allow for an improved cluster update experience when using the web console. For more information, see About the OpenShift Update Service.
    6 Set the Operator catalog to retrieve the OKD images from.
    7 Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog.
    8 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
    9 Specify any additional images to include in the image set.
  2. Follow the procedure in the Mirroring an image set to a mirror registry section. A sketch of the mirroring command follows this procedure.

  3. Follow the procedure in the Configuring image registry repository mirroring section.
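
For reference, a minimal sketch of the mirroring command used in step 2, assuming the example above is saved as imageset-config.yaml and that registry.example.com:5000 is a placeholder for your mirror registry:

  $ oc mirror --config=imageset-config.yaml docker://registry.example.com:5000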

Installing LVM Storage using RHACM

LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy object on RHACM that deploys and configures the Operator when it is applied to managed clusters which match the selector specified in the PlacementRule resource. The policy is also applied to clusters that are imported later and satisfy the placement rule.

Prerequisites
  • Access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.

  • Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.

  • The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.

Procedure
  1. Log in to the RHACM CLI using your OKD credentials.

  2. Create a namespace in which you will create policies.

    # oc create ns lvms-policy-ns
  3. To create a policy, save the following YAML to a file with a name such as policy-lvms-operator.yaml:

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector: (1)
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: lvms
          spec:
            object-templates:
               - complianceType: musthave
                 objectDefinition:
                   apiVersion: lvm.topolvm.io/v1alpha1
                   kind: LVMCluster
                   metadata:
                     name: my-lvmcluster
                     namespace: openshift-storage
                   spec:
                     storage:
                       deviceClasses:
                       - name: vg1
                         default: true
                         deviceSelector: (2)
                           paths:
                           - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                           - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
                         thinPoolConfig:
                           name: thin-pool-1
                           sizePercent: 90
                           overprovisionRatio: 10
                         nodeSelector: (3)
                           nodeSelectorTerms:
                           - matchExpressions:
                               - key: app
                                 operator: In
                                 values:
                                 - test1
            remediationAction: enforce
            severity: low
    1 Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
    2 To control or restrict the volume group to your preferred disks, you can manually specify the local paths of the disks in the deviceSelector section of the LVMCluster YAML.
    3 To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up.

    The nodeSelector node filter matching is not the same as pod label matching.

  4. Create the policy in the namespace by running the following command:

    # oc create -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1 The policy-lvms-operator.yaml is the name of the file to which the policy is saved.

    This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters which match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.

    After a device is added to the LVMCluster, it cannot be removed.
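
    To confirm that the policy was propagated, you can check its compliance state on the hub cluster. A minimal check:

    # oc get policy -n lvms-policy-ns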

Uninstalling LVM Storage installed using RHACM

To uninstall LVM Storage that you installed using RHACM, you need to delete the RHACM policy that you created for deploying and configuring the Operator.

When you delete the RHACM policy, the resources that the policy created are not removed. To remove these resources, you must create additional policies and perform the following steps:

  1. Remove all the persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage.

  2. Remove the LVMCluster resources to clean up Logical Volume Manager resources created on the disks.

  3. Create an additional policy to uninstall the Operator.

Prerequisites
  • Ensure that the following are deleted before deleting the policy:

    • All the applications on the managed clusters that are using the storage provisioned by LVM Storage.

    • PVCs and persistent volumes (PVs) provisioned using LVM Storage.

    • All volume snapshots provisioned by LVM Storage.

  • Ensure you have access to the RHACM cluster using an account with a cluster-admin role.

Procedure
  1. In the OpenShift CLI (oc), delete the RHACM policy that you created for deploying and configuring LVM Storage on the hub cluster by using the following command:

    # oc delete -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1 The policy-lvms-operator.yaml is the name of the file to which the policy was saved.
  2. To create a policy for removing the LVMCluster resource, save the following YAML to a file with a name such as lvms-remove-policy.yaml. This enables the Operator to clean up all Logical Volume Manager resources that it created on the cluster.

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-delete
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: enforce
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal
            spec:
              remediationAction: enforce (1)
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage (2)
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-delete
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-delete
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-delete
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-delete
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue
    1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2 This namespace field must have the openshift-storage value.
  3. Set the value of the PlacementRule.spec.clusterSelector field to select the clusters from which to uninstall LVM Storage.

  4. Create the policy by running the following command:

    # oc create -f lvms-remove-policy.yaml -n lvms-policy-ns
  5. To create a policy to check if the LVMCluster CR has been removed, save the following YAML to a file with a name such as check-lvms-remove-policy.yaml:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-lvmcluster-inform
      annotations:
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    spec:
      remediationAction: inform
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: policy-lvmcluster-removal-inform
            spec:
              remediationAction: inform (1)
              severity: low
              object-templates:
                - complianceType: mustnothave
                  objectDefinition:
                    kind: LVMCluster
                    apiVersion: lvm.topolvm.io/v1alpha1
                    metadata:
                      name: my-lvmcluster
                      namespace: openshift-storage (2)
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-lvmcluster-check
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-policy-lvmcluster-check
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-lvmcluster-inform
    ---
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-policy-lvmcluster-check
    spec:
      clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
          - key: mykey
            operator: In
            values:
              - myvalue
    1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2 The namespace field must have the openshift-storage value.
  6. Create the policy by running the following command:

    # oc create -f check-lvms-remove-policy.yaml -n lvms-policy-ns
  7. Check the policy status by running the following command:

    # oc get policy -n lvms-policy-ns
    Example output
    NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy-lvmcluster-delete   enforce              Compliant          15m
    policy-lvmcluster-inform   inform               Compliant          15m
  8. After both the policies are compliant, save the following YAML to a file with a name such as lvms-uninstall-policy.yaml to create a policy to uninstall LVM Storage.

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-uninstall-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-uninstall-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-uninstall-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: uninstall-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: uninstall-lvms
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: uninstall-lvms
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms-operator
                  namespace: openshift-storage
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: policy-remove-lvms-crds
          spec:
            object-templates:
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: logicalvolumes.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmclusters.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroupnodestatuses.lvm.topolvm.io
            - complianceType: mustnothave
              objectDefinition:
                apiVersion: apiextensions.k8s.io/v1
                kind: CustomResourceDefinition
                metadata:
                  name: lvmvolumegroups.lvm.topolvm.io
            remediationAction: enforce
            severity: high
  9. Create the policy by running the following command:

    # oc create -f lvms-uninstall-policy.yaml -n lvms-policy-ns
Additional resources

Limitations to configure the size of the devices used in LVM Storage

The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.

  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.

    • The default PE and LE size is 4 MB.

    • If the size of the PE is increased, the maximum size of an LV is determined by the kernel limits and your disk space.

Table 1. Size limits for different architectures using the default PE and LE size

  Architecture   RHEL 6                  RHEL 7                  RHEL 8   RHEL 9
  32-bit         16 TB                   -                       -        -
  64-bit         8 EB [1] / 100 TB [2]   8 EB [1] / 500 TB [2]   8 EB     8 EB

  1. Theoretical size.

  2. Tested size.
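
To confirm the physical extent size that a volume group uses on a node, you can query it from a debug shell. A minimal sketch, where <node_name> is a placeholder for your node:

  $ oc debug node/<node_name> -- chroot /host vgs -o vg_name,vg_extent_size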

Creating a Logical Volume Manager cluster on a single-node OpenShift worker node

You can configure a single-node OpenShift worker node as a Logical Volume Manager cluster. On the control-plane single-node OpenShift node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.

When you create a Logical Volume Manager cluster, StorageClass and LVMVolumeGroup resources work together to provide dynamic provisioning of storage. StorageClass CRs define the properties of the storage that you can dynamically provision. LVMVolumeGroup CRs represent the LVM volume groups that provide the back-end storage for the persistent volumes (PVs) that you create.

Perform the following procedure to create a Logical Volume Manager cluster on a single-node OpenShift worker node.

You also can perform the same task by using the OKD web console.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You installed LVM Storage in a single-node OpenShift cluster and have installed a worker node for use in the single-node OpenShift cluster.

Procedure
  1. Create the LVMCluster custom resource (CR).

    You can only create a single instance of the LVMCluster custom resource (CR) on an OKD cluster.

    1. Save the following YAML in the lvmcluster.yaml file:

      apiVersion: lvm.topolvm.io/v1alpha1
      kind: LVMCluster
      metadata:
        name: lvmcluster
      spec:
        storage:
          deviceClasses:  (1)
            - name: vg1
              default: true (2)
              deviceSelector:
                paths:
                - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
              thinPoolConfig:
                name: thin-pool-1
                sizePercent: 90
                overprovisionRatio: 10
              nodeSelector: (3)
                nodeSelectorTerms:
                  - matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - test1
      1 To create multiple device storage classes in the cluster, create a YAML array under deviceClasses for each required storage class. If you add or remove a deviceClass, then the update reflects in the cluster only after deleting and recreating the topolvm-node pod. Configure the local device paths of the disks as an array of values in the deviceSelector field. When configuring multiple device classes, you must specify the device path for each device.
      2 Mandatory: The LVMCluster resource must contain a single default storage class. Set default: false for secondary device storage classes. If you are upgrading the LVMCluster resource from a previous version, you must specify a single default device class.
      3 Optional: To control what worker nodes the LVMCluster CR is applied to, specify a set of node selector labels. The specified labels must be present on the node in order for the LVMCluster to be scheduled on that node.

    It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX, as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/, to ensure consistent disk identification.

    With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node.

    For more information, see the Fedora documentation.

    2. Create the LVMCluster CR:

      $ oc create -f lvmcluster.yaml
      Example output
      lvmcluster/lvmcluster created

      The LVMCluster resource creates the following system-managed CRs:

      LVMVolumeGroup

      Tracks individual volume groups across multiple nodes.

      LVMVolumeGroupNodeStatus

      Tracks the status of the volume groups on a node.

Verification

Verify that the LVMCluster resource has created the StorageClass, LVMVolumeGroup, and LVMVolumeGroupNodeStatus CRs.

LVMVolumeGroup and LVMVolumeGroupNodeStatus are managed by LVM Storage. Do not edit these CRs directly.

  1. Check that the LVMCluster CR is in a ready state by running the following command:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}'
    Example output
    {
        "name": "vg1",
        "nodeStatus": [
            {
                "devices": [
                    "/dev/nvme0n1",
                    "/dev/nvme1n1",
                    "/dev/nvme2n1"
                ],
                "node": "kube-node",
                "status": "Ready"
            }
        ]
    }
  2. Check that the storage class is created:

    $ oc get storageclass
    Example output
    NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    lvms-vg1      topolvm.io           Delete          WaitForFirstConsumer   true                   31m
  3. Check that the volume snapshot class is created:

    $ oc get volumesnapshotclass
    Example output
    NAME          DRIVER               DELETIONPOLICY   AGE
    lvms-vg1      topolvm.io           Delete           24h
  4. Check that the LVMVolumeGroup resource is created:

    $ oc get lvmvolumegroup vg1 -o yaml
    Example output
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMVolumeGroup
    metadata:
      creationTimestamp: "2022-02-02T05:16:42Z"
      generation: 1
      name: vg1
      namespace: lvm-operator-system
      resourceVersion: "17242461"
      uid: 88e8ad7d-1544-41fb-9a8e-12b1a66ab157
    spec: {}
  5. Check that the LVMVolumeGroupNodeStatus resource is created:

    $ oc get lvmvolumegroupnodestatuses.lvm.topolvm.io kube-node -o yaml
    Example output
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMVolumeGroupNodeStatus
    metadata:
      creationTimestamp: "2022-02-02T05:17:59Z"
      generation: 1
      name: kube-node
      namespace: lvm-operator-system
      resourceVersion: "17242882"
      uid: 292de9bb-3a9b-4ee8-946a-9b587986dafd
    spec:
      nodeStatus:
        - devices:
            - /dev/nvme0n1
            - /dev/nvme1n1
            - /dev/nvme2n1
          name: vg1
          status: Ready

Provisioning storage using LVM Storage

You can provision persistent volume claims (PVCs) by using the storage class that is created during the Operator installation. You can provision block and file PVCs; however, storage is allocated only when a pod that uses the PVC is created.

LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB.

Procedure
  1. Identify the StorageClass that is created when LVM Storage is deployed.

    The StorageClass name is in the format, lvms-<device-class-name>. The device-class-name is the name of the device class that you provided in the LVMCluster of the Policy YAML. For example, if the deviceClass is called vg1, then the storageClass name is lvms-vg1.

    The volumeBindingMode of the storage class is set to WaitForFirstConsumer.

  2. To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.

    Example YAML to create a PVC
    # block pvc
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-block-1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 10Gi
      storageClassName: lvms-vg1
    ---
    # file pvc
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvm-file-1
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 10Gi
      storageClassName: lvms-vg1
  3. Create the PVC by running the following command:

    # oc create -f pvc.yaml -n <application_namespace>

    The created PVCs remain in pending state until you deploy the pods that use them.
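
    For example, a minimal pod that mounts the lvm-file-1 PVC so that the storage is allocated; the pod name, image, and mount path are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-file-1
      namespace: default
    spec:
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal:latest
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: lvm-file-1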

Scaling storage of single-node OpenShift clusters

OKD supports additional worker nodes for single-node OpenShift clusters on bare-metal user-provisioned infrastructure. LVM Storage detects and uses the new additional worker nodes when the nodes show up.

Scaling up storage by adding capacity to your single-node OpenShift cluster

To scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster, you can increase the capacity by adding disks.

Prerequisites
  • You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.

Procedure
  1. Log in to OKD console of the single-node OpenShift cluster.

  2. From the Operators → Installed Operators page, click the LVM Storage Operator in the openshift-storage namespace.

  3. Click on the LVMCluster tab to list the LVMCluster CR created on the cluster.

  4. Select Edit LVMCluster from the Actions drop-down menu.

  5. Click on the YAML tab.

  6. Edit the LVMCluster CR YAML to add the new device path in the deviceSelector section:

    If the deviceSelector field was not included when you created the LVMCluster CR, it is not possible to add the deviceSelector section to the CR. You must remove the LVMCluster CR and then create a new CR.

    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: my-lvmcluster
    spec:
      storage:
        deviceClasses:
        - name: vg1
          default: true
          deviceSelector:
            paths:
            - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 (1)
            - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
            - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 (2)
          thinPoolConfig:
            name: thin-pool-1
            sizePercent: 90
            overprovisionRatio: 10
    1 The path can be added by name (/dev/sdb) or by path.
    2 A new disk is added.
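
    After you save the updated CR, you can verify that the new device appears in the device class status. A minimal check, reusing the verification command from the cluster creation section:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}'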
Additional resources

Scaling up storage by adding capacity to your single-node OpenShift cluster using RHACM

You can scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster using RHACM.

Prerequisites
  • You have access to the RHACM cluster using an account with cluster-admin privileges.

  • You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.

Procedure
  1. Log in to the RHACM CLI using your OKD credentials.

  2. Find the disk that you want to add. The disk to be added needs to match the device name and path of the existing disks.

  3. To add capacity to the single-node OpenShift cluster, edit the deviceSelector section of the existing policy YAML, for example, policy-lvms-operator.yaml.

    If the deviceSelector field was not included when you created the LVMCluster CR, it is not possible to add the deviceSelector section to the CR. You must remove the LVMCluster CR and then recreate it with a new CR.

    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: placement-install-lvms
    spec:
      clusterConditions:
      - status: "True"
        type: ManagedClusterConditionAvailable
      clusterSelector:
        matchExpressions:
        - key: mykey
          operator: In
          values:
          - myvalue
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-lvms
    placementRef:
      apiGroup: apps.open-cluster-management.io
      kind: PlacementRule
      name: placement-install-lvms
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: install-lvms
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
      name: install-lvms
    spec:
      disabled: false
      remediationAction: enforce
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: install-lvms
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  labels:
                    openshift.io/cluster-monitoring: "true"
                    pod-security.kubernetes.io/enforce: privileged
                    pod-security.kubernetes.io/audit: privileged
                    pod-security.kubernetes.io/warn: privileged
                  name: openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: openshift-storage-operatorgroup
                  namespace: openshift-storage
                spec:
                  targetNamespaces:
                  - openshift-storage
            - complianceType: musthave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: lvms
                  namespace: openshift-storage
                spec:
                  installPlanApproval: Automatic
                  name: lvms-operator
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
            remediationAction: enforce
            severity: low
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: lvms
          spec:
            object-templates:
               - complianceType: musthave
                 objectDefinition:
                   apiVersion: lvm.topolvm.io/v1alpha1
                   kind: LVMCluster
                   metadata:
                     name: my-lvmcluster
                     namespace: openshift-storage
                   spec:
                     storage:
                       deviceClasses:
                       - name: vg1
                         default: true
                         deviceSelector:
                           paths:
                           - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
                           - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
                           - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # new disk is added
                         thinPoolConfig:
                           name: thin-pool-1
                           sizePercent: 90
                           overprovisionRatio: 10
                         nodeSelector:
                           nodeSelectorTerms:
                           - matchExpressions:
                               - key: app
                                 operator: In
                                 values:
                                 - test1
            remediationAction: enforce
            severity: low
  4. Edit the policy by running the following command:

    # oc edit -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1 The policy-lvms-operator.yaml is the name of the existing policy.

    This uses the new disk specified in the LVMCluster CR to provision storage.

Expanding PVCs

To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.

Prerequisites
  • Dynamic provisioning is used.

  • The controlling StorageClass object has allowVolumeExpansion set to true.

Procedure
  1. Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:

    $ oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
  2. Watch the status.conditions field of the PVC to see if the resize has completed. OKD adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes.
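
    For example, a minimal way to inspect the conditions; the values in angle brackets are placeholders:

    $ oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.conditions}'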

Upgrading LVM Storage on single-node OpenShift clusters

You can upgrade the Logical Volume Manager (LVM) Storage Operator to ensure compatibility with your single-node OpenShift version.

Prerequisites
  • You have upgraded your single-node OpenShift cluster.

  • You have installed a previous version of the LVM Storage Operator.

  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. Update the Subscription resource for the LVM Storage Operator by running the following command:

    $ oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update-channel>"}}' (1)
    1 Replace <update-channel> with the version of the LVM Storage Operator that you want to install, for example stable-4.13.
  2. View the upgrade events to check that the installation is complete by running the following command:

    $ oc get events -n openshift-storage
    Example output
    ...
    8m13s       Normal    RequirementsUnknown   clusterserviceversion/lvms-operator.v4.13   requirements not yet checked
    8m11s       Normal    RequirementsNotMet    clusterserviceversion/lvms-operator.v4.13   one or more requirements couldn't be found
    7m50s       Normal    AllRequirementsMet    clusterserviceversion/lvms-operator.v4.13   all requirements found, attempting install
    7m50s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.13   waiting for install components to report healthy
    7m49s       Normal    InstallWaiting        clusterserviceversion/lvms-operator.v4.13   installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated
    7m39s       Normal    InstallSucceeded      clusterserviceversion/lvms-operator.v4.13   install strategy completed with no errors
    ...
Verification
  • Verify the version of the LVM Storage Operator by running the following command:

    $ oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'
    Example output
    lvms-operator.v4.13

Volume snapshots for single-node OpenShift

You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:

  • Back up your application data.

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.

  • Revert to a state at which the volume snapshot was taken.

Additional resources

Creating volume snapshots in single-node OpenShift

You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.

Prerequisites
  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.

  • You stopped all the I/O to the PVC before taking the snapshot.

Procedure
  1. Log in to the single-node OpenShift cluster for which you need to run the oc command.

  2. Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.

    Example YAML to create a volume snapshot
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
        name: lvm-block-1-snap
    spec:
        volumeSnapshotClassName: lvms-vg1
        source:
            persistentVolumeClaimName: lvm-block-1
  3. Create the snapshot by running the following command in the same namespace as the PVC:

    # oc create -f lvms-vol-snapshot.yaml

A read-only copy of the PVC is created as a volume snapshot.
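
To confirm that the snapshot is ready to use, you can list the volume snapshots in the same namespace. A minimal check:

  # oc get volumesnapshot -n <namespace>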

Restoring volume snapshots in single-node OpenShift

When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites
  • The storage class must be the same as that of the source PVC.

  • The size of the requested PVC must be the same as that of the source volume of the snapshot.

    A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.

Procedure
  1. Identify the storage class name of the source PVC and volume snapshot name.

  2. Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.

    Example YAML to restore a PVC.
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: lvm-block-1-restore
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 2Gi
      storageClassName: lvms-vg1
      dataSource:
        name: lvm-block-1-snap
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
  3. Create the PVC by running the following command in the same namespace as the snapshot:

    # oc create -f lvms-vol-restore.yaml

Deleting volume snapshots in single-node OpenShift

You can delete volume snapshot resources and persistent volume claims (PVCs).

Procedure
  1. Delete the volume snapshot resource by running the following command:

    # oc delete volumesnapshot <volume_snapshot_name> -n <namespace>

    When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.

  2. To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:

    # oc delete pvc <pvc_name> -n <namespace>

Volume cloning for single-node OpenShift

A clone is a duplicate of an existing storage volume that can be used like any standard volume.

Creating volume clones in single-node OpenShift

You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.

The cloned PVC has write access.

Prerequisites
  • You ensured that the PVC is in Bound state. This is required for a consistent snapshot.

  • You ensured that the StorageClass is the same as that of the source PVC.

Procedure
  1. Identify the storage class of the source PVC.

  2. To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:

    Example YAML to clone a volume
    apiVersion: v1
    kind: PersistentVolumeClaim
     metadata:
       name: lvm-block-1-clone
     spec:
       storageClassName: lvms-vg1
       dataSource:
         name: lvm-block-1
         kind: PersistentVolumeClaim
       accessModes:
         - ReadWriteOnce
       volumeMode: Block
       resources:
         requests:
           storage: 2Gi
  3. Create the cloned PVC in the same namespace as the source PVC by running the following command:

    # oc create -f lvms-vol-clone.yaml

Deleting cloned volumes in single-node OpenShift

You can delete cloned volumes.

Procedure
  • To delete the cloned volume, delete the cloned PVC by running the following command:

    # oc delete pvc <clone_pvc_name> -n <namespace>

Monitoring LVM Storage

To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:

openshift.io/cluster-monitoring=true
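
If the label was not applied when the namespace was created, you can add it by running the following command; this assumes that LVM Storage is installed in the openshift-storage namespace:

  $ oc label namespace openshift-storage openshift.io/cluster-monitoring=true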

For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.

Metrics

You can monitor LVM Storage by viewing the metrics.

The following table describes the topolvm metrics:

Table 2. topolvm metrics
Metric Description

topolvm_thinpool_data_percent

Indicates the percentage of data space used in the LVM thin pool.

topolvm_thinpool_metadata_percent

Indicates the percentage of metadata space used in the LVM thin pool.

topolvm_thinpool_size_bytes

Indicates the size of the LVM thin pool in bytes.

topolvm_volumegroup_available_bytes

Indicates the available space in the LVM volume group in bytes.

topolvm_volumegroup_size_bytes

Indicates the size of the LVM volume group in bytes.

topolvm_thinpool_overprovisioned_available

Indicates the available over-provisioned size of the LVM thin pool in bytes.

Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.

Alerts

When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.

LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:

Table 3. LVM Storage alerts
Alert Description

VolumeGroupUsageAtThresholdNearFull

This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required.

VolumeGroupUsageAtThresholdCritical

This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.

ThinPoolDataUsageAtThresholdNearFull

This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolDataUsageAtThresholdCritical

This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdNearFull

This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdCritical

This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.

Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.

Procedure
  • Run the must-gather command from the client connected to the LVM Storage cluster:

    $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.13 --dest-dir=<directory_name>
Additional resources

LVM Storage reference YAML file

The sample LVMCluster custom resource (CR) describes all the fields in the YAML file.

Example LVMCluster CR
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  tolerations:
  - effect: NoSchedule
    key: xyz
    operator: Equal
    value: "true"
  storage:
    deviceClasses:    (1)
    - name: vg1    (2)
      default: true
      nodeSelector: (3)
        nodeSelectorTerms: (4)
        - matchExpressions:
          - key: mykey
            operator: In
            values:
            - ssd
      deviceSelector: (5)
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
      thinPoolConfig: (6)
        name: thin-pool-1 (7)
        sizePercent: 90 (8)
        overprovisionRatio: 10 (9)
status:
  deviceClassStatuses: (10)
  - name: vg1
    nodeStatus: (11)
    - devices: (12)
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme2n1
      node: my-node.example.com (13)
      status: Ready (14)
  ready: true (15)
  state: Ready (16)
1 The LVM volume groups to be created on the cluster. Currently, only a single deviceClass is supported.
2 The name of the LVM volume group to be created on the nodes.
3 The nodes on which to create the LVM volume group. If the field is empty, all nodes are considered.
4 A list of node selector requirements.
5 A list of device paths that are used to create the LVM volume group. If this field is empty, all unused disks on the node are used.
6 The LVM thin pool configuration.
7 The name of the thin pool to be created in the LVM volume group.
8 The percentage of remaining space in the LVM volume group that should be used for creating the thin pool.
9 The factor by which additional storage can be provisioned compared to the available storage in the thin pool.
10 The status of the deviceClass.
11 The status of the LVM volume group on each node.
12 The list of devices used to create the LVM volume group.
13 The node on which the deviceClass was created.
14 The status of the LVM volume group on the node.
15 This field is deprecated.
16 The status of the LVMCluster.
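
For comparison with the full reference above, the following is a minimal sketch of an LVMCluster CR that relies on defaults. The volume group and thin pool names are placeholders; because nodeSelector and deviceSelector are omitted, all nodes and all unused disks are considered, as described in callouts 3 and 5:

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10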

Troubleshooting persistent storage

While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting.

Investigating a PVC stuck in the Pending state

A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:

  • Insufficient computing resources.

  • Network problems.

  • Mismatched storage class or node selector.

  • No available persistent volumes (PVs).

  • The node with the PV is in the Not Ready state.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure
  1. Retrieve the list of PVCs by running the following command:

    $ oc get pvc
    Example output
    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    lvms-test   Pending                                      lvms-vg1       11s
  2. Inspect the events associated with a PVC stuck in the Pending state by running the following command:

    $ oc describe pvc <pvc_name> (1)
    1 Replace <pvc_name> with the name of the PVC, for example, lvms-test.
    Example output
    Type     Reason              Age               From                         Message
    ----     ------              ----              ----                         -------
    Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
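
For reference, a PVC that consumes LVM Storage requests a storage class that is typically named lvms-<device_class_name>, for example lvms-vg1 for the device class vg1 shown in this document. The following is a minimal sketch; the PVC name, namespace, and requested size are placeholders, and a missing or mismatched storageClassName value is a common cause of the Pending state:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvms-test
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: lvms-vg1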

Recovering from a missing storage class

If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure
  1. Verify that the LVMCluster CR is present by running the following command:

    $ oc get lvmcluster -n openshift-storage
    Example output
    NAME            AGE
    my-lvmcluster   65m
  2. If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Creating a Logical Volume Manager cluster on a single-node OpenShift worker node".

  3. In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command:

    $ oc get pods -n openshift-storage
    Example output
    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    vg-manager-r6zdv                      1/1     Running   0             66m

    The output of this command must contain a running instance of the following pods:

    • lvms-operator

    • vg-manager

    • topolvm-controller

    • topolvm-node

      If the topolvm-node pod is stuck in the Init state, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:

      $ oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage
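
In addition to checking the LVMCluster CR and the pods, you can verify that the storage class referenced by the PVC exists. The following sketch assumes a device class named vg1, which results in a storage class named lvms-vg1:

$ oc get storageclass lvms-vg1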

Recovering from node failure

A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.

To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure
  • Examine the restart count of the topolvm-node pod instances by running the following command:

    $ oc get pods -n openshift-storage
    Example output
    NAME                                  READY   STATUS    RESTARTS      AGE
    lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
    topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
    topolvm-node-dr26h                    4/4     Running   0             66m
    topolvm-node-54as8                    4/4     Running   0             66m
    topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
    vg-manager-r6zdv                      1/1     Running   0             66m
    vg-manager-990ut                      1/1     Running   0             66m
    vg-manager-an118                      1/1     Running   0             66m
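
If a topolvm-node instance shows an elevated restart count, you can map that pod to its node and review the node conditions. The following commands are a sketch; <node_name> is a placeholder for the node reported in the NODE column of the first command:

$ oc get pods -n openshift-storage -o wide

$ oc describe node <node_name>
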
Next steps
  • If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

Recovering from disk failure

If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk.

Disk and volume provisioning issues result in a generic error message such as Failed to provision volume with storage class <storage_class_name>. The generic error message is followed by a specific volume failure error message.

The following table describes the volume failure error messages:

Table 4. Volume failure error messages
Error message Description

Failed to check volume existence

Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures.

Failed to bind volume

Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC.

FailedMount or FailedAttachVolume

This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC.

FailedUnMount

This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC.

Volume is already exclusively attached to one node and cannot be attached to another

This error can appear with storage solutions that do not support ReadWriteMany access modes.
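
These messages appear as events on the PVC or on the pod that uses it. One way to list only the events that relate to a specific object is shown in the following sketch; <namespace> and <pvc_name> are placeholders:

$ oc get events -n <namespace> --field-selector involvedObject.name=<pvc_name>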

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

Procedure
  1. Inspect the events associated with a PVC by running the following command:

    $ oc describe pvc <pvc_name> (1)
    1 Replace <pvc_name> with the name of the PVC.
  2. Establish a direct connection to the host where the problem is occurring, for example by opening a debug shell as shown in the sketch after this procedure.

  3. Resolve the disk issue.
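
One way to complete steps 2 and 3 is to open a debug shell on the affected node and inspect its block devices, as in the following sketch. <node_name> is a placeholder, and the exact inspection and repair steps depend on the nature of the disk failure:

$ oc debug node/<node_name>

# chroot /host
# lsblk
# dmesg | grep -i error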

Next steps
  • If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".

Performing a forced clean-up

If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.

  • You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.

  • You have stopped the pods that are using the PVCs that were created by using LVM Storage.

Procedure
  1. Switch to the openshift-storage namespace by running the following command:

    $ oc project openshift-storage
  2. Check if the LogicalVolume custom resources (CRs) are present by running the following command:

    $ oc get logicalvolume
    1. If the LogicalVolume CRs are present, delete them by running the following command:

      $ oc delete logicalvolume <name> (1)
      1 Replace <name> with the name of the LogicalVolume CR.
    2. After deleting the LogicalVolume CRs, remove their finalizers by running the following command:

      $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge (1)
      1 Replace <name> with the name of the LogicalVolume CR.
  3. Check if the LVMVolumeGroup CRs are present by running the following command:

    $ oc get lvmvolumegroup
    1. If the LVMVolumeGroup CRs are present, delete them by running the following command:

      $ oc delete lvmvolumegroup <name> (1)
      1 Replace <name> with the name of the LVMVolumeGroup CR.
    2. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:

      $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge (1)
      1 Replace <name> with the name of the LVMVolumeGroup CR.
  4. Delete any LVMVolumeGroupNodeStatus CRs by running the following command:

    $ oc delete lvmvolumegroupnodestatus --all
  5. Delete the LVMCluster CR by running the following command:

    $ oc delete lvmcluster --all
    1. After deleting the LVMCluster CR, remove its finalizer by running the following command:

      $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge (1)
      1 Replace <name> with the name of the LVMCluster CR.
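
After the finalizers are removed, you can optionally confirm that no LVM Storage resources remain before you reinstall or reconfigure LVM Storage. Each of the following commands should report that no resources are found:

$ oc get logicalvolume

$ oc get lvmvolumegroup

$ oc get lvmvolumegroupnodestatus

$ oc get lvmcluster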