After installing OKD, you can further expand and customize your cluster to your requirements, including storage configuration.

By default, containers use ephemeral, or transient, local storage. Ephemeral storage has a limited lifetime: data stored in it does not persist beyond the life of the container. To store data for the long term, you must configure persistent storage. You can configure storage by using one of the following methods:

Dynamic provisioning

You can dynamically provision storage on demand by defining and creating storage classes that control different levels of storage, including storage access.

Static provisioning

You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options.
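
The following is a minimal sketch of static provisioning, assuming an existing NFS export; the file name, server address, path, and capacity are hypothetical placeholders for your environment:

static-pv-example.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-example    # placeholder name
spec:
  capacity:
    storage: 10Gi            # placeholder capacity
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/data      # placeholder export path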

Dynamic provisioning

Dynamic provisioning allows you to create storage volumes on demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning.

oVirt object definition

OKD creates a default StorageClass object named ovirt-csi-sc, which is used to create dynamically provisioned persistent volumes.

To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML:

ovirt-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name>  (1)
  annotations:
    storageclass.kubernetes.io/is-default-class: "<boolean>"  (2)
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> (3)
reclaimPolicy: Delete (4)
volumeBindingMode: Immediate (5)
parameters:
  storageDomainName: <rhv-storage-domain-name> (6)
  thinProvisioning: "<boolean>"  (7)
  csi.storage.k8s.io/fstype: <file_system_type> (8)
1 Name of the storage class.
2 Set to true to make this storage class the default storage class in the cluster. If set to true, the existing default storage class must be edited and set to false.
3 true enables dynamic volume expansion; false prevents it. true is recommended.
4 Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is Delete.
5 Indicates how to provision and bind PersistentVolumeClaims. When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature.
6 The oVirt storage domain name to use.
7 If true, the disk is thin provisioned. If false, the disk is preallocated. Thin provisioning is recommended.
8 Optional: File system type to be created. Possible values: ext4 (default) or xfs.
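
As a minimal sketch, the following hypothetical persistent volume claim requests a dynamically provisioned volume from a storage class created with the sample above; the claim name, namespace, and size are placeholders:

ovirt-pvc-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovirt-pvc-example    # placeholder claim name
  namespace: my-project      # placeholder namespace
spec:
  storageClassName: <storage_class_name>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi          # placeholder requested size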

Recommended configurable storage technology

The following table summarizes the recommended and configurable storage technologies for the given OKD cluster application.

Table 1. Recommended and configurable storage technology

Storage type            Block              File               Object
ROX1                    Yes4               Yes4               Yes
RWX2                    No                 Yes                Yes
Registry                Configurable       Configurable       Recommended
Scaled registry         Not configurable   Configurable       Recommended
Metrics3                Recommended        Configurable5      Not configurable
Elasticsearch Logging   Recommended        Configurable6      Not supported6
Loki Logging            Not configurable   Not configurable   Recommended
Apps                    Recommended        Recommended        Not configurable7

1 ReadOnlyMany

2 ReadWriteMany

3 Prometheus is the underlying technology used for metrics.

4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.

5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics.

6 For logging, review the recommended storage solution in the Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage or the LokiStack log store in OKD Logging. You must use one persistent volume type per log store.

7 Object storage is not consumed through OKD’s PVs or PVCs. Apps must integrate with the object storage REST API.

A scaled registry is an OpenShift image registry where two or more pod replicas are running.
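
The access modes in Table 1 correspond directly to the accessModes field of a persistent volume claim. As a minimal sketch, the following hypothetical claim requests shared read-write (RWX) file storage; the claim name, size, and storage class are placeholders:

shared-data-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany          # RWX: multiple nodes can mount read-write
  resources:
    requests:
      storage: 5Gi           # placeholder requested size
  storageClassName: <file_storage_class>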

Specific application storage recommendations

Testing shows issues with using the NFS server on Fedora as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using Fedora NFS to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information about any testing that might have been completed against these OKD core components.

Registry

In a non-scaled/non-high-availability (HA) OpenShift image registry cluster deployment:

  • The storage technology does not have to support RWX access mode.

  • The storage technology must ensure read-after-write consistency.

  • The preferred storage technology is object storage followed by block storage.

  • File storage is not recommended for OpenShift image registry cluster deployment with production workloads.
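
As an illustration of the object storage preference, the following is a minimal sketch of an Image Registry Operator configuration that backs the registry with S3-compatible object storage; the bucket and region values are placeholders, and the Config/cluster resource is normally edited in place rather than created:

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  storage:
    s3:
      bucket: my-registry-bucket  # placeholder bucket name
      region: us-east-1           # placeholder region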

Scaled registry

In a scaled/HA OpenShift image registry cluster deployment:

  • The storage technology must support RWX access mode.

  • The storage technology must ensure read-after-write consistency.

  • The preferred storage technology is object storage.

  • Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported.

  • Object storage should be S3 or Swift compliant.

  • For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage.

  • Block storage is not configurable.

  • The use of Network File System (NFS) storage with OKD is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production?.
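
For a scaled registry on a platform where file storage is the only configurable technology, the deployment might look like the following sketch: two registry replicas backed by a single RWX claim. The claim name is a placeholder and must reference storage that supports the ReadWriteMany access mode:

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  replicas: 2                     # two or more replicas make this a scaled registry
  storage:
    pvc:
      claim: registry-storage-rwx # placeholder: must be an RWX-capable claim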

Metrics

In an OKD hosted metrics cluster deployment:

  • The preferred storage technology is block storage.

  • Object storage is not configurable.

It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads.
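
As a minimal sketch, the following monitoring configuration requests dedicated block storage for Prometheus through a volume claim template; the storage class name and size are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: <block_storage_class>  # placeholder block storage class
          resources:
            requests:
              storage: 40Gi                        # placeholder requested size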

Logging

In an OKD hosted logging cluster deployment:

  • Loki Operator:

    • The preferred storage technology is S3-compatible object storage.

    • Block storage is not configurable.

  • OpenShift Elasticsearch Operator:

    • The preferred storage technology is block storage.

    • Object storage is not supported.

As of logging version 5.4.3, the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but the feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
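
As a minimal sketch of the Loki Operator pattern, the following hypothetical LokiStack resource stores log data in S3-compatible object storage referenced by a secret; the names, sizing tier, and storage class are placeholders:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki                # placeholder name
  namespace: openshift-logging
spec:
  size: 1x.small                    # placeholder sizing tier
  storage:
    secret:
      name: logging-loki-s3         # placeholder secret with object storage credentials
      type: s3
  storageClassName: <storage_class> # placeholder: used for Loki's internal volumes
  tenants:
    mode: openshift-logging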

Applications

Application use cases vary from application to application, as described in the following examples:

  • Storage technologies that support dynamic PV provisioning have low mount-time latencies and are not tied to a specific node, which supports a healthy cluster.

  • Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer.

Other specific application storage recommendations

It is not recommended to use RAID configurations for write-intensive workloads, such as etcd. If you run etcd with a RAID configuration, you risk performance issues with your workloads.

  • OpenStack Cinder: OpenStack Cinder tends to perform well in ReadOnlyMany (ROX) access mode use cases.

  • Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage.

  • The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices.

Deploy Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation is a provider-agnostic persistent storage solution for OKD, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OKD for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.

OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OKD, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide.

If you are looking for Red Hat OpenShift Data Foundation information about any of the following topics, see the corresponding Red Hat OpenShift Data Foundation documentation:

  • What’s new, known issues, notable bug fixes, and Technology Previews: OpenShift Data Foundation 4.12 Release Notes

  • Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations: Planning your OpenShift Data Foundation 4.12 deployment

  • Deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster: Deploying OpenShift Data Foundation 4.12 in external mode

  • Deploying OpenShift Data Foundation to local storage on bare metal infrastructure: Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure

  • Deploying OpenShift Data Foundation on Red Hat OKD VMware vSphere clusters: Deploying OpenShift Data Foundation 4.12 on VMware vSphere

  • Deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage: Deploying OpenShift Data Foundation 4.12 using Amazon Web Services

  • Deploying and managing OpenShift Data Foundation on existing Red Hat OKD Google Cloud clusters: Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud

  • Deploying and managing OpenShift Data Foundation on existing Red Hat OKD Azure clusters: Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure

  • Deploying OpenShift Data Foundation to use local storage on IBM Power infrastructure: Deploying OpenShift Data Foundation on IBM Power

  • Deploying OpenShift Data Foundation to use local storage on IBM Z infrastructure: Deploying OpenShift Data Foundation on IBM Z infrastructure

  • Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone: Managing and allocating resources

  • Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa): Managing hybrid and multicloud resources

  • Safely replacing storage devices for Red Hat OpenShift Data Foundation: Replacing devices

  • Safely replacing a node in a Red Hat OpenShift Data Foundation cluster: Replacing nodes

  • Scaling operations in Red Hat OpenShift Data Foundation: Scaling storage

  • Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster: Monitoring Red Hat OpenShift Data Foundation 4.12

  • Resolving issues encountered during operations: Troubleshooting OpenShift Data Foundation 4.12

  • Migrating your OKD cluster from version 3 to version 4: Migration