OKD can be configured to access local volumes for application data.
Local volumes are persistent volumes (PVs) representing locally-mounted file systems. In the future, they may be extended to raw block devices.
Local volumes are different from hostPath volumes. They carry a special annotation that makes any pod that uses the PV be scheduled on the same node where the local volume is mounted.
In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. This provisioner is currently limited: it only scans pre-configured directories and cannot dynamically provision volumes, although that may be implemented in a future release.
The local volume provisioner allows using local storage within OKD and supports:
Volumes
PVs
Local volumes are an alpha feature and may change in a future release of OKD.
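For illustration, a PV for a local mount might look roughly like the following. This is a sketch, not output from the provisioner; the PV name, node name, capacity, and the alpha node-affinity annotation shown here are assumptions based on the alpha local volume design:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # illustrative name
  annotations:
    # Alpha annotation that pins any pod using this PV to the node that owns the disk
    volume.alpha.kubernetes.io/node-affinity: '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {"matchExpressions": [
            {"key": "kubernetes.io/hostname", "operator": "In", "values": ["node-1"]}
          ]}
        ]
      }
    }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1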
Enable the PersistentLocalVolumes feature gate on all masters and nodes.
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add PersistentLocalVolumes=true under the apiServerArguments and controllerArguments sections:
apiServerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
...
controllerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
...
On all nodes, edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) and add the PersistentLocalVolumes=true feature gate under kubeletArguments:
kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true
All local volumes must be manually mounted before they can be consumed by OKD as PVs.
All volumes must be mounted into the /mnt/local-storage/<storage-class-name>/<volume> path. The administrators are required to create the local devices as needed (by using any method such as a disk partition or an LVM), create suitable file systems on these devices, and mount them using a script or /etc/fstab entries.
Example /etc/fstab entries:
# device name    # mount point                   # FS    # options   # extra
/dev/sdb1 /mnt/local-storage/ssd/disk1 ext4 defaults 1 2
/dev/sdb2 /mnt/local-storage/ssd/disk2 ext4 defaults 1 2
/dev/sdb3 /mnt/local-storage/ssd/disk3 ext4 defaults 1 2
/dev/sdc1 /mnt/local-storage/hdd/disk1 ext4 defaults 1 2
/dev/sdc2 /mnt/local-storage/hdd/disk2 ext4 defaults 1 2
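For example, one device could be prepared and mounted by hand as follows (a sketch; the device and directory names are illustrative and must match your environment and storage class layout):
$ mkfs.ext4 /dev/sdb1
$ mkdir -p /mnt/local-storage/ssd/disk1
$ mount /dev/sdb1 /mnt/local-storage/ssd/disk1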
All volumes must be accessible to processes running within Docker containers. Change the labels of mounted file systems to allow that:
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
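To verify that the label was applied, inspect the SELinux context of one of the mounted directories, for example:
$ ls -ldZ /mnt/local-storage/ssd/disk1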
OKD depends on an external provisioner to create PVs for local devices and to clean them up when they are not needed (to enable reuse).
This external provisioner should be configured using a ConfigMap
to relate directories with StorageClasses. This configuration must be created before the provisioner is deployed.
(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example:
$ oc new-project local-storage
Define the mapping between directories and StorageClasses in a ConfigMap, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  "local-ssd": | (1)
    {
      "hostDir": "/mnt/local-storage/ssd", (2)
      "mountDir": "/mnt/local-storage/ssd" (3)
    }
  "local-hdd": |
    {
      "hostDir": "/mnt/local-storage/hdd",
      "mountDir": "/mnt/local-storage/hdd"
    }
(1) Name of the StorageClass.
(2) Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
(3) Path to the directory in the provisioner pod. We recommend using the same directory structure as used on the host.
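Assuming the configuration above is saved as local-volume-config.yaml (the file name is illustrative), it can be created in the provisioner namespace with:
$ oc create -f local-volume-config.yaml -n local-storage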
With this configuration, the provisioner creates:
One PV with StorageClass local-ssd for every subdirectory in /mnt/local-storage/ssd.
One PV with StorageClass local-hdd for every subdirectory in /mnt/local-storage/hdd.
The LocalPersistentVolumes alpha feature now also requires the VolumeScheduling alpha feature. This is a breaking change, and the following changes are required:
The VolumeScheduling feature gate must also be enabled on the kube-scheduler and kube-controller-manager components.
The NoVolumeNodeConflict predicate has been removed. For non-default schedulers, update your scheduler policy.
The CheckVolumeBinding predicate must be enabled in non-default schedulers.
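As a sketch, enabling both feature gates in the master and node configuration files might look as follows (this assumes the same configuration files as above; verify the exact argument syntax for your release):
# /etc/origin/master/master-config.yaml
apiServerArguments:
  feature-gates:
  - PersistentLocalVolumes=true,VolumeScheduling=true
controllerArguments:
  feature-gates:
  - PersistentLocalVolumes=true,VolumeScheduling=true

# /etc/origin/node/node-config.yaml
kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true,VolumeScheduling=true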
Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.
Install the local provisioner from the local-storage-provisioner-template.yaml file.
Create a service account that can run pods as the root user, use hostPath volumes, and use any SELinux context, so that it can monitor, manage, and clean local volumes:
$ oc create serviceaccount local-storage-admin
$ oc adm policy add-scc-to-user privileged -z local-storage-admin
To allow the provisioner pod to delete content on local volumes created by any pod, root privileges and any SELinux context are required. hostPath is required to access the /mnt/local-storage path on the host.
Install the template:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
Instantiate the template by specifying values for the CONFIGMAP, SERVICE_ACCOUNT, NAMESPACE, and PROVISIONER_IMAGE parameters:
$ oc new-app -p CONFIGMAP=local-volume-config \
-p SERVICE_ACCOUNT=local-storage-admin \
-p NAMESPACE=local-storage \
-p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
local-storage-provisioner
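After the template is instantiated, you can check that a provisioner pod is running on each node (the namespace matches the NAMESPACE parameter above; the pod names depend on the DaemonSet created by the template):
$ oc get pods -n local-storage -o wide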
Add the necessary storage classes:
$ oc create -f ./storage-class-ssd.yaml
$ oc create -f ./storage-class-hdd.yaml
Example storage-class-ssd.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Example storage-class-hdd.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches directories specified in the ConfigMap and creates PVs for them automatically.
The provisioner runs as root to be able to clean up the directories when a PV is released and all data needs to be removed.
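Once PVs exist, an application can request one with a regular PVC that references one of the storage classes above, for example (the claim name and requested size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-ssd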
Adding a new device requires several manual steps:
Stop the DaemonSet with the provisioner.
Create a subdirectory in the right directory on the node with the new device and mount it there.
Start the DaemonSet with the provisioner.
Omitting any of these steps may result in the wrong PV being created.
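One possible way to carry out these steps, assuming the DaemonSet created by the template is named local-volume-provisioner and runs in the local-storage namespace (verify the names in your deployment; the device and directory names are illustrative):
# Stop the provisioner by deleting its DaemonSet; it is recreated below
$ oc delete daemonset local-volume-provisioner -n local-storage
# On the node, prepare and mount the new device under its storage class directory
$ mkfs.ext4 /dev/sdd1
$ mkdir -p /mnt/local-storage/ssd/disk4
$ mount /dev/sdd1 /mnt/local-storage/ssd/disk4
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
# Start the provisioner again by re-instantiating the template
$ oc new-app -p CONFIGMAP=local-volume-config \
  -p SERVICE_ACCOUNT=local-storage-admin \
  -p NAMESPACE=local-storage \
  -p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
  local-storage-provisioner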