OKD supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.
Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OKD cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
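For example, a minimal persistent volume claim sketch; the claim name pvc0001 is illustrative, and the request matches the 5Gi volume defined later in this section:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0001 # illustrative name; claims are namespaced to a project
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi # the cluster binds the claim to a PV with at least this capacity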
OKD defaults to using an in-tree (non-CSI) plugin to provision Cinder storage. In future OKD versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration. After full migration, in-tree plugins will eventually be removed in future versions of OKD.
For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder.
Storage must exist in the underlying infrastructure before it can be mounted as a volume in OKD.
Prerequisites: OKD configured for OpenStack, and the ID of an existing Cinder volume.
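If you do not have the volume ID at hand, you can look it up with the OpenStack CLI, assuming the python-openstackclient tool is installed and configured with credentials for your environment:
$ openstack volume list
$ openstack volume show <volume_name> -f value -c id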
You must define your persistent volume (PV) in an object definition before creating it in OKD:
Save your object definition to a file.
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "pv0001" (1)
spec:
  capacity:
    storage: "5Gi" (2)
  accessModes:
    - "ReadWriteOnce"
  cinder: (3)
    fsType: "ext3" (4)
    volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" (5)
(1) The name of the volume that is used by persistent volume claims or pods.
(2) The amount of storage allocated to this volume.
(3) Indicates cinder for OpenStack Cinder volumes.
(4) The file system that is created when the volume is mounted for the first time.
(5) The Cinder volume to use.
Do not change the fsType parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.
Create the persistent volume from the object definition file you saved in the previous step:
$ oc create -f cinder-persistentvolume.yaml
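To confirm that the PV was registered, list persistent volumes; a newly created, unclaimed PV reports a status of Available:
$ oc get pv pv0001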
You can use unformatted Cinder volumes as PVs because OKD formats them before the first use.
Before OKD mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.
If you use Cinder PVs in your application, configure security for their deployment configurations. An SCC must be created that uses the appropriate fsGroup strategy.
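A minimal sketch of such an SCC follows; the name cinder-scc is hypothetical, the strategies other than fsGroup are illustrative, and in practice you would typically start from an existing SCC such as restricted:
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: cinder-scc # hypothetical name
fsGroup:
  type: MustRunAs # pods must run with an fsGroup; pair with ranges as needed
runAsUser:
  type: MustRunAsRange # illustrative
seLinuxContext:
  type: MustRunAs # illustrative
supplementalGroups:
  type: RunAsAny # illustrative
volumes:
  - persistentVolumeClaim # allow pods using this SCC to mount PVCs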
Create a service account and add it to the SCC:
$ oc create serviceaccount <service_account>
$ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
In your application's deployment configuration, provide the service account name and securityContext:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1 (1)
  selector: (2)
    name: frontend
  template: (3)
    metadata:
      labels: (4)
        name: frontend (5)
    spec:
      containers:
        - image: openshift/hello-openshift
          name: helloworld
          ports:
            - containerPort: 8080
              protocol: TCP
      restartPolicy: Always
      serviceAccountName: <service_account> (6)
      securityContext:
        fsGroup: 7777 (7)
(1) The number of copies of the pod to run.
(2) The label selector of the pod to run.
(3) A template for the pod that the controller creates.
(4) The labels on the pod. They must include labels from the label selector.
(5) The maximum name length after expanding any parameters is 63 characters.
(6) Specifies the service account you created.
(7) Specifies an fsGroup for the pods.
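Note that this example does not yet mount the Cinder-backed storage. A sketch of the template.spec additions that would mount it, assuming a hypothetical claim named pvc0001 bound to the PV and an illustrative mount path:
    spec:
      containers:
        - image: openshift/hello-openshift
          name: helloworld
          volumeMounts:
            - name: cinder-storage # must match the volume name below
              mountPath: /var/lib/data # illustrative mount path
      volumes:
        - name: cinder-storage
          persistentVolumeClaim:
            claimName: pvc0001 # hypothetical claim bound to the Cinder PV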
By default, OKD supports a maximum of 256 Cinder volumes attached to one node, and the Cinder predicate that limits attachable volumes is disabled. To enable the predicate, add the MaxCinderVolumeCount string to the predicates field in the scheduler policy.
For more information on modifying the scheduler policy, see Modifying scheduler policies.
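As a sketch, a scheduler policy configuration that enables the predicate might look like the following; note that listing predicates explicitly replaces the default set, so include every predicate your cluster relies on:
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "MaxCinderVolumeCount"}
  ]
}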