
A hostPath volume in an OKD cluster mounts a file or directory from the host node’s file system into your pod. Most pods do not need a hostPath volume, but it does offer a quick option for testing should an application require it.

The cluster administrator must configure the pod to run as privileged. This grants the pod access to the file system of the node it runs on.
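One way a cluster administrator might allow this, shown here as a sketch rather than the only approach, is to grant the privileged security context constraint (SCC) to the service account that the pod uses. The project name hostpath-project and the default service account are placeholders:

    $ oc adm policy add-scc-to-user privileged -z default -n hostpath-project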

Overview

OKD supports hostPath mounting for development and testing on a single-node cluster.

In a production cluster, you would not use hostPath. Instead, a cluster administrator provisions a network resource, such as a GCE Persistent Disk volume or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.
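As an illustration of dynamic provisioning, a cluster administrator on AWS might define a storage class similar to the following. The class name, provisioner, and parameters are examples only and depend on the CSI driver installed in the cluster:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp3-csi
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer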

A hostPath volume must be provisioned statically.

Configuring hostPath volumes in the Pod specification

You can use hostPath volumes to access read-write files on nodes. This can be useful for pods that can configure and monitor the host from the inside. You can also use hostPath volumes to mount volumes on the host using mountPropagation.
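For example, a container's volume mount might request bidirectional mount propagation so that mounts created inside the container propagate back to the host. This is a minimal sketch; the mount path and volume name are illustrative, and Bidirectional propagation requires a privileged container:

    volumeMounts:
    - mountPath: /host/mnt
      mountPropagation: Bidirectional
      name: hostpath-shared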

Using hostPath volumes can be dangerous, as they allow pods to read and write any file on the host. Proceed with caution.

It is recommended that you specify hostPath volumes directly in the Pod specification, rather than in a PersistentVolume object. This is useful because the pod already knows the path it needs to access when configuring nodes.

Procedure
  1. Create a privileged pod:

      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-name
      spec:
        containers:
          ...
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: /host/etc/motd.conf (1)
            name: hostpath-privileged
        ...
        volumes:
          - name: hostpath-privileged
            hostPath:
              path: /etc/motd.conf (2)
    1 The path used to mount the hostPath share inside the privileged pod.
    2 The path on the host that is used to share into the privileged pod.

In this example, the pod sees the host's /etc/motd.conf file at /host/etc/motd.conf. As a result, the motd file can be configured without accessing the host directly.
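If the pod is running, you can check the mount from inside the pod. For example, assuming the pod name pod-name from the example above:

    $ oc exec pod-name -- cat /host/etc/motd.conf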

Statically provisioning hostPath volumes

A pod that uses a hostPath volume must reference a persistent volume that is provisioned manually, or statically; hostPath persistent volumes cannot be provisioned dynamically.

Use persistent volumes with hostPath only when no other persistent storage is available.

Procedure
  1. Define the persistent volume (PV). Create a pv.yaml file with the PersistentVolume object definition:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: task-pv-volume (1)
        labels:
          type: local
      spec:
        storageClassName: manual (2)
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteOnce (3)
        persistentVolumeReclaimPolicy: Retain
        hostPath:
          path: "/mnt/data" (4)
    1 The name of the volume. This name is how it is identified by persistent volume claims or pods.
    2 Used to bind persistent volume claim requests to this persistent volume.
    3 The volume can be mounted as read-write by a single node.
    4 The configuration file specifies that the volume is at /mnt/data on the cluster’s node.
  2. Create the PV from the file:

    $ oc create -f pv.yaml
  3. Define the persistent volume claim (PVC). Create a pvc.yaml file with the PersistentVolumeClaim object definition:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pvc-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: manual
  4. Create the PVC from the file:

    $ oc create -f pvc.yaml
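Optionally, verify that the claim has bound to the volume. Once binding completes, both objects report a status of Bound:

    $ oc get pv task-pv-volume
    $ oc get pvc task-pvc-volume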

Mounting the hostPath share in a privileged pod

After the persistent volume claim has been created, it can be used inside of a pod by an application. The following example demonstrates mounting this share inside of a pod.

Prerequisites
  • A persistent volume claim exists that is mapped to the underlying hostPath share.

Procedure
  • Create a privileged pod that mounts the existing persistent volume claim:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-name (1)
    spec:
      containers:
        ...
        securityContext:
          privileged: true (2)
        volumeMounts:
        - mountPath: /data (3)
          name: hostpath-privileged
      ...
      securityContext: {}
      volumes:
        - name: hostpath-privileged
          persistentVolumeClaim:
            claimName: task-pvc-volume (4)
    1 The name of the pod.
    2 The pod must run as privileged to access the node’s storage.
    3 The path to mount the hostPath share inside the privileged pod.
    4 The name of the PersistentVolumeClaim object that has been previously created.
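As an optional check, you can write a file from inside the pod and confirm that it appears under /mnt/data on the node. This is a sketch; <node_name> is a placeholder, and debugging a node requires sufficient privileges:

    $ oc exec pod-name -- touch /data/test-file
    $ oc debug node/<node_name> -- chroot /host ls /mnt/data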
