This example assumes a functioning OKD cluster and access to an existing, dedicated GlusterFS cluster.
Container Native Storage (CNS) using GlusterFS and Heketi is a great way to perform dynamic provisioning for shared filesystems in a Kubernetes-based cluster like OKD. However, if an existing, dedicated Gluster cluster is available external to the OKD cluster, you can also provision storage from it rather than from a containerized GlusterFS implementation.
This example:
Shows how simple it is to install and configure a Heketi server to work with OKD to perform dynamic provisioning.
Assumes some familiarity with Kubernetes and the Kubernetes Persistent Storage model.
Assumes you have access to an existing, dedicated GlusterFS cluster that has raw devices available for consumption and management by a Heketi server. If you do not have this, you can create a three-node cluster using your virtual machine solution of choice. Ensure you create a few raw devices with plenty of space (at least 100GB each is recommended). See the Red Hat Gluster Storage Installation Guide.
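To confirm that a device is truly raw (no partition table, filesystem, or LVM signatures), you can inspect it on each Gluster node before handing it to Heketi. A quick check using standard utilities; the device name is just this example's, and wipefs without options only lists signatures rather than erasing them:
# lsblk /dev/sde
# wipefs /dev/sde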
This example uses the following environment and prerequisites:
GlusterFS cluster running Red Hat Gluster Storage (RHGS) 3.1. Three nodes, each with at least two 100GB RAW devices:
gluster23.rhs (192.168.1.200)
gluster24.rhs (192.168.1.201)
gluster25.rhs (192.168.1.202)
Heketi service/client node running Red Hat Enterprise Linux (RHEL) 7.x or RHGS 3.1. Heketi can be installed on one of the Gluster nodes:
glusterclient2.rhs (192.168.1.203)
OKD node. This example uses an all-in-one OKD cluster (master and node on a single host), though it can work using a standard, multi-node cluster as well.
k8dev2.rhs (192.168.1.208)
Heketi is used to manage the Gluster cluster storage (adding volumes, removing volumes, etc.). As stated, this can be RHEL or RHGS, and can be installed on one of the existing Gluster storage nodes. This example uses a stand-alone RHGS 3.1 node running Heketi.
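Once the server and client are installed and configured (the steps that follow), day-to-day management goes through heketi-cli. A few illustrative commands; the volume ID is a placeholder:
# heketi-cli volume list
# heketi-cli volume expand --volume=<volume-id> --expand-size=10
# heketi-cli volume delete <volume-id>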
The Red Hat Gluster Storage Administration Guide can be used as a reference during this process.
Install Heketi and the Heketi client. From the host designated to run Heketi and the Heketi client, run:
# yum install heketi heketi-client -y
The Heketi server can be any of the existing hosts, though typically this will be the OKD master host. This example, however, uses a separate host not part of the GlusterFS or OKD cluster.
Create and install Heketi private keys on each GlusterFS cluster node. From the host that is running Heketi:
# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
# ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster23.rhs
# ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster24.rhs
# ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster25.rhs
# chown heketi:heketi /etc/heketi/heketi_key*
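Optionally, verify key-based login from the Heketi host before continuing (the hostname is from this example's environment):
# ssh -i /etc/heketi/heketi_key root@gluster23.rhs uname -n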
Edit the /etc/heketi/heketi.json file to set up the SSH executor. Below is an excerpt from the /etc/heketi/heketi.json file; the parts to configure are the executor and SSH sections:
"executor": "ssh", (1)
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
"keyfile": "/etc/heketi/heketi_key", (2)
"user": "root", (3)
"port": "22", (4)
"fstab": "/etc/fstab" (5)
},
1 | Change executor from mock to ssh. |
2 | Set keyfile to the private key created in the previous step (its public half was copied to each Gluster node). |
3 | Update user to a user that has sudo or root access. |
4 | Set port to 22 and remove all other text. |
5 | Set fstab to the default, /etc/fstab, and remove all other text. |
Restart and enable the Heketi service:
# systemctl restart heketi
# systemctl enable heketi
Test the connection to Heketi:
# curl http://glusterclient2.rhs:8080/hello
Hello from Heketi
Set an environment variable for the Heketi server:
# export HEKETI_CLI_SERVER=http://glusterclient2.rhs:8080
Topology is used to tell Heketi about the environment and what nodes and devices it will manage.
Heketi is currently limited to managing raw devices only. If a device is already a Gluster volume, it will be skipped and ignored.
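If a candidate device carries leftover signatures from earlier use, Heketi may refuse to add it. The signatures can be cleared with wipefs; note that this is destructive and assumes the device holds no needed data:
# wipefs -a /dev/sde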
Create and load the topology file. A sample file is located at /usr/share/heketi/topology-sample.json by default, or in /etc/heketi, depending on how Heketi was installed.
{
"clusters": [
{
"nodes": [
{
"node": {
"hostnames": {
"manage": [
"gluster23.rhs"
],
"storage": [
"192.168.1.200"
]
},
"zone": 1
},
"devices": [
"/dev/sde",
"/dev/sdf"
]
},
{
"node": {
"hostnames": {
"manage": [
"gluster24.rhs"
],
"storage": [
"192.168.1.201"
]
},
"zone": 1
},
"devices": [
"/dev/sde",
"/dev/sdf"
]
},
{
"node": {
"hostnames": {
"manage": [
"gluster25.rhs"
],
"storage": [
"192.168.1.202"
]
},
"zone": 1
},
"devices": [
"/dev/sde",
"/dev/sdf"
]
}
]
}
]
}
Using heketi-cli, run the following command to load the topology of your environment.
# heketi-cli topology load --json=topology.json
Found node gluster23.rhs on cluster bdf9d8ca3fa269ff89854faf58f34b9a
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node gluster24.rhs ... ID: 8e677d8bebe13a3f6846e78a67f07f30
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
...
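To double-check what Heketi now manages, the loaded topology and cluster can be inspected with the same client:
# heketi-cli topology info
# heketi-cli cluster list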
Create a Gluster volume to verify Heketi:
# heketi-cli volume create --size=50
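The create command should print the new volume's name and ID; the volume can also be confirmed from the Heketi side:
# heketi-cli volume list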
View the volume information from one of the Gluster nodes:
# gluster volume info

Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
Type: Distributed-Replicate
Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
Status: Started
...
1 | Volume created by heketi-cli. |
Create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OKD. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gluster-dyn
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://glusterclient2.rhs:8080" (1)
restauthenabled: "false" (2)
1 | The Heketi server from the HEKETI_CLI_SERVER environment variable. |
2 | Since authentication is not turned on in this example, set to false. |
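This example runs with Heketi authentication disabled. If the Heketi server were secured, the storage class would also need to reference credentials. A minimal sketch, assuming a hypothetical secret named heketi-secret whose key field holds the base64-encoded Heketi admin key (replace the placeholder value):
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  key: <base64-encoded-admin-key>
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-dyn-secure
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://glusterclient2.rhs:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"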
From the OKD master host, create the storage class:
# oc create -f glusterfs-storageclass1.yaml
storageclass "gluster-dyn" created
Create a persistent volume claim (PVC), requesting the newly-created storage class. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster-dyn-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 30Gi
storageClassName: gluster-dyn
From the OKD master host, create the PVC:
# oc create -f glusterfs-pvc-storageclass.yaml
persistentvolumeclaim "gluster-dyn-pvc" created
View the PVC to see that the volume was dynamically created and bound to the PVC:
# oc get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gluster-dyn-pvc   Bound     pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   30Gi       RWX           gluster-dyn    42s
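The dynamically provisioned persistent volume itself can be listed as well; its name is generated by the provisioner, so it will differ per cluster:
# oc get pv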
Verify and view the new volume on one of the Gluster nodes:
# gluster volume info

Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
Type: Distributed-Replicate
Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
Status: Started
...
Volume Name: vol_f1404b619e6be6ef673e2b29d58633be (2)
Type: Distributed-Replicate
Volume ID: 7dc234d0-462f-4c6c-add3-fb9bc7e8da5e
Status: Started
Number of Bricks: 2 x 2 = 4
...
1 | Volume created by heketi-cli. |
2 | New dynamically created volume triggered by Kubernetes and the storage class. |
At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod. In this example, create a simple NGINX pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
name: gluster-pod1
labels:
name: gluster-pod1
spec:
containers:
- name: gluster-pod1
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- name: web
containerPort: 80
securityContext:
privileged: true
volumeMounts:
- name: gluster-vol1
mountPath: /usr/share/nginx/html
volumes:
- name: gluster-vol1
persistentVolumeClaim:
claimName: gluster-dyn-pvc (1)
1 | The name of the PVC created in the previous step. |
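Note that the pod definition requests privileged: true. On OKD, the pod's service account must be allowed the privileged security context constraint (SCC) for this to run; if the pod is rejected, a cluster administrator can grant it. This assumes the pod runs under the default service account in the current project:
# oc adm policy add-scc-to-user privileged -z default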
From the OKD master host, create the pod:
# oc create -f nginx-pod.yaml
pod "gluster-pod1" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
# oc get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP          NODE
gluster-pod1   1/1       Running   0          9m        10.38.0.0   node1
Now remote into the container with oc exec and create an index.html file:
# oc exec -ti gluster-pod1 /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
Now curl the URL of the pod:
# curl http://10.38.0.0
Hello World from GlusterFS!!!