Learn about the Driver Toolkit and how you can use it as a base image for driver containers for enabling special software and hardware devices on Kubernetes.
The Driver Toolkit is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Driver Toolkit is a container image in the OKD payload used as a base image on which you can build driver containers. The Driver Toolkit image contains the kernel packages commonly required as dependencies to build or install kernel modules, as well as a few tools needed in driver containers. The version of these packages will match the kernel version running on the Fedora CoreOS (FCOS) nodes in the corresponding OKD release.
Driver containers are container images used for building and deploying out-of-tree kernel modules and drivers on container operating systems like FCOS. Kernel modules and drivers are software libraries running with a high level of privilege in the operating system kernel. They extend the kernel functionalities or provide the hardware-specific code required to control new devices. Examples include hardware devices like Field Programmable Gate Arrays (FPGA) or GPUs, and software-defined storage (SDS) solutions, such as Lustre parallel file systems, which require kernel modules on client machines. Driver containers are the first layer of the software stack used to enable these technologies on Kubernetes.
The list of kernel packages in the Driver Toolkit includes the following and their dependencies:
kernel-core
kernel-devel
kernel-headers
kernel-modules
kernel-modules-extra
In addition, the Driver Toolkit includes the corresponding real-time kernel packages:
kernel-rt-core
kernel-rt-devel
kernel-rt-modules
kernel-rt-modules-extra
The Driver Toolkit also has several tools which are commonly needed to build and install kernel modules, including:
elfutils-libelf-devel
kmod
binutils
kabi-dw
kernel-abi-whitelists
dependencies for the above
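To see exactly which kernel packages a given Driver Toolkit image carries, you can query RPM inside the image. This is an illustrative sketch only; replace <driver-toolkit-image> with the image URL for your release, and adjust the invocation if the image defines a different entrypoint:
$ podman run --rm <driver-toolkit-image> rpm -qa 'kernel*' | sort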
Prior to the Driver Toolkit’s existence, you could install kernel packages in a pod or build config on OKD using entitled builds or by installing from the kernel RPMs in the host's machine-os-content. The Driver Toolkit simplifies the process by removing the entitlement step, and avoids the privileged operation of accessing the machine-os-content in a pod. The Driver Toolkit can also be used by partners who have access to pre-released OKD versions to prebuild driver containers for their hardware devices for future OKD releases.
The Driver Toolkit is also used by the Special Resource Operator (SRO), which is currently available as a community Operator on OperatorHub. SRO supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. Users can create recipes for SRO to build and deploy a driver container, as well as support software like a device plugin, or metrics. Recipes can include a build config to build a driver container based on the Driver Toolkit, or SRO can deploy a prebuilt driver container.
The driver-toolkit image is available from the Container images section of the Red Hat Ecosystem Catalog and in the OKD release payload. The image corresponding to the most recent minor release of OKD is tagged with the version number in the catalog. The image URL for a specific release can be found using the oc adm CLI command.
Instructions for pulling the driver-toolkit image from registry.redhat.io with podman or in OKD can be found on the Red Hat Ecosystem Catalog.
The driver-toolkit image for the latest minor release is tagged with the minor release version on registry.redhat.io, for example registry.redhat.io/openshift4/driver-toolkit-rhel8:v4.9.
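For example, you could pull the tagged image directly with podman. This is a sketch, assuming your authentication file includes credentials for registry.redhat.io:
$ podman pull --authfile=path/to/pullsecret.json registry.redhat.io/openshift4/driver-toolkit-rhel8:v4.9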
You obtained the image pull secret from the Red Hat OpenShift Cluster Manager.
You installed the OpenShift CLI (oc).
The image URL of the driver-toolkit corresponding to a certain release can be extracted from the release image using the oc adm command:
$ oc adm release info 4.9.0 --image-for=driver-toolkit
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4
This image can be pulled using a valid pull secret, such as the pull secret required to install OKD.
$ podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>
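As an optional check, you can compare the kernel-core version packaged in the toolkit image with the kernel running on one of your nodes. These commands are a sketch, assuming the digest shown above and an active oc session; adjust the podman invocation if the image defines a different entrypoint:
$ podman run --rm quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA> rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}\n" kernel-core
$ oc get nodes -o jsonpath='{.items[0].status.nodeInfo.kernelVersion}{"\n"}'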
As an example, the Driver Toolkit can be used as the base image for building a very simple kernel module called simple-kmod. The Driver Toolkit contains the dependencies needed to build the simple-kmod kernel module.
You have a running OKD cluster.
You set the Image Registry Operator state to Managed for your cluster (see the sketch after this list for one way to verify this).
You installed the OpenShift CLI (oc).
You are logged into the OpenShift CLI as a user with cluster-admin privileges.
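One way to verify the Image Registry Operator prerequisite is to inspect its management state. These commands are a sketch and are not part of the procedure itself:
$ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
If the state is not Managed, you can set it, for example:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed"}}'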
Create a namespace. For example:
$ oc new-project simple-kmod-demo
The YAML defines an ImageStream for storing the simple-kmod driver container image, and a BuildConfig for building the container. Save this YAML as 0000-buildconfig.yaml.template.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
labels:
app: simple-kmod-driver-container
name: simple-kmod-driver-container
namespace: simple-kmod-demo
spec: {}
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
labels:
app: simple-kmod-driver-build
name: simple-kmod-driver-build
namespace: simple-kmod-demo
spec:
nodeSelector:
node-role.kubernetes.io/worker: ""
runPolicy: "Serial"
triggers:
- type: "ConfigChange"
- type: "ImageChange"
source:
git:
ref: "master"
uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
type: Git
dockerfile: |
FROM DRIVER_TOOLKIT_IMAGE
WORKDIR /build/
# Expecting kmod software version as an input to the build
ARG KMODVER
# Grab the software from upstream
RUN git clone https://github.com/openshift-psap/simple-kmod.git
WORKDIR simple-kmod
# Build and install the module
RUN make all KVER=$(rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}" kernel-core) KMODVER=${KMODVER} \
&& make install KVER=$(rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}" kernel-core) KMODVER=${KMODVER}
# Add the helper tools
WORKDIR /root/kvc-simple-kmod
ADD Makefile .
ADD simple-kmod-lib.sh .
ADD simple-kmod-wrapper.sh .
ADD simple-kmod.conf .
RUN mkdir -p /usr/lib/kvc/ \
&& mkdir -p /etc/kvc/ \
&& make install
RUN systemctl enable kmods-via-containers@simple-kmod
strategy:
dockerStrategy:
buildArgs:
- name: KMODVER
value: DEMO
output:
to:
kind: ImageStreamTag
name: simple-kmod-driver-container:demo
Substitute the correct Driver Toolkit image for the OKD version you are running in place of “DRIVER_TOOLKIT_IMAGE” with the following commands:
$ OCP_VERSION=$(oc get clusterversion/version -ojsonpath={.status.desired.version})
$ DRIVER_TOOLKIT_IMAGE=$(oc adm release info $OCP_VERSION --image-for=driver-toolkit)
$ sed "s#DRIVER_TOOLKIT_IMAGE#${DRIVER_TOOLKIT_IMAGE}#" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml
Create the image stream and build config with the following command:
$ oc create -f 0000-buildconfig.yaml
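You can watch the build progress; for example, assuming the BuildConfig name and namespace used above:
$ oc get builds -n simple-kmod-demo
$ oc logs -f bc/simple-kmod-driver-build -n simple-kmod-demo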
After the builder pod completes successfully, deploy the driver container image as a DaemonSet.
The driver container must run with the privileged security context in order to load the kernel modules on the host. The following YAML file contains the RBAC rules and the DaemonSet for running the driver container. Save this YAML as 1000-drivercontainer.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
name: simple-kmod-driver-container
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: simple-kmod-driver-container
rules:
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- use
resourceNames:
- privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: simple-kmod-driver-container
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: simple-kmod-driver-container
subjects:
- kind: ServiceAccount
name: simple-kmod-driver-container
userNames:
- system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: simple-kmod-driver-container
spec:
selector:
matchLabels:
app: simple-kmod-driver-container
template:
metadata:
labels:
app: simple-kmod-driver-container
spec:
serviceAccount: simple-kmod-driver-container
serviceAccountName: simple-kmod-driver-container
containers:
- image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo
name: simple-kmod-driver-container
imagePullPolicy: Always
command: ["/sbin/init"]
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@simple-kmod"]
securityContext:
privileged: true
nodeSelector:
node-role.kubernetes.io/worker: ""
Create the RBAC rules and daemon set:
$ oc create -f 1000-drivercontainer.yaml
After the pods are running on the worker nodes, verify that the simple_kmod kernel module is loaded successfully on the host machines with lsmod.
Verify that the pods are running:
$ oc get pod -n simple-kmod-demo
NAME READY STATUS RESTARTS AGE
simple-kmod-driver-build-1-build 0/1 Completed 0 6m
simple-kmod-driver-container-b22fd 1/1 Running 0 40s
simple-kmod-driver-container-jz9vn 1/1 Running 0 40s
simple-kmod-driver-container-p45cc 1/1 Running 0 40s
Execute the lsmod command in the driver container pod:
$ oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple
simple_procfs_kmod 16384 0
simple_kmod 16384 0
For more information about configuring registry storage for your cluster, see Image Registry Operator in OpenShift Container Platform.