Specialized hardware accelerators play a key role in the emerging generative artificial intelligence and machine learning (AI/ML) industry. Specifically, hardware accelerators are essential to the training and serving of large language models and other foundation models that power this new technology. Data scientists, data engineers, ML engineers, and developers can take advantage of specialized hardware acceleration for data-intensive transformations and for model development and serving. Much of that ecosystem is open source, with a number of contributing partners and open source foundations.
Red Hat OKD supports cards and peripheral hardware that add the processing units that make up hardware accelerators:
Graphics processing units (GPUs)
Neural processing units (NPUs)
Application-specific integrated circuits (ASICs)
Data processing units (DPUs)
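Once enabled in the cluster, these processing units are exposed to workloads as Kubernetes extended resources that pods request by name. As a minimal sketch, a pod requesting one accelerator might look like the following; the resource name `example.com/gpu` is a placeholder, not a specific vendor's resource:

```yaml
# Hypothetical pod spec; the extended resource name is a placeholder
# that the accelerator's device plugin would actually advertise.
apiVersion: v1
kind: Pod
metadata:
  name: accelerated-workload
spec:
  containers:
  - name: training-job
    image: registry.example.com/training-job:latest
    resources:
      limits:
        example.com/gpu: 1  # request one accelerator unit
```

Because extended resources cannot be overcommitted, the scheduler places the pod only on a node where the accelerator resource is available.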
Specialized hardware accelerators provide a rich set of benefits for AI/ML development:
A collaborative environment for developers, data engineers, data scientists, and DevOps
Operators that bring AI/ML capabilities to OKD
On-premises support for model development, delivery, and deployment
Model testing, iteration, integration, promotion, and serving into production as services
Red Hat provides an optimized platform to enable these specialized hardware accelerators in Fedora and OKD platforms at the Linux (kernel and userspace) and Kubernetes layers. To do this, Red Hat combines the proven capabilities of Red Hat OpenShift AI and Red Hat OKD in a single enterprise-ready AI application platform.
Hardware Operators use the Operator framework of a Kubernetes cluster to enable the required accelerator resources. You can also deploy the provided device plugin manually or as a daemon set. This plugin registers the GPUs with the cluster and advertises them as allocatable resources.
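As a minimal sketch of the daemon-set deployment path, a device plugin typically runs privileged on every eligible node and mounts the kubelet's device-plugin socket directory so it can register its resource. The image, names, and taint key below are placeholders, not a specific vendor's plugin:

```yaml
# Hypothetical device plugin DaemonSet; image, labels, and the
# example.com/gpu taint key are placeholders for illustration.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-gpu-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: example-gpu-device-plugin
  template:
    metadata:
      labels:
        name: example-gpu-device-plugin
    spec:
      tolerations:
      # Allow scheduling onto nodes tainted for accelerator workloads.
      - key: example.com/gpu
        operator: Exists
        effect: NoSchedule
      containers:
      - name: device-plugin
        image: registry.example.com/gpu-device-plugin:latest
        securityContext:
          privileged: true  # typically required to access host devices
        volumeMounts:
        # The kubelet's device-plugin socket directory, where the
        # plugin registers itself and advertises the GPU resource.
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
```

After the plugin registers, the accelerator appears in each node's allocatable resources, and pods can request it through their resource limits.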
Certain specialized hardware accelerators are designed to work in disconnected environments, where network access is restricted and a secure environment must be maintained for development and testing.