
Welcome to the official OKD 4 documentation, where you can learn about OKD and start exploring its features.

To navigate the OKD 4 documentation, you can use one of the following methods:

  • Use the left navigation bar to browse the documentation.

  • Select the task that interests you from the contents of this Welcome page.

Cluster installer activities

Explore the following OKD installation tasks:

  • OKD installation overview: Depending on the platform, you can install OKD on installer-provisioned or user-provisioned infrastructure. The OKD installation program provides the flexibility to deploy OKD on a range of different platforms.

  • Install a cluster on Alibaba: On Alibaba Cloud, you can install OKD on installer-provisioned infrastructure. This is currently a Technology Preview feature only.

  • Install a cluster on AWS: On AWS, you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure.

  • Install a cluster on Azure: On Microsoft Azure, you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure.

  • Install a cluster on Azure Stack Hub: On Microsoft Azure Stack Hub, you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure.

  • Installing OKD with the Assisted Installer: The Assisted Installer is an installation solution that is provided on the Red Hat Hybrid Cloud Console. The Assisted Installer supports installing an OKD cluster on many platforms, but with a focus on bare metal, Nutanix, and VMware vSphere infrastructures.

  • Installing OKD with the Agent-based Installer: You can use the Agent-based Installer to generate a bootable ISO image that contains the Assisted discovery agent, the Assisted Service, and all the other information required to deploy an OKD cluster. The Agent-based Installer provides the advantages of the Assisted Installer in a disconnected environment.

  • Install a cluster on bare metal: On bare metal, you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure. If none of the available platform and cloud provider deployment options meet your needs, consider using bare metal user-provisioned infrastructure.

  • Install a cluster on GCP: On Google Cloud Platform (GCP), you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure.

  • Install a cluster on Oracle® Cloud Infrastructure (OCI): You can use the Assisted Installer or the Agent-based Installer to install a cluster on OCI. This means that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. See Using the Assisted Installer to install a cluster on OCI and Using the Agent-based Installer to install a cluster on OCI.

  • Install a cluster on Nutanix: On Nutanix, you can install OKD on installer-provisioned infrastructure.

  • Install a cluster on OpenStack: On OpenStack, you can install OKD on installer-provisioned infrastructure or user-provisioned infrastructure.

  • Install a cluster on VMware vSphere: You can install OKD on supported versions of vSphere.
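
For installer-provisioned installations such as those above, the installation program is driven by an install-config.yaml file. The following is a minimal sketch for an AWS cluster, not a definitive configuration; the base domain, cluster name, region, and credential values are placeholders:

```yaml
# Hypothetical minimal install-config.yaml for an installer-provisioned
# AWS cluster. All identifying values below are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-okd-cluster
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  aws:
    region: us-east-1
pullSecret: '<your pull secret>'
sshKey: '<your SSH public key>'
```

With a file like this in an assets directory, you would typically run `openshift-install create cluster --dir <assets-dir>` to start the installation. Consult the platform-specific installation documentation for the full set of supported parameters.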

Other cluster installer activities

  • Install a cluster in a restricted network: If your cluster uses user-provisioned infrastructure on AWS, GCP, or bare metal and does not have full access to the internet, mirror the OKD installation images and install the cluster in a restricted network.

  • Install a cluster in an existing network: If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Microsoft Azure, you can install a cluster. Also consider Installing a cluster on GCP into a shared VPC.

  • Install a private cluster: If your cluster does not require external internet access, you can install a private cluster on AWS, Azure, GCP, or IBM Cloud®. Internet access is still required to access the cloud APIs and installation media.

  • Check installation logs: Access installation logs to evaluate issues that occur during OKD installation.

  • Access OKD: Use the credentials that are output at the end of the installation process to log in to the OKD cluster from the command line or web console.

  • Install Red Hat OpenShift Data Foundation: You can install Red Hat OpenShift Data Foundation as an Operator to provide highly integrated and simplified persistent storage management for containers.

  • Fedora CoreOS (FCOS) image layering: As a post-installation task, you can add new images on top of the base FCOS image. This layering does not modify the base FCOS image. Instead, the layering creates a custom layered image that includes all FCOS functions and adds additional functions to specific nodes in the cluster.
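
The image layering described above is typically done by building a Containerfile on top of the base FCOS image, pushing the result to a registry, and then pointing a node pool at it with a MachineConfig. A hedged sketch of the MachineConfig step follows; the registry path and resource name are placeholders, and the exact workflow is described in the image layering documentation:

```yaml
# Hypothetical MachineConfig that points worker nodes at a custom
# layered OS image built from the base FCOS image.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: os-layer-custom
spec:
  osImageURL: quay.io/my-org/custom-fcos:latest  # placeholder image reference
```

Because the custom content lives in its own layer, rolling back is a matter of removing or changing this MachineConfig rather than rebuilding nodes.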

Developer activities

OKD is a platform for developing and deploying containerized applications. Read the following OKD documentation to better understand OKD functions:

  • Understand OKD development: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.

  • Work with projects: Create projects from the OKD web console or OpenShift CLI (oc) to organize and share the software you develop.

  • Creating applications using the Developer perspective: Use the Developer perspective in the OKD web console to easily create and deploy applications.

  • Viewing application composition using the Topology view: Use the Topology view to visually interact with your applications, monitor status, connect and group components, and modify your code base.

  • Understanding Service Binding Operator: With the Service Binding Operator, an application developer can bind workloads with Operator-managed backing services by automatically collecting and sharing binding data with the workloads. The Service Binding Operator improves the development lifecycle with a consistent and declarative service binding method that prevents discrepancies in cluster environments.

  • Create CI/CD Pipelines: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture.

  • Manage your infrastructure and application configurations: GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles.

  • Deploy Helm charts: Helm is a software package manager that simplifies deployment of applications and services to OKD clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes the OKD resources.

  • Understand image builds: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials, such as Git repositories, local binary inputs, and external artifacts. You can follow examples of build types from basic builds to advanced builds.

  • Create container images: A container image is the most basic building block in OKD and Kubernetes applications. By defining image streams, you can gather multiple versions of an image in one place as you continue to develop the image stream. With S2I containers, you can insert your source code into a base container. The base container is configured to run code of a particular type, such as Ruby, Node.js, or Python.

  • Create deployments: Use Deployment objects to exert fine-grained management over applications. Deployments create replica sets according to the rollout strategy, which orchestrates pod lifecycles.

  • Create templates: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports, and other content that defines how an application can be run or built.

  • Understand Operators: Operators are the preferred method for creating on-cluster applications for OKD 4. Learn about the Operator Framework and how to deploy applications into your projects by using installed Operators.

  • Develop Operators: Operators are the preferred method for creating on-cluster applications for OKD 4. Learn the workflow for building, testing, and deploying Operators. You can then create your own Operators based on Ansible or Helm, or configure built-in Prometheus monitoring by using the Operator SDK.

  • Reference the REST API index: Learn about OKD application programming interface endpoints.

  • Software Supply Chain Security enhancements: The PipelineRun details page in the Developer or Administrator perspective of the web console provides a visual representation of identified vulnerabilities, which are categorized by severity. Additionally, these enhancements provide an option to download or view Software Bills of Materials (SBOMs) for enhanced transparency and control within your supply chain. Learn about setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements.
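
As a concrete illustration of the Deployment objects and rollout strategies mentioned in the list above, the following is a minimal sketch; the application name, labels, and image reference are placeholders:

```yaml
# Hypothetical minimal Deployment. The rollout strategy controls how
# replica sets are created and how pod lifecycles are orchestrated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/my-org/hello-app:latest  # placeholder image
        ports:
        - containerPort: 8080
```

You could apply a manifest like this with `oc apply -f deployment.yaml` and observe the rollout with `oc rollout status deployment/hello-app`.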

Cluster administrator activities

Manage machines, provide services to users, and follow monitoring and logging reports. Read the following OKD documentation to better understand OKD functions:

Manage cluster components

Change cluster components

Observe a cluster

  • OpenShift Logging: Learn about logging and configure different logging components, such as log storage, log collectors, and the logging web console plugin.

  • Red Hat OpenShift distributed tracing platform: Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications.

  • Red Hat build of OpenTelemetry: Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software’s performance and behavior. Use open source backends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate.
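
The telemetry pipeline described above is usually configured through an OpenTelemetryCollector resource managed by the Operator. The following is a rough sketch only; field names follow upstream OpenTelemetry Operator conventions and may differ between versions, and the Tempo endpoint is a placeholder:

```yaml
# Hypothetical OpenTelemetryCollector sketch: receive OTLP traces
# and export them to a Tempo backend (endpoint is a placeholder).
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      otlphttp:
        endpoint: https://tempo.example.com:4318
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlphttp]
```

Refer to the Red Hat build of OpenTelemetry documentation for the supported receivers, processors, and exporters in your cluster version.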