Release notes contain information about new and deprecated features, changes, and known issues.
With this release, hosted control planes for OKD 4.18 is available. Hosted control planes for OKD 4.18 supports multicluster engine for Kubernetes Operator version 2.8.
This release adds improvements related to the following concepts:
The OKD documentation now highlights the differences between hosted control planes and standalone OKD. For more information, see Differences between hosted control planes and OKD.
Configuring proxy support for hosted control planes has a few differences from configuring proxy support for standalone OKD. For more information, see Networking for hosted control planes.
Previously, you could not create node pools that use ARM64 architecture on a hosted control planes cluster with the platform parameter set to either agentBareMetal or none. This issue did not exist for a hosted control planes cluster that runs on an AWS or a Microsoft Azure platform. With this release, a fix ensures that you can now create node pools that use ARM64 architecture in a hosted control planes cluster that has platform set to either agentBareMetal or none. (OCPBUGS-48579)
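To confirm that an existing node pool on one of these platforms picked up the intended settings, you can read the values back from its spec. This is a minimal sketch: the node pool name and namespace are placeholders, and it assumes the spec.platform.type and spec.arch fields of the NodePool API.
# Placeholder names; prints the platform type and CPU architecture of the node pool.
$ oc get nodepool <nodepool_name> -n <namespace> -o jsonpath='{.spec.platform.type}{" "}{.spec.arch}{"\n"}'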
Previously, when you attempted to use the hosted control planes CLI to create a cluster in a disconnected environment, the installation command failed. An issue existed with the registry that hosts the command. With this release, a fix to the command registry means that you can use the hosted control planes CLI to create a cluster in a disconnected environment. (OCPBUGS-48170)
Previously, incorrect addresses were being passed to the Kubernetes EndpointSlice on a cluster, and this issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red Hat Marketplace pods can now successfully connect to the cluster API server, so that the installation of MetalLB Operator and handling of ingress traffic in IPv6 disconnected environments can occur. (OCPBUGS-46665)
Previously, in hosted control planes on the Agent platform, ARM64 architecture was not allowed in the NodePool API. As a consequence, heterogeneous clusters could not be deployed on the Agent platform. In this release, the API now allows ARM64 architecture node pools on the Agent platform. (OCPBUGS-4673)
Previously, the default node-monitor-grace-period value was 50 seconds. As a consequence, nodes did not stay ready for the duration of time that Kubernetes components needed to reconnect, coordinate, and complete their requests. With this release, the default node-monitor-grace-period value is 55 seconds. As a result, the issue is resolved and deployments have enough time to complete. (OCPBUGS-46008)
Previously, the provider ID for the IBMPowerVSMachine object was not populated properly, due to improper retrieval of the IBM Cloud workspace ID. As a consequence, certificate signing requests (CSRs) were pending in hosted clusters. With this release, the provider ID is properly populated for the IBMPowerVSMachine object. As a result, no CSRs are pending, and all cluster Operators move to the available state. (OCPBUGS-44944)
Previously, when you created a hosted cluster by using a shared VPC where the private DNS hosted zones existed in the cluster creator account, the private link controller failed to create the Route 53 DNS records in the local zone. With this release, the private link controller uses the ingress shared VPC role to create those DNS records and uses the VPC endpoint shared role to create the VPC endpoint in the VPC owner account. As a result, a hosted cluster can be created in a shared VPC configuration where the private hosted zones exist in the cluster creator account. (OCPBUGS-44630)
Previously, when the hosted cluster controllerAvailabilityPolicy parameter was set to SingleReplica, the podAntiAffinity rules on networking components blocked the availability of the components. With this release, the issue is resolved. (OCPBUGS-39313)
Previously, periodic conformance jobs in hosted control planes failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. (OCPBUGS-38943)
Previously, you could not create NodePool resources with ARM64 architecture on platforms other than AWS or Azure. This bug resulted in validation errors that prevented the addition of bare-metal compute nodes and caused CEL validation blocks when creating a NodePool resource. The fix modifies the NodePool spec validation rules by adding "self.platform.type == 'None'". As a result, you can now create NodePool resources with ARM64 architecture specifications on bare-metal platforms other than AWS or Azure. (OCPBUGS-46440)
Previously, when you created a hosted cluster in a shared VPC, the private link controller sometimes failed to assume the shared VPC role to manage the VPC endpoints in the shared VPC. With this release, a client is created for every reconciliation in the private link controller so that you can recover from invalid clients. As a result, the hosted cluster endpoints and the hosted cluster are created successfully. (OCPBUGS-45184)
If the annotation and the ManagedCluster resource name do not match, the multicluster engine for Kubernetes Operator console displays the cluster as Pending import. The cluster cannot be used by the multicluster engine Operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource.
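To check for this mismatch, you can compare the ManagedCluster name with the infrastructure ID of the HostedCluster resource. This is a minimal sketch with placeholder resource names and namespace; it assumes the spec.infraID field of the HostedCluster API.
# Placeholder names; the two values must match for the import to complete.
$ oc get managedcluster <managed_cluster_name> -o jsonpath='{.metadata.name}{"\n"}'
$ oc get hostedcluster <hosted_cluster_name> -n <namespace> -o jsonpath='{.spec.infraID}{"\n"}'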
When you use the multicluster engine for Kubernetes Operator console to add a new node pool to an existing hosted cluster, the same version of OKD might appear more than once in the list of options. You can select any instance in the list for the version that you want.
When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways:
In the console, go to the node pool and verify that it has 0 nodes.
On the command-line interface, run the following commands:
Verify that 0 nodes are in the node pool by running the following command:
$ oc get nodepool -A
Verify that 0 nodes are in the hosted cluster by running the following command against the kubeconfig file of the hosted cluster (a sketch for retrieving the kubeconfig follows this list):
$ oc get nodes --kubeconfig
Verify that 0 agents are reported as bound to the cluster by running the following command:
$ oc get agents -A
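The oc get nodes command runs against the hosted cluster, so it needs the kubeconfig file of that cluster. The following sketch retrieves it by extracting the admin kubeconfig secret on the management cluster; the clusters namespace and the <hosted_cluster_name>-admin-kubeconfig secret name are assumptions based on a typical hosted cluster setup, so substitute your own values.
# Assumed namespace and secret name pattern; writes the hosted cluster kubeconfig to a local file.
$ oc extract -n clusters secret/<hosted_cluster_name>-admin-kubeconfig --to=- > <hosted_cluster_name>.kubeconfig
$ oc get nodes --kubeconfig <hosted_cluster_name>.kubeconfig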
When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues:
CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve.
Pods stuck in the ContainerCreating state: This issue occurs because the openshift-service-ca-operator resource cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server.
To resolve these issues, configure the DNS server settings for a dual-stack network.
On the Agent platform, the hosted control planes feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the IgnitionEndpointTokenReference property, then add or modify any label on the Agent resource. The system re-creates the secret with the new token.
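The following is a minimal sketch of that workaround. The agent name, namespace, and the token-refresh label are placeholders, and it assumes that the secret name is referenced from the spec.ignitionEndpointTokenReference field of the Agent resource.
# Placeholder names; look up the referenced token secret, delete it, then touch a label so that the secret is re-created.
$ oc get agent <agent_name> -n <namespace> -o jsonpath='{.spec.ignitionEndpointTokenReference.name}{"\n"}'
$ oc delete secret <token_secret_name> -n <namespace>
$ oc label agent <agent_name> -n <namespace> token-refresh=now --overwrite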
If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster:
You created a hosted cluster on the Agent platform through the multicluster engine for Kubernetes Operator console by using the default hosted cluster namespace.
You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name.
When you use the console or API to specify an IPv6 address for the spec.services.servicePublishingStrategy.nodePort.address field of a hosted cluster, a full IPv6 address with 8 hextets is required. For example, instead of specifying 2620:52:0:1306::30, you need to specify 2620:52:0:1306:0:0:0:30.
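To confirm that the addresses on an existing hosted cluster already use the expanded form, you can read them back from the spec. The cluster name and namespace in this sketch are placeholders; the field path is the one named above.
# Placeholder names; prints the address of every NodePort publishing strategy defined under spec.services.
$ oc get hostedcluster <hosted_cluster_name> -n <namespace> -o jsonpath='{.spec.services[?(@.servicePublishingStrategy.type=="NodePort")].servicePublishingStrategy.nodePort.address}{"\n"}'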
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For more information about the scope of support for these features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.
For IBM Power and IBM Z, you must run the control plane on machine types based on 64-bit x86 architecture, and node pools on IBM Power or IBM Z.

| Feature | 4.16 | 4.17 | 4.18 |
|---|---|---|---|
| Hosted control planes for OKD on Amazon Web Services (AWS) | General Availability | General Availability | General Availability |
| Hosted control planes for OKD on bare metal | General Availability | General Availability | General Availability |
| Hosted control planes for OKD on OKD Virtualization | General Availability | General Availability | General Availability |
| Hosted control planes for OKD using non-bare-metal agent machines | Technology Preview | Technology Preview | Technology Preview |
| Hosted control planes for an ARM64 OKD cluster on Amazon Web Services | Technology Preview | General Availability | General Availability |
| Hosted control planes for OKD on IBM Power | Technology Preview | General Availability | General Availability |
| Hosted control planes for OKD on IBM Z | Technology Preview | General Availability | General Availability |
| Hosted control planes for OKD on OpenStack | Not Available | Developer Preview | Developer Preview |