After you configure your environment for hosted control planes and create a hosted cluster, you can further manage your clusters and nodes.
If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.
Ensure that you are prepared to scale down the data plane to zero, because the workloads on the worker nodes disappear after scaling down.
Set the kubeconfig
file to access the hosted cluster by running the following command:
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
Get the name of the NodePool
resource associated with your hosted cluster by running the following command:
$ oc get nodepool --namespace <HOSTED_CLUSTER_NAMESPACE>
Optional: To prevent the pods from draining, add the nodeDrainTimeout
field in the NodePool
resource by running the following command:
$ oc edit NodePool <nodepool> -o yaml --namespace <HOSTED_CLUSTER_NAMESPACE>
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername (1)
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s (2)
# ...
1 Defines the name of your hosted cluster.
2 Specifies the total amount of time that the controller spends draining a node. By default, the nodeDrainTimeout: 0s setting blocks the node draining process. To allow the node draining process to continue for a certain period of time, you can set the value of the nodeDrainTimeout field accordingly.
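As an illustration, the following hypothetical NodePool fragment sets a 10-minute drain timeout. The 600s value is an example chosen for this sketch, not a required default; pick a timeout that suits how long your workloads need to drain gracefully:

```yaml
# Illustrative fragment: allow up to 10 minutes for node draining
# before the controller proceeds. The 600s value is an example only.
spec:
  nodeDrainTimeout: 600s
```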
Scale down the NodePool
resource associated with your hosted cluster by running the following command:
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=0
After scaling down the data plane to zero, some pods in the control plane stay in the Pending status.
Optional: Scale up the NodePool
resource associated with your hosted cluster by running the following command:
$ oc scale nodepool/<NODEPOOL_NAME> --namespace <HOSTED_CLUSTER_NAMESPACE> --replicas=1
After rescaling the NodePool
resource, wait a few minutes for the NodePool
resource to become available in the Ready
state.
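Rather than checking repeatedly by hand, you can sketch the wait with oc wait. This assumes the NodePool resource reports a Ready condition, which current HyperShift releases do; substitute your own values for the placeholders:

```shell
# Block until the NodePool reports Ready=True, or give up after 10 minutes.
# <NODEPOOL_NAME> and <HOSTED_CLUSTER_NAMESPACE> are placeholders.
oc wait nodepool/<NODEPOOL_NAME> \
  --namespace <HOSTED_CLUSTER_NAMESPACE> \
  --for=condition=Ready \
  --timeout=10m
```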
Verify that the value for the nodeDrainTimeout
field is greater than 0s
by running the following command:
$ oc get nodepool -n <HOSTED_CLUSTER_NAMESPACE> <NODEPOOL_NAME> -o jsonpath='{.spec.nodeDrainTimeout}'
The steps to delete a hosted cluster differ depending on which provider you use.
If the cluster is on AWS, follow the instructions in Destroying a hosted cluster on AWS.
If the cluster is on bare metal, follow the instructions in Destroying a hosted cluster on bare metal.
If the cluster is on OKD Virtualization, follow the instructions in Destroying a hosted cluster on OpenShift Virtualization.
If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature.