# Adding Hosts to an Existing Cluster
Depending on how your OKD cluster was installed, you can add new hosts (either nodes or masters) to your installation by using the install tool for quick installations, or by using the scaleup.yml playbook for advanced installations.
If you used the quick install tool to install your OKD cluster, you can use the quick install tool to add a new node host to your existing cluster.
Currently, you cannot use the quick installer tool to add new master hosts. You must use the advanced installation method to do so.
If you used the installer in either interactive or unattended mode, you can re-run the installation as long as you have an installation configuration file at ~/.config/openshift/installer.cfg.yml (or specify a different location with the -c option).
See the cluster limits section for the recommended maximum number of nodes.
To add nodes to your installation:
Ensure you have the latest installer and playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Run the installer with the scaleup subcommand in interactive or unattended mode:
# atomic-openshift-installer [-u] [-c </path/to/file>] scaleup
The installer detects your current environment and allows you to add additional nodes:
*** Installation Summary ***

Hosts:
- 100.100.1.1
  - OpenShift master
  - OpenShift node
  - Etcd (Embedded)
  - Storage

Total OpenShift masters: 1
Total OpenShift nodes: 1

---

We have detected this previously installed OpenShift environment.

This tool will guide you through the process of adding additional
nodes to your cluster.

Are you ready to continue? [y/N]:
Choose (y) and follow the on-screen instructions to complete your desired task.
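When the installer finishes, you can confirm from a master host that the new node joined the cluster:

# oc get nodes

The new host should be listed with a Ready status.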
You can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.
The scaleup.yml playbook configures only the new host. It does not update NO_PROXY in master services, and it does not restart master services.
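If your cluster runs behind a proxy, you may therefore need to add the new host to NO_PROXY yourself. A minimal sketch, assuming the proxy environment variables live in the master sysconfig files (the usual location on RPM-based installations):

# grep NO_PROXY /etc/sysconfig/atomic-openshift-master-api /etc/sysconfig/atomic-openshift-master-controllers

Append the new host name to the NO_PROXY value in those files, then restart the master services:

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers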
You must have an existing inventory file, for example /etc/ansible/hosts, that is representative of your current cluster configuration in order to run the scaleup.yml playbook.
See the cluster limits section for the recommended maximum number of nodes.
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section:
For example, to add a new node host, add new_nodes:
[OSEv3:children]
masters
nodes
new_nodes
To add new master hosts, add new_masters.
Create a [new_<host_type>] section to specify host information for the new hosts. Format this section like an existing section, as shown in the following example of adding a new node:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
See Configuring Host Variables for more options.
When adding new masters, add hosts to both the [new_masters] section and the [new_nodes] section to ensure that the new master host is part of the OpenShift SDN.
[masters]
master[1:2].example.com

[new_masters]
master3.example.com

[nodes]
master[1:2].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
master3.example.com
If you label a master host with the region=infra label and have no other dedicated infrastructure nodes, you must also explicitly mark the host as schedulable by adding openshift_schedulable=true to the entry. Otherwise, the registry and router pods cannot be placed anywhere.
Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
For additional nodes:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
For additional masters:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-master/scaleup.yml
Set the node label to logging-infra-fluentd=true:

# oc label node/new-node.example.com logging-infra-fluentd=true
After the playbook runs, verify the installation.
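For example, you can confirm that the new node registered and carries the expected labels (node3.example.com matches the sample inventory above):

# oc get node node3.example.com --show-labels

The node should report a Ready status.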
Move any hosts that you defined in the [new_<host_type>] section to their appropriate section. By moving these hosts, subsequent playbook runs that use this inventory file treat the nodes correctly. You can keep the empty [new_<host_type>] section. For example, when adding new nodes:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
You can add new etcd hosts to your cluster by running the etcd scaleup playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on the new hosts only. Before running the etcd scaleup.yml playbook, complete all prerequisite host preparation steps.
To add an etcd host to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
$ yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file, add new_<host_type> to the [OSEv3:children] group and add hosts under the new_<host_type> group:
For example, to add a new etcd, add new_etcd:
[OSEv3:children]
masters
nodes
etcd
new_etcd

[etcd]
etcd1.example.com
etcd2.example.com

[new_etcd]
etcd3.example.com
Run the etcd scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
$ ansible-playbook [-i /path/to/file] \
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/scaleup.yml
If you use the service catalog, you must update its list of etcd servers:
$ oc edit ds apiserver -n kube-service-catalog
Add the FQDN for the new etcd node to the --etcd-servers argument. This argument contains a comma-separated list.
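For example, after adding etcd3.example.com from the sample inventory, the argument might read as follows (2379 is the default etcd client port; verify the scheme and port against the existing entries in your DaemonSet):

--etcd-servers=https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379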
After the playbook completes successfully, verify the installation.
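One way to verify is to check cluster health from any etcd host, reusing the variables defined in /etc/etcd/etcd.conf (the same pattern as the member list commands later in this topic):

# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
    --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS cluster-health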
Follow these steps when you are migrating your machines to a different data center and the network and IP addresses assigned to them will change.
Back up the primary etcd and master nodes.
Ensure that you back up the /etc/etcd/ directory, as noted in the etcd backup instructions.
Provision as many new machines as there are masters to replace.
Add or expand the cluster. For example, if you want to add 3 masters with etcd colocated, scale up 3 master nodes or 3 etcd nodes.
Add a master. In step 3 of that process, add the host of the new data center in [new_masters] and [new_nodes], and run the master scaleup.yml playbook.
Put the same host in the etcd section and run the etcd scaleup.yml playbook.
Verify that the host was added:
# oc get nodes
Verify that the master host IP was added:
# oc get ep kubernetes
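The output resembles the following (addresses here are illustrative); confirm that the IP of the new master appears in the ENDPOINTS column:

NAME         ENDPOINTS                                            AGE
kubernetes   100.100.1.1:8443,100.100.1.2:8443,100.100.1.3:8443   7d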
Verify that etcd was added. The value of ETCDCTL_API depends on the version being used:
# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
    --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
Copy /etc/origin/master/ca.serial.txt from the /etc/origin/master directory to the new master host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.
Remove the etcd hosts.
Copy the /etc/etcd/ca directory to the new etcd host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.
Remove the old etcd hosts from the etcdClientInfo section of the master-config.yaml file. To display the section that you must edit:
# grep etcdClientInfo -A 11 /etc/origin/master/master-config.yaml
Restart the masters:
# systemctl restart atomic-openshift-master-*
Remove the old etcd members from the cluster. The value of ETCDCTL_API depends on the version being used:
# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
    --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
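The v2 output resembles the following (IDs and URLs are illustrative); the leading hexadecimal value on each line is the member ID:

1609b5a3a078c227: name=master1.example.com peerURLs=https://master1.example.com:2380 clientURLs=https://master1.example.com:2379 isLeader=true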
Take the IDs from the output of the command above and remove the old members using the IDs:
# etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
    --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member remove 1609b5a3a078c227
Stop and disable the etcd services on the old etcd hosts:
# systemctl stop etcd
# systemctl disable etcd
Shut down old master API and controller services:
# systemctl stop atomic-openshift-master-api
# systemctl stop atomic-openshift-master-controllers
Remove the master nodes from the HAProxy configuration, which was installed as a load balancer by default during the native installation process.
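As a sketch, assuming the load balancer host uses the HAProxy configuration that openshift-ansible generates by default, delete the server lines for the old masters from the API backend in /etc/haproxy/haproxy.cfg and reload the service (the backend name below is the installer default; verify it against your own file):

backend atomic-openshift-api
    server master0 old-master1.example.com:8443 check    # remove entries for old masters

# systemctl reload haproxy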
Decommission the machine.
Stop the atomic-openshift-node service on the master to be removed:
# systemctl stop atomic-openshift-node
Delete the node resource:
# oc delete node <node_name>
You can migrate nodes individually or in groups (of 2, 5, 10, and so on), depending on what you are comfortable with and how the services on the node are run and scaled.
For each node or group of nodes that you migrate, provision new VMs for the node’s use in the new data center.
To add the new node, scale up the infrastructure. Ensure the labels for the new node are set properly and that your new API servers are added to your load balancer and successfully serving traffic.
Evaluate and scale down.
Mark the current node (in the old data center) unschedulable.
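For example, using the 3.x administrator CLI on a node named node1.example.com (a hypothetical name):

# oc adm manage-node node1.example.com --schedulable=false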
Evacuate the node so that its pods are rescheduled to other nodes.
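For example, assuming the same node name; depending on what runs on the node, you might also need flags such as --force or --delete-local-data:

# oc adm drain node1.example.com --ignore-daemonsets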
Verify that the evacuated services are running on the new nodes.
Remove the node.
Verify that the node is empty and does not have running processes.
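For example, you can list any pods still on the node:

# oc adm manage-node node1.example.com --list-pods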
Stop the service or delete the node.