When deployed on OpenStack, OKD can be configured to access the OpenStack infrastructure, including using OpenStack Cinder volumes as persistent storage for application data.


Configuring OpenStack for OKD requires the following role:

member

For creating assets such as instances, networking ports, floating IPs, volumes, and so on, you need the member role for the tenant.
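As a sketch, this role could be granted with the openstack CLI. The user and project names here are illustrative, and the role may be named member or _member_ depending on your deployment:

$ openstack role add --user okd-deployer --project okd-tenant member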

Configuring a Security Group

When installing OKD on OpenStack, ensure that you set up the appropriate security groups.

The following ports must be open in your security groups; without them, the installation fails. You might need to open more ports, depending on the cluster configuration that you want to install. To adjust your security groups accordingly, see Required Ports. A CLI sketch for creating one of these groups follows the lists below.

All OKD Hosts

  • tcp/22 from host running the installer/Ansible

etcd Security Group

  • tcp/2379 from masters

  • tcp/2380 from etcd hosts

Master Security Group

  • tcp/8443 from 0.0.0.0/0

  • tcp/53 from all OKD hosts for environments installed prior to or upgraded to 1.2

  • udp/53 from all OKD hosts for environments installed prior to or upgraded to 1.2

  • tcp/8053 from all OKD hosts for new environments installed with 1.2

  • udp/8053 from all OKD hosts for new environments installed with 1.2

Node Security Group

  • tcp/10250 from masters

  • udp/4789 from nodes

Infrastructure Nodes (ones that can host the OKD router)

  • tcp/443 from 0.0.0.0/0

  • tcp/80 from 0.0.0.0/0
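As a sketch, rules like these can be created with the openstack CLI. The group name, the cluster CIDR, and the installer host address below are illustrative; adjust them to your deployment:

$ openstack security group create okd-master-sg
# SSH from the host running the installer/Ansible
$ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 198.51.100.10/32 okd-master-sg
# Master API
$ openstack security group rule create --protocol tcp --dst-port 8443 --remote-ip 0.0.0.0/0 okd-master-sg
# DNS from all OKD hosts (new environments installed with 1.2)
$ openstack security group rule create --protocol tcp --dst-port 8053 --remote-ip 192.0.2.0/24 okd-master-sg
$ openstack security group rule create --protocol udp --dst-port 8053 --remote-ip 192.0.2.0/24 okd-master-sg

The group can then be attached to the master instances with openstack server add security group <server> okd-master-sg.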


If using CRI-O, you must also open tcp/10010 to allow oc exec and oc rsh operations.

If configuring external load-balancers (ELBs) for load balancing the masters and/or routers, you also need to configure Ingress and Egress security groups for the ELBs appropriately.

Configuring OpenStack Variables

To set the required OpenStack variables, create a /etc/cloud.conf file with the following contents on all of your OKD hosts, both masters and nodes:

[Global]
auth-url = <OS_AUTH_URL>
username = <OS_USERNAME>
password = <password>
domain-id = <OS_USER_DOMAIN_ID>
tenant-id = <OS_TENANT_ID>
region = <OS_REGION_NAME>

[LoadBalancer]
subnet-id = <UUID of the load balancer subnet>

Consult your OpenStack administrators for values of the OS_ variables, which are commonly used in OpenStack configuration.
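For illustration, a populated cloud.conf might look like the following. Every value here is a placeholder:

[Global]
auth-url = https://openstack.example.com:5000/v3
username = okd-deployer
password = <password>
domain-id = default
tenant-id = 3a1b2c3d4e5f60718293a4b5c6d7e8f9
region = RegionOne

[LoadBalancer]
subnet-id = 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d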

Configuring OKD Masters for OpenStack

You can set the OpenStack configuration on your OKD master and node hosts in two different ways: by using Ansible during cluster installation, or by manually editing the configuration files, as described in the following sections.

Configuring OKD for OpenStack with Ansible

During cluster installations, OpenStack can be configured using the following parameters, which are configurable in the inventory file:

  • openshift_cloudprovider_kind

  • openshift_cloudprovider_openstack_auth_url

  • openshift_cloudprovider_openstack_username

  • openshift_cloudprovider_openstack_password

  • openshift_cloudprovider_openstack_domain_id

  • openshift_cloudprovider_openstack_domain_name

  • openshift_cloudprovider_openstack_tenant_id

  • openshift_cloudprovider_openstack_tenant_name

  • openshift_cloudprovider_openstack_region

  • openshift_cloudprovider_openstack_lb_subnet_id

Example OpenStack Configuration with Ansible
# Cloud Provider Configuration
#
# Note: You may make use of environment variables rather than store
# sensitive configuration within the ansible inventory.
# For example:
#openshift_cloudprovider_openstack_username="{{ lookup('env','USERNAME') }}"
#openshift_cloudprovider_openstack_password="{{ lookup('env','PASSWORD') }}"
#
# Openstack
openshift_cloudprovider_kind=openstack
openshift_cloudprovider_openstack_auth_url=http://openstack.example.com:35357/v2.0/
openshift_cloudprovider_openstack_username=username
openshift_cloudprovider_openstack_password=password
openshift_cloudprovider_openstack_domain_id=domain_id
openshift_cloudprovider_openstack_domain_name=domain_name
openshift_cloudprovider_openstack_tenant_id=tenant_id
openshift_cloudprovider_openstack_tenant_name=tenant_name
openshift_cloudprovider_openstack_region=region
openshift_cloudprovider_openstack_lb_subnet_id=subnet_id

Manually Configuring OKD Masters for OpenStack

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:

      - "openstack"
      - "/etc/cloud.conf"
      - "openstack"
      - "/etc/cloud.conf"

When triggering a containerized installation, only the directories of /etc/origin and /var/lib/origin are mounted to the master and node container. Therefore, cloud.conf should be in /etc/origin/ instead of /etc/.
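In that case, a minimal sketch of relocating the file, using the default paths from this page:

# mv /etc/cloud.conf /etc/origin/cloud.conf

Update the cloud-config entries above to point to /etc/origin/cloud.conf accordingly.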

Manually Configuring OKD Nodes for OpenStack

Edit the appropriate node configuration map and update the contents of the kubeletArguments sections:

    - "openstack"
    - "/etc/cloud.conf"

The cloud provider integration does not work if the host names of your node hosts do not match their OpenStack instance names. The instance names must be RFC 1123 compliant.
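A quick way to compare the two names (a sketch; run the second command from any host with OpenStack credentials loaded):

# On the node: the host name OKD uses
$ hostname
# Against OpenStack: the instance names
$ openstack server list -c Name -f value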

When triggering a containerized installation, only the directories of /etc/origin and /var/lib/origin are mounted to the master and node container. Therefore, cloud.conf should be in /etc/origin/ instead of /etc/.

Installing OKD by Using an Ansible Playbook

The OpenStack installation playbook is a Technology Preview feature.

To install OKD on an existing OpenStack installation, use the OpenStack playbook. For more information about the playbook, including detailed prerequisites, see the OpenStack Provisioning README file.

To run the playbook, run the following command:

$ ansible-playbook --user openshift \
  -i openshift-ansible/playbooks/openstack/inventory.py \
  -i inventory \
  openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml
Applying Configuration Changes

Start or restart OKD services on all master and node hosts to apply your configuration changes. For more information, see Restarting OKD services:

# master-restart api
# master-restart controllers
# systemctl restart origin-node

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OKD service.

    # systemctl restart origin-node
  5. Add back any labels on each node that you previously had.
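The labels backed up in step 2 can be reapplied in step 5 with oc label; the key and value here are illustrative:

$ oc label node <node_name> region=infra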

Configuring Zone Labels for Dynamically Created OpenStack PVs

Administrators can configure zone labels for dynamically created OpenStack PVs. This option is useful if the OpenStack Cinder zone name does not match the compute zone names, for example, if there is only one Cinder zone and many compute zones. Administrators can create Cinder volumes dynamically and then check the labels.
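For example, a claim like the following triggers dynamic provisioning. This sketch assumes the standard StorageClass and matches the pvc1 claim shown in the output below:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF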

To view the zone labels for the PVs:

# oc get pv --show-labels
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE       LABELS
pvc-1faa6f93-64ac-11e8-930c-fa163e3c373c   1Gi        RWO            Delete           Bound     openshift-node/pvc1   standard                 12s       failure-domain.beta.kubernetes.io/zone=nova

Zone labels are enabled by default. The oc get pv --show-labels command then returns the failure-domain.beta.kubernetes.io/zone=nova label.

To disable the zone label, update the cloud.conf file by adding a [BlockStorage] section:

[BlockStorage]
ignore-volume-az = yes

The PVs created after restarting the master services will not have the zone label.