Ansible inventory files describe the details about the hosts in your cluster and the cluster configuration details for your OKD installation. The OKD installation playbooks read your inventory file to know where and how to install OKD across your set of hosts.
See Ansible documentation for details about the format of an inventory file, including basic details about YAML syntax. |
When you install the openshift-ansible RPM package as described in Host preparation, Ansible dependencies create a file at the default location of /etc/ansible/hosts. However, the file is simply the default Ansible example and has no variables related specifically to OKD configuration. To successfully install OKD, you must replace the default contents of the file with your own configuration based on your cluster topography and requirements.
The following sections describe commonly-used variables to set in your inventory file during cluster installation. Many of the Ansible variables described are optional. For development environments, you can accept the default values for the required parameters, but you must select appropriate values for them in production environments.
You can review Example Inventory Files for various examples to use as a starting point for your cluster installation.
Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information. |
To assign global cluster environment variables during the Ansible installation, add them to the [OSEv3:vars] section of the /etc/ansible/hosts file. You must place each parameter value on a separate line. For example:
[OSEv3:vars] openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',}] openshift_master_default_subdomain=apps.test.example.com
If a parameter value in the Ansible inventory file contains special characters,
such as |
The following tables describe global cluster variables for use with the Ansible installer:
Variable | Purpose | ||
---|---|---|---|
|
This variable sets the SSH user for the installer to use and defaults to
|
||
|
If |
||
|
This variable sets which INFO messages are logged to the
For more information on debug log levels, see Configuring Logging Levels. |
||
|
Whether to enable Network Time Protocol (NTP) on cluster nodes. The default
value is If the
|
||
|
This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example: openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}} In this value,
|
||
|
This variable enables API service auditing. See Audit Configuration for more information. |
||
|
Provide the location of an audit policy file. See Audit Policy Configuration for more information. |
||
|
This variable overrides the host name for the cluster, which defaults to the host name of the master. |
||
|
This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com |
||
|
Optional. This variable defines the HA method when deploying multiple masters.
Supports the |
||
|
This variable enables rolling restarts of HA masters (i.e., masters are taken
down one at a time) when
running
the upgrade playbook directly. It defaults to A rolling restart of the masters can be necessary to apply additional changes using the supplied Ansible hooks during the upgrade. Depending on the tasks you choose to perform you might want to reboot the host to restart your services. |
||
|
This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OKD to use it. You can configure multiple identity providers. |
||
|
These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
||
|
|||
|
Provide the location of the custom certificates for the hosted router. |
||
|
Provide the single certificate and key that signs the OKD certificates. See Redeploying a New or Custom OKD CA |
||
|
If the certificate for your |
||
|
If the parameter is set to |
||
|
Validity of the auto-generated registry certificate in days. Defaults to |
||
|
Validity of the auto-generated CA certificate in days. Defaults to |
||
|
Validity of the auto-generated master certificate in days. Defaults to |
||
|
Validity of the auto-generated external etcd certificates in days. Controls
validity for etcd CA, peer, server and client certificates. Defaults to |
||
|
Halt upgrades to clusters that have certificates expiring in this many days or fewer. Defaults to |
||
|
Whether upgrade fails if the auto-generated certificates are not valid for the
period specified by the |
||
|
Set to |
||
|
These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
||
|
|||
|
|||
|
|||
|
Sets |
||
|
Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
||
|
Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
||
|
This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker. |
||
|
This variable enables TLS bootstrapping auto approval, which allows nodes to
automatically join the cluster when provided a bootstrap credential. Set to
|
||
|
Default node selector for automatically deploying Ansible service broker pods,
defaults |
||
|
This variable overrides the node selector that projects will use by default when
placing pods, which is defined by the |
||
|
OKD adds the specified additional registry or registries to the
docker configuration. These are the registries to search.
If the registry requires access to a port other than For example: openshift_docker_additional_registries=example.com:443
|
||
|
OKD adds the specified additional insecure registry or registries to
the docker configuration. For any of these registries, secure sockets layer
(SSL) is not verified. Can be set to the host name or IP address
of the host. |
||
|
OKD adds the specified blocked registry or registries to the
docker configuration. Block the listed registries. Setting this to |
||
|
An additional registry that is trusted by the container runtime, when |
||
|
This variable sets the host name for integration with the metrics console by
overriding |
||
|
This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Services (AWS) with multiple zones or multiple clusters. See Labeling Clusters for AWS for details. |
||
|
Use this variable to configure datastore-layer encryption. |
||
|
Use this variable to specify a container image tag to install or configure. |
||
|
Use this variable to specify an RPM version to install or configure. |
If you modify the
|
Variable | Purpose |
---|---|
|
This variable overrides the default subdomain to use for exposed
routes. The value for this variable must consist of lower case alphanumeric characters or dashes ( |
|
This variable configures which
OpenShift SDN plug-in to
use for the pod network, which defaults to |
|
This variable overrides the SDN cluster network CIDR block. This is the network
from which pod IPs are assigned. Specify a private block
that does not conflict with existing network blocks in your infrastructure to
which pods, nodes, or the master might require access. Defaults to |
|
This variable configures the subnet in which services will be created in the OKD SDN. Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master might require access, or the installation will fail. Defaults to
|
|
This variable specifies the size of the per host subnet allocated for pod IPs
by
OKD
SDN. Defaults to |
|
This variable specifies the
service
proxy mode to use: either |
|
This variable enables flannel as an alternative networking layer instead of
the default SDN. If enabling flannel, disable the default SDN with the
|
|
Set to |
|
This variable sets the |
|
This variable specifies the MTU size to use for OpenShift SDN. The value must be |
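As an illustration, a minimal sketch of network-related settings in the [OSEv3:vars] section might look like the following. The variable names and values shown are assumptions based on commonly used openshift-ansible options and their typical defaults; verify them against your installer version:
[OSEv3:vars]
# Names and values below are illustrative; confirm against your openshift-ansible release
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
osm_cluster_network_cidr=10.128.0.0/14
openshift_portal_net=172.30.0.0/16
osm_host_subnet_length=9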
Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).
Ensure the openshift_deployment_type
parameter in your inventory file’s [OSEv3:vars]
section is set to origin
to install the OKD variant:
[OSEv3:vars] openshift_deployment_type=origin
To assign environment variables to hosts during the Ansible installation, set them in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters] ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
|
This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks that use network address translation (NAT). |
|
This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks that use network address translation (NAT). |
|
This variable is deprecated; see Defining Node Groups and Host Mappings for the current method of setting node labels. |
|
This variable configures additional The following example shows the configuration of Docker to use the OPTIONS=' --selinux-enabled --log-opt max-size=1M --log-opt max-file=3 --insecure-registry 172.30.0.0/16 --log-driver=json-file --signature-verification=false' |
|
This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters. |
|
This variable is used to activate the Node Problem Detector.
If set to |
Node configurations are bootstrapped from the master. When the node boots and services are started, the node checks if a kubeconfig and other node configuration files exist before joining the cluster. If they do not, the node pulls the configuration from the master, then joins the cluster.
This process replaces administrators having to manually maintain the node configuration uniquely on each node host. Instead, the contents of a node host’s /etc/origin/node/node-config.yaml file are now provided by ConfigMaps sourced from the master.
The ConfigMaps for defining the node configurations must be available in the
openshift-node project. ConfigMaps are also now the authoritative definition
for node labels; the old openshift_node_labels
value is effectively ignored.
By default during a cluster installation, the installer creates the following default ConfigMaps:
node-config-master
node-config-infra
node-config-compute
The following ConfigMaps are also created, which label nodes into multiple roles:
node-config-all-in-one
node-config-master-infra
The following ConfigMaps are CRI-O variants for each of the existing default node groups:
node-config-master-crio
node-config-infra-crio
node-config-compute-crio
node-config-all-in-one-crio
node-config-master-infra-crio
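After installation, you can confirm which of these node ConfigMaps exist by listing them in the openshift-node project; for example:
oc get configmaps -n openshift-node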
You must not modify a node host’s /etc/origin/node/node-config.yaml file. Any changes are overwritten by the configuration that is defined in the ConfigMap the node uses. |
After installing the latest openshift-ansible package, you can view the default set of node group definitions in YAML format in the ~/openshift-ansible/roles/openshift_facts/defaults/main.yml file:
openshift_node_groups: - name: node-config-master (1) labels: - 'node-role.kubernetes.io/master=true' (2) edits: [] (3) - name: node-config-infra labels: - 'node-role.kubernetes.io/infra=true' edits: [] - name: node-config-compute labels: - 'node-role.kubernetes.io/compute=true' edits: [] - name: node-config-master-infra labels: - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true' edits: [] - name: node-config-all-in-one labels: - 'node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true,node-role.kubernetes.io/compute=true' edits: []
1 | Node group name. |
2 | List of node labels associated with the node group. See Node Host Labels for details. |
3 | Any edits to the node group’s configuration. |
If you do not set the openshift_node_groups
variable in your inventory file’s
[OSEv3:vars]
group, these default values are used. However, if you
want to set custom node groups, you must define the entire
openshift_node_groups
structure, including all planned node groups, in your
inventory file.
The openshift_node_groups
value is not merged with the default values, and you
must translate the YAML definitions into a Python dictionary. You can then use
the edits
field to modify any node configuration variables by specifying
key-value pairs.
See Master and Node Configuration Files for reference on configurable node variables. |
For example, the following entry in an inventory file defines groups named
node-config-master
, node-config-infra
, and node-config-compute
.
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]
You can also define new node group names with other labels. For example, the following entry in an inventory file defines groups named
node-config-master
, node-config-infra
, node-config-compute
and node-config-compute-storage
.
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}, {'name': 'node-config-compute-storage', 'labels': ['node-role.kubernetes.io/compute-storage=true']}]
You can use a list to modify multiple key-value pairs, such as modifying the
node-config-compute
group to add two parameters to the kubelet
:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.experimental-allocatable-ignore-eviction','value': ['true']}, {'key': 'kubeletArguments.eviction-hard', 'value': ['memory.available<1Ki']}]}]
You can also use a dictionary as a value, such as modifying the
node-config-compute
group to set perFSGroup
to 512Mi
:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'volumeConfig.localQuota','value': {'perFSGroup':'512Mi'}}]}]
Whenever the openshift_node_group.yml playbook is run, the changes defined
in the edits
field will update the related ConfigMap (node-config-compute
in
this example), which will ultimately affect the node’s configuration file on the
host.
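For example, assuming the default RPM install location referenced elsewhere in this guide, the invocation might look like the following (the exact playbook path is an assumption and can differ depending on how you obtained openshift-ansible):
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/openshift-master/openshift_node_group.yml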
Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster. |
To map which ConfigMap to use for which node host, all hosts defined in the
[nodes]
group of your inventory must be assigned to a node group using the
openshift_node_group_name
variable.
Setting |
The value of openshift_node_group_name
is used to select the ConfigMap that
configures each node. For example:
[nodes] master[1:3].example.com openshift_node_group_name='node-config-master' infra-node1.example.com openshift_node_group_name='node-config-infra' infra-node2.example.com openshift_node_group_name='node-config-infra' node1.example.com openshift_node_group_name='node-config-compute' node2.example.com openshift_node_group_name='node-config-compute'
If other custom ConfigMaps have been defined in openshift_node_groups, they can also be used. For example:
[nodes] master[1:3].example.com openshift_node_group_name='node-config-master' infra-node1.example.com openshift_node_group_name='node-config-infra' infra-node2.example.com openshift_node_group_name='node-config-infra' node1.example.com openshift_node_group_name='node-config-compute' node2.example.com openshift_node_group_name='node-config-compute' gluster[1:6].example.com openshift_node_group_name='node-config-compute-storage'
You can assign labels to node hosts during cluster installation. You can use these labels to determine the placement of pods onto nodes using the scheduler.
You must create your own
custom node groups if you want to modify the default labels that are assigned to
node hosts. You can no longer set the openshift_node_labels
variable to change
labels. See Node Group Definitions
to modify the default node groups.
Other than node-role.kubernetes.io/infra=true
(hosts using this group are also referred to as
dedicated infrastructure nodes and discussed further in
Configuring Dedicated
Infrastructure Nodes), the actual label names and values are arbitrary and can
be assigned however you see fit per your cluster’s requirements.
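For example, assuming you want to attach an arbitrary label such as region=primary (an illustrative name and value) to compute nodes, you could include it in a custom node group definition, remembering that the full openshift_node_groups structure with all planned groups must be supplied:
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true', 'region=primary']}]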
Configure all hosts that you designate as masters during the installation process
as nodes. By doing so, the masters are configured as part of the
OpenShift SDN.
You must add entries for the master hosts to the [nodes]
section:
[nodes] master[1:3].example.com openshift_node_group_name='node-config-master'
If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.
Masters are marked as schedulable nodes by default, so the default node selector
is set by default during cluster installations. The default node selector is
defined in the master configuration file’s projectConfig.defaultNodeSelector field and determines which nodes projects use by default when placing pods. It is set to node-role.kubernetes.io/compute=true
unless overridden using the
osm_default_node_selector
variable.
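For example, to make dedicated infrastructure nodes the default placement target instead, you could set:
osm_default_node_selector='node-role.kubernetes.io/infra=true'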
If you accept the default node selector of
|
See Setting the Cluster-wide Default Node Selector for steps on adjusting this setting post-installation if needed.
It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.
The openshift_router_selector
and openshift_registry_selector
Ansible
settings determine the label selectors used when placing registry and router
pods. They are set to node-role.kubernetes.io/infra=true
by default:
# default selectors for router and registry services # openshift_router_selector='node-role.kubernetes.io/infra=true' # openshift_registry_selector='node-role.kubernetes.io/infra=true'
The registry and router are only able to run on node hosts with the
node-role.kubernetes.io/infra=true
label, which are then considered dedicated
infrastructure nodes. Ensure that at least one node host in your OKD
environment has the node-role.kubernetes.io/infra=true
label; you can use the
default node-config-infra, which sets this label:
[nodes] infra-node1.example.com openshift_node_group_name='node-config-infra'
If there is not a node in the |
If you do not intend to use OKD to manage the registry and router, configure the following Ansible settings:
openshift_hosted_manage_registry=false openshift_hosted_manage_router=false
If you use an image registry other than the default
registry.redhat.io
, you must
specify the registry
in the /etc/ansible/hosts file.
As described in Configuring Schedulability on Masters,
master hosts are marked schedulable by default. If
you label a master host with node-role.kubernetes.io/infra=true
and have no other dedicated
infrastructure nodes, the master hosts must also be marked as schedulable.
Otherwise, the registry and router pods cannot be placed anywhere.
You can use the default node-config-master-infra node group to achieve this:
[nodes] master.example.com openshift_node_group_name='node-config-master-infra'
To configure the default project settings, configure the following variables in the /etc/ansible/hosts file:
Parameter | Description | Type | Default Value |
---|---|---|---|
|
The string presented to a user if they are unable to request a project via the projectrequest API endpoint. |
String |
null |
|
The template to use for creating projects in response to a projectrequest. If you do not specify a value, the default template is used. |
String with the format |
null |
|
Defines the range of MCS categories to assign to namespaces. If this value is
changed after startup, new projects might receive labels that are already
allocated to other projects. The prefix can be any valid
SELinux set of terms, including user, role, and type. However, leaving the
prefix at its default allows the server to set them automatically. For example,
|
String with the format |
|
|
Defines the number of labels to reserve per project. |
Integer |
|
|
Defines the total set of Unix user IDs (UIDs) automatically allocated to
projects and the size of the block that each namespace gets. For example,
|
String in the format |
|
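A minimal sketch of these settings in the [OSEv3:vars] section might look like the following. The parameter names shown are the commonly used openshift-ansible names and are given here as assumptions; verify them against your installer version:
[OSEv3:vars]
# Parameter names are assumptions; confirm against your openshift-ansible release
osm_project_request_message='To request a project, contact your system administrator.'
osm_project_request_template='default/project-request'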
To configure the default ports used by the master API, configure the following variables in the /etc/ansible/hosts file:
Variable | Purpose |
---|---|
|
This variable sets the port number to access the OKD API. |
For example:
openshift_master_api_port=3443
The web console port setting (openshift_master_console_port
) must match the
API server port (openshift_master_api_port
).
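For example, to use the non-default port shown above for both the API server and the web console:
openshift_master_api_port=3443
openshift_master_console_port=3443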
Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OKD, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.
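You can also run these checks on their own before starting an installation. Assuming the default RPM install location used elsewhere in this guide, the invocation might look like the following (the playbook path is an assumption based on the openshift-ansible layout):
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml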
The following table describes available pre-install checks that will run before every Ansible installation of OKD:
Check Name | Purpose |
---|---|
|
This check ensures that a host has the recommended amount of memory for the
specific deployment of OKD. Default values have been derived from
the
latest
installation documentation. A user-defined value for minimum memory
requirements might be set by setting the |
|
This check only runs on etcd, master, and node hosts. It ensures that the mount
path for an OKD installation has sufficient disk space remaining.
Recommended disk values are taken from the
latest
installation documentation. A user-defined value for minimum disk space
requirements might be set by setting |
|
Only runs on hosts that depend on the docker daemon (nodes and system
container installations). Checks that docker's total usage does not exceed a
user-defined limit. If no user-defined limit is set, docker's maximum usage
threshold defaults to 90% of the total size available. The threshold limit for
total percent usage can be set with a variable in your inventory file:
|
|
Ensures that the docker daemon is using a storage driver supported by
OKD. If the |
|
Attempts to ensure that images required by an OKD installation are available either locally or in at least one of the configured container image registries on the host machine. |
|
Runs on |
|
Runs prior to RPM installations of OKD. Ensures that RPM packages required for the current installation are available. |
|
Checks whether a |
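As a sketch, user-defined thresholds for some of these checks can be placed in the [OSEv3:vars] section. The variable names below are assumptions based on the openshift_health_checker role and should be verified against your installer version:
# Variable names are assumptions; confirm against your openshift-ansible release
openshift_check_min_host_memory_gb=8
openshift_check_min_host_disk_gb=20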
To disable specific pre-install checks, include the variable
openshift_disable_check
with a comma-delimited list of check names in your
inventory file. For example:
openshift_disable_check=memory_availability,disk_availability
A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates. |
If you use the default registry at registry.redhat.io
, you must set the following
variables:
oreg_url=registry.redhat.io/openshift3/ose-${component}:${version} oreg_auth_user="<user>" oreg_auth_password="<password>"
For more information about setting up the registry access token, see Red Hat Container Registry Authentication.
If you use an image registry other than the default at registry.redhat.io
,
specify the registry in the /etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version} openshift_examples_modify_imagestreams=true
Variable | Purpose |
---|---|
|
Set to the alternate image location. Necessary if you are not using the default
registry at |
|
Set to |
|
If |
|
If |
The default registry requires an authentication token. For more information, see Accessing and Configuring the Red Hat Registry |
For example:
oreg_url=example.com/openshift3/ose-${component}:${version} oreg_auth_user=${user_name} oreg_auth_password=${password} openshift_examples_modify_imagestreams=true
To allow users to push and pull images to the internal container image registry from outside of the OKD cluster, configure the registry route in the /etc/ansible/hosts file. By default, the registry route is docker-registry-default.router.default.svc.cluster.local.
Variable | Purpose |
---|---|
|
Set to the value of the desired registry route. The route contains either
a name that resolves to an infrastructure node where a router manages
communication or the subdomain that you set as the default application subdomain
wildcard value. For example, if you set the |
|
Set the paths to the registry certificates. If you do not provide values for the certificate locations, certificates are generated. You can define locations for the following certificates:
|
|
Set to one of the following values:
|
For example:
openshift_hosted_registry_routehost=<path> openshift_hosted_registry_routetermination=reencrypt openshift_hosted_registry_routecertificates= "{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"
Router sharding
support is enabled by supplying the correct data to the inventory. The variable
openshift_hosted_routers
holds the data, which is in the form of a list. If no
data is passed, then a default router is created. There are multiple
combinations of router sharding. The following example supports routers on
separate nodes:
openshift_hosted_routers=[{'name': 'router1', 'certificate': {'certfile': '/path/to/certificate/abc.crt', 'keyfile': '/path/to/certificate/abc.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router1', 'ports': ['80:80', '443:443']}, {'name': 'router2', 'certificate': {'certfile': '/path/to/certificate/xyz.crt', 'keyfile': '/path/to/certificate/xyz.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [{'action': 'append', 'key': 'spec.template.spec.containers[0].env', 'value': {'name': 'ROUTE_LABELS', 'value': 'route=external'}}], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router2', 'ports': ['80:80', '443:443']}]
GlusterFS can be configured to provide persistent storage and dynamic provisioning for OKD. It can be used both containerized within OKD (Containerized GlusterFS) and non-containerized on its own nodes (External GlusterFS).
You configure GlusterFS clusters using variables, which interact with the
OKD clusters. The variables, which you define in the [OSEv3:vars]
group, include host variables, role variables, and image name and version tag
variables.
You use the glusterfs_devices host variable to define the list of block devices that are managed as part of the GlusterFS cluster. Each host in your configuration must have at least one glusterfs_devices entry listing at least one device, and each device must be bare, with no partitions or LVM PVs.
Role variables control the integration of a GlusterFS cluster into a new or existing OKD cluster. You can define a number of role variables, each of which also has a corresponding variable to optionally configure a separate GlusterFS cluster for use as storage for an integrated Docker registry.
You can define image name and version tag variables to prevent OKD pods from upgrading after an outage, which could lead to a cluster with different OKD versions. You can also define these variables to specify the image name and version tags for all containerized components.
Additional information and examples, including the ones below, can be found at Persistent Storage Using GlusterFS.
See Containerized GlusterFS Considerations for specific host preparations and prerequisites. |
In your inventory file, include the following variables in the [OSEv3:vars]
section, and adjust them as required for your configuration:
[OSEv3:vars] ... openshift_storage_glusterfs_namespace=app-storage openshift_storage_glusterfs_storageclass=true openshift_storage_glusterfs_storageclass_default=false openshift_storage_glusterfs_block_deploy=true openshift_storage_glusterfs_block_host_vol_size=100 openshift_storage_glusterfs_block_storageclass=true openshift_storage_glusterfs_block_storageclass_default=false
Add glusterfs
in the [OSEv3:children]
section to enable the [glusterfs]
group:
[OSEv3:children] masters nodes glusterfs
Add a [glusterfs]
section with entries for each storage node that will host
the GlusterFS storage. For each node, set glusterfs_devices
to a list of raw
block devices that will be completely managed as part of a GlusterFS cluster.
There must be at least one device listed. Each device must be bare, with no
partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs] node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Add the hosts listed under [glusterfs]
to the [nodes]
group:
[nodes] ... node11.example.com openshift_node_group_name="node-config-compute" node12.example.com openshift_node_group_name="node-config-compute" node13.example.com openshift_node_group_name="node-config-compute"
In your inventory file, include the following variables in the [OSEv3:vars]
section, and adjust them as required for your configuration:
[OSEv3:vars] ... openshift_storage_glusterfs_namespace=app-storage openshift_storage_glusterfs_storageclass=true openshift_storage_glusterfs_storageclass_default=false openshift_storage_glusterfs_block_deploy=true openshift_storage_glusterfs_block_host_vol_size=100 openshift_storage_glusterfs_block_storageclass=true openshift_storage_glusterfs_block_storageclass_default=false openshift_storage_glusterfs_is_native=false openshift_storage_glusterfs_heketi_is_native=true openshift_storage_glusterfs_heketi_executor=ssh openshift_storage_glusterfs_heketi_ssh_port=22 openshift_storage_glusterfs_heketi_ssh_user=root openshift_storage_glusterfs_heketi_ssh_sudo=false openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
Add glusterfs
in the [OSEv3:children]
section to enable the [glusterfs]
group:
[OSEv3:children] masters nodes glusterfs
Add a [glusterfs]
section with entries for each storage node that will host
the GlusterFS storage. For each node, set glusterfs_devices
to a list of raw
block devices that will be completely managed as part of a GlusterFS cluster.
There must be at least one device listed. Each device must be bare, with no
partitions or LVM PVs. Also, set glusterfs_ip
to the IP address of the node.
Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs] gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
An integrated OpenShift Container Registry can be deployed using the installer.
If no registry storage options are used, the default OpenShift Container Registry is ephemeral and all data will be lost when the pod no longer exists.
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components. |
There are several options for enabling registry storage when using the advanced installer:
When the following variables are set, an NFS volume is created during cluster
installation with the path <nfs_directory>/<volume_name> on the host in
the [nfs]
host group. For example, the volume path using these options is
/exports/registry:
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_hosted_registry_storage_kind=nfs openshift_hosted_registry_storage_access_modes=['ReadWriteMany'] openshift_hosted_registry_storage_nfs_directory=/exports openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)' openshift_hosted_registry_storage_volume_name=registry openshift_hosted_registry_storage_volume_size=10Gi
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options is nfs.example.com:/exports/registry.
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_hosted_registry_storage_kind=nfs openshift_hosted_registry_storage_access_modes=['ReadWriteMany'] openshift_hosted_registry_storage_host=nfs.example.com openshift_hosted_registry_storage_nfs_directory=/exports openshift_hosted_registry_storage_volume_name=registry openshift_hosted_registry_storage_volume_size=10Gi
An OpenStack storage configuration must already exist.
[OSEv3:vars] openshift_hosted_registry_storage_kind=openstack openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] openshift_hosted_registry_storage_openstack_filesystem=ext4 openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57 openshift_hosted_registry_storage_volume_size=10Gi
The simple storage solution (S3) bucket must already exist.
[OSEv3:vars] #openshift_hosted_registry_storage_kind=object #openshift_hosted_registry_storage_provider=s3 #openshift_hosted_registry_storage_s3_accesskey=access_key_id #openshift_hosted_registry_storage_s3_secretkey=secret_access_key #openshift_hosted_registry_storage_s3_bucket=bucket_name #openshift_hosted_registry_storage_s3_region=bucket_region #openshift_hosted_registry_storage_s3_chunksize=26214400 #openshift_hosted_registry_storage_s3_rootdirectory=/registry #openshift_hosted_registry_pullthrough=true #openshift_hosted_registry_acceptschema2=true #openshift_hosted_registry_enforcequota=true
If you use a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
Similar to configuring Containerized GlusterFS, GlusterFS can be configured to provide storage for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and reliable storage for the registry.
See Containerized GlusterFS Considerations for specific host preparations and prerequisites. |
In your inventory file, set the following variables in the [OSEv3:vars]
section, and adjust them as required for your configuration:
[OSEv3:vars] ... openshift_hosted_registry_storage_kind=glusterfs (1) openshift_hosted_registry_storage_volume_size=5Gi openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
1 | Running the integrated OpenShift Container Registry on infrastructure nodes is recommended. Infrastructure nodes are nodes dedicated to running applications that administrators deploy to provide services for the OKD cluster. |
Add glusterfs_registry
in the [OSEv3:children]
section to enable the
[glusterfs_registry]
group:
[OSEv3:children] masters nodes glusterfs_registry
Add a [glusterfs_registry]
section with entries for each storage node that
will host the GlusterFS storage. For each node, set glusterfs_devices
to a
list of raw block devices that will be completely managed as part of a
GlusterFS cluster. There must be at least one device listed. Each device must
be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry] node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]' node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Add the hosts listed under [glusterfs_registry]
to the [nodes]
group:
[nodes] ... node11.example.com openshift_node_group_name="node-config-infra" node12.example.com openshift_node_group_name="node-config-infra" node13.example.com openshift_node_group_name="node-config-infra"
A GCS bucket must already exist.
[OSEv3:vars] openshift_hosted_registry_storage_provider=gcs openshift_hosted_registry_storage_gcs_bucket=bucket01 openshift_hosted_registry_storage_gcs_keyfile=test.key openshift_hosted_registry_storage_gcs_rootdirectory=/registry
The vSphere Cloud Provider must be configured with a datastore accessible by the OKD nodes.
When using vSphere volume for the registry, you must set the storage access mode
to ReadWriteOnce
and the replica count to 1
:
[OSEv3:vars] openshift_hosted_registry_storage_kind=vsphere openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] openshift_hosted_registry_replicas=1
If your hosts require an HTTP or HTTPS proxy to connect to external hosts, many components must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which does not require external access, so they do not need to be configured to use a proxy.
To simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.
See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds. |
Variable | Purpose |
---|---|
|
This variable specifies the |
|
This variable specifies the |
|
This variable is used to set the The host names that do not use the defined proxy include:
|
|
This boolean variable specifies whether or not the names of all defined
OpenShift hosts and |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the HTTP proxy used by |
|
This variable defines the HTTPS proxy used by |
|
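For example, a simple cluster-wide proxy configuration in the [OSEv3:vars] section might look like the following sketch. The variable names reflect commonly used openshift-ansible proxy options and the host names are placeholders:
[OSEv3:vars]
# Host names are placeholders; verify the variable names against your openshift-ansible release
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com'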
While iptables is the default firewall, firewalld is recommended for new installations. |
OKD uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.
Because iptables is the default firewall, OKD is designed to have it configured automatically. However, iptables rules can break OKD if not configured correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.
To use firewalld as the firewall for an OKD installation, add the
os_firewall_use_firewalld
variable to the list of configuration variables in
the Ansible host file at install:
[OSEv3:vars] os_firewall_use_firewalld=True (1)
1 | Setting this variable to true opens the required ports and adds rules to
the default zone, ensuring that firewalld is configured correctly. |
Using the firewalld default configuration comes with limited configuration options, and cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone. |
Session
options in the OAuth configuration are configurable in the inventory file. By
default, Ansible populates a sessionSecretsFile
with generated
authentication and encryption secrets so that sessions generated by one master
can be decoded by the others. The default location is
/etc/origin/master/session-secrets.yaml, and this file will only be
re-created if deleted on all masters.
You can set the session name and maximum number of seconds with
openshift_master_session_name
and openshift_master_session_max_seconds
:
openshift_master_session_name=ssn openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.
For openshift_master_session_auth_secrets, which is used to authenticate sessions using HMAC, it is recommended to use secrets of 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_session_encryption_secrets, which is used to encrypt sessions, secrets must be 16, 24, or 32 characters long to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
Custom serving certificates for the public host names of the OKD API and web console can be deployed during cluster installation and are configurable in the inventory file.
Configure custom certificates for the host name associated with
the |
Certificate and key file paths can be configured using the
openshift_master_named_certificates
cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed in the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate’s Common Name
and Subject Alternative Names
.
Detected names can be overridden by providing the "names"
key when setting
openshift_master_named_certificates
:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]
Certificates configured using openshift_master_named_certificates
are cached
on masters, meaning that each additional Ansible run with a different set of
certificates results in all previously deployed certificates remaining in place
on master hosts and in the master configuration file.
If you want to overwrite openshift_master_named_certificates
with
the provided value (or no value), specify the
openshift_master_overwrite_named_certificates
cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native openshift_master_cluster_hostname=lb-internal.openshift.com openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, set the following parameter values:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"], "cafile": "/root/ca-file.crt"}] openshift_master_overwrite_named_certificates=true
The |
By default, the certificates used to govern the etcd, master, and kubelet expire after two to five years. The validity (length in days until they expire) for the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):
[OSEv3:vars] openshift_hosted_registry_cert_expire_days=730 openshift_ca_cert_expire_days=1825 openshift_master_cert_expire_days=730 etcd_ca_default_days=1825
These values are also used when redeploying certificates via Ansible post-installation.
Prometheus Cluster Monitoring is set to automatically deploy. To prevent its automatic deployment, set the following:
[OSEv3:vars] openshift_cluster_monitoring_operator_install=false
For more information on Prometheus Cluster Monitoring and its configuration, see Prometheus Cluster Monitoring documentation.
Cluster metrics are not set to automatically deploy. Set the following to enable cluster metrics during cluster installation:
[OSEv3:vars] openshift_metrics_install_metrics=true
The metrics public URL can be set during cluster
installation using the openshift_metrics_hawkular_hostname
Ansible variable,
which defaults to:
https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
If you alter this variable, ensure the host name is accessible via your router.
openshift_metrics_hawkular_hostname=hawkular-metrics.{{openshift_master_default_subdomain}}
In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface of |
You must set an |
The openshift_metrics_cassandra_storage_type
variable must be set in order to
use persistent storage for metrics. If
openshift_metrics_cassandra_storage_type
is not set, then cluster metrics data
is stored in an emptyDir
volume, which will be deleted when the Cassandra pod
terminates.
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes Cassandra for metrics storage. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended. Cassandra is designed to provide redundancy via multiple independent instances. For this reason, using NFS or a SAN for data directories is an antipattern and is not recommended. However, NFS/SAN implementations on the marketplace might not have issues backing or providing storage to this component. Contact the individual NFS/SAN implementation vendor for more information on any testing that was possibly completed against these OpenShift core components. |
There are three options for enabling cluster metrics storage during cluster installation:
If your OKD environment supports dynamic volume provisioning for your cloud provider, use the following variable:
[OSEv3:vars] openshift_metrics_cassandra_storage_type=dynamic
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. Use the following variables:
[OSEv3:vars] openshift_metrics_cassandra_storage_type=pv openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block
Check
Volume
Configuration for more information on using DynamicProvisioningEnabled
to
enable or disable dynamic provisioning.
When the following variables are set, an NFS volume is created during cluster
installation with path <nfs_directory>/<volume_name> on the host in the
[nfs]
host group. For example, the volume path using these options is
/exports/metrics:
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_metrics_storage_kind=nfs openshift_metrics_storage_access_modes=['ReadWriteOnce'] openshift_metrics_storage_nfs_directory=/exports openshift_metrics_storage_nfs_options='*(rw,root_squash)' openshift_metrics_storage_volume_name=metrics openshift_metrics_storage_volume_size=10Gi
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_metrics_storage_kind=nfs openshift_metrics_storage_access_modes=['ReadWriteOnce'] openshift_metrics_storage_host=nfs.example.com openshift_metrics_storage_nfs_directory=/exports openshift_metrics_storage_volume_name=metrics openshift_metrics_storage_volume_size=10Gi
The remote volume path using the following options is nfs.example.com:/exports/metrics.
The use of NFS for the core OKD components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the OKD infrastructure.
As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.
# Enable unsupported configurations, things that will yield a partially # functioning cluster but would not be supported for production use #openshift_enable_unsupported_configurations=false
If you see the following messages when upgrading or installing your cluster, then an additional step is required.
TASK [Run variable sanity checks] ********************************************** fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True mustbe specified to continue with this configuration."}
In your Ansible inventory file, specify the following parameter:
[OSEv3:vars] openshift_enable_unsupported_configurations=True
Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging during cluster installation:
[OSEv3:vars] openshift_logging_install_logging=true
When installing cluster logging, you must also specify a node selector,
such as |
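For example, to pin the Elasticsearch pods to dedicated infrastructure nodes, you might set a selector such as the following (the variable name is the commonly used openshift-ansible option and is given here as an assumption):
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}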
For more information on the available cluster logging variables, see Specifying Logging Ansible Variables.
The openshift_logging_es_pvc_dynamic
variable must be set in order to use
persistent storage for logging. If openshift_logging_es_pvc_dynamic
is
not set, then cluster logging data is stored in an emptyDir
volume, which will
be deleted when the Elasticsearch pod terminates.
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes Elasticsearch for logging storage. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended. Because Elasticsearch does not implement a custom deletionPolicy, the use of NFS storage as a volume or a persistent volume is not supported for Elasticsearch storage, as Lucene and the default deletionPolicy rely on file system behavior that NFS does not supply. Data corruption and other problems can occur. NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing they might have performed against these OpenShift core components. |
There are three options for enabling cluster logging storage during cluster installation:
If your OKD environment has dynamic volume provisioning, it could be configured
either via the cloud provider or by an independent storage provider. For instance, the cloud provider
could have a StorageClass with provisioner kubernetes.io/gce-pd
on GCE, and an
independent storage provider such as GlusterFS could have a StorageClass
with provisioner
kubernetes.io/glusterfs
. In either case, use the following variable:
[OSEv3:vars] openshift_logging_es_pvc_dynamic=true
For additional information on dynamic provisioning, see Dynamic provisioning and creating storage classes.
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. Use the following variables:
[OSEv3:vars] openshift_logging_elasticsearch_storage_type=pvc openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block
Check
Volume
Configuration for more information on using DynamicProvisioningEnabled
to
enable or disable dynamic provisioning.
When the following variables are set, an NFS volume is created during cluster
installation with path <nfs_directory>/<volume_name> on the host in the
[nfs]
host group. For example, the volume path using these options is
/exports/logging:
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_logging_storage_kind=nfs openshift_logging_storage_access_modes=['ReadWriteOnce'] openshift_logging_storage_nfs_directory=/exports (1) openshift_logging_storage_nfs_options='*(rw,root_squash)' (1) openshift_logging_storage_volume_name=logging (2) openshift_logging_storage_volume_size=10Gi openshift_enable_unsupported_configurations=true openshift_logging_elasticsearch_storage_type=pvc openshift_logging_es_pvc_size=10Gi openshift_logging_es_pvc_storage_class_name='' openshift_logging_es_pvc_dynamic=true openshift_logging_es_pvc_prefix=logging
1 | These parameters work only with the /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
installation playbook. The parameters will not work with the /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
playbook. |
2 | The NFS volume name must be logging . |
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars] # nfs_directory must conform to DNS-1123 subdomain must consist of lower case # alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character openshift_logging_storage_kind=nfs openshift_logging_storage_access_modes=['ReadWriteOnce'] openshift_logging_storage_host=nfs.example.com (1) openshift_logging_storage_nfs_directory=/exports (1) openshift_logging_storage_volume_name=logging (2) openshift_logging_storage_volume_size=10Gi openshift_enable_unsupported_configurations=true openshift_logging_elasticsearch_storage_type=pvc openshift_logging_es_pvc_size=10Gi openshift_logging_es_pvc_storage_class_name='' openshift_logging_es_pvc_dynamic=true openshift_logging_es_pvc_prefix=logging
1 | These parameters work only with the /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
installation playbook. The parameters will not work with the /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
playbook. |
2 | The NFS volume name must be logging . |
The remote volume path using the following options is nfs.example.com:/exports/logging.
The use of NFS for the core OKD components is not recommended, as NFS (and the NFS Protocol) does not provide the proper consistency needed for the applications that make up the OKD infrastructure.
As a result, the installer and update playbooks require an option to enable the use of NFS with core infrastructure components.
# Enable unsupported configurations, things that will yield a partially # functioning cluster but would not be supported for production use #openshift_enable_unsupported_configurations=false
If you see the following messages when upgrading or installing your cluster, then an additional step is required.
TASK [Run variable sanity checks] ********************************************** fatal: [host.example.com]: FAILED! => {"failed": true, "msg": "last_checked_host: host.example.com, last_checked_var: openshift_hosted_registry_storage_kind;nfs is an unsupported type for openshift_hosted_registry_storage_kind. openshift_enable_unsupported_configurations=True mustbe specified to continue with this configuration."}
In your Ansible inventory file, specify the following parameter:
[OSEv3:vars]
openshift_enable_unsupported_configurations=True
The service catalog is enabled by default during installation. Enabling the service broker allows you to register service brokers with the catalog. When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both installed as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information. If you disable the service catalog, the OpenShift Ansible broker and template service broker are not installed.
To disable automatic deployment of the service catalog, set the following cluster variable in your inventory file:
openshift_enable_service_catalog=false
If you use your own registry, you must add:
openshift_service_catalog_image_prefix: When pulling the service catalog image, force the use of a specific prefix (for example, registry). You must provide the full registry name up to the image name.

openshift_service_catalog_image_version: When pulling the service catalog image, force the use of a specific image version.
For example:
openshift_service_catalog_image="docker-registry.default.example.com/openshift/ose-service-catalog:${version}" openshift_service_catalog_image_prefix="docker-registry-default.example.com/openshift/ose-" openshift_service_catalog_image_version="v3.9.30"
The OpenShift Ansible broker (OAB) is enabled by default during installation.
If you do not want to install the OAB, set the ansible_service_broker_install parameter value to false in the inventory file:
ansible_service_broker_install=false
| Variable | Purpose |
|---|---|
| openshift_service_catalog_image_prefix | Specify the prefix for the service catalog component image. |
The OAB deploys its own etcd instance, separate from the etcd used by the rest of the OKD cluster. The OAB's etcd instance requires separate storage using persistent volumes (PVs) to function. If no PV is available, etcd waits until its claim can be satisfied, and the OAB application remains in a CrashLoop state until its etcd instance is available.
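If the broker does stall in this state after installation, check whether its claim is bound and inspect the pod status. For example (the project name here assumes the installer's default broker namespace):

oc get pvc -n openshift-ansible-service-broker
oc get pods -n openshift-ansible-service-broker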
Some Ansible playbook bundles (APBs) also require a PV of their own in order to deploy. For example, each of the database APBs has two plans: the Development plan uses ephemeral storage and does not require a PV, while the Production plan uses persistent storage and does require a PV.
| APB | PV Required? |
|---|---|
| postgresql-apb | Yes, but only for the Production plan |
| mysql-apb | Yes, but only for the Production plan |
| mariadb-apb | Yes, but only for the Production plan |
| mediawiki-apb | Yes |
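If you provision these volumes manually rather than relying on a dynamic provisioner, a PV for one of the Production plans might look like the following minimal sketch; the name, size, server, and export path are illustrative and must match your environment:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: apb-pv-01                  # illustrative name
spec:
  capacity:
    storage: 1Gi                   # size the chosen APB plan requires
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com        # NFS host from the [nfs] group
    path: /exports/apb-pv-01       # pre-created export for this volume
  persistentVolumeReclaimPolicy: Retain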
To configure persistent storage for the OAB:
The following example shows usage of an NFS host to provide the required PVs, but other persistent storage providers can be used instead.
In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:
[OSEv3:children]
masters
nodes
nfs
Add a [nfs] group section and add the host name for the system that will be the NFS host:
[nfs]
master1.example.com
Add the following in the [OSEv3:vars] section:
# nfs_directory must conform to DNS-1123 subdomain rules: it must consist of lower case
# alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd (1)
openshift_hosted_etcd_storage_volume_name=etcd-vol2 (1)
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
ansible_service_broker_registry_url=registry.redhat.io
ansible_service_broker_registry_user=<user_name> (2)
ansible_service_broker_registry_password=<password> (2)
ansible_service_broker_registry_organization=<organization> (2)
(1) An NFS volume will be created with path <nfs_directory>/<volume_name> on the host in the [nfs] group. For example, the volume path using these options is /opt/osev3-etcd/etcd-vol2.
(2) Only required if ansible_service_broker_registry_url is set to a registry that requires authentication for pulling APBs.
These settings create a persistent volume that is attached to the OAB’s etcd instance during cluster installation.
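After the playbook run finishes, you can confirm that the volume exists and is bound to the broker's etcd claim, for example:

oc get pv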
To do APB development with the OpenShift Container Registry in conjunction with the OAB, you must define a whitelist of images that the OAB can access. If a whitelist is not defined, the broker ignores APBs and users do not see any APBs available.
By default, the whitelist is empty so that a user cannot add APB images to the broker without a cluster administrator configuring the broker. To whitelist all images that end in -apb:

In your inventory file, add the following to the [OSEv3:vars] section:
ansible_service_broker_local_registry_whitelist=['.*-apb$']
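Each whitelist entry is a regular expression matched against image names, and you can list more than one pattern. For example, the following illustrative value also allows images under a hypothetical myorg/ namespace in addition to anything ending in -apb:

ansible_service_broker_local_registry_whitelist=['.*-apb$', '^myorg/.*']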
The template service broker (TSB) is enabled by default during installation.
If you do not want to install the TSB, set the template_service_broker_install parameter value to false:
template_service_broker_install=false
To configure the TSB, one or more projects must be defined as the broker's source namespace(s) for loading templates and image streams into the service catalog. Set the source projects by modifying the following in your inventory file's [OSEv3:vars] section:
openshift_template_service_broker_namespaces=['openshift','myproject']
| Variable | Purpose |
|---|---|
|  | Specify the prefix for the template service broker component image. |
|  | Specify the prefix for the ansible service broker component image. |
The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.
| Variable | Purpose |
|---|---|
|  | Determines whether to install the web console. Can be set to true or false. |
|  | Specify the prefix for the web console images. |
|  | Sets |
|  | Sets |
|  | Sets |
|  | Sets the OAuth template in the master configuration. See Customizing the Login Page for details. |
|  | Sets |
|  | Sets |
|  | Configures the web console to log the user out automatically after a period of inactivity. Must be a whole number greater than or equal to 5, or 0 to disable the feature. Defaults to 0 (disabled). |
|  | Boolean value indicating if the cluster is configured for overcommit. |
|  | Enable the context selector in the web console and admin console mastheads for quickly switching between the two consoles. |
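As a minimal example, the following inventory settings install the web console and log idle users out after 30 minutes. The variable names assume a recent openshift-ansible release; verify them against the example inventory shipped with your version:

openshift_web_console_install=true
openshift_web_console_inactivity_timeout_minutes=30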
The cluster console is an additional web interface like the web console, but focused on admin tasks. The cluster console supports many of the same common OKD resources as the web console, but it also allows you to view metrics about the cluster and manage cluster-scoped resources such as nodes, persistent volumes, cluster roles, and custom resource definitions. The following variables can be used to customize the cluster console.
| Variable | Purpose |
|---|---|
|  | Determines whether to install the cluster console. Can be set to true or false. |
|  | Sets the host name of the cluster console. |
|  | Optional certificate to use for the cluster console route. This is only needed if using a custom host name. |
|  | Optional key to use for the cluster console route. This is only needed if using a custom host name. |
|  | Optional CA to use for the cluster console route. This is only needed if using a custom host name. |
|  | Optional base path for the cluster console. If set, it must begin and end with a slash. |
|  | Optional CA file to use to connect to the OAuth server. |
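A minimal sketch for enabling the cluster console with a custom host name follows; the variable names assume the openshift-ansible defaults for this release, and the host name is illustrative:

openshift_console_install=true
openshift_console_hostname=console.apps.test.example.com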
The Technology Preview Operator Framework includes the Operator Lifecycle Manager (OLM). You can optionally install the OLM during cluster installation by setting the following variables in your inventory file:
Alternatively, the Technology Preview Operator Framework can be installed after cluster installation. See Installing Operator Lifecycle Manager using Ansible for separate instructions.
Add the openshift_enable_olm variable in the [OSEv3:vars] section, setting it to true:
openshift_enable_olm=true
Add the openshift_additional_registry_credentials variable in the [OSEv3:vars] section, setting credentials required to pull the Operator containers:
openshift_additional_registry_credentials=[{'host':'registry.connect.redhat.com','user':'<your_user_name>','password':'<your_password>','test_image':'mongodb/enterprise-operator:0.3.2'}]
Set user and password to the credentials that you use to log in to the Red Hat Customer Portal at https://access.redhat.com. The test_image represents an image that will be used to test the credentials you provided.
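You can sanity-check the same credentials outside of Ansible by pulling the test image directly, for example with docker (podman works the same way):

docker login registry.connect.redhat.com
docker pull registry.connect.redhat.com/mongodb/enterprise-operator:0.3.2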
After your cluster installation completes successfully, see Launching your first Operator for further steps on using the OLM as a cluster administrator during this Technology Preview phase.