You can configure OKD to use Red Hat Virtualization.

Configuring Red Hat Virtualization objects

To integrate OKD with Red Hat Virtualization, take the following actions as part of your host preparation.

  1. To provide high availability if a hypervisor host is lost, add each class of instance to a negative affinity group so that instances of the same class run on separate hosts. See VM Affinity.
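
    One way to create these groups is with the ovirt_affinity_group Ansible module. The following is a minimal sketch, not the only method; the engine URL, credentials, cluster name, and VM names are assumptions to adapt to your environment:

    ---
    # Sketch: keep the master-class VMs on separate hypervisor hosts.
    - hosts: localhost
      connection: local
      tasks:
        - name: Log in to the Red Hat Virtualization Manager (assumed URL and user)
          ovirt_auth:
            url: https://engine.example.com/ovirt-engine/api
            username: admin@internal
            password: "{{ engine_password }}"

        - name: Create a negative affinity group for the masters
          ovirt_affinity_group:
            auth: "{{ ovirt_auth }}"
            cluster: Default
            name: masters-anti-affinity
            vm_rule: negative
            vm_enforcing: true
            vms:
              - master0
              - master1
              - master2

        - name: Revoke the Manager session
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"

    Repeat the affinity task for the infrastructure and application classes.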

  2. To ensure that the OKD environment meets the minimum hardware requirements, create templates for virtual machines that use the following resources:

    Master nodes

    • Minimum 2 CPU Cores

    • 16 GB Memory

    • Minimum 10 GB root disk

    • Minimum 15 GB Docker storage disk

    • 30 GB local volume disk

    • Minimum 25 GB etcd disk

    Infrastructure nodes

    • Minimum 2 CPU Cores

    • 16 GB Memory

    • Minimum 10 GB root disk

    • Minimum 15 GB Docker storage disk

    • 30 GB local volume disk

    • Minimum 25 GB Gluster registry disk

    Application nodes

    • 2 CPU Cores

    • 8 GB Memory

    • Minimum 10 GB root disk

    • Minimum 15 GB Docker storage disk

    • 30 GB local volume disk

    Load balancer node

    • 1 CPU Core

    • 4 GB Memory

    • 10 GB root disk
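
    After you prepare a base virtual machine with these resources, you can capture it as a template, for example with the ovirt_template Ansible module. The template and VM names here are assumptions, and the task reuses the ovirt_auth login pattern shown in step 1:

    ---
    # Sketch: capture a prepared base VM as a reusable template.
    - hosts: localhost
      connection: local
      tasks:
        - name: Create a master template from an assumed base VM named master-base
          ovirt_template:
            auth: "{{ ovirt_auth }}"
            cluster: Default
            name: okd-master
            vm: master-base
            state: present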

  3. Create master, infrastructure, and application nodes and a load balancer node from the templates that you created.
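
    As an illustration, the ovirt_vm Ansible module can instantiate nodes from a template. The okd-master template and the VM names are assumptions that match the earlier sketches:

    ---
    # Sketch: create the three master VMs from the assumed okd-master template.
    - hosts: localhost
      connection: local
      tasks:
        - name: Create and start the master virtual machines
          ovirt_vm:
            auth: "{{ ovirt_auth }}"
            cluster: Default
            name: "{{ item }}"
            template: okd-master
            state: running
          loop:
            - master0
            - master1
            - master2

    Repeat with the infrastructure, application, and load balancer templates.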

  4. Create DNS entries for the routers. Provide entries for all infrastructure instances and configure a round-robin strategy so that the routers can pass traffic to applications.

  5. Create a DNS entry for the OKD web console. Specify the IP address of the load balancer node.
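
    As a sketch, in a BIND zone file for example.com the entries for steps 4 and 5 might look like the following; the addresses are placeholders for your infrastructure and load balancer instances:

    ; Round-robin wildcard records for the application routers (placeholder addresses)
    *.apps.example.com.    300 IN A 192.0.2.11
    *.apps.example.com.    300 IN A 192.0.2.12
    *.apps.example.com.    300 IN A 192.0.2.13
    ; Web console entry that resolves to the load balancer node
    lb.example.com.        300 IN A 192.0.2.10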

  6. To use Red Hat Virtualization, you must provide external storage, for example, GlusterFS, for persistent storage of registry images and for application storage.

Configuring OKD for Red Hat Virtualization

You configure OKD for Red Hat Virtualization by modifying the Ansible inventory file before you install the cluster.

  1. Modify the Ansible inventory file, located at /etc/ansible/hosts by default, to include the following sections:

    [OSEv3:children]
    nodes
    masters
    etcd
    glusterfs_registry
    lb
    
    [OSEv3:vars]
    # General variables
    ansible_ssh_user=root
    openshift_deployment_type=origin
    openshift_release='3.10'
    openshift_master_cluster_method=native
    debug_level=2
    openshift_debug_level="{{ debug_level }}"
    openshift_node_debug_level="{{ node_debug_level | default(debug_level, true) }}"
    openshift_enable_service_catalog=False
    
    app_dns_prefix=apps
    public_hosted_zone=example.com
    load_balancer_hostname=lb.{{public_hosted_zone}}
    openshift_master_cluster_hostname="{{ load_balancer_hostname }}"
    openshift_master_cluster_public_hostname="{{ load_balancer_hostname }}"
    openshift_master_default_subdomain="{{ app_dns_prefix }}.{{ public_hosted_zone }}"
    
    # Pod Networking
    os_sdn_network_plugin_name=redhat/openshift-ovs-networkpolicy
    
    # Registry
    openshift_hosted_registry_storage_kind=glusterfs
    
    # Authentication (example here creates one user, myuser with password changeme)
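    # A hash like the one below can be generated with, for example: htpasswd -nb myuser changeme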
    openshift_master_identity_providers="[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]"
    openshift_master_htpasswd_users={'myuser': '$apr1$zAhyA9Ko$rBxBOwAwwtRuuaw8OtCwH0'}
    
    # Docker and extra file system setup
    container_runtime_docker_storage_setup_device=/dev/vdb
    container_runtime_docker_storage_type=overlay2
    openshift_docker_use_system_container=False
    openshift_node_local_quota_per_fsgroup=512Mi (1)
    openshift_use_system_containers=False
    
    [masters]
    master0.example.com
    master1.example.com
    master2.example.com
    
    [etcd]
    master0.example.com
    master1.example.com
    master2.example.com
    
    [infras]
    infra0.example.com
    infra1.example.com
    infra2.example.com
    
    [glusterfs_registry]
    infra0.example.com glusterfs_devices="['/dev/vdd']"
    infra1.example.com glusterfs_devices="['/dev/vdd']"
    infra2.example.com glusterfs_devices="['/dev/vdd']"
    
    [lb]
    lb.example.com
    
    [nodes]
    master0.example.com openshift_node_group_name=node-config-master
    master1.example.com openshift_node_group_name=node-config-master
    master2.example.com openshift_node_group_name=node-config-master
    infra0.example.com openshift_node_group_name=node-config-infra
    infra1.example.com openshift_node_group_name=node-config-infra
    infra2.example.com openshift_node_group_name=node-config-infra
    app0.example.com openshift_node_group_name=node-config-compute
    app1.example.com openshift_node_group_name=node-config-compute
    app2.example.com openshift_node_group_name=node-config-compute
    1 If you use the openshift_node_local_quota_per_fsgroup parameter, you must specify the partition or LVM volume to use for the /var/lib/origin/openshift.local.volumes directory. The partition must be mounted with the gquota option in fstab.
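
    For example, an /etc/fstab entry for the local volume disk might look like the following; the /dev/vdc device name and the XFS file system are assumptions for this sketch:

    # XFS file system on an assumed /dev/vdc, mounted with the gquota option
    /dev/vdc  /var/lib/origin/openshift.local.volumes  xfs  defaults,gquota  0 0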

    This inventory file uses the following nodes and disks:

    • One load balancer instance

    • Three master instances

      • Extra disks attached: 15 GB for Docker storage, 30 GB for local volume storage, and 25 GB for etcd

    • Three infrastructure instances

      • Extra disks attached: 15 GB for Docker storage, 30 GB for local volume storage, and, because this cluster uses GlusterFS for persistent storage, 25 GB for the GlusterFS registry

    • Three application instances

      • Extra disks attached: 15 GB for Docker storage and 30 GB for local volume storage

  2. Continue to install the cluster by following the Installing OKD steps. During that process, make any changes to your inventory file that your cluster needs.
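
    For reference, with an RPM installation of openshift-ansible, that process typically runs the prerequisites playbook and then the cluster deployment playbook; the path below assumes the RPM install location:

    $ cd /usr/share/ansible/openshift-ansible
    $ ansible-playbook -i /etc/ansible/hosts playbooks/prerequisites.yml
    $ ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml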