Overview

Using a router is the most common way to allow external access to an OKD cluster.

A router is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS (with SNI), and TLS (with SNI), which covers web applications.
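
For illustration only, a minimal route object might look like the following. The host and service names here are placeholders, not values from this procedure:

    apiVersion: v1
    kind: Route
    metadata:
      name: example-route
    spec:
      host: www.example.com       # external host name the router answers for
      to:
        kind: Service
        name: example-service     # service that receives the proxied traffic
      tls:
        termination: edge         # terminate HTTPS at the router (optional)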

Administrator Prerequisites

Before starting this procedure, the administrator must:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows users to set up routes within the cluster without further administrator attention (see the example DNS record after this list).

  • Make sure that the local firewall on each node permits requests to reach the IP address.

  • Configure the OKD cluster to use an identity provider that allows appropriate user access.

  • Make sure there is at least one user with cluster-admin role. To add this role to a user, run the following command:

    $ oc adm policy add-cluster-role-to-user cluster-admin <username>
  • Have an OKD cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.
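
As an example of the DNS wildcard mentioned above, a BIND-style record such as the following points every name under a subdomain at a single IP address in the cluster. The domain and address are illustrative only:

    ; any host under apps.example.com resolves to the router IP
    *.apps.example.com.    300    IN    A    192.168.120.10

With this record in place, a route host such as myapp.apps.example.com resolves without any further DNS changes.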

Defining the Public IP Range

The first step in allowing access to a service is to define an external IP address range in the master configuration file:

  1. Log in to OKD as a user with the cluster admin role.

    $ oc login
    Authentication required (openshift)
    Username: admin
    Password:
    Login successful.
    
    You have access to the following projects and can switch between them with 'oc project <projectname>':
      * default
    Using project "default".
  2. Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

    networkConfig:
      externalIPNetworkCIDRs:
      - <ip_address>/<cidr>

    For example:

    networkConfig:
      externalIPNetworkCIDRs:
      - 192.168.120.0/24
  3. Restart the OKD master services to apply the changes.

    # master-restart api
    # master-restart controllers

The IP address pool must terminate at one or more nodes in the cluster.
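
To confirm that the configuration was applied, you can inspect the file after the restart. Assuming the example CIDR above:

    $ grep -A 1 externalIPNetworkCIDRs /etc/origin/master/master-config.yaml
      externalIPNetworkCIDRs:
      - 192.168.120.0/24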

Create a Project and Service

If the project and service that you want to expose do not exist, first create the project, then the service.

If the project and service already exist, skip to the next section, Expose the Service to Create a Route.

  1. Log in to OKD.

  2. Create a new project for your service:

    $ oc new-project <project_name>

    For example:

    $ oc new-project external-ip
  3. Use the oc new-app command to create a service:

    For example:

    $ oc new-app \
        -e MYSQL_USER=admin \
        -e MYSQL_PASSWORD=redhat \
        -e MYSQL_DATABASE=mysqldb \
        registry.redhat.io/openshift3/mysql-55-rhel7
  4. Run the following command to see that the new service is created:

    $ oc get svc
    NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    mysql-55-rhel7     172.30.131.89   <none>        3306/TCP   13m

    By default, the new service does not have an external IP address.
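
If you need more detail than the table output, oc describe shows the service's selector, cluster IP, and endpoints. The output below is abbreviated, and the endpoint address is a placeholder that will differ in your cluster:

    $ oc describe svc mysql-55-rhel7
    Name:              mysql-55-rhel7
    Namespace:         external-ip
    Type:              ClusterIP
    IP:                172.30.131.89
    Port:              3306-tcp  3306/TCP
    Endpoints:         10.128.0.13:3306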

Expose the Service to Create a Route

You must expose the service as a route using the oc expose command.

To expose the service:

  1. Log in to OKD.

  2. Log in to the project where the service you want to expose is located.

    $ oc project external-ip
  3. Run the following command to expose the route; you can inspect the resulting route after this procedure:

    $ oc expose service <service_name>

    For example:

    $ oc expose service mysql-55-rhel7
    route "mysql-55-rhel7" exposed
  4. On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

    $ curl <cluster_ip>:<port>

    For example:

    $ curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

    If you have a MySQL client, log in with the standard CLI command:

    $ mysql -h 172.30.131.89 -u admin -p
    Enter password:
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    
    MySQL [(none)]>
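
To check the route that step 3 created, list it by name. The host name shown below is illustrative; the actual value is generated from the route name, project name, and your cluster's routing subdomain:

    $ oc get route mysql-55-rhel7
    NAME             HOST/PORT                                     PATH      SERVICES         PORT       TERMINATION
    mysql-55-rhel7   mysql-55-rhel7-external-ip.apps.example.com             mysql-55-rhel7   3306-tcp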

Configure the Router

Work with your administrator to configure a router to accept external requests and proxy them based on the configured routes.

The administrator can create a wildcard DNS entry and then set up a router. After that, you can self-service routes through the edge router without having to contact the administrator.

The router has controls that allow the administrator to specify whether users can self-provision host names or whether host names must match a specific pattern.

When a set of routes is created in various projects, the overall set of routes is available to the set of routers. Each router admits (or selects) routes from the set of routes. By default, all routers admit all routes.

Routers that have permission to view all of the labels in all projects can select routes to admit based on the labels. This is called router sharding. This is useful when balancing incoming traffic load among a set of routers and when isolating traffic to a specific router. For example, company A goes to one router and company B to another.
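
As a sketch of router sharding with the default HAProxy router, the set of admitted routes can be restricted with the ROUTE_LABELS environment variable on the router deployment. The router name and label below are hypothetical:

    $ oc adm router router-company-a --replicas=1
    $ oc set env dc/router-company-a ROUTE_LABELS="customer=company-a"

Routes labeled customer=company-a are then admitted by this router, and all other routes are ignored.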

Since a router runs on a specific node, ingress traffic stops when the router or the node fails. The impact of this can be reduced by creating redundant routers on different nodes and using high availability to switch the router IP address when a node fails.
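
For example, the following command (the node selector and replica count are illustrative) runs two router replicas on different infrastructure nodes, so the loss of one node does not stop ingress:

    $ oc adm router router --replicas=2 --selector='region=infra'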

Configure IP Failover using VIPs

Optionally, an administrator can configure IP failover.

IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as a single node is available, the VIPs will be served. There is no way to explicitly distribute the VIPs over the nodes. As such, there may be nodes with no VIPs and other nodes with multiple VIPs. If there is only one node, all VIPs will be on it.

The VIPs must be routable from outside the cluster.
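
As a sketch, the following command deploys a keepalived-based IP failover configuration. The label, VIP range, and interface are illustrative, with the VIPs drawn from the externalIPNetworkCIDRs example above:

    $ oc adm ipfailover ipf-ha-router \
        --replicas=2 \
        --selector="ha-router=primary" \
        --virtual-ips="192.168.120.5-7" \
        --watch-port=80 \
        --interface=eth0 \
        --create

The --selector determines which nodes can host the VIPs, and --watch-port names the port that keepalived checks to decide whether a node is healthy.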