Common ways to connect to OKD include "from the outside" (via a pre-existing kubeconfig file), and "from the inside" (running within a Pod, using the Pod’s ServiceAccount principal).

Connecting via a pre-existing kubeconfig file

An example showing how to connect to OKD via a pre-existing kubeconfig file is included in Getting Started.

In particular, the example shows:

  • Instantiating a loader for the kubeconfig file:

    kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
  • Determining the namespace referenced by the current context in the kubeconfig file:

    namespace, _, err := kubeconfig.Namespace()
  • Getting a rest.Config from the kubeconfig file. This is passed to every client object created:

    restconfig, err := kubeconfig.ClientConfig()
  • Creating clients from the rest.Config:

    coreclient, err := corev1client.NewForConfig(restconfig)
    buildclient, err := buildv1client.NewForConfig(restconfig)
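Putting the fragments above together, a minimal out-of-cluster program might look like the following sketch. It mirrors the fragments shown above; error handling via panic is assumed for brevity, and the List calls take only options, matching the client-go release this document targets (newer releases also take a context.Context as the first argument).

```go
package main

import (
        "fmt"

        buildv1client "github.com/openshift/client-go/build/clientset/versioned/typed/build/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        // Instantiate a loader for the kubeconfig file, honouring the
        // KUBECONFIG environment variable and the default file locations.
        kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
                clientcmd.NewDefaultClientConfigLoadingRules(),
                &clientcmd.ConfigOverrides{},
        )

        // Determine the namespace referenced by the current context in the
        // kubeconfig file.
        namespace, _, err := kubeconfig.Namespace()
        if err != nil {
                panic(err)
        }

        // Get a rest.Config from the kubeconfig file.
        restconfig, err := kubeconfig.ClientConfig()
        if err != nil {
                panic(err)
        }

        // Create clients from the rest.Config.
        coreclient, err := corev1client.NewForConfig(restconfig)
        if err != nil {
                panic(err)
        }
        buildclient, err := buildv1client.NewForConfig(restconfig)
        if err != nil {
                panic(err)
        }

        // List Pods and Builds in the current namespace.
        pods, err := coreclient.Pods(namespace).List(metav1.ListOptions{})
        if err != nil {
                panic(err)
        }
        for _, pod := range pods.Items {
                fmt.Printf("Pod %s\n", pod.Name)
        }

        builds, err := buildclient.Builds(namespace).List(metav1.ListOptions{})
        if err != nil {
                panic(err)
        }
        for _, build := range builds.Items {
                fmt.Printf("Build %s\n", build.Name)
        }
}
```

Note that the program needs a reachable cluster and a valid kubeconfig at runtime; without one, ClientConfig() returns an error and the program panics.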

Connecting from within a pod running in the cluster

The following example connects to OKD from within a Pod, using the Pod’s ServiceAccount principal.

package main

import (
        "fmt"
        "net/http"
        "os"

        buildv1client "github.com/openshift/client-go/build/clientset/versioned/typed/build/v1"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/rest"
)

func main() {
        // Build a rest.Config from configuration injected into the Pod by
        // Kubernetes. Clients will use the Pod's ServiceAccount principal.
        restconfig, err := rest.InClusterConfig()
        if err != nil {
                panic(err)
        }

        // If you need to know the Pod's Namespace, adjust the Pod's spec to pass
        // the information into an environment variable in advance via the downward
        // API.
        namespace := os.Getenv("NAMESPACE")
        if namespace == "" {
                panic("NAMESPACE was not set")
        }

        // Create a Kubernetes core/v1 client.
        coreclient, err := corev1client.NewForConfig(restconfig)
        if err != nil {
                panic(err)
        }

        // Create an OpenShift build/v1 client.
        buildclient, err := buildv1client.NewForConfig(restconfig)
        if err != nil {
                panic(err)
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(rw http.ResponseWriter, req *http.Request) {
                rw.Header().Set("Cache-Control", "no-store, must-revalidate")
                rw.Header().Set("Content-Type", "text/plain")

                // List all Pods in our current Namespace.
                pods, err := coreclient.Pods(namespace).List(metav1.ListOptions{})
                if err != nil {
                        panic(err)
                }

                fmt.Fprintf(rw, "Pods in namespace %s:\n", namespace)
                for _, pod := range pods.Items {
                        fmt.Fprintf(rw, "  %s\n", pod.Name)
                }

                // List all Builds in our current Namespace.
                builds, err := buildclient.Builds(namespace).List(metav1.ListOptions{})
                if err != nil {
                        panic(err)
                }

                fmt.Fprintf(rw, "Builds in namespace %s:\n", namespace)
                for _, build := range builds.Items {
                        fmt.Fprintf(rw, "  %s\n", build.Name)
                }
        })

        // Run an HTTP server on port 8080 which will serve the pod and build list.
        err = http.ListenAndServe(":8080", mux)
        if err != nil {
                panic(err)
        }
}
Note: to try out the above example, you will need to ensure:

  • The Pod’s ServiceAccount (called "default" by default) has permissions to list Pods and Builds. One way to achieve this is by running oc policy add-role-to-user view -z default.

  • The downward API is used to pass the Pod’s Namespace into an environment variable so that it can be picked up by the application. The following Pod spec achieves this:

    kind: Pod
    apiVersion: v1
    metadata:
      name: getting-started
    spec:
      containers:
      - name: c
        image: ...
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace