Understanding networking is essential for building resilient, secure, and scalable applications in OKD. From basic pod-to-pod communication to complex traffic routing and security rules, every component of your application relies on the network to function correctly.

Core network layers and components

Red Hat OpenShift Networking is built on two fundamental layers: the pod network and the service network. The pod network is where your applications live. The service network makes your applications reliably accessible.

The pod network

The pod network is a flat network space where every pod in the cluster receives its own unique IP address. This network is managed by the Container Network Interface (CNI) plugin, which is responsible for wiring each pod into the cluster network.

This design allows pods to communicate directly with each other using their IP addresses, regardless of which node they are running on. However, pod IPs are ephemeral: an address is released when its pod is destroyed, and a replacement pod receives a new one. Because of this, you should never rely on pod IP addresses directly for long-lived communication.
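
If an application does need to know its own address, a common pattern is to read it at runtime instead of hard-coding it. The following sketch, with hypothetical names and image, uses the Kubernetes downward API to expose the pod's current IP as an environment variable:

  # A minimal sketch: expose the pod's own (ephemeral) IP to the application
  # through the downward API instead of hard-coding addresses.
  apiVersion: v1
  kind: Pod
  metadata:
    name: ip-demo
  spec:
    containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP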

The service network

A service is a networking object that provides a single, stable virtual IP address, called a ClusterIP, and a DNS name for a logical group of pods.

When a request is sent to a service’s ClusterIP, OKD automatically load-balances the traffic to one of the healthy pods backing that service. The service uses Kubernetes labels and selectors to keep track of which pods belong to it. This abstraction makes your applications resilient because individual pods can be created or destroyed without affecting the applications trying to reach them.
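
A minimal Service definition might look like the following sketch; the names, labels, and ports are hypothetical, and the selector must match the labels on the pods you want the service to front:

  # A minimal sketch of a Service (hypothetical names, labels, and ports).
  apiVersion: v1
  kind: Service
  metadata:
    name: backend
    namespace: shop
  spec:
    selector:
      app: backend        # routes to healthy pods labeled app=backend
    ports:
    - protocol: TCP
      port: 8080          # port exposed on the ClusterIP
      targetPort: 8080    # port the backing pods listen on

Clients inside the cluster connect to the service’s ClusterIP (or DNS name) on port 8080, and OKD forwards each connection to one of the matching pods.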

Managing traffic within the cluster

Your applications need to communicate with each other inside the cluster. OKD provides two primary mechanisms for internal traffic: direct pod-to-pod communication for simple exchanges and robust service discovery for reliable connections.

Pod-to-pod communication

Pods communicate directly using the unique IP addresses assigned by the pod network. A pod on one node can send traffic directly to a pod on another node without any network address translation (NAT). This direct communication model is efficient for services that need to exchange data quickly. Applications can simply target another pod’s IP address to establish a connection.

Service discovery with DNS

Pods need a reliable way to find each other because pod IP addresses are ephemeral. OKD uses CoreDNS, a built-in DNS server, to provide this service discovery.

Every service you create automatically receives a stable DNS name. A pod can use this DNS name to connect to the service. The DNS system resolves the name to the service’s stable ClusterIP address. This process ensures reliable communication even when individual pod IPs change.
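
The DNS name follows a predictable pattern: <service>.<namespace>.svc.<cluster domain>, where the cluster domain defaults to cluster.local. The sketch below, with hypothetical names, shows a client pod calling a service named backend in the shop namespace through its DNS name:

  # A minimal sketch (hypothetical names): a client pod reaching the "backend"
  # Service in the "shop" namespace through cluster DNS.
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-check
    namespace: shop
  spec:
    restartPolicy: Never
    containers:
    - name: curl
      image: curlimages/curl    # any image with an HTTP client works
      command: ["curl", "-s", "http://backend.shop.svc.cluster.local:8080/"]

Because pods are configured with DNS search domains for their namespace, a pod in the same namespace could also use the short name backend.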

Managing traffic entering and leaving the cluster

You need a way for external users to access your applications and for your applications to securely access external services. OKD provides several tools to manage this flow of traffic into and out of your cluster.

Exposing applications with Ingress and Route objects

To allow external traffic to reach services inside your cluster, you use an Ingress Controller. This component acts as the front door that directs incoming requests to the correct application. You define the traffic rules using one of two primary resources:

  • Ingress: The standard Kubernetes resource for managing external access to services, typically for HTTP and HTTPS traffic.

  • Route object: A resource that provides the same functionality as Ingress but includes additional features like more advanced TLS termination options and traffic splitting. Route objects are specific to OKD.
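
The following sketches show roughly equivalent definitions, first as a standard Ingress and then as an OKD Route. The hostname, service name, and port are hypothetical and depend on your environment:

  # A standard Ingress: route HTTP traffic for a hypothetical hostname
  # to the "frontend" Service on port 8080.
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: frontend
  spec:
    rules:
    - host: app.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: frontend
              port:
                number: 8080

The comparable Route adds OKD-specific options such as TLS termination at the router:

  # A sketch of an OKD Route with edge TLS termination (hypothetical names).
  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: frontend
  spec:
    host: app.example.com
    to:
      kind: Service
      name: frontend
    port:
      targetPort: 8080
    tls:
      termination: edge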

Distributing traffic with Load Balancers

A Load Balancer provides a single, highly available IP address for directing traffic to your cluster. It typically runs outside the cluster, provided by a cloud provider or by MetalLB on bare-metal infrastructure, and distributes incoming requests across the nodes that run the Ingress Controller.

This prevents any single node from becoming a bottleneck or a single point of failure, ensuring that your applications remain accessible.
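
You can also request a load balancer for a single application by creating a Service of type LoadBalancer, as in the sketch below. The names and ports are hypothetical, and whether an external IP is actually provisioned depends on your environment (a cloud provider integration or MetalLB on bare metal):

  # A minimal sketch of a LoadBalancer Service (hypothetical names and ports).
  # The platform (cloud provider or MetalLB) assigns the external IP address.
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-lb
  spec:
    type: LoadBalancer
    selector:
      app: frontend
    ports:
    - port: 80            # external port on the load balancer IP
      targetPort: 8080    # port the pods listen on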

Controlling Egress traffic

Egress refers to outbound traffic that originates from a pod inside the cluster and is destined for an external system. OKD provides several mechanisms to manage this:

  • EgressIP: You can assign a specific, predictable source IP address to all outbound traffic from a given project. This is useful when an external service, such as a database, sits behind a firewall that only allows connections from known source IPs.

  • Egress Router: This is a dedicated pod that acts as a gateway for outbound traffic. It allows you to route connections through a single, controlled exit point.

  • Egress Firewall: This acts as a firewall for outbound traffic from pods. It enhances your security posture by allowing you to create rules that explicitly allow or deny connections from pods to specific external destinations.
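
The sketches below assume the OVN-Kubernetes network plugin, whose EgressIP and EgressFirewall resources live in the k8s.ovn.org/v1 API group; the IP ranges, labels, and namespace are hypothetical:

  # A sketch of an EgressIP: outbound traffic from namespaces labeled
  # env=prod leaves the cluster with the source IP 192.0.2.100.
  apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    name: egressip-prod
  spec:
    egressIPs:
    - 192.0.2.100
    namespaceSelector:
      matchLabels:
        env: prod

  # A sketch of an Egress Firewall for one project: allow outbound traffic
  # to a specific external subnet and deny everything else.
  apiVersion: k8s.ovn.org/v1
  kind: EgressFirewall
  metadata:
    name: default
    namespace: shop
  spec:
    egress:
    - type: Allow
      to:
        cidrSelector: 203.0.113.0/24
    - type: Deny
      to:
        cidrSelector: 0.0.0.0/0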

Securing network traffic

OKD provides tools to secure your network by creating rules that control which components are allowed to communicate. This is primarily managed through two types of policy resources: network policies and administrative network policies.

Network policies

A network policy is a resource that allows you to control the flow of traffic at the IP address or port level. These policies operate at the namespace (project) level. This means they are typically managed by developers or project administrators to secure their specific applications.

By default, all pods in a project can communicate with each other freely. However, once a pod is selected by a NetworkPolicy, it adopts a "default-deny" stance: any connection that is not explicitly allowed by a policy rule is rejected. You use labels and selectors to define which pods a policy applies to and what ingress and egress traffic is permitted.
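
For example, the following sketch (hypothetical labels and port) allows pods labeled app=backend to accept ingress traffic only from pods labeled app=frontend in the same namespace, on TCP port 8080; all other connections to those pods are rejected:

  # A minimal sketch of a NetworkPolicy (hypothetical labels and port).
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-backend
  spec:
    podSelector:
      matchLabels:
        app: backend          # the pods this policy protects
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend     # the only pods allowed to connect
      ports:
      - protocol: TCP
        port: 8080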

Administrative network policies

An AdminNetworkPolicy object is a more powerful, cluster-scoped version of a NetworkPolicy object. It can only be created and managed by a cluster administrator.

Administrative network policies have a higher priority than standard NetworkPolicy objects. This allows administrators to enforce cluster-wide security rules that cannot be overridden by users in their own projects. For example, an administrator could use an AdminNetworkPolicy to block all traffic between development and production namespaces or to enforce baseline security rules for the entire cluster.
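
As a rough sketch of that example, an AdminNetworkPolicy that denies ingress from namespaces labeled env=development into namespaces labeled env=production might look like the following. The resource belongs to the evolving policy.networking.k8s.io/v1alpha1 API, so exact field names can vary between versions, and the labels and priority value here are hypothetical:

  # A rough sketch of an AdminNetworkPolicy (fields may vary by API version).
  apiVersion: policy.networking.k8s.io/v1alpha1
  kind: AdminNetworkPolicy
  metadata:
    name: block-dev-to-prod
  spec:
    priority: 10                  # lower numbers take precedence
    subject:
      namespaces:
        matchLabels:
          env: production         # the namespaces this policy protects
    ingress:
    - name: deny-from-development
      action: Deny
      from:
      - namespaces:
          matchLabels:
            env: development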
