As an administrator, you can observe the network traffic in the OKD web console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow.
The Network Traffic Overview view provides aggregated flow metrics and visual insights into application communications. Administrators can use the metrics to monitor data volume, troubleshoot connectivity, and detect unusual traffic patterns across the cluster.
The Overview view shows aggregate network traffic in your OKD cluster, allowing you to see which applications are communicating and the volume of data being transferred. It provides detailed insights by source, destination, and flow type, along with the top traffic flows and average byte rates.
As an administrator, you can troubleshoot connectivity issues, detect unusual traffic patterns, and optimize application performance. The Overview view provides a quick summary of network behavior, making it easier to prioritize actions and ensure efficient resource usage.
Navigate to the network traffic Overview view in the OKD console to see graphical representations of flow rate statistics and configure the display scope using available options.
Access to the cluster with administrator rights.
Navigate to Observe → Network Traffic.
In the Network Traffic page, click the Overview tab.
You can configure the scope of the flow rate data in each panel by clicking the menu icon.
Customize the network traffic Overview view by configuring advanced options, such as graph scope, label truncation, and panel management, to refine the display of flow rate statistics and traffic data.
To access the advanced options, click Show advanced options. You can configure the details in the graph by using the Display options drop-down menu. The options available are as follows:
Scope: Select to view the components that network traffic flows between. You can set the scope to Node, Namespace, Owner, Zones, Cluster, or Resource. Owner is an aggregation of resources. Resource can be a pod, a service, a node in the case of host-network traffic, or an unknown IP address. The default value is Namespace.
Truncate labels: Select the required width of the label from the drop-down list. The default value is M.
You can select the required panels to be displayed, reorder them, and focus on a specific panel. To add or remove panels, click Manage panels.
The following panels are shown by default:
Top X average bytes rates
Top X bytes rates stacked with total
Other panels can be added in Manage panels:
Top X average packets rates
Top X packets rates stacked with total
Query options let you choose whether to show the Top 5, Top 10, or Top 15 rates.
Monitor and analyze network packet loss by using eBPF-based packet drop tracking, which identifies drop locations, detects host or OVS-specific drop reasons, and provides dedicated graphical panels in the Overview view.
You can configure a graphical representation of network flow records with packet loss in the Overview view. By employing eBPF tracepoint hooks, you can gain valuable insights into packet drops for the TCP, UDP, SCTP, ICMPv4, and ICMPv6 protocols, which supports the following activities:
Identification: Pinpoint the exact locations and network paths where packet drops are occurring. Determine whether specific devices, interfaces, or routes are more prone to drops.
Root cause analysis: Examine the data collected by the eBPF program to understand the causes of packet drops. For example, are they a result of congestion, buffer issues, or specific network events?
Performance optimization: With a clearer picture of packet drops, you can take steps to optimize network performance, such as adjusting buffer sizes, reconfiguring routing paths, or implementing Quality of Service (QoS) measures.
When packet drop tracking is enabled, you can see the following panels in the Overview by default:
Top X packet dropped state stacked with total
Top X packet dropped cause stacked with total
Top X average dropped packets rates
Top X dropped packets rates stacked with total
Other packet drop panels are available to add in Manage panels:
Top X average dropped bytes rates
Top X dropped bytes rates stacked with total
Two kinds of packet drops are detected by network observability: host drops and OVS drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP. Dropped flows are shown in the side panel of the Traffic flows table along with a link to a description of each drop type. Examples of host drop reasons are as follows:
SKB_DROP_REASON_NO_SOCKET: the packet dropped due to a missing socket.
SKB_DROP_REASON_TCP_CSUM: the packet dropped due to a TCP checksum error.
Examples of OVS drop reasons are as follows:
OVS_DROP_LAST_ACTION: OVS packets dropped due to an implicit drop action, for example due to a configured network policy.
OVS_DROP_IP_TTL: OVS packets dropped due to an expired IP TTL.
See the Additional resources of this section for more information about enabling and working with packet drop tracking.
Monitor DNS activity by using eBPF-based DNS tracking to gain insights into query patterns, detect security threats, and troubleshoot latency issues through dedicated graphical panels in the Overview view.
You can configure graphical representation of Domain Name System (DNS) tracking of network flows in the Overview view. Using DNS tracking with extended Berkeley Packet Filter (eBPF) tracepoint hooks can serve various purposes:
Network Monitoring: Gain insights into DNS queries and responses, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues.
Security Analysis: Detect suspicious DNS activities, such as domain name generation algorithms (DGA) used by malware, or identify unauthorized DNS resolutions that might indicate a security breach.
Troubleshooting: Debug DNS-related issues by tracing DNS resolution steps, tracking latency, and identifying misconfigurations.
By default, when DNS tracking is enabled, you can see the following non-empty metrics represented in a donut or line chart in the Overview:
Top X DNS Response Code
Top X average DNS latencies with overall
Top X 90th percentile DNS latencies
Other DNS tracking panels can be added in Manage panels:
Bottom X minimum DNS latencies
Top X maximum DNS latencies
Top X 99th percentile DNS latencies
This feature is supported for IPv4 and IPv6 UDP and TCP protocols.
See the Additional resources in this section for more information about enabling and working with this view.
Analyze network flow latencies by using TCP Round-Trip Time (RTT) metrics, which use eBPF hookpoints to identify performance bottlenecks and troubleshoot TCP-related issues through dedicated panels in the Overview view.
You can use TCP smoothed Round-Trip Time (sRTT) to analyze network flow latencies. You can use RTT captured from the fentry/tcp_rcv_established eBPF hookpoint to read sRTT from the TCP socket to help with the following:
Network Monitoring: Gain insights into TCP latencies, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues.
Troubleshooting: Debug TCP-related issues by tracking latency and identifying misconfigurations.
By default, when RTT is enabled, you can see the following TCP RTT metrics represented in the Overview:
Top X 90th percentile TCP Round Trip Time with overall
Top X average TCP Round Trip Time with overall
Bottom X minimum TCP Round Trip Time with overall
Other RTT panels can be added in Manage panels:
Top X maximum TCP Round Trip Time with overall
Top X 99th percentile TCP Round Trip Time with overall
See the Additional resources in this section for more information about enabling and working with this view.
Control packet capture volume by using eBPF flow rule filtering to specify capture criteria based on ports and CIDR notation, while monitoring filter performance through dedicated health dashboards and Prometheus metrics.
You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. For example, a filter can specify that only packets coming from port 100 should be captured. Then only the packets that match the filter are captured and the rest are dropped.
You can apply multiple filter rules.
Classless Inter-Domain Routing (CIDR) notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation.
After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the peerIP to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached.
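For example, a minimal sketch of a filter that captures only flows using port 100 might look like the following. The rule structure matches the flowFilter examples shown later in this section; the CIDR and port values are illustrative placeholders.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      flowFilter:
        enable: true
        rules:
        - action: Accept      # cache only flows that match this rule
          cidr: 0.0.0.0/0     # match any IP address
          ports: 100          # match flows with port 100 as source or destination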
When this option is enabled, the Netobserv/Health dashboard for eBPF agent statistics has the Filtered flows rate view. Additionally, in Observe → Metrics you can query netobserv_agent_filtered_flows_total to observe metrics with the reason in FlowFilterAcceptCounter, FlowFilterNoMatchCounter, or FlowFilterRejectCounter.
Reference the required and optional parameters for configuring flow filter rules in the FlowCollector resource, including CIDR ranges, filter actions, protocols, and specific port configurations.
Required parameters:

| Parameter | Description |
|---|---|
| enable | Set enable to true to enable the eBPF flow filter feature. |
| cidr | Provides the IP address and CIDR mask for the flow filter rule. Supports both IPv4 and IPv6 address formats. If you want to match against any IP address, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. |
| action | Describes the action that is taken for the flow filter rule. The possible values are Accept or Reject. |
Optional parameters:

| Parameter | Description |
|---|---|
| direction | Defines the direction of the flow filter rule. Possible values are Ingress or Egress. |
| protocol | Defines the protocol of the flow filter rule. Possible values are TCP, UDP, ICMP, ICMPv6, and SCTP. |
| tcpFlags | Defines the TCP flags to filter flows, such as SYN, ACK, or FIN-ACK. |
| ports | Defines the ports to use for filtering flows. It can be used for either source or destination ports. To filter a single port, set a single port as an integer value, for example ports: 80. |
| sourcePorts | Defines the source port to use for filtering flows. To filter a single port, set a single port as an integer value, for example sourcePorts: 80. |
| destPorts | Defines the destination ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example destPorts: 80. |
| icmpType | Defines the ICMP type to use for filtering flows. |
| icmpCode | Defines the ICMP code to use for filtering flows. |
| peerIP | Defines the peer IP address to use for filtering flows. |
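For illustration, the following sketch combines several of these parameters in a single rule that accepts only egress TCP traffic to destination port 6443 from the specified CIDR. This assumes the optional parameters can be combined within one rule; the CIDR and port values are placeholders.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      flowFilter:
        enable: true
        rules:
        - action: Accept        # keep flows that match this rule
          cidr: 10.128.0.0/14   # placeholder pod network CIDR
          protocol: TCP
          direction: Egress
          destPorts: 6443       # placeholder destination port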
Use the Traffic flows view to monitor real-time and historical network communication between cluster components. By analyzing granular flow data collected via eBPF, you can audit network traffic, validate network policies, and export data for external reporting and analysis.
The Traffic flows view in the Network Observability Operator provides a granular, tabular representation of network activity across an OKD cluster. By leveraging eBPF technology to collect flow data, this view allows administrators to monitor real-time and historical communication between pods, services, and nodes. This visibility is essential for auditing network traffic, validating network policies, and identifying unexpected communication patterns within the cluster infrastructure.
In the Traffic flows interface, you can analyze specific connection details by interacting with individual rows to retrieve detailed flow information. The view supports advanced customization through the Display options menu, where you can adjust row density and manage columns. By selecting and reordering specific columns, you can tailor the table to highlight the most relevant data points for your environment, such as source and destination endpoints or traffic volume.
To support external analysis and reporting, the Traffic flows view includes data export capabilities. You can export the entire data set or select specific fields to generate a targeted report of network activity. This functionality ensures that network flow data is accessible for long-term auditing or for use in third-party monitoring tools, providing a flexible way to document and analyze the network health of your OKD environment.
View and analyze detailed network flow information by using the Traffic flows table.
As an administrator, you can navigate to the Traffic flows table to see network flow information.
You have administrator access.
Navigate to Observe → Network Traffic.
In the Network Traffic page, click the Traffic flows tab.
You can click on each row to get the corresponding flow information.
Customize the Traffic flows view by adjusting row density, selecting specific data columns, and exporting filtered flow data for external analysis.
You can customize and export the view by using Show advanced options. You can set the row size by using the Display options drop-down menu. The default value is Normal.
Enable IPsec tracking in the FlowCollector resource to monitor encrypted traffic, adding an IPsec status column to the traffic flow view and generating a dedicated encryption dashboard.
In OKD, IPsec is disabled by default. You can enable IPsec by following the instructions in "Configuring IPsec encryption".
You have enabled IPsec encryption on OKD.
In the web console, navigate to Operators → Installed Operators.
Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster then select the YAML tab.
Configure the FlowCollector custom resource for IPsec:
FlowCollector for IPsec
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
agent:
type: eBPF
ebpf:
features:
- "IPSec"
When IPsec is enabled:
A new column named IPsec Status is displayed in the network observability Traffic flows view to show whether a flow was successfully IPsec-encrypted or if there was an error during encryption/decryption.
A new dashboard showing the percent of encrypted traffic is generated.
Configure the FlowCollector custom resource to enable conversation tracking for grouping and analyzing related network flows in the web console.
As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in a unique Conversation Id. You can query conversation events in the web console. These events are represented in the web console as follows:
Conversation start: This event happens when a connection starts or a TCP flag is intercepted.
Conversation tick: This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active.
Conversation end: This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted.
Flow: This is the network traffic flow that occurs within the specified interval.
In the web console, navigate to Operators → Installed Operators.
Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster then select the YAML tab.
Configure the FlowCollector custom resource so that spec.processor.logTypes, conversationEndTimeout, and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows:
FlowCollector for conversation tracking
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
processor:
logTypes: Flows (1)
advanced:
conversationEndTimeout: 10s (2)
conversationHeartbeatInterval: 30s (3)
| 1 | When logTypes is set to Flows, only the Flow event is exported. If you set the value to All, both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify Conversations, which exports the Conversation start, Conversation tick, and Conversation end events, or EndedConversations, which exports only the Conversation end events. Storage requirements are highest for All and lowest for EndedConversations. |
| 2 | The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted. |
| 3 | The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. |
|
If you update the |
Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id. All the Event/Type fields are Flow when Flow is the selected query option.
Select Query Options and choose the Log Type, Conversation. Now the Event/Type shows all of the desired conversation events.
Next you can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel.
Enable packet drop tracking in the Network Observability Operator by configuring the FlowCollector resource to monitor and visualize network data loss in the web console.
Packet loss occurs when one or more packets of network flow data fail to reach their destination. You can track these drops by editing the FlowCollector resource to match the specifications in the following YAML example.
|
CPU and memory usage increases when this feature is enabled. |
In the web console, navigate to Operators → Installed Operators.
Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster, and then select the YAML tab.
Configure the FlowCollector custom resource for packet drops, for example:
FlowCollector configuration
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
agent:
type: eBPF
ebpf:
features:
- PacketDrop (1)
privileged: true (2)
| 1 | You can start reporting the packet drops of each network flow by listing the PacketDrop parameter in the spec.agent.ebpf.features specification list. |
| 2 | The spec.agent.ebpf.privileged specification value must be true for packet drop tracking. |
When you refresh the Network Traffic page, the Overview, Traffic Flow, and Topology views display new information about packet drops:
Select new choices in Manage panels to choose which graphical visualizations of packet drops to display in the Overview.
Select new choices in Manage columns to choose which packet drop information to display in the Traffic flows table.
In the Traffic Flows view, you can also expand the side panel to view more information about packet drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP.
In the Topology view, red lines are displayed where drops are present.
Configure the FlowCollector custom resource to enable DNS tracking for monitoring network performance, security analysis, and DNS troubleshooting in the web console.
You can track DNS by editing the FlowCollector resource to match the specifications in the following YAML example.
|
CPU and memory usage increases are observed in the eBPF agent when this feature is enabled. |
In the web console, navigate to Operators → Installed Operators.
Under the Provided APIs heading for Network Observability, select Flow Collector.
Select cluster then select the YAML tab.
Configure the FlowCollector custom resource. A sample configuration is as follows:
FlowCollector for DNS tracking
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
agent:
type: eBPF
ebpf:
features:
- DNSTracking (1)
sampling: 1 (2)
| 1 | You can set the spec.agent.ebpf.features parameter list to enable DNS tracking of each network flow in the web console. |
| 2 | You can set sampling to a value of 1 for more accurate metrics and to capture DNS latency. For a sampling value greater than 1, you can observe flows with DNS Response Code and DNS Id, and it is unlikely that DNS Latency can be observed. |
When you refresh the Network Traffic page, there are new DNS representations you can choose to view in the Overview and Traffic Flow views and new filters you can apply.
Select new DNS choices in Manage panels to display graphical visualizations and DNS metrics in the Overview.
Select new choices in Manage columns to add DNS columns to the Traffic Flows view.
Filter on specific DNS metrics, such as DNS Id, DNS Error, DNS Latency, and DNS Response Code, and see more information from the side panel. The DNS Latency and DNS Response Code columns are shown by default.
|
TCP handshake packets do not have DNS headers. TCP protocol flows without DNS headers are shown in the traffic flow data with DNS Latency, ID, and Response code values of "n/a". You can filter out flow data to view only flows that have DNS headers using the Common filter "DNSError" equal to "0". |
Enable Round Trip Time (RTT) tracing by configuring the FlowCollector custom resource to monitor and analyze network latency across your cluster by using the web console.
You can track RTT by editing the FlowCollector resource to match the specifications in the following YAML example.
In the web console, navigate to Operators → Installed Operators.
In the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster, and then select the YAML tab.
Configure the FlowCollector custom resource for RTT tracing, for example:
FlowCollector configuration
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
agent:
type: eBPF
ebpf:
features:
- FlowRTT (1)
| 1 | You can start tracing RTT network flows by listing the FlowRTT parameter in the spec.agent.ebpf.features specification list. |
When you refresh the Network Traffic page, the Overview, Traffic Flow, and Topology views display new information about RTT:
In the Overview, select new choices in Manage panels to choose which graphical visualizations of RTT to display.
In the Traffic flows table, the Flow RTT column is visible. You can manage its display in Manage columns.
In the Traffic Flows view, you can also expand the side panel to view more information about RTT.
Click the Common filters → Protocol.
Filter the network flow data based on TCP, Ingress direction, and look for FlowRTT values greater than 10,000,000 nanoseconds (10ms).
Remove the Protocol filter.
Filter for Flow RTT values greater than 0 in the Common filters.
In the Topology view, click the Display option dropdown. Then click RTT in the edge labels drop-down list.
The histogram provides a visualization of network flow logs that you can use to analyze traffic volume trends and filter flow data by specific time intervals.
You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar.
Configure the FlowCollector custom resource to collect availability zone data, enabling the visualization and analysis of network traffic across different cluster zones in the web console.
You can configure the FlowCollector to collect information about the cluster availability zones. This allows you to enrich network flow data with the topology.kubernetes.io/zone label value applied to the nodes.
In the web console, go to Operators → Installed Operators.
Under the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster then select the YAML tab.
Configure the FlowCollector custom resource so that the spec.processor.addZone parameter is set to true. A sample configuration is as follows:
FlowCollector for availability zones collection
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
# ...
processor:
addZone: true
# ...
When you refresh the Network Traffic page, the Overview, Traffic Flow, and Topology views display new information about availability zones:
In the Overview tab, you can see Zones as an available Scope.
In Network Traffic → Traffic flows, Zones are viewable under the SrcK8S_Zone and DstK8S_Zone fields.
In the Topology view, you can set Zones as Scope or Group.
Configure multiple filtering rules in the FlowCollector custom resource to refine network traffic data collection by accepting or rejecting specific eBPF flows based on IP addresses and packet conditions.
In the web console, navigate to Operators → Installed Operators.
Under the Provided APIs heading for Network Observability, select Flow Collector.
Select cluster, then select the YAML tab.
Configure the FlowCollector custom resource, similar to the following sample configurations:
By default, all other flows are rejected.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
deploymentModel: Direct
agent:
type: eBPF
ebpf:
flowFilter:
enable: true (1)
rules:
- action: Accept (2)
cidr: 0.0.0.0/0 (3)
sampling: 1 (4)
- action: Accept
cidr: 10.128.0.0/14
peerCIDR: 10.128.0.0/14 (5)
- action: Accept
cidr: 172.30.0.0/16
peerCIDR: 10.128.0.0/14
sampling: 50
| 1 | To enable eBPF flow filtering, set spec.agent.ebpf.flowFilter.enable to true. |
| 2 | To define the action for the flow filter rule, set the required action parameter. Valid values are Accept or Reject. |
| 3 | To define the IP address and CIDR mask for the flow filter rule, set the required cidr parameter. This parameter supports both IPv4 and IPv6 address formats. To match any IP address, use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. |
| 4 | To define the sampling interval for matched flows and override the global sampling setting spec.agent.ebpf.sampling, set the sampling parameter. |
| 5 | To filter flows by Peer IP CIDR, set the peerCIDR parameter. |
By default, all other flows are rejected.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
deploymentModel: Direct
agent:
type: eBPF
ebpf:
privileged: true (1)
features:
- PacketDrop (2)
flowFilter:
enable: true (3)
rules:
- action: Accept (4)
cidr: 172.30.0.0/16
pktDrops: true (5)
| 1 | To enable packet drops, set spec.agent.ebpf.privileged to true. |
| 2 | To report packet drops for each network flow, add the PacketDrop value to the spec.agent.ebpf.features list. |
| 3 | To enable eBPF flow filtering, set spec.agent.ebpf.flowFilter.enable to true. |
| 4 | To define the action for the flow filter rule, set the required action parameter. Valid values are Accept or Reject. |
| 5 | To filter flows containing drops, set pktDrops to true. |
Endpoint translation (xlat) uses eBPF to enrich network flow logs with translated pod-level metadata, providing visibility into the specific backend pods serving traffic behind services or load balancers.
You can gain visibility into the endpoints serving traffic in a consolidated view using network observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service-related information, such as the service IP and port, and not information about the specific pod that is serving the request. Often the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting.
To solve this, endpoint xlat can help in the following ways:
Capture the network flows at the kernel level, which has a minimal impact on performance.
Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request.
As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint. This metadata includes the following information, which you can view in a single row on the Network Traffic page:
Source Pod IP
Source Port
Destination Pod IP
Destination Port
Enable endpoint translation (xlat) in the FlowCollector resource to enrich network flows with translated packet information. You can use this information to identify the specific pods and objects serving service traffic through dedicated xlat columns.
You can use network observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic.
In the web console, navigate to Operators → Installed Operators.
In the Provided APIs heading for the NetObserv Operator, select Flow Collector.
Select cluster, and then select the YAML tab.
Configure the FlowCollector custom resource for PacketTranslation, for example:
FlowCollector configuration
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
agent:
type: eBPF
ebpf:
features:
- PacketTranslation (1)
| 1 | You can start enriching network flows with translated packet information by listing the PacketTranslation parameter in the spec.agent.ebpf.features specification list. |
When you refresh the Network Traffic page you can filter for information about translated packets:
Filter the network flow data based on Destination kind: Service.
You can see the xlat column, which distinguishes where translated information is displayed, and the following default columns:
Xlat Zone ID
Xlat Src Kubernetes Object
Xlat Dst Kubernetes Object
You can manage the display of additional xlat columns in Manage columns.
The Topology view in the Network Traffic page provides a graphical representation of network flows and traffic volume across your OKD cluster. As an administrator, you can use this view to monitor application traffic data and visualize the relationships between various network components.
The visualization represents network entities as nodes and traffic flows as edges. By selecting individual components within the graph, you can access a side panel containing specific metrics and health details for that resource. This interactive approach allows for rapid identification of traffic patterns and connectivity issues within the cluster.
To manage complex environments, the Topology view includes advanced configuration options that allow you to customize the layout and data density. You can adjust the Scope of the view, apply Groups to represent resource ownership, and choose different Layout algorithms to optimize the graphical display. Additionally, you can enable Edge labels to show real-time measurements, such as the average byte rate, directly on the flow lines.
For reporting or external analysis, the Topology view provides an export feature. You can download the current graphical representation as a PNG image or generate a direct link to the specific view configuration to share with other administrators. These tools ensure that network insights are both accessible and easily documented.
Access the Topology view to visually inspect cluster network relationships and select individual components to view detailed traffic metrics and metadata.
As an administrator, you can navigate to the Topology view to see the details and metrics of the component.
You have administrator access.
Navigate to Observe → Network Traffic.
In the Network Traffic page, click the Topology tab.
You can click each component in the Topology to view the details and metrics of the component.
Review the available advanced options in the Topology view to customize display settings, configure component grouping and layouts, and export the network graph as an image.
You can customize and export the view by using Show advanced options. The advanced options view has the following features:
Find in view: To search the required components in the view.
Display options: To configure the following options:
Edge labels: To show the specified measurements as edge labels. The default is to show the Average rate in Bytes.
Scope: To select the scope of components between which the network traffic flows. The default value is Namespace.
Groups: To enhance the understanding of ownership by grouping the components. The default value is None.
Layout: To select the layout of the graphical representation. The default value is ColaNoForce.
Show: To select the details that need to be displayed. All the options are checked by default. The options available are: Edges, Edges label, and Badges.
Truncate labels: To select the required width of the label from the drop-down list. The default value is M.
Collapse groups: To expand or collapse the groups. The groups are expanded by default. This option is disabled if Groups has the value of None.
Review the available query options and filtering parameters in the Network Traffic view to optimize data searches, analyze specific log types, and manage directional traffic visibility.
By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter.
Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces, Services, Routes, Nodes, and Workloads pages which provide the filtered data of the corresponding aggregations.
You can use Query Options to optimize the search results, as listed below:
Log Type: The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers.
Match filters: You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any. Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all.
Datasource: You can choose the datasource to use for queries: Loki, Prometheus, or Auto. Notable performance improvements can be realized when using Prometheus as a datasource rather than Loki, but Prometheus supports a limited set of filters and aggregations. The default datasource is Auto, which uses Prometheus on supported queries or uses Loki if the query does not support Prometheus.
Drops filter: You can view different levels of dropped packets with the following query options:
Fully dropped shows flow records with fully dropped packets.
Containing drops shows flow records that contain both dropped and sent packets.
Without drops shows records that contain only sent packets.
All shows all the aforementioned records.
Limit: The data limit for internal backend queries. Depending on the matching and filter settings, the number of traffic flow records displayed is limited to the specified value.
The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration. You can modify the options from the console.
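As a hedged sketch of how such defaults are defined, the following FlowCollector fragment assumes the spec.consolePlugin.quickFilters field; the Applications filter shown here is an illustrative example that hides traffic to and from infrastructure namespaces.
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  consolePlugin:
    quickFilters:
    - name: Applications                         # label shown in the Quick filters drop-down menu
      default: true                              # applied by default when the page loads
      filter:
        src_namespace!: 'openshift-,netobserv'   # example: exclude infrastructure source namespaces
        dst_namespace!: 'openshift-,netobserv'   # example: exclude infrastructure destination namespaces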
You can set the advanced filters, Common, Source, or Destination, by selecting the parameter to be filtered from the dropdown list. The flow data is filtered based on the selection. To enable or disable the applied filter, you can click on the applied filter listed below the filter options.
You can toggle between One way and Back and forth filtering. The One way filter shows only Source and Destination traffic according to your filter selections. You can use Swap to change the directional view of the Source and Destination traffic. The Back and forth filter includes return traffic with the Source and Destination filters. The directional flow of network traffic is shown in the Direction column in the Traffic flows table as Ingress or Egress for inter-node traffic and Inner for traffic inside a single node.
You can click Reset defaults to remove the existing filters, and apply the filter defined in FlowCollector configuration.
|
To understand the rules of specifying the text value, click Learn More. |