In OKD 4.17, you can access web service APIs for some monitoring components from the command line interface (CLI).
In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations:
You can directly access web service API endpoints from the command line for the following monitoring stack components:
Prometheus
Alertmanager
Thanos Ruler
Thanos Querier
To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have get permission on the namespaces resource, which can be granted by binding the cluster-monitoring-view cluster role to the account.
When you access web service API endpoints for monitoring components, be aware of the following limitations:
You can only use bearer token authentication to access API endpoints.
You can only access endpoints in the /api path for a route.
If you try to access an API endpoint in a web browser, an Application is not available error occurs.
To access monitoring features in a web browser, use the OKD web console to review monitoring dashboards.
The following example shows how to query the service API receivers for the Alertmanager service used in core platform monitoring.
You can use a similar method to access the prometheus-k8s service for core platform Prometheus and the thanos-ruler service for Thanos Ruler, as shown in the sketch after this procedure.
You are logged in to an account that is bound against the monitoring-alertmanager-edit role in the openshift-monitoring namespace.
You are logged in to an account that has permission to get the Alertmanager API route.
If your account does not have permission to get the Alertmanager API route, a cluster administrator can provide the URL for the route.
Extract an authentication token by running the following command:
$ TOKEN=$(oc whoami -t)
Extract the alertmanager-main API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.status.ingress[].host})
Query the service API receivers for Alertmanager by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v2/receivers"
You can use the federation endpoint for Prometheus to scrape platform and user-defined metrics from a network location outside the cluster.
To do so, access the Prometheus /federate endpoint for the cluster via an OKD route.
A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics. Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations:
If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section.
You have installed the OpenShift CLI (oc).
You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. One way to set this up for a service account is shown in the sketch after these prerequisites.
You can only use bearer token authentication to access the Prometheus federation endpoint.
You are logged in to an account that has permission to get the Prometheus federation route.
If your account does not have permission to get the Prometheus federation route, a cluster administrator can provide the URL for the route.
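For illustration, the following is a minimal sketch of satisfying the token prerequisite with a dedicated service account instead of your personal login token. The service account name federate-sa and the default namespace are placeholder assumptions; adjust them for your environment:
# Create a service account to use for federation queries (the name is an example).
$ oc -n default create serviceaccount federate-sa
# Grant the cluster-monitoring-view cluster role to the service account.
$ oc adm policy add-cluster-role-to-user cluster-monitoring-view -z federate-sa -n default
# Request a short-lived bearer token for the service account.
$ TOKEN=$(oc -n default create token federate-sa)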
Retrieve the bearer token by running the following command:
$ TOKEN=$(oc whoami -t)
Get the Prometheus federation route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={.status.ingress[].host})
Query metrics from the /federate route.
The following example command queries up metrics:
$ curl -G -k -H "Authorization: Bearer $TOKEN" https://$HOST/federate --data-urlencode 'match[]=up'
# TYPE up untyped
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597
up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834
...
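The /federate endpoint accepts more than one match[] selector per request. As a further illustration, the following sketch reuses the same $TOKEN and $HOST variables and federates two selectors in one call; the selectors themselves are arbitrary examples:
# Federate the up metric and a node-exporter metric in a single request.
$ curl -G -k -H "Authorization: Bearer $TOKEN" "https://$HOST/federate" \
  --data-urlencode 'match[]=up' \
  --data-urlencode 'match[]=node_memory_MemAvailable_bytes'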
You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the thanos-querier route.
This access only supports using a bearer token for authentication.
You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure.
You are logged in to an account with the cluster-monitoring-view cluster role, which provides permission to access the Thanos Querier API.
You are logged in to an account that has permission to get the Thanos Querier API route.
If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route.
Extract an authentication token to connect to Prometheus by running the following command:
$ TOKEN=$(oc whoami -t)
Extract the thanos-querier API route URL by running the following command:
$ HOST=$(oc -n openshift-monitoring get route thanos-querier -ojsonpath={.status.ingress[].host})
Set the namespace to the namespace in which your service is running by using the following command:
$ NAMESPACE=ns1
Query the metrics of your own services in the command line by running the following command:
$ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/query?" --data-urlencode "query=up{namespace='$NAMESPACE'}"
The output shows the status for each application pod that Prometheus is scraping:
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "up",
          "endpoint": "web",
          "instance": "10.129.0.46:8080",
          "job": "prometheus-example-app",
          "namespace": "ns1",
          "pod": "prometheus-example-app-68d47c4fb6-jztp2",
          "service": "prometheus-example-app"
        },
        "value": [
          1591881154.748,
          "1"
        ]
      }
    ]
  }
}
The formatted example output uses a filtering tool, such as jq, to provide the formatted indented JSON.
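Because the response is Prometheus-compatible JSON, you can combine other query endpoints with such a tool. The following is a minimal sketch, assuming the same $TOKEN, $HOST, and $NAMESPACE variables, GNU date, and an installed jq binary, that runs a range query over the last hour and counts the returned samples per pod; the one-hour window and 60-second step are arbitrary choices:
# Range query over the last hour, then summarize the result with jq.
$ curl -G -k -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query_range" \
  --data-urlencode "query=up{namespace='$NAMESPACE'}" \
  --data-urlencode "start=$(date -d '1 hour ago' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode "step=60s" | jq '.data.result[] | {pod: .metric.pod, samples: (.values | length)}'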
This document describes the following resources deployed and managed by the Cluster Monitoring Operator (CMO):
Use this information when you want to configure API endpoint connections to retrieve, send, or query metrics data.
In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations:
Expose the /api endpoints of the alertmanager-main service via a router.
Expose the /api endpoints of the prometheus-k8s service via a router.
Expose the /federate endpoint of the prometheus-k8s service via a router.
Expose the /federate endpoint of the prometheus-user-workload service via a router.
Expose the admission webhook service that validates PrometheusRules and AlertmanagerConfig custom resources on port 8443.
Expose the user-defined Alertmanager web server within the cluster on the following ports:
Port 9095 provides access to the Alertmanager endpoints. Granting access requires binding a user to the monitoring-alertmanager-api-reader role (for read-only operations) or monitoring-alertmanager-api-writer role in the openshift-user-workload-monitoring project.
Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project.
Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed.
Expose the Alertmanager web server within the cluster on the following ports:
Port 9094 provides access to all the Alertmanager endpoints. Granting access requires binding a user to the monitoring-alertmanager-view role (for read-only operations) or the monitoring-alertmanager-edit role in the openshift-monitoring project (see the sketch after this list for an example binding).
Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project.
Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed.
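For example, a cluster administrator could grant a user read-write access to the core platform Alertmanager endpoints with a command like the following minimal sketch; the username example-user is a placeholder:
# Bind the namespaced monitoring-alertmanager-edit role in openshift-monitoring to a user.
$ oc -n openshift-monitoring adm policy add-role-to-user \
  monitoring-alertmanager-edit example-user --role-namespace openshift-monitoring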
Expose kube-state-metrics /metrics endpoints within the cluster on the following ports:
Port 8443 provides access to the Kubernetes resource metrics. This port is for internal use, and no other usage is guaranteed.
Port 9443 provides access to the internal kube-state-metrics metrics. This port is for internal use, and no other usage is guaranteed.
Expose the metrics-server web server on port 443. This port is for internal use, and no other usage is guaranteed.
Expose the monitoring plugin service on port 9443. This port is for internal use, and no other usage is guaranteed.
Expose the /metrics endpoint on port 9100. This port is for internal use, and no other usage is guaranteed.
Expose openshift-state-metrics /metrics endpoints within the cluster on the following ports:
Port 8443 provides access to the OpenShift resource metrics. This port is for internal use, and no other usage is guaranteed.
Port 9443 provides access to the internal openshift-state-metrics metrics. This port is for internal use, and no other usage is guaranteed.
Expose the Prometheus web server within the cluster on the following ports:
Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role.
Port 9092 provides access to the /metrics and /federate endpoints only. This port is for internal use, and no other usage is guaranteed.
Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
Expose the Prometheus web server within the cluster on the following ports:
Port 9091 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed.
Port 9092 provides access to the /federate endpoint only. Granting access requires binding a user to the cluster-monitoring-view cluster role.
This also exposes the /metrics endpoint of the Thanos sidecar web server on port 10902. This port is for internal use, and no other usage is guaranteed.
Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed.
Expose the Thanos Querier web server within the cluster on the following ports:
Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role.
Port 9092 provides access to the /api/v1/query, /api/v1/query_range/, /api/v1/labels, /api/v1/label/*/values, and /api/v1/series endpoints restricted to a given project. Granting access requires binding a user to the view cluster role in the project (see the sketch after this list for an example binding).
Port 9093 provides access to the /api/v1/alerts and /api/v1/rules endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role, monitoring-edit cluster role, or monitoring-rules-view cluster role in the project.
Port 9094 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed.
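For instance, to let a user query metrics for a single project through the project-scoped Thanos Querier endpoints, bind the view cluster role in that project. A minimal sketch, where ns1 and example-user are placeholders:
# Grant the view cluster role, scoped to the ns1 project, to a user.
$ oc -n ns1 adm policy add-role-to-user view example-user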
Expose the Thanos Ruler web server within the cluster on the following ports:
Port 9091 provides access to all Thanos Ruler endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role.
Port 9092 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed.
This also exposes the gRPC endpoints on port 10901. This port is for internal use, and no other usage is guaranteed.