Metering is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.
Review the following sections before installing metering into your cluster.
To get started installing metering, first install the Metering Operator from OperatorHub. Next, configure your instance of metering by creating a MeteringConfig custom resource (CR). Installing the Metering Operator creates a default MeteringConfig resource that you can modify using the examples in the documentation. After creating your MeteringConfig resource, install the metering stack. Last, verify your installation.
Metering requires the following components:
A StorageClass resource for dynamic volume provisioning. Metering supports a number of different storage solutions.
4GB of memory and 4 CPU cores of available cluster capacity, and at least one node with 2 CPU cores and 2GB of memory available.
The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores.
Memory and CPU consumption are often lower, but spike when running reports or when collecting data for larger clusters.
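You can check both requirements from the CLI. For example, the following commands list the available storage classes and show the allocatable capacity of one node; the node name is a placeholder:
$ oc get storageclass
$ oc describe node <node-name> | grep -A 5 Allocatable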
You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack.
You cannot create a project starting with openshift- by using the web console. You must create the namespace from a Namespace object YAML file by using the CLI, as shown in the following procedures.
If the Metering Operator is installed in a namespace other than openshift-metering, update the namespace in the following examples to match your installation.
You can use the OKD web console to install the Metering Operator.
Create a Namespace object YAML file for the Metering Operator, then create the namespace with the oc create -f <file-name>.yaml command. You must use the CLI to create the namespace. For example, metering-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering (1)
  annotations:
    openshift.io/node-selector: "" (2)
  labels:
    openshift.io/cluster-monitoring: "true"
(1) It is strongly recommended to deploy metering in the openshift-metering namespace.
(2) Include this annotation before configuring specific node selectors for the operand pods.
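For example, to create the namespace from the metering-namespace.yaml file shown above:
$ oc create -f metering-namespace.yaml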
In the OKD web console, click Operators → OperatorHub. Filter for metering to find the Metering Operator.
Click the Metering card, review the package description, and then click Install.
Select an Update Channel, Installation Mode, and Approval Strategy.
Click Install.
Verify that the Metering Operator is installed by switching to the Operators → Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete.
It might take several minutes for the Metering Operator to appear.
Click Metering on the Installed Operators page for Operator Details. From the Details page you can create different resources related to metering.
To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack.
You can use the OKD CLI to install the Metering Operator.
Create a Namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering (1)
  annotations:
    openshift.io/node-selector: "" (2)
  labels:
    openshift.io/cluster-monitoring: "true"
(1) It is strongly recommended to deploy metering in the openshift-metering namespace.
(2) Include this annotation before configuring specific node selectors for the operand pods.
Create the Namespace object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f metering-namespace.yaml
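Optionally, verify that the namespace now exists:
$ oc get namespace openshift-metering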
Create the OperatorGroup object YAML file. For example, metering-og.yaml:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-metering (1)
  namespace: openshift-metering (2)
spec:
  targetNamespaces:
  - openshift-metering
(1) The name is arbitrary.
(2) Specify the openshift-metering namespace.
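Create the OperatorGroup object from the file. The file name metering-og.yaml matches the example above:
$ oc create -f metering-og.yaml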
Create a Subscription object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators catalog source. For example, metering-sub.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metering-ocp (1)
  namespace: openshift-metering (2)
spec:
  channel: "4.6" (3)
  source: "redhat-operators" (4)
  sourceNamespace: "openshift-marketplace"
  name: "metering-ocp"
  installPlanApproval: "Automatic" (5)
(1) The name is arbitrary.
(2) You must specify the openshift-metering namespace.
(3) Specify 4.6 as the channel.
(4) Specify the redhat-operators catalog source, which contains the metering-ocp package manifests. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured Operator Lifecycle Manager (OLM).
(5) Specify "Automatic" install plan approval.
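Create the Subscription object from the file:
$ oc create -f metering-sub.yaml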
After adding the Metering Operator to your cluster you can install the components of metering by installing the metering stack.
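You can optionally confirm from the CLI that the subscription was created and that the Operator installed before continuing; the subscription name metering-ocp matches the example above:
$ oc -n openshift-metering get subscription metering-ocp
$ oc -n openshift-metering get csv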
Review the configuration options.
Create a MeteringConfig resource. You can begin the following process to generate a default MeteringConfig resource, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your MeteringConfig resource:
For configuration options, review About configuring metering.
At a minimum, you need to configure persistent storage and configure the Hive metastore.
There can only be one MeteringConfig resource in the openshift-metering namespace. Any other configuration is not supported.
From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators → Installed Operators, then selecting the Metering Operator.
Under Provided APIs, click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig resource file where you can define your configuration.
For example configuration files and all supported configuration options, review the configuring metering documentation.
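As an illustrative sketch only, the following MeteringConfig configures Hive storage in an Amazon S3 bucket and sets the size of the Hive metastore volume. The bucket, region, secret name, and storage size are placeholders; replace them with values from the configuring metering documentation for your chosen storage platform:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
  namespace: openshift-metering
spec:
  storage:
    type: hive
    hive:
      type: s3
      s3:
        bucket: mybucket/metering-data   # placeholder bucket name and path
        region: us-east-1                # placeholder region
        secretName: my-aws-secret        # placeholder Secret containing AWS credentials
        createBucket: false
  hive:
    spec:
      metastore:
        storage:
          size: 5Gi                      # Hive metastore volume size; uses the default storage class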
Enter your MeteringConfig resource into the YAML editor and click Create.
The MeteringConfig resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.
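For example, you can watch the metering stack pods start from the CLI:
$ oc -n openshift-metering get pods -w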
You can verify the metering installation by performing any of the following checks:
Check the Metering Operator ClusterServiceVersion (CSV) resource for the metering version. This can be done through either the web console or CLI.
Navigate to Operators → Installed Operators in the openshift-metering namespace.
Click Metering Operator.
Click Subscription for Subscription Details.
Check the Installed Version.
Check the Metering Operator CSV in the openshift-metering namespace:
$ oc --namespace openshift-metering get csv
NAME                       DISPLAY    VERSION   REPLACES   PHASE
metering-operator.v4.6.0   Metering   4.6.0                Succeeded
Check that all required pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.
Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation.
Navigate to Workloads → Pods in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack.
Check that all required pods in the openshift-metering namespace are created:
$ oc -n openshift-metering get pods
NAME READY STATUS RESTARTS AGE
hive-metastore-0 2/2 Running 0 3m28s
hive-server-0 3/3 Running 0 3m28s
metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s
presto-coordinator-0 2/2 Running 0 3m9s
reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s
Verify that the ReportDataSource resources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSource resources, which do not import data:
$ oc get reportdatasources -n openshift-metering | grep -v raw
NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE
node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s
node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s
node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s
node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s
persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s
persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s
persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s
persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s
pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s
pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s
pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s
pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s
pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s
pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s
pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s
After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster.
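For example, as a sketch and assuming the built-in namespace-cpu-request query is available in your installation, you could create a Report resource similar to the following; the report name and reporting period dates are placeholders:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-example    # placeholder report name
  namespace: openshift-metering
spec:
  query: namespace-cpu-request           # built-in metering query
  reportingStart: '2021-01-01T00:00:00Z' # placeholder start of the reporting period
  reportingEnd: '2021-12-31T23:59:59Z'   # placeholder end of the reporting period
  runImmediately: true                   # run the report as soon as data is available
Create the report with oc create -f <file-name>.yaml and check its progress with oc -n openshift-metering get reports.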
For more information on configuration steps and available storage platforms, see Configuring persistent storage.
For the steps to configure Hive, see Configuring the Hive metastore.