
As an Operator author, you can use the scorecard tool in the Operator SDK to do the following tasks:

  • Validate that your Operator project is free of syntax errors and packaged correctly

  • Review suggestions about ways you can improve your Operator

The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OKD. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OKD releases.

The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OKD 4.17 to maintain their projects and create Operator releases targeting newer versions of OKD.

The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.

  • The base image for Ansible-based Operator projects

  • The base image for Helm-based Operator projects

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

For information about the unsupported, community-maintained, version of the Operator SDK, see Operator SDK (Operator Framework).

About the scorecard tool

While the Operator SDK bundle validate subcommand can validate local bundle directories and remote bundle images for content and structure, you can use the scorecard command to run tests on your Operator based on a configuration file and test images. These tests are implemented within test images that are configured and constructed to be executed by the scorecard.

The scorecard assumes it is run with access to a configured Kubernetes cluster, such as OKD. The scorecard runs each test within a pod, from which pod logs are aggregated and test results are sent to the console. The scorecard has built-in basic and Operator Lifecycle Manager (OLM) tests and also provides a means to execute custom test definitions.

Scorecard workflow
  1. Create all resources required by any related custom resources (CRs) and the Operator

  2. Create a proxy container in the deployment of the Operator to record calls to the API server and run tests

  3. Examine parameters in the CRs

The scorecard tests make no assumptions as to the state of the Operator being tested. Creating Operators and CRs for an Operator is beyond the scope of the scorecard itself. Scorecard tests can, however, create whatever resources they require if the tests are designed for resource creation.

scorecard command syntax
$ operator-sdk scorecard <bundle_dir_or_image> [flags]

The scorecard requires a positional argument for either the on-disk path to your Operator bundle or the name of a bundle image.

For further information about the flags, run:

$ operator-sdk scorecard -h
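
For example, the following invocation points the scorecard at a specific cluster and namespace and extends the per-test wait time. This is a representative sketch; confirm the exact flags available in your version of the Operator SDK by using the -h output:

$ operator-sdk scorecard <bundle_dir_or_image> \
    --kubeconfig ~/.kube/config \
    --namespace default \
    --wait-time 120s \
    -o text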

Scorecard configuration

The scorecard tool uses a configuration file that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named config.yaml, which is generated by the make bundle command and located in your bundle/ directory:

./bundle
...
└── tests
    └── scorecard
        └── config.yaml
Example scorecard configuration file
apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
- parallel: true
  tests:
  - image: quay.io/operator-framework/scorecard-test:v1.36.1
    entrypoint:
    - scorecard-test
    - basic-check-spec
    labels:
      suite: basic
      test: basic-check-spec-test
  - image: quay.io/operator-framework/scorecard-test:v1.36.1
    entrypoint:
    - scorecard-test
    - olm-bundle-validation
    labels:
      suite: olm
      test: olm-bundle-validation-test

The configuration file defines each test that the scorecard can execute. The following fields of the scorecard configuration file define each test:

Configuration field   Description

image                 Test container image name that implements a test

entrypoint            Command and arguments that are invoked in the test image to execute a test

labels                Scorecard-defined or custom labels that select which tests to run

Built-in scorecard tests

The scorecard ships with pre-defined tests that are arranged into suites: the basic test suite and the Operator Lifecycle Manager (OLM) suite.

Table 1. Basic test suite

Test: Spec Block Exists
Short name: basic-check-spec-test
Description: This test checks the custom resources (CRs) created in the cluster to make sure that all of them have a spec block.

Table 2. OLM test suite

Test: Bundle Validation
Short name: olm-bundle-validation-test
Description: This test validates the bundle manifests found in the bundle that is passed into the scorecard. If the bundle contents contain errors, the test result output includes the validator log as well as error messages from the validation library.

Test: Provided APIs Have Validation
Short name: olm-crds-have-validation-test
Description: This test verifies that the custom resource definitions (CRDs) for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR.

Test: Owned CRDs Have Resources Listed
Short name: olm-crds-have-resources-test
Description: This test makes sure that the CRDs for each CR provided via the cr-manifest option have a resources subsection in the owned CRDs section of the ClusterServiceVersion (CSV). If the test detects used resources that are not listed in the resources section, it lists them in the suggestions at the end of the test. Users are required to fill out the resources section after initial code generation for this test to pass.

Test: Spec Fields With Descriptors
Short name: olm-spec-descriptors-test
Description: This test verifies that every field in the CRs' spec sections has a corresponding descriptor listed in the CSV.

Test: Status Fields With Descriptors
Short name: olm-status-descriptors-test
Description: This test verifies that every field in the CRs' status sections has a corresponding descriptor listed in the CSV.

Running the scorecard tool

A default set of Kustomize files is generated by the Operator SDK after you run the init command. The default bundle/tests/scorecard/config.yaml file that is generated can be used immediately to run the scorecard tool against your Operator, or you can modify this file to fit your test specifications.

Prerequisites
  • Operator project generated by using the Operator SDK

Procedure
  1. Generate or regenerate your bundle manifests and metadata for your Operator:

    $ make bundle

    This command automatically adds scorecard annotations to your bundle metadata, which are used by the scorecard command to run tests. For reference, these annotations are shown in the example after this procedure.

  2. Run the scorecard against the on-disk path to your Operator bundle or the name of a bundle image:

    $ operator-sdk scorecard <bundle_dir_or_image>
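
The scorecard-related annotations that the make bundle command adds to the bundle/metadata/annotations.yaml file typically look like the following; the other annotations in the file depend on your project:

Example scorecard annotations
annotations:
  ...
  operators.operatorframework.io.test.mediatype.v1: scorecard+v1
  operators.operatorframework.io.test.config.v1: tests/scorecard/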

Scorecard output

The --output flag for the scorecard command specifies the scorecard results output format: either text or json.
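
For example, to produce JSON results:

$ operator-sdk scorecard <bundle_dir_or_image> -o json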

Example JSON output snippet
{
  "apiVersion": "scorecard.operatorframework.io/v1alpha3",
  "kind": "TestList",
  "items": [
    {
      "kind": "Test",
      "apiVersion": "scorecard.operatorframework.io/v1alpha3",
      "spec": {
        "image": "quay.io/operator-framework/scorecard-test:v1.36.1",
        "entrypoint": [
          "scorecard-test",
          "olm-bundle-validation"
        ],
        "labels": {
          "suite": "olm",
          "test": "olm-bundle-validation-test"
        }
      },
      "status": {
        "results": [
          {
            "name": "olm-bundle-validation",
            "log": "time=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found metadata directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Found annotations file\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test\n",
            "state": "pass"
          }
        ]
      }
    }
  ]
}
Example text output snippet
--------------------------------------------------------------------------------
Image:      quay.io/operator-framework/scorecard-test:v1.36.1
Entrypoint: [scorecard-test olm-bundle-validation]
Labels:
	"suite":"olm"
	"test":"olm-bundle-validation-test"
Results:
	Name: olm-bundle-validation
	State: pass
	Log:
		time="2020-07-15T03:19:02Z" level=debug msg="Found manifests directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=debug msg="Found metadata directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=debug msg="Getting mediaType info from manifests directory" name=bundle-test
		time="2020-07-15T03:19:02Z" level=info msg="Found annotations file" name=bundle-test
		time="2020-07-15T03:19:02Z" level=info msg="Could not find optional dependencies file" name=bundle-test

The output format spec matches the Test type layout.
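
Because the JSON output follows the v1alpha3 TestList layout, you can consume it programmatically. The following sketch is an illustration rather than part of the SDK; it assumes the github.com/operator-framework/api module that is also used by the custom test example later in this section, reads scorecard JSON from standard input, and exits non-zero if any test did not pass:

Example Go program that parses scorecard JSON output
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
)

func main() {
	// Read scorecard JSON from stdin, for example:
	//   operator-sdk scorecard <bundle_dir_or_image> -o json | go run .
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		log.Fatal(err)
	}

	var list scapiv1alpha3.TestList
	if err := json.Unmarshal(raw, &list); err != nil {
		log.Fatal(err)
	}

	// Count results that are not in the pass state.
	failed := 0
	for _, t := range list.Items {
		for _, r := range t.Status.Results {
			if r.State != scapiv1alpha3.PassState {
				failed++
				fmt.Printf("test %q finished in state %q\n", r.Name, r.State)
			}
		}
	}
	if failed > 0 {
		log.Fatalf("%d scorecard test(s) did not pass", failed)
	}
	fmt.Println("all scorecard tests passed")
}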

Selecting tests

Scorecard tests are selected by setting the --selector CLI flag to a set of label strings. If a selector flag is not supplied, then all of the tests within the scorecard configuration file are run.

Tests are run serially with test results being aggregated by the scorecard and written to standard output, or stdout.

Procedure
  1. To select a single test, for example basic-check-spec-test, specify the test by using the --selector flag:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector=test=basic-check-spec-test
  2. To select a suite of tests, for example olm, specify a label that is used by all of the OLM tests:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector=suite=olm
  3. To select multiple tests, specify the test names by using the --selector flag with the following set-based syntax:

    $ operator-sdk scorecard <bundle_dir_or_image> \
        -o text \
        --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'
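
The --selector flag accepts Kubernetes label selector syntax, so equality-based expressions, such as test=basic-check-spec-test, and set-based expressions, such as test in (...), are both supported.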

Enabling parallel testing

As an Operator author, you can define separate stages for your tests using the scorecard configuration file. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.

By default, or when a stage explicitly sets parallel to false, tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.

However, if tests are designed to be fully isolated, they can be parallelized.

Procedure
  • To run a set of isolated tests in parallel, include them in the same stage and set parallel to true:

    apiVersion: scorecard.operatorframework.io/v1alpha3
    kind: Configuration
    metadata:
      name: config
    stages:
    - parallel: true (1)
      tests:
      - entrypoint:
        - scorecard-test
        - basic-check-spec
        image: quay.io/operator-framework/scorecard-test:v1.36.1
        labels:
          suite: basic
          test: basic-check-spec-test
      - entrypoint:
        - scorecard-test
        - olm-bundle-validation
        image: quay.io/operator-framework/scorecard-test:v1.36.1
        labels:
          suite: olm
          test: olm-bundle-validation-test
    (1) Enables parallel testing.

    All tests in a parallel stage are executed simultaneously, and the scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
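
    You can also mix parallel and sequential execution by defining multiple stages. In the following sketch, the first stage runs its isolated tests in parallel, and the second stage, which omits the parallel setting and therefore runs its tests sequentially, starts only after the first stage completes:

    apiVersion: scorecard.operatorframework.io/v1alpha3
    kind: Configuration
    metadata:
      name: config
    stages:
    - parallel: true
      tests:
      - entrypoint:
        - scorecard-test
        - basic-check-spec
        image: quay.io/operator-framework/scorecard-test:v1.36.1
        labels:
          suite: basic
          test: basic-check-spec-test
    - tests:
      - entrypoint:
        - scorecard-test
        - olm-bundle-validation
        image: quay.io/operator-framework/scorecard-test:v1.36.1
        labels:
          suite: olm
          test: olm-bundle-validation-test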

Custom scorecard tests

The scorecard tool can run custom tests that follow these mandated conventions:

  • Tests are implemented within a container image

  • Tests accept an entrypoint, which includes a command and arguments

  • Tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output

  • Tests can obtain the bundle contents at a shared mount point of /bundle

  • Tests can access the Kubernetes API using an in-cluster client connection

Writing custom tests in other programming languages is possible if the test image follows the above guidelines.

The following example shows a custom test image written in Go:

Example custom scorecard test
// Copyright 2020 The Operator-SDK Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
	apimanifests "github.com/operator-framework/api/pkg/manifests"
)

// This is the custom scorecard test example binary.
// As with the Red Hat scorecard test image, the bundle under
// test is expected to be mounted so that tests can inspect the
// bundle contents as part of their test implementations.
// The actual test to be run is named, and that name is passed
// as an argument to this binary. This argument mechanism allows
// this binary to run various tests, all from within a single
// test image.

const PodBundleRoot = "/bundle"

func main() {
	entrypoint := os.Args[1:]
	if len(entrypoint) == 0 {
		log.Fatal("Test name argument is required")
	}

	// Read the pod's untar'd bundle from a well-known path.
	cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot)
	if err != nil {
		log.Fatal(err.Error())
	}

	var result scapiv1alpha3.TestStatus

	// Names of the custom tests which would be passed in the
	// `operator-sdk` command.
	switch entrypoint[0] {
	case CustomTest1Name:
		result = CustomTest1(cfg)
	case CustomTest2Name:
		result = CustomTest2(cfg)
	default:
		result = printValidTests()
	}

	// Convert the scapiv1alpha3.TestStatus result to JSON.
	prettyJSON, err := json.MarshalIndent(result, "", "    ")
	if err != nil {
		log.Fatal("Failed to generate json", err)
	}
	fmt.Printf("%s\n", string(prettyJSON))

}

// printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are.
func printValidTests() scapiv1alpha3.TestStatus {
	result := scapiv1alpha3.TestResult{}
	result.State = scapiv1alpha3.FailState
	result.Errors = make([]string, 0)
	result.Suggestions = make([]string, 0)

	str := fmt.Sprintf("Valid tests for this image include: %s %s",
		CustomTest1Name,
		CustomTest2Name)
	result.Errors = append(result.Errors, str)
	return scapiv1alpha3.TestStatus{
		Results: []scapiv1alpha3.TestResult{result},
	}
}

const (
	CustomTest1Name = "customtest1"
	CustomTest2Name = "customtest2"
)

// Define any Operator-specific custom tests here.
// CustomTest1 and CustomTest2 are example test functions. Relevant
// Operator-specific test logic is to be implemented similarly.

func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
	r := scapiv1alpha3.TestResult{}
	r.Name = CustomTest1Name
	r.State = scapiv1alpha3.PassState
	r.Errors = make([]string, 0)
	r.Suggestions = make([]string, 0)
	almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
	if almExamples == "" {
		fmt.Println("no alm-examples in the bundle CSV")
	}

	return wrapResult(r)
}

func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
	r := scapiv1alpha3.TestResult{}
	r.Name = CustomTest2Name
	r.State = scapiv1alpha3.PassState
	r.Errors = make([]string, 0)
	r.Suggestions = make([]string, 0)
	almExamples := bundle.CSV.GetAnnotations()["alm-examples"]
	if almExamples == "" {
		fmt.Println("no alm-examples in the bundle CSV")
	}
	return wrapResult(r)
}

func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus {
	return scapiv1alpha3.TestStatus{
		Results: []scapiv1alpha3.TestResult{r},
	}
}
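
To run a custom test image such as the example above, add an entry for it to a stage in your bundle's tests/scorecard/config.yaml file. The image name, binary name, and labels in the following sketch are hypothetical placeholders for your own build:

Example custom test entry in the scorecard configuration
- image: quay.io/example/custom-scorecard-tests:v0.0.1 # hypothetical custom test image
  entrypoint:
  - custom-scorecard-tests # hypothetical binary built from the example above
  - customtest1 # test name argument that the binary's main() reads
  labels:
    suite: custom
    test: customtest1

You can then run only that test by passing --selector=test=customtest1 to the scorecard command.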