Tests

As a tester I want to store test metadata close to the test source code.

Tests, or L1 metadata, define attributes closely related to individual test cases, such as the test script, framework, directory path where the test should be executed, maximum test duration or packages required to run the test.

In addition to the attributes defined here, tests also support common Core attributes which are shared across all metadata levels.

Examples:

summary: Check the man page
test: man tmt | grep friendly
require: grep
duration: 1m
tier: 1
tag: docs

component

Relevant fedora/rhel source package names

As a SELinux tester testing the ‘checkpolicy’ component I want to run Tier 1 tests for all SELinux components plus all checkpolicy tests.

It’s useful to be able to easily select all tests relevant for a given component or package. Because tests do not always have to be stored in the same repository, and because many tests cover multiple components, a dedicated field is needed. Should be a string or a list of strings. The component name usually corresponds to the source package name.

Examples:

component: libselinux

component: [libselinux, checkpolicy]

component:
    - libselinux
    - checkpolicy
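
Such a selection can then be done with a metadata filter, for example using a hypothetical command line invocation like the following (the exact filter expression depends on your metadata):

tmt tests ls --filter "component: libselinux & tier: 1"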

Status: implemented

contact

Test maintainer contact

As a developer reviewing a complex test which failed I would like to contact the person who maintains the code and understands it well.

When several people collaborate on tests it’s useful to have a way to find out who is responsible for what. Should be a string or a list of strings (email address format with name and surname).

Examples:

contact: Name Surname <email@address.org>

contact:
    - First Person <first@address.org>
    - Second Person <second@address.org>

Status: implemented

description

Detailed description of what the test does

As a developer I review existing test coverage for my component and would like to get an overall idea of what is covered without having to read the whole test code.

For complex tests it makes sense to provide a more detailed description to better clarify what is covered by the test. This can be useful for the test writer as well when reviewing a test written a long time ago. Should be a string (multi-line, plain text).

Examples:

description:
    This test checks all available wget options related to
    downloading files recursively. First a tree directory
    structure is created for testing. Then a file download
    is performed for different recursion depth specified by
    the "--level=depth" option.

Status: implemented

duration

Maximum time for test execution

As a test harness I need to know how long a test may run so that I can kill it if it is still running, to prevent wasting resources.

In order to prevent stuck tests from consuming resources we define a maximum time for test execution. If the limit is exceeded, the running test is killed by the test harness. Use the same format as the sleep command. Should be a string. The default value is 5m.

Examples:

duration: 3m
duration: 2h
duration: 1d

Status: implemented and verified

environment

Environment variables to be set before running the test

As a tester I need to pass environment variables to my test script to properly execute the desired test scenario.

Test scripts might require certain environment variables to be set. Although this can be done on the shell command line as part of the test attribute, it makes sense to have a dedicated field for this, especially when the number of parameters grows. This might be useful for virtual test cases as well. Should be a dictionary.

Examples:

environment:
    PACKAGE: python37
    PYTHON: python3.7
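
Variables defined this way are available to the test script and can also be referenced directly in the test command, as in this hypothetical sketch:

environment:
    PACKAGE: python37
test: ./smoke.sh $PACKAGE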

Status: implemented and verified

framework

Test framework defining how tests should be executed

As a tester I want to include tests using different test execution frameworks in a single plan.

The framework defines how the test code should be executed and how test results should be interpreted (e.g. checking the exit code of a shell test versus checking the beakerlib test results file). It also determines possible additional packages which need to be installed in the test environment.

Currently shell and beakerlib are supported. Each execute step plugin should list which frameworks it supports and raise an error when an unsupported framework is detected.

Should be a string, by default shell is used.

Examples:

framework: shell

framework: beakerlib
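
A minimal beakerlib test definition might then look like this sketch (the script name is just an example):

summary: Verify the service starts correctly
test: ./runtest.sh
framework: beakerlib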

Status: implemented and verified

manual

Test automation state

As a tester I need to store detailed manual instructions covering a test scenario I have to perform manually.

This attribute marks whether the test needs human interaction during its execution. Such tests are not likely to be executed in automation pipelines. In the future they may be executed in a semi-automated way, waiting on human interaction.

Its value should be a boolean. The default value is false. When set to true, the test attribute should point to a Markdown document following the CommonMark specification.

This is a minimal example of a manual test document containing a single test with one test step and one expected result:

# Test

## Step
Do this and that.

## Expect
Check this and that.

The following sections are recognized by tmt and have a special meaning. Any other features of Markdown can be used, but tmt will just show them.

Setup
Optional heading # Setup under which any necessary preparation actions are documented. These actions are not part of the test itself.
Test
Required level 1 heading # Test or # Test .* starting with the word ‘Test’ marks the beginning of the test itself. Multiple Test sections can be defined in a single document.
Step
Required level 2 heading ## Step or ## Test Step marking a single step of the test; it should be paired with the Expect section which follows it. Cannot be used outside of test sections.
Expect
Required level 2 heading ## Expect, ## Result or ## Expected Result marking the expected outcome of the previous step. Cannot be used outside of test sections.
Cleanup
Optional heading # Cleanup under which any cleanup actions which are not part of the test itself are documented.
Code block
Optional, can be used in any section to mark code snippets. Code type specification (bash, python…) is recommended. It can be used for syntax highlighting and in the future for the semi-automated test execution as well.
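
As an illustration, a slightly larger document combining the sections described above might look like the following sketch (the content itself is purely hypothetical):

# Setup
Prepare a directory tree for the recursive download.

```bash
mkdir -p server/docs && echo "hello" > server/docs/index.html
```

# Test recursive download

## Step
Download the prepared tree using wget with the "--recursive" option.

## Expect
All linked files are stored in the local directory.

# Test maximum depth

## Step
Repeat the download with the "--level=1" option.

## Expect
Only the first level of the directory tree is downloaded.

# Cleanup
Remove the downloaded files.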

See the manual test examples to get a better idea.

Examples:

manual: true
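
Usually combined with the test attribute pointing to the document with the instructions:

manual: true
test: manual.md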

Status: implemented

path

Filesystem directory to be entered before executing the test

As a test writer I want to define two virtual test cases, both using the same script for execution.

As the object hierarchy does not need to copy the filesystem structure (e.g. when using virtual test cases) we need a way to define where the test is located. Automation is expected to change the working directory to the provided path, relative to the fmf tree root directory, before executing the test. Use a string containing an absolute path starting with a slash. If path is not defined, the directory where the test metadata are stored is used by default.

Examples:

path: /protocols/https
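
For virtual test cases the same path can be shared by several tests which run the script with different parameters, as in this hypothetical sketch:

/smoke:
    path: /protocols/https
    test: ./test.sh --depth 10
/full:
    path: /protocols/https
    test: ./test.sh --depth 1000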

Status: implemented and verified

recommend

Packages or libraries recommended for the test execution

As a tester I want to specify additional packages which should be installed on the system if available, with no error reported if they cannot be installed.

Sometimes there can be additional packages which are not strictly needed to run tests but can improve test execution in some way, for example by providing better results presentation.

Package names can also differ across product versions. Using this attribute it is possible to specify all possible package names; only those which are available will be installed.

For the second use case it is recommended, if possible, to specify such packages using the prepare step configuration, which is usually branched according to the version and thus can better ensure that the right packages are installed as expected.

Note that beakerlib libraries are supported by this attribute as well. See the require attribute for more details about available reference options.

Should be a string or a list of strings using a package specification supported by dnf, which takes care of the installation.

Examples:

recommend: mariadb

recommend: [mariadb, mysql]

recommend:
    - mariadb
    - mysql
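
Beakerlib libraries can be recommended in the same way as under the require attribute, for example:

# Recommend a beakerlib library from the default location
recommend: library(openssl/certgen)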

Status: implemented and verified

relevancy

Filter tests relevant for given environment

As a tester I want to skip execution of a particular test case in a given test environment.

Note

This is a draft, the story is not implemented yet.

Sometimes a test is only relevant for a specific environment. Test Case Relevancy allows filtering irrelevant tests out.

Warning

Test Case Relevancy has been obsoleted. Use the new adjust attribute instead to modify test metadata for given Context.

Examples:

summary: Test for the new feature
adjust:
    enabled: false
    when: distro ~< fedora-33

Status: idea

require

Packages or libraries required for test execution

As a tester I want to specify packages and libraries which are required by the test and need to be installed on the system so that the test can be successfully executed.

In order to execute the test, additional packages may need to be installed on the system. For example, gcc and make are needed to compile tests written in C on the target machine. If the package cannot be installed, test execution should result in an error.

For tests shared across multiple components or product versions where required packages have different names, it is recommended to use the prepare step configuration to specify required packages for each component or product version individually.
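
Such a prepare step configuration in a plan might look like this minimal sketch, assuming the install method is used (package names are just an example):

prepare:
    how: install
    package:
        - gcc
        - make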

When referencing beakerlib libraries it is possible to use both the backward-compatible syntax library(repo/lib), which fetches libraries from the default location, and a dictionary with a full fmf identifier.

By default, fetched repositories are stored in the discover step workdir under the libs directory. Use optional key destination to choose a different location. The nick key can be used to override the default git repository name.

For debugging beakerlib libraries it is useful to reference the development version directly from the local filesystem. Use the path key to specify a full path to the library.

Should be a string or a list of strings using a package specification supported by dnf, which takes care of the installation, or a dictionary when using an fmf identifier to fetch dependent repositories.

Examples:

# Require a single package
require: make

# Multiple packages
require: [gcc, make]

require:
    - gcc
    - make

# Library from the default upstream location
require: library(openssl/certgen)

# Library from a custom remote git repository
require:
    - url: https://github.com/beakerlib/openssl
      name: /certgen

# Library from the local filesystem
require:
    - path: /tmp/library/openssl
      name: /certgen

# Use a custom git ref and nick for the library
require:
    - url: https://github.com/redhat-qe-security/certgen
      ref: devel
      nick: openssl
      name: /certgen
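
The optional destination key mentioned above could be used as in this hypothetical sketch (the directory name is just an example):

# Fetch the library into a custom location under the discover workdir
require:
    - url: https://github.com/beakerlib/openssl
      name: /certgen
      destination: custom-libs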

Status: implemented and verified

result

Specify how test result should be interpreted

As a tester I want to regularly execute the test but temporarily ignore test result until more investigation is done and the test can be fixed properly.

Note

This is a draft, the story is not implemented yet.

Even if a test fails it might make sense to execute it, to be able to manually review the results (ignore test result) or to ensure the behaviour has not unexpectedly changed and the test is still failing (expected fail). The following values should be supported:

respect
test result is respected (fails when test failed)
ignore
ignore the test result (test always passes)
xfail
expected fail (pass when test fails, fail when test passes)

The default value is respect.

Examples:

result: ignore
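
An expected failure would be marked in a similar way:

# Pass when the test fails, fail when it passes
result: xfail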

Status: idea

summary

Concise summary of what the test does

As a developer reviewing multiple failed tests I would like to quickly get an idea of what my change broke.

In order to efficiently collaborate on test maintenance it’s crucial to have a short summary of what the test does. Should be a string (one line, up to 50 characters).

Examples:

summary: Test wget recursive download options

Status: implemented

tag

Free form tags for easy filtering

As a user I want to run only a subset of available tests.

Throughout the years, free-form tags have proved to be useful for many, many scenarios, primarily to provide an easy way to select a subset of objects. Tags are case-sensitive. Using lowercase is recommended. Should be a string or a list of strings. Tag names must not contain spaces or commas.

Examples:

tag: security

tag: [security, fast]

tag:
    - security
    - fast

Status: implemented and verified

test

Shell command which executes the test

As a test writer I want to run a single test script in multiple ways (e.g. by providing different parameters).

This attribute defines how the test is to be executed. It allows parametrizing a single test script and in this way creating virtual test cases.

If the test is manual, it points to the document describing the manual test case steps in Markdown format with defined structure.

Should be a string. This is a required attribute.

Shell options errexit and pipefail are applied by default using set -eo pipefail to avoid potential errors going unnoticed. You may revert this setting by explicitly using set +eo pipefail. These options are not applied when beakerlib is used as the framework.

Examples:

test: ./test.sh
test: ./test.sh --depth 1000
test: pytest -k performance
test: make run

test: manual.md
manual: true
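
To revert the default errexit and pipefail options mentioned above, the setting can be disabled explicitly at the start of the command, for example:

test: set +eo pipefail; ./test.sh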

Status: implemented and verified

tier

Name of the tier set this test belongs to

It’s quite common to organize tests into “tiers” based on their importance, stability, duration and other aspects. Tags have often been used for this as there was no corresponding attribute available. It might make sense to have a dedicated field for this functionality as well. Should be a string.

Examples:

tier: 1

Status: implemented