Plans

As a tester I want to easily execute selected tests or selected test steps in a given environment.

Plans, also called L2 metadata, are used to group relevant tests and enable the CI. They describe how to discover tests for execution, how to provision the environment, how to prepare it for testing, how to execute tests, how to report results and finally how to finish the test job.

Each of the six steps mentioned above supports multiple implementations. The default methods are listed below.

Thanks to clearly separated test steps it is possible to run only selected steps, for example tmt run discover to see which tests would be executed.
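
For instance, assuming a plan is available in the current repository, individual steps might be selected on the command line like this (a sketch only, see the step descriptions below for details):

tmt run discover                       # only discover tests, do not execute them
tmt run discover provision prepare     # stop after preparing the environment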

In addition to the attributes defined here, plans also support common Core attributes which are shared across all metadata levels.

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo
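
The following sketch combines the default methods of all six steps described below into a single plan (the cleanup script name is illustrative):

# Touch all six steps using the default methods
summary: Plan skeleton exercising every step
discover:
    how: fmf
provision:
    how: virtual
prepare:
    how: install
    package: [curl]
execute:
    how: tmt
report:
    how: display
finish:
    how: shell
    script: ./cleanup.sh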

context

Definition of the test execution context

As a user I want to define the default context in which tests included in the given plan should be executed.

Define the default context values for all tests executed under the given plan. This overrides context values auto-detected from the environment, such as distro, variant or arch, but can be overridden by context provided directly on the command line. See the Context definition for the full list of supported context dimensions.

Examples:

summary:
    Test basic functionality of the httpd24 collection
discover:
    how: fmf
execute:
    how: tmt
context:
    collection: httpd24

summary:
    Verify dash against the shared shell tests repository
discover:
    how: fmf
    url: https://src.fedoraproject.org/tests/shell
execute:
    how: tmt
context:
    component: dash
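
Context can also be passed on the command line, which takes precedence over values defined in the plan. Assuming the top-level --context/-c option, the invocation might look as follows (dimension values are illustrative):

tmt --context distro=centos-8 run
tmt -c arch=x86_64 run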

Status: implemented

discover

Discover tests relevant for execution

Gather information about tests which are supposed to be run. Provide method tests() returning a list of discovered tests and requires() returning a list of all required packages aggregated from the require attribute of the individual test metadata.

Store the list of aggregated tests with their corresponding metadata in the tests.yaml file. The format should be a dictionary of dictionaries structured in the following way:

/test/one:
    summary: Short test summary.
    description: Long test description.
    contact: Petr Šplíchal <psplicha@redhat.com>
    component: [tmt]
    test: tmt --help
    path: /test/path/
    require: [package1, package2]
    environment:
        key1: value1
        key2: value2
        key3: value3
    duration: 5m
    enabled: true
    result: respect
    tag: [tag]
    tier: 1

/test/two:
    summary: Short test summary.
    description: Long test description.
    ...

fmf

Discover available tests using the fmf format

Use the Flexible Metadata Format to explore all available tests in the given repository. The following parameters are supported:

url
Git repository containing the metadata tree. The current git repository is used by default.
ref
Branch, tag or commit specifying the desired git revision. Defaults to the master branch if url is given, or to the current HEAD if url is not provided.
path
Path to the metadata tree root. Should be relative to the git repository root if url is provided, an absolute local filesystem path otherwise. By default . is used.
test
List of test names or regular expressions used to select tests by name.
filter
Apply advanced filter based on test metadata attributes. See pydoc fmf.filter for more info.

See also the fmf identifier documentation.

It is also possible to limit tests only to those that have changed in git since a given revision. This can be particularly useful when testing changes to tests themselves (e.g. in a pull request CI).

Related config options (all optional):

modified-only
Set to true to select only modified tests. A test is considered modified if its name starts with the name of any directory modified since modified-ref.
modified-url
The url will be fetched as a "reference" remote in the test directory.
modified-ref
The ref to compare against, the master branch is used by default.

Examples:

discover:
    how: fmf

discover:
    how: fmf
    url: https://github.com/psss/tmt
    ref: master
    path: /metadata/tree/path
    test: [regexp]
    filter: tier:1
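
A sketch limiting the discovery to tests modified in the current branch, using the options described above (the reference url is illustrative):

discover:
    how: fmf
    modified-only: true
    modified-url: https://github.com/psss/tmt
    modified-ref: master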

Status: implemented

shell

Provide a manual list of shell test cases

The list of test cases to be executed can be defined manually, directly in the plan, as a list of dictionaries containing the test name and the actual test script. Optionally, any other Tests attributes such as path or duration can be defined here as well. The default duration for tests defined directly in the discover step is 1h.

Examples:

discover:
    how: shell
    tests:
    - name: /help/main
      test: tmt --help
    - name: /help/test
      test: tmt test --help
    - name: /help/smoke
      test: ./smoke.sh
      path: /tests/shell
      duration: 1m

Status: implemented

environment-file

Environment variables from files

In addition to the environment key it is also possible to provide environment variables in a file. Supported formats are dotenv/shell with KEY=VALUE pairs and yaml. A full url can be used to fetch variables from a remote source. The environment key has a higher priority. The file path should be relative to the metadata tree root.

Examples:

# Load from a dotenv/shell format
/plan:
    environment-file:
      - env

# Load from a yaml format
/plan:
    environment-file:
      - environment.yml
      - environment.yaml

# Fetch from remote source
/plan:
    environment-file:
      - https://example.org/project/environment.yaml
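
For illustration, the referenced files might contain something like this (the variable names are only examples):

# env (dotenv/shell format)
KOJI_TASK_ID=42890031
RELEASE=f33

# environment.yaml (yaml format)
KOJI_TASK_ID: 42890031
RELEASE: f33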

Status: implemented and verified

environment

Environment variables

Specifies environment variables available in all steps. Plugins need to include these environment variables while running commands or other programs. These environment variables should override the L1 environment if present. Use the environment-file key to load variables from files.

Examples:

/plan:
    environment:
        KOJI_TASK_ID: 42890031
        RELEASE: f33
    execute:
        script:
            - echo "Testing $KOJI_TASK_ID on release $RELEASE"

Status: implemented and verified

execute

Define how tests should be executed

Execute discovered tests in the provisioned environment using the selected test executor. By default tests are executed using the internal tmt executor which allows showing detailed progress of the testing and supports interactive debugging.

This is a required attribute. Each plan has to define this step.

For each test, a separate directory is created for storing artifacts related to the test execution. Its path is constructed from the test name and it's stored under the execute/data directory. It contains a metadata.yaml file with the aggregated L1 metadata which can be used by the test framework. In addition to the supported Tests attributes it also contains the fmf name of the test.
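
For example, for a test named /test/one the file might be stored as execute/data/test/one/metadata.yaml and contain roughly the following (a sketch only, reusing attributes from the discover step example above):

name: /test/one
summary: Short test summary.
test: tmt --help
path: /test/path/
duration: 5m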

For each plan the execute step should produce a results.yaml file with the list of results for each test in the following format:

/test/one:
    result: OUTCOME
    log: PATH

/test/two:
    result: OUTCOME
    log: PATH
    duration: DURATION

Where OUTCOME is the result of the test execution. It can have the following values:

pass
Test execution successfully finished and passed.
info
Test finished but only produced an informational message. Represents a soft pass, used for skipped tests and for tests with the result attribute set to ignore. Automation should treat this as a passed test.
warn
A problem appeared during test execution which does not affect test results but might be worth checking and fixing. For example, the test cleanup phase failed. Automation should treat this as a failed test.
error
Undefined problem encountered during test execution. Human inspection is needed to investigate whether it was a test bug, infrastructure error or a real test failure. Automation should treat it as a failed test.
fail
Test execution successfully finished and failed.

The PATH is the test log output path, relative to the execute step working directory. It can be a single string or a list of strings when multiple log files are available, in which case the first log is considered the main one (e.g. presented to the user for inspection).

The DURATION is an optional field stating how long the test ran. Its value is in the hh:mm:ss format.
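
Put together, a complete results.yaml file could look like this (paths and durations are illustrative):

/test/one:
    result: pass
    log: test/one/out.log

/test/two:
    result: fail
    log:
        - test/two/out.log
        - test/two/journal.txt
    duration: 00:01:28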

detach

Detached test executor

As a user I want to execute tests in a detached way.

The detach executor runs tests in one batch using a shell script directly executed on the provisioned guest. This can be useful when the connection to the guest is slow or the test execution takes a long time and you want to disconnect your laptop while keeping tests running in the background.

Examples:

execute:
    how: detach

Status: implemented and verified

isolate

Run tests in an isolated environment

Note

This is a draft, the story is not implemented yet.

Optional boolean attribute isolate can be used to request a clean test environment for each test.

Examples:

execute:
    how: tmt
    isolate: true

Status: idea

on

Execute tests on selected guests

Note

This is a draft, the story is not implemented yet.

In the multihost scenarios it is often necessary to execute test code on selected guests only or to execute different test code on individual guests. The on key allows selecting guests where the tests should be executed by providing their name or the role they play in the scenario. Use a list to specify multiple names or roles. By default, when the on key is not defined, tests are executed on all provisioned guests.

Examples:

# Execute discovered tests on the client
execute:
  - how: tmt
    on: client

# Run different script for each role
execute:
  - name: run-the-client-code
    script: client.py
    on: client
  - name: run-the-server-code
    script: server.py
    on: server

Status: idea

script

Execute shell scripts

As a user I want to easily run shell script as a test.

Execute arbitrary shell commands and check their exit code, which is used as the test result. Default shell options are applied to the script, see test for more details. The default duration for tests defined directly under the execute step is 1h. Use the duration attribute to modify the default limit.

Examples:

execute:
    how: tmt
    script: tmt --help

execute:
    how: tmt
    script: a-long-test-suite
    duration: 3h

Multi-line script

Multi-line shell script

Providing a multi-line shell script is also supported. In that case the executor will store the given script in a file and execute it.

Examples:

execute:
    script: |
        dnf -y install httpd curl
        systemctl start httpd
        echo foo > /var/www/html/index.html
        curl http://localhost/ | grep foo

Status: implemented

Multiple commands

Multiple shell commands

You can also include several commands as a list. The executor will run the commands one by one and check the exit code of each.

Examples:

execute:
    script:
        - dnf -y install httpd curl
        - systemctl start httpd
        - echo foo > /var/www/html/index.html
        - curl http://localhost/ | grep foo

Status: implemented

The simplest usage

Simple use case should be super simple to write

As the how keyword can be omitted when using the default executor, you can just define the shell script to be run. This is how a minimal smoke test configuration for the tmt command can look:

Examples:

execute:
    script: tmt --help

Status: implemented

tmt

Internal test executor

As a user I want to execute tests directly from tmt.

The internal tmt executor runs tests in the provisioned environment one by one directly from the tmt code, which allows features such as showing live progress or the interactive session. This is the default execute step implementation.

Examples:

execute:
    how: tmt

Status: implemented and verified

finish

Finishing tasks

Additional actions to be performed after the test execution has been completed. Counterpart of the prepare step, useful for various cleanup actions.

Examples:

finish:
    how: shell
    script: upload-logs.sh

Status: implemented

gate

Gates relevant for testing

Note

This is a draft, the story is not implemented yet.

Multiple gates can be defined in the process of releasing a change. Currently we define the following gates:

merge-pull-request
block merging a pull request into a git branch
add-build-to-update
block attaching a build to an erratum / bodhi update
add-build-to-compose
block adding a build to a compose
release-update
block releasing an erratum / bodhi update

Each plan can define one or more gates it should be blocking. Below is an example of configuring multiple gates.

Examples:

/test:
    /pull-request:
        /pep:
            summary: All code must comply with the PEP8 style guide
            # Do not allow ugly code to be merged into master
            gate:
                - merge-pull-request
        /lint:
            summary: Run pylint to catch common problems (no gating)
    /build:
        /smoke:
            summary: Basic smoke test (Tier1)
            # Basic smoke test is used by three gates
            gate:
                - merge-pull-request
                - add-build-to-update
                - add-build-to-compose
        /features:
            summary: Verify important features
    /update:
        # This enables the 'release-update' gate for all three plans
        gate:
            - release-update
        /basic:
            summary: Run all Tier1, Tier2 and Tier3 tests
        /security:
            summary: Security tests (extra job to get quick results)
        /integration:
            summary: Integration tests with related components

Status: idea

prepare

Prepare the environment for testing

The prepare step is used to define how the guest environment should be prepared so that the tests can be successfully executed.

The install plugin provides an easy way to install required or recommended packages from disk and from the official distribution or copr repositories. Use the ansible plugin for applying custom playbooks or execute shell scripts to perform arbitrary preparation tasks.

Use the order attribute to select in which order the preparation should happen if there are multiple configs. The default order is 50. Order 70 is used for the installation of required packages gathered from the require attribute of individual tests, order 75 for recommended packages.

Examples:

# Install fresh packages from a custom copr repository
prepare:
  - how: install
    copr: psss/tmt
    package: tmt-all

# Install required packages and start the service
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
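
The order attribute can be used to control which config runs first, for example to make sure a custom setup script runs before package installation (a sketch, the script name and order values are illustrative):

prepare:
  - name: custom-setup
    how: shell
    script: ./setup.sh
    order: 30
  - name: packages
    how: install
    package: [httpd]
    order: 60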

ansible

Apply ansible playbook to get the desired final state.

One or more playbooks can be provided as a list under the playbook attribute. Each of them will be applied using ansible-playbook in the given order. The path should be relative to the metadata tree root.

Examples:

prepare:
    how: ansible
    playbook:
        - playbooks/common.yml
        - playbooks/os/rhel7.yml

Status: implemented

install

Install packages on the guest

One or more RPM packages can be specified under the package attribute. The packages will be installed on the guest. They can be specified either by name or by path to a local rpm file.

Additionally, the directory attribute can be used to install all packages from the given directory. Copr repositories can be enabled using the copr attribute. Use the exclude option to skip selected packages from installation (globbing characters are supported as well).

It’s possible to change the behaviour when a package is missing using the missing attribute. Missing packages can either be silently ignored (‘skip’) or cause a preparation error (‘fail’), which is the default behaviour.

Examples:

prepare:
    how: install
    package:
        - tmp/RPMS/noarch/tmt-0.15-1.fc31.noarch.rpm
        - tmp/RPMS/noarch/python3-tmt-0.15-1.fc31.noarch.rpm

prepare:
    how: install
    directory:
        - tmp/RPMS/noarch
    exclude:
        - tmt-all
        - tmt-provision-virtual

prepare:
    how: install
    copr: psss/tmt
    package: tmt-all
    missing: fail
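
Missing packages can also be skipped silently, which may be handy when a package exists only on some distributions (the second package name is illustrative):

prepare:
    how: install
    package:
        - httpd
        - this-package-may-not-exist
    missing: skip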

Status: implemented and verified

on

Apply preparation on selected guests

Note

This is a draft, the story is not implemented yet.

In the multihost scenarios it is often necessary to perform different preparation tasks on individual guests. The on key allows selecting guests where the preparation should be applied by providing their name or the role they play in the scenario. Use a list to specify multiple names or roles. By default, when the on key is not defined, preparation is done on all provisioned guests.

Examples:

# Start Apache on the server
prepare:
  - how: shell
    script: systemctl start httpd
    on: server

# Apply common setup on the primary server and all replicas
prepare:
  - how: ansible
    playbook: common.yaml
    on: [primary, replica]

Status: idea

shell

Prepare system using shell commands

Execute arbitrary shell commands to set up the system. Default shell options are applied to the script, see test for more details.

Examples:

prepare:
    how: shell
    script: dnf install -y httpd

Status: implemented

provision

Provision a system for testing

Describes what environment is needed for testing and how it should be provisioned. Provides a generic and extensible way to write down essential hardware requirements, for example one consistent way to specify “at least 2 GB of RAM” for all supported provisioners. The step might fail if it cannot provision according to the constraints.

Examples:

provision:
    how: virtual
    image: fedora
    memory: 8 GB

beaker

Provision a machine in Beaker

Note

This is a draft, the story is not implemented yet.

Reserve a machine from the Beaker pool.

Examples:

provision:
    how: beaker
    family: Fedora31
    tag: released

Status: idea

connect

Connect to a provisioned box

Do not provision a new system. Instead, use provided authentication data to connect to a running machine.

guest
hostname or ip address of the box
user
user name to be used to log in, root by default
password
user password to be used to log in
key
path to the file with private key
port
use specific port to connect to

Examples:

provision:
    how: connect
    guest: hostname or ip address
    user: username
    password: password

provision:
    how: connect
    guest: hostname or ip address
    key: private-key-path
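
Connecting with a key through a non-standard port might look like this (values are illustrative):

provision:
    how: connect
    guest: hostname or ip address
    key: private-key-path
    port: 2222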

Status: implemented

container

Provision a container

Download (if necessary) and start a new container using podman or docker.

Examples:

provision:
    how: container
    image: fedora:latest

Status: implemented

hardware

Hardware specification

Note

This is a draft, the story is not implemented yet.

As part of the provision step it is possible to specify additional requirements for the testing environment. Individual requirements are provided as simple key: value pairs, for example the minimum amount of memory. Related information is grouped under a common parent, for example cores or model under the cpu key.

When multiple environment requirements are provided the provision implementation should attempt to satisfy all of them. It is also possible to write this explicitly using the and operator containing a list of dictionaries with individual requirements. When the or operator is used, any of the alternatives provided in the list should be sufficient.

Regular expressions can be used for selected fields such as the model_name. Please note that the full extent of regular expressions might not be supported across all provision implementations. The .* notation, however, should be supported everywhere.

The pint library is used for processing various units; both decimal and binary prefixes can be used:

1 MB = 1 000 000 B
1 MiB = 1 048 576 B

Note

Although the hardware specification is not implemented yet we do not expect any significant changes to the currently proposed format.

Examples:

# Basic key-value format used to specify the memory size
memory: 8 GB

# Processor-related stuff grouped together
cpu:
    processors: 2
    cores: 16
    model: 37

# Disk group used to allow possible future extensions
disk:
    space: 500 GB

# Optional operators at the start of the value
memory: '> 8 GB'
memory: '>= 8 GB'
memory: '< 8 GB'
memory: '<= 8 GB'

# By default exact value expected, these are equivalent:
cpu:
    model: 37
cpu:
    model: '= 37'

# Enable regular expression search
cpu:
    model_name: =~ .*AMD.*

# Using advanced logic operators
and:
  - cpu:
        family: 15
  - or:
      - cpu:
            model: 65
      - cpu:
            model: 67
      - cpu:
            model: 69

Status: idea

local

Use the localhost for testing

Do not provision any system. Tests will be executed directly on the localhost. Note that for some actions, like installing additional packages, you need root permissions or sudo enabled.

Examples:

provision:
    how: local

Status: implemented and verified

multihost

Multihost testing specification

Note

This is a draft, the story is not implemented yet.

As a part of the provision step it is possible to request multiple guests to be provisioned for testing. Each guest has to be assigned a unique name which is used to identify it.

The optional parameter role can be used to mark related guests so that common actions can be applied to all such guests at once. Example role names are client or server, but an arbitrary identifier can be used.

Both name and role can be used together with the on key to select guests on which the preparation tasks should be applied or where the test execution should take place.

Examples:

# Request two guests
provision:
  - name: server
    how: virtual
  - name: client
    how: virtual

# Assign role to guests
provision:
  - name: main-server
    role: primary
  - name: backup-one
    role: replica
  - name: backup-two
    role: replica
  - name: tester-one
    role: client
  - name: tester-two
    role: client

Status: idea

openstack

Provision a virtual machine in OpenStack

Note

This is a draft, the story is not implemented yet.

Create a virtual machine using OpenStack.

Examples:

provision:
    how: openstack
    image: f31

Status: idea

virtual

Provision a virtual machine (default)

Create a new virtual machine on the localhost using libvirt or another provider which is enabled in vagrant.

Examples:

provision:
    how: virtual
    image: fedora/31-cloud-base

Status: implemented

report

Report test results

As a tester I want to have a nice overview of results once the testing is finished.

Report test results according to user preferences.

display

Show results in the terminal window

As a tester I want to see test results in the plain text form in my shell session.

Test results will be displayed as part of the command line tool output directly in the terminal. Allows selecting the desired level of verbosity.

Examples:

tmt run -l report        # overall summary only
tmt run -l report -v     # individual test results
tmt run -l report -vv    # show full paths to logs
tmt run -l report -vvv   # provide complete test output

Status: implemented

file

Note

This is a draft, the story is not implemented yet.

Save the report into a report.yaml file with the following format:

result: OVERALL_RESULT
plans:
    /plan/one:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH

            /test/two:
                result: TEST_RESULT
                log:
                    - LOG_PATH
                    - LOG_PATH
                    - LOG_PATH
    /plan/two:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH

Where OVERALL_RESULT is the overall result of all plan results. It is counted the same way as PLAN_RESULT.

Where TEST_RESULT is the same as in the execute step definition:

  • info - test finished and produced only an information message
  • pass - test finished and passed
  • fail - test finished and failed
  • error - a problem encountered during test execution

Note that the priority of test results is as written above, with info having the lowest priority and error the highest. This is important for PLAN_RESULT.

Where PLAN_RESULT is the overall result of all test results for the plan run. It has the same values as TEST_RESULT. The plan result is counted according to the priority of the test outcome values. For example:

  • if the test results are info, pass, pass - the plan result will be pass
  • if the test results are info, pass, fail - the plan result will be fail
  • if the test results are fail, error, pass - the plan result will be error

Where LOG_PATH is the test log output path, relative to the execute step plan run directory. The log can be a single log path or a list of log paths, in case the test has produced multiple log files.

Status: idea

html

Generate a web page with test results

As a tester I want to review results in a nicely arranged web page with links to detailed test output.

Create a local html file with test results arranged in a table. Optionally open the page in the default browser.

Examples:

# Enable html report from the command line
tmt run --all report --how html
tmt run --all report --how html --open
tmt run -l report -h html -o

# Use html as the default report for given plan
report:
    how: html
    open: true

Status: implemented

summary

Concise summary describing the plan

Should shortly describe purpose of the test plan. One-line string with up to 50 characters. It is challenging to be both concise and descriptive, but that is what a well-written summary should do.

Examples:

/pull-request:
    /pep:
        summary: All code must comply with the PEP8 style guide
    /lint:
        summary: Run pylint to catch common problems (no gating)
/build:
    /smoke:
        summary: Basic smoke test (Tier1)
    /features:
        summary: Verify important features

Status: implemented