Steps

Test steps

As a user I want to run selected steps of the test execution.

There are several separate steps defined for the test execution. Clearly separating testing stages gives users control over their individual aspects. Each step makes it clear what can be influenced, and how, for that particular stage of the process.

This approach also makes it possible to run only selected steps when desired. For example, run only discover to see which tests would be executed, or skip provision and prepare when quickly testing on localhost (see the command line example below). Each step can be supported by multiple implementations. The special keyword how defines which implementation should be used. Default implementations are as follows:

  • discover: shell
  • provision: virtual
  • prepare: shell
  • execute: tmt
  • report: display
  • finish: shell
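
For example, using the tmt command line tool, individual steps can be selected directly on the command line. The commands below are shown for illustration only, see tmt run --help for the authoritative syntax:

    # Run the discover step only to see which tests would be executed
    tmt run discover

    # Enable all steps for the run
    tmt run --all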

The following directory structure is used to store work data under /var/lib/tmt for each run:

run-001
└── plans
    ├── basic
    │   ├── discover
    │   ├── execute
    │   ├── finish
    │   ├── prepare
    │   ├── provision
    │   └── report
    └── smoke
        ├── discover
        ├── execute
        ├── finish
        ├── prepare
        ├── provision
        └── report

Each step is responsible for storing the data required by the specification under its folder.

discover

Discover tests relevant for execution

Gather information about the tests which are supposed to be run. Provide the method tests() returning a list of discovered tests and requires() returning a list of all required packages aggregated from the require attribute of the individual test metadata.
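
To illustrate this contract, here is a minimal Python sketch. The class name and constructor are hypothetical and do not represent the actual tmt plugin API; they only demonstrate what tests() and requires() are expected to return:

    # Hypothetical illustration of the discover contract, not the real tmt API.
    class DiscoverExample:
        def __init__(self, discovered_tests):
            # List of test metadata dictionaries (see tests.yaml below)
            self._tests = discovered_tests

        def tests(self):
            """ Return the list of discovered tests """
            return self._tests

        def requires(self):
            """ Return packages aggregated from the 'require' attribute """
            packages = set()
            for test in self._tests:
                packages.update(test.get('require', []))
            return sorted(packages)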

Store the list of aggregated tests with their corresponding metadata in the tests.yaml file. The format should be a dictionary of dictionaries structured in the following way:

/test/one:
    summary: Short test summary.
    description: Long test description.
    contact: Petr Šplíchal <psplicha@redhat.com>
    component: [tmt]
    test: tmt --help
    path: /test/path/
    require: [package1, package2]
    environment:
        key1: value1
        key2: value2
        key3: value3
    duration: 5m
    enabled: true
    result: respect
    tag: [tag]
    tier: 1
    relevancy: |
        distro < rhel-8: False

/test/two:
    summary: Short test summary.
    description: Long test description.
    ...

fmf

Discover available tests using the fmf format

Use the Flexible Metadata Format to explore all available tests in the given repository. The following parameters are supported:

url
Git repository containing the metadata tree. The current git repository is used by default.
ref
Branch, tag or commit specifying the desired git revision. Defaults to the master branch if url is given, or to the current HEAD if url is not provided.
path
Path to the metadata tree root. Should be relative to the git repository root if url is provided, otherwise an absolute local filesystem path. By default . is used.
test
List of test names or regular expressions used to select tests by name.
filter
Apply advanced filter based on test metadata attributes. See pydoc fmf.filter for more info.

See also the fmf identifier documentation.

Examples:

discover:
    how: fmf

discover:
    how: fmf
    url: https://github.com/psss/tmt
    ref: master
    path: /metadata/tree/path
    test: [regexp]
    filter: tier:1

Status: implemented

shell

Provide a manual list of shell test cases

The list of test cases to be executed can be defined manually, directly in the plan, as a list of dictionaries containing the test name and the actual test script. Optionally it is possible to define any other Tests attributes such as path or duration here as well. The default duration for tests defined directly in the discover step is 1h.

Examples:

discover:
    how: shell
    tests:
    - name: /help/main
      test: tmt --help
    - name: /help/test
      test: tmt test --help
    - name: /help/smoke
      test: ./smoke.sh
      path: /tests/shell
      duration: 1m

Status: implemented

execute

Define how tests should be executed

Execute the discovered tests in the provisioned environment using the selected test executor. By default tests are executed using the internal tmt executor which makes it possible to show detailed progress of the testing and supports interactive debugging.

This is a required attribute. Each plan has to define this step.

For each test, a separate directory is created for storing artifacts related to the test execution. Its path is constructed from the test name and it is stored under the execute/data directory. It contains a metadata.yaml file with the aggregated L1 metadata which can be used by the test framework. In addition to the supported Tests attributes it also contains the fmf name of the test.
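
For instance, a test named /test/one would get a directory similar to the following under the execute step working directory. Only the metadata.yaml file is required by this specification; other artifacts such as the test output may be stored next to it:

    execute
    └── data
        └── test
            └── one
                └── metadata.yaml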

For each plan the execute step should produce a results.yaml file with the list of results for each test in the following format:

/test/one:
    result: OUTCOME
    log: PATH

/test/two:
    result: OUTCOME
    log: PATH
    duration: DURATION

Where OUTCOME is the result of the test execution. It can have the following values:

pass
Test execution successfully finished and passed.
info
Test finished but only produced an informational message. Represents a soft pass, used for skipped tests and for tests with the result attribute set to ignore. Automation should treat this as a passed test.
warn
A problem appeared during test execution which does not affect test results but might be worth checking and fixing. For example, the test cleanup phase failed. Automation should treat this as a failed test.
error
Undefined problem encountered during test execution. Human inspection is needed to investigate whether it was a test bug, infrastructure error or a real test failure. Automation should treat it as a failed test.
fail
Test execution successfully finished and failed.

The PATH is the test log output path, relative to the execute step working directory. It can be a single string or a list of strings when multiple log files are available, in which case the first log is considered the main one (e.g. presented to the user for inspection).

The DURATION is an optional section stating how long the test ran. Its value is in the hh:mm:ss format.

detach

Detached test executor

As a user I want to execute tests in a detached way.

The detach executor runs tests in one batch using a shell script directly executed on the provisioned guest. This can be useful when the connection to the guest is slow, or when the test execution takes a long time and you want to disconnect your laptop while keeping the tests running in the background.

Examples:

execute:
    how: detach

Status: implemented and tested

isolate

Run tests in an isolated environment

The optional boolean attribute isolate can be used to request a clean test environment for each test.

Examples:

execute:
    how: tmt
    isolate: true

Status: idea

shell

Execute shell scripts

As a user I want to easily run a shell script as a test.

Execute arbitrary shell commands and check their exit code which is used as the test result. The default duration for tests defined directly under the execute step is 1h. Use the duration attribute to modify the default limit.

Examples:

execute:
    how: tmt
    script: tmt --help

execute:
    how: tmt
    script: a-long-test-suite
    duration: 3h

multi

Multiple shell commands

You can also include several commands as a list. The executor will run the commands one by one and check the exit code of each.

Examples:

execute:
    script:
        - dnf -y install httpd curl
        - systemctl start httpd
        - echo foo > /var/www/html/index.html
        - curl http://localhost/ | grep foo

Status: implemented

script

Multi-line shell script

Providing a multi-line shell script is also supported. In that case the executor will store the given script into a file and execute it.

Examples:

execute:
    script: |
        dnf -y install httpd curl
        systemctl start httpd
        echo foo > /var/www/html/index.html
        curl http://localhost/ | grep foo

Status: implemented

simple

Simple use case should be super simple to write

As the how keyword can be omitted when using the default executor, you can just define the shell script to be run. This is how a minimal smoke test configuration for the tmt command can look:

Examples:

execute:
    script: tmt --help

Status: implemented

tmt

Internal test executor

As a user I want to execute tests directly from tmt.

The internal tmt executor runs tests in the provisioned environment one by one directly from the tmt code, which allows features such as showing live progress or an interactive session. This is the default execute step implementation.

Examples:

execute:
    how: tmt

Status: implemented and tested

finish

Finishing tasks

Additional actions to be performed after the test execution has been completed. Counterpart of the prepare step, useful for various cleanup actions.

Examples:

finish:
    how: shell
    command: upload-logs.sh

Status: idea

prepare

Prepare system for testing

Additional configuration of the provisioned environment needed for testing.

  • Install artifact (customizable according to user needs)
    • Conflicts between rpms
    • Optionally add debuginfo
    • Install with devel module
  • Additional setup possible if needed
    • Inject arbitrary commands
    • Before/after artifact installation

ansible

Apply ansible playbook to get the desired final state.

One or more playbooks can be provided as a list under the playbooks attribute. Each of them will be applied using ansible-playbook in the given order. The optional attribute roles can be used to enable additional roles. A role can be either an ansible galaxy role name, a git url or a path to a file with detailed requirements.

Examples:

prepare:
    how: ansible
    roles:
        - nginxinc.nginx
    playbooks:
        - common.yml
        - rhel7.yml

Status: implemented

install

Install packages on the guest

One or more RPM packages can be specified under the package attribute. The packages will be installed on the guest. They can be specified either using their names or as paths to local rpm files.

Additionally, the directory attribute can be used to install all packages from the given directory. Copr repositories can be enabled using the copr attribute. It is possible to change the behaviour when a package is missing using the missing attribute. Missing packages can either be silently ignored ('skip') or cause a preparation error ('fail'), which is the default behaviour.

Examples:

prepare:
    how: install
    package:
        - tmp/RPMS/noarch/tmt-0.15-1.fc31.noarch.rpm
        - tmp/RPMS/noarch/python3-tmt-0.15-1.fc31.noarch.rpm

prepare:
    how: install
    directory:
        - tmp/RPMS/noarch

prepare:
    how: install
    copr: psss/tmt
    package: tmt-all
    missing: fail

Status: implemented

shell

Prepare system using shell commands

Execute arbitrary shell commands to set up the system.

Examples:

prepare:
    how: shell
    script: dnf install -y httpd

Status: implemented

provision

Provision a system for testing

Describes what environment is needed for testing and how it should be provisioned. Provides a generic and extensible way to write down essential hardware requirements, for example one consistent way to specify “at least 2 GB of RAM” for all supported provisioners.

  • Possible grouping of test cases based on test case relevancy
  • Might fail if cannot provision according to the constraints

Examples:

provision:
    how: virtual
    image: fedora
    memory: 8 GB

beaker

Provision a machine in Beaker

Reserve a machine from the Beaker pool.

Examples:

provision:
    how: beaker
    family: Fedora31
    tag: released

Status: idea

connect

Connect to a provisioned box

Do not provision a new system. Instead, use the provided authentication data to connect to a running machine.

guest
hostname or ip address of the box
user
user name to be used to log in, root by default
password
user password to be used to log in
key
path to the file with private key

Examples:

provision:
    how: connect
    guest: hostname or ip address
    user: username
    password: password

provision:
    how: connect
    guest: hostname or ip address
    key: private-key-path

Status: implemented

container

Provision a container

Download (if necessary) and start a new container using podman or docker.

Examples:

provision:
    how: container
    image: fedora:latest

Status: implemented

environment

Environment specification

As part of the provision step it is possible to specify additional requirements for the testing environment. Individual requirements are provided as simple key: value pairs, for example the minimum amount of memory and disk space, or the related information is grouped under a common parent, for example cores or model under the cpu key.

When multiple environment requirements are provided the provision implementation should attempt to satisfy all of them. It is also possible to write this explicitly using the and operator containing a list of dictionaries with individual requirements. When the or operator is used, any of the alternatives provided in the list should be sufficient.

Regular expressions can be used for selected fields such as the model_name. Please note that the full extent of regular expressions might not be supported across all provision implementations. The .* notation, however, should be supported everywhere.

Examples:

# Simple use cases
memory: 8 GB
disk: 500 GB

# Processor-related stuff grouped together
cpu:
    processors: 2
    cores: 16
    model: 37

# Optional operators at the start of the value
memory: '> 8 GB'
memory: '>= 8 GB'
memory: '< 8 GB'
memory: '<= 8 GB'

# By default exact value expected, these are equivalent:
cpu:
    model: 37
cpu:
    model: '= 37'

# Enable regular expression search
cpu:
    model_name: =~ .*AMD.*

# Using advanced logic operators
and:
  - cpu:
        family: 15
  - or:
      - cpu:
            model: 65
      - cpu:
            model: 67
      - cpu:
            model: 69

Status: idea

local

Use the localhost for testing

Do not provision any system. Tests will be executed directly on the localhost. Note that for some actions like installing additional packages you need root permissions or sudo enabled.

Examples:

provision:
    how: local

Status: implemented

minute

Provision a virtual machine using the 1minutetip OpenStack backend

First check whether a prereserved machine exists using the 1minutetip backend. Use it if it exists, otherwise boot a new machine using the OpenStack backend.

Note that you need to have the 1minutetip script installed in order for this provision method to work.

Examples:

provision:
    how: minute
    image: 1MT-Fedora-32
    flavor: m1.large

Status: implemented

openstack

Provision a virtual machine in OpenStack

Create a virtual machine using OpenStack.

Examples:

provision:
    how: openstack
    image: f31

Status: idea

virtual

Provision a virtual machine (default)

Create a new virtual machine on the localhost using libvirt or another provider which is enabled in vagrant.

Examples:

provision:
    how: virtual
    image: fedora/31-cloud-base

Status: implemented

report

Report test results

Create a report for all the test results. The report is saved into the report.yaml file with the following format:

result: OVERALL_RESULT
plans:
    /plan/one:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH

            /test/two:
                result: TEST_RESULT
                log:
                    - LOG_PATH
                    - LOG_PATH
                    - LOG_PATH
    /plan/two:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH

Where OVERALL_RESULT is the overall result of all plan results. It is counted the same way as PLAN_RESULT.

Where TEST_RESULT is the same as in the execute step definition:

  • info - test finished and produced only information message
  • passed - test finished and passed
  • failed - test finished and failed
  • error - a problem encountered during test execution

Note that the priority of test results is as written above, with info having the lowest priority and error the highest. This is important for counting the PLAN_RESULT.

Where PLAN_RESULT is the overall result of all test results for the plan run. It has the same values as TEST_RESULT. The plan result is counted according to the priority of the test outcome values. For example (a small sketch of this aggregation follows the list below):

  • if the test results are info, passed, passed - the plan result will be passed
  • if the test results are info, passed, failed - the plan result will be failed
  • if the test results are failed, error, passed - the plan result will be error
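
The following minimal Python sketch, assuming the priority order described above, illustrates how the aggregation works. The helper is purely illustrative and not part of tmt:

    # Illustrative only: aggregate a plan result from individual test results.
    # Priority order as described above: info < passed < failed < error.
    PRIORITY = ['info', 'passed', 'failed', 'error']

    def plan_result(test_results):
        """ Return the outcome with the highest priority """
        return max(test_results, key=PRIORITY.index)

    # The examples from the list above
    assert plan_result(['info', 'passed', 'passed']) == 'passed'
    assert plan_result(['info', 'passed', 'failed']) == 'failed'
    assert plan_result(['failed', 'error', 'passed']) == 'error'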

Where LOG_PATH is the test log output path, relative to the execute step plan run directory. The log can be a single log path or a list of log paths in case the test has produced multiple log files.

This step also defines additional notification settings, which can be used by CI or reporting systems to enable and customize notifications. The following values are planned:

email
send email notification
irc
notify on irc chat

Examples:

report:
    email:
        - email@address.org

Status: idea