As a user I want to run selected steps of the test execution.
There are several separate steps defined for the test execution. Clearly separating testing stages gives users control over their individual aspects. Each step makes it clear what can be influenced at that particular stage of the process and how.
This approach also allows running only selected steps when desired, for example running discover alone to see which tests would be executed, or skipping the provisioning to quickly run tests directly on localhost. Each step can be supported by multiple implementations. The special keyword how selects which implementation should be used. Default implementations are as follows:
- discover: shell
- provision: virtual
- prepare: shell
- execute: tmt
- report: display
- finish: shell
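For illustration, a plan might override some of these defaults using the how keyword. This is only a sketch; the setup.yml playbook name is made up:

discover:
    how: fmf
provision:
    how: container
    image: fedora:latest
prepare:
    how: ansible
    playbook: setup.yml
execute:
    how: tmt

Individual steps can then be selected on the command line as well, for example tmt run discover to only show the discovered tests.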
The following directory structure is used to store work data
under /var/lib/tmt for each run:

run-001
└── plans
    ├── basic
    │   ├── discover
    │   ├── execute
    │   ├── finish
    │   ├── prepare
    │   ├── provision
    │   └── report
    └── smoke
        ├── discover
        ├── execute
        ├── finish
        ├── prepare
        ├── provision
        └── report
Each step is responsible for storing the data required by the specification under its own folder.
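For example, the tests.yaml file produced by the discover step (described below) would live under that step's directory; an illustrative path:

run-001/plans/basic/discover/tests.yaml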
Discover tests relevant for execution
Gather information about tests which are supposed to be run. The discover step should provide a tests() method returning a list of discovered tests and a requires() method returning a list of all required packages aggregated from the require attribute of the individual test metadata.
Store the list of aggregated tests with their corresponding
metadata in the
tests.yaml file. The format should be a
dictionary of dictionaries structured in the following way:
/test/one:
    summary: Short test summary.
    description: Long test description.
    contact: Petr Šplíchal <firstname.lastname@example.org>
    component: [tmt]
    test: tmt --help
    path: /test/path/
    require: [package1, package2]
    environment:
        key1: value1
        key2: value2
        key3: value3
    duration: 5m
    enabled: true
    result: respect
    tag: [tag]
    tier: 1

/test/two:
    summary: Short test summary.
    description: Long test description.
    ...
Discover available tests using the fmf format
Use the Flexible Metadata Format to explore all available tests in the given repository. The following parameters are supported:
- url: Git repository containing the metadata tree. Current git repository used by default.
- ref: Branch, tag or commit specifying the desired git revision. Defaults to the master branch if url is given or to the current HEAD if url is not provided.
- path: Path to the metadata tree root. Should be relative to the git repository root if url is provided, an absolute local filesystem path otherwise. By default '.' (the current working directory) is used.
- test: List of test names or regular expressions used to select tests by name.
- filter: Apply advanced filter based on test metadata. See pydoc fmf.filter for more info.
See also the fmf identifier documentation.
It is also possible to limit tests only to those that have changed in git since a given revision. This can be particularly useful when testing changes to tests themselves (e.g. in a pull request CI).
Related config options (all optional):
- modified-only: Set to True if you want to filter modified tests only.
- modified-url: Will be fetched as a "reference" remote in the test directory.
- modified-ref: The ref to compare against, the master branch is used by default.
discover:
    how: fmf

discover:
    how: fmf
    url: https://github.com/psss/tmt
    ref: master
    path: /metadata/tree/path
    test: [regexp]
    filter: tier:1
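The modified-tests filtering described above could then look like this (a sketch assuming the modified-only, modified-url and modified-ref option names listed earlier):

discover:
    how: fmf
    modified-only: true
    modified-url: https://github.com/psss/tmt
    modified-ref: master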
Provide a manual list of shell test cases
A list of test cases to be executed can be defined manually directly in the plan as a list of dictionaries containing the test name and the actual test script. Optionally it is possible to define any other Tests attributes such as duration here as well. The default duration for tests defined directly in the discover step is 1h.
discover:
    how: shell
    tests:
      - name: /help/main
        test: tmt --help
      - name: /help/test
        test: tmt test --help
      - name: /help/smoke
        test: ./smoke.sh
        path: /tests/shell
        duration: 1m
Define how tests should be executed
Execute the discovered tests in the provisioned environment using the selected test executor. By default tests are executed using the tmt executor which allows showing detailed progress of the testing and supports interactive debugging.
This is a required attribute. Each plan has to define this step.
For each test, a separate directory is created for storing artifacts related to the test execution. Its path is constructed from the test name and it's stored under the execute/data directory. It contains a file with the aggregated L1 metadata which can be used by the test framework. In addition to the supported Tests attributes it also contains the fmf name of the test.
For each plan the execute step should produce a results.yaml file with the list of results for each test in the following format:
/test/one:
    result: OUTCOME
    log: PATH

/test/two:
    result: OUTCOME
    log: PATH
    duration: DURATION
OUTCOME is the result of the test execution. It can
have the following values:
- pass: Test execution successfully finished and passed.
- info: Test finished but only produced an informational message. Represents a soft pass, used for skipped tests and for tests with the result attribute set to ignore. Automation should treat this as a passed test.
- warn: A problem appeared during test execution which does not affect test results but might be worth checking and fixing. For example the test cleanup phase failed. Automation should treat this as a failed test.
- error: Undefined problem encountered during test execution. Human inspection is needed to investigate whether it was a test bug, an infrastructure error or a real test failure. Automation should treat it as a failed test.
- fail: Test execution successfully finished and failed.
PATH is the test log output path, relative to the
execute step working directory. It can be a single string
or a list of strings when multiple log files are available, in
which case the first log will be considered the main one
(e.g. presented to the user for inspection).
DURATION is an optional section stating how long the test ran. Its value is in the hh:mm:ss format.
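Putting the format together, a filled-in results file might look like this (illustrative values, the log file names are made up):

/test/one:
    result: pass
    log: one/output.txt

/test/two:
    result: fail
    log:
      - two/output.txt
      - two/journal.xml
    duration: 00:02:14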
Detached test executor
As a user I want to execute tests in a detached way.
The detach executor runs tests in one batch using a shell script executed directly on the provisioned guest. This can be useful when the connection to the guest is slow or the test execution takes a long time and you want to disconnect your laptop while keeping the tests running in the background.
execute:
    how: detach
Status: implemented and verified
Run tests in an isolated environment
The optional boolean attribute isolate can be used to request a clean test environment for each test.

execute:
    how: tmt
    isolate: true
Execute shell scripts
As a user I want to easily run a shell script as a test.
Execute arbitrary shell commands and check their exit code which is used as the test result.
The default duration for tests defined directly under the execute step is
1h. Use the
duration attribute to modify the default limit.
execute:
    how: tmt
    script: tmt --help

execute:
    how: tmt
    script: a-long-test-suite
    duration: 3h
Multi-line shell script
Providing a multi-line shell script is also supported. In that case the executor will store the given script into a file and execute it.
execute:
    script: |
        dnf -y install httpd curl
        systemctl start httpd
        echo foo > /var/www/html/index.html
        curl http://localhost/ | grep foo
Multiple shell commands
You can also include several commands as a list. The executor will run the commands one by one and check the exit code of each.
execute:
    script:
      - dnf -y install httpd curl
      - systemctl start httpd
      - echo foo > /var/www/html/index.html
      - curl http://localhost/ | grep foo
The simplest usage
Simple use cases should be super simple to write.
As the how keyword can be omitted when using the default executor, you can just define the shell script to be run. This is what a minimal smoke test configuration for the tmt command can look like:
execute:
    script: tmt --help
Internal test executor
As a user I want to execute tests directly from tmt.
The tmt executor runs tests in the provisioned environment one by one directly from the tmt code, which allows features such as showing live progress or an interactive debugging session. This is the default execute step implementation.
execute:
    how: tmt
Status: implemented and verified
Finish up the testing
Additional actions to be performed after the test execution has been completed. Counterpart of the prepare step, useful for various cleanup actions.
finish:
    how: shell
    script: upload-logs.sh
Prepare system for testing
Additional configuration of the provisioned environment needed for testing.
- Install artifact (customizable according to user needs)
    - Conflicts between rpms
    - Optionally add debuginfo
    - Install with devel module
- Additional setup possible if needed (see the sketch below)
    - Inject arbitrary commands
    - Before/after artifact installation
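To illustrate the setup before and after artifact installation mentioned above, a prepare step could combine several phases. This is only a sketch, assuming the step accepts a list of phase dictionaries; the script and package names are placeholders:

prepare:
  - how: shell
    script: pre-install-setup.sh
  - how: install
    package: example-artifact
  - how: shell
    script: post-install-setup.sh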
Apply ansible playbook to get the desired final state.
One or more playbooks can be provided as a list under the playbook attribute. Each of them will be applied using ansible-playbook in the given order. The optional attribute role can be used to enable additional roles. A role can be either an ansible galaxy role name, a git url or a path to a file with detailed requirements.
prepare:
    how: ansible
    role:
      - nginxinc.nginx
    playbook:
      - common.yml
      - rhel7.yml
Install packages on the guest
One or more RPM packages can be specified under the package attribute. The packages will be installed on the guest. They can either be specified using their names or as paths to local rpm files. The directory attribute can be used to install all packages from the given directory. Copr repositories can be enabled using the copr attribute.
It's possible to change the behaviour when a package is missing using the missing attribute. The missing packages can either be silently ignored ('skip') or a preparation error is thrown ('fail'), which is the default.
prepare:
    how: install
    package:
      - tmp/RPMS/noarch/tmt-0.15-1.fc31.noarch.rpm
      - tmp/RPMS/noarch/python3-tmt-0.15-1.fc31.noarch.rpm

prepare:
    how: install
    directory:
      - tmp/RPMS/noarch

prepare:
    how: install
    copr: psss/tmt
    package: tmt-all
    missing: fail
Prepare system using shell commands
Execute arbitrary shell commands to set up the system.
prepare:
    how: shell
    script: dnf install -y httpd
Provision a system for testing
Describes what environment is needed for testing and how it should be provisioned. Provides a generic and extensible way to write down essential hardware requirements, for example one consistent way to specify "at least 2 GB of RAM" for all supported provisioners. Provisioning might fail if the constraints cannot be satisfied.
provision:
    how: virtual
    image: fedora
    memory: 8 GB
Provision a machine in Beaker
Reserve a machine from the Beaker pool.
provision:
    how: beaker
    family: Fedora31
    tag: released
Connect to a provisioned box
Do not provision a new system. Instead, use provided authentication data to connect to a running machine.
- guest: hostname or ip address of the box
- user: user name to be used to log in, root by default
- password: user password to be used to log in
- key: path to the file with the private key
provision:
    how: connect
    guest: hostname or ip address
    user: username
    password: password

provision:
    how: connect
    guest: hostname or ip address
    key: private-key-path
Provision a container
Download (if necessary) and start a new container using podman or docker.
provision:
    how: container
    image: fedora:latest
As part of the provision step it is possible to specify additional requirements for the testing environment. Individual requirements are provided as simple key: value pairs, for example the minimum amount of disk space, or the related information is grouped under a common parent, for example model under the cpu key.
When multiple environment requirements are provided the provision implementation should attempt to satisfy all of them. It is also possible to write this explicitly using the and operator containing a list of dictionaries with individual requirements. When the or operator is used, any of the alternatives provided in the list should be satisfied.
Regular expressions can be used for selected fields such as model_name. Please note that the full extent of regular expressions might not be supported across all provision implementations. The .* notation, however, should be supported everywhere.
# Simple use cases
memory: 8 GB
disk: 500 GB

# Processor-related stuff grouped together
cpu:
    processors: 2
    cores: 16
    model: 37

# Optional operators at the start of the value
memory: '> 8 GB'
memory: '>= 8 GB'
memory: '< 8 GB'
memory: '<= 8 GB'

# By default exact value expected, these are equivalent:
cpu:
    model: 37
cpu:
    model: '= 37'

# Enable regular expression search
cpu:
    model_name: =~ .*AMD.*

# Using advanced logic operators
and:
  - cpu:
        family: 15
  - or:
      - cpu:
            model: 65
      - cpu:
            model: 67
      - cpu:
            model: 69
Use the localhost for testing
Do not provision any system. Tests will be executed directly on localhost. Note that for some actions like installing additional packages you need root permissions or sudo enabled.
provision:
    how: local
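It can also be handy to force local provisioning from the command line without editing the plan. Assuming the --how option works for provision the same way as shown for report at the end of this section, this might look like:

tmt run --all provision --how local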
Status: implemented and verified
Provision a virtual machine using the 1minutetip OpenStack backend
First check whether a prereserved machine exists using the 1minutetip backend. Use it if it exists, otherwise boot a new machine using the OpenStack backend.
Note that you need to have the 1minutetip script installed in order for this provision method to work.
provision:
    how: minute
    image: 1MT-Fedora-32
    flavor: m1.large
Provision a virtual machine in OpenStack
Create a virtual machine using OpenStack.
provision:
    how: openstack
    image: f31
Provision a virtual machine (default)
Create a new virtual machine on the localhost using libvirt or another provider which is enabled in vagrant.
provision:
    how: virtual
    image: fedora/31-cloud-base
Report test results
As a tester I want to have a nice overview of results once the testing is finished.
Report test results according to user preferences.
Show results in the terminal window
As a tester I want to see test results in the plain text form in my shell session.
Test results will be displayed as part of the command line tool output directly in the terminal. Allows selecting the desired level of verbosity.
tmt run -l report          # overall summary only
tmt run -l report -v       # individual test results
tmt run -l report -vv      # show full paths to logs
tmt run -l report -vvv     # provide complete test output
Save the report into a report.yaml file with the following format:
result: OVERALL_RESULT
plans:
    /plan/one:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH
            /test/two:
                result: TEST_RESULT
                log:
                  - LOG_PATH
                  - LOG_PATH
                  - LOG_PATH
    /plan/two:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log: LOG_PATH
OVERALL_RESULT is the overall result of all plan results. It is counted the same way as PLAN_RESULT.
TEST_RESULT is the same as in the execute step:
- info - test finished and produced only information message
- passed - test finished and passed
- failed - test finished and failed
- error - a problem encountered during test execution
Note that the priority of test results is as listed above, info having the lowest priority and error the highest. This is important for counting the overall plan result.
PLAN_RESULT is the overall result of all test results for the plan run. It has the same values as TEST_RESULT. The plan result is counted according to the priority of the test outcome values. For example:
- if the test results are info, passed, passed - the plan result will be passed
- if the test results are info, passed, failed - the plan result will be failed
- if the test results are failed, error, passed - the plan result will be error
LOG_PATH is the test log output path, relative to the execute step plan run directory. The log can be a single log path or a list of log paths, in case the test produced multiple log files.
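As an illustration of the counting rules above, a filled-in report might look like this (values are made up):

result: error
plans:
    /plan/one:
        result: passed
        tests:
            /test/one:
                result: info
                log: one/output.txt
            /test/two:
                result: passed
                log: two/output.txt
    /plan/two:
        result: error
        tests:
            /test/one:
                result: error
                log: one/output.txt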
Generate a web page with test results
As a tester I want to review results in a nicely arranged web page with links to detailed test output.
Create a local
html file with test results arranged in a table. Optionally open the page in the default browser.
tmt run --all report --how html
tmt run --all report --how html --open
tmt run -l report -h html -o