Plans

As a tester I want to easily execute selected tests or selected test steps in a given environment.

Plans, also called L2 metadata, are used to group relevant tests and enable the CI. They describe how to discover tests for execution, how to provision the environment, how to prepare it for testing, how to execute tests, report results and finally how to finish the test job.

Each of the six steps mentioned above supports multiple implementations. The default methods are listed below.

Thanks to clearly separated test steps it is possible to run only selected steps, for example tmt run discover to see which tests would be executed.
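For instance, a couple of command-line sketches based on the step names above, running either a single step or a selected subset:

tmt run discover                            # see which tests would be executed
tmt run discover provision prepare finish   # run selected steps only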

In addition to the attributes defined here, plans also support common Core attributes which are shared across all metadata levels.

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo

context

Definition of the test execution context

As a user I want to define the default context in which tests included in given plan should be executed.

Define the default context values for all tests executed under the given plan. Can be overridden by a context provided directly on the command line. See the Context definition for the full list of supported context dimensions. Must be a dictionary.

Note

The context dimensions which are defined in the plan are not evaluated when the adjust rules are applied to the plan itself. Use the command line option --context to provide the desired context instead.
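For illustration, a hedged sketch of providing the context on the command line, reusing the collection dimension from the example below:

tmt --context collection=httpd24 run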

Examples:

summary:
    Test basic functionality of the httpd24 collection
discover:
    how: fmf
execute:
    how: tmt
context:
    collection: httpd24
summary:
    Verify dash against the shared shell tests repository
discover:
    how: fmf
    url: https://src.fedoraproject.org/tests/shell
execute:
    how: tmt
context:
    component: dash

Status: implemented

discover

Discover tests relevant for execution

Gather information about tests which are supposed to be run. Provide method tests() returning a list of discovered tests and requires() returning a list of all required packages aggregated from the require attribute of the individual test metadata.

Store the list of aggregated tests with their corresponding metadata in the tests.yaml file. The format must be a list of dictionaries structured in the following way:

- name: /test/one
  summary: Short test summary.
  description: Long test description.
  contact: Petr Šplíchal <psplicha@redhat.com>
  component: [tmt]
  test: tmt --help
  path: /test/path/
  require: [package1, package2]
  environment:
      key1: value1
      key2: value2
      key3: value3
  duration: 5m
  enabled: true
  result: respect
  tag: [tag]
  tier: 1
  serial-number: 1

- name: /test/two
  summary: Short test summary.
  description: Long test description.
  ...

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo

dist-git-source

Download rpm sources for dist-git repositories

Downloads the source files specified in the sources file of a DistGit (Fedora, CentOS) repository. A plan using this option has to be defined in a DistGit repository, or the url option needs to point to the root of such a repository.

All source files are available for further use in the directory provided in the TMT_SOURCE_DIR variable.

Patches are applied by the rpmbuild -bp command which runs in the prepare step on the provisioned guest, with order 60.

New in version 1.32.

dist-git-source

Download and (optionally) extract the sources.

dist-git-type

Use the provided DistGit handler instead of the auto detection. Useful when running from forked repositories.

dist-git-download-only

When true, sources are just downloaded. Neither rpmbuild -bp nor installation of require or builddeps happens.

New in version 1.29.

dist-git-require

List of packages to install before rpmbuild -bp is attempted on the sources. The rpm-build package itself is installed automatically.

New in version 1.32.

dist-git-install-builddeps

When set to true, build requirements are installed.

New in version 1.32.

The fmf plugin supports additional dist-git options, see its documentation for details.
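For illustration, a hedged sketch using dist-git-require to install extra packages before rpmbuild -bp runs (the package names and the test are made up):

discover:
    how: shell
    dist-git-source: true
    dist-git-require:
      - autoconf
      - automake
    tests:
      - name: /sources
        test: ls $TMT_SOURCE_DIR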

Note

In order to discover which tests would be executed, without actually running them, it is necessary to enable the provision and prepare steps as well:

tmt run -v discover provision prepare finish

Examples:

# Download & extract sources from another repo, print
# single file as a test
discover:
    how: shell
    url: https://src.fedoraproject.org/rpms/tmt
    dist-git-source: true
    tests:
      - name: /print/pyproject
        test: cat $TMT_SOURCE_DIR/tmt-*/pyproject.toml
# Just download sources, the test is responsible for rpmbuild
# and running tests
discover:
    how: shell
    dist-git-source: true
    dist-git-download-only: true
    tests:
      - name: /unit
        test: >
            rpmbuild -bp
                --define "_sourcedir $TMT_SOURCE_DIR"
                --define "_builddir $TMT_SOURCE_DIR/BUILD"
                $TMT_SOURCE_DIR/*.spec &&
            cd $TMT_SOURCE_DIR/BUILD/* &&
            make test
        require:
          - rpm-build

Status: implemented

fmf

Discover available tests using the fmf format

Use the Flexible Metadata Format to explore all available tests in the given repository. The following parameters are supported:

url

Git repository containing the metadata tree. Current git repository used by default.

ref

Branch, tag or commit specifying the desired git revision. Defaults to the remote repository’s default branch if url is given, or to the current HEAD if url is not provided.

Additionally, one can set ref dynamically. This is possible using a special file in tmt format stored in the default branch of a tests repository. This special file should contain rules assigning the ref attribute in an adjust block, for example depending on the test run context.

Dynamic ref assignment is enabled whenever a test plan reference has the format ref: @FILEPATH.
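For illustration, a hedged sketch of how this might look (the file path, branch names and the distro rule below are made up):

# In the plan, point ref to a file stored in the remote repository
discover:
    how: fmf
    url: https://github.com/teemtee/tmt
    ref: "@.tmt/ref.fmf"

# Content of .tmt/ref.fmf in the default branch
ref: main
adjust:
  - when: distro == fedora
    ref: fedora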

path

Path to the metadata tree root. Must be relative to the git repository root if url is provided, an absolute local filesystem path otherwise. By default . is used.

See also the fmf identifier documentation for details. Use the following keys to limit the test discovery by test name, an advanced filter or link:

test

List of test names or regular expressions used to select tests by name. Duplicate test names are allowed to enable repetitive test execution, preserving the listed test order.

link

Select tests using the link keys. Values must be in the form of RELATION:TARGET, tests containing at least one of them are selected. Regular expressions are supported for both relation and target. Relation part can be omitted to match all relations.

filter

Apply advanced filter based on test metadata attributes. See pydoc fmf.filter for more info.

exclude

Exclude tests which match a regular expression.

prune

Copy only immediate directories of executed tests and their required files.

New in version 1.29.

It is also possible to limit tests only to those that have changed in git since a given revision. This can be particularly useful when testing changes to tests themselves (e.g. in a pull request CI). Related config options (all optional) are:

modified-only

Set to true if you want to filter modified tests only. The test is modified if its name starts with the name of any directory modified since modified-ref.

modified-url

An additional remote repository to be used as the reference for comparison. Will be fetched as a reference remote in the test dir.

modified-ref

The ref to compare against. Defaults to the local repository’s default branch. Note that you need to specify reference/<branch> to compare to a branch from the repository specified in modified-url.

Use the dist-git-source options to download rpm sources for dist-git repositories. In addition to the common dist-git-source features, the fmf plugin also supports the following options:

dist-git-init

Set to true to initialize fmf root inside extracted sources at dist-git-extract location or top directory. To be used when the sources contain fmf files (for example tests) but do not have an associated fmf root.

dist-git-remove-fmf-root

Set to true to remove any fmf root present in the sources. dist-git-init can be used to create it later at desired location.

dist-git-merge

Set to true to combine the fmf root from the sources and the fmf root from the plan. This allows having plans and tests defined in the DistGit repo which use tests and other resources from the downloaded sources. Any plans in the extracted sources will not be processed.

dist-git-extract

Path specifying what should be copied from the sources. Defaults to the top fmf root or the top directory /.
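A hedged sketch combining these options, keeping plans in the DistGit repository while using tests shipped inside the source tarball (the extract path is illustrative):

discover:
    how: fmf
    dist-git-source: true
    dist-git-merge: true
    dist-git-extract: /tests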

Examples:

# Discover all fmf tests in the current repository
discover:
    how: fmf
# Fetch tests from a remote repo, filter by name/tier
discover:
    how: fmf
    url: https://github.com/teemtee/tmt
    ref: main
    path: /metadata/tree/path
    test: [regexp]
    filter: tier:1
# Choose tests verifying given issue
discover:
    how: fmf
    link: verifies:issues/123$
# Select only tests which have been modified
discover:
    how: fmf
    modified-only: true
    modified-url: https://github.com/teemtee/tmt-official
    modified-ref: reference/main
# Extract tests from the distgit sources
discover:
    how: fmf
    dist-git-source: true

Status: implemented

shell

Provide a manual list of shell test cases

The list of test cases to be executed can be defined manually, directly in the plan, as a list of dictionaries containing the test name and the actual test script. Optionally, any other test attributes such as path or duration can be defined here as well. The default duration for tests defined directly in the discover step is 1h.

It is possible to fetch code from a remote git repository using url. In that case the repository is cloned and all paths are relative to the remote git root. Using a remote repo and local test code at the same time is not possible within the same discover config; use Multiple Configs instead.

url

Git repository, used directly as a git clone argument.

ref

Branch, tag or commit specifying the desired git revision. Defaults to the remote repository’s default branch.

keep-git-metadata

By default the .git directory is removed to save disk space. Set to true to sync the git metadata to the guest as well. Implicit if dist-git-source is used.

Use the dist-git-source options to download rpm sources for dist-git repositories.

Examples:

# Define several local tests
discover:
    how: shell
    tests:
      - name: /help/main
        test: tmt --help
      - name: /help/test
        test: tmt test --help
      - name: /help/smoke
        test: ./smoke.sh
        path: /tests/shell
        duration: 1m
# Fetch tests from a remote repository
discover:
    how: shell
    url: https://github.com/teemtee/tmt
    tests:
      - name: Use tests/full/test.sh from the remote repo
        path: /tests/full
        test: ./test.sh
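A hedged sketch using keep-git-metadata so that the synced tests can inspect the git history (the test command is illustrative):

# Keep the git metadata available to the tests
discover:
    how: shell
    url: https://github.com/teemtee/tmt
    keep-git-metadata: true
    tests:
      - name: /git/log
        test: git log --oneline -3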

Status: implemented

where

Execute tests on selected guests

In the multihost scenarios it is often necessary to execute test code on selected guests only or execute different test code on individual guests. The where key allows selecting guests on which the tests should be executed by providing their name or the role they play in the scenario. Use a list to specify multiple names or roles. By default, when the where key is not defined, tests are executed on all provisioned guests.

There is also an alternative syntax using a where dictionary which encapsulates the discover config under keys corresponding to guest names or roles. This can result in a much more concise config, especially when defining several shell scripts for each guest or role.

Examples:

# Run different script for each guest or role
discover:
    how: shell
    tests:
      - name: run-the-client-code
        test: client.py
        where: client
      - name: run-the-server-code
        test: server.py
        where: server
# Filter different set of tests for each guest or role
discover:
  - how: fmf
    filter: tag:client-tests
    where: client
  - how: fmf
    filter: tag:server-tests
    where: server
# Alternative syntax using the 'where' dictionary
# encapsulating the config for tests defined by fmf
discover:
    where:
        client:
          - how: fmf
            filter: tag:client-tests
        server:
          - how: fmf
            filter: tag:server-tests
# Alternative syntax using the 'where' dictionary
# encapsulating the config for shell script tests
discover:
    where:
        server:
            how: shell
            tests:
              - test: first server script
              - test: second server script
              - test: third server script
        client:
            how: shell
            tests:
              - test: first client script
              - test: second client script
              - test: third client script

Status: implemented, verified and documented

environment-file

Environment variables from files

In addition to the environment key it is also possible to provide environment variables in a file. Supported formats are dotenv/shell with KEY=VALUE pairs and yaml. A full url can be used to fetch variables from a remote source. The environment key has a higher priority. The file path must be relative to the metadata tree root.
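For illustration, the referenced files might look like this (the variable names are made up):

# Content of the 'env' file (dotenv/shell format)
RELEASE=f33
KOJI_TASK_ID=42890031

# Content of the 'environment.yaml' file (yaml format)
RELEASE: f33
KOJI_TASK_ID: 42890031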

Examples:

# Load from a dotenv/shell format
/plan:
    environment-file:
      - env

# Load from a yaml format
/plan:
    environment-file:
      - environment.yml
      - environment.yaml

# Fetch from remote source
/plan:
    environment-file:
      - https://example.org/project/environment.yaml

Status: implemented and verified

environment

Environment variables

Specifies environment variables available in all steps. Plugins need to include these environment variables while running commands or other programs. These environment variables override the test environment if present. The command line option --environment can be used to override environment variables defined in both tests and plans. Use the environment-file key to load variables from files. The environment+ notation can be used to extend the environment defined in the parent plan; see also the Inherit Plans section for more examples.

Examples:

# Environment variables defined in a plan
environment:
    KOJI_TASK_ID: 42890031
    RELEASE: f33
execute:
    script: echo "Testing $KOJI_TASK_ID on release $RELEASE"
# Share common variables across plans using inheritance
/plans:
    environment:
        COMMON: This is a common variable content

    /mini:
        environment+:
            VARIANT: mini
    /full:
        environment+:
            VARIANT: full
# Variables from the command line
tmt run --environment X=1 --environment Y=2
tmt run --environment "X=1 Y=2"
# Make sure to quote properly values which include spaces
tmt run --environment "BUGS='123 456 789'"

Status: implemented and verified

execute

Define how tests should be executed

Execute discovered tests in the provisioned environment using the selected test executor. By default tests are executed using the internal tmt executor, which allows showing detailed progress of the testing and supports interactive debugging.

This is a required attribute. Each plan has to define this step.

For each test, a separate directory is created for storing artifacts related to the test execution. Its path is constructed from the test name and it’s stored under the execute/data directory. It contains a metadata.yaml file with the aggregated L1 metadata which can be used by the test framework. In addition to supported Tests attributes it also contains fmf name of the test.

In each plan, the execute step must produce a results.yaml file with results for executed tests. The format of the file is described at Results Format.

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo

exit-first

Stop execution after a test fails

As a user I want to avoid waiting for all discovered tests to finish if one of them fails.

Optional boolean attribute exit-first can be used to make the executor stop executing tests once a test failure is encountered.

Examples:

execute:
    how: tmt
    exit-first: true

Status: implemented and verified

isolate

Run tests in an isolated environment

Note

This is a draft, the story is not implemented yet.

Optional boolean attribute isolate can be used to request a clean test environment for each test.

Examples:

execute:
    how: tmt
    isolate: true

Status: idea

script

Execute shell scripts

As a user I want to easily run shell script as a test.

Execute arbitrary shell commands and check their exit code, which is used as the test result. The script field is provided to cover simple test use cases only and must not be combined with the discover step, which is more suitable for complex test scenarios.

Default shell options are applied to the script, see test for more details. The default duration for tests defined directly under the execute step is 1h. Use the duration attribute to modify the default limit.

Examples:

# Run a simple smoke test
execute:
    how: tmt
    script: tmt --help
# Modify the default maximum duration
execute:
    how: tmt
    script: a-long-test-suite
    duration: 3h

Multi-line script

Multi-line shell script

Providing a multi-line shell script is also supported. Note that the first command with a non-zero exit code will finish the execution. See the test key for details about default shell options.

Examples:

execute:
    script: |
        dnf -y install httpd curl
        systemctl start httpd
        echo foo > /var/www/html/index.html
        curl http://localhost/ | grep foo

Status: implemented

Multiple commands

Multiple shell commands

You can also include several commands as a list. The executor will run the commands one by one and check the exit code of each.

Examples:

execute:
    script:
      - dnf -y install httpd curl
      - systemctl start httpd
      - echo foo > /var/www/html/index.html
      - curl http://localhost/ | grep foo

Status: implemented

The simplest usage

Simple use case should be super simple to write

As the how keyword can be omitted when using the default executor, you can just define the shell script to be run. This is what a minimal smoke test configuration for the tmt command can look like:

Examples:

execute:
    script: tmt --help

Status: implemented

tmt

Internal test executor

As a user I want to execute tests directly from tmt.

The internal tmt executor runs tests in the provisioned environment one by one directly from the tmt code, which allows features such as showing live progress or an interactive session. This is the default execute step implementation.

The executor provides the following shell scripts which can be used by the tests for certain operations.

tmt-file-submit

Archive the given file in the tmt test data directory. See the Save a log file section for more details.

tmt-reboot

Soft reboot the machine from inside the test. After reboot the execution starts from the test which rebooted the machine. An environment variable TMT_REBOOT_COUNT is provided which the test can use to handle the reboot. The variable holds the number of reboots performed by the test. For more information see the Reboot during test feature documentation.

tmt-report-result

Generate a result report file from inside the test. Can be called multiple times by the test. The generated report file will be overwritten if a higher hierarchical result is reported by the test. The hierarchy is as follows: SKIP, PASS, WARN, FAIL. For more information see the Report test result feature documentation.

tmt-abort

Generate an abort file from inside the test. This will set the current test result to failed and terminate the execution of subsequent tests. For more information see the Abort test execution feature documentation.
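For illustration, a minimal sketch of how a bash-based test might use the tmt-reboot script together with the TMT_REBOOT_COUNT variable described above:

# Reboot the guest once, then continue after the reboot
if [ "$TMT_REBOOT_COUNT" -eq 0 ]; then
    echo "Preparing for the reboot"
    tmt-reboot
fi
echo "Verifying the system after the reboot"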

Examples:

execute:
    how: tmt

Status: implemented and verified

upgrade

Perform system upgrades during testing

As a tester I want to verify that a configured application or service still correctly works after the system upgrade.

In order to enable developing tests for upgrade testing, we need to provide a way to execute these tests easily. This does not cover unit tests for individual actors but rather system tests which verify the whole upgrade story.

The upgrade executor runs the discovered tests (using the internal executor, hence the same config options can be used), then performs a set of upgrade tasks from a remote repository, and finally, re-runs the tests on the upgraded system.

The IN_PLACE_UPGRADE environment variable is set during the test execution to differentiate between the stages of the test. It is set to old during the first execution and new during the second execution. Test names are prefixed with this value to make the names unique. Based on this variable, the test can perform appropriate actions.

  • old: setup, test

  • new: test, cleanup

  • without: setup, test, cleanup

The upgrade tasks performing the actual system upgrade are taken from a remote repository (specified by the url key) based on an upgrade path (specified by the upgrade-path key) or other filters (e.g. specified by the filter key). If both upgrade-path and extra filters are specified, the discover keys in the remote upgrade path plan are overridden by the filters specified in the local plan.

The upgrade path must correspond to a plan name in the remote repository whose discover selects tests (upgrade tasks). The environment variables defined in the upgrade path are passed to the upgrade tasks. If the url is not provided, upgrade path and upgrade tasks are taken from the current repository.

Examples:

# Main testing plan
discover:
    how: fmf
execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    upgrade-path: /paths/fedora35to36
# Upgrade path /paths/fedora35to36.fmf in the remote repository
discover: # Selects appropriate upgrade tasks (L1 tests)
    how: fmf
    filter: "tag:fedora"
environment: # This is passed to upgrade tasks
    SOURCE: 35
    TARGET: 36
execute:
    how: tmt
# Alternative main testing plan, without upgrade path
execute:
    how: upgrade
    url: https://github.com/teemtee/upgrade
    filter: "tag:fedora"
# A simple beakerlib test using the $IN_PLACE_UPGRADE variable
. /usr/share/beakerlib/beakerlib.sh || exit 1

VENV_PATH=/var/tmp/venv_test

rlJournalStart
    # Perform the setup only for the old distro
    if [[ "$IN_PLACE_UPGRADE" !=  "new" ]]; then
        rlPhaseStartSetup
            rlRun "python3.9 -m venv $VENV_PATH"
            rlRun "$VENV_PATH/bin/pip install pyjokes"
        rlPhaseEnd
    fi

    # Execute the test for both old & new distro
    rlPhaseStartTest
        rlAssertExists "$VENV_PATH/bin/pyjoke"
        rlRun "$VENV_PATH/bin/pyjoke"
    rlPhaseEnd

    # Skip the cleanup phase when on the old distro
    if [[ "$IN_PLACE_UPGRADE" !=  "old" ]]; then
        rlPhaseStartCleanup
            rlRun "rm -rf $VENV_PATH"
        rlPhaseEnd
    fi
rlJournalEnd

Status: implemented and verified

finish

Finishing tasks

Additional actions to be performed after the test execution has been completed. A counterpart of the prepare step, useful for various cleanup actions. Use the order attribute to select in which order the finishing tasks should happen if there are multiple configs. The default order is 50.

Examples:

finish:
    how: shell
    script: upload-logs.sh

ansible

Perform finishing tasks using ansible

One or more playbooks can be provided as a list under the playbook attribute. Each of them will be applied using ansible-playbook in the given order. The path must be relative to the metadata tree root.

Remote playbooks can be referenced as well as the local ones, and both kinds can be used at the same time.

Examples:

finish:
    how: ansible
    playbook:
        - playbooks/common.yml
        - playbooks/os/rhel7.yml
        - https://foo.bar/rhel7-final-touches.yml

Status: implemented

shell

Perform finishing tasks using shell (bash) scripts

Execute arbitrary shell commands to finish the testing. Default shell options are applied to the script, see the test key specification for more details.

Examples:

finish:
    how: shell
    script:
    - upload-logs.sh || true
    - rm -rf /tmp/temporary-files

Status: implemented

where

Perform finishing tasks on selected guests

In the multihost scenarios it is often necessary to perform different finishing tasks on individual guests. The where key allows selecting guests on which the finishing tasks should be applied by providing their name or the role they play in the scenario. Use a list to specify multiple names or roles. By default, when the where key is not defined, finishing tasks are performed on all provisioned guests.

Examples:

# Stop Apache on the server
finish:
  - how: shell
    script: systemctl stop httpd
    where: server

Status: implemented and documented

prepare

Prepare the environment for testing

The prepare step is used to define how the guest environment should be prepared so that the tests can be successfully executed.

The install plugin provides an easy way to install required or recommended packages from disk and from the official distribution or copr repositories. Use the ansible plugin for applying custom playbooks, or execute shell scripts to perform arbitrary preparation tasks.

Use the order attribute to select in which order the preparation should happen if there are multiple configs. The default order is 50. For installation of required packages gathered from the require attribute of individual tests, order 70 is used; for recommended packages it is 75. The dist-git prepare happens before those, with order 60.

Note

If you want to use the prepare step to generate data files needed for testing during the execute step, move or copy them into ${TMT_PLAN_DATA} directory. Only files in this directory are guaranteed to be preserved.
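For example, a hedged sketch of a prepare config storing generated data in the plan data directory (the generated content is illustrative):

prepare:
  - how: shell
    script: echo "generated input" > ${TMT_PLAN_DATA}/input.txt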

Examples:

# Install fresh packages from a custom copr repository
prepare:
  - how: install
    copr: psss/tmt
    package: tmt+all

# Install required packages and start the service
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd

ansible

Apply ansible playbook to get the desired final state.

One or more playbooks can be provided as a list under the playbook attribute. Each of them will be applied using ansible-playbook in the given order. The path must be relative to the metadata tree root. Use the extra-args attribute to pass additional arguments to ansible-playbook. Remote playbooks can be referenced as well as the local ones, and both kinds can be used at the same time.

Examples:

prepare:
    how: ansible
    playbook:
        - playbooks/common.yml
        - playbooks/os/rhel7.yml
        - https://foo.bar/rhel7-final-touches.yml
    extra-args: '-vvv'

Status: implemented and verified

feature

Easily enable and disable common features

The feature plugin provides a comfortable way to enable and disable some commonly used functionality. For now, enabling and disabling the epel repository is supported; crb and fips are coming in the near future.

New in version 1.31.

Examples:

prepare:
    how: feature
    epel: enabled

Status: implemented and verified

install

Install packages on the guest

One or more RPM packages can be specified under the package attribute. The packages will be installed on the guest. They can be specified using their names, paths to local rpm files, or urls to remote rpms.

Additionally, the directory attribute can be used to install all packages from the given directory. Copr repositories can be enabled using the copr attribute. Use the exclude option to skip selected packages from installation (globbing characters are supported as well).

It is possible to change the behaviour when a package is missing using the missing attribute. Missing packages can either be silently ignored ('skip') or cause a preparation error ('fail'), which is the default behaviour.

Examples:

# Install local rpms using file path
prepare:
    how: install
    package:
      - tmp/RPMS/noarch/tmt-0.15-1.fc31.noarch.rpm
      - tmp/RPMS/noarch/python3-tmt-0.15-1.fc31.noarch.rpm
# Install remote packages using url
prepare:
    how: install
    package:
      - https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
      - https://dl.fedoraproject.org/pub/epel/epel-next-release-latest-8.noarch.rpm
# Install the whole directory, exclude selected packages
prepare:
    how: install
    directory:
      - tmp/RPMS/noarch
    exclude:
      - tmt+all
      - tmt+provision-virtual
# Enable copr repository, skip missing packages
prepare:
    how: install
    # Repository with a group owner (@ prefixed) requires quotes, e.g.
    # copr: "@osci/rpminspect"
    copr: psss/tmt
    package: tmt-all
    missing: skip

Status: implemented and verified

shell

Prepare system using shell (bash) commands

Execute arbitrary shell commands to set up the system. Default shell options are applied to the script, see the test key specification for more details.

Examples:

prepare:
    how: shell
    script: dnf install -y httpd

Status: implemented

where

Apply preparation on selected guests

In the multihost scenarios it is often necessary to perform different preparation tasks on individual guests. The where key allows selecting guests on which the preparation should be applied by providing their name or the role they play in the scenario. Use a list to specify multiple names or roles. By default, when the where key is not defined, preparation is done on all provisioned guests.

Examples:

# Start Apache on the server
prepare:
  - how: shell
    script: systemctl start httpd
    where: server

# Apply common setup on the primary server and all replicas
prepare:
  - how: ansible
    playbook: common.yaml
    where: [primary, replica]

Status: implemented, verified and documented

provision

Provision a system for testing

Describes what environment is needed for testing and how it should be provisioned. There are several provision plugins supporting multiple ways to provision the environment for testing, for example virtual, container, connect, local or artemis. See individual plugin documentation for details about supported options.

As part of the provision step it is also possible to specify detailed hardware requirements for the testing environment. See the Hardware specification section for details.

As part of the provision step it is also possible to specify a kickstart file used during the installation. See the kickstart specification section for details.

Examples:

# Provision a local virtual machine with the latest Fedora
provision:
    how: virtual
    image: fedora

beaker

Provision a machine in Beaker

Reserve a machine from the Beaker pool using the mrack plugin. mrack is a multicloud provisioning library supporting multiple cloud services including Beaker.

The following two files are used for configuration:

/etc/tmt/mrack.conf

for basic configuration

/etc/tmt/provisioning-config.yaml

configuration per supported provider

Beaker installs the distribution specified by the image key. If the image cannot be translated using the provisioning-config.yaml file, mrack passes the image value to the Beaker hub and tries to request a distribution based on it. This way the default translations can be bypassed and the desired distribution requested directly, as in the example below.

New in version 1.22.

Examples:

# Use image name translation
provision:
    how: beaker
    image: fedora
# Specify the distro directly
provision:
    how: beaker
    image: Fedora-37%
# Set custom whiteboard description (added in 1.30)
provision:
    how: beaker
    whiteboard: Just a smoke test for now

Status: implemented and verified

connect

Connect to a provisioned box

Do not provision a new system. Instead, use provided authentication data to connect to a running machine.

guest

hostname or ip address of the box

user

user name to be used to log in, root by default

become

whether to run scripts with sudo, ignored if user is already root, false by default

password

user password to be used to log in

key

path to the file with private key

port

use specific port to connect to

Examples:

# Connect to existing box with username and password
provision:
    how: connect
    guest: hostname or ip address
    user: username
    password: password
# Connect with user ``fedora`` using a key, and use
# ``sudo`` to run scripts
provision:
    how: connect
    guest: hostname or ip address
    user: fedora
    become: true
    key: private-key-path
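A sketch combining the key and port options listed above (the values are illustrative):

# Connect using a custom ssh port and a private key
provision:
    how: connect
    guest: hostname or ip address
    key: private-key-path
    port: 2222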

Status: implemented

container

Provision a container

Download (if necessary) and start a new container using podman or docker.

Examples:

# Use the fedora:latest image
provision:
    how: container
    image: fedora:latest
# Use an image with a non-root user with sudo privileges,
# and run scripts with sudo.
provision:
    how: container
    image: image with non-root user with sudo privileges
    user: tester
    become: true

Status: implemented

local

Use the localhost for testing

Do not provision any system. Tests will be executed directly on the localhost. Note that for some actions like installing additional packages you need root permissions or sudo enabled.

Examples:

provision:
    how: local

Status: implemented and verified

multihost

Multihost testing specification

Changed in version 1.24.

As a part of the provision step it is possible to request multiple guests to be provisioned for testing. Each guest has to be assigned a unique name which is used to identify it.

The optional parameter role can be used to mark related guests so that common actions can be applied to all such guests at once. An example role name can be client or server, but an arbitrary identifier can be used.

Both name and role can be used together with the where key to select guests on which the preparation tasks should be applied or where the test execution should take place.

See Guest Topology Format for details on how this information is exposed to tests and prepare and finish tasks.

Examples:

# Request two guests
provision:
  - name: server
    how: virtual
  - name: client
    how: virtual

# Assign role to guests
provision:
  - name: main-server
    role: primary
  - name: backup-one
    role: replica
  - name: backup-two
    role: replica
  - name: tester-one
    role: client
  - name: tester-two
    role: client

Status: implemented and verified

openstack

Provision a virtual machine in OpenStack

Note

This is a draft, the story is not implemented yet.

Create a virtual machine using OpenStack.

Examples:

provision:
    how: openstack
    image: f31

Status: idea

virtual

Provision a virtual machine (default)

Create a new virtual machine on the localhost using testcloud (libvirt). Testcloud takes care of downloading an image and making necessary changes to it for optimal experience (such as disabling UseDNS and GSSAPI for SSH).

Examples:

# Provision a fedora virtual machine
provision:
    how: virtual
    image: fedora
# Provision a virtual machine from a specific QCOW2 file,
# using specific memory and disk settings, using the fedora user,
# and using sudo to run scripts.
provision:
    how: virtual
    image: https://download.fedoraproject.org/pub/fedora/linux/releases/39/Cloud/x86_64/images/Fedora-Cloud-Base-39-1.5.x86_64.qcow2
    user: fedora
    become: true
    # in MB
    memory: 2048
    # in GB
    disk: 30

Status: implemented

artemis

Provision a guest via an Artemis service

Reserve a machine using the Artemis service. Users can specify many requirements, mostly regarding the desired OS, RAM, disk size and more. Most of the HW specifications defined in the Hardware section are supported, including the kickstart.

Artemis takes machines from AWS, OpenStack, Beaker or Azure. By default, Artemis selects a cloud provider to the best of its abilities based on the required specification. However, it is possible to use the pool keyword to select the desired cloud provider.

Artemis project: https://gitlab.com/testing-farm/artemis

Note

When used together with the TF infrastructure, some of the options from the first example below will be filled in for you by the TF service.

Examples:

provision:
    how: artemis
    # Specify the Artemis URL where the service is running.
    # Here is an example of a local Artemis instance
    api-url: "http://127.0.0.1:8001/v0.0.56"
    api-version: "0.0.56"
    image: Fedora-37
    # ssh key used to connect to the machine
    keyname: master-key
provision:
    how: artemis
    # How long (seconds) to wait for guest provisioning to complete
    provision-timeout: 300
    # How often (seconds) check Artemis API for provisioning status
    provision-tick: 40
    # How long (seconds) to wait for API operations to complete
    api-timeout: 15
    # How many attempts to use when talking to API
    api-retries: 5
    # How long (seconds) before the guest "is-alive" watchdog is dispatched
    watchdog-dispatch-delay: 200
    # How often (seconds) check that the guest "is-alive"
    watchdog-period-delay : 500
provision:
    how: artemis
    arch: x86_64
    pool: beaker
    hardware:
        # Pick a guest with at least 8 GB RAM
        memory: ">= 8 GB"

Status: implemented

kickstart

As a tester I want to specify detailed installation of a guest using the kickstart script.

Note

This is a draft, the story is not implemented yet.

As part of the provision step it is possible to use the kickstart key to specify additional requirements for the installation of a guest. For example, it is possible to provide a kickstart script that defines specific partitioning.

The structure of a kickstart file is separated into several sections.

pre-install

Corresponds to the %pre section of a file. It can contain bash commands, this part is run before the installation of a guest.

post-install

Corresponds to the %post section of a file. It can contain bash commands, this part is run after the installation of a guest.

script

Contains the kickstart specific commands that are run during the installation of a guest.

It is also possible to specify metadata. This part may be interpreted differently for each of the pools that the guest is created from. For example, in Beaker this section can be used to modify the default kickstart template used by Beaker. The kernel-options and kernel-options-post keys work similarly. Kernel options are passed on the kernel command line when the installer is booted. Post-install kernel options are set in the boot loader configuration, to be passed on the kernel command line after installation.

Note

The implementation for the kickstart key is in progress. Support of a kickstart file is currently limited to Beaker provisioning, as implemented by tmt’s beaker and artemis plugins, and may not be fully supported by other provisioning plugins in the future. Check individual plugin documentation for additional information on the kickstart support.

Examples:

# Use the artemis plugin to provision a guest from Beaker.
# The following `kickstart` specification will be run
# during the guest installation.
provision:
    how: artemis
    pool: beaker
    image: rhel-7
    kickstart:
        pre-install: |
            %pre --log=/dev/console
            disk=$(lsblk | grep disk | awk '{print $1}')
            echo $disk
            %end
        script: |
            lang en_US.UTF-8
            zerombr
            clearpart --all --initlabel
            part /boot --fstype="xfs" --size=200
            part swap --fstype="swap" --size=4096
            part / --fstype="xfs" --size=10000 --grow
        post-install: |
            %post
            systemctl disable firewalld
            %end
        metadata: |
            "no-autopart harness=restraint"
        kernel-options: "ksdevice=eth1"
        kernel-options-post: "quiet"

Status: idea

report

Report test results

As a tester I want to have a nice overview of results once the testing is finished.

Report test results according to user preferences.

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo

display

Show results in the terminal window

As a tester I want to see test results in the plain text form in my shell session.

Test results will be displayed as part of the command line tool output directly in the terminal. Allows selecting the desired level of verbosity.

Examples:

tmt run -l report        # overall summary only
tmt run -l report -v     # individual test results
tmt run -l report -vv    # show full paths to logs
tmt run -l report -vvv   # provide complete test output

Status: implemented

file

Note

This is a draft, the story is not implemented yet.

Save the report into a report.yaml file with the following format:

result: OVERALL_RESULT
plans:
    /plan/one:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log:
                  - LOG_PATH

            /test/two:
                result: TEST_RESULT
                log:
                    - LOG_PATH
                    - LOG_PATH
                    - LOG_PATH
    /plan/two:
        result: PLAN_RESULT
        tests:
            /test/one:
                result: TEST_RESULT
                log:
                  - LOG_PATH

Where OVERALL_RESULT is the overall result of all plan results. It is counted the same way as PLAN_RESULT.

Where TEST_RESULT is the same as in execute step definition:

  • info - test finished and produced only information message

  • passed - test finished and passed

  • failed - test finished and failed

  • error - a problem encountered during test execution

Note the priority of test results is as written above, with info having the lowest priority and error the highest. This is important for PLAN_RESULT.

Where PLAN_RESULT is the overall result of all test results for the plan run. It has the same values as TEST_RESULT. The plan result is counted according to the priority of the test outcome values. For example:

  • if the test results are info, passed, passed - the plan result will be passed

  • if the test results are info, passed, failed - the plan result will be failed

  • if the test results are failed, error, passed - the plan result will be error

Where LOG_PATH is the test log output path, relative to the execute step plan run directory. The log key will be a list of such paths, even if there is just a single log.

Examples:

# Enable a minimal smoke test
execute:
    script: foo --version

# Run tier one tests in a container
discover:
    how: fmf
    filter: tier:1
provision:
    how: container
execute:
    how: tmt

# Verify that apache can serve pages
summary: Basic httpd smoke test
provision:
    how: virtual
    memory: 4096
prepare:
  - name: packages
    how: install
    package: [httpd, curl]
  - name: service
    how: shell
    script: systemctl start httpd
execute:
    how: shell
    script:
    - echo foo > /var/www/html/index.html
    - curl http://localhost/ | grep foo

Status: idea

html

Generate a web page with test results

As a tester I want to review results in a nicely arranged web page with links to detailed test output.

Create a local html file with test results arranged in a table. Optionally open the page in the default browser.

Examples:

# Enable html report from the command line
tmt run --all report --how html
tmt run --all report --how html --open
tmt run -l report -h html -o

# Use html as the default report for given plan
report:
    how: html
    open: true

Status: implemented

junit

Generate a JUnit report file

As a tester I want to review results in a JUnit xml file.

Create a JUnit file junit.xml with test results.

Examples:

# Enable junit report from the command line
tmt run --all report --how junit
tmt run --all report --how junit --file test.xml

# Use junit as the default report for given plan
report:
    how: junit
    file: test.xml

Status: implemented

polarion

Generate an xUnit file and export it into Polarion

As a tester I want to review tests in Polarion and have all results linked to existing test cases there.

Write test results into an xUnit file and upload to Polarion.

In order to get started quickly, create a pylero config file ~/.pylero in your home directory with the following content:

[webservice]
url=https://{your polarion web URL}/polarion
svn_repo=https://{your polarion web URL}/repo
default_project={your project name}
user={your username}
password={your password}

See the Pylero Documentation for more details on how to configure the pylero module.

Note

Your Polarion project might need a custom value format for the arch, planned-in and other fields. The format of these fields might differ across Polarion projects, for example, x8664 can be used instead of x86_64 for the architecture.

Examples:

# Enable polarion report from the command line
tmt run --all report --how polarion --project-id tmt
tmt run --all report --how polarion --project-id tmt --no-upload --file test.xml
# Use polarion as the default report for given plan
report:
    how: polarion
    file: test.xml
    project-id: tmt
    title: tests_that_pass
    planned-in: RHEL-9.1.0
    pool-team: sst_tmt

Status: implemented

reportportal

Report test results to a ReportPortal instance

As a tester I want to review results in a nicely arranged web page, filter them via context attributes and get links to detailed test output and other test information.

Provide test results and fmf data for each plan, and send them to a ReportPortal instance via its API, with the token, url and project name given.

Examples:

# Optionally set environment variables according to TMT_PLUGIN_REPORT_REPORTPORTAL_${OPTION}
export TMT_PLUGIN_REPORT_REPORTPORTAL_URL=${url-to-RP-instance}
export TMT_PLUGIN_REPORT_REPORTPORTAL_TOKEN=${token-from-RP-profile}
# Enable ReportPortal report from the command line depending on the use case:

## Simple upload with all project, url endpoint and user token passed in command line
tmt run --all report --how reportportal --project=baseosqe --url="https://reportportal.xxx.com" --token="abc...789"

## Simple upload with url and token exported in environment variable
tmt run --all report --how reportportal --project=baseosqe

## Upload with project name in fmf data, filtering out parameters (environment variables) that tend to be unique and break the history aggregation
tmt run --all report --how reportportal --exclude-variables="^(TMT|PACKIT|TESTING_FARM).*"

## Upload all plans as suites into one ReportPortal launch
tmt run --all report --how reportportal --suite-per-plan --launch=Errata --launch-description="..."

## Rerun the launch with suite structure for the test results to be uploaded into the latest launch with the same name as a new 'Retry' tab (mapping based on unique paths)
tmt run --all report --how reportportal --suite-per-plan --launch=Errata --launch-rerun

## Rerun the tmt run and append the new result logs under the previous one uploaded in ReportPortal (precise mapping)
tmt run --id run-012 --all report --how reportportal --again

## Additional upload of new suites into given launch with suite structure
tmt run --all report --how reportportal --suite-per-plan --upload-to-launch=4321

## Additional upload of new tests into given launch with non-suite structure
tmt run --all report --how reportportal --launch-per-plan --upload-to-launch=1234

## Additional upload of new tests into given suite
tmt run --all report --how reportportal --upload-to-suite=123456

## Upload Idle tests, then execute them and add result logs into the prepared empty tests
tmt run discover report --how reportportal --defect-type=Idle
tmt run --last --all report --how reportportal --again
# Use ReportPortal as the default report for given plan
report:
    how: reportportal
    project: baseosqe

# Report context attributes for given plan
context:
    ...
# Report description, contact, id and environment variables for given test
summary: ...
contact: ...
id: ...
environment:
    ...

Status: implemented

summary

Concise summary describing the plan

Should briefly describe the purpose of the test plan. Must be a one-line string, ideally up to 50 characters long. It is challenging to be both concise and descriptive, but that is what a well-written summary should do.

Examples:

/pull-request:
    /pep:
        summary: All code must comply with the PEP8 style guide
    /lint:
        summary: Run pylint to catch common problems (no gating)
/build:
    /smoke:
        summary: Basic smoke test (Tier1)
    /features:
        summary: Verify important features

Status: implemented

Import Plans

Importing plans from a remote repository

As a user I want to reference a plan from a remote repository in order to prevent duplication and minimize maintenance.

In some cases the configuration stored in a plan can be quite large, for example the prepare step can define complex scripts to set up the guest for testing. Using a reference to a remote plan makes it possible to reuse the same config in multiple places without the need to duplicate the information. This can be useful for example when enabling integration testing between related components.

Remote plans are identified by the plan key which must contain an import dictionary with an fmf identifier of the remote plan. The url and name keys have to be defined, ref and path are optional. Only one remote plan can be referenced and a full plan name must be used (no string matching is applied).

Additionally, one can utilize dynamic ref assignment when importing a plan in order to avoid hardcoding ref value in the importing plan. See the Dynamic ref Evaluation section for usage details and examples.

Plan steps must not be defined in the remote plan reference. Inheriting or overriding remote plan config with local plan steps might be possible in the future but is currently not supported. The imported plan can be modified in only two ways. The first way is via environment variables: variables defined in the plan override any variables defined in the remote plan.

The imported plan can also be altered using the enabled key. If the local plan is enabled, it will follow the status of the remote plan – whether it’s enabled or disabled. If the local plan is disabled, the remote plan will also be disabled. Adjust rules are respected during this process.

New in version 1.19.

Examples:

# Minimal reference is using 'url' and 'name'
plan:
    import:
        url: https://github.com/teemtee/tmt
        name: /plans/features/basic
# A 'ref' can be used to select specific branch or commit
plan:
    import:
        url: https://github.com/teemtee/tmt
        name: /plans/features/basic
        ref: fedora
# Use 'path' when fmf tree is deeper in the git repository
plan:
    import:
        url: https://github.com/teemtee/tmt
        path: /examples/httpd
        name: /smoke
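A hedged sketch passing an environment variable to the imported plan, overriding whatever the remote plan defines (the variable is illustrative):

# Override variables defined in the remote plan
plan:
    import:
        url: https://github.com/teemtee/tmt
        name: /plans/features/basic
environment:
    VARIANT: mini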

Status: implemented and verified

Results Format

Define format of on-disk storage of results

The following text defines a YAML file structure tmt uses for storing results. tmt itself will use it when saving results of the execute step, and custom test results are required to follow it when creating their results.yaml file.

Tests may choose JSON instead of YAML for their custom results file and create a results.json file, but tmt itself will always stick to YAML; the final results will be provided in the results.yaml file in any case.

Results are saved as a single list of dictionaries, each describing a single test result.

 # String, name of the test.
 name: /test/one

 # fmf ID of the test.
 fmf_id:
   url: http://some.git.host.com/project/tests.git
   name: /test/one
   path: /

 # String, outcome of the test execution.
 result: "pass"|"fail"|"info"|"warn"|"error"|"skip"

 # String, optional comment to report with the result.
 note: "Things were great."

 # List of strings, paths to file logs.
 log:
   - path/to/log1
   - path/to/log2
     ...

 # Mapping, collection of various test IDs, if there are any to track.
 ids:
   some-id: foo
   another-id: bar

 # String, when the test started, in an ISO 8601 format.
 start-time: "yyyy-mm-ddThh:mm:ss.mmmmm+ZZ:ZZ"

 # String, when the test finished, in an ISO 8601 format.
 end-time: "yyyy-mm-ddThh:mm:ss.mmmmm+ZZ:ZZ"

 # String, how long the test ran.
 duration: hh:mm:ss

 # Integer, serial number of the test in the sequence of all tests of a plan.
 serial-number: 1

 # Mapping, describes the guest on which the test was executed.
 guest:
   name: client-1
   role: clients

 # String, path to /data directory storing possible test artifacts
 data-path: path/to/test/data

 # Mapping, stores the actual fmf context defined for this test.
 # It's a combination of the context provided via command line
 # and plan's `context` key.
 context:
   some-dimension:
     - its-value
   another-dimension:
     - first-value
     - second-value
       ...

# Represents results of all test checks executed as driven by test's `check`
# key. Fields have the same meaning as fields of the "parent" test result, but
# relate to each check alone.
check:
    # String, outcome of the test execution.
  - result: "pass"|"fail"|"info"|"warn"|"error"|"skip"

    # String, optional comment to report with the result.
    note: "Things were great."

    # List of strings, paths to file logs.
    log:
      - path/to/check/log1
      - path/to/check/log2
        ...

    # String, when the check started, in an ISO 8601 format.
    start-time: "yyyy-mm-ddThh:mm:ss.mmmmm+ZZ:ZZ"

    # String, when the check finished, in an ISO 8601 format.
    end-time: "yyyy-mm-ddThh:mm:ss.mmmmm+ZZ:ZZ"

    # String, how long the check ran.
    duration: hh:mm:ss

    # String, name of the check. Corresponds to the name of the check
    # specified in test metadata.
    name: dummy

    # String, the place in test workflow when the check was executed.
    event: "before-test"|"after-test"

The result key can have the following values:

pass

Test execution successfully finished and passed.

info

Test finished but only produced an informational message. Represents a soft pass, used for tests with the result attribute set to info. Automation must treat this as a passed test.

warn

A problem appeared during test execution which does not affect test results but might be worth checking and fixing. For example, the test cleanup phase failed. Automation must treat this as a failed test.

error

Undefined problem encountered during test execution. Human inspection is needed to investigate whether it was a test bug, infrastructure error or a real test failure. Automation must treat it as a failed test.

fail

Test execution successfully finished and failed.

skip

Test was discovered but not executed. Can be used when a single process produces multiple results but not all tests were run.

The name and result keys are required. Also, the name, result, and event keys are required for entries under the check key. Custom result files may omit all other keys, although tmt plugins will strive to provide as many keys as possible.

When importing the custom results file, each test name referenced in the file by the name key will be prefixed by the original test name. A special case, name: /, sets the result for the original test itself.

The log key must list relative paths. Paths in the custom results file are treated as relative to ${TMT_TEST_DATA} path. Paths in the final results file, saved by the execute step, will be relative to the location of the results file itself.

The first log item is considered to be the “main” log, presented to the user by default.

The serial-number, guest and fmf_id keys, if present in the custom results file, will be overwritten by tmt during their import after the test completes. This happens on purpose, to ensure this vital information is correct.

Similarly, the duration, start-time and end-time keys, if present in the special custom result representing the original test itself (name: /), will be overwritten by tmt with the actual observed values. This also happens on purpose: while tmt cannot tell how long it took to produce various custom results, it is still able to report the duration of the whole test.

The same applies to context: tmt will set this key for the original test result in the custom result set to the value known to tmt.

See also the complete JSON schema.

For custom results files in JSON format, the same rules and schema apply.

Changed in version 1.30: fmf context is now saved within results, in the context key.

Examples:

# Example content of results.yaml
- name: /test/passing
  result: pass
  serial-number: 1
  log:
    - pass_log
  start-time: "2023-03-10T09:44:14.439120+00:00"
  end-time: "2023-03-10T09:44:24.242227+00:00"
  duration: 00:00:09
  note: good result
  ids:
    extra-nitrate: some-nitrate-id
  guest:
    name: default-0

- name: /test/failing
  result: fail
  serial-number: 2
  log:
    - fail_log
    - another_log
  start-time: "2023-03-10T09:44:14.439120+00:00"
  end-time: "2023-03-10T09:44:24.242227+00:00"
  duration: 00:00:09
  note: fail result
  guest:
    name: default-0
# Example content of custom results file
- name: /test/passing
  result: pass
  log:
    - pass_log
  duration: 00:11:22
  note: good result
  ids:
    extra-nitrate: some-nitrate-id

- name: /test/failing
  result: fail
  log:
    - fail_log
    - another_log
  duration: 00:22:33
  note: fail result
# Example of a perfectly valid, yet stingy custom results file
- name: /test/passing
  result: pass

- name: /test/failing
  result: fail
# Example of test check results
- name: /test/passing
  result: pass
  serial-number: 1
  log:
    - pass_log
  start-time: "2023-03-10T09:44:14.439120+00:00"
  end-time: "2023-03-10T09:44:24.242227+00:00"
  duration: 00:00:09
  note: good result
  ids:
    extra-nitrate: some-nitrate-id
  guest:
    name: default-0
  check:
    - name: abrt
      event: after-test
      result: pass
      log: []
      note:
    - name: kernel-panic
      event: after-test
      result: pass
      log: []
      note:
/* Example content of custom results.json */
[
  {
    "name": "/test/passing",
    "result": "pass",
    "log": ["pass_log"],
    "duration": "00:11:22",
    "note": "good result"
  }
]

Status: implemented and verified

Guest Topology Format

Define format of on-disk description of provisioned guest topology

The following text defines the structure of files tmt uses for exposing guest names, roles and other properties to tests and steps that run on a guest (prepare, execute, finish). tmt saves these files on every guest used, and exposes their paths to processes started by tmt on these guests through environment variables:

  • TMT_TOPOLOGY_YAML for a YAML file

  • TMT_TOPOLOGY_BASH for a shell-friendly NAME=VALUE file

Both files are always available, and both carry the same information.

Warning

The shell-friendly file contains arrays, therefore it’s compatible with Bash 4.x and newer.

Note

The shell-friendly file is easy to ingest for shell-based tests; it can simply be sourced. For parsing the YAML file in shell, pure shell parsers like https://github.com/sopos/yash can be used.

TMT_TOPOLOGY_YAML
# Guest on which the test or script is running.
guest:
    name: ...
    role: ...
    hostname: ...

# List of names of all provisioned guests.
guest-names:
  - guest1
  - guest2
  ...

# Same as `guest`, but one for each provisioned guest, with guest names
# as keys.
guests:
    guest1:
        name: guest1
        role: ...
        hostname: ...
    guest2:
        name: guest2
        role: ...
        hostname: ...
    ...

# List of all known roles.
role-names:
  - role1
  - role2
  ...

# Roles and their guests, with role names as keys.
roles:
    role1:
      - guest1
      - guest2
      - ...
    role2:
      - guestN
      ...
TMT_TOPOLOGY_BASH
# Guest on which the test is running.
declare -A TMT_GUEST
TMT_GUEST[name]="..."
TMT_GUEST[role]="..."
TMT_GUEST[hostname]="..."

# Space-separated list of names of all provisioned guests.
TMT_GUEST_NAMES="guest1 guest2 ..."

# Same as `guest`, but one for each provisioned guest. Keys are constructed
# from guest name and the property name.
declare -A TMT_GUESTS
TMT_GUESTS[guest1.name]="guest1"
TMT_GUESTS[guest1.role]="..."
TMT_GUESTS[guest1.hostname]="..."
TMT_GUESTS[guest2.name]="guest2"
TMT_GUESTS[guest2.role]="..."
TMT_GUESTS[guest2.hostname]="..."
...

# Space-separated list of all known roles.
TMT_ROLE_NAMES="client server"

# Roles and their guests, with role names as keys.
declare -A TMT_ROLES
TMT_ROLES[role1]="guest1 guest2 ..."
TMT_ROLES[role2]="guestN ..."
...

Examples:

# A trivial pseudo-test script
. "$TMT_TOPOLOGY_BASH"

echo "I'm running on ${TMT_GUEST[name]}"

Status: implemented and verified