Standard Test Interface

The Standard Test Interface (STI) defines the interactions between a test suite and a testing system (Taskotron, in our case). It describes the format in which a test suite is written (based on Ansible playbooks), the input arguments made available to it, and the outputs the test suite is required to produce.

The STI originated in the Fedora distribution. You can read the specification at:

The test authors' guide specific to the Fedora implementation is available at:

Taskotron implementation

Taskotron is geared towards executing generic tasks: tasks that run the same test suite on multiple (or all) “subjects” (e.g. packages, modules, repositories, containers, or images). The STI specification does not have this use case in mind; its main goal is to execute a specific test suite tied to a specific subject (e.g. an ssh test suite for the ssh package). Because of this, the STI specification alone is not sufficient for describing a generic task, and a Taskotron-specific extension of the STI is needed. We stay as close to the STI specification as possible and maintain full compatibility, extending only the parts where the specification is lacking.

Taskotron also includes support for plain STI tests (i.e. specific, not generic, tasks). However, this support is experimental and not maintained at the moment. Other systems handle this area, at least in Fedora. See CI.

Taskotron-flavored Standard Test Interface

This is a list of areas where the Taskotron task format differs from the STI (the specification is either extended or modified):

  • In order for a task to be recognized as a generic Taskotron task, it must define the taskotron_generic_task: true variable in the first play of each tests*.yml playbook.
  • Generic tasks do not receive the subjects variable defined by the STI; they receive the taskotron_item variable instead.
  • The item/subject is not downloaded and installed automatically, as the STI would require.
  • The playbook’s exit code is not used to generate a PASS/FAIL result automatically (nor, in our case, to submit the result to ResultsDB). Instead, the task must create the {{artifacts}}/taskotron/results.yml file in the Task Result Format on its own.
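
To illustrate, the points above could be combined into a playbook along the following lines. This is a hedged sketch, not an official example: the rpmlint invocation is purely illustrative, and the result fields written to results.yml (item, outcome) are assumptions about the Task Result Format, which is documented separately.

```yaml
# Sketch of a generic Taskotron task (tests.yml). Hypothetical example;
# the results.yml field names below are assumptions.
- hosts: localhost
  vars:
    # Marks this play as a generic Taskotron task (see the first point above).
    taskotron_generic_task: true
  tasks:
    # taskotron_item replaces the STI "subjects" variable. The item is NOT
    # downloaded or installed automatically, so fetch it here if needed.
    - name: Run an illustrative check against the item
      command: "rpmlint {{ taskotron_item }}"
      register: check
      ignore_errors: true

    # The playbook's exit code is ignored, so the task writes its own
    # results file in the Task Result Format.
    - name: Write results.yml
      copy:
        dest: "{{ artifacts }}/taskotron/results.yml"
        content: |
          results:
            - item: "{{ taskotron_item }}"
              outcome: "{{ 'PASSED' if check.rc == 0 else 'FAILED' }}"
```
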

Read Writing Taskotron Tasks to see some examples of the format described here.