Writing Taskotron Tasks

Because the task format is based on the Standard Test Interface (STI), you can write, run and debug your tasks locally quite easily, without dealing with Taskotron integration at all. Once your task works fine, you can start looking into extending it slightly with the Taskotron-flavored Standard Test Interface. Even in that case you can still easily run the task locally. In this document, we’ll show you how. You might also want to see Taskotron Quick Start, if you haven’t already.

Writing Standard Test Interface tests

First, read the Standard Test Interface specification to understand how STI tests work in detail. The core concepts to understand are:

  • The whole test suite is executed in the form of an ansible playbook (files named tests*.yml).
  • The playbook is given some specified input arguments.
  • The test suite is expected to create test artifacts in the {{artifacts}} directory, at minimum a test.log file.
  • The exit code of the playbook execution is expected to reflect the overall PASS/FAIL result of the test suite.
  • All necessary plumbing (like installing packages, running services or spawning VMs/containers) needs to be handled by the playbook.
  • The playbooks are executed as root.
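Put together, a minimal tests.yml might look like the following sketch (the package name and the test command are placeholders, not a real test):

```yaml
# tests.yml - minimal STI playbook sketch; runs as root on the test system
- hosts: localhost
  tags:
    - classic            # STI tag for tests run directly on the test system
  tasks:
    - name: install test dependencies
      package:
        name: gzip       # placeholder dependency
        state: present

    # A failing command fails the task, which fails the play, which makes
    # ansible-playbook exit non-zero - the PASS/FAIL signal STI expects.
    - name: run the test and log into the artifacts directory
      shell: gzip --version > {{ artifacts }}/test.log 2>&1
```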

This has the following consequences:

  • The test suite is very portable; it has almost no ties to the testing system.
  • You can easily execute the test suite locally; just provide the correct variables to the playbook.
  • You should never run the playbook on your production system, due to the required root privileges. Run it in a throwaway VM or a container.


You can see Fedora-specific instructions on how to write and run STI tests at: https://fedoraproject.org/wiki/CI/Tests
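A local run could then look roughly like this (a sketch only: the extra variables depend on what your tests.yml expects, and the artifacts path is illustrative):

```shell
# Run inside a throwaway VM or container, never on a production system.
mkdir -p /tmp/artifacts
sudo ansible-playbook --inventory localhost, --connection local \
    tests.yml -e artifacts=/tmp/artifacts
```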

Extending Standard Test Interface tests into Taskotron tasks

There are a few slight differences between vanilla STI tests and generic Taskotron tasks. Please read their description at Taskotron-flavored Standard Test Interface.

Since your task will be a generic task, you need to know which event your task will respond to, and what the “item” to operate on will look like. You can see the supported list in libtaskotron.main. These values are available as the taskotron_item and taskotron_item_type variables. There are many other variables to make use of, see the Special task variables section.


There are some real-world tasks that you can study. The simplest ones are listed first:

Saving task results in ResultYAML format

Once your task finishes execution, the results need to get reported to ResultsDB. To accomplish that, you need to generate a {{artifacts}}/taskotron/results.yml file in ResultYAML format. You can generate it by hand, or use either the standalone taskotron_result command or a Python library, both provided by the libtaskotron-core package.
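As an illustration, a single-result results.yml looks roughly like this (the item value is a placeholder, and the key set is sketched from the fields used below; verify it against the ResultYAML documentation):

```yaml
results:
  - item: foo-1.0-1.fc28      # placeholder NVR
    type: koji_build
    outcome: PASSED
    checkname: mytest
```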

Usage of taskotron_result is very simple. Run it like this to save a result to a file:

taskotron_result --file {{artifacts}}/taskotron/results.yml \
                 --item {{taskotron_item}} \
                 --report_type {{taskotron_item_type}} \
                 --outcome PASSED \
                 --checkname mytest

If you want to have multiple results in a single file, just run the command again on the same output file. See more info by running taskotron_result --help.

If you prefer using a Python library, see libtaskotron.check.CheckDetail class documentation and use libtaskotron.check.export_YAML() method to generate the ResultYAML file. For example:

import os

from libtaskotron import check

def run_mytask(koji_build, artifacts):
    print('Running mytask on %s' % koji_build)
    result = 'PASSED'
    detail = check.CheckDetail(item=koji_build,
                               report_type=check.ReportType.KOJI_BUILD,
                               outcome=result,
                               checkname='mytest')
    output = check.export_YAML(detail)
    results_path = os.path.join(artifacts, 'taskotron', 'results.yml')
    with open(results_path, 'w') as results_file:
        results_file.write(output)
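Since the ResultYAML file is plain text, you can also write a minimal single-result file without libtaskotron at all. The helper below is hypothetical (not part of any library), and the key names are an assumption based on the fields shown above; verify them against the ResultYAML documentation:

```python
import os

def write_resultyaml(artifacts, item, item_type, outcome, checkname):
    """Hypothetical helper: write a single-result ResultYAML file by hand.

    The key set (item, type, outcome, checkname) is assumed here; check the
    ResultYAML documentation for the canonical set of keys.
    """
    os.makedirs(os.path.join(artifacts, 'taskotron'), exist_ok=True)
    path = os.path.join(artifacts, 'taskotron', 'results.yml')
    with open(path, 'w') as f:
        f.write('results:\n')
        f.write('  - item: %s\n' % item)
        f.write('    type: %s\n' % item_type)
        f.write('    outcome: %s\n' % outcome)
        f.write('    checkname: %s\n' % checkname)
    return path
```

For multiple results, you would append further `- item: ...` entries under the same top-level `results:` key rather than writing a second file.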

Special task variables

Variables provided by Taskotron

In the ansible playbooks tests*.yml, you can make use of the following Taskotron-specific variables:

artifacts
    (string) As defined by STI, the directory path where to store artifacts to save after the execution is finished.
taskotron_item
    (string) The item/subject to be tested. The value depends on the taskotron_item_type value - for koji builds this will be an NVR, for a koji tag the tag name, for a bodhi update the update ID, etc. See more at libtaskotron.main.
taskotron_item_type
    (string) The type of taskotron_item. See the available types defined in libtaskotron.main.
taskotron_arch
    (string) The architecture of taskotron_item to be tested. Some generic tasks might want to ignore this (for example, if they’re capable of checking all architectures in a single run).
taskotron_supported_arches
    (list of strings) A list of all base architectures (i.e. x86_64, i386, armhfp) that are officially supported by this Taskotron deployment. If your task processes multiple architectures in a single run, you should honor this list to determine what to operate on.
taskotron_supported_binary_arches
    (list of strings) The same concept as taskotron_supported_arches, but listing all base arches plus their binary arches (i.e. instead of having just i386, this would list i386, i486, i586 and i686; instead of just armhfp this would list armhfp and armv7hl).
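For example, a task that wants to cover every supported base architecture in one run might loop over the list in a playbook task (the script name is a placeholder):

```yaml
- name: run the check once per supported base arch
  shell: ./run_check.sh {{ item }} >> {{ artifacts }}/test.log 2>&1
  with_items: "{{ taskotron_supported_arches }}"
```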

Variables your task can provide

When you set the following variables, you can modify how Taskotron handles your task. All the variables need to be defined in the first play of each tests*.yml playbook to which they should apply.

taskotron_generic_task
    (boolean) You need to set this one to True if your task should be handled as a Taskotron generic task. See Standard Test Interface.
taskotron_keepalive_minutes
    (integer) If a task doesn’t finish in a reasonable time (currently set to 20 minutes, but that might change at any time), it is killed. If your task needs more time, you can set this variable and Taskotron will wait for the specified number of minutes (the keepalive time) before starting the standard timeout counter.
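Assuming the variable names taskotron_generic_task and taskotron_keepalive_minutes (taken here from the Taskotron-flavored STI description), setting them in the first play would look roughly like:

```yaml
- hosts: localhost
  vars:
    taskotron_generic_task: true      # handle this playbook as a generic task
    taskotron_keepalive_minutes: 60   # wait 60 minutes before the standard timeout starts
  tasks:
    - name: the rest of the task goes here
      debug:
        msg: placeholder
```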