The files in this directory control how tests are run on the Chromium buildbots. In addition to specifying what tests run on which builders, they also specify special arguments and constraints for the tests.
## Adding a new test suite?
The bar for adding new test suites is high. New test suites result in extra linking time for builders, and in sending binaries around to the swarming bots. This is especially onerous for suites such as `browser_tests` (more than 300MB as of this writing). Unless there is a compelling reason to have a standalone suite, include your tests in existing test suites. For example, all `InProcessBrowserTest`s should be in `browser_tests`. Similarly, any unit tests in `components` should be in `components_unittests`.
Logic in the Chromium recipe looks up each builder for each master, and test generators in `chromium_tests/steps.py` parse the data. For example, as of a6e11220, `generate_gtest` parses any entry in a builder's `'gtest_tests'` entry.
All of the JSON files in this directory are autogenerated. The "how to use" section below describes the main tool, `generate_buildbot_json.py`, which manages most of the waterfalls. It's no longer possible to hand-edit the JSON files; presubmit checks forbid doing so.
Note that trybots mirror regular waterfall bots, with the mapping defined in `trybots.py`. This means that, as of 81fcc4bc, if you want to edit `linux_android_rel_ng`, you actually need to edit `Android Tests`.
You should be able to test changes that affect the trybots directly (for example, adding a test to `linux_android_rel_ng` should show up immediately in your tryjob). Non-trybot changes have to be landed manually :(.
When adding tests or bumping timeouts, care must be taken to ensure the infrastructure has capacity to handle the extra load. This is especially true for the established Chromium CQ builders, as they operate under strict execution requirements. Make sure to get a resource owner or a member of Chrome Browser Core EngProd to sign off that there is both builder and swarmed test shard capacity available.
In particular, pay attention to the capacity of the builder which compiles and then triggers and collects swarming task shards. If you're adding a new test suite to a bot, and know that the test suite adds one hour of testing time to the swarming shards, and know that you have enough swarmed capacity to handle that one hour of testing, that's a good start. But if that test also happens to run in shards which take 10 minutes longer than any other shards on that current bot, that means that the top-level builder will also take 10 minutes longer to run -- or 20 minutes longer if there are failures and retries. Ensure that the builder pool has enough capacity to handle that increase as well.
## test_suites.pyl

The `test_suites.pyl` file describes groups of tests that run on bots -- both waterfalls and trybots. In order to specify that a test like `base_unittests` runs on a bot, it must be put inside a test suite. This organization helps enforce sharing of test suites among multiple bots.

An example of a simple test suite:

```
'basic_chromium_gtests': {
  'base_unittests': {},
},
```

If a bot in `waterfalls.pyl` refers to the test suite `basic_chromium_gtests`, then that bot will run `base_unittests`.
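To make the reference concrete, a hypothetical `waterfalls.pyl` entry for such a bot might look like this (the waterfall and bot names are made up for illustration):

```
{
  'name': 'chromium.example',
  'machines': {
    'Example Linux Tester': {
      'test_suites': {
        'gtest_tests': 'basic_chromium_gtests',
      },
    },
  },
},
```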
The test's name is usually both the build target as well as how the test appears in the steps that the bot runs. However, this can be overridden using dictionary arguments like `test` and `isolate_name`; see below.
The dictionary following the test's name can contain multiple entries that affect how the test runs. Generally speaking, these are copied verbatim into the generated JSON file. Commonly used arguments include:
* `args`: an array of command line arguments for the test.
* `swarming`: a dictionary of Swarming parameters. Note that these will be applied to every bot that refers to this test suite. It is often more useful to specify the Swarming dimensions at the bot level, in `waterfalls.pyl`. More on this below.
* `can_use_on_swarming_builders`: if set to False, disables running this test on Swarming on any bot.
* `idempotent`: if set to False, prevents Swarming from deduplicating the task against a previous, identical run and reusing its results. See task deduplication for more info.
* `experiment_percentage`: an integer indicating that the test should be run as an experiment in the given percentage of builds. Tests running as experiments will not cause the containing builds to fail. Values should be in `[0, 100]` and will be clamped accordingly.
* `android_swarming`: Swarming parameters to be applied only on Android bots. (This feature was added mainly to match the original handwritten JSON files, and further use is discouraged. Ideally it should be removed.)
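As an illustration, a per-test `swarming` dictionary might look like the following (the shard count and dimension values are hypothetical):

```
'base_unittests': {
  'swarming': {
    'shards': 2,
    'dimension_sets': [
      {
        'os': 'Ubuntu-16.04',
      },
    ],
  },
},
```

Remember that these parameters apply on every bot referring to the suite, so bot-level dimensions in `waterfalls.pyl` are usually preferable.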
Arguments specific to GTest-based tests:

* `test`: the target to build and run, if different from the test's name. This allows the same test to be run multiple times on the same bot with different command line arguments or Swarming dimensions, for example.

Arguments specific to isolated script tests:

* `isolate_name`: the target to build and run, if different from the test's name.

There are other arguments specific to other test types (script tests, JUnit tests); consult the generator script and `test_suites.pyl` for more details and examples.
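For example, the `test` key allows one binary to be run twice under different names with different flags; the suite entries and the flag below are hypothetical:

```
'foo_unittests': {},
'foo_unittests_single_process': {
  'args': [
    '--single-process-tests',
  ],
  'test': 'foo_unittests',
},
```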
One level of grouping of test suites is composition test suites. A composition test suite is an array whose contents must all be names of individual test suites. Composition test suites may not refer to other composition or matrix compound test suites. This restriction is by design. First, adding multiple levels of indirection would make it more difficult to figure out which bots run which tests. Second, having only one minimal grouping construct motivates authors to simplify the configurations of tests on the bots and reduce the number of test suites.
An example of a composition test suite:
```
'common_gtests': {
  'base_unittests': {},
},

'linux_specific_gtests': {
  'x11_unittests': {},
},

# Composition test suite
'linux_gtests': [
  'common_gtests',
  'linux_specific_gtests',
],
```
A bot referring to `linux_gtests` will run both `base_unittests` and `x11_unittests`.
Another level of grouping of basic test suites is the matrix compound test suite. A matrix compound test suite is a dictionary mapping references to basic test suites (keys) to configurations (values). Matrix compound test suites have the same restrictions as composition test suites, in that they cannot reference other composition or matrix test suites. Configurations defined for a basic test suite in a matrix test suite are applied to each test in the referenced basic test suite. `variants` is the only key supported in matrix compound suites at this time.

`variants` is a top-level group introduced into matrix compound suites to allow targeting a test against multiple variants. Each variant supports `args`, `mixins` and `swarming` definitions. When variants are defined, `args`, `mixins` and `swarming` aren't specified at the same level.

`args`, `mixins`, and `swarming` configurations that are defined by both the test suite and a variant are merged together. `args` and `mixins` are lists, and thus are appended together. Swarming configurations follow the same merge process: dimension sets are merged via the existing dictionary merge behavior, and other keys are appended.

`identifier` is a required key for each variant. The identifier is used to make the test name unique. Each test generated from the resulting .json file is identified uniquely by name, so the identifier is appended to the test name in the format `<test_name>_<identifier>`.
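The appending of `args` and of the identifier can be sketched in Python (a simplified illustration of the behavior described above, not the generator's actual code):

```python
def apply_variant(test_name, test_spec, variant):
    """Returns a simplified merged spec for one variant of a test."""
    merged = dict(test_spec)
    # List-valued keys such as 'args' are appended together.
    merged['args'] = test_spec.get('args', []) + variant.get('args', [])
    # The required 'identifier' makes the generated test name unique.
    merged['name'] = '%s_%s' % (test_name, variant['identifier'])
    return merged

spec = apply_variant(
    'basic_unittests',
    {'args': ['--some-arg']},
    {'identifier': 'iPhone_X_13.3',
     'args': ['--platform', 'iPhone X', '--version', '13.3']})
print(spec['name'])  # basic_unittests_iPhone_X_13.3
```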
For example, iOS requires running a test suite against multiple devices. If we have the following basic test suite:
```
'ios_eg2_tests': {
  'basic_unittests': {
    'args': [
      '--some-arg',
    ],
  },
},
```
and a matrix compound suite with this variants definition:
```
'matrix_compound_test': {
  'ios_eg2_tests': {
    'variants': [
      {
        'args': [
          '--platform',
          'iPhone X',
          '--version',
          '13.3',
        ],
        'identifier': 'iPhone_X_13.3',
      },
      {
        'identifier': 'device_iPhone_X_13.3',
        'swarming': {
          'dimension_sets': [
            {
              'os': 'iOS-iPhone10,3',
            },
          ],
        },
      },
    ],
  },
},
```
we can expect the following output:
```
{
  'args': [
    '--some-arg',
    '--platform',
    'iPhone X',
    '--version',
    '13.3',
  ],
  'merge': {
    'args': [],
    'script': 'some/merge/script.py',
  },
  'name': 'basic_unittests_iPhone_X_13.3',
  'test': 'basic_unittests',
},
{
  'args': [
    '--some-arg',
  ],
  'merge': {
    'args': [],
    'script': 'some/merge/script.py',
  },
  'name': 'basic_unittests_device_iPhone_X_13.3',
  'swarming': {
    'dimension_sets': [
      {
        'os': 'iOS-iPhone10,3',
      },
    ],
  },
  'test': 'basic_unittests',
},
```
Due to limitations of the merging algorithm, merging dimension sets fails when more dimension sets are defined in the matrix test suite than in the basic test suite. On failure, the user is notified of an error merging the list key `dimension_sets`.
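A simplified sketch of this merge behavior (an assumption about the mechanics, not the generator's actual code): dimension sets are paired up by index, so the matrix suite may not define more of them than the basic suite does.

```python
def merge_dimension_sets(basic_sets, matrix_sets):
    # Pair dimension sets by index; each matrix entry is merged into
    # the corresponding basic entry via a dictionary update.
    if len(matrix_sets) > len(basic_sets):
        raise ValueError('error merging list key "dimension_sets"')
    merged = [dict(s) for s in basic_sets]
    for i, extra in enumerate(matrix_sets):
        merged[i].update(extra)
    return merged

print(merge_dimension_sets([{'pool': 'Chrome'}],
                           [{'os': 'iOS-iPhone10,3'}]))
# [{'pool': 'Chrome', 'os': 'iOS-iPhone10,3'}]
```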
## waterfalls.pyl

`waterfalls.pyl` describes the waterfalls, the bots on those waterfalls, and the test suites which those bots run.
A bot can specify a `swarming` dictionary including `dimension_sets`. These parameters are applied to all tests that are run on this bot. Since most bots run their tests on Swarming, this is one of the mechanisms that dramatically reduces redundancy compared to maintaining the JSON files by hand.
A waterfall is a dictionary containing the following:

* `name`: the waterfall's name, for example `'chromium.win'`.
* `machines`: a dictionary mapping machine names to dictionaries containing bot descriptions.

Each bot's description is a dictionary containing the following:

* `additional_compile_targets`: if specified, an array of compile targets to build in addition to those for all of the tests that will run on this bot.
* `test_suites`: a dictionary optionally containing any of these kinds of tests. The value is a string referring either to a basic or composition test suite from `test_suites.pyl`.
  * `gtest_tests`: GTest-based tests (or other kinds of tests that emulate the GTest-based API), which can be run either locally or under Swarming.
  * `isolated_scripts`: isolated script tests. These are bundled into an isolate, invoke a wrapper script from `src/testing/scripts` as their top-level entry point, and are used to adapt to multiple kinds of test harnesses. These must implement the Test Executable API and can also be run either locally or under Swarming.
  * `junit_tests`: (Android-specific) JUnit tests. These are not run under Swarming.
  * `scripts`: legacy script tests living in `src/testing/scripts`. These also are not (and usually cannot be) run under Swarming. These types of tests are strongly discouraged.
* `swarming`: a dictionary specifying Swarming parameters to be applied to all tests that run on the bot.
* `os_type`: the type of OS this bot tests. The only useful value currently is `'android'`, which enables outputting of certain Android-specific entries into the JSON files.
* `skip_cipd_packages`: (Android-specific) when True, disables emission of the `'cipd_packages'` Swarming dictionary entry. Not commonly used; further use is discouraged.
* `skip_merge_script`: (Android-specific) when True, disables emission of the `'merge'` script key. Not commonly used; further use is discouraged.
* `skip_output_links`: (Android-specific) when True, disables emission of the `'output_links'` Swarming dictionary entry. Not commonly used; further use is discouraged.
* `use_swarming`: can be set to False to disable Swarming on a bot.
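Putting these keys together, a hypothetical waterfall entry might look like the following (all names and dimension values are made up for illustration; the bot-level `swarming` dimensions apply to every test the bot runs unless overridden):

```
{
  'name': 'chromium.example',
  'machines': {
    'Example Android Tester': {
      'os_type': 'android',
      'additional_compile_targets': [
        'example_apk',
      ],
      'swarming': {
        'dimension_sets': [
          {
            'device_type': 'bullhead',
          },
        ],
      },
      'test_suites': {
        'gtest_tests': 'basic_chromium_gtests',
      },
    },
  },
},
```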
## test_suite_exceptions.pyl

`test_suite_exceptions.pyl` contains specific exceptions to the general rules, described in `test_suites.pyl` and `waterfalls.pyl`, about which tests run on which bots.
In general, the design should be to have no exceptions. Roughly speaking, all bots should be treated identically, and ideally, the same set of tests should run on each. In practice, of course, this is not possible.
The test suite exceptions can only be used to remove tests from a bot, modify how a test is run on a bot, or remove keys from a test's specification on a bot. The exceptions can not be used to add a test to a bot. This restriction is by design, and helps prevent taking shortcuts when designing test suites which would make the test descriptions unmaintainable. (The number of exceptions needed to describe Chromium's waterfalls in their previous hand-maintained state has already gotten out of hand, and a concerted effort should be made to eliminate them wherever possible.)
The exceptions file supports the following options per test:
* `remove_from`: a list of bot names on which this test should not run. Currently, bots on different waterfalls that have the same name can be disambiguated by appending the waterfall's name: for example, `Nougat Phone Tester chromium.android`.
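A hypothetical `remove_from` entry (the test name and the first bot name are made up):

```
'foo_unittests': {
  'remove_from': [
    'Example Linux Tester',
    'Nougat Phone Tester chromium.android',
  ],
},
```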
* `modifications`: a dictionary mapping a bot's name to a dictionary of modifications that should be merged into the test's specification on that bot. This can be used to add additional command line arguments, Swarming parameters, etc.
* `replacements`: a dictionary mapping bot names to dictionaries of field names to dictionaries of key/value pairs to replace. If the given value is `None`, then the key will simply be removed. For example:
```
'foo_tests': {
  'Foo Tester': {
    'args': {
      '--some-flag': None,
      '--another-flag': 'some-value',
    },
  },
},
```
would remove `--some-flag` and replace whatever value `--another-flag` was set to with `some-value`. Note that passing `None` only works if the flag being removed either has no value or is in the `--key=value` format. It does not work if the key and value are two separate entries in the args list.
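The limitation can be illustrated with a simplified sketch of the replacement logic (an assumption about the behavior described above, not the generator's actual code): each args entry is matched as a whole, optionally up to an `=`.

```python
def apply_replacements(args, replacements):
    """Simplified sketch: drop or rewrite matching flags in an args list."""
    result = []
    for arg in args:
        key = arg.split('=', 1)[0]
        if key in replacements:
            value = replacements[key]
            if value is None:
                continue  # None removes the flag entirely
            result.append('%s=%s' % (key, value))
        else:
            # A value passed as a *separate* list entry never matches,
            # which is why None-removal does not work in that case.
            result.append(arg)
    return result

print(apply_replacements(['--some-flag', '--another-flag=old'],
                         {'--some-flag': None,
                          '--another-flag': 'some-value'}))
# ['--another-flag=some-value']
```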
A test's final JSON description is built up from the following, in order:

1. The dictionary specified in `test_suites.pyl`. This is used as the starting point for the test's description on all bots.
2. The specific bot's description in `waterfalls.pyl`. This dictionary is merged into the test's dictionary. For example, the bot's Swarming parameters will override those specified for the test.
3. Any exceptions specified per-bot in `test_suite_exceptions.pyl`. For example, any additional command line arguments will be merged in here. Any Swarming dictionary entries specified here will override both those specified in `test_suites.pyl` and `waterfalls.pyl`.
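The layering can be sketched as successive merges (a simplified illustration of the order above; the per-key merge details are assumptions, not the generator's actual code):

```python
def merged_test_spec(suite_spec, bot_spec, exception_spec):
    """Builds a test's final spec by layering the three sources in order."""
    spec = dict(suite_spec)
    for layer in (bot_spec, exception_spec):
        for key, value in layer.items():
            if key == 'args':
                # Command line arguments accumulate across layers.
                spec['args'] = spec.get('args', []) + value
            elif key == 'swarming':
                # Later Swarming entries override earlier ones key-by-key.
                merged = dict(spec.get('swarming', {}))
                merged.update(value)
                spec['swarming'] = merged
            else:
                spec[key] = value
    return spec

spec = merged_test_spec(
    {'args': ['--a'], 'swarming': {'shards': 1}},   # test_suites.pyl
    {'swarming': {'shards': 4}},                    # waterfalls.pyl
    {'args': ['--b']})                              # test_suite_exceptions.pyl
print(spec)
# {'args': ['--a', '--b'], 'swarming': {'shards': 4}}
```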
In general, the only specialization of test suites that should be necessary is per operating system. If you add a new test to the bots and find yourself adding lots of exceptions to exclude the test from bots all of one particular type (like Android, Chrome OS, etc.), here are options to consider:
* Look for a different test suite to add it to -- such as one that runs everywhere except on that OS type.
* Add a new test suite that runs on all of the OS types where your new test should run, and add that test suite to the composition test suites referenced by the appropriate bots.
* Split one of the existing test suites into two, and add the newly created test suite (including your new test) to all of the bots except those which should not run the new test.
If adding a new waterfall, or a new bot to a waterfall, please avoid adding new test suites. Instead, refer to one of the existing ones that is most similar to the new bot(s) you are adding. There should be no need to continue over-specializing the test suites.
If you see an opportunity to reduce redundancy or simplify test descriptions, please consider making a contribution to the generate_buildbot_json script or the data files. Some examples might include:
* Automatically doubling the number of shards on Debug bots, by describing to the tool which bots are debug bots. This could eliminate the need for a lot of exceptions.
* Specifying a single `hard_timeout` per bot, and eliminating all per-test timeouts from `test_suites.pyl` and `test_suite_exceptions.pyl`.
* Merging some test suites. When the generator tool was written, the handwritten JSON files were replicated essentially exactly. There are many opportunities to simplify the configuration of which tests run on which bots. For example, there's no reason why the top-of-tree Clang bots should run more tests than the bots on other waterfalls running the same OS.
`dpranke`, `jbudorick` or `kbr` will be glad to review any improvements you make to the tools. Thanks in advance for contributing!