The primary function of the LayoutTests is as a regression test suite; this means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.
All layout tests have “expected results”, or “baselines”, which may be one of several forms. The test may produce one or more of:

* a text dump of the page,
* an image rendering of the page, and/or
* audio output.

For each of these output types, there are files checked into the LayoutTests directory named `-expected.{txt,png,wav}`. Lastly, we also support the concept of “reference tests”, which check that two pages are rendered identically (pixel-by-pixel). As long as the two pages' output matches, the test passes. For more on reference tests, see Writing ref tests.
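For a concrete (hypothetical) sketch of the naming convention, a test at `LayoutTests/fast/html/details-open.html` could have baselines laid out roughly like this; which of these files exist depends on what the test produces, and a reference test uses the reference page instead of checked-in pixel results:

```
LayoutTests/fast/html/details-open.html            # the test itself
LayoutTests/fast/html/details-open-expected.txt    # text baseline
LayoutTests/fast/html/details-open-expected.png    # pixel baseline
LayoutTests/fast/html/details-open-expected.wav    # audio baseline
LayoutTests/fast/html/details-open-expected.html   # reference page, for a ref test
```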
When the output doesn't match, there are two potential reasons for it:

1. The rendering is actually correct, but differs from the checked-in baseline for benign, platform-specific reasons (e.g., font or form-control rendering differences).
2. The rendering is incorrect, i.e., there is a real bug.

In both cases, the convention is to check in a new baseline (aka rebaseline), even though that file may be codifying errors. This helps us maintain test coverage for all the other things the test is testing while we resolve the bug.
Bugs at crbug.com should track fixing incorrect behavior, not lines in TestExpectations. If a test is never supposed to pass (e.g. it's testing Windows-specific behavior, so can't ever pass on Linux/Mac), move it to the NeverFixTests file. That gets it out of the way of the rest of the project.
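As a sketch (the test name here is invented), a NeverFixTests entry uses the same syntax as TestExpectations, which is described later in this document:

```
[ Linux Mac ] fast/forms/windows-only-rendering.html [ WontFix ]
```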
There are some cases where you can't rebaseline and, unfortunately, we don't have a better solution than either:

1. reverting the patch that caused the failure, or
2. adding a failure expectation to TestExpectations and fixing the bug later.

In this case, reverting the patch is strongly preferred.
These are the cases where you can't rebaseline:
The flakiness dashboard is a tool for understanding a test's behavior over time. Originally designed for managing flaky tests, it shows a timeline view of each test's recent results. The tool may be overwhelming at first, but the documentation should help. Once you decide that a test is truly flaky, you can suppress it using the TestExpectations file, as described below.
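As a preview of the TestExpectations syntax described below, a suppression for a flaky test might look like this (the bug number and test name are invented); listing both `Pass` and `Failure` tells the harness that either result is expected:

```
crbug.com/123456 [ Win7 Debug ] fast/dom/flaky-test.html [ Pass Failure ]
```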
We do not generally expect Chromium sheriffs to spend time trying to address flakiness, though.
Since baselines themselves are often platform-specific, updating baselines in general requires fetching new test results after running the test on multiple platforms.
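For instance, reusing the `fast/html/keygen.html` test that appears as an example later in this document, platform-specific baselines typically live under `LayoutTests/platform/` and override the generic baseline, roughly like this:

```
LayoutTests/fast/html/keygen-expected.txt                # generic baseline
LayoutTests/platform/mac/fast/html/keygen-expected.txt   # Mac-specific baseline
LayoutTests/platform/win/fast/html/keygen-expected.txt   # Windows-specific baseline
```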
The recommended way to rebaseline for a currently-in-progress CL is to use results from try jobs. To do this:

1. Upload your CL and trigger try jobs, e.g. with `git cl try`.
2. Once the try jobs have finished, run `third_party/WebKit/Tools/Scripts/webkit-patch rebaseline-cl` to fetch new baselines.

This way, the new baselines can be reviewed along with the changes, which helps the reviewer verify that the new baselines are correct. It also means that there is no period of time when the layout test results are ignored.
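Putting the steps above together, a typical session might look roughly like this (builder selection and other flags are omitted; adapt to your own workflow):

```bash
git cl upload        # upload the in-progress CL
git cl try           # trigger try jobs for the CL
# ...wait for the try jobs to finish...
third_party/WebKit/Tools/Scripts/webkit-patch rebaseline-cl
git cl upload        # upload again so the new baselines are part of the CL
```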
Which tests `webkit-patch rebaseline-cl` tries to download new baselines for depends on its arguments. If you pass `--only-changed-tests`, then only tests modified in the CL will be considered.

If the test is not already listed in TestExpectations, you can mark it as `[ NeedsRebaseline ]`. The rebaseline-o-matic bot will automatically detect when the bots have cycled (by looking at the blame on the file) and do the rebaseline for you. As long as the test doesn't time out or crash, it won't turn the bots red if it has a `NeedsRebaseline` expectation. When all of the continuous builders on the waterfall have cycled, the rebaseline-o-matic bot will commit a patch which includes the new baselines and removes the `[ NeedsRebaseline ]` entry from TestExpectations.
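For example, a `NeedsRebaseline` entry might look like this (the bug number and test name are invented):

```
crbug.com/123456 fast/dom/some-test-i-changed.html [ NeedsRebaseline ]
```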
If the test is already listed in TestExpectations as flaky, mark it as `[ NeedsManualRebaseline ]` and comment out the flaky line so that your patch can land without turning the tree red. If the test is not in TestExpectations, you can add a `[ Rebaseline ]` line to TestExpectations and run `third_party/WebKit/Tools/Scripts/webkit-patch rebaseline-expectations`, which downloads the new baselines and removes the `NeedsRebaseline`/`NeedsManualRebaseline` lines.

It is possible to handle tests that only fail when run with a particular flag being passed to `content_shell`. See LayoutTests/FlagExpectations/README.txt for more.
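As a rough sketch (the flag name, bug number, and test name are all illustrative; the README above is authoritative), flag-specific expectations are TestExpectations-style lines kept in a file under LayoutTests/FlagExpectations/ named after the flag:

```
# LayoutTests/FlagExpectations/enable-some-feature
crbug.com/123456 fast/dom/some-test.html [ Failure ]
```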
The TestExpectations file is not ordered. Putting new changes somewhere in the middle of the file, rather than appending them at the end, reduces the chance of merge conflicts when landing your patch.
The syntax of the file is roughly one expectation per line. An expectation can apply to either a directory of tests or a specific test. Lines prefixed with `#` are treated as comments, and blank lines are allowed as well.
The syntax of a line is roughly:

```
[ bugs ] [ "[" modifiers "]" ] test_name [ "[" expectations "]" ]
```
* `bugs` may be one or more of `crbug.com/12345`, `code.google.com/p/v8/issues/detail?id=12345`, or `Bug(username)`.
* `modifiers` may be one or more of `Mac`, `Mac10.9`, `Mac10.10`, `Mac10.11`, `Retina`, `Win`, `Win7`, `Win10`, `Linux`, `Linux32`, `Precise`, `Trusty`, `Android`, `Release`, `Debug`. Some modifiers are meta keywords: `Win` represents both `Win7` and `Win10`. See the `CONFIGURATION_SPECIFIER_MACROS` dictionary in third_party/WebKit/Tools/Scripts/webkitpy/layout_tests/port/base.py for the meta keywords and which modifiers they represent.
* `expectations` may be one or more of `Crash`, `Failure`, `Pass`, `Rebaseline`, `Slow`, `Skip`, `Timeout`, `WontFix`, `Missing`, `NeedsRebaseline`, `NeedsManualRebaseline`. If multiple expectations are listed, the test is considered “flaky” and any of those results will be considered as expected.

For example:
```
crbug.com/12345 [ Win Debug ] fast/html/keygen.html [ Crash ]
```
which indicates that the “fast/html/keygen.html” test file is expected to crash when run in the Debug configuration on Windows, and the tracking bug for this crash is bug #12345 in the Chromium issue tracker. Note that the test will still be run, so that we can notice if it doesn't actually crash.
Assuming you're running a debug build on Mac 10.9, the following lines are all equivalent (in terms of whether the test is performed and its expected outcome):
```
fast/html/keygen.html [ Skip ]
fast/html/keygen.html [ WontFix ]
Bug(darin) [ Mac10.9 Debug ] fast/html/keygen.html [ Skip ]
```
`WontFix` implies `Skip` and also indicates that we don't have any plans to make the test pass. `WontFix` lines always go in the [NeverFixTests file](../../third_party/WebKit/LayoutTests/NeverFixTests) as we never intend to fix them. These are just for tests that only apply to some subset of the platforms we support. `WontFix` and `Skip` must be used by themselves and cannot be specified alongside `Crash` or another expectation keyword.

`Slow` causes the test runner to give the test 5x the usual time limit to run. `Slow` lines go in the SlowTests file. A given line cannot have both `Slow` and `Timeout`.

Also, when parsing the file, we use two rules to figure out if an expectation line applies to the current run:

1. If the configuration parameters (modifiers) don't match the configuration of the current run, the line is ignored.
2. A line that matches more of a test's path (e.g. the exact test name) overrides a line that matches less of it (e.g. only its directory).
For example, if you had the following lines in your file, and you were running a debug build on `Mac10.10`:
```
crbug.com/12345 [ Mac10.10 ] fast/html [ Failure ]
crbug.com/12345 [ Mac10.10 ] fast/html/keygen.html [ Pass ]
crbug.com/12345 [ Win7 ] fast/forms/submit.html [ Failure ]
crbug.com/12345 fast/html/section-element.html [ Failure Crash ]
```
You would expect:

* `fast/html/article-element.html` to fail with a text diff (since it is in the fast/html directory).
* `fast/html/keygen.html` to pass (since there is an exact match on the test name).
* `fast/forms/submit.html` to pass (since the configuration parameters don't match).
* `fast/html/section-element.html` to either crash or produce a text (or image and text) failure, but not time out or pass.

You can verify that any changes you've made to an expectations file are correct by running:
```
third_party/WebKit/Tools/Scripts/lint-test-expectations
```
which will cycle through all of the possible combinations of configurations looking for problems.