This script is intended to identify common test file formatting errors
prior to their acceptance into the project. It is designed to support
future extensions for additional validation rules.
In order to promote readability of the generated test material, the test
generation tool may insert whitespace where the context of a given
expanded variable calls for it. Avoid inserting such whitespace within
literal values that span multiple lines.
Since the argument is required, we mark it as such. This approach gives
the user a much clearer error message than the generic "not enough args"
message.
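As a rough illustration of the idea, assuming the script parses its
arguments with Python's argparse (the option name below is hypothetical):

    import argparse

    parser = argparse.ArgumentParser(
        description='Check test files for common formatting errors.')
    # Declaring the option as required lets argparse itself report
    # "argument --path is required" when it is omitted, rather than a
    # vague "not enough args"-style complaint surfacing later on.
    parser.add_argument('--path', required=True,
                        help='test file or directory to validate')
    args = parser.parse_args()
    print(args.path)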
When inspecting previously-generated files, a new `Test` instance should
be used. This avoids overwriting the in-memory representation of the
latest test, and allows previously-existing test files to be partially
updated according to subsequent changes in their respective source/case
files.
Because the test generation tool expects "case directories" to contain
a sub-directory named "default", it cannot generate tests for features
where a directory named "default" is not appropriate.
Modify the heuristic that identifies "case directories" to use a more
fundamental aspect (i.e. the existence of at least one "case" file).
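A sketch of the revised heuristic, assuming "case" files are
distinguished by a `.case` extension (the extension and the helper name
are assumptions made here for illustration):

    import os

    def is_case_directory(path):
        # A directory qualifies as a "case directory" if it contains at
        # least one "case" file, regardless of whether a sub-directory
        # named "default" is present.
        return any(name.endswith('.case') for name in os.listdir(path))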
For asynchronous tests, the contract between test file and test runner
is implicit: runners are expected to inspect the source code for
references to a global `$DONE` identifier.
Promote a more explicit contract between test file and test runner by
introducing a new frontmatter "tag", `async`. This brings asynchronous
test configuration in line with other configuration mechanisms and also
provides a more natural means of test filtering.
The modifications to the test files were made programmatically using
the `grep` and `sed` utilities:
    $ grep "\$DONE" test/ -r --files-with-match --null | \
        xargs -0 sed -i 's/^\(flags:\s*\)\[/\1[async, /g'
    $ grep "\$DONE" test/ -rl --null | \
        xargs -0 grep -E '^flags:' --files-without-match --null | \
        xargs -0 sed -i 's/^---\*\//flags: [async]\n---*\//'
When executing multiple tests in parallel, each "child" thread would
write to the process's standard output buffer immediately upon test
completion. Because thread execution order and instruction interleaving
are non-deterministic, this made it possible for characters to be
emitted out of order.
When extended to support multiple concurrent threads, the runner was
outfitted with a "log lock" dedicated to synchronizing access to the
output file (when applicable). Reuse this lock when writing to standard
out, ensuring proper ordering of test result messages.
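A minimal sketch of the approach with Python's threading module (the
names here are illustrative rather than the runner's actual API):

    import sys
    import threading

    log_lock = threading.Lock()

    def report(message):
        # Serialize writes so that results from concurrently-finishing
        # tests cannot interleave character-by-character on stdout.
        with log_lock:
            sys.stdout.write(message + '\n')
            sys.stdout.flush()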
A recent extension to the test runner introduced support for running
tests in parallel using multi-threading. Following this, the runner
would incorrectly emit the "final report" before all individual test
results.
In order to emit the "final report" at the end of the output stream, the
parent thread would initialize all children and wait for availability of
a "log lock" shared by all children.
According to the documentation on the "threading" module's Lock object
[1]:
> When more than one thread is blocked in acquire() waiting for the state
> to turn to unlocked, only one thread proceeds when a release() call
> resets the state to unlocked; which one of the waiting threads proceeds
> is not defined, and may vary across implementations.
This means the primitive cannot be used by the parent thread to reliably
detect completion of all child threads.
Update the parent to maintain a reference for each child thread, and to
explicitly wait for every child thread to complete before emitting the
final result.
[1] https://docs.python.org/2/library/threading.html#lock-objects
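A sketch of the revised control flow, using Thread.join() on every
child as the completion signal (spawning one thread per test is a
simplification made purely for illustration):

    import threading

    def run_all(tests, run_one):
        # Keep a reference to every child thread...
        children = [threading.Thread(target=run_one, args=(test,))
                    for test in tests]
        for child in children:
            child.start()
        # ...and join each one. join() reliably signals completion,
        # whereas acquiring a lock released by a child does not, since
        # which blocked waiter proceeds after release() is undefined.
        for child in children:
            child.join()
        print('final report')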
Adds a `-j`/`--workers-count` parameter to `tools/packaging/test262.py`, defaulting to `[number of cores] - 1`.
Speeds up running the test suite by roughly 3x on my 4-core machine, with the SpiderMonkey shell. This could certainly be optimized further by appending test results to per-thread lists and merging them at the end, but it's better than nothing.
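A sketch of how such a default can be computed with the standard
library (the wiring below is illustrative, not necessarily how
test262.py implements it):

    import argparse
    import multiprocessing

    parser = argparse.ArgumentParser()
    parser.add_argument('-j', '--workers-count', type=int,
                        # Leave one core free; the max() guard is an
                        # addition here to stay sane on single-core hosts.
                        default=max(1, multiprocessing.cpu_count() - 1),
                        help='number of tests to run in parallel')
    args = parser.parse_args()
    print(args.workers_count)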
Some tests involving the directive prologue are invalidated by source
text transformations that insert executable code at the beginning of
the script. Implement a `raw` flag that allows these tests to opt out
of this transformation. Update the relevant tests to use this flag (and
remove references to globals that are only available when code is
injected).
Update the Python runner accordingly:
- Do not run tests marked as "raw" in strict mode
- Reject invalid test configurations (a sketch of this handling appears
  after the lists below)
Update the browser runner accordingly:
- Do not modify the script body of tests marked as "raw"
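A sketch of the Python-runner side of this contract; the specific
validation shown (rejecting "raw" in combination with strict-only
execution or includes) illustrates the kind of configuration check
intended rather than stating the exact rules:

    def modes_for(test):
        """Return the strict-mode variants in which a test should run."""
        flags = test.get('flags', [])
        if 'raw' in flags:
            # "raw" tests must receive their source text untouched, so
            # no "use strict" directive or harness prelude may be
            # prepended to them.
            if 'onlyStrict' in flags or test.get('includes'):
                raise ValueError('invalid configuration for a "raw" test')
            return ['non-strict']
        if 'onlyStrict' in flags:
            return ['strict']
        if 'noStrict' in flags:
            return ['non-strict']
        return ['strict', 'non-strict']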
The "monkeyYaml" parser is intended to serve as a lightweight fallback
to Python's standard YAML parser in contexts where the latter is not
available. An intentionally-simplified implementation will necessarily
exhibit non-standard behavior for some input, so not all input accepted
by the standard parser will be accepted by "monkeyYaml". If
loaded exclusively in fallback situations, these edge cases can only be
identified (and debugged) in the environments that require the fallback.
This has allowed developers to unknowingly author tests that cause
errors.
Update the test runner to use "monkeyYaml" in all cases, ensuring more
consistent behavior across contexts and precluding this class of
regression.
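In code terms the change amounts to dropping the conditional fallback
so that every environment exercises the same parser; a sketch, assuming
the bundled module exposes a PyYAML-like load() function:

    # Before: PyYAML when available, monkeyYaml otherwise -- so the
    # simplified parser's quirks surfaced only where PyYAML was missing.
    #
    #     try:
    #         import yaml
    #         load = yaml.safe_load
    #     except ImportError:
    #         import monkeyYaml
    #         load = monkeyYaml.load
    #
    # After: the simplified parser is used everywhere, so frontmatter it
    # cannot handle fails for every contributor immediately.
    import monkeyYaml
    load = monkeyYaml.load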
Some JavaScript source files are only relevant in the context of the
Test262 website. They should not be explicitly included by individual
tests, so their presence in the `harness/` directory alongside "include"
files is misleading.
Move the scripts to a location within the `website/` directory to
better reflect their intended use. Update the relevant HTML templates
with the new locations.
Although test files once expressed dependencies on external files using
a global `$INCLUDE` function, that pattern was removed in favor of
declarative meta-data [1].
Remove the associated logic from the Python runner and the browser
runner.
[1] See commit d4354d14d5.
Since the Python runner was updated to include `assert.js` in all tests
unconditionally, a number of tests have been written that implicitly
rely on its presence. The browser runner does not currently provide this
file's contents to these tests, so they invariably fail in that
context.
Update the browser runner to inject that file's contents into every test
context.
Note: the existing approach to file retrieval (namely loading via
synchronous XHR requests) is inefficient and deprecated in some
browsers. It is honored here for the sake of consistency and to minimize
the changeset necessary to fix the browser runner.