Tests/Acceptance Criteria

From Core Mobile Web Platform Community Group
Revision as of 04:17, 27 September 2012 by Jorabin


Please note that this material is now out of date: we are reviewing the testing approach (test runner and source of tests) at our face-to-face meeting on 2012-10-02 in London. A draft position paper has been prepared by Tobie.


A number of tests have been submitted to the Coremob CG for inclusion in our test suite. These tests need to be reviewed and approved, and it would naturally be most helpful if we can do this as a group, which in turn requires some ground rules.

The tests are available at: https://github.com/coremob/coremob-tests. All help is welcome!

Preliminary steps

The basic things you need to help in the reviewing process are:

  • A GitHub account. These are free and can be set up in a couple of minutes. We use GitHub rather extensively in this CG, so getting an account is a useful step in any case if you intend to contribute at any point.
  • You need to be added to the coremob-tests project; for this, simply ask Robin or Tobie.
  • Get a feel for how the test suite system works: https://github.com/coremob/coremob-tests/blob/master/readme.md.
  • You should quickly read through http://www.w3.org/2008/webapps/wiki/Testing, as it gives useful background on testing (even though we don't use exactly the same setup).

Since adding a test is done through a pull request, the list of tests waiting for inclusion can be found in the pull requests list: https://github.com/coremob/coremob-tests/pulls

Reporting issues with tests is done by commenting directly on the pull request for that test, without accepting it.

Criteria

The first criterion to apply is that all Level 2 tests can be rejected for now. Level 2 is too far on the horizon and too dependent on work being done in the meantime for these tests to be meaningful. The level can be found in the config.yml file that describes the test, as the (shockingly enough) "level" field.
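As an example of where to look, a test's metadata might resemble the following. This is a hypothetical sketch: apart from the "level" field mentioned above, the field names and values are illustrative assumptions, not copied from the actual repository.

```yaml
# Hypothetical config.yml for a submitted test.
# Only the "level" field is documented here; other fields are illustrative.
name: canvas-2d-basic
level: 2        # Level 2 tests can be rejected for now
```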

The second most important criterion is that the test being reviewed must have been properly contributed to W3C. Tests in the pull request list as of April 30 are OK in this regard. [TBD: describe how people can validate this by themselves.]

Then the following aspects need to be considered:

  • Is the test vendor prefixed? If so, it is only acceptable as a level 0 test and not as a level 1 test. Also, it always needs to test the non-prefixed form, and to test it first.
  • Are you aware of existing equivalent W3C tests? If so, since we don't want to duplicate tests, the test ought to be rejected. If the overlap is only partial (or you are not sure), please point the test submitter to the test suite you believe it overlaps with.
  • Is the test only doing feature detection (e.g. checking that a given interface or attribute exists)? If so, that is not enough for it to be accepted. Detecting the presence of a feature is a fine first step in a test suite for a given feature (if only because you can then bail early), but it is not sufficient on its own to assert conformance.
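To illustrate the prefix rule in the first bullet, a test can resolve the unprefixed form first and only then fall back to vendor-prefixed variants. A minimal sketch; the helper name and the mock object are illustrative, not part of the actual test suite:

```javascript
// Hypothetical helper: look up a property, preferring the unprefixed
// name and only then trying vendor-prefixed forms.
function resolvePrefixed(obj, name) {
  if (name in obj) return name;  // non-prefixed form tested first
  var prefixes = ["webkit", "moz", "ms", "o"];
  var suffix = name.charAt(0).toUpperCase() + name.slice(1);
  for (var i = 0; i < prefixes.length; i++) {
    var candidate = prefixes[i] + suffix;
    if (candidate in obj) return candidate;
  }
  return null;
}

// Mock object standing in for e.g. `window` in a browser.
var mockWindow = { webkitRequestAnimationFrame: function () {} };
console.log(resolvePrefixed(mockWindow, "requestAnimationFrame"));
// → "webkitRequestAnimationFrame"
```

Under the criterion above, accepting the prefixed fallback is only appropriate for a level 0 test; a level 1 test must require the non-prefixed name.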
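To illustrate the last bullet, compare a bare feature-detection check with a test that also exercises behaviour. This sketch uses a plain object standing in for a browser API, so all names here are illustrative assumptions:

```javascript
// Stand-in for a storage-like API under test (illustrative, not a real API).
var storage = {
  data: {},
  setItem: function (k, v) { this.data[k] = String(v); },
  getItem: function (k) {
    return this.data.hasOwnProperty(k) ? this.data[k] : null;
  }
};

// Feature detection only: fine as an early bail-out, but not enough
// on its own for the test to be accepted.
var supported = typeof storage.setItem === "function";

// Behavioural assertion: actually exercise the feature and check
// that the value round-trips as specified.
storage.setItem("key", 42);
var roundTrips = storage.getItem("key") === "42";

console.log(supported, roundTrips); // → true true
```

A conformance test should bail early when `supported` is false, then go on to make behavioural assertions like `roundTrips`.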