Mobile Web Test Suites Working Group Blog


Automated tests creation for WebIDL-based specifications — 30 November 2009

A growing number of W3C specifications describe JavaScript APIs using WebIDL, including HTML5, XMLHttpRequest, the Geolocation API, and the many other APIs under development in the Web Applications and the Device APIs and Policy Working Groups.

WebIDL makes it possible to define these interfaces, with their methods and properties, in an abstract language, while specifying how they have to be implemented in ECMAScript (JavaScript's standardized name).

Using that abstract language makes it possible to automatically generate a number of test cases checking that the specified interfaces are correctly implemented (or, as often, correctly specified!). A few weeks ago I discovered the great WTTJS tool, which does exactly this: it takes a WebIDL definition, along with some indications on how to instantiate the declared interfaces, and then generates a set of test cases that can easily be run directly in browsers.
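To give a flavor of the approach (this is a made-up sketch, not WTTJS's actual code), a generator in this spirit can walk a parsed interface description and emit one check per declared member; the `interfaceDesc` object and the `fakeGeolocation` stand-in below are invented for the example:

```javascript
// Minimal sketch of WebIDL-driven test generation (not the real WTTJS code).
// Assumes the IDL has already been parsed into a plain description object.
const interfaceDesc = {
  name: "Geolocation", // hypothetical parsed-IDL description
  operations: ["getCurrentPosition", "watchPosition", "clearWatch"],
};

// Generate one test case per declared operation: an implementation
// must expose each operation as a function on an interface instance.
function generateTests(desc, instance) {
  return desc.operations.map((op) => ({
    name: `${desc.name}.${op} is a function`,
    pass: typeof instance[op] === "function",
  }));
}

// A toy "implementation" standing in for navigator.geolocation,
// so the generated tests can run outside a browser.
const fakeGeolocation = {
  getCurrentPosition() {},
  watchPosition() { return 0; },
  clearWatch() {},
};

const results = generateTests(interfaceDesc, fakeGeolocation);
results.forEach((r) => console.log(`${r.pass ? "PASS" : "FAIL"}: ${r.name}`));
```

In a browser, the same generated checks would simply be run against the real `navigator.geolocation` object instead of the stand-in.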

For instance, after extracting the WebIDL from the Geolocation API using the WebIDL checker, I got a set of test cases that allowed me to find out that the Geolocation API was not clear enough about which interfaces were supposed to be directly instantiable; this has now been partially corrected in the latest Editor's Draft.
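The kind of ambiguity found here can be probed with a generated check along the following lines; this is a sketch, and the `PositionLike` constructor is invented for the example (it is not part of the Geolocation API), standing in for what a browser might expose as `window.Position`:

```javascript
// Sketch of a generated "is this interface directly instantiable?" check.
// In a browser this would be run against the real constructor, if exposed;
// here a hypothetical PositionLike constructor stands in for it.
function checkInstantiable(name, Ctor) {
  try {
    new Ctor();
    return { name, instantiable: true };
  } catch (e) {
    return { name, instantiable: false };
  }
}

// Interfaces such as Position are only meant to be obtained from the API,
// not constructed directly; a conforming stand-in throws on `new`.
function PositionLike() {
  throw new TypeError("Illegal constructor");
}

const report = checkInstantiable("Position", PositionLike);
console.log(report); // { name: 'Position', instantiable: false }
```

When a specification does not say which behavior is expected, two conforming browsers can legitimately disagree on such a check, which is exactly the sort of gap these generated tests surface.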

The WebIDL specification is still evolving, and as a result not all of its constructs are currently supported in WTTJS, so running it on WebIDL fragments that use the latest syntax capabilities will likely require some light hand edits. But it certainly remains a great tool to help in the development of JavaScript specifications. Thank you, Wakaba!

by Dominique Hazael-Massieux in Testing tools 1 comment Permalink

Automated testing of a browser engine — 10 November 2009

“The cornerstone of all testing done on the core of the Opera browser is our automated regression testing system, named SPARTAN. The system consists of a central server and about 50 test machines running our 120 000 automated tests on all core reference builds. The purpose of this system is to help us discover any new bugs we introduce as early as possible, so that we can fix them before they cause any trouble for our users.”

Read more on the Core Concerns blog.

by Wilhelm Joys Andersen in Announcements, Testing tools Permalink

Contacts: Dominique Hazael-Massieux