WHATWG blog: Sunsetting the JavaScript Standard

Back in 2012, the WHATWG set out to document the differences between the ECMAScript 5.1 specification and the compatibility and interoperability requirements for ECMAScript implementations in web browsers.

A specification draft was first published under the name of “Web ECMAScript”, but later renamed to just “JavaScript”. As such, the JavaScript Standard was born.

Our work on the JavaScript Standard consisted of three tasks:

  1. figuring out implementation differences for various non-standard features;
  2. filing browser bugs to get implementations to converge;
  3. and finally writing specification text for the common or most sensible behavior, hoping it would one day be upstreamed to ECMAScript.

That day has come.

Some remaining web compatibility issues are tracked in the repository for the ECMAScript spec, which javascript.spec.whatwg.org now redirects to. The rest of the contents of the JavaScript Standard have been upstreamed into ECMAScript, Annex B.

This is good news for everyone. Thanks to the JavaScript Standard, browser behavior has converged, increasing interoperability; non-standard features got well-defined and standardized; and the ECMAScript standard more closely matches reality.

Highlights:

  • The infamous “string HTML methods”: String.prototype.anchor(name), String.prototype.big(), String.prototype.blink(), String.prototype.bold(), String.prototype.fixed(), String.prototype.fontcolor(color), String.prototype.fontsize(size), String.prototype.italics(), String.prototype.link(href), String.prototype.small(), String.prototype.strike(), String.prototype.sub(), and String.prototype.sup(). Browsers implemented these in slightly different ways, which in one case led to a security issue (and not just in theory!). It was an uphill battle, but eventually browsers and the ECMAScript spec matched the behavior that the JavaScript Standard had defined.

  • Similarly, ECMAScript now has spec text for String.prototype.substr(start, length).

  • ECMAScript used to require a fixed, heavily outdated version of the Unicode Standard for determining the set of whitespace characters and which identifier names are valid. The JavaScript Standard required the latest available Unicode version instead. ECMAScript first updated the Unicode version number and later removed the fixed version reference altogether.

  • ECMAScript Annex B, which specifies things like escape and unescape, used to be purely informative and only there “for compatibility with some older ECMAScript programs”. The JavaScript Standard made it normative and required for web browsers. Nowadays, the ECMAScript spec does the same.

  • The JavaScript Standard documented the existence of HTML-like comment syntax (<!-- and -->). As of ECMAScript 2015, Annex B fully defines this syntax.

  • The __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ methods on Object.prototype are defined in ECMAScript Annex B, as is __proto__.
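The upstreamed Annex B behavior is easy to check from script. A quick sketch, with results as Annex B specifies them (any modern engine should agree):

```javascript
// The string HTML methods wrap the string in legacy HTML tags.
// Per Annex B, double quotes in the attribute argument are escaped
// as &quot; -- the detail whose divergence once caused a security issue.
console.log('click me'.link('https://example.com/'));
// <a href="https://example.com/">click me</a>

console.log('x'.link('" onclick="evil()'));
// <a href="&quot; onclick=&quot;evil()">x</a>

// substr(start, length) is also specified in Annex B now;
// unlike slice(), a negative start counts back from the end.
console.log('JavaScript'.substr(-6, 3)); // Scr
```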

So long, JavaScript Standard, and thanks for all the fish!

Reddit: Browsers: Need opinions; lowest-memory-footprint browser for Win10 (& Win7) 64-bit, but it needs to support a few things!

Back in the day I remember using things like Mozilla Suite and Seamonkey, and then I moved to Firefox until late version 2. The memory usage became horrendous, so I used Lolifox (a modded Firefox) and then moved to Chrome later. I dabbled with Opera for a bit until they became just a fork of Chromium, and recently I went back to Firefox.

I'm running Win10 64-bit with an AMD FX 8350 and 16 GB of RAM, and I don't have any issues with Chrome, Firefox, or Opera for that matter. However, I've always been obsessed with playing with new browsers and improving my experience as much as possible.

For the things that I do, I do need support for Flash and HTML5 video, but not much else.

  • A site that I frequent uses webm, so it's important that I can view those appropriately.
  • Sites that I frequent use Flash, so this is also important.
  • Support for GIF, PNG, and JPG (which shouldn't be an issue these days)

Optionally support for Adblock Plus would be awesome, but not essential to what I want.

A high Speed-Battle score would also be pretty neat. Currently Lightfox and Firefox outperform Chrome and Opera overall on this benchmark site: Firefox produces a score of 1014+, while Chrome (and Opera) don't exceed the high 800s.

I don't mean to flame-bait anyone, so I guess the best thing to ask is: give your opinion with as much information as possible, and don't attack others for their views. I'm looking for as much information as possible; I think I might have as much as I'm going to get, but there is always someone out there who could have more.

Also, if you get better scores on Speed-Battle in the browsers I mentioned, that's great. It just doesn't happen on my hardware, and those are the results I get. I just want to see what people think, not make people upset. :P

submitted by /u/lmotaku

WHATWG blog: DRM and Web security

For a few years now, the W3C has been working on a specification that extends the HTML standard to add a feature that literally, and intentionally, does nothing but limit the potential of the Web. They call this specification "Encrypted Media Extensions" (EME). It's essentially a plug-in mechanism for proprietary DRM modules.

Much has been written on how DRM is bad for users because it prevents fair use, on how it is technically impossible to ever actually implement, on how it's actually a tool for controlling distributors, a purpose for which it is working well (as opposed to being to prevent copyright violations, a purpose for which it isn't working at all), and on how it is literally an anti-accessibility technology (it is designed to make content less accessible, to prevent users from using the content as they see fit, even preventing them from using the content in ways that are otherwise legally permissible, e.g. in the US, for parody or criticism). Much has also been written about the W3C's hypocrisy in supporting DRM, and on how it is a betrayal to all Web users. It is clear that the W3C allowing DRM technologies to be developed at the W3C is just a naked ploy for the W3C to get more (paying) member companies to join. These issues all remain. Let's ignore them for the rest of this post, though.

One of the other problems with DRM is that, since it can't work technically, DRM supporters have managed to get the laws in many jurisdictions changed to make it illegal to even attempt to break DRM. For example, in the US, there's the DMCA clauses 17 U.S.C. § 1201 and 1203: "No person shall circumvent a technological measure that effectively controls access to a work protected under this title", and "Any person injured by a violation of section 1201 or 1202 may bring a civil action in an appropriate United States district court for such violation".

This has led to a chilling effect in the security research community, with scientists avoiding studying anything that might relate to a DRM scheme, lest they be sued. The more technology embeds DRM, therefore, the less secure our technology stack will be, with each DRM-impacted layer getting fewer and fewer eyeballs looking for problems.

We can ill afford a chilling effect on Web browser security research. Browsers are continually attacked. Everyone who uses the Web uses a browser, and everyone would therefore be vulnerable if security research on browsers were to stop.

Since EME introduces DRM to browsers, it introduces this risk.

A proposal was made to avoid this problem. It would simply require each company working on the EME specification to sign an agreement that they would not sue security researchers studying EME. The W3C already requires that members sign a similar agreement relating to patents, so this is a simple extension. Such an agreement wouldn't prevent members from suing for copyright infringement, it wouldn't reduce the influence of content producers over content distributors; all it does is attempt to address this even more critical issue that would lead to a reduction in security research on browsers.

The W3C is refusing to require this. We call on the W3C to change their mind on this. The security of the Web technology stack is critical to the health of the Web as a whole.

- Ian Hickson, Simon Pieters, Anne van Kesteren

Planet WebKit: MotionMark: A New Graphics Benchmark

Co-written with Said Abou-Hallawa and Simon Fraser

[Figure: MotionMark logo]

Today, we are pleased to introduce MotionMark, a new graphics benchmark for web browsers.

We’ve seen the web grow in amazing ways, making it a rich platform capable of running complex web apps, rendering beautiful web pages, and providing user experiences that are fast, responsive, and visibly smooth. With the development and wide adoption of web standards like CSS animations, SVG, and HTML5 canvas, it’s easier than ever for a web author to create an engaging and sophisticated experience. Since these technologies rely on the performance of the browser’s graphics system, we created this benchmark to put it to the test.

We’d like to talk about how the benchmark works, how it has helped us improve the performance of WebKit, and what’s in store for the future.

Limitations of Existing Graphics Benchmarks

We needed a way to monitor and measure WebKit rendering performance, and looked for a graphics benchmark to guide our work. Most graphics benchmarks measured performance using frame rate while animating a fixed scene, but we found several drawbacks in their methodology.

First, some test harnesses used setTimeout() to drive the test and calculate frame rate, but that could fire at more than 60 frames per second (fps), causing the test to try to render more frames than were visible to the user. Since browsers and operating systems often have mechanisms to avoid generating frames that will never be seen by the user, such tests ran up against these throttling mechanisms. In reality, they only tested the optimizations for avoiding work when a frame was dropped, rather than the capability of the full graphics stack.

Second, most benchmarks we found were not written to accommodate a wide variety of devices. They failed to scale their tests to accommodate hardware with different performance characteristics, or to leave headroom for future hardware and software improvements.

Finally, we found that benchmarks often tested too many things at once. This made it difficult to interpret their final scores. It also hindered iterative work to enhance WebKit performance.

The Design of MotionMark

We wanted to avoid these problems in MotionMark. So we designed it using the following principles:

  1. Peak performance. Instead of animating a fixed scene and measuring the browser’s frame rate, MotionMark runs a series of tests and measures how complex the scene in each test can become before falling below a threshold frame rate, which we chose to be 60 fps. Conveniently, it reports the complexity as the test’s score. And by using requestAnimationFrame() instead of setTimeout(), MotionMark avoids drawing at frame rates over 60 fps.
  2. Test simplicity. Rather than animating a complicated scene that utilized the full range of graphics primitives, MotionMark tests draw multiple rendering elements, each of which uses the same small set of graphics primitives. An element could be an SVG node, an HTML element with CSS style, or a series of canvas operations. Slight variations among the elements avoid trivial caching optimizations by the browser. Although fairly simple, the chosen effects aim to reflect techniques commonly used on the web. Tests are visually rich, and are designed to stress the graphics system rather than JavaScript.
  3. Quick to run. We wanted the benchmark to be convenient and quick to run while maintaining accuracy. MotionMark runs each test within the same period of time, and calculates a score from a relatively small sample of animation frames.
  4. Device-agnostic. We wanted MotionMark to run on a wide variety of devices. It adjusts the size of the drawing area, called the stage, based on the device’s screen size.

Mechanics

MotionMark’s test harness contains three components:

  1. The animation loop
  2. The stage
  3. A controller that adjusts the difficulty of the test

The animation loop uses requestAnimationFrame() to animate the scene. Measurement of the frame rate is done by taking the difference in frame timestamps using performance.now().
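In a browser, this loop is driven by requestAnimationFrame() and performance.now(); the frame-rate math itself is just differencing successive timestamps. A minimal, hypothetical sketch of that bookkeeping (the class name and API are ours, not MotionMark's):

```javascript
// Collects frame timestamps (as delivered to a requestAnimationFrame
// callback) and derives the observed frame rate from their differences.
class FrameRateSampler {
  constructor() {
    this.durations = []; // milliseconds between consecutive frames
    this.lastTimestamp = undefined;
  }

  // In a browser: requestAnimationFrame(t => sampler.addTimestamp(t));
  addTimestamp(t) {
    if (this.lastTimestamp !== undefined) {
      this.durations.push(t - this.lastTimestamp);
    }
    this.lastTimestamp = t;
  }

  averageFrameRate() {
    const total = this.durations.reduce((sum, d) => sum + d, 0);
    return 1000 / (total / this.durations.length); // frames per second
  }
}

// Simulated timestamps 1000/60 ms apart, i.e. a steady 60 fps.
const sampler = new FrameRateSampler();
for (let frame = 0; frame < 10; frame++) {
  sampler.addTimestamp(frame * (1000 / 60));
}
console.log(sampler.averageFrameRate().toFixed(1)); // 60.0
```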

For each frame in the animation loop, the harness lets the test animate a scene with a specified number of rendering elements. That number is called the complexity of the scene. Each element represents roughly the same amount of work, but may vary slightly in size, shape, or color. For example, the “Suits” test renders SVG rects with a gradient fill and a clip, but each rect’s gradient is different, its clip is one of four shapes, and its size varies within a narrow range.

The stage contains the animating scene, and its size depends on the window’s dimensions. The harness classifies the dimensions into one of three sizes:

  • Small: 568 x 320, targeting mobile phones
  • Medium: 900 x 600, targeting tablets and laptops
  • Large: 1600 x 800, targeting desktops

The controller has two responsibilities. First, it monitors the frame rate and adjusts the scene complexity by adding or removing elements based on this data. Second, it reports the score to the benchmark when the test concludes.

MotionMark uses this harness for each test in the suite, and takes the geometric mean of the tests’ scores to report a single score for the run.
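The score combination is straightforward to sketch; a geometric mean keeps any single test's score from dominating the final number (the helper name is ours):

```javascript
// Geometric mean of per-test scores: the nth root of their product,
// computed via logarithms to avoid overflow with many tests.
function geometricMean(scores) {
  const logSum = scores.reduce((sum, s) => sum + Math.log(s), 0);
  return Math.exp(logSum / scores.length);
}

console.log(geometricMean([100, 400]).toFixed(0)); // 200
```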

The Development of MotionMark

The architectural modularity of the benchmark made it possible for us to do rapid iteration during its development. For example, we could iterate over how we wanted the controller to adjust the complexity of the tests it was running.

Our initial attempts at writing a controller tried to arrive at the exact threshold, or change point, past which the system could not maintain 60 fps. For example, we tried having the controller perform a binary search for the right change point. The measurement noise inherent in testing graphics performance at the browser level required the controller to run for a long time, which did not meet one of our requirements for the benchmark. In another example, we programmed a feedback loop using a technique found in industrial control systems, but we found the results unstable on browsers that behaved differently when put under stress (for example dropping from 60 fps directly down to 30 fps).

So we changed our focus from writing a controller that found the change point at the test’s conclusion, to writing one that sampled a narrow range which was likely to contain the change point. From this we were able to get repeatable results within a relatively short period of time and on a variety of browser behaviors.

The controller used in MotionMark animates the scene in two stages. First, it finds an upper bound by exponentially increasing the scene’s complexity until it drops significantly below 60 fps. Second, it goes through a series of iterations, repeatedly starting at a high complexity and ending at a low complexity. Each iteration, called a ramp, crosses the change point, where the scene animates slower than 60 fps at the higher bound, and animates at 60 fps at the lower bound. With each ramp the controller tries to converge the bounds so that the test runs across the most relevant complexity range.

With the collected sample data the controller calculates a piecewise regression using least squares. This regression makes two assumptions about how increased complexity affects the browser. First, it assumes the browser animates at 60 fps up to the change point. Second, it assumes the frame rate either declines linearly or jumps to a lower rate when complexity increases past the change point. The test’s score is the change point. The score’s confidence interval is calculated using a method called bootstrapping.
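MotionMark's actual controller and statistics are more involved (including the bootstrapped confidence interval), but the change-point fit can be sketched as a small least-squares search. This is an illustrative reimplementation with synthetic data, not MotionMark's code: for each candidate change point we score a flat 60 fps segment before it and a fitted line after it, keeping the candidate with the lowest total squared error.

```javascript
// Samples are [complexity, frameRate] pairs, sorted by complexity.
function fitChangePoint(samples) {
  let best = { error: Infinity, changePoint: null };
  for (let i = 1; i < samples.length - 1; i++) {
    // Candidate change point: the flat 60 fps segment covers samples
    // before index i; the declining line covers samples from i onward.
    let error = 0;
    for (let j = 0; j < i; j++) {
      error += (samples[j][1] - 60) ** 2;
    }
    // Least-squares line through the remaining samples.
    const rest = samples.slice(i);
    const meanX = rest.reduce((s, [x]) => s + x, 0) / rest.length;
    const meanY = rest.reduce((s, [, y]) => s + y, 0) / rest.length;
    let num = 0;
    let den = 0;
    for (const [x, y] of rest) {
      num += (x - meanX) * (y - meanY);
      den += (x - meanX) ** 2;
    }
    const slope = den === 0 ? 0 : num / den;
    const intercept = meanY - slope * meanX;
    for (const [x, y] of rest) {
      error += (y - (slope * x + intercept)) ** 2;
    }
    if (error < best.error) {
      best = { error, changePoint: samples[i][0] };
    }
  }
  return best.changePoint;
}

// Synthetic run: steady 60 fps up to complexity 500, then a jump down
// followed by a linear decline -- the shape the regression assumes.
const samples = [];
for (let c = 100; c <= 1000; c += 100) {
  samples.push([c, c <= 500 ? 60 : 50 - (c - 600) * 0.02]);
}
console.log(fitChangePoint(samples)); // 600
```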

MotionMark’s modular architecture made writing new tests fast and easy. We could also replicate a test visually but use different technologies including DOM, SVG, and canvas by substituting the stage.

Creating a new test required implementing the rendering element and the stage. The stage required overriding three methods of the Stage class:

  • animate() updates the animation and renders one frame. This is called within the requestAnimationFrame() loop.
  • tune() is called by the controller when it decides to update the complexity of the animation. The stage is told how many elements to add or remove from the scene.
  • complexity() simply returns the number of rendering elements being drawn in the stage.
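The contract above can be illustrated with a toy stage. Everything here beyond the three method names is our invention; a real stage would render into the DOM, SVG, or a canvas rather than just updating numbers:

```javascript
// Minimal stand-in for MotionMark's Stage base class.
class Stage {
  animate() { throw new Error('not implemented'); }
  tune(count) { throw new Error('not implemented'); }
  complexity() { throw new Error('not implemented'); }
}

// A toy stage whose rendering elements are particles moving along x.
class ParticleStage extends Stage {
  constructor() {
    super();
    this.particles = [];
  }

  // Advances every element one step; a real stage would also draw them.
  animate() {
    for (const p of this.particles) {
      p.x += p.velocity;
    }
  }

  // Called by the controller to add (positive) or remove (negative)
  // elements. Slight per-element variation defeats trivial caching.
  tune(count) {
    if (count > 0) {
      for (let i = 0; i < count; i++) {
        this.particles.push({ x: 0, velocity: 1 + (i % 4) });
      }
    } else {
      this.particles.splice(count); // drop |count| elements from the end
    }
  }

  complexity() {
    return this.particles.length;
  }
}

const stage = new ParticleStage();
stage.tune(8);   // controller raises complexity to 8 elements
stage.animate(); // one frame
stage.tune(-3);  // controller lowers complexity
console.log(stage.complexity()); // 5
```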

Because some graphics subsystems try to reduce their refresh rate when they detect a static scene, tests had to be written so that the scene changed on every frame. Moreover, the amount of work tied to each rendering element had to be small enough that all systems could animate at least one of them at 60 fps.

What MotionMark’s Tests Cover

MotionMark’s test suite covers a wide variety of graphics techniques available to web authors:

  • Multiply: CSS border radius, transforms, opacity
  • Arcs and Fills: Canvas path fills and arcs
  • Leaves: CSS-transformed <img> elements
  • Paths: Canvas line, quadratic, and Bezier paths
  • Lines: Canvas line segments
  • Focus: CSS blur filter, opacity
  • Images: Canvas getImageData() and putImageData()
  • Design: HTML text rendering
  • Suits: SVG clip paths, gradients and transforms

We hope to expand and update this suite with more tests as the benchmark matures and graphics performance improves.

Optimizations in WebKit

MotionMark enabled us to do a lot more than just monitor WebKit’s performance; it became an important tool for development. Because each MotionMark test focused on a few graphics primitives, we could easily identify rendering bottlenecks, and analyze the tradeoffs of a given code change. In addition we could ensure that changes to the engine did not introduce new performance regressions.

For example, we discovered that WebKit was spending time just saving and restoring the state of the graphics context in some code paths. These operations are expensive, and they were happening in critical code paths where only a couple properties like the transform were being changed. We replaced the operations with setting and restoring those properties explicitly.

On iOS, our traces on the benchmark showed a subtle timing issue with requestAnimationFrame(). CADisplayLink is used to synchronize drawing to the display’s refresh rate. When its timer fired, the current frame was drawn, and the requestAnimationFrame() handler was invoked for the next frame if drawing completed. If drawing did not finish in time when the timer fired for the next frame, the timer was not immediately reset when drawing finally did finish, which caused a delay of one frame and effectively cut the animation speed in half.

These are just two examples of issues we were able to diagnose and fix by analyzing the traces we gathered while running MotionMark. As a result, we were able to improve our MotionMark scores:

[Figures: MotionMark score improvements on macOS and on iOS]

Conclusion

We’re excited to be introducing this new benchmark, and using it as a tool to improve WebKit’s performance. We hope the broader web community will join us. To run it, visit http://browserbench.org/MotionMark. We welcome you to file bugs against the benchmark using WebKit’s bug management system under the Tools/Tests component. For any comments or questions, feel free to contact the WebKit team on Twitter at @WebKit or Jonathan Davis, our Web Technologies Evangelist, at @jonathandavis.

Planet Mozilla: [worklog] Edition 036 - Administrative week

Busy week, without much done on bugs. The W3C is heading to Lisbon for TPAC, so the tune of the week is Amalia Rodrigues. I'll be there in spirit.

Webcompat Life

Progress this week:

326 open issues
----------------------
needsinfo       12
needsdiagnosis  106
needscontact    8
contactready    28
sitewait        158
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • yet another appearance: none implementation in Blink; this time for meter.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Planet WebKit: Release Notes for Safari Technology Preview Release 13

Safari Technology Preview Release 13 is now available for download for both macOS Sierra betas and OS X El Capitan 10.11.6. If you already have Safari Technology Preview installed, you can update from the Mac App Store’s Updates tab. This release covers WebKit revisions 204876–205519.

Fetch API

  • Added support for BufferSource bodies (r205115)
  • Fixed blob resource handling to raise a network error when the URL is not found (r205190)
  • Set the blob type correctly for an empty body (r205250)
  • Set the blob type from Response/Request contentType header (r205076)
  • Made the body mix-in text() decode data as UTF-8 (r205188)
  • Ensured response cloning works when data is loading (r205110)
  • Enabled the Fetch API to load the data URL in same-origin mode (r205265)
  • Prevented any body for opaque responses (r205082)
  • Changed opaqueredirect responses to have their URL set to the original URL (r205081)
  • Prevented setting bodyUsed when request construction fails (r205253)
  • Set Response bodyUsed to check for its body-disturbed state (r205251)
  • Changed response cloning to use structuredClone when teeing a Response stream (r205117)
  • Aligned the internal structure of ReadableStream with the specifications (r205289)
  • Aligned data:// URL behavior of XHR to match specifications (r205113)
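Several of the Fetch fixes above are observable with a few lines of script. A hedged sketch (it only constructs a Response, no network; run it in a page or in any runtime that exposes Fetch's Response and TextEncoder globals):

```javascript
// A BufferSource (here a Uint8Array) can serve as a body, and the
// body mix-in's text() decodes that body as UTF-8.
const response = new Response(new TextEncoder().encode('héllo'));

// Cloning works before the body data has been consumed.
const copy = response.clone();

response.text().then(text => {
  console.log(text); // héllo
});
```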

Custom Elements

  • Added adopted callback for custom elements on appendChild() (r205085)
  • Enabled reaction callbacks for adopted custom elements (r205060)
  • Updated the semantics of :defined to re-align with specification changes (r205416)
  • Added validations for a synchronously constructed custom element (r205386)
  • Added support for the whenDefined() method on the CustomElementRegistry (r205315)
  • Added a CustomElementRegistry check for reentrancy (r205261)

JavaScript

  • Enabled assignments in for…in head in non-strict mode (r204895)
  • Changed newPromiseCapabilities to check that the given argument is a constructor (r205027)
  • Fixed toString() to return the correct tag when called on proxy objects (r205023)

Web APIs

  • Added event support for <link preload> (r205269)
  • Implemented x, y and ScrollToOptions arguments for Element.scroll(), Element.scrollTo(), and Element.scrollBy() (r205505)
  • Updated location.toString to make it enumerable (r204953)
  • Updated location.toString in Web Workers to make it enumerable (r204954)
  • Changed Object.preventExtensions(window) to throw a TypeError exception (r205404)
  • Aligned coords and srcset attribute parsing with the HTML specification (r205030, r205515)
  • Added support for CanvasRenderingContext2D.prototype.resetTransform (r204878)
  • Aligned cross-origin Object.getOwnPropertyNames() with the HTML specification (r205409)

Web Inspector

  • Added IndexedDB Database, ObjectStore, and Index data to the details sidebar (r205043)
  • Added support for Shift-Command-D (⇧⌘D) to switch to the last used dock configuration (r205413)
  • Added support for Shift-Tab (⇧⇥) to un-indent the selected line (r204924)
  • Changed Command-D (⌘D) to select the next occurrence instead of deleting the line (r205414)
  • Added a visual indicator for shadow content in the DOM tree (r205322)
  • Allowed hiding of CSS variables in the Computed styles panel (r205518)
  • Fixed an issue that prevented using an Undo action in the breakpoint editor (r205499)
  • Prevented the resource content view from showing “CR” characters (r205517)
  • Fixed an issue preventing re-inspecting the page after a Web Inspector process crash (r205370)
  • Improved the minification detection heuristic for small resources (r205314)
  • Fixed an issue causing network record bars to be positioned on unexpected rows (r205349)
  • Provided a way to clear an IndexedDB object store (r205041)
  • Improved the debugger popover to pretty print functions (r205223)
  • Corrected unexpected cursor changes while dragging ruler handle in the rendering frames timeline (r204940)
  • Corrected the display of a plain text XHR response with responseType="blob" (r205268)

CSS

  • Implemented CSS.escape according to the CSSOM specification (r204952)
  • Improved CSS stylesheet checks to ensure clean stylesheets are accessible from JavaScript (r205455)
  • Improved :enabled and :disabled selectors to only match elements that can be disabled (r205050)

Rendering

  • Fixed scrollbars for a <table> with overflow content inside <div align="right"> (r205489)
  • Added support for non-BMP MathML operators U+1EEF0 and U+1EEF1 (r205111)
  • Fixed getting font bounding rect for MathML (r205031)

Security

  • Changed the Image Loader to set the fetch mode according to its crossOrigin attribute (r205134)
  • Added a SecurityError when trying to access cross-origin Location properties (r205026)
  • Updated Object.defineProperty() and Object.preventExtensions() to throw an error for a cross-origin Window or Location object (r205358, r205359)
  • Updated Object.setPrototypeOf() to throw an error and return null when used on a cross-origin Window or Location object (r205205, r205258)

Plugins

  • Replaced YouTube.com Flash embeds with HTML5 equivalents on macOS (r205274)

Planet Mozilla: What’s Up with SUMO – 15th September

Hello, SUMO Nation!

We had a bit of a delay with the release of the 49th version of Firefox this week… but for good reasons! The release is coming next week – but our latest news is coming right here, right now. Dig in!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 14th of September – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 21st of September!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

  • PLATFORM REMINDER! The Platform Meetings are BACK! If you missed the previous ones, you can find the notes in this document. (here’s the channel you can subscribe to).
    • We have a first version of a working {for} implementation on the staging site for the Lithium migration – thanks to Tyson from the Lithium team.
    • Some of the admins will be meeting with members of the Lithium team in two weeks to work face-to-face on the migration.
    • More questions from John99 and answers from our team – do check the document linked above for more details.
    • If you are interested in test-driving the new platform now, please contact Madalina.
      • IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
  • QUESTIONS? CONCERNS? Please take a look at this migration document and use this migration thread to put questions/comments about it for everyone to share and discuss. As much as possible, please try to keep the migration discussion and questions limited to those two places – we don’t want to chase ten different threads in too many different places.

Social

Support Forum

  • SUMO Day coming up next week! (As mentioned above).
  • The Norton startup crash for version 49 is still waiting for a fix from Symantec – if that doesn’t happen, expect a few questions in the forums about that.
  • A vulnerability was found in the Flash player last week – if you’re using it, please update it as soon as you can to the latest version!
  • Reminder: If you are using email notifications to know what posts to return to, jscher2000 has a great tip (and tool) for you. Check it out here!

Knowledge Base & L10n

  • We are (still) 1 week before next release / 5 weeks after current release. What does that mean? (Reminder: we are following the process/schedule outlined here)

    • Only Joni or other admins can introduce and/or approve potential last minute changes of next release content; only Joni or other admins can set new content to RFL; localizers should focus on this content.
  • We have some extra time, so please remember to localize the main articles for the upcoming release:
    • https://support.mozilla.org/kb/hello-status/translate
    • https://support.mozilla.org/kb/firefox-reader-view-clutter-free-web-pages/translate
    • https://support.mozilla.org/kb/html5-audio-and-video-firefox/translate
    • https://support.mozilla.org/kb/your-hardware-no-longer-supported/translate

Firefox

  • for Android
    • To repeat what you’ve heard last week (because it’s still true!): version 49 is coming next week. Highlights include:

      • caching selected pages (e.g. mozilla.org) for offline retrieval
      • usual platform and bug fixes
  • for Desktop
    • You’ve heard it before, you’ll hear it again: version 49 is coming next week – read more about it in the release thread (thank you, Philipp!). Highlights include:
      • text-to-speech in Reader mode
      • ending support for older Mac OS versions
      • ending support for older CPUs
      • ending support for Firefox Hello
      • usual platform and bug fixes
  • for iOS
    • …I hear there’s a new iPhone in town, but it’s far from being a jack of all trades ;-)

OK, I admit it, I’m not very good at making hardware jokes. I’m sorry! I guess you’ll have to find better jokes somewhere on the internet – do you have any interesting places that provide you with fun online? Tell us in the comments – and see you all next week!

W3C Team blog: HTML – from 5.1 to 5.2

There is a First Public Working Draft of HTML 5.2. There is also a Proposed Recommendation of HTML 5.1. What does that mean? What happened this year, what didn’t? And what next?

First, the Proposed Recommendation. W3C develops specifications, like HTML 5.1, and when they are “done”, as agreed by the W3C, they are published as a “Recommendation”. Which means what it says – W3C Recommends that the Web use the specification as a standard.

HTML 5.0 was published as a Recommendation a bit over 2 years ago. It was a massive change from HTML 4, published before the 21st Century began. And it was a very big improvement. But not everything was as good as it could be.

A couple of years before the HTML 5 Recommendation was published, a decision was taken to get it done in 2014. Early this year, we explained that we were planning to release HTML 5.1 this year.

There is an implementation report for HTML 5.1 that shows almost all of the things we added since HTML 5.0 are implemented, and work out there on the Web already. Some things that didn’t work, or did but don’t any more, were removed.

HTML 5.1 certainly isn’t perfect, but we are convinced it is a big improvement over HTML 5.0, and so it should become the latest W3C Recommendation for HTML. That’s why we have asked W3C to make it a Proposed Recommendation. That means it gets a formal review from W3C’s members to advise Tim Berners-Lee whether this should be a W3C Recommendation, before he makes a decision.

Meanwhile, we are already working on a replacement. We believe HTML 5.1 is today the best forward looking, reality-based, HTML specification ever. So our goal with HTML 5.2 is to improve on that.

As well as fixing bugs people find in HTML 5.1, we are working to describe HTML as it really will be in late 2017. By then Custom Elements are likely to be more than just a one-browser project and we will document how they fit in with HTML. We expect improvements in the ability to use HTML for editing content, using e.g. contenteditable, and perhaps some advances in JavaScript. Other features that have been incubating, for example in the Web Platform Incubator Community Group, will reach the level of maturity needed for a W3C Recommendation.

We have wanted to make the specification of HTML more modular, and easier to read, for a long time. Both of those are difficult, time-consuming jobs. They are both harder to do than people have hoped over the last few years. We have worked on strategies to deal with making HTML more modular, but so far we have only broken out one “module”: ARIA in HTML.

We hope to break out at least one more substantial module in the next year. Whether that happens depends on sufficient participation and commitment from across the community.

We will further improve our testing efforts, and make sure that HTML 5.2 describes things that work, and will be implemented around the Web. We have developed a process for HTML 5.1 that ensures we don’t introduce things that don’t work, and remove things already there that don’t reflect reality.

And we will continue working to a timeline, with the HTML 5.2 specification heading for Recommendation around the end of 2017.

By which time, we will probably also be working on a replacement for it, because the Web seems like it will continue to develop for some time to come…

Planet WebKit: Manuel Rego: TPAC, Web Engines Hackfest & Igalia 15th anniversary on the horizon

W3C’s TPAC

Next week I’ll be in Lisbon attending TPAC. This is the annual conference organized by the W3C where all the different groups meet face to face during one week. It’s a huge event where you can meet lots of important people working on the web. Looking forward to being there. 😃

Due to our involvement in the implementation of the CSS Grid Layout specification in Chromium/Blink and Safari/WebKit, we’ve been interacting quite a lot with the CSS Working Group (CSS WG). Thus, I’ll be participating in their meetings during the event and also following the work around Houdini, with a close eye on the Layout API. BTW, thanks to the CSS WG chairs for letting me join them.

Like last year, Igalia will have a booth at the conference where you can chat with us about our involvement in the W3C, from spec editing to the implementation of web standards in the different browsers. My colleagues Joanie and Juanjo will be attending the event too, so don’t hesitate to ping any of us to talk about Igalia and our contributions to the open web platform.

Web Engines Hackfest

This year Igalia is once again organizing and hosting a new edition of the Web Engines Hackfest. The event will take place during the last week of the month (26-28th September); it used to be in December, but we looked for a better date this year and it seems we’ve been successful, as we’ll be around 40 people (more than ever). Thanks everyone attending, we just hope you really enjoy the event.

As usual my main goal for the hackfest is related to the CSS Grid Layout implementation. It’d probably be a good moment to draft a plan for shipping it in Chromium, as we’ll have Christian Biesinger around, who usually reviews most of our Grid Layout patches.

On top of that, and due to my involvement in the MathML refactoring we did in WebKit, I’ll be very interested in continuing to discuss MathML and the next steps to make it a reality in the different web engines. Some font experts will be around, so we won’t miss the opportunity to try to improve OpenType MATH table support in HarfBuzz.

Apart from this, it’s worth highlighting the big number of Servo contributors we’ll have at the event, from Mozilla employees to external collaborators (like my colleague Martin). I’m eager to check firsthand the status of this engine and its future plans.

Last, but not least, hopefully during the hackfest we’ll find some time to discuss the upstreaming process of WebKit for Wayland, trying to convert it into an official WebKit port like WebKitGTK+.

And there’ll be more topics discussed there: accessibility, multimedia, V8, WebRTC, etc. We’re quite a lot of people, so surely we’ll have productive meetings on most of these things.

Igalia 15th Anniversary

This month Igalia is turning 15 years old! It’s amazing that a company with a completely different model (focused on free software and with a flat, cooperative-like structure) has survived all these years. I’m very grateful to the people who founded a company with such wonderful values, and to all the people who have made it possible to reach this point in our history. Thanks for letting me be part of it! 😊

Igalia 15th Anniversary Logo

We’ll be celebrating the anniversary during the last week of September: we’ll have several parties that week, and one of our summits at the weekend.

It’s awesome to look back in time and realize how many contributions we’ve made to a lot of different free software projects, from our first days inside the GNOME project to our current work on the different browsers or graphics drivers, among others. Lots of programs you use every day have some code contributed by Igalia; from your computer to your phones, TVs, watches, etc.

Happy birthday Igalia, I wish us many more years of success in the free world! 🎂

Planet Mozilla: Goosebumps! empty img src error events in Firefox

It's almost Septemberween, which means it's that time of the year we gather 'round our spinning MacBook fans and share website ghost stories:

The tale of the disappearing burger and lobster site content (i.e., webcompat bug 2760).

Chapter 1.

Once upon a time (that time being the present), there was this burger and lobster "restaurant" called, um well, burger and lobster. In Firefox, as of today, the site content never renders — you just end up with some fancy illustrations (none of which are burgers or lobsters).

Opening devtools, you've got the following mysterious stacktrace:

Error: node is undefined
compositeLinkFn@http://www.burgerandlobster.../angular-1.2.25.js:6108:13
compositeLinkFn@http://www.burgerandlobster.../angular-1.2.25.js:6108:13
nodeLinkFn@http://www.burgerandlobster.../angular-1.2.25.js:6705:24
(...stack frames descend into hell...)
jQuery@http://www.burgerandlobster.../jquery-1.11.1.js:73:10
@http://www.burgerandlobster.../jquery-1.11.1.js:3557:1
@http://www.burgerandlobster.../jquery-1.11.1.js:34:3
@http://www.burgerandlobster.../jquery-1.11.1.js:15:2

Cool, time to debug AngularJS.

But it turns out that leads nowhere besides the abyss, AKA functions that return functions that compose with other functions with dollar signs in them... and the bug is elsewhere. Besides, Chrome has a similar error, and the page works there. Just a haunted node, maybe.

Dennis Schubert discovered that adding a <base href="/"> fixes the site, which happens to be required by Angular in later versions for $locationProvider.html5Mode. But this bug has nothing to do with pushState or history, or even SVG xlink:hrefs, all of which the <base> element can affect.

Another (dead) rabbit hole, another dead end (spooky).

At some point, all your debugging tricks and intuitions fail and it's time to just page a thousand lines of framework-specific JS into your brain and see where that leads. Two hours later, if you're lucky, you notice something weird like so:

var illustArr = [
  {
    "url": "/Assets/images/illustrations/alert-man.png",
    "x": "2508",
    "y": "2028"
  },
...
(a bunch of similar objects, then...)
  {
    "url": "",
    "x": "",
    "y": ""
  ];

And you recall a method that dispatches an allImagesLoaded event which tells the app it's OK to load the rest of the page content. It looks like this:

b.allImagesLoaded = function() {
  d += 1,
  d === a.imageArr.length && $("body").trigger("allImagesLoaded")
}

But it only does that once it's counted that all images have loaded (or fired an error event):

l.loadImage = function(a) {
  var b = $(document.createElement("div"))
    , c = $(document.createElement("img"));
  [...]
  c.attr("src", a.url),
  [...]
  c.bind("load error", function(e) {
      $(this).addClass("illustration--show"),
      h.allImagesLoaded()
  })
}

So yeah, that looks fishy. And that's why Firefox gets stuck—it doesn't fire error events when img.src is set to the empty string, which is required per HTML. Here's a small test case, which also demonstrates why the <base href="/"> fixed the page—it'll fire an error event when requesting an image from the site root (and eventually barf on the HTML, I guess).
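The stall can be reproduced with the counting logic in isolation. Here's a minimal sketch (function and variable names are mine, not the site's):

```javascript
// Simplified sketch of the page's counting pattern: the "all loaded"
// callback fires only once every image has reported either a load or
// an error event.
function makeImageCounter(imageUrls, onAllLoaded) {
  let seen = 0;
  return function onLoadOrError() {
    seen += 1;
    if (seen === imageUrls.length) onAllLoaded();
  };
}
```

If the browser never fires an error event for the entry with the empty url, the counter stays one short of the total and the trigger never runs.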

Anyways, the Gecko bug for that is Bug 599975. That will probably get fixed soon.

Epilogue.

So what's the moral of this ghost story? There is none. Septemberween is cruel that way.

Steve Faulkner et al: Notes on ZoomText Web Finder

ZoomText Magnifier/Reader is a popular combination magnifier/screen reader, primarily for users with low vision. A feature it provides is Web Finder, which makes use of HTML semantics to provide navigation to, interaction with, and understanding of content structure.

ZoomText Web Finder

The product page describes it thus:

ZoomText’s Enhanced Web Finder allows you to search webpages for specific words or phrases, or skim through pages to find items of interest. When an item of interest is found you can have Web Finder start reading aloud from that location (Magnifier/Reader only). If the item is a link to another page you can have Web Finder execute the link and continue your search on the new page.

The Web Finder provides a user interface which lists various HTML elements found in a page and allows the user to navigate to and interact with these elements. It can be used with either the magnifier only or with speech enabled.

ZoomText UI with the Magnifier tab selected.

Web Finder can be enabled by pressing the Web button located in the Finders section of the Magnifier tab. Note: the Web button will be disabled unless the currently active application is one that ZoomText recognizes as a web browser.

Supported Browsers

Web Finder currently works with Firefox, Chrome and Internet Explorer on Windows. Note: ZoomText appears to have issues with pre-Windows 10, pre-IE 11 combinations.

Whilst you can activate the Web Finder when running Opera, no semantics are recognized.

If you open Microsoft Edge with ZoomText running, the following dialog is displayed:

ZoomText Compatibility Warning dialog

[Dialog text:]

You just started Microsoft Edge to browse the Internet or to view a PDF file.  Microsoft Edge is not fully accessible by ZoomText at this time.

Please use Internet Explorer to browse the web and Microsoft Reader to view PDF files. To make these the default programs for these tasks, select the following link.

Supported Semantics

In Chrome, Firefox and Internet Explorer the following HTML features are recognised and listed (if present in the page):

  • Headings h1-h6
  • Landmarks (native HTML5 – header, footer, main, section, aside, nav), identified with a type of the role name plus “landmark”. For example, the header element and any element with role=banner are identified as a “Banner landmark”
    Notes:

    • The article element is also included as a landmark with the type “section: HTML5 article”, which is funky on a few levels: article is not a landmark, and using “HTML5” as part of the type provides no useful information to the large majority of users, most of whom have no idea what HTML5 is or why it is called out in reference to the article type.
    • header and footer elements only recognised as landmarks (and listed) if scoped to the body element (as per the HTML Accessibility API Mappings 1.0 specification)
  • Lists (ol, ul and dl)
  • Controls: a limited set of controls are identified – buttons (input type="image|button|reset|submit" and the button element), checkboxes (input type="checkbox"), comboboxes (select element), edit boxes (input type="text|search|url|tel|email"), multiline text boxes (textarea element) and radio buttons (input type="radio")
  • Links (a href)
  • Images (img elements)
  • Forms (form elements)

Name and Type

The semantics are conveyed to users via two pieces of information, name and type, with the elements present in the page displayed in a listbox.

For example, headings listed from a headings example page:

headings listed with name from first word in each heading and type from element type.

In the case of headings (and Landmarks), the name is derived from the first descendant text node in the source order unless the element has a name provided by the title, aria-label or aria-labelledby attributes, in which case these sources are used.

In the case of buttons (and other supported controls), if the control allows child text nodes, then these are used to provide the name, unless the element has a name provided by the  aria-label or aria-labelledby attributes, in which case these sources are used.

Note: If a button has no child text node but has a title, the attribute content is used as the name.

It is heartening to note that the name calculation used reflects the HTML Accessibility API Mappings 1.0 specification.
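Put together, the precedence for controls can be sketched roughly like this (my own simplification operating on a plain object, not ZoomText's actual code):

```javascript
// Rough sketch of the name calculation described above for controls:
// ARIA labels win, then child text content, then the title attribute
// as a last resort.
function webFinderName(el) {
  if (el.ariaLabelledbyText) return el.ariaLabelledbyText; // aria-labelledby
  if (el.ariaLabel) return el.ariaLabel;                   // aria-label
  if (el.text) return el.text;                             // child text nodes
  if (el.title) return el.title;                           // title fallback
  return "";
}
```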

In cases where there is a name and a description provided (via title or aria-describedby) these are displayed in the listbox name field separated by a | pipe character:

List of landmarks with an example of the name field containing both a name and description.

Note: state attributes are unsupported, so for example if a button is disabled (via the disabled attribute) there will be no indication in the list that this is the case.

What can users do with all this?

Web Finder UI showing filter list and function buttons

Users can filter the list to find the types of content they want, and can choose an item from the list to move to that item in the web page (by double clicking, hotkey or the Goto button) and have it highlighted, and spoken (if using reader functionality). They can also activate controls and links directly from within the Web Finder interface via the Execute button.

What about ARIA?

From the content above you may already realise that Web Finder supports ARIA landmark roles and label/description attributes. It also appears to support ARIA equivalents for the other limited set of native HTML features it supports. For example, it recognises role=button as a button control.

Advice for Web Developers

There is nothing special you need to do to support ZoomText Web Finder apart from what you should already be doing: marking up your UI using the semantics provided in the HTML5 standard and, if you must roll your own UI, using ARIA to supplement the role, state and property information. If you don’t use HTML/ARIA to correctly convey UI semantics, then ZoomText users will encounter a lot of this:

web finder listbox with message "No Elements Found".

 

 

Steve Faulkner et al: What does accessibility supported mean?

With the recent news that Microsoft Edge now has 100% accessibility support for HTML5, this post looks at what “accessibility supported” means, and where it fits into the bigger accessibility picture.

Accessibility comes in many forms and all of them are important. For the purposes of this post however, the term “accessibility” is used to mean the ability of an assistive technology like a speech recognition tool or screen magnifier to access content in the browser.

For a feature of HTML (or any other web technology) to be considered accessible, three things have to happen:

1. The browser must support the feature

This means that the browser recognises the feature and provides the expected visual rendering and/or behaviour. When the W3C releases a new version of HTML, it makes sure that each element, attribute and API is supported in at least two browsers. This gives developers a reasonable degree of confidence that the features of W3C HTML will work in the wild.

2. The browser must expose the feature to the platform accessibility API

This is what is meant by “browser accessibility support”. In addition to supporting a feature as described above, the browser must also expose information about the feature’s role, name, properties and states to the platform accessibility API.

The level of HTML5 accessibility support in popular browsers is tracked on html5accessibility.com. These are publicly available tests and results that are updated as new browser versions are released.

3. The assistive technology must obtain and use information about the feature using the platform accessibility API

When a feature is accessibility supported by the browser, the assistive technology must use the information exposed to the platform accessibility API. This information is used by assistive technologies in different ways – by a screen reader to tell someone what kind of feature they are using, or by a speech recognition tool to let someone target a particular object for example.

In other words, the browser is responsible for supporting a feature and exposing it to the platform accessibility API, and the assistive technology is responsible for utilising that information to help people access content in the browser. If either the browser or the assistive technology fails to fulfill its responsibilities, then accessibility support is not complete.

In practice this means that different combinations of browsers and assistive technologies offer different levels of accessibility support. For example, Edge now has 100% accessibility support for HTML5, and Narrator (the integrated Windows screen reader) takes full advantage of this – meaning that Edge is extremely usable with the Narrator screen reader. In contrast, other screen readers have yet to take advantage of the accessibility information exposed by Edge, and so for now that browser remains largely unusable with those products.

According to html5accessibility.com, Chrome, Firefox and Safari all expose less information about HTML5 than Edge. However most screen readers make good use of that information, so all three browsers are usable with screen readers on the relevant platform.

The goal is for all browsers to hit the 100% benchmark set by Edge, and for all assistive technologies to make full use of that information. When complete accessibility support becomes a given, people are then free to choose their browser and/or assistive technology based on features and capability instead.

Planet Mozilla: Mitigating MIME Confusion Attacks in Firefox

Scanning the content of a file allows web browsers to detect the format of a file regardless of the Content-Type specified by the web server. For example, if Firefox requests a script from a web server and that web server sends it with a Content-Type of “image/jpg”, Firefox will detect the actual format and execute the script anyway. This technique, colloquially known as “MIME sniffing”, compensates for incorrect metadata, or even its complete absence, that browsers need to interpret the contents of a page resource. Firefox uses contextual clues (the HTML element that triggered the fetch) and also inspects the initial bytes of media type loads to determine the correct content type. While MIME sniffing improves the web experience for the majority of users, it also opens up an attack vector known as a MIME confusion attack.

Consider a web application which allows users to upload image files but does not verify that the user actually uploaded a valid image, e.g., the web application just checks for a valid file extension. This lack of verification allows an attacker to craft and upload an image which contains scripting content. The browser then renders the content as HTML, opening the possibility of a Cross-Site Scripting attack (XSS). Even worse, some files can be polyglots, meaning their content satisfies two content types. E.g., a GIF can be crafted to be both a valid image and valid JavaScript, and the correct interpretation of the file depends solely on the context.

Starting with Firefox 50, Firefox will reject stylesheets, images or scripts if the server sends the response header “X-Content-Type-Options: nosniff” and their MIME type does not match the context in which the file is loaded (view specification). More precisely, if the Content-Type of a file does not match the context (see the detailed list of accepted Content-Types for each format underneath), Firefox will block the file, preventing such MIME confusion attacks, and will display the following message in the console:

The resource from “https://example.com/bar.jpg” was blocked due to MIME type mismatch (X-Content-Type-Options: nosniff).

Valid Content-Types for Stylesheets:
– “text/css”

Valid Content-Types for images:
– have to start with “image/”

Valid Content-Types for Scripts:
– “application/javascript”
– “application/x-javascript”
– “application/ecmascript”
– “application/json”
– “text/ecmascript”
– “text/javascript”
– “text/json”
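The blocking rule implied by these lists can be sketched as a simple function. This is my own simplified model of the behavior described above, not Firefox's actual implementation:

```javascript
// Accepted script Content-Types, per the list above.
const SCRIPT_TYPES = new Set([
  "application/javascript", "application/x-javascript",
  "application/ecmascript", "application/json",
  "text/ecmascript", "text/javascript", "text/json",
]);

// Given the load context and the response's Content-Type, decide
// whether a nosniff response would be rejected.
function blockedByNosniff(context, contentType) {
  // Ignore parameters such as "; charset=utf-8" and normalize case.
  const type = contentType.split(";")[0].trim().toLowerCase();
  switch (context) {
    case "style":  return type !== "text/css";
    case "image":  return !type.startsWith("image/");
    case "script": return !SCRIPT_TYPES.has(type);
    default:       return false; // other contexts are out of scope here
  }
}
```

For example, a script served as “image/jpg” is blocked, while an image served as “image/png” passes.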

Tantek Çelik: Intense @W3CAB mtg day 1 topics from #HTML5 futures to security. Exhausted, skipped dinner plans to rest, catch up on emails.

Intense @W3CAB mtg day 1 topics from #HTML5 futures to security. Exhausted, skipped dinner plans to rest, catch up on emails.

W3C Team blog: W3C China celebrated its 10th Anniversary in July

The W3C Beihang Host celebrated W3C China's 10th Anniversary at Beihang University on July 9th 2016. To honor a fruitful first 10 years and look forward to a brighter future, the W3C China team invited the local web community to celebrate this great moment together.

The event was organized in 3 sessions: Core Web Technology, Future of the Web, and Web & Industry. 11 speakers from the W3C team and W3C members, as well as some noted researchers, shared their insights. More than 200 participants attended this event on site and about 20,000 remote attendees watched the video stream. The Core Web Technology session focused on the current achievements of the Open Web Platform; presentations about Web design principles, Web applications and web accessibility were shared with the audience. In the Future of the Web session, the speakers talked about hot topics such as blockchain, virtual reality and data visualization; Prof. Wei-Tek Tsai, who had just come back from the W3C Blockchain workshop, shared his experience there as well as his vision for blockchain. The Web & Industry session was mainly about W3C’s efforts in vertical industries such as payment, automotive, and the Web of Things; Dr. Jie Bao, a former W3C Linked Data Activity participant, talked about the use of linked data in the financial industry and brought the audience a fresh new angle on linked data technologies.

Prof. Jinpeng Huai, former Beihang Host representative, ex-President of Beihang University, and Vice Minister of the Ministry of Industry and Information, joined this event and expressed his best wishes for the future of W3C and the Web.

A brief history of W3C in China

In the spring of 2006, the W3C China Office was launched at Beihang University, and Beihang University has hosted W3C in China ever since. In 2008, the W3C China Office took over the related business from the W3C Hong Kong Office, which was terminated for a good reason. The W3C China Office appreciated the contribution of the Hong Kong Office, especially the efforts and support of Prof. Vincent Shen, the Office Manager of the W3C Hong Kong Office. With the continuous endeavor of the W3C team at home and abroad, as well as strong support from the Web community, W3C has grown robustly together with the web industry in China. More and more noted Chinese ICT organizations such as Alibaba, Tencent, Huawei, Baidu, China Mobile, China Unicom and the Chinese Academy of Sciences have joined W3C as members. New web technologies like HTML5 have gained increasing popularity among Chinese developers. In January 2016, W3C upgraded its China Office and launched its fourth international R&D center at Beihang, AKA a W3C Host in China.

Planet Mozilla: all the links that’s fit to save for later

jared has ~a million links for me to review in response to every 1 question i ask. i ask a lot of questions.

needless to say, we have been filing links away on an imaginary “to read later” list for several weeks now.

i’m starting an actual “to read later” list here, with the hope that i’ll make it back around to some of these:

to be continued…

Planet Mozilla: Workshop day two

HTTP Workshop At 5pm we rounded off another fully featured day at the HTTP workshop. Here’s some of what we touched on today:

Moritz started the morning with an interesting presentation about experiments with running the exact same site and contents on h1 vs h2 over different kinds of networks, with different packet loss scenarios and with different ICWND set and more. Very interesting stuff. If he makes his presentation available at some point I’ll add a link to it.

I then got the honor to present the state of the TCP Tuning draft (which I’ve admittedly been neglecting a bit lately), the slides are here. I made it brief but I still got some feedback and in general this is a draft that people seem to agree is a good idea – keep sending me your feedback and help me improve it. I just need to pull myself together now and move it forward. I tried to be quick to leave over to…

Jana, who was back again to tell us about QUIC and the state of things in that area. His presentation apparently was a subset of slides he had presented the week before at the Berlin IETF. One interesting take-away for me was that they’ve noticed that the number of connections on which they detect UDP rate limiting has decreased by 2/3 during the last year!

Here’s my favorite image from his slide set. Apparently TCP/2 is not a name for QUIC that everybody appreciates! ;-)

call-it-tcp2-one-more-time

While I think the topic of QUIC piqued the interest of most people in the room and there were a lot of questions, thoughts and ideas around the topic, we still managed to get the lunch break pretty much in time and we could run off and have another lovely buffet lunch. There’s certainly no risk of us losing weight during this event…

After lunch we got ourselves a series of lightning talks: seven short talks on various subjects that people had signed up to give.

One of the lightning talks that stuck with me was what I would call the idea about an extended Happy Eyeballs approach that I’d like to call Even Happier Eyeballs: make the client TCP connect to all IPs in a DNS response and race them against each other and use the one that responds with a SYN-ACK first. There was interest expressed in the room to get this concept tested out for real in at least one browser.
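The racing idea can be sketched with promises. This is my own sketch, not code from the talk; connect stands in for a real TCP connect that resolves once a SYN-ACK arrives:

```javascript
// "Even Happier Eyeballs": start a connection attempt to every address
// from the DNS answer and keep whichever completes first.
function evenHappierEyeballs(addresses, connect) {
  // connect(addr) returns a promise that resolves with addr on success;
  // Promise.race settles with the first attempt to complete.
  return Promise.race(addresses.map(addr => connect(addr)));
}
```

The losing attempts would of course need to be cancelled in a real client, which Promise.race alone does not do.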

We then fell over into the area of HTTP/3 ideas and what the people in the room think we should be working on for that. It turned out that the list of stuff we created last year at the workshop was still actually a pretty good list and while we could massage that a bit, it is still mostly the same as before.

Anne presented fetch and how browsers use HTTP. Perhaps a bit surprisingly, that soon brought us to the subject of trailers, how to support them, and voilà, in the end we possibly even agreed that we should perhaps consider handling them somehow in browsers and even in JavaScript APIs… (nah, curl/libcurl doesn’t have any particular support for trailers, but will of course get that if we actually see things out there start to use it for real)

I think we deserved a few beers after this day! The final workshop day is tomorrow.

Footnotes

Updated: .  Michael(tm) Smith <mike@w3.org>