Planet Mozilla: Introducing a New Thimble and Bramble

Introduction

This week we're shipping something really cool with Mozilla, and I wanted to pause and tell you about what it is, and how it works.

The tl;dr is that we took the Mozilla Foundation's existing web code editor, Thimble, and rewrote it to use Bramble, our forked version of the Brackets editor, which runs in modern web browsers. You can try it now at https://thimble.mozilla.org/

If you're the type who prefers animated pictures to words, I made you a bunch over on the wiki, showing what a few of the features look like in action. You can also check out Luke's great intro video.

If you're the type who likes words, the rest of this is for you.

Why?

I started working on this project two years ago. While at MozFest 2013 I wrote about an idea I had for a new concept app that merged Thimble and Brackets; at the time I called it Nimble.

I was interested in merging these two apps for a number of reasons. First, I wanted to eliminate the "ceiling" users had when using Thimble, wherein they would graduate beyond its abilities, and be forced to use other tools. In my view, Thimble should be able to grow and expand along with a learner's abilities, and a teacher's needs.

Second, people were asking for lots of new features in Thimble, and I knew from experience that the best code is code you don't have to write. I wanted to leverage the hard work of an existing community that was already focused on building a great web coding platform. Writing a coding environment is a huge challenge, and our team wasn't equipped to take it on by ourselves. Thankfully the Brackets project had already solved this.

On Brackets

Brackets was an easy codebase to get started on, and the community was encouraging and willing to help us with patches, reviews, and questions (I'm especially thankful for @randyedmunds and @busykai).

Brackets is written in an AMD module system, and uses requirejs, react, CodeMirror, LESS, jQuery, Bootstrap, lodash, acorn, tern, etc. One of the things I've loved most about working with the Brackets source is that it uses so much of the best of the open web. Its ~1.3 million lines of code offer APIs for things like:

  • code hinting, static analysis, and linting
  • language parsing and tokenizing (html, css, js, xml, less)
  • file system operations
  • editors
  • live DOM diff'ing and in-browser preview
  • swappable servers
  • layout, widgets, and dialogs
  • localization, theming, and preferences
  • extension loading at runtime, with hundreds already written

In short, Brackets isn't an editor so much as a rich platform for coding and designing front-end web pages and apps. Brackets' killer feature is its ability to render a live preview of what's in your editor, including dynamic updates as you type, often without needing to save. The preview even has an awareness of changes to linked files (e.g., external stylesheets and scripts).

Another thing I loved was that Brackets wasn't trying to solve code editing in general: they had a very clear mandate that favoured web development, and front-end web development in particular. HTML, CSS, and JavaScript get elevated status in Brackets, and don't have to fight with every other language for features.

All of these philosophies and features melded perfectly with our goal of making a great learning and teaching tool for web programming.

But what about X?

Obviously there are a ton of code editing tools available. If we start with desktop editors, there are a lot to choose from; but they all suffer from the same problem: you have to download 10s of megs of installer, and then you have to install them, along with a web server, in order to preview your work. Consider what's involved in installing each of these (on OS X):

Thimble, on the other hand, is ~1M (877K for Bramble, the rest for the front-end app). We worked extremely hard to get Brackets (38.5M if you install it) down to something that fits in the size of an average web page. If we changed how Brackets loads more significantly, we could get it smaller yet, but we've chosen to keep existing extensions working. The best part is that there is no install: the level of commitment for a user is the URL.

In addition to desktop editors, there are plenty of popular online options, too:

The list goes on. They are all great, and I use, and recommend, them all. Each of these tools has a particular focus, and none of them does exactly what the new Thimble does; specifically, none of them tries to deal with trees of files and folders. We don't need to do what these other tools do, because they already do it well. Instead, we focused on making it possible for users to create a rich and realistic environment for working with arbitrary web site/app structures without needing to install and run a web server.

From localhost to nohost

I've always been inspired by @jswalden's httpd.js. It was written back before there was node.js, back in a time when it wasn't yet common knowledge that you could do anything in JS. The very first time I saw it I knew that I wanted to find some excuse to make a web server in the browser. With nohost, our in-browser web server, we've done it.

In order to run in a browser, Bramble has to be more than just a code editor; it also has to include a bunch of stuff that would normally be provided by the Brackets Shell (similar to Electron.io) and node.js. This means providing a:

  • web server
  • web browser
  • filesystem

and glue to connect those three. Brackets uses Chrome's remote debugging protocol and node.js to talk between the editor, browser, and server. This works well, but ties it directly to Chrome.

At first I wasn't sure how we'd deal with this. But then an experimental implementation of Brackets' LiveDevelopment code landed, which switched from Chrome and the remote dev tools protocol to any browser and a WebSocket. Then, in the middle of the docs, we found an offhand comment that someone could probably rewrite it to use an iframe and postMessage...a fantastic idea! So we did.

Making it possible for an arbitrary web site to work in a browser-based environment is a little like Firefox's Save Page... feature. You can't just deal with the HTML alone--you also have to get all the linked assets.

Consider an example web page:

<!DOCTYPE html>  
<html>  
  <head>
    <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="styles/style.css">
  </head>
  <body>
    <img src="images/cat.png">
    <script src="script.js"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  

In this basic web page we have three external resources referenced by URL. The browser needs to be able to request styles/style.css, images/cat.png, and script.js in order to fully render this page. And we're not done yet.

The stylesheet might also reference other stylesheets using @import, or might use other images (e.g., background-image: url(...)).

It gets worse. The script might need to XHR a JSON file from the server in order to do whatever f() requires.

Bramble tries hard to deal with these situations through a combination of static and dynamic rewriting of the URLs. Eventually, if/when all browsers ship it, we could do a lot of this with ServiceWorkers. Until then, we make do with what we already have cross-browser.

First, Bramble's nohost server recursively rewrites the HTML, and its linked resources, in order to find relative filesystem paths (images/cat.png) and replace them with Blobs and URL objects that point to cached memory resources read out of the browser filesystem.

Parsing HTML with regex is a non-starter. Luckily, browsers have a full parser built in: DOMParser. Once we have an in-memory DOM instead of an HTML text string, we can accurately querySelectorAll to find things that might contain URLs (img, link, video, iframe, etc., avoiding a elements due to circular references) and swap those for generated Blob URLs from the filesystem. When we're done, we can extract the rewritten HTML text from our live in-memory DOM via documentElement.outerHTML, obtaining something like this:

<!DOCTYPE html>  
<html>  
  <head>
    <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="blob:https%3A//mozillathimblelivepreview.net/346526f5-3c14-4073-b667-997324a5bfa9">
  </head>
  <body>
    <img src="blob:https%3A//mozillathimblelivepreview.net/ab090911-9ec1-499c-a9fc-7fce180704f7">
    <script src="blob:https%3A//mozillathimblelivepreview.net/264a3524-5316-47e5-a835-451e78247678"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  
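
A rough sketch of the rewriting step that produces output like the above might look as follows. Here getBlobURL is a hypothetical helper that maps a project path to a cached Blob URL; the real Bramble code handles many more element types and edge cases.

// Sketch: rewrite relative URLs in an HTML string to cached Blob URLs.
// getBlobURL(path) is a hypothetical helper returning a Blob URL for a
// file that has already been read out of the browser filesystem.
function rewriteHTML(htmlText, getBlobURL) {
  // Use the browser's own parser rather than regexes.
  var doc = new DOMParser().parseFromString(htmlText, 'text/html');

  // Elements and attributes that can reference external resources.
  var targets = [
    { selector: 'img[src]', attr: 'src' },
    { selector: 'script[src]', attr: 'src' },
    { selector: 'link[rel="stylesheet"]', attr: 'href' },
    { selector: 'video[src], iframe[src]', attr: 'src' }
  ];

  targets.forEach(function (target) {
    var elements = doc.querySelectorAll(target.selector);
    Array.prototype.forEach.call(elements, function (el) {
      var path = el.getAttribute(target.attr); // e.g. "images/cat.png"
      var blobURL = getBlobURL(path);
      if (blobURL) {
        el.setAttribute(target.attr, blobURL);
      }
    });
  });

  // Serialize the rewritten DOM back to an HTML string.
  return doc.documentElement.outerHTML;
}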

All external resources now use URLs to cached memory resources. This HTML can then itself be turned into a Blob and URL object, and used as the src for our iframe browser (this works everywhere except IE, where you have to document.write the HTML, but can use Blob URLs for everything else).
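
In code, that last step is roughly the following, where rewrittenHTML is the string produced by the rewrite sketch above and the iframe selector is purely illustrative:

// Sketch: load the rewritten HTML into the preview iframe via a Blob URL.
var htmlBlob = new Blob([rewrittenHTML], { type: 'text/html' });
var htmlURL = URL.createObjectURL(htmlBlob);
document.querySelector('iframe.preview').src = htmlURL; // hypothetical selector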

For CSS we do use regex, looking for url(...) and other places where URLs can lurk. Thankfully there aren't many, and it's just a matter of reading the necessary resources from disk, caching them as Blob URLs, and replacing the filesystem paths with those URLs, before generating a CSS Blob URL that can be used in the HTML.
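
A simplified version of that CSS pass might look like this; the regex is intentionally naive, and getBlobURL is the same hypothetical cache lookup as in the earlier sketch:

// Sketch: replace url(...) references in a CSS string with cached Blob URLs.
function rewriteCSS(cssText, getBlobURL) {
  // Matches url(path), url('path') and url("path").
  return cssText.replace(/url\(\s*(['"]?)([^'")]+)\1\s*\)/g,
    function (match, quote, path) {
      // Leave absolute URLs, data: URIs and already-rewritten blob: URLs alone.
      if (/^(https?:|data:|blob:)/.test(path)) {
        return match;
      }
      var blobURL = getBlobURL(path); // hypothetical cache lookup
      return blobURL ? 'url("' + blobURL + '")' : match;
    });
}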

Despite what everyone tells you about the DOM being slow, the process is really fast. And because we own the filesystem layer, whenever the editor does something like a writeFile(), we can pre-generate a URL for the resource, and maintain a cache of such URLs keyed on filesystem paths for when we need to get them again in the future during a rewrite step. Using this cache we are able to live refresh the browser quite often without causing any noticeable slowdown on the main thread.
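
The cache itself can be as simple as a map from paths to Blob URLs, refreshed on every write. Here is a sketch of that idea (the function names are illustrative, not Bramble's real filesystem API), which also shows one way the getBlobURL helper used above could be backed:

// Sketch: maintain a path -> Blob URL cache, refreshed whenever a file is written.
var blobURLCache = {};

function onFileWritten(path, contents, mimeType) {
  // Revoke the stale URL so the old Blob can be garbage collected.
  if (blobURLCache[path]) {
    URL.revokeObjectURL(blobURLCache[path]);
  }
  var blob = new Blob([contents], { type: mimeType || 'application/octet-stream' });
  blobURLCache[path] = URL.createObjectURL(blob);
}

function getBlobURL(path) {
  return blobURLCache[path] || null;
}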

As an aside, it would be so nice if we could move the whole thing to a worker and be able to send an HTML string, and get back a URL. Workers can already access IndexedDB, so we could read from the filesystem there, too. This would mean having access to DOMParser (even if we can't touch the main DOM from a worker, being able to parse HTML is still incredibly useful for rewriting, diff'ing, etc).

Finally, we do dynamic substitutions of relative paths for generated Blob URLs at runtime by hijacking XMLHttpRequest and using our postMessage link from the iframe to the editor in order to return response data for a given filename.
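
In Bramble the response data actually comes back over the postMessage channel to the editor; as a simpler illustration of the same interception idea, this sketch resolves relative paths against the Blob URL cache before the request goes out:

// Sketch: intercept XMLHttpRequest.open() in the preview window and map
// relative filesystem paths to cached Blob URLs before the request is made.
(function () {
  var originalOpen = XMLHttpRequest.prototype.open;

  XMLHttpRequest.prototype.open = function (method, url) {
    var args = Array.prototype.slice.call(arguments);
    // Only rewrite relative, in-project paths; leave absolute URLs alone.
    if (!/^(https?:|data:|blob:|\/\/)/.test(url)) {
      var blobURL = getBlobURL(url); // hypothetical cache lookup (see earlier sketch)
      if (blobURL) {
        args[1] = blobURL;
      }
    }
    return originalOpen.apply(this, args);
  };
})();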

And it all works! Sure, there are lots of things we won't ever be able to cope with, from synchronous XHR to various types of DOM manipulation by scripts that reference URLs as strings. But for the general case, it works remarkably well. Try downloading a zipped web site template from http://html5up.net/ and dragging it into the editor. Bramble doesn't claim to be able to replace a full, local development environment for every use case; however, it makes one unnecessary in most common cases. It's amazing what the modern web can do via storage, file, drag-and-drop, parser, and worker APIs.

Origin Sandboxing

I talk about Thimble and Bramble as different things, and they are, especially at runtime. Bramble is an embeddable widget with an iframe API, and Thimble hosts it and provides some UI for common operations.

I've put a simple demo of the Bramble API online for people to try (source is here). Bramble uses, but doesn't own, its filesystem; nor does it have any notion of where the files came from or where they are going. It also doesn't have opinions about how the filesystem should be laid out.

This is all done intentionally so that we can isolate the editor and preview from the hosting app, running each on a different domain. We want users to be able to write arbitrary code, execute and store it; but we don't want to mix code for the hosting app and the editor/preview. The hosting app needs to decide on a filesystem layout, get and write the files, and then "boot" Bramble.

I've written previously about how we use MessageChannel to remotely host an IndexedDB backed filesystem in a remote window running on another domain: Thimble owns the filesystem and database and responds to proxied requests to do things via postMessage.
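
The shape of that proxying is roughly the following; every name here (the filesystem object, the iframe selector, the origin, the message format) is illustrative rather than Thimble's actual protocol:

// Sketch: proxy filesystem calls from the Bramble iframe to the hosting
// Thimble window over a MessageChannel.

// Hosting app (Thimble) side: owns the real IndexedDB-backed filesystem.
function connectBramble(realFS) {
  var channel = new MessageChannel();
  var frame = document.querySelector('iframe#bramble'); // hypothetical selector
  frame.contentWindow.postMessage('fs-connect', 'https://bramble.example.org', [channel.port2]);

  channel.port1.onmessage = function (event) {
    var request = event.data; // e.g. { id: 1, method: 'readFile', args: ['/index.html'] }
    // Assumes node-style callbacks on the real filesystem object.
    realFS[request.method].apply(realFS, request.args.concat(function (err, result) {
      channel.port1.postMessage({ id: request.id, error: err, result: result });
    }));
  };
}

// Editor (Bramble) side: forwards filesystem calls through the received port.
function makeProxyFS(port) {
  var pending = {};
  var nextId = 0;

  port.onmessage = function (event) {
    var response = event.data;
    var callback = pending[response.id];
    delete pending[response.id];
    callback(response.error, response.result);
  };

  return {
    readFile: function (path, callback) {
      pending[nextId] = callback;
      port.postMessage({ id: nextId++, method: 'readFile', args: [path] });
    }
    // writeFile, mkdir, etc. follow the same pattern.
  };
}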

In the case of Thimble, we store data in a Heroku app using Postgres on the server. Thimble listens for filesystem events, then queues and executes file update requests over the network to sync the data upstream. Published projects are written to S3, and we then serve them on a secure domain. Because users can upload files to their filesystem in the editor, this makes it easier to transition to an https://-only web.

When the user starts Thimble, we request a project as a gzipped tarball from the publishing server, then unpack it in a Worker and recreate the filesystem locally. Bramble then "mounts" this local folder and begins working with the local files and folders, with no knowledge of the servers (all data is autosaved, and survives refreshes).

Conclusion

Now that we've got the major pieces in place, I'm interested to see what people will do with both Thimble and Bramble. Because we're in a full browser vs. an "almost-browser" shell, we have access to all the latest toys (for example, WebRTC and the camera). Down the road we could use this for some amazing pair programming setups, so learners and mentors could work with each other directly over the web on the same project.

We can also do interesting things with different storage providers. It would be just as easy to have Bramble talk to Github, Dropbox, or some other cloud storage provider. We intentionally kept Thimble and Bramble separate in order to allow different directions in the future.

Then there are all the possibilities that custom extensions open up (did I mention that Bramble has dynamic extension loading? Because it does!). I'd love to see us use bundles of extensions to enable different sorts of learning activities, student levels, and instructional modes. I'm also really excited to see what kind of new curriculum people will build using all of this.

In the meantime, please try things out, file bugs, chat with us on IRC (#thimble on moznet), and have fun making something cool with just your browser. Even better, teach someone how to do it.

Let me close by giving a big shout out to the amazing students (current and former) who hacked on this with me. You should hire them: Gideon Thomas, Kieran Sedgwick, Kenny Nguyen, Jordan Theriault, Andrew Benner, Klever Loza Vega, Ali Al Dallal, Yoav Gurevich, as well as the following top notch Mozilla folks, who have been amazing to us: Hannah Kane, Luke Pacholski, Pomax, Cassie McDaniel, Ashley Williams, Jon Buckley, and others.

Planet Mozilla: ES6 for now: Template strings

ES6 is the future of JavaScript and it is already here. It is a finished specification, and it brings a lot of features a language requires to stay competitive with the needs of the web of now. Not everything in ES6 is for you and in this little series of posts I will show features that are very handy and already usable.

If you look at JavaScript code I’ve written you will find that I always use single quotes to define strings instead of double quotes. JavaScript is OK with either, the following two examples do exactly the same thing:

var animal = "cow";
 
var animal = 'cow';

The reason why I prefer single quotes is that, first of all, it makes it easier to assemble HTML strings with properly quoted attributes that way:

// with single quotes, there's no need to 
// escape the quotes around the class value
var but = '<button class="big">Save</button>';
 
// this is a syntax error:
var but = "<button class="big">Save</button>";
 
// this works:
var but = "<button class=\"big\">Save</button>";

The only time you need to escape now is when you use a single quote in your HTML, which should be a very rare occasion. The only thing I can think of is inline JavaScript or CSS, which means you are very likely to do something shady or desperate to your markup. Even in your texts, you are probably better off not using a straight single quote but the typographically more pleasing ’.

Aside: Of course, HTML is forgiving enough to omit the quotes or to use single quotes around an attribute, but I prefer to create readable markup for humans rather than relying on the forgiveness of a parser. We made the HTML5 parser forgiving because people wrote terrible markup in the past, not as an excuse to keep doing so.

I’ve suffered enough in the DHTML days of document.write to create a document inside a frameset in a new popup window and other abominations to not want to use the escape character ever again. At times, we needed triple ones, and that was even before we had colour coding in our editors. It was a mess.

Expression substitution in strings?

Another reason why I prefer single quotes is that I wrote a lot of PHP in my time for very large web sites where performance mattered a lot. In PHP, there is a difference between single and double quotes. Single quoted strings don’t have any substitution in them, double quoted ones do. That meant, back in the days of PHP 3 and 4, that using single quotes was much faster, as the parser doesn’t have to go through the string to substitute values. Here is an example of what that means:

<?php
  $animal = 'cow';
  $sound = 'moo';
 
  echo 'The animal is $animal and its sound is $sound';
  // => The animal is $animal and its sound is $sound
 
  echo "The animal is $animal and its sound is $sound";
  // => The animal is cow and its sound is moo
?>

JavaScript didn’t have this substitution, which is why we had to concatenate strings to achieve the same result. This is pretty unwieldy, as you need to jump in and out of quotes all the time.

var animal = 'cow';
var sound = 'moo';
 
alert('The animal is ' + animal + ' and its sound is ' +
 sound);
// => "The animal is cow and its sound is moo"

Multi line mess

This gets really messy with longer and more complex strings, and especially when we assemble a lot of HTML. And, most likely, you will sooner or later end up with your linting tool complaining about trailing whitespace after a + at the end of a line. This all comes down to the fact that JavaScript has no multi-line strings:

 
// this doesn't work
var list = '<ul>
  <li>Buy Milk</li>
  <li>Be kind to Pandas</li>
  <li>Forget about Dre</li>
</ul>';
 
// This does, but urgh… 
var list = '<ul>\
  <li>Buy Milk</li>\
  <li>Be kind to Pandas</li>\
  <li>Forget about Dre</li>\
</ul>';
 
// This is the most common way, and urgh, too…
var list = '<ul>' +
'  <li>Buy Milk</li>' +
'  <li>Be kind to Pandas</li>' +
'  <li>Forget about Dre</li>' +
'</ul>';

Client side templating solutions

In order to work around the mess that is string handling and concatenation in JavaScript, we did what we always do – we write a library. There are many HTML templating libraries, with Mustache.js probably having been the seminal one. All of these follow their own – non-standardised – syntax and work in that frame of mind. It’s a bit like saying that you write your content in markdown and then realising that there are many different ideas of what “markdown” means.

Enter template strings

With the advent of ES6 and its standardisation we can now rejoice, as JavaScript now has a new kid on the block when it comes to handling strings: Template Strings. The support of template strings in current browsers is encouraging: Chrome 44+, Firefox 38+, Microsoft Edge and WebKit are all on board. Safari, sadly enough, is not, but it’ll get there.

The genius of template strings is that they use a new string delimiter, one which isn’t in use in either HTML or normal text: the backtick (`).

Using this one we now have string expression substitution in JavaScript:

var animal = 'cow';
var sound = 'moo';
 
alert(`The animal is ${animal} and its sound is ${sound}`);
// => "The animal is cow and its sound is moo"

The ${} construct can take any JavaScript expression that returns a value; you can, for example, do calculations, or access properties of an object:

var out = `ten times two totally is ${ 10 * 2 }`;
// => "ten times two totally is 20"
 
var animal = {
  name: 'cow',
  ilk: 'bovine',
  front: 'moo',
  back: 'milk',
}
alert(`
  The ${animal.name} is of the 
  ${animal.ilk} ilk, 
  one end is for the ${animal.front}, 
  the other for the ${animal.back}
`);
// => 
/*
  The cow is of the 
  bovine ilk, 
  one end is for the moo, 
  the other for the milk
*/

That last example also shows you that multi line strings are not an issue at all any longer.

Tagged templates

Another thing you can do with template strings is prepend them with a tag, which is the name of a function that is called and gets the string as a parameter. For example, you could encode the resulting string for URLs without having to resort to the horridly named encodeURIComponent all the time.

function urlify (str) {
  return encodeURIComponent(str);
}
 
urlify `http://beedogs.com`;
// => "http%3A%2F%2Fbeedogs.com"
urlify `woah$£$%£^$"`;
// => "woah%24%C2%A3%24%25%C2%A3%5E%24%22"
 
// nesting also works:
 
var str = `foo ${urlify `&&`} bar`;
// => "foo %26%26 bar"

This works, but relies on implicit array-to-string coercion. The first parameter sent to the function is not a string, but an array of the literal string parts (any substituted values arrive as further arguments). Used the way I show here, it gets converted to a string for convenience, but the correct way is to access the array members directly.

Retrieving strings and values from a template string

Inside the tag function you can not only get the full string but also its parts.

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
}
 
tag `you ${3+4} it`;
/* =>
 
Array [ "you ", " it" ]
7
it
 
*/
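
Note that with the signature above, values is bound to just the first substitution value (7); any further substitutions would not be captured. The idiomatic signature uses an ES6 rest parameter so that all values arrive as an array. A small sketch:

function tag(strings, ...values) {
  console.log(strings);     // the literal string parts
  console.log(values);      // all substituted values, as an array
  console.log(strings.raw); // the raw (unescaped) string parts
}
 
tag`you ${3 + 4} got ${2 * 21} it`;
/* =>
 
Array [ "you ", " got ", " it" ]
Array [ 7, 42 ]
Array [ "you ", " got ", " it" ]
 
*/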

There is also an array of the raw strings provided to you, which means that you get the characters exactly as they were typed, escape sequences included. Say, for example, you add a linebreak with \n: you will get an actual line break in the cooked string, but the literal \n characters in the raw strings:

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
  console.log(strings.raw[1]);
}
 
tag `you ${3+4} \nit`;
/* =>
 
Array [ "you ", "  it" ]
7
 
it
 \nit
*/

Conclusion

Template strings are one of those nifty little wins in ES6 that can be used right now. If you have to support older browsers, you can of course transpile your ES6 to ES5; you can also do a feature test for template string support using a library like featuretests.io or with the following code:

var templatestrings = false;
try {
  new Function( "`{2+2}`" );
  templatestrings = true;
} catch (err) {
  templatestrings = false;
} 
 
if (templatestrings) {
	// …
}

More articles on template strings:

Tantek Çelik

updated tantek.com/relmeauth @CSSWG meeting to demo :-moz-ui-invalid, :-moz-ui-valid, and :user-changed polyfilled to work while modifying the input (typing, paste, etc.) before clicking outside.

Note that :-moz-ui-invalid already only triggers *after* the user leaves the input, thus acting as a warning to the user that they need to go back and fix something, instead of interrupting them while they're typing.

However for the valid case, as soon as the user has entered something valid, you want to give them positive feedback thus letting them know they can take the next step (leave the input, submit the form, etc.).

References:
* alistapart.com/article/inline-validation-in-web-forms
* alistapart.com/article/forward-thinking-form-validation
* alistapart.com/d/forward-thinking-form-validation/enhanced_2.html
* https://developer.mozilla.org/en-US/docs/Web/CSS/:-moz-ui-invalid
* https://developer.mozilla.org/en-US/docs/Web/CSS/:-moz-ui-valid

Anne van Kesteren: Statement regarding the URL Standard

The goal of the URL Standard is to reflect where all implementations will converge. It should not describe today’s implementations as that will not lead to convergence. It should not describe yesterday’s implementations as that will also not lead to convergence. And it should not describe an unreachable ideal, e.g., by requiring something that is known to be incompatible with web content.

This is something all documents published by the WHATWG have in common, but I was asked to clarify this for the URL Standard in particular. Happy to help!

Planet Mozilla: Multilingual slides in HTML5 Mozilla Sandstone slidedeck

Edit: You can now preview the changes live. Also, the pull request got accepted!

I just submitted a GitHub PR for adding multilingual support to Mozilla’s HTML5 Sandstone slidedeck. The selection is persistent across slide changes, and in Firefox, the URL bar will update the lang attribute as well.

Pictures are worth thousands of words, so here you go:

Screenshot showing language menu dropdown

To add languages:

  1. Add the style tags to the stylesheet
  2. Modify the language menu
  3. Place your translation within <div> tags with language code class names

Slide after language selection

Example code:

<div class="en-US">This is English.</div>
<div class="zh-CN">这是中文(简体)。</div>
<div class="zh-TW">這是中文(繁體)。</div>
<div class="ja-JP">これは日本語です。</div>
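
A minimal sketch of how such a language switcher could work is below; this is just an illustration of the approach, not the actual code in the pull request:

// Sketch: show only the <div> elements whose class matches the selected
// language code, and reflect the choice on the root element.
var LANGUAGES = ['en-US', 'zh-CN', 'zh-TW', 'ja-JP'];

function selectLanguage(lang) {
  LANGUAGES.forEach(function (code) {
    var visible = (code === lang);
    var elements = document.getElementsByClassName(code);
    Array.prototype.forEach.call(elements, function (el) {
      el.style.display = visible ? '' : 'none';
    });
  });
  document.documentElement.setAttribute('lang', lang);
}

selectLanguage('ja-JP');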

Next slide with persistent language selection change

If you would like to use this now, my changes are on a GitHub fork.

Massive thanks go out to :MattN who greatly helped me out. Thanks Matt!


Planet Mozilla: Firefox Password Manager Update: 2015-Q2

Continuing from the Q1 update, here's a summary of the password manager progress made in the second quarter of 2015 (in no particular order):
  • <form>-less login – Unfortunately some sites don't use a <form> submission for login and/or registration, despite the many downsides involving accessibility, HTML5 validation, inconsistent UX, lack of form/password manager support, etc. If you're building something which looks like a form (likely using <input>) then you should almost always use a <form>, even if it's a client-side form which isn't doing a GET/POST submission itself, as you can simply use event.preventDefault(); for the submit event (see the sketch after this list).
    Since evangelizing best practices isn't going to get sites to change in the short term and we want users to be able to rely on the password manager, we're implementing support for capturing and filling passwords on pages not using forms. Most code has been changed to pass around a FormLike abstraction instead of <form> references so we don't need to implement special logic throughout the code. Autofill is implemented while autocomplete and capture are in progress.
  • Edit logins at capture time (desktop and android) – If the wrong fields are detected for a username and/or the password or a site modifies the values after you type in them (e.g. to implement a custom masking with asterisks: user****), the user can edit both the username and password so the correct values are filled upon the next visit to the site.
    <figure style="display: inline-block; margin: 0.5em; vertical-align: middle;"><video controls="controls" height="300" src="http://matthew.noorenberghe.com/sites/matthew.noorenberghe.com/files/capture_password_capture.mp4" width="385"></video>Source: @ryanfeeley on Twitter</figure>
  • Copy passwords from the Android Site Identity doorhanger – If for whatever reason a login can't be autofilled, you can now copy the password to the clipboard from the Site Identity panel on Android.
  • Experimental Fill UI – Similar to the above on Android, there's experimental UI to be able to fill and manage logins from the key icon in the identity block (eventually probably integrated into the Site Identity panel). You can enable the basic experimental UI with the preference signon.ui.experimental in about:config.
  • View your password in the manager on Android – Sometimes you just need to see what your saved password is e.g. to type it on another device without Firefox Sync so the ability to view passwords was added in the Firefox password manager on Android.
  • Making HTTPS upgrades smoother – When deciding whether to autofill a form, we will now also consider logins saved for the HTTP version of the saved form action while on HTTPS, in order to make sites' upgrades to HTTPS easier. Note that handling upgrades for the form's own origin is still in progress.
  • Other bug fixes:
    • Bug 1152422 – Ask to save the new password in a change form with no username even if we have no saved logins for the site
    • Bug 1155390 – Don't prompt to update a password when there is no username field and the password is identical
    • Bug 998893 – Login/password not autocompleted due to custom placeholder implementation swapping @value
    • Bug 1170772 – Get password manager xpcshell tests running on Android
    • Bug 1173688 – Password manager sync promo appears when signing in/up for Sync from an iframe
Expect to see many more improvements in upcoming months as we continue to make major improvements to the password manager. If you'd like to contribute to this project, check out the password manager wiki page for mailing list, IRC, bug list and other information.
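
To illustrate the <form>-less login advice from the first bullet above, here is a small sketch (not Firefox code) of a client-side login that keeps a real <form> and simply cancels the default submission:

// Sketch: keep a real <form> for a client-side login and prevent the default
// navigation, so the password manager still sees a submission it can capture.
// Assumes markup like:
//   <form id="login">
//     <input name="username">
//     <input name="password" type="password">
//     <button type="submit">Sign in</button>
//   </form>
document.getElementById('login').addEventListener('submit', function (event) {
  event.preventDefault(); // no GET/POST round trip...
  var form = event.target;
  // ...the app handles the credentials itself instead.
  doClientSideLogin(form.elements.username.value, form.elements.password.value); // hypothetical app function
});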

Planet Mozilla: The ES6 conundrum – new article on SitePoint

I just released an article over on Sitepoint called The ES6 conundrum. In it, I am discussing the current issues we’re facing with using ES6:

  • We can’t use it safely in the wild – as ES6 is a syntax change to the language, legacy browsers will see it as a JavaScript error and give our end users a broken experience. This violates the Priority of Constituencies design principle of HTML5
  • We can use TypeScript or transpile it – which means we don’t debug the code we write but generated code. This can also lead to a lot of code bloat.
  • We can feature test for it – which can get complex quickly, and we can’t assume that support for one feature means others are supported
  • Browser support for ES6 only makes a difference internally – as we transpile, we never send any ES6 to the browser
  • The performance of ES6 is bad right now, which is normal, as we have no way to tweak and test it in the browser, and it offers much more complexity than ES5

All in all, we need to have a good think about ES6, and – to me – it feels we are at a turning point in web development. I will talk in more detail about this in my BrazilJS keynote in two weeks.

Read “The ES6 conundrum” on Sitepoint

Steve Faulkner et al: Short Note on HTML conformance checking

When you check an HTML document using the W3C HTML conformance checker to find out whether its code conforms to the rules defined in the HTML specification (and other referenced specifications), it’s useful to understand what the output means.

W3C Nu Markup checker

Errors

Errors are instances where the code you are checking does not conform to MUST level requirements defined in the HTML specification.

1. MUST This word, or the terms "REQUIRED" or "SHALL", mean that the
   definition is an absolute requirement of the specification.

2. MUST NOT  This phrase, or the phrase "SHALL NOT", mean that the
   definition is an absolute prohibition of the specification.

For example, the following code snippet breaks the rule:

Content model for element ol: Zero or more li and script-supporting elements.

<body>
<ol>
<div></div>
</ol>
</body>

In other words, an ol element must only contain li, script or template elements as child elements.

<body>
<ol>
<template></template>
<script></script>
<li><div></div></li>
</ol>
</body>

MUST level requirements and the errors they produce are there to stop you doing stuff that can cause problems or remind you to do stuff that you need to do to avoid problems.

W3C Old Skool W3C validator

Warnings

Warnings are instances where the code you are checking does not conform to SHOULD level requirements defined in the HTML specification.

3. SHOULD   This word, or the adjective "RECOMMENDED", mean that there
   may exist valid reasons in particular circumstances to ignore a
   particular item, but the full implications must be understood and
   carefully weighed before choosing a different course.

4. SHOULD NOT   This phrase, or the phrase "NOT RECOMMENDED" mean that
   there may exist valid reasons in particular circumstances when the
   particular behavior is acceptable or even useful, but the full
   implications should be understood and the case carefully weighed
   before implementing any behavior described with this label.

For example, the following code snippet breaks the rule:

Default Implicit ARIA semantics – SHOULD NOT be used

<body>
<ol role="list">
<li>item 1
</ol>
</body>

In other words, ol has an implicit role of list, which is conveyed by browsers automatically, so there is no need to add the explicit role as an attribute.

<body>
<ol>
<li>list item
</ol>
</body>

SHOULD level requirements and the warnings they produce are there to stop you doing stuff that is unnecessary or harmful in general, or as a reminder to do stuff that it is useful or helpful to do, in general.

Where do these requirement terms come from?

An ancient (1997) text handed down by our ancestors: Key words for use in RFCs to Indicate Requirement Levels, which you will find referenced by many (all?) W3C specifications that define what are known as normative requirements. In HTML, requirements are defined for user agent implementers, conformance tool implementers and web developers (AKA authors).

Further Reading

Planet Mozilla: Discovering Accessibility

My final project working at the Mozilla Foundation was teach.mozilla.org, which was the first content-based website I’ve helped create in quite some time. During the site’s development, I finally gave myself the time to learn about a practice I’d been procrastinating to learn about for an embarrassingly long time: accessibility.

One of the problems I’ve had with a lot of guides on accessibility is that they focus on standards instead of people. As a design-driven engineer, I find standards necessary but not sufficient to create compelling user experiences. What I really wanted to know about was not the ARIA markup to use for my code, but how to empathize with the way “extreme users”–people with disabilities–use the Web.

I finally found a book with such a holistic approach to accessibility called A Web For Everyone by Sarah Horton and Whitney Quesenbery. I’m still not done reading it, but I highly recommend it.

Stage 1: Accessibility Is Awesome!

The first thing I did in an attempt to empathize with users of screen readers was to actually be proactive and learn to use a screen reader. The first one I learned how to use was the open-source NVDA screen reader for Windows. Learning how to use it actually reminded me a bit of learning vi and emacs for the first time: for example, because I couldn’t visually scan through a page to see its headings, I had to learn special keyboard commands to advance to the next and previous heading.

Obviously, however, I am a very particular kind of user when I use a screen reader: because I don’t actually rely on auditory information as much as a blind person, I can’t listen to a screen reader’s narration very fast. And because I’m a highly technical user who is good at remembering keyboard shortcuts, I can remember a lot of them. So it was useful to compare my own use of screen readers against Ginny Redish’s paper on Observing Users Who Work With Screen Readers (PDF).

After learning the basics of NVDA, I found Terrill Thompson’s blog post on Good Examples of Accessible Web Sites and tried visiting some of them with my shiny new screen reader. Doing this gave me lots of inspiration on how to make my own sites more accessible.

The web service tenon.io was also quite helpful in educating me on best practices my existing websites lacked, and The Paciello Group’s Web Components Punch List was helpful when I needed to create or evaluate custom UI widgets.

All of this has constituted what I’ve begun to call my “honeymoon” with accessibility. It was quite satisfying to empathize with the needs of extreme users, and I was excited about creating sites that were delightful to use with NVDA.

Stage 2: Accessibility Is Hard!

What ended up being much harder, though, was actually building a delightful experience for users who might be using any screen reader.

The second screen reader I learned how to use was Apple’s excellent VoiceOver, which comes built-in with all OS X and iOS devices. And like the early days of the Web, when a delightful experience on one browser was completely unusable in another, I often found that my hard work to improve my site’s usability on NVDA often made the site less usable on VoiceOver. For example, as Steve Faulkner has documented, the behavior of the ARIA role="alert" varies immensely across different browser and screen reader combinations, which led to some frustrating trade-offs on the Teach site.

One potential short-term solution to this might be for sites to have slightly different code depending on the particular browser/screen-reader combination being used. Aside from being a bad idea for a number of reasons, though, it’s also technically impossible–the current screen reader isn’t reflected in navigator.userAgent or anything else.

So, that’s the current situation I find myself in with respect to accessibility: creating accessible static content is easy and helps extreme users, but creating accessible rich internet applications is quite difficult because screen readers implement the standards so differently. I’m eagerly hoping that this situation improves over the coming years.

Planet WebKit: Introducing Backdrop Filters

Our recent blog posts have focused on important performance and developer features added to WebKit. But WebKit is about more than just great developer tools; we also build features for authoring amazing web content.

In this post I’m excited to share a great new feature that designers have been demanding for some time: backdrop filters. Let’s start with a few words explaining why this feature is important, then we can delve into how you can start using it. If you are running a recent nightly build of WebKit, you can try out the example yourself!

Background

The User Interface design language for iOS 7 and OS X Yosemite changed to incorporate some beautiful backdrop blur effects. This layering gives a sense of depth, while preventing detail from the content underneath from cluttering the foreground.

The following image shows the WebKit media controls on top of a video. Notice that you can see some of the background content through the frosted glass effect.

View of the media controls in the context of video playback.
It’s even easier to see in this close up, which was captured using a more vivid source video.

Detail view of the media controls, showing frosted glass backdrop effect.
We wanted the WebKit media controls to have this visual style on relevant platforms. However, the controls are implemented in HTML, CSS, and JavaScript.

Designers want to use these kind of beautiful effects in their web designs, but have been unable to because these effects were only available to native applications. This prevented embedded web views from looking as good as native controls. It also prevented these kinds of effects from being used for authoring websites.

Until recently, there was no standards-compliant method for producing these kinds of effects. Many designers were forced to create the illusion of blurred backdrops using pre-rendered background content and carefully clipping and positioning these assets to achieve the desired effect.

Unfortunately, as with most illusions, this approach doesn’t hold up to close scrutiny.

  • New artwork must be generated any time the background image or blur characteristics are changed.
  • The careful alignment and clipping required to maintain this illusion can lead to pixel cracks and other display glitches.
  • Different blurred images are needed for each targeted display resolution.
  • Dynamic layouts or animations require complicated and potentially costly calculations as the user interacts with the page.

In short, instead of focusing on their site design, developers were forced to take heroic measures to achieve the desired effect.

Backdrop Filter

We saw so many instances of these Sisyphean techniques being used that we created a new CSS style and proposed it as part of the CSS Filter Effects Module Level 2.

The backdrop-filter style allows us to style elements with backdrop effects that resemble those in iOS and OS X that motivated this discussion.

This new style allows the browser engine to do the complicated calculations and positioning needed to achieve this effect.

  1. WebKit starts with the content behind the styled element. Note that this is not the background of the element, but rather the content that would be drawn behind the element.
  2. WebKit then applies the blur effect to the content.
  3. Finally, the backdrop is composited with the other elements on the page to yield the final result.

Example: backdrop-filter: blur(10px);

Backdrop Filter Example
Since these blur operations are being done in the browser engine, we can take advantage of hardware support, resulting in very efficient operations. However, be warned! The nature of this backdrop effect forces the engine to perform more rendering passes, which will have an impact on performance. Make sure you only use this feature where it is most necessary.

We wanted to give developers the freedom to use all kinds of filters in their designs, so backdrop-filter supports the full range of effects provided by our CSS Filters implementation. This means we can do all kinds of exciting things with our backdrops:

Example: backdrop-filter: invert();

Inverted Color Backdrop Example

And we can combine multiple filters:

Example: backdrop-filter: blur(10px) grayscale(100%);

Mixed Filters on Backdrop Example
And best of all, this effect is completely dynamic — it can be used on top of HTML5 media, CSS animations, WebGL, and other dynamic content.

<video controls="controls" height="335" src="https://www.webkit.org/blog-files/backdrop-filters/dynamic_backdrop.m4v" width="600"></video>
This is an amazing advancement of what you can do with your designs. Prior to this style, you could not achieve this kind of effect.

Standardization

WebKit proposed this feature to the CSS Working Group last year, and it is currently in the Editor’s Draft of the CSS Filters Level 2 specification. We are currently prefixing this property to comply with the W3C requirements for features that have not completed the standardization process. Consequently, you will need to write -webkit-backdrop-filter when using it in your own CSS.
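
While the property is prefixed, one way to detect support and provide a fallback (a sketch, not part of this post's demos) is CSS.supports():

// Sketch: detect backdrop-filter support (prefixed or unprefixed) and add a
// class so the stylesheet can fall back to, say, a solid background colour.
var supportsBackdrop =
  window.CSS && CSS.supports &&
  (CSS.supports('-webkit-backdrop-filter', 'blur(10px)') ||
   CSS.supports('backdrop-filter', 'blur(10px)'));

if (supportsBackdrop) {
  document.documentElement.className += ' backdrop-filter';
}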

Feedback

We hope you enjoy this new feature, and share your creations with us! If you find bugs, please report them at bugs.webkit.org.

As always, if you have suggestions or feedback on this (or other) Filter Effects, please share them on the public-fx@w3.org mailing list.

For short questions, you can contact me or Dean Jackson on Twitter. For longer questions, you can email webkit-help.

Acknowledgements

The images and video of the International Space Station were obtained from NASA’s website, and are used under the Public Domain.

Planet Mozilla: Announcing Rust 1.2

Today marks the completion of the Rust 1.2 stable and 1.3 beta release cycles! Read on for the highlights, or check the release notes for more detail.

What’s in 1.2 stable

As we previously announced, Rust 1.2 comes with two major performance improvements for the compiler:

  • An across-the-board improvement to real-world compiler performance. Representative crates include hyper (compiles 1.16x faster), html5ever (1.62x faster), regex (1.32x faster) and rust-encoding (1.35x faster). You can explore some of this performance data at Nick Cameron’s preliminary tracking site, using dates 2015-05-15 to 2015-06-25.

  • Parallel codegen is now working, and produces a 33% speedup when bootstrapping on a 4 core machine. Parallel codegen is particularly useful for debug builds, since it prevents some optimizations; but it can also be used with optimizations as an effective -O1 flag. It can be activated by passing -C codegen-units=N to rustc, where N is the desired number of threads.

Cargo’s performance has also improved dramatically:

  • Builds that do not require any recompilation (“no-op builds”) for large projects are much faster: for Servo, build time went from 5 seconds to 0.5 seconds.

  • Cargo now supports shared target directories that cache dependencies across multiple packages, which results in significant build-time reduction for complex projects.

The 1.2 release also introduces support for the MSVC (Microsoft Visual C) toolchain, as opposed to GNU variants. The upshot is that Rust code is now directly linkable against code built using the native Windows toolchain. The compiler bootstraps on MSVC, we have preliminary nightlies, and we are testing all rust-lang crates against MSVC. Unwinding support is not yet available (the process aborts on panic), but work is underway to land it.

On the language side, Rust 1.2 marks the completion of the dynamically-sized type (DST) work, allowing smart pointers like Rc to seamlessly apply to arrays and trait objects, so that Rc<[T]> is fully usable. This final enhancement applies to all smart pointers in the standard library. Support for external smart pointer types is available in nightlies, and will be stabilized soon.

What’s in 1.3 beta

One of the most exciting developments during the 1.3 cycle was the introduction of the Rustonomicon, a new book covering “The Dark Arts of Advanced and Unsafe Rust Programming”. While it’s still in its early days, this book already provides indispensable coverage of some of Rust’s more subtle aspects.

The 1.3 cycle also saw additional focus on performance, though most wins here are within the standard library:

We have also made strides in our Windows support, landing preliminary support for targeting Windows XP. While we do not intend to treat Windows XP as a “first tier” platform, it is now feasible to build Rust code for XP as long as you avoid certain parts of the standard library.

On the Cargo front, we have landed support for lint capping as specified by an earlier RFC. The idea is that lints in your dependencies should not affect your ability to compile cleanly, which in turn makes it easier to tweak the way lints work without undue hassle in the ecosystem.

Contributors to 1.2

The 1.2 stable release represents the hard work of 180 fine folks:

  • Aaron Turon
  • Abhishek Chanda
  • Adolfo Ochagavía
  • Aidan Hobson Sayers
  • Akshay Chiwhane
  • Alex Burka
  • Alex Crichton
  • Alex Stokes
  • Alexander Artemenko
  • Alexis Beingessner
  • Andrea Canciani
  • Andrew Foote
  • Andrew Kensler
  • Andrew Straw
  • Ariel Ben-Yehuda
  • Austin Hellyer
  • Barosl Lee
  • Ben Striegel
  • Björn Steinbrink
  • Brian Anderson
  • Brian Campbell
  • Brian Leibig
  • Brian Quinlan
  • Carol (Nichols || Goulding)
  • Chris Hellmuth
  • Christian Stadelmann
  • Chuck Bassett
  • Corey Farwell
  • Cornel Punga
  • Cruz Julian Bishop
  • Dave Huseby
  • David Campbell
  • David Stygstra
  • David Voit
  • Eduard Bopp
  • Eduard Burtescu
  • Eli Friedman
  • Emilio Cobos Álvarez
  • Emily Dunham
  • Eric Ye
  • Erik Michaels-Ober
  • Falco Hirschenberger
  • Felix S. Klock II
  • FuGangqiang
  • Geoffrey Thomas
  • Gleb Kozyrev
  • Guillaume Gomez
  • Gulshan Singh
  • Heejong Ahn
  • Huachao Huang
  • Huon Wilson
  • Ivan Ukhov
  • Iven Hsu
  • Jake Goulding
  • Jake Hickey
  • James Miller
  • Jared Roesch
  • Jeremy Schlatter
  • Jexell
  • Jim Blandy
  • Johann Tuffe
  • Johannes Hoff
  • Johannes Oertel
  • John Hodge
  • Jonathan Reem
  • Joshua Landau
  • Kevin Ballard
  • Kubilay Kocak
  • Lee Jeffery
  • Leo Correa
  • Liigo Zhuang
  • Lorenz
  • Luca Bruno
  • Luqman Aden
  • Manish Goregaokar
  • Marcel Müller
  • Marcus Klaas
  • Marin Atanasov Nikolov
  • Markus Westerlind
  • Martin Pool
  • Marvin Löbel
  • Matej Lach
  • Mathieu David
  • Matt Brubeck
  • Matthew Astley
  • Max Jacobson
  • Maximilian Haack
  • Michael Layzell
  • Michael Macias
  • Michael Rosenberg
  • Michael Sproul
  • Michael Woerister
  • Mihnea Dobrescu-Balaur
  • Mikhail Zabaluev
  • Mohammed Attia
  • Ms2ger
  • Murarth
  • Mário Feroldi
  • Nathan Long
  • Nathaniel Theis
  • Nick Cameron
  • Nick Desaulniers
  • Nick Fitzgerald
  • Nick Hamann
  • Nick Howell
  • Niko Matsakis
  • Nils Liberg
  • OlegTsyba
  • Oliver ‘ker’ Schneider
  • Oliver Schneider
  • P1start
  • Parker Moore
  • Pascal Hertleif
  • Paul Faria
  • Paul Oliver
  • Peer Aramillo Irizar
  • Peter Atashian
  • Peter Elmers
  • Philip Munksgaard
  • Ralph Giles
  • Rein Henrichs
  • Ricardo Martins
  • Richo Healey
  • Ricky Taylor
  • Russell Johnston
  • Russell McClellan
  • Ryan Pendleton
  • Ryman
  • Rémi Audebert
  • Sae-bom Kim
  • Sean Collins
  • Sean Gillespie
  • Sean Patrick Santos
  • Seo Sanghyeon
  • Simon Sapin
  • Simonas Kazlauskas
  • Steve Gury
  • Steve Klabnik
  • Steven Allen
  • Steven Fackler
  • Steven Walter
  • Sébastien Marie
  • Tamir Duberstein
  • Thomas Karpiniec
  • Tim Ringenbach
  • Tshepang Lekhonkhobe
  • Ulrik Sverdrup
  • Vadim Petrochenkov
  • Wei-Ming Yang
  • Wesley Wiser
  • Wilfred Hughes
  • Will Andrews
  • Will Engler
  • Xuefeng Wu
  • XuefengWu
  • Yongqian Li
  • York Xiang
  • Z1
  • ben fleis
  • benaryorg
  • bluss
  • bors
  • clatour
  • diwic
  • dmgawel
  • econoplas
  • frankamp
  • funkill
  • inrustwetrust
  • joliv
  • klutzy
  • marcell
  • mdinger
  • olombard
  • peferron
  • ray glover
  • saml
  • simplex
  • sumito3478
  • webmobster

Planet Mozilla: Tying ecosystems through browsers

One of the principles behind HTML5, and the community building it, is that the specifications that say how the Web works should have enough detail that somebody reading them can implement the specification. This makes it easier for new Web browsers to enter the market, which in turn helps users through competitive pressure on existing and new browsers.

I worry that the Web standards community is in danger of losing this principle, quite quickly, and at a cost to competition on the Web.

Some of the recent threats to the ability to implement competitive browsers are non-technical:

  • Many leading video and audio codecs are subject to non-free patent licenses, due at least in part to the patent policies and practices of the standards bodies building such codecs.
  • Implementing EME in a way that is usable in practice requires having a proprietary DRM component and then convincing the sites that use EME to support that component. This can be done by building such a component or forming a business relationship with somebody else who already has. But this threat to browser competition is at least partly related to the nature of DRM, whose threat model treats the end user as the attacker.

Many parts of the technology industry today are dominated by a small group of large companies (effectively an oligopoly) that have an ecosystem of separate products that work better together than with their competitors' products. Apple has Mac OS (software and hardware), iOS (again, software and hardware), Apple TV, Apple Pay, etc. Google has its search engine and other Web products, Android (software only), Chrome OS, Chromecast and Google Cast, Android Pay, etc. Microsoft has Windows, Bing, Windows Phone, etc. These products don't line up precisely, but they cover many of the same areas while varying based on the companies' strengths and business models. Many of these products are tied together in ways that both help users and, since these ties aren't standardized and interoperable, strongly encourage users to use other products from the same company.

There are some Web technologies in development that deal with connections between parts of these ecosystems. For example:

  • The Presentation API defines a way for a Web page to show content on something like a Chromecast or an Apple TV. But it only specifies the API between the Web page and the browser; the API between the browser and the TV is completely unspecified. (Mozilla participants in the group tried to change that early in the group's history, but gave up.)
  • The future Web Payments Working Group (which I wrote about last week) is intended to build technology in which the browser connects a user making a payment to a Web site. This has the risk that instead of specifying how browsers talk to payment networks or banks, a browser is expected to make business deals with them, or make business deals with somebody who already has such deals.

In both cases, specifying the system fully is more work. But it's work that needs to happen to keep the Web open and competitive. That's why we've had the principle of complete specification, and it still applies here.

I'm worried that the ties that connect the parts of these ecosystems together will start running through unspecified parts of Web technologies. This would, through the loss of the principle of specification for competition, makes it harder for new browsers (or existing browsers made by smaller companies) to compete, and would make the Web as a whole a less competitive place.

Planet Mozilla: Tab audio indicators and muting in Firefox Nightly

Sometimes when you have several tabs open, and one of them starts to make some noise, you may wonder where the noise is coming from.  Other times, you may want to quickly mute a tab without figuring out if the web page provides its own UI for muting the audio.  On Wednesday, I landed the user facing bits of a feature to add an audio indicator to the tabs that are playing audio, and enable muting them.  You can see a screenshot of what this will look like in action below.

Tab audio indicators in action

Tab audio indicators in action

As you can see in the screenshot, my Soundcloud tab is playing audio, and so is my Youtube tab, but the Youtube tab has been muted.  Muting and unmuting a tab is easy by clicking on the tab audio indicator icon.  You can now test this out yourself on Firefox Nightly tomorrow!

This feature should work with all APIs that let you play audio, such as HTML5 <audio> and <video>, and Web Audio.  Also, it works with the latest Flash beta.  Note that you actually need to install the latest Flash beta, that is, version 19.0.0.124 which was released yesterday.  Earlier versions of Flash won’t work with this feature.

We’re interested in your feedback about this feature, and especially about any bugs that you may encounter.  We hope to iron out the rough edges and then let this feature ride the trains.  If you are curious about this progress, please follow along on the tracking bug.

Last but not least, this is the result of the effort of many of my colleagues, most notably Andrea Marchesini, Benoit Girard, and Stephen Horlander.  Thanks to those and everyone else who helped with the code, reviews, and other things!

Planet Mozilla: CSS Vendor Prefixes

I have read everything and its contrary about CSS vendor prefixes in the last 48 hours. Twitter, blogs, Facebook are full of messages or articles about what are or are supposed to be CSS vendor prefixes. These opinions are often given by people who were not members of the CSS Working Group when we decided to launch vendor prefixes. These opinions are too often partly or even entirely wrong so let me give you my own perspective (and history) about them. This article is with my CSS Co-chairman's hat off, I'm only an old CSS WG member in the following lines...

  • CSS Vendor Prefixes as we know them were proposed by Mike Wexler from Adobe in September 1998 to allow browser vendors to ship proprietary extensions to CSS.

    In order to allow vendors to add private properties using the CSS syntax and avoid collisions with future CSS versions, we need to define a convention for private properties. Here is my proposal (slightly different than was talked about at the meeting). Any vendors that defines a property that is not specified in this spec must put a prefix on it. That prefix must start with a '-', followed by a vendor specific abbreviation, and another '-'. All property names that DO NOT start with a '-' are RESERVED for using by the CSS working group.

  • One of the largest shippers of prefixed properties at that time was Microsoft, which introduced literally dozens of such properties in Microsoft Office.
  • The CSS Working Group slowly evolved from that to « vendor prefixes indicate proprietary features OR experimental features under discussion in the CSS Working Group ». In the latter case, the vendor prefixes were supposed to be removed when the spec stabilized enough to allow it, i.e. reaching an official Call for Implementation.
  • Unfortunately, some prefixed « experimental features » were so immensely useful to CSS authors that they spread at a fast pace on the Web, even though CSS authors were instructed not to use them. CSS Gradients (a feature we originally rejected: « Gradients are an example. We don't want to have to do this in CSS. It's only a matter of time before someone wants three colors, or a radial gradient, etc. ») are the perfect example of that. At some point in the past, my own editor BlueGriffon had to output several different versions of CSS gradients to accommodate the various implementation states available in the wild (WebKit, I'm looking at you...); a sketch of the kind of fallback dance this required follows this list.
  • Unfortunately, some of those prefixed properties took a lot, really a lot, of time to reach a stable state in a Standard and everyone started relying on prefixed properties in production web sites...
  • Unfortunately again, some vendors did not apply the rules they decided themselves: since the prefixed version of some properties was so widely used, they maintained it with its early implementation and syntax in parallel with a "more modern" implementation matching, or not, what was in the Working Draft at the time.
  • We ended up just a few years ago in a situation where prefixed properties were so widely used that they started being harmful to the Web. The incredible growth of first WebKit and then Chrome triggered a massive adoption of prefixed properties by CSS authors, up to the point that other vendors seriously considered implementing the -webkit- prefix themselves, or at least simulating it.
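
As a rough sketch of that fallback dance (an illustration of the technique, not BlueGriffon's actual code), a tool or site had to probe which generation of gradient syntax an engine understood by assigning each candidate value to a test element and keeping the first one the style system accepts:

function supportedGradient() {
  var candidates = [
    "linear-gradient(to bottom, red, blue)",                               // standard syntax
    "-webkit-linear-gradient(top, red, blue)",                             // later prefixed form
    "-webkit-gradient(linear, left top, left bottom, from(red), to(blue))" // original WebKit form
  ];
  var probe = document.createElement("div");
  for (var i = 0; i < candidates.length; i++) {
    probe.style.backgroundImage = "";            // reset before each attempt
    probe.style.backgroundImage = candidates[i]; // ignored if the engine cannot parse it
    if (probe.style.backgroundImage !== "") {
      return candidates[i];                      // first syntax the engine parsed and kept
    }
  }
  return null;                                   // no gradient syntax supported at all
}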

Vendor prefixes were not a complete failure. They allowed innovative products to be released to the masses and drove the deep adoption of HTML and CSS in products that were not originally made for Web Standards (like Microsoft Office). They allowed vendors to ship experimental features and gather priceless feedback from our users, CSS authors. But they failed for two main reasons:

  1. The CSS Working Group - and the Group is really made only of its Members, the vendors - took faaaar too much time to standardize critical features that saw immediate massive adoption.
  2. Some vendors did not update nor "retire" experimental features when they had to, ditching the rules they themselves originally agreed on.

From that perspective, putting experimental features behind a flag that is by default "off" in browsers is a much better option. It's not perfect though. I'm still under the impression that the standardization process becomes considerably harder when such a flag is "turned on" in a major browser before the spec becomes a Proposed Recommendation. A standardization process is not a straight line, and even at the latest stages of standardization of a given specification, issues can arise and trigger more work, and then a delay or even important technical changes. Even at PR stage, a spec can receive a Formal Objection or face an IPR issue delaying it. As CSS matures, we increasingly deal with more and more complex features and issues, and it's hard to predict when a feature will be ready for shipping. But we still need to gather feedback, we still need to "turn flags on" at some point to get real-life feedback from CSS Authors. Unfortunately, you can't easily remove things from the Web. Breaking millions of web sites to "retire" an experimental feature is still a difficult choice...

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements a proprietary feature for its own internal usage, and that feature is so important to them that they have to "unflag" it, you can be sure some authors will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental, whatever their spread in the wild. They should not have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors that experimental features are reliable and will stay forever as they are. Vendors also have to work faster and avoid leaving an experimental feature alive for more than two years. That requires taking the following hard decisions:

  • If a feature does not stabilize in two years' time, that's probably because it's not ready, too hard to implement, not strategic at that moment, because the production of a Test Suite is too large an effort, or whatever. It then has to be dropped or postponed.
  • Tests are painful and time-consuming. But testing is one of the mandatory steps of our standardization process. We should "postpone" specs that can't get a Test Suite to move along the REC track in a reasonable time. That implies removing the experimental feature from browsers, or at least turning the flag it lives behind off again. It's a hard and painful decision, but it's a reasonable one given all I said above and the danger of letting an experimental feature spread.

W3C Team blogMoving the Web Platform forward

The Web Platform keeps moving forward every day. Back in October last year, following the release of HTML 5.0 as a Recommendation, I wrote about streaming video on the Web as a good example of more work to do. But that’s only one among many: persistent background processing, frame rate performance data, metadata associated with a web application, and mitigating cross-site attacks are among the many additions we’re working on to push the envelope. The Open Web Platform is far from complete, and we’ve been focusing on strengthening the parts of the Open Web Platform that developers most urgently need for success, through our push for Application Foundations. Our focus on developers led us to the recent launch of the W3C’s Web Platform Incubator Community Group (WICG). It gives developers the easiest possible way to propose new platform features and incubate their ideas.

As part of the very rapid pace of innovation in the Web Platform, HTML itself will continue to evolve as well. The work on Web Components is looking to provide Web developers the means to build their own fully-featured HTML elements, to eliminate the need for scaffolding in most Web frameworks or libraries. The Digital Publishing folks are looking to produce structural semantic extensions to accommodate their industry, through the governance model for modularization and extensions of WAI-ARIA.

In the meantime, the work boundaries between the Web Applications Working Group and the HTML Working Group have narrowed over the years, given that it is difficult nowadays to introduce new HTML elements and attributes without looking at their implications at the API level. While there is a desire to reorganize the work in terms of functionality rather than technical solutions, resulting in several Working Groups, we’re proposing the Web Platform Working Group as an interim group while discussion is ongoing regarding the proper modularization of HTML and its APIs. It enables the ongoing specifications to continue to move forward over the next 12 months. The second proposed group will be the Timed Media Working Group. The Web is increasingly used to share and consume timed media, especially video and audio, and we need to enhance these experiences by providing a good Web foundation for those uses, supporting the work of the Audio and Web Real-Time Communications Working Groups.

The challenge in making those innovations and additions is to continue to have an interoperable and royalty-free Web for everyone. Let’s continue to make the Open Web Platform the best platform for documents and applications.

Planet MozillaVendor Prefixes And Market Reality

Through the Web Compat twitter account, I happened to read a thread about Apple introducing a new vendor prefix. 🎳. The message by Alfonso Martínez L. starts a bit rough:

The mess caused by vendor prefixes on the wild is not enough, so we have new -apple https://www.webkit.org/blog/3709/using-the-system-font-in-web-content/ … @jonathandavis

Going to the Apple blog post before reading the rest of the thread gives a bit more background.

Web content is sometimes designed to fit in with the overall aesthetic of the underlying platform which it is being rendered on. One of the ways to achieve this is by using the platform’s system font, which is possible on iOS and OS X by using the “-apple-system” CSS value for the “font-family” CSS property. On iOS 9 and OS X 10.11, doing this allows you to use Apple’s new system font, San Francisco. Using “-apple-system” also correctly interacts with the font-weight CSS property to choose the correct font on Apple’s latest operating systems.

Here I understand the desire to use the system font, but I don't understand the need for the new -apple-system value, specifically when the next paragraph says:

On platforms which do not support “-apple-system” the browser will simply fall back to the next item in the font-family fallback list. This provides a great way to make sure all your users get a great experience, regardless of which platform they are using.

I wonder what the font-family cascade is not already doing that they would need a new prefix for. They explain later on by providing this information:

Going beyond the system font, iOS has dynamic type behavior, which can provide an additional level of fit and finish to your content.

font: -apple-system-body
font: -apple-system-headline
font: -apple-system-subheadline
font: -apple-system-caption1
font: -apple-system-caption2
font: -apple-system-footnote
font: -apple-system-short-body
font: -apple-system-short-headline
font: -apple-system-short-subheadline
font: -apple-system-short-caption1
font: -apple-system-short-footnote
font: -apple-system-tall-body

What I smell here is pushing the semantics of a text into the font-face; I believe it will not end well. But that's not what I want to talk about here.

Vendor Prefixes Principle

Vendor prefixes were created to provide a safe place for vendors to experiment with new features. It's a good idea on paper. It can work well, specifically when the technology is not yet really mature and details need to be ironed out. This would be perfectly acceptable if the feature was only available in beta and alpha versions of rendering engines. That would de facto stop the proliferation of these properties on common Web sites. And it would still give space for experimenting.

Here the feature is not proposed as an experiment but as a way for Web developers and designers to use a new feature on Apple platforms. It's proposed as a competitive advantage and a marketing tool for enticing developers to the cool new thing. And before I'm targeted for blaming only Apple: all vendors do that in some fashion.

Let's assume that Apple is of good will. The real issue is not easy to understand unless you work daily on Web Compatibility across the world.

Enter the market reality field.

Flexbox And Gradients In China And Japan

torii in Kamakura

With the Web Compat team, we have lately been working a lot on Chinese and Japanese mobile Web site compatibility issues. The current mobile market in China and Japan is a smartphone ecosystem largely dominated by iOS and Android. It means that if you use -webkit- vendor prefixes in your site, you are basically on the safe side for most users, but not all of them.

What is happening here is interesting. Gradients and flexbox went through syntax changes, and the standard syntax is really different from the original -webkit- syntax. These are two features of the Web platform which are very useful and very powerful, specifically flexbox. In a near-monopolistic market such as China and Japan, the end result was Web developers jumping on the initial version of these features to create their Web sites (shiny, new and useful features).
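
As a minimal sketch of the gap this creates (an illustration only, not code from any of the sites mentioned), an engine can be probed separately for the old and the new flexbox syntax:

var probe = document.createElement("div");

probe.style.display = "-webkit-box";   // 2009 prefixed flexbox syntax
var hasOldWebKitFlexbox = probe.style.display === "-webkit-box";

probe.style.display = "flex";          // standard flexbox syntax
var hasStandardFlexbox = probe.style.display === "flex";

// A Gecko-based mobile browser at the time would typically report
// hasOldWebKitFlexbox: false and hasStandardFlexbox: true, which is exactly why
// sites styled only with the -webkit- syntax fall apart there.

A site that only ships the -webkit- declarations renders as intended in the first case and breaks in the second, even when the engine supports the standard feature.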

Fast forward a couple of years and the economic reality of the Web starts playing its cards. Other vendors have caught up with the features, the standards process took place, and the new world of interoperability is all rosy, with common implementations in all rendering engines, except for a couple of minor details.

Web developers should all jump on adjusting their Web sites to at least add the standard properties. This is not happening. Why? Because the benefits are not perceived by Web developers, project managers and site owners. Indeed, adjusting the Web site has a cost in editing and testing. Who bears this cost, and for what reasons?

When we mention that it would allow users with different browsers to use the Web site, the answer is straightforward: "This browser X is not in our targeted list of browsers." or "This browser Y doesn't appear in our stats." We all know that browser Y can't appear in the stats because it's not usable on the site (a good example of that is MBGA).

mbga rendering on Gecko mobile

Dropping Vendor Prefixes

Adding prefixless versions of properties to rendering engines helps, but it does not magically fix everything about the Web Compatibility story. That's the mistake that Timothy Hatcher (WebKit Developer Experience Manager at Apple) is making in:

@AlfonsoML We also unprefixed 4 dozen properties this year. https://developer.apple.com/library/mac/releasenotes/General/WhatsNewInSafari/Articles/Safari_9.html#//apple_ref/doc/uid/TP40014305-CH9-SW28

This is cool and I applaud Apple for it. I wish it had happened a bit earlier. Why doesn't it solve the Web Compatibility issue? Because the prefixed versions of these properties still exist and are supported. Altogether, we then sing the tune "Yeah, Apple (and Google), let's drop the prefixed version of these properties!" Ooooh, hear me, I so wish it were possible. But Apple and Google can't do that for the exact same reason that other non-WebKit browsers can't exist in the Chinese and Japanese markets: they would instantly break a large number of high-profile Web sites.

We have reached the point where browser vendors have to start implementing or aliasing these WebKit prefixes just to allow their users to browse the Web (see Mozilla in Gecko and Microsoft in Edge). The same thing is happening all over again: in the past, browser vendors had to implement the quirks of IE to be compatible with the Web. As much as I hate it, we will have to specify the current -webkit- prefixes to implement them uniformly.

Web Compatibility Responsibility

Microsoft is involved in the Web Compatibility project. I would like Apple and Google to be fully involved and committed in this project too. The mess we are all involved in is due to WebKit prefixes, and the leading position Apple and Google have on the mobile market means they can really help. This mess killed Opera Presto on mobile, which had to switch to Blink.

Let's all create a better story for the Web and understand fully the consequences of our decisions. It's not only about technology, but also economic dynamics and market realities.

Otsukare!

Planet MozillaUpdating you on 38 just-in-time

Did you see what I did there? For the past two weeks my free time apart from work and the Master's degree has been sitting in a debugger trying to fix JavaScript, which is just murder on my dating life. Here is the current showstopper bug-roll for 38.1.1b1:

  • The Faceblech bug with the new IonPower JavaScript JIT compiler is squashed, I think, after repairing some conformance test failures which in turn appear to have repaired Forceblah. In my defence, the two bugs in question were incredibly weird edge cases and these tests are not part of the usual JIT test suite, so I guess we'll have to run them as well in future. This also repairs an issue with Instagrump which is probably the same underlying issue since Faceboink owns them also.

    The silver lining after all that was that I was considering disabling inlining in the JIT prior to release, which worked around the "badness," but also cut the engine speed in about half. (Still faster than JaegerMonkey!) To make this a bit less of a hit, I tuned the thresholds for starting the twin JITs and got about 10% improvement without inlining. With inlining back on, it's still faster by about 4% and change -- the G5 now achieves a score of nearly 5800 on V8, up from 5560. I also tweaked our foreground finalization patch for generational GC so that we should be able to get the best of both worlds. Overall you should see even better performance out of this next beta.

  • I have a presumptive fix for the webfont "ATSUI puke" on the New York Times, but it's not implemented or well-tested yet. This is a crash on 10.5, so I consider it a showstopper and it will be fixed before the next beta. (It affects 31.8 also but I will not be making another 31 release unless there is a Mozilla ESR chemspill.)

  • The modified strip7 tool required for building 38.x has a serious bug in it that causes it to crash trying to strip certain symbols. I have fixed this bug and builders will need to install this new version (remember: do not replace your normal strip with this one; it is intentionally loose with the Mach-O specification). I will be uploading it sometime this week along with an updated gdb7 that has better debugger performance and repairs a bug with too eagerly disabling register display while single-stepping Ion code.

These bugs are not considered showstoppers, but I do acknowledge them and I plan to fix them either for the final release or the next version of 38:

  • I can confirm saved passwords do not appear in the preferences panel. They do work, though, and can be saved, so this is more of an issue with managing them; while it's possible to do so manually it requires some inconvenient screwing around with your profile, so I consider this the highest priority of the non-showstopper bugs.

  • Checkboxes on the dropdown menus from the Console tabs do not appear. This specific manifestation is purely cosmetic because they work normally otherwise, but this may be an indication there is a similar issue with dropdowns and context menus elsewhere, so I do want to fix this as well.

Other miscellaneous changes include some adjustments to HTML5 media streaming and I have decided to reduce the default window and tab undos back to 31's level (6 and 2 respectively) so that the browser still gives up tenured memory a bit more easily. Unfortunately, there is not enough time to get MP3 support fully functional for final release. I plan to get this completed in a future version of 38.x, but it will not be officially supported until then (you can still toggle tenfourfox.mp3.enabled to use the minimp3 driver for those sites it does work with as long as you remember that seeking within a track doesn't work yet).

The localizer elves have French, German, Spanish, Italian, Russian and Finnish installers available. Our Japanese localization appears to have dropped off the web, so if you can help us, o-negai shimasu! Swedish just needs a couple of strings to be finished. We do not yet have Polish or Asturian, which we used to, so if you can help on any of these languages, please visit issue 42 where Chris is coordinating these efforts. A big thank you to all of our localizers!

Once the localizations are all in, the Google Code project will be frozen to prepare for the wiki and issue tracker moving to Github ahead of Google Code going read-only on 24 August. Downloads will remain on SourceForge, but everything else will go to Github, including the source tree when we eventually drop source parity. I was hoping to have an Elcapitanspoof up in time for 38's final release, but we'll see if I have time to do the graphics.

Watch for the next beta to come out by next weekend with any luck, which gives us enough time if there needs to be a third emergency release prior to the final (weekend prior to 11 August).

Finally, I am pleased to note we are now no longer the only PowerPC JavaScript JIT out there, though we are the only one I know of for Mozilla SpiderMonkey. IBM has been working on a port of Google V8 to PowerPC for some time, both AIX and Linux, which recently became an official part of the Google V8 repository (i.e., the PPC port is now officially supported). If you've been looking at nabbing a POWER8 with that money burning a hole in your pocket, it even works with the new Power ISA little endian mode, of which we dare not speak. Since uppsala, Floodgap's main server, is a POWER6 running AIX and should be able to run this, I might give it a spin sometime when I have a few spare cycles. However, before some of the freaks amongst you get excited and think this means Google Chrome on OS X/ppc is just around the corner, there's still an awful lot more work required to get it operational than just the JavaScript engine, and it won't be me that works on it. It does mean, however, that things like node.js will now work on a Power-based server with substantially less fiddling around, and that might be very helpful for those of you who run Power boxes like me.

Planet WebKitXabier Rodríguez Calvar: ReadableStream almost ready

Hello dear readers! Long time no see! You might think that I have been lazy, and I have been when it comes to blog posting, but I was coding like mad.

The first remarkable thing is that I attended the WebKit Contributors Meeting that happened in March at the Apple campus in Cupertino, as part of the Igalia gang. There we discussed, of course, the Streams API, its state and different implementation possibilities. Another very interesting point, which would make me very happy, would be the move of the Mac port to CMake.

In a previous post I already introduced the concepts of the Streams API and some of its possible use cases, so I’ll save you that part now. The news is that ReadableStream has its basic functionality complete. And what does that mean? It means that you can create a ReadableStream by providing the constructor with the underlying source and the strategy objects, read from it with its reader, and all the internal mechanisms of backpressure and so on will work according to the spec. Yay!
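
For the curious, here is a minimal sketch of that API shape as an author would use it per the Streams spec (plain JavaScript, not the WebKit-internal code): an underlying source, a plain strategy object, and a reader pulling chunks.

// Underlying source: pushes three chunks, then closes the stream.
var source = {
  start: function (controller) {
    controller.enqueue("one");
    controller.enqueue("two");
    controller.enqueue("three");
    controller.close();
  }
};

// Strategy object: count every chunk as 1, apply backpressure above 2 queued chunks.
var strategy = { highWaterMark: 2, size: function () { return 1; } };

var stream = new ReadableStream(source, strategy);
var reader = stream.getReader();

// Each read() resolves with a { value, done } pair.
function drain() {
  reader.read().then(function (result) {
    if (result.done) { console.log("stream closed"); return; }
    console.log("chunk:", result.value);
    drain();
  });
}
drain();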

Nevertheless, there’s still quite some work to do to complete the implementation of the Streams API, like the implementation of byte streams, writable and transform streams, piping operations and built-in strategies (which is what I am working on right now). I don’t know yet when the Streams API will be activated by default in the next builds of Safari, WebKitGTK+ or WebKit for Wayland, but we’ll make it at some point!

The code has already gone through lots of changes because we were still figuring out which architecture was best, and Youenn did an awesome job refactoring some things and providing support for promises in the bindings to make the implementation of ReadableStream more straightforward and less “custom”.

The implementation could still undergo quite some important changes because, as part of my work implementing the strategies, some reviewers raised concerns about having the Streams API implemented inside WebCore in terms of IDL interfaces. I already have a proof of concept of CountQueuingStrategy and ByteLengthQueuingStrategy implemented inside JavaScriptCore, and even a case where we use built-in JavaScript functions, which might help us keep closer to the spec if we can just include JavaScript code directly. We’ll see how we end up!
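
For reference, this is what those two built-in strategies do per the spec, again written as an author would use them rather than as WebCore/JSC internals:

// The size() function of each strategy is what the queue uses to measure backpressure.
var bytes = new ByteLengthQueuingStrategy({ highWaterMark: 16 * 1024 });
var count = new CountQueuingStrategy({ highWaterMark: 4 });

console.log(bytes.size(new Uint8Array(1024)));  // 1024 (the chunk's byteLength)
console.log(count.size("any chunk"));           // 1 (every chunk counts as one)

// They are normally passed as the second argument of a stream constructor:
var byteStream = new ReadableStream({ start: function (c) { /* enqueue bytes here */ } }, bytes);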

Last but not least, I would like to thank Igalia for sponsoring me to attend the WebKit Contributors Meeting in Cupertino and also Adenilson for being so nice and taking us to very nice places for dinner and drinks that we wouldn’t be able to find ourselves (I owe you, promise to return the favor at the Web Engines Hackfest). It was also really nice to have the opportunity of quickly visiting New York City for some hours because of the long connection there, which usually would be a PITA, but was very enjoyable this time.

Bruce LawsonReading List

The reading list – a day early as I’m off to the taverns and love-dungeons of Brussels for the weekend to teach Belgians how to drink beer.

Planet MozillaServo developer tools overview

Servo is a new web browser engine. It is one of the largest Rust-based projects, but the total Rust code is still dwarfed by the size of the code provided by native C and C++ libraries. This post is an overview of how we have structured our development environment in order to integrate the Cargo build system, with its “many small and distributed dependencies” model, with our need to provide many additional features not often found in smaller Rust-only projects.

Mach

Mach is a Python driver program that provides a frontend to Servo’s development environment, one that both reduces the number of steps required and integrates our various tools into a single harness. Similar to its purpose in the Firefox build, we use it to centralize our tooling and simplify the commands that a developer has to run.

mach bootstrap

The steps that mach will handle before issuing a normal cargo build command are:

  • Downloading the correct versions of the cargo and rustc tools. Servo uses many unstable features in Rust, most problematically those that change pretty frequently. We also test the edges of feature compatibility and so are the first ones to notice many changes that did not at first seem as if they would break anyone. Further, we build a custom version of the tools that additionally supports cross-compilation targeting Android (and ARM in the near future). A random local install of the Rust toolchain is pretty unlikely to work with Servo.

  • Updating git submodules. Some of Servo’s dependencies cannot be downloaded as Cargo dependencies because they need to be directly referenced in the build process, and Cargo adds a hash that makes it difficult to locate those files. For such code, we add them as submodules.

mach build & run

The build itself also verifies that the user has explicitly requested either a dev or release build — the Servo dev build is debuggable but quite slow, and it’s not clear which build should be the default.

Additionally, there’s the question of which cargo build to run. Servo has three different “toplevel” Cargo.toml files.

  • components/servo/Cargo.toml is used to build an executable binary named servo and is used on Linux and OSX. There are also horrible linker hacks in place that will cause an Android-targeted build to instead produce a file named servo that is actually an APK file that can be loaded onto Android devices.

  • ports/gonk/Cargo.toml produces a binary that can run on the Firefox OS Boot2Gecko mobile platform.

  • ports/cef/Cargo.toml produces a shared library that can be loaded within the Chromium Embedding Framework to provide a hostable web rendering engine.

The presence of these three different toplevel binaries and the curious directory structure means that mach also provides a run command that will execute the correct binary with any provided arguments.

mach test

Servo has several testing tools that can be executed via mach.

  • mach tidy will verify that there are no trivial syntactic errors in source files. It checks for valid license headers in each file, no tab characters, no trailing whitespaces, etc.

  • mach test-ref will run the Servo-specific reference tests. These tests render, to images, a pair of web pages that implement the same final layout using different CSS features. If the images are not pixel-identical, the test fails.

  • mach test-wpt runs the cross-browser W3C Web Platform Tests, which primarily test DOM features.

  • mach test-css runs the cross-browser CSS WG reference tests, which are a version of the reference tests that are intended to work across many browsers.

  • mach test-unit runs the Rust unit tests embedded in Servo crates. We do not have many of these, except for basic tests of per-crate functionality, as we rely on the WPT and CSS tests for most of our coverage. Philosophically, we prefer to write and upstream a cross-browser test where one does not exist instead of writing a Servo-specific test.

cargo

While the code that we have written for Servo is primarily in Rust, we estimate that at least 2/3 of the code that will run inside of Servo will be written in C/C++, even when we ship. From the SpiderMonkey JavaScript engine to the Skia and Azure/Moz2D graphics pipeline to WebRTC, media extensions, and proprietary video codecs, there is a huge portion of the browser that is integrated and wrapped into Servo, rather than rewritten. For each of these projects, we have a crate that has a build.rs file that performs the custom build steps to produce a static library and then produce a Rust rlib file to link into Servo.

The rest of Servo is a significant amount of code (~150k lines of Rust; ~250k if you include autogenerated DOM bindings), but follows the standard conventions of Cargo and Rust as far as producing crates. For the many crates within the Servo repo, we simply have a Cargo.toml file next to a lib.rs that defines the module structure. When we break them out into a separate GitHub repository, though, we follow the convention of a toplevel Cargo.toml file with a src directory that holds all of the Rust code.

Servo's dependency graph

Updating dependencies

Since there are three toplevel Cargo.toml files, there are correspondingly three Cargo.lock files. This configuration makes the already challenging job of updating dependencies even harder. We have added a command, mach update-cargo -p {package} --precise {version}, to handle updates across all three of the lockfiles. While running this command without any arguments does attempt to upgrade all dependencies to the highest SemVer-compatible versions, in practice that operation is unlikely to work, due to a mixture of:

  • git-only dependencies, which do not have a version number

  • Dependencies with different version constraints on a common dependency, resulting in two copies of a library and conflicting types

  • Hidden Rust compiler version dependencies

Things we’d like to fix in the future

It would be great if there were a single Cargo.toml file and it were at the toplevel of the Servo repo. The current layout is confusing to people familiar with Rust projects, who go looking for a Cargo.toml file there and can’t find it.

Cross-compilation to Android with linker hacks feels a bit awkward. We’d like to clean that up, remove the submodule that performs that linker hackery, and have a more clean/consistent feel to our cross-targeted builds.

Managing the dependencies — particularly if there is a cross-repo update like a Rust upgrade — is a real pain, and it requires network access in order to clone the dependency that you would like to edit. The proposed cargo clone command would be a huge help here.
