Planet Mozilla: Okay, but What Does Your Work Actually Mean, Nikki? Part 3: Translating A Standard Into Code

Over my previous two posts, I described my introduction to work on Servo, and my experience with learning and modifying the Fetch Standard. Now I’m going to combine these topics today, as I’ll be talking about what it’s like putting the Fetch Standard into practice in Servo. The process is roughly: I pick a part of the Fetch Standard to implement or improve; I write it on a step-by-step basis, often asking many questions; then when I feel it’s mostly complete, I submit my code for review and feedback.

I will talk about the review aspect in my next post, along with other things, as this entry ended up being pretty long!

Where To Start?

Whenever I realize I’m not sure what to be doing for the day, I go over my list of tasks, often talking with my project mentor about what I can do next. There’s a lot more work than I could manage in any internship - especially a 3-month-long one - so it’s good to keep in mind which aspects are the most important. Plus, I’m not equally skilled or knowledgeable about every aspect of Fetch or programming in Rust, and learning a new area more than halfway through my internship could be a significant waste of time. So, the main considerations are: “How important is this for Servo?”, “Will it take too long to implement?”, and “Do I know how to do this?”.

“How important is this for Servo?”

Often, my Servo mentor or co-workers are the only people who can answer “How important is this?”, since they’ve all been with Servo for much longer than me and take a broader view. Personally, I only think of Servo in terms of the Fetch implementation, which isn’t too far off from reality: the Fetch implementation will be used by a number of parts of Servo, all of which can easily use any of it, with clear boundaries for what should be handled by Fetch itself.

I’m not too concerned with what’s most important for the Fetch implementation, since I can’t answer that by myself. There are always multiple things I could be doing, and I have a better idea of how to answer the other two questions.

“Will it take too long to implement?”

“Will it take too long to implement?” is probably the hardest question, but one that gets a bit easier all the time. Simply put, the more code I write, the better I can predict how long any specific task will take me to accomplish. There’s always an element of chance, though: sometimes I run into surprising blocks on a single line of code; other times I write an entire Fetch step in half a day with no problems. Maybe with years of experience I will see those easy successes or hard failures coming, but for now, I’ll have to be content with rough estimates and a lenient time schedule.

“Do I know how to do this?”

The last question, “Do I know how to do this?”, depends greatly on what “this” is. Learning new aspects of Rust is always something of a closed book to me: I don’t know how difficult or simple any part of it will be to learn until I try. Not to mention, just reading something has a minimal effect on my understanding. I need to put it into practice, multiple times, to really understand what I’m doing.

Unfortunately, programming Servo (or any project, really) doesn’t necessarily line up the concepts I need to learn and use in a nice order, so I often need to juggle multiple concepts of varying complexity at once. Generalized programming ideas that aren’t specific to Rust, though, I can gauge my ability for more easily. Writing tests? Sure, I’ve done that plenty - it shouldn’t be difficult to apply that to Rust. Writing code that handles multiple concurrent threads? I’ve only learned enough to know that those buzzwords mean something - I’d probably need a month to be taught it well!

An Example Of Deciding A Task

Right now, and for the past while, my work on Servo has been focused on writing tests to make sure my implementation of Fetch, and the functions written previously by my co-workers, conform to what’s expected by the Fetch Standard. What factors into deciding that this is a good task, though?

Importance

Most of the steps for Fetch are complete by now. The steps that aren’t coded either cannot be done yet in Servo, or are not necessary for a minimally working Fetch protocol. Sure, I can make the Rust compiler happy - but just because the code runs at all doesn’t mean it’s working right! Thus, before deciding that the basics of Fetch have been perfected and can be built on, extensive test coverage of the code is significantly important. Testing the code means I can intentionally create many situations, both to make sure the results match the standard, and to confirm that errors come up only at the defined moments.

Time

Writing the tests is often straightforward. I just need to check a lot of conditions, such as: the Fetch Standard says a basic filtered response has the type “basic”. That’s simple - I need to have the Fetch protocol return a basic filtered response, then verify that the type is “basic”! And so on for every declaration the Fetch Standard makes. The trickier side of this, though, is that I can’t be absolutely sure, until I run a test, whether or not the existing code supports this.
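To make that concrete, here’s roughly what such a test looks like. This is a minimal, self-contained sketch of the idea only - the types and the fetch_with_basic_filter stand-in are invented for illustration, not Servo’s real API:

// A minimal sketch of a conformance test; these types and the
// fetch_with_basic_filter stand-in are illustrative, not Servo's real code.
#[derive(Debug, PartialEq)]
enum ResponseType { Basic, Cors, Opaque }

struct Response { response_type: ResponseType }

// Stand-in for running a fetch and applying the "basic" response filter.
fn fetch_with_basic_filter() -> Response {
    Response { response_type: ResponseType::Basic }
}

#[test]
fn basic_filtered_response_has_type_basic() {
    let response = fetch_with_basic_filter();
    // The Fetch Standard says a basic filtered response has type "basic".
    assert_eq!(response.response_type, ResponseType::Basic);
}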

It’s simple on paper to return a basic filtered response - but when I tell my Fetch implementation to do so, maybe it’ll work right away, or maybe I’m missing several steps or made mistakes that prevent it from happening! The time is a bit open-ended as a result, but that can be weighed against how good it would be to catch and repair a significant error.

Knowledge

I have experience with writing tests, as I like them conceptually very much. Testing a program by hand is often slow and difficult, and errors are hard to reproduce. When the testing itself is written in code, everything is accessible and easily repeatable. If I don’t know what’s going on, I can update the test code to be more informative and run it again, or dig into it with a debugger. So I have the knowledge of tests themselves - but what about testing in Rust, or even Servo?

I can tell you the answer: I hadn’t foreseen much difficulty (reading over how to write tests in Rust was easy), but I ended up lacking a lot of experience with testing Servo. Since Servo is a browser engine, and the Fetch protocol deals with fetching any kind of resource a browser accesses, I need to handle having a resource on a running server for Fetch to retrieve. While this is greatly simplified thanks to some of the libraries Servo works with, it still took a lot of time for me to build a good mental model of what I was doing and thus be able to write effective tests.
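As an aside, the heart of that setup can be sketched with nothing but Rust’s standard library: start a throwaway server on an OS-assigned port, have it serve one canned response, and point the code under test at it. This is only an illustrative sketch of the pattern, not Servo’s actual test harness:

use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() {
    // Binding to port 0 lets the OS pick a free port, so tests don't collide.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Serve a single canned HTTP response on a background thread.
    thread::spawn(move || {
        if let Ok((mut stream, _)) = listener.accept() {
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf); // drain the request bytes
            let body = "hello";
            let response = format!(
                "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
                body.len(),
                body
            );
            let _ = stream.write_all(response.as_bytes());
        }
    });

    // A test would now run the Fetch implementation against http://{addr}/
    // and make assertions about the response it gets back.
    println!("test server listening on {}", addr);
}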

Actually Writing The Code

So I’ve said a lot about what goes into picking a task, but what about actually writing the code? That requires knowing how to translate a step from the programming-language-agnostic Fetch Standard into Rust code. Sometimes this is almost exactly like the original wording, such as step 1 of the Main Fetch function, “Let response be null”, which in Rust looks like this: let response = None;.

In Rust, the let keyword makes a variable binding - it’s just a coincidence that the Main Fetch step uses the same word. Rust doesn’t have a null value; its nearest equivalent is None, a value of the Option type that represents the absence of something. Binding response to None means the variable exists but currently holds nothing, and it can’t be used as a response until a later step gives it one.
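Put another way, response starts out as an empty Option and is filled in by a later step. A tiny illustration of that idea - Response and http_fetch here are invented placeholders, not Servo’s actual names:

// Placeholders for illustration; not Servo's actual types.
struct Response;
fn http_fetch() -> Response { Response }

fn main() {
    // Step 1: "Let response be null."
    let mut response: Option<Response> = None;

    // A later step fills the binding in.
    if response.is_none() {
        response = Some(http_fetch());
    }

    // The Option must be checked before the response can be used.
    match response {
        Some(_response) => { /* ... carry on with the remaining steps ... */ }
        None => { /* still nothing to work with */ }
    }
}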

A More Complex Step

Of course, not every step is so literal to translate to Rust. Take step 10 of Main Fetch for instance: “If main fetch is invoked recursively, return response”. The first part of this is knowing what it’s saying, which is “if main fetch is invoked by itself or another function it invokes (i.e., invoked recursively), return the response variable”. Translating that directly gives us if main fetch is invoked recursively { return response }. This isn’t very good - main fetch is invoked recursively isn’t valid code, it’s a fragment of an English sentence.

The step doesn’t tell me how to express that condition, so I need to get answers elsewhere. There are two things I can do: keep reading more of the Fetch Standard (and check personal notes I’ve made on it), or ask for help. I’ll do both, in that order. Right now, I have two questions I need answers to: “When is Main Fetch invoked recursively?”, and “How can I tell when Main Fetch is invoked recursively?”.

Doing My Own Research

I often like to try to get answers myself before asking other people for help (although sometimes I do both at the same time, which has a few times led to me answering my own question immediately after asking it). I think it’s a good rule to spend a few minutes trying to learn something on my own, but also not to let myself stay stuck on any one problem for more than about 15 minutes, or 30 minutes at the absolute worst. I want to use my own time well - asking a question I can answer on my own shortly isn’t necessary, but not asking a question that is beyond my capability can end up wasting a lot more time.

So I’ll start trying to figure out this step by answering for myself, “When is Main Fetch invoked recursively?”. Most functions in the Fetch Standard invoke at least one other function; that’s how it all works - each part is separated so it can be understood in smaller chunks, and specific steps can easily be repeated, such as invoking Main Fetch again.

What I would normally need to do here is read through the Fetch Standard to find at least one point where Main Fetch is invoked from itself or another function it invokes. Thankfully, I don’t have to read through every function, and everything it calls, and so on, until I happen to get my answer, thanks to some notes I took earlier, when I was reading through as much of the Fetch Standard as I could.

I had decided to make short notes recording when each Fetch function calls another one, and in what step it happens. At the time, I did so to help give me an understanding of the relations between all the Fetch functions - now, it’s going to help me pinpoint when Main Fetch is invoked recursively! First I look for what calls Main Fetch, other than the initial Fetch function, which primarily serves to set up some basic values and then pass them on to Main Fetch. The only function that does so is HTTP Redirect Fetch, in step 15. I can also see that HTTP Fetch calls HTTP Redirect Fetch in step 5, and that Main Fetch calls HTTP Fetch in step 9.

That was easy! I’ve now answered the question: “Main Fetch is invoked recursively by HTTP Redirect Fetch.”

Questions Are Welcome

However, I’m still at a loss for answering “How can I tell when Main Fetch is invoked recursively?”. Step 15 of HTTP Redirect Fetch doesn’t say to set any variable that would say “oh yeah, this is a recursive call now”. In fact, no such variable is defined by the Fetch Standard to be used!* So, I’ll ask my Servo mentor on IRC.

This example I’m covering actually happened a month or so ago, so I’m not going to pretend I can remember exactly what the conversation was. But the result of it was that the solution is actually very simple: I add a new boolean (a value that is just true or false) parameter (a value that must be passed to a function when invoking it) to Main Fetch that says whether or not it’s being invoked recursively. When Main Fetch is invoked from the Fetch function, I set it to false; when Main Fetch is invoked from HTTP Redirect Fetch, I set it to true.
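Here is the shape of that fix as a simplified sketch - the names, signatures, and the redirect bookkeeping are all invented for illustration, and the real Servo functions take more parameters:

// A simplified sketch of the recursive flag; illustrative names only,
// not Servo's real signatures.
struct Request { redirect_count: u32 }
struct Response;

// The entry point: Fetch never invokes Main Fetch recursively.
fn fetch(request: &mut Request) -> Response {
    main_fetch(request, false)
}

fn main_fetch(request: &mut Request, recursive: bool) -> Response {
    // ... earlier steps; step 9 eventually leads into HTTP Redirect Fetch ...
    let response = http_redirect_fetch(request);

    // Step 10: "If main fetch is invoked recursively, return response."
    if recursive {
        return response;
    }

    // ... the remaining steps run only for the outermost invocation ...
    response
}

fn http_redirect_fetch(request: &mut Request) -> Response {
    if request.redirect_count < 20 {
        request.redirect_count += 1;
        // Step 15: re-invoke Main Fetch, flagged as a recursive call.
        return main_fetch(request, true);
    }
    Response
}

fn main() {
    let mut request = Request { redirect_count: 0 };
    let _response = fetch(&mut request);
}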

Repeat

This is the typical process for every single function and step in the Fetch Standard. For every function I might implement, improve, or test, I repeat a similar decision process. For every step, I repeat a similar question-and-answer procedure, although unfortunately not all English-to-code translations are resolved as quickly as the examples in this post.

Sometimes picking a new part of the Fetch Standard to implement means discovering that it relies on another piece, and it needs to be put on hold for that; occasionally my difficulty with a step results in a request to change the Fetch Standard to improve its clarity or logic, as I’ve described in previous posts.

This, in sum with the other two parts of this series, effectively describes the majority of my work on Servo! Hopefully, you now have a better idea of what it all means.


  • After sharing a first draft of my post, it was pointed out that adding this to the Fetch spec would be simple to do, so this case will likely soon become irrelevant for understanding Fetch. But it’s still a neat, concise example, if easily outdated.

Jeremy Keith: Enhance’n’drag’n’drop

I’ve spent the last week implementing a new feature over at The Session. I had forgotten how enjoyable it is to get completely immersed in a personal project, thinking about everything from database structures right through to CSS animations.

I won’t bore you with the details of this particular feature—which is really only of interest if you play traditional Irish music—but I thought I’d make note of one little bit of progressive enhancement.

One of the interfaces needed for this feature was a form to re-order items in a list. So I thought to myself, “what’s the simplest technology to enable this functionality?” I came up with a series of select elements within a form.

Reordering

It’s not the nicest of interfaces, but it works pretty much everywhere. Once I had built that—and the back-end functionality required to make it all work—I could think about how to enhance it.

I brought it up at the weekly Clearleft front-end pow-wow (featuring special guest Jack Franklin). I figured that drag’n’drop would be the obvious enhancement, but I didn’t know if there were any “go-to” libraries for implementing it; I haven’t paid much attention to the state of drag’n’drop since the old IE implementation was added to HTML5.

Nobody had any particular recommendations so I did a bit of searching. I came across Dragula, which looked pretty solid. It’s made by the super-smart Nicolás Bevacqua, who I know shares my feelings about progressive enhancement. To my delight, I was able to get it working within minutes.

Drag and drop

There’s a little bit of mustard-cutting going on: does the dragula object exist, and does the browser understand querySelector? If so, the select elements are hidden and the drag’n’drop is enabled. Then, whenever an item in the list is dragged and dropped, the corresponding (hidden) select element is updated …so that time I spent making the simpler non-drag’n’drop interface was time well spent: I didn’t need to do anything extra on the server to handle the data from the updated interface.

It’s a simple example, but it demonstrates the benefits of starting with the simpler universal interface before upgrading to the smoother experience.

Planet Mozilla: [worklog] Outreach is hard, Webkit aliasing big progress

Tunes of the week: Earth, Wind and Fire. Maurice White, the founder, died at 74.

WebCompat Bugs

WebKit aliasing

  • When looking for usage of -webkit-mask-*, I remembered that Google Image was a big offender. So I tested again this morning and… delight! They now use SVG. So now, I need to test Google search extensively and check if they can just send us the version they send to Chrome.
  • Testing Google Calendar again on Gecko with a Chrome user agent, to see how far we are from receiving a better user experience. We can’t really ask Google yet to send us the same thing they send to Chrome. A couple of glitches here and there, but we are very close. The best outcome would be for Google to fix their CSS, specifically to make their flexbox and gradients standards-compatible.
  • The code for the max-width issue (not a bug, but implementation differences due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. And this makes me happy: it should solve a lot of the Webcompat bug reports. Look at the list of See Also links in that bug.

Webcompat Life and Working with Developer Tools

  • Changing preferences all the time through "about:config" takes multiple steps. I liked how in Opera Presto you could link to a specific preference, so I filed a bug for Firefox. RESOLVED. It exists: about:config?filter=webkit and it’s bookmarkable.
  • Bug 1245365 - searching attribute values through CSS selectors override the search terms
  • A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.

Firefox OS Bugs to Firefox Android Bugs

  • Mozilla’s Web Compatibility employees reduced their support for solving Firefox OS bugs to the minimum. The community is welcome to continue to work on them. But some of these bugs still have an impact on Firefox Android. One good example of this is Bug 959137. Let’s come up with a process to deal with those.
  • Another todo from last week. I have been closing a lot of old bugs (around 600 in a couple of days) in Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them are mostly:
    • the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
    • the site fixed the initial issue
    • setting layout.css.prefixes.webkit to true fixes it (see Bug 1213126)
    • the site has moved to a responsive design

Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block

This bug was simple at the beginning, but when providing the fix, it broke other tests. That’s normal. Boris explained which parts of the code were impacted. But I don’t feel I’m good enough yet to touch this, or it would require patience and step-by-step guidance. It could be interesting, though, and I have the feeling I have too much on my plate right now. So, a bug to take over!

Testing Google Search On Gecko With Different UA Strings

So last week, I gave myself a todo: "test Google search properties and see if we can find a version which works better on Firefox Android than the current default version sent by Google. Maybe test with the Chrome UA and iPhone UA." My preliminary tests sound pretty good.

Reading List

Follow Your Nose

Otsukare!

Planet WebKit: Michael Catanzaro: On WebKit Security Updates

Linux distributions have a problem with WebKit security.

Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.

This is the story of how that process has gone wrong for WebKit.

Before we get started, a few disclaimers. I want to be crystal clear about these points:

  1. This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
  2. WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
  3. The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.

Browser Security in a Nutshell

Web engines are full of security vulnerabilities, like buffer overflows, null pointer dereferences, and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).

For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)

WebKit Ports

To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.

While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.

Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.

If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.

Everything else is broken.

Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.

(This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)

The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.

Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.

None of the above ports matter for most Linux users. The ports available on mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this blog will focus on WebKitGTK+, since that’s the port I work on, and the port that matters most to most of the people who are reading this blog, but QtWebKit is widely-used and deserves some attention first.

It’s broken, too.

QtWebKit

QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.

After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.

Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this and in retrospect should have mentioned this in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.

WebKitGTK+

WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine for choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.

Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which is how software developers track security issues; as far as distributors are concerned, without a CVE identifier, there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications, this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.

So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data provided by Apple to WebKit security team members on which bugs correspond to which CVEs, allowing the correlation of Bugzilla IDs to Subversion revisions to determine in which WebKitGTK+ release an issue has been fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.

This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.

During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided they would perhaps prefer not to have their logos attached to it.)

Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.

Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.

Distribution Updates

There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.

The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.

The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need to be fixed, security issues can only practically be handled upstream.

So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)

Ubuntu

Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0. 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.

By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.

I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.

Debian

Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released versions of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:

Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.

For general web browser use we recommend Iceweasel or Chromium.

Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.

(Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)

Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)

The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.

Recommended Distributions

We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.

The Great API Break

So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.

WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or many separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.

Intermission: Certificate Verification

Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug which is not fixed in any released version of Shotwell, or this Banshee bug which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.

Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.

Removing WebKit1

WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.

Maybe it should have remained that way.

But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.

A lot of good that does for applications using the API that was removed.

WebKit2 Adoption

While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process. For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2: they first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.

As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?

(It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)

Fixing Things

How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions simply have to start taking our security updates.

For applications stuck on WebKitGTK+ 2.4, I see a few different options:

  1. We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time consuming and therefore very expensive, so count this out.
  2. We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
  3. Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)

Or, a far more likely possibility: we could do nothing, and keep using insecure software.

Steve Faulkner et al: WCAG 2.0 Parsing Criterion is a PITA

The WCAG 2.0 Parsing Criterion is a Pain In The Ass (PITA) because checking it throws up lots of potential errors that, if they all had to be fixed, could mean a lot of extra work (in some cases busy work) for developers. This is largely due to the lack of robust tools for producing a set of specific issues that require fixing.

I have discussed the parsing criterion previously in WCAG 2.0 parsing error bookmarklet, where I also provided a bookmarklet that helps to filter out some HTML conformance checker errors that are definitely (maybe) not potential accessibility issues.

IMPORTANT NOTE:

I am not saying here that checking and fixing HTML conformance errors is not an important and useful part of the web development process, only that fixing all HTML conformance errors is not a requirement for accessibility. There are good reasons to validate your HTML as part of the development process.

What does the WCAG parsing criterion require?

Really, only a very limited subset of the errors and warnings that may be produced when checking with the only available tools for testing the WCAG parsing criterion (i.e. HTML conformance checkers). You can use an HTML conformance checker to find such errors, but the errors that need fixing for accessibility purposes can often be needles in a haystack.

1. Complete start and end tags

note: but only when this is required by the specification

Examples of what happens:

This:

fieldset><input></fieldset>

Displays this on page:

fieldset>

or

This:

<img src="HTML5_Logo.png" alt="HTML5"
<p>test</p>

Produces this in DOM:

<img <p="" alt="HTML5" src="HTML5_Logo.png"> test
<p></p>

i.e., an unintended empty p element, with the intended text not contained in it, and a mutant attribute <p="" sprouted on the img element.

What this requirement does not mean

Adding end tags to every element:

Not this! <input></input>
...
Not this!<li>list item </li>
...

or self-closing elements without end tags:

Not this! <input /> <img />

There are rules in HTML detailing which elements require end tags and under what circumstances: Optional Tags. You can also find this information under Tag omission in text/html in the definition of each element in HTML.

4.5.9 The abbr element

Tag omission in text/html:

Neither tag is omissible

http://www.w3.org/TR/html5/text-level-semantics.html#the-abbr-element

The good news is that most code errors of this type will be fairly obvious, as they will show up as text strings in the rendered page, or affect the style/positioning of content and produce funky attributes in the DOM.

2. Malformed attributes and attribute values

quoted attributes

Any attribute that takes a text string, a set of space-separated tokens, a set of comma-separated tokens, or a valid list of integers needs to be quoted:

Do this:

<p class="poot pooter">some text about poot</p>
<img alt="The Etiology of poot." src="poot.png">

Not this:

//missing end quote on class attribute with multiple values: 
Not this!<p class="poot pooter>some text about poot</p>

//no quotes on class attribute with multiple values: 
Not this!<p class=poot pooter>some text about poot</p>

//missing start quote on alt attribute
Not this!<img alt=The Etiology of poot." src="poot.png">

//no quotes on alt attribute
Not this!<img alt=The Etiology of poot. src="poot.png">

Note: although some attributes do not require quoted values, the safest and sanest thing to do is quote all attributes.

Spaces between attributes

Do this:

<p class="poot" id="pooter">some text about poot</p>
<img alt="The Etiology of poot." src="poot.png">

Not this:

//no space between class and id attributes: 
Not this!<p class="poot"id="pooter">some text about poot</p>

//no space between alt and src attributes:
Not this!<img alt="The Etiology of poot."src="poot.png">

Further reading on attributes: Failure of Success Criterion 4.1.1 due to incorrect use of start and end tags or attribute markup

3. Elements are nested according to their specifications

What this requirement means is that you cannot do something silly like have a list item li without a ul or ol as its parent:

<div>
Not this!<li>list item</li> 
Not this!<li>list item</li>
</div>

or multiple controls inside a label element:

<label>
first name <input type="text"> 
Not this!last name <input type="text"> 
</label>

Examples of what happens:

For “a list item li without a ul or ol as its parent”, depending on the browser, the semantics of the list item, including the role, the list size, and the position of the item in the list, are lost. It also results in funky rendering across browsers.

For “multiple controls inside a label element”, depending on the browser, the accessible name for each of the controls is a concatenation of the text inside the label, so in the example case, each control has an accessible name of “first name last name”. Also, clicking with the mouse on either text label will move focus to the first control in the label element.

4. Elements do not contain duplicate attributes

Pretty simple, don’t do this:

Not this! <img alt="html5" alt="html6">

Note: although this is a requirement in the WCAG criteria and an HTML conformance requirement, it causes no harm accessibility-wise unless the 2nd instance of the duplicate attribute is one that exposes required information; the usual processing behaviour for duplicate attributes is that the first instance is used and further instances are ignored.

5. Any IDs are unique

Again, pretty simple: don’t do this:

<body>
...
<p id="IAmUnique">
...
Not this! <div id="IAmUnique">
... 
</body>

Note: although this is a requirement in the WCAG criteria and an HTML conformance requirement, it causes no harm accessibility-wise unless the id value is being referenced by a relationship attribute such as for, headers, or aria-labelledby.

Some further examples of HTML conformance errors that ARE NOT WCAG parsing criterion fails

  • Unrecognized attributes:

    Error: Attribute event not allowed on element a at this point.

  • Unrecognized Elements:

    Error: Element poot not allowed as child of element body in this context.

  • Bad attribute values:

    Error: Bad value grunt for attribute type on element input.

  • Missing attribute values:

    Error: Element meta is missing one or more of the following attributes: content, property.

  • Obsolete elements and attributes:

    Error: The align attribute on the td element is obsolete.

 

WHATWG blog: Standards development happening on GitHub

(This post is a rough copy of an email I sent to the mailing list.)

I wanted to remind the community that currently all WHATWG standards are being developed on GitHub. This enables everyone to directly change standards through pull requests and start topic-based discussion through issues.

GitHub is especially useful now that the WHATWG covers many more topics than “just” HTML, and using it has already enabled many folks to contribute who likely would not have otherwise. To facilitate participation by everyone, some of us have started identifying relatively easy-to-do issues across our GitHub repositories with the label “good first bug”. (See also good first bugs on Bugzilla. New issues go to GitHub, but some old ones are still on Bugzilla.) And we will also continue to help out with any questions on #whatwg IRC.

You should be able to find the relevant GitHub repository easily from the top of each standard the WHATWG publishes. Once you have a GitHub account, you can follow the development of a single standard using the “Watch” feature.

There are no plans to decommission the mailing list — but as you might have noticed, new technical discussion there has become increasingly rare. The mailing list is still a good place to discuss new standards, overarching design decisions, and more generally as a place to announce (new) things.

When there’s a concrete proposal or issue at hand, GitHub is often a better forum. IRC also continues to be used for a lot of day-to-day communications, support, and quick questions.

Planet WebKit: Xabier Rodríguez Calvar: Web Engines Hackfest according to me

And once again, in December we celebrated the hackfest. This year it happened between Dec 7-9 at the Igalia premises, and the scope was much broader than WebKitGTK+, which is why it was renamed the Web Engines Hackfest. We wanted to gather people working on all open source web engines, and we succeeded: we had people working on WebKit, Chromium/Blink and Servo.

At the edition before this one, I was working with Youenn Fablet (from Canon) on the Streams API implementation in WebKit, and we spent our time on the same thing again. We have to say that things are much more mature now. During the hackfest we spent our time fixing the JavaScriptCore built-ins inside WebCore, and we advanced on the automatic importation of the specification’s web platform tests, which are based on our prior test implementation. Since they are now managed there, it does not make sense to maintain them inside WebKit too; we just import them. I must say that our implementation is fairly complete, since we support the current version of the spec and have almost all tests passing, including ReadableStream, WritableStream and the built-in strategy classes. What is missing now is making Streams work together with other APIs, such as Media Source Extensions, Fetch or XMLHttpRequest.

There were some talks during the hackfest, and we did not want to be left out, so we gave our own about Streams. You can enjoy it here:

You can see all hackfest talks in this YouTube playlist. The ones I liked most were the one by Michael Catanzaro about HTTP security, which is always interesting given the current clumsy political movements against cryptography, and the one by Dominik Röttsches about font rendering. It is really amazing what a browser has to do just to get some letters painted on the screen (and look good).

As usual, the environment was amazing and we had a great time, including the traditional Street Fighter match, where Gustavo found a worthy challenger in Changseok :)

Of course, I would like to thank Collabora and Igalia for sponsoring the event!

And by the way, quite shortly after that, I became a WebKit reviewer!

Planet Mozilla: Testing Google Search On Gecko With Different UA Strings

Google is serving very different versions of its services to individual browsers and devices. A bit more than one year ago, I had listed some of the bugs (btw, I need to go through this list again) that Firefox was facing when accessing Google properties. Sometimes, we were not served the tier 1 experience that Chrome was receiving. Sometimes it was just completely broken.

We have an open channel of discussion with Google. Google is also not a monolithic organization: some services have different philosophies with regard to fixing bugs or Web compatibility issues. The good news is that it is improving.

Three Small Important Things About Google Today

  1. Mike was looking for usage of the -webkit-mask-* CSS properties on the Web. I was about to reply "Google search!", which was serving them to the Chrome browser, but decided to look at the bug again. They had been using -webkit-mask-image. To my big surprise, they have switched to an SVG icon. Wonderful!
  2. So it was time for me to test Google Search on Gecko one more time, with the Firefox Android UA and the Chrome UA. See below.
  3. Tantek started some discussion in the CSS Working Group about Web compatibility issues, including one about getting the members of the CSS Working Group to fix their Web properties.

Testing Google Search on Gecko and Blink

For each test, the first two screenshots are from the mobile device itself (Chrome, then Firefox). The third screenshot shows the same site with a Chrome user agent string, but as displayed by Gecko on desktop. Basically, this third one tests: if Google sent Firefox on Android the same version it serves to Chrome, would it work?

Home page

We reached the home page of Google.

home page

Home page - search term results

We typed the word "Japan".

home page with search term

Home page - scrolling down

We scrolled down a bit.

scrolling down the page

Home page - bottom

We reached the bottom of the page.

Bottom of the page

Google Images with the search term

We went back to the top of the page and tapped on the Images menu.

Accessing Google image

Google Images when image has been tapped

We tapped on the first image.

focus on the image

Conclusion

We are not there yet, and the issue is complex because of the big number of versions served to different browsers/devices, but there is definite progress. At first sight, the version sent to Chrome is compatible with Firefox. We would still need to test while logged in, and all the corner cases of the tools and menus. But it's a lot, lot better than it used to be. We have never been this close to an acceptable user experience.

Bruce Lawson: Reading List

Bruce Lawson: I’ve got a new job!

TL;DR, I’m moving from Developer Relations to become Opera’s Deputy Chief Technology Officer. Or maybe Deputy Technology Officer, because “Deputy Chief” is almost oxymoronic. Anyway, call me “Bruce”; it’s more polite than what you usually call me.

Co-father of CSS Håkon Wium Lie continues to be CTO, and I’ll be working with him, the Communications Team, the product teams, and my lovely colleagues in devrel, to continue connecting the unconnected across the world.

In some ways, this is simply an evolution of what I’ve been doing for the last couple of years. In a more profound way it’s a return to basics.

My first real exposure to the Web came about working in Thailand in 1999, when I was convalescing after my diagnosis of Multiple Sclerosis. Because M.S. is very rare in Asia, I could find no English language information to tell me how quickly or painfully I would die.

But I’d read about this new-fangled Web thing, and there was an Internet Café near my apartment, so I typed in “Multiple Sclerosis” into Alta Vista and found something extraordinary: a community of people around the world supporting each other through their shared diagnosis on something called a “website” – and I could participate, too, from a café in Pratunam, Bangkok. All strangers, across the globe, coming together around a common theme and helping each other.

I knew immediately that I’d stumbled upon something amazing, something revolutionary, an undreamed-of way to communicate. As an English Literature graduate and ex-programmer, I was fascinated by both the communicative potential and the tech that drove it. By 2002, I was Brand Manager for a UK book company publishing books for web professionals, and our first, flagship book was on Web Accessibility.

From accessibility, I began to advocate the general concept of open web standards on my blog and with various employers, so that everyone could access the web. Then, after being invited to join Opera in 2008, I started advocating HTML5, so people could connect to an open web that could compete with the proprietary silos of Flash and iOS. After that, I began beating the drum for Media Queries and Responsive Design so that the people in developing nations (like I was in ’99), using affordable hand-held devices, could connect and enjoy the full web. Then I proposed the <picture> element (more accurately: a very naive precursor to it) so that people with limited funds for bandwidth could connect economically, too. Then I agitated, inside Opera and outside, for Progressive Web Apps, so people could have a great experience on the open web, not those pesky walled gardens.

The common thread is people and getting them connected to each other. This matters to me because that happened to me, 17 years ago (spoiler: and I didn’t die).

A third of a billion people use Opera’s products to get them online, fast and affordably. I want to be part of making that half a billion, then a billion, then more; not by stealing customers from competitors, but by opening up the web to people and places that currently have no access. That’s a lot of people; there’s a lot to be done. It’s a big job. I’m a n00b and I’m gonna fuck up from time-to-time.

Bring it on.

(Yikes.)

Steve Faulkner et al: The state of hidden content support in 2016

I have previously reported on browser and screen reader (SR) support for ARIA's aria-hidden and the HTML5 hidden attribute. The last time was 2 years ago; the original article, published 2 years before that in 2012, still gets lots of page views. As it's a subject that developers are interested in, here is an update.

Support for HTML5 hidden and aria-hidden in 2016

Summary

All major browsers and screen readers:

  • support the use of the hidden attribute to hide content
  • support the use of the CSS display:none to hide content
  • support the use of the aria-hidden attribute to hide visible content from screen reader users

The Details

Screen reader support for hidden content – tests and results for

  • Windows 10
    • Firefox 43
    • Internet Explorer 11
    • Chrome 47
    • JAWS 17
    • Window Eyes 9
    • NVDA 2015.4
    • Narrator 10
  • VoiceOver on iOS 9.2.1 (iPhone 6)
  • VoiceOver on OSX El Capitan
  • ChromeVox  on Chrome OS 47
  • Orca 3.16 on Linux

Notes:

In some browser and screen reader combinations, aria-hidden=false on an element that is hidden using the hidden attribute or CSS display:none results in the content being unhidden. This behaviour does not have consensus support among browser implementers, and I strongly recommend it not be used.
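
In other words, avoid markup like this (hypothetical) snippet, which will be exposed to screen reader users in some combinations but not in others:

<!-- Inconsistently supported: some browser/SR combinations will
     un-hide this content for screen reader users, others will not. -->
<div hidden aria-hidden="false">Hidden, or maybe not.</div>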

Why no Edge testing? The Edge browser does not yet have sufficient accessibility support for testing to be useful.

The hidden attribute hides content because browsers implement it with CSS display:none. If that default UA CSS is overridden, then aria-hidden=true will have to be used alongside the hidden attribute:

See the Pen gPvoNR by steve faulkner (@stevef) on CodePen.

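
As a minimal sketch of that pattern (my own illustration, assuming a site stylesheet that overrides the UA default):

<style>
  /* A site style that defeats the UA's [hidden] { display: none } rule */
  [hidden] { display: block; }
</style>

<!-- hidden alone no longer hides this, so mirror it for assistive tech -->
<div hidden aria-hidden="true">This content should stay hidden.</div>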

Tests and results on GitHub – issues and PRs welcome!

WHATWG blog: HTML Standard now more community-driven

It’s been several months now since maintenance of the HTML Standard moved from a mostly-private Subversion repository to the whatwg/html GitHub repository. This move has been even more successful than we hoped:

  • We now have thirty-seven contributors who have landed one or more patches, and have merged over 250 pull requests in total. That’s almost two new contributors each week since the move!
  • We’ve worked to curate a list of good first bugs to introduce newcomers to the community and the standard, and worked hard to improve the onboarding experience for building the standard.
  • With help from the community, the standard's gender pronoun disparity has been significantly reduced. (See The happy case of pronouns and HTML.)
  • With sponsorship from Mozilla, we have applied to Outreachy, and Richa Rupela is now helping us write the HTML Standard.
  • We have collaborated with TC39—who thankfully moved to GitHub around the same time—to remove some longstanding discrepancies between HTML and ECMAScript.
  • We've made many, many small fixes to better match the reality of what is implemented in browsers, mostly in response to feedback from browser developers.

Aside from defining the HTML language, the HTML Standard defines the processing model around script execution, the fundamentals of the web’s security model, the web worker API for parallel script execution, and many more aspects that are core to the web platform. If you are interested in helping out, please reach out on IRC or GitHub.

Anne van Kesteren: W3C forks HTML yet again

The W3C has forked the HTML Standard for the nth time. As always, it is pretty disastrous:

  • Erased all Git history of the document.
  • Did not document how they transformed the document. Issues of mismatches have already been reported, and since the process was not open, it will likely be a long time, if ever, before all the bugs it introduced are uncovered.
  • Did not discuss plans with the wider community.
  • Did not discuss plans with the folks they were forking from.
  • Did not even discuss plans with the members of the W3C Web Platform Working Group.
  • Erased the acknowledgments section.
  • Erased the copyright and licensing information and replaced it with their own.

So far this fork has been soundly ignored by the HTML community, which is as expected and desired. We hesitated to post this since we did not want to bring undeserved attention to the fork. But we wanted to make the situation clear to the web standards community, which might otherwise be getting the wrong message. Thus, proceed as before: the standards with green stylesheets are the up-to-date ones that should be used by implementers and developers, and referred to by other standards. They are where work on crucial bugfixes such as setting the correct flags for <img> fetches and exciting new features such as <script type=module> will take place.

If there are blockers preventing your organization from working with the WHATWG, feel free to reach out to us for help in resolving the matter. Deficient forks are not the answer.

— The editors of the HTML Standard

Planet Mozilla: 🙅 @media (-webkit-transform-3d)

@media (-webkit-transform-3d) is a funny thing that exists on the web.

It's like, a media query feature in the form of a prefixed CSS property, which should tell you if your (once upon a time probably Safari-only) browser supports 3D transforms, invented back in the day before we had @supports.

(According to Apple docs it first appeared in Safari 4, alongside the other -webkit-transition and -webkit-transform-2d hybrid-media-query-feature-prefixed-css-properties-things that you should immediately forget exist.)

Older versions of Modernizr used this (and only this) to detect support for 3D transforms, and that seemed pretty OK. (They also did the polite thing and tested @media (transform-3d), but no browser has ever actually supported that, as it turns out.) And because they're so consistently polite, they've since updated the test to prefer @supports too (via a pull request from Edge developer Jacob Rossi).

As it turns out, other browsers have since been updated to support 3D CSS transforms, but sites didn't go back and update their version of Modernizr. So unless you support @media (-webkit-transform-3d), these sites break. Niche websites like yahoo.com and about.com.

So, anyways. I added @media (-webkit-transform-3d) to the Compat Standard and we added support for it in Firefox so websites stop breaking.

But you shouldn't ever use it—use @supports. In fact, don't even share this blog post. Maybe delete it from your browser history just in case.
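
If it helps, here is roughly what the legacy test and its modern replacement look like side by side (illustrative selector and styles):

<style>
  /* The legacy, prefixed media query hack (don't do this) */
  @media (-webkit-transform-3d) {
    .spinny { -webkit-transform: rotateY(45deg) translateZ(100px); }
  }

  /* The modern feature query */
  @supports (transform: translateZ(0)) {
    .spinny { transform: rotateY(45deg) translateZ(100px); }
  }
</style>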

HTML5 Doctor: The woes of date input

One of the many new input types that HTML5 introduced is the date input type which, in theory, should allow a developer to provide the user with a simple, usable, recognisable method of entering a date on a web page. But sadly, this input type has yet to reach its full potential.


Briefly, the date input type is a form element that allows the capture of a date from a user, usually via a datepicker. The implementation of this datepicker is up to the browser vendor, as the HTML5 specification does not tell vendors how to implement the input’s UI.

The input itself can of course be used without using any of its available attributes:

<label for="when">Date:</label>
<input id="when" name="when" type="date">

Or you can specify minimum and maximum date values via the min and max attributes, ensuring that a user can only choose dates within a specific range:

<label for="when">Date:</label>
<input id="when" name="when" type="date" min="2016-01-01" max="2016-12-01">

You can also use the step attribute to specify, in days, how a date can increment. The default is 1.
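
For example, a date input that should (in theory) only accept weekly increments from its minimum could be written as:

<label for="when">Date:</label>
<input id="when" name="when" type="date" min="2016-01-01" step="7">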

This, of course, is the theory, and a fine reality it would be if it were so; but alas, it is not.


Features

Rather than talk about browser support for the date input type, I will instead talk about the various features that are part of this input type:


The input field

Neither Firefox nor Safari supports this input type; it is treated as a simple text field with no formatting and no special interaction.

Microsoft Edge has no special interactions and in fact the input field appears to be read-only.

Chrome and Opera share the same implementation, which displays a date placeholder (or the date value, if the input's value attribute has been set) using the date format from the user's system settings. There are also some controls that allow the user to clear the input field, arrows to cycle up and down between values, and an arrow that opens a datepicker (see Figure 1). Some WebKit-prefixed pseudo selectors are also available that allow you to change the appearance of the various parts of the date input field.

Figure 1. – WebKit’s date input field as seen on Chrome
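
For instance, a few of those WebKit-prefixed pseudo selectors (a non-exhaustive, illustrative sketch) can be used to restyle the field's inner parts:

<style>
  input[type="date"]::-webkit-datetime-edit { padding: 0.2em; }
  input[type="date"]::-webkit-datetime-edit-month-field { color: #06c; }
  input[type="date"]::-webkit-clear-button { display: none; }
</style>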

The min and max attributes

All of those browsers that support the date input type also support the min and max attributes.

Chrome and Opera both work fine if both the min and max attributes are set, but the UI is poor when only one of them is set. In these browsers, the date input displays up and down arrows for the user to change each value (day, month, and year). If a date input has a minimum value set at today's date, and the user highlights the year and clicks the down arrow, the year changes to 275760: the arrows cycle between the valid values, and since no maximum has been set, the default maximum is used (see Figure 2). This has been reported as a bug but marked as "won't fix", as apparently it is the desired behaviour. I disagree.

Figure 2. – WebKit’s buggy date input field with a min value set but no max

Something similar happens when only a max value is set: pressing the up arrow cycles around to the year 0001, which is the default minimum. Again, this leads to very confusing UX, so please use both min and max where possible.

With Android, the setting of these attributes can cause some weird quirks with the datepicker.


The step attribute

None of the browsers tested support the step attribute.


The datepicker

Firefox and Safari do not support the date input type and therefore do not open a datepicker on desktop. But they do open a device’s native datepicker on mobile devices when the date input field receives focus.

Microsoft Edge opens a datepicker, but its UI is very poor (see Figure 3).

Figure 3. – Microsoft Edge’s native datepicker (a list of selectable dates with one highlighted, and two buttons at the bottom, one with a tick and the other with an X)

Chrome and Opera open a fairly decent datepicker on desktop, but you cannot style it at all. It looks how it looks (see Figure 4), and there's nothing you can do about it, no matter what your designer says. With these browsers you can disable the native datepicker if you want and implement your own, although the disabling will not work on mobile devices. This gives you the best of both worlds: you can implement a nicer datepicker for desktop and let a device use its own native datepicker.

Figure 4. – WebKit’s native datepicker and date input field as seen on Chrome
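
A common way to disable the native controls (which may or may not be the exact method linked above) is to hide the WebKit picker pieces:

<style>
  /* Hide Chrome/Opera's native picker controls so a custom datepicker can be used */
  input[type="date"]::-webkit-calendar-picker-indicator { display: none; }
  input[type="date"]::-webkit-inner-spin-button { -webkit-appearance: none; }
</style>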

All the native datepickers that I tested on Android exhibited some confusing display quirks when valid min and max attributes are set. A user is correctly prevented from selecting a date outside the specified range, but the visual datepickers often display "previous", "current", and "next" values for day, month, and year, and the values displayed for "previous" are either empty (in the case of the month) or set to 9999 (in the case of the year; see Figure 5).

Figure 5. – The native datepicker on Android 4.3, with a selected date and the previous year displayed as 9999

Events

Like most of the other input fields, the date input supports the input and change events. The input event is fired every time the user interacts with the input field, while the change event is usually only fired when the value is committed, i.e. when the input field loses focus.

For those browsers mentioned above that support the native date input field, no specific events are fired that will allow you to determine if the user has done anything “date specific” such as opened or closed the datepicker, or cleared the input field. You can work out if the field has been cleared by listening for the input or change events and then checking the input field’s value.
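
For instance, a clear-detection sketch might look like this (reusing the input from the earlier examples):

<script>
  var when = document.getElementById('when');
  when.addEventListener('input', function () {
    if (when.value === '') {
      // The field was cleared (or holds an incomplete/invalid date,
      // which also reports an empty value).
    }
  });
</script>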

If you have an event listener set up for the change event and one for the input event, change will never actually fire, as input takes precedence.


Conclusion

So, what can we deduce from all this? Should we use the date input type or not? As usual, the answer is "it depends": only you know what you're building and how and where it will be used, so it might fit your needs perfectly, or it might not.

I think it's generally safe to use the date input type when min and max do not need to be set, as the quirks when these attributes are used are too irritating to ignore. Since not all browsers support a datepicker, and you will have to get one from somewhere else to support those browsers, I would also recommend turning off the native datepicker for all desktop browsers (using the method linked to above), as this will still allow devices to use their own.

Have fun!


The woes of date input originally appeared on HTML5 Doctor on January 19, 2016.

Planet Mozilla: Okay, But What Does Your Work Actually Mean, Nikki? Part 2: The Fetch Standard and Servo

In my previous post, I started discussing in more detail what my internship entails, by talking about my first contribution to Servo. As a refresher, my first contribution was as part of my application to Outreachy, which I later revisited during my internship after a change I introduced to the HTML Standard it relied on. I’m going to expand on that last point today- specifically, how easy it is to introduce changes in WHATWG’s various standards. I’m also going to talk about how this accessibility to changing web standards affects how I can understand it, how I can help improve it, and my work on Servo.

Two Ways To Change

There are many ways to get involved with WHATWG, but there are two that I’ve become most familiar with: first, opening a discussion about a perceived issue and asking how it should be resolved; second, taking on an issue approved as needing change and making the desired change. I’ve almost entirely done the former, and the latter only for some minor typos. Any changes that relate directly to my work, however minor, are significant for me, though! Like I discussed in my previous post, I brought attention to an inconsistency that was resolved, giving me a new task of updating my first contribution to Servo to reflect the change in the HTML Standard. I’ve done that several times since, for the Fetch Standard.

Understanding Fetch

The first two weeks of my internship were spent reading through the majority of the Fetch Standard, primarily the various Fetch functions. I took many notes describing the steps to myself, annotated with questions I had and the answers I got, either from other people on the Servo team who had worked with Fetch (including my internship mentor, of course!) or from people at WHATWG who were involved in the Fetch Standard. Getting so familiar with Fetch meant a few things: I would notice minor errors (such as an out-of-date link) that I could submit a simple fix for, or bigger issues that I couldn’t resolve myself.

Discussions & Resolutions

I’m going to go into more detail about some of those bigger issues. From my perspective, when I start a discussion about a piece of documentation (such as the Fetch Standard, or reading about a programming library Servo uses), I go into it thinking “Either this documentation is incorrect, or my understanding is incorrect”. Whichever the answer is, it doesn’t mean that the documentation is bad, or that I’m bad at reading comprehension. I understand best by building up a model of something in my head, putting that to practice, and asking a lot of questions along the way. I learn by getting things wrong and figuring out why I was wrong, and sometimes in the process I uncover a point that could be made more clear, or an inconsistency! I have good examples of both of the different outcomes I listed, which I’ll cover over the next two sections.

Looking For The Big Picture

Early on in my initial review of the Fetch Standard’s several protocols, I found a major step that seemed to have no use. I understood that since I was learning Fetch on a step-by-step basis, I did not have a view of the bigger picture, so I asked around what I was missing that would help me understand this. One of the people I work with on implementing Fetch agreed with me that the step seemed to have no purpose, and so we decided to open an issue asking about removing it from the standard. It turned out that I had actually missed the meaning of it, as we learned. However, instead of leaving it there, I shifted the issue into asking for some explanatory notes on why this step is needed, which was fulfilled. This meant that I would have a reference to go back to should I forget the significance of the step, and that people reading the Fetch Standard in the future would be much less likely to come to the same incorrect conclusion I had.

A Confusing Order

Shortly after I had first discovered that apparent issue, I found myself struggling to comprehend a sequence of actions in another Fetch protocol. The specification seemed to say that part of an early step was meant to only be done after the final step. I unfortunately don’t remember details of the discussion I had about this- if there was a reason for why it was organized like this, I forget what it was. Regardless, it was agreed that moving those sub-steps to be actually listed after the step they’re supposed to run after would be a good change. This meant that I would need to re-organize my notes to reflect the re-arranged sequence of actions, as well as have an easier time being able to follow this part of the Fetch Standard.

A Living Standard

Like I said at the start of this post, I’m going to talk about how changes in the Fetch Standard affect my work on Servo itself. What I’ve covered so far has mostly been how changes affect my understanding of the standard. A key aspect of understanding the Fetch protocols is reviewing them for updates that impact me. WHATWG labels every standard they author as a “Living Standard” for good reason. It was one thing for me to learn how easy it is to introduce changes while knowing exactly what’s going on, but it’s another to understand that anybody else can, and often does, make changes to the Fetch Standard!

Changes Over Time

When an update is made to the Fetch Standard, it’s not as difficult to deal with as one might imagine. The Fetch Standard always notes the day it was last updated at the top of the document; I follow a Twitter account that posts about updates; and all the history can be seen on GitHub, which shows me exactly what has been changed, along with some discussion of what the change does. Together, these alert me to the fact that the Fetch Standard has been modified, and I can quickly see what was revised. If it’s relevant to what I’m going to be implementing, I update my notes to match it. Occasionally, I need to change existing code to reflect the new standard, which is also easily done by comparing my new notes to the Fetch implementation in Servo!

Snapshots

From all of this, it might sound like the Fetch Standard is unfinished, or unreliable/inconsistent. I don’t mean to misrepresent it- the many small improvements help make the Fetch Standard, like all of WHATWG’s standards, better and more reliable. You can think of the status of the Fetch Standard at any point in time as a single, working snapshot. If somebody implemented all of Fetch as it is now, they’d have something that works by itself correctly. A different snapshot of Fetch is just that- different. It will have an improvement or two, but that doesn’t obsolete anybody who implemented it previously. It just means if they revisit the implementation, they’ll have things to update.

Third post over.

Planet Mozilla: Do you need to install Flash anymore?

If someone asks “Do I need Java“, my answer is a) most people don’t need it, and b) to find out if you need it, remove it. I did that many years ago and haven’t needed it. I’ve been hoping to reach the same point with Flash. I’d try disabling it, but there are two sites I regularly visit, which sometimes require Flash – Youtube and Facebook (for videos). Last year, Youtube switched to HTML5, and recently I found that Facebook started using HTML5 for videos, so I decided to try disabling Flash again. This time, I was pleasantly surprised at how many websites no longer use Flash.

Using Firefox on a late 2013 Macbook Pro, here is a list of sites I’ve found work well with Flash disabled:

  • Youtube
  • Facebook
  • Thestar.com
  • Instagram
  • CNN
  • CNET
  • Vimeo
  • Dailymotion

There are still some holdouts. In my case, I’m really affected by CTV Toronto News requiring Flash. I also wanted to watch an episode of Comedians In Cars Getting Coffee, and that required Flash. Others:

  • BBC
  • Hulu

I emailed CTV, and here’s the response:
At this time we currently do not have any future plans to support HTML5. Regardless, your comments have been forwarded to our technical team for review.

I’ve decided to switch back to thestar.com for local [Toronto] news, now that they’re over their Rob Ford obsession.

And with that, I can keep Flash disabled. Every now and then I may require it to view some web content, but for the most part, I don’t need it.

Flash has been thought of as a must-have plugin, but after disabling it, I found that wasn’t the case for me. A lot of the web has already switched to HTML5. Try disabling Flash for yourself, and enjoy so much more battery life!

Planet Mozilla: Jamie Charlton: a versatile and engaged contributor

Jamie Charlton has been involved with Mozilla since 2014 and contributes to various parts of the Mozilla project, including Firefox OS. He is from Wassaic, New York.


Hi Jamie! How did you discover the Web?

For the most part I grew up with the web, but mainly got into the web when I was 13-14 and I wanted to learn how to work with and code for the web.

How did you hear about Mozilla?

For the most part I always use Firefox due to the security settings and found out about Mozilla from Firefox.

How and why did you start contributing to Mozilla?

The irony of dumb luck – I started contributing to Mozilla on December 22, 2014. I had installed Firefox Aurora because it was supposed to support more of the up-and-coming HTML5 gaming features and I wanted to see how well they worked. Then I accidentally stumbled upon WebIDE, was really intrigued by Firefox OS, and filed my first bug that day.

Jamie (center) at the Work Week in Orlando, FL in December 2015.

Have you contributed to any other Mozilla projects in any other way?

Yes, several. I am a Mozilla rep, and I also help out in IRC running the nightly channel helping people with questions regarding anything to do with a nightly version of anything from Thunderbird, Firefox OS nightly, to Firefox nightly – anything anyone asks about.

What’s the contribution you’re the most proud of?

Odd question… none actually, to me this is just a lot of fun.

What advice would you give to someone who is new and interested in contributing to Mozilla?

Dive right in and don’t be afraid to mess up; if you mess up, someone will help you fix it. First-hand experience is the best way to learn and get involved.

If you had one word or sentence to describe Mozilla, what would it be?

Fun!

What exciting things do you envision for you and Mozilla in the future?

Well hopefully some more fun and a lot more projects based around Firefox OS.

Is there anything else you’d like to say or add to the above questions?

Have fun, and don’t take anything seriously – it’s more fun that way.


The Mozilla QA and Firefox OS teams would like to thank Jamie for his contributions.

Jamie has been extremely helpful in updating documentation and in filing Firefox OS bugs. – Marcia Knous

Dedicated and highly enthusiastic, Jamie_ has had a profound impact in QA. He has been learning and growing since he joined us, and continues to help Mozilla better the web. He also helps Mozilla grow and tries to recruit QA contributors locally.

Jamie_, we salute you and thank you for your contributions. We hope that there will be many more to come. – Naoki Hirata

Planet Mozilla: Looking for a new challenge

World War II poster with get excited and make things text

Creative Commons: https://flic.kr/p/6bLxDe

It’s the beginning of a new year, which means a blank slate to move forward, improve yourself, and enhance your life. Personally, I’m starting the year looking for a new full-time job!

Over the last six months, I had the pleasure of working with the talented folks at IMMUNIO. However, the company is prioritizing marketing activities other than evangelism (the CEO can give you more information). It became apparent that this new direction wouldn’t give me the opportunity to use my passion and expertise to produce the impact I would like and the results they need, so my position was put on hold.

What’s next

I’ve been a Technical/Developer Evangelist/Advisor/Relations person (whatever you want to call it) for five years now. I’ve built my experience (see my LinkedIn profile – I don’t have a traditional resume) at companies like Microsoft and Mozilla, but I’m open to discussing any other type of role, technical or not, where my experience can help a business achieve its goals. My only criteria? A role that gets me excited and where I’ll make things happen: without creativity, passion, and ways to challenge myself, it won’t be a good fit for either of us. On the compensation side, let’s be honest: I also have a lifestyle I would like to keep.

I’m fine with extensive travel and remote working, as that’s what I’ve done for the last couple of years, but because of health issues in my family, I cannot move from Montreal (Canada). Note that I don’t want to go back to being a full-time developer.

Some of my experience includes:

Feel free to read other articles on this blog and take a closer look at my primary social media accounts (Twitter, Facebook and LinkedIn). You’ll find a passionate, honest and bold person.

Contact me

I firmly intend to find a company where I’ll be able to grow over the next couple of years. If you think we can work together, please send me an email with some information about the role.
