- A Web compat bug which can be fixed by adding a CSS property, but which also led to the opening of a Gecko bug about `max-width`.
- The sad comment of the day was in that bug about fonts on the Nokia site:
> Forgive me to say this, but... Sadly, FF's market has been dropping and keeps dropping, losing to Chrome. At this point "works in Chrome, used to work in FF" should mean "fix it in FF asap". An end user only sees FF doesn't work, Chrome does. They don't care about the technical reasons.
- GetPocket displays an interesting issue on Firefox for Android. The bug was opened a little while ago, but I took the time to create an account and investigate. It sounded familiar; it may be related to Bug 1244192.
- Two weeks ago, there was an issue about a BOM breaking a templating system in Dojo. Henri Sivonen looked at it and indeed found a bug in the XHR implementation in Firefox.
- Sometimes another team at Mozilla will ask the Webcompat team for help contacting site owners to fix an issue on their Web site that hinders the user experience in Firefox. Let's go through some tips to maximize the chances of getting results when we do outreach.
- When looking for usage of `-webkit-mask-*`, I remembered that Google Image was a big offender. So I tested again this morning and… delight! They now use SVG. Now I need to test Google search extensively and check whether they can just send us the version they send to Chrome.
- Testing Google Calendar again on Gecko with a Chrome user agent to see how close we are to a better user experience. We can't yet ask Google to send us the same thing they send to Chrome; there are a couple of glitches here and there, but we are very close. The best outcome would be for Google to fix their CSS, specifically to make their flexbox and gradients standards-compatible.
- The code for the `max-width` issue (not a bug, but an implementation difference due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. This makes me happy: it should resolve a lot of the Webcompat bug reports. Look at the list of See Also bugs in that bug.
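One of the items above mentioned a BOM breaking a templating system. A minimal Python sketch (not the Dojo or Gecko code, just an illustration of the failure mode) shows how a stray UTF-8 BOM makes an otherwise valid document unparseable:

```python
import json

payload = '{"template": "hello"}'
with_bom = "\ufeff" + payload  # a BOM silently prepended, e.g. by a bad decode step

# The clean payload parses fine.
json.loads(payload)

# The same payload with a leading BOM does not: the parser sees
# U+FEFF as garbage before the document even starts.
try:
    json.loads(with_bom)
    bom_broke_parsing = False
except json.JSONDecodeError:
    bom_broke_parsing = True
```

The same kind of invisible one-character difference is what made the Dojo bug so confusing to diagnose.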
Webcompat Life and Working with Developer Tools
- Changing preferences all the time through about:config takes multiple steps. I liked how Opera Presto let you link to a specific preference, so I filed a bug for Firefox. RESOLVED. It already exists: `about:config?filter=webkit`, and it's bookmarkable.
- Bug 1245365 - searching attribute values through CSS selectors override the search terms
- A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.
Firefox OS Bugs to Firefox Android Bugs
- The Web Compatibility employees at Mozilla have reduced their work on Firefox OS bugs to a minimum. The community is welcome to continue working on them, but some of these bugs still have an impact on Firefox Android. A good example is Bug 959137. Let's come up with a process for dealing with those.
- Another todo from last week. I have been closing a lot of old bugs (around 600 in a couple of days) for Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them were mostly:
- the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
- the site fixed the initial issue
- setting `layout.css.prefixes.webkit` to `true` fixes it (see Bug 1213126)
- the site has moved to a responsive design
Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block
This bug was simple at the beginning, but providing the fix broke other tests. That's normal. Boris explained which parts of the code were impacted, but I don't feel I'm good enough yet to touch this; it would require patience and step-by-step guidance. It could be interesting, though. I have the feeling I have too much on my plate right now. So, a bug to take over!
Testing Google Search On Gecko With Different UA Strings
So last week, I gave myself a todo: "testing Google search properties and see if we can find a version which is working better on Firefox Android than the current default version sent by Google. Maybe testing with Chrome UA and iPhone UA." My preliminary tests sound pretty good.
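This kind of testing can be scripted for repeatability. A hedged sketch with Python's urllib; the UA strings below are illustrative stand-ins for the era's browsers, not the exact strings used in the tests:

```python
from urllib.request import Request

# Illustrative User-Agent strings (assumptions, not the exact ones tested).
UAS = {
    "firefox-android": "Mozilla/5.0 (Android 6.0; Mobile; rv:45.0) Gecko/45.0 Firefox/45.0",
    "chrome-android": ("Mozilla/5.0 (Linux; Android 6.0; Nexus 5) AppleWebKit/537.36 "
                       "(KHTML, like Gecko) Chrome/48.0.2564.95 Mobile Safari/537.36"),
    "safari-iphone": ("Mozilla/5.0 (iPhone; CPU iPhone OS 9_2 like Mac OS X) "
                      "AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 "
                      "Mobile/13C75 Safari/601.1"),
}

def make_request(url, ua_key):
    """Build a request pretending to be the given browser."""
    return Request(url, headers={"User-Agent": UAS[ua_key]})

# urlopen(req) would then fetch whatever version Google serves to Chrome,
# which can be diffed against the Firefox Android response.
req = make_request("https://www.google.com/search?q=test", "chrome-android")
```

Comparing the markup and CSS served to each UA is what reveals whether a site is sniffing user agents rather than detecting features.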
- Frameworks vs libraries (or: process shifts at Mozilla).
- Mozilla's internal dialog on HTTPS by Dave Winer, who's not happy about the move to HTTPS in general. He is worried, and it's understandable, because the announcements about "burning down everything non-HTTP" were not followed up with regular status updates on a milder position. Google also has a worrying Marking HTTP As Non-Secure page. That said, the Mozilla bug had no activity for more than one year, which in bug-tracker terms may mean "wait long enough and it will die by itself". The push from HTTP to HTTPS should not be a goal in itself; more exactly, we should not equate a technology with user security and choice. Maybe there are better and more accessible ways, for both Web developers and users, to create the same secure environment. Let's Encrypt is a very interesting initiative, but it still has too many implications for development work down the road.
- Discussion in the CSS Working Group about Web Compatibility for `0deg` angles. The resolution: "Angles can drop unit when value is 0"
- An email conversation summary visualization. Mail is something I like very much, but I manage it in a way that's a bit different from many people. I also think the email apps on mobile are just dumb, and that may be part of the hate people have for email. If you send notification mail to your email address, give yourself a gift… use filtering.
- Mike published A quiz about ES2015 block-scoped function declarations (in a with block statement).
Follow Your Nose
Linux distributions have a problem with WebKit security.
Major desktop browsers push automatic security updates directly to users on a regular basis, so most users don’t have to worry about security updates. But Linux users are dependent on their distributions to release updates. Apple fixed over 100 vulnerabilities in WebKit last year, so getting updates out to users is critical.
This is the story of how that process has gone wrong for WebKit.
Before we get started, a few disclaimers. I want to be crystal clear about these points:
- This post does not apply to WebKit as used in Apple products. Apple products receive regular security updates.
- WebKitGTK+ releases regular security updates upstream. It is safe to use so long as you apply the updates.
- The opinions expressed in this post are my own, not my employer’s, and not the WebKit project’s.
Browser Security in a Nutshell
Web engines are full of security vulnerabilities, like buffer overflows, null pointer dereferences, and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.
If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities. Chromium has a top-class Linux sandbox. WebKit does have a Linux sandbox, but it’s not any good, so it’s (rightly) disabled by default. Firefox does not have a sandbox due to major architectural limitations (which Mozilla is working on).
For this blog post, it’s enough to know that attackers use crafted input to exploit vulnerabilities to gain control of your computer. This is why it’s not a good idea to browse to dodgy web pages. It also explains how a malicious email can gain control of your computer. Modern email clients render HTML mail using web engines, so malicious emails exploit many of the same vulnerabilities that a malicious web page might. This is one reason why good email clients block all images by default: image rendering, like HTML rendering, is full of security vulnerabilities. (Another reason is that images hosted remotely can be used to determine when you read the email, violating your privacy.)
To understand WebKit security, you have to understand the concept of WebKit ports, because different ports handle security updates differently.
While most code in WebKit is cross-platform, there’s a large amount of platform-specific code as well, to improve the user and developer experience in different environments. Different “ports” run different platform-specific code. This is why two WebKit-based browsers, say, Safari and Epiphany (GNOME Web), can display the same page slightly differently: they’re using different WebKit ports.
Currently, the WebKit project consists of six different ports: one for Mac, one for iOS, two for Windows (Apple Windows and WinCairo), and two for Linux (WebKitGTK+ and WebKitEFL). There are some downstream ports as well; unlike the aforementioned ports, downstream ports are, well, downstream, and not part of the WebKit project. The only one that matters for Linux users is QtWebKit.
If you use Safari, you’re using the Mac or iOS port. These ports get frequent security updates from Apple to plug vulnerabilities, which users receive via regular updates.
Everything else is broken.
Since WebKit is not a system library on Windows, Windows applications must bundle WebKit, so each application using WebKit must be updated individually, and updates are completely dependent on the application developers. iTunes, which uses the Apple Windows port, does get regular updates from Apple, but beyond that, I suspect most applications never get any security updates. This is a predictable result, the natural consequence of environments that require bundling libraries.
(This explains why iOS developers are required to use the system WebKit rather than bundling their own: Apple knows that app developers will not provide security updates on their own, so this policy ensures every iOS application rendering HTML gets regular WebKit security updates. Even Firefox and Chrome on iOS are required to use the system WebKit; they’re hardly really Firefox or Chrome at all.)
The same scenario applies to the WinCairo port, except this port does not have releases or security updates. Whereas the Apple ports have stable branches with security updates, with WinCairo, companies take a snapshot of WebKit trunk, make their own changes, and ship products with that. Who’s using WinCairo? Probably lots of companies; the biggest one I’m aware of uses a WinCairo-based port in its AAA video games. It’s safe to assume few to no companies are handling security backports for their downstream WinCairo branches.
Now, on to the Linux ports. WebKitEFL is the WebKit port for the Enlightenment Foundation Libraries. It’s not going to be found in mainstream Linux distributions; it’s mostly used in embedded devices produced by one major vendor. If you know anything at all about the internet of things, you know these devices never get security updates, or if they do, the updates are superficial (updating only some vulnerable components and not others), or end a couple months after the product is purchased. WebKitEFL does not bother with pretense here: like WinCairo, it has never had security updates. And again, it’s safe to assume few to no companies are handling security backports for their downstream branches.
None of the above ports matter for most Linux users. The ports available on mainstream Linux distributions are QtWebKit and WebKitGTK+. Most of this blog will focus on WebKitGTK+, since that’s the port I work on, and the port that matters most to most of the people who are reading this blog, but QtWebKit is widely-used and deserves some attention first.
It’s broken, too.
QtWebKit is the WebKit port used by Qt software, most notably KDE. Some cherry-picked examples of popular applications using QtWebKit are Amarok, Calligra, KDevelop, KMail, Kontact, KTorrent, Quassel, Rekonq, and Tomahawk. QtWebKit provides an excellent Qt API, so in the past it’s been the clear best web engine to use for Qt applications.
After Google forked WebKit, the QtWebKit developers announced they were switching to work on QtWebEngine, which is based on Chromium, instead. This quickly led to the removal of QtWebKit from the WebKit project. This was good for the developers of other WebKit ports, since lots of Qt-specific code was removed, but it was terrible for KDE and other QtWebKit users. QtWebKit is still maintained in Qt and is getting some backports, but from a quick check of their git repository it’s obvious that it’s not receiving many security updates. This is hardly unexpected; QtWebKit is now years behind upstream, so providing security updates would be very difficult. There’s not much hope left for QtWebKit; these applications have hundreds of known vulnerabilities that will never be fixed. Applications should port to QtWebEngine, but for many applications this may not be easy or even possible.
Update: As pointed out in the comments, there is some effort to update QtWebKit. I was aware of this and in retrospect should have mentioned this in the original version of this article, because it is relevant. Keep an eye out for this; I am not confident it will make its way into upstream Qt, but if it does, this problem could be solved.
WebKitGTK+ is the port used by GTK+ software. It’s most strongly associated with its flagship browser, Epiphany, but it’s also used in other places. Some of the more notable users include Anjuta, Banshee, Bijiben (GNOME Notes), Devhelp, Empathy, Evolution, Geany, Geary, GIMP, gitg, GNOME Builder, GNOME Documents, GNOME Initial Setup, GNOME Online Accounts, GnuCash, gThumb, Liferea, Midori, Rhythmbox, Shotwell, Sushi, and Yelp (GNOME Help). In short, it’s kind of important, not only for GNOME but also for Ubuntu and Elementary. Just as QtWebKit used to be the web engine of choice for Qt applications, WebKitGTK+ is the clear choice for GTK+ applications due to its nice GObject APIs.
Historically, WebKitGTK+ has not had security updates. Of course, we released updates with security fixes, but not with CVE identifiers, which is how software developers track security issues; as far as distributors are concerned, without a CVE identifier, there is no security issue, and so, with a few exceptions, distributions did not release our updates to users. For many applications, this is not so bad, but for high-risk applications like web browsers and email clients, it’s a huge problem.
So, we’re trying to improve. Early last year, my colleagues put together our first real security advisory with CVE identifiers; the hope was that this would encourage distributors to take our updates. This required data provided by Apple to WebKit security team members on which bugs correspond to which CVEs, allowing the correlation of Bugzilla IDs to Subversion revisions to determine in which WebKitGTK+ release an issue has been fixed. That data is critical, because without it, there’s no way to know if an issue has been fixed in a particular release or not. After we released this first advisory, Apple stopped providing the data; this was probably just a coincidence due to some unrelated internal changes at Apple, but it certainly threw a wrench in our plans for further security advisories.
This changed in November, when I had the pleasure of attending the WebKit Contributors Meeting at Apple’s headquarters, where I was finally able to meet many of the developers I had interacted with online. At the event, I gave a presentation on our predicament, and asked Apple to give us information on which Bugzilla bugs correspond to which CVEs. Apple kindly provided the necessary data a few weeks later.
During the Web Engines Hackfest, a yearly event that occurs at Igalia’s office in A Coruña, my colleagues used this data to put together WebKitGTK+ Security Advisory WSA-2015-0002, a list of over 130 vulnerabilities disclosed since the first advisory. (The Web Engines Hackfest was sponsored by Igalia, my employer, and by our friends at Collabora. I’m supposed to include their logos here to advertise how cool it is that they support the hackfest, but given all the doom and gloom in this post, I decided they would perhaps prefer not to have their logos attached to it.)
Note that 130 vulnerabilities is an overcount, as it includes some issues that are specific to the Apple ports. (In the future, we’ll try to filter these out.) Only one of the issues — a serious error in the networking backend shared by WebKitGTK+ and WebKitEFL — resided in platform-specific code; the rest of the issues affecting WebKitGTK+ were all cross-platform issues. This is probably partly because the trickiest code is cross-platform code, and partly because security researchers focus on Apple’s ports.
Anyway, we posted WSA-2015-0002 to the oss-security mailing list to make sure distributors would notice, crossed our fingers, and hoped that distributors would take the advisory seriously. That was one month ago.
There are basically three different approaches distributions can take to software updates. The first approach is to update to the latest stable upstream version as soon as, or shortly after, it’s released. This is the strategy employed by Arch Linux. Arch does not provide any security support per se; it’s not necessary, so long as upstream projects release real updates for security problems and not simply patches. Accordingly, Arch almost always has the latest version of WebKitGTK+.
The second main approach, used by Fedora, is to provide only stable release updates. This is more cautious, reflecting that big updates can break things, so they should only occur when upgrading to a new version of the operating system. For instance, Fedora 22 shipped with WebKitGTK+ 2.8, so it would release updates to new 2.8.x versions, but not to WebKitGTK+ 2.10.x versions.
The third approach, followed by most distributions, is to take version upgrades only rarely, or not at all. For smaller distributions this may be an issue of manpower, but for major distributions it’s a matter of avoiding regressions in stable releases. Holding back on version updates actually works well for most software. When security problems arise, distribution maintainers for major distributions backport fixes and release updates. The problem is that this is not feasible for web engines; due to the huge volume of vulnerabilities that need to be fixed, security issues can only practically be handled upstream.
So what’s happened since WSA-2015-0002 was released? Did it convince distributions to take WebKitGTK+ security seriously? Hardly. Fedora is the only distribution that has made any changes in response to WSA-2015-0002, and that’s because I’m one of the Fedora maintainers. (I’m pleased to announce that we have a 2.10.7 update headed to both Fedora 23 and Fedora 22 right now. In the future, we plan to release the latest stable version of WebKitGTK+ as an update to all supported versions of Fedora shortly after it’s released upstream.)
Ubuntu releases WebKitGTK+ updates somewhat inconsistently. For instance, Ubuntu 14.04 came with WebKitGTK+ 2.4.0. 2.4.8 is available via updates, but even though 2.4.9 was released upstream over eight months ago, it has not yet been released as an update for Ubuntu 14.04.
By comparison, Ubuntu 15.10 (the latest release) shipped with WebKitGTK+ 2.8.5, which has never been updated; it’s affected by about 40 vulnerabilities fixed in the latest upstream release. Ubuntu organizes its software into various repositories, and provides security support only to software in the main repository. This version of WebKitGTK+ is in Ubuntu’s “universe” repository, not in main, so it is excluded from security support. Ubuntu users might be surprised to learn that a large portion of Ubuntu software is in universe and therefore excluded from security support; this is in contrast to almost all other distributions, which typically provide security updates for all the software they ship.
I’m calling out Ubuntu here not because it is especially negligent, but simply because it is our biggest distributor. It’s not doing any worse than most of our other distributors.
Debian provides WebKit updates to users running unstable, and to testing except during freeze periods, but not to released versions of Debian. Debian is unique in that it has a formal policy on WebKit updates. Here it is, reproduced in full:
Debian 8 includes several browser engines which are affected by a steady stream of security vulnerabilities. The high rate of vulnerabilities and partial lack of upstream support in the form of long term branches make it very difficult to support these browsers with backported security fixes. Additionally, library interdependencies make it impossible to update to newer upstream releases. Therefore, browsers built upon the webkit, qtwebkit and khtml engines are included in Jessie, but not covered by security support. These browsers should not be used against untrusted websites.
For general web browser use we recommend Iceweasel or Chromium.
Chromium – while built upon the Webkit codebase – is a leaf package, which will be kept up-to-date by rebuilding the current Chromium releases for stable. Iceweasel and Icedove will also be kept up-to-date by rebuilding the current ESR releases for stable.
(Iceweasel and Icedove are Debian’s de-branded versions of Firefox and Thunderbird, the product of an old trademark spat with Mozilla.)
Debian is correct that we do not provide long term support branches, as it would be very difficult to backport security fixes. But it is not correct that “library interdependencies make it impossible to update to newer upstream releases.” This might have been true in the past, but for several years now, we have avoided requiring new versions of libraries whenever it would cause problems for distributions, and — with one big exception that I will discuss below — we ensure that each release maintains both API and ABI compatibility. (Distribution maintainers should feel free to get in touch if we accidentally introduce some compatibility issue for your distribution; if you’re having trouble taking our updates, we want to help. I recently worked with openSUSE to make sure WebKitGTK+ can still be compiled with GCC 4.8, for example.)
The risk in releasing updates is that WebKitGTK+ is not a leaf package: a bad update could break some application. This seems to me like a good reason for application maintainers to carefully test the updates, rather than a reason to withhold security updates from users, but it’s true there is some risk here. One possible solution would be to have two different WebKitGTK+ packages, say, webkitgtk-secure, which would receive updates and be used by high-risk software like web browsers and email clients, and a second webkitgtk-stable package that would not receive updates to reduce regression potential.
We regularly receive bug reports from users with very old versions of WebKit, who trust their distributors to handle security for them and might not even realize they are running ancient, unsafe versions of WebKit. I strongly recommend using a distribution that releases WebKitGTK+ updates shortly after they’re released upstream. That is currently only Arch and Fedora. (You can also safely use WebKitGTK+ in Debian testing — except during its long freeze periods — and Debian unstable, and maybe also in openSUSE Tumbleweed. Just be aware that the stable releases of these distributions are currently not receiving our security updates.) I would like to add more distributions to this list, but I’m currently not aware of any more that qualify.
The Great API Break
So, if only distributions would ship the latest release of WebKitGTK+, then everything would be good, right? Nope, because of a large API change that occurred two and a half years ago, called WebKit2.
WebKit (an API layer within the WebKit project) and WebKit2 are two separate APIs around WebCore. WebCore is the portion of the WebKit project that Google forked into Blink; it’s too low-level to be used directly by applications, so it’s wrapped by the nicer WebKit and WebKit2 APIs. The difference between the WebKit and WebKit2 APIs is that WebKit2 splits work into multiple secondary processes. Aside from the UI process, an application will have one or many separate web processes (for the actual page rendering), possibly a separate network process, and possibly a database process for IndexedDB. This is good for security, because it allows the secondary processes to be sandboxed: the web process is the one that’s likely to be compromised first, so it should not have the ability to access the filesystem or the network. (Remember, though, that there is no Linux sandbox yet, so this is currently only a theoretical benefit.) The other main benefit is robustness. If a web site crashes the renderer, only a single web process crashes (corresponding to one tab in Epiphany), not the entire browser. UI process crashes are comparatively rare.
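The robustness benefit is easy to demonstrate outside WebKit. A toy Python sketch (nothing WebKit-specific here): each "web process" is a child process, and a crash in one is merely reported to the "UI process" rather than taking it down.

```python
import subprocess
import sys

def render_in_child(snippet):
    """Run untrusted 'rendering' work in a separate process; return its exit code."""
    return subprocess.run([sys.executable, "-c", snippet]).returncode

healthy = render_in_child("print('page rendered')")   # a well-behaved page
crashed = render_in_child("import os; os._exit(11)")  # a simulated renderer crash

# The UI process survives either way: it sees a dead child and can show a
# "tab crashed" page instead of disappearing entirely.
```

In a single-process design, the equivalent of that `os._exit(11)` would have been the whole browser vanishing.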
Intermission: Certificate Verification
Another advantage provided by the API change is the opportunity to handle HTTPS connections more securely. In the original WebKitGTK+ API, applications must handle certificate verification on their own. This was a serious mistake; predictably, applications performed no verification at all, or did so improperly. For instance, take this Shotwell bug which is not fixed in any released version of Shotwell, or this Banshee bug which is still open. Probably many more applications are affected, because I have not done a comprehensive check. The new API is secure by default; applications can ignore verification errors, but only if they go out of their way to do so.
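Python's ssl module went through the same "secure by default" shift, and it makes a compact illustration of the API design the new WebKitGTK+ API adopted (this is an analogy in Python, not WebKit code): verification is on by default, and ignoring it requires deliberate, visible steps.

```python
import ssl

# A default context verifies the peer's certificate chain and hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# Opting out of verification requires explicit, visible code, which is the
# analogue of a WebKit2 application going out of its way to ignore TLS errors.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Contrast this with the old WebKitGTK+ API, where doing nothing at all silently meant no verification.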
Remember that even though WebKitGTK+ 2.4.9 was released upstream over eight months ago, Ubuntu 14.04 is still on 2.4.8? It’s worth mentioning that 2.4.9 contains the fix for that serious networking backend issue I mentioned earlier (CVE-2015-2330). The bug is that TLS certificate verification was not performed until an HTTP response was received from the server; it’s supposed to be performed before sending an HTTP request, to prevent secure cookies from leaking. This is a disaster, as attackers can easily use it to get your session cookie and then control your user account on most websites. (Credit to Ross Lagerwall for reporting that issue.) We reported this separately to oss-security due to its severity, but that was not enough to convince distributions to update. But most applications in Ubuntu 14.04, including Epiphany and Midori, would not even benefit from this fix, because the change only affects WebKit2; remember, there’s no certificate verification in the original WebKitGTK+ API. (Modern versions of Epiphany do use WebKit2, but not the old version included in Ubuntu 14.04.) Old versions of Epiphany and Midori load pages even if certificate verification fails; the verification result is only used to change the status of a security indicator, basically giving up your session cookies to attackers.
WebKit2 has been around for Mac and iOS for longer, but the first stable release for WebKitGTK+ was the appropriately-versioned WebKitGTK+ 2.0, in March 2013. This release actually contained three different APIs: webkitgtk-1.0, webkitgtk-3.0, and webkit2gtk-3.0. webkitgtk-1.0 was the original API, used by GTK+ 2 applications. webkitgtk-3.0 was the same thing for GTK+ 3 applications, and webkit2gtk-3.0 was the new WebKit2 API, available only for GTK+ 3 applications.
Maybe it should have remained that way.
But, since the original API was a maintenance burden and not as stable or robust as WebKit2, it was deleted after the WebKitGTK+ 2.4 release in March 2014. Applications had had a full year to upgrade; surely that was long enough, right? The original WebKit API layer is still maintained for the Mac, iOS, and Windows ports, but the GTK+ API for it is long gone. WebKitGTK+ 2.6 (September 2014) was released with only one API, webkit2gtk-4.0, which was basically the same as webkit2gtk-3.0 except for a couple small fixes; most applications were able to upgrade by simply changing the version number. Since then, we have maintained API and ABI compatibility for webkit2gtk-4.0, and intend to do so indefinitely, hopefully until GTK+ 4.0.
A lot of good that does for applications using the API that was removed.
While upgrading to the WebKit2 API will be easy for most applications (it took me ten minutes to upgrade GNOME Initial Setup), for many others it will be a significant challenge. Since rendering occurs out of process in WebKit2, the DOM API can only be accessed by means of a shared object injected into the web process. For applications that perform only a small amount of DOM manipulation, this is a minor inconvenience compared to the old API. For applications that use extensive DOM manipulation — the email clients Evolution and Geary, for instance — it’s not just an inconvenience, but a major undertaking to upgrade to the new API. Worse, some applications (including both Geary and Evolution) placed GTK+ widgets inside the web view; this is no longer possible, so such widgets need to be rewritten using HTML5. To say nothing of applications like GIMP and Geany that are stuck on GTK+ 2. They first have to upgrade to GTK+ 3 before they can consider upgrading to modern WebKitGTK+. GIMP is working on a GTK+ 3 port anyway (GIMP uses WebKitGTK+ for its help browser), but many applications like Geany (the IDE, not to be confused with Geary) are content to remain on GTK+ 2 forever. Such applications are out of luck.
As you might expect, most applications are still using the old API. How does this work if it was already deleted? Distributions maintain separate packages, one for old WebKitGTK+ 2.4, and one for modern WebKitGTK+. WebKitGTK+ 2.4 has not had any updates since last May, and the last real comprehensive security update was over one year ago. Since then, almost 130 vulnerabilities have been fixed in newer versions of WebKitGTK+. But since distributions continue to ship the old version, few applications are even thinking about upgrading. In the case of the email clients, the Evolution developers are hoping to upgrade later this year, but Geary is completely dead upstream and probably will never be upgraded. How comfortable are you with using an email client that has now had no security updates for a year?
(It’s possible there might be a further 2.4 release, because WebKitGTK+ 2.4 is incompatible with GTK+ 3.20, but maybe not, and if there is, it certainly will not include many security fixes.)
How do we fix this? Well, for applications using modern WebKitGTK+, it’s a simple problem: distributions simply have to start taking our security updates.
For applications stuck on WebKitGTK+ 2.4, I see a few different options:
- We could attempt to provide security backports to WebKitGTK+ 2.4. This would be very time consuming and therefore very expensive, so count this out.
- We could resurrect the original webkitgtk-1.0 and webkitgtk-3.0 APIs. Again, this is not likely to happen; it would be a lot of work to restore them, and they were removed to reduce maintenance burden in the first place. (I can’t help but feel that removing them may have been a mistake, but my colleagues reasonably disagree.)
- Major distributions could remove the old WebKitGTK+ compatibility packages. That will force applications to upgrade, but many will not have the manpower to do so: good applications will be lost. This is probably the only realistic way to fix the security problem, but it’s a very unfortunate one. (But don’t forget about QtWebKit. QtWebKit is based on an even older version of WebKit than WebKitGTK+ 2.4. It doesn’t make much sense to allow one insecure version of WebKit but not another.)
Or, a far more likely possibility: we could do nothing, and keep using insecure software.
The WCAG 2.0 Parsing Criterion is a Pain In The Ass (PITA) because checking it throws up lots of potential errors that, if all required to be fixed, may result in a lot of extra work (in some cases busy work) for developers. This is largely due to the lack of robust tools for producing a set of specific issues that require fixing.
I have discussed the parsing criterion previously in WCAG 2.0 parsing error bookmarklet, where I also provided a bookmarklet that helps to filter out some HTML conformance checker errors that are definitely (maybe) not potential accessibility issues.
I am not saying here that checking and fixing HTML conformance errors is not an important and useful part of the web development process, only that fixing all HTML conformance errors is not a requirement for accessibility. There are good reasons to validate your HTML as part of the development process.
What does the WCAG parsing criterion require?
Really, only a very limited subset of the errors and warnings that may be produced when checking with the only available tools for testing the WCAG parsing criterion (i.e. HTML conformance checkers). You can use an HTML conformance checker to find such errors, but the errors that need fixing for accessibility purposes can often be needles in a haystack.
1. Complete start and end tags
note: but only when this is required by the specification
Examples of what happens:
Displays this on page:
<img src="HTML5_Logo.png" alt="HTML5" <p>test</p>
Produces this in DOM:
<img <mark><p=""</mark> alt="HTML5" src="HTML5_Logo.png"> <mark>test</mark> <mark><p></p></mark>
i.e. an unintended empty p element with the intended text not contained, and a mutant attribute <p="" sprouted on the img element.
What this requirement does not mean
Adding end tags to every element:
<li>list item <mark></li></mark>
or self closing elements without end tags
<input<mark> /</mark>> <img<mark> /</mark>>
There are rules in HTML detailing which elements require end tags and under what circumstances: Optional Tags. You can also find this information under Tag omission in text/html in the definition of each element in HTML.
Tag omission in text/html:
Neither tag is omissible. (Source: http://www.w3.org/TR/html5/text-level-semantics.html#the-abbr-element)
The good news is that most code errors of this type will be fairly obvious, as they will show up as text strings in the rendered page or affect the style/positioning of content, and produce funky attributes in the DOM.
2. Malformed attribute and attribute values
<p class="poot pooter">some text about poot</p> <img alt="The Etiology of poot." src="poot.png">
//missing end quote on class attribute with multiple values:
<p class="poot pooter>some text about poot</p>
//no quotes on class attribute with multiple values:
<p class=poot pooter>some text about poot</p>
//missing start quote on alt attribute
<img alt=The Etiology of poot." src="poot.png">
//no quotes on alt attribute
<img alt=The Etiology of poot. src="poot.png">
Note: although some attributes do not require quoted values, the safest and sanest thing to do is quote all attributes.
Spaces between attributes
<p class="poot"<mark> </mark>id="pooter">some text about poot</p> <img alt="The Etiology of poot."<mark> </mark>src="poot.png">
//no space between class and id attributes:
<p <mark>class="poot"id="pooter"</mark>>some text about poot</p>
//no space between alt and src attributes:
<img <mark>alt="The Etiology of poot."src="poot.png"</mark>>
Further reading on attributes: Failure of Success Criterion 4.1.1 due to incorrect use of start and end tags or attribute markup
3. Elements are nested according to their specifications
What this requirement means is that you cannot do something silly like having a list item li without it having a ul or ol as a parent:
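A minimal illustration of the kind of markup this rules out (a hypothetical snippet, not from the original article):

```html
<!-- Invalid: a list item whose parent is not a ul, ol, or menu -->
<div>
  <li>orphaned list item</li>
</div>
```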
or multiple controls inside a label element:
<label> first name <input type="text"> last name <input type="text"> </label>
Examples of what happens:
For “a list item li without it having a ul or ol as a parent”, depending on the browser, the semantics of the list item, including the role, list size, and position of an item in the list, are lost. It also results in funky rendering across browsers.
For “multiple controls inside a label element”, depending on the browser, the accessible name for each of the controls is a concatenation of the text inside the label, so in the example case, each control has an accessible name of “first name last name”. Also, clicking with the mouse on either text label will move focus to the first control in the label element.
4. Elements do not contain duplicate attributes
Pretty simple, don’t do this:
<img alt="html5" <mark>alt="html6"></mark>
Note: although this is a requirement in the WCAG criteria and an HTML conformance requirement, it causes no harm accessibility-wise unless the 2nd instance of the duplicate attribute is one that exposes required information. The usual processing behaviour for duplicate attributes is that the first instance is used and further instances are ignored.
5. Any IDs are unique
Again, pretty simple, don’t do this
<body> ... <p id="IAmUnique"> ... <div <mark>id="IAmUnique"</mark>> ... </body>
Note: although this is a requirement in the WCAG criteria and an HTML conformance requirement, it causes no harm accessibility-wise unless the id value is being referenced by a relationship attribute, such as the label for attribute or aria-labelledby.
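A hedged sketch of how a duplicate id breaks a relationship attribute (the id and content here are made up for illustration):

```html
<!-- The label's for attribute resolves to the FIRST element with
     id="email", so the actual input is never associated with it -->
<span id="email">decorative text</span>
<label for="email">Email address</label>
<input id="email" type="email">
```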
Some further examples of HTML conformance errors that ARE NOT WCAG parsing criterion fails
- Unrecognized attributes: Attribute event not allowed on element a at this point.
- Unrecognized elements: Element poot not allowed as child of element body in this context.
- Bad attribute values: Error: Bad value
- Missing attribute values: Element meta is missing one or more of the following attributes:
- Obsolete elements and attributes: The align attribute on the td element is obsolete.
(This post is a rough copy of an email I sent to the mailing list.)
I wanted to remind the community that currently all WHATWG standards are being developed on GitHub. This enables everyone to directly change standards through pull requests and start topic-based discussion through issues.
GitHub is especially useful now that the WHATWG covers many more topics than “just” HTML, and using it has already enabled many folks to contribute who likely would not have otherwise. To facilitate participation by everyone, some of us have started identifying relatively easy-to-do issues across our GitHub repositories with the label “good first bug”. (See also good first bugs on Bugzilla. New issues go to GitHub, but some old ones are still on Bugzilla.) And we will also continue to help out with any questions on #whatwg IRC.
You should be able to find the relevant GitHub repository easily from the top of each standard the WHATWG publishes. Once you have a GitHub account, you can follow the development of a single standard using the “Watch” feature.
There are no plans to decommission the mailing list — but as you might have noticed, new technical discussion there has become increasingly rare. The mailing list is still a good place to discuss new standards, overarching design decisions, and more generally as a place to announce (new) things.
When there’s a concrete proposal or issue at hand, GitHub is often a better forum. IRC also continues to be used for a lot of day-to-day communications, support, and quick questions.
And once again, in December we celebrated the hackfest. This year it happened between Dec 7-9 at the Igalia premises, and the scope was much broader than WebKitGTK+, which is why it was renamed the Web Engines Hackfest. We wanted to gather people working on all open source web engines, and we succeeded, as we had people working on WebKit, Chromium/Blink and Servo.
There were some talks during the hackfest, and not wanting to be left out, we gave our own about Streams. You can enjoy it here:
You can see all hackfest talks in this YouTube playlist. The ones I liked most were the ones by Michael Catanzaro about HTTP security, which is always interesting given the current clumsy political movements against cryptography and the one by Dominik Röttsches about font rendering. It is really amazing what a browser has to do just to get some letters painted on the screen (and look good).
As usual, the environment was amazing and we had a great time, including the traditional Street Fighter match, where Gustavo found a worthy challenger in Changseok.
Of course, I would like to thank Collabora and Igalia for sponsoring the event!
And by the way, quite shortly after that, I became a WebKit reviewer!
Google is serving very different versions of its services to individual browsers and devices. A bit more than one year ago, I had listed some of the bugs (btw, I need to go through this list again), Firefox was facing when accessing Google properties. Sometimes, we were not served the tier 1 experience that Chrome was receiving. Sometimes it was just completely broken.
We have an open channel of discussions with Google. Google is also not a monolithic organization. Some services have different philosophies with regard to fixing bugs or Web compatibility issues. The good news is that it is improving.
Three Small Important Things About Google Today
- Mike was looking for usage of the -webkit-mask-* CSS properties on the Web. I was about to reply "Google Search!", which was sending it to the Chrome browser, but decided to look at the bug again. They had been using -webkit-mask-image. To my big surprise, they switched to an SVG icon. Wonderful!
- So it was time for me to test Google Search one more time on Gecko with the Firefox Android UA and the Chrome UA. See below.
- Tantek started some discussion in the CSS Working Group about Web compatibility issues, including one about getting the members of the CSS Working Group to fix their Web properties.
Testing Google Search on Gecko and Blink
For each test, the first two screenshots are on the mobile device itself (Chrome, then Firefox). The third screenshot shows the same site with a Chrome user agent string but as displayed on Gecko on Desktop. Basically, this 3rd one tests: if Google sent Firefox on Android the same version they serve to Chrome, would it work?
We reached the home page of Google.
Home page - search term results
We typed the word "Japan".
Home page - scrolling down
We scrolled down a bit.
Home page - bottom
We reached the bottom of the page.
Google Images with the search term
We go back to the top of the page and tap on Images menu.
Google Images when image has been tapped
We tap on the first image.
We are not there yet; the issue is complex because of the large number of versions served to different browsers/devices, but there is definite progress. At first sight, the version sent to Chrome is compatible with Firefox. We would need to test while logged in too, and all the corner cases of the tools and menus. But it's a lot, lot better than it used to be in the past. We have never been this close to an acceptable user experience.
Tantek Çelik — 2.5 days @W3C @CSSWG meetings done, 2.5 days left. Good @webcompat discussion this Monday morn http://log.csswg.org/irc.w3.org/css/2016-01-31/#e643566
- Building Offline Sites with ServiceWorkers and UpUp (UpUp is a nice’n’easy ServiceWorkers library)
- Subgrids Considered Essential – Eric Meyer on CSS Grids: “If grid layout is released without subgrid support, we’re risking shoving subgrids into the back of the author-practices cupboard for a long time to come. And along with it, potentially, grids themselves.”
- Related: Could a simpler subgrid lead to more implementer interest? – proposal by Francois Remy
- Big Web Show 141: CSS Grid Layout With Rachel Andrew Rachel is “Ms Grids”.
- The woes of date input – HTML5Doctor article by Ian Devlin
- Remove <iframe seamless> – removing it from HTML (no-one implements; overtaken by events; complex; devs need auto-resize, but CSS Working Group is already on the case).
- The world’s poorest households are more likely to have a mobile phone than a toilet – World Bank report. “The full benefits of the information and communications transformation will not be realized unless countries continue to improve their business climate, invest in people’s education and health, and promote good governance.”
- The Facebook-Loving Farmers of Myanmar – “A dispatch from an Internet revolution in progress”. Fascinating article.
- What to do when you get sued… [for your inaccessible website] (revisited) by Karl Groves
- Moving to a Plugin-Free Web – “Oracle plans to deprecate the Java browser plugin in JDK 9”
- Losing Our Heads – Separating Information from Interface – nice briefing note by Peter Gasston on Push notifications, their usefulness to consumers, and their effect on UI design
- What I’ve learned from monitoring four years of web page bloat by Tammy Everts, with helpful tips on what to concentrate on to give anyone who touches your web pages
- Talking of fast, fast pages: Opera Mini 14 for Android is out, with more than 90 languages supported, including 13 Indian languages.
- Responsive Image Breakpoints Generator, A New Open Source Tool
- 2016 – the year of web streams – Jank Architect walks you through the Streams API. Good comments, too.
- Optimising SVGs for Web Use — Part 1
- Leaner Responsive Images With Client Hints
- Gyrophone: Recognizing Speech From Gyroscope Signals – ” Since iOS and Android require no special permissions to access the gyro, our results show that apps and active web content that cannot access the microphone can nevertheless eavesdrop on speech in the vicinity of the phone.”
- My Experience With the Great Firewall of China – “as an InfoSec professional I was very curious to finally be able to poke at the Great Firewall of China with my own hands to see how it works and how easy it is evade”
- Uninstalling Facebook Speeds Up Your Android Phone – Tested – “that settles it for me… I am joining the browser-app camp for now…”
- Related: Lite apps grow in popularity on the App Store – because they’re less than 1MB to download, and work in low-connectivity conditions.
- Woe-ARIA: The Surprisingly but Ridiculously Complicated World of aria-label/ aria-labelledby by one of the devs of the NVDA screenreader
- Video corner: The Future of the Web Platform: Does It Have One? – Google’s sinister mastermind, Alex Russell, “discusses the impact of new standards-track technologies like Service Workers, Web Manifests, and Web Push which are landing in browsers” (48 mins)
- Becoming Responsible for CSS – a talk by CSS co-editor Alan Stearns of Adobe (17 mins)
- Audio corner: Working Draft – Revision 250: Achtung Baby! – “we managed to get our greedy hands on no one less than Bruce Lawson from Opera. Having barely returned from a trip to Asia and still dizzy from his jetlag, we managed to extract a whole bunch of classified information on CSS Houdini out of him (also thanks to our German interview style)”
- Why India is the fastest growing tech hub in the world – “developers in India are 3 times more likely to be female than in the US.”
- China’s Millennials Infographic – There are 318 million Chinese millennials (15-29 year olds). 48% are female. >90% own smartphones. >90 million are graduates. They’re 50% of China’s international travellers. 74% feel they have more in common with global millennials than their Chinese elders. Each expects to spend $4362 on luxury goods this year. 66% choose western brands over Asian brands.
TL;DR, I’m moving from Developer Relations to become Opera’s Deputy Chief Technology Officer. Or maybe Deputy Technology Officer, because “Deputy Chief” is almost oxymoronic. Anyway, call me “Bruce”; it’s more polite than what you usually call me.
Co-father of CSS Håkon Wium Lie continues to be CTO, and I’ll be working with him, the Communications Team, the product teams, and my lovely colleagues in devrel, to continue connecting the unconnected across the world.
In some ways, this is simply an evolution of what I’ve been doing for the last couple of years. In a more profound way it’s a return to basics.
My first real exposure to the Web came about working in Thailand in 1999, when I was convalescing after my diagnosis of Multiple Sclerosis. Because M.S. is very rare in Asia, I could find no English language information to tell me how quickly or painfully I would die.
But I’d read about this new-fangled Web thing, and there was an Internet Café near my apartment, so I typed in “Multiple Sclerosis” into Alta Vista and found something extraordinary: a community of people around the world supporting each other through their shared diagnosis on something called a “website” – and I could participate, too, from a café in Pratunam, Bangkok. All strangers, across the globe, coming together around a common theme and helping each other.
I knew immediately that I’d stumbled upon something amazing, something revolutionary, an undreamed of way to communicate. As an English Literature graduate and ex-programmer, I was fascinated by both the communicative potential and the tech that drove it. By 2002, I was Brand Manager for a UK book company publishing books for web professionals, and our first, flagship book was on Web Accessibility.
From accessibility, I began to advocate the general concept of open web standards on my blog and with various employers, so that everyone could access the web. Then, after being invited to join Opera in 2008, I started advocating HTML5, so people could connect to an open web that could compete with the proprietary silos of Flash and iOS. After that, I began beating the drum for Media Queries and Responsive Design so that the people in developing nations (like I was in ’99), using affordable hand-held devices, could connect and enjoy the full web. Then I proposed the <picture> element (more accurately: a very naive precursor to it) so that people with limited funds for bandwidth could connect economically, too. Then I agitated, inside Opera and outside, for Progressive Web Apps, so people could have a great experience on the open web, not those pesky walled gardens.
The common thread is people and getting them connected to each other. This matters to me because that happened to me, 17 years ago (spoiler: and I didn’t die).
A third of a billion people use Opera’s products to get them online, fast and affordably. I want to be part of making that half a billion, then a billion, then more; not by stealing customers from competitors, but by opening up the web to people and places that currently have no access. That’s a lot of people; there’s a lot to be done. It’s a big job. I’m a n00b and I’m gonna fuck up from time-to-time.
Bring it on.
I have reported previously on support in browsers and screen readers (SR) for aria-hidden and the HTML5 hidden attribute. The last time was 2 years ago; the original article, published 2 years prior in 2012, still gets lots of page views. As it's a subject that developers are interested in, here is an update.
Support for HTML5 hidden and aria-hidden in 2016
All major browsers and screen readers:
- support the use of the hidden attribute to hide content
- support the use of the CSS display:none to hide content
- support the use of the aria-hidden attribute to hide visible content from screen reader users
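The three hiding techniques listed above, side by side (a minimal hypothetical sketch):

```html
<!-- Hidden from everyone (visually and in the accessibility tree): -->
<div hidden>stale content</div>

<!-- Also hidden from everyone, via CSS: -->
<div style="display:none">stale content</div>

<!-- Still rendered on screen, but hidden from screen reader users: -->
<div aria-hidden="true">purely decorative flourish</div>
```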
Screen reader support for hidden content – tests and results for
- Windows 10
- Firefox 43
- Internet Explorer 11
- Chrome 47
- JAWS 17
- Window Eyes 9
- NVDA 2015.4
- Narrator 10
- VoiceOver on iOS 9.2.1 (iPhone 6)
- VoiceOver on OSX El Capitan
- ChromeVox on Chrome OS 47
- Orca 3.16 on Linux
In some browser and screen reader combinations, aria-hidden=false on an element that is hidden using the hidden attribute or CSS display:none results in the content being unhidden. This behaviour does not have consensus support among browser implementers and I strongly recommend it not be used.
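For clarity, the discouraged pattern looks like this (a hypothetical snippet; do not use it):

```html
<!-- Anti-pattern: attempting to expose visually hidden content to
     screen readers only. Behaviour is inconsistent across browser
     and screen reader combinations, so avoid it. -->
<div hidden aria-hidden="false">screen-reader-only text?</div>
```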
Why no Edge testing? The Edge browser does not yet have sufficient accessibility support for testing to be useful.
The hidden attribute hides content due to the browser's implementation of CSS display:none on content hidden using the attribute. If the default UA CSS is overridden, then aria-hidden=true will have to be used alongside the hidden attribute.
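One defensive option when author CSS (for example, a reset stylesheet) might override the UA default is to re-assert the rule yourself. A sketch, not from the original article:

```html
<style>
  /* Re-assert the UA default so [hidden] keeps hiding content
     even if another rule sets display on all elements */
  [hidden] { display: none !important; }
</style>
<p hidden>This stays hidden, visually and from screen readers.</p>
```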
Tests and results on github – issues and PR’s welcome!
It’s been several months now since maintenance of the HTML Standard moved from a mostly-private Subversion repository to the whatwg/html GitHub repository. This move has been even more successful than we hoped:
- We now have thirty-seven contributors who have landed one or more patches, and have merged over 250 pull requests in total. That’s almost two new contributors each week since the move!
- We’ve worked to curate a list of good first bugs to introduce newcomers to the community and the standard, and worked hard to improve the onboarding experience for building the standard.
- With help from the community, the standard's gender pronoun disparity has been significantly improved. (See The happy case of pronouns and HTML.)
- Sponsored by Mozilla, we have applied to Outreachy and Richa Rupela is now helping us write the HTML Standard.
- We have collaborated with TC39—who thankfully moved to GitHub around the same time—to remove some longstanding discrepancies between HTML and ECMAScript.
- We've made many, many small fixes to better match the reality of what is implemented in browsers, mostly in response to feedback from browser developers.
Aside from defining the HTML language, the HTML Standard defines the processing model around script execution, the fundamentals of the web’s security model, the web worker API for parallel script execution, and many more aspects that are core to the web platform. If you are interested in helping out, please reach out on IRC or GitHub.
The W3C has forked the HTML Standard for the nth time. As always, it is pretty disastrous:
- Erased all Git history of the document.
- Did not document how they transformed the document. Issues of mismatches have already been reported and it will likely be a long time, if ever, before all bugs due to this process are uncovered, since it was not open.
- Did not discuss plans with the wider community.
- Did not discuss plans with the folks they were forking from.
- Did not even discuss plans with the members of the W3C Web Platform Working Group.
- Erased the acknowledgments section.
- Erased the copyright and licensing information and replaced it with their own.
So far this fork has been soundly ignored by the HTML community, which is as expected and desired. We hesitated to post this since we did not want to bring undeserved attention to the fork. But we wanted to make the situation clear to the web standards community, which might otherwise be getting the wrong message. Thus, proceed as before: the standards with green stylesheets are the up-to-date ones that should be used by implementers and developers, and referred to by other standards. They are where work on crucial bugfixes, such as setting the correct flags for <img> fetches, and exciting new features, such as <script type=module>, will take place.
If there are blockers preventing your organization from working with the WHATWG, feel free to reach out to us for help in resolving the matter. Deficient forks are not the answer.
— The editors of the HTML Standard
@media (-webkit-transform-3d) is a funny thing that exists on the web.
It's like, a media query feature in the form of a prefixed CSS property, which should tell you if your (once upon a time probably Safari-only) browser supports 3D transforms, invented back in the day before we had @supports.
(According to Apple docs it first appeared in Safari 4, alongside the other -webkit-transform-2d hybrid-media-query-feature-prefixed-css-properties-things that you should immediately forget exist.)
Older versions of Modernizr used this (and only this) to detect support for 3D transforms, and that seemed pretty OK. (They also did the polite thing and tested @media (transform-3d), but no browser has ever actually supported that, as it turns out.) And because they're so consistently polite, they've since updated the test to prefer @supports too (via a pull request from Edge developer Jacob Rossi).
As it turns out, other browsers have been updated to support 3D CSS transforms, but sites didn't go back and update their version of Modernizr. So unless you support @media (-webkit-transform-3d), these sites break. Niche websites like yahoo.com and about.com.
So, anyways. I added @media (-webkit-transform-3d) to the Compat Standard and we added support for it in Firefox so websites stop breaking.
But you shouldn't ever use it: use @supports. In fact, don't even share this blog post. Maybe delete it from your browser history just in case.
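For the record, a sketch of the legacy query next to the modern feature query the post recommends (selectors and declarations are made up for illustration):

```html
<style>
  /* Legacy, Safari-era detection: kept only for compat, don't use */
  @media (-webkit-transform-3d) {
    .flip { transform-style: preserve-3d; }
  }

  /* Modern equivalent: a feature query on an actual 3D transform */
  @supports (transform: translateZ(0)) {
    .flip { transform-style: preserve-3d; }
  }
</style>
```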
One of the many new input types that HTML5 introduced is the date input type which, in theory, should allow a developer to provide the user with a simple, usable, recognisable method of entering a date on a web page. But sadly, this input type has yet to reach its full potential.<section id="introduction">
Briefly, the date input type is a form element that allows the capture of a date from a user, usually via a datepicker. The implementation of this datepicker is up to the browser vendor, as the HTML5 specification does not tell vendors how to implement the input’s UI.
The input itself can of course be used without using any of its available attributes:
<label for="when">Date:</label> <input id="when" name="when" type="date">
Or you can specify minimum and maximum date values via the min and max attributes, ensuring that a user can only choose dates within a specific range:
<label for="when">Date:</label> <input id="when" name="when" type="date" min="2016-01-01" max="2016-12-01">
You can also use the step attribute to specify, in days, how a date can increment. The default is 1.
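In theory, that would let you restrict a date input to, say, weekly increments, as in this hypothetical snippet (though, per the author's testing below, browsers did not honour step for dates at the time):

```html
<!-- step="7" should limit selectable dates to 7-day increments
     counted from the min date -->
<label for="meeting">Meeting date:</label>
<input id="meeting" name="meeting" type="date"
       min="2016-01-04" step="7">
```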
This of course is the theory, and a fine reality it would be if it were so, but alas it is not.</section> <section id="features">
Rather than talk about browser support for the date input type, I will instead talk about the various features that are part of this input type:
Neither Firefox nor Safari support this input type; it is treated as a simple text field with no formatting and no special interaction.
Microsoft Edge has no special interactions and in fact the input field appears to be read-only.
Chrome and Opera have the same implementation, which displays a date placeholder (or the date value if the input’s value attribute has been set) using the user’s system settings’ date format. There are also some controls that allow the user to clear the input field, some arrows to cycle up and down between values, and an arrow that will open a datepicker (see Figure 1). There are some WebKit prefixed pseudo selectors available that allow you to change the appearance of the various bits within the input date field.
All of those browsers that support the date input type also support the min and max attributes. Chrome and Opera both work fine if both the min and max attributes are set, but the UI is poor when only one of them is set. In these browsers, the date input displays up and down arrows for the user to change each value (day, month, and year). If a date input has a minimum value set at today’s date, then if the user highlights the year and then clicks the down arrow, the year will change to 275760, as the arrows cycle between the valid values and, since no maximum has been set, the default maximum is used (see Figure 2). This has been reported as a bug but has been marked as “won’t fix”, as apparently it is the desired behaviour; I disagree.
Something similar happens when only a max value is set, in that pressing the up arrow will cycle around to the year 0001, which is the default minimum. Again this leads to a very confusing UX for the user, so please use both min and max where possible.
With Android, the setting of these attributes can cause some weird quirks with the datepicker.</section> <section id="feature-step-attribute">
None of the browsers tested support the step attribute.
Firefox and Safari do not support the date input type and therefore do not open a datepicker on desktop. But they do open a device’s native datepicker on mobile devices when the date input field receives focus.
Microsoft Edge opens a datepicker, but its UI is very poor (see Figure 3).<figure> <figcaption>Figure 3. – Microsoft Edge’s native datepicker</figcaption> </figure>
Chrome and Opera open a fairly decent datepicker on desktop but you cannot style it at all. It looks how it looks (see Figure 4), and there’s nothing that you can do about it, no matter what your designer says. With these browsers you can disable the native datepicker if you want and implement your own, but this will not work with mobile devices. This gives you the best of both worlds: you can implement a nicer datepicker for desktop and let a device use its own native datepicker.<figure> <figcaption>Figure 4. – WebKit’s native datepicker and date input field as seen on Chrome</figcaption> </figure> </section>
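The WebKit-prefixed route for suppressing the native picker UI looks roughly like this (a sketch; pseudo-element support varies by browser version):

```html
<style>
  /* Hide the picker-opening arrow and its behaviour in
     Chrome/Opera, so a custom desktop datepicker can take over */
  input[type="date"]::-webkit-calendar-picker-indicator {
    display: none;
    -webkit-appearance: none;
  }
</style>
<input type="date">
```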
All the native datepickers that I tested on Android displayed some sort of confusing display quirks when valid min and max attributes are set. A user is correctly restricted from selecting a date outside of the specified range, but the visual datepickers often display “previous”, “current”, and “next” values for day, month, and year, and the values displayed for “previous” are either empty (in the case of the month) or set to 9999 (in the case of the year, see Figure 5).
Like most of the other input fields, the date input supports the input and change events. The input event is fired every time a user interacts with the input field, while the change event is usually only fired when the value is committed, i.e. the input field loses focus.
For those browsers mentioned above that support the native date input field, no specific events are fired that will allow you to determine if the user has done anything “date specific”, such as opened or closed the datepicker, or cleared the input field. You can work out if the field has been cleared by listening for the input and change events and then checking the input field’s value.
If you have an event listener set up for the change event and one for the input event, the change event will never actually fire, as input takes precedence.
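A sketch of the clear-detection idea described above (the element id is an assumption for the example):

```html
<input id="when" type="date">
<script>
  var when = document.getElementById('when');
  // input fires on every interaction; an empty value after an
  // input event suggests the user cleared the field
  when.addEventListener('input', function () {
    if (when.value === '') {
      console.log('date field cleared');
    }
  });
</script>
```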
So, what can we deduce from all this? Should we use the input date type or not? And as usual the answer is “it depends” as only you know what you’re building and how and where it will be used, so it might fit perfectly for your needs or it might not.
I think in general it’s safe to use the input date type on occasions where a min and max do not need to be set, as the quirks when these attributes are used are too irritating to ignore. Since not all browsers support a datepicker, and you will have to get one from somewhere else to support those browsers, I would also recommend turning off the native datepicker for all desktop browsers (using the method linked to above), as this will still allow devices to use their own.
Have fun!</section> </section>
Planet Mozilla — Okay, But What Does Your Work Actually Mean, Nikki? Part 2: The Fetch Standard and Servo
In my previous post, I started discussing in more detail what my internship entails, by talking about my first contribution to Servo. As a refresher, my first contribution was as part of my application to Outreachy, which I later revisited during my internship after a change I introduced to the HTML Standard it relied on. I’m going to expand on that last point today- specifically, how easy it is to introduce changes in WHATWG’s various standards. I’m also going to talk about how this accessibility to changing web standards affects how I can understand it, how I can help improve it, and my work on Servo.
Two Ways To Change
There are many ways to get involved with WHATWG, but there are two that I’ve become the most familiar with: firstly, opening a discussion about a perceived issue and asking how it should be resolved; secondly, taking on an issue approved as needing change and making the desired change. I’ve almost entirely done the former, and the latter only for some minor typos. Any changes that relate directly to my work, however minor, are significant for me though! Like I discussed in my previous post, I brought attention to an inconsistency that was resolved, giving me a new task of updating my first contribution to Servo to reflect the change in the HTML Standard. I’ve done that several times since, for the Fetch Standard.
The first two weeks of my internship were spent reading through the majority of the Fetch Standard, primarily the various Fetch functions. I took many notes describing the steps to myself, annotated with questions I had and the answers I got, either from other people on the Servo team who had worked with Fetch (including my internship mentor, of course!) or from people at WHATWG who were involved in the Fetch Standard. Getting so familiar with Fetch meant a few things: I would notice minor errors (such as an out-of-date link) that I could submit a simple fix for, or a bigger issue that I couldn’t resolve myself.
Discussions & Resolutions
I’m going to go into more detail about some of those bigger issues. From my perspective, when I start a discussion about a piece of documentation (such as the Fetch Standard, or the documentation for a programming library Servo uses), I go into it thinking “either this documentation is incorrect, or my understanding is incorrect”. Whichever the answer is, it doesn’t mean that the documentation is bad, or that I’m bad at reading comprehension. I understand best by building up a model of something in my head, putting that model into practice, and asking a lot of questions along the way. I learn by getting things wrong and figuring out why I was wrong, and sometimes in the process I uncover a point that could be made clearer, or an inconsistency! I have good examples of both of these outcomes, which I’ll cover over the next two sections.
Looking For The Big Picture
Early on in my initial review of the Fetch Standard’s several protocols, I found a major step that seemed to have no use. I understood that since I was learning Fetch on a step-by-step basis, I did not have a view of the bigger picture, so I asked around to find out what I was missing. One of the people I work with on implementing Fetch agreed that the step seemed to have no purpose, so we decided to open an issue asking about removing it from the standard. As we learned, it turned out that I had actually missed its meaning. However, instead of leaving it there, I shifted the issue into a request for some explanatory notes on why the step is needed, which was fulfilled. This meant that I would have a reference to go back to should I forget the significance of the step, and that people reading the Fetch Standard in the future would be much less likely to come to the same incorrect conclusion I had.
A Confusing Order
Shortly after I had first discovered that apparent issue, I found myself struggling to comprehend a sequence of actions in another Fetch protocol. The specification seemed to say that part of an early step was meant to be done only after the final step. I unfortunately don’t remember the details of the discussion I had about this; if there was a reason it was organized that way, I forget what it was. Regardless, it was agreed that moving those sub-steps so they are actually listed after the step they’re supposed to run after would be a good change. This meant that I would need to re-organize my notes to reflect the re-arranged sequence of actions, and that I would have an easier time following this part of the Fetch Standard.
A Living Standard
Like I said at the start of this post, I’m going to talk about how changes in the Fetch Standard affect my work on Servo itself. What I’ve covered so far has mostly been how changes affect my understanding of the standard. A key aspect of understanding the Fetch protocols is reviewing them for updates that impact me. WHATWG labels every standard it authors a “Living Standard” for good reason. It was one thing for me to learn how easy it is to introduce changes when I knew exactly what was going on, but it’s another to understand that anybody else can, and often does, make changes to the Fetch Standard!
Changes Over Time
When an update is made to the Fetch Standard, it’s not as difficult to deal with as one might imagine. The Fetch Standard always notes the day it was last updated at the top of the document, I follow a Twitter account that posts about updates, and all the history can be seen on GitHub, which shows me exactly what has changed along with some discussion of what the change does. All of these together alert me to the fact that the Fetch Standard has been modified, and I can quickly see what was revised. If it’s relevant to what I’m going to be implementing, I update my notes to match. Occasionally, I need to change existing code to reflect the new standard, which is also easily done by comparing my new notes to the Fetch implementation in Servo!
From all of this, it might sound like the Fetch Standard is unfinished, or unreliable and inconsistent. I don’t mean to misrepresent it: the many small improvements help make the Fetch Standard, like all of WHATWG’s standards, better and more reliable. You can think of the status of the Fetch Standard at any point in time as a single, working snapshot. If somebody implemented all of Fetch as it is now, they’d have something that works correctly by itself. A different snapshot of Fetch is just that: different. It will have an improvement or two, but that doesn’t obsolete anybody who implemented it previously. It just means that if they revisit the implementation, they’ll have things to update.
Third post over.
If someone asks “Do I need Java?”, my answer is a) most people don’t need it, and b) to find out if you need it, remove it. I did that many years ago and haven’t needed it since. I’ve been hoping to reach the same point with Flash. I’d tried disabling it before, but there were two sites I regularly visit that sometimes require Flash – Youtube and Facebook (for videos). Last year, Youtube switched to HTML5, and recently I found that Facebook started using HTML5 for videos, so I decided to try disabling Flash again. This time, I was pleasantly surprised by how many websites no longer use Flash.
Using Firefox on a late 2013 Macbook Pro, here is a list of sites I’ve found work well with Flash disabled:
There are still some holdouts. In my case, I’m really affected by CTV Toronto News requiring Flash. I also wanted to watch an episode of Comedians In Cars Getting Coffee, and that required Flash. Others:
I emailed CTV, and here’s the response:
“At this time we currently do not have any future plans to support HTML5. Regardless, your comments have been forwarded to our technical team for review.”
I’ve decided to switch back to thestar.com for local [Toronto] news, now that they’re over their Rob Ford obsession.
And with that, I can keep Flash disabled. Every now and then I may require it to view some web content, but for the most part, I don’t need it.
Flash has been thought of as a must-have plugin, but after disabling it, that wasn’t the case for me. A lot of the web has already switched to HTML5. Try disabling Flash for yourself, and enjoy so much more battery life!
Jamie Charlton has been involved with Mozilla since 2014 and contributes to various parts of the Mozilla project, including Firefox OS. He is from Wassaic, New York.
Hi Jamie! How did you discover the Web?
For the most part I grew up with the web, but mainly got into the web when I was 13-14 and I wanted to learn how to work with and code for the web.
How did you hear about Mozilla?
For the most part I always use Firefox due to the security settings and found out about Mozilla from Firefox.
How and why did you start contributing to Mozilla?
The irony of dumb luck – I started contributing to Mozilla on December 22, 2014. I had installed Firefox Aurora because it was supposed to support more of the up-and-coming HTML5 gaming features and I wanted to see how well they worked. Then I accidentally stumbled upon WebIDE, was really intrigued by Firefox OS, and filed my first bug that day.
Have you contributed to any other Mozilla projects in any other way?
Yes, several. I am a Mozilla Rep, and I also help out on IRC running the nightly channel, helping people with questions about the nightly version of anything – Thunderbird, Firefox OS, or Firefox – whatever anyone asks about.
What’s the contribution you’re the most proud of?
Odd question… none actually, to me this is just a lot of fun.
What advice would you give to someone who is new and interested in contributing to Mozilla?
Dive right in and don’t be afraid to mess up; if you do, someone will help you fix it. First-hand experience is the best way to learn and get involved.
If you had one word or sentence to describe Mozilla, what would it be?
What exciting things do you envision for you and Mozilla in the future?
Well hopefully some more fun and a lot more projects based around Firefox OS.
Is there anything else you’d like to say or add to the above questions?
Have fun, and don’t take anything seriously – it’s more fun that way.
The Mozilla QA and Firefox OS teams would like to thank Jamie for his contributions.
Jamie has been extremely helpful in updating documentation and in filing Firefox OS bugs. – Marcia Knous
Dedicated and highly enthusiastic, Jamie_ has had a profound impact on QA. He has been learning and growing since he first joined us, and continues to help Mozilla better the web. He also helps Mozilla grow and tries to recruit for QA locally.
Jamie_, we salute you and thank you for your contributions. We hope that there will be many more to come. – Naoki Hirata
It’s the beginning of a new year, which means a blank slate to move forward, improve yourself, and enhance your life. Personally, I’m starting the year looking for a new full-time job!
Over the last six months, I had the pleasure of working with the talented folks at IMMUNIO. However, the company is prioritizing marketing activities other than evangelism (the CEO can give you more information). It became apparent that this new direction wouldn’t let me use my passion and expertise to produce the impact I would like and the results they need, so my position was put on hold.
Of course, I’ve been a Technical/Developer Evangelist/Advisor/Relations person (whatever you call it) for five years now. I’ve built my experience (see my LinkedIn profile – I don’t have a traditional resume) at companies like Microsoft and Mozilla, but I’m open to discussing any other type of role, technical or not, where my experience can help a business achieve its goals. My only criterion? A role that gets me excited and where I’ll make things happen: without creativity, passion, and ways to challenge myself, it won’t be a good fit for either of us. On the compensation side, let’s be honest: I also have a lifestyle I would like to keep.
I’m fine with extensive travel and remote work, as that’s what I’ve done for the last couple of years, but because of health issues in my family, I cannot move away from Montreal (Canada). Note that I don’t want to go back to being a full-time developer.
Some of my experience includes:
- International public speaking (list of my talks including slides and recording);
- Spokesperson (ex.: media interviews in Greece, Mexico, Uruguay – they are translations; I only speak French and English);
- Book (Personal Branding for Developers published by Apress), and blog posts writing (ex.: Mozilla, Microsoft);
- Events and user groups (co)creation/lead (ex.: HTML5mtl, YulDev, GeekFestMtl – now owned by others);
- Helping developers be successful (ex.: GitHub, StackOverflow);
- Creating and managing brand/marketing campaigns (ex.: Make Web Not War).
I have the firm intention to find a company where I’ll be able to grow in the next couple of years. If you think we can work together, please send me an email with some information about the role.
I mentioned it in my worklog last week. On 2016-01-02 09:39:38 Japan Standard Time, Mozilla closed a very important issue, enabling a new feature in Firefox: Bug 1213126 - Enable layout.css.prefixes.webkit by default. Thanks to many people for this useful work. First, because an image is sometimes worth 1000 words, let's look at an example. The following image is the rendering in Gecko (Firefox Nightly 46.0a1 (2016-01-03) on Android) of the mobile site mobage by DeNA.
- On the left side: layout.css.prefixes.webkit set to true (now the default in Firefox Nightly)
- On the right side: layout.css.prefixes.webkit set to false (still the case as of now in Firefox Developer Edition)
Below, I will explain the origin, the how and the why.
We have been dealing with Web Compatibility issues on mobile devices for quite a long time. The current Mozilla Web Compatibility team largely came from Opera Software, where we were working on very similar issues. Microsoft, Mozilla, and Opera have all had a hard time existing on the mobile Web because of Web sites developed with WebKit prefixes only.
Old Folk Tales from East Asia
In March 2014, Hallvord and I went to the Mozilla office in Beijing to work with the team there on improving Web Compatibility in China. Many bug reports had been opened about mobile sites failing in Firefox OS and Firefox for Android in China. Sometimes we had to lie about the User-Agent string on the client side; most of the time, even that was not enough. Firefox on smartphones (Android, Firefox OS) was still receiving broken sites made for WebKit only (Chrome, Safari). The Mozilla Beijing team was spending a lot of time retrofitting Firefox for Android into a product compatible with the Chinese Web. It was unfortunate: each release required a long and strenuous effort, and the work could have benefited other markets with similar issues.
In December 2014, during the Mozilla work week in Portland (Oregon, USA), we (Mozilla core platform engineers, the Web Compatibility team, and some members of the Beijing team) had a meeting to discuss the types of issues users had to deal with in their browser. By the end of the meeting, we had decided to start identifying the most painful points (the most common types of issues) and how we could try to fix them. Hallvord started to work on a service for rewriting CSS WebKit prefixes on the fly in the browser. Later on, this led to the creation of CSS Fix me (just released in December 2015).
We also started to survey the Japanese mobile Web, which gave us another data point about the broken Web. In the top 100 Japanese sites (then the top 1000 sites in Japan), we identified that 20% of them had rendering issues due to non-standard coding such as WebKit prefixes and other delicacies related to DOM APIs.
Fixing the Mobile Web
Through surveys and analysis, the Web Compatibility team assessed a couple of priorities, the ones hurting usability the most. Quickly, we noticed that
were the major issues for CSS.
for DOM APIs.
Microsoft shared with us what they had to implement to make Edge compatible with the Web.
In June 2015, during the Mozilla work week in Whistler (British Columbia, Canada), we decided to move forward and be more effective than just the unprefixing service. Some core platform engineers spent time implementing natively what was needed to make a performant browser compatible with the Web. This includes the excellent work of Daniel Holbert (mainly flexbox) and Robert O'Callahan (innerText). I should probably list exactly who did what in detail (for another day).
Leading the effort for a while, Mike Taylor opened a couple of very important issues on Mozilla's Bugzilla, referenced at Bug 1170774 - (meta) Implement some non-standard APIs for web compatibility and the associated wiki. He also started the Compatibility transitory specification. The role of this specification is not to stand on its own, but to give other specifications a base from which to cherry-pick what's necessary to be compatible with the current Web.
There is still a lot of work to be done, but the closing of Bug 1213126 - Enable layout.css.prefixes.webkit by default is a very important step. Thanks Daniel Holbert for this Christmas gift. This is NOT a full implementation of all WebKit prefixes, just the ones which are currently breaking the Web.
Why Do We Do That?
The usual and valid questions emerge:
- Don't we reward lazy developers?
- Does this destroy the standard process?
- Do we further entrench the (current) dominant position of WebKit?
- Do we risk following the same path as Opera, which abandoned Presto for Blink?
- Do we bloat the rendering engine code with redundancies?
All these questions have very rational answers, righteous ones which I deeply understand on a personal level. But Web developers have bosses and their own constraints. I have seen, many times, people who were very much oriented toward Web standards still having to make compromises because they were using a framework or a library that was not fully compliant or had its own bugs.
Then there is the daily reality of users. Browsers are extremely confusing for people. They don't know how a browser works, and they don't know (nor is it clear that they all should) why a rendering engine fails to render a site that is not properly coded.
In the end, users deserve the right experience. The same way we recover from Web sites with broken HTML, we need to make the effort to help users have a usable Web experience.
PS: Typos fixes and grammar tweaks are welcome.
In my first post, I tried talking about what I’m working on, but ended up talking about everything that led to my internship, starting about a year ago! That’s pretty cool, especially since it was the only way I could think of to talk about it, but it was kind of misleading. I had a couple of short paragraphs about Servo and Fetch, respectively, but I didn’t feel up to expanding on either since my post was already so long. So, I’m going to try that now! Today I’ll start writing about what I’ve done for Servo.
Servo At A Broad Level
There’s so much to Servo, I don’t assume I could adequately talk about it at a broad level beyond quoting what I said two weeks ago: “Servo is a web rendering engine, which means it handles loading web pages and other specifications for using the web. To contrast Servo to a full-fledged browser: a browser could use Servo for its web rendering, and then put a user interface on top of it, things like bookmarks, tabs, a url bar.”
What I do feel more confident talking about is the parts of Servo I’ve worked with. That’s in three areas: my first contribution to Servo as part of my Outreachy application (and its follow-up), working on the Fetch implementation in Servo, and changes I’ve made to parts of Servo that Fetch relies on. Today I’m going to talk about the first part, and the other two together in the next post.
My First Contribution
Like I briefly mentioned in my first post, I first got acquainted with Servo by adding more functionality to existing code dealing with sending data over a WebSocket, a well-defined web communication protocol. My task was to understand a small part of the specification (“If the argument is a Blob object”) and the existing implementation for “If the argument is a string”. A Blob can represent things like images or audio, but the definition is pretty open-ended, although most of what I had to do with it was pass it on to other functions that would handle it for me.
As with everything I’ve done in Servo, what first seemed like a simple goal (make a new function, similar to the existing code, that just handles a different kind of data) ended up becoming pretty involved, needing changes in other areas that I didn’t have the time to fully comprehend. A big part of this is that working on Servo is my first time using the programming language Rust, so I was learning both Rust and the WebSocket specification at the same time! Thankfully, I had a lot of help from other members of the Servo team on anything I didn’t understand, which was a lot.
Outside of slowly learning how to work with Rust, I spent most of my time on this talking about exactly how I should be implementing anything. I like to make sure I do the right thing, and the best way for me to do that is to understand the situation as well as I can, then present that and any questions I have to other people.
The Buffer Amount Dilemma
One instance of this was that the specification for WebSocket didn’t say what to do if the size of a Blob being sent is larger than the maximum size a WebSocket buffer (where data is temporarily stored while being passed on) can hold. A Blob can hold up to 2^64 bits of data, whereas the WebSocket buffer could only hold 2^32 bits. The “obvious” solution would be to handle it like I would if the buffer overflowed from being fed many Blobs smaller than 2^32 bits, but that seemed to me a different situation, so I asked around for advice on what to do.
The consensus was that, for now, I should 1) truncate the data to 2^32 bits and raise an error in response to the buffer overflow, and 2) open an issue at WHATWG (the authority behind the current HTML Standard) about the seeming gap in the specification, to find out what part of it, if any, needed updating.
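That interim rule can be sketched roughly in Rust. This is only a hedged illustration, not Servo's actual code: the function name `clamp_to_buffer` and the `limit_bits` parameter are invented for the example (in the real case the limit was 2^32 bits).

```rust
/// Sketch of the interim rule: data larger than the buffer limit is
/// truncated to fit, and the caller is told to raise an overflow error.
/// `clamp_to_buffer` and `limit_bits` are hypothetical names, not Servo's.
fn clamp_to_buffer(data: &[u8], limit_bits: u64) -> (&[u8], bool) {
    let size_bits = data.len() as u64 * 8;
    if size_bits > limit_bits {
        // Keep only as many whole bytes as the limit allows;
        // the `true` flag signals that an error should be raised.
        let keep = (limit_bits / 8) as usize;
        (&data[..keep], true)
    } else {
        // The data fits: pass it on unchanged, no error.
        (data, false)
    }
}
```

With the real 2^32-bit limit, only Blobs larger than 512 MiB would ever be truncated; a small limit makes the behavior easy to see in a test.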
And so, by approaching everything I was unsure about in this way, I slowly made progress on ironing out a decent function that covered previous and new needs, followed the specification as closely as possible, and had reasonable solutions for areas either not covered by the specification or posing challenges unique to programming in Rust.
Return Of The Buffers
Since all of this was meant to be part of my application to Outreachy, after it was accepted for Servo, I stopped thinking about it while waiting to hear back on my acceptance status. Between then and my being chosen as the intern for Servo, that issue I had opened at WHATWG had been discussed and resolved, with the decision being to let the WebSocket buffer hold up to 2^64 bits of data, meaning that there would be no need to intentionally lose any amount of data sent at one time, and preventing (incredibly) large files from instantly raising errors.
This change also meant that the code I had written earlier would need to be updated, since it was now incorrect! It made the most sense for me to do that, since it was my code, and my question that had changed the specification. I was grateful to be able to rectify a situation that had previously seemed to have no good answer, especially since it also made my code simpler.
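The simplification the spec change allowed can be sketched as follows, again as a hedged illustration with invented names (`SendBuffer`, `queue_blob`), not Servo's actual code: once the buffered amount is a 64-bit counter, any Blob size fits, and the truncation and error paths disappear.

```rust
/// Hypothetical sketch of the post-change behavior, not Servo's code:
/// with a u64 buffered amount, any Blob (up to 2^64 bits) can be queued
/// without truncating data or raising an overflow error.
struct SendBuffer {
    buffered_amount_bits: u64, // bits queued but not yet sent
}

impl SendBuffer {
    fn queue_blob(&mut self, blob_size_bits: u64) {
        // Saturating add: even pathological sizes can't wrap the counter.
        self.buffered_amount_bits =
            self.buffered_amount_bits.saturating_add(blob_size_bits);
    }
}
```

Compared with the interim rule, there is no flag to check and no data to drop; the counter simply grows.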
So, starting and ending over a period of a couple months, that’s the story of the first work I ever did on Servo!
End of second post.