W3C Team blog: OpenSocial Foundation Moves Standards Work to W3C Social Web Activity

W3C and the OpenSocial Foundation announced today that as of 1 January 2015, OpenSocial standards work and specifications beyond OpenSocial 2.5.1 will take place in the W3C Social Web Working Group, of which the OpenSocial Foundation is a founding member. The W3C Social Web Working Group extends the reach of OpenSocial into the enterprise, HTML5 and Indie Web communities.

In this post we talk about next steps for standards work at W3C and open source projects at Apache.

Note: As part of the transfer of OpenSocial specifications and assets to the W3C, requests to opensocial.org will be redirected to this blog post. For more information, please see the FAQ below.

Standards and Requirements at W3C

W3C launched its Social Web Activity in July 2014 with two groups:

  • The Social Web Working Group, which defines the technical standards and APIs to facilitate access to social functionality as part of the Open Web Platform.
  • The Social Interest Group, which coordinates messaging around social at the W3C and is formulating a broad strategy to enable social business and federation.

In addition, some OpenSocial work has moved (or will move) to existing W3C groups. Here is a summary of where you can get involved with different W3C standardization efforts and discussions.

Open Source Projects at Apache Foundation

In addition to the several leading commercial enterprise platforms that use OpenSocial, the Apache Software Foundation hosts two active and ongoing projects that serve as reference implementations for OpenSocial technology:

  • Apache Shindig is the reference implementation of OpenSocial API specifications, versions 1.0.x and 2.0.x, a standard set of Social Network APIs that includes Profiles, Relationships, Activities, Shared Applications, Authentication, and Authorization.
  • Apache Rave is a lightweight, extensible, open-standards-based platform for using, integrating and hosting OpenSocial and W3C Widget related features, technologies and services. It also aims to provide strong context-aware personalization, collaboration and content-integration capabilities, a high-quality out-of-the-box installation, and easy integration into other platforms and solutions.

FAQ

Note: We will add to this FAQ over time as questions arise. Please send questions to public-socialweb-comments@w3.org

Why is OpenSocial Foundation closing?

OpenSocial Foundation feels that the community will have a better chance of realizing an open social web through discussions at a single organization, and the OpenSocial Foundation board believes that working as an integrated part of W3C will help reach more communities that will benefit from open social standards.

What does it mean that OpenSocial Foundation is closing?

OpenSocial will no longer exist as a separate legal entity, but work will continue within the W3C Social Web Activity.

What will happen to development of the OpenSocial specification?

Development will continue within the Social Web Working Group.

What will happen to development of the reference implementations Apache Shindig and Rave?

Development will continue within the Apache Software Foundation.

Where do I go if I have questions about OpenSocial?

Members of the OpenSocial Community will be actively involved in the Social Web Working Group.

Will older versions of OpenSocial specifications remain available?

Yes, they will remain available on GitHub.

Will discussion archives be preserved?

Discussion archives are in Google Groups. They will remain in place for as long as those groups are allowed to remain.

Planet WebKit: Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept both for Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not-so-small number of pragmatic limitations and hacks to get to a multi-tab browser that can play YouTube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize, show and close notifications.
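
Handling that permission request from the API user's side might look something like this rough GJS sketch. The signal and request type names (permission-request, NotificationPermissionRequest) are assumptions based on the description above, so check the WebKit2GTK documentation for the exact API; the URL is just a placeholder.

const Gtk = imports.gi.Gtk;
const WebKit2 = imports.gi.WebKit2;

Gtk.init(null);

const win = new Gtk.Window({ default_width: 800, default_height: 600 });
const view = new WebKit2.WebView();

// Grant (or deny) Web Notifications when a page asks for permission.
view.connect('permission-request', function (webView, request) {
    if (request instanceof WebKit2.NotificationPermissionRequest) {
        request.allow();   // or request.deny();
        return true;       // handled; don't fall through to the default handler
    }
    return false;          // let WebKit handle other kinds of permission requests
});

win.add(view);
win.connect('destroy', Gtk.main_quit);
win.show_all();
view.load_uri('https://example.org/');
Gtk.main();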

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The showstopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find one. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia


Planet Mozilla: BitTorrent's Project Maelstrom is 'Firecloud' on steroids

Earlier this week, BitTorrent, Inc. announced Project Maelstrom. The idea is to apply BitTorrent technologies and approaches to more of the web.

Project Maelstrom

Note: if you can’t read the text in the image, it says: “This is a webpage powered by 397 people + You. Not a central server.” So. Much. Win.

The blog post announcing the project doesn’t have lots of details, but a follow-up PC World article includes an interview with a couple of the people behind it.

I think the key thing comes in this response from product manager Rob Velasquez:

We support normal web browsing via HTTP/S. We only add the additional support of being able to browse the distributed web via torrents

This excites me for a couple of reasons. First, I’ve thought on-and-off for years about how to build a website that’s untakedownable. I’ve explored DNS based on the technology powering Bitcoin, experimented with the PirateBay’s now-defunct blogging platform Baywords, and explored the dark underbelly of the web with sites available only through Tor.

Second, Vinay Gupta and I almost managed to get a project off the ground called Firecloud. This would have used a combination of interesting technologies such as WebRTC, HTML5 local storage and DHT to provide distributed website hosting through a Firefox add-on.

I really, really hope that BitTorrent turn this into a reality. I’d love to be able to host my website as a torrent. :-D

Update: People pay more attention to products than technologies, but I’d love to see Webtorrent get more love/attention/exposure.


Comments? Questions? Email me: doug@mozillafoundation.org

W3C Team blog: This week: Fire TV WebApp kit, Mike[tm] Smith on HTML validation, etc.

This is the 5-12 December 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)

88 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

Bruce Lawson: Reading List

Reading List ninety-nine. With a flake in it.

HTML5 Doctor: HTML5 – Check it Before you Wreck it with Mike[tm] Smith

The W3C’s Mike[tm] Smith (AKA @sideshowbarker) is the man with his head in the W3C validation markup checking tool source code; he makes the magic happen.  Questions were asked for the HTML5 Doctor reader’s delight and edification.

First off, tell us a bit about what you do and what you work on

mike smith with phone and beer

Mike[tm] Smith – Deputy Director @W3C – permissive work mode edition


I don’t work. I’m an old-world boulevardier. I drink tea with my pinky extended and I only expend effort on anything if it somehow amuses me to do so. For the last few years it’s amused me to spend time working on software for helping people check whether or not their documents meet certain requirements in the HTML spec.

nu markup checker

What’s the difference between DTD and schema based checking?


DTDs are chiseled into stone tablets. And so for processing they require stone-tablet-aware toolchains. Sadly however the Web was not built on stone-tablet processing so we’ve had to look around for other solutions. In the case of document-conformance checking we’ve turned to using things like RelaxNG schemas that while lacking the quaintness of DTDs are a far more powerful means for expressing certain kinds of document-conformance requirements. So it’s a tradeoff.

W3C Validator

What’s the difference between conformance checking and validation?


Validation is an oldthink word. Use it for when you want to make people think you’re a sort of fossil or relic of some earlier time. Kind of like the word groovy or XHTML.

Lots of people don’t know this but the etymology of that word validation is from the days when our ancestors were mostly pig thieves and they were given actual badges for spelling their own names correctly, and usually a pat on the back too. Good job!

Document-conformance checking is the current party-approved goodthink way of talking about looking for problems in HTML documents. And do note that we call it document conformance and not authoring conformance, and we talk about conformance requirements for documents, not conformance requirements for authors. That’s because you as an author are a human being; a technical specification can’t place requirements on you, it can only place requirements on documents you create. And related tools don’t evaluate you as a human being for conformance to particular technologies; instead they just evaluate the documents you create.

Anyway, document-conformance checking has the nasty ugly part conformance which is a hurtful word really but you gotta look past that part and only pay attention to the word checking which is mostly a happy helpful type of word.

So I call the tool at validator.w3.org/nu the Nu Markup Checker instead of the blah-blah-Validator because I want to spread the happiness of the word checker in the sense of doing something actually useful for people instead of just giving them a pat on the back. It’s an automated thing which checks stuff for you that’d otherwise be really tedious for you to check manually. So it helps you. Maybe it should be called the Nu Help-You Checker.

As far as what it checks, it looks for unintentional mistakes you might have made: misspelled element names or attribute values where some stray character snuck in. That kind of stuff. And it alerts you about those sorts of things so you can fix them.

It also looks at other kinds of requirements defined in the HTML spec designed to help you not make broken HTML documents and web applications that aren’t going to work the way they should or that might otherwise result in degraded user experience. Some of those requirements are gray-area judgment calls, but it’s helpful to have a common baseline-ish set of those kinds of requirements actually defined in a spec.

Other words for what this tool does that aren’t yet party-approved goodthink are words like linter and static-analysis tool. But the difference with this thing I work on is that the linting rules are actually defined in a spec, instead of being something that, say, Doug Crockford (to pick a name at random) woke up one morning and just pulled out of his hat.

What’s the difference between errors and warnings?


An error is for something that’s clearly a mistake, like a misspelled element name or an attribute value that has some crazy garbage characters or whatever that showed up somehow and shouldn’t be there.

But an error is also for some cases of stuff that the HTML spec for other reasons just says, this must be an error. The spec explains the reasons for some of those other things being defined as errors; basically it’s just that they can create certain kinds of problems that are not always easy to anticipate.

There’s a long list of those kinds of problems that are defined as errors but some examples include markup cases that are bad for accessibility, usability, interoperability, security, or maintainability—or that can result in poor performance, or that might cause your scripts to fail in ways that are hard to troubleshoot.

Along with those some cases are defined as errors because they can cause you to run into quirks in HTML parsing and error-handling behavior—so that, say, you’d end up with some unintuitive, unexpected result in the DOM.

Finally there are some other errors defined for markup cases that just don’t make any sense and would most likely only be used by mistake, or cases that clash with default styling behavior.

Warnings, on the other hand, are for things that the spec doesn’t define as outright errors but that still might be problems. Sometimes warnings get added to the checker experimentally, as a way to test out whether they’re useful to you or not. (That’s part of the reason the checker continues to be labelled as experimental.)

Is there a use in using HTML4/XHTML doctypes?


There’s absolutely no reason whatsoever for using an HTML4 doctype. Just put the <!DOCTYPE HTML> doctype on your HTML documents and make sure they’re served as text/html and be done with it. Move on with your life. But if for some reason you really want to serve your documents as application/xhtml+xml you don’t have to put an XHTML doctype on them—you can still just use <!DOCTYPE HTML> like the rest of us. (But you probably don’t want to be using application/xhtml+xml and XHTML anyway. Again, lose the haircut—there’s a whole world out there waiting for you.)

What are the pitfalls for users of HTML checking/validation tools?


I guess the same pitfalls as you’d run into asking some really helpful and really thoughtful person for help with anything: They’ll actually make an effort to help you instead of just shining you on or giving you a this-pig-thief-can-spell-his/her-own-name badge. The help they give you may not always be what you want to hear, or it may be some advice that you already know yourself you can safely ignore. Such is life.

What are the upsides?


The upsides are that you catch mistakes you might have otherwise missed.

There are differences between W3C HTML and WHATWG HTML conformance rules, how so?


Some things defined as errors are judgment calls. Specs are written by human people, not machines. Different people can make different judgment calls—“reasonable people can disagree” or whatever other less trite way there is for expressing that sentiment. If you walk around this world expecting complete consistency from mankind everywhere you’re going to stumble onto a few serious disappointments now and then.

What if I find an error in the W3C HTML validator/checker?


Report it at w3.org/Bugs/Public/enter_bug.cgi?product=Nu%20Markup%20Checker or at bugzilla.validator.nu or github.com/validator/validator/issues.

Can I run a local copy of the W3C HTML conformance checker?

Yeah. The best way to do that is to download a release from github.com/validator/validator/releases and, for using that, to follow the instructions at validator.github.io/validator and at validator.github.io/validator/#web-based-checking.

And if you use grunt, check out github.com/jzaefferer/grunt-html which is a grunt plugin for HTML checking that uses code from github.com/validator/validator as its backend.

Any tips/advice for sane using of HTML conformance checking tools?

Is this some kind of trick question? I guess the only advice I’d give is that you should remember that tools are machines, and you are not a machine. (Assuming this question wasn’t asked by a machine.) So when evaluating error and warning messages that you get from any HTML checker, use your own human judgment. And if your judgment is that a particular checker message isn’t really helping you, then just ignore it. This isn’t a popularity contest, you won’t be hurting anybody’s feelings.

Or better yet if you care to take the time, use the “Message filtering” feature at validator.w3.org/nu which lets you persistently ignore any checker messages you find unhelpful or annoying or just don’t want to see any more.

Currently the W3C HTML checking tools don’t check/throw errors for SVG1.1 and some web component attributes, any plans to add support?


Yeah. That stuff is on my TODO list. I’ll get to it eventually.

What’s the deal with unknown attribute errors? Many JS libraries use them; what should developers do?

The problem is that the checker is a machine and it’s not smart enough to tell the difference between some attribute with an unknown name that you’re using on purpose and some attribute whose name you misspelled by mistake. If we just told the checker to let through all unknown attribute names without checking, then we wouldn’t be able to help you catch the case where you misspelled something by mistake.

The workaround is that if you’re using some unknown attribute name on purpose, then exploit the “Message filtering” option at validator.w3.org/nu to tell the checker you don’t want to see messages about that particular attribute any more. And they’ll go away.

Does the validator check for use of ARIA? If so, what is it checking?

Yes it checks for errors in the use of ARIA markup in HTML documents, including now some limited checking for errors in use of ARIA with SVG elements in HTML documents and also in standalone SVG documents.

For HTML elements it’s checking against requirements in the HTML spec itself but that are now also specified at [ARIA in HTML] – specs.webplatform.org/html-aria/webspecs/master as a separate standalone document, with the plan that for ARIA, the HTML spec can soon be updated to just reference the ARIA requirements in that document.

For SVG elements, my plan’s to soonishly update the checker to follow a similar standalone document at [Web developer rules for use of ARIA attributes on SVG1.1 elements] – specs.webplatform.org/SVG1.1-ARIA/webspecs/master

ARIA checking in pre-HTML5 documents: what’s the deal? Will/should/can it be supported?


Nobody should be using anything but “HTML5”, and we shouldn’t be trying to help them do it. HTML5 is just HTML. We outgrew the whole version thing a long time ago now. <!DOCTYPE HTML> will be 10 years old soon. Common sense won. Here in the 21st century we can’t really help anybody who’s putting an HTML4 or whatever ancient doctype on a new document. That’s a lost cause. Certainly we’d not be helping by providing some way for them to do that and to put ARIA markup into their documents and then we tell them that’s OK. That’s called enabling behavior, in clinical terms.

When using the W3C HTML validator to check my HTML5 I see the following:
“The validator checked your document with an experimental feature: HTML5 Conformance Checker. …” Does this mean there is a more stable validation tool I should be using?

The idea of stable doesn’t really apply here. But yeah there is another tool you should be using. You should use validator.w3.org/nu directly. It has more features and is better in every possible way.

That tool is an experimental tool, but in a good sense. And the plan is for it to always remain that way. The validator.w3.org/nu/about.html page tries to help set the right expectations about what the goals are and what experimental means:

The Nu Markup Checker is an experimental tool and its behavior remains subject to change. In particular, because new types of error checks continue to be actively added to the checker, there is no guarantee provided that if the checker reports zero errors for a particular document at one point in time, it will report zero errors for that same document at some later point in time.

The Nu Markup Checker should not be used as a means to attempt to unilaterally enforce pass/fail conformance of documents to any particular specifications; it is intended solely as a checker, not as a pass/fail certification mechanism.

Web components checking?


If you mean checking custom elements, my answer is that custom elements aren’t yet widely supported in multiple browser engines, so I don’t think it’s useful for me or anybody else to put too much time and energy yet into figuring out how to deal with checker behavior for documents that contain custom elements.

If/when custom elements do ever become widely supported across more browser engines, then we should figure out how to deal with checker behavior for them. That’s actually going to be complicated and messy to do—but that’s the case for a lot of stuff in the Web platform and I’m sure we’ll figure out something together that we can all live with, just as we all have together for lots of other complicated Web-platform stuff.

What is the difference between the w3c validator and the nu markup checker?

The legacy W3C validator is at validator.w3.org and its core is built on old stuff like Perl and DTDs and SGML and old specs from the 20th century like HTML4 and nobody is actively maintaining its code at this point. The only good news about it is that for checking any document with a modern <!DOCTYPE html> doctype, it actually uses the backend from the Nu Markup Checker to check the document, and then just passes back all the messages from that.

The Nu Markup Checker is at validator.w3.org/nu and it’s built on slightly less old stuff like Java and RelaxNG and on specs from the current century like “HTML5” and has the big advantage of actually being actively maintained. And it has more features, like the “Message filtering” feature that lets you filter out messages you don’t want to see.

Checking the source code versus the HTML DOM output: is one better? Any issues?

I guess there are good use cases for both. A limitation with checking the DOM is that at validator.w3.org/nu itself we can’t really provide a way to have it go grab the DOM of some arbitrary HTML document on the Web and then check that. There needs to be a browser engine somewhere in between to actually parse the document into a DOM representation in memory and execute your JavaScript on that and then serialize that resulting DOM back out to a text representation you can feed to a checker. But if you have an HTML document you want to check and you actually open it in your browser you can then use something like the bookmarklet at codepen.io/stevef/full/LasCJ to send the serialized DOM from that document to validator.w3.org/nu for checking.
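
As a rough illustration, the core of that kind of bookmarklet boils down to something like the minimal sketch below. It assumes the checker's documented POST interface (?out=json, with the document as the request body) and that the service's CORS policy permits the request from wherever you run it; otherwise post a form as the linked bookmarklet does.

var serialized = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;

fetch('https://validator.w3.org/nu/?out=json', {
  method: 'POST',
  headers: { 'Content-Type': 'text/html; charset=utf-8' },
  body: serialized
}).then(function (response) {
  return response.json();
}).then(function (report) {
  // Each message has a type ("error", "info", ...) and a human-readable message.
  report.messages.forEach(function (m) {
    console.log(m.type + ': ' + m.message);
  });
});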

Bonus Questions

Should pre-HTML5 doctypes be flagged with a warning, in the W3C Validator, now HTML5 is a REC?


I dunno, maybe. On the one hand there are gazillions of existing documents out there with older doctypes that are working just fine the way they are now, so no reason to screw with them. On the other hand, if somebody’s actually taking time to run one of those documents through an HTML checker, then they may be doing that for some good reason and maybe we would be helping by alerting them to the obsolete doctype in there so they can go in and update it.

It is conforming in HTML5 for the <a> element to contain block/flow content, what changed (if anything) in browsers?


Nothing changed in browsers. Browsers have always supported that and it doesn’t cause any problems and we’d not be helping anybody by making it an error. So we made it a non-error.

On WCAG 2.0 Parsing Criteria

Our client’s accessibility consultant is telling them that they must have valid HTML in order to be WCAG 2.0 compliant. Is that true? – Shoptalk Show


I have no idea. I’m not a WCAG expert and I’ve never even read the WCAG 2.0 spec. And the HTML checker is not a WCAG checker. Or at least it doesn’t claim to be.

Steve Faulkner
WCAG 2.0 has a success criterion that requires markup documents have no parsing errors. The nu markup checker flags parsing errors along with other machine checkable HTML conformance criteria. We have created a WCAG 2.0 Parsing error bookmarklet that filters the results from the nu markup checker to only display parsing errors/warnings.

Note: this bookmarklet is experimental and not the law, and even when filtered some of the errors/warnings displayed may not have any practical negative effect on the accessibility of the document. It is provided as an aid to filter out some of the irrelevant (to WCAG) issues only. Mike and I have talked about providing the filter as a built-in feature of the nu markup checker, so we hope to make that happen.

Thanks Mike!

Pro tip – always check your HTML with Rock’n’Roll playing… LOUD!

Video: http://www.youtube.com/embed/FVbVCZw5BPQ?rel=0

More questions people?

HTML5 – Check it Before you Wreck it with Mike[tm] Smith originally appeared on HTML5 Doctor on December 9, 2014.

IEBlog: Status roadmap update: srcset, <main> element, and date inputs in development

Today we’re updating our platform roadmap with a few more features that we’ve started working on:

Responsive Images: image srcset

To take advantage of high resolution screens, it’s desirable to provide higher resolution image resources. While today’s devices come with all sorts of different resolution screens, it’s important to be able to provide the right resource for the device’s capabilities for optimal experience and performance. We have therefore begun work on implementing the srcset attribute for image elements, enabling alternate image resources based on the device’s DPI scaling factor:

Code sample: https://gist.github.com/jacobrossi/06cf2d47dd7d87994850
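
As an illustrative stand-in for the linked code sample (the file names here are made up), the same idea expressed from script; the 2x resource is chosen on displays with a DPI scaling factor of 2:

// Equivalent markup: <img src="logo.png" srcset="logo.png 1x, logo-2x.png 2x" alt="Logo">
var img = document.createElement('img');
img.src = 'logo.png';                                       // 1x fallback for older browsers
img.setAttribute('srcset', 'logo.png 1x, logo-2x.png 2x');  // pixel density descriptors
img.alt = 'Logo';
document.body.appendChild(img);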

We’re starting with srcset pixel density descriptors for the broadest interoperability, but we’re looking at other features like width/height descriptors and the <picture> element for possible implementation in the future.

Date input controls

Inputting dates in a form is common practice on the Web. We’re beginning the implementation of a variety of new input controls for basic date picking. These controls will use the standard HTML5 types and provide UI that’s friendly to your input device, like our other HTML5 input controls.

Code sample: https://gist.github.com/jacobrossi/e49fb02075dab1c40e5e
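
Again as an illustrative stand-in for the linked sample: the controls are ordinary HTML5 input types, so you can create and feature-detect them the usual way.

// Equivalent markup: <input type="date">
var field = document.createElement('input');
field.type = 'date';                  // phase 1 also covers 'week' and 'month'
if (field.type !== 'date') {
  // No support yet: the browser fell back to a plain text input.
}
document.body.appendChild(field);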

This work is “phase 1” of 2. This first phase includes date, week, and month controls. The second phase (not yet in development) includes time related inputs. We’ll update you when work on phase 2 begins.

<main> element

We introduced a number of HTML5 semantic elements in IE9. We’re now adding support for the <main> element, which represents the main content of the document or application.

Code sample: https://gist.github.com/jacobrossi/fe94491fdd226f6977c8

As always, check status.modern.IE for the latest on our development roadmap, vote for features on User Voice, and give us feedback on how we’re doing at @IEDevChat.

Jacob Rossi
Senior Program Manager
@jacobrossi

Planet Mozilla: Implementing the Service Worker Cache API in Gecko

For the last few months I’ve been heads down, implementing the Service Worker Cache API in gecko. All the work to this point has been done on a project branch, but the code is finally reaching a point where it can land in mozilla-central. Before this can happen, of course, it needs to be peer reviewed. Unfortunately this patch is going to be large and complex. To ease the pain for the reviewer I thought it would be helpful to provide a high-level description of how things are put together.

If you are unfamiliar with Service Workers and its Cache API, I highly recommend reading the following excellent sources:

Building Blocks

The Cache API is implemented in C++ based on the following Gecko primitives:

  • WebIDL DOM Binding

    All new DOM objects in gecko now use our new WebIDL bindings.

  • PBackground IPC

    PBackground is an IPC facility that connects a child actor to a parent actor. The parent actor is always in the parent process. PBackground, however, allows the child actor to exist in either a remote child content process or within the same parent process. This allows us to build services that support both electrolysis (e10s) and our more traditional single process model.

    Another advantage of PBackground is that the IPC calls are handled by a worker thread rather than the parent process main thread. This helps avoid stalls due to other main thread work.

  • Quota Manager

    Quota Manager is responsible for managing the disk space used by web content. It determines when quota limits have been reached and will automatically delete old data when necessary.

  • SQLite

    mozStorage is an API that provides access to an SQLite database.

  • File System

    Finally, the Cache uses raw files in the file system.

Alternatives

We did consider a couple alternatives to implementing a new storage engine for Cache. Mainly, we thought about using the existing HTTP cache or building on top of IndexedDB. For various reasons, however, we chose to build something new using these primitives instead. Ultimately it came down to the Cache spec not quite lining up with these solutions.

For example, the HTTP cache has an optimization where it only stores a single response for a given URL. In contrast, the Cache API spec requires that multiple Responses can be stored per-URL based on VARY headers, multiple Cache objects, etc. In addition, the HTTP cache doesn’t use the quota management system and Cache must use the quota system.

IndexedDB, on the other hand, is based on structured cloning which doesn’t currently support streaming data. Given that Responses could be quite large and come in from the network slowly, we thought streaming was a priority to reduce the amount of required memory.

Also, while not a technical issue, IndexedDB was undergoing a significant rewrite at the time the Cache work began. We felt that this would delay the Cache implementation.

10,000-Foot View

With those primitives in mind, the overall structure of the Cache implementation looks like this:

Here we see from left-to-right:

  • JS Script

    Web content running in a JavaScript context on the far left. This could be in a Service Worker, a normal Web Worker, or on the main thread.

  • DOM Object

    The script calls into the C++ DOM object using the WebIDL bindings. This layer does some argument validation and conversion, but is mostly just a pass-through to the other layers. Since most of the Cache API is asynchronous, the DOM object also returns a Promise. A unique RequestId is passed through to the Cache backend and is later used to find the Promise on completion (a toy sketch of this bookkeeping follows the list below).

  • Child and Parent IPC Actors

    The connection between the processes is represented by a child and a parent actor. These have a one-to-one correlation. In the Cache API request messages are sent from the child-to-parent and response messages are sent back from the parent-to-child. All of these messages are asynchronous and non-blocking.

  • Manager

    This is where things start to get a bit more interesting. The Cache spec requires each origin to get its own, unique CacheStorage instance. This is accomplished by creating a separate per-origin Manager object. These Manager objects can come and go as DOM objects are used and then garbage collected, but there is only ever one Manager for each origin.

  • Context

    When a Manager has a disk operation to perform it first needs to take a number of stateful steps to configure the QuotaManager properly. All of this logic is wrapped up in what is called the Context. I’ll go into more detail on this later, but suffice it to say that the Context handles setting up the QuotaManager and then scheduling Actions to occur at the right time.

  • Action

    An Action is essentially a command object that performs a set of IO operations within a Context and then asynchronously calls back to the Manager when they are complete. There are many different Action objects, but in general you can think of each Cache method, like match() or put(), having its own Action.

  • File System

    Finally, the Action objects access the file system through the SQLite database, file streams, or the nsIFile interface.
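
As a toy illustration of the RequestId bookkeeping mentioned in the DOM Object item above (the real code is C++ in the binding layer, and the names here are made up), the pattern is essentially:

var nextRequestId = 0;
var pendingRequests = new Map();

// Called by the DOM object for each asynchronous Cache API call.
function callBackend(sendMessage) {
  var requestId = nextRequestId++;
  return new Promise(function (resolve, reject) {
    pendingRequests.set(requestId, { resolve: resolve, reject: reject });
    sendMessage(requestId);           // fire the asynchronous IPC request
  });
}

// Called when the backend's completion message arrives.
function onBackendResponse(requestId, error, result) {
  var pending = pendingRequests.get(requestId);
  pendingRequests.delete(requestId);
  if (error) {
    pending.reject(error);
  } else {
    pending.resolve(result);
  }
}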

Closer Look

Let’s take a closer look at some of the more interesting parts of the system. Most of the action takes place in the Manager and Context, so let’s start there.

Manager

As I mentioned above, the Cache spec indicates each origin should have its own isolated caches object. This maps to a single Manager instance for all CacheStorage and Cache objects for scripts running in the same origin:

It’s important that all operations for a single origin are routed through the same Manager because operations in different script contexts can interact with one another.

For example, let’s consider the following CacheStorage method calls being executed by scripts running in two separate child processes.

  1. Process 1 calls caches.open('foo').
  2. Process 1’s promise resolves with a Cache object.
  3. Process 2 calls caches.delete('foo').

At this point process 1 has a Cache object that has been removed from the caches CacheStorage index. Any additional calls to caches.open('foo') will create a new Cache object.

But how should the Cache returned to Process 1 behave? It’s a bit poorly defined in the spec, but the current interpretation is that it should behave normally. The script in process 1 should continue to be able to access data in the Cache using match(). In addition, it should be able to store a value using put(), although this is somewhat pointless if the Cache is not in caches anymore. In the future, a caches.put() call may be added to let a Cache object be re-inserted into the CacheStorage.

In any case, the key here is that the caches.delete() call in process 2 must understand that a Cache object is in use. It cannot simply delete all the data for the Cache. Instead we must reference count all uses of the Cache and only remove the data when they are all released.

The Manager is the central place where all of this reference tracking is implemented and these races are resolved.

A similar issue can happen with cache.match(req) and cache.delete(req). If the matched Response is still referenced, then the body data file needs to remain available for reading. Again, the Manager handles this by tracking outstanding references to open body files. This is actually implemented by using an additional actor called a StreamControl which will be shown in the cache.match() trace below.
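
In script terms, the situation the Manager has to support looks roughly like this (an illustrative sketch, assuming cache came from caches.open() and '/logo.png' was previously stored):

var saved;
cache.match('/logo.png').then(function (response) {
  saved = response;                    // the Response, and its body stream, stay referenced
  return cache.delete('/logo.png');    // remove the entry from the Cache
}).then(function () {
  return saved.blob();                 // must still succeed; the body file is only removed
                                       // once all outstanding references are released
});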

Context

There are a number of stateful rules that must be followed in order to use the QuotaManager. The Context is designed to implement these rules in a way that hides the complexity from the rest of the Cache as much as possible.

Roughly the rules are:

  1. First, we must extract various information from the nsIPrincipal by calling QuotaManager::GetInfoFromPrincipal() on the main thread.
  2. Next, the Cache must call QuotaManager::WaitForOpenAllowed() on the main thread. A callback is provided so that we can be notified when the open is permitted. This callback occurs on the main thread.
  3. Once we receive the callback we must next call QuotaManager::EnsureOriginIsInitialized() on the QuotaManager IO thread. This returns a pointer to the origin-specific directory in which we should store all our files.
  4. The Cache code is now free to interact with the file system in the directory retrieved in the last step. These file IO operations can take place on any thread. There are some small caveats about using QuotaManager specific APIs for SQLite and file streams, but for the most part these simply require providing information from the GetInfoFromPrincipal() call.
  5. Once all file operations are complete we must call QuotaManager::AllowNextSynchronizedOp() on the main thread. All file streams and SQLite database connections must be closed before making this call.

The Context object functions like a reference counted RAII-style object. It automatically executes steps 1 to 3 when constructed. When the Context object’s reference count drops to zero, its destructor runs and it schedules the AllowNextSynchronizedOp() to run on the main thread.

Note, while it appears the GetInfoFromPrincipal() call in step 1 could be performed once and cached, we actually can’t do that. Part of extracting the information is querying the current permissions for the principal. It’s possible these can change over time.

In theory, we could perform the EnsureOriginIsInitialized() call in step 3 only once if we also implemented the nsIOfflineStorage interface. This interface would allow the QuotaManager to tell us to shutdown when the origin directory needs to be deleted.

Currently the Cache does not do this, however, because the nsIOfflineStorage interface is expected to change significantly in the near future. Instead, Cache simply calls the EnsureOriginIsInitialized() method each time to re-create the directory if necessary. Once the API stabilizes the Cache will be updated to receive all such notifications from QuotaManager.

An additional consequence of not getting the nsIOfflineStorage callbacks is that the Cache must proactively call QuotaManager::AllowNextSynchronizedOp() so that the next QuotaManager client for the origin can do work.

Given the RAII-style life cycle, this is easily achieved by simply having the Action objects hold a reference to the Context until they complete. The Manager has a raw pointer to the Context that is cleared when it destructs. If there is no more work to be done, the Context is released and step 5 is performed.

Once the new nsIOfflineStorage API callbacks are implemented the Cache will be able to keep the Context open longer. Again, this is relatively easy and simply needs the Manager to hold a strong reference to the Context.

Streams and IPC

Since mobile platforms are a key target for Service Workers, the Cache API needs to be memory efficient. RAM is often the most constraining resource on these devices. To that end, our implementation should use streaming whenever possible to avoid holding large buffers in memory.

In gecko this is essentially implemented by a collection of classes that implement the nsIInputStream interface. These streams are pretty straightforward to use in normal code, but what happens when we need to serialize a stream across IPC?

The answer depends on the type of stream being serialized. We have a couple existing solutions:

  • Streams created for a flat memory buffer are simply copied across.
  • Streams backed by a file have their file descriptor dup()’d and passed across. This allows the other process to read the file directly without any immediate memory impact.

Unfortunately, we do not have a way to serialize an nsIPipe across IPC without completely buffering it first. This is important for Cache, because this is the type of stream we receive from a fetch() Response object.

To solve this, Kyle Huey is implementing a new CrossProcessPipe that will send the data across the IPC boundary in chunks.

In this particular case we will be sending all the fetched Response data from the parent-to-child when the fetch() is performed. If the Response is passed to Cache.put(), then the data is copied back to the parent.

You may be asking, “why do you need to send the fetch() data from the child to the parent process when doing a cache.put()? Surely the parent process already has this data somewhere.”

Unfortunately, this is necessary to avoid buffering potentially large Response bodies in the parent. It’s imperative that the parent process never runs out of memory. One day we may be able to open the file descriptor in the parent, dup() it to the child, and then write the data directly from the child process, but this is not possible with the current Quota Manager.

Disk Schema

Finally, that brings us to a discussion of how the data is actually stored on disk. It basically breaks down like this:

  • Body data for both Requests and Responses are stored directly in individual snappy compressed files.
  • All other Request and Response data are stored in SQLite.

I know some people discourage using SQLite, but I chose it for a few reasons:

  1. SQLite provides transactional behavior.
  2. SQLite is a well-tested system with known caveats and performance characteristics.
  3. SQL provides a flexible query engine to implement and fine tune the Cache matching algorithm.

In this case I don’t think serializing all of the Cache metadata into a flat file, as suggested by that wiki page, would be a good solution here. In general, only a small subset of the data will be read or written on each operation. In addition, we don’t want to require reading the entire dataset into memory. Also, for expected Cache usage, the data should typically be read-mostly with fewer writes over time. Data will not be continuously appended to the database. For these reasons I’ve chosen to go with SQLite while understanding the risks and pitfalls.

I plan to mitigate fragmentation by performing regular maintenance. Whenever a row is deleted from or inserted into a table a counter will be updated in a flat file. When the Context opens it will examine this counter and perform a VACUUM if it’s larger than a configured constant. The constant will of course have to be fine-tuned based on real world measurements.

Simple marker files will also be used to note when a Context is open. If the browser is killed with a Context open, then a scrubbing process will be triggered the next time that origin accesses caches. This will look for orphaned Cache and body data files.

Finally, the bulk of the SQLite-specific code is isolated in two classes: DBAction.cpp and DBSchema.cpp. If we find SQLite is not performant enough, it should be straightforward to replace these files with another solution.

Detailed Trace

Now that we have the lay of the land, let’s trace what happens in the Cache when you do something like this:

// photo by leg0fenris: https://www.flickr.com/photos/legofenris/
var troopers = 'blob:https://mdn.github.io/6d4a4e7e-0b37-c342-81b6-c031a4b9082c'

var legoBox;
Promise.all([
  fetch(troopers),
  caches.open('legos')
]).then(function(results) {
  var response = results[0];
  legoBox = results[1];
  return legoBox.put(troopers, response);
}).then(function() {
  return legoBox.match(troopers);
}).then(function(response) {
  // invade rebel base
});

While it might seem the first Cache operation is caches.open(), we actually need to trace what happens when caches is touched. When the caches attribute is first accessed on the global we create the CacheStorage DOM object and IPC actors.

I’ve numbered each step in order to show the sequence of events. These steps are roughly:

  1. The global WebIDL binding for caches creates a new CacheStorage object and returns it immediately to the script.
  2. Asynchronously, the CacheStorage object creates a new child IPC actor. Since this may not complete immediately, any requests coming in will be queued until the actor is ready. Of course, since all the operations use Promises, this queuing is transparent to the content script.
  3. The child actor in turn sends a message to the parent process to create a corresponding parent actor. This message includes the nsIPrincipal describing the content script’s origin and other identifying information.
  4. Before permitting any actual work to take place, the principal provided to the actor must be verified. For various reasons this can only be done on the main thread. So an asynchronous operation is triggered to examine the principal and any CacheStorage operations coming in are queued.
  5. Once the principal is verified we return to the PBackground worker thread.
  6. Assuming verification succeeded, then the origin’s Manager can now be accessed or created. (This is actually deferred until the first operation, though.) Any pending CacheStorage operations are immediately executed.

Now that we have the caches object we can get on with the open(). This sequence of steps is more complex:

There are a lot more steps here. To avoid making this blog post any more boring than necessary, I’ll focus on just the interesting ones.

As with the creation trace above, steps 1 to 4 are basically just passing the open() arguments across to the Manager. Your basic digital plumbing at work.

Steps 5 and 6 make sure the Context exists and schedules an Action to run on the IO thread.

Next, in step 7, the Action will perform the actual work involved. It must find the Cache if it already exists or create a new Cache. This basically involves reading and writing an entry in the SQLite database. The result is a unique CacheId.

Steps 8 to 11 essentially just return the CacheId back to the actor layer.

If this was the last Action, then the Context is released in step 10.

At this point we need to create a new parent actor for the CacheId. This Cache actor will be passed back to the child process where it gets a child actor. Finally a Cache DOM object is constructed and used to resolve the Promise returned to the JS script in the first step. All of this occurs in steps 12 to 17.

On the off chance you’re still reading this section, the script next performs a put() on the cache:

This trace looks similar to the last one, with the main difference occurring in the Action on the right. While this is true, it’s important to note that the IPC serialization in this case includes a data stream for the Response body. So we might be creating a CrossProcessPipe actor to copy data across in chunks.

With that in mind the Action needs to do the following:

  • Stream body data to files on disk. This happens asynchronously on the IO thread. The Action and the Context are kept alive this entire time.
  • Update the SQLite database to reflect the new Request/Response pair with a file name pointer to the body.

All of the steps back to the child process are essentially just there to indicate completion. The put() operation resolves to undefined in the success case.

Finally the script can use match() to read the data back out of the Cache:

In this trace the Action must first query the SQLite tables to determine if the Request exists in the Cache. If it does, then it opens a stream to the body file.

It’s important to note, again, that this is just opening a stream. The Action is only accessing the file system directory structure and opening a file descriptor to the body. It’s not actually reading any of the data for the body yet.

Once the matched Response data and body file stream are passed back to the parent actor, we must create an extra actor for the stream. This actor is then passed back to the child process and used to create a ReadStream.

A ReadStream is a wrapper around the body file stream. This wrapper will send a message back to the parent whenever the stream is closed. In addition, it allows the Manager to signal the stream that a shutdown is occurring and the stream should be immediately closed.

This extra call back to the parent process on close is necessary to allow the Manager to reference track open streams and hold the Context open until all the streams are closed.

The body file stream itself is serialized back to the child process by dup()’ing the file descriptor opened by the Action.

Ultimately the body file data is read from the stream when the content script calls Response.text() or one of the other body consumption methods.

TODO

Of course, there is still a lot to do. While we are going to try to land the current implementation on mozilla-central, a number of issues will need to be resolved in the near future.

  1. SQLite maintenance must be implemented. As I mentioned above, I have a plan for how this will work, but it has not been written yet.
  2. Stress testing must be performed to fine tune the SQLite schema and configuration.
  3. Files should be de-duplicated within a single origin’s CacheStorage. This will be important for efficiently supporting some expected uses of the Cache API. (De-duplication beyond the same origin will require expanded support from the QuotaManager and is unlikely to occur in the near future.)
  4. Request and Response clone() must be improved. Currently a clone() call results in the body data being copied. In general we should be able to avoid almost all copying here, but it will require some work. See bug 1100398 for more details.
  5. Telemetry should be added so that we can understand how the Cache is being used. This will be important for improving the performance of the Cache over time.

Conclusion

While the Cache implementation is sure to change, this is where we are today. We want to get Cache and the other Service Worker bits off of our project branch and into mozilla-central as soon as possible so other people can start testing with them. Reviewing the Cache implementation is an important step in that process.

If you would like to follow along please see bug 940273. As always, feedback is welcome by email or on twitter.

Bruce Lawson: Reading List

Ooh, ooh, it’s the 98th Reading List (including last week’s Device Detection vs Responsive Web Design-themed list). Will I get to 100 before 2015?

Planet Mozilla: DNS-Based soft releases

Firefox Hello is this cool WebRTC app we've landed in Firefox to let you video chat with friends. You should try it, it's amazing.

My team was in charge of the server side of this project - which consists of a few APIs that keep track of some session information like the list of the rooms and such things.

The project was not hard to scale since the real work is done in the background by Tokbox - who provide all the firewall traversal infrastructure. If you are curious about the reasons we need all those server-side bits for a peer-2-peer technology, this article is great to get the whole picture: http://www.html5rocks.com/en/tutorials/webrtc/infrastructure/

One thing we wanted to avoid is a huge peak of load on our servers on Firefox release day. While we've done a lot of load testing, there are so many interacting services that it's quite hard to be 100% confident. Potentially going from 0 to millions of users in a single day is... scary? :)

So right now only 10% of our user base sees the Hello button. You can bypass this by tweaking a few prefs, as explained in many places on the web.

This percent is going to be gradually increased so our whole user base can use Hello.

How does it work?

When you start Firefox, a random number is generated. Then Firefox asks our service for another number. If the generated number is less than the number sent by the server, the Hello button is displayed. If it is greater, the button is hidden.

Adam Roach proposed to set up an HTTP endpoint on our server to send back the number, and after a team meeting I suggested using a DNS lookup instead.

The reason I wanted to use a DNS server was to rely on a system that's highly available and freaking fast. On the server side all we had to do was add a new DNS entry and let Firefox do a DNS lookup - yeah, you can do DNS lookups in JavaScript as long as you are within Gecko.

Due to a DNS limitation we had to move from a TXT record to an A record - which returns an IP address. But converting an IP address to an integer value is not a problem, so that worked out.
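
A hedged sketch of the idea (the real Gecko code differs, and the exact encoding of the threshold in the A record is an assumption here): convert the returned IPv4 address to an integer and compare it to the locally generated random number.

function ipToInt(ip) {
  // '0.0.0.42' -> 42, '0.0.1.0' -> 256, and so on.
  return ip.split('.').reduce(function (acc, octet) {
    return acc * 256 + parseInt(octet, 10);
  }, 0);
}

function shouldShowHelloButton(randomNumber, thresholdIp) {
  return randomNumber < ipToInt(thresholdIp);
}

// A client that rolled 7 (out of, say, 0-99), with the server currently answering 0.0.0.10:
shouldShowHelloButton(7, '0.0.0.10');   // true -> show the Hello button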

See https://wiki.mozilla.org/Loop/Load_Handling#Service_Soft_Start for all the details.

Generalizing the idea

I think using DNS as a distributed database for simple values like this is an awesome idea. I am happy I thought of this one :)

Based on the same technique, you can also set up some A/B testing based on the DNS server's ability to send back a different value depending on things like the user's location.

For example, we could activate a feature in Firefox only for people in Connecticut, or France or Europe.

We had a work week in Portland and we started to brainstorm about what such a service could look like, and whether it would be practical from a client-side point of view.

The general feedback I had so far on this is: Hell yeah we want this!

To be continued...

W3C Team blog: This week: W3C CEO on after HTML5, #bbd14, @W3C-PO protocol droid, No CAPTCHA ReCAPTCHA, etc.

This is the 28 November – 5 December 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)

4 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

Bruce Lawson: On the accessibility of web components. Again.

I enjoyed watching Dimitri Glazkov’s introduction to Web Components Easy composition and reuse with Web Components, given at the Chrome Developer Summit. It’s an excellently-constructed talk that builds on the use-cases that web components address to make a compelling argument for the technology.

At 11 min 55 seconds, after a slide reading “Make HTML useful”, Dimitri says

Custom elements is really neat. It basically says, “HTML it’s been a pleasure”.

There we are. Bye-bye HTML; you weren’t useful enough. Hello, brave new world of custom elements. Of course, this isn’t the full messaging; a 20 minute video can’t go into the nuances. But it’s what a lot of people are hearing.

Let’s straighten that out.

One of the advantages of oh-so-boring HTML was that certain elements carried default behaviours in browsers and assistive technology. Like, when you use this mark-up

<label for="form-name">What's yer name?</label>
<input id="form-name">

and you click on the label, the focus goes into the associated input. There's no need for JavaScript, there's no extra fancy stuff for the developer (except setting up the association with the for="" attribute), and there's a significant usability and accessibility advantage for the end-user.

A recent HTML5 Rocks article by Addy Osmani and Alice Boxhall called Accessible Web Components begins with the words

Custom Elements present a fantastic opportunity for us to improve accessibility on the web.

Yes. Yes. Yes. (Thanks Addy and Alice!) It’s perfectly possible to make web components and custom elements accessible. Alice has an example which I’ve screenshotted in Opera (top) and Safari (bottom).

Opera and Safari screenshots

Note that in the Safari screenshot, the second column of sexy checkboxes doesn't work at all – there is no checkbox. That's because Safari doesn't support web components. You'll see the same in IE, or in browsers without JavaScript.

Note that the first column does render in Safari, but it’s just normal checkboxes; they aren’t sexy web component-ised as they are in Opera. But – crucially – you can still interact with them, as they’re web components progressively enhancing silly old “useless” HTML. It works like this:

<input type="checkbox" is="io-checkbox">

Simple, huh? You have a silly old useless HTML element, and a new attribute that says “this is extended via web components into a special element I’m calling ‘io-checkbox’”. The web component inherits all the silly old useless behaviour, like associating labels with form fields and keyboard activation, for free.
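For illustration, here's roughly how such a type extension could be registered. This is a sketch using today's Custom Elements syntax rather than the document.registerElement() API the 2014 demos were built on, and the class body is invented:

// Sketch of a customized built-in element in the spirit of io-checkbox.
// Checkbox semantics, label association and keyboard activation all come
// from <input type="checkbox">; we only layer presentation on top.
class IoCheckbox extends HTMLInputElement {
  connectedCallback() {
    this.classList.add("io-checkbox"); // hypothetical hook for the sexy styling
  }
}
customElements.define("io-checkbox", IoCheckbox, { extends: "input" });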

Compare with the sexy but not progressively-enhanced way that doesn’t work in older browsers (the second column):

<io-custom-checkbox tabindex="0" role="checkbox"></io-custom-checkbox>

There’s a super-whizzo-fabbo-megalicious UltraShiny custom element there, which has no graceful degradation. It needs a tabindex and a role there because who wants that silly old useless HTML behaviour? Not us! We’re post-HTML. Yay!

Snarking aside, why do so few people talk about extending existing HTML elements with web components? Why’s all the talk about brand new custom elements? I don’t know.

Of course, not every new element you’ll want to make can extend an existing HTML element. In this case, you can still make your custom element accessible. Just because you’re in the super-whizzo-fabbo-megalicious UltraShiny world of web components, you can – and should – still add ARIA information to make your code accessible. Just because you’re hiding nasty code behind the Shadow DOM, it doesn’t mean that you can brush proper coding under the web components carpet.
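As a hypothetical sketch (not the demo's actual code), a from-scratch custom checkbox has to supply its own role, focusability, state and keyboard handling, because it inherits none of them from HTML:

class IoCustomCheckbox extends HTMLElement {
  connectedCallback() {
    // Nothing comes for free here: role, focus and state are all on us.
    this.setAttribute("role", "checkbox");
    if (!this.hasAttribute("tabindex")) this.setAttribute("tabindex", "0");
    if (!this.hasAttribute("aria-checked")) this.setAttribute("aria-checked", "false");
    this.addEventListener("click", () => this.toggle());
    this.addEventListener("keydown", (e) => {
      if (e.key === " ") { // Space toggles, as it does for a native checkbox
        e.preventDefault();
        this.toggle();
      }
    });
  }
  toggle() {
    const checked = this.getAttribute("aria-checked") === "true";
    this.setAttribute("aria-checked", String(!checked));
  }
}
customElements.define("io-custom-checkbox", IoCustomCheckbox);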

You’d hope that those who are assiduously pushing components into the platform would ensure that their demos did this – after all, those demos are meant to be studied, copied and adapted by developers, right?

Wrong. Take a look at Polymer gmail, a “Polymer version of New Gmail app”. Patrick Lauke points out

Google has expertise in-house to create functional, beautiful, web-component stuff that is also accessible. It would be great if high-profile demos like these would actually take advantage of those resources to create things that work not just for sighted mouse/touchscreen users…

To which he received the reply

There’s plenty that can be done in the convenience of unlimited time and resources. If you’d like to help, please submit a PR.

A big demo of a cutting-edge Google technology, made by Google, and there are no resources simply to make it accessible.

At Paris Web, Karl Groves and I talked about Web Components – the right way. We talked about extending existing elements and adding ARIA, and suggested that web accessibility advocates actively fix issues on open-source projects. But I meant fixing small projects that you’re using in your own sites – like the WordPress Live Comment Preview plugin, which I tweaked, thereby making 44,837 sites accessible.

I wasn’t talking about fixing demos by a company with a $362.48 Billion market capitalisation. As Patrick Lauke so eloquently puts it:

My resources are currently a bit more stretched than Google’s…but I’ll put it on my to-do list ;)

I’m a fan of web components. But I’m increasingly worried about the messaging surrounding them.

Planet MozillaFirefox OS – HTML for the mobile web at All Things Open

Copyright: Jonathan LeBlanc

Every time I speak at a conference, I feel blessed to be able to do so. For me, it’s a great opportunity to share my passion for, and expertise about, technology. For All Things Open, I was happy to be part of an amazing list of speakers who love Open Source as much as I do. As with many of the talks I have given in the last year and a half, I was talking about Firefox OS. I think there is still a lot of awareness to create about this new operating system: so many people don’t know about the power behind Firefox OS and HTML5. It’s even truer in North America, as you cannot go to your local store and buy a device like you can in some places in Europe and LATAM.

Since I was in the main room, my talk was recorded. Because of that, and because five tracks were running at the same time, the room looks a bit empty in the recording (I had about 100 people). In any case, I got interesting feedback about Firefox OS and my talk.

As is my habit, I also started my own recording process, so you have access to another version. The sound is not as good as the professional recording, but you get a better view of the screen (I should mix the two to make the ultimate recording).

Looking forward to speaking at the 2016 edition of ATO!


--
Firefox OS – HTML for the mobile web at All Things Open is a post on Out of Comfort Zone from Frédéric Harper

Planet MozillaIt is Blue Beanie Day – let’s reflect #bbd14

Today we celebrate Blue Beanie Day once again. People who build things online don their blue hats and show their support for standards-based web development. All this goes back to Jeffrey Zeldman’s book, which outlined that idea and caused a massive change in the field of web design.

me, wearing my HTML beanie

Let’s celebrate – once again

It feels good to be part of this; it is a tradition and it reminds us of how far we’ve come as a community and as a profession. To me, though, it is starting to feel a bit stale. I get the feeling we are losing touch with what is happening these days and celebrating the same old successes over and over again.

This could be the normal disillusionment that comes from having worked in the same field for a long time. It could also come from having heard the same messages over and over. I am starting to wonder whether the message of “use web standards” still has an impact in today’s world.

The web is a commodity

I am not saying web standards are unnecessary – far from it. I am saying that we lose a lot of new developers to other causes, and that web development as a craft is becoming less important than it used to be.

The web is a thing that people use. It is there, it does things. Much like opening a tap gives you water in most places we live in. We don’t think about how the tap works, we just expect it to do so. And we don’t want to listen to anyone who tells us that we need to use a tap in a certain way or we’re “doing it wrong”. We just call someone in when the water doesn’t run.

Standards mattered most when browsers worked against them

When web standards based development became a thing, it was an absolute necessity. Browser support was all over the shop and we had to find something we could rely on. That is what a standard gives you. You can dismantle and assemble things because there is a standard for screws and screwdrivers. You can also use a knife or a key for that, and thus damage the screw and the knife. But who cares, as long as the job’s done, right? You do – as soon as you need to disassemble the same thing again.

Far beyond view source

Nowadays our world has changed a lot. Browser support is excellent. Browsers are pretty amazing at displaying complex HTML, CSS and JavaScript. On top of that, browsers are development tools, giving us insights into what is happening. This goes beyond the view-source of old which made the web what it is. You can now inspect JavaScript-generated code. You can see browser-internal structures. You can see what loaded when and how the browser performs. You can inspect canvas, WebGL and WebAudio. You can inspect browsers on connected devices and simulate devices and various connectivity scenarios.

All this, and the fact that the HTML5 parser is forgiving and fixes minor markup glitches, makes our chant for web standards support seem redundant. We’ve won. The enemies of old – Flash and other non-standard technologies – seem to be forgotten. What’s there to celebrate?

Our standards, right or wrong?

Well, the struggle for a standards based web is far from over, and at times we need to do things we don’t like doing. An open source browser like Firefox having to support DRM in video playback is not good. But it is better than punishing its users by preventing them from using massively successful services like Netflix. Or is it? Should our goal of only supporting open and standardised technology be the final decision? Or is it still up to us to show that open and standardised solutions are better in the long run, and to let this one slip for now? I’m not sure, but I know that it is easier to influence something when you don’t condemn it.

A new, self-made struggle

All in all, there are new targets for those of us who count ourselves in the blue beanie camp: complexity and “de-facto standards”.

The web grew to what it is now because it was simple to create for it. Take a text editor, write some code, open it in a browser and you’re done. These days, professional web development looks very different. We rely on package managers. We rely on resource managers. We use task runners and pre-processing to create HTML, CSS and JavaScript solutions. All these tools are useful and can make a massive difference in a big and complex site. They should not be a necessity, though, and are often overkill for the final product. Web standards based development means one thing: you know what you’re doing and what your code should do in a supported browser. Adding these layers adds a dose of dark magic to that. Instead of teaching newcomers how to create, we teach them to rely on things they don’t understand. This is a perfectly OK way to deliver products, but it sets a strange tone for those learning our craft. We don’t empower builders, we empower users of solutions to build bigger solutions. And with that, we create a lot of extra code that goes onto the web.

A “de-facto standard” is nonsense. The argument that something becomes good and sensible because a lot of people use it assumes a lot. Do these people use it because they need it? Or because they like it? Or because it is fashionable to use? Or because it yields quick results? Results that in a few months’ time are “considered dangerous” but stick around for eternity because the product has shipped.

Framing the new world of web development

We who don the blue hats live in a huge echo chamber. It is time to stop repeating the same messages and concentrate on educating again. The web is obese and solutions are becoming formulaic (parallax scrollers, huge hero headers…). There is a whole new range of frameworks out there that people use to replace HTML, CSS and JavaScript. Our job as fans of standards is to influence those. We should make sure we don’t move towards a web that is dependent on the decisions of a few companies. Promises of evergreen support for those frameworks ring hollow. It happened with YUI – a very important player in making web standards based work scale to huge company size. And it can happen to anything we now promote as “the easier way to apply standards”.

W3C Team blogThis week: much html-json-forms wow such fame, Epub-Web, Webizen vote, European Parliament on competition, etc.

This is the 21-28 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & digital market

W3C in the Press (or blogs)

4 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

Planet MozillaDeveloping and releasing the Khan Academy Firefox OS app

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.

Why?

There's something exciting about being able to hold a world-class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or an Android-based smartphone is out of reach for most people.

The app enables people in developing countries, lower-income families, and anyone else to take advantage of Khan Academy content. A persistent internet connection is not required.

What's that... you say you want another use case? Well, OK, here goes: for a parent wanting each of their kids to have access to Khan Academy at the same time, the device costs could be very expensive. Not anymore.

Screenshots!

App features

  • Access to the full library of Khan Academy videos and articles.
  • Search for videos and articles.
  • Ability to sign into your account for:
    • Profile access.
    • Earning points for watching videos.
    • Continuing where you left off from previous partial video watches, even if that was on the live site.
    • Partial and full completion status of videos and articles.
  • Downloading videos, articles, or entire topics for later use.
  • Sharing functionality.
  • Minified topic tree sizes (significant effort went into this) for minimal memory use and faster loading.
  • Scrolling transcripts for videos as you watch.
  • A UI highly influenced by the first-generation iPhone app.

Development statistics

  • 340 commits
  • 4 months of consecutive commits with at least 1 commit per day
  • 30 minutes - 2 hours per day max

Technologies used

Technologies used to develop the app include:

Localization

The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.
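The selection logic amounts to something like the following sketch; the supported-locale list comes from the paragraph above, while the fallback to English is my assumption, not necessarily the app's actual code:

// Map the system locale to one of the app's supported locales.
const SUPPORTED_LOCALES = ["en", "pt", "fr", "es"];

function pickLocale(systemLocale) {
  const lang = (systemLocale || navigator.language || "en")
    .toLowerCase()
    .split("-")[0]; // e.g. "pt-BR" -> "pt"
  return SUPPORTED_LOCALES.includes(lang) ? lang : "en"; // assumed English fallback
}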

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute, there are a lot of future tasks posted as issues on GitHub.

Current minimum system requirements

  • Around 8 MB of storage space.
  • 512 MB of RAM.

Low memory devices

By default, apps on the Firefox Marketplace are only served to devices with at least 500 MB of RAM. To get them on 256 MB devices, you need to go through a low-memory review.

One of the major enhancements I'd like to add next is an option to use the YouTube player instead of HTML5 video. This may use less memory and may be a way onto 256 MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request that an app be pre-installed on devices, and I'll be looking into that in the near future after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.

Steve Faulkner et alRough Guide: browsers, operating systems and screen reader support – Updated

Practical screen reader support by browser and OS

When testing aspects of support for new HTML5 and WAI-ARIA features, and HTML features in general, I often test browsers that do not have practical support for screen readers on a particular operating system. I find they have support for feature X, but lack support for feature Y, which is required to enable practical access to web content for screen reader users. While it is useful to test and find successful implementations of discrete features, this needs to be viewed in the broader context of which browsers can be considered usable with popular OS-level screen readers.

I found it difficult to get a complete picture from the resources available on the web, but I have put together a high-level support table based on the information I could glean.

If you have any further information or find any inaccuracies please comment.

Practical support

Practical support for screen readers means that a browser can be successfully used to browse and interact with commonly encountered web content, using current versions of OS-level screen readers: JAWS, NVDA and Window Eyes on Windows; VoiceOver on Mac OS X and iOS; Orca on Linux; and ChromeVox on Chrome OS.

Table legend

  • “supported” means that the browser is usable in practice with a screen reader on the operating system (OS).
    Note: Internet Explorer lacks support for some important features, but due to its market share screen readers hack around its lack of support.
  • “partial support” means the browser lacks support for some important features. For example, Chrome on Windows supports browsing using JAWS, but does not fully support accessible name calculation.
  • “not applicable” means the browser does not run on the OS.
  • “not supported” means the browser does not have practical support for screen readers on the OS.
  • “not known” means that accessibility support information is not publicly available.
  • “supported, but limited support data” means the browser has only recently been added to the OS; early data indicates usable accessibility support.
  • “not known, likely not supported” means support is not known, but is likely absent.

Note: The table refers to the current (12/10/2014) versions of browsers and current versions of operating systems.

Practical screen reader support by browser and OS (24/11/2014)

  • Windows: Chrome supported; Firefox supported; Internet Explorer supported (see note); Opera partial support; Safari not supported
  • OS X: Chrome supported; Firefox partial support; Internet Explorer not applicable; Opera partial support; Safari supported
  • Linux: Chrome not supported; Firefox supported; Internet Explorer not applicable; Opera not known, likely not supported; Safari not applicable
  • iOS: Chrome supported, but limited support data; Firefox not applicable; Internet Explorer not applicable; Opera not known, likely not supported; Safari supported
  • Android: Chrome supported; Firefox supported; Internet Explorer not applicable; Opera not known, likely not supported; built-in Android browser (WebKit-based) partial support
  • Chrome OS: Chrome supported; Firefox not applicable; Internet Explorer not applicable; Opera not applicable; Safari not applicable
  • Firefox OS: Chrome not applicable; Firefox partial support; Internet Explorer not applicable; Opera not applicable; Safari not applicable

References:

Steve Faulkner et alSlow Week in Web Standards

Scanning through my Twitter timeline, I realized that it had been another week in web standards devoid of progress, passion and humour…


Planet MozillaPushing Hybrid Mobile Apps to the Forefront

Mozilla Festival 2014 was held in London in October.

At Mozilla Festival 2014, I facilitated a session on Pushing Hybrid Mobile Apps to the Forefront. Before that, I had been building a poker app to keep track of my poker winnings, record notes on opponents, and crunch poker math. I used the web as a platform, but having an iPhone, I wanted this app on iOS. Thus, the solution was hybrid mobile apps: apps written with HTML5 technologies that are wrapped to run "natively" on all platforms (e.g., iOS, Android, FirefoxOS).

I stumbled upon the Ionic hybrid mobile app framework, which made app development so easy. It fulfills the promise of the web: write once, run everywhere. In over two years with Mozilla, I've seen very little hype for hybrid mobile apps. Hybrid mobile apps have the potential to convert many more native developers to the web platform, but they aren't getting the air time they deserve.

What is a Hybrid Mobile App?

Hybrid mobile apps, well explained in this article from Telerik, are apps written with HTML5 technologies that run within a native container. They use the device's browser engine to render the app, and a web-to-native polyfill, most prominently Cordova, can be injected in order to access device APIs.

The Current Lack of Exposure for Hybrid Mobile Apps

In all of the Mozilla Developer Network (MDN), there are around three articles on hybrid mobile apps, and they aren't really fully fleshed out and are in need of technical review. There's been a good amount of work from James Longster in the form of Cordova Firefox OS support. More could be done on the documentation side.

Cross-platform capability on mobile should be flaunted more. In MDN's main article on Open Web Apps, there's a list of advantages of open web apps. This article is important since it is a good entry point into developing web apps. The advantages listed shouldn't really be considered advantages relative to native apps:

  • Local installation and offline storage: to a developer, these should be inherent to an app, not an explicit advantage. Apps are expected to be installable and to have offline storage.
  • Hardware access: this should also be inherent to an app and not an explicit advantage. Apps are expected to be able to communicate with their device's APIs.
  • Breaking the walled gardens: there are no "walls" being broken if these web apps only run in the browser and FirefoxOS. They should be able to live inside the App Store and Play Store to really have any effect.
  • Open Web App stores: well, that is pretty cool actually. I built a personal app that I didn't want distributed to anyone except me and one other person, so I simply built a page that had the ability to install the app. However, pure web apps alone can't be submitted to the App Store or Play Store, so that should be addressed first.

What's missing here is the biggest advantage of all: being able to run cross-platform (e.g., iOS, Android, FirefoxOS, Windows). That's the promise of the web, and that's what attracts most developers to the web in the first place. Write it once, run anywhere, with no need to port between languages or frameworks, and still be able to submit to the App Store/Play Store duopoly to gain the most users. For many developers, the web is an appropriate platform, saving time and maintenance.

Additionally, most developers prefer the traditional idea of apps: packaged up and uploaded to a storefront rather than self-hosted on a server. On the Firefox Marketplace, the majority of apps are packaged rather than hosted (4,800 to 4,100).

There's plenty of bark touting the cross-platform capability of the web, but there's little bite on how to actually achieve that on mobile. Hybrid mobile apps have huge potential to attract more developers to the web platform. But with their lack of exposure, it's wasted potential.

So what can we do? The presence of hybrid mobile apps on MDN could be beefed up. I talked to Chris Mills of the MDN team at Mozfest, and he mentioned it is a goal for 2015. The FirefoxOS Cordova plugins may welcome contributors. And I think the biggest win would be to help add official FirefoxOS support to Ionic, a popular hybrid mobile app framework which currently has over 11k stars on GitHub. They've mentioned they have FirefoxOS on the roadmap.

Building with Ionic

Ionic Framework is a hybrid mobile app framework. It has a beautifully designed set of native-like icons and CSS components, pretty UI transitions, web components (through Angular directives for now), build tools, and an easy-to-use command-line interface.

With Ionic, I built the poker app I mentioned initially. It installs on my phone, and I can use it at the tables:

Poker app built with Ionic.

For the Mozfest session, I generated a sample app with Ionic (one that simply makes use of the camera) and put it on GitHub with instructions. To get started with a hybrid mobile app:

  • npm install -g ionic cordova
  • ionic start myApp tabs - creates a template app
  • cordova plugin add org.apache.cordova.camera - installs the Cordova camera plugin (there are many to choose from)
  • ionic platform add <PLATFORM> - where <PLATFORM> could be ios, android, or firefoxos. This enables the platform
  • ionic platform build <PLATFORM> - builds the project

To emulate it for iOS or Android:

  • ionic emulate <PLATFORM> - opens the app in Xcode for iOS or via the adb tools for Android

To simulate it for FirefoxOS, open the project with WebIDE inside platforms/firefoxos/www.
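For reference, calling the Cordova camera plugin added in the steps above looks roughly like this; the <img id="photo"> element is a hypothetical piece of the sample app's markup, not necessarily what my demo does:

// Uses the org.apache.cordova.camera plugin installed earlier.
function takePicture() {
  navigator.camera.getPicture(
    function (imageData) {
      // Show the captured photo in a hypothetical <img id="photo"> element.
      document.getElementById("photo").src = "data:image/jpeg;base64," + imageData;
    },
    function (err) {
      console.error("Camera failed: " + err);
    },
    { quality: 50, destinationType: Camera.DestinationType.DATA_URL }
  );
}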

How the Mozfest Session Went

It was difficult to plan, since Mozfest is more of an unconference, where everything is meant to be hands-on and accessible. Mozfest isn't a deeply technical conference, so I tried to cater to those who don't have much development experience and to those who didn't bring a laptop.

Thus I set up three laptops (my MacBook, a ThinkPad, and a Vaio) and three devices (my iPhone, a Nexus 7, and a FirefoxOS Flame). My MacBook would help demonstrate the iOS side, whereas the other machines ran Linux Mint inside VirtualBox, with the adb tools and Firefox with WebIDE set up. All the mobile devices had the demo apps pre-installed so people could try them out.

I was as prepared as a boy scout. Well, until my iPhone was pickpocketed in London, stripping me of the iOS demonstration. Lugging three laptops, probably 20 pounds' worth, back and forth between the hotel, the subway, and the venue in my bag wasn't fun. I didn't even know which day I was going to present at Mozfest. In the end I didn't even use those meticulously prepared laptops at the session: everyone who showed up was pretty knowledgeable, had a laptop, and had an internet connection.

The session went well nonetheless. After a short speech about pushing hybrid mobile apps to the forefront, my Nexus 7 and Flame were passed around to demo the sample hybrid mobile app running. It just had a simple camera button. That morning, everyone had received a free Firefox Flame for attending Mozfest, so it turned into more of a WebIDE session on how to get an app onto the Flame. My coworker who attended was able to get the accelerometer working with a "Shake Me / I was shaken." app, and I was able to get geolocation working with an app that displays longitude and latitude coordinates from the GPS.

What I Thought About Mozfest

There was a lot of energy in the building. Unfortunately, the energy didn't reach me, especially since I was heavily aircraft-latencied. Maybe conferences aren't my thing. The place was hectic, and it was hard to find out what was where. I tried to go to a session that was labeled as "The 6th Floor Hub", which turned out to be a small area of a big open room marked with a hard-to-spot sign that said "The Hub". When I got there, no session was being held, despite the schedule saying so, as the facilitator was MIA.

The sessions didn't connect with me. Perhaps I wanted something more technical and concrete that I could take away and use, but most sessions were abstract. There was a big push for Mozilla Webmaker and Appmaker, though those aren't something I use often. They're great teaching tools, but I usually direct people to Codecademy when they want to learn to build stuff.

There was a lot of what I call "the web kool-aid". Don't get me wrong, I love the web, and I've drunk a lot of the kool-aid, but there was a lot of championing of the web in the keynotes. I guess "agency" is the new buzzword now. Promoting the web is great, though I've just heard it all before.

However, I was glad to add value for those who found it more inspiring and motivating than I did. I believe my session went well and attendees took away something concrete and practical. As for me, I was just happy to get home after a long day of travel and go replace my phone.

Bruce LawsonReading List

Footnotes
