Anne van Kesteren: HTML components

Hayato left a rather flattering review comment on my pull request for integrating shadow tree event dispatch into the DOM Standard. It made me reflect upon all the effort that came before us with regards to adding components to DOM and HTML. It has been a nearly two-decade journey to get to a point where all browsers are willing to implement, and then ship. It is not quite a glacial pace, but you can see why folks say that about standards.

What I think was the first proposal was simply titled HTML Components, better known as HTC, a technology by the Microsoft Internet Explorer team. Then in 2000, published in early 2001, came XBL, a technology developed at Netscape by Dave Hyatt (now at Apple). In some form that variant of XBL still lives on in Firefox today, although at this point it is considered technical debt.

In 2004 we got sXBL and in 2006 XBL 2.0, the latter largely driven by Ian Hickson with design input from Dave Hyatt. sXBL had various design disputes that could not be resolved among the participants. Selectors versus XPath was a big one. Though even with XBL 2.0 the lesson that namespaces are an unnecessary evil for rather tightly coupled languages was not yet learned. A late half-hearted revision of XBL 2.0 did drop most of the XML aspects, but by that time interest had waned.

There was another multi-year gap and then from 2011 onwards the Google Chrome team put effort into a different, more API-y approach towards HTML components. This was rather contentious initially, but after recent compromises with regards to encapsulation, constructors for custom elements, and moving from selectors to an even more simplistic model (basically strings), this seems to be the winning formula. A lot of it is now part of the DOM Standard and we also started updating the HTML Standard to account for shadow trees, e.g., making sure script elements execute.
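
To make the current model concrete, here is a minimal sketch, assuming the custom elements and shadow DOM APIs as they ended up in the DOM and HTML Standards; the element and slot names are invented. Content is distributed into the shadow tree by slot name (a plain string) rather than by selector, as the older XBL-style proposals did.

class UserCard extends HTMLElement {
  constructor() {
    super();
    // an open shadow root hosting the component's internal tree
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML =
      '<header><slot name="name">Anonymous</slot></header>' + // named slot: just a string
      '<p><slot></slot></p>';                                 // default slot takes the rest
  }
}
customElements.define("user-card", UserCard);
// Usage: <user-card><span slot="name">Anne</span>Works on standards.</user-card>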

Hopefully implementations follow soon and then widespread usage to cement it for a long time to come.

Anne van Kesteren: Network effects affecting standards

In the context of another WebKit issue around URL parsing, Alexey pointed me to WebKit bug 116887. It handily demonstrates why the web needs the URL Standard. The server reacts differently to %7B than it does to {. It expects the latter, despite the IETF STD not even mentioning that code point.

Partly to blame here are the browsers. In the early days code shipped without much quality assurance and many features got added in a short period of time. While standards evolved there was not much of a feedback loop going on with the browsers. There was no organized testing effort either, so the mismatch grew.

On the other side, you have the standards folks ignoring the browsers. While browsers did not necessarily take part in the standards debate that much back then, they have had an enormous influence on the web. They are the single most important piece of client software out there. They are even kind of a meta-client: they are used to fetch clients that in turn talk to the internet. As an example, the Firefox meta-client can be used to get and use the FastMail email client.

And this kind of dominance means that it does not matter much what standards say; it matters what the most-used clients ship. After all, when you are putting together some server software and have deadlines to meet, you typically do not start by reading standards. You figure out what bits you get from the client and operate on that. And that is typically rather simplistic. You would not use a URL parser, but rather various kinds of string manipulation. Dare I say, regular expressions.
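
To illustrate the mismatch (this example and its regex are invented, and the exact behaviour depends on how closely an engine follows the URL Standard), compare ad-hoc string manipulation with the URL API:

const input = "https://example.com/lookup/{id}?q={id}";

// Naive string manipulation: whatever sits between the markers is taken verbatim.
const naivePath = input.split("?")[0].replace(/^https?:\/\/[^\/]+/, ""); // "/lookup/{id}"

// A parser following the URL Standard normalizes as it parses; "{" and "}" are
// expected to be percent-encoded in the path but left alone in the query.
const url = new URL(input);
console.log(url.pathname); // expected: "/lookup/%7Bid%7D"
console.log(url.search);   // expected: "?q={id}"
console.log(naivePath);    // "/lookup/{id}", so different bytes end up on the wire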

This might ring some bells. If it did, that is because this story also applies to HTML parsing, text encodings, cookies, and basically anything that browsers have deployed at scale and developers have made use of. This is why standards are hardly ever finished. Most of them require decades of iteration to get the details right, but as you know that does not mean you cannot start using any of it right now. And getting the details right is important. We need interoperable URL parsing for security, for developers to build upon them without tons of cross-browser workarounds, and to elevate the overall abstraction level at which engineering needs to happen.

Jeremy Keith: Conversational interfaces

<figure>

Psst… Jeremy! Right now you’re getting notified every time something is posted to Slack. That’s great at first, but now that activity is increasing you’ll probably prefer dialing that down.

<figcaption> —Slackbot, 2015 </figcaption> </figure>

<figure>

What’s happening?

<figcaption> —Twitter, 2009 </figcaption> </figure>

<figure>

Why does everyone always look at me? I know I’m a chalkboard and that’s my job, I just wish people would ask before staring at me. Sometimes I don’t have anything to say.

<figcaption> —Existentialist chalkboard, 2007 </figcaption> </figure>

<figure>

I’m Little MOO - the bit of software that will be managing your order with us. It will shortly be sent to Big MOO, our print machine who will print it for you in the next few days. I’ll let you know when it’s done and on it’s way to you.

<figcaption> —Little MOO, 2006 </figcaption> </figure>

<figure>

It looks like you’re writing a letter.

<figcaption> —Clippy, 1997 </figcaption> </figure>

<figure>

Your quest is to find the Warlock’s treasure, hidden deep within a dungeon populated with a multitude of terrifying monsters. You will need courage, determination and a fair amount of luck if you are to survive all the traps and battles, and reach your goal — the innermost chambers of the Warlock’s domain.

<figcaption> —The Warlock Of Firetop Mountain, 1982 </figcaption> </figure>

<figure>

Welcome to Adventure!! Would you like instructions?

<figcaption> —Colossal Cave, 1976 </figcaption> </figure>

<figure>

I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write.

<figcaption> —I, Pencil, 1958 </figcaption> </figure>

<figure>

ÆLFRED MECH HET GEWYRCAN
Ælfred ordered me to be made

<figcaption> —The Ælfred Jewel, ~880 (Ashmolean Museum, Oxford) </figcaption> </figure>

Technical note

I have marked up the protagonist of each conversation using the cite element. There is a long-running dispute over the use of this element. In HTML 4.01 it was perfectly fine to use cite to mark up a person being quoted. In the HTML Living Standard, usage has been narrowed:

The cite element represents the title of a work (e.g. a book, a paper, an essay, a poem, a score, a song, a script, a film, a TV show, a game, a sculpture, a painting, a theatre production, a play, an opera, a musical, an exhibition, a legal case report, a computer program, etc). This can be a work that is being quoted or referenced in detail (i.e. a citation), or it can just be a work that is mentioned in passing.

A person’s name is not the title of a work — even if people call that person a piece of work — and the element must therefore not be used to mark up people’s names.

I disagree.

In the examples above, it’s pretty clear that I, Pencil and Warlock Of Firetop Mountain are valid use cases for the cite element according to the HTML5 definition; they are titles of works. But what about Clippy or Little Moo or Slackbot? They’re not people …but they’re not exactly titles of works either.

If I were to mark up a dialogue between Eliza and a human being, should I only mark up Eliza’s remarks with cite? In text transcripts of conversations with Alexa, Siri, or Cortana, should only their side of the conversation get attributed as a source? Or should they also be written without the cite element, because it must not be used to mark up people’s names …even though they are not people, by any conventional definition?

It’s downright botist.

Planet Mozilla: Making asm.js/WebAssembly compilation more parallel in Firefox

In December 2015, I worked on reducing the startup time of asm.js programs in Firefox by making compilation more parallel. As our JavaScript engine, SpiderMonkey, uses the same compilation pipeline for both asm.js and WebAssembly, this also benefitted WebAssembly compilation. Now is a good time to talk about what it means, how it was achieved, and what the next ideas are to make it even faster.

What does it mean to make a program "more parallel"?

Parallelization consists of splitting a sequential program into smaller independent tasks, then having them run on different CPUs. If your program is using N cores, it can be up to N times faster.

Well, in theory. Let's say you're in a car, driving on a 100 Km long road. You've already driven the first 50 Km in one hour. Let's say your car can have unlimited speed from now on. What is the maximum average speed you can reach over the whole road, once you get to the end of it?

People intuitively answer "it can go as fast as I want, so something near light speed sounds plausible". But this is not true! In fact, even if you could teleport from your current position to the end of the road, you'd have traveled 100 Km in one hour, so your maximum theoretical average speed is 100 Km per hour. This result is a consequence of Amdahl's law. Back to our initial problem, this means you can expect an N-times speedup when running your program with N cores if, and only if, your program can be run entirely in parallel. This is usually not the case, and that is why most wording refers to speedups of up to N times faster, when it comes to parallelization.
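
Amdahl's law itself fits in one line; here is a small sketch, with values picked purely for illustration:

// If a fraction p of the work can be parallelized across n cores, the overall
// speedup is bounded by 1 / ((1 - p) + p / n).
function amdahlSpeedup(p, n) {
  return 1 / ((1 - p) + p / n);
}

console.log(amdahlSpeedup(0.5, Infinity)); // 2: the car example, half the trip is already done
console.log(amdahlSpeedup(0.9, 8));        // ~4.7: 90% parallel work on 8 cores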

Now, say your program is already running some portions in parallel. To make it faster, one can identify some parts of the program that are sequential, and make them independent so that they can run in parallel. With respect to our car metaphor, this means extending the portion of the road on which you can drive at unlimited speed.

This is exactly what we have done with parallel compilation of asm.js programs under Firefox.

A quick look at the asm.js compilation pipeline

I recommend reading this blog post. It clearly explains the differences between JIT (Just In Time) and AOT (Ahead Of Time) compilation, and elaborates on the different parts of the engines involved in the compilation pipeline.

As a TL;DR, keep in mind that asm.js is a strictly validated, highly optimizable, typed subset of JavaScript. Once validated, it guarantees high performance and stability (no garbage collector involved!). That is ensured by mapping every single JavaScript instruction of this subset to a few CPU instructions, if not a single one. This means an asm.js program needs to get compiled to machine code, that is, translated from JavaScript to the language your CPU directly manipulates (like what GCC would do for a C++ program). If you haven't heard, the results are impressive and you can run video games directly in your browser, without needing to install anything. No plugins. Nothing more than your usual, everyday browser.

Because asm.js programs can be gigantic in size (in number of functions as well as in number of lines of code), the first compilation of the entire program is going to take some time. Afterwards, Firefox uses a caching mechanism that prevents the need for recompilation and loads the code almost instantaneously, so subsequent loads matter less*. The end user will mostly wait for the first compilation, thus this one needs to be fast.

Before the work explained below, the pipeline for compiling a single function (out of an asm.js module) would look like this:

  • parse the function, and as we parse, emit intermediate representation (IR) nodes for the compiler infrastructure. SpiderMonkey has several IRs, including the MIR (middle-level IR, mostly carrying semantic information) and the LIR (low-level IR closer to the CPU memory representation: registers, stack, etc.). The one generated here is the MIR. All of this happens on the main thread.
  • once the entire IR graph is generated for the function, optimize the MIR graph (i.e. apply a few optimization passes). Then, generate the LIR graph before carrying out register allocation (probably the most costly task of the pipeline). This can be done on supplementary helper threads, as the MIR optimization and LIR generation for a given function don't depend on other functions.
  • since functions can call each other within an asm.js module, they need references to each other. In assembly, a reference is merely an offset to somewhere else in memory. In this initial implementation, code generation is carried out on the main thread, at the cost of speed but for the sake of simplicity.

So far, only the MIR optimization passes, register allocation and LIR generation were done in parallel. Wouldn't it be nice to be able to do more?

* There are conditions for benefitting from the caching mechanism. In particular, the script should be loaded asynchronously and it should be of a substantial size.

Doing more in parallel

Our goal is to do more work in parallel: can we take MIR generation off the main thread? And can we take code generation off the main thread as well?

The answer happens to be yes to both questions.

For the former, instead of emitting a MIR graph as we parse the function's body, we emit a small, compact, pre-order representation of the function's body. In short, a new IR. As work was starting on WebAssembly (wasm) at this time, and since asm.js semantics and wasm semantics mostly match, the IR could just be the wasm encoding, consisting of the wasm opcodes plus a few specific asm.js ones*. Then, wasm is translated to MIR in another thread.

Now, instead of parsing and generating MIR in a single pass, we parse and generate wasm IR in one pass, and generate the MIR out of the wasm IR in another pass. The wasm IR is very compact and much cheaper to generate than a full MIR graph, because generating a MIR graph needs some algorithmic work, including the creation of Phi nodes (join values after any form of branching). As a result, it is expected that compilation time won't suffer. This was a large refactoring: taking every single asm.js instruction, encoding it in a compact way, and later decoding it into the equivalent MIR nodes.

For the second part, could we generate code on other threads? One structure in the code base, the MacroAssembler, is used to generate all the code and it contains all necessary metadata about offsets. By adding more metadata there to abstract internal calls **, we can describe the new scheme in terms of a classic functional map/reduce:

  • the wasm IR is sent to a thread, which will return a MacroAssembler. That is a map operation, transforming an array of wasm IR into an array of MacroAssemblers.
  • When a thread is done compiling, we merge its MacroAssembler into one big MacroAssembler. Most of the merge consists of taking all the offset metadata from the thread's MacroAssembler, fixing up all the offsets, and concatenating the two generated code buffers. This is equivalent to a reduce operation, merging each MacroAssembler into the module's one (a conceptual sketch of this map/reduce follows below).

At the end of the compilation of the entire module, there is still some light work to be done: offsets of internal calls need to be translated to their actual locations. All this work has been done in this bugzilla bug.

* In fact, at the time when this was being done, we used a different superset of wasm. Since then, work has been done so that our asm.js frontend is really just another wasm emitter.

** referencing functions by their appearance order index in the module, rather than an offset to the actual start of the function. This order is indeed stable, from one function to another.
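
As a conceptual sketch only (the real implementation is C++ inside SpiderMonkey, and every name and data shape below is invented), the scheme can be pictured like this:

// "map": compile each function's compact IR independently (in reality, on helper threads).
function compileOne(funcIR) {
  // stand-in for what a helper thread produces: code bytes plus metadata that
  // describes internal calls by callee index rather than by final offset
  return { code: new Uint8Array(funcIR.size), calls: funcIR.calls };
}

// "reduce": concatenate per-function code and record the call sites to patch once
// every function's final position in the module is known.
function linkModule(compiledUnits) {
  const starts = [];
  let total = 0;
  for (const unit of compiledUnits) { starts.push(total); total += unit.code.length; }

  const moduleCode = new Uint8Array(total);
  const patches = [];
  compiledUnits.forEach((unit, i) => {
    moduleCode.set(unit.code, starts[i]);
    for (const c of unit.calls) {
      patches.push({ at: starts[i] + c.at, target: starts[c.calleeIndex] });
    }
  });
  return { moduleCode, patches }; // patches are applied in a final, light pass
}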

Results

Benchmarking has been done on a Linux x64 machine with 8 cores clocked at 4.2 GHz.

First, compilation times of a few asm.js massive games:

The X scale is the compilation time in seconds, so lower is better. Each value point is the best one of three runs. For the new scheme, the corresponding relative speedup (in percentage) has been added:

Compilation times of various benchmarks

For all games, compilation is much faster with the new parallelization scheme.

Now, let's go a bit deeper. The Linux CLI tool perf has a stat command that gives you the average number of utilized CPUs during the program's execution. This is a great measure of threading efficiency: the more a CPU is utilized, the less it sits idle waiting for other results to come in, and thus the more useful it is. For a constant task execution time, the more CPUs are utilized, the more likely the program is to execute quickly.

The X scale is the number of utilized CPUs, according to the perf stat command, so higher is better. Again, each value point is the best one of three runs.

CPU utilized on DeadTrigger2

With the older scheme, the number of utilized CPUs rises quickly from 1 to 4 cores, then more slowly from 5 cores and beyond. Intuitively, this means that with 8 cores, we almost reached the theoretical limit of the portion of the program that can be made parallel (not considering the overhead introduced by parallelization or altering the scheme).

But with the newer scheme, we get much more CPU usage even after 6 cores! Then it slows down a bit, although the rise is still more significant than the slow rise of the older scheme. So it is likely that with even more threads, we could get even better speedups than the ones mentioned beforehand. In fact, we have moved the theoretical limit mentioned above a bit further: we have expanded the portion of the program that can be made parallel. Or, to keep using the initial car/road metaphor, we've shortened the constant-speed portion of the road to the benefit of the unlimited-speed portion, resulting in a shorter trip time overall.

Future steps

Despite these improvements, compilation time can still be a pain, especially on mobile. This is mostly due to the fact that we're running a whole multi-million-line codebase through the backend of a compiler to generate optimized code. Following this work, the next bottleneck during the compilation process is parsing, which matters for asm.js in particular, whose source is plain text. Decoding WebAssembly is an order of magnitude faster, though, and it can be made even faster. Moreover, we have even more load-time optimizations coming down the pipeline!

In the meantime, we keep on improving the WebAssembly backend. Keep track of our progress on bug 1188259!

Planet Mozilla: Beer and Tell – April 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Memory Landscapes

First up was emceeaich, who shared Memory Landscapes, a visual memoir of the life and career of artist and photographer Laurie Toby Edison. The project is presented as a non-linear collection of photographs, in contrast to the traditionally linear format of memoirs. The feature that emceeaich demoed was “Going to Brooklyn”, which gives any link a 1/5 chance of showing shadow pictures briefly before moving on to the linked photo.

lorchard: DIY Keyboard Prototype

Next was lorchard, who talked about the process of making a DIY keyboard using web-based tools. He used keyboard-layout-editor.com to generate a layout serialized in JSON, and then used Plate & Case Builder to generate a CAD file for use with a laser cutter.

A flickr album is available with photos of the process.

lorchard: Jupyter Notebooks in Space

lorchard also shared eve-market-fun, a Node.js-based service that pulls data from the EVE Online API and pre-digests useful information about it. He then uses a Jupyter notebook to pull data from the API and analyze it to guide his market activities in the game. Neat!

Pomax: React Circle-Tree Visualizer

Pomax was up next with a new React component: react-circletree! It depicts a tree structure using segmented concentric circles. The component is very configurable and can be styled with CSS as it is generated via SVG. While built as a side-project, the component can be seen in use on the Web Literacy Framework website.

Pomax: HTML5 Mahjong

Also presented by Pomax was an HTML5 multiplayer Mahjong game. It allows four players to play the classic Chinese game online by using socket.io and a Node.js server to connect the players. The frontend is built using React and Webpack.

groovecoder and John Dungan: Codesy

Last up was groovecoder and John Dungan, who shared codesy, an open-source startup addressing the problem of compensation for fixing bugs in open-source software. They provide a browser extension that allows users to bid on bugs as well as name their price for fixing a bug. Users may then provide proof that they fixed a bug, and once it is approved by the bidders, they receive a payout.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Planet Mozilla: Looking at summary details in HTML5

On the dev-platform mailing-list, Ting-Yu Lin has sent an Intent to Ship: HTML5 <details> and <summary> tags. So what about it?

The HTML 5.1 specification describes details as:

The details element represents a disclosure widget from which the user can obtain additional information or controls.

which is not that clear; luckily the specification has some examples. I put one on codepen (you need Firefox Nightly at this time, or Chrome/Opera, or Safari dev edition to see it). At least the rendering seems to be pretty much the same.

But as usual evil is in the details (pun not intended at first). In case the developer wants to hide the triangle, the possibilities are for now not interoperable. Think possible Web compatibility issues here. I created another codepen for testing the different scenarios.

In Blink/WebKit world:

summary::-webkit-details-marker { display: none; } 

In Gecko world:

summary::-moz-list-bullet { list-style-type: none;  }

or

summary { display: block; } 

These work, though summary {display: block;} is a recipe for catastrophe.

Then on the thread there was the proposal of

summary { list-style-type: none; }

which does indeed hide the arrow (in Gecko), but doesn't do anything whatsoever in Blink and WebKit. So it's not really a reliable solution from a Web compatibility point of view.

Then, as usual, I like to look at what people do on GitHub for their projects. So here is a collection of patterns found for the usage of -webkit-details-marker:

details summary::-webkit-details-marker { display:none; }

/* to change the pointer on hover */
details summary { cursor: pointer; }

/* to style the arrow widget on opening and closing */
details[open] summary::-webkit-details-marker {
  color: #00F;
  background: #0FF;}

/* to replace the marker with an image */
details summary::-webkit-details-marker:after {
  content: icon('file.png');
}

/* using content this time for a unicode character */
summary::-webkit-details-marker {display: none; }
details summary::before { content:"►"; }
details[open] summary::before { content:"▼" }

JavaScript

On JavaScript side, it seems there is a popular shim used by a lot of people: details.js
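
The shim itself is worth reading; for flavour, here is a rough sketch (not taken from details.js) of the kind of feature detection such a script typically performs before polyfilling:

function supportsDetails() {
  const el = document.createElement("details");
  if (!("open" in el)) return false;       // property not even reflected: no native support
  el.innerHTML = "<summary>a</summary>b";
  document.body.appendChild(el);
  const closedHeight = el.offsetHeight;
  el.open = true;                          // opening should reveal the extra content
  const supported = el.offsetHeight !== closedHeight;
  document.body.removeChild(el);
  return supported;
}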

More reading

Otsukare!

WHATWG blog: Adding JavaScript modules to the web platform

One thing we’ve been meaning to do more of is tell our blog readers more about new features we’ve been working on across WHATWG standards. We have quite a backlog of exciting things that have happened, and I’ve been nominated to start off by telling you the story of <script type="module">.

JavaScript modules have a long history. They were originally slated to be finalized in early 2015 (as part of the “ES2015” revision of the JavaScript specification), but as the deadline drew closer, it became clear that although the syntax was ready, the semantics of how modules load each other were still up in the air. This is a hard problem anyway, as it involves extensive integration between the JavaScript engine and its “host environment”—which could be either a web browser, or something else, like Node.js.

The compromise that was reached was to have the JavaScript specification specify the syntax of modules, but without any way to actually run them. The host environment, via a hook called HostResolveImportedModule, would be responsible for resolving module specifiers (the "x" in import x from "x") into module instances, by executing the modules and fetching their dependencies. And so a year went by with JavaScript modules not being truly implementable in web browsers, as while their syntax was specified, their semantics were not yet.
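
For a concrete picture of what the host has to resolve (the file names below are invented), every string after from in a module is a specifier that HostResolveImportedModule must map to a fetched, parsed, and instantiated module before the importing script can run:

// app.js, loaded as a module script
import { setup } from "./widget.js"; // "./widget.js" is the specifier the host resolves
import config from "./config.js";    // fetching, resolution and evaluation order are
                                     // exactly what the HTML integration had to define
setup(config);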

In the epic whatwg/html#433 pull request, we worked on specifying these missing semantics. This involved a lot of deep changes to the script execution pipeline, to better integrate with the modern JavaScript spec. The WHATWG community had to discuss subtle issues like how cross-origin module scripts were fetched, or how/whether the async, defer, and charset attributes applied. The end result can be seen in a number of places in the HTML Standard, most notably in the definition of the script element and the scripting processing model sections. At the request of the Edge team, we also added support for worker modules, which you can see in the section on creating workers. (This soon made it over to the service workers spec as well!) To wrap things up, we included some examples: a couple for <script type="module">, and one for module workers.

Of course, specifying a feature is not the end; it also needs to be implemented! Right now there is active implementation work happening in all four major rendering engines, which (for the open source engines) you can follow in these bugs:

And there's more work to do on the spec side, too! There's ongoing discussion of how to add more advanced dynamic module-loading APIs, from something simple like a promise-returning self.importModule, all the way up to the experimental ideas being prototyped in the whatwg/loader repository.

We hope you find the addition of JavaScript modules to the HTML Standard as exciting as we do. And we'll be back to tell you more about other recent important changes to the world of WHATWG standards soon!

Planet Mozilla: [worklog] Daruma, draw one eye, wish for the rest

Daruma is a little doll on which you draw one eye with a wish in mind, and finally draw the second eye once the wish has been realized. This week Tokyo Metro and Yahoo! Japan fixed their markup. Tune of the week: Pretty Eyed Baby - Eri Chiemi.

Webcompat Life

Progress this week:

Today: 2016-04-11T07:29:54.738014
368 open issues
----------------------
needsinfo       3
needsdiagnosis  124
needscontact    32
contactready    94
sitewait        116
----------------------

You are welcome to participate

The feed of otsukare (this blog) doesn't have an updated element. That was bothering me too. There was an open issue about it on the Pelican issues tracker. Let's propose a pull request. It was accepted after a couple of rounds of back and forth.

We had a team meeting this week.

Understanding Web compatibility is hard. It doesn't mean the same exact thing for everyone. We probably need to better define for others what it means. Maybe the success stories could help, with concrete examples to delineate the scope of what a Web Compat issue is.

Looking at who is opening issues on WebCompat.com I was pleasantly surprised by the results.

Webcompat issues

(a selection of some of the bugs worked on this week).

Reading List

  • Sharing my volunteer onboarding experience at webcompat.com
  • Browsers, Innovators Dilemma, and Project Tofino
  • css-usage. "This script is used within our Bing and Interop crawlers to determine the properties used on a page and generalized values that could have been used. http://data.microsoftedge.com "
  • Apple is bad news for the future of the Web. Interesting read, though I would do this substitution: s/Apple/Market Share Gorilla/. Currently, very similar things are in some ways happening with Chrome on the Desktop market. The other way around, aka implementing fancy APIs that Web developers rush to use on their sites and so create Web Compatibility issues. The issue is not that much about being late at implementing, or being early at implementing. The real issue is market share dominance, which warps the way people think about the technology and in the end makes it difficult for other players to even exist in the market. I have seen that for Opera on Desktop, and I have seen that for Firefox on Mobile. And bear with me, it was Microsoft (Desktop) in the past, it is Google (Desktop) and Apple (Mobile) now, and it will be another company in the future, the one dominating the market share.
  • The veil of ignorance

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Planet Mozilla: Securing your self-hosted website with Let’s Encrypt, part 5: I have HTTPS, and now what?

In part 4, we looked at hardening default configurations and avoiding known vulnerabilities, but what other advantages are there to having our sites run HTTPS?

First, a recap of what we get by using HTTPS:

  • Privacy – no one knows what your users are accessing
  • Integrity – what is sent between you and your users is not tampered with at any point*

*unless the users’ computers are infected with a virus or some kind of browser malware that modifies pages after the browser has decrypted them, or modifies the content before sending it back to the network via the browser. Remember I said that security is not 100% guaranteed? Sorry to scare you. You’re welcome 😎

So that’s cool, but there’s even more!

HTTPS-only JS APIs

Most of the newest platform features are only available if served via HTTPS, and some existing features, such as GeoLocation or AppCache, will only work if served under HTTPS too. For example:

  • Service Workers
  • Push notifications
  • Background sync
  • Adding to home screen
  • WebRTC

While this is ‘annoying’, because it complicates web development and makes it less accessible than it used to be (“just place some files on a folder and bam, you’re done!”), it also makes sense to allow their usage over HTTPS only: at the same time that these APIs add more power to the web platform, they are also capable of exposing more private data from users than the pre-HTML5 APIs if that data is transmitted over HTTP.
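
In practice this means feature code has to expect these APIs to be absent, or to refuse to work, on plain HTTP pages. A minimal sketch (the service worker path is a placeholder):

if (window.isSecureContext && "serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js")
    .then(reg => console.log("service worker registered for", reg.scope))
    .catch(err => console.log("registration failed:", err));
} else {
  console.log("Not a secure context (or no support): skipping HTTPS-only features.");
}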

You can read about the reasoning behind this move in the Secure Contexts specification.

Hopefully, Let’s Encrypt will help make HTTPS universally available for everyone–not just those fortunate enough to have the time and money required to obtain and install digital certificates.

Coming up next: WordPress considerations, and cool things you can do with WordPress and HTTPS.

 


Tantek Çelik: @cackhanded is right. class is used on visible content.

@cackhanded is right. class is used on visible content.
Thus HTML authors & designers learn & use class already.

@snookca @Malarkey, #microformats prefer simplicity, familiarity, & learnability.

For #microformats2, we aimed to make them even simpler than classic #microformats, as well as more compatible with existing class values.

In addition the data-* attributes are meant for private page-specific meaning, and explicitly not for shared or cross-site data.

http://microformats.org/wiki/HTML5#data_attributes

There were attempts to use new attributes in HTML instead, with both RDFa and microdata, and the result was more complex designs using *multiple* new attributes, each, which were more work to learn, and harder for authors to use:

https://indiewebcamp.com/irc/2014-06-21/line/1403415993000

And as @cackhanded says, easy parsing too, especially with the microformats2 parsing specification and now numerous open source libraries in nearly every language:

http://microformats.org/wiki/microformats2#Parsers

W3C Team blog: Working on HTML5.1

HTML5 was released in 2014 as the result of a concerted effort by the W3C HTML Working Group. The intention was then to begin publishing regular incremental updates to the HTML standard, but a few things meant that didn’t happen as planned. Now the Web Platform Working Group (WP WG) is working towards an HTML5.1 release within the next six months, and a general workflow that means we can release a stable version of HTML as a W3C Recommendation about once per year.

Goals

The core goals for future HTML specifications are to match reality better, to make the specification as clear as possible to readers, and of course to make it possible for all stakeholders to propose improvements, and understand what makes changes to HTML successful.

Timelines

The plan is to ship an HTML5.1 Recommendation in September 2016. This means we will need to have a Candidate Recommendation by the middle of June, following a Call For Consensus based on the most recent Working Draft.

To make it easier for people to review changes, an updated Working Draft will be published approximately once a month. For convenience, changes are noted within the specification itself.

Longer term we would like to “rinse and repeat”, making regular incremental updates to HTML a reality that is relatively straightforward to implement. In the meantime you can track progress using Github pulse, or by following @HTML_commits or @HTMLWG on Twitter.

Working on the spec…

The specification is on Github, so anyone who can make a Pull Request can propose changes. For simple changes such as grammar fixes, this is a very easy process to learn – and simple changes will generally be accepted by the editors with no fuss.

If you find something in the specification that generally doesn’t work in shipping browsers, please file an issue, or better still file a Pull Request to fix it. We will generally remove things that don’t have adequate support in at least two shipping browser engines, even if they are useful to have and we hope they will achieve sufficient support in the future: in some cases, you or we may propose the dropped feature as a future extension – see below regarding “incubation”.

HTML is a very large specification. It is developed from a set of source files, which are processed with the Bikeshed preprocessor. This automates things like links between the various sections, such as to element definitions. Significant changes, even editorial ones, are likely to require a basic knowledge of how Bikeshed works, and we will continue to improve the documentation especially for beginners.

HTML is covered by the W3C Patent Policy, so many potential patent holders have already ensured that it can be implemented without paying them any license fee. To keep this royalty-free licensing, any “substantive change” – one that actually changes conformance – must be accompanied by the patent commitment that has already been made by all participants in the Web Platform Working Group. If you make a Pull Request, this will automatically be checked, and the editors, chairs, or W3C staff will contact you to arrange the details. Generally this is a fairly simple process.

For substantial new features we prefer a separate module to be developed, “incubated”, to ensure that there is real support from the various kinds of implementers including browsers, authoring tools, producers of real content, and users, and when it is ready for standardisation to be proposed as an extension specification for HTML. The Web Platform Incubator Community Group (WICG) was set up for this purpose, but of course when you develop a proposal, any venue is reasonable. Again, we ask that you track technical contributions to the proposal (WICG will help do this for you), so we know when it arrives that people who had a hand in it have also committed to W3C’s royalty-free patent licensing and developers can happily implement it without a lot of worry about whether they will later be hit with a patent lawsuit.

Testing

W3C’s process for developing Recommendations requires a Working Group to convince the W3C Director, Tim Berners-Lee, that the specification

“is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized”

This had to be done for HTML 5.0. When a change is proposed to HTML we expect it to have enough tests to demonstrate that it does improve interoperability. Ideally these fit into an automatable testing system like the “Webapps test harness“. But in practice we plan to accept tests that demonstrate the necessary interoperability, whether they are readily automated or not.

The benefit of this approach is that except where features are removed from browsers, which is comparatively rare, we will have a consistently increasing level of interoperability as we accept changes, meaning that at any time a snapshot of the Editors’ draft should be a stable basis for an improved version of HTML that can be published as an updated version of an HTML Recommendation.

Conclusions

We want HTML to be a specification that authors and implementors can use with ease and confidence. The goal isn’t perfection (which is after all the enemy of good), but rather to make HTML 5.1 better than HTML 5.0 – the best HTML specification until we produce HTML 5.2…

And we want you to feel welcome to participate in improving HTML, for your own purposes and for the good of the Web.

Chaals, Léonie, Ade – chairs
Alex, Arron, Steve, Travis – editors

W3C Team blog: HTML Media Extensions to continue work

The HTML Media Extensions Working Group was extended today until the end of September 2016. As part of making video a first class citizen of the Web, an effort started by HTML5 itself in 2007, W3C has been working on many extension specifications for the Open Web Platform: capturing images from the local device camera, handling of video streams and tracks, captioning and other enhancements for accessibility, audio processing, real-time communications, etc. The HTML Media Extensions Working Group is working on two of those extensions: Media Sources Extensions (MSE), for facilitating adaptive and live streaming, and Encrypted Media Extensions (EME), for playback of protected content. Both are extension specifications to enhance the Open Web Platform with rich media support.

The W3C supports the statement from the W3C Technical Architecture Group (TAG) regarding the importance of broad participation, testing, and audit to keep users safe and the Web’s security model intact. The EFF, a W3C member, concerned about this issue, proposed a covenant to be agreed by all W3C members which included exemptions for security researchers as well as interoperable implementations under the US Digital Millennium Copyright Act (DMCA) and similar laws. After discussion for several months and review at the recent W3C Advisory Committee meeting, no consensus has yet emerged from follow-up discussions about the covenant from the EFF.

We do recognize that issues around Web security exist as well as the importance of the work of security researchers and that these necessitate further investigation but we maintain that the premises for starting the work on the EME specification are still applicable. See the information about W3C and Encrypted Media Extensions.

The goal for EME has always been to replace non-interoperable private content protection APIs (see the Media Pipeline Task Force (MPTF) Requirements). By ensuring better security, privacy, and accessibility around those mechanisms, as well as having those discussions at W3C, EME provides more secure interfaces for license and key exchanges by sandboxing the underlying content decryption modules. The only required key system in the specification is one that actually does not perform any digital rights management (DRM) function and is using fully defined and standardized mechanisms (the JSON Web Key format, RFC7517, and algorithms, RFC7518). While it may not satisfy some of the requirements from distributors and media owners in resisting attacks, it is the only fully interoperable key system when using EME.

We acknowledge and welcome further efforts from the EFF and other W3C Members in investigating the relations between technologies and policies. Technologists and researchers indeed have benefited from the EFF’s work in securing an exemption from the DMCA from the Library of Congress which will help to better protect security researchers from the same issues they worked to address at the W3C level.

W3C does intend to keep looking at the challenges related to the US DMCA and similar laws such as international implementations of the EU Copyright Directive with our Members and staff. The W3C is currently setting up a Technology and Policy Interest Group to keep looking at those issues and we intend to bring challenges related to these laws to this Group.

Anne van Kesteren: Upstreaming custom elements and shadow DOM

We have reached the point where custom elements and shadow DOM slowly make their way into other standards. The Custom Elements draft has been reformatted as a series of patches by Domenic and I have been migrating parts of the Shadow DOM draft into the DOM Standard and HTML Standard. Once done this should remove a whole bunch of implementation questions that have come up and essentially turn these features into new fundamental primitives all other features will have to cope with.

There are still a number of open issues for which we need to reach rough consensus and input on those would certainly be appreciated. And there is still quite a bit of work to be done once those issues are resolved. Many features in HTML and CSS will need to be adjusted to work correctly within shadow trees. E.g., as things stand today in the HTML Standard a script element does not execute, an img element does not load, et cetera.
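
A small sketch of why those adjustments matter (assuming the upstreamed APIs; the image path is invented): once shadow trees are ordinary parts of the document model, whatever is put inside them is expected to just work.

const host = document.createElement("div");
document.body.appendChild(host);

const root = host.attachShadow({ mode: "open" });
const img = document.createElement("img");
img.src = "avatar.png";  // per the updated HTML Standard this should load
root.appendChild(img);   // exactly as it would outside the shadow tree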

Planet WebKit: New Web Features in Safari

Last week, a new version of Safari shipped with the release of iOS 9.3 and OS X El Capitan 10.11.4. Safari on iOS 9.3 and Safari 9.1 on OS X are significant releases that incorporate a lot of exciting web features from WebKit. These are web features that were considered ready to go, and we simply couldn’t wait to share them with users and developers alike.

On top of new web features, this release improves the way people experience the web with more responsiveness on iOS, new gestures on OS X, and safer JavaScript dialogs. Developers will appreciate the extra polish, performance and new tools available in Web Inspector.

Here is a brief review of what makes this release so significant.

Web Features

Picture Element

The <picture> element is a container that is used to group different <source> versions of the same image. It offers a fallback approach so the browser can choose the right image depending on device capabilities, like pixel density and screen size. This comes in handy for using new image formats with built-in graceful degradation to well-supported image formats. The ability to specify media queries in the media attribute on the <source> elements brings art direction of images to responsive web design.

For more on the <picture> element, take a look at the HTML 5.1 spec.

CSS Variables

CSS variables, known formally as CSS Custom Properties, let developers reduce code duplication, code complexity and make maintenance of CSS much easier. Recently we talked about how Web Inspector took advantage of CSS variables, to reduce code duplication and shed many CSS rules.
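
Custom properties are also reachable from script, which is part of what makes them more than a preprocessor feature. A small sketch (the property name is made up):

const rootStyle = document.documentElement.style;
rootStyle.setProperty("--brand-color", "#0fc");                 // define or override
const computed = getComputedStyle(document.documentElement);
console.log(computed.getPropertyValue("--brand-color").trim()); // "#0fc"
// In a stylesheet the same variable would be consumed with: color: var(--brand-color);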

You can read more about CSS Variables in WebKit.

Font Features

CSS font features allow you to use special text styles and effects available in fonts like ligatures and small caps. These aren’t faux representations manufactured by the browser, but the intended styles designed by the font author.

For more information, read the CSS Font Features blog post.

Will Change Property

The CSS will-change property lets you tell the browser ahead of time about changes that are likely to happen on an element. The hint gives browsers a heads-up so that they can make engine optimizations to deliver smooth performance.
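
One common pattern, sketched here with an invented selector, is to set the hint from script just before an element starts changing and to clear it afterwards so the browser can release whatever it set aside:

const box = document.querySelector(".animated-box"); // selector is illustrative
if (box) {
  box.addEventListener("mouseenter", () => { box.style.willChange = "transform"; });
  box.addEventListener("animationend", () => { box.style.willChange = "auto"; });
}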

Read more about will-change in the CSS Will Change Module Level 1 spec.

Gesture Events for OS X

Already available in WebKit for iOS, gesture events are supported on OS X with Safari 9.1. Gesture events are used to detect pinching and rotation on trackpads.
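
A short sketch of the WebKit-specific API (the element id is invented): the gesturestart, gesturechange, and gestureend events carry scale and rotation values for the pinch or rotation in progress.

const target = document.querySelector("#zoomable");
if (target) {
  target.addEventListener("gesturechange", (event) => {
    event.preventDefault(); // keep the page itself from zooming
    target.style.transform =
      "scale(" + event.scale + ") rotate(" + event.rotation + "deg)";
  });
}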

See the GestureEvent Class Reference for more details.

Browsing Improvements

Fast-Tap on iOS

WebKit for iOS has a 350 millisecond delay to allow detecting double-tapping to zoom content that appears too small on mobile devices. With the release of Safari 9.1, WebKit on iOS removes the delay for web content that is designed for the mobile viewport size.

Read about More Responsive Tapping on iOS for details on how to ensure web pages can get this fast-tap behavior.

JavaScript Dialogs

To protect users from bad actors using JavaScript dialogs in unscrupulous ways, the dialogs in Safari 9.1 were redesigned to look and work more like web content. The new behavior means that dialogs no longer prevent a user from navigating away or closing the page. Instead users can more clearly understand the dialogs come from the web page they are viewing and easily dismiss them.

For more details, see the Displaying Dialogs section from What’s New in Safari.

Web Inspector Improvements

Web developers will enjoy several new noteworthy enhancements to debugging and styling with Web Inspector. Faster performance in timelines means debugging complex pages and web apps is easier than ever. The new Watch Expressions section in the Scope Chain sidebar helps a developer to see the data flowing through the JavaScript environment.

In the Elements tab, pseudo-elements such as ::before and ::after are accessible from the DOM tree to make it straightforward to inspect and style them.

Web Inspector also added a Visual Styles sidebar that adds visual tools for modifying webpage styles without requiring memorization of all of the properties and elements of CSS. It makes styling a web page more accessible to designers and developers alike, allowing them to get involved in exploring different styles.

Learn more about how it works in Editing CSS with the Visual Styles Sidebar.

Feedback

That is a lot of enhancements and refinements for a dot-release. Along with OS X El Capitan, Safari 9.1 is also available on OS X Yosemite and OS X Mavericks — bringing all of these improvements to even more users. We’d love to hear about your favorite new feature. Please send your tweets to @webkit or @jonathandavis and let us know!

Planet Mozilla: [worklog] The world is mad.

The news this week was quite disheartening. Young individuals with dreams of violence and blood. Politicians with words of hate. The Web compatibility bugs seem a gentle stroke in comparison. Tune of the week: Mad World - Gary Jules.

Webcompat Life

Short webcompat meeting.

Finally managed to get the to-be-triaged bugs down to a couple of them. We need to get better as a community at filtering the incoming bugs. There are right now 2 remaining old issues.

Progress this week:

Today: 2016-03-25T10:41:48.606549
418 open issues
----------------------
needsinfo       2
needsdiagnosis  131
needscontact    96
contactready    84
sitewait        105
----------------------

You are welcome to participate

Blink devtools

When working on an invalid webcompat issue, I noticed something in the Opera Blink developer tools console which made me happy.

Opera blink devtools

Noticed the message? "jquery-1.6.2.min.js:18 'webkitRequestAnimationFrame' is vendor-specific. Please use the standard 'requestAnimationFrame' instead."

-webkit-border-image

There is progress on the front of -webkit-border-image and border-style in the WebKit/Blink world. Some Web sites might break, which should help get them fixed.

Preparing the work week in London

Mike kicked off the discussion for the London Mozilla meeting on the Webcompat side. If you want to participate and have a topic that you are ready to push, please add it to the wiki and/or the mailing list.

Webcompat issues

(a selection of some of the bugs worked on this week).

Gecko Bugs

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Planet WebKit: Carlos García Campos: WebKitGTK+ 2.12

We did it again, the Igalia WebKit team is pleased to announce a new stable release of WebKitGTK+, with a bunch of bugs fixed, some new API bits and many other improvements. I’m going to talk here about some of the most important changes, but as usual you have more information in the NEWS file.

FTL

FTL JIT is a JavaScriptCore optimizing compiler that was developed using LLVM to do low-level optimizations. It’s been used by the Mac port since 2014, but we hadn’t been able to use it because it required some patches for LLVM to work on x86-64 that were not included in any official LLVM release, and there were also some crashes that only happened on Linux. At the beginning of this release cycle we already had LLVM 3.7 with all the required patches and the crashes had been fixed as well, so we finally enabled FTL for the GTK+ port. But in the middle of the release cycle Apple surprised us by announcing that they had the new FTL B3 backend ready. B3 replaces LLVM and it’s entirely developed inside WebKit, so it doesn’t require any external dependency. JavaScriptCore developers quickly managed to make B3 work on Linux-based ports and we decided to switch to B3 as soon as possible to avoid making a new release with LLVM only to remove it in the next one. I’m not going to go into the technical details of FTL and B3, because they are very well documented and probably too boring for most people; the key point is that it improves the overall JavaScript performance in terms of speed.

Persistent GLib main loop sources

Another performance improvement introduced in WebKitGTK+ 2.12 has to do with main loop sources. WebKitGTK+ makes extensive use of the GLib main loop; it has its own RunLoop abstraction on top of the GLib main loop that is used by all secondary processes and most of the secondary threads as well, scheduling main loop sources to send tasks between threads. JavaScript timers, animations, multimedia, the garbage collector, and many other features are based on scheduling main loop sources. In most cases we are actually scheduling the same callback all the time, but creating and destroying the GSource each time. We realized that creating and destroying main loop sources caused overhead with a significant impact on performance. In WebKitGTK+ 2.12 all main loop sources were replaced by persistent sources, which are normal GSources that are never destroyed (unless they are not going to be scheduled anymore). We simply use the GSource ready time to make them active/inactive when we want to schedule/stop them.

Overlay scrollbars

GNOME designers have been asking us to implement overlay scrollbars since they were introduced in GTK+, because WebKitGTK+ based applications didn’t look consistent with all other GTK+ applications. Since WebKit2, the web view is no longer a GtkScrollable, but it’s scrollable by itself using the native scrollbar appearance or the one defined in CSS. This means we have our own scrollbar implementation that we try to render as close as possible to the native ones, and that’s why it took us so long to find the time to implement overlay scrollbars. But WebKitGTK+ 2.12 finally implements them, and they are, of course, enabled by default. There’s no API to disable them, but we honor the GTK_OVERLAY_SCROLLING environment variable, so they can be disabled at runtime.

But the appearance was not the only thing that made our scrollbars inconsistent with the rest of the GTK+ applications, we also had a different behavior regarding the actions performed for mouse buttons, and some other bugs that are all fixed in 2.12.

The NetworkProcess is now mandatory

The network process was introduced in WebKitGTK+ 2.4 to be able to use multiple web processes. We had two different paths for loading resources depending on the process model being used. When using the shared secondary process model, resources were loaded by the web process directly, while when using the multiple web process model, the web processes sent the requests to the network process to be loaded. Maintaining these two different paths was not easy, with some bugs happening only when using one model or the other, and the network process also gained features, like the disk cache, that were not available in the web process. In WebKitGTK+ 2.12 the non-network-process path has been removed, and the shared single process model has become the multiple web process model with a limit of 1. In practice it means that a single web process is still used, but networking happens in the network process.

NPAPI plugins in Wayland

I have read in many bug reports and mailing lists that NPAPI plugins will not be supported in Wayland, so things like http://extensions.gnome.org will not work. That’s not entirely true. NPAPI plugins can be windowed or windowless. Windowed plugins are those that use their own native window for rendering and handling events, implemented on X11-based systems using the XEmbed protocol. Since Wayland doesn’t support XEmbed and doesn’t provide an alternative either, it’s true that windowed plugins will not be supported in Wayland. Windowless plugins don’t require any native window; they use the browser window for rendering, and events are handled by the browser as well, using an X11 drawable and X events on X11-based systems. So it’s also true that windowless plugins that have a UI will not be supported by Wayland either. However, not all windowless plugins have a UI, and there’s nothing X11-specific in the rest of the NPAPI plugin API, so there’s no reason why those can’t work in Wayland. And that’s exactly the case for http://extensions.gnome.org, for example. In WebKitGTK+ 2.12 the X11 implementation of NPAPI plugins has been factored out, leaving the rest of the API implementation common and available to any window system used. That made it possible to support windowless NPAPI plugins with no UI in Wayland, and any other non-X11 system, of course.

New API

And as usual we have completed our API with some new additions:

 

Planet Mozilla: [worklog] Python code review, silence for some bugs.

Seen a lot of discussions about ad blockers. It seems it's like pollen fever: these discussions are seasonal and recurrent. I have a proposal for the Web and its publishing industry. Each time you use the term Ad Blocker in an article, replace it with Performance Enhancer or Privacy Safeguard and see if your article still makes sense. Tune of the week: Feeling Good — Nina Simone.

Webcompat Life

Progress this week:

Today: 2016-03-18T18:14:59.820644
450 open issues
----------------------
needsinfo       9
needsdiagnosis  130
needscontact    95
contactready    84
sitewait        104
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

Gecko Bugs

Webcompat.com development

Reading List

  • Instagram-style filters in HTML5 Canvas. A clever technique for applying filters to photos when browsers lack support for CSS filters. Two thoughts: 1. What is the performance analysis? 2. This should be tied to a class name with a proper fallback for seeing the image when JS is not available.
  • 6 reasons to start using flexbox. "But in actual fact, flexbox is very well supported, at 95.89% global support. (…) The solution to this I have settled on is to just use autoprefixer. Keeping track of which vendor prefixes we need to use for what isn’t necessarily the best use of our time and effort, so we can and should automate it."
  • I came across that introductory blog post about CSS. I liked it very much because it's very basic: written for beginners, but with a nice twist to it. It also reminds me that writing about the simplest things can be very helpful to others.
  • Selenium and JavaScript events. In this case, used with Python for functional testing.
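
A minimal sketch of that fallback idea, in TypeScript, assuming a hypothetical helper name and a simple grayscale effect: prefer CSS filters when the browser supports them, fall back to canvas pixel manipulation otherwise, and leave the plain image alone when canvas is unavailable.

// Sketch only: "applyGrayscaleFallback" is a hypothetical helper, not from the linked article.
function applyGrayscaleFallback(img: HTMLImageElement): void {
  // If CSS filters are supported, let CSS do the work.
  if (typeof CSS !== 'undefined' && CSS.supports('filter', 'grayscale(1)')) {
    img.style.filter = 'grayscale(1)';
    return;
  }
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  if (!ctx || !img.parentNode) return; // no canvas support either: keep the plain image
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  ctx.drawImage(img, 0, 0);
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const d = pixels.data;
  for (let i = 0; i < d.length; i += 4) {
    // Luma-weighted grayscale of the RGB channels.
    const gray = 0.299 * d[i] + 0.587 * d[i + 1] + 0.114 * d[i + 2];
    d[i] = d[i + 1] = d[i + 2] = gray;
  }
  ctx.putImageData(pixels, 0, 0);
  img.parentNode.replaceChild(canvas, img);
}

Calling such a helper only for images carrying a dedicated class name keeps the no-JS fallback intact: without JS the class simply has no effect and the unfiltered image is shown.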

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

W3C Team blogHTML: What’s next?

Since the end of last year the Web Platform Working Group has responsibility for W3C’s HTML spec, as well as many other core specifications. What have we been doing with HTML, and what is the plan?

The short story is that we are working toward an HTML 5.1 Recommendation later this year. The primary goals are to provide a specification that is a better match for reality, by incorporating things that are interoperable and removing things that aren’t.

We also want more people and organisations to get involved and make sure the development of HTML continues to reflect the needs and goals of the broad community.

As an important step down that path, the editors (Arron Eicholz, Steve Faulkner and Travis Leithead) have published the Editors’ Draft on GitHub, and by using Bikeshed to build it we have made it easier for people to propose an effective edit. Different kinds of edit require different levels of effort, of course…

Fixing a typo, or clarifying some text so it is easier to understand, are easy ways to start contributing, getting used to the spec source and github, and improving HTML. This level of edit will almost always be accepted with little discussion.

Meanwhile, we welcome suggestions – ideally as pull requests, but sometimes raising an issue is more appropriate – for features that should not be in a Recommendation yet, for example because they don’t work interoperably.

Naturally proposals for new features require the most work. Before we will accept a substantial feature proposal as part of an HTML recommendation, there needs to be an indication that it has real support from implementors – browsers, content producers, content authoring and management system vendors and framework developers are all key stakeholders. The Web Platform Incubator Community Group is specifically designed to provide a home for such incubation, although there is no obligation to do it there. Indeed, the picture element was developed in its own Community Group, and is a good example of how to do this right.

Finally, a lot of time last year was spent talking about modularisation of HTML. But that is much more than just breaking the spec into pieces – it requires a lot of deep refactoring work to provide any benefit. We want to start building new things that way, but we are mostly focused on improving quality for now.

The Working Group is now making steady progress on its goals for HTML, as well as its other work. An important part of W3C work is getting commitments from organisations to provide Royalty-Free patent licenses, and for some large companies with many patents that approval takes time. At the same time, Art Barstow, who was for many years co-chair of Web Apps and an initial co-chair of this group, has had to step down due to other responsibilities. While chaals continues as a co-chair from Web Apps, joined by new co-chairs Adrian Bateman and Léonie Watson, we still miss both Art’s invaluable contributions and Art himself.

So we have taken some time to get going, but we’re now confident that we are on track to deliver a Recommendation for HTML 5.1 this year, with a working approach that will make it possible to deliver a further improved HTML Recommendation (5.2? We’re not too worried about numbering yet…) in another year or so.

Planet MozillaFirefox debuts Web Speech API, Guaraní localization and new developer tools

A few minutes ago Mozilla released a new version of Firefox, and we can already enjoy new and interesting features in our favorite browser. In case you didn't know, releases of the red panda will no longer ship every 6 weeks; from now on they will follow a variable cadence of between 6 and 8 weeks.

The Web Speech API

If you have ever dreamed of talking to a web page and having it carry out actions, just as you do on iOS or Android, you can now do that in Firefox. Web Speech lets us incorporate voice data into our pages and web applications, allowing certain functions, such as authentication, to be controlled by voice.

Web Speech comprises two fundamental components: recognition (as its name suggests, it takes care of recognizing speech from an input device) and synthesis (text to speech). For more details on how to use this API you can read the article published on MDN. On GitHub you will also find some examples illustrating speech recognition and synthesis.
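
A minimal sketch of both halves in TypeScript, assuming a browser that exposes speechSynthesis and a (possibly prefixed) SpeechRecognition constructor; the snippet is illustrative, not taken from the MDN article or the GitHub examples mentioned above.

// Synthesis: text to speech.
const utterance = new SpeechSynthesisUtterance('Hola, Firefox');
utterance.lang = 'es-ES';
window.speechSynthesis.speak(utterance);

// Recognition: speech from an input device to text.
const Recognition =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
if (Recognition) {
  const recognizer = new Recognition();
  recognizer.lang = 'es-ES';
  recognizer.onresult = (event: any) => {
    console.log('Heard:', event.results[0][0].transcript);
  };
  recognizer.start();
}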

Synced tabs at a glance

Since Sync was added to the browser, you can share your information and preferences (such as bookmarks, passwords, open tabs, reading list and installed add-ons) with all your devices, so you stay up to date and never miss a thing.

With this release, seeing the tabs you have open on other devices becomes much easier and more intuitive: as soon as you sync, the Synced Tabs button appears in the toolbar, giving you one-click access to those pages from a panel on your computer. Also, while you type in the address bar, those tabs will show up in the drop-down list.

Panel showing the tabs open on other devices

If you have never set up Sync and want to do so, you can read this article on Mozilla Support. It's quick and easy!

Firefox speaks Guaraní

Thanks to the collaboration of the Mozilla communities in Paraguay and Bolivia, and of the Universidad Nacional de Asunción (UNA), it has been possible to translate Firefox into Guaraní, a language native to Latin America that is widely used in Paraguay alongside Spanish; it is also spoken in some regions of southern Brazil and northern Argentina, as well as in part of Bolivia.

The project, christened Aguaratata (fire fox), took roughly two years and involved translating more than 45,000 words. Guaraní thus becomes one of the 91 localizations, or translations, in which Firefox is available.

Goodbye to tab groups

Panorama, the feature that let us manage groups and organize our tabs, is finally being removed from Firefox. This news had been going around for some time, since Panorama was used by less than 1% of users and, on the developers' side, it was hard to maintain given the changes currently happening at the heart of Firefox.

If you are one of the people who used this feature, when you update to Firefox 45 a special tab will appear explaining what has happened. All your tab groups will automatically be converted into bookmarks and saved in a bookmarks folder. You will be able to reach them by clicking the Bookmarks button in the toolbar.

If you want a direct replacement, try the Tab Groups add-on, which was built directly from the Firefox code and works just like the old feature. If you install it before updating to Firefox 45:

  • Your Firefox tab groups will be migrated to the add-on automatically.
  • Firefox will not convert your groups into bookmarks, as described above.

With Tab Groups it will be as if the feature had never left Firefox, and you won't notice the change when you update.

For Android

  • Added an option to disable camera and microphone sharing in the family browsing management interface.
  • The click-to-view-images option in the advanced settings menu lets users choose which images to download, saving data.
  • Replaced notifications with snackbars.
  • Streamlined and reorganized "Settings" under the "Menu".
  • Disabled including the URL when sharing text selected from a web page.
  • Simplified the family browsing management interface on tablets when a restricted profile is in use.

Other news for developers

If you would rather see the full list of changes, you can head over to the release notes (in English).

We do not have the mobile versions yet, but we will let you know as soon as we do.

You can get this version from our Downloads area, in Spanish and English, for Linux, Mac and Windows. If you liked it, please share this news with your friends on social networks. Feel free to leave us a comment.

Planet MozillaThe Gecko monoculture

I remember a time, not so very long ago, when Gecko powered 4 or 5 non-Mozilla browsers, some of them on exotic platforms, as well as GPS devices, WYSIWYG editors, geographic platforms, email clients, image editors, eBook readers, documentation browsers, the UX of virus scanners, etc., as well as a host of innovative and exotic add-ons. In those days, Gecko was considered, among other things, one of the best cross-platform development toolkits available.

The year is now 2016 and, if you look around, you’ll be hard-pressed to find Gecko used outside of Firefoxen (alright, and Thunderbird and Bluegriffon). Did Google or Apple or Microsoft do that? Not at all. I don’t know how many in the Mozilla community remember this, but this was part of a Mozilla strategy. In this post, I’d like to discuss this strategy, its rationale, and the lessons that we may draw from it.

Building a Gecko monoculture

For the first few years of Firefox, enthusiasm for the technology behind the browser was enormous. After years of implementing Gecko from scratch, Mozilla had a kick-ass cross-platform toolkit that covered almost everything from system interaction to network, cryptography, user interface, internationalization, even an add-on mechanism, a scripting language and a rendering engine. For simplicity, let’s call this toolkit XUL. Certainly, XUL had a number of drawbacks, but in many ways, this toolkit was years ahead of everything that other toolkits had to offer at the time. And many started to use XUL for things that had never been planned. Dozens of public projects and certainly countless more behind corporate doors. Attempts were made to extend XUL towards Python, .Net, Java and possibly more. These were the days of the “XUL Planet”. All of this was great – for one, that is how I joined the Mozilla community, embedding Gecko in exotic places and getting it to work nicely with exotic network protocols.

But this success was also hurting Mozilla’s mission in two ways. The first way was the obvious cost. The XUL platform had a huge API, in JavaScript, in C, in C++, in IDL, in declarative UI (XUL and XBL), not to mention its configuration files and exotic query language (hello, RDF, I admit that I don’t really miss you that much), and I’m certain I’m missing a few. Oh, and that’s not including the already huge web-facing API, which can never abandon backwards compatibility with any feature, of course. Since third-party developers could hit any point of this not-really-internal API, any change made to the code of Gecko had the potential of breaking applications in subtle and unexpected ways, applications that we often couldn’t test ourselves. This meant that any change needed to be weighed carefully, as it could put third-party developers out of business. That’s hardly ideal when you are attempting to move quickly. To make things worse, this API was never designed for such a scenario; many bits were extremely fragile, often put together in haste with the idea of taking them down once a better API was available. Unfortunately, in many cases, fixing or replacing components proved impossible, for the sake of compatibility. And to make things even worse, the XUL platform was targeting an insane number of operating systems, including Solaris, RiscOS, OS/2, even the Amiga Workbench if I recall correctly. Any change had to be kept synchronized between all these platforms, or, once again, we could end up putting third-party developers out of business by accident.

So this couldn’t last forever.

Another way this success was hurting Mozilla is that XUL was not the web. Recall that Mozilla’s objectives were not to create yet another cross-platform toolkit, no matter how good, but to Take Back the Web from proprietary and secret protocols. When the WhatWG and HTML5 started rising, it became clear that the web was not only taken back, but that we were witnessing the dawn of a new era of applications that could run on all operating systems, that were based on open protocols and, at least at some level, on open source. Web applications were the future, an ideal future by some criteria, and the future was there. In this context, non-standard, native cross-platform toolkits were a thing of the past, something that Mozilla was fighting, not something that Mozilla should be promoting. It made perfect sense to stop putting resources into XUL and concentrate more tightly on the web.

So XUL as a cross-platform toolkit couldn’t last forever.

I’m not sure exactly who took the decision, but at some point around 2009 Mozilla’s strategy changed. We started deprecating the use cases of Gecko that were not the Web Platform. This wasn’t a single decision or one fell swoop, and it didn’t go in one single unambiguous direction, but it happened. We got rid of the impedimenta.

We reinvented Gecko as a Firefox monoculture.

Living in a monoculture

We have now been living in a Gecko monoculture long enough to be able to draw lessons from our choices. So let’s look at the costs and benefits.

API and platform cost

Now that third-party developers using Gecko and hitting every single internal API are gone, it is much easier to refactor. Some APIs are clearly internal, and I can change them without consulting anyone. Some are still accessible to add-ons, and I need to look for add-ons that use them and get in touch with their developers, but this is still infinitely simpler than it used to be. Already, dozens of refactorings that were critically needed, but that had been blocked at some point in the past by backwards internal compatibility, have become possible. Soon, Jetpack and WebExtensions will become the sole entry points for writing most add-ons, and Gecko developers will finally be free to refactor their code at will as long as it doesn’t break public APIs, much like the developers of every single other platform on the planet.

Similarly, dropping support for exotic platforms made it possible to drop plenty of legacy code that was hurting refactoring, and in many cases, made it possible to write richer APIs without being bound by the absolute need to implement everything on every single platform.

In other words, by the criteria of reducing costs and increasing agility, yes, the Gecko monoculture has been a clear success.

Web Applications

Our second objective was to promote web applications. And if we look around these days, web applications are everywhere, except on mobile. Actually, that’s not entirely true. On mobile, a considerable number of applications are built using PhoneGap/Cordova. In other words, these are web applications, wrapped in native applications, with most of the benefits of both worlds. Indeed, one could argue that PhoneGap/Cordova applications are more or less the applications that could have been developed with XUL, instead developed with a closer-to-standards approach. As a side note, it is a little-known fact that one of the many (discarded) designs of FirefoxOS was as a runtime somewhat comparable to PhoneGap/Cordova, which would have replaced the XUL platform.

Despite the huge success of web applications, and even the success of hybrid web/native applications, the brand new world in which everything would be a web application hasn’t arrived yet, and it is not certain that it ever will. The main reason is that mobile has taken over the world. Mobile applications need to integrate with a rather different ecosystem, with push notifications, working while disconnected, transactions and microtransactions, etc., not to mention a host of new device-specific features that were not initially web-friendly. Despite the efforts of most browser vendors, browsers still haven’t caught up with this moving target. New mobile devices have gained voice recognition, and in the meantime the WhatWG is still struggling to design a secure, cross-platform API for accessing local files.

In other words, by the criteria of pushing web applications, I would say that the Gecko monoculture has had a positive influence, but not quite enough to be called a success.

The Hackerbase

Now that we have seen the benefits of this Gecko monoculture, perhaps it is time to look at the costs.

By turning Gecko into a Firefox monoculture, we have lost dozens of products. We have almost entirely lost the innovations that were not on the roadmap of the WhatWG, as well as the innovators themselves. Some of them have turned to web applications, which is what we wanted, or hybrid applications, which is close enough to what we wanted. In the meantime, somewhere else in the world, the ease of embedding first WebKit and now Chromium (including Atom/Electron) has made it much easier to experiment and innovate with those platforms, and to do everything that has ever been possible with XUL, and more. Speaking only for myself, if I were to enter the field today with the same kind of technological needs I had 15 years ago, I would head towards Chromium without a second thought. I find it a bit sad that my present self is somehow working against my past self, when they could be working together.

By turning our back on our Hackerbase, we have lost many things. In the first place, we have lost smart people, who may have contributed ideas or code or just dynamism. In the second place, we have lost plenty of opportunities for our code and our APIs to be tested for safety, security, or just good design. That’s already pretty bad.

Just as importantly, we have lost opportunities to be part of important projects. Chris Lord has more to say on this topic, so I’ll invite you to read his post if you are interested.

Also, somewhere along the way, we have largely lost any good reason to provide clean and robust APIs, or to separate concerns between our libraries. I would argue that the effects of this can be witnessed in our current codebase. Perhaps not in the web-facing APIs, which are still challenged, in terms of convenience, safety and robustness, by their (mis)usage, but in all our internal and add-on APIs, many of which are sadly under-documented, under-tested, and designed to break in new and exciting ways whenever they are confronted with unexpected inputs. One could argue that the picture I am painting is too bleak, and that some of our fragile APIs are, in fact, due to backwards compatibility with add-ons or, at some point, third-party applications.

Regardless, by the criteria of our Hackerbase, I would count the Gecko monoculture as a bloody waste.

Bottom line

So the monoculture has succeeded at making us faster, has somewhat helped propagate Web Applications, and has hurt us by severing our hackerbase.

Before starting to write this blogpost, I felt that turning Gecko into a Firefox monoculture was a mistake. Now, I realize that this was probably a necessary phase. The Gecko from 2006 was impossible to fix, impossible to refactor, impossible to improve. The Firefox from 2006 would have needed a nearly-full reimplementation to support e10s or Rust-based code (ok, I’m excluding Rust-over-XPConnect, which would be a complete waste). Today’s Gecko is much fitter to fight against WebKit and Chromium. I believe that tomorrow’s Gecko – not Firefox, just Gecko – with full support for WebExtensions and progressive addition of new, experimental WebExtensions, would be a much better technological base for implementing, say, a cross-platform e-mail client, or an e-Book reader, or even a novel browser.

Like all phases, though, this monoculture needs to end sooner or later, and I certainly hope that it ends soon, because we keep paying the cost of this transformation through our community.

Surviving the monoculture

An exit strategy from the Gecko monoculture

It is my belief that we now need to consider an exit strategy from the Gecko monoculture. No matter which strategy is picked, it will have a cost. But I believe that the potential benefits in terms of community and innovation will outweigh these costs.

First, we need to avoid repeating past mistakes. While WebExtensions may not cover all the use cases for which we need an extension API for Gecko, they promise a set of clean and high-level APIs, and that is a good base. We need to make sure that whatever we offer as part of WebExtensions, or in addition to them, remains a set of high-level, well-insulated APIs, rather than the panic-inducing entanglement that is our set of internal APIs.

Second, we need to be able to extend our set of extension APIs in directions not planned by any single governing body, including Mozilla. When WebExtensions were first announced, the developers in charge of the project set up a UserVoice survey to determine the features that the community expected. This was a good start, but it will not be sufficient in the long run. Around that time, Giorgio Maone drafted an API for developing and testing experimental WebExtensions features. This was also a good start, because experimenting is critical for innovation. Now we need a bridge to progressively turn experimental extension APIs into core APIs. For this purpose, I believe that the best mechanism is an RFC forum and an RFC process for WebExtensions, inspired by the success of RFCs in the Rust (and Python) communities.

Finally, we need a technological building block that lets applications other than Firefox run Gecko. We have experience doing this, from XULRunner to Prism. A few years ago, Mike De Boer introduced “Chromeless 2”, which was roughly in the Gecko world what Electron is nowadays in the Chromium world. Clearly, this project was misunderstood by the Mozilla community; I know that it was misunderstood by me, and that it took Electron to make me realize that Mike was on the right track. The project was stopped, but it could be resumed or rebooted. To make it easier for the community, using the same API as Electron would be a possibility.
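
To make the "same API as Electron" idea concrete, this is roughly what the Electron entry point looks like in TypeScript; it is ordinary Electron code, not an existing Gecko runtime, and a rebooted Chromeless-style project would presumably expose something equivalent on top of Gecko.

import { app, BrowserWindow } from 'electron';

// Electron's minimal "wrap web content in a native window" pattern.
app.on('ready', () => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  // The application's UI is plain web content loaded into the window.
  win.loadURL('https://example.org/');
});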

Keeping projects multicultural

Similarly, I believe that we need to consider strategies that will let us avoid similar monocultures in our other projects. This includes (and is not limited to) B2G OS (formerly known as Firefox OS), Rust, Servo and Connected Devices.

So far, Rust has proved very open to innovation. For one thing, Rust has its RFC process and it works very well. Additionally, while Rust was originally designed for Servo, it has already escaped this orbit and the temptation of a Servo monoculture. Rust is now used for cryptocurrencies, operating systems, web servers, connected devices… So far, so good.

Similarly, Servo has proved quite open, albeit in very different directions. For one thing, Servo is developed separately from any web browser that may embed it, whether Servo Shell or Browser.html. Also, Servo is itself based on dozens of libraries developed, tested and released individually by community members. Similarly, many of the developments undertaken for Servo are released as independent libraries that can be maintained on their own or integrated into yet other projects… I have hopes that Servo, or at least large subsets of it, will eventually find its way into projects unrelated to Mozilla, possibly unrelated to web browsers. My only reservation is that I have not checked how much effort the Servo team has put into making sure that the private APIs of Servo remain private. If they have, then so far, so good.

The case of Firefox OS/B2G OS is quite different. B2G OS was designed from scratch as a Gecko application and was entirely dependent on Gecko and some non-standard extensions. Since the announcement that Firefox OS would be retired – and hopefully continue to live as B2G OS – it has been clear that B2G-specific Gecko support would be progressively phased out. The B2G OS community is currently actively reworking the OS to make sure that it can live in a much more standard environment. Similarly, the Marketplace, which was introduced largely to appease carriers, will disappear, leaving B2G OS to live as a web OS, as it was initially designed. While the existence of the project is at risk, I believe that these two changes, together, have the potential to also set it free from a Gecko + Marketplace + Telephone monoculture. If B2G is still alive in one or two years, it may have become a cross-platform, cross-rendering engine operating system designed for a set of devices that may be entirely different from the Firefox Phones. So, I’m somewhat optimistic.

As for Connected Devices, well, these projects are too young to be able to judge. It is our responsibility to make sure that we do not paint ourselves into monocultural corners.

edit Added a link to Chris Lord’s post on the topic of missed opportunities.

