Planet Mozilla: Tying ecosystems through browsers

One of the principles behind HTML5, and the community building it, is that the specifications that say how the Web works should have enough detail that somebody reading them can implement the specification. This makes it easier for new Web browsers to enter the market, which in turn helps users through competitive pressure on existing and new browsers.

I worry that the Web standards community is in danger of losing this principle, quite quickly, and at a cost to competition on the Web.

Some of the recent threats to the ability to implement competitive browsers are non-technical:

  • Many leading video and audio codecs are subject to non-free patent licenses, due at least in part to the patent policies and practices of the standards bodies building such codecs.
  • Implementing EME in a way that is usable in practice requires having a proprietary DRM component and then convincing the sites that use EME to support that component. This can be done by building such a component or by forming a business relationship with somebody else who already has one. But this threat to browser competition is at least partly related to the nature of DRM, whose threat model treats the end user as the attacker.

Many parts of the technology industry today are dominated by a small group of large companies (effectively an oligopoly) that have an ecosystem of separate products that work better together than with their competitors' products. Apple has Mac OS (software and hardware), iOS (again, software and hardware), Apple TV, Apple Pay, etc. Google has its search engine and other Web products, Android (software only), Chrome OS, Chromecast and Google Cast, Android Pay, etc. Microsoft has Windows, Bing, Windows Phone, etc. These products don't line up precisely, but they cover many of the same areas while varying based on the companies' strengths and business models. Many of these products are tied together in ways that both help users and, since these ties aren't standardized and interoperable, strongly encourage users to use other products from the same company.

There are some Web technologies in development that deal with connections between parts of these ecosystems. For example:

  • The Presentation API defines a way for a Web page to show content on something like a Chromecast or an Apple TV. But it only specifies the API between the Web page and the browser; the API between the browser and the TV is completely unspecified. (Mozilla participants in the group tried to change that early in the group's history, but gave up.)
  • The future Web Payments Working Group (which I wrote about last week) is intended to build technology in which the browser connects a user making a payment to a Web site. This has the risk that instead of specifying how browsers talk to payment networks or banks, a browser is expected to make business deals with them, or make business deals with somebody who already has such deals.

In both cases, specifying the system fully is more work. But it's work that needs to happen to keep the Web open and competitive. That's why we've had the principle of complete specification, and it still applies here.

I'm worried that the ties that connect the parts of these ecosystems together will start running through unspecified parts of Web technologies. This would, through the loss of the principle of specification for competition, make it harder for new browsers (or existing browsers made by smaller companies) to compete, and would make the Web as a whole a less competitive place.

Planet Mozilla: Tab audio indicators and muting in Firefox Nightly

Sometimes when you have several tabs open, and one of them starts to make some noise, you may wonder where the noise is coming from.  Other times, you may want to quickly mute a tab without figuring out whether the web page provides its own UI for muting the audio.  On Wednesday, I landed the user-facing bits of a feature that adds an audio indicator to tabs that are playing audio and lets you mute them.  You can see a screenshot of what this will look like in action below.

Tab audio indicators in action

As you can see in the screenshot, my Soundcloud tab is playing audio, and so is my Youtube tab, but the Youtube tab has been muted.  Muting and unmuting a tab is as easy as clicking on the tab audio indicator icon.  You can test this out yourself on Firefox Nightly starting tomorrow!

This feature should work with all APIs that let you play audio, such as HTML5 <audio> and <video>, and Web Audio.  Also, it works with the latest Flash beta.  Note that you actually need to install the latest Flash beta, that is, version 19.0.0.124 which was released yesterday.  Earlier versions of Flash won’t work with this feature.

We’re interested in your feedback about this feature, and especially about any bugs that you may encounter.  We hope to iron out the rough edges and then let this feature ride the trains.  If you are curious about its progress, please follow along on the tracking bug.

Last but not least, this is the result of the efforts of many of my colleagues, most notably Andrea Marchesini, Benoit Girard, and Stephen Horlander.  Thanks to them and everyone else who helped with the code, reviews, and other things!

Planet Mozilla: CSS Vendor Prefixes

I have read everything and its contrary about CSS vendor prefixes in the last 48 hours. Twitter, blogs, and Facebook are full of messages and articles about what CSS vendor prefixes are or are supposed to be. These opinions are often given by people who were not members of the CSS Working Group when we decided to launch vendor prefixes. These opinions are too often partly or even entirely wrong, so let me give you my own perspective (and history) about them. This article is written with my CSS Co-chairman's hat off; I'm only an old CSS WG member in the following lines...

  • CSS Vendor Prefixes as we know them were proposed by Mike Wexler from Adobe in September 1998 to allow browser vendors to ship proprietary extensions to CSS.

    In order to allow vendors to add private properties using the CSS syntax and avoid collisions with future CSS versions, we need to define a convention for private properties. Here is my proposal (slightly different than was talked about at the meeting). Any vendors that defines a property that is not specified in this spec must put a prefix on it. That prefix must start with a '-', followed by a vendor specific abbreviation, and another '-'. All property names that DO NOT start with a '-' are RESERVED for using by the CSS working group.

  • One of the largest shippers of prefixed properties at that time was Microsoft, which introduced literally dozens of such properties in Microsoft Office.
  • The CSS Working Group slowly evolved from that to « vendor prefixes indicate proprietary features OR experimental features under discussion in the CSS Working Group ». In the latter case, the vendor prefixes were supposed to be removed when the spec stabilized enough to allow it, i.e. reaching an official Call for Implementation.
  • Unfortunately, some prefixed « experimental features » were so immensely useful to CSS authors that they spread at a fast pace on the Web, even if CSS authors were instructed not to use them. CSS Gradients (a feature we originally rejected: « Gradients are an example. We don't want to have to do this in CSS. It's only a matter of time before someone wants three colors, or a radial gradient, etc. ») are the perfect example of that. At some point in the past, my own editor BlueGriffon had to output several different versions of CSS gradients to accommodate the various implementation states available in the wild (WebKit, I'm looking at you...); a sketch of what that output looked like follows this list.
  • Unfortunately, some of those prefixed properties took a lot, really a lot, of time to reach a stable state in a Standard and everyone started relying on prefixed properties in production web sites...
  • Unfortunately again, some vendors did not apply the rules they decided themselves: since the prefixed version of some properties was so widely used, they maintained them with their early implementation and syntax in parallel to a "more modern" implementation matching, or not, what was in the Working Draft at that time.
  • We ended up just a few years ago in a situation where prefixed properties were so widely used they started being harmful to the Web. The incredible growth of first WebKit and then Chrome triggered a massive adoption of prefixed properties by CSS authors, up to the point that other vendors seriously considered implementing the -webkit- prefix themselves, or at least simulating it.
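
As a hedged illustration of the gradient situation mentioned above, here is roughly the stack of declarations an editor had to emit to cover the implementations then in the wild; the exact syntaxes varied by engine version, so treat this as a sketch rather than an exhaustive list.

/* Later declarations win in engines that understand them; earlier ones act as fallbacks. */
.banner {
  background: #3a6ea5;                                          /* plain-color fallback */
  background: -webkit-gradient(linear, left top, left bottom,
                               from(#3a6ea5), to(#004e98));     /* original WebKit syntax */
  background: -webkit-linear-gradient(top, #3a6ea5, #004e98);   /* later prefixed WebKit */
  background: -moz-linear-gradient(top, #3a6ea5, #004e98);      /* prefixed Gecko */
  background: -o-linear-gradient(top, #3a6ea5, #004e98);        /* prefixed Presto */
  background: linear-gradient(to bottom, #3a6ea5, #004e98);     /* standard syntax */
}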

Vendor prefixes were not a complete failure. They allowed innovative products to be released to the masses and drove the deep adoption of HTML and CSS in products that were not originally made for Web Standards (like Microsoft Office). They allowed vendors to ship experimental features and gather priceless feedback from our users, CSS Authors. But they failed for two main reasons:

  1. The CSS Working Group - and the Group is really made only of its Members, the vendors - took faaaar too much time to standardize critical features that saw immediate massive adoption.
  2. Some vendors did not update nor "retire" experimental features when they had to do it, themselves ditching the rules they originally agreed on.

From that perspective, putting experimental features behind a flag that is by default "off" in browsers is a much better option. It's not perfect though. I'm still under the impression that the standardization process becomes considerably harder when such a flag is "turned on" in a major browser before the spec becomes a Proposed Recommendation. A standardization process is not a straight line, and even at the latest stages of standardization of a given specification, issues can arise and trigger more work and then a delay or even important technical changes. Even at PR stage, a spec can face a formal objection or an IPR issue delaying it. As CSS matures, we increasingly deal with more and more complex features and issues, and it's hard to predict when a feature will be ready for shipping. But we still need to gather feedback, we still need to "turn flags on" at some point to get real-life feedback from CSS Authors. Unfortunately, you can't easily remove things from the Web. Breaking millions of web sites to "retire" an experimental feature is still a difficult choice...

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements a proprietary feature for its own use that is so important to them internally that they have to "unflag" it, you can be sure some authors will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, with any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard too many times the message « sorry, can't remove it, it spread too much ». It's a bad signal because it clearly tells CSS Authors that experimental features are reliable, because they will stay forever as they are. They also have to work faster and avoid keeping an experimental feature alive for more than two years. That requires taking the following hard decisions:

  • if a feature does not stabilize in two years' time, that's probably because it's not ready, or too hard to implement, or not strategic at that moment, or the production of a Test Suite is too large an effort, or whatever. It then has to be dropped or postponed.
  • Tests are painful and time-consuming. But testing is one of the mandatory steps of our Standardization process. We should "postpone" specs that can't get a Test Suite to move along the REC track in a reasonable time. That implies removing the experimental feature from browsers, or at least turning the flag they live behind off again. It's a hard and painful decision, but it's a reasonable one given all I said above and the danger of letting an experimental feature spread.

W3C Team blog: Moving the Web Platform forward

The Web Platform keeps moving forward every day. Back in October last year, following the release of HTML 5.0 as a Recommendation, I wrote about Streaming video on the Web as a good example of more work to do. But that’s only one among many: persistent background processing, frame rate performance data, metadata associated with a web application, or mitigating cross-site attacks are among many additions we’re working on to push the envelope. The Open Web Platform is far from complete and we’ve been focusing on strengthening the parts of the Open Web Platform that developers most urgently need for success, through our push for Application Foundations. Our focus on developers led us to the recent launch of the W3C’s Web Platform Incubator Community Group (WICG). It gives developers the easiest possible way to propose new platform features and incubate their ideas.

As part of the very rapid pace of innovation in the Web Platform, HTML itself will continue to evolve as well. The work on Web Components is looking to provide Web developers the means to build their own fully-featured HTML elements, to eliminate the need for scaffolding in most Web frameworks or libraries. The Digital Publishing folks are looking to produce structural semantic extensions to accommodate their industry, through the governance model for modularization and extensions of WAI-ARIA.

In the meantime, the work boundaries between the Web Applications Working Group and the HTML Working Group have narrowed over the years, given that it is difficult nowadays to introduce new HTML elements and attributes without looking at their implications at the API level. While there is a desire to reorganize the work in terms of functionalities rather than technical solutions, resulting in several Working Groups, we’re proposing the Web Platform Working Group as an interim group while discussion is ongoing regarding the proper modularization of HTML and its APIs. It enables the ongoing specifications to continue to move forward over the next 12 months. The second proposed group will be the Timed Media Working Group. The Web is increasingly used to share and consume timed media, especially video and audio, and we need to enhance these experiences by providing a good Web foundation to those uses, by supporting the work of the Audio and Web Real-Time Communications Working Groups.

The challenge in making those innovations and additions is to continue to have an interoperable and royalty-free Web for everyone. Let’s continue to make the Open Web Platform the best platform for documents and applications.

Planet Mozilla: Vendor Prefixes And Market Reality

Through the Web Compat twitter account, I happened to read a thread about Apple introducing a new vendor prefix. 🎳. The message by Alfonso Martínez L. starts a bit rough:

The mess caused by vendor prefixes on the wild is not enough, so we have new -apple https://www.webkit.org/blog/3709/using-the-system-font-in-web-content/ … @jonathandavis

Going to the Apple blog post before reading the rest of the thread gives a bit more background.

Web content is sometimes designed to fit in with the overall aesthetic of the underlying platform which it is being rendered on. One of the ways to achieve this is by using the platform’s system font, which is possible on iOS and OS X by using the “-apple-system” CSS value for the “font-family” CSS property. On iOS 9 and OS X 10.11, doing this allows you to use Apple’s new system font, San Francisco. Using “-apple-system” also correctly interacts with the font-weight CSS property to choose the correct font on Apple’s latest operating systems.

Here I understand the desire to use the system font, but I don't understand the new -apple-system, specifically when the next paragraph says:

On platforms which do not support “-apple-system” the browser will simply fall back to the next item in the font-family fallback list. This provides a great way to make sure all your users get a great experience, regardless of which platform they are using.
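
For context, here is a minimal sketch of what that fallback declaration looks like in practice (the fallback families here are my own illustrative choices, not ones from the Apple post):

/* -apple-system resolves to the system font (San Francisco on recent Apple
   platforms); engines that don't recognize the value simply move on to the
   next family in the list. */
body {
  font-family: -apple-system, "Helvetica Neue", Arial, sans-serif;
}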

I wonder what the font-family cascade is not already doing that they would need a new prefix for. They explain later on by providing this information:

Going beyond the system font, iOS has dynamic type behavior, which can provide an additional level of fit and finish to your content.

font: -apple-system-body
font: -apple-system-headline
font: -apple-system-subheadline
font: -apple-system-caption1
font: -apple-system-caption2
font: -apple-system-footnote
font: -apple-system-short-body
font: -apple-system-short-headline
font: -apple-system-short-subheadline
font: -apple-system-short-caption1
font: -apple-system-short-footnote
font: -apple-system-tall-body

What I smell here is pushing the semantics of the text into the font face, and I believe it will not end well. But that's not what I want to talk about here.

Vendor Prefixes Principle

The vendor prefixes were created to provide a safe place for vendors to experiment with new features. It's a good idea on paper. It can work well, specifically when the technology is not yet really mature and details need to be ironed out. This would be perfectly acceptable if the feature was only available in beta and alpha versions of rendering engines. That would de facto stop the proliferation of these properties on everyday Web sites. And that would give space for experimenting.

Here the feature is not proposed as an experiment but as a way for Web developers and designers to use a new feature on Apple platforms. It's proposed as a competitive advantage and a marketing tool for enticing developers to the cool new thing. And before I'm accused of blaming only Apple: all vendors do that in some fashion.

Let's assume that Apple is of good will. The real issue is not easy to understand unless you are working daily on Web Compatibility across the world.

Enter the market reality field.

Flexbox And Gradients In China And Japan

torii in Kamakura

With the Web Compat team, we have been working a lot lately on Chinese and Japanese mobile Web site compatibility issues. The current mobile market in China and Japan is a smartphone ecosystem largely dominated by iOS and Android. It means that if you use -webkit- vendor prefixes on your site, you are basically on the safe side for most users, but not all of them.

What is happening here is interesting. Gradients and flexbox went through syntax changes, and the standard syntax is really different from the original -webkit- syntax. These are two features of the Web platform which are very useful and very powerful, specifically flexbox. In a monopolistic market such as China or Japan, the end result was Web developers jumping on the initial version of the feature to create their Web sites (shiny new and useful features).
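
To make the drift concrete, here is a hedged sketch of the original prefixed flexbox syntax next to the standard one (property choices are illustrative); a site written only against the former gets nothing from an engine that only implements the latter:

/* Original prefixed flexbox syntax, as shipped by early WebKit */
.nav {
  display: -webkit-box;
  -webkit-box-orient: horizontal;
  -webkit-box-pack: justify;
}

/* Standard flexbox syntax */
.nav {
  display: flex;
  flex-direction: row;
  justify-content: space-between;
}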

Fast forward a couple of years and the economic reality of the Web starts playing its cards. Other vendors have caught up with the features, the standards process took place, and the new world of interoperability is all rosy, with common implementations in all rendering engines except for a couple of minor details.

Web developers should all jump on adjusting their Web sites to at least add the standard properties. This is not happening. Why? Because the benefits are not perceived by Web developers, project managers and site owners. Indeed, adjusting a Web site has a cost in editing and testing. Who bears this cost, and for what reasons?

When we mention that it would allow users with different browsers to use the Web site, the answer is straightforward: "This browser X is not in our targeted list of browsers." or "This browser Y doesn't appear in our stats." We all know that browser Y can't appear in the stats because the site is not usable in it (a good example of that is MBGA).

mbga rendering on Gecko mobile

Dropping Vendor Prefixes

Adding prefixless versions of properties to rendering engines helps, but does not magically fix everything about the Web Compatibility story. That's the mistake that Timothy Hatcher (WebKit Developer Experience Manager at Apple) is making in:

@AlfonsoML We also unprefixed 4 dozen properties this year. https://developer.apple.com/library/mac/releasenotes/General/WhatsNewInSafari/Articles/Safari_9.html#//apple_ref/doc/uid/TP40014305-CH9-SW28

This is cool and I applaud Apple for this. I wish it had happened a bit earlier. Why doesn't it solve the Web Compatibility issue? Because the prefixed version of properties still exists and is supported. Altogether, we then sing the tune "Yeah, Apple (and Google), let's drop the prefixed version of these properties!" Ooooh, hear me, I so wish it were possible. But Apple and Google can't do that for the exact same reason that other non-WebKit browsers can't exist in Chinese and Japanese markets. They would instantly break a big number of high-profile Web sites.

We have reached the point where browser vendors have to start implementing or aliasing these WebKit prefixes just to allow their users to browse the Web; see Mozilla in Gecko and Microsoft in Edge. The same thing is happening all over again. In the past, browser vendors had to implement the quirks of IE to be compatible with the Web. As much as I hate it, we will have to specify the current -webkit- prefixes so they can be implemented uniformly.

Web Compatibility Responsibility

Microsoft is involved in the Web Compatibility project. I would like Apple and Google to be fully involved and committed in this project too. The mess we are all involved in is due to WebKit prefixes, and the leading position they hold in the mobile market means they can really help. This mess killed Opera Presto on mobile, which had to switch to Blink.

Let's all create a better story for the Web and understand fully the consequences of our decisions. It's not only about technology, but also economic dynamics and market realities.

Otsukare!

Planet Mozilla: Updating you on 38 just-in-time

Did you see what I did there? For the past two weeks, my free time apart from work and the Master's degree has been spent sitting in a debugger trying to fix JavaScript, which is just murder on my dating life. Here is the current showstopper bug-roll for 38.1.1b1:

  • The Faceblech bug with the new IonPower JavaScript JIT compiler is squashed, I think, after repairing some conformance test failures which in turn appear to have repaired Forceblah. In my defence, the two bugs in question were incredibly weird edge cases and these tests are not part of the usual JIT test suite, so I guess we'll have to run them as well in future. This also repairs an issue with Instagrump which is probably the same underlying issue since Faceboink owns them also.

    The silver lining after all that was that I was considering disabling inlining in the JIT prior to release, which worked around the "badness," but also cut the engine speed in about half. (Still faster than JaegerMonkey!) To make this a bit less of a hit, I tuned the thresholds for starting the twin JITs and got about 10% improvement without inlining. With inlining back on, it's still faster by about 4% and change -- the G5 now achieves a score of nearly 5800 on V8, up from 5560. I also tweaked our foreground finalization patch for generational GC so that we should be able to get the best of both worlds. Overall you should see even better performance out of this next beta.

  • I have a presumptive fix for the webfont "ATSUI puke" on the New York Times, but it's not implemented or well-tested yet. This is a crash on 10.5, so I consider it a showstopper and it will be fixed before the next beta. (It affects 31.8 also but I will not be making another 31 release unless there is a Mozilla ESR chemspill.)

  • The modified strip7 tool required for building 38.x has a serious bug in it that causes it to crash trying to strip certain symbols. I have fixed this bug and builders will need to install this new version (remember: do not replace your normal strip with this one; it is intentionally loose with the Mach-O specification). I will be uploading it sometime this week along with an updated gdb7 that has better debugger performance and repairs a bug with too eagerly disabling register display while single-stepping Ion code.

These bugs are not considered showstoppers, but I do acknowledge them and I plan to fix them either for the final release or the next version of 38:

  • I can confirm saved passwords do not appear in the preferences panel. They do work, though, and can be saved, so this is more of an issue with managing them; while it's possible to do so manually it requires some inconvenient screwing around with your profile, so I consider this the highest priority of the non-showstopper bugs.

  • Checkboxes on the dropdown menus from the Console tabs do not appear. This specific manifestation is purely cosmetic because they work normally otherwise, but this may be an indication there is a similar issue with dropdowns and context menus elsewhere, so I do want to fix this as well.

Other miscellaneous changes include some adjustments to HTML5 media streaming and I have decided to reduce the default window and tab undos back to 31's level (6 and 2 respectively) so that the browser still gives up tenured memory a bit more easily. Unfortunately, there is not enough time to get MP3 support fully functional for final release. I plan to get this completed in a future version of 38.x, but it will not be officially supported until then (you can still toggle tenfourfox.mp3.enabled to use the minimp3 driver for those sites it does work with as long as you remember that seeking within a track doesn't work yet).

The localizer elves have French, German, Spanish, Italian, Russian and Finnish installers available. Our Japanese localization appears to have dropped off the web, so if you can help us, o-negai shimasu! Swedish just needs a couple of strings to be finished. We do not yet have Polish or Asturian, which we used to, so if you can help on any of these languages, please visit issue 42 where Chris is coordinating these efforts. A big thank you to all of our localizers!

Once the localizations are all in, the Google Code project will be frozen to prepare for the wiki and issue tracker moving to Github ahead of Google Code going read-only on 24 August. Downloads will remain on SourceForge, but everything else will go to Github, including the source tree when we eventually drop source parity. I was hoping to have an Elcapitanspoof up in time for 38's final release, but we'll see if I have time to do the graphics.

Watch for the next beta to come out by next weekend with any luck, which gives us enough time if there needs to be a third emergency release prior to the final (weekend prior to 11 August).

Finally, I am pleased to note we are now no longer the only PowerPC JavaScript JIT out there, though we are the only one I know of for Mozilla SpiderMonkey. IBM has been working on a port of Google V8 to PowerPC for some time, both AIX and Linux, which recently became an official part of the Google V8 repository (i.e., the PPC port is now officially supported). If you've been looking at nabbing a POWER8 with that money burning a hole in your pocket, it even works with the new Power ISA little endian mode, of which we dare not speak. Since uppsala, Floodgap's main server, is a POWER6 running AIX and should be able to run this, I might give it a spin sometime when I have a few spare cycles. However, before some of the freaks amongst you get excited and think this means Google Chrome on OS X/ppc is just around the corner, there's still an awful lot more work required to get it operational than just the JavaScript engine, and it won't be me that works on it. It does mean, however, that things like node.js will now work on a Power-based server with substantially less fiddling around, and that might be very helpful for those of you who run Power boxes like me.

Planet WebKit: Xabier Rodríguez Calvar: ReadableStream almost ready

Hello dear readers! Long time no see! You might think that I have been lazy, and I was when it comes to blog posting, but I was coding like mad.

The first remarkable thing is that I attended the WebKit Contributors Meeting that happened in March at the Apple campus in Cupertino as part of the Igalia gang. There we discussed, of course, the Streams API, its state and different implementation possibilities. Another very interesting point, one which would make me very happy, would be the move of the Mac port to CMake.

In a previous post I already introduced the concepts of the Streams API and some of its possible use cases, so I’ll save you that part now. The news is that ReadableStream has its basic functionality complete. And what does it mean? It means that you can create a ReadableStream by providing the constructor with the underlying source and the strategy objects, read from it with its reader, and all the internal mechanisms of backpressure and so on will work according to the spec. Yay!

Nevertheless, there’s still quite some work to do to complete the implementation of the Streams API, like the implementation of byte streams, writable and transform streams, piping operations and built-in strategies (which is what I am on right now). I don’t know either when the Streams API will be activated by default in the next builds of Safari, WebKitGTK+ or WebKit for Wayland, but we’ll make it at some point!

The code already went through lots of changes because we were still figuring out which architecture was best, and Youenn did an awesome job refactoring some things and providing support for promises in the bindings to make the implementation of ReadableStream more straightforward and less “custom”.

The implementation could still go through some important changes because, as part of my work implementing the strategies, some reviewers raised concerns about having the Streams API implemented inside WebCore in terms of IDL interfaces. I already have a proof of concept of CountQueuingStrategy and ByteLengthQueuingStrategy implemented inside JavaScriptCore, even a case where we use built-in JavaScript functions, which might help us keep closer to the spec if we can just include JavaScript code directly. We’ll see how we end up!

Last but not least, I would like to thank Igalia for sponsoring me to attend the WebKit Contributors Meeting in Cupertino, and also Adenilson for being so nice and taking us to very nice places for dinner and drinks that we wouldn’t have been able to find ourselves (I owe you, promise to return the favor at the Web Engines Hackfest). It was also really nice to have the opportunity of quickly visiting New York City for some hours because of the long connection there, which usually would be a PITA, but was very enjoyable this time.

Bruce Lawson: Reading List

The reading list – a day early as I’m off to the taverns and love-dungeons of Brussels for the weekend to teach Belgians how to drink beer.

Planet Mozilla: Servo developer tools overview

Servo is a new web browser engine. It is one of the largest Rust-based projects, but the total Rust code is still dwarfed by the size of the code provided in native C and C++ libraries. This post is an overview of how we have structured our development environment in order to integrate the Cargo build system, with its “many small and distributed dependencies” model, with our need to provide many additional features not often found in smaller Rust-only projects.

Mach

Mach is a python driver program that provides a frontend to Servo’s development environment that both reduces the number of steps required and integrates our various tools into a single frontend harness. Similar to its purpose in the Firefox build, we use it to centralize and simplify the number of commands that a developer has to perform.

mach bootstrap

The steps that mach will handle before issuing a normal cargo build command are:

  • Downloading the correct versions of the cargo and rustc tools. Servo uses many unstable features in Rust, most problematically those that change pretty frequently. We also test the edges of feature compatibility and so are the first ones to notice many changes that did not at first seem as if they would break anyone. Further, we build a custom version of the tools that additionally supports cross-compilation targeting Android (and ARM in the near future). A random local install of the Rust toolchain is pretty unlikely to work with Servo.

  • Updating git submodules. Some of Servo’s dependencies cannot be downloaded as Cargo dependencies because they need to be directly referenced in the build process, and Cargo adds a hash that makes it difficult to locate those files. For such code, we add them as submodules.

mach build & run

The build itself also verifies that the user has explicitly requested either a dev or release build — the Servo dev build is debuggable but quite slow, and it’s not clear which build should be the default.

Additionally, there’s the question of which cargo build to run. Servo has three different “toplevel” Cargo.toml files.

  • components/servo/Cargo.toml is used to build an executable binary named servo and is used on Linux and OSX. There are also horrible linker hacks in place that will cause an Android-targeted build to instead produce a file named servo that is actually an APK file that can be loaded onto Android devices.

  • ports/gonk/Cargo.toml produces a binary that can run on the Firefox OS Boot2Gecko mobile platform.

  • ports/cef/Cargo.toml produces a shared library that can be loaded within the Chromium Embedding Framework to provide a hostable web rendering engine.

The presence of these three different toplevel binaries and the curious directory structure means that mach also provides a run command that will execute the correct binary with any provided arguments.

mach test

Servo has several testing tools that can be executed via mach.

  • mach tidy will verify that there are no trivial syntactic errors in source files. It checks for valid license headers in each file, no tab characters, no trailing whitespaces, etc.

  • mach test-ref will run the Servo-specific reference tests. These tests render to images a pair of web pages that implement the same final layout using different CSS features. If the images are not pixel-identical, the test fails.

  • mach test-wpt runs the cross-browser W3C Web Platform Tests, which primarily test DOM features.

  • mach test-css runs the cross-browser CSS WG reference tests, which are a version of the reference tests that are intended to work across many browsers.

  • mach test-unit runs the Rust unit tests embedded in Servo crates. We do not have many of these, except for basic tests of per-crate functionality, as we rely on the WPT and CSS tests for most of our coverage. Philosophically, we prefer to write and upstream a cross-browser test where one does not exist instead of writing a Servo-specific test.

cargo

While the code that we have written for Servo is primarily in Rust, we estimate that at least 2/3 of the code that will run inside of Servo will be written in C/C++, even when we ship. From the SpiderMonkey JavaScript engine to the Skia and Azure/Moz2D graphics pipeline to WebRTC, media extensions, and proprietary video codecs, there is a huge portion of the browser that is integrated and wrapped into Servo, rather than rewritten. For each of these projects, we have a crate that has a build.rs file that performs the custom build steps to produce a static library and then produce a Rust rlib file to link into Servo.

The rest of Servo is a significant amount of code (~150k lines of Rust; ~250k if you include autogenerated DOM bindings), but follows the standard conventions of Cargo and Rust as far as producing crates. For the many crates within the Servo repo, we simply have a Cargo.toml file next to a lib.rs that defines the module structure. When we break them out into a separate GitHub repository, though, we follow the convention of a toplevel Cargo.toml file with a src directory that holds all of the Rust code.

Servo's dependency graph

Updating dependencies

Since there are three toplevel Cargo.toml files, there are correspondingly three Cargo.lock files. This configuration makes the already challenging updates of dependencies even more so. We have added a command, mach update-cargo -p {package} --precise {version}, to handle updates across all three of the lockfiles. While running this command without any arguments does attempt to upgrade all dependencies to the highest SemVer-compatible versions, in practice that operation is unlikely to work, due to a mixture of:

  • git-only dependencies, which do not have a version number

  • Dependencies with different version constraints on a common dependency, resulting in two copies of a library and conflicting types

  • Hidden Rust compiler version dependencies

Things we’d like to fix in the future

It would be great if there was a single Cargo.toml file and it was at the toplevel of the Servo repo. It’s confusing to people familiar with Rust projects, who go looking for a Cargo.toml file and can’t find one.

Cross-compilation to Android with linker hacks feels a bit awkward. We’d like to clean that up, remove the submodule that performs that linker hackery, and have a more clean/consistent feel to our cross-targeted builds.

Managing the dependencies — particularly if there is a cross-repo update like a Rust upgrade — is both a real pain and requires network access in order to clone the dependency that you would like to edit. The proposed cargo clone command would be a huge help here.

IEBlog: Bringing componentization to the web: An overview of Web Components

Editor’s note: This is part one of a two-part series by Microsoft Edge engineers Travis Leithead and Arron Eicholz. Part two will be coming tomorrow, July 15th.

Four of our five most-requested platform features on UserVoice (Shadow DOM, Template, Custom Elements, HTML Imports) belong to the family of features called Web Components. In this post we’ll talk about Web Components and give our viewpoint, some background for those who may not be intimately familiar with them, and speculate a bit about where we might expect them to evolve in the future. To do it justice requires a bit of length, so sit back, grab a coffee (or non-caffeinated beverage) and read on. In part two (coming tomorrow), we’ll address questions about our roadmap and plans for implementation.

Componentization: an old design practice made new again for the web

Web applications are now as complex as any other software applications and often take many people coordinating to produce the released product. It is essential to find the right way to divide up the development work with minimal overlap between people and systems in order to be more efficient. Componentization (in general) is how this is done. Any component system should reduce overall complexity by providing isolation, or a natural barrier that hides the complexity of one system from another. Good isolation also makes reusability and serviceability easier.

Initially, web application complexity was managed mostly on the server by isolating the application into separate pages, each requiring the user to navigate their browser from page to page. With the introduction of AJAX and related capabilities, developers no longer needed to “navigate” between different pages of a web application. For some common scenarios like reading email or news, expectations have changed. For example, after logging into your email, you may be “running the email application” from a single URL and stay on that page all day long (aka Single-Page Applications). Client-side web application logic may be much more complex, perhaps even rivaling that of the server side. A possible solution to help solve this complexity is to further componentize and isolate logic within a single web page or document.

The goal of web components is to reduce complexity by isolating a related group of HTML, CSS, and JavaScript to perform a common function within the context of a single page.

How to componentize?

Because web components must draw together each of HTML, CSS, and JavaScript, the existing isolation models supported by each technology contribute to scenarios that are important to preserve in the whole of web components. These independent isolation models include (and are described in more detail in the following paragraphs):

CSS style isolation

There is no great way to componentize CSS natively in the platform today (though tools like Sass can certainly help). A component model must support a way to isolate some set of CSS from another such that the rules defined in one don’t interfere with the other. Additionally, component styles should apply only to the necessary parts of the component and nothing else. Easier said than done!

Within a style sheet, CSS styles are applied to the document using Selectors. Selectors are always considered with potential applicability to the whole document, thus their reach is essentially global. This global reach leads to real conflicts when many contributors to a project pool their CSS files together. Overlapping and duplicate selectors have an established precedence (e.g., cascade, specificity and source-order) for resolving conflicts, but the emergent behavior may not be what the developers intended. There are many potential solutions to this problem. A simple solution is to move elements and related styles participating in a component from the primary document to a different document (a shadow document) such that they are no longer selector matching candidates. This gives rise to a secondary problem: now that there is a boundary established, how does one style across the boundary? This is obviously possible to do imperatively in JavaScript, but it seems awkward to rely on JavaScript to mediate styles over the boundary for what seems like a gap in CSS.
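
A tiny, hypothetical illustration of that global reach: two contributors independently pick the same class name, and the cascade, not any component boundary, decides which rule wins everywhere.

/* widget-a.css, written by one team */
.title { color: #c00; font-size: 2em; }

/* widget-b.css, written by another team */
.title { color: #06c; font-size: 1.2em; }

/* Both rules match every element with class="title" anywhere in the document;
   specificity and source order, not component boundaries, decide the winner. */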

To transmit styles across a component boundary effectively, and to protect the structure of a component (e.g., allow freedom of structural changes without breaking styles), there are two general approaches that have some consensus: “parts” styling using custom pseudo elements and custom properties (formerly known as CSS “variables”). For a time, the ultra-powerful cross-boundary selector combinator ‘>>>’ was also considered (specified in CSS Scoping), but this is now generally accepted as a bad idea because it breaks component isolation too easily.

Parts styling would allow component authors to create custom pseudo elements for styling, thereby exposing only part of their internal structure to the outside world. This is similar to the model that browsers use to expose the “parts” of their native controls. To complete the scenario, authors would likely need some way to restrict the set of styles that could apply to the pseudo element. Additional exploration into a pseudo-element-based “parts model” could result in a useful styling primitive, though the details would need to be ironed out. Further work in a parts model should also rationalize browser built-in native control styling (an area that desperately needs attention).
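
To give a feel for what a “parts” model looks like, here is how browsers already expose parts of their native controls through vendor-specific pseudo elements; a standardized parts model for components would presumably take a similar shape (this is an analogy, not the proposed syntax):

/* Styling the exposed "thumb" part of a native range control.
   These pseudo elements are vendor-specific, not a standard parts model;
   WebKit typically also requires -webkit-appearance: none for full styling. */
input[type="range"]::-webkit-slider-thumb { background: #0078d7; }
input[type="range"]::-moz-range-thumb { background: #0078d7; }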

Custom properties allow authors to describe the style values they would like to re-use in a stylesheet (defined as custom property names prefixed by a double-dash). Custom properties inherit through the document’s sub-tree, allowing selectors to overwrite the value of a custom property for a given sub-tree without affecting other sub-trees. Custom property names would also be able to inherit across component boundaries providing an elegant styling mechanism for components that avoids revealing the structural nature of the component. Custom properties have been evaluated by various Google component frameworks, and are reported to address most styling needs.
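
A minimal sketch of that mechanism (the element name and property names below are hypothetical): the page sets a custom property on the component’s host element, and a rule inside the component consumes it with var() without revealing any internal structure.

/* Page-level stylesheet: theme the component without knowing its internals. */
x-fancy-button {
  --button-accent: #8e44ad;
}

/* Component-internal stylesheet: custom properties inherit across the
   component boundary, so the internal rule can pick the value up via var();
   the second argument to var() is a fallback. */
.internal-label {
  color: var(--button-accent, #333);
}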

Of all the styling approaches considered so far, a future “parts” model and the current custom properties spec appear to have the most positive momentum. We consider custom properties as a new essential member of the web components family of specifications.

Other CSS Style isolation approaches

By way of completeness, scoping and isolation of CSS is not quite as black-and-white as may be assumed above. In fact, several past and current proposals offer scoping and isolation benefits with varying applicability to web components.

CSS provides some limited forms of Selector isolation for specific scenarios. For example, the @media rule groups a set of selectors together and conditionally applies them when the media conditions are met (such as size/dimension of the viewport, or media type, e.g., printing); the @page rule defines some styles that are only applicable to printing conditions (paged media); the @supports rule collects selectors together to apply only when an implementation supports a specific CSS feature (the new form of CSS feature detection); the proposal for @document groups selectors together to be applied only when the document in which the style sheet is loaded matches the rule.
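
A short illustration of those conditional grouping rules (the conditions and rules are arbitrary examples):

/* Applies only when the viewport is at most 600px wide. */
@media (max-width: 600px) {
  nav { display: none; }
}

/* Applies only in engines that implement flexbox; this doubles as the new
   form of CSS feature detection mentioned above. */
@supports (display: flex) {
  .layout { display: flex; }
}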

The CSS Scoping feature (initially formed as part of the web components work) is a proposal for limiting CSS selector applicability within a single HTML document. It defines a new rule, @scope, which enables a selector to identify scoping root(s) and then causes the evaluation of all selectors contained within the @scope rule to only have subtree-wide applicability to that root (rather than document-wide applicability). The specification allows HTML to declaratively define a scoping root (e.g., the proposed <style scoped> attribute, which only Firefox currently implements; the feature was previously available in Chrome as an opt-in experiment, but has since been removed completely). Aspects of the feature (i.e., :scope, defined in Selectors L4) are also intended to be used for relative-selector evaluations in the DOM spec’s new query API.

It’s important to note that @scope only establishes a one-way isolation boundary: selectors contained within the @scope are constrained to the scope, while any other selectors (outside the @scope) are still free to select inside the @scope at will (though they may be ordered differently by the cascade). This is an unfortunate design because it does not offer scoping/isolation to any of the selectors that are not in the @scope subset: all CSS must still “play nice” in order to avoid accidental styling within another’s @scope rule. See Tab’s @in-shadow-of sketch, which is better aligned with a model for protecting component isolation.
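
A rough sketch of that one-way boundary, assuming the @scope syntax roughly as drafted at the time (the exact grammar was still in flux, so this is illustrative only):

/* Selectors inside the @scope rule only match within .comment-widget... */
@scope .comment-widget {
  p { margin: 0.25em 0; }
}

/* ...but an outside rule is still free to select into .comment-widget,
   which is why the isolation is only one-way. */
.comment-widget p { color: red; }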

Another proposed form of scoping is CSS Containment. Containment scoping is less about Style/Selector isolation and more about “layout” isolation. With the “contain” property, the behavior of certain CSS features that have a natural inheritance (in terms of applicability from parent to child element in the document, e.g., counters) would be blocked. The primary use case is for developers to indicate that certain elements have a strong containment promise, such that the layout applicable to that element and its sub-tree will never effect the layout of another part of the document. These containment promises (enforced by using the ‘contain’ property) allow browsers to optimize layout and rendering such that a “dirty” layout in the contained sub-tree would only require that sub-tree’s layout to be updated, rather than the whole document.
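
A brief sketch of such a containment promise, using the proposed 'contain' property (the exact value set was still under discussion at the time):

/* The author promises that layout inside .feed-item cannot affect layout
   outside of it, so a "dirty" layout here only requires re-laying out this
   subtree rather than the whole document. */
.feed-item {
  contain: layout;
}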

As the implementations of web component technologies across browser vendors mature and get more and more public use, additional styling patterns and problems may arise; we should expect to see further investment and more progress made on various CSS proposals to improve web component styling as a result.

JavaScript and scopes

All JavaScript that gets included into a web page has access to the same shared global object. Like any programming language, JavaScript has scopes that provide a degree of “privacy” for a function’s code. These lexical scopes are used to isolate variables and functions from the rest of the global environment. The JavaScript “module pattern” in vogue today (which uses lexical scopes) evolved out of a need for multiple JavaScript frameworks to “live together” within the single global environment without “stomping” over each other (depending on load-order).

Lexical scopes in JavaScript are a one-way isolation boundary–code within a scope can access both the scope’s contents as well as any ancestor scopes up to the global scope, while code outside of the scope cannot access the scope’s contents. The important principle is that the one-way isolation favors the code inside the scope, protecting it. The code in the lexical scope has the choice to protect/hide its code from the rest of the environment (or not).

The contribution that JavaScript’s lexical scopes lend to web components is the requirement to have a way of “closing” a component off such that its contents can be made reasonably private.

Global object isolation

Some code may not want to share access to the global environment as described above. For example, some JavaScript code may not be trusted by the application developer–yet it provides a crucial value. Ads and ad frameworks are such examples. For security assurance in JavaScript, it is required to run untrusted code in a separate, clean, scripting environment (one with its own unique global object). Developers may also prefer a fresh global object in which to code without concern for other scripts. In order to do that today (without resorting to iframe elements) developers can use a Worker. The downside to Workers is that they do not provide access to elements, and hence UI.

There are a number of considerations when designing a component that supports global object isolation–especially if that isolation will enable a security boundary (more on that just below). Isolated components are not expected to be fully developed until after the initial set of web components specifications are locked down (i.e., “saved for v2”). However, spending some time now to look forward to what isolated components may be like will help inform some of the work going on today. Several proposals have been suggested and are worth looking into.

Global object isolation fills an important missing scenario for web components. In the meantime, we must rely on the most successful and widely-deployed form of componentization on the web today: the iframe element.

Element encapsulation (the iframe)

Iframe elements and their related cousins (object elements, framesets, and the imperative window.open() API) already provide the ability to host an isolated element subtree. Unlike components, which are intended to run inside a single document, iframes join whole HTML documents together, as if two separate web applications were co-located, one inside the other. Each has a unique document URL, global scripting environment, and CSS scope; each document is completely isolated from the other.

iframes are the most successful (and only widely-deployed) form of componentization on the web today. Iframes enable different web applications to collaborate. For example, many websites use iframes as a form of component to enable everything from advertisements to federated identity login. Iframes have a set of challenges and ways in which those challenges have been addressed:

  • JavaScript code within one HTML document may potentially breach into another document’s isolation boundary (e.g., via the iframe’s contentWindow property). This ability to breach the iframe’s isolation boundary may be a required feature, but it is also a security risk when the content in the iframe contains sensitive information not intended for sharing. Today, unwanted breaching is mitigated by the same-origin policy: documents whose URLs are from the same origin are allowed to breach by default, while cross-origin documents have only a limited ability to communicate with each other.
  • Breaching alone is not the only security risk. The sandbox attribute (<iframe sandbox>) provides further restrictions on a cross-origin iframe in order to protect the host from unwanted scripting, popups, navigation, and other capabilities otherwise available in the frame.
  • CSS styles from outside the framed document are unable to apply to the document within. This design preserves the principle of isolation. However, style isolation creates a significant seam in the integration of the iframe when using it as a component (within the same origin). HTML addresses this with the proposed <iframe seamless> attribute for same-origin iframes. The seamless attribute removes style isolation from the framed content; seamless framed documents take a copy of their host document’s styles and are rendered as if free of the confines of their hosting frame element (a short sketch of both attributes follows this list).
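A short sketch of both attributes in use (illustrative only: the URLs are hypothetical, and the seamless attribute was a proposal whose support cannot be assumed):

```js
// Sketch: a host page composing a cross-origin widget with restricted capabilities.
var ad = document.createElement('iframe');
ad.src = 'https://ads.example.com/unit.html';     // hypothetical third-party content
ad.setAttribute('sandbox', 'allow-scripts');      // scripts run, but popups, plugins,
                                                  // and top-level navigation stay blocked
document.body.appendChild(ad);

// Sketch: a same-origin "component" frame opting into the host's styles via the
// proposed seamless attribute.
var comments = document.createElement('iframe');
comments.src = '/widgets/comments.html';
comments.setAttribute('seamless', '');
document.body.appendChild(comments);
```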

With good security policies and the seamless frame feature, using the iframe as a component model appears to be a pretty attractive solution. However, a number of desirable properties of a web component model are lacking:

  • Deep integration. Iframes limit (and in most cases completely prevent) integration and interaction between the host and framed document. For example, relative to the host, focus and selection models are independent, and event propagation is isolated to one document or the other. For components that want to integrate more closely, supporting these behaviors is impossible without an “agent” composed into the host’s document to facilitate bridging the boundary.
  • Proliferation of global objects. For each iframe instance created on a page there will always be a unique global object created as well. Global objects and their associated complete type system are not cheap to create; they can consume a lot of memory and add overhead in the browser. Multiple copies of the same component used on a page may not need to be fully isolated from each other; in fact, sharing one global object is preferable, especially where common shared state is desired.
  • Host content model distribution. Iframes currently do not allow re-use of the host element’s content model within the framed document. (Simply: an element’s content model is its supported sub-tree of elements and text.) For example, a select element has a content model that includes option elements. A select element implemented as a web component would also want to consume those same children in like manner.
  • Selective styling. The seamless iframe does not work for cross-origin documents. There are subtle security risks if this were allowed. The main problem is that “seamless” is controlled by the host, not the framed document (the framed document is more often the victim in related attacks). For a component, the binary “seamless” feature may be too extreme; components may want to be more selective in deciding which styles from the host should be applicable to their content (rather than automatically inheriting all of them, which is how seamless works). In general, the question of what to style should belong to the component.
  • API exposure. Many scenarios for web components involve the creation of full-featured new “custom” elements with their own exposed API surface, rendering semantics, and lifecycle management. Using iframes limits the developer to working with the iframe’s API surface as a baseline, with all the assumptions that brings with it. For example, the identity of the element as well as its lifecycle semantics cannot be altered.

Not the first time

It’s worth noting that several past technologies have been proposed and implemented in an attempt to improve upon HTML’s iframe and related encapsulation features. None of these has survived in any meaningful way on the public web today:

  • HTML Components (1998) was proposed and implemented by Microsoft starting in IE5.5 (obsoleted in IE10). It used a declarative model for attaching events and APIs to a host element (with isolation in mind) and parsed components into a “viewlink” (a “Shadow DOM”). Two flavors of components were possible, one permanently attached to an element, and another dynamically bound via a CSS “behavior” property.
  • XBL (2001) and its successor XBL2 (2007) were proposed by Mozilla as a companion to their XUL user-interface language. A declarative binding language with two binding flavors (similar to Microsoft’s HTML Components), XBL supported the additional features of host content model distribution and content generation.

Today’s web components

After the two previous failures to launch, it was time again for another round of component proposals, this time led by Google. With the concepts described in XBL as a starting point, the monolithic component system was broken up into a collection of component building blocks. These building blocks allowed web developers to experiment with some useful independent features before the whole of the web components vision was fully realized. It is this componentization and the development of independently useful features that have contributed to web components’ success. Nearly everyone can find some part of web components useful for their application!

This new breed of web components aspires to meet a set of defined use-cases and does so by explaining how existing built-in elements work in the web platform today. In theory, web components allow developers to prototype new kinds of HTML elements with the same fidelity and characteristics as native elements (in practice the accessibility behaviors in HTML are especially hard to match at the moment).

It is clear that the full set of technologies necessary to deliver on all the web components use-cases will not be available in browsers at first. Implementors are working together to agree upon the core set of technologies whose details can be implemented consistently before moving on to additional use-cases.

The “first generation” of web components technologies are:

  • Custom elements: Custom elements define an extension point for the HTML parser so that it can recognize a new “custom element” name and automatically provide it with a JavaScript-backed object model. Custom elements do not enable a component boundary, but rather provide the means for the browser to attach an API and behavior to an author-defined element. In browsers without support, Custom Elements can be polyfilled to varying levels of precision using mutation observers/events and prototype adjustment. Getting the timing right and understanding its implications is one of the subjects of our upcoming meeting. (A combined sketch of these first-generation pieces follows this list.)
    • “is” attribute. Hidden inside the Custom Elements spec is another significant feature: the ability to indicate that a built-in element should be given a custom element name and API capabilities. In the normal case, the custom element starts as a generic element; with “is”, a native element can be used instead (e.g., <input is="custom-input">). While this feature is a great way to inherit all the goodness of default rendering, accessibility, parsing, etc., that is built into specific HTML elements, its syntax is regarded by some as a hack, and others wonder whether accessibility primitives plus native control styling are a better long-term standardization route.
  • Shadow DOM: Provides an imperative API for creating a separate tree of elements that can be connected (only once) to a host element. These “shadow” children replace the “real” children when rendering the document. Shadow DOM also provides re-use of the host element’s content model using new slot elements (recently proposed), event target fixup, and closed/open modes of operation (also newly adopted). This relatively trivial idea has a surprisingly huge number of side-effects on everything from focus models and selection to composition and distribution (for shadow DOMs inside of shadow DOMs).
    • CSS Scoping defines various pseudo-elements relevant to shadow DOM styling, including :host, ::content (soon-to-be ::slot??), and the former ‘>>>’ (shadow-DOM-piercing combinator), which is now officially disavowed.
  • The template element: Included for completeness, this early web components feature is now part of the HTML5 recommendation. The template element introduced the concept of inertness (template’s children don’t trigger downloads or respond to user input, etc.) and was the first way to declaratively create a disconnected element subtree in HTML. Template may be used for a variety of things from template-stamping and data-binding to conveying the content of a shadow DOM.
  • HTML Imports: Defines a declarative syntax to “import” (request, download and parse) HTML into a document. Imports (using a link element’s rel="import") execute the imported document’s script in the context of the host page (thus having access to the same global object and state). The HTML, JavaScript, and CSS parts of a web component can be conveniently deployed using a single import.
  • Custom Properties: Described in more detail above, custom properties defined outside a component but available for use inside it are a simple and useful model for styling components today. Given this, we include custom properties as part of the first generation of web components technologies.
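Pulling a few of these pieces together, the sketch below uses the v0-era API shapes that some browsers shipped at the time (document.registerElement, createShadowRoot, and <content> projection); the element name, markup, and custom property are hypothetical, and, as noted above, the exact API surface was still in flux:

```js
// A <contoso-badge> element: custom element + template + shadow DOM + custom property.
var template = document.createElement('template');
template.innerHTML =
  '<style>' +
  '  .badge { color: var(--badge-color, black); }' +   // styled from outside via a custom property
  '</style>' +
  '<span class="badge"><content></content></span>';    // v0-style content projection

var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {                   // v0 lifecycle callback
  var shadow = this.createShadowRoot();                 // v0 shadow DOM API
  shadow.appendChild(document.importNode(template.content, true));
};

document.registerElement('contoso-badge', { prototype: proto });

// Usage in markup (hypothetical):
//   <contoso-badge style="--badge-color: rebeccapurple">New!</contoso-badge>
```

In browsers without native support, a polyfill approximates the same flow with mutation observers/events and prototype adjustment, as described in the Custom Elements item above.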

Web Components: The Next Generation

As noted at the start of this post, building out the functionality for web components is a journey. A number of ideas to expand and fill gaps in the current generation of features have already started to circulate (this is not a complete index):

  • Declarative shadow DOM. A declarative shadow DOM becomes important when considering how to re-convey a component in serialized form. Without a declarative form, techniques like innerHTML or XMLSerializer can’t build a string representation of the DOM that includes any shadow content; shadow DOMs are thus not round-trippable without help from script. Anne from Mozilla suggested a <shadowdom> element as a strawman. Similarly, the template element is already a declarative approach to building a form of shadow markup, and serialization techniques in the browser have already been adjusted to account for this and serialize the template “shadow” content accordingly. (A tiny illustration of the serialization gap follows this list.)
  • Fully Isolated Components. Three browser vendors have made three different proposals in this space. These three proposals are fairly well aligned already, which is good news from a consensus-building standpoint. As mentioned previously, isolated components would use a new global object, and might be imported cross-origin. They would also have a reasonable model for surfacing an API and related behavior across their isolation boundary.
  • Accessibility primitives. Many in the accessibility community like the idea of “is” (from Custom Elements) as a way of extending existing native elements because the native elements carry accessibility behaviors that are not generally available to JavaScript developers. Ideally, a generic web component (without using “is”) could integrate just as closely as native elements in aspects of accessibility, including form submission and focus-ability among others; these extension points are not possible today, but should be explored and defined.
  • Unified native control styling. Lack of consistent control styling between browsers has been an interop problem area preventing simple “theming-oriented” extensions from being widely deployed. This leads developers to often consider shadow DOM as a replacement solution (though mocking up a shadow DOM with the same behavioral fidelity as a native control can be challenging). Some control-styling ideas have been brewing in CSS for some time, though with lack of momentum these ideas are not being developed very quickly.
  • CSS Parts styling. While CSS custom properties can deliver a wide range of styling options for web components today, there are additional scenarios where exposing some of the component’s encapsulated boxes for styling is more direct or appropriate. This is especially useful if it also rationalizes the approach that existing browsers use to expose parts styling of native controls.
  • Parser customization. When used with Custom Elements, only standard open/close tags may be used for custom element parsing. It may be desirable to allow additional customization in the parser for web components to take advantage of.
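As a tiny illustration of the serialization gap mentioned in the first item of this list (using the v0 createShadowRoot API, with support assumed):

```js
// Shadow content does not survive ordinary serialization, which is the problem a
// declarative shadow DOM (or serializer support) would address.
var host = document.createElement('div');
var root = host.createShadowRoot();                 // v0-era API, support assumed
root.innerHTML = '<p>only reachable through the shadow tree</p>';

console.log(host.outerHTML);                        // "<div></div>": the shadow content is lost
```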

Finally, while not officially considered part of web components, the venerable iframe should not be neglected. As discussed, iframes are still a highly useful feature of the web platform for building component-like things. It would be useful to understand and potentially improve the “component” story for iframes. For example, understanding and addressing the unresolved problems with <iframe seamless> seems like a good place to start.

Web Components are a transformative step forward for the web. We are excited to continue to support and contribute to this journey. Tomorrow, we’ll share more about our roadmap and plans for implementation. In the meantime, please share your feedback @MSEdgeDev on Twitter or in the comments below.

Travis Leithead, Program Manager, Microsoft Edge
Arron Eicholz, Program Manager, Microsoft Edge

Planet MozillaCan we kill Adobe Flash?

Kill Adobe Flash

Yesterday the usual tech news outlets were buzzing over an accidental tweet, which the media incorrectly interpreted as Mozilla ditching Flash entirely as a policy (blame The Verge for the chain reaction of copied news articles). While that is not the case, I was just as excited as many at the faux-news. This got me thinking: what would it really take for the web to kill Adobe Flash? Could Mozilla really make such a move and kill Flash on its own if it wanted to?

My thought is that Mozilla could not, because users would be upset at the immediate lack of support for Flash, which is widely used. However, if Mozilla talked to other browser makers, including the Chrome team, Opera, Vivaldi, Safari, etc., and made a coordinated effort to get at least some of the major names to agree on a set date to end their support for Flash, say a year or so out, then I think it would be possible for Adobe Flash to die.

But absent the above happening, a tweet by Alex Stamos, CSO of Facebook, is right, and maybe he even understated it: it is really past time for Adobe to do the right thing and announce an end-of-life date for Adobe Flash in the next year or two. Such an announcement would give websites a year or two to do the major task of removing Flash from millions of sites around the world.

The open web would be a better place without Flash, but it would also be a better place without Java (sorry, Minecraft fans, but that game needs porting to HTML5) and other relics of the early, less open web.
If you agree it is time for an open web free of Flash, then go give Alex Stamos’s tweet a RT and buy him a beer.


Planet MozillaLiving in a Go Faster, post-XUL world

A long time ago, XUL was an extraordinary component of Firefox. It meant that front-end and add-on developers could deliver user interfaces in a single, mostly-declarative, language, and see them adapt automatically to the look and feel of each OS. Ten years later, XUL has become a burden: most of its features have been ported to HTML5, often with slightly different semantics – which makes Gecko needlessly complex – and nobody understands XUL – which makes contributions harder than they should be. So, we have reached a stage at which we basically agree that, in a not-too-distant future, Firefox should not rely upon XUL anymore.

But wait, it’s not the only thing that needs to change. We also want to support piecewise updates for Firefox. We want Firefox to start fast. We want the UI to remain responsive. We want to keep supporting add-ons. Oh, and we want contributors, too. And we don’t want to lose internationalization.

Mmmh… and perhaps we don’t want to restart Firefox from bare Gecko.

All of the above are worthy objectives, but getting them all will require some careful thought.

So I’d like to put together a list of all our requirements, against which we could evaluate potential solutions, re-architectures, etc. for the front-end:

High-level

  • Get rid of the deprecated (XUL) bits of Gecko in a finite time.
  • Don’t break Firefox [1].

User-oriented goals

  • Firefox should start fast.
  • The UI should not suffer from jank.
  • The UI should not cause jank.
  • Look and feel like a native app, even with add-ons.
  • Keep supporting internationalization.
  • Keep supporting lightweight themes.
  • Keep supporting accessibility.

Contributor/dev-oriented goals

  • Use technologies that the world understands.
  • Use technologies that are useful to add-on authors.
  • Support piece-wise, restart-less front-end updates.
  • Provide an add-ons API that won’t break.
  • Code most of the front-end with the add-ons API.

[1] I have heard this claim contested. Some apparently suggest that we should actually break Firefox and base all our XUL-less, Go Faster initiatives on a clean slate from e.g. Browser.html or Servo. If you wish to defend this, please step forward :)

Does this sound like a correct list for all of you?


Steve Faulkner et al#A11Y Slackers

#a11y slackers (a play on the Slack multicolored hash symbol)

A ubiquitous issue for people involved in moving the web forward is that there is always too much to do. Identifying and prioritising tasks is critical. Identifying is fairly easy; prioritising, not so much. Priorities depend on both internal and external factors, often beyond our control.

I have a tendency to be critical of huge, profit making, corporations that are involved in moving the web forward who de-prioritise accessibility [who appear to de-prioritise accessibility]. It’s not always a useful trait, but it’s who I am. I am also aware that many of the people who I interact with, while working for these corporations, are doing their best to make accessibility a priority despite the internal and external factors beyond their control.

A current example is Microsoft. What we know is that Windows 10 will be with us at the end of the month, along with a new, much improved, standards-based browser called Edge. We also know that Microsoft is a huge corporation with a long history of accessibility support in its software, and also a long history of broken, sub-standard accessibility support for HTML in its browser (IE). IE has come a long way in web standards support in the last few years; the same cannot be said for the accessibility branch of web standards.

There is the future promise that accessibility web standards support will be much improved in Edge. But as is often the case, due to corporate prioritisation and the resulting provision of resources and expertise, Windows 10 will arrive with a brand new browser with broken accessibility support, and with advice to continue to use the old IE browser, whose support, while broken, is more robust because assistive technologies have had a lot of time to hack around the brokenness.

It is instructive to understand that when an organization genuinely prioritises the needs of users with disabilities, anything is possible:

[embedded tweet]

Further Reading

Windows 10: Advisories from screen reader developers and final thoughts

Addendum

It was unfair of me to imply that the current Microsoft browser team are #a11y slackers. They are doing all they can under difficult circumstances to bring a modern, robust accessibility implementation to Edge, something that unfortunately Microsoft failed to do in the history of IE.

Planet MozillaNo Flash 0.5 - still fighting the legacy

Last week I released No Flash 0.5, my add-on for Firefox to fix the legacy of video embedding done with Flash. If you are like me and don't have Flash installed, sometimes you encounter embedded videos that don't work. No Flash fixes some of them by replacing the Flash object with an HTML5 video, using the proper HTML5 video embedding.

This version brings the following:

  • Work on more recent Firefox Nightlies with e10s - it was utterly broken
  • Add support for embedded Dailymotion.

It also still supports Vimeo and YouTube, the latter being extremely common.

Update: please file issues in the issue tracker.

Planet MozillaCSS Basic User Interface Module Level 3 Candidate Recommendation Published

The CSS WG has published a second Candidate Recommendation of CSS Basic User Interface Module Level 3. This specification describes user interface related properties and values that are proposed for CSS level 3, incorporating such features from CSS level 2 revision 1, and extending them with both new values and new properties.

Call for Implementations

This notice serves as a call for implementations of all CSS-UI-3 features, new properties, values, and fixes/details of existing features. Implementor feedback is strongly encouraged.

Thorough review is particularly encouraged of the following features new in level 3:

Significant Changes

Significant changes since the previous 2012 LCWD are listed in the Changes section.

This CR has an informative "Considerations for Security and Privacy" section with answers to the "Self-Review Questionnaire: Security and Privacy" document being developed by the W3C TAG.

Feedback

Please send feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-ui-3]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

See Also

Previously

Also syndicated to CSS WG Blog: CSS Basic User Interface Module Level 3 Candidate Recommendation Published.

IEBlogMoving to HTML5 Premium Media

The commercial media industry is undergoing a major transition as content providers move away from proprietary web plug-in based delivery mechanisms (such as Flash or Silverlight), and replace them with unified plug-in free video players that are based on HTML5 specifications and commercial media encoding capabilities.  Browsers are moving away from plug-ins as well, as Chrome is with NPAPI and Microsoft Edge is with ActiveX, and toward more secure extension models.

The transition to plug-in free media has been enabled through the recent development of new specifications, including MPEG-DASH (Dynamic Adaptive Streaming over HTTP), Media Source Extensions (MSE), Encrypted Media Extensions (EME), and Common Encryption (CENC).

These specs were designed and developed to enable interoperable streaming to a variety of media platforms and devices.  By focusing on interoperable solutions, content providers are able to reduce costs and at the same time users are able to access the content they want on the device they prefer using the app or web browser of their choice. Microsoft believes that this is a huge benefit to both content producers and consumers, and is committed to supporting companies that make this transition.

This is a long blog, and we don’t want you to miss a topic that interests you.  Here’s a glimpse of what we’ll cover:

  • Some information on Microsoft Edge and Silverlight
  • An overview of interoperable web media.
  • Some challenges and options to address them:
    • The simplest form of DASH streaming.
    • A website demo that uses a library to play Smooth content.
    • Services from Azure Media Services that can help.
    • A simple method for creating a Universal Windows Platform (UWP) app based on website code.
    • A demo UWP that integrates video playback with Cortana voice commands.

Microsoft Edge and Silverlight

Support for ActiveX has been discontinued in Microsoft Edge, and that includes removing support for Silverlight.  The reasons for this have been discussed in previous blogs and include the emergence of viable and secure media solutions based on HTML5 extensions.  Microsoft continues to support Silverlight, and Silverlight out-of-browser apps can continue to use it.  Silverlight will also continue to be supported in Internet Explorer 11, so sites continue to have Silverlight options in Windows 10.  At the same time, we encourage companies that are using Silverlight for media to begin the transition to DASH/MSE/CENC/EME based designs and to follow a single, DRM-interoperable encoding work flow enabled by CENC.  This represents the most broadly interoperable solution across browsers, platforms, content and devices going forward.

Interoperable Media across Browsers

Plug-ins like Silverlight were intended to support interoperable media by providing versions for every browser they supported.  This became more difficult as devices and platforms that support browsers multiplied.  Now, as the old plug-in models are being removed, replacements for them are needed.  For media, a great forward looking replacement can be based on DASH, MSE, EME and CENC.

Windows 10 and Microsoft Edge support DASH, MSE, EME and CENC natively, and other major browsers ship implementations of MSE and CENC compliant EME. This support allows developers to build plug-in free web video apps that run across a huge range of platforms and devices, with each MSE/EME implementation built on top of a different media pipeline and DRM provider.

DRM Providers Can Differ by Browser (diagram)

In the days when DRM systems used proprietary file formats and encryption methods, this variation in DRM providers by browser would have presented a significant issue.  With the development and use of Common Encryption (CENC), the problem is substantially reduced because the files are compressed in standard formats and encrypted using global industry standards.  The service provider issues the keys and licenses necessary to consume the content in a given browser, but the website code, content and encryption keys are common across all of them, regardless of which DRM is in use. An example of such an implementation is DASH.js, the open source industry reference player used to prove out these technologies and which serves as the basis for many players across the web today.

As shown above, Microsoft’s PlayReady DRM supports two modes of DRM protection:  “Software DRM”, which uses our traditional software protected media path, and “Hardware DRM”, which moves the media path into secured hardware when available on the device.  Hardware DRM was designed to meet the highest requirements for commercial media content protection, and will allow streaming of the highest quality content available.  Not all devices will be Hardware DRM capable, but sites built on MSE/EME can accommodate the difference and stream the best possible content quality supported by the browser or device.

Support from Microsoft

Like any new technology, the transition to DASH/MSE/EME/CENC can present some challenges.  These include:

  • MSE works by allowing a JavaScript client to set one or more sourceBuffer(s) as the source on a media element, and dynamically download and append media segments to the sourceBuffer. This provides sites precise control over the experience, but also requires a larger investment in site development (a minimal sketch of this append flow follows this list).
  • Large existing media libraries have been encoded in formats which are not directly supported by MSE/EME. These libraries must be either supported in some way on the new APIs or re-encoded. Silverlight Smooth Streaming is an example format which was designed and built for sites using Silverlight plug-ins.  A solution that could play this content directly would be useful for any Silverlight replacement technology.
  • MSE/EME are maturing, but still undergoing change that may present interop challenges working with different media formats and across browsers.
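As referenced in the first challenge above, a minimal sketch of the MSE append flow looks roughly like this (the codec string, segment URL, and helper function are hypothetical; manifest parsing and adaptive logic are omitted):

```js
// Minimal Media Source Extensions flow: create a MediaSource, attach it to a
// <video> element, then append downloaded media segments to a SourceBuffer.
var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  // A real player derives the codec string and segment list from the manifest.
  var buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

  fetchSegment('/media/segment-001.m4s', function (bytes) {
    buffer.appendBuffer(bytes);                     // hand the bytes to the media pipeline
  });
});

// Hypothetical helper: download one segment as an ArrayBuffer.
function fetchSegment(url, done) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () { done(xhr.response); };
  xhr.send();
}
```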

As part of helping the industry move to interoperable media delivery solutions, we are investing in technologies to address these challenges.

DASH Type 1:  MSE Made Easy

DASH content usually consists of media files encoded at various quality levels and a manifest that provides information on the files to the media application.  An MSE based player is then responsible for parsing these files, downloading the most appropriate content and feeding it into the media element’s sourceBuffer(s). This is very flexible, but requires either investment in authoring site MSE implementations or use of a reference MSE implementation such as the aforementioned DASH.js.

There is another much easier option:  native DASH streaming, where the site code simply sets the manifest as the media element source, and playback is automatically managed by the browser’s built-in streaming engine.  This approach allows web developers to leverage the learnings and investments made by browser makers and easily provide a premium media experience on their site. We have added support for native DASH streaming to Windows 10 and Microsoft Edge, and more details on this are available in our previous blog: Simplified Adaptive Video Streaming: Announcing support for HLS and DASH in Windows 10.
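In that model the page code can be as simple as the sketch below (the manifest URL is hypothetical, and this relies on the built-in DASH support described above rather than on an MSE player library):

```js
// "DASH Type 1": point the media element directly at the DASH manifest and let
// the browser's built-in streaming engine manage downloading and adaptation.
var video = document.querySelector('video');
video.src = 'https://media.example.com/show/manifest.mpd';   // hypothetical manifest URL
video.play();
```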

A DASH JavaScript Library That Plays Smooth Content

A number of websites have large media libraries encoded in the Smooth Streaming format and are looking to move to an interoperable HTML5-based solution. One possible solution is to use a library that supports their current content through MSE/EME without re-encoding.  Libraries are available now that are capable of playing back Smooth Streaming content using MSE and EME. For example, a version of the “hasplayer.js” library can do just that and is available for download at https://github.com/eiximenis/hasplayer.js.

This hasplayer.js library is based on dash.js, and enables playback of both clear and protected Smooth content using PlayReady on Microsoft Edge. It is a client-side JavaScript library that provides content and manifest translations and is cross-browser compatible. Thanks to the inclusion of EME polyfill support, it may be easily extended to support other browsers’ DRM solutions as well.

Here’s a sample of JavaScript that uses hasplayer.js to retrieve and play a DASH or Smooth media file:


Code sample: https://gist.github.com/kypflug/654574c120e7ad66fa70

That makes it very simple to support streaming Smooth content on a website.  We’ve provided a sample in the Contoso Video Sample GitHub Repository that uses this library to play video.  You can try the webpage yourself at the Contoso Video Demo Website.

Screenshot of the Contoso Video Demo Website

• Microsoft Edge Rendering
• Chakra JavaScript Engine
• HTML/CSS/JS from server

It is possible to provide a client-side translation library for Smooth Streaming content because the Protected Interoperable File Format (PIFF) underlying the Smooth Streaming protocol was a primary input into the ISO Base Media File Format (ISOBMFF) spec used with DASH and because PIFF introduced the multi-DRM protocol which was standardized as ISO MPEG Common Encryption (CENC). Two PIFF formats are in broad use today – PIFF 1.1 and PIFF 1.3 – and the hasplayer.js Smooth Streaming MSE/EME library supports both formats. These formats are supported by on-the-fly conversion from the PIFF format to the Common Media Format (CMF) used with DASH. This ensures that all browser content played back by the library is conformant with DASH CMF and capable of playback in all MSE-capable browsers.

Media Services

Some content owners would prefer to focus on producing great content, not the technical details required to deliver their content to consumers. These companies may wish to leverage hosted media services that prepare content for web delivery, handle the streaming logic and player UI, and manage the DRM license servers.  Azure Media Services offers this capability today, with support for both PlayReady and Widevine DRM systems.  This service provides support for both Video on Demand (VoD) and live streaming.  A single high quality file/stream is provided to Azure, where it undergoes dynamic compression and encryption into CENC protected content that can be streamed to end devices, and a complete player solution is available for developers to simply add to their website. Some details of this service are newly announced in Azure Media Services delivers Widevine encrypted stream by partnering with castLabs.

Hosted Web Apps

One powerful advantage of moving to DASH/MSE/EME/CENC streaming is that the same code running on your website can be packaged as a Universal Windows Platform (UWP) app. UWP applications run on all devices with Windows 10.   A website developer can create both an interoperable website based player and a Windows app that uses the same code, but offers a richer experience using Windows platform capabilities.  Their common code will thus be able to handle UI and media streaming details, AND take advantage of capabilities only available to apps through WinRT APIs.

These hosted web apps:

  • Are offered through the Windows Store
  • Are able to interact with Cortana (“Contoso Video play elephants”)
  • Are able to offer notifications (“NBA Finals streaming starts in 15 minutes”)
  • Have access to advanced adaptive streaming support
  • Have access to enhanced content protection for Full HD and Ultra-HD video playback
  • Are able to light up live tiles
  • And more!

We talked about the power of hosted apps in our talk on Hosted web apps and web platform innovations at the Microsoft Edge Web Summit 2015. You can also find reference documentation on MSDN.

Hosted App Demo

We’ve taken the demo Contoso Video website shown above and packaged it as a demo UWP app that takes advantage of Windows Platform APIs.  This demo shows how simple it is to take the basic video player and integrate Cortana Voice Commands.  The demo also customizes the colors used in the app’s title bar.  All of the JavaScript code is part of the HTML website which is deployed as part of the standard web development workflow.

Screenshot of the Contoso Video website packaged as a UWP app

• Retain Microsoft Edge Rendering
• Retain Chakra JavaScript Engine
• Retain HTML/CSS/JS from server or local
• Add access native Windows APIs – Cortana, notifications, customizations & more…
• Offered in the Windows Store Catalog

Three files are needed to integrate Cortana into your Hosted Web Application (HWA): a Voice Command Definition (VCD) file, a JavaScript file, and an HTML file.

Voice Command Definition (VCD) file

The Voice Command Definition (VCD) File specifies the actions you want to support with voice commands.   The code below informs Cortana of the app name (Contoso Video), that the app supports a “play” command, and how the “playing” state should be represented in the app UI.


Code sample: https://gist.github.com/kypflug/6d1e47aec679fa0b38fe

The JavaScript File

JavaScript must listen for the Windows Activation event and check for VoiceCommands.


Code sample: https://gist.github.com/kypflug/bafa784fb0995437e91b
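The sample linked above is the authoritative listing; as a rough idea of the shape of such a handler (the WinRT names and event-args shape below are assumptions for illustration, not a verified listing):

```js
// Sketch: in a hosted web app, listen for the Windows "activated" event and
// route voice-command activations to the app's own playback logic.
if (typeof Windows !== 'undefined') {
  Windows.UI.WebUI.WebUIApplication.addEventListener('activated', function (args) {
    var ActivationKind = Windows.ApplicationModel.Activation.ActivationKind;
    if (args.kind === ActivationKind.voiceCommand) {   // assumption: kind exposed on args
      handleVoiceCommand(args);                        // hypothetical app-level handler
    }
  });
}
```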

The HTML File

The HTML file must add a meta element pointing to a VCD file on your server.


Code sample: https://gist.github.com/kypflug/5d25e2a9508e238dabe0

With the addition of a VCD, and updates to the website HTML and JavaScript, our Contoso Video website demo can now be packaged as a Universal Windows Platform application that will run across every device that runs Windows 10. Further, users can launch the app to play a video by saying “Contoso, play Tears of Steel”. Cortana will understand the command, launch the Contoso Video app, and start playing the video “Tears of Steel”. The app also has a customized view on the app bar.

Screenshot showing the Contoso Video UWP app being launched via Cortana

Contoso Video in Cortana

Screenshot showing Contoso Video as a search result in the App Menu

Contoso Video in the App Menu

The full source for the Contoso Video website and demo app is available in the Contoso Video Sample GitHub Repository.

Conclusion

DASH/MSE/EME/CENC offer compelling advantages over plug-in based solutions, and we are quickly moving towards an era of broadly interoperable streaming media. Both content providers and consumers will benefit from this transition. While the adoption of these technologies may present short-term challenges, the features and options discussed in this blog are provided to assist companies as they make this change.

We’re eager for your feedback so we can further improve our streaming media offerings, and are looking forward to seeing what you do with the tools and approaches we’ve discussed here!

– David Mebane, Program Manager, Windows Media
– Jerry Smith, Senior Program Manager, Microsoft Edge
– Kevin Hill, Senior Program Manager, Microsoft Edge

Planet MozillaWeb Components, Stories Of Scars

Chris Heilmann has written about Web Components.

If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

Indeed a very good blog post to read. Then Chris went on saying:

Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web.

This is twitching in the back of my mind for the last couple of weeks. And I kind of remember a wicked pattern from 10 years ago. Enter Compound Document Formats (CDF) with its WICD (read wicked) specifications. If you think I'm silly, check the CDF FAQ:

When combining content from arbitrary sources, a number of problems present themselves, including how rendering is handled when crossing from one markup language to another, or how events propagate across the same boundaries, or how to interpret the meaning of a piece of content within an unanticipated context.

and

Simply put, a compound document is a mixture of content in any number of formats. Compound documents range from static (say, XHTML that includes a simple SVG illustration) to very dynamic (a full-fledged Web Application). A compound document may include its parts directly (such as when you include an SVG image in an XHTML file) or by reference (such as when you embed a separate SVG document in XHTML using an <object> element). There are benefits to both, and the application should determine which one you use. For instance, inclusion by reference facilitates reuse and eases maintenance of a large number of resources. Direct inclusion can improve portability or offline use. W3C will support both modes, called CDR ("compound documents by reference") and CDI ("compound documents by inclusion").

At that time, the Web and W3C were full throttle on XML and namespaces. Now, the cool kids on the block are full HTML, JSON, polymers and JS frameworks. But if you look carefully and remove the syntax and architecture parts, the narrative is the same. And with the narratives of the battle and its scars, Web Components sound very similar to the Compound Document Format.

Still by Chris

When it comes to componentising the web, the rabbit hole is deep and also a maze.

Note that not everything was lost from WICD. It helped develop a couple of things, and reimagine the platform. Stay tuned; I think we will have surprises on this story. Not over yet.

Modularity already has a couple of scars when it comes to large-scale distribution. Remember OpenDoc and OLE? I still remember using Cyberdog. Fun times.

Otsukare!

Planet MozillaOver the Edge: Web Components are an endangered species

Last week I ran the panel and the web components/modules breakout session of the excellent Edge Conference in London, England and I think I did quite a terrible job. The reason was that the topic is too large and too fragmented and broken to be taken on as a bundle.

If you want to see the mess that is the standardisation effort around web components right now in all its ugliness, Wilson Page wrote a great post on that on Mozilla Hacks. Make sure to also read the comments – lots of good stuff there.

Web Components are a great idea. Modules are a great idea. Together, they bring us hours and hours of fun debating where what should be done to create a well-performing, easy to maintain and all around extensible complex app for the web. Along the way we can throw around lots of tools and ideas like NPM and ES6 imports or – as Alex Russell said it on the panel: “tooling will save you”.

It does. But that was always the case. When browsers didn’t support CSS, we had Dreamweaver to create horribly nested tables that achieved the same effect. There is always a way to make browsers do what we want them to do. In the past, we did a lot of convoluted things client-side with libraries. With the advent of node and others we now have even more environments to innovate and release “not for production ready” impressive and clever solutions.

When it comes to componentising the web, the rabbit hole is deep and also a maze. Many developers don’t have time to even start digging, so they use libraries like Polymer or React instead, call it a day, and call that the “de facto standard” (a term that makes my toenails crawl up – layout tables were a “de facto standard”, so was Flash video).

React did a genius thing: by virtualising the DOM, it avoided a lot of the problems with browsers. But it also means that you forfeit all the good things the DOM gives you in terms of accessibility and semantics/declarative code. It simply is easier to write a <super-button> than to create a fragment for it or write it in JavaScript.

Of course, either are easy for us clever and amazing developers, but the fact is that the web is not for developers. It is a publishing platform, and we are moving away from that concept at a ridiculous pace.

And whilst React gives us all the goodness of Web Components now, it is also a library by a commercial company. That it is open source, doesn’t make much of a difference. YUI showed that a truckload of innovation can go into “maintenance mode” very quickly when a company’s direction changes. I have high hopes for React, but I am also worried about dependencies on a single company.

Let’s rewind and talk about Web Components

Let’s do away with modules and imports for now, as I think this is a totally different discussion.

I always loved the idea of Web Components – allowing me to write widgets in the browser that work with it rather than against it is an incredible idea. Years of widget frameworks trying to get the correct performance out of a browser whilst empowering maintainers would come to a fruitful climax. Yes, please, give me a way to write my own controls, inherit from existing ones and share my independent components with other developers.

However, in four years, we haven’t got much to show. When we asked the very captive and elite audience of EdgeConf about Web Components, nobody raised their hand to say they are using them in real products. People either used React or Polymer, as there is still no way to use Web Components in production otherwise. When we tried to find examples in the wild, the meager harvest was GitHub’s time element. I do hope that this was not all we wrote and that many a company is ready to go with Web Components. But most discussions I had ended up the same way: people are interested, tried them out once, and had to bail out because of lack of browser support.

Web Components are a chicken and egg problem where we are currently trying to define the chicken and have many a different idea what an egg could be. Meanwhile, people go to chicken-meat based fast food places to get quick results. And others increasingly mention that we should hide the chicken and just give people the eggs leaving the chicken farming to those who also know how to build a hen-house. OK, I might have taken that metaphor a bit far.

We all agreed that XHTML2 sucked, was overly complicated, and defined without the input of web developers. I get the weird feeling that Web Components and modules are going in the same direction.

In 2012 I wrote a longer post as an immediate response to Google’s big announcement of the foundation of the web platform following Alex Russell’s presentation at Fronteers 11 showing off what Web Components could do. In it I kind of lamented the lack of clean web code and the focus on developer convenience over clarity. Last year, I listed a few dangers of web components. Today, I am not too proud to admit that I lost sight of what is going on. And I am not alone. As Wilson’s post on Mozilla Hacks shows, the current state is messy to say the least.

We need to enable web developers to use “vanilla” web components

What we need is a base to start from: in the browser, and in a browser that users have and that doesn’t ask them to turn on a flag. Without that, Web Components are doomed to become a “too complex” standard that nobody implements natively and that everybody instead consumes through libraries.

During the breakout session, one of the interesting proposals was to turn Bootstrap components into web components and start with that. Tread the cowpath of what people use and make it available to see how it performs.

Of course, this is a big gamble and it means consensus across browser makers. But we had that with HTML5. Maybe there is a chance for harmony amongst competitors for the sake of an extensible and modularised web that is not dependent on ES6 availability across browsers. We’re probably better off with implementing one sci-fi idea at a time.

I wish I could be more excited or positive about this. But it left me with a sour taste in my mouth to see that EdgeConf, that hot-house of web innovation and think-tank of many very intelligent people, was as confused as I was.

I’d love to see a “let’s turn it on and see what happens” instead of “but, wait, this could happen”. Of course, it isn’t that simple – and the Mozilla Hacks post explains this well – but a boy can dream, right? Remember when using HTML5 video was just a dream?
