Bruce Lawson: Reading List

Anne van Kesteren: DOM: custom elements

Now that JavaScript classes and subclassing are finally maturing, there is renewed interest in custom elements. The idea behind custom elements is to give developers lifecycle hooks for elements and to enable their custom element classes to be instantiated through markup. There is also the overarching goal of being able to explain the platform, though, as html-as-custom-elements demonstrates, this is extremely hard.

The first iteration of custom elements was based on mutating the prototype of a custom element object, followed by a callback that gives developers the ability to further mutate the object as needed. Google has shipped this in Chrome, but other browsers have been reluctant to follow. I created a CustomElements wiki page that summarizes where we are at with the second iteration, which will likely be incompatible with what is out there today. There are a couple of outstanding disputes, but the main one is how exactly a custom element object is to be instantiated from markup (referred to as “Upgrading”).
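For context, a minimal sketch of the class-based shape these iterations were converging on. The callback names and the `customElements.define` registry match what browsers eventually shipped, but were still in flux at the time; the `FancyButton` element here is purely illustrative, and the `HTMLElement` stub exists only so the sketch runs outside a browser:

```javascript
// Outside a browser there is no HTMLElement; stub it so the sketch runs.
const HTMLElement = globalThis.HTMLElement ?? class {};

// A custom element class: lifecycle hooks are plain methods on the class.
class FancyButton extends HTMLElement {
  constructor() {
    super();
    this.clicks = 0;
  }
  connectedCallback() {
    // called when the element is inserted into the document
  }
  disconnectedCallback() {
    // called when the element is removed from the document
  }
}

// In a browser you would register the class so that <fancy-button>
// in markup instantiates it:
//   customElements.define('fancy-button', FancyButton);

console.log(typeof new FancyButton().connectedCallback); // "function"
```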

If you are interested in participating, most of the discussion is happening on public-webapps@w3.org. There is also some on IRC.

W3C Team blog: This week: W3C WoT initiative, Accessibility Research, Cory Doctorow Rejoins EFF, etc.

This is the 16-23 January 2015 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Other news

W3C in the Press (or blogs)

4 articles since the 16-Jan digest; a selection follows. You may read all articles in our Press Clippings page.

IEBlog: Project Spartan and the Windows 10 January Preview Build

Yesterday, we announced that Windows 10 will ship with a brand new browser, codenamed “Project Spartan.” Designed for Windows 10, Spartan provides a more interoperable, reliable, and discoverable experience with advanced features including the ability to annotate on web pages, a distraction-free reading experience, and integration of Cortana for finding and doing things online faster.

Project Spartan on Windows 10 desktop

Spartan is a single browser designed to work great across the entire Windows 10 device family - from keyboard and mouse on the Windows 10 desktop to touch, gestures, voice, controllers and sensors.

Project Spartan on Windows 10 phone with dark theme / Project Spartan on Windows 10 phone with light theme

Powered by a new rendering engine, Spartan is designed for interoperability with the modern web. We’ve deliberately moved away from the versioned document modes historically used in Internet Explorer, and now use the same markup as other modern browsers. Spartan’s new rendering engine is designed to work with the way the web is written today.

Like Windows 10 itself, Spartan will be kept up to date as a service, both providing new platform capabilities, security and performance improvements, and ensuring web developers a consistent platform across Windows 10 devices. Spartan and the new rendering engine are truly evergreen.

Spartan provides compatibility with the millions of existing enterprise web sites designed for Internet Explorer. To achieve this, Spartan loads the IE11 engine for legacy enterprise web sites when needed, while using the new rendering engine for modern web sites. This approach provides both a strong compatibility guarantee for legacy enterprise web sites and a forward looking interoperable web standards promise.

We recognize some enterprises have legacy web sites that use older technologies designed only for Internet Explorer, such as custom ActiveX controls and Browser Helper Objects. For these users, Internet Explorer will also be available on Windows 10. Internet Explorer will use the same dual rendering engines as Spartan, ensuring web developers can consistently target the latest web standards.

Dual rendering engine architecture animation

What does this mean to web developers?

If you are building a public consumer-facing web site here’s what you need to know:

  1. Our new rendering engine will be the default engine for Windows 10, Spartan, and Internet Explorer. This engine has interoperability at its core and consumes the same markup you send other modern browsers. Our standards support and roadmap can be found at http://status.modern.ie.
  2. Public Internet web sites will be rendered using the new engine and modern standards, and legacy Internet Explorer behaviors including document modes are not supported in the new engine. If your web site depends on legacy Internet Explorer behaviors, we encourage you to update to modern standards.
  3. Our goal is interoperability with the modern web and we need your help! You can test the new engine via the Windows Insider Program or using http://remote.modern.ie. Please let us know (via Connect or Twitter) when you find interoperability problems so we can work with the W3C and other browser manufacturers to ensure great interoperability.

New features and fixes in the January Insider Update

On Friday, we’re also rolling out a new preview build to Windows 10 Insiders. This new preview will also be available on RemoteIE soon. This build doesn’t have Project Spartan yet, but does have lots of updates to the new web rendering engine that Spartan will use. We started testing our new rendering engine by rolling it out to a portion of Insiders using the Windows Technical Preview in November.

Since that time, we’ve received over 12,000 feedback reports through the smiley face icon alone. This new build has over 2000 changes to the new platform, largely influenced by that feedback. In addition to many fixes, there are also several new platform features we are thrilled to be releasing in the updated preview:

Additionally, you’ll find updated F12 developer tools that include the updated UI we shipped to IE11 users last month, as well as several new features and improvements. Here are a few of our favorites:

  • New and Improved Network Tool—capture and debug network traffic with new UX and capabilities, such as auto-start, a content type filter, and error highlighting.
  • HTML & CSS Pretty Printing—just as you’ve been able to nicely reformat minified JavaScript in the debugger, you’ll now be able to do this for HTML and CSS.
  • Async Callstacks for Events and Timers—quickly view the “async callstack” to connect the dots between event dispatch and the original addEventListener call or between setting a timer and the timer being fired.
  • Sourcemaps for Styles and in the Memory Profiler—jump to your original sources, such as TypeScript or SASS, directly from the Styles pane or Memory Profiler tools.
  • Find Reference and Go To Definition—jump directly to a function call’s definition or find the references to a given variable.

New F12 network tools

With these improvements, we’re increasing the number of Insiders that get the new engine as we work towards this as the default for all users. If you’re curious and want to opt-in now, remember to navigate to about:flags and set “Enable Experimental Web Platform Features” to Enabled.

We’re excited to share our continued progress with you and to introduce Project Spartan to the Microsoft family. Please continue to share your feedback via Twitter, UserVoice (feature requests) and Connect (bug reports) and help shape our next browser. We’ll also be holding our next Twitter #AskIE session on Tuesday, January 27th from 10AM-12PM PST so you can ask questions to the team. See you there!

— Jason Weber, Group Program Manager, Internet Explorer

Planet Mozilla: Browsers, Services and the OS – oh my…

Yesterday’s two-hour Windows 10 briefing by Microsoft had some very interesting things in it (The Verge did a great job live-blogging it). I was waiting for lots of information about the new browser, code name Spartan, but most of it was about Windows 10 itself. This is, of course, understandable and shows that I maybe care about browsers too much. There was interesting information about Windows 10 being a free upgrade, Cortana integration on all platforms, and streaming games from Xbox to Windows and vice versa. The big wow factor at the end of the briefing was HoloLens, which makes interactivity like Iron Man had in his lab not that far-fetched any longer.

hololens working

For me, however, the whole thing was a bit of an epiphany about browsers. I’ve always seen browsers as my main playground and got frustrated by lack of standards support across them. I got annoyed by users not upgrading to new ones or companies making that hard. And I was disappointed by developers having their pet browsers to support and demand people to use the same. What I missed out on was how amazing browsers themselves have become as tools for end users.

For end users the browser is just another app. The web is no longer the thing alongside your computing interaction; it is just a part of it. Just because I spend most of my day in the browser doesn’t make it the most important thing. In essence, the interaction of the web with the hardware you have is the really interesting part.

A lot of innovation I have seen over the years that was controversial at that time or even highly improbable is now in phones and computers we use every day. And we don’t really appreciate it. Google Now, Siri and now Microsoft’s Cortana integration into the whole system is amazingly useful. Yes, it is also a bit creepy and there should be more granular insight into what gets crawled and what isn’t. But all in all isn’t it incredible that computers tell us about upcoming flights, traffic problems and remind us about things we didn’t even explicitly set as a reminder?

Spartan demo screenshot by The Verge

The short, 8 minute Spartan demo in the briefing showed some incredible functionality:

  • You can annotate a web page with a stylus or mouse, or add comments to any part of the text
  • You can then collect these, share them with friends or view them offline later
  • Reading mode turns the web into a one-column, easy-to-read version. Safari and mobile browsers like Firefox Mobile have this, and third-party services like Readability did it before.
  • Firefox’s awesome bar and Chrome’s Google Now integration are also in Windows, with Cortana being available anywhere in the browser.

Frankly, not all of that is new, but I have never used these features. I was too bogged down in what browsers cannot do instead of checking what is already possible for normal users.

I’ve mentioned this a few times in talks lately: a lot of the innovation of add-ons, apps and products is merging with our platforms. Where in the past it was a sensible idea to build a weather app and expect people to go there or even pay for it, we get this kind of functionality with our platforms. This is great for end users, but it means we have to be up to speed what user interfaces of the platforms look like these days instead of assuming we need to invent all the time.

Looking at this functionality made me remember a lot of things promised in the past but never really used (at least by me or my surroundings):

  • Back in 2001, Microsoft introduced Smart Tags, which caused quite a stir in the writing community as it allowed third-party commenting on your web content without notifying you. Many a web site added the MSSmartTagsPreventParsing meta tag to disallow this. The annotation feature of Spartan now is this on steroids. Thirdvoice (wayback machine archive) was a browser add-on that did the same, but got creepy very quickly by offering you things to buy. Weirdly enough, Awesome Screenshot, an annotation plug-in, now also gets very creepy by offering you price comparisons for your online shopping. This shows that functionality like this doesn’t seem to be viable as a stand-alone business model, but very much makes sense as a feature of the platform.
  • Back in 2006, Ray Ozzie of Microsoft at eTech introduced the idea of the Live Clipboard. It was this:
    [Live Clipboard…] allows the copy and pasting of data, including dynamic, updating data, across and between web applications and desktop applications.
    The big thing about this was that it would have been an industrial size use case for Microformats and could have given that idea the boost it needed. However, despite me pestering Chris Wilson of – then – Microsoft at @media AJAX 2006 about it, this never took off. Until now, it seems – except that the clippings aren’t live.
  • When I worked at Yahoo, BrowserPlus came out of a hack day: an extension to browsers that allowed easier file uploads and drag and drop between browser and OS. It also gave you desktop notifications. One of the use cases shown at the hack day was to drag and drop products from several online stores and then check out in one step with all of them. This, still, is not possible. I’d wager that legal problems and tax reasons are the main blocker there. Drag and drop, uploads, and desktop notifications are now reality without add-ons. So we’re getting there.

This year will be very exciting. Not only do HTML5 and JavaScript get new features all the time; browsers also seem to be becoming much, much smoother at integrating into our daily lives. This spells doom for a lot of apps. Why use an app when the functionality is already available with a simple click or voice command?

Of course, there are still many issues to fix, mainly offline and slow-connection use cases. Privacy and security are another problem. Convenient as it is, there should be some way to know what is listening in on me right now and where the data goes. But I, for one, am very interested in the current integration of services into the browser and the browser into the OS.

Bruce Lawson: Why we can’t do real responsive images with CSS or JavaScript

I’m writing a talk on <picture>, srcset and friends for Awwwards Conference in Barcelona next month (yes, I know this is unparalleled early preparation; I’m heading for the sunshine for 2 weeks soon). I decided that, before I get on to the main subject, I should address the question “why all this complex new markup? Why not just use CSS or JavaScript?” because it’s invariably asked.

But you might not be able to see me in Catalonia to find out, because tickets are nearly sold out. So here’s the answer.

All browsers have what’s called a preloader. As the browser is munching through the HTML – before it’s even started to construct a DOM – the preloader sees “<img>” and rushes off to fetch the resource before it’s even thought about speculating about considering doing anything about the CSS or JavaScript.

It does this to get images as fast as it can – after all, they can often be pretty big and are one of the things that boosts the perceived performance of a page dramatically. Steve Souders, head honcho of Velocity Conference, bloke who knows loads about site speed, and renowned poet called the preloader “the single biggest performance improvement browsers have ever made” in his sonnet “Shall I compare thee to a summer’s preloader, bae?”

So, by the time the browser gets around to dealing with CSS or script, it may very well have already grabbed an image – or at least downloaded a fair bit. If you try

<img id=thingy src=picture.png alt="a mankini">
…
@media all and (max-width:600px) {
 #thingy {content: url(medium-res.png);}
 }

@media all and (max-width:320px) {
 #thingy {content: url(low-res.png);}
 }

you’ll find the correct image is selected by the media query (assuming your browser supports content on simple selectors without :before or :after pseudo-elements), but you’ll also find that the preloader has downloaded the resource pointed to by the <img src> and then the one that the CSS replaces it with is downloaded too. So you get a double download, which is not what you want at all.

Alternatively, you could have an <img> with no src attribute, and then add one in with JavaScript – but then you’re not fetching the resource until much later, delaying the loading of the page. Because your browser won’t know the width and height of the image that the JS will select, it can’t leave room for it when laying out the page, so you may find that your page gets reflowed and, if the user was reading some textual content, she might find the stuff she’s reading scrolls off the page.

So the only way to beat the preloader is to put all the potential image sources in the HTML and give the browser all the information it needs to make the selection there, too. That’s what the w and x descriptors in srcset are for, and the sizes attribute.
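A rough sketch of the kind of choice the preloader can make from markup alone once the candidates are declared with w descriptors: parse each "url 320w" pair, scale the layout slot width by the device pixel ratio, and pick the smallest candidate that covers it. The `pickSource` helper is hypothetical and deliberately much simpler than the spec's actual selection algorithm:

```javascript
// Hypothetical, simplified source selection over srcset w descriptors.
// Real browsers follow the HTML spec's algorithm; this only shows the idea.
function pickSource(srcset, slotWidthPx, dpr = 1) {
  // Parse "low.png 320w, high.png 1200w" into {url, width} candidates.
  const candidates = srcset.split(',').map(entry => {
    const [url, descriptor] = entry.trim().split(/\s+/);
    return { url, width: parseInt(descriptor, 10) };
  });
  // Pixels actually needed to fill the slot on this screen.
  const target = slotWidthPx * dpr;
  // Smallest candidate that still covers the slot, else the largest one.
  candidates.sort((a, b) => a.width - b.width);
  const fit = candidates.find(c => c.width >= target);
  return (fit ?? candidates[candidates.length - 1]).url;
}

console.log(pickSource('low.png 320w, medium.png 600w, high.png 1200w', 320));
// "low.png"
console.log(pickSource('low.png 320w, medium.png 600w, high.png 1200w', 300, 2));
// "medium.png"
```

Because everything the function needs is in the markup, the preloader can run this logic before any CSS or JavaScript has been fetched.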

Of course, I’ll explain it with far more panache and mohawk in Barcelona. So why not come along? Go on, you know you want to and I really want to see you again. Because I love you.

Planet Mozilla: Firefox 36 in beta

Firefox 36 (Desktop and Mobile) is now available on the beta channel.

The release notes are published on the Mozilla website:

This version introduces many new HTML5/CSS features, in particular the Media Source Extensions (MSE) API, which allows native HTML5 playback on YouTube. The new preferences implementation is also enabled for the first half of the beta cycle; please help us test this new feature!

On the mobile version of Firefox, we are also shipping the new Tablet user interface!

Download this new version:

And as usual, please report any issues.

W3C Team blog: This week: W3C TAG election, HTML5 Japanese CG, W3C in figures (2014), etc.

This is the 9-16 January 2015 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

  • n/a

W3C in the Press (or blogs)

4 articles since the 9-Jan digest; see one below. You may read all articles in our Press Clippings page.

Planet Mozilla: Video Subtitles and Localization

Let’s talk about localization and subtitles – not captions. From Wikipedia:

“Subtitles” assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they only transcribe dialogue and some on-screen text. “Captions” aim to describe to the deaf and hard of hearing all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking – along with any significant music or sound effects using words or symbols.

So far I worked on two projects that involved subtitles – Web We Want and Firefox: Choose Independent – and this is what I learned in the process.

The Process

Step 1: Provide Source Content

You need the video (obviously) and English subtitles with the correct timing (less obvious). This means that the picture might not be final (sometimes this is called “picture lock” quality), but the audio track and its timing must be.

Step 2: How Do I Localize Subtitles?

Currently the most common format for subtitles is SubRip, and it’s really simple: sequential number of the subtitle, timing (start-end), text.

For example this is the beginning of the Web We Want .srt file:

1
00:00:00,000 --> 00:00:00,864
THE WEB WE WANT

2
00:00:00,864 --> 00:00:03,388
THE WEB WE WANT
an open letter

3
00:00:05,503 --> 00:00:07,406
I am not a data point

4
00:00:07,642 --> 00:00:09,503
to be bought and sold.

Amara is a great tool to localize subtitles: you get an interactive editor and timeline, and you can adapt the timing of each sentence to your needs while watching the video. You can also choose to automatically sync subtitles between Amara and YouTube.
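The format really is simple enough to parse in a few lines. A minimal sketch, with no error handling and assuming well-formed input (blocks separated by a blank line; each block is index, "start --> end" timing, then text):

```javascript
// Minimal SubRip (.srt) parser sketch: split on blank lines,
// then pull index, timing and text out of each block.
function parseSrt(srt) {
  return srt.trim().split(/\n\s*\n/).map(block => {
    const [index, timing, ...text] = block.split('\n');
    const [start, end] = timing.split(' --> ');
    return { index: Number(index), start, end, text: text.join('\n') };
  });
}

const cues = parseSrt(`1
00:00:00,000 --> 00:00:00,864
THE WEB WE WANT

2
00:00:00,864 --> 00:00:03,388
THE WEB WE WANT
an open letter`);

console.log(cues.length);   // 2
console.log(cues[1].start); // "00:00:00,864"
console.log(cues[1].text);  // "THE WEB WE WANT\nan open letter"
```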

Step 3: Host the Video on YouTube

At this point subtitles are available, someone just needs to load them on YouTube and display the video on a web page.

Sounds simple, doesn’t it? What could go wrong?

Potential Localization Issues

The first issue is timing: if the English text requires 2 seconds to be read, that might not be enough to read the same sentence in Spanish, German or other verbose languages. That’s something you need to keep in mind from day one while producing the video and audio track.

Split sentences: sometimes a sentence is split into multiple parts because of its length. Unfortunately the sentence structure might be completely different in the target language, and some details will get lost in translation. Consider for example the final frame of the Choose Independent video:

Choose Independent frame

«It’s how we keep our independence online…» (pause, switch to burning fox) «…burning bright»

This works nicely in English since “burning” is perfectly synced with the picture, but most locales won’t be able to obtain the same result. Small bits, but still reducing the impact of the message.

In general, watching a subtitled video is a sub-optimal experience. If you plan to have content in video format, don’t focus your communication exclusively on that.

Why YouTube Is Not Great

For the Firefox Independent page we force YouTube to load subtitles with cc_load_policy.

A few hours after the launch, I started receiving complaints from localizers saying that YouTube was loading subtitles with the wrong localization for some languages: for example nl for fy-NL (Dutch instead of Frisian), de for rm (German instead of Romansh), en-US for cy (English instead of Welsh).

This is how YouTube’s embed works:

  • If you’re logged in to Google, you’ll get the locale you chose for YouTube. For example I get English subtitles, even if I’m using a browser in Italian (a quick test is to watch the video in private mode).
  • If you’re not logged in, you’ll get the first good locale based on the Accept-Language header sent by your browser. For example, for ‘dsb’ (Lower Sorbian) this equals “dsb, hsb, de, en-US, en”, and German is the first available language on the list.

For some unknown reason YouTube wasn’t finding a match in Accept-Language and was falling back to a different language for those locales, even though the subtitles were localized and loaded. My only guess would be a mismatch in the locale format understood by YouTube, like fy_NL vs fy-NL/fy.
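The behaviour described above can be sketched as a first-match walk over the Accept-Language list. The `negotiate` helper below is hypothetical (YouTube's real logic is unknown), but it reproduces both the dsb-to-German case and how a fy_NL/fy-NL code mismatch produces the wrong fallback:

```javascript
// Hypothetical first-match locale negotiation over Accept-Language.
function negotiate(acceptLanguage, available) {
  const wanted = acceptLanguage.split(',').map(l => l.trim().toLowerCase());
  const have = new Set(available.map(l => l.toLowerCase()));
  // First requested locale that we actually have, else a hard default.
  return wanted.find(l => have.has(l)) ?? 'en-US';
}

// Firefox in Lower Sorbian sends "dsb, hsb, de, en-US, en"; with no
// dsb subtitles available, German is the first match.
console.log(negotiate('dsb, hsb, de, en-US, en', ['de', 'en-US'])); // "de"

// If the service stored the track as "fy_NL" while the browser asks
// for "fy-NL", nothing matches and you fall all the way back to English.
console.log(negotiate('fy-NL, fy, nl, en-US', ['fy_NL'])); // "en-US"
```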

Then I found out that we can send a parameter called ‘cc_lang_pref’ to force the language (not exactly well documented), and that fixes cy, fy-NL and rm. For example, if you open https://www.mozilla.org/it/firefox/independent you’ll get the Italian subtitles even if you’re using a browser in a different language. Since we have good locale detection on top of mozilla.org, this makes sense and lets us make the most of our localization teams’ work.
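As a sketch, forcing the subtitle language when building the embed might look like this. The `embedUrl` helper is hypothetical; `cc_load_policy` (show captions by default) and `cc_lang_pref` (preferred caption language) are real YouTube player parameters:

```javascript
// Build a YouTube embed URL that pins the caption language to the
// page locale and forces captions on.
function embedUrl(videoId, pageLocale) {
  const params = new URLSearchParams({
    cc_load_policy: '1',      // show captions by default
    cc_lang_pref: pageLocale  // preferred caption language
  });
  return `https://www.youtube.com/embed/${videoId}?${params}`;
}

console.log(embedUrl('abc123', 'it'));
// "https://www.youtube.com/embed/abc123?cc_load_policy=1&cc_lang_pref=it"
```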

But then YouTube stops being smart: if the requested locale is missing, it doesn’t fall back via Accept-Language but just falls back to English. So Welsh (cy) is now getting subtitles in the expected language, but Lower Sorbian (dsb) is getting English instead of German and users can only switch language manually. Far from great.

Talking about YouTube, I find a claim like “we support 163 languages” quite silly when the list isn’t provided anywhere. I have at least 5 locale codes that are not supported or recognized (ast, dsb, es-AR, es-CL, hsb).

How Do We Fix It?

I think the solution will be to move away from YouTube and its limitations, use native or alternative video players (example), use subtitles in the VTT format and take full control of the entire chain. But these are discussions and experiments that still need to start.

Planet Mozilla: A time of change…

“The suspense is killing me,” said Arthur testily.
Stress and nervous tension are now serious social problems in all parts of the Galaxy, and it is in order that this situation should not in any way be exacerbated that the following facts will now be revealed in advance.
Hitchhiker’s Guide to the Galaxy

never do anything halfway

I am not returning to Mozilla in February but will go on to bring the great message of an open web somewhere else. Where, I do not know yet. I am open to offers and I am interested in quite a few things happening right now. I want something new, with a different audience. A challenge to open and share systems and to help communication where the current modus operandi is to be secretive. I want to lead a team and have a clear career path for people to follow. If you have a good challenge for me, send me some information about it.

I love everything Mozilla has done and what it stands for. I also will continue being a Mozillian. I will keep in touch with the great community and contribute to MDN and other open resources.

Of course there are many reasons for this decision, none of which need to go here. Suffice to say, I think I have done in Mozilla what I set out to do and it now needs other people to fulfil the new challenges the company faces.

I came to Mozilla with the plan to make us the “Switzerland of HTML5”, or the calming negotiator and standards implementer in the browser wars raging at that time. I also wanted to build an evangelism team and support the community in outreach on a basis of shared information and trust. I am proud of having coached a lot of people in the Mozilla community. It was very rewarding seeing them grow and share their excitement. It was great to be a spokesperson for a misfit company. A company that doesn’t worry about turning over some apple-carts if the end result means more freedom for everyone. It was an incredibly interesting challenge to work with the press in a company that has many voices and not one single communication channel. It was also great to help a crazy idea like an HTML5-based mobile operating system come to fruition and become a player people take seriously.

Returning to Mozilla, I’d have to start from scratch with that. Maybe it is time for Mozilla not to have a dedicated evangelism team. It is more maintainable to build an internal information network, one that empowers people to represent Mozilla and makes it easy to always have the newest information.

I am looking forward to seeing what happens with Mozilla next. There is a lot of change going on and change can be a great thing. It needs the right people to stand up and come up with new ideas, have a plan to execute them and a way to measure their success.

As for me, I am sure I will miss a few things I came to love working for Mozilla. The freedoms I had. The distributed working environment. The ability to talk about everything we did. The massive resource that is enthusiasts world-wide giving their time and effort to make the fox shine.

I am looking forward to being one of them and to enjoying the support the company gives me. Mozilla will be the thing I want to support and point to as a great resource to use.

Faster speed leads to more disappointment

Making the web work, keeping our information secure and private and allowing people world-wide to publish and have a voice is the job of all the companies out there.

As enthusiastic supporters of these ideas we’re not reaching the biggest perpetrators. I am looking forward to giving my skills to a company that needs to move further into this mindset rather than having it as its manifesto. I also want to support developers who need to get a job done in a limited and fixed environment. We need to make the web better by changing it from the inside. Every day people create, build and code a part of the web. We need to empower them, not tell them that they need a certain technology or must change their ways to enable something new.

The web is independent of hardware, software, locale and ability. This is what makes it amazing. This means that we cannot tell people to use a certain browser to get a better result. We need to find ways to get rid of hurtful solutions by offering upgrades for them.

We have a lot of excuses why things break on the web. We fail to offer solutions that are easy to implement, mature enough to use and give the implementers an immediate benefit. This is a good new challenge. We are good at impressing one another, time to impress others.

“Keep on rocking the free web”, as Potch says every Monday in the Mozilla meeting.

Planet Mozilla: Webdev Extravaganza – January 2015

Note: Apologies for the lack of posts in December; both the Webdev Extravaganza and Beer and Tell were cancelled due to a company-wide workweek and the holidays.

Once a month, web developers from across Mozilla get together to compete in a S’More-themed cooking contest. While we concoct a variety of meals out of the basic ingredients of graham cracker, marshmallow, and chocolate, we find time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Socorro Switch from HBase to S3

Lonnen shared with us the news that Socorro (Crash-Stats) has switched from storing crash data in HBase to Amazon S3. Socorro has roughly 150 terabytes of data, and HBase was the single largest source of problems for the site. Even with the extra latency of leaving the data center to read crash data, Socorro hasn’t seen any major issues with the new system.

about:home Fundraising Snippets

As part of the end-of-year fundraising push, Osmose worked with the amazing team over at the Mozilla Foundation to test and deploy several snippets encouraging Firefox users to donate to Mozilla. Throughout the campaign, the fundraising team has been maintaining a webpage and blog with info about how much we raised and what methods we used to optimize our fundraising.

Mozilla.org Landing Pages and Upcoming Tours

Craigcook shared news about a new landing page for Firefox OS on TVs that landed on Mozilla.org for CES. There’s also a not-yet-but-soon-to-be-launched landing page announcing Firefox Hello.

QMO on Shared WordPress

Craig also dropped a note about QMO being moved over to the shared WordPress instance instead of living on its own separate instance. This will make QMO easier to maintain and less fragile.

Peep 2.0

ErikRose proudly announced the 2.0 release of Peep, a wrapper around pip that cryptographically ensures that libraries you install are the same as the ones installed by the original developer. The 2.0 release primarily includes a security fix where the setup.py file of a package would be executed even if the package did not match the expected hash. Update today!

Air Mozilla Extracted Screenshots for Video Icons

Peterbe informed us that Air Mozilla now has thumbnails available for use as video icons instead of the previously-used static logos. If you have any archived videos, you can visit their “Edit event data” page and select which thumbnail you’d like to use as the icon for the video.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

ElasticUtils is Deprecated

Willkg gave a non-verbal update to inform us that ElasticUtils is deprecated, and he’s stepping down as maintainer. There’s a blog post explaining why, and he recommends that users switch to elasticsearch-dsl-py instead.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor. Or rather, we would, but no one new joined this week. Doh!

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Custom Context Menu Items

ErikRose wanted to share a neat article he found by davidwalsh describing how to add custom items to the context menu. Browser support is currently limited to Firefox.
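For reference, the technique in the article builds on the HTML contextmenu attribute together with the menu and menuitem elements (a sketch; the id, label, and handler here are made up):

```html
<!-- Right-clicking this element shows the custom menu in addition
     to the browser's built-in items. Firefox-only at the moment. -->
<div contextmenu="image-menu">Right-click me</div>
<menu type="context" id="image-menu">
  <menuitem label="Rotate image" onclick="rotateImage()"></menuitem>
</menu>
```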

DevOps Talk?

Jgmize asked about hosting a possible DevOps Extravaganza, but it was reinforced that DevOps-related topics are still relevant to developing the web, and thus can be talked about during the Webdev Extravaganza. If you’re doing something interesting related to DevOps, feel free to share during the meeting!

SUMO Developers Flattened

Lonnen shared the news that r1cky has transitioned from being a manager to being a full-time engineer again. The SUMO and Input teams that were under him are now all on the Web Engineering team under lonnen. Hooray for flat hierarchies!


This month’s winning entry in the cooking contest was a S’mores-flavored variety of Soylent that both provides all the nutrients and calories you probably need and is literally just S’mores thrown in a blender. Genius!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

W3C Team blogLast week: W3C and OGC to work on Spatial Data on the Web, WAI Tutorials, W3Training, etc.

This is the 2-9 January 2015 edition -after a hiatus on 19 December 2014- of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality

  • Ars Technica: Title II for Internet providers is all but confirmed by FCC chairmanFederal Communications Commission (FCC) Chairman Tom Wheeler implied that Title II of the Communications Act will be the basis for new net neutrality rules governing the broadband industry. […] proposed rules […] will be circulated within the Commission on February 5 and voted on on February 26.

W3C in the Press (or blogs)

5 articles since the last digest; a selection follows. You may read all articles in our Press Clippings page.

Bruce LawsonReading List

Planet MozillaA Device Blind Users Will Love

The Internet is a global public resource that must remain open and accessible.

<footer style="text-align: right;">— Mozilla manifesto</footer>

Mozilla invests in accessibility, because it’s the right thing to do.

We have staff, a team of engineers, who focus exclusively on accessibility in our products and have a positive influence on the general accessibility of the web. This has paid off well: Firefox is widely regarded as a leader in screen reader support on the desktop and on Android. We have the best HTML5 accessibility support in our browser, and we are close to having a fully functional screen reader in Firefox OS.

Mozilla accessibility logo

I say “close”, because we are not yet there. Most websites are fairly accessible with little to no effort from the site developers. The document model of the web is relatively simple and malleable enough that blind users are able to access documents through screen readers. Advanced web applications are a whole other story: developers are required to be much more mindful about how they are authored, and to account for users with disabilities when designing them. The most recognized standard for making rich internet applications accessible is ARIA (Accessible Rich Internet Applications), which allows augmenting markup with attributes that help assistive technologies (such as screen readers) get a good understanding of the state of the app and relay it to the user.
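As a small illustration (hypothetical markup, not taken from Gaia), ARIA lets a custom widget expose its role and state to a screen reader:

```html
<!-- A custom toggle button: role tells the screen reader what this
     element is, aria-pressed tells it the current state, and tabindex
     makes it keyboard-focusable. -->
<div role="button" tabindex="0" aria-pressed="false">Mute</div>
```

Without those attributes, a screen reader sees only an anonymous div and has nothing useful to relay to the user.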

In Firefox OS we have a suite of core apps called Gaia that is the foundation for Firefox OS’s user interface. It is really one giant web app, perhaps one of the biggest out there. Since our mission dictates that we make our products accessible, we embarked on that journey: we created a screen reader for Firefox OS, and we got to work making Gaia screen-reader friendly. It has been a long and Sisyphean process, where we would arrive at one module in Gaia, learn the code, fix some issues, and move on to the next module. It feels something like this:

<figure class="wp-caption aligncenter" id="attachment_634" style="width: 460px;">helicopter dumps water on a grass fireA California Department of Forestry helicopter dumps water on a grass fire in Benicia. (Robinson Kuntz/Daily Republic)</figure>

Firefox OS has grown tremendously in a couple of years. Things never slowed down, and we were always revamping one app or another, trying out something new, and evolving rapidly. This means that accessibility was always one step behind. If we got an app accessible in version n, n+1 was around the corner with a whole new everything. Besides working on Gaia, we have always been looping back to our screen reader, making it more robust and adding features. We have consistently been straddling the gap:

The gap between Firefox OS and the screen reader

Firefox OS has achieved some amazing milestones in its short life. Early in the project, there was still a hushed uncertainty. Did we over promise? Could we turn a proof of concept into a mass-market device? There were so many moving parts for a version one release. Accessibility was not a product priority.

The return on investment

When I think about making our products accessible for the people that can’t see or to help a kid with autism, I don’t think about a bloody ROI.

<footer style="text-align: right;">— An angry Tim Cook</footer>

Take 5 seconds, and let that sink in. Apple is not a charity, they are one of the most profitable companies on the planet. Still, they understand the social value of making their products accessible.

Yet, I will argue that there is a bloody return on investment in accessibility.

Mobile is changing our social perception of disability and blurring the line between permanent and temporary barriers. The prevailing assumption used to be that your user will sit in front of a 14″ monitor with a keyboard, a mouse, and undivided attention. But today there can be no assumptions: an app needs to be usable in many situations that impair the user in comparison to a desktop setup:

  • A user will browse the web on a small, 3.5″ device with no keyboard, and only their inaccurate fat fingers as a pointing device for activating links.
  • A driver will need to keep their eyes on the road and cannot interact with complex interfaces.
  • A cyclist on a cold winter day will have gloves and will want to look up where they are going on a map.
  • A pedestrian will look up a nearby restaurant on a sunny day with plenty of glare making it hard to read their phone’s screen.
<figure class="wp-caption aligncenter" id="attachment_655" style="width: 460px;">A driver texting in trafficThis shouldn’t happen.</figure>

The edge case of permanently impaired users is eclipsed by the common mobile use case, which needs to appeal to users with all sorts of temporary impairments: motor, visual and cognitive. Apple understands that with Siri, and Google does too with Google Now. In Firefox OS, sooner or later we will need a good voice input/output story.

I made a case for accessibility, and I could probably stop here. But I won’t. Because the real benefit of an accessible device is priceless.

<figure class="wp-caption aligncenter" id="attachment_659" style="width: 460px;">Graph showing impact on blind users in contrast to other usersWhile blind smart phone users are a small fraction of the general population, the impact on their lives is so much greater.</figure>

We all benefit from that smart phone in our pocket. The first iPhone was a real revolution. It allows us to check mail on the go, share our lives on social networks, ignore our family, and pretend we are doing something important at awkward parties. But for blind users, smart phones have increased their quality of life in profound and amazing ways. Blind smart phone owners are more independent, less isolated, and they can participate in online life like never before. Prior to smart phones, blind folks depended on very expensive gadgets for mobile computing. Today, a smart phone with a few handy apps could easily replace a $10,000 specialty device.

Smart phones in the hands of blind users are a very big deal.

Three blind iphone owners

What we need to do

To make this happen, every decision by our product team, every design from UX, and every line of code from developers needs to account for the blind user experience. This isn’t as big a deal as it sounds; screen reader support is just another thing to account for, like localization. We know today that designing and developing UI for right-to-left languages takes some consideration. Especially if you live in a left-to-right world.

What we need is project-wide consciousness around accessibility. It is great that we have an accessibility team, and I think Mozilla benefits from it. But this does not let anyone else off the hook from understanding accessibility, embedding it in our products, and embracing it as a value.

I fear that this post will disappoint because I won’t get into how blind users use smart phones, and how developers should account for the screen reader. I have written in the past about this, and Yura has some good posts on that as well. And yes, we need to step up our game, document and communicate more.

But for now, here are two things you could do to get a better picture:

  1. If you own an Android device or iPhone, turn on the screen reader, close your eyes and learn to use it. Challenge yourself to complete all sorts of tasks with your screen reader on. Test the screen reader’s limits.
  2. With your Firefox OS device, turn on the screen reader. It works in the same fashion as the iOS or Android one does. Check your latest creation, and see what is broken and missing.

2015 is going to be a great year for Firefox OS. I have already heard all sorts of product ideas that have the potential of greatness. We are destined to ship something amazing. But for blind users, it could be life changing.


Planet WebKitXabier Rodríguez Calvar: Streams API in WebKit at the Web Engines Hackfest

Yes, I know, I should have written this post before you know, blah, blah, excuse 1, blah, excuse 2, etc. ;)

First of course I would like to thank Igalia for allowing me to use company time to attend the hackfest and meet such a group of amazing programmers! It was quite intense, and I tried to give my best, though for different reasons (coordination, personal and so on) I missed some sessions.

My purpose at the hackfest was to work with Youenn Fablet from Canon on implementing the Streams API in WebKit. When we began to work together in November, Youenn already had a prototype working with some tests, so the idea was to take that, complete it, polish it and ship it. Easy, huh? Not so…

What is Streams? As you can read in the spec, the idea is to create a way of handling different kinds of streams with a common high-level API. Those streams can be a mapping of low-level I/O system operations or can be easily created from JavaScript.

Fancy things you can do:

  • Create readable/writable streams mapping different operations
  • Read/write data from/to the streams
  • Pipe data between different streams
  • Handle backpressure (controlling the data flow) automagically
  • Handle chunks as the web application sees fit, including different data types
  • Implement custom loaders to feed different HTML tags (images, multimedia, etc.)
  • Map some existing APIs to Streams. XMLHttpRequest would be a wonderful first step.
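As a small taste of the API (a sketch following the Streams spec, not WebKit’s implementation; the chunk values are made up), a ReadableStream can be created and drained from JavaScript like this:

```javascript
// Create a stream whose chunks are enqueued up front and then closed.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("world");
    controller.close(); // no more chunks; readers will see done: true
  },
});

// Drain a readable stream into an array of chunks.
async function readAll(readable) {
  const reader = readable.getReader();
  const chunks = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return chunks;
    chunks.push(value);
  }
}
```

Backpressure falls out of the same model: a source that implements a pull() method is only asked for more data when the consumer’s internal queue has room.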

First thing we did after the prototype was defining a roadmap:

  • General ReadableStream that you can create at JavaScript and read from it
  • XMLHttpRequest integration
  • Loaders for some HTML tags
  • WritableStream
  • Piping operations

As you can see in bugzilla we are close to finishing the first point, which took quite a lot of effort because it required:

  • Code cleaning
  • Making it build in debug
  • Improving the tests
  • Writing the promises based constructor
  • Fixing a lot of bugs

Of course we didn’t do all this at the hackfest; only Chuck Norris would have been able to do that. The hackfest provided the opportunity of meeting Youenn in person, working side by side, and discussing different problems and possible strategies to solve them, such as error management and queueing chunks and handling their size, which are not trivial given the complexity created by the flexibility of the API.

After the hackfest we continued working and, as I said before, the result you can find at bugzilla. We hope to be able to land this soon and continue working on the topic within the current roadmap.

To close the topic of the hackfest, it was a pleasure to work with such a group of awesome web engine hackers, and I would like to finish by thanking the sponsors Collabora and Adobe, and especially my employer, Igalia, which was both sponsor and host.

Steve Faulkner et alNotes on providing alt text for twitter images

I use twitter a lot via the twitter web UI. Often I see images in my twitter stream that contain interesting information and text content. Unsurprisingly this content is not available to people who cannot see the images or have difficulty interpreting graphical content.

Unfortunately the twitter UI does not provide a built-in method for providing text alternatives using the standard HTML methods for doing so. You cannot add an alt attribute to images and/or provide a caption using the figure and figcaption elements.
What you can do pretty easily is provide the alt text as text in the same tweet as the image (if it fits). If it is a tweet from someone else, or there is not enough space in the same tweet, you can reply to the tweet with the alt text.

Examples

<script async="" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Note: You can often find the text version of quotes and other text embedded in graphics by sticking the first few words of the text into Google (the Gene Roddenberry quote, for example); you can then simply provide a link to the text source for everyone \0/.

<script async="" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

It’s a bit more work

Providing text alternatives for images on twitter is a bit more work, but it makes the interesting stuff available to anybody that follows you on twitter. (The same goes for Facebook.)

Sometimes I find graphics whose alt text simply doesn’t fit into 140 characters or that would benefit from structured HTML markup. If I think it’s really interesting and I have the time, I will make it available using a service such as CodePen and then publish the link on twitter:

<script async="" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>Note: The Easy Chirp twitter client also provides a method to provide text alternatives amongst many other accessibility features.

Addendum

Posted a few music videos on twitter, for example:

<script async="" charset="utf-8" src="http://platform.twitter.com/widgets.js"></script>

Thinking that using Gist is a simple way to add lyrics to songs.

Further Reading

Adrian Roselli – (ranting) Don’t Tweet Pictures of Text

Planet MozillaDocShell in a Nutshell – Part 3: Maturation (2005 – 2010)

Whoops

First off, an apology. I’ve fallen behind on these posts, and that’s not good – the iron has cooled, and I was taught to strike it while it was hot. I was hit with classic blogcrastination.

Secondly, another apology – I made a few errors in my last post, and I’d like to correct them:

  1. It’s come to my attention that I played a little fast and loose with the notions of “global history” and “session history”. They’re really two completely different things. Specifically, global history is what populates the AwesomeBar. Global history’s job is to remember every site you visit, regardless of browser window or tab. Session history is a different beast altogether – session history is the history inside the back-forward buttons. Every time you click on a link from one page, and travel to the next, you create a little nugget of session history. And when you click the back button, you move backwards in that session history. That’s the difference between the two – “like chalk and cheese”, as NeilAway said when he brought this to my attention.
  2. I also said that the docshell/ folder was created on Travis’s first landing on October 15th, 1998. This is not true – the docshell/ folder was created several months earlier, in this commit by “kipp”, dated July 18, 1998.

I’ve altered my last post to contain the above information, along with details on what I found in the time of that commit to Travis’s first landing. Maybe go back and give that a quick skim while I wait. Look for the string “correction” to see what I’ve changed.

I also got some confirmation from Travis himself over Twitter regarding my last post:

@mike_conley Looks like general right flow as far as 14 years ago memory can aid. :) Many context points surround…
@mike_conley 1) At that time, Mozilla was still largely in walls of Netscape, so many reviews/ alignment happened in person vs public docs.
@mike_conley 2) XPCOM ideas were new and many parts of system were straddling C++ objects and Interface models.
@mike_conley 3) XUL was also new and boundaries of what rendering belonged in browser shell vs. general rendering we’re [sic] being defined.
@mike_conley 4) JS access to XPCOM was also new driving rethinking of JS control vs embedding control.
@mike_conley There was a massive unwinding of the monolith and (re)defining of what it meant to build a browser inside a rendered chrome.

It’s cool to hear from the guy who really got the ball rolling here. The web is wonderful!

Finally, one last apology – this is a long-ass blog post. I’ve been working on it off and on for about 3 months, and it’s gotten pretty massive. Strap yourself into whatever chair you’re near, grab a thermos, cancel any appointments, and flip on your answering machine. This is going to be a long ride.

Oh come on, it’s not that bad, right? … right?

OK, let’s get back down to it. Where were we?

2005

A frame spoofing bug

Ah, yes – 2005 had just started. This was just a few weeks after a community driven effort put a full-page ad for Firefox in the New York Times. Only a month earlier, a New York Times article highlighted Firefox, and how it was starting to eat into Internet Explorer’s market share.

So what was going on in DocShell? Here are the bits I found most interesting. They’re kinda few and far between, since DocShell appears to have stabilized quite a bit by now. Mostly tiny bugfixes are landed, with the occasional interesting blip showing up. I guess this is a sign of a “mature” section of the codebase.

I found this commit on January 11th, 2005 pretty interesting. This patch fixes bug 103638 (and bug 273699 while it’s at it). What was going on was that if you had two Firefox windows open, both with <frameset>’s, where two <frames> had the same name attribute, it was possible for links clicked in one to sometimes open in the other. Youch! That’s a pretty serious security vulnerability. jst’s patch added a bunch of checks and smarter selection for link targets.

One of those new checks involved adding a new static function called CanAccessItem to nsDocShell.cpp, and having FindItemWithName (an nsDocShell instance method used to find some child nsIDocShellTreeItem with a particular name) take a new parameter of the “original requestor”, and ensuring that whichever nsIDocShellTreeItem we eventually landed on with the name that was requested passes the CanAccessItem test with the original requestor.

DocShell and session history

There are two commits, one on January 20th, 2005, and one on January 30th, 2005, both of which fix different bugs, but are interrelated and I want to talk about them for a second.

The first commit, for bug 277224, fixes a problem where if we change location to an anchor located within a document within a <script> tag, we stop loading the page content because the browser thinks we’re about to start loading a document at a different location. bz fixed the more common case of location change via setting document.location.href in bug 233963. Bug 277224 is interested in the case where document.location.href is modified with the .replace() method.

The solution that bz uses is to add new flags for nsIDocShellLoadInfo, which gives more power in how to stop loading a page. Specifically, it adds a LOAD_FLAGS_STOP_CONTENT flag which allows the caller to stop the rendering of content and all network activity (where the default was just to stop network activity). I believe what happens is that replace() causes an InternalLoad to kick off, and we need content rendering to be stopped in order for this new load to take over properly. That’s my reading on the situation, anyhow. If bz or anybody else examining that patch has another interpretation, please let me know!

So what about the commit on January 30th? Well that one also involves anchors. What was happening was that if we browsed to some page, and then clicked a link that scrolled us to an anchor in that page, clicking back would reload the entire document off the cache again, when we really just need to restore the old scroll position.

The patch to fix this basically detected the case where we were going back from an anchor to a non-anchor but had the same URL, and allowed a scroll in that case.

So how is this related to the commit for bug 277224? Well, what it shows is that at this time, DocShell was responsible for not just knowing how to load a document and subdocuments, but also about the user’s state in that document – specifically, their scroll position. It also more firmly establishes the link between DocShell and Session History – as DocShell traverses pages, it communicates with Session History to let it know about those transitions, and refers to it when traveling backwards and forwards, and when restoring state for those session history entries.

I just thought that was kinda neat to know.

Window pains

On February 8th, 2005, danm landed a patch to fix bug 278143, which was a bug that caused windows opened with window.open to open in a new window if they had no target specified. This wouldn’t normally be a problem, except that this could override a user preference to open those new windows in new tabs instead. So that was bad.

This was simply a matter of adding a check for the null case for the target window name in nsWindowWatcher. No big deal.

The reason I bring this code up, is because I find it familiar – I brushed by it somewhat when I was working on making it possible to open new windows for multi-process Firefox.

Semi-related (because of the “popup” nature of things), is a commit on February 23rd, 2005. This one is for bug 277574, which makes it so that modal HTTP auth prompts focus the tabs that spawn them. This patch works by making sure HTTP auth prompts fire the same DOMWillOpenModalDialog and DOMModalDialogClosed events that tabbrowser listens for to focus tabs.

The copy and the cache

On March 11, 2005, NeilAway landed a commit to add the “Copy Image” command item to the context menu. This was for bug 135300.

What’s interesting here is that “Copy Image Location” was already in the context menu, and in the bug, it looks like there’s some contention over whether or not to keep it. It seems that right around here, the solution they go with is to copy both the image and the image location to the clipboard, and mark each copy with the right “flavours”, so that if you were to paste to a program or field that accepted an “image” flavour, like Photoshop, you’d get the image. If you pasted to a program or field that accepted a “text” flavour, like Notepad, you’d get the image URL.

That’s the solution that was landed, anyhow. Notice that nowadays, Firefox has context menu items that allow users to copy just the image, and just the URL – so at some point, this approach was deemed wanting. I’ll keep my eye out to see if I can find out where that happened, and why. If anybody knows, please comment!

On April 28, 2005, roc landed a commit for bug 240276, which splits up something called “nsGfxScrollFrame” into two things – nsHTMLScrollFrame and nsXULScrollFrame. It seems like, up until this point, layout for both XUL and HTML scrollable frames were handled by the same code. This meant that we were using XUL box-model style layout for HTML, and XUL layout is… well… kind of tricky to work with. This patch helped to further distance our HTML rendering from our XUL rendering. As for how this affected DocShell, the patch removed some scroll calculations from DocShell, where they probably didn’t belong in the first place.

On May 4, 2005, Brian Ryner landed a patch which made it possible to move back and forward across web pages much more quickly. This was for bug 274784, and a key part of a project called “fastback”. When you view a web page, a DocShell is put in charge of requesting network activity to retrieve the document source, and then passing that source onto an appropriate nsIContentViewer. Up until Brian’s patch, it looks like every nsIContentViewer was just getting thrown away after browsing away from a page. His patch made it possible to store a certain number of these nsIContentViewers in the session history of the window, and then retrieve it when we browse back or forward to the associated page. This is a textbook trade-off between speed (the time to instantiate and initialize an nsIContentViewer) and space (stored nsIContentViewers consume memory). And it looks like the trade-off paid-off! We still cache nsIContentViewers to this day. What’s interesting about Brian’s patch is that it exposes an about:config preference1 for setting how many content viewers are allowed to be cached2. As DocShell seems to go hand in hand with session history, it’s not surprising that Brian’s patch touches DocShell code.

about:neterror arrives, Inner and Outer windows appear, and then Session History gets all snuggly with DocShell

On July 14th of 2005, bsmedberg landed a patch to add about:neterror pages, and close a privilege-escalation security vulnerability. Up until this point, network error pages were shown by browsing the DocShell to chrome URLs3, but this allowed certain types of attacks which load iframes resolving to network error pages to potentially gain chrome privileges4.

So instead of going to a chrome URL, the patch causes DocShell to internally load about:neterror5. The great news about this about:neterror page is that it has restricted permissions, so that security hole got plugged.

On July 30th, 2005, jst landed a patch to introduce the notion of inner and outer windows for bug 296639. Inner and outer windows has confused me for a while, but I think I’ve somewhat wrapped my head around it. This document helped.

The idea goes something like this:

The thing that is showing you web content can be considered the outer window – so that could be a browser tab, or an iframe, for example. The inner window is the content being displayed – it’s transitory, and goes away as you browse the web via the outer window.

The outer window then has a notion of all of the inner windows it might contain, and the inner window (via Javascript) gets a handle on the outer window via the window global.

So, for example, if you call window.open, the returned value is an outer window. Methods that you call on that outer window are then forwarded to the inner window.

I hope I got that right. I was originally trying to piece together the meaning of all of it by reading this WHATWG spec describing browsing context, and that was pretty slow going. The MDN page seemed much more clear.

Please comment with corrections if I got any of that wrong.

I’m not entirely sure, but based entirely on instinct and experience, I’m inclined to believe there are interesting security effects of this split. It seems to add a bit more of a membrane6 between web content and the physical window.

Anyhow, jst’s change was pretty monumental. It’s for bug 296639 if you want to read up more about it.

A semi-related change was landed on August 12th by mrbkap, where the entire inner window is stashed in the bfcache (as opposed to what we were doing before, which looks like serialization and deserialization of window state). That was for bug 303267, and sounds related to the fast back and forward caching work that Brian Ryner was working on back in May.

On August 18th of 2005, radha landed the first in the series of patches to session history. Unfortunately, the commit message for this patch doesn’t have a bug number, so I had some trouble tracking down what this work is for. I think this work is for bug 230363, and is actually a copy of interfaces from xpfe/components/shistory/public to docshell/shistory/public. Like I mentioned earlier, DocShell and session history are closely linked, so I suppose it makes sense to put the session history code under docshell/. Later that day, another patch copies the nsISHistoryListener interfaces over as well. Finally, a patch landed to build those interfaces from their new locations, and removes xpfe/components/shistory from Makefile.in. The bug for that last change is bug 305090.

Last bit of 2005

On August 22 of 2005, mrbkap landed a patch that changed how content viewer caching worked. There’s a special page in Firefox called about:blank – if you go to that page right now, you’re going to get a blank page. Some people like to set that as their home page or new tab page, as it is (or should be) very lightweight to load. That page is also special because, from what I can tell, when a new tab or window opens, it’s initially pointing at about:blank before it goes to the requested destination. Before this patch, we used to cache that about:blank content viewer in session history. We didn’t put an entry in the back-forward cache for about:blank though7, so that was a useless cache and a waste of memory. mrbkap’s patch made DocShell check to see if the page it was traveling to was going to re-use the current inner window, and if so, it’d skip caching it. Memory win!

That was the last thing I found interesting in 2005. On to 2006!

Preferences and threads…

On February 7th, 2006, bz landed a patch that made it possible for embedders to override where popup windows get opened.

There are preferences in Firefox that allow you to tweak how web content is able to open new windows8. Those preferences are browser.link.open_newwindow and browser.link.open_newwindow.restriction. If a page is attempting to open a new window, these preferences allow a user to control what actually occurs. The default (in most cases) is to open a new tab instead – but these preferences allow you to open that new window, or to open the content in the same window that the link is executed in. These are the kind of tweaks that power-users love.

Up until this point, only Firefox had these tweaking capabilities. bz’s patch moved that tweaking logic “up the chain”, so to speak, which means that applications that embedded Gecko could also be tweaked like this. Pretty handy.

For the Gecko hackers reading this, this patch also introduced the nsIWindowProvider interface9.

On May 10th, 2006, “darin” landed a patch for bug 326273 that put the nsIThreadManager interface and implementation into the tree. It’s a big commit, and affected many parts of the codebase. nsIThreadManager is, not surprisingly, used to implement multi-threading and thread manipulation in Gecko. From my look at the patch, it looks like it replaces something called nsIEventQueue / nsIEventQueueService. It looks like Gecko already had some facility for multi-threading10, but nsIThreadManager looks like a different model for multi-threading.

For DocShell, this change meant modifying the way that restoring PresShells from history would work. Before, DocShell had a RestorePresentationEvent that extended PLEvent, which allowed it to be posted to an nsIEventQueue. Now, instead, we define an inner class that implements nsRunnable11, and also define a weak pointer to that runnable on a DocShell.

So the way things would go is this: DocShell::RestorePresentation would get called, and this would cancel any pending RestorePresentationRunnable that the DocShell is weak-pointing to. Next, we’d instantiate a new RestorePresentationRunnable that we’d then dispatch to the main thread. This isn’t really different from what we were doing before, but it makes use of the nsIThreadManager and nsRunnable class instead of nsIEventQueue and nsIEventQueueService.

What’s interesting about this patch, DocShell-wise, is that it shows the usage of FavorPerformanceHint, which looks like a way of trading off UI interactivity against page-to-screen time. Basically, it looks like the FavorPerformanceHint is used when restoring PresShells to tell the nsIAppShell, “hey – we want you to favor native events over other events for a small pocket of time so we can get this stuff to the screen ASAP”. If I’m interpreting that right, it’s a tradeoff between total time to execute and responsiveness. “Do you want it fast, or do you want it smooth?”.

I was probably wrong about the name

In one of my past posts, I made some guesses about why DocShell was called DocShell. I thought:

I think nsDocShell was given the “shell” moniker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

But now that I look at nsIAppShell, and nsIDocShell, and nsIPresShell… I think I’m starting to understand. A while back, when I first started planning these posts, I asked blassey why he thought nsIDocShell was named the way it was, and he said he thought it might be related to the notion of a command shell – like a terminal input. From my understanding, a shell is a command interface with which one can manipulate and control something pretty complex – like the file-system or processes of a computer. I think blassey is right – I think that’s the “Shell” in nsIDocShell. I think the idea is that this interface would be the one to control and manipulate the process of loading and displaying a document. It seems obvious now, but it sure wasn’t when I started looking into this stuff.

DOM Storage (session and global), KungFuDeathGrip, friendlier search…

On May 19th, 2006, jst picked up, finished and landed a patch originally by Enn that implemented DOM Storage for bug 335540.

This patch adds two new methods to nsIDocShell – getSessionStorageForDomain and addSessionStorage. The first method is accessed in a number of cases, but most importantly when some caller reads sessionStorage or globalStorage off of the window object12.
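As a rough sketch of the API shape this work ultimately exposes to content (using a plain-object stand-in, since sessionStorage itself only exists inside a browser window):

```javascript
// Minimal in-memory stand-in for the sessionStorage/localStorage API shape.
const store = new Map();
const sessionStorageLike = {
  setItem(key, value) { store.set(String(key), String(value)); },
  getItem(key) { return store.has(String(key)) ? store.get(String(key)) : null; },
  removeItem(key) { store.delete(String(key)); },
  clear() { store.clear(); },
  get length() { return store.size; },
};

sessionStorageLike.setItem("draft", "hello");
console.log(sessionStorageLike.getItem("draft"));   // "hello"
console.log(sessionStorageLike.getItem("missing")); // null
```

In a real page you’d just use window.sessionStorage directly; the point here is only the shape of the interface the DocShell methods back.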

The relationship between nsGlobalWindow and nsDocShell is brought to my attention with this patch. Here’s a fragment from an old chat I had with Ms2ger, smaug and bz, which started when I asked Ms2ger what he’d rename DocShell to.

14:11 (Ms2ger) mconley, I would call it WindowProxy :)
14:12 (smaug) outer window? yes, WindowProxy please
14:12 (mconley) Ms2ger: wait, outer window = docshell currently?
14:12 (khuey) what are we doing with WindowProxy?
14:13 (Ms2ger) mconley, well, no, there’s nsDocShell and nsGlobalWindow (with IsOuterWindow() true)
14:13 (Ms2ger) mconley, those are pretty much isomorphic
14:13 (mconley) I see
14:13 (bz) nsDocShell and outer nsGlobalWindow are in a 1-1 relationship
14:14 (bz) The fact that they are two separate objects is sort of a historical accident that we may want to rectify sometime
14:14 (mconley) this sounds like another post to write – how nsDocShell and nsGlobalWindow are related…

So I think nsGlobalWindow (instances of which can either be “inner” or “outer”), when IsOuterWindow() returns true, works in tandem with nsDocShell to “be” the outer window. That’s really imprecise, hand-wavey language. I’ll probably need to tighten this up in a follow-up post once somebody reads this and gives me better words to describe things13.

On May 24, 2006, smaug landed a patch to fix bug 336978. Bug 336978 was a crash caused by loading the following code in an iframe:

<html>
<head></head>
<body>
  <script>
    window.addEventListener("pagehide", doe, true);
    function doe(e) {
      var x = parent.document.getElementsByTagName('iframe')[0];
      x.parentNode.removeChild(x);
    }
    setTimeout(doe2,500);
    function doe2() {
      window.location = 'about:blank';
    }
  </script>
</body>
</html>

What this code does is wait 500ms, and then change the location to about:blank. Changing the location causes the pagehide event to fire while we’re unloading the original page, and when we hear it, the handler reaches into the parent document and removes the very iframe that the script is running in.

smaug’s solution to this bug is for nsDocShell to hold a reference via an nsCOMPtr to the nsIContentViewer for the document while the pagehide event is fired. This ensures that the nsIContentViewer doesn’t get destructed before we’re truly done with it. The name we give this nsCOMPtr is “kungFuDeathGrip”. This isn’t the only place where a hold on an object is maintained with a variable called kungFuDeathGrip – check out dxr for some more uses.

I’d seen kungFuDeathGrip over the years, and I never looked closely at what it was doing. I always thought kungFuDeathGrip was some magical global function that destroyed things unequivocally, but on closer inspection, I’m pretty sure it’s really just a way of saying “this variable’s sole purpose is to hold a reference to this thing until I’m done with it.”

I think the phrase “kung fu” distracted me. I thought it did this:

Black Dynamite layin' the smack down.

Woooooo!

But it’s really more like this:

Spock taking out Kirk with the Vulcan nerve pinch thing.

Kkkg….*gurgle*…ngahh….

On June 15th, 2006, “brettw” landed a patch for bug 245597 to make it so that anything that gets put into the AwesomeBar that isn’t parse-able as a URI automatically turns into a keyword search. That’s great! This made both the search input and the AwesomeBar useful for more users. This change occurred in docshell/base/nsDefaultURIFixup.cpp, which is, as I understand it, the central location for code that turns erroneous URIs into what the user probably intended.

nsIMutationObserver, some new about: pages…

On July 2nd, 2006, Jonas Sicking added nsIMutationObserver to the tree for bug 342062, making it possible to observe changes to the DOM within a subtree. It’s a pretty big patch, but it looks like a good chunk of it is just swapping in usage of nsIMutationObserver to replace old usage of nsIDocumentObserver (which supplied the same observations, but for an entire document instead of a subtree). Note that it’d still be a few years before DOM3 Mutation Events would be exposed for web developers to use, and after a few more years, those events were deprecated in favour of the Mutation Observer API.

On September 15th, 2006, several new Gecko-wide about: pages landed, which means they got put into the redirection map in docshell/base/nsAboutRedirector.cpp. These pages were about:buildconfig (bug 140034), about:about (bug 56061), and about:license (bug 256945). That same day, bz landed a patch to make it so that new about: pages didn’t have to have special rules hardcoded into nsScriptSecurityManager::CanExecuteScripts to execute script even if the user has script disabled. Instead, the nsAboutRedirector mapping was extended to allow a boolean for indicating that an about: page required script execution.

Simplifying DocShell, and then some spoofing and malware protection

The next interesting thing (according to me, anyhow) didn’t occur until the following May 6th, 2007. That day, bz landed a patch for bug 377303 which simplified the structure of things inside the DocShell tree.

Up until that point, both nsIDocShellTreeItem and nsIDocShellTreeNode had existed as interfaces for interacting with nodes within a DocShell tree. I’ll quote myself from my previous post:

The (somewhat nebulous) distinction of DocShell “treeItems” and “treeNodes” is made. At this point, the difference between the two is that nsIDocShellTreeItem must be implemented by anything that wishes to be a leaf or middle node of the DocShell tree. The interface itself provides accessors to various attributes on the tree item. nsIDocShellTreeNode, on the other hand, is for manipulating one of these items in the tree – for example, finding, adding or removing children. I’m not entirely sure this distinction is useful, but there you have it.

It looks like enough was finally enough. bz didn’t go so far as to fully merge the two interfaces (though he makes a note in his patch about doing so), but instead made the (arguably more complex) nsIDocShellTreeItem interface inherit from nsIDocShellTreeNode14.

Later, on May 17th, 2007, Mats Palmgren landed a patch for bug 376562 to remove the childOffset attribute from nsIDocShellTreeItem, and to move a setter for childOffset to the nsIDocShell interface instead.

Reading this bug comment as well as one of Mats’ comments in the patch, it sounds as if childOffset never really worked as advertised, and was a bit of a foot-gun.15

On June 14th, 2007, bz landed a patch for bug 371360, which prevents onUnload handlers from starting any page loads. Before this, it seems that it was possible for a page to do something like this:

<html>
  <body onunload="location.href = 'http://www.somesite.com';">
    <a href="http://slashdot.org/">http://slashdot.org/</a>
  </body>
</html>

With the result being that you could (potentially) phish a user. For example, suppose you’re a member of MySafeBank, which has a site at mysafebank.com. Suppose you’re at my seemingly innocent site totallyevil.com, and also suppose that I’ve registered a domain at rnysafebank.com (that’s an r and an n, which, if you’re not paying attention, look pretty close to a m). If you’re at my site, and I notice that you’re trying to head to mysafebank.com, I could redirect you to rnysafebank.com, which has a very similar user interface and favicon. Yadda yadda yadda, your bank info is now mine.

So bz stopped that one in its tracks by just preventing a DocShell from attempting any kind of load if we’re in the middle of an unload.

On July 3rd, 2007, johnath16 landed a patch for bug 380932 to add a new mode for about:neterror for pages suspected of serving up malware.

If you haven’t seen that page before, count yourself lucky – you’ve been surfing in safe places! This is what the page looks like (or, used to look like, anyhow):

Minefield reporting a Suspected Attack Site

Old version of Firefox showing a Suspected Attack Site

johnath’s patch allowed an about:neterror page to have a specific CSS class associated with it as part of its URL. This allowed for the dramatic styling in the image above17.

showModalDialog

showModalDialog was a non-standard function that Microsoft introduced in Internet Explorer 418. It allowed a web page to create a modal dialog that contains web content. jst landed a patch on July 26th, 2007 to implement it in Firefox as part of bug 194404. This patch made it possible for a DocShell to have a modal dialog be its parent.

showModalDialog has since been marked as deprecated on MDN, and Google Chrome has announced it will no longer support it after May of 2015. Firefox will support it until sometime after Firefox 39 on the release channel19.

about:crashes, Larry, and tab tearing

There’s a long gap in time here where nothing really interesting happens under docshell/.

Finally, on January 24th, 2008, Mossop landed a patch for bug 411490 that exposes about:crashes as a handy way of getting at the list of crash reports that have been collected.

As about:crashes is a Gecko-wide about: page, this meant once again adding an entry to the docshell/base/nsAboutRedirector.cpp map, as had been done with about:buildconfig and about:about.

On April 28th, 2008, gavin landed a patch originally by ehsan that adds a friendlier set of icons for reporting SSL errors for bug 430904.

That icon is Larry. Have you met Larry? This is Larry:

Larry, the SSL dude.

This is Larry.

Larry is the name for a series of icons that were developed to describe how secure your communications are with a particular site. You might recognize him from the airport, because he looks a lot like a customs agent or border patrol.

You can read up on Larry here on johnath’s blog.

On August 7th, 2008, bz landed a patch for bug 113934 to lay the foundation for letting users drag tabs out from a window, or drag tabs between windows. This introduced a new method to nsIFrameLoader, “swapFrameLoaders”. This method is the real key to moving tabs between windows – each <xul:browser> implements nsIFrameLoaderOwner, and the nsIFrameLoader is (yet another) thing that can load web content. This is essentially a brain transplant between two <xul:browser>’s. You can see the real guts of the brain transplant in this method of the patch.

On January 13th, in a semi-related patch, bz landed code for bug 449780 to flush the bfcache20 when swapping frameloaders. Apparently, we were storing information in the bfcache that was simply incorrect after a frameloader swap. The best way to avoid internal confusion in such a case was to just invalidate the cache.

Big gaps…

Lots of big gaps between the next few changes.

On March 18th, 2009, Honza Bambas landed a patch for bug 422526 to implement window.localStorage. localStorage was a replacement for globalStorage21 that persisted across browser restarts (unlike sessionStorage).

Note that both localStorage and sessionStorage were synchronous storage APIs. It’d take until around June 24th of 2010 before an asynchronous storage mechanism became available.

On May 7th, 2009, bz landed a patch for bug 490957 to finally get rid of nsWebShell.cpp. If you recall from my earlier blog post, that was one of Travis Bogard’s goals at the start of this whole adventure. bz’s patch essentially folds the functionality of nsWebShell into nsDocShell. The webshell/ folder remained, but just contained interfaces.

Curiously, a good chunk of nsWebShell’s functionality seemed to revolve around anchor pings, a massively unpopular “feature” that allows a website to get your browser to send a request every time you click on a link. Thankfully, this “feature” is disabled by default in Firefox22. Here’s Jorge Villalobos’s post on anchor pings. Correction (Jan 4th, 2015) - I’ve since changed my tune about anchor pings. See these three comments.

On June 29th, 2009, dbolter landed a patch for bug 467144 so that nsIMutationObservers, when they observe an attribute being changed, also get the old value as well as the new one. To be specific, it adds an “AttributeWillChange” callback that includes the old value, and this fires before the “AttributeChanged” callback.

A day later, bsmedberg landed a massive patch to implement remote tabs. There’s no bug number in the commit message, but this is clearly part of the Electrolysis efforts that were just starting up around this time. Remote tabs means browsers that run in different processes, which is the overall goal of Electrolysis, and (possibly unbeknownst to bsmedberg at the time) a foundational piece for Fennec (Firefox for Android)23.

On October 3rd, 2009, vvuk landed about:memory for bug 515354, a key piece of the war against high-memory consumption in Firefox (a.k.a. MemShrink). This is very similar to the about:crashes page that Mossop landed back in 2008.

And finally…

On January 7th, 2010, smaug landed a patch for bug 534226 to remove support for multiple PresShells. PresShell stands for “Presentation Shell”, and as I understand it, is the primary interface to the “frame tree”24.

It looks like, up until this point, Gecko had the ability to have multiple frame trees per content tree. I’m not entirely sure what the point of that was, but the capability was there. smaug’s patch simplifies everything by making sure a document has only a single, primary PresShell. This removes a lot of iteration and management code for those multiple PresShells, which is nice.

And last, but not least, on June 30th, 2010, Benjamin Stover landed a patch for bug 556400 which made it so that visits to webpages are recorded asynchronously in Places. It looks like this patch takes I/O off the main-thread, so it gets a big thumbs-up from me.

Essentially, this patch adds new asynchronous write methods to the History service, and then makes nsDocShell use those methods on webpage visits. nsDocShell falls back to the synchronous methods of nsIGlobalHistory2 if, for some reason, it can’t get at the History service and its asynchronous methods.

Phew!

Did you make it? Are you still with me? I know it might feel like this:

qwop

everyone is a winner

but I think we’re making real progress here. I think we’re learning important stuff about the history of Firefox, and changes that have occurred in some of its core functionality over time.

So to sum up: that, to me, was the most interesting stuff to happen in and around docshell/ from 2005-2010. There might have been other neat stuff in there, but it didn’t catch my eye when I was browsing commit messages.

There’s still much to do – I have to look at commits from 2011 to 2014. After that, I’m planning on doing a line-by-line code review / walkthrough of nsDocShell.cpp, and then I’d like to try to summarize my findings and any recommendations I’ve put together from my time studying this stuff.

Hold tight!


  1. This pref was browser.sessionhistory.max_viewers, if you’re interested – though that preference appears to have been superseded by browser.sessionhistory.max_total_viewers. The default value for that pref is -1, meaning to adjust the number of allowed cached viewers based on how much memory is available. If you’re looking to reduce how much memory Firefox consumes, it’s possible setting this to some low integer will let you reverse that trade-off between space and speed. 

  2. I assume per session history 

  3. You wouldn’t notice that you were at a chrome URL though, because DocShell loads this URL internally, while pretending to be at the URL that caused the error. The end result is the user going to http://www.sitethatcausesnetworkerror.com still sees that URL in their AwesomeBar, despite the fact that their web content shows the appropriate network error page hosted at a chrome URL. 

  4. “chrome privileges” means that a web page now essentially has the same permissions that Firefox, the program on your computer, has – meaning it can potentially read and write files, and communicate with anybody on your network. Yikes! 

  5. You can visit this page in Firefox right now and see a generic network error. It’s showing a generic error because it hasn’t the foggiest idea how you’ve arrived at about:neterror, since it wasn’t passed any error information. 

  6. Or the infrastructure to create such a membrane. 

  7. So you couldn’t go back to about:blank in cases I described, where a tab or window was initialized at about:blank before going to a new page. 

  8. I’m actually quite familiar with this stuff because I worked on opening new windows for Electrolysis not too long ago. 

  9. From the header of that interface:

    /**
     * The nsIWindowProvider interface exists so that the window watcher's default
     * behavior of opening a new window can be easly modified.  When the window
     * watcher needs to open a new window, it will first check with the
     * nsIWindowProvider it gets from the parent window.  If there is no provider
     * or the provider does not provide a window, the window watcher will proceed
     * to actually open a new window.
     */

     

  10. The nsIEventQueueService service mentions that it is used to manage event queues for a particular thread, and makes use of nsIThread – so multi-threading must have already been a thing. 

  11. Still called RestorePresentationEvent though… strange that the opportunity wasn’t taken to rename this to RestorePresentationRunnable. 

  12. Those two properties are part of a new nsIDOMStorageWindow interface that nsGlobalWindow implements after this patch. That interface is later removed in bug 670331, and the two accessors are moved directly into nsIDOMWindow instead. 

  13. I have a feeling the real answer lies somewhere in the comments in bug 296639

  14. It wasn’t immediately clear to me why the inheritance didn’t go the other way around – especially since bz himself had a comment in nsIDocShellTreeNode suggesting that arrangement. Look at his first comment in the bug though:

    This would allow consumers to start using just nsIDocShellTreeItem in their code, until we can just merge nsIDocShellTreeNode into nsIDocShellTreeItem.

    Basically, it sounds like nsIDocShellTreeNode was being deprecated, and that callers who used to use nsIDocShellTreeNode should migrate to use nsIDocShellTreeItem instead (which inherits nsIDocShellTreeNode’s methods). Then the two interfaces could be merged. 

  15. Mats’ warning was removed on August 17, 2010 as part of bug 462076. It looks like SetChildOffset is still only ever used when adding the child though, so it’s still probably valid. 

  16. The same johnath who is currently the VP of Firefox! 

  17. Later on in August, dcamp would land this patch as part of bug 384941 which prevents suspected malware sites from even loading, instead of just not displaying them. 

  18. To quote Douglas Adams:

    This has made a lot of people very angry and has been widely regarded as a bad move.

     

  19. And the 38 ESR will continue to be supported until mid-2016. If you maintain a site that uses showModalDialog, you’d best get rid of it. 

  20. The bfcache, or “back-forward cache” is a collection of “frozen” pages that are stored in memory for fast back/forward action – see this page for more detail. 

  21. globalStorage, I believe, allowed all web properties read and write access to the same storage – so clearly it was a good idea to replace it. globalStorage was removed on October 9th, 2011 by Honza as part of bug 687579. 

  22. But according to this, it is enabled by default in both Chrome and Opera. Lovely. 

  23. Remote tabs are also hugely important for Boot2Gecko / Firefox OS

  24. From this document:

    …the frame tree…is the visual representation of the document. Each frame can be thought of as a rectangular area on the page. The content nodes for XML elements are usually associated with one or more frames which display the element — one frame if the element is rectangular, more if the element is something more complex (like a chunk of bolded text that happens to be word-wrapped)…

    And from the nsIPresShell.h header:

    /**
    * Presentation shell interface. Presentation shells are the
    * controlling point for managing the presentation of a document. The
    * presentation shell holds a live reference to the document, the
    * presentation context, the style manager, the style set and the root
    * frame. <p>
    *
    * When this object is Release’d, it will release the document, the
    * presentation context, the style manager, the style set and the root
    * frame.

     

Planet WebKitManuel Rego: CSS Grid Layout 2014 Recap: Specification Evolution

Year 2014 is coming to an end, so it’s the perfect time to review what has happened with the CSS Grid Layout spec, which Igalia has been implementing in both Blink and WebKit engines, as part of our collaboration with Bloomberg.

I was wondering what would be the best approach to write this post, and finally I’m going to split it into two posts: this one covering the changes in the spec during the whole year, and another one talking about the implementation details.

Two Working Drafts (WD) of the CSS Grid Layout Module were published during 2014. In addition, during the last month of the year, somewhat related to the work previously done at the CSS Working Group (WG) face-to-face (F2F) meeting at TPAC, several changes were made to the spec (I guess they’ll end up in a new WD in early 2015).
So, let’s review the most important changes introduced in each version.

Working Draft: 23 Jan 2014

Subgrids
This is the first WD where the subgrids feature appears marked as at-risk. This means that it might end up outside Level 1 of the specification.
A subgrid is a grid inside another grid, but one that keeps a relationship between its rows/columns and those of the parent grid container: it shares the track sizing definitions with the parent. Just for the record, current implementations don’t support this feature yet.
However, nested grids are already available and will be part of Level 1. Basically, nested grids have their own track sizing definitions, completely independent of their parents. Of course, they’re not the same as subgrids.
Subgrid vs nested grid example


Implicit named areas
This is related to the concept of named grid areas. Briefly, in a grid you can name the different areas (groups of adjacent cells), for example: menu, main, aside and/or footer, using the grid-template-areas property.
Each area defines 4 implicit named lines: 2 called foo-start (marking the row and column start) and 2 called foo-end (row and column end), where foo is the name of the area.
This WD introduces the ability to create implicit named areas by defining named grid lines that follow the previous pattern. That way, if you explicitly create lines called foo-start and foo-end, you’ll be defining an implicit area called foo that can be used to place items in the grid.
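As a sketch of how that might look (class and line names here are made up; the parenthesis syntax for line names is the one used in the drafts of that time):

```css
.grid {
  display: grid;
  /* Lines named main-start/main-end implicitly define an area called "main" */
  grid-template-columns: (main-start) 1fr (main-end) 200px;
  grid-template-rows: (main-start) auto (main-end);
}
.item {
  grid-area: main; /* placed into the implicit "main" area */
}
```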
Example of implicit named grid areas


Aligning the grid
This version adds the justify-content and align-content properties, which allow you to align the whole grid within the grid container.
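A minimal sketch of what that enables (the class name is made up):

```css
.container {
  display: grid;
  grid-template-columns: 100px 100px;
  justify-content: center; /* align the whole grid horizontally within its container */
  align-content: end;      /* and vertically */
}
```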
Other
This WD also adds a new informative section walking through basic examples of the grid placement options. It’s an informative section, but very useful to get an overview of the different possibilities.
In addition, it includes an explanatory example of the behavior of absolutely-positioned grid items.

Working Draft: 13 May 2014

Track sizing algorithm
Probably the most important change in this version is the complete rewrite of the track sizing algorithm. Despite the new wording, though, the algorithm keeps the very same behavior.
This is the main algorithm for grids. It defines how the track breadths should be calculated, taking into account all the available options that determine track sizes.
An appendix with a “translation” between the old algorithm and the new one is included too, mostly to serve as a reference and to help detect possible mistakes.
Auto-placement changes
The grid-auto-flow property is modified in this WD:
  • none is replaced by a new stack packing mode.
  • As a consequence, the grid-auto-position property (which was tied to none) is dropped.

Before this change, the default value for grid-auto-flow was none and, in that case, all the auto-placed items were positioned using the value determined by grid-auto-position (by default, 1×1).
With this change, the default value is row. But you can specify stack, and the grid will look for the first empty cell and place all the auto-positioned items there.

Other
Implementations now have the option to clamp the maximum number of repetitions in a grid.
Besides, this WD brings a new section on sizing grid containers, which defines how they behave under max-content and min-content constraints.

Editor’s Draft: Last changes

Note: These changes are not yet in a WD version and might suffer some modifications before a new WD is published.

Automatic number of tracks
A new auto keyword has been added to the repeat() function.
This allows the track list to be repeated as many times as needed, depending on the grid container size. Used together with the auto-placement feature, it makes for a really nice combo.
For example, if the grid container is 350px wide and it uses repeat(auto, 100px); to define the columns, you’ll end up with 3 columns.
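That example might be written like this (a sketch using the draft syntax of the time; later drafts replaced the bare auto keyword with auto-fill / auto-fit):

```css
.grid {
  display: grid;
  width: 350px;
  /* Draft syntax: repeat as many 100px columns as fit in 350px, i.e. 3 */
  grid-template-columns: repeat(auto, 100px);
}
```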
Example of new auto keyword for repeat() function


Auto-placement stack removed
Finally, after some issues with the stack mode, it’s been decided to remove it from the spec. This means that the grid-auto-flow property gets simplified, letting you determine the direction: row (by default) or column; and the packing algorithm: dense or “sparse” (if omitted).
On top of that, the grid item placement algorithm now has more explicit wording regarding the different packing modes.
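A sketch of the simplified grid-auto-flow syntax, a direction keyword plus an optional dense packing keyword (the class name is made up):

```css
.grid {
  display: grid;
  grid-auto-flow: column dense; /* fill column by column, backfilling holes */
}
```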
Fragmentation
This section had been empty for a long time, and it finally has some content.
Anyway, this is still an initial proposal, and more work is needed to settle it down.
Other
The scope of align-content and justify-content has been reviewed; they now apply to the grid tracks rather than to the grid as a single unit.

As a side note, there’s an ongoing discussion regarding the symbols used to declare named grid lines. Currently they’re parentheses, e.g.:

grid-template-columns: (first) 100px (mid central) 200px (last);

However, these parentheses cause some issues for the Sass preprocessor. The proposal of using square brackets was not accepted at the last CSS WG F2F meeting, though it’ll be revisited in the future.

Conclusion

Of course this list is not complete, and I may be missing some changes. At least, these are the most important ones from an implementor’s perspective.
As you can see, despite not having big behavioral changes during this year, the spec has been evolving and becoming more and more mature. A bunch of glitches have been fixed, and some features have been adapted thanks to feedback from users and implementors.
Thanks to the spec editors: Tab, fantasai and Rossen (and the rest of the CSS WG), for all their work and patience in the mailing list answering lots of doubts and questions.

Next year CSS Grid Layout will be hitting your browsers, but you’re still in time to provide feedback and propose changes to the spec. The editors will be more than happy to listen to your suggestions for improvements and to know what things you’re missing.
If you want first-hand information regarding the evolution of the spec, you should follow the CSS WG blog and check the minutes of the meetings where grid is discussed. On top of that, if you want all the information, you should subscribe to the CSS WG mailing list and read the mails with “[css-grid]” in the subject.

Last, in the next post I’ll talk about the work we’ve been doing during 2014 regarding the implementation in Blink and WebKit and our plans for 2015. Stay tuned!

Igalia and Bloomberg working together to build a better web


Tantek ÇelikStable 5+ years, yet any browsers do anything special with #HTML5 article aside figure footer header nav section tags?

