Planet Mozilla: On towards my next challenge…

One of the things I like where I live in London is my doctor. He is this crazy German who lived for 30 years in Australia before coming to England. During my checkup he speaks a mixture of German, Yiddish, Latin and English and has a dark sense of humour.

He also does one thing that I have massive respect for: he never just treats my symptoms with quick-acting remedies. Instead he digs and analyses until he has found their cause, and treats that instead. Often this means I suffer longer from ill effects, but it also means that I leave with the knowledge of what caused the problem. I don’t just get products pushed in my face to make things easier, along with promises of a quick recovery. Instead, I get pulled in to become part of the healing process and to own my own health in the long run. This is a bad business decision for him, as giving me more short-acting medication would mean I come back more often. What it means to me, though, is that I have massive respect for him, as he has principles.

Professor X to Magneto: If you know you can deflect it, you are not really challenging yourself

As you know if you read this blog, I left Mozilla and I am looking for a new job. As you can guess, I didn’t have to look long and hard to find one. We are superbly, almost perversely lucky as people in IT right now. We can get jobs easily compared to other experts and markets.

Hard to replace: Mozilla

Leaving Mozilla was ridiculously hard for me. I came to Mozilla to stay. I loved meeting people during my interviews who’ve been there for more than eight years – an eternity in our market. I loved the products, I am still madly in love with the manifesto and the work that is going on. A lot changed in the company though and the people I came for left one by one and got replaced by people with other ideals and ideas. This can be a good thing. Maybe this is the change the company needs.
I didn’t want to give up on these ideals. I didn’t want to take a different job where I have to promote a product by all means. I didn’t want to be in a place that does a lot of research and builds impressive-looking tools and solutions, but discards them when they aren’t an immediate success. I wanted to find a place where I can serve the web and make a difference in the lives of those who build things for the web, in day-to-day products, not in a monolithic product that tries to be the web.

Boredom on the bleeding edge

My presentations in the last year had a recurring theme: we cannot improve the web if we don’t make it easy and reliable for everybody building things on it to take part in our innovations. Only a tiny fraction of the people working on the web can use alpha or beta solutions. Even fewer can rely on setting switches in browsers or on prefixed functionality that might change on a whim. Many of us have to deliver products that are much more strict, products that have to adhere to nonsensical legal requirements.

These people have a busy job and they want to make it work. Often they have to cut corners. In many other cases they rely on massive libraries and frameworks that promise magical solutions. This leads to products that are in a working state, but not an enjoyable one.

To make the web a continuous success, we need to clean it up. No innovation and no framework will replace the web; its very nature makes that impossible. What we need to do now is bring our bleeding-edge knowledge of well-performing, maintainable code to those who do not attend every conference or spend days on Hacker News and Stack Overflow.

And the winner is…

This is why I answered a job offer I got and I will start working on the 2nd of February for a new company. The company is *drumroll*:

Microsoft. Yes, the bane of my existence as a standards protagonist during the dark days of the first browser wars. Just like my doctor, I am going to the source of a lot of our annoyances and will do my best to change the causes instead of fighting the symptoms.

My new title is “Senior Program Manager” in Microsoft’s Developer Experience and Evangelism organisation. My focus is developer outreach. I will do the same things I did at Mozilla: JavaScript, Open Web Technologies and cross-browser support.

Professor X to Magneto: you know, I've always found the real focus to be in between rage and serenity

Frankly, I am tired of people using “But what about Internet Explorer” as an excuse. An excuse to not embrace or even look into newer, sensible and necessary technology. At the same time I am bored of people praising experimental technology in “modern browsers” as a must. Best practices have to prove themselves in real products with real users. What’s great for building Facebook or GMail doesn’t automatically apply to any product on the web. If we want a standards-based web to survive and be the go-to solution for new developers we need to change. We can not only invent, we also need to clean up and move outdated technology forward. Loving and supporting the web should be unconditional. Warts and all.

A larger audience who needs change…

I’ve been asking for more outreach from the web people “in the know” to enterprise users. I talked about a lack of compassion for people who have to build “boring” web solutions. Now I am taking a step and will do my best to try to crack that problem. I want to have a beautiful web in all products I use, not only new ones. I’ve used enough terrible ticketing systems and airline web sites. It is time to help brush up the web people have to use rather than want to use.

This is one thing many get wrong: people don’t use Chrome over Firefox or other browsers because of technology features. These are more or less on a par with one another. They choose Chrome because it gives them a better experience with Google’s services. Browsers in 2015 are not only about what they can do for developers. It is more important how smooth they are for end users and how well they integrate with the OS.

I’ve been annoyed for quite a while about the Mac/Chrome-centric focus of the web development community. Yes, I use them both and I am as much to blame as the next person. The fact is, though, that there are millions of Windows users out there. There are a lot of developers who use Windows, too, because that’s what their company provides them with and that’s what their clients use. It is arrogant and elitist to say we change the web and make the lives of every developer better when our tools are only available on a certain platform. That’s not the world I see when I travel outside London or Silicon Valley.

We’re in a time of change as Alex Wilhelm put it on Twitter:

Microsoft is kinda cool again, Apple is boring, Google is going after Yahoo on Firefox, and calculator watches are back. Wtf is going on.

What made me choose

In addition to me pushing myself to fix one of the adoption problems of new web technologies from within, there were a few more factors that made this decision the right one for me:

  • The people – my manager is my good friend and former Ajaxian co-writer Rey Bango. You might remember us from the FoxIE HTML5 training video series. My first colleague in this team is also someone I worked with on several open projects and have massive respect for.
  • The flexibility – I am a remote worker. I work from London, Stockholm, or wherever I need to be. This is becoming rare, and many offers I got started with “you need to move to Silicon Valley”. Nope, I work on the web. We all could. I have my own travel budget, I propose where I am going to present, and I will be in charge of defining, delivering and measuring the developer outreach program.
  • The respect – every person who interviewed me was on time, prepared and had real, job-related problems for me to solve. There was no “let me impress you with my knowledge and ask you a thing you don’t know”. There was no “Oh, I forgot I needed to interview you right now” and there was no confusion about who I was speaking to and about what. I loved this, and in comparison to other offers it was refreshing. We are all open about our actions and achievements as developers; if you interview someone you have no clue about, then you didn’t do your research as an interviewer. I interviewed them as much as they interviewed me, and the answers I got were enlightening. There were no exaggerated promises, and they scrutinised everything I said and discussed it. When I made a mistake, I got a question about it instead of being allowed to show off or talk myself into a corner.
  • The organisation – there is no question about how much I earn, what the career prospects are, how my travels and expenses will work and what benefits I get. These things are products in use and not a Wiki page where you could contact people if you want to know more.
  • The challenge – the Windows 10 announcement was interesting. Microsoft is jumping over its own shadow with the recent changes, and I am intrigued about making this work.

I know there might be a lot of questions, so feel free to contact me if you have any concerns, or if you want to congratulate me.

FAQ:

  • Does this mean you will not be participating in open communication channels and open source any longer?

    On the contrary. I am hired to help Microsoft understand the open source world and play a bigger part in it. I will have to keep some secrets until a certain time, and that will be a recurring occurrence. But that is much the same for anyone working at Google, Samsung, Apple or any other company. Even Mozilla has secrets now; this is just how some markets work. I will keep writing for other channels, I will write MDN docs, be active on GitHub and applaud in public when other people do great work. I will also be available to promote your work. Everything I publish will be open source or Creative Commons. This blog will not be a Microsoft blog. This is still me and will always remain me.

  • Will you now move to Windows with all your computing needs?

    No, like every other person who defends open technology and freedom I will keep using my Macintosh. (There might be sarcasm in this sentence, and a bit of guilt). I will also keep my Android and Firefox OS phones. I will of course get Windows based machinery as I am working on making those better for the web.

  • Will you now help me fix all my Internet Explorer problems when I complain loud enough on Twitter?

    No.

  • What does this mean to your speaking engagements and other public work?

    Not much. I keep picking where I think my work is needed the most and my terms and conditions stay the same. I will keep a strict separation of sponsoring events and presenting there (no pay to play). I am not sure how sponsorship requests work, but I will find out soon and forward requests to the right people.

  • What about your evangelism coaching work like the Mozilla Evangelism Reps and various other groups on Facebook and LinkedIn?

    I will keep my work up there. Except that the Evangelism Reps should have a Mozilla owner. I am not sure what should happen to this group given the changes in the company structure. I’ve tried for years to keep this independent of me and running smoothly without my guidance and interference. Let’s hope it works out.

  • Will you tell us now to switch to Internet Explorer / Spartan?

    No. I will keep telling you to support standards and write code that is independent of browser and environment. This is not about peddling a product. This is about helping Microsoft to do the right thing and giving them a voice that has a good track record in that regard.

  • Can we now get endless free Windows hardware from you to test on?

    No. Most likely not. Again, sensible requests I could probably forward to the right people.

So that’s that. Now let’s fix the web from within and move a bulk of developers forward instead of preaching from an ivory tower of “here is how things should be”.

Steve Faulkner et al: The Browser Accessibility Tree

The accessibility tree and the DOM tree are parallel structures.

Roughly speaking, the accessibility tree is a subset of the DOM tree. It includes the user interface objects of the user agent and the objects of the document. Accessible objects are created in the accessibility tree for every DOM element that should be exposed to an assistive technology, either because it may fire an accessibility event or because it has a property, relationship or feature which needs to be exposed.

Generally if something can be trimmed out it will be, for reasons of performance and simplicity.

For example, a <span> with just a style change and no semantics may not get its own accessible object, but the style change will be exposed by other means.

Source: Core Accessibility API Mappings 1.1
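A toy model can make the “subset of the DOM tree” idea concrete. The sketch below is purely illustrative, not any browser’s actual algorithm, and the `role` and `name` properties are a simplification of real accessibility semantics: nodes that carry semantics get an accessible object, while purely presentational wrappers (like the style-only span above) are trimmed and their children hoisted up.

```javascript
// Toy derivation of an accessibility tree from a DOM-like structure.
// A node with semantics becomes an accessible object; a node without
// (e.g. a style-only <span>) is trimmed and its children hoisted up.
function buildAccessibilityTree(node) {
  var children = (node.children || []).reduce(function (acc, child) {
    return acc.concat(buildAccessibilityTree(child));
  }, []);
  var hasSemantics = Boolean(node.role || node.name);
  if (!hasSemantics) return children; // trimmed, like the style-only span
  return [{ role: node.role || 'generic', name: node.name || '', children: children }];
}

// A wrapper <div> around a labelled button and a style-only <span>:
var dom = {
  tag: 'div',
  children: [
    { tag: 'button', role: 'button', name: 'Play' },
    { tag: 'span', children: [{ tag: 'b', children: [] }] }
  ]
};
var a11yTree = buildAccessibilityTree(dom);
```

Running it on the example yields a single accessible object for the button; the wrapper div and the style-only span leave no trace of their own.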

Show me the accessibility tree!

On Windows you can use an object inspection tool such as aViewer to view and interrogate the accessibility tree and its roles, states and properties in Firefox, Internet Explorer and Chrome.

DOM (not including Shadow DOM) tree of <video> element (example page)

<!-- DOM tree example from Firefox -->
<video controls="" width="400">
<source src="..." type="video/mp4">
</video>

Firefox Accessibility Tree (video element)

[Firefox accessibility tree dump]

Note: You can also use DOM Inspector (a free Firefox extension) to view the accessibility tree and roles, states and properties.

Internet Explorer Accessibility Tree (video element)

[Internet Explorer accessibility tree dump]

Chrome Accessibility tree (video element)

[Chrome accessibility tree dump]

Note: You can also see a full dump of the accessibility tree, including roles, states and properties, on Chrome’s chrome://accessibility page.

Notes:

The example used is unusual as the video element has a complex UI which includes controls in the accessibility tree that are in the Shadow DOM.

Each browser presents a different accessibility tree based on the differing content of its shadow DOM. This occurs for two reasons:

  1. Each browser has differing UI for the video player controls.
  2. There are differences in the way the video element and its controls are mapped to platform accessibility APIs, and in which APIs are used.

Jeremy Keith: A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random —just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done in the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite of its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.
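Those steps can be sketched in a few lines of JavaScript. This is a rough illustration of the pattern rather than the actual code running on The Session; `requestWithDelayedSpinner`, `startRequest` and the `ui` object are hypothetical names, and the timer functions are injectable so the logic can be exercised outside a browser.

```javascript
// Delayed-loading-indicator pattern: only show the spinner if the
// response takes longer than `threshold` milliseconds.
// `startRequest(done)` kicks off the Ajax request and calls `done(data)`
// when the response arrives. `schedule`/`cancel` default to the
// browser's setTimeout/clearTimeout but can be injected for testing.
function requestWithDelayedSpinner(startRequest, ui, threshold, schedule, cancel) {
  schedule = schedule || setTimeout;
  cancel = cancel || clearTimeout;
  var spinnerShown = false;
  var timer = schedule(function () {
    spinnerShown = true;
    ui.showSpinner();            // the server is taking too long
  }, threshold);
  startRequest(function (data) {
    cancel(timer);               // response arrived: stop the countdown
    if (spinnerShown) ui.hideSpinner();
    ui.render(data);             // update the page with the new data
  });
}
```

With a fast response the timer is cancelled before it fires and the user only ever sees the new data; with a slow one the spinner appears at the threshold (around 200–250 milliseconds, per the figures later in the post) and disappears when the response lands.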

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman Group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

(Embedded video: https://www.youtube.com/embed/6koX1CXjdKo)

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

Bruce Lawson: Reading List

Anne van Kesteren: DOM: custom elements

Now that JavaScript classes and subclassing are finally maturing, there is a revived interest in custom elements. The idea behind custom elements is to give developers lifecycle hooks for elements and to enable their custom element classes to be instantiated through markup. There is also an overarching goal of being able to explain the platform, though as html-as-custom-elements demonstrates, this is extremely hard.

The first iteration of custom elements was based on mutating the prototype of a custom element object, followed by a callback that gives developers the ability to further mutate the object as needed. Google has shipped this in Chrome, but other browsers have been reluctant to follow. I created a CustomElements wiki page that summarizes where we are at with the second iteration, which will likely be incompatible with what is out there today. There are a couple of outstanding disputes, but the main one is how exactly a custom element object is to be instantiated from markup (referred to as “Upgrading”).
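For reference, the registration shape of that first iteration (Chrome’s `document.registerElement`) looked roughly like the sketch below. The `HTMLElement` and `document` objects here are plain-JavaScript stand-ins so the example can run outside a browser; they mimic only the registration and `createdCallback` behaviour, not how a browser actually upgrades elements.

```javascript
// Stand-ins so the v0-style registration shape can run outside a browser.
// In a real page, HTMLElement and document.registerElement are supplied
// by the browser (Chrome, at the time of writing).
function HTMLElement() {}
var registry = {};
var document = {
  registerElement: function (name, options) {
    // Returns a constructor whose instances use the supplied prototype
    // and get the createdCallback lifecycle hook invoked on creation.
    function CustomCtor() {
      var el = Object.create(options.prototype);
      if (typeof el.createdCallback === 'function') el.createdCallback();
      return el;
    }
    registry[name] = CustomCtor;
    return CustomCtor;
  }
};

// v0 style: mutate a prototype, then register it under a hyphenated name.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () { this.ready = true; };
var XWidget = document.registerElement('x-widget', { prototype: proto });

var el = new XWidget();
```

The second iteration discussed on the wiki page moves towards class-based constructors instead of this mutate-a-prototype approach, which is exactly where the upgrading dispute comes in.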

If you are interested in participating, most of the discussion is happening on public-webapps@w3.org. There is also some on IRC.

W3C Team blog: This week: W3C WoT initiative, Accessibility Research, Cory Doctorow Rejoins EFF, etc.

This is the 16-23 January 2015 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Other news

W3C in the Press (or blogs)

4 articles since the 16-Jan digest; a selection follows. You may read all articles in our Press Clippings page.

IEBlog: Project Spartan and the Windows 10 January Preview Build

Yesterday, we announced that Windows 10 will ship with a brand new browser, codenamed “Project Spartan.” Designed for Windows 10, Spartan provides a more interoperable, reliable, and discoverable experience with advanced features including the ability to annotate on web pages, a distraction-free reading experience, and integration of Cortana for finding and doing things online faster.

Project Spartan on Windows 10 desktop

Spartan is a single browser designed to work great across the entire Windows 10 device family - from keyboard and mouse on the Windows 10 desktop to touch, gestures, voice, controllers and sensors.

Project Spartan on Windows 10 phone with dark theme     Project Spartan on Windows 10 phone with light theme

Powered by a new rendering engine, Spartan is designed for interoperability with the modern web. We’ve deliberately moved away from the versioned document modes historically used in Internet Explorer, and now use the same markup as other modern browsers. Spartan’s new rendering engine is designed to work with the way the web is written today.

Like Windows 10 itself, Spartan will remain up to date as a service, both providing new platform capabilities, security and performance improvements, and ensuring web developers a consistent platform across Windows 10 devices. Spartan and the new rendering engine are truly evergreen.

Spartan provides compatibility with the millions of existing enterprise web sites designed for Internet Explorer. To achieve this, Spartan loads the IE11 engine for legacy enterprise web sites when needed, while using the new rendering engine for modern web sites. This approach provides both a strong compatibility guarantee for legacy enterprise web sites and a forward looking interoperable web standards promise.

We recognize some enterprises have legacy web sites that use older technologies designed only for Internet Explorer, such as custom ActiveX controls and Browser Helper Objects. For these users, Internet Explorer will also be available on Windows 10. Internet Explorer will use the same dual rendering engines as Spartan, ensuring web developers can consistently target the latest web standards.

Dual rendering engine architecture animation

What does this mean to web developers?

If you are building a public consumer-facing web site here’s what you need to know:

  1. Our new rendering engine will be the default engine for Windows 10, Spartan, and Internet Explorer. This engine has interoperability at its core and consumes the same markup you send other modern browsers. Our standards support and roadmap can be found at http://status.modern.ie.
  2. Public Internet web sites will be rendered using the new engine and modern standards; legacy Internet Explorer behaviors, including document modes, are not supported in the new engine. If your web site depends on legacy Internet Explorer behaviors, we encourage you to update to modern standards.
  3. Our goal is interoperability with the modern web and we need your help! You can test the new engine via the Windows Insider Program or using http://remote.modern.ie. Please let us know (via Connect or Twitter) when you find interoperability problems so we can work with the W3C and other browser manufacturers to ensure great interoperability.
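A practical consequence of point 2 is replacing any branching on IE’s versioned document modes (for example, reading `document.documentMode`) with detection of the capability itself. The following is a minimal, hedged sketch; the `supports` helper and the stub object are hypothetical, and in a real page you would test the actual DOM objects:

```javascript
// Detect the capability you need instead of sniffing a browser version
// or a versioned document mode.
function supports(obj, member) {
  return obj != null && member in obj;
}

// Stub standing in for a DOM element so this runs outside a browser;
// in a page you would pass the real element or document instead.
var fakeElement = { addEventListener: function () {} };

var useModernEvents = supports(fakeElement, 'addEventListener');
```

Code gated this way keeps working in the new engine, in Spartan and in any other modern browser, because it asks about the feature rather than the browser.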

New features and fixes in the January Insider Update

On Friday, we’re also rolling out a new preview build to Windows 10 Insiders. This new preview will also be available on RemoteIE soon. This build doesn’t have Project Spartan yet, but does have lots of updates to the new web rendering engine that Spartan will use. We started testing our new rendering engine by rolling it out to a portion of Insiders using the Windows Technical Preview in November.

Since that time, we’ve received over 12,000 feedback reports through the smiley face icon alone. This new build has over 2000 changes to the new platform, largely influenced by that feedback. In addition to many fixes, there are also several new platform features we are thrilled to be releasing in the updated preview:

Additionally, you’ll find updated F12 developer tools that include the updated UI we shipped to IE11 users last month, as well as several new features and improvements. Here are a few of our favorites:

  • New and Improved Network Tool—capture and debug network traffic with new UX and capabilities, such as auto-start, a content type filter, and error highlighting.
  • HTML & CSS Pretty Printing—just as you’ve been able to nicely reformat minified JavaScript in the debugger, you’ll now be able to do this for HTML and CSS.
  • Async Callstacks for Events and Timers—quickly view the “async callstack” to connect the dots between event dispatch and the original addEventListener call or between setting a timer and the timer being fired.
  • Sourcemaps for Styles and in the Memory Profiler—jump to your original sources, such as TypeScript or SASS, directly from the Styles pane or Memory Profiler tools.
  • Find Reference and Go To Definition—jump directly to a function call’s definition or find the references to a given variable.

New F12 network tools

With these improvements, we’re increasing the number of Insiders that get the new engine as we work towards this as the default for all users. If you’re curious and want to opt-in now, remember to navigate to about:flags and set “Enable Experimental Web Platform Features” to Enabled.

We’re excited to share our continued progress with you and to introduce Project Spartan to the Microsoft family. Please continue to share your feedback via Twitter, UserVoice (feature requests) and Connect (bug reports) and help shape our next browser. We’ll also be holding our next Twitter #AskIE session on Tuesday, January 27th from 10AM-12PM PST so you can ask questions to the team. See you there!

— Jason Weber, Group Program Manager, Internet Explorer

Planet Mozilla: Browsers, Services and the OS – oh my…

Yesterday’s two-hour Windows 10 briefing by Microsoft had some very interesting things in it (The Verge did a great job live-blogging it). I was waiting for lots of information about the new browser, codenamed Spartan, but most of it was about Windows 10 itself. This is, of course, understandable, and shows that I maybe care about browsers too much. There was interesting information about Windows 10 being a free upgrade, Cortana integration on all platforms, and streaming games from Xbox to Windows and vice versa. The big wow factor at the end of the briefing was HoloLens, which makes interactivity like Iron Man had in his lab not that far-fetched any longer.

[HoloLens in action]

For me, however, the whole thing was a bit of an epiphany about browsers. I’ve always seen browsers as my main playground and got frustrated by lack of standards support across them. I got annoyed by users not upgrading to new ones or companies making that hard. And I was disappointed by developers having their pet browsers to support and demand people to use the same. What I missed out on was how amazing browsers themselves have become as tools for end users.

For end users the browser is just another app. The web is no longer the thing alongside your computing interaction; it is just a part of it. Just because I spend most of my day in the browser doesn’t make it the most important thing. In essence, the interaction between the web and the hardware you have is the really interesting part.

A lot of innovation I have seen over the years that was controversial at that time or even highly improbable is now in phones and computers we use every day. And we don’t really appreciate it. Google Now, Siri and now Microsoft’s Cortana integration into the whole system is amazingly useful. Yes, it is also a bit creepy and there should be more granular insight into what gets crawled and what isn’t. But all in all isn’t it incredible that computers tell us about upcoming flights, traffic problems and remind us about things we didn’t even explicitly set as a reminder?

[Spartan demo screenshot by The Verge]

The short, eight-minute Spartan demo in the briefing showed some incredible functionality:

  • You can annotate web pages with a stylus or mouse, or add comments to any part of the text.
  • You can then collect these annotations, share them with friends or view them offline later.
  • Reading mode turns the web into a one-column, easy-to-read version. Safari and mobile browsers like Firefox Mobile have this, and third-party services like Readability did it before.
  • Firefox’s awesome bar and Chrome’s Google Now integration are also echoed in Windows, with Cortana available anywhere in the browser.

Frankly, not all of that is new, but I had never used these features. I was too bogged down in what browsers cannot do instead of checking what is already possible for normal users.

I’ve mentioned this a few times in talks lately: a lot of the innovation of add-ons, apps and products is merging with our platforms. Where in the past it was a sensible idea to build a weather app and expect people to go there or even pay for it, we now get this kind of functionality with our platforms. This is great for end users, but it means we have to be up to speed with what the user interfaces of the platforms look like these days instead of assuming we need to invent all the time.

Looking at this functionality made me remember a lot of things promised in the past but never really used (at least by me or my surroundings):

  • Back in 2001, Microsoft introduced Smart Tags, which caused quite a stir in the writing community, as it allowed third-party commenting on your web content without notifying you. Many a web site added the MSSmartTagsPreventParsing meta tag to disallow this. The annotation feature of Spartan is now this on steroids. Thirdvoice (wayback machine archive) was a browser add-on that did the same, but got creepy very quickly by offering you things to buy. Weirdly enough, Awesome Screenshot, an annotation plug-in, now also gets very creepy by offering you price comparisons for your online shopping. This shows that functionality like this doesn’t seem to be viable as a stand-alone business model, but very much makes sense as a feature of the platform.
  • Back in 2006, Ray Ozzie of Microsoft at eTech introduced the idea of the Live Clipboard. It was this:
    [Live Clipboard…] allows the copy and pasting of data, including dynamic, updating data, across and between web applications and desktop applications.
    The big thing about this was that it would have been an industrial size use case for Microformats and could have given that idea the boost it needed. However, despite me pestering Chris Wilson of – then – Microsoft at @media AJAX 2006 about it, this never took off. Until now, it seems – except that the clippings aren’t live.
  • When I worked at Yahoo, Browser Plus came out of a hackday: an extension to browsers that allowed easier file uploads and drag and drop between browser and OS. It also gave you desktop notifications. One of the use cases shown at the hack day was to drag and drop products from several online stores and then check out with all of them in one step. This, still, is not possible. I’d wager that legal problems and tax reasons are the main blockers there. Drag and drop, uploads and desktop notifications are now a reality without add-ons. So we’re getting there.

This year will be very exciting. Not only do HTML5 and JavaScript get new features all the time, but browsers seem to be becoming much, much smoother at integrating into our daily lives. This spells doom for a lot of apps. Why use an app when the functionality is already available with a simple click or voice command?

Of course, there are still many issues to fix, mainly offline and slow-connection use cases. Privacy and security are another problem. Convenient as it is, there should be some way to know what is listening in on me right now and where the data goes. But I, for one, am very interested in the current integration of services into the browser and the browser into the OS.

Bruce LawsonWhy we can’t do real responsive images with CSS or JavaScript

I’m writing a talk on <picture>, srcset and friends for Awwwards Conference in Barcelona next month (yes, I know this is unparalleled early preparation; I’m heading for the sunshine for 2 weeks soon). I decided that, before I get on to the main subject, I should address the question “why all this complex new markup? Why not just use CSS or JavaScript?” because it’s invariably asked.

But you might not be able to see me in Catalonia to find out, because tickets are nearly sold out. So here’s the answer.

All browsers have what’s called a preloader. As the browser is munching through the HTML – before it’s even started to construct a DOM – the preloader sees “<img>” and rushes off to fetch the resource before it’s even thought about speculating about considering doing anything about the CSS or JavaScript.

It does this to get images as fast as it can – after all, they can often be pretty big and are one of the things that boosts the perceived performance of a page dramatically. Steve Souders, head honcho of Velocity Conference, bloke who knows loads about site speed, and renowned poet called the preloader “the single biggest performance improvement browsers have ever made” in his sonnet “Shall I compare thee to a summer’s preloader, bae?”

So, by the time the browser gets around to dealing with CSS or script, it may very well have already grabbed an image – or at least downloaded a fair bit. If you try

<img id=thingy src=picture.png alt="a mankini">
…
@media all and (max-width:600px) {
  #thingy {content: url(medium-res.png);}
}

@media all and (max-width:320px) {
  #thingy {content: url(low-res.png);}
}

you’ll find the correct image is selected by the media query (assuming your browser supports content on simple selectors without :before or :after pseudo-elements), but the preloader has already downloaded the resource pointed to by the <img src>, and then the one the CSS replaces it with is downloaded, too. So you get a double download, which is not what you want at all.

Alternatively, you could have an <img> with no src attribute and then add it in with JavaScript – but then you’re not fetching the resource until much later, delaying the loading of the page. Because your browser won’t know the width and height of the image that the JS will select, it can’t leave room for it when laying out the page, so you may find that your page gets reflowed and, if the user was reading some textual content, she might find the stuff she’s reading scrolls off the page.

So the only way to beat the preloader is to put all the potential image sources in the HTML and give the browser all the information it needs to make the selection there, too. That’s what the w and x descriptors in srcset are for, and the sizes attribute.
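As a rough sketch of the selection that extra information enables, here is the gist in JavaScript. This is illustrative, not the spec’s algorithm: real browsers may also weigh cache contents and bandwidth, and the names here are made up.

```javascript
// Simplified sketch of how a browser can pick a source from srcset's
// w descriptors plus the sizes attribute. Not the spec algorithm;
// candidate names and the density math are illustrative only.
function pickSource(candidates, slotWidthPx, dpr) {
  // candidates: [{ url: 'small.png', w: 320 }, ...] parsed from srcset
  const needed = slotWidthPx * dpr; // device pixels the layout slot requires
  // Choose the smallest candidate that still covers the needed width,
  // falling back to the largest one available.
  const sorted = [...candidates].sort((a, b) => a.w - b.w);
  return (sorted.find(c => c.w >= needed) || sorted[sorted.length - 1]).url;
}

const candidates = [
  { url: 'small.png', w: 320 },
  { url: 'medium.png', w: 640 },
  { url: 'large.png', w: 1280 },
];
console.log(pickSource(candidates, 320, 1)); // small.png
console.log(pickSource(candidates, 320, 2)); // medium.png (retina needs 640px)
```

The point is that everything this choice depends on sits in the markup, so the preloader can make it before CSS or JavaScript are even parsed.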

Of course, I’ll explain it with far more panache and mohawk in Barcelona. So why not come along? Go on, you know you want to and I really want to see you again. Because I love you.

Planet MozillaFirefox 36 in beta

Firefox 36 (Desktop and Mobile) is now available on the beta channel.

The release notes are published on the Mozilla website:

This version introduces many new HTML5/CSS features, in particular the Media Source Extensions (MSE) API, which allows native HTML5 playback on YouTube. The new preferences implementation is also enabled for the first half of the beta cycle; please help us test this new feature!

On the mobile version of Firefox, we are also shipping the new Tablet user interface!

Download this new version:

And as usual, please report any issues.

W3C Team blogThis week: W3C TAG election, HTML5 Japanese CG, W3C in figures (2014), etc.

This is the 9-16 January 2015 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

  • n/a

W3C in the Press (or blogs)

4 articles since the 9-Jan digest; see one below. You may read all articles in our Press Clippings page.

Planet MozillaVideo Subtitles and Localization

Let’s talk about localization and subtitles – not captions. From Wikipedia:

“Subtitles” assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they only transcribe dialogue and some on-screen text. “Captions” aim to describe to the deaf and hard of hearing all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking – along with any significant music or sound effects using words or symbols.

So far I worked on two projects that involved subtitles – Web We Want and Firefox: Choose Independent – and this is what I learned in the process.

The Process

Step 1: Provide Source Content

You need the video (obviously) and English subtitles with the correct timing (less obvious). This means that the picture might not be final (sometimes they call this picture lock quality), but the audio track and its timing must be.

Step 2: How Do I Localize Subtitles?

Currently the most common format for subtitles is SubRip, and it’s really simple: sequential number of the subtitle, timing (start-end), text.

For example this is the beginning of the Web We Want .srt file:

1
00:00:00,000 --> 00:00:00,864
THE WEB WE WANT

2
00:00:00,864 --> 00:00:03,388
THE WEB WE WANT
an open letter

3
00:00:05,503 --> 00:00:07,406
I am not a data point

4
00:00:07,642 --> 00:00:09,503
to be bought and sold.

Amara is a great tool for localizing subtitles: you get an interactive editor and timeline, and you can adapt the timing of each sentence to your needs while watching the video. You can also choose to automatically sync subtitles between Amara and YouTube.
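The SubRip format above is simple enough that a naïve parser fits in a few lines. A sketch that ignores real-world edge cases (BOMs, styling tags, malformed blocks):

```javascript
// Naive SubRip (.srt) parser sketch: split on blank lines, then read
// index, timing and text from each block. Real files need more care.
function parseSrt(srt) {
  return srt.trim().split(/\r?\n\r?\n/).map(block => {
    const [index, timing, ...text] = block.split(/\r?\n/);
    const [start, end] = timing.split(' --> ');
    return { index: Number(index), start, end, text: text.join('\n') };
  });
}

const cues = parseSrt(`1
00:00:00,000 --> 00:00:00,864
THE WEB WE WANT

2
00:00:00,864 --> 00:00:03,388
THE WEB WE WANT
an open letter`);
console.log(cues[1].start); // 00:00:00,864
```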

Step 3: Host the Video on Youtube

At this point subtitles are available, someone just needs to load them on YouTube and display the video on a web page.

Sounds simple, doesn’t it? What could go wrong?

Potential Localization Issues

The first issue is timing: if the English text requires 2 seconds to be read, that might not be enough to read the same sentence in Spanish, German or other verbose languages. That’s something you need to keep in mind from day one while producing the video and audio track.
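One way to catch timing problems early is to check every cue against a reading-speed budget. A sketch (the 17 characters-per-second threshold below is a common rule of thumb, not an official limit):

```javascript
// Sketch: flag subtitle cues whose text cannot comfortably be read in
// the time allotted. The chars/second budget is a rule of thumb only.
function toSeconds(t) { // "00:00:03,388" -> 3.388
  const [h, m, rest] = t.split(':');
  const [s, ms] = rest.split(',');
  return (+h) * 3600 + (+m) * 60 + (+s) + (+ms) / 1000;
}

function tooFast(cue, charsPerSecond = 17) {
  const duration = toSeconds(cue.end) - toSeconds(cue.start);
  return cue.text.length / duration > charsPerSecond;
}

const cue = { start: '00:00:05,503', end: '00:00:07,406', text: 'I am not a data point' };
console.log(tooFast(cue)); // false: 21 chars in ~1.9s is comfortable
```

Running a check like this over the translated .srt files would flag the verbose-language cues before they ever reach YouTube.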

Split sentences: sometimes a sentence is split into multiple parts because of its length. Unfortunately the sentence structure might be completely different in the target language, and some details will get lost in translation. Consider for example the final frame of the Choose Independent video:

Choose Independent frame

«It’s how we keep our independence online…» (pause, switch to burning fox) «…burning bright»

This works nicely in English since “burning” is perfectly synced with the picture, but most locales won’t be able to obtain the same result. Small bits, but still reducing the impact of the message.

In general, watching a subtitled video is a sub-optimal experience. If you plan to have content in video format, don’t focus your communication exclusively on that.

Why YouTube Is Not Great

For the Firefox Independent page we force YouTube to load subtitles with cc_load_policy.

A few hours after the launch, I started receiving complaints from localizers saying that YouTube was loading subtitles with the wrong localization for some languages: for example nl for fy-NL (Dutch instead of Frisian), de for rm (German instead of Romansh), en-US for cy (English instead of Welsh).

This is how YouTube’s embed works:

  • If you’re logged in to Google, you’ll get the locale you chose for YouTube. For example, I get English subtitles even if I’m using a browser in Italian (a quick test is to watch the video in private mode).
  • If you’re not logged in, you’ll get the first available locale based on the Accept-Language header sent by your browser. For example, for ‘dsb’ (Lower Sorbian) this is “dsb, hsb, de, en-US, en”, and German is the first available language on the list.
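That first-match walk can be sketched like this (simplified: real negotiation also parses quality values such as de;q=0.8, and should try base-language fallbacks like fy-NL to fy, which is my guess at what YouTube misses):

```javascript
// Sketch of first-match locale negotiation over an Accept-Language list.
// Simplified: ignores q-values and base-language fallback (fy-NL -> fy),
// one plausible reason some locales miss a match on YouTube.
function pickLocale(acceptLanguages, available) {
  return acceptLanguages.find(lang => available.includes(lang)) || null;
}

// Lower Sorbian browser: dsb subtitles are not available, so the walk
// should land on German.
const accept = ['dsb', 'hsb', 'de', 'en-US', 'en'];
console.log(pickLocale(accept, ['de', 'en-US', 'it'])); // de
```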

For some unknown reason YouTube wasn’t finding a match in Accept-Language and fell back to a different language for those locales, even though the subtitles were localized and loaded. My only guess would be a mismatch in the locale format understood by YouTube, like fy_NL vs. fy-NL/fy.

Then I found out that we can send a parameter called ‘cc_lang_pref’ to force the language (not exactly well documented), and that fixes cy, fy-NL and rm. For example, if you open https://www.mozilla.org/it/firefox/independent you’ll get the Italian subtitles even if you’re using a browser in a different language. Since we have good locale detection on top of mozilla.org, this makes sense and lets us make the most of our localization teams’ work.
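As a sketch, forcing the subtitle language comes down to adding that parameter to the embed URL. Treat the parameter names and URL shape here as illustrative; the video ID is a placeholder:

```javascript
// Sketch: build a YouTube embed URL that forces captions on and picks
// the caption language via cc_lang_pref. Parameter names as I
// understand YouTube's embed options; the video ID is a placeholder.
function embedUrl(videoId, locale) {
  const params = new URLSearchParams({
    cc_load_policy: '1',   // always show captions
    cc_lang_pref: locale,  // preferred caption language
  });
  return `https://www.youtube.com/embed/${videoId}?${params}`;
}

console.log(embedUrl('VIDEO_ID', 'it'));
// https://www.youtube.com/embed/VIDEO_ID?cc_load_policy=1&cc_lang_pref=it
```

With good server-side locale detection, mozilla.org can fill in the locale per page, which is exactly the setup described above.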

But then YouTube stops being smart: if the requested locale is missing, it doesn’t rely on Accept-Language but just falls back to English. So Welsh (cy) is now getting subtitles in the expected language, while Lower Sorbian (dsb) is getting English instead of German, and users can only switch language manually. Far from great.

Talking about YouTube, I find it quite silly to claim “we support 163 languages” without providing the list anywhere. I have at least 5 locale codes that are not supported or recognized (ast, dsb, es-AR, es-CL, hsb).

How Do We Fix It?

I think the solution will be to move away from YouTube and its limitations, use native or alternative video players (example), use subtitles in VTT format, and take full control of the entire chain. But these are discussions and experiments that still need to start.

Planet MozillaA time of change…

“The suspense is killing me,” said Arthur testily.
Stress and nervous tension are now serious social problems in all parts of the Galaxy, and it is in order that this situation should not in any way be exacerbated that the following facts will now be revealed in advance.
Hitchhiker’s Guide to the Galaxy

never do anything halfway

I am not returning to Mozilla in February but will go on to bring the great messages of an open web somewhere else. Where, I do not know yet. I am open to offers and I am interested in quite a few things happening right now. I want something new, with a different audience. A challenge to open and share systems and help communication where the current modus operandi is to be secretive. I want to lead a team and have a clear career path for people to follow. If you have a good challenge for me, send me some information about it.

I love everything Mozilla has done and what it stands for. I also will continue being a Mozillian. I will keep in touch with the great community and contribute to MDN and other open resources.

Of course there are many reasons for this decision, none of which need to go here. Suffice to say, I think I have done in Mozilla what I set out to do and it now needs other people to fulfil the new challenges the company faces.

I came to Mozilla with the plan to make us the “Switzerland of HTML5”, or the calming negotiator and standards implementer in the browser wars raging at that time. I also wanted to build an evangelism team and support the community in outreach on a basis of shared information and trust. I am proud of having coached a lot of people in the Mozilla community. It was very rewarding seeing them grow and share their excitement. It was great to be a spokesperson for a misfit company. A company that doesn’t worry about turning over some apple-carts if the end result means more freedom for everyone. It was an incredibly interesting challenge to work with the press in a company that has many voices and not one single communication channel. It was also great to help a crazy idea like an HTML5-based mobile operating system come to fruition and become a player people take seriously.

Returning to Mozilla, I’d have to start from scratch with that. Maybe it is time for Mozilla not to have a dedicated evangelism team. It is more maintainable to build an internal information network. One that empowers people to represent Mozilla and makes it easy to always have the newest information.

I am looking forward to seeing what happens with Mozilla next. There is a lot of change going on and change can be a great thing. It needs the right people to stand up and come up with new ideas, have a plan to execute them and a way to measure their success.

As for me, I am sure I will miss a few things I came to love working for Mozilla. The freedoms I had. The distributed working environment. The ability to talk about everything we did. The massive resource that is enthusiasts world-wide giving their time and effort to make the fox shine.

I am looking forward to being one of them and to enjoying the support the company gives me. Mozilla will be the thing I want to support and point to as a great resource to use.

Faster speed leads to more disappointment

Making the web work, keeping our information secure and private and allowing people world-wide to publish and have a voice is the job of all the companies out there.

As enthusiastic supporters of these ideas we’re not reaching the biggest perpetrators. I am looking forward to giving my skills to a company that needs to move further into this mindset rather than having it as its manifesto. I also want to support developers who need to get a job done in a limited and fixed environment. We need to make the web better by changing it from the inside. Every day people create, build and code a part of the web. We need to empower them, not to tell them that they need a certain technology or change their ways to enable something new.

The web is independent of hardware, software, locale and ability. This is what makes it amazing. This means that we cannot tell people to use a certain browser to get a better result. We need to find ways to get rid of hurtful solutions by offering upgrades for them.

We have a lot of excuses why things break on the web. We fail to offer solutions that are easy to implement, mature enough to use and give the implementers an immediate benefit. This is a good new challenge. We are good at impressing one another, time to impress others.

“Keep on rocking the free web”, as Potch says every Monday in the Mozilla meeting.

Planet MozillaWebdev Extravaganza – January 2015

Note: Apologies for the lack of posts in December; both the Webdev Extravaganza and Beer and Tell were cancelled due to a company-wide workweek and the holidays.

Once a month, web developers from across Mozilla get together to compete in a S’More-themed cooking contest. While we concoct a variety of meals out of the basic ingredients of graham cracker, marshmallow, and chocolate, we find time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Socorro Switch from HBase to S3

Lonnen shared with us the news that Socorro (Crash-Stats) has switched from storing crash data in HBase to Amazon S3. Socorro has roughly 150 terabytes of data, and HBase was the single largest source of problems for the site. Even with the extra latency of leaving the data center to read crash data, Socorro hasn’t seen any major issues with the new system.

about:home Fundraising Snippets

As part of the end-of-year fundraising push, Osmose worked with the amazing team over at the Mozilla Foundation to test and deploy several snippets encouraging Firefox users to donate to Mozilla. Throughout the campaign, the fundraising team has been maintaining a webpage and blog with info about how much we raised and what methods we used to optimize our fundraising.

Mozilla.org Landing Pages and Upcoming Tours

Craigcook shared news about a new landing page for Firefox OS on TVs that landed on Mozilla.org for CES. There’s also a not-yet-but-soon-to-be-launched landing page announcing Firefox Hello.

QMO on Shared WordPress

Craig also dropped a note about QMO being moved over to the shared WordPress instance instead of living on its own separate instance. This will make QMO easier to maintain and less fragile.

Peep 2.0

ErikRose proudly announced the 2.0 release of Peep, a wrapper around pip that cryptographically ensures that libraries you install are the same as the ones installed by the original developer. The 2.0 release primarily includes a security fix where the setup.py file of a package would be executed even if the package did not match the expected hash. Update today!

Air Mozilla Extracted Screenshots for Video Icons

Peterbe informed us that Air Mozilla now has thumbnails available for use as video icons instead of the previously-used static logos. If you have any archived videos, you can visit their “Edit event data” page and select which thumbnail you’d like to use as the icon for the video.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

ElasticUtils is Deprecated

Willkg gave a non-verbal update to inform us that ElasticUtils is deprecated, and he’s stepping down as maintainer. There’s a blog post explaining why, and he recommends that users switch to elasticsearch-dsl-py instead.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor. Or rather, we would, but no one new joined this week. Doh!

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Custom Context Menu Items

ErikRose wanted to share a neat article he found by davidwalsh describing how to add custom items to the context menu. Browser support is currently limited to Firefox.

DevOps Talk?

Jgmize asked about hosting a possible DevOps Extravaganza, but it was reinforced that DevOps-related topics are still relevant to developing the web, and thus can be talked about during the Webdev Extravaganza. If you’re doing something interesting related to DevOps, feel free to share during the meeting!

SUMO Developers Flattened

Lonnen shared the news that r1cky has transitioned from being a manager to being a full-time engineer again. The SUMO and Input teams that were under him are now all on the Web Engineering team under lonnen. Hooray for flat hierarchies!


This month’s winning entry in the cooking contest was a S’mores-flavored variety of Soylent that provides all the nutrients and calories you probably need but is also literally just S’mores thrown in a blender. Genius!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

W3C Team blogLast week: W3C and OGC to work on Spatial Data on the Web, WAI Tutorials, W3Training, etc.

This is the 2-9 January 2015 edition -after a hiatus on 19 December 2014- of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality

  • Ars Technica: Title II for Internet providers is all but confirmed by FCC chairmanFederal Communications Commission (FCC) Chairman Tom Wheeler implied that Title II of the Communications Act will be the basis for new net neutrality rules governing the broadband industry. […] proposed rules […] will be circulated within the Commission on February 5 and voted on on February 26.

W3C in the Press (or blogs)

5 articles since the last digest; a selection follows. You may read all articles in our Press Clippings page.

Bruce LawsonReading List

Planet MozillaA Device Blind Users Will Love

The Internet is a global public resource that must remain open and accessible.

<footer style="text-align: right;">— Mozilla manifesto</footer>

Mozilla invests in accessibility, because it’s the right thing to do.

We have staff, a team of engineers, who focus exclusively on accessibility in our products and have a positive influence on the general accessibility of the web. This has paid off well: Firefox is well regarded as a leader in screen reader support on the desktop and on Android. We have the best HTML5 accessibility support in our browser, and we are close to having a fully functional screen reader in Firefox OS.

Mozilla accessibility logo

I say “close”, because we are not yet there. Most websites are fairly accessible with little to no effort from the site developers. The document model of the web is relatively simple and malleable enough that blind users can access them through screen readers. Advanced web applications are a whole other story: developers are required to be much more mindful about how they are authored and to account for users with disabilities when designing them. The most recognized standard for making accessible rich internet applications is ARIA (Accessible Rich Internet Applications), which allows augmenting markup with attributes that help assistive technologies (such as screen readers) understand the state of the app and relay it to the user.

In Firefox OS we have a suite of core apps called Gaia that is the foundation for Firefox OS’s user interface. It is really one giant web app, perhaps one of the biggest out there. Since our mission dictates that we make our products accessible, we have embarked on that journey: we created a screen reader for Firefox OS, and we got to work on making Gaia screen-reader friendly. It has been a long and Sisyphean process, where we would arrive at one module in Gaia, learn the code, fix some issues, and move on to the next module. It feels something like this:

<figure class="wp-caption aligncenter" id="attachment_634" style="width: 460px;">helicopter dumps water on a grass fireA California Department of Forestry helicopter dumps water on a grass fire in Benicia. (Robinson Kuntz/Daily Republic)</figure>

Firefox OS has grown tremendously in a couple of years. Things never slowed down, and we were always revamping one app or another, trying out something new, and evolving rapidly. This means that accessibility was always one step behind. If we got an app accessible in version n, n+1 was around the corner with a whole new everything. Besides working on Gaia, we have always been looping back to our screen reader, making it more robust and adding features. We have consistently been straddling the gap:

The gap between Firefox OS and the screen reader

Firefox OS has achieved some amazing milestones in its short life. Early in the project, there was still a hushed uncertainty. Did we over promise? Could we turn a proof of concept into a mass-market device? There were so many moving parts for a version one release. Accessibility was not a product priority.

The return on investment

When I think about making our products accessible for the people that can’t see or to help a kid with autism, I don’t think about a bloody ROI.

<footer style="text-align: right;">— An angry Tim Cook</footer>

Take 5 seconds, and let that sink in. Apple is not a charity, they are one of the most profitable companies on the planet. Still, they understand the social value of making their products accessible.

Yet, I will argue that there is a bloody return on investment in accessibility.

Mobile is changing our social perception of disability and blurring the line between permanent and temporary barriers. The prevailing assumption used to be that your user would sit in front of a 14″ monitor with a keyboard, a mouse and their undivided attention. But today there can be no assumptions; an app needs to be usable in many situations that impair the user in comparison to a desktop setup:

  • A user will browse the web on a small, 3.5″ device with no keyboard, and only their inaccurate fat fingers as a pointing device for activating links.
  • A driver will need to keep their eyes on the road and cannot interact with complex interfaces.
  • A cyclist on a cold winter day will have gloves and will want to look up where they are going on a map.
  • A pedestrian will look up a nearby restaurant on a sunny day with plenty of glare making it hard to read their phone’s screen.
<figure class="wp-caption aligncenter" id="attachment_655" style="width: 460px;">A driver texting in trafficThis shouldn’t happen.</figure>

The edge case of permanently impaired users is eclipsed by the common mobile use case, which needs to appeal to users with all sorts of temporary impairments: motor, visual and cognitive. Apple understands that with Siri, and Google does too with Google Now. In Firefox OS, sooner or later we will need a good voice input/output story.

I made a case for accessibility, and I could probably stop here. But I won’t. Because the real benefit of an accessible device is priceless.

<figure class="wp-caption aligncenter" id="attachment_659" style="width: 460px;">Graph showing impact on blind users in contrast to other usersWhile blind smart phone users are a small fraction of the general population, the impact on their lives is so much greater.</figure>

We all benefit from that smart phone in our pocket. The first iPhone was a real revolution. It allows us to check mail on the go, share our lives on social networks, ignore our family, and pretend we are doing something important at awkward parties. But for blind users, smart phones have increased their quality of life in profound and amazing ways. Blind smart phone owners are more independent and less isolated, and they can participate in online life like never before. Prior to smart phones, blind folks depended on very expensive gadgets for mobile computing. Today, a smart phone with a few handy apps can easily replace a $10,000 specialty device.

Smart phones in the hands of blind users is a very big deal.

Three blind iphone owners

What we need to do

To make this happen, every decision by our product team, every design from UX, and every line of code from developers needs to account for the blind user experience. This isn’t as big a deal as it sounds; screen reader support is just another thing to account for, like localization. We know today that designing and developing UI for right-to-left languages takes some consideration, especially if you live in a left-to-right world.

What we need is project-wide consciousness around accessibility. It is great that we have an accessibility team, and I think Mozilla benefits from it. But this does not let anyone else off the hook from understanding accessibility, embedding it in our products, and embracing it as a value.

I fear that this post will disappoint because I won’t get into how blind users use smart phones, and how should developers account for the screen reader. I have written in the past about this, and Yura has some good posts on that as well. And yes, we need to step up our game, document and communicate more.

But for now, here are two things you could do to get a better picture:

  1. If you own an Android device or iPhone, turn on the screen reader, close your eyes and learn to use it. Challenge yourself to complete all sorts of tasks with your screen reader on. Test the screen reader’s limits.
  2. With your Firefox OS device, turn on the screen reader. It works in the same fashion as the iOS or Android one does. Check your latest creation, and see what is broken and missing.

2015 is going to be a great year for Firefox OS. I have already heard all sorts of product ideas that have the potential of greatness. We are destined to ship something amazing. But for blind users, it could be life-changing.


Planet WebKitXabier Rodríguez Calvar: Streams API in WebKit at the Web Engines Hackfest

Yes, I know, I should have written this post before you know, blah, blah, excuse 1, blah, excuse 2, etc. ;)

First of course I would like to thank Igalia for allowing me to use company time to attend the hackfest and meet such a group of amazing programmers! It was quite intense, and I tried to give my best, though for different reasons (coordination, personal and so on) I missed some sessions.

My purpose at the hackfest was to work with Youenn Fablet from Canon on implementing the Streams API in WebKit. When we began to work together in November, Youenn already had a prototype working with some tests, so the idea was to take that and complete, polish and ship it. Easy, huh? Not so…

What is the Streams API? As you can read in the spec, the idea is to provide a common high-level API for handling different kinds of streams. Those streams can be a mapping of low-level I/O system operations, or they can be created directly from JavaScript.

Fancy things you can do:

  • Create readable/writable streams mapping different operations
  • Read/write data from/to the streams
  • Pipe data between different streams
  • Handle backpressure (controlling the data flow) automagically
  • Handle chunks as the web application sees fit, including different data types
  • Implement custom loaders to feed different HTML tags (images, multimedia, etc.)
  • Map some existing APIs to Streams. XMLHttpRequest would be a wonderful first step.

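As a sketch of the first two bullet points above (the spec-level JavaScript API, not WebKit’s internal implementation — the names and chunks are illustrative), creating and reading a stream looks like this:

```javascript
// Create a ReadableStream directly from JavaScript: the underlying
// source enqueues two string chunks and then closes the stream.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("Hello ");
    controller.enqueue("Streams!");
    controller.close(); // no more chunks will follow
  }
});

// Drain a stream chunk by chunk through its reader.
async function readAll(s) {
  const reader = s.getReader(); // locks the stream while reading
  const chunks = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return chunks;
    chunks.push(value);
  }
}

readAll(stream).then(chunks => console.log(chunks.join("")));
```

Reading is asynchronous, and acquiring a reader locks the stream to a single consumer; that lock is part of how the spec keeps backpressure manageable.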
First thing we did after the prototype was defining a roadmap:

  • A general ReadableStream that you can create in JavaScript and read from
  • XMLHttpRequest integration
  • Loaders for some HTML tags
  • WritableStream
  • Piping operations

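The last two roadmap items can likewise be sketched at the spec level (again a hypothetical example, not the WebKit code itself): piping connects a readable stream to a writable one, with backpressure handled by the pipe.

```javascript
// Pipe a ReadableStream into a custom WritableStream; pipeTo()
// pulls chunks from the source and hands them to the sink's
// write() callback one at a time, respecting backpressure.
const source = new ReadableStream({
  start(controller) {
    for (const chunk of ["a", "b", "c"]) controller.enqueue(chunk);
    controller.close();
  }
});

const received = [];
const sink = new WritableStream({
  write(chunk) {
    received.push(chunk); // called once per chunk, in order
  }
});

// The promise resolves once the source is exhausted and closed.
source.pipeTo(sink).then(() => console.log(received.join("")));
```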
As you can see in bugzilla, we are close to finishing the first point, which took quite a lot of effort because it required:

  • Code cleaning
  • Making it build in debug
  • Improving the tests
  • Writing the promises based constructor
  • Fixing a lot of bugs

Of course we didn’t do all of this at the hackfest; only Chuck Norris could have done that. The hackfest provided the opportunity to meet Youenn in person, work side by side, and discuss different problems and possible strategies to solve them: for example, error management, queueing chunks and handling their sizes, and so on, which are not trivial given the complexity created by the flexibility of the API.

After the hackfest we continued working and, as I said before, you can find the result in bugzilla. We hope to land this soon and to continue working on the topic within the current roadmap.

To close the topic of the hackfest, it was a pleasure to work with so many awesome web engine hackers, and I would like to finish by thanking the sponsors, Collabora and Adobe, and especially my employer, Igalia, which was both a sponsor and the host.

Steve Faulkner et al: Notes on providing alt text for twitter images

I use twitter a lot via the twitter web UI. Often I see images in my twitter stream that contain interesting information and text content. Unsurprisingly this content is not available to people who cannot see the images or have difficulty interpreting graphical content.

Unfortunately the twitter UI does not provide a built in method for providing text alternatives using the standard HTML methods for doing so. You cannot add an alt attribute to images and/or provide a caption using the figure and figcaption elements.
What you can do pretty easily is provide the alt text as plain text in the same tweet as the image (if it fits). If it is a tweet from someone else, or there is not enough space in the same tweet, you can reply to the tweet with the alt text.
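For comparison, this is what the standard HTML methods mentioned above look like when you do control the markup (the file name and texts here are made up for illustration):

```html
<!-- alt attribute: a short text alternative for the image -->
<img src="quote.png" alt="Quote: 'To boldly go where no one has gone before.'">

<!-- figure/figcaption: the image plus a visible caption -->
<figure>
  <img src="quote.png" alt="Quote: 'To boldly go where no one has gone before.'">
  <figcaption>A line popularised by Gene Roddenberry’s Star Trek.</figcaption>
</figure>
```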

Examples


Note: You can often find the text version of quotes and other text embedded in graphics by sticking the first few words of the text into Google (the Gene Roddenberry quote, for example); you can then simply provide a link to the text source for everyone \0/.


It’s a bit more work

Providing text alternatives for images on twitter is a bit more work, but it makes the interesting stuff available to anybody who follows you on twitter. (The same goes for Facebook.)

Sometimes I find graphics whose alt text simply doesn’t fit into 140 characters, or would benefit from structured HTML markup. If I think it’s really interesting and I have the time, I will make it available using a service such as CodePen and then publish the link on twitter:

Note: The Easy Chirp twitter client also provides a method to provide text alternatives, amongst many other accessibility features.

Addendum

I have posted a few music videos on twitter, for example:


I think that using a Gist is a simple way to add lyrics to songs.

Further Reading

Adrian Roselli – (ranting) Don’t Tweet Pictures of Text

Footnotes

Updated: .  Michael(tm) Smith <mike@w3.org>