Planet Mozilla: Introducing MakeDrive

I've been lax in my blogging for the past several months (apologies). I've had my head down in a project that's required all of my attention. On Friday we reached a major milestone, and I gave a demo of the work on the weekly Webmaker call. Afterward David Ascher asked me to blog about it. I've wanted to do so for a while, so I put together a proper post with screencasts.

I've written previously about our idea of a web filesystem, and the initial work to make it possible. Since then we've greatly expanded the idea and implementation into MakeDrive, which I'll describe and show you now.

MakeDrive is a JavaScript library and server (node.js) that provides an offline-first, always available, syncing filesystem for the web. If you've used services like Dropbox or Google Drive, you already know what it does. MakeDrive allows users to work with files and folders locally, then sync that data to the cloud and other browsers or devices. However, unlike Dropbox or other similar services, MakeDrive is based purely on JavaScript and HTML5, and runs on the web. You don't install it; rather, a web application includes it as a script, and the filesystem gets created or loaded as part of the web page or app.

Because MakeDrive is a lower-level service, the best way to demonstrate it is by integrating it into a web app that relies on a full filesystem. To that end, I've made a series of short videos demonstrating aspects of MakeDrive integrated into a modified version of the Brackets code editor. I actually started this work because I want to make Brackets work in the browser, and one of the biggest pieces it was missing in the browser was a full-featured filesystem (side-note: Brackets can run in a browser just fine :). This post isn't specifically about Brackets, but I'll return to it in future posts to discuss how we plan to use it in Webmaker. MakeDrive started as a shim for Brackets-in-a-browser, but Simon Wex encouraged me to see that it could and should be a separate service, usable by many applications.

In the first video I demonstrate how MakeDrive provides a full "local," offline-first filesystem in the browser to a web app:

The code to provide a filesystem to the web page is as simple as var fs = MakeDrive.fs();. Applications can then use the same API as node.js' fs module. MakeDrive uses another of our projects, Filer, to provide the low-level filesystem API in the browser. Filer is a full POSIX filesystem (or wants to be; file bugs if you find them!), so you can read and write utf8 or binary data, and work with files, directories, links, watches, and other fun things. Want to write a text file? It's done like so:

  var data = '<html>...';
  fs.writeFile('/path/to/index.html', data, function(err) {
    if(err) return handleError();
    // data is now written to disk
  });

The docs for Filer are lovingly maintained, and will show you the rest, so I won't repeat it here.

MakeDrive is offline-first, so you can read/write data, close your browser or reload the page, and it will still be there. Obviously, having access to your filesystem outside the current web page is also desirable. Our solution was to rework Filer so it could be used in both the browser and node.js, allowing us to mirror filesystems over the network using Web Sockets. We use a rolling-checksum and differential algorithm (i.e., only sending the bits of a file that have changed) inspired by rsync; Dropbox does the same.
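To give a feel for the rsync-style approach, here is a tiny Adler-32-style rolling checksum in plain JavaScript. This is an illustrative sketch, not MakeDrive's actual sync code: the point is that the checksum of a window of bytes can be updated in constant time as the window slides forward one byte, which is what makes scanning a file for unchanged blocks cheap.

```javascript
var MOD = 65521; // largest prime below 2^16, as in Adler-32

// Weak checksum of bytes[start .. start+len)
function weakChecksum(bytes, start, len) {
  var a = 0, b = 0;
  for (var i = 0; i < len; i++) {
    a = (a + bytes[start + i]) % MOD;
    b = (b + a) % MOD;
  }
  return { a: a, b: b };
}

// Slide the window one byte to the right in O(1):
// drop byte `out`, take in byte `inByte`, window length `len`.
function roll(sum, out, inByte, len) {
  var a = (sum.a - out + inByte + MOD) % MOD;
  var b = (((sum.b - len * out) % MOD) + MOD + a) % MOD;
  return { a: a, b: b };
}
```

In rsync (and systems inspired by it), a weak rolling checksum like this is paired with a strong hash that confirms candidate block matches before any data is actually reused.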

In this video I demonstrate syncing the browser filesystem to the server:

Applications and users work with the local browser filesystem (i.e., you read and write data locally, always), and syncing happens in the background. That means you can always work with your data locally, and MakeDrive tries to sync it to/from the server automatically. MakeDrive also makes a user's mirrored filesystem available remotely via a number of authenticated HTTP endpoints on the server:

  • GET /p/path/into/filesystem - serve the path from the filesystem provided like a regular web server would
  • GET /j/path/into/filesystem - serve the path as JSON (for APIs to consume)
  • GET /z/path/into/filesystem - export the path as export.zip (e.g., zip and send user data)

This means that a user can work on files in one app, sync them, and then consume them in another app that requires URLs. For example: edit a web component in one app and include and use it in another. When I started web development in the 1990s, you worked on files locally, FTP'ed them to a server, then loaded them via your web server and browser. Today we use services like gh-pages and github.io. Both require manual steps. MakeDrive automates the same sort of process, and targets new developers and those learning web development, making it a seamless experience to work on web content: your files are always "on the web."
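As a sketch of how an app might address those endpoints (the /p, /j, and /z prefixes come from the list above; the server origin and the helper itself are hypothetical):

```javascript
// Build the three HTTP views of a path in a user's synced filesystem.
// `origin` is wherever the MakeDrive server is hosted (hypothetical).
function makeDriveUrls(origin, path) {
  if (path.charAt(0) !== '/') throw new Error('path must be absolute');
  var encoded = path.split('/').map(encodeURIComponent).join('/');
  return {
    page: origin + '/p' + encoded, // served like a regular web server
    json: origin + '/j' + encoded, // served as JSON for APIs to consume
    zip:  origin + '/z' + encoded  // exported as export.zip
  };
}
```

For example, makeDriveUrls('https://makedrive.example.com', '/projects/index.html').page gives a URL you could hand straight to another app.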

MakeDrive supports multiple, simultaneous connections for a user. I might have a laptop, desktop, and tablet all sharing the same filesystem via a web app. This app can be running in any HTML5 compatible browser, app, or device. In this video I demonstrate syncing changes between different HTML5 browsers (Chrome, Firefox, and Opera):

Like Dropbox, each client will have its own "local" version of the filesystem, with one authoritative copy on the server. The server manages syncing to/from this filesystem so that multiple clients don't try to sync different changes to the same data at once. After one client syncs new changes, the server informs other clients that they can sync as well, which eventually propagates the changes across all connected clients. Changes can include updates to a file's data blocks, but also any change to the filesystem nodes themselves: renames, deleting a file, making a new directory, etc.

The code to make this syncing happen is very simple. As long as there is a network connection, a MakeDrive filesystem can be connected to the server and synced. This can be a one-time thing, or the connection can be left open so incremental syncs take place over the lifetime of the app: offline first, always syncing, always available.

Because MakeDrive allows the same user to connect multiple apps/devices at once, we have to be careful not to corrupt data or accidentally overwrite data when syncing. MakeDrive implements something similar to Dropbox's Conflicted Copy mechanism: if two clients change the same data in different ways, MakeDrive syncs the server's authoritative version, but also creates a new file with the local changes, and lets the user decide how to proceed.

This video demonstrates the circumstances by which a conflicted copy would be created, and how to deal with it:

Internally, MakeDrive uses extended attributes on filesystem nodes to determine automatically what has and hasn't been synced, and what is in a conflicted state. Conflicted copies are not synced back to the server, but remain in the local filesystem. The user decides how to resolve conflicts by deleting or renaming the conflicted file (i.e., renaming clears the conflict attribute).
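The bookkeeping can be modelled in a few lines of plain JavaScript. This is an illustrative sketch only; MakeDrive actually stores these flags as extended attributes on filesystem nodes, and the naming scheme below is made up:

```javascript
// Minimal model of conflict tracking: a per-path attribute map.
function ConflictTracker() {
  this.attrs = {}; // path -> { conflict: true }
}

// Called when a local change collides with the server's version:
// the server copy stays at `path`, local changes are saved to a
// sibling "conflicted copy", which is marked with the attribute.
ConflictTracker.prototype.markConflict = function(path) {
  var copy = path.replace(/(\.[^.\/]*)?$/, ' (conflicted copy)$1');
  this.attrs[copy] = { conflict: true };
  return copy;
};

ConflictTracker.prototype.isConflicted = function(path) {
  return !!(this.attrs[path] && this.attrs[path].conflict);
};

// Renaming the file clears the conflict attribute, resolving it.
ConflictTracker.prototype.rename = function(oldPath, newPath) {
  delete this.attrs[oldPath];
};
```

Conflicted copies stay local and are never synced back, so the user is free to delete them or rename them into the normal (synced) namespace.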

MakeDrive works today, but isn't ready for production quite yet; on Friday we reached the end of our summer work. If you have a web-first filesystem, you can do some interesting things that might not make sense in a traditional filesystem (i.e., when the scope of your files is limited to web content).

  • Having a filesystem in a web page naturally got me wanting to host web pages from web pages. I wrote nohost to experiment with this idea: an in-browser httpd that uses Blob URLs. It's really easy to load DOM elements from a web filesystem:

    var img = document.createElement('img');
    fs.readFile('/path/into/filesystem/image.png', function(err, data) {
      if(err) return handleError();

      // Create a Blob and wrap in URL Object.
      var blob = new Blob([data], {type: 'image/png'});
      var url = URL.createObjectURL(blob);
      img.src = url;
    });
    
    • Using this technique, we could create a small bootloader and store entire web apps in the filesystem. For example, all of Brackets loading from disk, with a tiny bootloader web page in appcache to get to the filesystem. This idea has been discussed elsewhere, and adding the filesystem makes it much more natural.
    • The current work on the W3C stream spec is really exciting, since we need a way to implement streaming data in and out of a filesystem, and therefore IndexedDB.
    • Having the ability to move IndexedDB to worker threads for background syncs (bug 701634), and into third-party iframes with postMessage to share a single filesystem instance across origins (bug 912202), would be amazing.
    • Mobile! Being able to sync filesystems in and out of mobile web apps is really exciting. We're going to help get MakeDrive working in Mobile Appmaker this fall.

    If any of this interests you, please get in touch (@humphd) and help us. The next 6 months should be a lot of fun. I'll try to blog again before that, though ;)

    Planet Mozilla: Aggressively prefetching everything you might click

    I just rolled out a change here on my personal blog which I hope will make my few visitors happy.

    Basically: when you hover over a link (a local link) long enough, it prefetches it (with AJAX) so that if you do click, it's hopefully already cached in your browser.

    If you hover over a link and almost instantly hover out it cancels the prefetching. The assumption here is that if you deliberately put your mouse cursor over a link and proceed to click on it you want to go there. Because your hand is relatively slow I'm using the opportunity to prefetch it even before you have clicked. Some hands are quicker than others so it's not going to help for the really quick clickers.

    What I also had to do was set a Cache-Control header of 1 hour on every page so that the browser can learn to cache it.

    The effect is that when you do finally click the link, by the time your browser loads it and changes the rendered output, it'll hopefully be able to render it from its cache, and thus it becomes visually ready faster.

    Let's try to demonstrate this with this horrible animated gif:
    (or download the screencast.mov file)

    Screencast
    1. Hover over a link (in this case the "Now I have a Gmail account" from 2004)
    2. Notice how the Network panel preloads it
    3. Click it after a slight human delay
    4. Notice that when the clicked page is loaded, it's served from the browser cache
    5. Profit!

    So the code that does this is quite simple:

    $(function() {
      var prefetched = [];
      var prefetch_timer = null;
      $('div.navbar, div.content').on('mouseover', 'a', function(e) {
        // With a delegated handler, `this` is the matched <a> itself,
        // even if the mouse entered a child node of the link.
        var value = this.getAttribute('href');
        // Only prefetch local links we haven't fetched already.
        if (value && value.indexOf('/') === 0 && prefetched.indexOf(value) === -1) {
          if (prefetch_timer) {
            clearTimeout(prefetch_timer);
          }
          // Wait 200ms before fetching, in case it's just a mouse fly-by.
          prefetch_timer = setTimeout(function() {
            $.get(value, function() {
              // necessary for $.ajax to start the request :(
            });
            prefetched.push(value);
          }, 200);
        }
      }).on('mouseout', 'a', function(e) {
        // Moved off the link before the delay elapsed: cancel the prefetch.
        if (prefetch_timer) {
          clearTimeout(prefetch_timer);
        }
      });
    });
    

    Also, available on GitHub.

    I'm excited about this change because of a couple of reasons:

    1. On mobile, where you might be on a non-wifi data connection, you don't want this. There, the mouse event onmouseover doesn't trigger, so people on such devices don't "suffer" from this optimization.
    2. It only downloads the HTML which is quite light compared to static assets such as pictures but it warms up the server-side cache if needs be.
    3. It's much more targeted than a general prefetch meta header.
    4. Most likely content will appear rendered to your eyes faster.

    Anne van Kesteren: DOM attributes sadness

    I have been reinstating “features” related to attribute handling in DOM. We thought we could get rid of them, but usage counters from Chrome and compatibility data from Gecko showed we could not. This is very sad so I thought I would share the pain.

    A simple design for attributes would consist of each having a name and a value (both strings), and a simple map-like API on every element would be sufficient to deal with them: the getAttribute(name), setAttribute(name, value), and removeAttribute(name) methods, as well as a way to iterate through the names and values.
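That simple design can be sketched in a few lines (a hypothetical illustration of the map-like API, not how any engine actually implements attributes):

```javascript
// The hypothetical "simple" attribute API: a map of name -> value.
function SimpleAttributes() {
  this.map = new Map();
}
SimpleAttributes.prototype.getAttribute = function(name) {
  return this.map.has(name) ? this.map.get(name) : null; // null, not ""
};
SimpleAttributes.prototype.setAttribute = function(name, value) {
  this.map.set(String(name), String(value)); // both are strings
};
SimpleAttributes.prototype.removeAttribute = function(name) {
  this.map.delete(name);
};
// Iteration over names and values comes for free from the Map.
SimpleAttributes.prototype.forEach = function(fn) {
  this.map.forEach(function(value, name) { fn(name, value); });
};
```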

    However, back in the day getAttribute(name) was required to return the empty string rather than null for a missing attribute, so hasAttribute(name) also exists. Fixing the specification to make getAttribute() return null was highly controversial back then. I even misguidedly ranted against developers who were making use of this feature as it prevented Opera from becoming standards compliant. “Please leave your sense of logic at the door, thanks!” was not a popular phrase back then.

    Unfortunately namespaced attributes are a thing. And instead of simply adding a namespace field to our existing name and value, a namespace, namespace prefix, and local name field were added. Indeed, the local name is not necessarily equal to the name of an attribute. The idea was to have some kind of modality where before namespace and after namespace attributes would not really interact. That never happened of course. To deal with namespaces we have getAttributeNS(namespace, localName), setAttributeNS(namespace, name, value) (indeed, name, not localName, so bad), removeAttributeNS(namespace, localName), and hasAttributeNS(namespace, localName).

    The real kicker is that the first four methods ignore the namespace fields, but can create attributes you cannot access with the *NS methods. There is no universal attribute API, though if you steer clear of namespaces everywhere you are probably mostly fine (except perhaps with SVG and such).

    This was still too simple. There is also attributes which returns a NamedNodeMap (only used for attributes these days). And hasAttributes() which can tell you whether that map is empty or not. These two used to be on all nodes (to limit the amount of casting in Java), but we are moving them to element since that is where they make sense. NamedNodeMap contains a collection of zero or more Attr objects so you can inspect their individual fields. The map has a length property, an item(index) method, and is implemented with some kind of JavaScript proxy so attributes.name works, as well as attributes[0]. Good times. Attr objects also allow manipulation of an attribute's value. Due to mutation observers this requires an element field on attributes to point back to the element the attribute belongs to. The namespace prefix also used to be a mutable field, but fortunately this was poorly implemented and recently killed.

    The real reason attributes are so complicated, ignoring namespaces for the moment, is DTDs. The SGML crowd was not brave enough to cut the lifeline when they did XML. Then XML got popular enough to end up in browsers and the DOM. This meant that attributes could not contain just text, but also entity references. And therefore attributes became a type of node. Entity references were never really implemented, and fortunately we managed to remove that cruft from the platform. However, attributes are still a type of node.

    The last thing we are investigating is whether attributes can stop having child nodes, and perhaps stop being nodes altogether. Meanwhile, we had to add createAttribute(localName) on document, getAttributeNode(name), setAttributeNode(attr), and removeAttributeNode(attr) on element, and getNamedItem(name), setNamedItem(attr), and removeNamedItem(name) on NamedNodeMap back as sites use these. Oh wait, and all their *NS counterparts of course, bar removeAttributeNodeNS().

    Added together, we have twenty-five methods to deal with attributes rather than three. And attributes require six internal fields rather than two. And this is assuming we can get rid of child nodes and attributes being nodes, both semi-implemented today.

    Planet Mozilla: Makethumbnails.com – drop images into the browser, get a zip of thumbnails

    About 2½ years ago I wrote a demo for Mozilla Hacks showing how to use Canvas to create thumbnails. Now I felt the itch to update this a bit and add more useful functionality. The result is:

    http://makethumbnails.com

    It is very easy to use: Drop images onto the square and the browser creates thumbnails for them and sends them to you as a zip.

    homepage

    Thumbnail settings page

    You can set the size of the thumbnails, choose whether you want them centered on a coloured background of your choice or cropped to their real size, and set the quality. All of this has a live preview.

    If you resize the browser to a very small size (or click the pin icon on the site and open a popup) you can use it as a neat bit of extra functionality for Finder:

    resize to simple mode

    All of your settings are stored locally, which means everything will be ready for you when you return.

    As there is no server involved, you can also download the app and use it offline.

    The source, of course, is available on GitHub.

    To see it in action, you can also watch a quick walkthrough of Makethumbnails on YouTube.

    Happy thumbing!

    Chris

    Planet Mozilla: A new meditation app

    I had some time on my hands two weekends ago and was feeling a bit of an itch to build something, so I decided to do a project I’ve had in the back of my head for a while: a meditation timer.

    If you’ve been following this log, you’d know that meditation has been a pretty major interest of mine for the past year. The foundation of my practice is a daily round of seated meditation at home, where I have been attempting to follow the breath and generally try to connect with the world for a set period every day (usually varying between 10 and 30 minutes, depending on how much of a rush I’m in).

    Clock watching is rather distracting while sitting so having a tool to notify you when a certain amount of time has elapsed is quite useful. Writing a smartphone app to do this is an obvious idea, and indeed approximately a zillion of these things have been written for Android and iOS. Unfortunately, most are not very good. Really, I just want something that does this:

    1. Select a meditation length (somewhere between 10 and 40 minutes).
    2. Sound a bell after a short preparation to demarcate the beginning of meditation.
    3. While the meditation period is ongoing, do a countdown of the time remaining (not strictly required, but useful for peace of mind in case you’re wondering whether you’ve really only sat for 25 minutes).
    4. Sound a bell when the meditation ends.

    Yes, meditation can get more complex than that. In Zen practice, for example, sometimes you have several periods of varying length, broken up with kinhin (walking meditation). However, that mostly happens in the context of a formal setting (e.g. a Zendo) where you leave your smartphone at the door. Trying to shoehorn all that into an app needlessly complicates what should be simple.

    Even worse are the apps which “chart” your progress or have other gimmicks to connect you to a virtual “community” of meditators. I have to say I find that kind of stuff really turns me off. Meditation should be about connecting with reality in a more fundamental way, not charting gamified statistics or interacting online. We already have way too much of that going on elsewhere in our lives without adding even more to it.

    So, you might ask why the alarm feature of most clock apps isn't sufficient? Really, it is most of the time. A specialized app can make selecting the interval slightly more convenient, and we can preselect an appropriate bell sound up front. It's also nice to hear something to demarcate the start of a meditation session. But honestly I didn't have much of a reason to write this other than the fact that I could. Outside of work, I've been in a bit of a creative rut lately and felt like I needed to build something, anything, and put it out into the world (even if it's tiny and only a very incremental improvement over what's out there already). So here it is:

    meditation-timer-screen

    The app was written entirely in HTML5 so it should work fine on pretty much any reasonably modern device, desktop or mobile. I tested it on my Nexus 5 (Chrome, Firefox for Android)[1], FirefoxOS Flame, and on my laptop (Chrome, Firefox, Safari). It lives on a subdomain of this site or you can grab it from the Firefox Marketplace if you’re using some variant of Firefox (OS). The source, such as it is, can be found on github.

    I should acknowledge taking some design inspiration from the Mind application for iOS, which has a similarly minimalistic take on things. Check that out too if you have an iPhone or iPad!

    Happy meditating!

    [1] Note that there isn’t a way to inhibit the screen/device from going to sleep with these browsers, which means that you might miss the ending bell. On FirefoxOS, I used the requestWakeLock API to make sure that doesn’t happen. I filed a bug to get this implemented on Firefox for Android.

    IEBlog: Announcing new F12 dev tools features in August update

    Today we’re excited to share all the F12 features that shipped in the August update to IE11!

    In April, we shipped a swath of new features in the F12 Developer Tools in Internet Explorer 11, focusing on providing accurate data in the DOM Explorer, actionable data in the memory and performance tools, and a smoother debugging experience with Just My Code.

    With the IE Developer Channel in June we previewed more features in the F12 Developer Tools, and now all of these features are shipping to all of our customers. It's a long list, which you'll find below or on MSDN, but the highlights are:

    • Import and export sessions in the Memory and UI Responsiveness tools
    • Improved filtering capabilities in the Memory and UI Responsiveness tools
    • A color picker in the DOM Explorer that allows you to pick colors from any window on your desktop.

    With this update to IE11 and F12, we're keeping up the pace of updating the F12 developer tools more often, getting you the latest features and bug fixes as soon as we can. Expect to see and hear from us more, and if you'd like to provide feedback or ask for new features, simply reach out on Twitter @IEDevChat, or on Connect.

    — Andy Sterland, Senior Program Manager, Internet Explorer


    Changes to the F12 user interface

    • New icons and notifications

      The icons for the Memory and Profiler tools have changed.

      There are now indicators on the icon bar for errors in the Console, changes in Emulation settings, and for active profiling sessions in the Memory, Profiler, and UI Responsiveness tools. The image below shows the new icons with notifications on the Console and Memory tool icons, indicating there are two Console errors displaying and that a Memory profiling session is currently in progress.

      New icons for memory and profiler

    • F6 superset navigation within tools

      Using F6 is like using the tab key to navigate around a tool, but it "tabs" through a selected set of the most commonly used elements in a tool pane, rather than through every selectable item. This is part of an overall cleaner system for using the keyboard to navigate within and between tools.

    • Move back and forth between recently used tools using the keyboard

      Use CTRL + [ to move backwards in your tool navigation history, CTRL + ] to go forward, much like the back and forward arrows when you're browsing.

    • Quick access to document mode

      Want to access the Document mode without switching tools? We added a new dropdown at the top that gives you access to the document mode from any tool.

      Quick access to document mode

    Console changes

    • console.timeStamp()

      When called from the Console or within code, it outputs to the Console the number of milliseconds the current browser tab has been open. If called while running a profiling session with the UI Responsiveness tool, it creates a user mark on the session's timeline with a timestamp based on the time since the session started.

    • CTRL+L clears the console of all messages

    • Accurate autocomplete

      Console's autocomplete no longer includes indexer properties, making for a cleaner and more accurate selection of autocomplete suggestions.

      $, $$, $x, $0-$5, and $_ have been added to the Console autocomplete list for the convenience of those who use them and to make the Console's behavior more consistent with other browsers.

    • Stale message indicator

      If you have chosen to turn off the Clear on navigate option, older console messages have their icons greyed out to help distinguish between messages for the active page and messages from prior pages in your history.

      Stale message indicator

    DOM Explorer changes

    • Change bars in Computed pane

      The change bars (different colors for changed properties, added properties, and deleted properties) that users have been enjoying in the Styles pane now appear in the Computed styles pane.

      Change bars in Computed pane

    • Color Picker

      Clicking on the color picker icon (or using the keyboard shortcut CTRL + K) will open a free-standing version of the color picker with the expanded color wheel, which is useful for grabbing colors that are then going to be pasted elsewhere, either in F12 or back in a text or image editor.

      Color picker

    • Color Wheel

      The color wheel is behind the 2nd icon on the color picker; when activated, it expands to show the color wheel itself: a set of four sliders (as below) which can be used to change the HSL and transparency values for a color.

      Color wheel

    • Color Swatch

      The swatch on the color picker is a palette of all the colors F12 found in the CSS files associated with the page sorted by the number of occurrences. This should make finding common colors, such as accent colors, much easier. The swatch can be navigated with the left and right arrow keys which will scroll through all the colors.

    • Color Square

      Clicking on a color square brings up the color picker with the color wheel collapsed and can be used to set the color for the particular CSS property.

    • Eye Dropper

      The eye dropper can be used to pick a color under the cursor from any screen on the computer which is great for getting values from image editors or from other Web pages. There’s a limitation in the eye dropper where the color will be off by 1/255 of its real value. We’ll fix this in a future update.

    Debugger changes

    • Sourcemaps designate

      Right-click on a document's tab in the Debugger and you can specify a source map. This makes it possible to use source maps with shipped code that has had the source map comment removed.

      Sourcemaps designate

    • Autocomplete in watches

      Now, when adding a watch, you get autocomplete options suggested.

      Autocomplete in watches

    • Return value inspection

      When breaking on a function with a return value, step into the function until you've stepped to the closing curly bracket. The return value will be displayed in the Locals portion of the Watches pane. Step again and the value will be returned to the code that called for it.

      For a quick demonstration, try this code in the Console:

      function showval() { var x = 0; x++; debugger; return x; } showval(); 

      It will call the function, break on debugger, and you can step into it to see the return value.

    • Multi-select for breakpoints

      CTRL + CLICK, SHIFT + CLICK, and CTRL + A work to select multiple breakpoints in the Breakpoints pane.

    • Continue and ignore breaks

      Press F5 to continue to the next break. Hold F5 to continue past multiple breaks until you release F5.

    • Event breakpoints and tracepoints

      These work much like the breakpoints and tracepoints already present in F12 tools, but instead of being triggered when a specific block of code is executed, they are triggered when a specific event fires. Each has an optional conditional filter to help you narrow down their scope to the specific instance of an event that you want to inspect. They can be added using the Add event tracepoint and Add event breakpoint icons highlighted in the image below.

      Event breakpoints and tracepoints

    • Async call stack code

      You can now see the call stack for those pesky async calls!

      Async call stack code

    UI Responsiveness tool changes

    • Import/export performance sessions

      You shouldn't have to reproduce your test case every time you want to analyze the data it produces or share that data with a colleague. The import (folder) and export (disk) icons on the UI Responsiveness tool's icon bar let you save a profiling session to a file that can be imported later.

    • Image preview

      If you've seen an HTTP request for an image and wondered which image it was, the image is now previewed in the event details.

      Image preview

    • Filtering events

      The Filter events button is small but mighty. Hidden behind that button is a menu that lets you filter events in multiple ways and each way has a significant impact.

      Event name filter

      Filter for any event name containing a match for the filter text.

      UI activity filter

      Using the checkboxes, you can exclude larger categories of events to make it easier to focus on the area you're investigating. For example, if you're just interested in network activity, you can filter out all the noise of the UI and garbage collection.

      Time threshold filter

      This feature filters out top-level events less than 1ms in duration. In many scenarios, this dramatically simplifies the waterfall view and helps you focus on more impactful events.

      Time threshold filter

    • HTML5 scripting events

      If you use media query listeners or MutationObservers, you can now identify their respective costs when running a performance profiling session.

      HTML5 scripting events

    • Frame grouping

      The button between the Sort by dropdown and the Filter events menu toggles Frame grouping. This groups top-level events into their corresponding unit of work (or "frame") during periods of time where animations/visual updates were occurring. The frames are treated like other events, so they can be sorted and filtered, and they provide an Inclusive time summary.

      Frame grouping

    • User measures

      If you use the performance.mark() API to add triangles to the timeline, indicating where specific events happened, the performance.measure() API extends their usefulness. Use performance.measure() to create a User measure event encompassing the time between two performance.mark() events, then right-click the event and use the Filter to event option to select just the events between the two marks.

      User measures

    • Colorization for DOM

      This feature adds colorization to DOM elements, string literals and number literals. Besides making the content within the different F12 tools look and behave more alike, it adds a little more visual interest to the UI Responsiveness tool.

    • Selection summary

      When you select a portion of the timeline, the event details pane will show a summary of the selection. Hover over different segments of the circular chart for a tooltip with the segment's event category.

      Selection summary

    • Support for console.timeStamp()

      Using the console.timeStamp() method in your code or in the console during a profiling session creates a user mark on the timeline with the time since the profiling session began.
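    For instance (a hypothetical sketch; the label and function are made up, and console.timeStamp is a no-op outside a profiling session, so it is safe to leave in code):

```javascript
// Hypothetical sketch: user marks dropped from code during profiling.
function processBatch(items) {
  console.timeStamp('batch start'); // mark on the timeline (no-op otherwise)
  const results = items.map((x) => x * 2);
  console.timeStamp('batch end');   // shown relative to session start
  return results;
}

console.log(processBatch([1, 2, 3]).join(',')); // 2,4,6
```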

    Memory tool changes

    • Dominator folding

      Dominator folding helps simplify the contents of a snapshot by removing objects from the top-level views that are logically components of another object (e.g. a <BR> within a <DIV>, a Scope held on to by a Function) and tend to be extra details that don’t improve your insight into the data, but could waste your time.

      For example, the image below shows before and after views, demonstrating how dominator folding improves the "story" the tool is telling. The folded view shows 30 HTML <DIV> elements, which account for 15.64 MB of memory, and are holding on to detached DOM nodes. In many cases, it isn’t important to know the composition of an object, so much as simply knowing that it is too large, or that it is leaking (especially when using third-party libraries).

      Dominator folding

    • Colorization of DOM, String & Number literals

      This feature adds colorization to DOM elements, string literals, and number literals. Besides making the content within the different F12 tools look and behave more alike, it makes memory analysis a little more visually interesting.

    • Roots cycle filtering

      Want to be able to investigate the composition of an object without getting unknowingly lost in a circular reference path? This feature detects child references which are circular and “trims” them, so that you don’t get confused by traversing them into infinity. Additionally, it annotates these references so that it’s clear when a reference has in fact been "trimmed."
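      The idea of trimming circular reference paths during a traversal can be sketched like this (a hypothetical illustration of the concept, not the tool's actual algorithm):

```javascript
// Walk an object graph, annotating references already on the current path
// as "trimmed" instead of recursing into them forever.
function describe(obj, seen = new Set()) {
  if (obj === null || typeof obj !== 'object') return String(obj);
  if (seen.has(obj)) return '[trimmed: circular]';
  seen.add(obj);
  const parts = Object.entries(obj).map(
    ([k, v]) => `${k}: ${describe(v, seen)}`
  );
  return `{ ${parts.join(', ')} }`;
}

const a = { name: 'a' };
a.self = a; // a circular reference back to itself
console.log(describe(a)); // { name: a, self: [trimmed: circular] }
```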

    • Import/export session

      You shouldn't have to reproduce your test case every time you want to analyze data it produces or share that data with a colleague. The import (folder) and export (disk) icons on the Memory tool's icon bar let you save your memory snapshots to a file that can be imported later.

      Import/export session

    Emulation tool changes

    • Settings persistence and reset

      A Persist Emulation settings icon is added to the Emulation tool. This will maintain your current emulation settings until specifically disabled, allowing you to work, close the browser, and come back with your emulation settings intact. To its right is a Reset Emulation settings icon, which quickly resets the tool back to default values.

      Emulation settings persistence and reset

    IEBlog: August updates for Internet Explorer

    The August update for Internet Explorer includes updates to the WebGL and F12 features, in addition to the latest security updates.

    As discussed on the Windows blog, we’ll continue to use our existing monthly update process to deliver more frequent improvements in addition to security updates.

    New features for Internet Explorer 11

    The update includes four feature improvements, based on customer and developer feedback. Some of these improvements were previewed in the Internet Explorer Developer Channel and are now ready for release thanks to your feedback.

    Improvements to F12 Developer Tools

    This update provides substantial improvements to the F12 developer tools: the user interface, console, DOM explorer, debugger, emulation tool, UI responsiveness, and memory profiling tools all have new features and bug fixes.

    See Microsoft Knowledge Base Article 2990946 for more information; we will also be posting more details on the F12 improvements in an upcoming IE blog post.


    Updated F12 UI Responsiveness Tool

    Improvements to WebGL renderer

    The WebGL renderer has also been updated with support for ANGLE_instanced_arrays, OES_element_index_uint and WEBGL_debug_renderer_info extensions, the failIfMajorPerformanceCaveat context creation attribute, 16-bit textures, more GLSL conformance, and line loop and triangle fan primitives.

    Additionally, more Windows 7 systems will now render WebGL in hardware mode if your drivers are up-to-date.

    This release improves our Khronos WebGL Conformance Test 1.0.3 score from 89.9% to 96.8%. See Microsoft Knowledge Base Article 2991001 for more information. We’ll also have more to share on WebGL in future IE blog posts.

    WebDriver support

    The August update also provides the foundation for IE11 support of the emerging WebDriver standard, which lets Web developers write tests that automate Web browsers to exercise their sites. It’s a programmable remote control for developing complex user scenarios and running them in an automated fashion against your Web site and browser. The August update contains the changes to the browser engine needed to enable native WebDriver support. You will need to install a separate package, which we will release soon, to run WebDriver scripts.

    Blocking out-of-date ActiveX controls in Internet Explorer

    With out-of-date ActiveX control blocking, Internet Explorer helps keep ActiveX controls up-to-date and safer to use. For more information, see our blog post and Microsoft Knowledge Base Article 2991000.

    Security updates for Internet Explorer

    The August update also includes the following security updates:

    • Microsoft Security Bulletin MS14-051 - This critical security update resolves one publicly disclosed vulnerability and twenty-five privately reported vulnerabilities in Internet Explorer. For more information see the full bulletin.
    • Security Update for Flash Player (2982794) - This security update for Adobe Flash Player in Internet Explorer 10 and 11 on supported editions of Windows 8, Windows 8.1 and Windows Server 2012 and Windows Server 2012 R2 is also available. The details of the vulnerabilities are documented in Adobe security bulletin APSB14-18. This update addresses the vulnerabilities in Adobe Flash Player by updating the affected Adobe Flash binaries contained within Internet Explorer 10 and Internet Explorer 11. For more information, see the advisory.

    Staying up-to-date

    Most customers have automatic updating enabled and will not need to take any action because these updates will be downloaded and installed automatically. Customers who have automatic updating disabled need to check for updates and install this update manually.

    We look forward to hearing your feedback @IEDevChat or via Connect.

    — Sharon Meramore, Program Manager, Internet Explorer

    — Charles Morris, Program Manager Lead, Internet Explorer

    W3C Team blog: HTML5 and RWD training: early bird rates extension!

    Do not miss the early bird registration extension to 27 August for two online training courses from W3C: HTML5 and Responsive Web Design!

    • HTML5: starting 22 September 2014, this course features a JavaScript crash course, numerous interactive examples and an “animated monster” contest. Acclaimed trainer Michel Buffa has updated the (already dense) course material, notably with an introduction of Web components and new examples of canvas animations. [Register - HTML5]
    • Responsive Web Design (RWD): starting 3 October 2014, this course focuses on best practices, accessibility and optimization. Our trainer Frances de Waal will guide you step by step through an approach that uses HTML and CSS to make your Web site fit in all viewport sizes. [Register - RWD]

    Learn more about W3DevCampus, the official W3C online training for Web developers, and watch our fun intro video.

    Get new skills, earn certificates and start collecting W3C training badges!

    Steve Faulkner et al: What ARIA does not do

    ARIA is a set of attributes that can be added to HTML elements (and other markup languages) to communicate accessibility role, state, name and properties, which browsers expose via platform accessibility APIs. This provides a common, interoperable method of relaying the information to assistive technologies. That’s it. It is the same method browsers use to convey the inbuilt (or default) accessibility information of native HTML features. The difference is that with ARIA, authors can wire up this information themselves in the DOM; before, they could not.

    A simple example of what ARIA does and does not do:

    ARIA does not magically make any element act differently from what it is; it only provides a method to make it appear as something else to assistive technology users. For example, in the sample code below, the ARIA role attribute makes the <div> appear as a link to assistive technology. Developers must provide the substance behind the semantics conveyed using ARIA, otherwise users are confronted with a UI masquerade.

    <div role="link">poot</div>
    versus
    <a href="...">poot</a>
    
    feature                                        role="link"   <a href="…">
    conveys role to accessibility API              yes           yes
    conveys accessible name to accessibility API   yes           yes
    keyboard support (focus)                       no            yes
    keyboard support (operation)                   no            yes
    element specific context menu                  no            yes
    default visual semantics (e.g. underlining)    no            yes
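    A sketch of the substance a developer must supply for the div version (hypothetical code, with a stub element so it runs outside a browser; a native <a href> gets all of this for free):

```javascript
// Wiring up a <div role="link"> so it actually behaves like a link.
function makeDivActLikeLink(el, navigate) {
  el.tabIndex = 0;                          // keyboard focus
  el.addEventListener('click', navigate);   // mouse operation
  el.addEventListener('keydown', (event) => {
    if (event.key === 'Enter') navigate();  // keyboard operation
  });
}

// Stub element so the sketch runs outside a browser.
function stubElement() {
  const handlers = {};
  return {
    tabIndex: -1,
    addEventListener(type, fn) { handlers[type] = fn; },
    dispatch(type, event) { if (handlers[type]) handlers[type](event); },
  };
}

let navigated = false;
const el = stubElement();
makeDivActLikeLink(el, () => { navigated = true; });
el.dispatch('keydown', { key: 'Enter' });
console.log(el.tabIndex, navigated); // 0 true
```

    Even this leaves out focus styling and the element-specific context menu from the table above, which no amount of scripting fully restores.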

     

    Anne van Kesteren: Asynchronicity

    There is ever more asynchronicity within the web platform. Asynchronous here means some set of steps that can be performed in parallel with JavaScript running in a given environment, such as a window or worker. Fetching networked resources, computing crypto, and processing audio are examples of things that can be done asynchronously. The JavaScript language does not really know about threading or background processing. The platform, however, has had this for a long time, synchronized with JavaScript using events and, these days, also through resolving promises.

    The way any environment works, simplified, is by going through a stack of tasks. Whenever the user moves the mouse, or XMLHttpRequest fetches, new tasks are queued to eventually dispatch events and then run event handlers and listeners. Asynchronous steps run in parallel with this.

    When new standards are written, this is often done wrong. A set of asynchronous steps cannot refer to global state that might change, such as a document's base URL. They also cannot change state, such as properties on an object. Remember, these steps run in parallel, so if you change obj.prop, obj.prop === obj.prop would no longer be guaranteed. Bad. Instead you queue a task, effectively scheduling some code to run in the environment at some point in the future, when it has the bandwidth. The Fetch layer queues tasks whenever new network chunks arrive. The UI layer queues tasks whenever the user moves the mouse. Etc.
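    The guarantee can be illustrated with a small sketch (hypothetical; setTimeout stands in here for the spec's "queue a task"):

```javascript
// Background work must not flip shared state mid-run; it queues a task.
const obj = { prop: 1 };

// A parallel step that wants to update obj.prop queues a task instead of
// mutating directly; the change only becomes visible at a task boundary.
setTimeout(() => { obj.prop = 2; }, 0);

// Within this run of script, obj.prop cannot change under our feet:
const a = obj.prop;
const b = obj.prop;
console.log(a === b, obj.prop); // true 1 (the queued task has not run yet)
```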

    In summary, you have the environments, in which JavaScript is executed based on a sequence of tasks. Then there is the background processing, known as asynchronous steps in standards, which queues new tasks to the various environments to stay synchronized over time.

    (Not all of this is properly defined as of yet, please fill in the gaps as you run across them. Note Asynchronous Steps Explicitly has advice for how to go about that.)

    Planet Mozilla: What is a Living Brand?

    Today, we’re starting the Mozilla ID project, which will be an exploration into how to make the Mozilla identity system as bold and dynamic as the Mozilla project itself. The project will look into tackling three of our brand elements – typography, color, and the logo. Of these three, the biggest challenge will be creating a new logo, since we don’t currently have an official mark. Mozilla’s previous logo was the ever amazing dino head that we all love, which has now been set as a key branding element for our community-facing properties. Its replacement should embody everything that Mozilla is, and our goal is to bake as much of our nature into the visual as we can while keeping it clean and modern. In order to do this, we’re embracing the idea of creating a living brand.

    A living brand you say? Tell me more.


    Image from DesignBoom

    I’m pleased to announce you already know what a living brand is – you just may not know it under that term. If you’ve ever seen the MTV logo – designed in 1981 by Manhattan Design – you’ve witnessed a living brand. The iconic M and TV shapes are the base elements for their brand, and building on that with style, color, illustrations and animations creates the dynamic identity system that brings it to life. Their system allows designers to explore unlimited variants of the logo, while maintaining brand consistency with the underlying recognizable shapes. As you can tell from this example, a living brand can unlock so much potential for a logo, opening up so many possibilities for change and customization. It’s because of this that we feel a living brand is perfect for Mozilla – we’ll be able to represent who we are through an open visual system of customization and creative expression.

    You may be wondering how this is so open if Mozilla Creative will be doing all of the variants for this new brand. Here’s the exciting part. We’re going to be helping define the visual system, yes, but we’re exploring dynamic creation of the visual itself through code and data visualization. We’re also going to be creating the visual output using HTML5 and Web technologies, baking the building blocks of the Web we love and protect into our core brand logo.

    OMG exciting, right? Wait, there’s still more!

    In order to have this “organized infinity” allow a strong level of brand recognition, we plan to have a constant mark as part of the logo, similar to how MTV did it with the base shapes. Here’s the fun part and one of several ways you can get involved – we’ll be live streaming the process with a newly minted YouTube channel where you can follow along as we explore everything from wordmark choices to building out those base logo shapes and data viz styles. Yay! Open design process!

    So there it is. Our new fun project. Stay tuned to various channels coming out of Creative – this blog, my Twitter account, the Mozilla Creative blog and Twitter account – and we’ll update you shortly on how you’ll be able to take part in the process. For now, feel free to jump into #mologo on IRC to say hi and discuss all things Mozilla brand!

    It’s a magical time for design, Mozilla. Let’s go exploring!

    Planet Mozilla: Let's build a browser engine! Part 2: Parsing HTML

    This is the second in a series of articles on building a toy browser rendering engine.

    This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

    HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

    A Simple HTML Dialect

    I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

    <html>
        <body>
            <h1>Title</h1>
            <div id="main" class="test">
                <p>Hello <em>world</em>!</p>
            </div>
        </body>
    </html>
    

    The following syntax is allowed:

    • Balanced tags: <p>...</p>
    • Attributes with quoted values: id="main"
    • Text nodes: <em>world</em>

    Everything else is unsupported, including:

    • Comments
    • Doctype declarations
    • Escaped characters (like &amp;) and CDATA sections
    • Self-closing tags: <br/> or <br> with no closing tag
    • Error handling (e.g. unbalanced or improperly nested tags)
    • Namespaces and other XHTML syntax: <html:body>
    • Character encoding detection

    At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

    Example Code

    Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

    The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

    struct Parser {
        pos: uint,
        input: String,
    }
    

    We can use this to implement some simple methods for peeking at the next characters in the input:

    impl Parser {
        /// Read the next character without consuming it.
        fn next_char(&self) -> char {
            self.input.as_slice().char_at(self.pos)
        }
    
        /// Do the next characters start with the given string?
        fn starts_with(&self, s: &str) -> bool {
            self.input.as_slice().slice_from(self.pos).starts_with(s)
        }
    
        /// Return true if all input is consumed.
        fn eof(&self) -> bool {
            self.pos >= self.input.len()
        }
    
        // ...
    }
    

    Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

        /// Return the current character, and advance to the next character.
        fn consume_char(&mut self) -> char {
            let range = self.input.as_slice().char_range_at(self.pos);
            self.pos = range.next;
            range.ch
        }
    

    Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

        /// Consume characters until `test` returns false.
        fn consume_while(&mut self, test: |char| -> bool) -> String {
            let mut result = String::new();
            while !self.eof() && test(self.next_char()) {
                result.push_char(self.consume_char());
            }
            result
        }
    

    We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

        /// Consume and discard zero or more whitespace characters.
        fn consume_whitespace(&mut self) {
            self.consume_while(|c| c.is_whitespace());
        }
    
        /// Parse a tag or attribute name.
        fn parse_tag_name(&mut self) -> String {
            self.consume_while(|c| match c {
                'a'..'z' | 'A'..'Z' | '0'..'9' => true,
                _ => false
            })
        }
    

    Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

        /// Parse a single node.
        fn parse_node(&mut self) -> dom::Node {
            match self.next_char() {
                '<' => self.parse_element(),
                _   => self.parse_text()
            }
        }
    
        /// Parse a text node.
        fn parse_text(&mut self) -> dom::Node {
            dom::text(self.consume_while(|c| c != '<'))
        }
    

    An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

        /// Parse a single element, including its open tag, contents, and closing tag.
        fn parse_element(&mut self) -> dom::Node {
            // Opening tag.
            assert!(self.consume_char() == '<');
            let tag_name = self.parse_tag_name();
            let attrs = self.parse_attributes();
            assert!(self.consume_char() == '>');
    
            // Contents.
            let children = self.parse_nodes();
    
            // Closing tag.
            assert!(self.consume_char() == '<');
            assert!(self.consume_char() == '/');
            assert!(self.parse_tag_name() == tag_name);
            assert!(self.consume_char() == '>');
    
            dom::elem(tag_name, attrs, children)
        }
    

    Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

        /// Parse a single name="value" pair.
        fn parse_attr(&mut self) -> (String, String) {
            let name = self.parse_tag_name();
            assert!(self.consume_char() == '=');
            let value = self.parse_attr_value();
            (name, value)
        }
    
        /// Parse a quoted value.
        fn parse_attr_value(&mut self) -> String {
            let open_quote = self.consume_char();
            assert!(open_quote == '"' || open_quote == '\'');
            let value = self.consume_while(|c| c != open_quote);
            assert!(self.consume_char() == open_quote);
            value
        }
    
        /// Parse a list of name="value" pairs, separated by whitespace.
        fn parse_attributes(&mut self) -> dom::AttrMap {
            let mut attributes = HashMap::new();
            loop {
                self.consume_whitespace();
                if self.next_char() == '>' {
                    break;
                }
                let (name, value) = self.parse_attr();
                attributes.insert(name, value);
            }
            attributes
        }
    

    To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

        /// Parse a sequence of sibling nodes.
        fn parse_nodes(&mut self) -> Vec<dom::Node> {
            let mut nodes = vec!();
            loop {
                self.consume_whitespace();
                if self.eof() || self.starts_with("</") {
                    break;
                }
                nodes.push(self.parse_node());
            }
            nodes
        }
    

    Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

    /// Parse an HTML document and return the root element.
    pub fn parse(source: String) -> dom::Node {
        let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();
    
        // If the document contains a root element, just return it. Otherwise, create one.
        if nodes.len() == 1 {
            nodes.swap_remove(0).unwrap()
        } else {
            dom::elem("html".to_string(), HashMap::new(), nodes)
        }
    }
    

    That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.

    Exercises

    Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

    1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

    2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

    3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.

    Shortcuts

    If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

    // <html><body>Hello, world!</body></html>
    let root = element("html");
    let body = element("body");
    root.children.push(body);
    body.children.push(text("Hello, world!"));
    

    Or you can find an existing HTML parser and incorporate it into your program.

    The next article in this series will cover CSS data structures and parsing.

    Planet Mozilla: Presenter tip: animated GIFs are not as cool as we think

    Disclaimer: I have no right to tell you what to do and how to present – how dare I? You can do whatever you want. I am not “hating” on anything – and I don’t like the term. I am also guilty, and will be in the future, of the things I talk about here. So, bear with me: as someone who currently spends most of his life presenting, attending conferences and coaching people to become presenters, I think it is time for an intervention.


    <figure>The hardest part of putting together a talk for developers is finding the funny gifs that accurately represent your topic.</figure>
    The Tweet that started this and its thread

    If you are a technical presenter and you consider adding lots of animated GIFs to your slides, stop, and reconsider. Consider other ways to spend your time instead. For example:

    • Writing a really clean code example and keeping it in a documented code repository for people to use
    • Researching how very successful people use the thing you want the audience to care about
    • Finding a real life example where a certain way of working made a real difference and how it could be applied to an abstract coding idea
    • Researching real numbers to back up your argument or disprove common “truths”

    Don’t fall for the “oh, but it is cool and everybody else does it” trap. Why? Because when everybody does it there is nothing cool or new about it.

    Animated GIFs are ubiquitous on the web right now and we all love them. They are short videos that work in any environment, they are funny and – being very pixelated – have a “punk” feel to them.

    This, to me, was the reason presenters used them in technical presentations in the first place. They were a disruption, they were fresh, they were different.

    We all got bored to tears by corporate presentations that had more bullets than the showdown in a Western movie. We all got fed up with amazingly brushed up presentations by visual aficionados that had just one too many inspiring butterfly or beautiful sunset.


    We wanted something gritty, something closer to the metal – just as we are. Let’s be different, let’s disrupt, let’s show a seemingly unconnected animation full of pixels.

    This is great and still there are many good reasons to use an animated GIF in our presentations:

    • They are an eye catcher – animated things are what we look at as humans. The subconscious check whether something that moves is a saber-toothed tiger trying to eat us is deeply ingrained in us. This can make an animated GIF a good first slide in a new section of your talk: you seemingly do something unexpected, but what you want to achieve is to get the audience to reset and focus on the next topic you’d like to cover.
    • They can be a good emphasis of what you are saying. When Soledad Penades shows a lady drinking under the table (6:05) while talking about her insecurities as someone people look up to, it makes a point. When Jake Archibald explains that navigator.onLine will be true even if the network cable is plugged into some soil (26:00), it is a funny, exciting and simple thing to do and adds to the point he makes.
    • It is an in-crowd thing to do – the irreverence of an animated, meme-ish GIF tells the audience that you are one of them, not a professional, slick and tamed corporate speaker.

    But is it? Isn’t a trick that everybody uses way past being disruptive? Are we all unique and different when we all use the same content? How many more times do we have to endure the “this escalated quickly” GIF, taken from a 10-year-old movie? Let’s not even talk about the issue that we expect the audience to get the reference and why it would be funny.

    We’re running the danger here of becoming predictable and boring. Especially when you see speakers who use an animated GIF and know it wasn’t needed and then try to shoe-horn it somehow into their narration. It is not a rite of passage. You should use the right presentation technique to achieve a certain response. A GIF that is in your slides just to be there is like an unused global variable in your code – distracting, bad practice and in general causing confusion.

    The reasons why we use animated GIFs (or videos for that matter) in slides are also their main problem:

    • They do distract the audience – as a “whoa, something’s happening” reminder to the audience, that is good. When you have to compete with the blinking thing behind you, it is bad. This is especially true when you choose a very “out there” GIF and spend too much time talking over it. A fast animation or a very short loop can get annoying for the audience, and instead of seeing you as a cool presenter they get headaches and think “please move on to the next slide” without listening to you. I made that mistake with my rainbow-vomiting dwarf at HTML5Devconf in 2013 and was called out on Twitter.
    • They are too easy to add – many a time we are tempted just to go for the funny cat pounding a strawberry because it is cool and it means we are different and surprising as a presenter.

    Well, it isn’t surprising any longer and it can be seen as a cheap way out for us as creators of a presentation. Filler material is filler material, no matter how quirky.

    You don’t make a boring topic more interesting by adding animated images. You also don’t make a boring lecture more interesting by sitting on a fart cushion. Sure, it will wake people up and maybe get a giggle but it doesn’t give you a more focused audience. We stopped using 3D transforms in between slides and fiery text as they are seen as a sign of a bad presenter trying to make up for a lack of stage presence or lack of content with shiny things. Don’t be that person.

    When it comes to technical presentations there is one important thing to remember: your slides do not matter and are not your presentation. You are.

    Your slides are either:

    • wallpaper for your talking parts
    • emphasis of what you are currently covering or
    • a code example.

    If a slide doesn’t cover any of these cases – remove it. Wallpaper doesn’t blink. It is there to be in the background and make the person in front of it stand out more. You already have to compete with a lot of other speakers, audience fatigue, technical problems, sound issues, the state of your body and bad lighting. Don’t add to the distractions you have to overcome by adding shiny trinkets of your own making.

    You don’t make boring content more interesting by wrapping it in a shiny box. Instead, don’t talk about the boring parts. Make them interesting by approaching them differently; show a URL and a screenshot of the boring resources and tell people what they mean in the context of the topic you talk about. If you’re bored by something, you can bet the audience is, too. How you come across is how the audience will react. And insincerity is the worst thing you can project. Being afraid, or being shy, or just being informative is totally fine. Don’t try too hard to please a current fashion – be yourself and be excited about what you present and the rest falls into place.

    So, by all means, use animated GIFs when they fit – give humorous and irreverent presentations. But only do it when this really is you and the rest of your stage persona fits. There are masterful people out there doing this right – Jenn Schiffer comes to mind. If you go for this – go all in. Don’t let the fun parts of your talk steal your thunder. As a presenter, you are entertainer, educator and explainer. It is a mix, and as all mixes go, they only work when they feel rounded and in the right rhythm.

    Planet Mozilla: Let's build a browser engine! Part 1: Getting started

    I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles describing my project and how you can make your own. But first, let me explain why.

    You’re building a what?

    Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

    A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

    Why a “toy” rendering engine?

    A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

    Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

    So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

    Try this at home.

    I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

    Before you start, a few remarks on some choices you can make:

    On Programming Languages

    You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

    If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

    On Libraries and Shortcuts

    In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

    I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

    Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

    First Step: The DOM

    Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

    The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

    struct Node {
        // data common to all nodes:
        children: Vec<Node>,
    
        // data specific to each node type:
        node_type: NodeType,
    }
    

    There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

    enum NodeType {
        Text(String),
        Element(ElementData),
    }
    

    An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

    use std::collections::HashMap;

    struct ElementData {
        tag_name: String,
        attributes: AttrMap,
    }

    type AttrMap = HashMap<String, String>;


    Finally, some constructor functions to make it easy to create new nodes:

    fn text(data: String) -> Node {
        Node { children: vec![], node_type: NodeType::Text(data) }
    }

    fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
        Node {
            children: children,
            node_type: NodeType::Element(ElementData {
                tag_name: name,
                attributes: attrs,
            })
        }
    }
    

    And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.
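    To see these pieces working together, here is a sketch that combines the snippets above into a single runnable program, written against current Rust (the article's snippets date from Rust's pre-1.0 days). The `pretty_print` helper is my own addition for illustration, not part of robinson:

```rust
use std::collections::HashMap;

type AttrMap = HashMap<String, String>;

struct Node {
    // data common to all nodes:
    children: Vec<Node>,
    // data specific to each node type:
    node_type: NodeType,
}

enum NodeType {
    Text(String),
    Element(ElementData),
}

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

fn text(data: String) -> Node {
    Node { children: vec![], node_type: NodeType::Text(data) }
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node {
        children,
        node_type: NodeType::Element(ElementData {
            tag_name: name,
            attributes: attrs,
        }),
    }
}

// Walk the tree recursively, printing each node indented by its depth.
fn pretty_print(node: &Node, depth: usize) {
    let pad = "  ".repeat(depth);
    match &node.node_type {
        NodeType::Text(data) => println!("{}#text {:?}", pad, data),
        NodeType::Element(data) => {
            let attrs: String = data
                .attributes
                .iter()
                .map(|(name, value)| format!(" {}=\"{}\"", name, value))
                .collect();
            println!("{}<{}{}>", pad, data.tag_name, attrs);
        }
    }
    for child in &node.children {
        pretty_print(child, depth + 1);
    }
}

fn main() {
    // Build the tree for: <html lang="en"><body>Hello, world!</body></html>
    let mut attrs = AttrMap::new();
    attrs.insert("lang".to_string(), "en".to_string());
    let root = elem("html".to_string(), attrs, vec![
        elem("body".to_string(), AttrMap::new(), vec![
            text("Hello, world!".to_string()),
        ]),
    ]);
    pretty_print(&root, 0);
}
```

    This also doubles as a starting point for the pretty-printing exercise below.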

    Exercises

    These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

    1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

    2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

    3. Write code to pretty-print a tree of DOM nodes.

    References

    For much more detailed information about browser engine internals, see Tali Garsiel’s wonderful How Browsers Work and its links to further resources.

    For example code, here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

    You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

    To be continued

    Bruce Lawson: Reading List

    Standards ‘n’ all that jazz

    French joke corner

    Heard about the French chef who killed himself? He lost the huile d’olive.

    Video corner

    “Coders and hackers, ready to change the world, and the hackathon is the perfect place. But things don’t always go as planned…” by @ourmaninjapan

    <style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; height: auto; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
    <iframe allowfullscreen="allowfullscreen" frameborder="0" src="http://www.youtube.com/embed/8xGrPQG1i0E"></iframe>

    Planet Mozilla: Linux Meetup Montréal and its interest in Mozilla

    Part of yesterday’s group (click to view at full resolution)

    Last night I presented to the Linux Meetup Montréal group, not about Linux, but about Mozilla, Firefox OS, and Firefox itself. The idea was to give a high-level presentation while keeping a technical angle, since several developers were present. With a full room, I had the pleasure of seeing how interested Linux users are in Mozilla. Unfortunately my recording didn’t work, but here are the slides anyway.


    It was a great opportunity to present Firefox OS, since many participants had barely heard of the operating system. I wanted to try a different approach for the third part, the one about Firefox, but it turned out to be unnecessary: almost all of the free-software pros present last night were already using Firefox as their default browser. It was a great evening for my first time at Linux Meetup Montréal, so I invite you to join the group if you have an interest in Linux: no need to be a pro with this OS, everyone is welcome!


    --
    Linux Meetup Montréal and its interest in Mozilla is a post on Out of Comfort Zone by Frédéric Harper

    Steve Faulkner et al: Using the tabindex attribute

    The HTML tabindex attribute is used to manage keyboard focus. Used wisely, it can effectively handle focus within web widgets. Used unwisely, however, the tabindex attribute can destroy the usability of web content for keyboard users.

    The tabindex attribute indicates that an element can be focused on, and determines how that focus is handled. It takes an integer (whole number) as a value, and the resulting keyboard interaction varies depending on whether the integer is positive, negative or 0.

    To understand why the tabindex attribute has such a powerful effect on usability, it’s necessary to know something of the way keyboard interaction works. A keyboard user will typically move through web content using the tab key, moving from one focusable element to the next in sequential order.

    Certain interactive HTML elements like links and form controls are focusable by default. When they’re included in a web page, their sequential order is determined by the source order of the HTML.

    
    <label for="username">Username:</label>
    <input type="text" id="username">
    
    <label for="password">Password:</label>
    <input type="password" id="password">
    
    <input type="submit" value="Log in">
    

    A keyboard user would tab first to the username field, then the password field, and finally to the log in button. All three elements take focus by default, and they’re accessed in the order that they appear in the source code. In other words, there’s no need to explicitly set the tabindex because it’s all handled effortlessly in the browser.

    tabindex=0

    When tabindex is set to 0, the element is inserted into the tab order based on its location in the source code. If the element is focusable by default there’s no need to use tabindex at all, but if you’re repurposing an element like a <span> or <div>, then tabindex=0 is the natural way to include it in the tab order.

    It’s worth mentioning at this point that it’s easier to use a focusable HTML element wherever possible. For example, when you choose to use a <button> or <input type="checkbox">, keyboard focus and keyboard interaction are handled automatically by the browser. When you repurpose other elements to create custom widgets, you’ll need to provide keyboard focus and interaction support manually.
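    As a rough sketch of what that manual work involves (the `makeButton` helper name is my own, not from any library), turning a generic element into a keyboard-operable control means adding it to the tab order and wiring up the keys a native button would handle for you:

```javascript
// Make a generic element (e.g. a <div> or <span>) behave like a button:
// put it in the tab order, give it a button role, and activate it on
// Enter or Space as well as on click.
function makeButton(el, onActivate) {
  el.setAttribute('tabindex', '0');
  el.setAttribute('role', 'button');
  el.addEventListener('keydown', function (event) {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      onActivate(event);
    }
  });
  el.addEventListener('click', onActivate);
}
```

    Even with this in place, a native button element remains the easier option, which is the point made above.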

    tabindex=-1

    When tabindex is set to a negative integer such as -1, the element becomes programmatically focusable, but it isn’t included in the tab order. In other words, it can’t be reached by someone using the tab key to navigate through content, but it can be focused with scripting.

    An example is moving focus to a summary of errors returned by a form. The summary would typically be located at the start of the form, so you want to draw the attention of screen reader/magnifier users to it, and to position all keyboard-only users at the start of the form so they can begin correcting any errors. You don’t want the error summary itself to be included in the tab order of the page though.

    
    <div role="group" id="errorSummary" aria-labelledby="errorSummaryHeading" tabindex="-1">
    <h2 id="errorSummaryHeading">Your information contains three errors</h2>
    <ul>
    ...
    </ul>
    </div>
    
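    The script that actually moves focus isn’t shown above; a minimal version might look like this (`focusErrorSummary` is a hypothetical helper name of my own, and the `errorSummary` id matches the markup above):

```javascript
// After failed validation, move keyboard focus to the error summary.
// This works only because the container has tabindex="-1": focusable
// by script, but not reachable with the tab key.
function focusErrorSummary(doc) {
  var summary = doc.getElementById('errorSummary');
  if (summary) {
    summary.focus();
  }
  return summary;
}
```

    In a real page you would call `focusErrorSummary(document)` from the form’s submit handler when validation fails.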

    tabindex=1+

    It’s when tabindex is set to a positive integer that things get problematic. It imposes a tab order on the content that bears no resemblance to the expected tab order.

    
    <label for="username">Username:</label>
    <input type="text" id="username" tabindex="3">
    
    <label for="password">Password:</label>
    <input type="password" id="password" tabindex="1">
    
    <input type="submit" value="Log in" tabindex="2">
    

    In this example the visual presentation of the form would be as expected: Username and password fields, followed by the log in button. The tab order would make no sense at all however. Focus would move first to the password field, then the log in button, and finally to the username field.

    Things get worse when you realise that the password field would be the first focusable element on the page containing the form. It doesn’t matter how many focusable elements appear in the source order/visual presentation before the password field, the tabindex of 1 means it’ll be the first element to receive focus on the page.

    The tabindex attribute is versatile, and it has the capacity to improve or destroy usability for keyboard-only users. When you think about using the tabindex attribute, keep these things in mind:

    • Use tabindex=0 to include an element in the natural tab order of the content, but remember that an element that is focusable by default may be an easier option than a custom control
    • Use tabindex=-1 to give an element programmatic focus, but exclude it from the tab order of the content
    • Avoid using tabindex=1+.

    Further reading


    When Can I Use: New option to display full prefixes + update on latest web tech features added

    By popular demand it is now possible again to see directly which prefix to use for features that require one. By enabling the Show full prefixes option under Settings, the symbol in each browser version cell is replaced by the required prefix. Without this option it's still possible to see the prefix in the cell's tooltip.

    Additionally, having been busy implementing the redesign and working on the beta site, I forgot to take the time to mention which new features were added to the site. Because there have been so many in the past five months, here they are listed by category:

    CSS

    - CSS Shapes Level 1

    - CSS will-change property

    - CSS Appearance

    - :placeholder-shown CSS pseudo-class

    - Improved kerning pairs & ligatures

    - Blending of HTML/SVG elements

    - CSS text-size-adjust

    - CSS3 image-orientation

    - text-decoration styling

    HTML5

    - Picture element

    - Srcset attribute

    - seamless attribute for iframes

    - Custom Elements

    - HTML Imports

    - Multiple file selection

    DOM

    - Web Animations API

    - relList (DOMTokenList)

    - DOMContentLoaded

    JS APIs & Other

    - Base64 encoding and decoding

    - Resource Timing

    - Speech Synthesis API

    - Proximity API

    - Ambient Light API

    - WOFF 2.0 - A better web font compression format

    It's also worth noting that the default feature list now shows the latest added feature first, so if you just click on the "Can I use" text with no other filters used, you'll see the latest feature on top.

    Bruce Lawson: Reading List

    Standards and tech

    Freedom corner

    Lonely hearts’ corner

    Readers who are single may find this 80s dating video helpful. Invite me to the wedding, please.

    <style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; height: auto; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
    <iframe allowfullscreen="allowfullscreen" frameborder="0" src="http://www.youtube.com/embed/0bomkgXeDkE"></iframe>

    Steve Faulkner et al: filing bugs


    One of the ways to get things fixed in browsers, or at least to understand why things aren't or won't be fixed, is to file a bug. I spend a fair bit of time filing and commenting on bugs. (Updated to include a reference to the Opera bug report wizard.)

    Filing bugs:

    Firefox

    Example bug: expose heading level in acc layer based on outline depth not heading numeric value

    Use the Firefox Bugzilla; you need to register to file. Read the bug writing guidelines.

    For accessibility bugs, Mozilla Accessibility Engineer Marco Zehe advises:

    Use product Core, Component Disability Access APIs, and we’ll take it from there.

    “Firefox: Disability Access” will also work, but usually we assign it to the other product/component straight away anyway.

    Internet Explorer

    Use Internet Explorer Feedback; you need to sign up for a Microsoft account (if you don't have one) to file. Read the bug reporting guidelines.

    Advice from David Storey:

    If you can:

    1. Also include a reduction that just shows the issue.
    2. Include the real-world site it affects, since (at least for IE and Opera in my experience) a bug that affects real-world sites has higher priority because it breaks things.

    Safari

    Example bug: HTML indeterminate IDL attribute not mapped to checkbox value=2

    Use the WebKit Bugzilla, which has an accessibility-related category; you need to sign up for a WebKit Bugzilla account to file. Read the bug writing guidelines.

    Chrome

    You need a Google account to file bugs.

    Example bug: hr element not exposed with a separator role

    Opera

    Use the Opera bug report wizard (though note that, once filed, you won’t be able to check back on the bug’s status…but if it gets fixed, it’ll appear in the relevant Opera release notes).

    Writing bugs

    As an example of things to do when writing a bug report, the IE bug reporting guide is excerpted below:

    Before Filing a New Bug Report

    Ensure you’re running the latest version of the browser, since your issue may already have been resolved in a newer version.

    Search for existing bugs in the bug database to see if your issue has already been reported.

    Try to identify specific steps that would allow us to consistently reproduce the issue you’re seeing, if possible.

    Please file separate bug reports for each problem encountered.

    Key Components of a Good Bug Report

    Title

    A good bug report has a clear and concise title (usually less than 15 words). Summarizing your problem concisely improves our ability to understand and fix bugs.

    Good: “[browser version XX] Unable to type in search box of www.example-url.com”

    Bad: “Search functionality broken”

    Description

    A good bug report clearly explains what you were expecting to see, and how it differs from what you actually saw happen. We’ve found that both of these perspectives are valuable and help us fix more bugs. For example, describing your expectation can help us fill in gaps and fix bugs that otherwise might not be reproducible, since bugs sometimes depend on external factors like the language of the underlying operating system.

    Selecting the correct area

    A good bug report has the correct area selected. We want to get your bug report to the right developer as quickly as possible, and selecting the correct area in the form helps.  Since some issues might fall under more than one area, try searching in the bug database to find similar bugs in that area.

    Precise, reproducible steps

    A good bug report is descriptive and easy to follow. A clear step-by-step guide that lets us reproduce a bug is often the single most important factor in determining whether we can fix a bug or not. When we aren’t able to reproduce the bug, it will often be resolved as “Not Repro”. Clearer and more detailed bug reports greatly improve our ability to understand the underlying issues and respond accordingly.

    Screenshots & attachments

    A good bug report may include screenshots (or even videos). For example, you can use screenshots to guide us through relatively complex steps or to highlight where we need to look to see the issue you’re reporting.

    Also check out webcompat.com – Bug reporting for the internet

    A great new service/initiative that makes it really easy to report a bug you have found with a browser or web site.

    How It Works

    1. Report a bug for any website or browser.
    2. The webcompat team of volunteers diagnoses the bug.
    3. A fix is then sent to the site owner or browser.

    Footnotes

    Updated: .  Michael(tm) Smith <mike@w3.org>