Planet Mozilla: Service Worker and the grand re-architecture proposal of Firefox OS Gaia

TL;DR: Service Worker, a new Web API, can be used as a means of re-engineering client-side web applications and as a departure from the single-page web application paradigm. The details of realizing that are being experimented with on Gaia and proposed. In Gaia particularly, the “hosted packaged app” is proposed as a new iteration of the security model work, to make sure Service Workers work with Gaia.

Last week, I spent an entire week in face-to-face meetings going through the technical plans for re-architecting Gaia apps, the web applications that power the front-end of Firefox OS, and the management plan for resourcing and deployment. Given that there were only a few developers in the meeting, and the public promise of “the new architecture”, I think it makes sense to recap what’s being proposed and which challenges are already foreseen.

Using Service Worker

Before diving into the re-architecture plan, we need to explain what Service Worker is. From a broader perspective, Service Worker can be understood simply as a browser feature/Web API that allows web developers to insert a JavaScript-implemented proxy between the server content and the actual page shown. It is the latest piece of sexy Web technology, heavily marketed by Google. The platform engineering team at Mozilla is committed to shipping it as well.

Many things previously not possible can be done with the worker proxy. For starters, it could replace AppCache while keeping the flexibility of managing the cache in the hands of the app. The “flexibility” bit is the part where it gets interesting — theoretically everything not touching the DOM can be moved into the worker — effectively re-creating the server-client architecture without a real remote HTTP server.
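
To make the proxy idea concrete, here is a minimal sketch of a Service Worker that answers requests from a cache and falls back to the network. The cache name and file list are illustrative, not taken from any Gaia code:

// sw.js: cache a few files at install time, then answer requests
// from the cache, falling back to the network for everything else.
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open('app-shell-v1').then(function(cache) {
      return cache.addAll(['/', '/app.js', '/app.css']);
    })
  );
});

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      return cached || fetch(event.request);
    })
  );
});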

The Gaia Re-architecture Plan

Indeed, that’s what the proponents of the re-architecture are aiming for — my colleagues, most of whom are based in Paris, proposed such an architecture as the 2015 iteration of, and departure from, the “traditional” single-page web application. What’s more, the intention is to create a framework where the backend, or “server” part of the code, is individually contained in its own worker threads, with strong interface definitions to achieve maximum reusability of these components — much like Web APIs themselves, if I understand it correctly.

It is not, however, tied to a specific front-end framework. Users of the proposed framework should be free to use any strategy they feel comfortable with — the UI can be as hardcore as entirely rendered in WebGL, or simply plain HTML/CSS/jQuery.

The plan has been made public on a Wiki page, where I expect there will be changes as progress is made. This post intentionally does not cover many of the features the architecture promises to unlock, in favor of fresh content (as opposed to copy-editing), so I recommend readers check out the page.

Technical Challenges around using Service Workers

There are two major technical challenges: one is the possible performance impact (memory and cold-launch time) of fitting this multi-threaded framework and its binding middleware into a phone; the other is the security model changes needed to make the framework usable in Gaia.

To speak about the backend, or “server”, side: the one key difference between real remote servers and workers is that one lives in a data center with an endless power supply, while the other depends on your phone battery. Remote servers can push constructed HTML as soon as possible, but a local web app backed by workers might need to wait for the worker to spin up. For that, the architecture might depend on yet another out-of-spec feature of Service Worker: a cache that the worker thread has control of. The browser should render such pre-constructed HTML without waiting for the worker to launch.
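
As an illustration of that idea (the cache name and content here are invented, and whether the browser may serve the result without spinning up the worker is precisely the out-of-spec part), a worker could pre-render HTML into a cache like this:

// Store a ready-made response that the browser could serve on the
// next launch, before any worker code runs.
caches.open('prerendered').then(function(cache) {
  return cache.put('/inbox', new Response('<h1>Inbox (3)</h1>', {
    headers: { 'Content-Type': 'text/html' }
  }));
});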

Without considering the cache feature, and considering the memory usage, we kind of get to the point where we can only speak for sure about performance once there is an implementation to measure. The other solution the architecture proposes, to work around this on low-end phones, would be to “merge back” the back-end code into one single thread, although I personally suspect a risk of timing issues, as doing so would essentially require the implementation to simulate multi-threading in a single thread. We will just have to wait for the real implementation.

The security model part is really tricky. Gaia currently exists as packaged zips shipped on the phone and updated via OTA images, pinning the Gecko version it ships along with. Packaging is one sad workaround that has been with us since Firefox OS v1.0 — the primary reasons for it are (1) we want to make sure proprietary APIs do not pollute the general Web, and (2) we want a trusted third party (Mozilla) involved in security decisions for users, by checking and signing the contents.

The current Gecko implementation of Service Worker does not work with classic packaged apps, which are served from an app: URL. Incidentally, the app: URL is something we feel is not webby enough, so we are trying to get rid of it. The proposal of the week is called “hosted packaged apps”, which serves packages from the real, remote Web and allows referencing content in the package directly with a special path syntax. We can’t get rid of packages yet, for the reasons stated above, but serving content over HTTP should allow us to use Service Worker from trusted content, i.e. Gaia.

One thing to note about this mix is that a signed package is offline by default in its own right, and its updates must be signed as well. The Service Worker spec will be violated a bit in order to make the two work well together — a detail currently being worked out.

Technical Challenges on the proposed implementation

As already mentioned in the paragraphs on Service Worker challenges, even one worker might introduce performance issues, let alone many workers, and each worker thread implies memory usage as well. For that, the proposal is for the framework to start up and shut down threads (i.e. parts of the app) as necessary. But again, we will have to wait for the implementation and evaluate it.

The proposed framework asks for restricting Web API access to the “back-end” only, to decouple the UI from the back-end as far as possible. However, having few Web APIs available in the worker threads will be a problem. The framework proposes to work around this with a message routing bus that sends the calls back to the UI thread, while asking Gecko to implement more APIs in workers over time.
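
A hedged sketch of what such a routing bus could look like, with a message format invented purely for illustration: the worker asks the UI thread to perform a window-only API call (vibration, here) on its behalf.

// In the worker thread: request an API call the worker cannot make itself.
self.postMessage({ type: 'api-call', method: 'vibrate', args: [200] });

// In the UI thread: perform the call, keeping the worker decoupled from it.
var worker = new Worker('backend.js'); // placeholder script name
worker.onmessage = function(event) {
  var msg = event.data;
  if (msg.type === 'api-call' && msg.method === 'vibrate') {
    navigator.vibrate.apply(navigator, msg.args);
  }
};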

As an abstraction over platform worker threads, and an attempt to absorb platform/component changes, the architecture deserves special attention to the classic abstraction problems: abstraction eventually leaks, and abstraction always comes with overhead, whether that is runtime performance overhead or the human cost of learning and debugging the abstraction. I am not the expert here; Joel is.

Technical Challenges on enabling Gaia

Arguably, Gaia is one of the most complex web projects in the industry. We inherit a strong Mozilla tradition of continuous integration. The architecture proposal calls for a strong separation between the front-end application codebase and the back-end application codebase — including separate integration of the two when building for different form factors. The integration plan itself is something worth rethinking in order to meet such a requirement.

With hosted packaged apps, the architecture proposal unlocks the possibility of deploying Gaia from the Web, instead of always shipping it with the OTA image. How to match Gaia/Gecko versions all the way down to every Nightly build is something to figure out too.

Conclusion

Given that everything is in flux, and given the immense amount of work (as outlined above), it’s hard to achieve any of the end goals without prioritizing the internals and landing/deploying them separately. Last week it was already concluded that parts of the security model changes will block Service Worker usage in signed packages — we will need to identify those parts and resolve them first. It’s also important to make sure the implementation does not suffer from any performance issues before deploying the code and starting the major work of revamping every app. We should be able to figure out a scaled-back version of the work and realize that first.

If we can plan and manage the work properly, I remain optimistic about the technical outcome of the architecture proposal. I trust my colleagues, particularly those who made the proposal, to make reasonable technical judgements. It’s been years since the introduction of the single-page web application — it is indeed worth rethinking what’s possible if we depart from it.

The key here is to try not to do all the things at once, but to strengthen what’s working and amend what’s not, along the way of turning the proposal into a usable implementation.

Planet Mozilla: No more excuses – a “HTML5 now” talk at #codemotion Rome

Yesterday I closed up the “inspiration” track of Codemotion Rome with a talk about the state of browsers and how we as developers make it much too hard for ourselves. You can see the slides on Slideshare and watch a screencast on YouTube.

Planet Mozilla: Pasting into contenteditable elements in Firefox for Android, ~*wowowowowow*~

Bug 783846 landing in Nightly means that Firefox for Android users—starting in version 39—can finally paste into contenteditable elements, which is huge news for the mobile-html5-responsive-shadow-and-or-virtual-dom-contenteditable apps crowd, developers and users alike.

"That's amazing! Can I cut text from contenteditable elements too?", you're asking yourself. The answer is um no, because we haven't fixed that yet. But if you wanna help me get that working come on over to bug 1112276 and let's party. Or write some JS and fix the bug or whatever.

Fun fact, when I told my manager I was working on this bug in my spare time he asked, "…Why?". 🎉

Planet Mozilla: Refresh HTTP Header

Through discussions on whatwg, I learned (or I had just forgotten) about the Refresh HTTP header. Let's cut straight to the syntax:

HTTP/1.1 200 OK
Refresh: 5; url=http://www.example.org/fresh-as-a-summer-breeze

where

  • 5 here means 5 seconds.
  • url= gives the destination where the client should head after 5 seconds.

Simon Pieters (Opera) says in that mail:

I think Refresh as an HTTP header is not specified anywhere, so per spec
it shouldn't work. However I think browsers all support it, so it would be
good to specify it.

Eric Law (ex-Microsoft) has written about The Performance Impact of META REFRESH. If we express the previous HTTP header in HTML, we get:

<meta http-equiv="refresh" content="5;url=http://www.example.org/fresh-as-a-summer-breeze" />

In his blog post, Eric talks about people using refresh to… well, refresh the page. He means loading the exact same page over and over again. And indeed it means the browser creates a certain number of "unconditional and conditional HTTP requests to revalidate the page’s resources" for each reload (refresh).

On the Web Compatibility side of things, I see the <meta http-equiv="refresh" …/> used quite often.

<meta http-equiv="refresh" content="0;url=http://example.com/there" />

Note the 0. Probably the result of sysadmins unwilling to touch the configuration of the servers, and so front-end developers taking the lead to "fix it" instead of using an HTTP 302 or HTTP 301. Anyway, most of the time it is used for redirecting to another domain name or URI. The Refresh HTTP header, on the other hand, I don't remember seeing that often.

Should it be documented?

Simon says: "it would be good to specify it." I'm not so sure. First things first.

Testing

Let's create a test by making a page send a Refresh header.

Header set Refresh "0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4"

which gives

HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 200
Content-Type: text/html; charset=utf-8
Date: Thu, 26 Mar 2015 05:48:57 GMT
ETag: "c8-5122a67ec0240"
Expires: Thu, 02 Apr 2015 05:48:57 GMT
Keep-Alive: timeout=5, max=100
Last-Modified: Thu, 26 Mar 2015 05:37:05 GMT
Refresh: 0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4

This should redirect to this Fresh page

  • Yes - Firefox 36.0.4
  • Yes - Opera 29.0.1795.26
  • Yes - Safari 8.0.4 (10600.4.10.7)
  • Yes - IE11
  • Yes - Chrome (something) said Hallvord ;)

If someone could confirm the exact Chrome version, at least.

Browser Bugs?

On the Mozilla bug tracker, there are a certain number of bugs around refresh. This bug about inline resources is quite interesting and might indeed need to be addressed if there were documentation. The bug is about what the browser should do when the Refresh HTTP header is set on an image included in a Web page (this could be another test). For now, the refresh is not done for inline resources. Then what about scripts, stylesheets, JSON files, HTML documents in iframes, etc.? In the SetupRefreshURIFromHeader code, there are Web Compatibility hacks in the source code of Firefox. We can read:

// Also note that the seconds and URL separator can be either
// a ';' or a ','. The ',' separator should be illegal but CNN
// is using it."

also:

// Note that URI should start with "url=" but we allow omission

and… spaces!

// We've had at least one whitespace so tolerate the mistake
// and drop through.
// e.g. content="10 foo"

Good times…

On the WebKit bug tracker, I found another couple of bugs, but about meta refresh and not specifically Refresh:. I'm not sure whether it's handled by WebCore or elsewhere in Mac OS X (NSURLRequest, NSURLConnection, …). If someone knows, tell me. I haven't explored the source code yet.

On the Chromium bug tracker, there are another couple of bugs about meta refresh, with some interesting ones such as this person complaining that a space doesn't work instead of a ;. This is also tracked on WebKit. Something like:

<meta http-equiv="refresh" content="0 url=http://example.com/there" />

There is also the question of what should be done with a relative URL:

<meta http-equiv="refresh" content="0;url=/there" />

But for Chromium, I have not found anything really specific to the Refresh header. I haven't explored the source code yet.

The Opera bug tracker, on the other hand, is still closed. We tried to open it up when I was working there, and it didn't work out.

Competition Of Techniques

Then you can also imagine the hierarchy of commands in a case like this:

HTTP/1.1 301 Moved Permanently
Refresh: 0;url=http://example.net/refresh-header
Location: http://example.net/location

<!DOCTYPE html>
<html>
<title>Fresh</title>
<meta http-equiv="refresh" content="0;url=http://example.net/meta" />
<body onload="document.location.replace('http://example.net/body')">

</body>
</html>

My guess is that the 301 with the Location HTTP header always wins, or at least that's what I hope.

History

I can find very early references to meta refresh, such as in the Netscape Developer documentation.

The earliest mention seems to be An Exploration Of Dynamic Documents. I can't find the documentation for the Refresh HTTP header anywhere on old Netscape Web sites. (Thanks to the SecuriTeam Web site and Amit Klein.)

So another thing you obviously want to do, in addition to causing the current document to reload, is to cause another document to be reloaded in n seconds in place of the current document. This is easy. The HTTP response header will look like this:

Refresh: 12; URL=http://foo.bar/blatz.html

In June 1996, Jerry Jongerius posted about HTTP/1.1 Refresh header field comments

My concern with "Refresh" is that I do not want it to be a global concept (a browser can only keep track of one refresh)--it looks to be implemented this way in Netscape 2.x. I would like "Refresh" to apply to individual objects (RE: the message below to netscape).

to which Roy T. Fielding replied:

Refresh is no longer in the HTTP/1.1 document -- it has been deferred to HTTP/1.2 (or later).

Should it be documented? Well, there are plenty of issues, and there are plenty of hacks around it; I have only scratched the surface. Maybe it would indeed be worth documenting how it works as implemented now, and how it is supposed to work where there is no interoperability. If I were silly enough, maybe I would do it. HTTP, archeology and Web Compatibility issues: that seems close enough to my vices.

Otsukare!

IEBlog: Updates from the “Project Spartan” Developer Workshop

Today we’re excited to host some of our top web site partners, enterprise developers and web framework authors at the Microsoft Silicon Valley campus for a "Project Spartan" developer workshop to get an early look at Windows 10’s new default browsing experience as it rapidly approaches a public preview. This is another step in our renewed focus on reaching out and listening to the developer community we depend on, in keeping with the focus on openness and feedback-driven development that is driving initiatives like status.modern.ie and our Windows Insider Program.


Charles Morris at the "Project Spartan" Developer Workshop


If you’re interested in attending a similar event to learn more about “Project Spartan,” there are some great opportunities coming up. We’ll have lots to say about Project Spartan at Build 2015 (April 29th – May 1st in San Francisco) and Microsoft Ignite (May 4th – 8th in Chicago). We’re also excited to announce an all-new Windows 10 Web Platform Summit hosted by the Project Spartan team, which will be open to the public on May 5-6, 2015 at the Microsoft Silicon Valley Campus. Stay tuned to the blog and @IEDevChat for more information on how to register!

A simpler browser strategy in Windows 10

One of the items we’re discussing in today’s workshop is how we are incorporating feedback from the community into the work we are doing on Project Spartan, including some updates we are making related to the rendering engines. When we announced Project Spartan in January, we laid out a plan to use our new rendering engine to power both Project Spartan and Internet Explorer on Windows 10, with the capability for both browsers to switch back to our legacy engine when they encounter legacy technologies or certain enterprise sites.

However, based on strong feedback from our Windows Insiders and customers, today we’re announcing that on Windows 10, Project Spartan will host our new engine exclusively. Internet Explorer 11 will remain fundamentally unchanged from Windows 8.1, continuing to host the legacy engine exclusively.


Rendering engines in Project Spartan and Internet Explorer 11 on Windows 10


We’re making this change for a number of reasons:

  • Project Spartan was built for the next generation of the Web, taking the unique opportunity provided by Windows 10 to build a browser with a modern architecture and service model for Windows as a Service. This clean separation of legacy and new will enable us to deliver on that promise. Our testing with Project Spartan has shown that it is on track to be highly compatible with the modern Web, which means the legacy engine isn’t needed for compatibility.
  • For Internet Explorer 11 on Windows 10 to be an effective solution for legacy scenarios and enterprise customers, it needs to behave consistently with Internet Explorer 11 on Windows 7 and Windows 8.1. Hosting our new engine in Internet Explorer 11 has compatibility implications that impact this promise and would have made the browser behave differently on Windows 10.
  • Feedback from Insiders and developers indicated that it wasn’t clear what the difference was between Project Spartan and Internet Explorer 11 from a web capabilities perspective, or what a developer would need to do to deliver web sites for one versus the other.

We feel this change simplifies the role of each browser. Project Spartan is our future: it is the default browser for all Windows 10 customers and will provide unique user experiences including the ability to annotate on web pages, a distraction-free reading experience, and integration of Cortana for finding and doing things online faster. Web developers can expect Project Spartan’s new engine to be interoperable with the modern Web and remain “evergreen” with no document modes or compatibility views introduced going forward.

For a small set of sites on the Web that were built to work with legacy technologies, we’ll make it easy for customers to access that site using Internet Explorer 11 on Windows 10. Enterprises with large numbers of sites that rely on these legacy technologies can choose to make Internet Explorer 11 the default browser via group policy. In addition, since Internet Explorer 11 will now remain fundamentally unchanged from Windows 7 and Windows 8.1, it will provide a stable and predictable platform for enterprise customers to upgrade to Windows 10 with confidence.

Call to action for developers

Our request to web developers remains the same – try out and test our new rendering engine in the Windows 10 Technical Preview via the Windows Insider Program or via RemoteIE. It is currently hosted in Internet Explorer and can be activated via the “Enable experimental web platform features” setting in about:flags. Starting in the next flight to Insiders, the new rendering engine will be removed from IE and available exclusively within Project Spartan.

We look forward to your feedback – you can reach us on Twitter at @IEDevChat, the Internet Explorer Platform Suggestion Box on UserVoice, and in the comments below. Remember to mark your calendars for our next Project Spartan developer event on May 5th – 6th in Silicon Valley. We look forward to sharing more details soon!

Kyle Pflug, Program Manager, Project Spartan

IEBlog: Partnering with Adobe on new contributions to our web platform

In recent releases, we’ve talked often about our goal to bring the team and technologies behind our web platform closer to the community of developers and other vendors who are also working to move the Web forward. This has been a driving motivation behind our emphasis on providing better developer tools, resources for cross-browser testing, and more ways than ever to interact with the "Project Spartan" team.

In the same spirit of openness, we’ve been making changes internally to allow other major Web entities to contribute to the growth of our platform, as well as to allow our team to give back to the Web. In the coming months we’ll be sharing some of these stories, beginning with today’s look at how Adobe’s Web Platform Team has helped us make key improvements for a more expressive Web experience in Windows 10.

Adobe is a major contributor to open source browser engines such as WebKit, Blink, and Gecko. In the past, it was challenging for them (or anyone external to Microsoft) to make contributions to the Internet Explorer code base. As a result, while Adobe improved the Web platform in other browsers, it couldn't bring the same improvements to Microsoft's platform. This changed a few months ago when Microsoft made it possible for the Adobe Web Platform Team to contribute to Project Spartan. The team contributes in the areas of layout, typography, graphic design and motion, with significant commits to the Web platform. Adobe engineers Rik Cabanier, Max Vujovic, Sylvain Galineau, and Ethan Malasky have provided contributions in partnership with engineers on the IE team.

Adobe contributions in the Windows 10 March Technical Preview

The Adobe Web Platform Team hit a significant milestone with their first contribution landing in the March update of the Windows 10 Technical Preview! The feature is support for CSS gradient midpoints (aka color hints), described in the upcoming CSS Images spec. With this feature, a Web developer can specify an optional location between the color stops of a CSS gradient. At that point, the color will be exactly halfway between the colors of the two surrounding stops. Other colors along the gradient line are calculated using an exponential interpolation function, as described by the CSS spec.

Syntax:

linear-gradient(90deg, black 0%, 75%, yellow 100%)
radial-gradient(circle, black 0%, 75%, yellow 100%)

CSS Gradients in the Windows 10 Technical Preview

You can check this out yourself on this CSS Gradient Midpoints demo page. Just install the March update to the Windows 10 Technical Preview, and remember to enable “Experimental Web Platform Features” in about:flags to turn on the new rendering engine. This change brings IE to the same level as WebKit Nightly, Firefox Beta and Chrome.

Another change that Adobe has recently committed is full support for <feBlend> blend modes. The W3C Filter Effects spec extended <feBlend> to support all the blend modes of the CSS Compositing and Blending specification. Our new engine will now support these new values, like the other major browsers.

New blend modes expand existing values normal, multiply, screen, overlay, darken and lighten with color-dodge, color-burn, hard-light, soft-light, difference, exclusion, hue, saturation, color and luminosity.

To use the new modes just specify the desired mode in the <feBlend> element. For example:

<feBlend mode='luminosity' in2='SourceGraphic' />

Internet Explorer 11

feBlend in Internet Explorer 11

Project Spartan

feBlend in Project Spartan on the Windows 10 Technical Preview

You can try this out today at Adobe's CodePen demo in Internet Explorer on the Windows 10 Technical Preview by selecting "Enable Experimental Web Platform Features" under about:flags.

We are just getting started

Congratulations to the Adobe Web Platform Team on their first commit! We are looking forward to a more expressive Web and moving the Web platform forward! Let us know what you think via @IEDevChat or in the comments below.

— Bogdan Brinza, Program Manager, Project Spartan

Planet WebKit: Carlos García Campos: WebKitGTK+ 2.8.0

We are excited and proud to announce WebKitGTK+ 2.8.0, your favorite web rendering engine, now faster, even more stable and with a bunch of new features and improvements.

Gestures

Touch support is one of the most important features missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it’s now much more pleasant to use a WebKitWebView on a touch screen. For now only the basic gestures are implemented: pan (scrolling by dragging from any point of the WebView), tap (handling clicks with the finger) and zoom (zooming in/out with two fingers). We plan to add more touch enhancements like kinetic scrolling, overshoot feedback animation, text selection, long press, etc. in future versions.

HTML5 Notifications


Notifications are now transparently supported by WebKitGTK+, using libnotify by default. The default implementation can be overridden by applications to use their own notification system, or simply to disable notifications.

WebView background color

There’s new API to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view's parent window has an RGBA visual, we can even have transparent colors.


A new WebKitSnapshotOptions flag has also been added to be able to take web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).

User script messages

The communication between the UI process and the Web Extensions is something that we have always left to the users, so that everybody can use their own IPC mechanism. Epiphany and most of the apps use D-Bus for this, and it works perfectly. However, D-Bus is often too much for simple cases where there are only a few messages sent from the Web Extension to the UI process. User script messages make these cases a lot easier to implement, and they can be used from JavaScript code or using the GObject DOM bindings.

Let’s see how it works with a very simple example:

In the UI process, we register a script message handler using the WebKitUserContentManager and connect to the “script-message-received” signal for the given handler:

webkit_user_content_manager_register_script_message_handler (user_content, 
                                                             "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);

Script messages are received in the UI process as a WebKitJavascriptResult:

static void
foo_message_received_cb (WebKitUserContentManager *manager,
                         WebKitJavascriptResult *message,
                         gpointer user_data)
{
        char *message_str;

        message_str = get_js_result_as_string (message);
        g_print ("Script message received for handler foo: %s\n", message_str);
        g_free (message_str);
}

Sending a message from the web process to the UI process using JavaScript is very easy:

window.webkit.messageHandlers.foo.postMessage("bar");

That will send the message “bar” to the registered foo script message handler. It’s not limited to strings: we can pass any JavaScript value to postMessage() that can be serialized. There’s also a convenient API to send script messages in the GObject DOM bindings:

webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, 
                                                            "foo", "bar");

 

Who is playing audio?

WebKitWebView now has a boolean read-only property, is-playing-audio, that is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that :-)


HTML5 color input

The color input element is now supported by default, so instead of rendering a text field for manually entering the color as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK color chooser dialog. As usual, the public API allows overriding the default implementation to use your own color chooser. MiniBrowser uses a popover, for example.


APNG

APNG (Animated PNG) is a PNG extension that makes it possible to create animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.


SSL

The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server-side issues, incorrectly banning SSL 3.0 record packet versions, but they could be worked around in WebKitGTK+.

WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible for subresources to fail due to TLS errors, when using a connection different from the main resource's. WebKitGTK+ 2.8 gained the WebKitWebResource::failed-with-tls-errors signal, emitted when a subresource load fails because of an invalid certificate.

Ciphersuites based on RC4 are now disallowed when performing TLS negotiation, because it is no longer considered secure.

Performance: bmalloc and concurrent JIT

bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already used it in the Mac and iOS ports for some time with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc which drastically improved the overall performance.

Concurrent JIT was not enabled in the GTK (and EFL) ports for no apparent reason. Enabling it also had an amazing impact on performance.

Both performance improvements were very noticeable on the performance bot.


 

The first jump on 11th Feb corresponds to the bmalloc switch, while the other jump on 25th Feb is when concurrent JIT was enabled.

Plans for 2.10

WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.

  • More security: mixed content for most resource types will be blocked by default. New API will be provided for managing mixed content.
  • Sandboxing: seccomp filters will be used in the different secondary processes.
  • More performance: FTL will be enabled in JavaScriptCore by default.
  • Even more performance: this time in the graphics side, by using the threaded compositor.
  • Blocking plugins API: new API to provide full control over the plugin load process, allowing plugins to be blocked/unblocked individually.
  • Implementation of the Database process: to bring back IndexedDB support.
  • Editing API: full editing API to allow using a WebView in editable mode with all editing capabilities.

Planet Mozilla: This Week in Rust 75

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

79 pull requests were merged in the last week, and 9 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Johannes Oertel
  • kjpgit
  • Nicholas
  • Paul ADENOT
  • Sae-bom Kim
  • Tero Hänninen

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

<mbrubeck> the 5 stages of loss and rust
<mbrubeck> 1. type check. 2. borrow check. 3. anger. 4. acceptance. 5. rust upgrade

Thanks to jdm for the tip. Submit your quotes for next week!

Planet Mozilla: YouTube, MSE and Firefox 37

Being invisible is important when you build infrastructure. You don't notice what your browser does for you unless it is doing a poor job. We have been busy making Firefox video playback more robust, more asynchronous, faster and better.

You may have some memory of selecting between high and low quality on YouTube. When you switched, it would stop the video and buffer it at the new quality. Now it defaults to Auto, but allows you to manually override. You may have noticed that the Auto mode doesn't stop playing when it changes quality. Nobody really noticed, but a tiny burden was lifted from users: you need to know exactly one less thing to watch videos on YouTube and many other sites.

This Auto mode is otherwise known as DASH, which stands for Dynamic Adaptive Streaming over HTTP. Flash has supported adaptive streaming for some time. In HTML5 video, DASH is supported on top of an API called MSE (Media Source Extensions). MSE allows JavaScript to directly control the data going into the video element. This allows DASH to be supported in JavaScript, along with some other things that I'm not going to go into.
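
A minimal sketch of what driving a video element through MSE looks like; the MIME type and segment URL here are placeholders, and a real DASH player would pick segments based on measured bandwidth:

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
  var buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/segments/360p-000.mp4'); // placeholder segment URL
  xhr.responseType = 'arraybuffer';
  xhr.onload = function() {
    buffer.appendBuffer(xhr.response); // feed the data into the element
  };
  xhr.send();
});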


It has taken a surprising amount of work to make this automatic. My team has been working on adding MSE to Firefox for a couple of years now, as well as adding MP4 support on a number of platforms. We're finally getting to the point where it works really well on Windows Vista and later in Firefox 37 Beta. I know people will ask, so: MSE is coming soon for Mac, and it is coming later for Linux and Windows XP.

Making significant changes isn't without its pain but it is great to finally see the light at the end of the tunnel. Firefox beta, developer edition and nightly users have put up with a number of teething problems. Most of them have been sorted out. I'd like to thank everyone who has submitted a crash report, written feedback or filed bugs. It has all helped us to find problems and make the video experience better.

Robustness goes further than simply fixing bugs. To make something robust it is necessary to keep simplifying the design and to create re-usable abstractions. We've switched to using a thread pool for decoding, which keeps the number of threads down. Threads use a lot of address space, which is at a premium in a 32 bit application.

We've used a promise-like abstraction to make many things non-blocking. They make chaining asynchronous operations much simpler. They're like Javascript promises, except being C++ they also guarantee you get called back on the right thread.
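
The JavaScript analogue of that pattern looks like this (the function names are invented and stubbed purely for illustration); the C++ version additionally guarantees which thread each callback runs on:

// Stubbed for illustration; a real decoder would do actual work.
function openDecoder() {
  return Promise.resolve({
    decodeFrame: function() { return Promise.resolve('frame'); }
  });
}

openDecoder()
  .then(function(decoder) { return decoder.decodeFrame(); })
  .then(function(frame) { console.log('painting', frame); })
  .catch(function(error) { console.error(error); });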

We're working towards getting all the complex logic on a single thread, with all the computation done in a thread pool. Putting the video playback machinery on a single thread makes it much clearer which operations are synchronous and which ones are asynchronous. It doesn't hurt performance as long as the state machine thread never blocks. In fact you get a performance win because you avoid locking and cache contention.

We're whitelisting MSE for YouTube at first, but we intend to roll it out to other sites soon. There are a couple of spec compliance issues that we need to resolve before we can remove the whitelist. Meanwhile, YouTube is looking really good in Firefox 37.



Steve Faulkner et al: Current Standards Work at W3C

People regularly ask me what I am working on at the W3C. Here is a rundown of the standards/guidance documents I am editing/co-editing or contributing to (note: I am only one of the people at The Paciello Group involved directly in standards development at the W3C).

HTML 5.1

This specification defines the 5th major version, first minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). I am editing updates to, and maintaining, the (mainly) accessibility-related advice and requirements, with an emphasis on information for web developers.

ARIA in HTML

An HTML 5.1 specification module defining the web developer rules for the use of ARIA attributes on HTML 5.1 elements. It also defines requirements for conformance checking tools. In HTML 5.1, this spec replaces the web developer (author) conformance requirements in section 3.2.7 WAI-ARIA of the HTML5 spec (titled 3.2.7 WAI-ARIA and HTML Accessibility API Mappings in HTML 5.1).

HTML Accessibility API Mappings 1.0

Defines how user agents map HTML 5.1 elements and attributes to platform accessibility application programming interfaces (APIs). This spec replaces (and extends to all HTML elements/attributes) the user agent implementation requirements in section 3.2.7 WAI-ARIA of the HTML5 Recommendation (titled 3.2.7 WAI-ARIA and HTML Accessibility API Mappings in HTML 5.1).

ARIA in SVG1.1

This specification defines the web developer (author) rules (conformance requirements) for the use of ARIA attributes on SVG 1.1 elements. It also defines the requirements for Conformance Checking tools.

Custom Elements

This specification describes the method for enabling the author to define and use new types of DOM elements in a document.

Editing the Custom Element Semantics section of the specification.

Best Practice

HTML5: Techniques for providing useful text alternatives

This document contains best practice guidance for authors of HTML documents on providing text alternatives for images. I edited it until October 2014; the bulk of this document is included in the HTML5 and HTML 5.1 specifications under section 4.8.5.1 Requirements for providing text to act as an alternative for images, which I continue to update and maintain.

Using WAI-ARIA in HTML

This document is a practical guide for developers on how to add accessibility information to HTML elements using the Accessible Rich Internet Applications (ARIA) specification.

 

 

IEBlog: Rendering engine updates in March for the Windows 10 Technical Preview

Based on feedback from Windows Insiders, we are working to release preview builds more often. Today we flighted the first update to Insiders on this accelerated cadence, which includes the latest updates to our new rendering engine. Due to the change in cadence, this build does not yet include the Project Spartan preview, which will be available in the next release.

Today’s build has a number of updates to the new engine, including new features and improvements to existing features. Some of these include:

In addition, you may notice some features partially implemented and available for testing in Internet Explorer under about:flags. These features are under active development and will continue to evolve in future preview builds.

Watch this space over the next week as we’ll be diving into these in more detail in a series of individual posts. In the meantime, you can try these improvements out in the latest preview by signing up for the Windows Insiders program and joining the “Fast” update ring. To enable the new engine in Internet Explorer on preview builds, navigate to about:flags and select “Enable experimental Web platform features.” We'll also be updating RemoteIE with the new preview soon. Don’t forget to share your feedback via the Internet Explorer Platform Suggestion Box on UserVoice, @IEDevChat on Twitter, and in the comments below.

Kyle Pflug, Program Manager, Project Spartan

Planet Mozilla: Priority of constituencies

Since the HTML design principles (which are effectively design principles for modern Web technology) were published, I've thought that the priority of constituencies was among the most important. It's certainly among the most frequently cited in debates over Web technology. But I've also thought that it was wrong in a subtle way.

I'd rather it had been phrased in terms of utility, so that instead of stating as a rule that value (benefit minus cost) to users is more important than value to authors, it recognized that there are generally more users than authors, which means that a smaller value per user multiplied by the number of users is generally more important than a somewhat larger value per author, because it provides more total value when the value is multiplied by the number of people it applies to. However, this doesn't hold for a very large difference in value, that is, one where multiplying the cost and benefit by the numbers of people they apply to yields results where the magnitude of the cost and benefit control which side is larger, rather than the numbers of people. The same holds for implementors and specification authors; there are generally fewer in each group. Likewise, the principle should recognize that something that benefits a very small portion of users doesn't outweigh the interests of authors as much, because the number of users it benefits is no longer so much greater than the number of authors who have to work to make it happen.
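
A toy calculation, with numbers invented purely for illustration, shows how the multiplication can flip which side wins:

\underbrace{10^6}_{\text{users}} \times \underbrace{1}_{\text{benefit each}} = 10^6
\qquad > \qquad
\underbrace{10^4}_{\text{authors}} \times \underbrace{50}_{\text{cost each}} = 5 \times 10^5

Here the users' total wins despite the smaller per-person value; but if the per-author cost grows to 500, the authors' total becomes 5 × 10^6 and outweighs the users' total, so the magnitude of the values, not the head count, decides.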

Also, the current wording of the principle doesn't consider the scarcity of the smaller groups (particularly implementors and specification authors), and thus the opportunity costs of choosing one behavior over another. In other words, there might be a behavior that we could implement that would be slightly better for authors, but would take more time for implementors to implement. But there aren't all that many implementors, and they can only work on so many things. (Their number isn't completely fixed, but it can't be changed quickly.) So given the scarcity of implementors, we shouldn't consider only whether the net benefit to users is greater than the net cost to implementors; we should also consider whether there are other things those implementors could work on in that time that would provide greater net benefit to users. The same holds for scarcity of specification authors. A good description of the principle in terms of utility would also correct this problem.

Planet Mozilla: Renaming your window.Request global

If you've defined your own global window.Request object and have users running Firefox 39 and Chrome 42 (and Opera and soon others), you're gonna have a bad time (because they ship with the Fetch API which defines its own Request class, obvs.).

Webcompat issue #793 details how dailymotion.com breaks (thankfully the videos of awesome Japanese public toilets still work, but all the sidebar content is missing) because they define their own Request object.

Uncaught TypeError: Request.getHashParams is not a function.

So, anyways. If you're defining your own window.Request your code is going to break and you should pick a new global identifier. Here's a few suggestions inspired by mid-March conference synergy-fests in Austin, TX:

  • window.Oppressed
  • window.WaspsNest
  • window.Unimpressed
  • window.SouthBySouthDepressed

Picking any one of those should fix the bugs you're about to have.

Bruce Lawson: Reading List

Planet Mozilla: Advancing JavaScript without breaking the web

Current advancements in ECMAScript are a great opportunity, but also a challenge for the web. Whilst adding new, important features we’re also running the danger of breaking backwards compatibility.

These are my notes for a talk I gave at the MunichJS meetup last week. You can see the slides on Slideshare and watch a screencast on YouTube. There will also be a recording of the talk available once the organisers are done with post-production. Update: the video of the talk is live on YouTube.

JavaScript – the leatherman of the web

Rainbow unicorn kitten: an accurate visualisation of the versatility of JavaScript

JavaScript is an excellent tool, made for the web. It is incredibly flexible, light-weight, has a low learning threshold and a simple implementation mechanism. You add a SCRIPT element to an HTML document, include some JS directly or link to a JS file and you are off to the races.

JavaScript needs no compilation step, and is independent of IDE or development environment. You can write it in any text editor, be it Notepad, VI, Sublime Text, Atom, Brackets, or even using complex IDEs like Eclipse, Visual Studio or whatever else you use to put text into a file.

JavaScript – the enabler of new developers

JavaScript doesn’t force you to write organised code. From a syntax point of view, and when it comes to type safety and memory allocation, it is an utter mess. This made JavaScript the success it is now. It is a language used in client environments like browsers and apps. For example, you can script Illustrator and Photoshop with JavaScript, and you can now also automate OS X with it. Using Node or io.js, you can use JavaScript server-side and write APIs and bespoke servers. You can even run JS directly on hardware.

The forgiving nature of JS is what made it the fast-growing success it is. It allows people to write quick and dirty things and get a great feeling of accomplishment. It drives a fast-release economy of products. PHP did the same thing server-side when it came out. It is a templating language that grew into a programming language because people used it that way, and it was easier to implement than Perl or Java at the time.

JavaScript broke with conventions and challenged existing best practices. It didn’t follow an object-oriented approach, and its prototypical nature and scope trickery can make it look like a terribly designed hack to people who come from an OO world. It can also appear to be a very confusing construct of callbacks and anonymous functions to people who come to it from CSS or the design world.

But here’s the thing: every one of these people is cordially invited to write JavaScript – for better or worse.

JavaScript is here to stay

The big success of JavaScript amongst developers is that it has been available in every browser since we moved on from Lynx. It is out there, and people use it and – in many cases – rely on it. This is dangerous, and I will come back to that later, but it is a fact we have to live with.

As with everything distributed on the web, once it is out there, there is no way to get rid of it again. We also cannot dictate that our users use a different browser that supports another language or runtime we prefer. The fundamental truth of the web is that the user controls the experience. That’s what makes the web work: you write your code for the Silicon Valley dweller on an 8-core, state-of-the-art mobile device with an evergreen and very capable browser, a fast wireless connection and much money to spend. The same code, however, should work for the person who saved up their money to have half an hour in an internet cafe in an emerging country, on a Windows XP machine with an old Firefox, connected over a very slow and flaky connection. Or the person whose physical condition makes them unable to see, speak, hear or use a mouse.

Our job is not to tell that person off for failing to keep up with the times and upgrade their hardware. Our job is to use our smarts to write intelligent solutions: solutions that test which of their parts can execute and only give those parts to that person. Web technologies are designed to be flexible and adaptive, and if we don’t understand that, we shouldn’t pretend that we are web developers.

The web is a distributed system of many different consumers. This makes it a very hostile development environment, as you need to prepare for a lot of breakage and unknowns. It also makes it the platform that reaches many more people than any more defined and closed environment ever could. It is also the one that allows the next consumers to get to us. Its hardware independence means people don’t have to wait for the availability of devices. All they need is something that speaks HTTP.

New challenges for JavaScript

This is all fine and well, but we have reached a point in the evolution of the web where we use JavaScript to such an extent that we need to start organising it better. It is possible to hack together large applications and even server-side solutions in JavaScript, but in order to control and maintain them we need to consider writing cleaner JavaScript and be more methodical in our approach. We could invent new ways of using it. There is no shortage of that happening, seeing that new JavaScript frameworks and libraries are published almost weekly. Or we could try to tweak the language itself to play more by the rules that have proven themselves over decades in other languages.

In other words, we need JavaScript to be less forgiving. This means we will lose some first-time users, as stricter rules are less fun to follow. It also means, though, that people coming from other, higher and more defined languages can start using it without having to re-educate themselves. Seeing that there is a need for more JavaScript developers than the job market can deliver, this doesn’t sound like a bad idea.

JavaScript – the confused layer of the web

Whilst JS is a great solution for making the web respond more immediately to our users, it is also very different from the other players, markup and style sheets. Both of those are built to be forgiving, without stopping execution when encountering an error.

A browser that encounters an unknown element shrugs, doesn’t do anything to it, and moves on in the DOM to the next element it does understand and knows what to do with. The HTML5 parser, encountering an unclosed or wrongly nested element, will fix the issue under the hood and move on, turning the DOM into an object collection and a visual display.

A CSS parser encountering a line with a syntax error or a selector it doesn’t understand skips that instruction and moves on to the next line. This is why we can use browser-prefixed values like -webkit-gradient without having to test whether the browser really is WebKit.

JavaScript isn’t that way. When a script encounters a syntax error, or you try to access a method, object or property that doesn’t exist, it stops executing and throws an error. This makes sense, seeing that JavaScript is much more powerful than the others and can even replace them. You are perfectly able to create a web product with a single empty BODY element and let JavaScript do the rest.
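
A two-line illustration of the difference (frobnicate is a deliberately undefined function here): because the first statement throws, the second never runs, and the page stays broken rather than partially working.

frobnicate(); // ReferenceError: frobnicate is not defined; execution stops
document.body.textContent = 'You will never see this.';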

JavaScript – playing it safe by enhancing progressively

This makes JavaScript a less reliable technology than the other two. We punish our end users for mistakes made by us, the developers. Ironically, this is exactly the reason why we shunned XHTML and defined HTML5 as its successor.

Many things can go wrong when you rely on JavaScript, and end users deliberately turning it off is a ridiculously small part of that. That’s why it is good practice not to rely on JavaScript, but instead to test for it and enhance a markup and page-reload based solution when and if the browser was able to load and execute our script. This is called progressive enhancement, and it has been around for a long time. We even use it in the physical world.

When you build a house and the only way to get to the higher floors is a lift, the house is broken when the lift stops working. If you also have stairs to get up there, the house still functions. Of course, people need to put in more effort to get up, and it is not as convenient. But it is possible. We even have moving stairs called escalators that give us convenience and a fall-back option. A broken-down escalator is a set of stairs.

Our code can work the same. We build stairs and we move them when the browser can execute our JavaScript and we didn’t make any mistakes. We can even add a lift later if we wanted to, but once we built the stairs, we don’t need to worry about them any more. Those will work – even when everything else fails.

JavaScript – setting a sensible baseline

The simplest way to ensure our scripts work is to test for the capabilities of the environment. We can achieve that with a very simple IF statement. By testing for properties and objects only available in newer browsers, we can block out those we don’t want to support any longer. As we created an HTML/server solution to support those, this is totally acceptable and a very good idea.

There is no point punishing ourselves as developers by having to test in browsers used by a very small percentage of our users, browsers that aren’t even available on our current operating systems any longer. By not giving these browsers any JavaScript, we have them covered. We don’t bother them with functionality that the hardware they run on is most likely not capable of supporting anyway.

The developers at the BBC call this “cutting the mustard” and have published a few articles on it. The current test used to not bother old browsers is this:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // Capable browser. 
  // Let's add JavaScript functionality
}

Recently, Jake Archibald of Google found an even shorter version to use:

if ('visibilityState' in document) {
  // Capable browser. 
  // Let's add JavaScript functionality
}

This prevents JavaScript from running in Internet Explorer older than 10 and in Android stock browsers based on WebKit. It is also extensible to other upcoming browser technologies, and simple to tweak to your needs:

if ('visibilityState' in document) {
  // Capable browser. 
  // Let's load JavaScript
  if ('serviceWorker' in navigator) {
    // Let's add offline support
    navigator.serviceWorker.register('sw.js', {
      scope: './'
    });
  }
}

This, however, fails to work when we start changing the language itself.

Strict mode – giving JavaScript new powers

In order to make JavaScript safer and cleaner, we needed its parser to be less forgiving. To ensure we didn’t break the web by flagging up all the mistakes developers made in the past, we needed to find a way to opt in to these stricter parsing rules.

A rather ingenious way of doing that was to add the “use strict” parser instruction. This means that we precede our scripts with a simple string followed by a semicolon. For example, the following JavaScript doesn’t cause an error in a browser:

x = 0;

The lenient parser doesn’t care that the variable x wasn’t declared; it just sees a new x and defines it. If you use strict mode, the browser doesn’t stay as calm about this:

'use strict';
x = 0;

In Firefox’s console you’ll get a “ReferenceError: assignment to undeclared variable x”.

This opt-in works to advance JavaScript to a more defined and less memory-consuming language. In a recent presentation, Andreas Rossberg of the V8 team at Google proposed using this to advance JavaScript to safer and cleaner versions called SaneScript and subsequently SoundScript. These are all just proposals and – after legitimate complaints from the mental health community – the name has now been changed to StrongScript. Originally the idea was to opt in to this new parser using a string called ‘use sanity’, which is cute, but also arrogant and insulting to people suffering from cognitive problems. As you can see, advancing JS isn’t easy.

ECMAScript – changing the syntax

Opting in to a new parser with a string, however, doesn’t work when we change the syntax of the language. And this is what we are doing right now with ECMAScript, which is touted as the future of JavaScript and covered in many a post and conference talk. For a history lesson on all of this, check out Florian Scholz’s talk at jFokus called “What’s next for JavaScript – ES6 and beyond”.

ECMAScript 6 has a few excellent new features. You can see all of them in the detailed documentation on MDN, and this post also has a good overview. To name just a few: it brings classes to JavaScript, sanitises scope issues, allows for template strings that span several lines and support variable replacement, adds promises, and does away with the need for a lot of anonymous functions.
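As a quick taste of the new syntax, here is a small sketch combining several of those features (names like Greeter and wait are made up for illustration):

// A class with a template string doing variable replacement
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return `Hello, ${this.name}!`;
  }
}

// An arrow function returning a promise – no function keyword needed
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

wait(500).then(() => console.log(new Greeter('web').greet()));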

It does, however, change the syntax of JavaScript, and by including it in a document or putting it inside a script element in a browser that doesn’t support it, all you do is create a JavaScript error.

There is no progressive enhancement way around this issue, and an opt-in string doesn’t do the job either. In essence, we break the backwards compatibility of scripting on the web. This wouldn’t be a big issue if browsers supported ES6, but we’re not quite there yet.
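About the only escape hatch is to compile the new syntax from a string at runtime, which is hardly progressive enhancement; a hedged sketch of that trick:

// A parse error happens before any code runs, so a try/catch around
// inline ES6 in the same script can't help. Parsing the new syntax
// from a string, however, throws a catchable error in old engines:
var supportsArrows = false;
try {
  new Function('(x) => x');  // SyntaxError in pre-ES6 engines
  supportsArrows = true;
} catch (e) {
  // Legacy engine – load or keep the ES5 version of the code instead
}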

ES6 support and what to do about it

The current support table for ECMAScript 6 in browsers, parsers and compilers doesn’t look too encouraging. A lot of it is red, and in many cases it is unknown whether the makers of the products we rely on to run JavaScript will take the plunge.

In the case of browsers, the ECMAScript test suite to run your JavaScript engine against is publicly available on GitHub. That also means you can run your current browser against it and see how it fares.

If you want to help with the adoption of ECMAScript in browsers, please contribute to this test suite. It is the one suite all of them test against, and the better the tests we supply, the more reliable our browsers will become.

Ways to use the upcoming ECMAScript right now

The very nature of the changes to ECMAScript makes it more or less impossible to use across browsers and other JavaScript-consuming tools. As a lot of the changes to the language are syntax errors in today’s JavaScript, and the parser is not lenient about them, advancing the language means writing code that is erroneous in legacy environments.

If we consider the use cases of ECMAScript, this is not that much of an issue. Many of the problems solved by the new features are either enterprise problems that only pay high dividends when you build huge projects, or features needed by upcoming browser functionality (like, for example, promises).

The changes mostly mean that JS gets real OO features, becomes more memory-optimised, and turns into a more enticing compile target for developers coming from other languages. In other words, they are targeted at an audience that is not likely to start writing code from scratch in a text editor, but one that already comes from a build environment or IDE.

That way we can convert the code to JavaScript in a build process or on the fly. This is nothing shocking these days – after all, we do the same when we convert Sass to CSS or Jade to HTML.

Quite some time ago, new languages like TypeScript were introduced that give us the functionality of ECMAScript 6 now. Another tool is Babel.js, which even has a live editor that lets you see what your ES6 code gets converted to in order to run in legacy environments.
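As a rough illustration of what such a conversion does (exact output varies by tool and version), here is an ES6 one-liner and the kind of ES5 a transpiler emits for it:

// ES6 source
const greet = (name) => `Hello, ${name}!`;

// Roughly what a transpiler like Babel.js emits for legacy environments
var greet = function (name) {
  return 'Hello, ' + name + '!';
};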

Return of the type attribute?

Another way to get around the issue of browsers not supporting ECMAScript 6 and choking on the new syntax could be to use a type attribute. Whenever you add a type value to a script element that the browser doesn’t understand, it skips the element and doesn’t bother the script engine with its content. In the past we used this to create HTML templates, and Microsoft had its own JavaScript derivative called JScript, which gave you much more power over Windows internals than JavaScript did.

One way to ensure that all of us could use the ECMAScript of now and tomorrow safely would be to get browsers to support a type of ‘ES’ or something similar. The question is whether that is really worth it, given the problems ECMAScript is trying to solve.

We’ve moved on in the web development world from embedding scripts and view-source to development toolchains and debugging tools in browsers. If we stick with those, switching from JavaScript to ES6 is much less of an issue than trying to get browsers to either parse or ignore syntactically wrong JavaScript.

Update: Axel Rauschmayer proposes something similar for ECMAScript modules: a MODULE element, with a SCRIPT of type “module” as a fallback for older browsers.
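A hedged sketch of what that markup could look like; both the MODULE element and the module type value are proposals at this point, not shipped features, and app.js is a made-up file name:

<!-- Proposed: a dedicated element for module code -->
<module src="app.js"></module>

<!-- Proposed fallback: browsers skip script elements whose type they
     don't recognise, so older ones would simply ignore this -->
<script type="module" src="app.js"></script>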

It doesn’t get boring

In any case, this is a good time to chime in when there are discussions about ECMAScript. We need to ensure that we are using new features when they make sense, not because they sound great. The power of the web is that everybody is invited to write code for it. There is no 100% right or wrong.

Bruce LawsonReading List

Anne van KesterenStatement regarding the URL Standard

The goal of the URL Standard is to reflect where all implementations will converge. It should not describe today’s implementations as that will not lead to convergence. It should not describe yesterday’s implementations as that will also not lead to convergence. And it should not describe an unreachable ideal, e.g. by requiring something that is known to be incompatible with web content.

This is something all documents published by the WHATWG have in common, but I was asked to clarify this for the URL Standard in particular. Happy to help!

Steve Faulkner et alTPG at CSUN 2015

The 30th Annual International Technology and Persons with Disabilities conference (otherwise known as CSUN) gets underway on Monday 2nd March 2015. Several of the TPG team will be there, and here’s where you’ll find us if you’d like to say hello or hang out for a bit.

Monday 2nd March

Beyond Code and Compliance: Integrating Accessibility Across The Development Life Cycle

When:
9am PST (full day pre-conference workshop).
Where:
Hillcrest AB.
Who:
Billy Gregory, Hans Hillen, Henny Swan, Karl Groves, Léonie Watson, Mike Paciello, Shane Paciello and Steve Faulkner.

TPG will explore ways for integrating accessibility into software development. Whether you work in a large or small organization, follow an agile or waterfall process, are experienced or just starting out, this workshop will guide you toward reliably developing accessible and usable products.

Tuesday 3rd March

Implementing ARIA and HTML5 into Modern Web Applications

When:
1.30pm PST (half day pre-conference workshop).
Where:
Hillcrest AB.
Who:
Hans Hillen and Steve Faulkner.

In this afternoon session we will discuss how ARIA and HTML5 can be utilized to create modern, accessible web applications. The session complements the morning’s “Introduction to ARIA and HTML5” session by continuing with more advanced topics and hands-on examples. We recommend attending both half-day sessions as a full-day workshop.

Wednesday 4th March

Web Components: Background, opportunities and challenges

When:
1.20pm PST.
Where:
Cortez Hill C.
Who:
Karl Groves with Alice Boxhall (Google).

Web Components are a potential paradigm shift for the way we build websites. We will explain the technologies involved and discuss the accessibility challenges faced.

Do we need to change the web accessibility game plan (Redux)?

When:
1.20pm PST.
Where:
Gaslamp AB.
Who:

 Léonie Watson with John Foliot (JP Morgan Chase), Jared Smith (WebAIM), Glenda Sims (Deque) and Jennison Asuncion (Linked In).

Revisiting the 2011 CSUN session of the same name, we look back four years to evaluate predictions, successes and failures, and to reset the state of our state.

Screen readers, browsers and HTML: The current state of play

When:
1.20pm PST.
Where:
Hillcrest AB.
Who:
Steve Faulkner and Charlie Pike, with Sarita Sharan (CA Technologies).

Introducing a research project into screen reader and browser interaction with HTML elements, including a review of findings using slides and live demonstrations.

Thursday 5th March

Accessible graphics with SVG

When:
9am PST.
Where:
Seaport B (IBM suite).
Who:
Léonie Watson, with Rich Schwerdtfeger (IBM), Fred Esch (IBM), Doug Schepers (W3C), Jason White (ETS), Markku Häkkinen (ETS) and Charles McCathie Nevile (Yandex).

A look at the future of accessible graphics on the web and the possibility space of SVG content.

CEO Roundtable: The Future of Web Accessibility

When:
1.20pm PST.
Where:
Solana Beach AB.
Who:
Mike Paciello with David Wu (AISquared) and Tim Springer (SSB-Bart).

A panel discussion of accessibility industry leaders, addressing needs and challenges faced by website owners who try to provide accessibility and inclusion on their websites.

Moving the Digital Accessibility Needle: Updates & Perspectives from US DOJ & Access Board

When:
2.20pm PST.
Where:
Mission Beach AB.
Who:
Mike Paciello with Rebecca Bond (DoJ) and Gretchen Jacobs (Access Board).

Two of the nation’s leading agencies – the US Access Board and the Department of Justice – will explain important updates to Section 508 and the ADA, as well as share their organizational perspectives concerning accessibility to ICT.

30th CSUN Anniversary Celebration

When:
7pm PST.
Where:
Seaport Ballroom D/E.
Who:
Billy Gregory, Charlie Pike, Deb Rapsis, Graeme Coleman, Hans Hillen, Henny Swan, Karl Groves, Léonie Watson, Mike Paciello, Shane Paciello and Steve Faulkner.

Please join us for an exciting evening of entertainment to celebrate the 30th anniversary of the CSUN Conference. Geri Jewell, one of our past keynote speakers, will serve as the program’s emcee, and performances by musician and humorist Mark Goffeney and comedian Chris Fonseca are sure to make this a night to remember. The celebration is sponsored by IBM and The Paciello Group, and will continue with a reception sponsored by Amazon following the program.

Friday 6th March

WAI-ARIA: Common pitfalls and solutions with the Viking and the Lumberjack

When:
9am PST.
Where:
Cortez Hill B.
Who:
Billy Gregory and Karl Groves.

With great power comes great responsibility. Learn how to avoid WAI-ARIA anti-patterns with Karl “The Viking” Groves and Billy “The Lumberjack” Gregory.

What’s in a name? Accessible name computation

When:
10am PST.
Where:
Cortez Hill B.
Who:
Billy Gregory and Karl Groves.

Assistive technologies convey names for UI controls according to an algorithm. This session discusses how this algorithm works, and ways developers get it wrong.

The secret life of an accessible media player

When:
10am PST.
Where:
Hillcrest CD.
Who:
Henny Swan.

A journey around the web looking at what makes both an accessible and usable multimedia player.

Interaction notifier: Making RIA interaction accessible to everyone

When:
1.20pm PST.
Where:
Gaslamp CD.
Who:
Hans Hillen and Léonie Watson.

Users are not always aware of interaction models in Rich Internet Applications. To address this, “Interaction Notifier” adds discoverable contextual documentation to rich web content.

Managing remediation of accessibility web defects in a large enterprise

When:
1.20pm PST.
Where:
Gaslamp AB.
Who:
Karl Groves with Daniel Frank (Wells Fargo Bank).

The authors describe the practical implementation of a weighting and scoring methodology for web property accessibility defects at a large business enterprise.

Lessons learned: Dynamic content updates and ARIA live regions

When:
2.20pm PST.
Where:
Gaslamp CD.
Who:
Hans Hillen with Jennifer Gauvreau (CGI) and Elizabeth Whitmer (CGI).

TPG/CGI will share techniques for dealing with dynamic content updates and discuss challenges with inconsistent ARIA Live Region support by browser and AT products.

Implementing Accessibility In a Widely Distributed Web Based Visualization and Analysis Platform

When:
3.20pm PST.
Where:
Hillcrest AB.
Who:
Mike Paciello and Graeme Coleman, with Dr. Georges Grinstein (U-Mass Lowell) and Franck Kamayou (U-Mass Lowell).

We present a solution, based on previous research, that allows a system to automatically analyse a line chart visualization, extract its intended message, and present it to blind and low vision consumers. We cover previous advancements in this area, an implemented prototype of the proposed solution, and a description of the platform in which it was built, along with a discussion of the implications of this research and future work.

Saturday 7th March

SS12 Code for a Cause Finals – Project Possibility

When:
9am PST.
Where:
Mission Beach C.
Who:
Mike Paciello with students from CSUN, USC, UCLA.

This exciting event will host the innovative open source projects the top teams from CSUN, UCLA and USC have created. A continental breakfast will be served following the presentations and judging, prior to the announcement of the First Place Team. We encourage you to mark your calendars for this important occasion to support the student teams and the time and work they have invested.

Planet MozillaBringing Native Games to the Web is About to get a Whole Lot Easier

GDC 2015 is a major milestone in a long term collaboration between Mozilla and the world’s biggest game engine makers. We set out to bring high performance games to the Web without plugins, and that goal is now being realized. Unity Technologies is including the WebGL export preview as part of their Unity 5 release, available today. Epic Games has added a beta HTML5 exporter as part of their regular binary engine releases. This means plugin-free Web deployment is now in the hands of game developers working with these popular tools. They select the Web as their target platform and, with one click, they can build to it. Now developers can unlock the world’s biggest open distribution platform leveraging two Mozilla-pioneered technologies, asm.js and WebGL.

What has changed?

Browser vendors are moving to reduce their dependency on plugins for content delivery, with Chrome planning to drop support for NPAPI entirely. Developers such as King, Humble Bundle, Game Insider, and Zynga are using Emscripten to bring their C and C++ based games to the Web. Disney has shipped Where’s My Water on Firefox OS, which was ported using the same technology. Emscripten allows developers to cross-compile their native games to asm.js, a subset of JavaScript that can be optimized to run at near native speeds. However, this approach to Web delivery can be challenging to use, and most of these companies have been working with in-house engines to achieve their goals. This has put some of the most advanced Web deployment techniques out of reach of the majority of developers, until now.
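For the curious, asm.js is still plain JavaScript, just written in a rigid, annotated style that engines can compile ahead of time; a tiny hand-written sketch (real Emscripten output is vastly larger):

function AsmModule(stdlib) {
  'use asm';              // opt in to asm.js validation
  function add(a, b) {
    a = a | 0;            // annotate parameters as 32-bit integers
    b = b | 0;
    return (a + b) | 0;   // annotate the return value as a 32-bit integer
  }
  return { add: add };
}

var add = AsmModule(window).add;
console.log(add(2, 3));   // 5 – near native speed in optimizing engines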

The technology is spreading

Browser support for the underlying Web standards is growing. WebGL has now spread to all modern browsers, both desktop and mobile. We are seeing all browsers optimize for asm.js-style code, with Firefox and Internet Explorer committed to advanced optimizations.

“With the ability to reach hundreds of millions of users with just a click, the Web is a fantastic place to publish games,” said Andreas Gal, CTO of Mozilla. “We’ve been working hard at making the platform ready for high performance games to rival what’s possible on other platforms, and the success of our partnerships with top-end engine and game developers shows that the industry is taking notice.”

Handwritten JavaScript games: can you spot the difference?

At GDC, Mozilla will be showcasing a few amazing examples of HTML5 games built with handwritten JavaScript. The Firefox booth will include a demonstration of a truly ubiquitous title called Tanx, developed by PlayCanvas. It runs on multiple desktop and mobile platforms, and can even be played inside an iOS WebView launched within Twitter. Gamepad and multiplayer support are also part of the experience. Mozilla will also be featuring The Marvelous Miss Take by Wonderstruck and Turbulenz. The title will soon ship on Firefox Marketplace and is available on Steam today. For Steam distribution, the HTML5 application is packaged as a native application, but you would be hard pressed to know it.

Not done yet

Mozilla is committed to advancing what is possible on the Web. While the platform is already capable of running great game experiences, there is plenty of potential still to be unlocked. This year’s booth showcase will include some bleeding-edge technologies such as WebGL 2 and WebVR, as well as updated developer tools aimed at game and Web developers alike. These tools will be demonstrated in our recently released 64-bit version of Firefox Developer Edition. Mozilla will also be providing developers access to SIMD and experimental threading support. Developers are invited to start experimenting with these technologies, now available in Firefox Nightly Edition. Visit the booth to learn more about Firefox Marketplace, now available in our Desktop, Android, and Firefox OS offerings as a distribution opportunity for developers.

To learn more about Mozilla’s presence at GDC, read articles from the developers on the latest topics, or learn how to get involved, visit games.mozilla.org or come see us at South Hall Booth #2110 till March 6th. For press inquiries please email press@mozilla.com.
