TL;DR: Service Worker, a new Web API, can be used as a means of re-engineering client-side web applications, and a departure from the single-page web application paradigm. The details of realizing that are being experimented with on Gaia and proposed. In Gaia particularly, the “hosted packaged app” is served as a new iteration of the security model work, to make sure Service Workers work with Gaia.
Last week, I spent an entire week in face-to-face meetings, going through the technical plans for re-architecting the Gaia apps (the web applications that power the front-end of Firefox OS) and the management plan for resourcing and deployment. Given that there were only a few developers in the meeting, and given the public promise of “the new architecture”, I think it makes sense to do a recap of what is being proposed and what challenges are already foreseen.
Using Service Worker
Many things previously not possible can be done with the worker proxy. For starters, it could replace AppCache while keeping the flexibility of managing the cache in the hands of the app. The “flexibility” bit is the part where it gets interesting: theoretically everything not touching the DOM can be moved into the worker, effectively re-creating the server-client architecture without a real remote HTTP server.
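To make the idea concrete, here is a minimal sketch of the kind of cache-managing fetch proxy a Service Worker enables; the cache name and URL list are illustrative, not from any Gaia code:

```javascript
// Illustrative cache-first Service Worker: precache an app shell at install
// time, then answer matching fetches from the cache. The cache name and
// URL list are made up for this sketch.
const CACHE_NAME = 'app-shell-v1';
const PRECACHE_URLS = ['/', '/index.html', '/app.js'];

// Pure helper: is this path one we manage ourselves?
function isPrecached(pathname) {
  return PRECACHE_URLS.includes(pathname);
}

// The handlers below only run inside a Service Worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  self.addEventListener('fetch', (event) => {
    const url = new URL(event.request.url);
    if (isPrecached(url.pathname)) {
      // Cache first, falling back to the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    }
  });
}
```

Unlike AppCache's declarative manifest, every caching decision above is ordinary application code, which is exactly the flexibility the proposal builds on.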
The Gaia Re-architecture Plan
Indeed, that’s what the proponents of the re-architecture are aiming for: my colleagues, most of whom are based in Paris, proposed such an architecture as the 2015 iteration of, and departure from, the “traditional” single-page web application. What’s more, the intention is to create a framework where the backend, or “server” part of the code, is individually contained in its own worker threads, with strong interface definitions to achieve maximum reusability of these components, much like Web APIs themselves, if I understand it correctly.
It is not, however, tied to a specific front-end framework. Users of the proposed framework should be free to use any strategy they feel comfortable with: the UI can be as hardcore as entirely rendered in WebGL, or simply plain HTML/CSS/jQuery.
The plan has been made public on a Wiki page, where I expect there will be changes as progress is made. This post intentionally does not cover many of the features the architecture promises to unlock, in favor of fresh content (as opposed to copy-editing), so I recommend readers check out the page.
Technical Challenges around using Service Workers
There are two major technical challenges: one is the possible performance impact (memory and cold-launch time) of fitting this multi-threaded framework and its binding middleware into a phone; the other is the security model changes needed to make the framework usable in Gaia.
To speak of the backend, “server” side: the one key difference between real remote servers and workers is that one lives in data centers with an endless power supply, while the other depends on your phone battery. Remote servers can push constructed HTML as soon as possible, but a local web app backed by workers might need to wait for the worker to spin up. For that, the architecture might depend on yet another out-of-spec feature of Service Worker: a cache that the worker thread has control of. The browser should render such pre-constructed HTML without waiting for the worker to launch.
Leaving the cache feature aside, and considering the memory usage, we kind of get to a point where we can only say anything for sure about performance once there is an implementation to measure. The other solution the architecture proposes, to work around this on low-end phones, is to “merge back” the back-end code into one single thread, although I personally suspect the risk of timing issues, as doing so would essentially require the implementation to simulate multi-threading in a single thread. We will just have to wait for the real implementation.
The security model part is really tricky. Gaia currently exists as packaged zips shipped on the phone and updated with OTA images, pinned to the Gecko version it ships along with. Packaging has been a sad workaround since Firefox OS v1.0. The primary reasons for doing so are that (1) we want to make sure proprietary APIs do not pollute the general Web, and (2) we want a trusted third party (Mozilla) to be involved in security decisions for users, by checking and signing the content.
The current Gecko implementation of Service Worker does not work with the classic packaged apps, which are served from an app: URL. Incidentally, the app: URL is something we feel is not webby enough, so we are trying to get rid of it. The proposal of the week is called “hosted packaged apps”, which serves packages from the real, remote Web and allows referencing content in the package directly with a special path syntax. We can’t get rid of packages yet, for the reasons stated above, but serving content over HTTP should allow us to use Service Workers from trusted content, i.e. Gaia.
One thing to note about this mix is that a signed package is offline by default in its own right, and its updates must be signed as well. The Service Worker spec will be violated a bit in order to make them work well together; it’s a detail currently being worked out.
Technical Challenges on the proposed implementation
As already mentioned in the paragraph on Service Worker challenges, one worker might introduce performance issues, let alone many workers. Each worker thread implies additional memory usage as well. For that, the proposal is for the framework to control the start-up and shut-down of threads (i.e. parts of the app) as necessary. But again, we will have to wait for the implementation and evaluate it.
The proposed framework asks for Web API access to be restricted to the “back-end” only, to decouple the UI from the platform as far as possible. However, having few Web APIs available in the worker threads will be a problem. The framework proposes to work around this with a message routing bus, sending the calls back to the UI thread, while asking Gecko to expose more APIs to workers over time.
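A minimal sketch of what such a routing bus could look like; all names here are hypothetical, and a real bridge would be asynchronous over postMessage between the worker and the UI thread:

```javascript
// Hypothetical message bus: back-end code running in a worker cannot call
// certain Web APIs, so it wraps the call into a routable message and the
// UI thread dispatches it to a registered handler. A real bridge would be
// asynchronous (postMessage plus a reply message); this sketch dispatches
// directly for brevity.
function createBus() {
  const handlers = new Map();
  return {
    register(name, fn) { handlers.set(name, fn); },
    dispatch(message) {
      const fn = handlers.get(message.name);
      if (!fn) throw new Error('no handler for ' + message.name);
      return fn(...message.args);
    },
  };
}

// UI-thread side: expose an API the worker cannot reach.
const bus = createBus();
bus.register('vibrate', (ms) => 'vibrating for ' + ms + 'ms');

// Worker side: route the call instead of calling navigator.vibrate.
console.log(bus.dispatch({ name: 'vibrate', args: [200] })); // vibrating for 200ms
```

The cost of such a bus is exactly the kind of abstraction overhead discussed below: every proxied call crosses a thread boundary that a direct API call would not.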
As an abstraction over platform worker threads, and an attempt to abstract away platform/component changes, the architecture deserves special attention to the classic abstraction problems: abstractions eventually leak, and abstractions always come with overhead, whether it is runtime performance overhead, or the human cost of learning the abstraction or debugging through it. I am not the expert here; Joel is.
Technical Challenges on enabling Gaia
Arguably, Gaia is one of the most complex web projects in the industry. We inherit a strong Mozilla tradition of continuous integration. The architecture proposal calls for a strong separation between the front-end application codebase and the back-end application codebase, including separate integration between the two when building for different form factors. The integration plan itself is something worth rethinking to meet such requirements.
With hosted packaged apps, the architecture proposal unlocks the possibility of deploying Gaia from the Web, instead of always shipping it with the OTA image. How to match Gaia and Gecko versions all the way down to every Nightly build is something to figure out too.
Given that everything is in flux, and given the immense amount of work (as outlined above), it’s hard to achieve any of the end goals without prioritizing the internals and landing/deploying them separately. From last week, it was already concluded that parts of the security model changes will block Service Worker usage in signed packages; we will need to identify those parts and resolve them first. It’s also important to make sure the implementation does not suffer any performance issues before deploying the code and starting the major work of revamping every app. We should be able to figure out a scaled-back version of the work and realize that first.
If we can plan and manage the work properly, I remain optimistic about the technical outcome of the architecture proposal. I trust my colleagues, particularly those who made the proposal, to make reasonable technical judgements. It’s been years since the introduction of the single-page web application; it’s indeed worth rethinking what’s possible if we depart from it.
The key here is to try not to do all the things at once, and to strengthen what’s working and amend what’s not, along the process of turning the proposal into a usable implementation.
Edit: This post has since been modified to fix some of the grammar errors.
Bug 783846 landing in Nightly means that Firefox for Android users—starting in version 39—can finally paste into contenteditable elements, which is huge news for the mobile-html5-responsive-shadow-and-or-virtual-dom-contenteditable apps crowd, developers and users alike.
"That's amazing! Can I cut text from contenteditable elements too?", you're asking yourself. The answer is, um, no, because we haven't fixed that yet. But if you wanna help me get that working, come on over to bug 1112276 and let's party. Or write some JS and fix the bug or whatever.
Fun fact, when I told my manager I was working on this bug in my spare time he asked, "…Why?". 🎉
Through discussions on whatwg, I learned (or had just forgotten) about the Refresh HTTP header. Let's cut straight to the syntax:
HTTP/1.1 200 OK
Refresh: 5; url=http://www.example.org/fresh-as-a-summer-breeze
- 5 here means 5 seconds.
- url= gives the destination the client should head to after the 5 seconds.
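For illustration, the value can be parsed mechanically; this rough sketch (not any browser's actual algorithm) also tolerates the ',' separator and the omitted url= prefix that real-world content uses:

```javascript
// Rough sketch of parsing a Refresh header value like "5; url=...".
// Browsers are lenient: the separator may be ';' or ',', the "url="
// prefix may be omitted, and the URL may be quoted. This is only an
// illustration of the shape of the value.
function parseRefresh(value) {
  const match = /^\s*(\d+)\s*(?:[;,]\s*(.*))?$/.exec(value);
  if (!match) return null;
  let url = match[2] || null;
  if (url) {
    url = url
      .replace(/^url\s*=\s*/i, '')   // optional "url=" prefix
      .replace(/^['"]|['"]$/g, '')   // optional surrounding quotes
      .trim() || null;
  }
  return { seconds: Number(match[1]), url };
}

console.log(parseRefresh('5; url=http://www.example.org/fresh-as-a-summer-breeze'));
```

A missing url part simply means "reload the current page after the delay", which is the case Eric Law's post below is about.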
Simon Pieters (Opera) is saying in that mail:
I think Refresh as an HTTP header is not specified anywhere, so per spec
it shouldn't work. However I think browsers all support it, so it would be
good to specify it.
Eric Law (ex-Microsoft) has written about The Performance Impact of META REFRESH. If we express the previous HTTP header in HTML, we get:
<meta http-equiv="refresh" content="5;url=http://www.example.org/fresh-as-a-summer-breeze" />
In his blog post, Eric is talking about people using refresh to… well, refresh the page, i.e. loading the exact same page over and over again. And indeed it means the browser creates a certain number of "unconditional and conditional HTTP requests to revalidate the page’s resources" for each reload (refresh).
On the Web Compatibility side of things, I see <meta http-equiv="refresh" …/> used quite often.
<meta http-equiv="refresh" content="0;url=http://example.com/there" />
Usually with a delay of 0. Probably the result of sysadmins not willing to touch the configuration of the servers, and so front-end developers taking the lead to "fix it", instead of using HTTP 301 or HTTP 302. Anyway, it is used most of the time to redirect to another domain name or URI.
The Refresh HTTP header, on the other hand, I don't remember seeing that often.
Should it be documented?
Simon is saying: "it would be good to specify it." I'm not so sure. First things first.
Let's create a test by making a page send the header:
Header set Refresh "0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4"
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Length: 200
Content-Type: text/html; charset=utf-8
Date: Thu, 26 Mar 2015 05:48:57 GMT
ETag: "c8-5122a67ec0240"
Expires: Thu, 02 Apr 2015 05:48:57 GMT
Keep-Alive: timeout=5, max=100
Last-Modified: Thu, 26 Mar 2015 05:37:05 GMT
Refresh: 0;url=https://www.youtube.com/watch?v=sTJ1XwGDcA4
This should redirect to this Fresh page
- Yes - Firefox 36.0.4
- Yes - Opera 29.0.1795.26
- Yes - Safari 8.0.4 (10600.4.10.7)
- Yes - IE11
- Yes - Chrome (something) said Hallvord ;)
If someone could test for IE and Chrome at least.
On the Mozilla bug tracker, there are a certain number of bugs around refresh. This bug about inline resources is quite interesting, and might indeed need to be addressed if there were documentation. The bug is about what the browser should do when the Refresh HTTP header is set on an image included in a Web page (this could be another test). For now, the refresh is not done for inline resources. Then what about scripts, stylesheets, JSON files, HTML documents in iframes, etc.? In the SetupRefreshURIFromHeader code in Firefox's source, there are Web Compatibility hacks. We can read:
// Also note that the seconds and URL separator can be either
// a ';' or a ','. The ',' separator should be illegal but CNN
// is using it.
// Note that URI should start with "url=" but we allow omission
// We've had at least one whitespace so tolerate the mistake
// and drop through.
// e.g. content="10 foo"
On the WebKit bug tracker, I found another couple of bugs, but about meta refresh and not specifically Refresh:. I'm not sure whether it's handled by WebCore or elsewhere in Mac OS X (NSURLConnection, …). If someone knows, tell me. I haven't explored the source code yet.
On the Chromium bug tracker, another couple of bugs for meta refresh, with some interesting ones such as this person complaining that a space doesn't work instead of a ;. This is also tracked on WebKit. Something like:
<meta http-equiv="refresh" content="0 url=http://example.com/there" />
Also, what should be done with a relative URL?
<meta http-equiv="refresh" content="0;url=/there" />
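Presumably a relative URL would be resolved against the document's base URL, like any other relative reference; the WHATWG URL API shows the expected resolution:

```javascript
// Resolving a relative refresh target against the page's own URL,
// the way any other relative reference is resolved. The base URL
// here is made up for the example.
const base = 'http://example.com/section/page.html';

console.log(new URL('/there', base).href);      // http://example.com/there
console.log(new URL('other.html', base).href);  // http://example.com/section/other.html
```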
But for Chromium, I have not found anything really specific to the Refresh header. I haven't explored the source code yet.
The Opera bug tracker, on the other hand, is still closed. We tried to open it up when I was working there, and it didn't work.
Competition Of Techniques
Then you can also imagine the hierarchy of commands in a case like this:
HTTP/1.1 301 Moved Permanently
Refresh: 0;url=http://example.net/refresh-header
Location: http://example.net/location

<!DOCTYPE html>
<html>
<title>Fresh</title>
<meta http-equiv="refresh" content="0;url=http://example.net/meta" />
<body onload="document.location.replace('http://example.net/body')">
</body>
</html>
My guess is that the 301 always wins with the Location HTTP header, or at least that's what I hope.
I can find very early references to meta refresh, such as in the Netscape developer documentation. The earliest mention seems to be An Exploration Of Dynamic Documents. I can't find the documentation for the Refresh HTTP header anywhere on old Netscape Web sites. (Thanks to the SecuriTeam Web site and Amit Klein.)
So another thing you obviously want to do, in addition to causing the current document to reload, is to cause another document to be reloaded in n seconds in place of the current document. This is easy. The HTTP response header will look like this:
Refresh: 12; URL=http://foo.bar/blatz.html
In June 1996, Jerry Jongerius posted HTTP/1.1 Refresh header field comments:
My concern with "Refresh" is that I do not want it to be a global concept (a browser can only keep track of one refresh)--it looks to be implemented this way in Netscape 2.x. I would like "Refresh" to apply to individual objects (RE: the message below to netscape).
which Roy T. Fielding replied to:
Refresh is no longer in the HTTP/1.1 document -- it has been deferred to HTTP/1.2 (or later).
Should it be documented? Well, there are plenty of issues, and there are plenty of hacks around it; I have just scratched the surface. Maybe it would indeed be worth documenting how it works as implemented now, and how it is supposed to work where there's no interoperability. If I were silly enough, maybe I would do this. HTTP, archeology and Web Compatibility issues: that seems close enough to my vices.
Today we’re excited to host some of our top web site partners, enterprise developers and web framework authors at the Microsoft Silicon Valley campus for a "Project Spartan" developer workshop to get an early look at Windows 10’s new default browsing experience as it rapidly approaches a public preview. This is another step in our renewed focus on reaching out and listening to the developer community we depend on, in keeping with the focus on openness and feedback-driven development that is driving initiatives like status.modern.ie and our Windows Insider Program.
If you’re interested in attending a similar event to learn more about “Project Spartan,” there are some great opportunities coming up. We’ll have lots to say about Project Spartan at Build 2015 (April 29th – May 1st in San Francisco) and Microsoft Ignite (May 4th – 8th in Chicago). We’re also excited to announce an all-new Windows 10 Web Platform Summit hosted by the Project Spartan team, which will be open to the public on May 5-6, 2015 at the Microsoft Silicon Valley Campus. Stay tuned to the blog and @IEDevChat for more information on how to register!
A simpler browser strategy in Windows 10
One of the items we’re discussing in today’s workshop is how we are incorporating feedback from the community into the work we are doing on Project Spartan, including some updates we are making related to the rendering engines. When we announced Project Spartan in January, we laid out a plan to use our new rendering engine to power both Project Spartan and Internet Explorer on Windows 10, with the capability for both browsers to switch back to our legacy engine when they encounter legacy technologies or certain enterprise sites.
However, based on strong feedback from our Windows Insiders and customers, today we’re announcing that on Windows 10, Project Spartan will host our new engine exclusively. Internet Explorer 11 will remain fundamentally unchanged from Windows 8.1, continuing to host the legacy engine exclusively.
We’re making this change for a number of reasons:
- Project Spartan was built for the next generation of the Web, taking the unique opportunity provided by Windows 10 to build a browser with a modern architecture and service model for Windows as a Service. This clean separation of legacy and new will enable us to deliver on that promise. Our testing with Project Spartan has shown that it is on track to be highly compatible with the modern Web, which means the legacy engine isn’t needed for compatibility.
- For Internet Explorer 11 on Windows 10 to be an effective solution for legacy scenarios and enterprise customers, it needs to behave consistently with Internet Explorer 11 on Windows 7 and Windows 8.1. Hosting our new engine in Internet Explorer 11 has compatibility implications that impact this promise and would have made the browser behave differently on Windows 10.
- Feedback from Insiders and developers indicated that it wasn’t clear what the difference was between Project Spartan and Internet Explorer 11 from a web capabilities perspective, or what a developer would need to do to deliver web sites for one versus the other.
We feel this change simplifies the role of each browser. Project Spartan is our future: it is the default browser for all Windows 10 customers and will provide unique user experiences including the ability to annotate on web pages, a distraction-free reading experience, and integration of Cortana for finding and doing things online faster. Web developers can expect Project Spartan’s new engine to be interoperable with the modern Web and remain “evergreen” with no document modes or compatibility views introduced going forward.
For a small set of sites on the Web that were built to work with legacy technologies, we’ll make it easy for customers to access that site using Internet Explorer 11 on Windows 10. Enterprises with large numbers of sites that rely on these legacy technologies can choose to make Internet Explorer 11 the default browser via group policy. In addition, since Internet Explorer 11 will now remain fundamentally unchanged from Windows 7 and Windows 8.1, it will provide a stable and predictable platform for enterprise customers to upgrade to Windows 10 with confidence.
Call to action for developers
Our request to web developers remains the same – try out and test our new rendering engine in the Windows 10 Technical Preview via the Windows Insider Program or via RemoteIE. It is currently hosted in Internet Explorer and can be activated via the “Enable experimental web platform features” setting in about:flags. Starting with the next flight to Insiders, the new rendering engine will be removed from IE and available exclusively within Project Spartan.
We look forward to your feedback – you can reach us on Twitter at @IEDevChat, the Internet Explorer Platform Suggestion Box on UserVoice, and in the comments below. Remember to mark your calendars for our next Project Spartan developer event on May 5th – 6th in Silicon Valley. We look forward to sharing more details soon!
– Kyle Pflug, Program Manager, Project Spartan
In recent releases, we’ve talked often about our goal to bring the team and technologies behind our web platform closer to the community of developers and other vendors who are also working to move the Web forward. This has been a driving motivation behind our emphasis on providing better developer tools, resources for cross-browser testing, and more ways than ever to interact with the "Project Spartan" team.
In the same spirit of openness, we’ve been making changes internally to allow other major Web entities to contribute to the growth of our platform, as well as to allow our team to give back to the Web. In the coming months we’ll be sharing some of these stories, beginning with today’s look at how Adobe’s Web Platform Team has helped us make key improvements for a more expressive Web experience in Windows 10.
Adobe is a major contributor to open source browser engines such as WebKit, Blink, and Gecko. In the past, it was challenging for them (or anyone external to Microsoft) to make contributions to the Internet Explorer code base. As a result, while Adobe improved the Web platform in other browsers, it couldn't bring the same improvements to Microsoft's platform. This changed a few months ago when Microsoft made it possible for the Adobe Web Platform Team to contribute to Project Spartan. The team contributes in the areas of layout, typography, graphic design and motion, with significant commits to the Web platform. Adobe engineers Rik Cabanier, Max Vujovic, Sylvain Galineau, and Ethan Malasky have provided contributions in partnership with engineers on the IE team.
Adobe contributions in the Windows 10 March Technical Preview
The Adobe Web Platform Team hit a significant milestone with their first contribution landing in the March update of the Windows 10 Technical Preview! The feature is support for CSS gradient midpoints (aka color hints), described in the upcoming CSS Images spec. With this feature, a Web developer can specify an optional location between the color stops of a CSS gradient. The gradient color at that location will be exactly midway between the colors of the two stops. Other colors along the gradient line are calculated using an exponential interpolation function, as described by the CSS spec.
linear-gradient(90deg, black 0%, 75%, yellow 100%)
radial-gradient(circle, black 0%, 75%, yellow 100%)
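For the curious, the interpolation the spec describes is easy to sketch; this illustrative helper computes the mix weight of the second color at a relative position p between two stops, given a hint at relative position h:

```javascript
// Illustrative transcription of the CSS Images color-hint math: the mix
// weight of the second color at relative position p (0 <= p <= 1) between
// two stops, with the transition hint at relative position h (0 < h < 1),
// is p ^ (log(0.5) / log(h)). At p === h the weight is exactly one half,
// i.e. the exact midpoint of the two stop colors.
function hintWeight(p, h) {
  return Math.pow(p, Math.log(0.5) / Math.log(h));
}

// For linear-gradient(90deg, black 0%, 75%, yellow 100%): the exact middle
// between black and yellow is reached at 75% along the gradient line.
console.log(hintWeight(0.75, 0.75)); // ~0.5
console.log(hintWeight(0.5, 0.75));  // well under 0.5: still closer to black
```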
You can check this out yourself on this CSS Gradient Midpoints demo page. Just install the March update to Windows 10 Technical Preview and remember to enable Experimental Web Platform Features in about:flags to enable the new rendering engine. This change will bring IE to the same level as WebKit Nightly, Firefox beta and Chrome.
Another change that Adobe has recently committed is full support for <feBlend> blend modes. The W3C Filter Effects spec extended <feBlend> to support all blend modes per the CSS compositing and blending specification. Our new engine will now support these new values like the other major browsers.
The new blend modes expand the existing values (normal, multiply, screen, overlay, darken and lighten) with color-dodge, color-burn, hard-light, soft-light, difference, exclusion, hue, saturation, color and luminosity.
To use the new modes just specify the desired mode in the <feBlend> element. For example:
<feBlend mode='luminosity' in2='SourceGraphic' />
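For reference, the 'luminosity' result can be computed from the compositing and blending spec's pseudocode; this is an illustrative JavaScript transcription (channels in the 0 to 1 range), not browser code:

```javascript
// Illustrative transcription of the non-separable 'luminosity' blend mode
// from the W3C compositing-and-blending spec: keep the backdrop's hue and
// saturation, take the source's luminosity. Channels are in 0..1.
function lum([r, g, b]) {
  return 0.3 * r + 0.59 * g + 0.11 * b;
}

// Clamp a color back into gamut while preserving its luminosity.
function clipColor(c) {
  const l = lum(c);
  const n = Math.min(...c);
  const x = Math.max(...c);
  if (n < 0) c = c.map((v) => l + ((v - l) * l) / (l - n));
  if (x > 1) c = c.map((v) => l + ((v - l) * (1 - l)) / (x - l));
  return c;
}

// Shift a color to the target luminosity, then clip back into gamut.
function setLum(c, l) {
  const d = l - lum(c);
  return clipColor(c.map((v) => v + d));
}

// <feBlend mode='luminosity'>: result = SetLum(backdrop, Lum(source)).
function blendLuminosity(backdrop, source) {
  return setLum(backdrop, lum(source));
}

// Red backdrop, mid-gray source: the result stays reddish but takes the
// gray's luminosity.
console.log(blendLuminosity([1, 0, 0], [0.5, 0.5, 0.5]));
```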
You can try this out today at Adobe's CodePen demo in Internet Explorer on the Windows 10 Technical Preview by selecting "Enable Experimental Web Platform Features" under about:flags.
We are just getting started
Congratulations to the Adobe Web Platform Team on their first commit! We are looking forward to a more expressive Web and moving the Web platform forward! Let us know what you think via @IEDevChat or in the comments below.
— Bogdan Brinza, Program Manager, Project Spartan
We are excited and proud to announce WebKitGTK+ 2.8.0, your favorite web rendering engine, now faster, even more stable, and with a bunch of new features and improvements.
Touch support is one of the most important features that had been missing since WebKitGTK+ 2.0.0. Thanks to the GTK+ gestures API, it’s now more pleasant to use a WebKitWebView on a touch screen. For now, only the basic gestures are implemented: pan (scrolling by dragging from any point of the WebView), tap (handling clicks with the finger) and zoom (zooming in/out with two fingers). We plan to add more touch enhancements, like kinetic scrolling, overshoot feedback animation, text selection, long press, etc., in future versions.
Notifications are transparently supported by WebKitGTK+ now, using libnotify by default. The default implementation can be overridden by applications to use their own notifications system, or simply to disable notifications.
WebView background color
There’s new API to set the base background color of a WebKitWebView. The given color is used to fill the web view before the actual contents are rendered. This will not have any visible effect if the web page contents set a background color, of course. If the web view’s parent window has an RGBA visual, we can even have transparent colors.
A new WebKitSnapshotOptions flag has also been added to be able to take web view snapshots over a transparent surface, instead of filling the surface with the default background color (opaque white).
User script messages
Let’s see how it works with a very simple example:
webkit_user_content_manager_register_script_message_handler (user_content, "foo");
g_signal_connect (user_content, "script-message-received::foo",
                  G_CALLBACK (foo_message_received_cb), NULL);
webkit_dom_dom_window_webkit_message_handlers_post_message (dom_window, "foo", "bar");
Who is playing audio?
WebKitWebView now has a boolean read-only property, is-playing-audio, that is set to TRUE when the web view is playing audio (even if it’s a video) and to FALSE when the audio is stopped. Browsers can use this to provide visual feedback about which tab is playing audio; Epiphany already does that.
HTML5 color input
The color input element is now supported by default, so instead of rendering a text field to manually input the color as a hexadecimal color code, WebKit now renders a color button that, when clicked, shows a GTK+ color chooser dialog. As usual, the public API allows the default implementation to be overridden, to use your own color chooser. MiniBrowser uses a popover, for example.
APNG (Animated PNG) is a PNG extension that allows creating animated PNGs, similar to GIF but much better, supporting 24-bit images and transparency. Since 2.8, WebKitGTK+ can render APNG files. You can check how it works with the Mozilla demos.
The POODLE vulnerability fix introduced compatibility problems with some websites when establishing the SSL connection. Those problems were actually server side issues, that were incorrectly banning SSL 3.0 record packet versions, but that could be worked around in WebKitGTK+.
WebKitGTK+ already provided a WebKitWebView signal to notify about TLS errors when loading, but only for the connection of the main resource in the main frame. However, it’s still possible for subresources to fail due to TLS errors when using a connection different from the main resource’s. WebKitGTK+ 2.8 gained the WebKitWebResource::failed-with-tls-errors signal, emitted when a subresource load fails because of an invalid certificate.
Ciphersuites based on RC4 are now disallowed when performing the TLS negotiation, because RC4 is no longer considered secure.
Performance: bmalloc and concurrent JIT
bmalloc is a new memory allocator added to WebKit to replace TCMalloc. Apple had already been using it in the Mac and iOS ports for some time, with very good results, but it needed some tweaks to work on Linux. WebKitGTK+ 2.8 now also uses bmalloc, which drastically improved the overall performance.
Concurrent JIT was not enabled in the GTK+ (and EFL) ports for no apparent reason. Enabling it also had an amazing impact on performance.
Both performance improvements were very noticeable in the performance bot:
The first jump on 11th Feb corresponds to the bmalloc switch, while the other jump on 25th Feb is when concurrent JIT was enabled.
Plans for 2.10
WebKitGTK+ 2.8 is an awesome release, but the plans for 2.10 are quite promising.
- More security: mixed content for most of the resources types will be blocked by default. New API will be provided for managing mixed content.
- Sandboxing: seccomp filters will be used in the different secondary processes.
- Even more performance: this time in the graphics side, by using the threaded compositor.
- Blocking plugins API: new API to provide full control over the plugin loading process, allowing plugins to be blocked/unblocked individually.
- Implementation of the Database process: to bring back IndexedDB support.
- Editing API: full editing API to allow using a WebView in editable mode with all editing capabilities.
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.
What's cooking on master?
Now you can follow breaking changes as they happen!
- Fixed-size byte string literals. RFC.
- allow inherent implementations on primitives, remove some extension traits.
- Deprecate range, range_step, count, distributions.
- Remove old_io and old_path from the prelude.
- Require braces when a closure has an explicit return type.
New Contributors

- Johannes Oertel
- Paul ADENOT
- Sae-bom Kim
- Tero Hänninen
- RFC 529: Generic conversion traits.
- RFC 803: Type ascription.
- RFC 909: Move
- RFC 921: Entry API v3.
- RFC 940: Disallow hyphens in crate names.
- RFC 968: Tweak closure return type syntax.
- Add std::env::concurrency_hint.
- Make type ascription expressions lvalues.
- Function pointers reform.
- Allow unstable features in 1.0.
- Would Rust have prevented Heartbleed? Another look. Bascule comes to Rust's defense! /r/rust. HN.
- Rust infrastructure can be your infrastructure.
- Weekly-meetings/2015-03-17. Checked overflow and casts; hyphens in crate names.
- Rust programming language presentation at Sungkyunkwan University, South Korea.
- What is Rust bad at?
- Roguelike game architecture in Rust, parts 1, 2, 3.
- Using Rust from the Unreal engine.
- gfx_scene. High-level rendering for gfx-rs.
- glium project update, march edition.
- html5ever project update: one year!.
- capnproto-rust: error handling revisited. David Renshaw has recently modernized Cap'n'Proto's error handling.
- google-apis-rs has been released with bindings for many Google APIs.
- rust-backtrace. Klutzy is implementing backtracing in Rust.
- Playform gets voxel terrain and even better voxel terrain.
Quote of the Week
<mbrubeck> the 5 stages of loss and rust
<mbrubeck> 1. type check. 2. borrow check. 3. anger. 4. acceptance. 5. rust upgrade
Thanks to jdm for the tip. Submit your quotes for next week!
You may have some memory of selecting between high and low quality on YouTube. When you switched it would stop the video and buffer the video at the new quality. Now it defaults to Auto but allows you to manually override. You may have noticed that the Auto mode doesn't stop playing when it changes quality. Nobody really noticed, but a tiny burden was lifted from users. You need to know exactly one less thing to watch videos on YouTube and many other sites.
It has taken a surprising amount of work to make this automatic. My team has been working on adding MSE to Firefox for a couple of years now, as well as adding MP4 support on a number of platforms. We're finally getting to the point where it works really well on Windows Vista and later, in Firefox 37 beta. I know people will ask: MSE is coming soon for Mac, and later for Linux and Windows XP.
Making significant changes isn't without its pain but it is great to finally see the light at the end of the tunnel. Firefox beta, developer edition and nightly users have put up with a number of teething problems. Most of them have been sorted out. I'd like to thank everyone who has submitted a crash report, written feedback or filed bugs. It has all helped us to find problems and make the video experience better.
Robustness goes further than simply fixing bugs. To make something robust it is necessary to keep simplifying the design and to create re-usable abstractions. We've switched to using a thread pool for decoding, which keeps the number of threads down. Threads use a lot of address space, which is at a premium in a 32 bit application.
We're working towards getting all the complex logic on a single thread, with all the computation done in a thread pool. Putting the video playback machinery on a single thread makes it much clearer which operations are synchronous and which ones are asynchronous. It doesn't hurt performance as long as the state machine thread never blocks. In fact you get a performance win because you avoid locking and cache contention.
We're whitelisting MSE for YouTube at first, but we intend to roll it out to other sites soon. There are a couple of spec compliance issues that we need to resolve before we can remove the whitelist. Meanwhile, YouTube is looking really good in Firefox 37.
People regularly ask me what I am working on at the W3C, here is a run down of standards/guidance documents I am editing/co-editing, contributing to (note: I am only 1 of the people at The Paciello Group involved directly in standards development at the W3C)
This specification defines the 5th major version, first minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). Editing updates to, and maintenance of, (mainly) accessibility-related advice and requirements, with an emphasis on information for web developers.
HTML5.1 specification module defining the web developer rules for the use of ARIA attributes on HTML 5.1 elements. It also defines requirements for Conformance Checking tools. In HTML 5.1 this spec replaces the web developer (author) conformance requirements in section 3.2.7 WAI-ARIA of the HTML5 spec (titled 3.2.7 WAI-ARIA and HTML Accessibility API Mappings in HTML 5.1).
Defines how user agents map HTML 5.1 elements and attributes to platform accessibility application programming interfaces (APIs). This spec replaces (and extends to all HTML elements/attributes) the user agent implementation requirements in section 3.2.7 WAI-ARIA of the HTML5 Recommendation (titled 3.2.7 WAI-ARIA and HTML Accessibility API Mappings in HTML 5.1).
This specification defines the web developer (author) rules (conformance requirements) for the use of ARIA attributes on SVG 1.1 elements. It also defines the requirements for Conformance Checking tools.
This specification describes the method for enabling the author to define and use new types of DOM elements in a document.
Editing the Custom Element Semantics section of the specification.
This document contains best practice guidance for authors of HTML documents on providing text alternatives for images. Edited until October 2014, the bulk of this document is included in the HTML5 and HTML 5.1 specifications under the section Requirements for providing text to act as an alternative for images, which I continue to update and maintain.
This document is a practical guide for developers on how to add accessibility information to HTML elements using the Accessible Rich Internet Applications (ARIA) specification.
Based on feedback from Windows Insiders, we are working to release preview builds more often. Today we flighted the first update to Insiders on this accelerated cadence, which includes the latest updates to our new rendering engine. Due to the change in cadence, this build does not yet include the Project Spartan preview, which will be available in the next release.
Today’s build has a number of updates to the new engine, including new features and improvements to existing features. Some of these include:
- Improved ECMAScript 6 compatibility (up to 74% in the Kangax ES6 compatibility test in this preview)
- Expanded support for DOM L3 XPath
- Support for WAI-ARIA Landmark Roles
- Support for CSS Conditional Rules (@supports)
- Support for the Web Audio API
- Support for CSS Gradient Midpoints (aka color hints)
- Improved feBlend mode support
- Unprefixed support for the Fullscreen API
- Support for the Touch Events API and related interoperability improvements. Touch Events are enabled when a touchscreen is present, and can be enabled under about:flags.
In addition, you may notice some features partially implemented and available for testing in Internet Explorer under about:flags. These features are under active development and will continue to evolve in future preview builds.
- Support for the HTML5 Date-Related Inputs API. This is off by default and can be enabled in about:flags. Improved accessibility is coming in a future build.
- Partial support for CSS Transitions & Animations on SVG elements
- A toggle for the CSS Filters API is available in about:flags but the feature is not yet implemented in this build.
Watch this space over the next week as we’ll be diving into these in more detail in a series of individual posts. In the meantime, you can try these improvements out in the latest preview by signing up for the Windows Insiders program and joining the “Fast” update ring. To enable the new engine in Internet Explorer on preview builds, navigate to about:flags and select “Enable experimental Web platform features.” We'll also be updating RemoteIE with the new preview soon. Don’t forget to share your feedback via the Internet Explorer Platform Suggestion Box on UserVoice, @IEDevChat on Twitter, and in the comments below.
– Kyle Pflug, Program Manager, Project Spartan
Since the HTML design principles (which are effectively design principles for modern Web technology) were published, I've thought that the priority of constituencies was among the most important. It's certainly among the most frequently cited in debates over Web technology. But I've also thought that it was wrong in a subtle way.
I'd rather it had been phrased in terms of utility. Instead of stating as a rule that value (benefit minus cost) to users is more important than value to authors, it could recognize that there are generally more users than authors, which means that a smaller value per user, multiplied by the number of users, is generally more important than a somewhat larger value per author, because it provides more total value once each value is multiplied by the number of people it applies to. This doesn't hold for a very large difference in value, though: one where, after multiplying cost and benefit by the numbers of people they apply to, the magnitudes of the cost and benefit control which side is larger rather than the numbers of people. The same holds for implementors and specification authors; each group is generally smaller than the one before. Likewise, the principle should recognize that something that benefits a very small portion of users doesn't outweigh the interests of authors as much, because the number of users it benefits is no longer so much greater than the number of authors who have to work to make it happen.
Also, the current wording of the principle doesn't consider the scarcity of the smaller groups (particularly implementors and specification authors), and thus the opportunity costs of choosing one behavior over another. In other words, there might be a behavior that we could implement that would be slightly better for authors, but would take more time for implementors to implement. But there aren't all that many implementors, and they can only work on so many things. (Their number isn't completely fixed, but it can't be changed quickly.) So given the scarcity of implementors, we shouldn't consider only whether the net benefit to users is greater than the net cost to implementors; we should also consider whether there are other things those implementors could work on in that time that would provide greater net benefit to users. The same holds for scarcity of specification authors. A good description of the principle in terms of utility would also correct this problem.
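The multiplication argument can be made concrete with invented numbers (a sketch; the figures below are purely illustrative, not measurements of anything):

```javascript
// Sketch: net value summed over each constituency, with made-up numbers.
// A small per-user benefit can outweigh a larger per-author cost simply
// because there are vastly more users than authors.
const users   = { count: 1e9, valueEach: 0.01 };  // tiny benefit per user
const authors = { count: 1e6, valueEach: -1 };    // noticeable cost per author

const total = users.count * users.valueEach + authors.count * authors.valueEach;
// 1e9 * 0.01 = 1e7 of benefit vs 1e6 * 1 = 1e6 of cost: users win here,
// but a large enough per-author cost would flip the sign.
console.log(total); // 9000000
```

The exception in the text is exactly the case where one `valueEach` is so large that it, rather than the `count`, decides which term dominates.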
If you've defined your own global `window.Request` object and have users running Firefox 39 and Chrome 42 (and Opera, and soon others), you're gonna have a bad time (because they ship with the Fetch API, which defines its own `Request` class, obvs.).
Webcompat issue #793 details how dailymotion.com breaks (thankfully the videos of awesome Japanese public toilets still work, but all the sidebar content is missing) because they define their own `window.Request`, and then hit: `Uncaught TypeError: Request.getHashParams is not a function`.
So, anyways. If you're defining your own `window.Request`, your code is going to break and you should pick a new global identifier. Here's a few suggestions inspired by mid-March conference synergy-fests in Austin, TX:
Picking any one of those should fix the bugs you're about to have.
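If renaming can't happen right away, a defensive stopgap is to only install your own object when the platform hasn't already claimed the name. A sketch (the `getHashParams` helper and its behaviour are hypothetical, standing in for whatever the legacy code did):

```javascript
// Sketch: never clobber a native global. Only install a legacy
// Request-like helper if the environment doesn't already define Request
// (as the Fetch API now does in Firefox 39 / Chrome 42+).
function installHelpers(win) {
  if (!('Request' in win)) {
    // Hypothetical legacy helper; a real app should migrate to a new
    // name like MyAppRequest instead of squatting on the global.
    win.Request = { getHashParams: url => url.split('#')[1] || '' };
  }
  return win.Request;
}
```

This keeps old browsers working while leaving the native `Request` alone in new ones, but code that assumed the custom methods still breaks there, which is why renaming is the real fix.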
- GOLD: grunt-respimg A responsive image workflow for optimizing and resizing your images
- Increasing engagement with Web App install banners and Push Notifications on the Open Web – bringing the web closer to native. This is jolly good news from the Chrome team. (I’m one of the team that is working on this in Opera, too.)
- Accessible Drag and Drop with Multiple Items by James “Brothercake” Edwards
- Blink has a new memory team, to ensure that memory is used more efficiently. Sigbjorn from Opera’s on the team.
- MySpace – what went wrong ‘The site was a massive spaghetti-ball mess’ (by former VP of online marketing)
- UX Tactics To Make Slow Things Seem Faster – nothing earth-shattering here, but an interesting list.
- Brand Simplicity vs. Our Innovation Complex – “while consumers claim interest in complex innovations (“Bring on the shiny new objects!”), they actually buy simpler products”
- Apple has a software problem by Matt Wilcox (and bonus follow-up.)
- Fifty Shades Generator – Snorem ipsum – goodbye! Spice up your comps with the “50 Shades generator”. NSFW, obvs.
Current advancements in ECMAScript are a great opportunity, but also a challenge for the web. Whilst adding new, important features we also run the risk of breaking backwards compatibility.
These are my notes for a talk I gave at the MunichJS meetup last week. You can see the slides on Slideshare and watch a screencast on YouTube.
There will also be a recording of the talk available once the organisers are done with post-production.
The forgiving nature of JS is what made it the fast-growing success it is. It allows people to write quick and dirty things and get a great feeling of accomplishment. It drives a fast-release economy of products. PHP did the same thing server-side when it came out. It is a templating language that grew into a programming language because people used it that way, and it was easier to implement than Perl or Java at the time.
As with everything distributed on the web once, there is no way to get rid of it again. Nor can we dictate that our users switch to a different browser that supports another language or runtime we prefer. The fundamental truth of the web is that the user controls the experience. That’s what makes the web work: you write your code for the Silicon Valley dweller on an 8-core, state-of-the-art mobile device with an evergreen and very capable browser, a fast wireless connection and much money to spend. The same code, however, should work for the person who saved up their money for half an hour in an internet cafe in an emerging country, on a Windows XP machine with an old Firefox, connected over a very slow and flaky connection. Or the person whose physical condition makes them unable to see, speak, hear or use a mouse.
Our job is not to tell that person to keep up with the times and upgrade their hardware. Our job is to use our smarts to write intelligent solutions: solutions that test which of their parts can execute and only give those to that person. Web technologies are designed to be flexible and adaptive, and if we don’t understand that, we shouldn’t pretend that we are web developers.
The web is a distributed system of many different consumers. This makes it a very hostile development environment, as you need to prepare for a lot of breakage and unknowns. It also makes it the platform that reaches far more people than any more defined and closed environment ever could. It is also the one that allows the next consumers to get to us. Its hardware independence means people don’t have to wait for availability of devices. All they need is something that speaks HTTP.
Whilst JS is a great solution to making the web respond more immediately to our users, it is also very different to the other players like markup and style sheets. Both of these are built to be forgiving without stopping execution when encountering an error.
A browser that encounters an unknown element shrugs, does nothing with it, and moves on in the DOM to the next element it does understand and knows what to do with. The HTML5 parser, encountering an unclosed or wrongly nested element, will fix these issues under the hood and move on, turning the DOM into an object collection and a visual display.
A CSS parser encountering a line with a syntax error or a selector it doesn’t understand skips that instruction and moves on to the next line. This is why we can use browser-prefixed values like `-webkit-gradient` without having to test if the browser really is WebKit.
When you build a house and the only way to get to the higher floors is a lift, you broke the house when the lift stops working. If you have stairs to also get up there, the house still functions. Of course, people need to put more effort in to get up and it is not as convenient. But it is possible. We even have moving stairs called escalators that give us convenience and a fall-back option. A broken down escalator is a set of stairs.
The simplest way to ensure our scripts work is to test for capabilities of the environment. We can achieve that with a very simple IF statement. By using properties and objects of newer browsers this means we can block out those we don’t want to support any longer. As we created an HTML/Server solution to support those, this is totally acceptable and a very good idea.
The developers at the BBC call this “cutting the mustard” and have published a few articles on it. The current test used to keep old browsers out of the way is this:
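As I recall from the BBC Responsive News write-up, the check tests for three capabilities. Reconstructed here as a function so that mock `document`/`window` objects can be passed in for testing (in a page you would call `cutsTheMustard(document, window)`):

```javascript
// The BBC "cutting the mustard" check, reconstructed from their
// published write-up: browsers passing all three capability tests get
// the enhanced experience, the rest get the basic HTML/server one.
function cutsTheMustard(doc, win) {
  return 'querySelector' in doc &&
         'localStorage' in win &&
         'addEventListener' in win;
}
```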
Recently, Jake Archibald of Google found an even shorter version to use:
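Jake’s shorter version, as I remember it, keys off a single property from the Page Visibility API, which only reasonably modern browsers implement (again written as a function here so it can be tested with a mock document):

```javascript
// Shorter cut: the Page Visibility API only exists in modern browsers,
// so one property test does the job of three.
function cutsTheMustardShort(doc) {
  return 'visibilityState' in doc;
}
```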
This, however, fails to work when we start changing the language itself.
x = 0;
The lenient parser doesn’t care that the variable x was never declared; it just sees a new x and defines it. If you use strict mode, the browser isn’t as calm about this:
'use strict'; x = 0;
In Firefox’s console you’ll get a “ReferenceError: assignment to undeclared variable x”.
ECMAScript – changing the syntax
There is no progressive enhancement way around this issue, and an opt-in string doesn’t do the job either. In essence, we break backwards compatibility of scripting on the web. This might not be a big issue if browsers supported ES6, but we’re not quite there yet.
ES6 support and what to do about it
If you want to help with the adoption of ECMAScript in browsers, please contribute to this test suite. This is the one place all of them test against, and the better the tests we supply, the more reliable our browsers will become.
Ways to use the upcoming ECMAScript right now
If we consider the use cases of ECMAScript, this is not that much of an issue. Many of the problems solved by the new features are either enterprise problems that only pay high dividends when you build huge projects or features needed by upcoming functionality of browsers (like, for example, promises).
The changes mostly mean that JS gets real OO features, is more memory-optimised, and becomes a more enticing compile target for developers starting in other languages. In other words, it is targeted at an audience that is not likely to start writing code from scratch in a text editor, but is already coming from a build environment or IDE.
Quite some time ago, new languages like TypeScript got introduced that give us the functionality of ECMAScript6 now. Another tool to use is Babel.js, which even has a live editor that allows you to see what your ES6 code gets converted to in order to run in legacy environments.
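As a taste of what such a transpile step does, here is some ES6 next to (roughly) the ES5 a tool like Babel emits; the exact output varies by tool and version, so the transpiled form below is only a sketch:

```javascript
// ES6 input: a class, an arrow function, a template string.
// (Runs natively in modern engines; a transpiler rewrites it for old ones.)
class Greeter {
  constructor(name) { this.name = name; }
  greet() { return `Hello, ${this.name}!`; }
}
const shout = s => s.toUpperCase();

// Roughly what a transpiler produces for legacy engines:
//   function Greeter(name) { this.name = name; }
//   Greeter.prototype.greet = function () {
//     return 'Hello, ' + this.name + '!';
//   };
//   var shout = function (s) { return s.toUpperCase(); };
console.log(shout(new Greeter('web').greet())); // → "HELLO, WEB!"
```

The win is that you write tomorrow’s syntax today; the cost is a build step and debugging generated code.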
Return of the type attribute?
One way to ensure that all of us could use the ECMAScript of now and tomorrow safely would be to get browsers to support a type of ‘ES’ or something similar. The question is if that is really worth it seeing the problems ECMAScript is trying to solve.
Update: Axel Rauschmayer proposes something similar for ECMAScript modules. He proposes a MODULE element that gets a SCRIPT with type of module as a fallback for older browsers.
It doesn’t get boring
In any case, this is a good time to chime in when there are discussions about ECMAScript. We need to ensure that we are using new features when they make sense, not because they sound great. The power of the web is that everybody is invited to write code for it. There is no 100% right or wrong.
- HTTP2 for front-end web developers by Matt Wilcox
- scrap the srcset attribute – an outraged dev hates the syntax. (I hear this a lot). @zcorpan & @tabatkins explain. Responsive images syntax *is* ugly. But it’s less ugly than all the alternatives. Like the web.
- The State of Web Type by Bram Stein
- Quantity Queries for CSS by Heydon Pickering
- Emulating failure – “HTML has a problem. As implemented in browsers many interactive elements cannot be styled as desired by web developers… Is it just me, or are new web UI technologies continuing to try to solve the wrong problems?” by Steve Faulkner.
- flexbox in 5 minutes – an interactive Flexbox tour.
- Transitioning to TLS – web standards dreamboat @marcosc writes up how he did it (and “triggered a cascade of shit”)
- Talking of which, Switching to HTTPS – Jeremy Keith documents the intuitive 768-step process he went through.
- So, let’s make upgrading to HTTPS slightly easier. Upgrade Insecure Requests is a draft spec by @mikewest (feedback welcome!) so an author can tell the browser that subresource requests (`<img>`, etc) should be loaded via HTTPS, even if the code just says “HTTP”. Only same-origin navigational requests (`<a>`) will be upgraded. This is to make it easy for those managing massive piles of archived content.
- Intent to Implement: API to support web/native app install banners – “Many websites currently implement ‘door slam interstitials’ demanding users install their native app. Our implementation only prompts users to install the app if they have already shown engagement on the site and provides much more respectful UX. We hope developers will adopt this, reducing the number of door slam users experience on the web.”
- Pandora’s box model – An experiment in containing stylesheet complexity by Mr Brian Kardell
- Box Tree API Level 1 – “A Collection of Interesting Ideas” from Project Houdini, a joint W3C TAG and CSS Working Group initiative to provide APIs into the CSS magic in the browser. (The minutes of the initial meet explain Houdini Aspirations and Scope and Proposed Charter, Box Tree Spec. There’s also a houdini wiki.)
- The State of Global Connectivity – Only 40% of the world has ever connected to the Internet. Why? internet.org reports. (My employer, Opera, is part of internet.org.)
- The Affordability Report from The Alliance for Affordable Internet.
- February in Africa: All the tech news you shouldn’t miss from the past month
- Report: Web Compatibility Summit – Mozilla’s nice Mr @miketaylr had a Web Compatibility summit (I attended remotely). Here’s a report.
- The future of HTML with Bruce Lawson – Paul Boag’s podcast interview with me (transcribed)
- How To Use GitHub and the Terminal: A Guide – illustrated, jargon-free.
The goal of the URL Standard is to reflect where all implementations will converge. It should not describe today’s implementations as that will not lead to convergence. It should not describe yesterday’s implementations as that will also not lead to convergence. And it should not describe an unreachable ideal, e.g. by requiring something that is known to be incompatible with web content.
This is something all documents published by the WHATWG have in common, but I was asked to clarify this for the URL Standard in particular. Happy to help!
The 30th Annual International Technology and Persons with Disabilities conference (otherwise known as CSUN), gets underway on Monday 2nd March 2015. Several of the TPG team will be there, and here’s where you’ll find us if you’d like to say hello or hangout for a bit.
Monday 2nd March
Beyond Code and Compliance: Integrating Accessibility Across The Development Life Cycle
- 9am PST (full day pre-conference workshop).
- Hillcrest AB.
- Billy Gregory, Hans Hillen, Henny Swan, Karl Groves, Léonie Watson, Mike Paciello, Shane Paciello and Steve Faulkner.
TPG will explore ways for integrating accessibility into software development. Whether you work in a large or small organization, follow an agile or waterfall process, are experienced or just starting out, this workshop will guide you toward reliably developing accessible and usable products.
Tuesday 3rd March
Implementing ARIA and HTML5 into Modern Web Applications
- 1.30pm PST (half day pre-conference workshop).
- Hillcrest AB.
- Hans Hillen and Steve Faulkner.
In this afternoon session we will discuss how ARIA and HTML5 can be utilized to create modern, accessible web applications. The session complements the morning’s “Introduction to ARIA and HTML5” session, by continuing with more advanced topics and hands-on examples. We recommend attending both half sessions as a full day workshop.
Wednesday 4th March
- 1.20pm PST.
- Cortez Hill C.
- Karl Groves with Alice Boxhall (Google).
Web Components are a potential paradigm shift for the way we build websites. We will explain the technologies involved and discuss the accessibility challenges faced.
- 1.20pm PST.
- Gaslamp AB.
- Léonie Watson with John Foliot (JP Morgan Chase), Jared Smith (WebAIM), Glenda Sims (Deque) and Jennison Asuncion (LinkedIn).
Revisiting the 2011 CSUN session of the same name, we look back 4-years to evaluate predictions, successes and failures, and a re-setting of the state-of-our-state.
- 1.20pm PST.
- Hillcrest AB.
- Steve Faulkner and Charlie Pike, with Sarita Sharan (CA Technologies).
Introducing a research project into screen reader and browser interaction with HTML elements, including a review of findings using slides and live demonstrations.
Thursday 5th March
Accessible graphics with SVG
- 9am PST.
- Seaport B (IBM suite).
- Léonie Watson, with Rich Schwerdtfeger (IBM), Fred Esch (IBM), Doug Schepers (W3C), Jason White (ETS), Markku Häkkinen (ETS) and Charles McCathie Nevile (Yandex).
A look at the future of accessible graphics on the web and the possibility space of SVG content.
CEO Roundtable: The Future of Web Accessibility
- 1.20pm PST.
- Solana Beach AB.
- Mike Paciello with David Wu (AISquared) and Tim Springer (SSB-Bart).
A panel discussion of accessibility industry leaders, addressing needs and challenges faced by website owners who try to provide accessibility and inclusion on their websites.
Moving the Digital Accessibility Needle: Updates & Perspectives from US DOJ & Access Board
- 2.20pm PST.
- Mission Beach AB.
- Mike Paciello with Rebecca Bond (DoJ) and Gretchen Jacobs (Access Board).
Two of the nation’s leading agencies – the US Access Board and the Department of Justice – will explain important updates to Section 508 and the ADA, as well as share their organizational perspective concerning accessibility to ICT.
30th CSUN Anniversary Celebration
- 7pm PST.
- Seaport Ballroom D/E.
- Billy Gregory, Charlie Pike, Deb Rapsis, Graeme Coleman, Hans Hillen, Henny Swan, Karl Groves, Léonie Watson, Mike Paciello, Shane Paciello and Steve Faulkner.
Please join us for an exciting evening of entertainment to celebrate the 30th anniversary of the CSUN Conference. Geri Jewell, one of our past Keynote Speakers, will serve as the program’s emcee and performances by musician and humorist, Mark Goffeney and comedian Chris Fonseca are sure to make this a night to remember. The celebration is sponsored by IBM and The Paciello Group and will continue with a reception sponsored by Amazon, following the program.
Friday 6th March
- 9am PST.
- Cortez Hill B.
- Billy Gregory and Karl Groves.
With great power comes great responsibility. Learn how to avoid WAI-ARIA anti-patterns with Karl “The Viking” Groves and Billy “The Lumberjack” Gregory.
- 10am PST.
- Cortez Hill B.
- Billy Gregory and Karl Groves.
Assistive technologies convey names for UI controls according to an algorithm. This session discusses how this algorithm works, and ways developers get it wrong.
- 10am PST.
- Hillcrest CD.
- Henny Swan.
A journey around the web looking at what makes both an accessible and usable multimedia player.
- 1.20pm PST.
- Gaslamp CD.
- Hans Hillen and Léonie Watson.
Users are not always aware of interaction models in Rich Internet Applications. To address this, “Interaction Notifier” adds discoverable contextual documentation to rich web content.
- 1.20pm PST.
- Gaslamp AB.
- Karl Groves with Daniel Frank (Wells Fargo Bank).
The authors describe the practical implementation of a weighting and scoring methodology for web property accessibility defects at a large business enterprise.
- 2.20pm PST.
- Gaslamp CD.
- Hans Hillen with Jennifer Gauvreau (CGI) and Elizabeth Whitmer (CGI).
TPG/CGI will share techniques for dealing with dynamic content updates and discuss challenges with inconsistent ARIA Live Region support by browser and AT products.
- 3:20 PST.
- Hillcrest AB.
- Mike Paciello and Graeme Coleman, with Dr. Georges Grinstein (U-Mass Lowell), Franck Kamayou (U-Mass Lowell).
We present a solution, based on previous research, that allows a system to automatically analyse a line chart visualization to extract and then present its intended message for blind and low vision consumers. Previous advancements in this area, an implemented prototype of the proposed solution and a description of the platform on which it was built are presented, as well as a discussion of the implications of this research and future work.
Saturday 7th March
SS12 Code for a Cause Finals – Project Possibility
- Mission Beach C.
- Mike Paciello with students from CSUN, USC, UCLA.
This exciting event will host the innovative open source projects the top teams from CSUN, UCLA and USC have created. A continental breakfast will be served following the presentations and judging, prior to the announcement of the First Place Team. We encourage you to mark your calendars for this important occasion to support the student teams and the time and work they have invested.
GDC 2015 is a major milestone in a long term collaboration between Mozilla and the world’s biggest game engine makers. We set out to bring high performance games to the Web without plugins, and that goal is now being realized. Unity Technologies is including the WebGL export preview as part of their Unity 5 release, available today. Epic Games has added a beta HTML5 exporter as part of their regular binary engine releases. This means plugin-free Web deployment is now in the hands of game developers working with these popular tools. They select the Web as their target platform and, with one click, they can build to it. Now developers can unlock the world’s biggest open distribution platform leveraging two Mozilla-pioneered technologies, asm.js and WebGL.
What has changed?
The technology is spreading
Browser support for the underlying Web standards is growing. WebGL has now spread to all modern browsers, both desktop and mobile. We are seeing all browsers optimize for asm.js-style code, with Firefox and Internet Explorer committed to advanced optimizations.
“With the ability to reach hundreds of millions of users with just a click, the Web is a fantastic place to publish games,” said Andreas Gal, CTO of Mozilla. “We’ve been working hard at making the platform ready for high performance games to rival what’s possible on other platforms, and the success of our partnerships with top-end engine and game developers shows that the industry is taking notice.”
Not done yet
Mozilla is committed to advancing what is possible on the Web. While already capable of running great game experiences, there is plenty of potential still to be unlocked. This year’s booth showcase will include some bleeding edge technologies such as WebGL 2 and WebVR, as well as updated developer tools aimed at game and Web developers alike. These tools will be demonstrated in our recently released 64-bit version of Firefox Developer Edition. Mozilla will also be providing developers access to SIMD and experimental threading support. Developers are invited to start experimenting with these technologies, now available in Firefox Nightly Edition. Visit the booth to learn more about Firefox Marketplace, now available in our Desktop, Android, and Firefox OS offerings as a distribution opportunity for developers.
To learn more about Mozilla’s presence at GDC, read articles from the developers on the latest topics, or learn how to get involved, visit games.mozilla.org or come see us at South Hall Booth #2110 till March 6th. For press inquiries please email email@example.com.