W3C Workshop on Performance Report

  1. Comparing In-Browser Methods of Measuring Resource Load Times
  2. HTTP Extension to provide Timing Data for Performance Measurements
  3. Extending HTTP and HTML to enable Automatic Collection of Performance Data
  4. Discussion: Expanding and Improving on Performance Timing Interfaces
  5. HTTP Client Hints for Content Adaptation without increased Client Latency
  6. Browser Enhancements to Help Improve Page Load Performance using Delta Delivery
  7. Improving Performance Diagnostics Capabilities in Browsers
  8. Improving Web Performance on Mobile Web Browsers
  9. Improving Mobile Power Consumption with HTML5 Connectivity Methods
  10. Memory Management and JavaScript Garbage Collection
  11. Preserving Frame Rate on Television Web Browsing
  12. Use Case of Smart Network Utilization for Better User Experience
  13. Open Discussion

Comparing In-Browser Methods of Measuring Resource Load Times

Eric presented a study, conducted with his colleagues at the University of North Carolina, comparing in-browser methods of measuring resource load times. They measured how long it takes to load a resource using the DOM, XHR, and Navigation Timing APIs, and compared those results against ground truth. They found that although these interfaces did a reasonable job of capturing timing information, measurements differed among browsers due to internal implementation differences.
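The two measurement styles compared in the study can be illustrated with a minimal sketch. The entry shape below mirrors the Resource Timing interface, but the field subset, mock values, and function names are invented for illustration:

```typescript
// Minimal sketch of two in-browser measurement approaches.
// The entry shape mirrors the Resource Timing interface; values are illustrative.
interface TimingEntry {
  name: string;
  startTime: number;   // ms since navigationStart
  responseEnd: number; // ms since navigationStart
}

// Timing-API approach: the browser itself records the timestamps.
function apiLoadTime(entry: TimingEntry): number {
  return entry.responseEnd - entry.startTime;
}

// DOM/XHR wrapper approach: script takes its own timestamps around the load,
// so the result also includes script-scheduling overhead.
function wrapperLoadTime(beforeMs: number, afterMs: number): number {
  return afterMs - beforeMs;
}

const entry: TimingEntry = { name: "app.js", startTime: 120, responseEnd: 245.5 };
console.log(apiLoadTime(entry));          // 125.5
console.log(wrapperLoadTime(1000, 1130)); // 130
```

The gap between the two numbers for the same fetch is one source of the cross-method differences the study observed.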

Discussion Takeaways:

HTTP Extension to provide Timing Data for Performance Measurements

In this presentation, Mike shared a proposal for extending HTTP so that browsers would automatically send timing information (Navigation, Resource, and User Timing) to a web server without the page having to explicitly call the JavaScript APIs. The proposal has three steps: (A) the UA sends a request header, Accept-Measurement, at the initiation of an HTTP session; (B) the server negotiates with the UA to determine which measurements should be sent, as well as a TTL for the data collection; (C) once all measurements have been collected or the TTL has expired, the measurements are beaconed back in an HTTP POST with a Timing-Measurements header.
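A session under the proposal might look like the following sketch. The Accept-Measurement and Timing-Measurements header names come from the proposal; the negotiation header name, field syntax, and payload format are assumptions made for illustration:

```http
# (A) UA announces support at the start of the HTTP session
GET /index.html HTTP/1.1
Host: example.com
Accept-Measurement: navigation, resource, user

# (B) Server selects which measurements to collect and a TTL
HTTP/1.1 200 OK
Measurement: navigation, resource; ttl=300

# (C) After collection completes or the TTL expires, the UA beacons back
POST /measurements HTTP/1.1
Host: example.com
Timing-Measurements: {"navigationStart":0,"responseEnd":842}
```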

Discussion Takeaways:

Extending HTTP and HTML to enable Automatic Collection of Performance Data

Philippe presented a proposal, on behalf of Radware, for extending the HTTP and HTML standards to enable automatic collection of web page timing data, a concept similar to the previous presentation by Mike McCall. Radware suggests two options for gathering this data: (A) a new HTTP header, Performance_Reporting_Target: <URL for reporting>\r\n, or (B) a new Boolean element attribute called perfcollect. The data would be sent back to the server in an HTTP POST and would include entries from the PerformanceResourceTiming object.
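The two options might look like the sketches below. The header name and attribute name are taken from the proposal; the reporting URL and surrounding markup are illustrative:

```http
# Option A: response header naming the endpoint to POST timing entries to
HTTP/1.1 200 OK
Performance_Reporting_Target: https://collector.example.com/report
```

```html
<!-- Option B: Boolean attribute opting an element into collection -->
<script src="app.js" perfcollect></script>
```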

Discussion Takeaways:

Discussion: Expanding and Improving on Performance Timing Interfaces

This session consisted of a discussion of the ideas raised in the survey results. Three areas came up most often in the survey: expanding Navigation Timing, new performance metrics, and error logging interfaces.

Discussion Takeaways:

HTTP Client Hints for Content Adaptation without increased Client Latency

Ilya presented a proposal to allow user agents to provide HTTP client hints to the server for content adaptation. The problem today is that many different devices access the web, each with different capabilities and preferences. Web developers either load resources that may never be used by the UA or use "JavaScript loaders" to detect the UA and load the appropriate resources. This proposal lets the user agent give the server hints about its capabilities in an HTTP header; the server can then serve exactly the resources appropriate for that UA, reducing the time spent on the wire.
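A hint-driven exchange might look like the sketch below. The exact header names have varied across drafts of the proposal; CH-DPR (device pixel ratio) is used here purely for illustration:

```http
# UA advertises a capability up front, with no extra round trip
GET /photo.jpg HTTP/1.1
Host: example.com
CH-DPR: 2.0

# Server adapts the asset to the advertised device pixel ratio
HTTP/1.1 200 OK
Vary: CH-DPR
Content-Type: image/jpeg
```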

Discussion Takeaways:

Browser Enhancements to Help Improve Page Load Performance using Delta Delivery

Robert presented a proposal for improving page load performance using delta delivery. Today, Gmail takes only a few seconds to load on average; in the higher percentiles, however, it can take minutes, which typically correlates with geographies where network bandwidth is low. They found that the slower initial page loads are dominated by time spent downloading JavaScript and CSS. The proposal is to send only the difference (delta) between the version of a resource the client already has (in cache or local storage) and the latest version, with the deltas encoded in the efficient VCDIFF format. Experimentation showed that this would improve download times by 11% in the median case, and by up to 50% in 99th-percentile cases (in places like India). To do delta delivery, changes will need to be made to the HTTP protocol (the client will need to indicate the cached version of the content it holds, and servers will need to know to send only a delta), cryptography APIs will need to be exposed, and pre-loading of "all" cached JavaScript will need to be done.
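One way to express the needed protocol change is the existing HTTP delta-encoding mechanism of RFC 3229, sketched below; whether the proposal would reuse RFC 3229 or define new headers is an assumption here:

```http
# Client identifies the version it has cached and the diff formats it accepts
GET /static/main.js HTTP/1.1
Host: mail.example.com
If-None-Match: "v41"
A-IM: vcdiff

# Server responds with a VCDIFF delta from v41 to the current v42
HTTP/1.1 226 IM Used
ETag: "v42"
IM: vcdiff

(body: VCDIFF-encoded difference between v41 and v42)
```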

Discussion Takeaways:

Improving Performance Diagnostics Capabilities in Browsers

Alois presented a proposal for browsers to surface more diagnostic information to web developers through standardized APIs. Today, developers have access to some diagnostic information in the individual developer tools of specific browsers, but this makes it hard to collect common metrics, as analysts have to use multiple different tools and approaches. Such an API could be used to analyze performance across browsers, monitor a web application client-side in production, resolve user complaints, and understand the impact of third-party code on a web application, among other uses. The proposal is to provide more information via JavaScript APIs on JavaScript execution hot spots, memory usage, layout and rendering hot spots, and other areas.
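A hypothetical shape for such an API is sketched below. Every name here is invented for illustration; nothing in this sketch is a proposed standard or an existing browser interface:

```typescript
// Hypothetical diagnostics surface; all names and values are illustrative.
interface ScriptHotspot {
  functionName: string;
  selfTimeMs: number;
}

interface DiagnosticsSnapshot {
  jsHeapUsedBytes: number;
  scriptHotspots: ScriptHotspot[];
  layoutTimeMs: number;
}

// A production monitoring script could aggregate snapshots like this one
// across real user sessions and report the dominant hot spot.
function topHotspot(snapshot: DiagnosticsSnapshot): string | null {
  if (snapshot.scriptHotspots.length === 0) return null;
  return snapshot.scriptHotspots
    .reduce((a, b) => (a.selfTimeMs >= b.selfTimeMs ? a : b))
    .functionName;
}

const sample: DiagnosticsSnapshot = {
  jsHeapUsedBytes: 32 * 1024 * 1024,
  scriptHotspots: [
    { functionName: "renderGrid", selfTimeMs: 140 },
    { functionName: "parseFeed", selfTimeMs: 60 },
  ],
  layoutTimeMs: 35,
};
console.log(topHotspot(sample)); // "renderGrid"
```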

Discussion Takeaways:

Improving Web Performance on Mobile Web Browsers

In this session, Ben gave details on the state of web performance in mobile web browsers. Today, desktop browsing is relatively fast, whereas mobile web performance is poor; pages rarely load in under a second (the average mobile page load is 9 seconds). He found that mobile web performance is highly variable: available bandwidth has a wide distribution, small changes in location affect bandwidth, time of day affects bandwidth, and performance varies by carrier. He also found that gzip is off for 20% of the Alexa-1000 sites, 57% of resources don't have cache-control headers, and in many cases resources are much larger than they need to be. The asks in this presentation were to provide better tools to measure page loads, inform origins of expected performance so that different content can be sent, and help developers diagnose problems.
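The two server-side findings above correspond to response headers that are cheap to get right. A response that avoids both problems looks like this (values illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Cache-Control: public, max-age=86400
```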

Improving Mobile Power Consumption with HTML5 Connectivity Methods

In this session, Giridhar presented on HTML5 connectivity APIs, like Web Sockets and WebRTC, and their potentially significant impact on mobile power consumption. The session gave best practices on how to use Web Sockets and WebRTC to both stay connected and manage power consumption. The asks of this presentation were that this working group provide better best practices for using HTML5 APIs in a power-efficient way, ensure good performance in implementations of the new W3C Battery API, indicate to web developers whether cellular QoS is being leveraged in a persistent connection session, and provide explicit metrics on the state of the connection.
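One commonly cited power practice for persistent connections is widening the interval between keep-alive messages so the cellular radio can drop back to a low-power state between pings. A minimal sketch follows; the back-off schedule and cap are invented examples, not figures from the session:

```typescript
// Exponential back-off for WebSocket keep-alive pings, capped at a maximum.
// Longer gaps between pings let the cellular radio idle, saving power.
function nextPingIntervalMs(previousMs: number, maxMs: number = 300_000): number {
  return Math.min(previousMs * 2, maxMs);
}

let interval = 15_000; // start at 15 s while the session is fresh
for (let i = 0; i < 6; i++) {
  interval = nextPingIntervalMs(interval);
}
console.log(interval); // 300000 — reached the 5-minute cap
```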

Discussion Takeaways:

Memory Management and JavaScript Garbage Collection

In this session, Paul gave a talk on the importance of runtime memory management in gaming scenarios. Fast loading helps bring players to a game, but runtime performance keeps them playing. Today, web developers have no insight into much of the browser's internal memory management. For example, a developer knows when a resource is loaded into memory, but there is no way to unload it; there is no information on whether textures are still alive or have been released on the GPU; and garbage collection can occur at an inopportune time. The ask is for JavaScript APIs that allow triggering GC manually, expose GC timing, disable GC, and expose more browser memory information.
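The asks translate into a hypothetical API along the lines sketched below. Every name is invented for illustration, and the stub implementation exists only so the shape can be exercised; no such interface exists in browsers:

```typescript
// Hypothetical memory-management surface a game could use; all names invented.
interface MemoryManager {
  collectGarbage(): void;                              // trigger GC at a safe moment
  setGarbageCollectionEnabled(enabled: boolean): void; // defer GC during gameplay
  lastGcDurationMs(): number;                          // expose GC timing
  usedHeapBytes(): number;                             // expose memory information
}

// Stub implementation so the interface can be exercised outside a browser.
class StubMemoryManager implements MemoryManager {
  private enabled = true;
  collectGarbage(): void {
    if (!this.enabled) return; // GC deferred by the game
    // stub: a real engine would collect here
  }
  setGarbageCollectionEnabled(enabled: boolean): void { this.enabled = enabled; }
  lastGcDurationMs(): number { return 4; }
  usedHeapBytes(): number { return 48 * 1024 * 1024; }
}

const mm = new StubMemoryManager();
mm.setGarbageCollectionEnabled(false); // e.g. during a combat sequence
console.log(mm.lastGcDurationMs()); // 4
```

A game would re-enable GC and call collectGarbage() at a quiet moment such as a level transition, rather than letting collection interrupt gameplay.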

Discussion Takeaways:

Preserving Frame Rate on Television Web Browsing

In this session, Yosuke gave a presentation on the importance of preserving frame rate during television web browsing. Traditionally, television viewing has not involved dropped frames. However, with web browser runtimes now shipping in televisions, users are experiencing scenarios where frames are dropped. The ask is that the Web Perf working group, the Web and TV interest group, and the Web and Broadcasting business group work together to consider ways to ensure frames are not dropped during television web browsing.

Use Case of Smart Network Utilization for Better User Experience

In this session, Chihiro discussed ways in which browsers and servers can use network information (e.g., Wi-Fi, 3G, or LAN) to provide content best suited to the user's environment. For example, on a LAN, high-quality video/audio/images can be sent down, whereas on a 3G network, lower-quality video/audio/images can be sent to improve performance and user experience. The suggestions in this session were to provide APIs that give more detailed information on network usage and allow control of network interfaces for fine-grained network selection.
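The quality-selection idea can be sketched as a pure function keyed on the access network. The connection-type strings loosely echo the draft Network Information API, but the bitrates and the function itself are invented for illustration:

```typescript
// Sketch of content selection keyed on the access network.
// Connection types echo the draft Network Information API; bitrates are invented.
type ConnectionType = "ethernet" | "wifi" | "cellular" | "unknown";

function videoBitrateKbps(connection: ConnectionType): number {
  switch (connection) {
    case "ethernet": return 8000; // LAN: send the highest quality
    case "wifi":     return 4000;
    case "cellular": return 700;  // 3G-class link: favor responsiveness
    default:         return 1500; // conservative middle ground
  }
}

console.log(videoBitrateKbps("cellular")); // 700
console.log(videoBitrateKbps("ethernet")); // 8000
```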

Open Discussion

In the last session of the day, we opened up the floor for discussion on any of the topics presented today or topics that had not been brought up.

Discussions: