This is an archived snapshot of W3C's public Bugzilla bug tracker, decommissioned in April 2019.

Bug 17896 - <video> add bytesReceived, downloadTime, and networkWaitTime metrics
Summary: <video> add bytesReceived, downloadTime, and networkWaitTime metrics
Status: RESOLVED WONTFIX
Alias: None
Product: WHATWG
Classification: Unclassified
Component: HTML
Version: unspecified
Hardware: Other
Importance: P3 normal
Target Milestone: Unsorted
Assignee: Ian 'Hixie' Hickson
QA Contact: contributor
Reported: 2012-07-18 07:14 UTC by contributor
Modified: 2012-09-11 23:52 UTC
19 users

Description contributor 2012-07-18 07:14:12 UTC
This bug was cloned from bug 12399 as part of operation convergence.
Originally filed: 2011-03-30 02:27:00 +0000
Original reporter: Silvia Pfeiffer <silviapfeiffer1@gmail.com>

================================================================================
 #0   Silvia Pfeiffer                                 2011-03-30 02:27:22 +0000 
--------------------------------------------------------------------------------
For several reasons, we need to expose the performance of media elements to JavaScript.

One concrete use case is that content publishers want to understand the quality of their content as played back by their users, and how much of it users actually play back. For example, if a video always goes into buffering mode after 1 min for all users, maybe there is a problem in the encoding, or the video is too big for the typical bandwidth/CPU combination. Publishers also want to track how much of their video and audio files is actually being watched.

A further use case is HTTP adaptive streaming, where an author wants to manually implement an algorithm for switching between different resources of different bandwidth or screen size. For example, if the user goes full screen and the user's machine and bandwidth allow for it, the author might want to switch to a higher resolution video.
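
As a sketch of what such a script-driven switch might look like, assuming the metrics attribute proposed later in this bug (video.metrics.bytesReceived, which was never shipped) and hypothetical resource URLs and thresholds:

  // Illustrative sketch only: video.metrics.bytesReceived is the attribute
  // proposed in this bug, not a shipped API; MIN_KBPS and LOW_SRC are
  // hypothetical.
  var MIN_KBPS = 500;                   // assumed minimum sustainable rate
  var LOW_SRC  = 'video-low.webm';      // hypothetical lower-bitrate encoding
  var video = document.querySelector('video');
  var lastBytes = 0, lastTime = Date.now();

  setInterval(function () {
    var now = Date.now();
    var bytes = video.metrics.bytesReceived;
    var kbps = ((bytes - lastBytes) * 8) / (now - lastTime); // bits/ms = kbit/s
    lastBytes = bytes;
    lastTime = now;
    if (kbps < MIN_KBPS && video.currentSrc.indexOf(LOW_SRC) === -1) {
      var t = video.currentTime;
      video.src = LOW_SRC;                    // step down to the lower bitrate
      video.addEventListener('loadedmetadata', function onMeta() {
        video.removeEventListener('loadedmetadata', onMeta);
        video.currentTime = t;                // naive resume; real players
        video.play();                         // resync on segment boundaries
      });
    }
  }, 2000);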

Note that recent discussions on issue-147 [1] at least included a need to report on the actual playback rate achieved after trying to set it via playbackRate.

Note also that Mozilla is implementing player metrics [2].


[1] http://lists.w3.org/Archives/Public/public-html/2011Mar/0699.html
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=580531
================================================================================
 #1   Silvia Pfeiffer                                 2011-03-30 02:54:04 +0000 
--------------------------------------------------------------------------------
Note: I also just started a wiki page at http://wiki.whatwg.org/wiki/Video_Metrics to collect proposals of statistics. I'm hoping people can contribute there.
================================================================================
 #2   Chris Pearce                                    2011-03-31 03:11:37 +0000 
--------------------------------------------------------------------------------
We have landed support for the mozParsedFrames, mozDecodedFrames, mozPresentedFrames, mozPaintedFrames, and mozPaintDelay attributes in Firefox trunk. These should ship in Firefox 5. Looks like Silvia has already updated http://wiki.whatwg.org/wiki/Video_Metrics to include our new stats.
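
For reference, reading these prefixed counters from script looks roughly like this (a Firefox-only sketch):

  // Logs the Firefox-prefixed counters named above (Firefox 5+ only).
  var v = document.querySelector('video');
  v.addEventListener('timeupdate', function () {
    if (v.mozParsedFrames === undefined) return;  // not a Gecko browser
    console.log('parsed:',    v.mozParsedFrames,
                'decoded:',   v.mozDecodedFrames,
                'presented:', v.mozPresentedFrames,
                'painted:',   v.mozPaintedFrames,
                'paint delay (s):', v.mozPaintDelay);
  });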
================================================================================
 #3   Ian 'Hixie' Hickson                             2011-06-14 01:32:22 +0000 
--------------------------------------------------------------------------------
If I add this, is this something I should add to the W3C copy as well? Or is this too much of a new feature at this stage?
================================================================================
 #4   Silvia Pfeiffer                                 2011-06-14 03:51:06 +0000 
--------------------------------------------------------------------------------
In my opinion it would be good to have it as part of the W3C copy, too, particularly since we already have stats from WebKit and Firefox. Maybe make the changes in the WHATWG copy first; then we can discuss on the public-html list and have another bug to add it if agreeable?
================================================================================
 #6   Ian 'Hixie' Hickson                             2011-09-21 18:28:21 +0000 
--------------------------------------------------------------------------------
I will likely not get to this in time for the Last Call deadline; does anyone think this is urgent?
================================================================================
 #7   Silvia Pfeiffer                                 2011-09-22 18:42:15 +0000 
--------------------------------------------------------------------------------
Yes, I do think basic statistics need to be part of the W3C HTML5 spec. It is one of the key things that professional publishers continue to ask for.

How about, for now, we add what is proposed here: http://wiki.whatwg.org/wiki/Video_Metrics#Proposal, except for "playbackJitter", which we agreed at OVC would not be that useful.

Note that Mozilla already has a bug to introduce these: https://bugzilla.mozilla.org/show_bug.cgi?id=686370
================================================================================
 #8   Silvia Pfeiffer                                 2011-10-05 01:43:19 +0000 
--------------------------------------------------------------------------------
I had discussions with YouTube about the proposed metrics. Here's some feedback:

"Without a "download time" metric, i.e. the total amount of time spent thus far loading video data, the "bytes downloaded" metric is not especially useful, because we will need to expect differences in download-start-time behavior between browsers.

Measuring the download time through JavaScript would be both difficult and error-prone. "Download time" has to stop counting when there's suddenly no connection, for example. Also, in order to be accurate, it has to start instantaneously when any downloading begins, and end instantaneously when any downloading stops, possibly faster than a JavaScript event timer will fire.

While developers could watch networkState, that's a lot of possibly-complex and error-prone code that would have to be duplicated by every developer who wants to know this information, instead of having it simply provided by the platform (which is probably actually more capable of tracking this information simply than we would be as JS developers)."
---

So, the suggestion is to add a downloadTime metric defined as: the time from when the first HTTP request is sent until now, or until the download stops/finishes/terminates (whichever is earlier).

I've also added this to the proposal in the WHATWG wiki.
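
For illustration, the fragile script-side bookkeeping the quote alludes to would look roughly like this (the event names are standard HTML media events, but the accounting misses anything faster than event delivery, which is exactly the argument for a platform-provided downloadTime):

  // Rough script-side approximation of downloadTime; a sketch only.
  var v = document.querySelector('video');
  var downloading = false, startedAt = 0, totalMs = 0;

  v.addEventListener('progress', function () {        // data is arriving
    if (!downloading) { downloading = true; startedAt = Date.now(); }
  });
  function stopCounting() {                           // download paused/ended
    if (downloading) { totalMs += Date.now() - startedAt; downloading = false; }
  }
  v.addEventListener('stalled', stopCounting);
  v.addEventListener('suspend', stopCounting);
  v.addEventListener('ended', stopCounting);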
================================================================================
 #9   Silvia Pfeiffer                                 2011-10-05 01:44:11 +0000 
--------------------------------------------------------------------------------
More feedback from YouTube - this needs to be clarified for implementation:

"For the (vide|audi)o(Fram|Byt)esDecoded properties, it's not clear if, for example, a user who watches part of a video, and replays over a segment he just watched, will see those bytes/frames counted once, for the *initial* presentation -- or twice, once for each display on-screen.

In either case, we could use statistics on remaining buffer size to monitor rebuffering."
================================================================================
 #10  Silvia Pfeiffer                                 2011-10-05 01:50:10 +0000 
--------------------------------------------------------------------------------
And another requested metric from YouTube:

"playbackJitter could be caused by either networking or decoding issues. The strategies to attack these two types of issues are very different. It would be better to use a different metric - networkWaitTime, which is the total duration of a playback being blocked on waiting for more data from network."

I've also added this to the WHATWG wiki page.
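
For illustration, with the proposed networkWaitTime a publisher could derive a simple rebuffering ratio (a sketch; the attribute was never shipped, and seconds are assumed as its unit):

  // Fraction of wall-clock playback time spent blocked on the network.
  // video.metrics.networkWaitTime is the *proposed* attribute, assumed
  // to be in seconds.
  function rebufferRatio(video) {
    var waited  = video.metrics.networkWaitTime;  // proposed
    var watched = video.currentTime;              // seconds of media played
    return watched > 0 ? waited / (waited + watched) : 0;
  }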
================================================================================
 #11  Ian 'Hixie' Hickson                             2011-10-06 00:50:00 +0000 
--------------------------------------------------------------------------------
What is the proposed latency on these attributes? e.g. should the same attribute always return the same value within one task, or should it return the real value?

What should attributes that aren't applicable return? (e.g. audioBytesDecoded when the user agent hasn't been decoding audio data because all the audio tracks are disabled, or droppedFrames if the media resource in the <video> element has no video track.)

What are the use cases for each proposed attribute?
================================================================================
 #12  Philip J                                        2011-10-06 08:41:04 +0000 
--------------------------------------------------------------------------------
There is nothing user-visible one can do by observing changes within a task, so please don't do that. I still have hope that we can follow Mozilla in making HTMLMediaElement stable, so let's not introduce more racy stuff here.
================================================================================
 #13  Silvia Pfeiffer                                 2011-10-06 08:51:21 +0000 
--------------------------------------------------------------------------------
Since this is just about observing what is happening, I wouldn't think it introduces raciness. Returning the real value rather than anything task-dependent would be the objective.

Attributes that aren't applicable would just return 0, e.g. in an audio-only file no video frames would be decoded.

I'll put use cases into the wiki page.
================================================================================
 #14  Max Kanat-Alexander                             2011-10-06 17:57:56 +0000 
--------------------------------------------------------------------------------
No, I think we actually want the task-dependent values. If you're going to do math with them, you want them all to be stable versus each other.
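
For instance, deriving an average bitrate needs two reads from the same snapshot (a sketch, assuming the proposed attributes):

  // If the attributes updated live mid-task, bytes and seconds could come
  // from different instants and the derived rate would be skewed.
  var video = document.querySelector('video');
  var m = video.metrics;                 // proposed, not shipped
  var bytes = m.bytesReceived;           // read 1
  var secs  = m.downloadTime;            // read 2 - must match read 1
  var avgKbps = secs > 0 ? (bytes * 8) / (secs * 1000) : 0;  // kbit/s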
================================================================================
 #15  Ian 'Hixie' Hickson                             2011-10-11 00:32:57 +0000 
--------------------------------------------------------------------------------
Ah, yeah, needing them to be stable to do maths with them is a good point.

Defaulting to zero seems ok.

Use cases for each of these attributes would be exceedingly helpful for providing examples in the spec.
================================================================================
 #16  Silvia Pfeiffer                                 2011-10-11 04:08:05 +0000 
--------------------------------------------------------------------------------
Apologies for the delay: I've just had a go at some use cases for the attributes at http://wiki.whatwg.org/wiki/Video_Metrics from what I understand they'd be used for.
================================================================================
 #17  Ian 'Hixie' Hickson                             2011-10-11 22:52:28 +0000 
--------------------------------------------------------------------------------
bytesReceived, downloadTime, and networkWaitTime seem reasonable. I propose to provide them on a MediaMetrics object hung off an HTMLMediaElement.metrics attribute.
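
A sketch of that proposed shape from script (the MediaMetrics interface was never specified or shipped; report() is a hypothetical analytics hook):

  // Proposed shape only - never specified or shipped.
  var m = document.querySelector('video').metrics;   // proposed MediaMetrics
  report({                                           // hypothetical hook
    bytes:   m.bytesReceived,     // total media bytes received so far
    dlTime:  m.downloadTime,      // total time spent downloading
    netWait: m.networkWaitTime    // total time playback blocked on network
  });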

The others all seem to have the same use case, essentially "determine the client's performance". It's not at all clear to me why one of these would be better than the other, or how they would be used differently. 

If the goal is just to be able to determine whether, given the current system load, the browser can handle a given track or whether the page should step down to a lower-quality track, surely you only really need playbackJitter. Even then, though, it's not at all clear to me that that's the right approach.

Consider a situation where a user's system can't decode a high-quality video in real time, but has plenty of RAM to decode it ahead of time and then present it to the user with perfect fidelity. Or consider a system where the user is playing two videos, one of which is trying to be nice and so lowers the video quality when it notices high load, and the other of which is more aggressive and increases the video quality whenever the load lightens up in any way. The net result will just be that the aggressive one shows at the highest quality and the polite one shows at the lowest quality.

Surely what would be better is a way for the browser to automatically step videos up or down, so that there's no risk of one Web page being more aggressive than another. (It's more than just video, too; consider a similar API for Web Workers, where a worker that increases its load whenever it notices the total system load going down competes with a video that politely lowers its load whenever it has reason to think the load is high or increasing.)
================================================================================
 #18  Silvia Pfeiffer                                 2011-10-12 02:22:00 +0000 
--------------------------------------------------------------------------------
(In reply to comment #17)
> The others all seem to have the same use case, essentially "determine the
> client's performance". It's not at all clear to me why one of these would be
> better than the other, or how they would be used differently.

One goal for determining the client's performance is indeed to use it for HTTP adaptive streaming approaches. But that's not the only goal.

By being able to determine whether a client's performance sucks because their network connection sucks, their decoding pipeline is slow, or their rendering engine is slow, you can better determine what changes to make: if it's the rendering engine, you're better off providing the video in a resolution appropriate for the viewer's screen, which reduces load on the rendering engine. If it's the decoding pipeline, you instead need to switch to a video that lightens the load on the decoding pipeline - maybe one with more I-frames/keyframes. And if it's the network, well, then you had better reduce your bitrate and go to a lower resolution.

All of the proposed metrics provide a better basis for reporting on quality of delivery (e.g. if you are providing guarantees to your customers on the QoS of video delivery), for decision making in an HTTP adaptive streaming approach, and for deciding what types of encodings to actually make available on your video servers.

Your example of two video providers having different strategies for how to react to the information provided is one that the market will sort out. If you are saying that we should provide a HTTP adaptive streaming solution natively in the browser, I'd agree. But that doesn't imply that the metrics are not necessary - they still are and introduction of a native HTTP adaptive streaming solution is orthogonal to this issue (and should be dealt with separately from this bug).
================================================================================
 #19  Max Kanat-Alexander                             2011-10-12 18:12:12 +0000 
--------------------------------------------------------------------------------
To be clear, one of the major things we want to do with the performance data is validate or invalidate experiments.

For example, let's say we want to turn on a new feature for 1% of all our users, and see if, on the aggregate, it affects their framerate. We would do this by sampling the framerate on the client side (or simply watching the number of dropped frames). Then we would send that information back to our servers, where it could be aggregated and tagged as being a part of this experiment. Then we would compare that to our control numbers (the aggregate averages from the other 99%) and see if there was significant deviation.

When we do this analysis, we want to know very specifically what is causing the changes in the numbers. Are we causing there to be more time spent downloading? Are we causing more dropped frames?

So it's true that the things happening on an individual user's machine may bias the data on that individual machine, but when we aggregate the data, if we have a large enough sample size, that sort of noise should be insignificant.
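
A sketch of that client-side sampling (webkitDroppedFrameCount is the WebKit-prefixed counter; the /metrics endpoint and the bucketing are hypothetical):

  // Sample dropped frames at the end of playback and beacon them home,
  // tagged with the experiment bucket. Endpoint is hypothetical.
  var v = document.querySelector('video');
  var bucket = Math.random() < 0.01 ? 'test' : 'control';   // 1% experiment
  v.addEventListener('ended', function () {
    var img = new Image();                                  // simple beacon
    img.src = '/metrics?exp=' + bucket +
              '&dropped=' + (v.webkitDroppedFrameCount || 0);
  });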
================================================================================
 #20  Ian 'Hixie' Hickson                             2011-10-13 20:01:03 +0000 
--------------------------------------------------------------------------------
Ah, that's an interesting use case not mentioned on the wiki.

If what you're trying to determine is whether the decoding pipeline or the rendering pipeline is suffering, and if either, which one, then it seems like what you'd want is just two numbers: one, the rate of frames dropped by the decoder (which we can expose as two numbers, the number of frames passed to the decoder and the number of frames actually decoded), and two, the jitter.

Why would you need the count of bytes decoded, the number of dropped frames (unless that's defined as dropped by the decoder, rather than by any part of the pipeline? — the wiki isn't clear on this), and the number of presented frames?
================================================================================
 #21  Silvia Pfeiffer                                 2011-10-25 06:04:30 +0000 
--------------------------------------------------------------------------------
We need to be able to measure all three: the network performance, the decoding pipeline, and the rendering pipeline. Each of these bears different information and results in different consequences/actions.

From a network POV we can only deal with bytes.

The decoder gets bytes as input. It's not really possible to count how many frames go into the decoder, because the framing is done as part of decoding, IIUC, so counting the decoded bytes lets us know how much the decoder dropped. The decoder can then report the number of frames it outputs.

The renderer deals only with frames.

The proposed metrics in the wiki cover measuring the performance of all three steps. Jitter is an aggregate metric that is better calculated from the more detailed information that the other metrics provide, so we should not use jitter. But the other metrics in the wiki make sense to me and seem sufficiently independent of each other.
================================================================================
 #22  Philip J                                        2011-10-25 09:25:11 +0000 
--------------------------------------------------------------------------------
How does one measure the number of bytes going into the demuxer in a way that makes sense cross-browser? The WebM demuxer Opera uses is a bit particular in that it reads overlapping blocks from the input, not just consecutive blocks. If one just counts the bytes going in, that would exceed the size of the entire file after a single playthrough due to the overlap.

Another issue is that the demuxer is involved in QoS, skipping forward in the input if there have been dropped frames in order to catch up. This would influence the measurement of both incoming and outgoing bytes.

Even trying to measure something like the number of expected frames is a bit hard, because when the demuxer skips forward to catch up it can't know how many frames it just skipped, unless it spends time trying to figure that out and thereby falls further behind.

Saying how many frames were decoded is not a problem, but anything upstream of that in the pipeline seems a bit dodgy as long as one has some kind of QoS in the demuxer. It perhaps shouldn't be surprising that it's hard for JavaScript to adapt the quality when the decoding pipeline is trying to do the same thing...
================================================================================
 #23  Ian 'Hixie' Hickson                             2011-10-26 23:14:18 +0000 
--------------------------------------------------------------------------------
Yeah, to me this all seems quite misguided. If we want to do adaptive quality streaming, we should do that at the network layer, not in JS. I don't think the proposed attributes solve the use case presented. I see how bytesReceived, downloadTime, and networkWaitTime could be used to collect aggregate data about the user population to help a service optimise in general, but I don't see how the other attributes can be helpful.
================================================================================
 #24  Max Kanat-Alexander                             2011-10-27 00:58:33 +0000 
--------------------------------------------------------------------------------
  Are you saying that there should be a specification on an HTMLMediaElement for it to stream adaptively automatically, by the browser making a determination at the network layer? That sounds like it risks freezing adaptive technologies in time, although I do agree that it would be nice to have for the average developer! The advantage of exposing the necessary information to JS, on the other hand, is that developers can be more innovative about adaptive strategies if they have to be.
================================================================================
 #25  Silvia Pfeiffer                                 2011-10-29 09:16:20 +0000 
--------------------------------------------------------------------------------
After lengthy discussions on the FOMS mailing list over the last year and a bit, I have come to the conclusion that we need both: a built-in solution for adaptive streaming, and the possibility to implement it in JS. The metrics listed here contribute towards making this possible.
================================================================================
 #26  Silvia Pfeiffer                                 2011-10-29 09:55:57 +0000 
--------------------------------------------------------------------------------
(In reply to comment #22)
> How does one measure the number of bytes going into the demuxer in a way that
> makes sense cross-browser? The WebM demuxer Opera uses is a bit particular in
> that it reads overlapping blocks from the input, not just consecutive blocks.


The metrics here are really not about comparing browsers with each other. They are about measuring the quality of service that the user sees in video, and about allowing the video publisher to determine where quality problems originate: the network, the browser, or the device (i.e. is the computer overloaded). Having the metrics available allows the video publisher to take measures to counteract poor video quality and fix it, e.g. get a better network service (a better CDN), file browser bugs, or change the resource being delivered appropriately (smaller resolution, lower bitrate, etc.).


> If one just counts the bytes going in, that would exceed the size of the entire
> file after a single playthrough due to the overlap.

The bytesDecoded measure is about measuring what bytes have come out of the decoding pipeline, not what has gone in. If you are referring to bytesReceived, those are not measured at the point where they are fed to the demuxer, but right after they have been received from the network, so they should not be double-counted.

I envisage bytesDecoded being polled frequently so as to provide a bitrate measure. I.e. at time 1 sec of video playback we have bytesDecoded=8K, at time 2 sec we have bytesDecoded=12K, at time 3 sec we still have bytesDecoded=12K. Assuming bytesReceived has been growing continuously over this time, this tells us that something is hanging in the decoding pipeline.
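
Expressed as a sketch (bytesDecoded and bytesReceived are the proposed attributes, not a shipped API):

  // Poll once a second: if received bytes keep growing while decoded bytes
  // stall during playback, the decoding pipeline is the bottleneck.
  var video = document.querySelector('video');
  var lastDecoded = 0, lastReceived = 0;
  setInterval(function () {
    var m = video.metrics;                       // proposed
    if (!video.paused &&
        m.bytesDecoded === lastDecoded &&
        m.bytesReceived > lastReceived) {
      console.warn('decoding pipeline appears to be hanging');
    }
    lastDecoded = m.bytesDecoded;
    lastReceived = m.bytesReceived;
  }, 1000);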


> Another issue is that the demuxer is involved in QoS, skipping forward in the
> input if there have been dropped frames in order to catch up. This would
> influence the measurement of both incoming or outgoing bytes.

Yes it would, but because you have the metric droppedFrames, you can determine that this has happened and that your bytes arrived too late.


> Even trying to measure something like the number of expected frames is a bit
> hard, because when the demuxer skips forward to catch up it can't know how many
> frames it just skipped, unless it spends time trying to figure that out and
> thereby falling further behind.
>
> Saying how many frames were decoded is not a problem, but anything upstream to
> that in the pipeline seems a bit dodgy as long as one has some kind of QoS in
> the demuxer.

So, you're saying that it's not possible to measure droppedFrames? WebKit is doing it. However, we can discuss whether we should replace the {decodedFrames, droppedFrames, presentedFrames} set with {decodedFrames, presentedFrames, paintedFrames} as Mozilla has implemented.

> It perhaps shouldn't be surprising that it's hard for JavaScript
> to adapt the quality when the decoding pipeline is trying to do the same
> thing...

The decoding pipeline is trying to do its best with the data it has been given. JavaScript has the possibility to replace that data with something that the decoding pipeline can deal with more easily. I don't see a conflict at all.
================================================================================
 #27  Clarke Stevens                                  2011-11-09 23:08:20 +0000 
--------------------------------------------------------------------------------
At the F2F meetings in Santa Clara, the HTML WG appeared to support this bug as
a way to get relevant feedback for support of adaptive bit rate media as well as content protection schemes. However, the HTML WG would like a concise list of common errors and/or events for these purposes.

The Media Pipeline task force (MPTF) has agreed to review this and come up with a proposed list.

Here are links to the relevant MPTF requirements:

http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R7._Additional_Media_Parameters
http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R10._Content_Protection_Parameters
================================================================================
 #28  Clarke Stevens                                  2011-11-09 23:10:29 +0000 
--------------------------------------------------------------------------------
(In reply to comment #27)

> Incorrect links:
> http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R7._Additional_Media_Parameters
> http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R10._Content_Protection_Parameters

Oops: here are the corrected links:
http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R8._Additional_Media_Feedback_and_Errors
http://www.w3.org/2011/webtv/wiki/MPTF/MPTF_Requirements#R11._Content_Protection_Feedback_and_Errors
================================================================================
 #29  Michael[tm] Smith                               2011-11-20 18:25:02 +0000 
--------------------------------------------------------------------------------
Note comment #27:
> The Media Pipeline task force (MPTF) has agreed to review this and come up with
> a proposed list.

This bug is waiting for that proposed list to be provided.
================================================================================
 #30  Ian 'Hixie' Hickson                             2011-11-24 21:44:39 +0000 
--------------------------------------------------------------------------------
(In reply to comment #24)
> Are you saying that there should be a specification on an HTMLMediaElement
> for it to stream adaptively automatically, by the browser making a
> determination at the network layer?

I'm saying there should be a network protocol that does this, yes. It wouldn't be part of the HTMLMediaElement specification; the same technology would apply in all streaming situations. Indeed such technology probably already exists.


> That sounds like it risks freezing adaptive
> technologies in time, although I do agree that it would be nice to have for the
> average developer!

I don't see why it would freeze anything in time.


> The advantage of exposing the necessary information to JS,
> on the other hand, is that developers can be more innovative about adaptive
> strategies if they have to be.

I don't see why browser vendors and server vendors can't innovate also.



Currently, the state of this bug is that, of the use cases presented, there is one that seems like it would be best addressed via additions to the JS API: the ability to collect aggregate data about the user population's bandwidth availability to help a service optimise in general. This use case argues for providing bytesReceived, downloadTime, and networkWaitTime attributes, which I intend to add in the near future.

The other use cases presented either don't seem best handled by an API or do not seem to be handled by the proposed metrics. I would recommend filing separate bugs for other use cases, though, rather than having all metrics-related use cases dealt with in one bug. I'm sure that if you ask the chairs they'd be happy to handle such split-out bugs as LC1 also, if that matters.
================================================================================
 #31  Max Kanat-Alexander                             2011-11-28 22:32:53 +0000 
--------------------------------------------------------------------------------
(In reply to comment #30)
> I'm saying there should be a network protocol that does this, yes. 

  Oh. It couldn't all be just a network protocol, though, because a lot of the decisions have to be made on the client--only the client can know if it's able to play back a format well enough.

  Also, are you thinking this would be something on top of HTTP? Introducing protocols other than HTTP is going to make life difficult for client-side developers.

> I don't see why it would freeze anything in time.

  There's always going to be some browser somewhere that can't be updated (or which updates slowly--for example, most mobile browsers today) which the server-side will have to support.

> I don't see why browser vendors and server vendors can't innovate also.

  They can! And I agree that they should, and I agree that for most developers, having this built into the browser is absolutely the best solution. My point is that JS developers can push out new code from day to day, while it can take months or years for a new browser to have sufficient usage.

> I would recommend filing separate bugs for other use
> cases, though, rather than having all metrics-related use cases dealt with in
> one bug. I'm sure that if you ask the chairs they'd be happy to handle such
> split-out bugs as LC1 also, if that matters.

  I think that sounds totally reasonable. I'll file a separate bug for the playback-quality stuff.
================================================================================
 #32  Max Kanat-Alexander                             2011-11-28 22:45:08 +0000 
--------------------------------------------------------------------------------
Okay. Bug 14970 filed.
================================================================================
 #33  Ian 'Hixie' Hickson                             2011-12-07 01:06:15 +0000 
--------------------------------------------------------------------------------
Thanks.
================================================================================
 #34  Clarke Stevens                                  2011-12-16 07:01:09 +0000 
--------------------------------------------------------------------------------
(In reply to comment #29)
> Note comment #27:
> > The Media Pipeline task force (MPTF) has agreed to review this and come up with
> > a proposed list.
> 
> This bug is waiting for that proposed list to be provided.

The Media Pipeline Task Force has submitted the following proposals in response to this comment.

http://www.w3.org/2011/webtv/wiki/MPTF/ADR_Minimal_Control_Model_Proposal#Feedback
http://www.w3.org/2011/webtv/wiki/MPTF/HTML_Error_codes
================================================================================
 #35  Ian 'Hixie' Hickson                             2012-02-10 00:33:07 +0000 
--------------------------------------------------------------------------------
> > I'm saying there should be a network protocol that does this, yes. 
> 
> Oh. It couldn't all be just a network protocol, though, because a lot of the
> decisions have to be made on the client--only the client can know if it's able
> to play back a format well enough.

Sure. The network protocol is nothing alone, it's just a way for the client to communicate to the server. My point is that it should be done by the client and the server, via a network protocol, not by a script running on the client.


> There's always going to be some browser somewhere that can't be updated (or
> which updates slowly--for example, most mobile browsers today) which the
> server-side will have to support.

There's always going to be some browser that doesn't support any of this. Or that has a bug that means scripting doesn't run. Or indeed, that has scripting disabled. Or any number of other weird states. I see nothing special about the browser here as compared to script based on browser APIs.


> > I don't see why browser vendors and server vendors can't innovate also.
> 
> They can! And I agree that they should, and I agree that for most developers,
> having this built into the browser is absolutely the best solution. My point is
> that JS developers can push out new code from day to day, while it can take
> months or years for a new browser to have sufficient usage.

Modern browsers update continuously, in a matter of weeks, these days. It's just as possible for a browser to improve faster than a site updates its code as the other way around. In fact, since there are fewer browsers, it's more likely that they'll be updated.
================================================================================
 #36  Ian 'Hixie' Hickson                             2012-02-10 00:36:05 +0000 
--------------------------------------------------------------------------------
Looking specifically at bytesReceived, downloadTime, and networkWaitTime, is this something that might make sense more generically for all resources rather than specifically for video? e.g. something the perf work might more appropriately handle?
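
For comparison, a sketch using the Resource Timing API that the perf work later produced (transferSize arrived in Resource Timing Level 2, after this bug; media fetched via range requests may appear as multiple entries):

  // Sums transfer size and fetch time across all Resource Timing entries
  // for one URL.
  function mediaDownloadStats(url) {
    var entries = performance.getEntriesByName(url, 'resource');
    return entries.reduce(function (acc, e) {
      acc.bytes += e.transferSize || 0;          // 0 when not exposed
      acc.ms    += e.responseEnd - e.startTime;  // wall-clock fetch time
      return acc;
    }, { bytes: 0, ms: 0 });
  }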
================================================================================
 #37  Silvia Pfeiffer                                 2012-02-13 09:39:02 +0000 
--------------------------------------------------------------------------------
It could be useful for other resources, too. However, decodedFrames and droppedFrames are video-only, as is playbackJitter.

At one stage we were discussing creation of a playback metrics object that the mediaElement could use. Is that what you want perf to work on?

Also something that may be of interest to explain the use case better: if you right-click on a YouTube video and select "show video info", you get these kinds of quality-of-service metrics, too.
================================================================================
 #38  Ian 'Hixie' Hickson                             2012-03-22 18:54:47 +0000 
--------------------------------------------------------------------------------
Silvia: comment 37 does not seem to answer the question in comment 36 (you seem to be talking about different attributes).
================================================================================
 #39  Silvia Pfeiffer                                 2012-03-22 20:51:57 +0000 
--------------------------------------------------------------------------------
Hmm, I guess the implied answer was: I believe they are and would be a good start.

I was thinking beyond just these three features, though, to how we can extend them to the more video-specific ones requested in the wiki page, and whether design decisions made now might make that awkward in the future. But I'm happy to just start with this concise lot.
================================================================================
 #40  Ian 'Hixie' Hickson                             2012-04-30 23:38:49 +0000 
--------------------------------------------------------------------------------
Silvia: So to confirm, you're agreeing that bytesReceived, downloadTime, and networkWaitTime are not media-specific and that we should move them to a WebPerf API rather than HTMLMediaElement?
================================================================================
 #41  Silvia Pfeiffer                                 2012-05-01 05:13:25 +0000 
--------------------------------------------------------------------------------
Are you suggesting to add it to https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/ResourceTiming/Overview.html ?
================================================================================
 #42  Silvia Pfeiffer                                 2012-05-09 02:29:47 +0000 
--------------------------------------------------------------------------------
(In reply to comment #40)
> Silvia: So to confirm, you're agreeing that bytesReceived, downloadTime, and
> networkWaitTime are not media-specific and that we should move them to a
> WebPerf API rather than HTMLMediaElement?

I've spoken with some others and we agree that these three are not media-specific and could be progressed in WebPerf.



We have also identified that a generic droppedFrames measure for video is important so web developers can get information about how good the playback quality is that their users are seeing. It basically signals how much "system bandwidth" is available for video. Web developers can gather these stats to make a better-informed decision on which bitrate resource to choose at the start of the next video's playback, switch to alternative lower-bitrate resources mid-stream, or inform the user to close other apps, and they can build a profile of typical bandwidth cases to decide which bitrates to encode resources into. The droppedFrames metric is already available in WebKit through the webkitDroppedFrameCount attribute and in Firefox as (mozParsedFrames - mozPaintedFrames).
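
A cross-browser sketch using the prefixed counters named above (attribute availability varies by browser and version):

  // Best-effort dropped-frame count from vendor-prefixed attributes.
  function droppedFrames(v) {
    if (typeof v.webkitDroppedFrameCount === 'number')
      return v.webkitDroppedFrameCount;                 // WebKit/Blink
    if (typeof v.mozParsedFrames === 'number')
      return v.mozParsedFrames - v.mozPaintedFrames;    // Gecko
    return 0;                                           // unknown engine
  }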
================================================================================
Comment 1 Ian 'Hixie' Hickson 2012-09-11 23:52:36 UTC
Re the last paragraph, please keep that to bug 17803.

I'm closing this since it is to be dealt with in the perf WG, per the comments above.