Bugzilla – Bug 14970
<video> Expose statistics for tracking playback quality (framerate information)
Last modified: 2013-03-11 22:20:04 UTC
It would be useful for the <video> element to expose information about framerate.
At least one concrete use case is aggregating these statistics so that an organization can prove that <video> playback looks just as good as Flash playback for the same videos, from the user's perspective, across a wide range of clients. An organization may also want to use these statistics in the aggregate to make sure that client changes it makes don't impact the general viewing experience.
There are currently various quality-related statistics proposed here:
Note that this bug was split out from bug 12399, which is an LC1 bug
(In reply to comment #0)
> It would be useful for the <video> element to expose information about
What exactly does "framerate" mean?
I agree with Eric that "framerate" is ill-defined. A video is composed of frames that are individually timestamped. A frame rate implies that frames are timestamped at constant intervals, but that isn't always the case.
Check out the notes from the playback metrics session at OVC 2011:
(In reply to comment #2)
> What exactly does "framerate" mean?
Part of this bug would be defining what statistics we want. As opposed to framerate, what we really want to know is (a) how well the video is playing for the user in terms of non-network-related aspects and (b) what's causing it to play poorly or well.
If the goal is to "prove that browser-native video renders better than Flash", I'd be concerned about asking the browser for the evidence...
(In reply to comment #5)
> If the goal is to "prove that browser-native video renders better than Flash",
> I'd be concerned about asking the browser for the evidence...
That might be a side effect. But what you're really after when measuring the performance of video at the client is the quality at which the video is presented to the user. That may have nothing to do with the browser: poor quality can have many different causes, including poor network performance, machine overload by other processes (so the video decoder starves), or a poor video card.
The idea is that if a user complains to a publisher that their experience is bad, the publisher has a means to track down exactly what is causing that poor experience.
> The idea is that if a user complains to a publisher that their experience is
> bad, the publisher has a means to track down exactly what is causing that
> poor experience.
Ah well that's an interesting use case that wasn't brought up before.
If that's the use case, it seems like the best API would be something that returned a list of components involved in the display of the video, and for each one gave some sort of performance metric. The components could be UA-defined, since different UAs could have different components, but could e.g. be "network", "decoding", and "display".

Each one would then have an attribute saying what fraction of the media stream it was handling per unit time, and an attribute saying whether this performance was constrained by hardware limitations (e.g. pegging the CPU, the cache, the network, or GPU bandwidth), was constrained by software limitations (e.g. the decoding can only happen at the display rate because the software doesn't know how to buffer decoded frames), or was being artificially constrained to maintain a good user experience (e.g. the download could go faster but is being throttled by the client because the user might want to use the bandwidth for other things).

So e.g. if the network was downloading a 30-minute video as fast as it could at a rate that would take 15 minutes, it would have the value "2" (twice real time) and "hardware" (it's going as fast as it can). We'd probably want some sort of indicator of regularity too, e.g. to report cases where the decoding is averaging an ideal 25 fps, but actually doing 50 frames one second and zero the next.
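A minimal sketch of what this per-component API might look like from script. None of these names exist in any spec; the object shape and property names are invented purely to illustrate the idea in comment 7:

```javascript
// Hypothetical shape for a UA-provided list of playback components.
// rate: fraction of the media stream handled per unit time
//       (1.0 = real time, 2.0 = twice real time).
// constraint: what is limiting the component right now.
const playbackComponents = [
  { name: "network",  rate: 2.0, constraint: "hardware" }, // link saturated
  { name: "decoding", rate: 1.0, constraint: "software" }, // decodes at display rate only
  { name: "display",  rate: 1.0, constraint: "none" }
];

// A page script could then pick out the slowest component, i.e. the
// most likely cause of a poor experience.
function bottleneck(components) {
  return components.reduce((worst, c) => (c.rate < worst.rate ? c : worst));
}
```

With the values above, `bottleneck(playbackComponents)` returns the "decoding" entry, since it is the first component running no faster than real time.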
This is going in the right direction.
The idea of the metrics listed at http://wiki.whatwg.org/wiki/Video_Metrics#Proposal is to provide the measurements to calculate for each of the individual components what fraction of the media stream they were handling per unit time:
The network component would report how many bytes of video it has received, since when, and how much of that time was spent waiting. This allows calculating the bitrate at which the video is being received.
The decoding component would report per video and audio track how many bytes it was given to decode and how many it was actually able to decode.
The rendering component would report how many frames it was given from the decoder and how many of these it presented and how many had to be dropped because they were too late.
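To make the three groups of counters concrete, here is a hypothetical snapshot and one derived figure. Every property name is invented for this sketch; the wiki proposal does not mandate these identifiers:

```javascript
// Hypothetical snapshot of the cumulative counters described above,
// grouped per component.
const metrics = {
  network:   { bytesReceived: 2000000, elapsedMs: 4500, waitingMs: 500 },
  decoding:  { bytesGiven: 1800000, bytesDecoded: 1750000 },
  rendering: { framesGiven: 300, framesPresented: 290, framesDropped: 10 }
};

// Received bitrate in bits per second, counting only the time the
// network component was actually transferring data:
const activeSeconds = (metrics.network.elapsedMs - metrics.network.waitingMs) / 1000;
const receivedBitrate = metrics.network.bytesReceived * 8 / activeSeconds;
```

Here 2,000,000 bytes over 4 seconds of active transfer gives a received bitrate of 4 Mbit/s.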
I think these components are generic rather than UA-specific. The measures themselves are also neither UA-specific nor encoding-format-specific.
There are two ways of approaching these measurements: you can measure from the start of video download, or you can measure over a certain time frame (e.g. 100 ms). The latter gives a rate that can be plotted, but the former provides more accurate information: JS can poll it at whatever resolution is required, and the rate can be calculated from the differences between polls.
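The polling approach can be sketched as follows: the UA exposes only cumulative counters measured from the start of download, and the page derives rates from the difference between two polls. The counter names (`presentedFrames`, `bytesReceived`, etc.) are assumptions for illustration, not names from any spec:

```javascript
// Derive rates from two polls of cumulative counters.
function rateBetween(earlier, later) {
  const seconds = (later.timestampMs - earlier.timestampMs) / 1000;
  return {
    framesPerSecond:  (later.presentedFrames - earlier.presentedFrames) / seconds,
    droppedPerSecond: (later.droppedFrames   - earlier.droppedFrames)   / seconds,
    bitsPerSecond:    (later.bytesReceived   - earlier.bytesReceived) * 8 / seconds
  };
}

// Two snapshots taken one second apart:
const t0 = { timestampMs: 0,    presentedFrames: 0,  droppedFrames: 0, bytesReceived: 0 };
const t1 = { timestampMs: 1000, presentedFrames: 25, droppedFrames: 1, bytesReceived: 125000 };
const r = rateBetween(t0, t1); // r.framesPerSecond is 25, r.bitsPerSecond is 1000000
```

The polling interval is entirely under the page's control, which is exactly the flexibility argued for above: a monitoring script can sample every 100 ms for a plot, or once a minute for aggregate reporting, from the same counters.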
(In reply to comment #8)
> I think these components are not UA specific, but generic.
We've already received implementation feedback to the contrary, which is why I think it makes sense to make this more of a UA-defined list of components than a defined list.
For the given use case, it doesn't matter if every browser has the same list, since the use case is specifically about determining why specific cases render poorly.
Implementation feedback on the idea in comment 7, intended to address the use case in comment 6, would be helpful at this point.
This bug was cloned to create bug 17803 as part of operation convergence.
Mass move to "HTML WG"