This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
The WebVTT parser is rather lenient in what it allows for the hour specification: "Optionally (required if hour is non-zero): 1. Two or more characters in the range U+0030 DIGIT ZERO (0) to U+0039 DIGIT NINE (9), representing the hours as a base ten integer. 2. A U+003A COLON character (:)" It would be helpful to specify an implementation limit to ensure uniform rejection of "large" timestamps across browsers and other WebVTT tools. The question thus is: what are browser implementers using to store the timestamp? If an unsigned 32-bit integer of milliseconds is used, we are limited to about 1193 hours (=(2**32-1)/1000.0/3600), so having the parser reject hours that are larger than 3 digits makes sure that we don't overflow. If a double is used, we can exactly represent about 1,250,999,896 hours (=(2**52-1)/1000.0/3600), so any 9-digit hour value fits, which gives us a 142K-year limit. So it should be either 9 or 3, but we should provide a limit within which implementations are expected to be correct.
Note that the WebKit implementation indeed uses a double to represent timestamps (see http://svn.webkit.org/repository/webkit/trunk/Source/WebCore/html/track/WebVTTParser.cpp). Note to Eric & Anna (with thanks to Ralph Giles for finding it): That implementation actually uses an int to represent the hours and multiplies it by another int, so the product is computed in int arithmetic, essentially limiting it to 596,523 hours (=((2**31)-1)/3600). But that's a bug and can be fixed by casting the first int to a double in the calculation: return ((double)value1 * secondsPerHour) + (value2 * secondsPerMinute) + value3 + ((double)value4 / 1000);
With reference to issue 11647, where a limitation to two digits was rejected, I would suggest requiring apps to be accurate for hours of up to 9 digits, and thus warning users that they may run into problems if they expect more than 9 digits to work (which, IMHO, is unlikely :-).
I don't really see an advantage to limiting this. If one day someone wants to use a 256 bit integer to store this, who are we to stop them? It's just a QoI issue.
(In reply to comment #3) > It's just a QoI issue. Having a QoI issue is always worse than not having one, when all else is equal. I think all else is equal--limiting hours to 9 digits doesn't limit the format in any meaningful way (that I can think of).
My choice of words was unfortunate. Instead of "implementation limit" I should have said "minimum implementation requirement". So, we should state that implementations are expected to correctly deal with hours of up to 9 digits. This means that anyone can expect their WebVTT files to work reliably in all implementations as long as their hours value is below that limit. Above this limit it does indeed become a QoI issue, but below it, it's a compatibility issue.
Having a minimum requirement like that is pointless. Implementations that can will support at least whatever is necessary to play back content in the wild, and that minimum will grow as content grows. Implementations that are so constrained that they can't meet the minimum will just ignore the spec if we require one. There's nothing wrong with not defining limits, IMHO. We do this all the time, and in practice it doesn't cause any problems. It's not like omitting error handling, where there might be multiple equally valid ways to handle something; with limits, the "better" solution is always a bigger limit.