15:07:36 RRSAgent has joined #tt
15:07:36 logging to https://www.w3.org/2021/05/13-tt-irc
15:07:38 RRSAgent, make logs Public
15:07:39 Meeting: Timed Text Working Group Teleconference
15:09:40 nigel_ has joined #tt
15:11:09 nigel__ has joined #tt
15:14:26 scribe: nigel
15:14:28 cpn has joined #tt
15:15:16 Present: Atsushi, Chris_Needham, Cyril, Gary, Pierre, Nigel
15:15:21 Chair: Gary, Nigel
15:15:32 Agenda: https://github.com/w3c/ttwg/issues/185
15:15:56 Previous meeting: https://www.w3.org/2021/04/29-tt-minutes.html
15:16:02 Topic: This meeting
15:16:43 Nigel: Today we have a couple of TTML2 issues to circle back on, and an agenda item on WebVTT requirements gathering for possible syntax changes.
15:16:48 .. And in AOB, TPAC 2021.
15:17:01 .. Is there any other business, or anything to make sure we cover?
15:17:21 group: [no other business]
15:18:07 Topic: Shear calculations and origin of coordinate system. w3c/ttml2#1199
15:18:13 github: https://github.com/w3c/ttml2/issues/1199
15:18:32 Cyril: The initial issue is about clarifying what happens.
15:18:41 .. I think we came up with a possible clarification for some writing modes.
15:18:47 .. We need to propose some text.
15:18:53 pal has joined #tt
15:18:59 Nigel: Yes, that's great, do that!
15:19:07 Cyril: Okay, I'll propose text, maybe for next time.
15:19:21 .. The discussion veered a bit towards how to map to CSS, which won't be solved easily.
15:19:29 .. But better documenting what we have is important for interoperability.
15:20:15 Nigel: From https://github.com/w3c/ttml2/issues/1199#issuecomment-802057127 I summarised that we need to know
15:20:22 .. what we want it to do. Do you think that's clear now?
15:20:38 Cyril: I think we said it does depend on the writing mode.
15:20:43 Pierre: I'm not sure about that, actually.
15:20:51 .. Clarifying the current text is a good idea.
15:21:56 .. I'm hesitating only because CSS was about to do something. Do we know if they are planning
15:22:10 .. to address it soon?
.. It would be a shame if we come to a different conclusion from what they are planning to do.
15:23:22 Cyril: Maybe we could say it's not defined in horizontal writing mode, which we don't need for now.
15:23:40 -> https://github.com/w3c/csswg-drafts/issues/4818 [css-font-4] oblique angle for vertical text with text-combine-upright #4818
15:24:55 Nigel: I noticed today that the issue above got a useful comment 13 days ago.
15:25:12 Pierre: Yes, but what that says is "to the side".
15:25:46 Nigel: Yes, it's vertical for vertical text, i.e. in the inline direction, which some could consider "to the side".
15:25:50 group: [amusement]
15:26:07 Cyril: I can propose an update for the TTML spec that says the behaviour is undefined for anything
15:26:24 .. other than top-to-bottom, right-to-left, and that behaviour would match what CSS implementations do.
15:26:52 .. [thinks] Maybe the solution will be different for fontShear, lineShear and shear. I'll think about it.
15:27:13 Nigel: Thank you.
15:27:33 Pierre: Cyril, a bigger question is: today IMSC supports only blockShear. Is that really the right thing?
15:27:53 Cyril: It's a difficult question. I can tell you that ideally what we would like at Netflix is the behaviour of fontShear, with vertical
15:28:08 .. and tate-chu-yoko and ruby being handled correctly, where "correctly" here is still subject to interpretation.
15:28:27 .. I think we understand that lineShear is complicated in terms of line layout and reflow, and blockShear is the simplest we came up with
15:28:32 .. in terms of implementation.
15:28:47 Pierre: That's how it's done, but it's not clear if that's right. It has the potential of overflowing.
15:28:56 .. It sounds like we don't have an answer there.
15:30:13 Nigel: I'm surprised by your view, Cyril; I thought lineShear would be the preferred option, as it is simpler for layout and retains the alignments.
15:30:26 Cyril: But line lengths can change with lineShear.
15:30:30 Nigel: I think that's blockShear.
15:30:42 Pierre: For lineShear you can predict the line length in advance and lay out once.
15:31:19 .. For blockShear you need to know the height of the block, but then there might be overflow causing a change to the block height.
15:31:30 Cyril: OK, I thought that it was simpler; I need to think about it.
15:31:53 .. I know what we want to achieve with shear in subtitles; it's complex because of what is implemented.
15:32:08 Pierre: I'm fairly sure we want lineShear, but we couldn't adopt it because of lack of implementation in CSS.
15:33:20 Cyril: lineShear and fontShear are essentially the same: if you combine glyphs before shearing, as for tate-chu-yoko, they come out the same.
15:33:25 Nigel: What about ruby alignment?
15:33:38 Pierre: The alignment changes - I have heard it argued both ways.
15:33:50 Cyril: The difference is subtle; I wonder if we need to worry about it.
15:34:09 Pierre: If tomorrow all browsers supported fontShear with the tate-chu-yoko hack then I suspect we'd use it,
15:34:17 .. rather than CSS shear. I don't disagree with you.
15:34:28 .. Treating tate-chu-yoko as a single glyph is kind of weird, though.
15:34:52 Pierre: I like your plan to provide a clarification for vertical text. That would be helpful, even if we
15:34:56 .. leave the rest undefined for now.
15:35:13 SUMMARY: @cconcolato to propose text
15:35:32 Topic: Mention fingerprinting vectors in privacy considerations. w3c/ttml2#1189
15:35:39 github: https://github.com/w3c/ttml2/issues/1189
15:36:11 Nigel: Just to note, I opened a pull request for this yesterday.
15:36:25 -> https://github.com/w3c/ttml2/pull/1231 Pull Request: Add further fingerprinting considerations w3c/ttml2#1231
15:36:33 Nigel: That's open for review; please take a look.
15:37:05 .. The original commenter, Jeffrey Yasskin, gave a thumbs-up to the analysis I did 2 weeks ago, and this pull request implements that.
15:37:11 .. I see Glenn has already approved it.
15:37:39 ..
.. It'd be good to get this merged in our normal 2-week period if we can, to get this done and dusted.
15:38:03 SUMMARY: Group to review as per normal process
15:38:25 Topic: WebVTT - Requirements-gathering for syntax changes to support unbounded cues, cue updating etc.
15:39:04 Chris: This is a use case and requirements gathering exercise for unbounded cues in WebVTT
15:39:21 .. specifically. It's been discussed here, and in the Media Timed Events Task Force that the MEIG is running.
15:39:41 .. We brought it to an IG meeting on Tuesday, where we decided to do the use case and requirements work as part of the
15:39:57 .. Media Timed Events activity that we have, and then use the information that we gather there to help with design decisions
15:40:03 .. around support for unbounded cues in WebVTT.
15:40:21 .. To that end I created an initial (very initial) use case and requirements document that we can use as a basis.
15:40:24 https://github.com/w3c/media-and-entertainment/blob/master/media-timed-events/unbounded-cues.md
15:40:30 ... Link pasted above.
15:40:50 .. What I'm looking for, I think, are quite detailed use cases: the specific actions that we need.
15:41:13 .. For example, if I pick the WebVMT example, we can distil a lot of what Rob is looking for to the idea that we have timed measurements,
15:41:31 .. be it location or whatever, that are aligned to the video, and those get updated at points in time in the video, and he's choosing to
15:41:36 .. represent those as unbounded cues.
15:41:58 .. Then the application can receive and respond to those, or in his case he does interpolation. I'm not sure if that level of detail matters
15:42:06 .. from the point of view of how it may affect the syntax.
15:42:33 .. We also have the use cases around captions that span multiple VTT documents, like in DASH or segmented media delivery in general.
15:43:07 ..
.. I'm hoping we can gather that handful of use cases, capture and explain them, and make sure we have everything covered.
15:43:19 .. That leads us towards being able to consider how the syntax may need to change.
15:43:30 .. I include the backwards compatibility requirement in there.
15:43:51 .. All of those are captured in the document as it stands. There is a list of to-do comments to write some information.
15:44:04 .. I don't know if it is complete. I'm hoping that contributors will be able to help fill in the details.
15:44:18 q+
15:44:27 .. I think, Cyril, in the last meeting you mentioned that there's an MPEG document that talks about how this may be carried in MP4.
15:44:45 Cyril: Yes, there was a proposal to update the carriage of WebVTT in MP4 for unbounded cues, but it was mentioned that since
15:44:55 .. there was no syntax for unbounded cues you could not carry them.
15:45:10 .. So the proposal was to remove the amendment to 14496-30, but the resolution of the comment
15:45:28 .. is currently "if there is a way to specify unbounded cues then here's how you deal with it". It's shifting sand.
15:45:35 Chris: The dependency is on us?
15:45:38 Cyril: Yes.
15:45:46 ack pal
15:46:12 Pierre: I've not been following this closely. Are we talking about unbounded cues in the file format, or the API?
15:46:21 Cyril: The API problem is solved; it's merged.
15:46:33 Pierre: I don't understand why there need to be unbounded cues in the serialisation.
15:46:42 .. Especially in the case of ISOBMFF wrapping or segmentation.
15:46:50 Cyril: It's a valid point; I don't fully understand it either.
15:47:02 Pierre: I'm 99% certain that they want something other than what they're asking for.
15:47:15 Cyril: Think of a cue serializer separate from the packager.
15:47:29 .. Let's say a cue is produced but the end time is unknown, but you still want to package and send it.
15:47:36 .. One approach is to assign some time.
15:47:46 ..
.. Another is to make it unbounded, and then update it later.
15:47:50 .. I think that's the use case.
15:48:04 Gary: Yes. I think the key with the proposal is to be able to mark a cue as "we don't know what the end time is".
15:48:16 .. It may never get an end time, but you should be able to specify an end time at a later date.
15:48:24 Pierre: I agreed with the first statement, not the second.
15:48:43 .. My understanding of how implementations have been designed and built is to allow the last cue to have no end time.
15:48:53 Gary: VTT doesn't care right now - everything requires an end time.
15:49:05 Pierre: Right, but I don't think there's a model that allows _any_ cue to be unbounded.
15:49:17 .. Allowing the _last_ cue to be unbounded would have the least impact on the WebVTT model.
15:49:37 Gary: Right, that's the question: why is this needed? Once we know that, we can work on the solution.
15:49:48 Pierre: I think you can do it today, so I don't think you really need it.
15:50:03 q+
15:50:06 .. Going unbounded is a Pandora's box. I'm not a proponent of WebVTT, but when you go into live subtitling and
15:50:24 .. captioning, people type in real time. Sometimes they backspace. If a cue can be updated later, then is it the same one, or is it being replaced?
15:50:41 Gary: Right now the proposal is only about the end time, but it has been brought up that we could allow updating everything.
15:50:44 .. It's worth discussing.
15:50:47 q+
15:50:50 ack c
15:51:07 Cyril: One thing that's important to clarify is whether there is a use case for more than one unbounded active cue at a time.
15:51:13 .. That would shape the solution.
15:51:24 Gary: Yes, that has also been discussed: how to match cues, or only allow one.
15:51:34 Chris: This is the level of detail I want to get to.
15:52:00 Cyril: In terms of packaging in MP4, there's the notion of a sync sample, which can be randomly accessed without knowing previous data.
15:52:16 ..
.. If you have unbounded cues then you'd have to duplicate them at sync samples and aggregate them, which gets complicated.
15:52:29 .. Frankly, I think it should be the job of the serializer to do this.
15:52:37 Pierre: I think you can do it today without changing anything.
15:52:57 .. It might not do what you want semantically, but with richness comes complication, like causality: how far back do you have to go?
15:53:01 .. It's really complicated.
15:53:03 q?
15:54:44 Nigel: This is a specific question related to the broader point that there is an API that is not fully supported by the syntax, and
15:54:59 .. we are wondering what parts of the API need to be opened up within the syntax.
15:55:21 .. Also, anything that requires statefulness in the receiver is a recipe for different clients having different experiences, in a bad way.
15:55:39 Pierre: Yes, one of the advantages of TTML and WebVTT over 608, say, is the lack of statefulness.
15:56:03 Gary: Yes, one of the issues now is that you can't have both the proposed new syntax and the fallback syntax in the same file,
15:56:11 q+
15:56:22 .. because new clients will show two cues where older ones that don't support the new syntax will only show one, which is not good.
15:56:23 ack n
15:56:28 ack c
15:57:03 Chris: Next steps: we have a monthly meeting for media timed events. The next meeting would be Monday 17th May, so I propose
15:57:10 .. we use that as the place to discuss. Same hour as this call now.
15:57:36 .. I'm aware that there's another strand around DASH and emsg events that is being covered in that activity. I need to be careful to allow
15:57:48 .. enough time to cover both. It could be a dedicated separate call.
15:58:00 Cyril: I favour both (but may not attend both).
15:58:07 Chris: I'm open to suggestions for when.
15:58:22 Pierre: For the issues related to the syntax of WebVTT, I think this call is the best one.
15:59:37 Nigel: I support the wider scope of MEIG gathering requirements.
15:59:57 ..
.. Our calls are every 2 weeks, so there's a potential slot, say on 20th May, in this hour, that might work for people.
16:00:15 Chris: Happy to do the 20th. We should use that meeting to decide a frequency.
16:00:32 .. Gary and Cyril, you've both mentioned knowledge of people with use cases, so that would be really useful input.
16:00:44 .. Otherwise, aside from the WebVMT use case, I'm less aware of who the proponents are.
16:02:08 Nigel: It's surprisingly low-key in the discussion so far, but I think live delivery of captions is a use case, and
16:02:27 .. it may be worth understanding and describing a working model for how to deliver live captions in a VTT context.
16:14:49 .. I'm 100% confident we know how to do that in a TTML context, but it could be that there's a different model for it in VTT.
16:14:52 Topic: Meeting close
16:15:15 Nigel: Thanks everyone, we're out of time.
16:15:37 .. Apologies again for the difficulty joining at the start. I hope nobody was excluded because of that.
16:15:42 .. [adjourns meeting]
16:15:44 rrsagent, make minutes
16:15:44 I have made the request to generate https://www.w3.org/2021/05/13-tt-minutes.html nigel_
16:20:10 Regrets+ Rob_Smith
16:27:35 rrsagent, make minutes
16:27:35 I have made the request to generate https://www.w3.org/2021/05/13-tt-minutes.html nigel_
16:28:17 scribeOptions: -final -noEmbedDiagnostics
16:28:22 zakim, end meeting
16:28:22 As of this point the attendees have been Atsushi, Chris_Needham, Cyril, Gary, Pierre, Nigel
16:28:24 RRSAgent, please draft minutes v2
16:28:24 I have made the request to generate https://www.w3.org/2021/05/13-tt-minutes.html Zakim
16:28:27 I am happy to have been of service, nigel_; please remember to excuse RRSAgent. Goodbye
16:28:32 Zakim has left #tt
17:03:14 rrsagent, excuse us
17:03:14 I see no action items
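[Scribe's illustrative note, not discussed in the meeting] To make the "unknown end time" discussion concrete: WebVTT today requires every cue to have an explicit end timestamp, as in the first cue below. The second cue is a purely hypothetical sketch of what an unbounded cue could look like if the end timestamp were allowed to be omitted; no such syntax exists in WebVTT, and the group has not chosen any syntax.

```
WEBVTT

00:00:05.000 --> 00:00:10.000
A normal bounded cue: both start and end times are known.

00:00:12.000 -->
A hypothetical unbounded cue: the end time is unknown at authoring
time and might be supplied later, or never.
```

Older parsers would reject or drop the second cue, which is the backwards-compatibility concern Gary raised about mixing new and fallback syntax in one file.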