<cpn> scribenick: cpn
Chris: Happy new year! Welcome to the first M&E IG call for 2020. We decided to use this call to make progress on Bullet Chatting, so this call is dedicated to the TF.
... We've heard the use cases presented before, so today will be
more about the detail.
... AOB?
Nigel: If anyone is a TTWG member, please rejoin the group following TTWG re-chartering.
Song: Huaqi is the coordinator of the Bullet Chatting TF, and will introduce the data interchange format.
<scribe> scribenick: Song
Huaqi: In the agenda, we have 4
topics. Firstly, we discuss why we need to define a bullet chatting
data interchange format.
... I'll introduce a proposal to extend WebVTT to support bullet
chatting animation, then we can discuss next steps.
Huaqi: Why do we need to define a bullet chatting data interchange format?
... After gap analysis, we found it is necessary to define a data
interchange format standard in order to support multiple scenarios,
multiple applications and platforms.
... Bullet chatting is designed to support on-demand video, live video streaming, virtual reality video, and 360-degree video, and also non-video scenarios such as interaction within a webpage, an interactive wall, etc.
... Bullet chatting supports web apps, native apps, mini apps, etc.
Huaqi: What are the minimum requirements to support bullet chatting?
... There are three main aspects.
... The first one is animation. Bullet chatting supports multiple lines of subtitles displayed at the same time, supports scrolling subtitles, e.g., from right to left, and also supports setting the scrolling duration.
<xfq> Bullet Chatting Use Cases https://w3c.github.io/danmaku/usecase.html
Huaqi: The second one is that bullet chatting is tied to the media timeline and is displayed in sync with it.
... The third one is that the implementation has good scalability, to support both live video and non-video scenarios.
Huaqi: We propose to extend WebVTT to support bullet chatting, since WebVTT has many advantages: it is simple, lightweight, mature, supported in browsers, and easier to extend to multiple applications and platforms.
... We also face some challenges, at least how to extend WebVTT to support animation and live video. We have some ideas. For animation, we may try to extend WebVTT cue settings. For live video, we can refer to HLS m3u8, which supports both on-demand video and live video.
Huaqi: Currently, WebVTT doesn't
support animation.
... If we use CSS to implement bullet chatting animation, we need a CSS engine to interpret it in each application, which is not so easy.
... So we suggest extending the WebVTT cue settings with a declarative syntax to implement bullet chatting animation.
... Let's have a look at the example:
... The first line with NOTE syntax shows WebVTT's existing cue settings; the next line with NOTE syntax shows our proposal, the extended cue settings. The main difference is whether an attribute has two values separated by a semicolon.
... Take position as an example: it indicates the horizontal offset. position:50% means the middle of the screen; position:100%;10% means scrolling from the right edge to a 10% offset from the left.
... Currently, WebVTT supports fixed bullet chatting; that's why we chose to extend WebVTT.
... Example: line, position, and align are attributes which WebVTT already supports.
... line indicates vertical offset, e.g., line:0 indicates on the
top of the screen, line:100% indicates the bottom of the
screen.
... position indicates the horizontal offset: position:0 indicates the left of the screen, position:100% indicates the right of the screen.
... align is used to set the alignment; possible values are start, middle, and end. You can refer to the Mozilla Developer Network documentation for more details.
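To make the attributes above concrete, here is a minimal standard WebVTT cue using the existing line, position, and align settings (a sketch only; the percentage values and align naming follow the description above):

```
WEBVTT

00:00:05.000 --> 00:00:10.000 line:10% position:50% align:middle
A fixed comment near the top centre of the screen
```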
<cpn> scribenick: cpn
Huaqi: For scrolling bullet chatting, the values are separated with a semicolon: the first value is the start value, the second is the end value.
... The cue timing can be set. [example of transitioning opacity from 0 to 1, and color from red to blue]
... We can learn a lot from TTML. We would appreciate your comments on our proposal to extend WebVTT.
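A sketch of what the proposed extended cue settings might look like, mirroring the examples mentioned above. The semicolon-separated `start;end` pairs, and the opacity and color settings, are the proposal's hypothetical syntax, not standard WebVTT:

```
WEBVTT

NOTE proposed extended cue settings: scroll from the right edge to a
NOTE 10% offset, fading in from opacity 0 to 1, color red to blue
00:00:05.000 --> 00:00:13.000 line:10% position:100%;10% opacity:0;1 color:red;blue
A comment scrolling from right to left while fading in
```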
Nigel: You say there isn't UA support for what you need, but there is support in TTML. Why do you want to extend WebVTT? Have you looked at TTML?
<xfq> https://w3c.github.io/danmaku/usecase.html#subtitles hasn't been updated for a while, and we need to update that indeed
<xfq> (we looked into TTML and WebVTT more after writing that section)
Rob: Do you have an example of something supported in TTML?
Nigel: TTML2 has the ability to do animation, for example
<kaz> TTML2
Nigel: This proposal suggests not to use CSS Animation. It would be interesting to know what the implementation constraints are.
Huaqi: I'll share the gap analysis in detail; it shows the gaps between WebVTT and TTML.
... We preferred WebVTT as it is lightweight and simpler, and can be implemented more easily when migrated to other clients like native apps, MiniApps, etc.
... TTML does support animation, and we think it's a good idea to use a declarative syntax to implement animation, since CSS animation needs to be parsed by a complex CSS engine.
... So when extending WebVTT, we prefer reusing the way TTML supports animation.
Nigel: An approach is to profile TTML, e.g., take IMSC and add in the animation parts from TTML2.
<kaz> IMSC1
Nigel: If there's an important functional requirement around size, would be good to know that.
<nigel> Specifically if there's something that needs optimisation to meet the "lightweight" requirement, what is that optimisation with respect to? Document size, speed, implementation size?
<Song> scribenick: Song
Huaqi: We think WebVTT has limits in its support for live video.
... Bullet chatting needs to support live video, so we face some challenges and want to extend WebVTT to address them. How can we support live video via WebVTT?
... One option: we may refer to HLS m3u8 fragments. We can define a similar container file which contains several bullet chatting file fragments, e.g., several VTT files, as in the example.
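As a sketch, such a container index might look like the following. The directive names and file names here are hypothetical, loosely modelled on the m3u8 playlist format; nothing like this is currently specified:

```
#EXT-BULLET-VERSION:1
#EXT-BULLET-TARGETDURATION:10
#EXT-BULLET-SEQUENCE:1834
fragment1834.vtt
fragment1835.vtt
fragment1836.vtt
```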
... By default, WebVTT cue timings are tied to the media timeline. In the live video scenario there is no timeline, so what can we do?
... We have one idea: in live video we don't use cue timings, only basic animation and bullet chatting, and rendering is done by the user agent.
... The user agent has to continuously read the container files to fetch the latest bullet chatting data.
... In this way it can be accelerated via a CDN, which is easier.
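A minimal sketch of how a client might poll such a container file. The index format and the `parseFragmentList`/`pollBulletFragments` names are assumptions for illustration, not part of any specification:

```javascript
// Parse a hypothetical container index: skip #-prefixed directives,
// keep the remaining non-empty lines as fragment file names.
function parseFragmentList(indexText) {
  return indexText
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0 && !line.startsWith('#'));
}

// Sketch of one polling pass: re-fetch the index, load only the
// fragments we have not seen yet, and hand each one to the renderer.
async function pollBulletFragments(indexUrl, renderFragment, seen = new Set()) {
  const res = await fetch(indexUrl);
  const fragments = parseFragmentList(await res.text());
  for (const name of fragments) {
    if (!seen.has(name)) {
      seen.add(name);
      const vtt = await (await fetch(new URL(name, indexUrl))).text();
      renderFragment(vtt); // e.g. parse cues and animate them
    }
  }
  return seen; // caller re-invokes on a timer to keep polling
}
```

Because fragments are plain files fetched over HTTP, this approach is straightforward to cache on a CDN, as noted above.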
<cpn> scribenick: cpn
Huaqi: With HTTP-FLV, we can keep a persistent connection and push bullet chatting data, which works for low latency, but it needs a protocol to be defined.
Rob: In what way does WebVTT not support live video?
Huaqi: Referring to the use case document, the live streaming interaction use case (4.2) isn't supported. Technically it can work, but the WebVTT document doesn't define any mechanism for live updates.
<Yajun_Chen> https://w3c.github.io/danmaku/usecase.html#live-streaming-interaction
<nigel> RobSmith, there is no inbuilt delivery mechanism for live updates of WebVTT at all
<nigel> there's no semantic model for it
<Larry_Zhao> +q
Rob: I think this is supported
Larry: The WebVTT effects rely on cue timing, from one time to another. So we need to know the exact times; this is why we say it doesn't support live streaming.
... The VTTCue constructor has 3 parameters: start time, end time, and the content to add.
Nigel: This is confusing the WebVTT document format and the VTTCue interface. We should separate the API and the document, as these have different capabilities.
Rob: We're looking at live streaming with DataCue in WICG.
Nigel: The WebVTT document doesn't specify any support for live updates
Gary: There's nothing specifically about live video, but that doesn't mean you can't do it. You can chunk up the WebVTT.
Kaz: We don't have to mention WebVTT within the requirements description here, focus on the requirements, can do the detailed gap analysis with WebVTT, etc., later.
Huaqi: We have prioritised the requirements gaps.
... The first is to support writing real-time data, and a web API for reading this data, for both VoD and live video.
... We found that neither WebVTT nor TTML supports non-video scenarios.
Nigel: You could do it differently, if you have a different source of time.
<xfq> Is it possible to use TTML without video in HTML?
<nigel> That depends on your TTML player xfq.
<nigel> There's no reason why not, as long as you have a time source.
<xfq> I see. Thanks nigel.
Igarashi: Is the requirement to render bullet chatting without video? Render with audio, for example?
Huaqi: We want to support non-video: e.g., use case 4.5 Interaction with a web page
Igarashi: So the rendering doesn't use a video timeline?
Huaqi: That's right
Igarashi: So all the rendering uses its own bullet chat timeline?
Huaqi: There's no timeline. The comments are rendered when they are sent, and display speed is set by the user.
<xfq> here's a demo: https://w3c.github.io/danmaku/demos/no-media/
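A minimal sketch of the timeline-free model described above: each comment starts when it arrives, and its scroll duration is derived from a user-chosen speed. The function name, units, and parameters are assumptions for illustration:

```javascript
// Duration (in seconds) for a comment to cross the display area,
// given a user-selected scrolling speed in pixels per second.
// The comment travels its own width plus the container width, so it
// enters fully from the right and exits fully on the left.
function scrollDurationSeconds(containerWidthPx, commentWidthPx, speedPxPerSec) {
  return (containerWidthPx + commentWidthPx) / speedPxPerSec;
}

// Example: an 800px-wide container, a 200px-wide comment, and a
// user speed of 100px/s give a 10-second crossing.
```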
Huaqi: Another scenario is 4.6 Interactive wall, which is similar
Gary: Is the timing model wall clock time?
Huaqi: The idea is we don't use cue times in this scenario in the proposal
<tidoust> [I note "en passant" that the Timing Object spec explores the idea of exposing a timing object independent of (or possibly connected to) a media element, precisely to allow scenarios such as interactive walls: http://webtiming.github.io/timingobject/]
Larry: For live video, we can use m3u8, with a WebVTT index file. The live example only has cue settings, no times.
Igarashi: What is the accuracy of the time synchronisation between the video and the bullet chatting?
Larry: In our real web application, the bullet chatting doesn't synchronise with the timeline. For live video, we can't guarantee the synchronisation
<Song> scribenick: Song
Fuqiao: Displaying bullet chats has higher priority than time synchronization in live streaming, even if time synchronization is technically possible. That's to say, accuracy of synchronization isn't so important there. But for on-demand video, we should keep the bullet chatting exactly synchronised with the video timeline.
<scribe> scribenick: cpn
Igarashi: So accuracy of synchronisation isn't so important
<kaz> Bullet Chatting Use Cases - 4.6 Interactive wall
Kaz: We can follow up to clarify those requirements for synchronisation for these use cases
Huaqi: We can go through the gap analysis at the next meeting
<scribe> scribenick: Song
Huaqi: After the discussion of the data interchange format, we plan to discuss the rendering of the data interchange format.
... We need to define the rendering rules,
... and how to support non-video scenarios.
... In the bullet chatting API proposal we define a new component; how should it be rendered?
... Bullet chatting needs to support images; how do we extend WebVTT to display images in bullet chatting? Can we re-use the tag?
... Besides, we need new APIs for adding real-time bullet chatting and for setting the duration of bullet chatting.
<kaz> scribenick: kaz
Igarashi: Based on today's discussion, there are generic requirements as well.
... So discussing those generic issues should be split out,
... and we should focus on requirements for bullet chatting itself.
Kaz: +1
... We should clarify the descriptions of the use cases and requirements a bit more.
Pierre: I would encourage everybody
to send your questions/comments on the reflector.
... Also we could schedule a follow-up discussion.
Song: I agree
<RobSmith> Can you post a link to the reflector please?
[ the MEIG list is public-web-and-tv@w3.org ]
<tidoust> https://github.com/w3c/danmaku/issues (related GitHub issue, fyi)
Kaz: let's continue the discussion about how to deal with the GitHub issues as well on the MEIG mailing list as Pierre suggested
Pierre: The MEIG Chairs will organize
the discussion.
... We need to ask questions on the reflector.
Chris: I agree. Do we have a plan to meet again as the TF?
Huaqi: Yes, and we can continue the TF work.
<igarashi> ok
Pierre: Let's continue the discussion on the reflector.
Chris: Thanks for presenting this and
making progress, Huaqi.
... The next MEIG call will be Feb. 4,
... a joint call with the WoT WG.
... We're planning a slightly longer call so that we can also
discuss MEIG topics.
... We'll make an announcement with the details.
... Anything else?
Huaqi: Note that there will be Chinese New Year holidays at the end of January, so the TF call would be in early February.
Chris: Good idea, thank you!
[adjourned]