<scribe> scribenick: kaz
Takiguchi: Tohru Takiguchi from NHK
... outline first
... 1. latest update on the Hybridcast spec
... 2. Hybridcast and WoT
... Oct 20, latest Hybridcast spec was approved
... broadcast-independent managed app
... Mr. Hoya will talk about that
Hoya: working for Fuji TV
... a member of JBA
... will talk about broadcast-independent managed app today
... business use cases first
... two use cases here
... uc1: broadcast independent Hybridcast VOD apps
... provided by JP broadcasters
... can move to the provider's broadcasting, or
suggestions/recommendations inside the content area
... e.g., for sport events
... high demand for getting suggestions
... uc2: cross-network/cross-broadcasters VOD apps
... there is an official common VOD platform called TVer
... aggregating multiple broadcasters' contents
... broadcasters might want to move from an app to the
corresponding TV program
... this feature is limited to specific apps
(some authentication???)
Hoya: added a new app type with a
different lifecycle from the broadcast-oriented managed apps
... launching the app is much easier than broadcast-oriented
managed apps
... linear program generation is needed from the broadcasters'
viewpoint, but the TV set could be tuned for this purpose
... tuneTo() API is specifically provided for this purpose
Takiguchi: next, extension for
Hybridcast
... now supports CMAF
... Hybridcast Video is a DASH profile for Hybridcast TV
... uses MSE and EME
... features for low-latency playback
... CMAF chunk allowed without an intra-frame
... ECMAScript 6th Edition, so Promise is available
... Media Timed Events
... MTE delivery mechanism is supported as well
... both MPD events and emsg box are supposed to be handled by
JS
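As an illustration of the app-side emsg handling mentioned above, here is a minimal sketch of parsing a version-0 `emsg` box from an ISOBMFF segment in JS (field layout per the DASH spec, ISO/IEC 23009-1; this is an illustrative parser, not production code, and the function name is ours).

```javascript
// Parse a version-0 'emsg' box from a Uint8Array (illustrative sketch).
// Layout: size(4) type(4) version(1) flags(3) scheme_id_uri\0 value\0
//         timescale(4) presentation_time_delta(4) event_duration(4)
//         id(4) message_data(...)
function parseEmsgV0(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const size = view.getUint32(0);
  const type = String.fromCharCode(bytes[4], bytes[5], bytes[6], bytes[7]);
  if (type !== 'emsg') throw new Error('not an emsg box');
  if (bytes[8] !== 0) throw new Error('only version 0 handled in this sketch');
  let offset = 12; // past size, type, version, flags
  const readCString = () => {
    let end = offset;
    while (bytes[end] !== 0) end++;
    const s = new TextDecoder().decode(bytes.subarray(offset, end));
    offset = end + 1;
    return s;
  };
  const schemeIdUri = readCString();
  const value = readCString();
  const timescale = view.getUint32(offset); offset += 4;
  const presentationTimeDelta = view.getUint32(offset); offset += 4;
  const eventDuration = view.getUint32(offset); offset += 4;
  const id = view.getUint32(offset); offset += 4;
  const messageData = bytes.subarray(offset, size);
  return { schemeIdUri, value, timescale, presentationTimeDelta,
           eventDuration, id, messageData };
}
```

A DASH player polyfill would run something like this over each appended media segment to surface timed metadata events to the page.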
... 3D Audio now supported, Enhanced AC-3, 5.1.4ch audio on
Hybridcast video
... API for checking codecs supported by receivers;
MediaSource.isTypeSupported() is the best option for Hybridcast,
but MediaElement.canPlayType() is also available.
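The capability check described above might be sketched like this (hypothetical helper names; the codec string in the usage note is just an example):

```javascript
// Build a MIME string of the form: container; codecs="codec".
function buildMimeType(container, codec) {
  return `${container}; codecs="${codec}"`;
}

// Check codec support, preferring the strict MSE check when available
// (Hybridcast uses MSE), falling back to canPlayType on a detached
// media element, which answers "probably" / "maybe" / "".
function isCodecSupported(mimeType) {
  if (typeof MediaSource !== 'undefined' && MediaSource.isTypeSupported) {
    return MediaSource.isTypeSupported(mimeType);
  }
  if (typeof document !== 'undefined') {
    const probe = document.createElement('video');
    return probe.canPlayType(mimeType) !== '';
  }
  return false; // no media stack available in this environment
}
```

Usage: `isCodecSupported(buildMimeType('video/mp4', 'avc1.640028'))`.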
... Others
... Browser API
... JS API exposing a receiverDevice custom object
... also Hybridcast Connect
... connection for devices
Ikeo: Hybridcast Connect enables
connection between TV and Smartphones.
... It includes device discovery and control, and is experimentally
installed on some of the TV sets already.
... Hybridcast Connect as a WoT Component
... Using Hybridcast Connect (HC), a TV works as a hub for
connection with the broadcasters' services
... with various IoT devices and services
... WoT Use Case for Broadcasting
... "IoT-based Media Framework including User Context, Thing
Description and Content Description"
... generated a demo video
... (shows the video)
Endo: Connection between TV
broadcasting and IoT
... get a message from the TV app to the smartphone
... speech input on the smartphone handles the TV program
... possible targets include fridges, smart speakers, etc.
... also developing a haptic device
Makiko: People can experience the
program via haptic interface using this haptic cube device
... the device vibrates according to the TV program's content
(how to define the multimodal kind of behavior?)
Endo: Also linking the weather forecast program as well
Ikeo: The haptic device can let the
user know about news updates
... Prototype of WoT Use case
... Node-RED programming
... including Hybridcast TV, Application, and devices (haptic device
and smartphone)
... IoT devices are receiver of events here
... developed using Node-RED environment
... participated in the WoT PlugFest at the end of September
... Application Development Tools
... several tools
... antwapp4hc, hyconet4j, hyconet.js, node-red-contrib-hyconet,
hyconet-android-sample
... available on the NHK R&D's GitHub repo
Ikeo: Purposes of OSS for Hybridcast
Connect
... the tools are provided as OSS
... those are reference implementations for standardization
... interoperability test by IPTV Forum as well
Chris: tx!
... we have some questions
<Zakim> cpn, you wanted to ask about MSE buffer issues, Web Media API
Chris: question about MSE buffer
... we can forward the issues to the Media WG
... with the MSE issue, have you brought any issues for the Media
WG?
Matsumura: this is a long-term
issue
... so we had raised an issue about this already
Igarashi: which repo?
Matsumura: Media WG
Igarashi: URL handy?
Matsumura: would let you know about that later
Igarashi: do you want to continue the discussion with the Media WG about this?
Chris: we'll be happy to follow this
issue up with you
... to fix potential issues, would like to understand the
details
Matsumura: tx
Igarashi: tuneTo() API, does it tune to the lastly selected program?
Hoya: tuneTo() is available for any
channels
... tuning to the last channel is just a typical example
Igarashi: next question is about CMAF guideline
<hfujisawa> path to discussion: https://github.com/w3c/media-source/issues/172
Igarashi: the MEIG has a joint meeting
with CTA WAVE the other day
... do you want to follow the global guidelines being produced by
CTA WAVE?
Matsumura: good point
... no concrete action by IPTV Forum yet
Igarashi: is the guideline by the IPTV
Forum available publicly?
... so that CTA guys can refer to it?
Matsumura: it's publicly available, but only in Japanese
Kaz: authentication for usage of tuneTo() API?
Matsumura: no authentication so far, but should be considered
Kaz: regarding the haptic
device
... we should define some specific behavior mapped with some
specific scene and scene change
... good topic for extending the multimodal UI
Ikeo: some kind of event signals to
be defined
... to be mapped with the actual behaviors of the devices
Kaz: good starting point for the discussion :)
Chris: question about Web Media
API
... latest new JS API
Ikeo: getAvailableMedia, etc., are used within Hybridcast
... joint discussion with CTA WAVE
Chris: maybe you might want to look into the newly proposed Web Media API as well
Ikeo: not sure how to work with CTA WAVE at the moment
<Zakim> cpn, you wanted to ask about Web Media API
Ikeo: anything about EPG handling, etc.?
Chris: no, probably we should continue the discussion offline
Yamaoka: question on CMAF
... more complicated than the ISOBMFF one?
Takiguchi: this is based on the ISO BMFF
Ikeo: quickly mentions the OSS API implementation
Chris: Web media pipeline
... The Web platform for media is evolving
... Web Codecs, Web Transport, WebRTC, WebGPU, WASM, WebNN,
...
... Many possibilities for further use cases for media
... Goals: discuss how the introduction of these technologies
impacts media use cases on the Web
... Use cases: interactive content, video stream with synchronized
graphics or other Web content,
... question on frame-accurate overlay
... media production, apply color balancing or brightness
adjustment
... and augmented reality
... Typical media pipeline [diagram] shows capture in the top half,
playback / rendering in the bottom half
...
Record->Encode->Mux->Send->Fetch->Demux->Decode->...
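The stage chain above can be modelled as a composition of async transforms. The following is an illustrative sketch only (makePipeline and the stand-in stages are not real Web APIs): it shows how, in a WebCodecs-style world, the app itself wires one stage's output into the next.

```javascript
// Compose pipeline stages so each stage's output feeds the next,
// mirroring Record -> Encode -> Mux -> Send (and, symmetrically,
// Fetch -> Demux -> Decode on the playback side).
function makePipeline(...stages) {
  return async (input) => {
    let value = input;
    for (const stage of stages) {
      value = await stage(value);
    }
    return value;
  };
}

// Stand-in stages for the capture half of the diagram (illustrative).
const record = async (frames) => frames.map((f) => ({ frame: f }));
const encode = async (frames) => frames.map((f) => ({ ...f, encoded: true }));
const mux = async (frames) => ({ segment: frames });

const capturePipeline = makePipeline(record, encode, mux);
```

In a real app the stand-ins would be replaced by, e.g., getUserMedia capture, a WebCodecs VideoEncoder, a JS muxer, and a WebTransport sender, which is exactly the shift of responsibility from the browser to the web app discussed below.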
Kaz: agree with those three use case
categories
... and would suggest we work on concrete use case descriptions for
them
... think further collaboration with WoT is also important
regarding device integration
Cyril: your presentation raises
interesting questions
... in the joint discussion with WebRTC they showed similar
diagrams
... would be good to map the existing APIs onto your diagram, and
then we can see which mechanisms (e.g., APIs) are missing
... currently not homogeneous with the Web platform
... agree we should work on use cases
... potential issues with graphic overlay, etc.
Chris: Interesting point,
... a lot of components to handle the pipeline here
... Web Codecs for playback is a big shift of responsibility from
the browser to the web app
... it would have to handle buffering, timing of rendering,
etc.
... Can we achieve smooth rendering?
Cyril: If you have a video like a
movie and also graphic content at the same time
... and graphical overlay that you want to synchronize.
... Why would you have to process all of them on the client?
Chris: One thing BBC is interested in
is adjusting the content to your interests or preferences,
... e.g., time points for segmented content so you can dynamically
adjust the duration to fit a length of time,
... or a video overlay with sign language presenter,
... where you don't necessarily want to burn the additional content
into the main video.
Cyril: Makes sense, and possibly for picture in picture
Kaz: fyi, joint discussion on edge
computing offloading
... some part of the process can be done by the client side
... but some still needs to be done by the server side
... when/how to share the load is to be discussed
Francois: interested to hear from the media
companies about what kind of media pipeline is needed
... current trend is to provide access to low-level
primitives
... you can use another video with mask and then superimpose it
with another video
... but overall architecture for advanced media handling might be
needed
... WebRTC diagram is a good resource to be shared
... different pipelines exist there
... good to look at existing devices
... e.g., PCs, smartphones, ...
... CPU, memory constraint, etc.
Chris: interesting from several
viewpoints
... could be using video tracks, or multiple MSE instances
... this relates to capability issues too, how to know if multiple
streams can be decoded simultaneously
... how to handle embedded devices with limitations?
Igarashi: we need to think about use
cases
... this approach is good
... also gap analysis between the use cases and the existing
technologies needed
... would like to clarify the use cases for low-latency
communication too
Chris: yeah
... how to proceed then?
... we look at Web Transport
... something we could look at more closely is the integration
point with MSE
Chris: brief review of the current
activities
... thinking about our priority topics to work on in the near
future
... Current open topics from last year and this year
... frame accurate seeking
... production use cases
... and then identified issues with subtitle and caption support in
WebXR and 360 video for accessibility
... no direct way to render captions for VR/360 environment
... secure path for 360 video, somehow we need to see how to do
that
... and requirements for MSE v.Next
... does the current work to update MSE include all the use cases
we have in mind?
... low latency, context switching, ad insertion, timed metadata
events
... media use cases for WoT
... bullet chatting / synchronized video commentary overlay
... media production use cases and requirements
... how do we organize the activity and make progress on them?
Subtopic: Media Capabilities and MSE
support for CMAF
... requirements input to the Media WG
... want to have discussion on that, any input/thought?
Kaz: agree this topic is very
important
... we need to continue to collaborate with them
... and bring our requirements to the Media WG
... how to proceed is the question
... joint discussion during the upcoming IG call?
... or starting a dedicated TF?
Chris: would like to start with another joint discussion during the MEIG call next month
ChrisC: about CMAF
... handling media capabilities
... specific API for that purpose?
John: 3 things here
... spec CTA WAVE has written
... overview of the changes needed based on the ISO spec
... in order to meet the requirements for CMAF
... publicly available
... some updates based on the ISO standard there, emsg boxes,
etc.
<cpn> https://cta-wave.github.io/Resources/CMAF%20Byte%20Stream%20Format%20-%20PR.pdf
John: second is the difference
matters
... discussion needs to be done using some interactive way
... like vF2F
... CMAF is a data format based on the ISO standard
... just from the MIME type, with its profiles parameter, the UA
can determine whether content can be played
... how the container is used is different from the original ISO
standard
... the only sensible way for the UA to support CMAF would be to
handle it via the profile
ChrisC: ok
... I recall the issue
... the issue is #98 (on the media-capabilities repo)
<tidoust> MCAPI Support for MIME Type Profiles subparameter in order to support CMAF, etc
John: we had a meeting and got
agreement on support for profile strings
... are you saying media capabilities will make it possible for MSE
to identify the capability based on the ISO standard?
ChrisC: media source support
specifically,
... it would be possible to support whatever components of CMAF
are needed, via a polyfill library that translates the profiles
parameter to the underlying capabilities.
... If the media capability says "supported" the UA should be able
to handle it.
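The polyfill idea mentioned here could look roughly like the sketch below. Everything in it is hypothetical for illustration: the function name, and especially the allow-list of profile brands, which stands in for whatever real profile registry such a library would consult.

```javascript
// Illustrative allow-list standing in for a real CMAF profile registry.
const KNOWN_CMAF_PROFILES = new Set(['cmfc', 'cfhd', 'caac']);

// Strip the `profiles` subparameter from a CMAF MIME type so that the
// remaining type can be passed to Media Capabilities / MSE, and report
// whether all declared profiles are ones this (hypothetical) polyfill
// knows how to map.
function translateCmafMimeType(mimeType) {
  const parts = mimeType.split(';').map((p) => p.trim());
  const kept = [];
  let profiles = [];
  for (const part of parts) {
    const m = part.match(/^profiles="?([^"]*)"?$/);
    if (m) {
      profiles = m[1].split(',').map((p) => p.trim());
    } else {
      kept.push(part);
    }
  }
  const unknown = profiles.filter((p) => !KNOWN_CMAF_PROFILES.has(p));
  return {
    mimeType: kept.join('; '), // what the UA's capability APIs would see
    profiles,
    supported: unknown.length === 0,
  };
}
```

The stripped MIME type would then be handed to `navigator.mediaCapabilities.decodingInfo()` or `MediaSource.isTypeSupported()` for the actual codec/container check.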
John: There is a document describing
a proposed CMAF bytestream format spec.
... If it indeed does, the next question is whether it can convey
everything that's needed
... so that the UA can say "I know what it is", based on the ISO
file format
... Maybe I can set up a call with you, ChrisC, and the other
stakeholders
... to understand how media capability provides support for CMAF or
not.
ChrisC: ok
John: will invite you :)
... The document also covers some updates for CMAF constraints,
e.g., emsg boxes, etc.
... will verify the document
... yes, the PDF one should be used
https://cta-wave.github.io/Resources/CMAF%20Byte%20Stream%20Format%20-%20PR.pdf
John: have been chatting with Chris separately but should make it broader
Chris: would suggest we use the MEIG call for that purpose
John: ok
... ChrisC and I will have some more chat before the IG call
... hope we can get enough update for the expected IG call
Subtopic: Color on the Web
... high dynamic range and wide gamut color support
... a lot of discussions happening various places within W3C
... wondering how to proceed
... MEIG, CSS WG, Color on the Web CG
... where/how to continue the discussion?
... if and how to bring it into one place?
... a couple of options here, e.g., revitalize the CG, run a
workshop, discuss in MEIG
Chris: great
Igarashi: question about Color on the
Web
... very good progress during the joint meeting the other day
... we do need a central place for further discussion
... the MEIG should consolidate the requirements for the Color on
the Web topics
... if the MEIG participants are also interested in that topic,
would get feedback
Pierre: a lot of energy during the
call
... before jumping into the conclusion, probably we should think
about coordination as the next step
... will be working with ChrisL about that
Chris: yes
Pierre: if you have concrete opinions,
please raise issues
... via ML or contacting me directly
Kaz: which GitHub to be used for this
topic?
... maybe can use "me" to collect issues?
Chris: Possibly https://github.com/w3c/media-and-entertainment/issues/2
Kaz: agree
Igarashi: how can we make a requirements contribution from the media industry viewpoint?
<tidoust> [I note Chris Lilley also mentioned during that meeting that he's organizing a workshop on WCG and HDR for the Web early 2021, tracked in https://github.com/w3c/strategy/issues/230]
<xfq> https://w3c.github.io/ColorWeb-CG/
Igarashi: is there any document available?
Chris: Yes, I generated a document and ChrisL has worked on it, but it needs more input. It doesn't make recommendations
<cpn> Color on the web draft report: https://w3c.github.io/ColorWeb-CG/
<igarashi> Chris, is this the draft that media industry people should use to review the requirements on Color on the Web?
Subtopic: WebRTC for cloud gaming
Huaqi: would suggest we add
WebRTC-related topics on games as well
... cloud games to be investigated
... can provide some use cases in the future
Chris: tx, yes this is a good topic to follow up
Subtopic: Bullet chatting
Huaqi: We have been gathering use
cases and requirements,
... including relationship with MiniApp
Chris: We are out of time today, but I would like us to follow up with you on bullet chatting
Kaz: we could add a topic on bullet
chatting next step to the upcoming MEIG call
... and we could have a dedicated additional call for that too
Kaz: We should continue the
discussion with ChrisL about the Color on the Web topic
... Pierre will talk with ChrisL
Chris: Also we'll continue the
discussion on CMAF extension including John and ChrisC,
... possibly as topic for next MEIG call on December 2nd.
[adjourned]