W3C

TV Control WG call

18 Oct 2016

Agenda

See also: IRC log

Attendees

Present
Francois_Daoust, Chris_Needham, Ryan_Davis, Steve_Morris, Tatsuya_Igarashi
Regrets
Alexander_Erk
Chair
Chris
Scribe
Francois

[Chris reviewing the agenda]

Group status

Chris: We had a very good meeting at TPAC. Discussions were very interesting and lively. We made some good progress in terms of the direction that we'd like the spec to take.
... One of the things that we really needed for the group was someone to edit the spec.
... The spec initially came from Mozilla, but Mozilla is no longer involved, so we were left without an editor.
... It made it difficult to make updates to the spec.
... I'd like to say "Thank you" to Steve who agreed to become the editor of the spec.
... Your contributions have been very useful already. I'm glad that you can take on the role.

Steve: You're welcome. I hope I can do a good job!

TVSource and TVTuner objects

-> Issue #4 on TVSource and TVTuner

Ryan: I have some perspectives on this particular issue.

Chris: Great. At TPAC it became apparent that the relationship between sources and tuners was not entirely clear to us.
... The last comment in the issue tries to summarize the problem.
... The goal is to move to a source-centric model rather than a tuner-centric model.
... These are substantial changes to the spec, so it's important for us to agree on the direction, and also to work out the details.
... One of the goals for changing the API is to simplify it from the application developer's point of view.
... and in doing so, maybe hide some details that implementers may care about but developers may not.
... Essentially, the proposal is to change the TVManager interface, which has a getTuners method, to rather expose a getSources method to retrieve a list of sources.
... From sources, you can request a channel list.
... When you request a tune-to-channel operation, this returns you a tuner object that gives you access to the stream.
... I'm wondering about the name of "TVTuner" here as well.
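
[Sketch for reference: a minimal TypeScript shape for the source-centric proposal described above, using the names from the discussion (getSources, getChannels, tuneToChannel, TVTuner); illustrative only, not spec text.]

    // Illustrative sketch of the proposed source-centric model; names and
    // signatures follow the discussion above and are not final spec text.
    interface TVManager {
      getSources(): Promise<TVSource[]>;                    // replaces getTuners()
    }

    interface TVSource {
      getChannels(): Promise<TVChannel[]>;                  // channel list per source
      tuneToChannel(channel: TVChannel): Promise<TVTuner>;  // resolves once a tuner is acquired
    }

    interface TVTuner {
      readonly stream: MediaStream;                         // gives access to the A/V stream
    }

    interface TVChannel {
      readonly name: string;
    }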

Ryan: A generic tuner is almost a hardware container of sources.
... How do you get around the constraints of available tuners?
... One of our constraints is that there can be multiple users of the same tuner.
... Sometimes tuners can be sourced twice.
... The front seat and the back seat could tune in to different sources.

Steve: One of the issues we discussed at TPAC is that there is an implementation issue. It also depends on the decoding pipeline that you have.
... The tuner is just one of the aspects here. It's not clear what the relationship between these resources is.
... Is that something that an application will care about?
... Or does it care whether it can decode and present a stream?
... Where does the piece of intelligence that selects the tuner get built? The app? The UA?

Ryan: A tuner that is already in use should leave the "available" list,
... so that the user cannot select it. Otherwise the user experience would not be good.

Igarashi: In terms of requirements, we have a requirement that applications will want to know about the tuner, the physical resource.
... About tuner-centric or source-centric, it's more a question of how an app wants to find out about these resources.
... I would suggest separating the abstraction of physical resources from the discovery of content.
... In terms of content discovery, I agree with the source-centric approach.
... However, that does not address the requirement I mentioned before. We should address it and give a way for an app to select the physical tuner.

Steve: I'm not disagreeing with that, but I think we should be careful. I think the tuner is going to be only one type of available resource (there are also hardware decode blocks and hardware decryption blocks).
... I don't think we should define all terms in the spec but we should be very clear about what we mean.

Igarashi: Since we have the Picture in Picture (PiP) use case on our plate, we need to expose that to the app.

Steve: I don't necessarily agree about that, but let's not have that discussion here.

Igarashi: Right, my point is that it is a separate issue that we need to discuss, independent of the source-centric approach discussion.

Steve: If I rephrase it slightly, the app needs to know about the decoding capabilities to know what resources it can render.

Igarashi: Yes, if the app wants to render two resources, it will want to know this. The specific capabilities are another discussion.

Steve: I think we agree, although we use different words.
... I am thinking about the case where an application is presenting a 4K TV channel and wants to know whether it can present another SD channel or another HD channel.
... I can see some value in an API that forces the UA to use specific tuning settings, because the app knows that something valid will come out of it.

Igarashi: Apps should know that tuner availability depends on the type of resources and on the device state.
... It depends on which source gets streamed.
... Also, if the receiver device is in recording mode, the underlying tuner may not be available. We need to think about these cases.
... We may be able to create abstractions so that we do not have to expose the capabilities directly.

Steve: The ability to present a stream depends on the content that is currently being presented as well, the device state. There are several things that relate to that.

Igarashi: One tuner instance cannot necessarily render every content resource. The availability of tuner resources varies over time and depends on their capabilities.

Chris: In the new API that we drafted, we have this tuneToChannel method that returns a Promise for a TVTuner object.
... The Promise can either resolve successfully, which means you can get a stream, or reject, which means you cannot get a stream.
... Is that the right model? Do we need something on top of that?
... More precisely, what does the API need to have that this proposal does not?
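
[Sketch for reference: possible usage of the drafted tuneToChannel method, building on the interface sketch above; resolution means a stream can be obtained, rejection means it cannot. Illustrative only.]

    // Hedged usage example: assumes the TVSource/TVTuner shapes sketched earlier.
    async function showChannel(source: TVSource, channel: TVChannel, video: HTMLVideoElement) {
      try {
        const tuner = await source.tuneToChannel(channel); // Promise resolves with a TVTuner
        video.srcObject = tuner.stream;                    // present the stream
        await video.play();
      } catch (e) {
        // Rejection: no suitable tuner/decoder is available, so no stream can be obtained.
        console.warn(`Unable to tune to ${channel.name}`, e);
      }
    }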

Steve: I think the shape of the API as it is currently crafted is mostly correct.
... I think what we're saying is that we want another method that says "can I tune in to this channel?", or "what capabilities are still available?".
... Perhaps "How many more HD channels can I present?", "How many more FM radio channels can I tune to?"
... I'm not sure about the shape of the API for that part.
... I haven't thought about that too much.
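
[Sketch for reference: a purely hypothetical query surface of the kind Steve describes; these method names do not exist in the draft and are invented only to make the questions concrete.]

    // Hypothetical capability queries ("can I tune to this channel?",
    // "how many more HD channels can I present?"); not part of the draft API.
    interface TVSourceCapabilityQueries {
      canTuneToChannel(channel: TVChannel): Promise<boolean>;
      remainingConcurrentStreams(kind: "SD" | "HD" | "UHD" | "radio"): Promise<number>;
    }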

Chris: So, no objection to making the API more source-centric, provided that there's some kind of query interface available to determine what we could stream, display, or record?
... Igarashi-san, would you be able to help with providing the parts to this API that you think are missing?

Igarashi: The TVManager gives a list of sources. Then the application chooses a source and wants to know how many tuners are available. The maximum number of instances should be exposed somehow.
... In addition, the application may want to know the availability of a specific tuner instance.
... Third requirement: decoder capabilities can differ (OTT and IPTV cases). Assuming IPTV content is a source, the device may have one HD decoder and one SD decoder.
... In that case the application cannot render two different streams; it should know about this.
... How we spec the API in the end is a discussion point.
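
[Sketch for reference: one hypothetical way to expose the information Igarashi-san lists; the attribute names are invented for illustration and are not proposed spec text.]

    // Hypothetical per-source resource information: maximum and currently
    // available tuner instances, plus decoder capabilities.
    interface TVSourceResourceInfo {
      readonly maxTunerInstances: number;
      readonly availableTunerInstances: number;   // varies with device state, e.g. while recording
      readonly decoderCapabilities: string[];     // e.g. ["HD", "SD"] for one HD and one SD decoder
    }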

Ryan: I still see other kinds of sources. If you go through the different sources, some come with a numeric range, and media streams come with different properties. When you are on an HD station, you need to stay on the station for a certain amount of time to know that there are other stations, which makes things more difficult.
... The way I try to solve it is to have a notion of sub-channel. I don't know if HDTV works in similar ways. That's the most difficult requirement for us.
... When you're in a vehicle, the list of stations can change rapidly.

Chris: Do you see any gap in the API that we have?

Ryan: No, I'm just "glad" that you have the same problems as ours :)

Radio update

Ryan: I came up against the same challenges as you seem to have, which is good. One is the ability for the client to know the available zones.
... There is nothing about this in the TV Control API specification.
... With earphones and other audio output devices, an app may want to select the output device.
... In a vehicle this is important; some vehicles have 3 tuners and different zones.
... People may be listening to totally different streams using headphones.
... I could see this with TVs as well.
... The audio zone could be the room in the house for instance.

Igarashi: I think that, in the case of TVs, we also have similar use cases.
... To address these use cases, we should be careful about the type of applications that we're targeting.

Igarashi: I wonder whether the audio use cases you mention apply to type 1, type 2 or type 3 applications.

Ryan: In my mind, any of these applications should be able to do that.

Chris: That's something that I don't think anybody else has really thought about, the concept of audio zones.

Steve: Conceptually, this is similar to some of the discussion that is happening in the Second Screen WG around remote playback.

Francois: With WebRTC you can enumerate audio output devices and select the device you prefer to use.
... This could perhaps be extended to cover the 'zone' notion we're discussing
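
[Sketch for reference: the existing web platform mechanism Francois refers to; enumerateDevices() and setSinkId() are real APIs (Media Capture and Streams / Audio Output Devices), while mapping a "zone" onto an output device is an assumption for illustration.]

    // Enumerate audio outputs and route a media element's audio to one of them.
    // Note: device labels are only exposed once the page has media permissions.
    async function routeAudioToOutput(video: HTMLVideoElement, zoneLabel: string) {
      const devices = await navigator.mediaDevices.enumerateDevices();
      const output = devices.find(d => d.kind === "audiooutput" && d.label.includes(zoneLabel));
      if (output) {
        await (video as HTMLMediaElement & { setSinkId(id: string): Promise<void> })
          .setSinkId(output.deviceId);            // route this element's audio to the chosen output
      }
    }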

Ryan: User preferences are another area that I haven't seen in the spec.

Chris: I wonder if that should not be up to the app.

Ryan: User-based list of preferences could be useful across apps.

Chris: That may be something to pick up with Alex.

Igarashi: "zones" could be a generic feature, addressed elsewhere, e.g. in WebRTC. Selecting an audio output zone could be generic to the HTML5 media.

Chris: I think that's a very good point. This routing could be a generic feature of HTML5 media elements.
... I'm not sure that doing a TV-specific thing is the right approach.

Steve: I agree with that.
... It's important but not TV-specific.

Chris: Igarashi-san, can you capture your requirements on the Wiki?

Igarashi: Yes.

Chris: Just use the use cases page.
... Ryan, feel free to do the same.

Ryan: OK.

Chris: I'll give some thoughts on capabilities and lookup of available resources.
... Thank you very much for joining. Now that we have an editor, I hope we'll be able to make good progress.
... Next call is in four weeks from now. Please use GitHub issues in the meantime.

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.148 (CVS log)
$Date: 2016/10/18 14:56:41 $