13:00:46 RRSAgent has joined #webtv
13:00:46 logging to http://www.w3.org/2014/07/23-webtv-irc
13:00:51 Bin_Hu has joined #webtv
13:01:03 zakim, this will be webtv
13:01:04 ok, kaz, I see UW_WebTVIG()9:00AM already started
13:01:16 zakim, call me
13:01:16 Sorry, kaz; you need to be more specific about your location
13:01:22 zakim, call kazuyuki-617
13:01:22 ok, kaz; the call is being made
13:01:23 +Kazuyuki
13:01:39 zakim, who is here?
13:01:39 On the phone I see Paul_Higgs, Kazuyuki
13:01:41 On IRC I see Bin_Hu, RRSAgent, Zakim, kaz_, yosuke, kaz, PaulHiggs, jcverdie, sangwhan, timeless, schuki, tobie, trackbot
13:01:44 + +1.650.946.aaaa
13:01:55 zakim, aaaa is Bin_Hu
13:01:55 +Bin_Hu; got it
13:02:21 +[IPcaller]
13:02:33 zakim, who is here?
13:02:33 On the phone I see Paul_Higgs, Kazuyuki, Bin_Hu, [IPcaller]
13:02:35 On IRC I see Bin_Hu, RRSAgent, Zakim, yosuke, kaz, PaulHiggs, jcverdie, sangwhan, timeless, schuki, tobie, trackbot
13:02:42 kaz_ has joined #webtv
13:02:53 zakim, IPcaller is yosuke
13:02:53 +yosuke; got it
13:03:31 ddavis has joined #webtv
13:03:44 agenda: http://lists.w3.org/Archives/Public/public-web-and-tv/2014Jul/0003.html
13:03:57 zakim, who is here?
13:03:57 On the phone I see Paul_Higgs, Kazuyuki, Bin_Hu, yosuke
13:03:59 On IRC I see ddavis, kaz_, Bin_Hu, RRSAgent, Zakim, yosuke, kaz, PaulHiggs, jcverdie, sangwhan, timeless, schuki, tobie, trackbot
13:04:14 +??P17
13:04:17 zakim, ??P17 is me
13:04:17 +ddavis; got it
13:05:11 zakim, who is here?
13:05:11 On the phone I see Paul_Higgs, Kazuyuki, Bin_Hu, yosuke, ddavis
13:05:13 On IRC I see ddavis, kaz_, Bin_Hu, RRSAgent, Zakim, yosuke, kaz, PaulHiggs, jcverdie, sangwhan, timeless, schuki, tobie, trackbot
13:05:42 kaz_ has joined #webtv
13:06:13 zakim, who is on the phone?
13:06:13 On the phone I see Paul_Higgs, Kazuyuki, Bin_Hu, yosuke, ddavis
13:08:30 +Daniel_Wester
13:08:49 https://www.w3.org/2011/webtv/wiki/New_Ideas
13:08:54 zakim, Daniel_Wester is Cyril
13:08:54 +Cyril; got it
13:09:03 CyrilRa has joined #webtv
13:09:12 zakim, Cyril is CyrilRa
13:09:12 +CyrilRa; got it
13:09:20 yosuke: Let's look through the use cases.
13:09:30 yosuke: The reviewer of the first use case is me.
13:09:54 yosuke: This is a simple use case.
13:09:55 https://www.w3.org/2011/webtv/wiki/New_Ideas
topic: UC2-1 Audio Fingerprinting
13:10:17 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-1_Audio_Fingerprinting
13:10:41 yosuke: I think there are three entities web developers need to specify.
13:10:49 yosuke: The first is the audio source - mic, etc.
13:11:03 yosuke: The second is the fingerprint generation algorithm.
13:11:16 yosuke: The third is the fingerprint database, e.g. on the web.
13:11:34 yosuke: These three things are enough to declare a fingerprinting service.
13:11:48 yosuke: In addition, if we have a timeout or duration we can have better control.
13:12:08 yosuke: So, this interface should be an asynchronous interface, e.g. JavaScript promises.
13:12:25 yosuke: Because it will take time to resolve the fingerprint from an online service.
13:12:42 kaz_ has joined #webtv
13:13:35 PaulHiggs: What do you mean by generation?
13:13:52 yosuke: You need to generate a fingerprint from the audio source.
13:14:34 PaulHiggs: What you're trying to recognise is in the audio source.
13:14:42 kaz_ has joined #webtv
13:15:06 PaulHiggs: and then process it. I don't think you're creating anything, rather returning an identifier for the audio source.
13:15:40 yosuke: In many cases, fingerprinting services use only one algorithm for their services.
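As a minimal JavaScript sketch of the asynchronous, promise-based interface yosuke outlines above - the three inputs (audio source, algorithm, fingerprint database) plus a timeout. Everything except getUserMedia is hypothetical: audioRecognition, recognize(), the service URL, and the result fields are illustrative names, not any existing or proposed API.

  // Hypothetical interface shape only -- not from any W3C spec or draft.
  async function identifyAudio() {
    // 1) audio source: the microphone, via the real getUserMedia API
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    // 2) + 3) algorithm and online fingerprint database, plus a timeout;
    // the promise resolves once the online service returns a match.
    const result = await audioRecognition.recognize({     // hypothetical object
      source: stream,
      algorithm: "default",                               // many services use one fixed algorithm
      database: "https://fingerprints.example.com/api",   // illustrative URL
      timeoutMs: 10000
    });
    console.log("Recognised:", result.title);             // illustrative result field
  }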
13:16:20 yosuke: In that case, we don't need to specify which algorithm we need.
13:16:29 q+
13:16:41 PaulHiggs: Are we confusing watermarking and fingerprinting?
13:17:04 PaulHiggs: Watermarking would take an extra identifier encoded as an inaudible tone.
13:17:55 CyrilRa: On the backend you have to have a hash.
13:19:02 rrsagent, make log public
13:19:05 rrsagent, draft minutes
13:19:05 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
13:19:14 CyrilRa: Would you have the hashing done on the server side, or send a sample?
13:19:29 PaulHiggs: I thought fingerprinting was sending an audio sample.
13:19:41 CyrilRa: So then you'd send it to a recognition service.
13:19:45 zakim, who is on the phone?
13:19:45 On the phone I see Paul_Higgs, Kazuyuki, Bin_Hu, yosuke, ddavis, CyrilRa
13:20:01 PaulHiggs: You could do a local hash, but that's not generation, that's hashing.
13:20:05 zakim, mute me
13:20:05 ddavis should now be muted
13:20:11 Present: Paul_Higgs, Kazuyuki, Bin_Hu, yosuke, ddavis, CyrilRa
13:20:28 PaulHiggs: If the second item said hashing, that would be fine.
13:21:08 yosuke: So in some services the front-end gets the audio and the back-end service generates a fingerprint.
13:21:24 yosuke: In other services, the front-end generates a hash and sends that to the back-end.
13:21:45 yosuke: I'll do some research about existing fingerprinting services.
13:22:34 yosuke: If it's just audio clips, then we don't need to clarify the generation step in the use case.
13:23:01 Bin_Hu: It seems we have two functions - one is a database service and one is audio clip matching.
13:23:50 Bin_Hu: The algorithm to match the audio is in the implementation, so I think it's a good starting point to do more research about what existing services provide.
13:24:03 q?
13:24:08 zakim, unmute me
13:24:08 ddavis should no longer be muted
13:24:15 q+
13:24:41 ack b
13:24:42 ack k
13:24:46 zakim, mute me
13:24:46 ddavis should now be muted
13:25:00 kaz: I was wondering if we should think of a model like EME for this.
13:25:14 kaz: The EME spec includes a model of how its mechanism works.
13:25:27 kaz: Maybe we could use that as a starting point for the fingerprinting discussion.
13:25:32 -> https://dvcs.w3.org/hg/html-media/raw-file/tip/encrypted-media/encrypted-media.html EME
13:25:52 yosuke: You mean we should create a diagram to understand the architecture?
13:26:00 kaz: Right
13:26:17 yosuke: OK, I'll create a diagram based on my understanding.
13:26:50 yosuke: I'll create that and Daniel can check it.
13:26:51 rrsagent, draft minutes
13:26:51 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
13:27:01 kaz: That's great.
topic: UC2-2 Audio Watermarking
13:27:36 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-2_Audio_Watermarking
13:27:40 yosuke: The next use case is audio watermarking.
13:28:15 rrsagent, draft minutes
13:28:15 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
13:28:20 yosuke: I think watermarking is much simpler than fingerprinting, because we don't need a backend service to generate the watermark.
13:28:34 q+
13:28:34 zakim, unmute me
13:28:35 ddavis should no longer be muted
13:29:44 ddavis: Do you still need a backend?
13:30:00 PaulHiggs: No, the data is within the audio stream, as long as you know the algorithm.
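One way a client could watch for the kind of inaudible tone PaulHiggs mentions is the existing Web Audio AnalyserNode. A rough sketch, assuming a purely illustrative 19 kHz carrier and -50 dB threshold; no standard watermarking scheme is implied.

  // Sketch: poll the microphone for a hypothetical inaudible pilot tone.
  async function watchForTone(onTrigger) {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const ctx = new AudioContext();
    const analyser = ctx.createAnalyser();
    analyser.fftSize = 2048;
    ctx.createMediaStreamSource(stream).connect(analyser);

    const bins = new Float32Array(analyser.frequencyBinCount);
    // FFT bin closest to the assumed 19 kHz carrier frequency
    const bin = Math.round(19000 / (ctx.sampleRate / analyser.fftSize));

    (function poll() {
      analyser.getFloatFrequencyData(bins);   // per-bin levels in dB
      if (bins[bin] > -50) onTrigger();       // illustrative threshold
      requestAnimationFrame(poll);
    })();
  }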
13:30:40 PaulHiggs: If someone wanted to, they could encode a link to another service.
13:30:49 q?
13:31:14 zakim, mute me
13:31:14 ddavis should now be muted
13:31:38 CyrilRa: Fingerprinting helps you identify what audio was played; watermarking helps you take action.
13:32:01 CyrilRa: With watermarking, you have to have audio triggers that can be recognised by your client.
13:32:32 PaulHiggs: You can think of it like the old Teletext scanlines that used to be in the signal.
13:32:58 Bin_Hu: For watermarking, the service provider has to encode something within the stream.
13:33:20 Bin_Hu: Are standards bodies such as MPEG planning to create a standard for embedding this?
13:33:25 kawada has joined #webtv
13:33:31 CyrilRa: There's no standard I can think of.
13:34:23 Bin_Hu: From a W3C perspective, are we planning to support such a format, or do we accept that it's out of scope?
13:35:35 +??P21
13:35:45 zakim, ??P21 is me
13:35:45 +kawada; got it
13:35:53 Present+ kawada
13:35:55 Bin_Hu: Lots of new codecs are coming out, e.g. within MPEG, so are we going to look into the method of multiplexing?
13:36:20 Bin_Hu: Or will it be left to the implementation, so we won't be directly involved?
13:36:56 CyrilRa: That's one of the main issues with watermarking - you need to know what you're looking for first.
13:38:09 CyrilRa: There's some work being done on the fingerprinting side where you have a backend, and the frontend (player) is capturing samples constantly.
13:38:29 CyrilRa: You then match between these two at exactly the right time.
13:38:59 CyrilRa: That's a way of overcoming the burden of having something inaudible embedded, and also of having to know what to look for.
13:39:51 zakim, next topic
13:39:51 I don't understand 'next topic', ddavis
13:39:55 zakim, next item
13:39:55 I see nothing on the agenda
topic: UC2-3 Identical Media Stream Synchronization
13:40:13 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-3_Identical_Media_Stream_Synchronization
13:41:00 kaz: Originally this use case listed the HTML task force and Web sockets.
13:41:10 kaz: I also added SMIL by the Timed Text WG.
13:41:44 kaz: Also SCXML is a new version of SMIL, and these can be used for synchronization.
13:42:24 yosuke: As a next step, we need to clarify what the requirements are.
13:42:49 kaz: My understanding is delivery of a single stream to multiple destinations at the same time.
13:43:20 yosuke: In many cases, the bandwidth or transport system is different, so they have different buffers or time lag.
13:43:58 yosuke: The exact timing could be different, so we need to think about how to adjust the synchronisation between different devices.
13:44:17 kaz: You mean how to keep the multiple streams (with identical content) synchronised?
13:44:56 yosuke: Yes. Maybe a player on a "better" device would have to wait to achieve synchronization with other, slower devices.
13:45:03 kaz: So maybe we should add that point.
13:45:09 zakim, unmute me
13:45:09 ddavis should no longer be muted
13:45:46 zakim, mute me
13:45:46 ddavis should now be muted
13:45:54 kaz: What if the system is using DASH?
13:46:04 kaz: It's even more complicated, but we should think about that as well.
13:46:33 yosuke: DASH can help with this use case.
13:47:09 yosuke: If DASH is used, the client will have better presentation timing.
13:47:32 yosuke: Probably we need a more generic API to synchronize.
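Pending such a generic API, the idea can be approximated today with Web Sockets (which kaz lists above) plus small playback-rate corrections. A sketch under stated assumptions: the sync-service URL, its message format, and the drift thresholds are all illustrative.

  const video = document.querySelector("video");
  const ws = new WebSocket("wss://sync.example.com/session/42"); // hypothetical service

  // Each client reports its position; the service is assumed to broadcast
  // a shared target (e.g. the slowest client's position, so faster devices wait).
  setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(JSON.stringify({ t: video.currentTime }));
    }
  }, 1000);

  ws.onmessage = (event) => {
    const { target } = JSON.parse(event.data);  // illustrative message format
    const drift = video.currentTime - target;
    if (Math.abs(drift) > 1.0) {
      video.currentTime = target;               // badly out: hard seek
      video.playbackRate = 1;
    } else if (Math.abs(drift) > 0.1) {
      video.playbackRate = drift > 0 ? 0.97 : 1.03; // ahead: slow down; behind: catch up
    } else {
      video.playbackRate = 1;                   // close enough: play normally
    }
  };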
13:47:46 yosuke: The use case is simple, but the technology could be complicated.
13:48:41 yosuke: For example, I have a video element and my girlfriend has the same content separately. We'd like to match the timing to achieve synchronisation.
13:48:51 kaz: Maybe we could use WebRTC.
13:49:03 PaulHiggs: I don't know if we need WebRTC. This is not sharing streams.
13:49:24 PaulHiggs: I'm watching something and a friend is watching the same thing from the same source, not re-streaming it.
13:49:32 kaz: So probably without WebRTC.
13:49:58 CyrilRa: What you'd need is a sync service.
13:50:11 kaz: Yes, what we need is a very generic timeline mechanism.
13:50:41 yosuke: Could you make a note on the wiki, please?
13:50:45 kaz: Will do.
13:51:26 yosuke: Next use case.
topic: UC2-4 Related Media Stream Synchronization
13:51:27 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-4_Related_Media_Stream_Synchronization
13:51:36 kaz: This is similar.
13:51:42 yosuke: We can talk about this next time.
13:51:48 zakim, unmute me
13:51:48 ddavis should no longer be muted
topic: UC2-5 Triggered Interactive Overlay
13:53:25 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-5_Triggered_Interactive_Overlay
13:54:09 ddavis: What key events were you thinking of?
13:54:29 Bin_Hu: E.g. during the World Cup, if there's a goal, that would trigger an event.
13:55:32 q+
13:55:55 yosuke: The basic way to deliver such metadata is using a text track.
13:56:31 yosuke: What you're talking about is additional information. If we implement that, we could use an HTML5 text track.
13:57:14 yosuke: Is that correct?
13:57:44 Bin_Hu: A text track may be the fundamental way, but a live event is not predictable - it may not be possible to put that in a text track in advance.
13:58:00 Bin_Hu: Maybe additional information could be pulled in out-of-band.
13:58:26 Bin_Hu: A text track could be possible if it's not a live event.
13:59:36 q?
13:59:40 ack CyrilRa
13:59:43 ack kaz
14:00:03 Bin_Hu: Or advertising is another situation.
14:00:29 kaz: Maybe the event can be sent to another channel. The destination channel is what the viewer is looking at.
14:01:20 kaz: E.g. if we're watching Harry Potter, the info for some event could be in the text track.
14:01:39 kaz: There is a service in Japan like YouTube called NicoNico.
14:01:51 kaz: You can add lots of annotations to a video using timings.
14:02:07 kaz: Those kinds of annotations could be triggers for these events.
14:02:14 Bin_Hu: Exactly.
14:02:36 Bin_Hu: This would be encoded in-band.
14:02:55 Bin_Hu: The platform implementation would be able to decode this and dispatch the events.
14:03:23 kaz: So the point of the use case is to send such events and show an overlay.
14:03:46 Bin_Hu: Events like "start overlay" and "dismiss overlay" should be supported.
topic: UC2-6 Clean Audio
14:04:39 yosuke: Next use case: Clean Audio.
14:04:40 https://www.w3.org/2011/webtv/wiki/New_Ideas#UC2-6_Clean_Audio
14:04:56 yosuke: I added a section called Initial Gap Analysis.
14:05:37 yosuke: If clean audio tracks are provided through an HTML5 audio element, you can select them through existing interfaces.
14:06:01 yosuke: If they're provided in-band, you can use the in-progress in-band resource tracks specification.
14:06:52 yosuke: There's another feature where a therapist can adjust the acoustic features of audio tracks to assist a disabled user.
14:07:03 yosuke: You can achieve this using the Web Audio API.
14:07:14 yosuke: There are examples of audio equalizers already.
14:07:55 yosuke: So only one point remains - if you use Encrypted Media Extensions for your media tracks, it's extremely unlikely the audio could be modified.
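Going back to UC2-5 for a moment: the text-track route yosuke describes maps directly onto the existing HTML5 TextTrack API, including Bin_Hu's "start overlay" and "dismiss overlay" events. A sketch; the cue timing, payload, and overlay element are illustrative, and for a live event the cue would instead be created when an out-of-band trigger arrives.

  const video = document.querySelector("video");
  const overlay = document.getElementById("overlay"); // illustrative <div>

  const track = video.addTextTrack("metadata", "events");
  track.mode = "hidden";                   // hidden tracks still fire cue events

  // A goal at 02:00-02:30 of the programme (illustrative timing/payload);
  // for a live stream, add the cue when the notification arrives.
  const cue = new VTTCue(120, 150, JSON.stringify({ type: "goal" }));
  cue.onenter = () => {
    overlay.textContent = "GOAL!";         // "start overlay" event
    overlay.hidden = false;
  };
  cue.onexit = () => { overlay.hidden = true; }; // "dismiss overlay" event
  track.addCue(cue);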
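And for the clean-audio adjustment just described, a minimal Web Audio sketch: the element's sound routed through adjustable filters of the kind a therapist could tune per user. The filter frequencies and gains are illustrative, not recommendations.

  const audio = document.querySelector("audio");
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(audio);

  const rumbleCut = ctx.createBiquadFilter();
  rumbleCut.type = "highpass";
  rumbleCut.frequency.value = 100;      // cut low-frequency background noise

  const speechBoost = ctx.createBiquadFilter();
  speechBoost.type = "peaking";
  speechBoost.frequency.value = 2500;   // band important for speech (illustrative)
  speechBoost.gain.value = 6;           // +6 dB, adjustable per user

  source.connect(rumbleCut).connect(speechBoost).connect(ctx.destination);
  // Note: with EME-protected media this source yields no usable samples,
  // which is the accessibility concern raised below.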
14:08:22 yosuke: So I think we should ask the accessibility task force about this point. EME can decrease media accessibility.
14:11:25 yosuke: I thought I should check dependencies with existing web standards, and we can basically achieve this use case with existing standards.
14:11:47 q+
14:12:56 yosuke: From a practical viewpoint, clean audio is helpful for disabled people.
14:13:35 yosuke: An API to achieve this use case is not so helpful.
14:14:04 yosuke: Promoting the use case itself, or encouraging media service providers, is the key point to improve accessibility.
14:14:27 ddavis: So it's more about awareness.
14:14:30 q?
14:15:08 yosuke: The EME part is important, but apart from that we can achieve this use case with existing standards.
14:15:35 yosuke: We could write a note about how to do this, which would help service providers.
14:16:03 yosuke: We can also ask the EME people and the accessibility task force about the potential drawbacks of using EME.
14:16:17 ddavis: Sounds like a good idea.
14:16:20 -> http://www.w3.org/WAI/PF/media-accessibility-reqs/ Media Accessibility User Requirements
14:16:44 kaz: The current draft of the Media Accessibility User Requirements doesn't include encryption.
14:17:01 kaz: We can talk about it with the media accessibility task force and the HTML media task force.
14:17:22 yosuke: Kaz or Daniel, could you pass on this feedback?
14:17:39 kaz: Yes, next Monday is the next media accessibility call.
14:18:03 yosuke: I'll create a note about how to implement clean audio with existing web standards.
14:18:24 yosuke: After that I'd like to ask the accessibility task force to review it.
14:18:31 ddavis: I'm sure they'd be happy to do that.
14:18:34 q?
14:18:36 q-
14:18:42 q+
topic: Joint meeting with the Accessibility TF
14:18:46 yosuke: Any further questions or comments?
14:18:53 ack kaz
14:19:16 kaz: During the previous call I had a task to speak with the media accessibility task force about meeting during TPAC in October.
14:19:30 kaz: They're also interested in a joint session.
14:20:17 yosuke: We could have a joint session during the TV IG meeting, or we can join their meeting. Do you have any ideas?
14:20:29 kaz: My suggestion is to join their meeting.
14:20:46 kaz: We already have the TV Control API CG joining our TV IG meeting.
14:21:08 yosuke: What's the next step?
14:21:35 kaz: If it's OK, let's ask them whether we can join their meeting. I can suggest this at our next joint call.
14:22:20 yosuke: If we have an accessibility session, it would not be a long session.
14:22:53 yosuke: They can reach more people if they come to our meeting.
14:23:17 yosuke: We could give them a 10-20 minute session and people could learn from them.
14:23:39 yosuke: Then, if IG people are interested in accessibility, they could join the task force's meeting.
14:24:09 kaz: We could have our meeting with them joining on Monday, and then we join them on Tuesday.
14:24:15 -> http://www.w3.org/2014/11/TPAC/ TPAC schedule
14:24:20 yosuke: Any other business?
14:24:47 -Paul_Higgs
14:24:48 yosuke: Thank you - the meeting is adjourned.
14:24:48 -Bin_Hu
14:24:50 -yosuke
14:24:52 -Kazuyuki
14:24:53 -ddavis
14:25:02 -kawada
14:25:04 rrsagent, generate minutes
14:25:04 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html ddavis
14:25:29 chair: Yosuke
14:25:30 rrsagent, generate minutes
14:25:30 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html ddavis
14:25:50 rrsagent, draft minutes
14:25:50 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
14:25:53 Thank you very much for scribing the meeting, Daniel.
14:26:03 You're welcome.
14:26:13 Meeting: Web&TV IG
14:26:15 rrsagent, draft minutes
14:26:15 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
14:26:22 Thanks Kaz
14:27:01 ddavis has joined #webtv
14:27:10 rrsagent, draft minutes
14:27:10 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
14:27:36 rrsagent, draft minutes
14:27:36 I have made the request to generate http://www.w3.org/2014/07/23-webtv-minutes.html kaz
14:28:40 Thanks - I'll send them out.
14:35:01 disconnecting the lone participant, CyrilRa, in UW_WebTVIG()9:00AM
14:35:03 UW_WebTVIG()9:00AM has ended
14:35:03 Attendees were Paul_Higgs, Kazuyuki, +1.650.946.aaaa, Bin_Hu, yosuke, ddavis, CyrilRa, kawada
16:26:26 jcverdie has joined #webtv
16:45:02 Zakim has left #webtv