07:31:40 RRSAgent has joined #signage 07:31:40 logging to http://www.w3.org/2012/11/02-signage-irc 07:32:09 Ryosuke has joined #signage 07:32:28 rrsagent, make draft minutes 07:32:28 I'm logging. I don't understand 'make draft minutes', hiroki. Try /msg RRSAgent help 07:34:25 tokamoto has joined #signage 07:35:23 rrsagentd, draft minutes 07:35:55 rrsagent, draft minutes 07:35:55 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html Ryosuke 07:36:09 tomoyuki has joined #signage 07:36:57 creat 07:37:19 s/creat/create 07:37:42 rrsagent, draft minutes 07:37:42 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html Ryosuke 07:39:18 rrsagent, make log public 07:43:15 kotakagi has joined #signage 07:43:34 chair: futomi 07:43:54 scribe: naomi 07:45:17 AA: hoge 07:45:33 s/AA/ryosuke 07:46:25 Shinji has joined #signage 07:50:59 shinichi has joined #signage 07:52:38 a12u has joined #signage 07:54:20 Alan has joined #signage 07:56:14 hiroki has joined #signage 08:01:43 yamaday has joined #signage 08:03:08 kotakagi has joined #signage 08:04:54 futomi: good morning, thank you for coming 08:05:16 ... today's agenda - joint meeting with MMI WG 08:05:23 ... thank you for coming MMI WG people 08:05:36 kaz has joined #signage 08:05:37 ... we appreciate having your advice 08:05:48 ... very happy to meet and welcome you 08:06:14 Skim has joined #Signage 08:06:58 ... after that we continue our discussion then have a meeting with the DAP WG 08:07:28 ... 13:30 - 14:30 at their room 08:10:34 karen: thank you for coming 08:10:52 ...
appreciate participants especially from north America 08:11:00 Skim13 has joined #Signage 08:11:10 [ everybody nods ] 08:11:59 kaz: Kaz Ashimura, activity lead of MMI 08:12:25 daniel: chair of Voice Browser WG and editor of a couple of Web and TV specs 08:12:38 shige has joined #signage 08:12:46 james: worked on Web RTC and @@ 08:13:24 s/and @@/MMI and Voice Browser/ 08:13:59 [ introducing themselves - Chang, Debora, Herena, Sebastian ] 08:14:09 futomi: introducing himself 08:14:34 s/introducing himself/[ introducing himself ]/ 08:15:03 JonathanJ1 has joined #signage 08:16:27 mishida has joined #signage 08:16:27 burn has joined #signage 08:16:34 deborah: we have 3 presentations 08:16:42 noriya has joined #signage 08:17:15 sfeu has joined #signage 08:17:33 JonathanJ1 has joined #signage 08:17:47 Present+ Sebastian_Feuerstack 08:17:57 futomi has joined #signage 08:18:56 Jaejeung has joined #signage 08:18:56 james: will explain about multimodal architecture overview 08:19:08 gisung has joined #signage 08:19:42 [ explaining slides ] 08:19:52 james: @@ 08:20:52 scribenick: Ryosuke 08:20:52 dadahl has joined #signage 08:21:19 james: html5 voicexml TTS all independent 08:21:43 ... explaining about architecture of MMI 08:22:25 james: diggram of modality component 08:22:25 kawakami has joined #signage 08:22:48 s/diggram/diagram/ 08:23:04 ... suppose speech definition system 08:23:38 ... components are black boxes to each other 08:24:13 ... interoperability testing 08:24:41 ... 3 vendors, speech, image, @@ 08:25:03 toru: could you give us any concrete examples? 08:25:15 james: in business model, it would depend @@ 08:25:25 jim: one company might be an expert of voice, but not so for graphics 08:25:46 toru: dynamic discovery is included? 08:25:48 toru: technical issue, discovery including architecture 08:25:58 james: yes 08:25:59 jim: will be mentioned later 08:26:19 shinji: all codecs are controlled? 08:26:27 shinji: multiple modalities of same kind?
08:26:30 jim: possible 08:26:37 whyun has joined #signage 08:26:38 james: anyone can control each component. 08:27:06 debbie: e.g., multiple languages 08:27:06 scribenick: kaz 08:27:19 Skim13 has joined #Signage 08:27:28 jim: IM decides which modality should be used 08:27:41 ... might choose different modalities, e.g., different languages 08:28:10 toru: IM knows the capability of each MC? 08:28:30 jim: MC is, e.g., sensors 08:28:33 ... and the IM is the brain 08:29:16 toru: which part is the server? 08:29:22 jim: either way is possible 08:29:44 debbie: for example, Openstream's implementation includes all the components 08:29:50 ... on cell phones 08:30:20 jim: standard piece here is "how all the components talk with each other" 08:30:49 ... all we define here is messages between the components (=MCs and IM) 08:31:19 q? 08:31:21 [ that's why the name of the spec is "Multimodal Architecture and Interfaces" ] 08:31:44 wook: transport? 08:32:03 jim: we used HTTP for the first version of the interoperable testing prototype 08:32:44 (some more questions) 08:33:09 jim: application logic is handled by the IM 08:33:12 ... MVC model 08:34:02 topic: MMI Discovery - helena 08:34:45 rrsagent, make log public 08:34:49 rrsagent, draft minutes 08:34:49 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html kaz 08:35:26 helena: Discovery & Registration 08:36:16 ... MMI Architecture is an architecture for components to orchestrate 08:36:34 ... MC layer is abstract and generic 08:36:47 ... responsible for tasks 08:37:09 ... can have multiple devices that provide a task 08:37:24 ... Modality Component life-cycle 08:37:40 ... advertisement, discovery, registration, and control 08:37:55 q- sangwhan 08:38:19 q? 08:38:36 CHONG has joined #signage 08:38:56 ... 1. Advertisement 08:39:18 ... reach correctness in the MC retrieval 08:39:37 Meeting: Web-based Signage BG f2f Meeting at TPAC2012 - Day 2 08:39:48 futomi: MC?
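[ Editor's note: as background for the message exchange Jim describes, MMI life-cycle events are XML messages sent between the IM and MCs. A rough sketch of one such event follows; all identifiers and the content URL are invented for illustration, not taken from the meeting. ]

```xml
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <!-- The IM asks a modality component to start presenting some content.
       Source/Target/Context/RequestID values here are illustrative only. -->
  <mmi:StartRequest mmi:Source="IM-1" mmi:Target="speech-mc-1"
                    mmi:Context="ctx-1" mmi:RequestID="req-1">
    <mmi:ContentURL mmi:href="http://example.com/prompt.vxml"/>
  </mmi:StartRequest>
</mmi:mmi>
```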
08:40:00 helena: Modality Components of the MMI Architecture 08:40:19 ... and what must be advertised? 08:40:52 ... functional and non-functional information 08:41:03 Present+ Shigeo_Okamoto, Helena_Rodriguez, Kohei_Kawakami, Jaejeung_Kim, Soobin_Lee, Gisung_Kim, Shinichi_Nakao, Noriya_Sakamoto 08:41:10 ... e.g., two concurrent synthesizers 08:41:33 ... compared to DLNA 08:41:49 s/compared to/ examples are/ 08:42:08 s/DLNA/DLNA, Bonjour, Intent and Web Services 08:42:18 rrsagent, draft mintues 08:42:18 I'm logging. I don't understand 'draft mintues', kaz. Try /msg RRSAgent help 08:42:25 rrsagent, draft minutes 08:42:25 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html kaz 08:42:37 I'm logging. I don't understand 'you're too strict :)', kaz. Try /msg RRSAgent help 08:42:49 futomi: seems like UPnP 08:43:03 helena: DLNA uses UPnP for discovery 08:43:14 ... and what is needed for MMI? 08:43:33 Meeting: Joint meeting MMI WG and Web-based signage BG in TPAC2012 08:44:05 helena: (explains the idea using a picture) 08:44:12 Present+ Hiroshi_Yoshida, Shin-Gak_Kang, Kaz_Ashimura, Sung_Hei_Kim, Deborah_Dahl, Toru_Kobayashi, Dan_Burnett 08:44:35 helena: next Discovery 08:44:46 ... for types of discovery criteria 08:44:58 ... task goal, intention, behavior and capacities 08:45:07 s/for/four/ 08:45:20 ... fixed, passive, active and mediated 08:45:42 ... you use underlying technologies 08:46:12 ... requirement is the need of a mechanism of discovery using the MMI events 08:46:21 ... (shows another picture on discovery) 08:46:32 Present+ Wook_Hyun, Karen_Myers, Chong_Gu, Hiroki_Yamada, Masayoshi_Ishida, Ryoichi_Kawada, Toshiyuki_Okamoto, Shinji_Ishii, Sebastian_Feuerstack, Naomi_Yoshizawa, Ryosuke_Aoki 08:46:50 helena: and Registration 08:47:20 ... implies information storing, indexing criteria, registration state and registration distribution 08:48:19 ... you can install MCs on some server 08:48:27 ...
and the information can be distributed 08:48:56 ... requirement: system state handling, multimodal session and registry updates 08:49:09 ... and then Control 08:49:39 ... use the same MMI life-cycle events to control registration and registration updates 08:49:47 debbie: one use case? 08:49:54 ... public signage could be one 08:50:05 helena: we have a set of use cases 08:50:16 -> http://www.w3.org/TR/mmi-discovery/ use cases note 08:50:31 helena: UI on a mobile 08:50:42 ... interacts with big screens 08:50:57 ... that is one possibility 08:51:10 ... connect with a public display 08:51:22 ... communicate with MCs via IM 08:51:39 futomi: my understanding is ... 08:51:49 ... there is a big screen for digital signage 08:51:52 ... and I have a mobile 08:52:01 ... which communicates with the big screen 08:52:27 ... maybe there is a list of devices close by on the mobile 08:52:39 ... and I can choose which display to be connected 08:52:43 helena: right 08:53:04 ... currently we need to stop in front of the display at stations 08:53:33 ... using this mechanism devices can interact with each other via IM 08:53:42 futomi: where is the controller (IM)? 08:53:46 ... and where are the MCs? 08:53:59 helena: MMI Architecture allows nested structure 08:54:12 ... so the IM could be installed on the signage display 08:54:27 ... or a separate server is another possibility 08:54:33 ... it depends on the application 08:54:44 futomi: MC is not free 08:54:54 ... we're a signage operator 08:55:18 ... it would be good if we could provide an IM to control the service 08:55:36 jim: right 08:55:56 helena: you can do that 08:56:09 ... the question is rather what your own criteria is 08:57:00 shinji: please share the information 08:57:09 kaz: we'll let you know about the URIs 08:57:46 kaz: [ showing a demo ] 08:58:39 ... possible system of MMI architecture 09:00:23 ... one interaction manager, DLNA, GUI, VUI MC (VoiceXML), GUI MC (HTML5) and other services on the web e.g., EPEG 09:00:53 ...
two windows connected by a simple socket XML 09:01:04 scribenick: Ryosuke 09:01:33 kaz: demo of TV control using voice interface 09:01:53 toru: @@ 09:01:56 kaz: no 09:02:09 s/@@/ is it included advertisement?/ 09:02:25 kaz: there is no discovery on this demo 09:03:12 kaz: @@ 09:04:21 q? 09:04:33 scribenick: kaz 09:04:35 Alan has joined #signage 09:05:18 topic: Signage use cases 09:05:27 futomi: Web-based Signage use cases 09:05:31 ... we have 19 use cases 09:05:41 ... we're now updating them 09:05:51 ... some of them are related to MMI 09:06:26 s/toru: @@/toru: does this demo use discovery? 09:06:59 ... most of digital signages just show information in one direction 09:07:07 ... but the future ones should be interactive 09:07:20 ... so we listed possible use cases 09:07:30 ... R5: Discovered by personal devices 09:07:43 ->http://www.w3.org/community/websignage/wiki/Web-based_Signage_Use_cases_and_Requirements Web-based Signage Use cases and Requirements 09:08:14 futomi: (explains the UC) 09:09:09 s/kaz: @@/kaz: both the components know each other's IP address and ports 09:09:43 futomi: I can see the big display 09:09:47 ... but I can't touch it 09:09:55 ... how can I interact with it? 09:10:28 ... probably I could use my smartphone as the UI for the display 09:11:39 i/topic: Signage use cases/toru: plan for systems which need discovery?/ 09:12:25 i/topic: Signage use cases/kaz: the MMI WG have been working on interoperable testing, and the next version of the interoperable testing proto system should include discovery capability. please help us :) 09:12:45 jim: signage terminal as an IM uses users' devices as MCs 09:12:54 ... discovery work is very important here 09:13:03 ... this idea fits MMI Architecture 09:13:17 futomi: there are many other UCs 09:13:30 ... maybe not related to MMI, though 09:13:49 Skim13 has joined #Signage 09:13:55 helena: Web Intents is very much modality-oriented 09:14:08 sangwhan has joined #signage 09:14:22 ...
UPnP is used for many devices 09:14:42 q+ 09:14:56 ... the coordination capability provided by the MMI Architecture should be useful 09:15:20 @1: how can I select one from various signage devices? 09:15:28 s/@1/gisung/ 09:15:28 s/@1/gisung 09:15:42 helena: depends on application 09:16:05 Kiyoshi_ has joined #signage 09:16:17 futomi: MMI Architecture doesn't define that part 09:16:24 rrsagent, draft minutes 09:16:24 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 09:17:26 jim: terminals agree with each other 09:18:03 q- 09:18:05 sblee has joined #signage 09:18:28 kaz: that's similar to wifi connection :) 09:19:04 futomi: interested in the work of the MMI WG 09:19:20 ... would like to include MMI's work in the gap analysis 09:19:41 debbie: if you could, please give comments to the discovery work 09:19:57 ... it would be very helpful if you could review the note 09:21:03 sangwhan has joined #signage 09:21:43 kaz: [ explaining slides ] 09:21:56 ... @2 09:24:14 futomi: emotion means markup language? 09:24:18 kaz: yes 09:25:42 toru: futomi might not mention not only @3 09:29:00 futomi: meta data 09:29:13 [ everybody nods ] 09:29:25 dadahl: could have avatar 09:29:29 ... shows angry face 09:29:36 ... detected from the speech 09:29:43 ... two separated implementations 09:29:54 futomi: there should be many use cases 09:30:11 ... markup language represents let's say emotion 09:30:18 ... this specify @3? 09:30:28 kaz: [ showing slides ] 09:30:55 daniel: agree on what the emotions are 09:31:06 ... need to research what emotions are 09:31:14 futomi: would be very hard to define 09:31:34 ... emotion of each country is different 09:31:37 ... how do you solve it? 09:31:46 dadahl: the group recognizes that 09:31:55 ... 5-6 comments 09:32:04 ... vocabularies 09:32:14 ... you can define your own vocabularies 09:32:35 kaz: that's why we provided @4 09:32:45 dadahl: 5-6 recommended 09:33:07 ryoichi: how do you use it?
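[ Editor's note: as background for the vocabulary discussion above, an EmotionML document declares which vocabulary it uses via a category-set attribute and annotates intensity with a value attribute. The sketch below is the editor's, not from the meeting; the vocabulary URI and values are illustrative. ]

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- "anger" is a name from the referenced vocabulary;
       value expresses intensity on a 0..1 scale -->
  <emotion>
    <category name="anger" value="0.85"/>
  </emotion>
</emotionml>
```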
09:33:18 kaz: sets of emotion markup language 09:33:51 daniel: any emotion could be readable with this vocabulary 09:34:17 herena: if you want to express a question or anxiety, you can use the annotation language 09:34:26 ... performed - has a lot of use cases 09:34:43 s/herena/helena 09:34:44 toru: what is the most applicable use case of this 09:34:57 kaz: given these provided implementation report 09:35:02 ... avatar system, 3D 09:35:07 ... face recognition 09:35:21 dadahl: product testing 09:35:42 ... opinion people says good but might not like the faces 09:35:54 kaz: one of NTT dept is working on emotion analysis 09:36:00 ... for call centers 09:36:11 dadahl: that's a good use case 09:36:22 futomi: break! 09:36:28 ... appreciate your attendance 09:36:34 [ everybody applauds ] 09:43:14 tokamoto has joined #signage 09:53:40 hiroki has joined #signage 09:56:18 kawakami has joined #signage 09:57:45 skim13 has joined #signage 10:08:04 a12u has joined #signage 10:08:06 naomi has joined #signage 10:08:09 Ryosuke has joined #signage 10:08:54 s/this specify @3/Can this specify level of emotion? 10:09:04 s/[ showing slides ]/Yes, use value attribute. e.g. value="0.85" 10:09:17 rrsagent, draft minutes 10:09:17 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html hiroki 10:09:54 burn has joined #signage 10:10:02 burn has left #signage 10:14:34 scribenick: ryosuke 10:15:03 futomi: yesterday r6, today it starts from r7 10:15:30 ... explain summary of r7 10:15:52 ... explain use cases of r7. 10:16:22 futomi: R7 includes the case of stock in real time communication 10:16:52 noriya has joined #signage 10:17:11 ... real time communication using digital signage is a live stream such as live news, notice board. 10:17:32 ... fourth use case is fire in disaster. 10:17:59 ... the use case assumes a shopping center where there are many displays in the center 10:18:31 ...
digital signage informs us about fire area etc 10:18:50 futomi: next use case is earthquake 10:19:42 ... charles commented push API yesterday 10:20:16 ... technique related to R7 add push API 10:21:04 -> http://www.w3.org/TR/push-api/ Push API spec 10:21:33 futomi: explanation of websocket API which is low layer 10:21:59 yamada: @@ 10:22:33 yamada: push API on SNS in the radio 10:22:35 [Editors Action] add. "Push API" to R7. Gap analysis. 10:22:48 Kang: BB 10:23:39 Kang: Server sends information using push API 10:23:53 Kang: explanation of push API 10:24:05 shinichi has joined #signage 10:24:15 Kang: talking about fire in disaster 10:25:05 Kang: CC 10:25:21 futomi: preinstall application on digital signage 10:25:35 whyun has joined #signage 10:25:37 s/preinstall/pre-install/ 10:25:42 http://www.w3.org/TR/2012/WD-push-api-20121018/ 10:26:06 Kang: receiving information in real-time using interaction with digital signage 10:26:29 Shinji: how to switch normal case to disaster case on a digital signage? 10:27:02 futomi: assuming automatic switching 10:27:23 tomoyuki_ has joined #signage 10:28:08 futomi: we can build up an automatic system which switches to disaster mode. 10:28:32 Kang: DD 10:29:07 futomi: Let's talk about terminal side 10:29:23 ... how to handle sensor authentication? 10:30:27 Shinji: talking about an example 10:30:33 ... signage system watches disaster situation using camera 10:31:04 futomi: Our scope is to discuss not multiple signage but single signage 10:31:30 s/is to discuss/is 10:32:57 Kang: R6 is audio mesuament 10:34:01 s/mesuament/measurement/ 10:34:57 futomi: FF 10:36:15 Kang: server can push emergency information to specific screen 10:37:31 topic: R8 identifying a location of a terminal 10:38:50 futomi: a use case is ads based on a location 10:39:47 ... example of situation about this use case is train station 10:40:30 ...
the problem is that different stations have digital signage which shows the same content 10:42:11 futomi: API related with R8 are Geolaction API Specification, Geolocation API Specification Level2 10:42:23 s/Geolaction/Geolocation 10:42:51 okamoto: FF 10:42:58 futomi: twitter route 10:43:08 KK: map 10:43:22 futomi: discuss on map or mapping 10:43:36 futomi: next topic 10:43:48 topic: Synchronizing contents 10:44:36 futomi: a use case is watching course materials on a tablet, 10:44:47 s/KK/Sung/ 10:44:55 ... example educaton field 10:45:15 s/educaton/education/ 10:45:24 ... next example big conference 10:45:39 s/example big/example is big 10:46:03 futomi: Motivation of R9 10:46:44 ... requirement of network connectivity 10:47:39 ... API related to R9 is WebRTC 10:47:59 ... and WebRTC Tab Content Capture API 10:48:12 Kang: I don't know this API 10:48:39 TT: sharing video stream? 10:48:44 futomi: Both 10:49:03 s/TT/ 10:49:16 Wook/ 10:49:24 s/TT/Wook/ 10:49:25 futomi: twitter video, text, image and so on 10:49:43 Wook: Data channel 10:50:08 s|s/s/TT/|| 10:50:28 s/Wook/ / 10:50:32 futomi: video service, multiscreen services 10:51:00 Wook: using web intent 10:51:24 futomi: does twitter use broadcasting? 10:51:53 Kang: FF 10:52:09 [Editors Action] add. DataChannel (WebRTC) to R9. Gap analysis. 10:52:30 q? 10:52:32 futomi: the opinion to DAP member 10:53:30 topic: R10 saving contents and playing saved contents 10:53:59 futomi: a use case is playing contents in network trouble 10:54:59 ...
[explain wiki page of R10 on web-based digital signage BG] 10:55:15 look for R10 from http://www.w3.org/community/websignage/wiki/Web-based_Signage_Use_cases_and_Requirements#R9._Synchronizing_contents 10:56:17 futomi: important point is offline web application in R10 use case 10:57:05 topic: R11 Protecting video contents 10:57:41 [Editors Actions delete "HTML5 4.8.6 The video element ", "HTML5 4.8.8 The source element" and "Media Source Extensions" 10:58:30 s/Extensions"/Extensions" from R10 Gap analysis.] 10:58:44 futomi: Twitter served data using encrypted media extensions 10:59:09 topic: R12 Saving log data 10:59:38 sangwhan has joined #signage 11:00:04 futomi: example of log data is who watch ad, when users watch ad, where users watch ad 11:00:06 [Editors Actions]: add "File API" to R10 Gap analysis. 11:02:37 Shinji: log data is evidence of copy 11:02:54 ... server side cannot detect evidence of content copy 11:04:16 Shinji: signage operator wants evidence 11:05:23 Shinji: signitiure in the terminal 11:05:36 ...
serverside cannot PP log data 11:05:38 s/signitiure/signature/ 11:06:21 Wook: CC 11:08:10 s/CC/need to consider proof-of-play/ 11:09:42 futomi: DAP joint meeting starts 13:30 11:11:06 [Editors Actions] add (temporally) with evidence to R11 11:12:09 rrsagent, draft minutes 11:12:09 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html hiroki 11:12:19 Alan has joined #signage 11:16:56 kawakami has joined #signage 11:18:39 a1zu has joined #signage 11:20:28 kawakami_ has joined #signage 12:47:41 RRSAgent has joined #signage 12:47:41 logging to http://www.w3.org/2012/11/02-signage-irc 12:49:50 yoh has joined #signage 13:10:38 sangwhan has joined #signage 13:20:51 a1zu has joined #signage 13:23:34 hiroki has joined #signage 13:23:35 tokamoto has joined #signage 13:26:30 kawada has joined #signage 13:29:32 sangwhan has joined #signage 13:29:38 sangwhan1 has joined #signage 13:30:05 rrsagent, draft minutes 13:30:05 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:30:24 Present+ Sangwhan_Moon 13:30:33 Scribenick: naomi 13:31:42 kotakagi has joined #signage 13:32:10 rrsagent, draft minutes 13:32:10 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:34:42 Shinji has joined #signage 13:37:57 s/Scribenick: naomi// 13:38:06 whyun has joined #signage 13:38:13 rrsagent, draft minutes 13:38:13 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:38:34 ScribeNick: sangwhan1 13:38:40 rrsagent, draft minutes 13:38:40 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:39:02 [ Resuming meeting after break ] 13:39:08 rrsagent, draft minutes 13:39:08 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:39:09 shinichi has joined #signage 13:39:37 shige_ has joined #signage 13:39:43 JonathanJ1 has joined #signage 13:40:39 
Futomi: We should finish reviewing our draft 13:40:53 … further discussion should be done through the mailing list 13:40:54 JonathanJ1 has joined #signage 13:41:06 … let's talk about renewing our draft document 13:41:13 … namely item 3 13:41:17 rrsagent, draft minutes 13:41:17 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan1 13:41:52 kotakagi has joined #signage 13:42:30 -> http://www.w3.org/community/websignage/wiki/Web-based_Signage_Use_cases_and_Requirements#R14._Showing_contents_on_time Web-based Signage Use cases and Requirements, R14. Showing contents on time 13:43:45 Present+ Hiroyuki_Aizu 13:44:54 rrsagent, hello 13:45:00 Kiyoshi__ has joined #signage 13:45:05 Futomi: R14 defines showing contents on a given time 13:45:08 ... as for now, there is no common format for defining playlists 13:45:10 ... XML and JSON based formats are possibilities 13:45:13 ... but we need to analyze what is needed inside the metadata 13:45:15 ... defining format does not need to be done by a working group 13:46:10 … we can alternatively use javascript code to substitute this 13:46:56 naomi has joined #signage 13:46:57 sangwhan has joined #signage 13:47:05 rrsagent, bye 13:47:20 rrsagent, draft minutes 13:47:20 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 13:47:44 rrsagent, make log public 13:47:48 rrsagent, make minutes 13:47:48 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 13:48:07 rrsagent, bye 13:48:07 I see no action items 13:48:13 RRSAgent has joined #signage 13:48:13 logging to http://www.w3.org/2012/11/02-signage-irc 13:48:18 Zakim has joined #signage 13:48:28 rrsagent, this is signage 13:48:28 I'm logging. I don't understand 'this is signage', sangwhan. 
Try /msg RRSAgent help 13:48:49 rrsagent, hello 13:49:04 rrsagent, draft minutes 13:49:04 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 13:49:04 tomoyuki_ has joined #signage 13:51:02 http://www.w3.org/TR/ttaf1-dfxp/ 13:51:27 http://dev.w3.org/html5/webvtt/ 13:51:58 matt has joined #signage 13:52:06 rrsagent, draft seconds 13:52:06 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 13:52:55 ScribeNick: sangwhan 13:53:23 [ Scribe missed a lot of minutes due to technical difficulties ] 13:53:38 skim13 has joined #signage 13:55:24 Noriya: how about WebVTT 13:55:38 futomi:  WebVTT is for captioning on the web 13:56:01 ... not quite similar to the Timed Text Markup Language 13:56:13 futomi:  what is the difference between TTML and WebVTT 13:56:36 futomi:  R14 should stay 13:56:44 topic: R15 Identifying an individual 13:57:02 futomi:  there is one use case - personalization 13:57:04 rrsagent, draft minutes 13:57:04 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 13:59:49 rrsagent, draft minutes 13:59:49 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 14:01:53 rrsagent, draft minutes 14:01:53 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 14:02:38 aizu has joined #signage 14:02:58 rrsagent, draft minutes 14:02:58 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 14:03:26 tomoyuki_ has joined #signage 14:04:09 rrsagent, draft minutes 14:04:09 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt 14:05:06 http://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html#screenshots 14:05:29 futomi has joined #signage 14:07:22 Ralph has joined #signage 14:08:15 rrsagent, draft minutes 14:08:15 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html matt
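[ Editor's note: R14 notes there is no common playlist format and that XML- and JSON-based formats are both possibilities. The sketch below is the editor's own; none of the field names come from any spec, they only illustrate the "show content on time" idea with a tiny time-based selector. ]

```javascript
// Hypothetical JSON playlist for R14; all field names are illustrative.
const playlist = [
  { src: "morning-ad.webm", start: "06:00", end: "12:00" },
  { src: "lunch-menu.html", start: "12:00", end: "14:00" },
  { src: "evening-ad.webm", start: "14:00", end: "24:00" }
];

// Convert "HH:MM" to minutes since midnight.
function toMinutes(hhmm) {
  const [h, m] = hhmm.split(":").map(Number);
  return h * 60 + m;
}

// Pick the entry whose [start, end) window contains the given time,
// or null when nothing is scheduled.
function currentItem(list, hhmm) {
  const now = toMinutes(hhmm);
  return list.find(
    (item) => toMinutes(item.start) <= now && now < toMinutes(item.end)
  ) || null;
}
```

A signage terminal could re-evaluate `currentItem` once a minute and swap the displayed content when the result changes.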
14:11:39 futomi: Do we need to keep the requirements 14:11:44 ... or should we remove the use case? 14:11:47 wook: This will have relations with web identity 14:11:50 ... so this use case should be kept 14:11:52 skim1: There are potential privacy issues which need to be considered 14:12:32 ???: Regarding privacy there are a lot of issues, but if there is user consent 14:12:34 ... I don't see why this would be a problem 14:12:37 ... but an interactive model like this should be considered 14:12:39 Futomi: OK, let's keep this and discuss later 14:12:42 ... Moving on 14:12:44 Topic: R16, Capturing screenshots 14:12:46 Futomi: My personal favorite 14:12:48 ... There is a need because the control centers can monitor the terminals easily 14:12:50 ... If each terminal posts screenshots to the control center periodically 14:12:53 ... it can be used to verify QoS 14:12:55 ... Also, this mechanism can be used as evidence whether an advertisement 14:12:57 ... has been shown on the terminal at a given time or not 14:12:59 ... there are some existing APIs 14:13:01 ... WebRTC tab content capture API seems to be relevant 14:13:04 ... and it fulfills the requirements 14:13:06 ... if it is possible to fetch the screenshot, XHR or WebSockets can be used 14:13:09 ... to transmit the screenshot to the server 14:13:16 aizu: The browser testing tools working group is working on screenshot API 14:13:20 rrsagent, draft minutes 14:13:20 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 14:13:37 s/http://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html#screenshots// 14:13:52 rrsagent, draft minutes 14:13:52 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 14:14:12 ... The specification is here http://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html#screenshots 14:14:17 Futomi: What is the format?
14:14:20 B: It is a pre-encoded image 14:14:27 s/B/aizu/ 14:14:57 ryosuke has joined #signage 14:15:01 Ralph has left #signage 14:16:30 topic:  R17 - Seamless transition of contents 14:16:34 futomi:  suggested by ETRI 14:17:00 skim1:  The requirement is for signage terminals to be able to play contents 14:17:08 s/aizu: It is a pre-encoded image/aizu: It is lossless PNG images encoded using Base64/ 14:17:27 matt has left #signage 14:18:14 ... in a normal web page, transition will introduce blinking 14:18:47 ... which isn't quite natural to see when you want a smooth transition between content 14:19:08 … it might be possible to do this with CSS transitions/animations 14:19:18 Futomi: any comments? 14:19:38 wook: Small comment, I heard from Chaals that XHR level 2 is now just XHR 14:19:42 Futomi: I will fix that 14:19:57 ACTION: Futomi to fix XHR reference on use case wiki page 14:21:55 shige has joined #signage 14:29:18 Topic: R18 Interactivity with the call center 14:29:20 skim13: Use case, when you are at a shopping mall 14:29:22 ... when there is too much information 14:29:25 ... this makes it possible to directly connect with a call center 14:29:34 ... and interact with a human operator 14:29:36 ... WebRTC can probably be used for this 14:29:38 ... simple online assistance can be achieved with WebSockets 14:29:40 ... metadata can be packaged in XML 14:29:42 Futomi: This is a very valid use case 14:29:43 ... do you think there are further spec references needed? 14:29:45 ... like media capture streams? 14:29:47 kotakagi: That might be overkill 14:29:49 futomi: We don't need to use IP for voice transmission 14:29:50 ... a normal landline hook could work as well 14:29:52 shinji: Do we have to use WebRTC? 14:29:54 ... what is the benefit from doing so? 14:29:58 futomi: Benefit is that it is free 14:30:10 Kang: I think this should be considered as a secondary feature 14:30:12 Futomi: Any further comments?
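[ Editor's note, returning to R16: aizu notes above that WebDriver screenshots are lossless PNGs encoded using Base64, so a control center could sanity-check an uploaded payload by its PNG signature before archiving it as proof-of-play. A sketch in Node.js; the function name is the editor's invention. ]

```javascript
// Check that a Base64 string decodes to data starting with the PNG
// signature (0x89 'P' 'N' 'G' CR LF 0x1A LF), which a Base64-encoded
// PNG screenshot payload should.
function isPngScreenshot(base64) {
  const bytes = Buffer.from(base64, "base64");
  const signature = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  return bytes.length >= signature.length &&
    signature.every((b, i) => bytes[i] === b);
}
```

Such a check only validates the container, not the image contents; a real proof-of-play pipeline would also record the terminal ID and capture time alongside the payload.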
14:30:15 [ None ] 14:30:17 Futomi: Moving on 14:30:19 Topic: R19 Video streaming 14:30:21 wook: This requirement is for providing live video streaming 14:30:35 ... this is important because live information is better than anything static 14:30:38 ... probably the most viable option is to use the video tag 14:30:39 ... since there is no transport level limitation in the spec 14:30:41 ... RTSP/HLS/MPEG-DASH can be used as transport 14:30:44 Futomi: Very good point 14:30:46 ... HLS is currently usable, MPEG-DASH can probably be used in the future 14:30:49 [ 30 minute break, resuming at 16:00 CET ] 14:30:52 rrsagent, draft minutes 14:30:52 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 14:32:42 ryosuke has joined #signage 14:33:54 Present+ Koichi_Takagi 14:34:26 s/rrsagentd, draft minutes// 14:34:37 s/hoge// 14:35:30 tomoyuki_ has joined #signage 14:47:32 tomoyuki has joined #signage 14:58:37 kotakagi has joined #signage 14:59:37 shinichi has joined #signage 15:00:10 whyun has joined #signage 15:03:44 ryosuke has joined #signage 15:05:18 subscribenick: ryosuke 15:05:21 Kiyoshi__ has joined #signage 15:05:32 Next session: What's nex 15:05:37 scribenick: ryosuke 15:05:44 s/nex/next/ 15:06:15 ...Making a new document 15:06:22 futomi: use case and requirement finish today 15:06:52 … we need continue discussion 15:06:56 s/Next session: What's next/Topic: What's next step 15:07:11 futomi: KDDI discuss map and mapping using SVG yesterday 15:07:39 ... 
today we continue discussing this topic 15:08:03 Shinji: we need milestone 15:08:15 toru: BG is temporaly group 15:08:21 s/discussion/discussion use-case and requirement/ 15:08:31 futomi: our BG is expired 15:10:02 toru: should deside a goal of WG 15:10:32 shige has joined #signage 15:11:18 s/deside/decide/ 15:11:36 futomi: we propose concreately what should be next 15:11:58 rrsagent, draft minutes 15:11:58 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan 15:12:18 futomi: We need consensus 15:12:21 s/temporaly/a temporary/ 15:12:22 okamoto: trying to identify a new document 15:12:48 s/concreately/concretely/ 15:13:10 toru: I just want to know goal of making a new document 15:13:38 futomi: what is the goal of toru? 15:14:37 toru: today's goal is that our opiniton tell MMI or DAP WG 15:14:49 s/opinition/position/ 15:14:57 scribenick: naomi 15:15:10 shige: let's clarify the definition of "goal" 15:15:22 ... not necessary to talk to other WG 15:15:40 ... you can identify solutions within this BG that will work 15:15:46 s/okamoto/shige 15:16:00 s/okamoto/shigeo/ 15:16:36 futomi: the purpose of BG is not clarified 15:16:42 ... we can do anything in anyway 15:16:44 scribenick: sangwhan 15:16:55 futomi: If we want to do something, it should be possible to do so 15:17:05 … Do you all agree? 15:17:29 Shinji: Since a BG cannot standardize anything, we should think about talking with WGs 15:17:41 … to clarify the use cases that are needed for our BG 15:17:50 … this should probably be the main goal for our group 15:18:08 … if you would like to propose another goal, we are open to that 15:18:38 … but let's aim to make a deliverable by coming summer 15:18:50 ack a1zu 15:19:00 Shigeo: Is standardization the goal for us?
15:19:14 … because my impression is that this is not the case
15:19:20 Aizu: I have two comments
15:19:51 … maybe BGs and CGs can write draft documents
15:19:56 … and then we can submit them to a WG
15:20:05 … for example, Core Mobile and Responsive Images did so
15:20:28 Futomi: So we'll change the main focus to "communicating with WGs"
15:20:41 … we can do the other activities as well
15:21:19 Aizu: We would like W3C members to understand what exactly web-based signage is
15:21:31 … and maybe create a prototype and demo during the next TPAC
15:21:53 Futomi: I agree, we should try to make a demo during the next TPAC
15:22:52 [ Other activities mentioned: 1. Drafting a BP document 2. Continuing to update the use cases 3. Discussion on map and mapping with SVG ]
15:23:16 Futomi: Which timeframe should we aim for for the milestones?
15:23:38 noriya has joined #signage
15:23:57 … my opinion is that we should aim for summer, but Shinji mentioned that might be too late
15:24:08 Shigeo: Shouldn't the schedule be up to the members to decide?
15:24:32 Futomi: Yes
15:24:53 Wook: I agree with the point that we should have a deadline
15:25:06 … but I don't think it's something that we need to decide during this meeting
15:25:16 … we can probably discuss further on the mailing list
15:25:46 sangwhan: you can probably use the W3C polling system
15:25:49 ACTION: Futomi to setup a poll for the deadlines
15:25:58 rrsagent, draft minutes
15:25:58 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan
15:26:30 Sangwhan: Since this group will be temporary
15:26:38 ... I believe that there should be a deliverable document for future reference
15:26:49 ... people start looking @1
15:27:55 Sunghan: I have a comment regarding the activities of the BG
15:28:39 … I would like to address the fact that the wiki page needs to be kept up to date with
15:28:49 … the current status of the BG's activities
15:29:41 Futomi: Good point
15:30:34 Shinji: I would still like to propose that we aim for a milestone in the coming spring
15:30:45 … we can still continue the discussion after the milestone
15:31:23 [ Shinji presenting a Gantt chart of the business group's activity timeline ]
15:31:43 Shinji: We had an AC meeting in May
15:32:13 … to continue the BG activity we really need to define a milestone for the activity report
15:32:27 … we need a schedule
15:32:56 Shigeo: Since there are new people in this group, I would like to ask whether there was a consensus
15:33:07 … from my understanding there is no charter
15:33:18 … and I don't think the group agreed to such a schedule
15:35:04 … although there was a year-long plan agreed in the group
15:35:26 Futomi: I think we should have a milestone, but the point is that the deadline should not be concrete
15:36:25 … we can try to finish our document by April
15:36:44 … so, when should we aim to finish the use cases and requirements?
15:37:14 Sunghan: I agree with your general idea, but we should probably take this discussion online, e.g. by using the wiki
15:37:51 Naomi: This is why Sangwhan said we should use the W3C poll system
15:38:09 Futomi: OK, let's set this up after TPAC
15:39:03 Alan has joined #signage
15:39:04 … as for the current goals, does everyone agree?
15:39:55 Please see http://www.w3.org/community/websignage/wiki/TPAC2012_KDDI_Input#Relation_to_System_Applications_WG
15:39:59 [ KDDI presenting a proposal for upcoming activities ]
15:41:08 Toshi: We are talking with the System Applications Working Group about collaborating with this group
15:41:24 … presenting the SysApps WG charter
15:42:01 [ For reference: http://www.w3.org/2012/sysapps/ ]
15:42:57 … system applications cover a large part of the requirements needed for signage
15:43:05 … but there are some things that are not covered
15:43:41 … we have thought about the Raw Sockets API, although this idea is not concrete
15:43:53 … we should also consider the security model when interacting with other devices
15:44:26 … a trusted application model is an absolute must for the signage use case
15:44:44 … this was discussed in the plenary day breakout session of the SysApps WG
15:46:51 … so we should think about which use cases map to which WGs
15:46:56 … opinions?
15:47:01 Futomi: Good point
15:47:17 tokamoto has joined #signage
15:47:47 … I have been talking with the relevant working group chairs
15:52:39 tokamoto has joined #signage
15:56:27 Sangwhan: I believe this is not something we should decide right now, and we have to be flexible with the milestones and deadlines due to the nature of the W3C
15:57:41 skim13 has left #signage
15:57:45 Futomi: As a side question, does everyone agree to continue this BG next year?
15:58:08 [ Consensus reached ]
15:58:23 Futomi: OK, then we should discuss further online
15:58:29 … meeting adjourned
15:58:36 [ End of meeting ]
15:58:41 rrsagent, make log public
15:58:46 rrsagent, make minutes
15:58:46 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan
15:58:55 rrsagent, publish minutes
15:58:55 I have made the request to generate http://www.w3.org/2012/11/02-signage-minutes.html sangwhan
15:59:01 zakim, bye
15:59:01 Zakim has left #signage
15:59:13 rrsagent, bye
15:59:13 I see 2 open action items saved in http://www.w3.org/2012/11/02-signage-actions.rdf :
15:59:13 ACTION: Futomi to fix XHR reference on use case wiki page [1]
15:59:13 recorded in http://www.w3.org/2012/11/02-signage-irc#T14-19-57
15:59:13 ACTION: Futomi to setup a poll for the deadlines [2]
15:59:13 recorded in http://www.w3.org/2012/11/02-signage-irc#T15-25-49
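The R19 discussion earlier in the log settles on the video tag, with HLS usable today and MPEG-DASH expected later. As a rough illustration of the transport-fallback logic a signage player would need, here is a minimal sketch; the URLs, MIME types, and the `pickSource` helper are hypothetical examples, not anything the group specified:

```javascript
// Hypothetical sketch of R19-style source selection: a signage player
// offers several transports (HLS, MPEG-DASH, plain MP4) and picks the
// first one the video element reports it can play. `canPlayType` is
// passed in so the logic can also run outside a browser.

const CANDIDATE_SOURCES = [
  { url: "https://example.com/live/stream.m3u8", type: "application/vnd.apple.mpegurl" }, // HLS
  { url: "https://example.com/live/stream.mpd",  type: "application/dash+xml" },          // MPEG-DASH
  { url: "https://example.com/live/stream.mp4",  type: "video/mp4" },                     // static fallback
];

// canPlayType returns "", "maybe", or "probably",
// matching HTMLMediaElement.canPlayType().
function pickSource(sources, canPlayType) {
  let firstMaybe = null;
  for (const src of sources) {
    const answer = canPlayType(src.type);
    if (answer === "probably") return src;        // confident match wins immediately
    if (answer === "maybe" && firstMaybe === null) firstMaybe = src;
  }
  return firstMaybe; // null if nothing is playable
}
```

In a browser, `canPlayType` would be `video.canPlayType.bind(video)`; engines without native HLS or DASH support typically need a library such as hls.js or dash.js on top of Media Source Extensions.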