W3C

- DRAFT -

Web-based Signage BG f2f Meeting at TPAC2012 - Day 2

02 Nov 2012

See also: IRC log

Attendees

Present
Sebastian_Feuerstack, Shigeo_Okamoto, Helena_Rodriguez, Kohei_Kawakami, Jaejeung_Kim, Soobin_Lee, Gisung_Kim, Shinichi_Nakao, Noriya_Sakamoto, Hiroshi_Yoshida, Shin-Gak_Kang, Kaz_Ashimura, Sung_Hei_Kim, Deborah_Dahl, Toru_Kobayashi, Dan_Burnett, Wook_Hyun, Karen_Myers, Chong_Gu, Hiroki_Yamada, Masayoshi_Ishida, Ryoichi_Kawada, Toshiyuki_Okamoto, Shinji_Ishii, Naomi_Yoshizawa, Ryosuke_Aoki, Sangwhan_Moon, Hiroyuki_Aizu, Koichi_Takagi
Regrets
Chair
futomi
Scribe
naomi

Contents



<Ryosuke> scribe: naomi


futomi: good morning, thank you for coming
... today's agenda - joint meeting with MMI WG
... thank you for coming MMI WG people
... we appreciate having your advice
... very happy to meet and welcome you
... after that we continue our discussion, then have a meeting with the DAP WG
... 13:30 - 14:30 at their room

karen: thank you for coming
... we appreciate the participants, especially those from North America

[ everybody nods ]

kaz: Kaz Ashimura, activity lead of MMI

daniel: chair of the Voice Browser WG and editor of a couple of Web and TV documents

james: worked on WebRTC, MMI and Voice Browser

[ introducing themselves - Chong, Deborah, Helena, Sebastian ]

futomi: [ introducing himself ]

deborah: we have 3 presentations

james: will explain about multimodal architecture overview

[ explaining slides ]

james: @@

<scribe> scribenick: Ryosuke

james: html5 voicexml TTS all independent
... explaining the architecture of MMI
... diagram of a modality component
... suppose a speech recognition system
... components are black boxes to each other
... interoperability testing
... 3 vendors: speech, image, @@

<kaz> toru: could you give us any concrete examples?

james: in a business model, it would depend @@

<kaz> jim: one company might be an expert of voice, but not so for graphics

<kaz> toru: dynamic discovery is included?

toru: technical issue, discovery including architecture

james: yes

<kaz> jim: will be mentioned later

shinji: are all codecs controlled?

<kaz> shinji: multiple modalities of the same kind?

<kaz> jim: possible

james: anyone can control each component.

<kaz> debbie: e.g., multiple languages

<naomi> scribenick: kaz

jim: IM decides which modality should be used
... might choose different modalities, e.g., different languages

toru: IM knows the capability of each MC?

jim: MC is, e.g., sensors
... and the IM is the brain

toru: which part is the server?

jim: either way is possible

debbie: for example, Openstream's implementation includes all the components
... on cell phones

jim: standard piece here is "how all the components talk with each other"
... all we define here is messages between the components (=MCs and IM)

[ that's why the name of the spec is "Multimodal Architecture and Interfaces" ]

wook: transport?

jim: we used HTTP for the first version of the interoperability testing prototype

(some more questions)

jim: application logic is handled by the IM
... MVC model
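
[ A rough sketch of the message interface just described: an IM posting a StartRequest-style life-cycle event to an MC over HTTP, the transport used in the interoperability test. The URLs and IDs are hypothetical, and the exact element and attribute names should be checked against the MMI Architecture and Interfaces specification. ]

    // Illustrative only: an IM sending a StartRequest-like life-cycle event
    // to a modality component over HTTP. URLs and IDs are hypothetical.
    const startRequest = `
      <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
        <mmi:StartRequest Context="ctx-1" RequestID="req-1"
                          Source="http://im.example.org/im"
                          Target="http://mc.example.org/mc">
          <mmi:ContentURL href="http://example.org/prompt.vxml"/>
        </mmi:StartRequest>
      </mmi:mmi>`;

    const xhr = new XMLHttpRequest();
    xhr.open("POST", "http://mc.example.org/mc", true);   // MC's event endpoint (hypothetical)
    xhr.setRequestHeader("Content-Type", "application/xml");
    xhr.send(startRequest);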

MMI Discovery - helena

helena: Discovery & Registration
... MMI Architecture is an architecture for components to orchestrate
... MC layer is abstract and generic
... responsible for tasks
... can have multiple devices that provide a task
... Modality Component life-cycle
... advertisement, discovery, registration, and control
... 1. Advertisement
... reach correctness in the MC retrieval

futomi: MC?

helena: Modality Components of the MMI Architecture
... and what must be advertised?
... functional and non-functional information
... e.g., two concurrent synthesizers
... examples are DLNA, Bonjour, Intent and Web Services

futomi: seems like UPnP

helena: DLNA uses UPnP for discovery
... and what is needed for MMI?

<Shinji> Meeting: Joint meeting MMI WG and Web-based signage BG in TPAC2012

helena: (explains the idea using a picture)
... next Discovery
... four types of discovery criteria
... task goal, intention, behavior and capacities
... fixed, passive, active and mediated
... you use underlying technologies
... requirement is the need of a mechanism of discovery using the MMI events
... (shows another picture on discovery)
... and Registration
... implies information storing, indexing criteria, registration state and registration distribution
... you can install MCs on some server
... and the information can be distributed
... requirement: system state handling, multimodal session and registry updates
... and then Control
... use the same MMI life-cycle events to control registration and registration updates

debbie: one use case?
... public signage could be one

helena: we have a set of use cases

-> http://www.w3.org/TR/mmi-discovery/ use cases note

helena: UI on a mobile
... interacts with big screens
... that is one possibility
... connect with a public display
... communicate with MCs via IM

futomi: my understanding is ...
... there is a big screen for digital signage
... and I have a mobile
... which communicate with the big screen
... maybe there is a list of devices close by on the mobile
... and I can choose which display to be connected

helena: right
... currently we need to stop in front of the display at stations
... using this mechanism devices can interact with each other via IM

futomi: where is the controller (IM)?
... and where are the MCs?

helena: MMI Architecture allows nested structure
... so the IM could be installed on the signage display
... or a separate server is another possibility
... it depends on the application

futomi: MC is not free
... we're a signage operator
... it would be good if we could provide an IM to control the service

jim: right

helena: you can do that
... the question is rather what your own criteria is

shinji: please share the information

kaz: we'll let you know about the URIs

<naomi> kaz: [ showing a demo ]

<naomi> ... a possible system of the MMI architecture

<naomi> ... one interaction manager, DLNA, GUI, VUI MC (VoiceXML), GUI MC (HTML5) and other services on the web, e.g., EPG

<naomi> ... two windows connected by a simple socket XML

<naomi> scribenick: Ryosuke

kaz: demo of TV control using voice interface

toru: does this demo use discovery?

kaz: no
... there is no discovery on this demo
... both the components know each other's IP address and ports

<naomi> scribenick: kaz

<inserted> toru: plan for systems which need discovery?

<inserted> kaz: the MMI WG has been working on interoperability testing, and the next version of the interoperability testing prototype system should include discovery capability. Please help us :)

Signage use cases

futomi: Web-based Signage use cases
... we have 19 use cases
... we're now updating them
... some of them are related to MMI


scribe: most digital signage just shows information in one direction
... but the future ones should be interactive
... so we listed possible use cases
... R5: Discovered by personal devices

<hiroki> Web-based Signage Use cases and Requirements

futomi: (explains the UC)
... I can see the big display
... but I can't touch it
... how can I interact with it?
... probably I could use my smartphone as the UI for the display

jim: a signage terminal as an IM uses users' devices as MCs
... discovery work is very important here
... this idea fits MMI Architecture

futomi: there are many other UCs
... maybe not related to MMI, though

helena: Web Intents is very much modality-oriented
... UPnP is used for many devices
... the coordination capability provided by the MMI Architecture should be useful

gisung: how can I select one from various signage devices?


helena: depends on application

futomi: MMI Architecture doesn't define that part

jim: terminals agree with each other

kaz: that's similar to wifi connection :)

futomi: interested in the work of the MMI WG
... would like to include MMI's work in the gap analysis

debbie: if you could, please give comments to the discovery work
... it would be very helpful if you could review the note

<naomi> kaz: [ explaining slides ]

<naomi> ... @2

<naomi> futomi: emotion means markup language?

<naomi> kaz: yes

<naomi> toru: futomi might not mention not only @3

<naomi> futomi: meta data

<naomi> [ everybody nods ]

<naomi> ddahl: could have an avatar

<naomi> ... shows angry face

<naomi> ... detected from the speech

<naomi> ... two separated implementation

<naomi> futomi: there should be many use cases

<naomi> ... a markup language that represents, let's say, emotion

<naomi> ... Can this specify the level of emotion?

<naomi> kaz: Yes, use value attribute. e.g. value="0.85"

<naomi> daniel: we need to agree on what the emotions are

<naomi> ... need to research what emotions are

<naomi> futomi: would be very hard to define

<naomi> ... emotions in each country are different

<naomi> ... how do you solve that?

<naomi> ddahl: the group recognizes that

<naomi> ... 5-6 comments

<naomi> ... vocabularies

<naomi> ... you can define your own vocabularies

<naomi> kaz: that's why we provided @4

<naomi> ddahl: 5-6 recommended

<naomi> ryoichi: how do you use it?

<naomi> kaz: sets of emotion markup language

<naomi> daniel: any emotion could be readable with this vocabulary

<naomi> helena: if you want to express questioning or anxiety, you can use the annotation language

<naomi> ... performed - has a lot of use cases

<naomi> toru: what is the most applicable use case of this

<naomi> kaz: examples are given in the implementation report

<naomi> ... avatar system, 3D

<naomi> ... face recognition

<naomi> ddahl: product testing

<naomi> ... people's stated opinion is good but their faces might show they don't like it

<naomi> kaz: one of NWook's departments is working on emotion analysis

<naomi> ... for call centers

<naomi> ddahl: that's a good use case

<naomi> futomi: break!

<naomi> ... appreciate your attendance

<naomi> [ everybody applauds ]

<Ryosuke> scribenick: ryosuke

futomi: yesterday we finished at R6; today we start from R7
... explaining a summary of R7
... explaining the use cases of R7
... R7 includes the case of stock information in real-time communication
... real-time communication using digital signage means live streams such as live news or a notice board
... the fourth use case is a fire disaster
... the use case assumes a shopping center where there are many displays
... digital signage informs us about the fire area, etc.
... the next use case is an earthquake
... Charles commented on the Push API yesterday
... add the Push API to the techniques related to R7

<hiroki> Push API spec

futomi: explanation of the WebSocket API, which is lower layer

yamada: @@
... push API on SNS in the radio

<Shinji> [Editors Action] add "Push API" to the R7 gap analysis.

Kang: BB
... the server sends information using the Push API
... explanation of the Push API
... talking about a fire disaster
... CC

futomi: pre-install application on digital signage

<whyun> http://www.w3.org/TR/2012/WD-push-api-20121018/

Kang: receiving information in real time using interaction with digital signage

Shinji: how do we switch from the normal case to the disaster case on a digital signage?

futomi: assuming automatic switching
... we can build a system which automatically switches to disaster mode.
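
[ A minimal sketch of the automatic switching just mentioned, assuming the server pushes emergency messages over a WebSocket; the endpoint, the message format and the showEvacuationMap() helper are hypothetical. ]

    // Signage page listening for pushed emergency messages and switching
    // into disaster mode. Endpoint and message shape are assumptions.
    declare function showEvacuationMap(area: string): void;  // hypothetical helper

    const ws = new WebSocket("wss://signage.example.org/notifications");
    ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === "disaster") {
        document.body.classList.add("disaster-mode");  // switch the layout
        showEvacuationMap(msg.area);                   // show evacuation guidance
      }
    };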

Kang: DD

futomi: Let's talk about terminal side
... how to do sensor authentication?

Shinji: talking about an example
... the signage system watches the disaster situation using a camera

futomi: Our scope is not multiple signage but a single signage

Kang: R6 is audio measurement

futomi: FF

Kang: server can push emergency information to specific screen

R8 identifying a location of a terminal

futomi: a use case is ads based on a location
... an example situation for this use case is a train station
... the problem is that different stations have digital signage showing the same content
... APIs related to R8 are the Geolocation API Specification and Geolocation API Specification Level 2
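
[ A small sketch of how the Geolocation API could pick station-specific content so that the same web application shows different ads per location; loadAdsForStation() and loadDefaultAds() are hypothetical helpers. ]

    declare function loadAdsForStation(lat: number, lon: number): void;  // hypothetical
    declare function loadDefaultAds(): void;                             // hypothetical

    navigator.geolocation.getCurrentPosition(
      (pos) => loadAdsForStation(pos.coords.latitude, pos.coords.longitude),
      ()    => loadDefaultAds()   // fall back when the position is unavailable
    );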

okamoto: FF

futomi: twitter route

Sung: map

futomi: discuss on map or mapping
... next topic

R9 Synchronizing contents

futomi: a use case is watching course materials on a tablet
... example: the education field
... the next example is a big conference
... motivation of R9
... requirement of network connectivity
... API related to R9 is WebRTC
... and WebRTC Tab Content Capture API

Kang: I don't know this API

???: sharing video stream?

futomi: Both


futomi: twitter video, text, image and so on

???: Data channel


futomi: video service, multiscreen services

Wook: using web intent

futomi: does twitter use broadcasting?

Kang: FF

<Shinji> [Editors Action] add DataChannel (WebRTC) to the R9 gap analysis.

futomi: the opinion to DAP member
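
[ A brief sketch of the DataChannel idea recorded in the editor's action above: a signage terminal distributing synchronization commands to nearby devices over WebRTC. Signalling (offer/answer exchange) is omitted, and the channel label and message format are arbitrary choices for the sketch. ]

    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel("sync");   // "sync" is an arbitrary label
    channel.onopen = () => {
      channel.send(JSON.stringify({ action: "showSlide", index: 12 }));
    };
    // ... exchange of SDP offer/answer and ICE candidates with the peer omitted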

R10 saving contents and playing saved contents

futomi: a use case is playing contents during network trouble
... [explain wiki page of R10 on web-based digital signage BG]

<naomi> look for R10 from http://www.w3.org/community/websignage/wiki/Web-based_Signage_Use_cases_and_Requirements#R9._Synchronizing_contents

futomi: the important point in the R10 use case is offline web applications
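
[ A tiny sketch of the offline idea, assuming the page has pre-cached content locally and switches to it when connectivity drops; both helpers are hypothetical. ]

    declare function playFromLocalCache(): void;  // hypothetical: play pre-downloaded content
    declare function resumeStreaming(): void;     // hypothetical: go back to networked content

    window.addEventListener("offline", () => playFromLocalCache());
    window.addEventListener("online",  () => resumeStreaming());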

R11 Protecting video contents

<Shinji> [Editors Action] delete "HTML5 4.8.6 The video element", "HTML5 4.8.8 The source element" and "Media Source Extensions" from the R10 gap analysis.

futomi: Twitter served data using encrypted media extensions

R12 Saving log data

futomi: examples of log data are who watches an ad, when users watch it, and where they watch it

<Shinji> [Editors Action] add "File API" to the R10 gap analysis.

Shinji: log data is evidence of copy
... server side cannot detect evidence of content copy
... signage operators want evidence
... a signature in the terminal
... the server side cannot PP log data

Wook: need to consider proof-of-play

futomi: DAP joint meeting starts 13:30

<Shinji> [Editors Action] add (temporarily) "with evidence" to R11


<sangwhan1> ScribeNick: sangwhan1

[ Resuming meeting after break ]

Futomi: We should finish reviewing our draft

… further discussion should be done through the mailing list

… let's talk about renewing our draft document

… namely item 3

<hiroki> Web-based Signage Use cases and Requirements, R14. Showing contents on time

Futomi: R14 defines showing contents at a given time
... as for now, there is no common format for defining playlists
... XML and JSON based formats are possibilities
... but we need to analyze what is needed inside the metadata
... defining format does not need to be done by a working group

… we can alternatively use javascript code to substitute this
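
[ One possible shape for such a JSON playlist plus a small in-page scheduler, purely as a sketch; the format, the element id and the duration-based scheduling are all assumptions. ]

    // Hypothetical playlist format: what to show and for how long (seconds).
    const playlist = [
      { url: "content/morning-ads.html", duration: 30 },
      { url: "content/lunch-menu.html",  duration: 60 }
    ];

    const stage = document.querySelector("iframe#stage") as HTMLIFrameElement;  // assumed element
    let index = 0;

    function showNext(): void {
      const item = playlist[index % playlist.length];
      stage.src = item.url;
      index++;
      setTimeout(showNext, item.duration * 1000);   // advance after the item's duration
    }
    showNext();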

<whyun> http://www.w3.org/TR/ttaf1-dfxp/

<Shinji> http://dev.w3.org/html5/webvtt/

<sangwhan> ScribeNick: sangwhan

[ Scribe missed a lot of minutes due to technical difficulties ]

Noriya: how about WebVTT

futomi: WebVTT is for captioning on the web
... not quite similar to the Timed Text Markup Language
... what is the difference between TTML and WebVTT?
... R14 should stay

R15 Identifying an individual

futomi:  there is one use case - personalization

<aizu> http://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html#screenshots

futomi: Do we need to keep the requirements
... or should we remove the use case?

wook: This will have relations with web identity
... so this use case should be kept

skim1: There are potential privacy issues which need to be considered

???: Regarding privacy there are a lot of issues, but if there is user consent

scribe: I don't see why this would be a problem
... but an interactive model like this should be considered

Futomi: OK, let's keep this and discuss later
... Moving on

R16 Capturing screenshots

Futomi: My personal favorite
... There is a need because the control centers can monitor the terminals easily
... If each terminal posts screenshots to the control center periodically
... it can be used to verify QoS
... Also, this mechanism can be used as evidence of whether an advertisement
... has been shown on the terminal at a given time or not
... there are some existing APIs
... WebRTC tab content capture API seems to be relevant
... and it fulfills the requirements
... if it is possible to fetch the screenshot, XHR or WebSockets can be used
... to transmit the screenshot to the server

aizu: The browser testing tools working group is working on screenshot API


scribe: The specification is here http://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html#screenshots

Futomi: What is the format?

aizu: It is lossless PNG images encoded using Base64
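
[ A sketch of the reporting half: posting the Base64 PNG to a monitoring server over XMLHttpRequest. The server URL, payload shape and terminal id are assumptions; the screenshot string would come from whatever capture mechanism is used. ]

    function uploadScreenshot(base64Png: string): void {
      const xhr = new XMLHttpRequest();
      xhr.open("POST", "https://monitor.example.org/screenshots", true);  // hypothetical endpoint
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify({
        terminal: "terminal-42",        // hypothetical terminal id
        capturedAt: Date.now(),
        image: base64Png                // Base64-encoded PNG from the capture mechanism
      }));
    }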

R17 Seamless transition of contents

futomi:  suggested by ETRI

skim1: The requirement is for signage terminals to be able to play contents
... in a normal web page, transition will introduce blinking
... which isn't quite natural to see when you want a smooth transition between content

… it might be possible to do this with CSS transitions/animations
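
[ A minimal sketch of the CSS-transition approach, assuming two stacked iframes (#front and #back) that the page cross-fades between; the element ids and timing are assumptions. ]

    function crossFade(nextUrl: string): void {
      const front = document.querySelector("iframe#front") as HTMLIFrameElement;
      const back  = document.querySelector("iframe#back")  as HTMLIFrameElement;
      back.src = nextUrl;                       // load the next content behind the current one
      back.onload = () => {
        front.style.transition = "opacity 1s";  // fade rather than blink
        front.style.opacity = "0";              // reveals the freshly loaded layer
      };
    }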

Futomi: any comments?

wook: Small comment, I heard from Chaals that XHR level 2 is now just XHR

Futomi: I will fix that

<scribe> ACTION: Futomi to fix XHR reference on use case wiki page [recorded in http://www.w3.org/2012/11/02-signage-minutes.html#action01]

R18 Interactivity with the call center

skim1: Use case: when you are at a shopping mall
... when there is too much information
... this makes it possible to directly connect with a call center
... and interact with a human operator
... WebRTC can probably be used for this
... simple online assistance can be achieved with WebSockets
... metadata can be packaged in XML
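
[ A small sketch of the WebRTC side of this use case: capturing the terminal's camera and microphone and attaching them to a peer connection towards the call center; the signalling exchange is omitted. ]

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
      .then((stream) => {
        const pc = new RTCPeerConnection();
        stream.getTracks().forEach((track) => pc.addTrack(track, stream));
        // ... create an offer and exchange it with the call-center server (omitted)
      });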

Futomi: This is a very valid use case
... do you think there are further spec references needed?
... like media capture streams?

kotakagi: That might be overkill

futomi: We don't need to use IP for voice transmission
... a normal landline hook could work as well

shinji: Do we have to use Web RTC?
... what is the benefit from doing so?

futomi: Benefit is that it is free

Kang: I think this should be considered as a secondary feature

Futomi: Any further comments?

[ None ]

Futomi: Moving on

R19 Video streaming

wook: This requirement is for providing live video streaming
... this is important because live information is better than anything static
... probably the most viable option is to use the video tag
... since there is no transport level limitation in the spec
... RTSP/HLS/MPEG-DASH can be used as transport

Futomi: Very good point
... HLS is currently usable, MPEG-DASH can probably be used in the future
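
[ A sketch of the video-element approach mentioned above; the stream URL is hypothetical, and native playback of HLS or MPEG-DASH depends on the browser running on the terminal. ]

    const video = document.createElement("video");
    video.src = "https://live.example.org/channel1/playlist.m3u8";  // hypothetical HLS stream
    video.autoplay = true;
    video.muted = true;          // many browsers require muted playback for autoplay
    document.body.appendChild(video);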

[ 30 minute break, resuming at 16:00 CET ]


What's next step

<hiroki> scribenick: ryosuke

<Shinji> ...Making a new document

futomi: we finish the use cases and requirements today

<Shinji> … we need to continue discussing the use cases and requirements

futomi: KDDI discussed map and mapping using SVG yesterday
... today we continue discussing this topic

Shinji: we need a milestone

toru: BG is a temporary group

futomi: our BG will expire

toru: should decide a goal of WG

futomi: we propose concretely what should be next

<Shinji> futomi: We need consensus

shige: trying to identify a new document

toru: I just want to know the goal of making a new document

futomi: what is toru's goal?

toru: today's goal is to tell our position to the MMI or DAP WG


<naomi> scribenick: naomi

shige: let's clarify the definition of "goal"
... not necessary to talk to other WG
... you can identify solutions within this BG that will work

futomi: the purpose of the BG is not clarified
... we can do anything in any way

<sangwhan> scribenick: sangwhan

futomi: If we want to do something, it should be possible to do so

… Do you all agree?

Shinji: Since a BG cannot standardize anything, we should think about talking with WGs

… to clarify the use cases that are needed for our BG

… this should probably be the main goal for our group

… if you would like to propose another goal, we are open for that

… but let's aim to make a deliverable by coming summer

Shigeo: Is standardization the goal for us?

… because my impression is that is not the case

Aizu: I have two comments

… Maybe BGs and CGs can write draft documents and submit them

… and then we can submit it to a WG

… for example Core Mobile and Responsive Images did so

Futomi: So we'll change the main focus to "communicating with WGs"

… the other activities we can do as well

Aizu: We would like to have W3C members to understand what exactly web based signage is

… and maybe create a prototype and demo during next TPAC

Futomi: I agree, we should try to make a demo during next TPAC

[ Notes that other activities mentioned are 1. Drafting a BP document 2. Continue to upgrade use cases 3. Discussion on map and mapping with SVG ]

Futomi: Which timeframe should we aim for the milestones?

… my opinion is that we should aim for summer, but Shinji mentioned that might be too late

Shigeo: Shouldn't the schedule be up to the members to decide?

Futomi: Yes

Wook: I agree with the point that we should have a deadline

… but I don't think it's something that we need to decide during this meeting

… we can probably discuss further on the mailing list

<naomi> sangwhan: you can probably use the W3C polling system

<scribe> ACTION: Futomi to setup a poll for the deadlines [recorded in http://www.w3.org/2012/11/02-signage-minutes.html#action02]

<naomi> sangwhan: since this group will be temporary

<naomi> ... I believe that there should be a deliverable document for future reference

<naomi> ... people start looking @1

Sunghan: I have a comment regarding the activities of the BG

… I would like to address the fact that the wiki page needs to be up-to-date with

… the current status of the BG's activities



Futomi: Good point

Shinji: I would like to still address that we should aim for a milestone coming spring

… we can still continue discussion after the milestone

[ Shinji presenting a Gantt chart of the business group's activity timeline ]

Shinji: We had an AC meeting in May

… to continue the BG activity we really need to define a milestone for the activity report

… we need a schedule

Shigeo: Since there are new people in this group, I would like to ask you if there was a consensus

… from my understanding there is no charter

… and I don't think the group agreed to such a schedule

… although there was a year long plan agreed in the group

Futomi: I think we should have a milestone, but the point is that the deadline should not be concrete

… we can try to finish our document by April

… so, for the use cases and requirements when should we aim for?

Sunghan: I agree with your general idea, but we should probably take this discussion online like by using the wiki

Naomi: This is why Sangwhan said we should use the W3C poll system

Futomi: Ok, let's set this up after TPAC

… as for the current goals, does everyone agree?

<kotakagi> Please see http://www.w3.org/community/websignage/wiki/TPAC2012_KDDI_Input#Relation_to_System_Applications_WG

[ KDDI presenting a proposal for upcoming activities ]

Toshi: We are talking with the System Applications Working Group and discussing collaboration with this group

… presenting the SysApps WG charter

[ For reference: http://www.w3.org/2012/sysapps/ ]

… system applications covers a large amount of requirements needed for signage

… but there are some things that are not being covered

… we have thought about Raw Sockets API, although this idea is not concrete

… we should also consider the security model when interacting with other devices

… a trusted application model is a absolute must for a signage use case

… this was discussed at the plenary day breakout session of the SysApps WG


… so we should think about which use cases map to which WGs

… opinions?

Futomi: Good point

… I have been talking with the relevant working group chairs

Sangwhan: I believe this discussion is not something we should decide right now, and you have to be flexible for the milestone/deadlines due to the nature of W3C


Futomi: As a side question, does everyone agree to continue this BG next year?

[ Consensus reached ]

Futomi: OK, then we should discuss further online

… meeting adjourned

[ End of meeting ]

Summary of Action Items

[NEW] ACTION: Futomi to fix XHR reference on use case wiki page [recorded in http://www.w3.org/2012/11/02-signage-minutes.html#action01]
[NEW] ACTION: Futomi to setup a poll for the deadlines [recorded in http://www.w3.org/2012/11/02-signage-minutes.html#action02]
 
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.137 (CVS log)
$Date: 2012/11/02 15:59:01 $
