15:02:50 RRSAgent has joined #voiceinteraction
15:02:50 logging to https://www.w3.org/2021/08/25-voiceinteraction-irc
15:02:59 meeting: voiceinteraction
15:03:31 chair: Debbie
15:04:25 bev: suggests a birds of a feather meeting at TPAC on running Community Groups
15:08:50 present: Debbie, Jim, Jon, Bev
15:10:21 topic: publication publicity
15:10:47 jon: paper has been publicized in OVON, and by Bradley Metrock
15:11:01 ...and Bret Kinsella
15:11:21 debbie: included in the standards column
15:11:34 ...for Speech Technology Magazine
15:11:59 jim: should send a note to Leonard Klie
15:12:08 bev: could share on social media
15:12:43 topic: meeting at TPAC
15:13:06 debbie: will have our normal call during TPAC
15:14:09 ...only scheduled meeting is WebRTC, should we talk with them?
15:16:24 debbie: it may be too low-level to concern us
15:16:42 jim: would they be interested in what we're doing?
15:16:52 ...could it help them?
15:17:02 debbie: could give them a use case
15:18:00 jim: proposes an introductory phone call
15:18:22 debbie: no reason to wait for TPAC
15:19:32 tobias: it's been integrated into a lot of software already, for example games
15:21:22 debbie: books about MMI interaction and WebRTC
15:21:41 topic: interfaces work
15:22:48 https://w3c.github.io/voiceinteraction/voice%20interaction%20drafts/paArchitecture-1-2.htm#walkthrough
15:25:18 BC has joined #voiceinteraction
15:25:57 debbie: step 4 is between the client and the dialog manager
15:26:02 Hello
15:26:35 jim: will there be a path for control?
15:26:47 ...audio only, control, or tagged data?
15:27:52 jim: question for Dirk -- what is included in item 4?
15:28:03 ...also, what about 5?
15:29:18 debbie: has to include all semantic information and enough information for the provider selection service to work
15:29:57 ...also forwards information to the local ASR
15:31:09 debbie: ask Dirk what "received data" is in step 5
15:31:26 ...just the data from step 4?
15:33:49 jim: let's talk about 5 next time, make a list of all data items, then decide if there's some structure
15:34:21 debbie: what do we want to know about each data item?
15:36:06 ...type of media, source, target, confidence, input or output, timestamps
15:36:31 What about language?
15:37:42 jim: an identifier for each data item
15:38:33 Emotion Markup Language (EmotionML)?
15:38:39 debbie: information identifying the speaker, which raises security and privacy concerns
15:39:44 ...could be embedded in EMMA or the MMI Architecture
15:44:03 debbie: what about step in the output path? how does that mapping happen?
15:44:50 ...step 14 is straightforward
15:44:58 ...could be streamed
15:46:27 debbie: local processing is a classic dialog system architecture
15:48:36 ...need to focus on steps 5, 9, 10, 11, 12, and 13
15:50:07 bev: let's not simplify 10 and 11
15:51:39 debbie: could put together a straw person table for data items
15:51:56 jim: just a straw person proposal
15:53:57 rrsagent format minutes
15:54:05 Thank you
15:54:22 rrsagent make logs public
15:54:28 rrsagent, format mintues
15:54:28 I'm logging. I don't understand 'format mintues', ddahl. Try /msg RRSAgent help
15:54:37 rrsagent, format minutes
15:54:37 I have made the request to generate https://www.w3.org/2021/08/25-voiceinteraction-minutes.html ddahl
15:54:45 rrsagent, make logs public
17:41:43 ddahl has left #voiceinteraction
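
A possible shape for the straw person data item table discussed at 15:34-15:38 is sketched below in TypeScript. This is an illustrative, non-normative sketch only; the field names (id, mediaType, source, target, direction, confidence, timestamp, language, speakerId, payload) are assumptions derived from the attributes raised in the discussion, not agreed terminology from the architecture draft.

    // Illustrative sketch only: field names are assumptions based on the
    // attributes raised in the meeting (type of media, source, target,
    // confidence, input/output, timestamps, identifier, language, speaker).
    interface DataItem {
      id: string;                    // unique identifier for the data item (15:37:42)
      mediaType: string;             // e.g. "audio", "text", "semantic interpretation"
      source: string;                // component that produced the item
      target: string;                // component the item is addressed to
      direction: "input" | "output"; // whether the item is on the input or output path
      confidence?: number;           // recognition/interpretation confidence, 0..1
      timestamp: string;             // e.g. ISO 8601 creation time
      language?: string;             // e.g. a BCP 47 language tag (raised at 15:36:31)
      speakerId?: string;            // optional speaker identification (privacy-sensitive)
      payload: unknown;              // the actual data, e.g. an audio chunk or EMMA document
    }

Such a structure could also be carried inside existing W3C containers mentioned in the discussion, such as EMMA annotations or MMI Architecture life-cycle event data, rather than defined from scratch.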