Meeting minutes
Conversational agent accessibility continuation of discussion.
RAUR and XAUR any updates.
Josh: worked on acknowledgements
<joconnor> http://
Josh: was looking at a new user need to mention the participants on the call and their status
Josh: 14b participant metadata added - comments?
Janina: looks good
Josh: this can be merged into main
Jason: put it out as a wide-review draft to gather any final comments before publishing a final Note
Jason: XAUR is also heading toward a wide-review draft
Josh: should be good to go on his end
W3C Workshop on Wide Color Gamut and High Dynamic Range for the Web (continued).
Jason: potential RQTF contribution?
Janina: looking for potential presenters on this topic
Jason: other ways of participating?
Janina: may simply attend the workshop
Session speakers and schedule won't be known until March 30
Janina: will be hard to recruit attendees without this information
Josh: what are the accessibility angles and gaps?
Janina: issues around color perception; automated techniques to adjust color spectrum
Judy: what about video and dynamic images?
Judy: flash mitigation is an area, since it can be color-based
Jason: will take this topic to an APA meeting to solicit attendance
Continuing research on media synchronization.
<joconnor> SN: Jason created the page and put issues at the top
<joconnor> Covered the degree of separation between sources
<jasonjgw> Steve: has updated the wiki page. The lip reading audio/video synchronization use case is now well documented.
<jasonjgw> Captions in live media present synchronization issues - including the possibility of summarizing the dialogue.
<jasonjgw> Steve hasn't yet investigated synchronization issues with sign language interpretation.
<jasonjgw> Steve: synchronized highlighting would be another issue to investigate - also documented on the wiki page.
<jasonjgw> Steve suspects there may be limited research on this last point.
<jasonjgw> Janina notes that synchronized highlighting relates to a specific kind of AT and likely wouldn't result in requirements for timed text.
Conversational agent accessibility continuation of discussion.
Jason: has done some searching for a definition, but nothing so far
Josh: some connections are smart agents, natural language, and how to frame user issues
Josh: we could call it "smart agents" and then connect voice, speech and whatever else
Josh: open to other ideas, but cognizant of not wanting to get into Internet of Things
Janina: what to call it? The wider industry may influence this
<joconnor> JS: Smart seems to be a marketing term; conversation describes a process; Voice is more accurate for one part - but however you engage is variable
Janina: but conversation is the paradigm
Judy: good to have a term that emphasizes the functional aspect
Judy: conversation is closer to that
<Zakim> joconnor, you wanted to ask why conversation is a good term vs voice?
Janina: Google assistant considers input as a conversational thread
Josh: I don't understand the preference for the term conversation over voice, particularly as Voice captures the zeitgeist
JPaton: Voice agents are the standard for digital assistants at the moment
<janina> +1 to Judy's parsing
Judy: Voice is a modality, like text is another one
Judy: Not sure that the agents can handle true "conversations"
Judy: need to capture the deeper issues
Josh: Broader framework is needed beyond modalities, the service behind it
Josh: We need to look at a modality-independent way of framing it
Jason: Modality connection needs to be nonexclusive
<Zakim> joconnor, you wanted to tease out this use of Smart
Jason: will look for live examples that are publicly available
Josh: understand the problem with "smart" as a current marketing term
Janina: to what degree can current digital assistants be open to multiple modalities?
JPaton: Consider the example of a chatbot on the web; it also acts as a conversational assistant