W3C

- DRAFT -

Personalization Task Force Teleconference

24 Jun 2019

Attendees

Present
JF, Becky, Roy, sharon
Regrets
Chair
SV_MEETING_CHAIR
Scribe
becka11y, becky

Contents


<Becka11y> scribe, becka11y

<Becka11y> scribe: becka11y

<LisaSeemanKest_> scribe: becky

<LisaSeemanKest_> scribe: becka11y

LS: Updates on publishing?

JS: Will start a CFC in APA for publishing updated drafts, proposing it runs for a week; new documents will need approval after the CFC
... two outcomes: publish these documents, and approval to publish updated drafts going forward

LS: we are seeking editorial changes, not major ones, at this point

Related vocabularies

JS: yes, that is understood - these are work in process

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki/Related-vocabularies

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki/advanced-values-for-content

LS: We have suggestions from others for additional vocabulary items; some got in, some didn’t
... need to decide what goes into the next iteration; don’t want to lose these suggestions

<LisaSeemanKest_> ACTION: john to find related vocabularies and find link to microdata

<trackbot> Created ACTION-9 - Find related vocabularies and find link to microdata [on John Foliot - due 2019-07-01].

<JF> https://tools.ietf.org/html/rfc6350#page-12

JF: microdata has values we might want to consider; it has equivalences to vCard (for sharing contact info) and vCalendar (which has specific terms for day, date, and time annotations). Will gather up the links and send them to Lisa
... want to make sure our taxonomy is in sync with other, existing ones
... and also interchangeable

LS: suggests adding links into the wiki
... there is a page for related vocabularies so we don’t lose the info and can come back to it

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki

JS: make sure we have a “heading” page to capture all of the different ideas we are considering; we need a link to that wiki page on the Personalization home page for the task force (rather than just in the wiki)

<LisaSeemanKest_> https://github.com/w3c/personalization-semantics/wiki

<Roy> https://www.w3.org/WAI/APA/task-forces/personalization/

<JF> ACTION: Roy to add link to the github wiki from the TF Homepage in w3c space

<trackbot> Created ACTION-10 - Add link to the github wiki from the tf homepage in w3c space [on Ruoxi Ran - due 2019-07-01].

LS: need to identify this link as the starting point

<janina> TF Home Page is here: https://www.w3.org/WAI/APA/task-forces/personalization/

LS: What do we want to do for the next iteration of the documents? What are the next steps?

items for next iteration

LS: propose: repeat what we did for data-purpose and data-action for the other modules; that will enable more implementations and prototyping

JF: we really haven’t had discussion on data-destination and data-action; let’s work on those next

LS: cross-module review to get to the same level as data-purpose

JF: the CFC was only for 3 values: purpose, action, destination; once we get those completed we can go back to the others in module 1, including symbol

LS: think we can do the reviews at the same time while looking for other vocabularies
... wants to work on symbol at same time as finishing up purpose, destination, and action

JF: seems like field is redundant now that we have purpose

LS: agreed
... get a basic implementation of symbol and simplification decided while we continue to review purpose, destination, action

<JF> Module 1 = action, destination, purpose and simplification, distraction and symbol

LS: want to get implementation details of module 1 proposed as well as the values and details for purpose, destination, action
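The three module 1 attributes being reviewed can be sketched in markup. A hedged illustration (the attribute names follow the Personalization Semantics drafts under discussion; the specific values shown are examples only, not confirmed vocabulary terms):

```html
<!-- data-purpose: identifies what a form field is for -->
<label>Email: <input type="email" data-purpose="email"></label>

<!-- data-action: identifies the action a control performs -->
<button type="button" data-action="undo">Revert changes</button>

<!-- data-destination: identifies the target type of a link -->
<a href="/help" data-destination="help">Get help</a>
```

With markup like this, user agents and assistive technologies can adapt presentation (for example, rendering a familiar icon on every "help" link) without changing page behavior.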

<LisaSeemanKest_> without doing a review of related vocabularies

JF: concerned about simplification, distraction and symbol; what if we split these out into a separate module so we don’t bog down the current work around purpose, destination, and action?
... concerned that simplification, distraction, and symbol put too much burden on authors, who are then less likely to implement
... example: a business that relies on advertisements is not likely to mark them as distractions

JS: We are 12 weeks from the start of TPAC; we don’t have to meet with anyone, but expect we will want to; we should decide which groups we want to meet with so we can write requests for time with those groups, and specify what we want to talk to each group about

<JF> Richard Ishida (Internationalization) is one person to have a chat with (maybe?)

<Zakim> janina, you wanted to remember TPAC

JS: one of the things we might want to address is symbols. Can we develop semantics around the translations to symbols (similar to the semantics and understanding needed for voice interfaces)? Can we leverage other groups’ understanding of deep learning and AI for these translations?

<LisaSeemanKest_> ACTION: lisa to review groups at tpac and look for synergies

<trackbot> Created ACTION-11 - Review groups at tpac and look for synergies [on Lisa Seeman-Kestenbaum - due 2019-07-01].

LS: internationalization could have ideas on this; I need to see what groups are attending TPAC to see where there is overlap

Becka11y: Becky’s TPAC funding fell through

JF: back to simplification and elimination of distraction; some people presenting at Web4All are scraping pages for data in order to simplify the content;
... content authors may be hesitant to mark up content that might affect their business model; perhaps users who need simplification can access sites through a proxy server that actually does the work of simplification rather than having it performed by the content author
... given that, it seems that attribute markup may not make the most sense

<JF> +1 to Becky

LS: that may influence how we define distractions; would prefer to have this conversation then

<JF> Becky: believes there is a role here for AI - remove the burden from the authors

<JF> Becky: maybe we need an implementer (IBM?) to do something here

BG: definitely feel that we should be pursuing AI, we can’t rely on authors to include all of these attributes and information

LS: if we want to go that route, we need to remove ambiguity

JS: Wired magazine pointed out that traditional search returns about 10 or so possibilities and the user scans to find the most appropriate; conversational interfaces need to find the correct result right away - just one rather than 10; we need to benefit from what has been learned about building and working with conversational interfaces

LS: what groups can help to address this?

JS: probably Web apps or web platforms; we need a compelling demo to engage people

LS: I think the demo we saw a few weeks ago is that demo
... IBM has Content Clarifier, but when we were trying to use it there was a high error rate - too high for AAC users. We need to understand if there are semantics that can offset that.
... we need to provide the clarification on how to reduce ambiguities

JS: the example of the word “can” is an example - is it a verb or a noun?

JF: two types of sites: hobby and professional; Amazon, banks, etc., who deal with a volume of content, are not going to do the work necessary to augment and fine-tune their sites; it doesn’t scale
... the middle man is the proxy server - a non-profit or UN organization or commercial entity - they will track the high volume pages and make the simplifications

LS: I think you are missing many of the use cases - an AAC user can’t talk to another AAC user because of different symbol sets; we need to translate between them;
... that is part of what we want to solve;

JS: so talking to each other (symbol interchange) is the first pass, but more global reach to banking and shopping is the next round
... so believe LS is saying the symbol translation is the first problem to solve

JF: believe this problem is similar to language translation - like we have today with google translate (and others) that translate between languages; this is just a different type of language translation - symbol to symbol rather than between spoken/written languages
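The symbol-to-symbol translation being discussed is what the proposed data-symbol attribute is meant to enable: if content maps words to a symbol-set-neutral concept reference, each user’s AAC software can render the concept with whichever symbol set its user knows. A hedged sketch (the numeric value here is illustrative only, not a confirmed concept ID):

```html
<!-- data-symbol maps a word to a symbol-set-neutral concept reference;
     each user's AAC software resolves it to an image from its own set -->
<span data-symbol="13621">cup</span>
```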

LS: problem is that for people with language difficulties who need symbols, the error rate is way too high, so we need to help resolve ambiguities

JS: still believe we need machine learning

JF: perhaps need to think about splitting module one so current 3 terms continue to move forward and are not held back by symbol, simplification, etc.

<JF> https://www.w3.org/2019/04/dmpl/

JF: discussion around symbols is complicated; hadn’t realized part of the goal is conversation between individuals; attended a community group recently working on a language for communication between chatbots
... can we perhaps build on that conversation/ community group

JS: this isn’t a new problem - same problem with learning braille and where it was taught

LS: also a problem of symbol set developer/supplier going bankrupt;

JS: that is the problem with proprietary symbol sets, etc.

Summary of Action Items

[NEW] ACTION: john to find related vocabularies and find link to microdata
[NEW] ACTION: lisa to review groups at tpac and look for synergies
[NEW] ACTION: Roy to add link to the github wiki from the TF Homepage in w3c space
 

Summary of Resolutions

[End of minutes]

Minutes manually created (not a transcript), formatted by David Booth's scribe.perl version 1.154 (CVS log)
$Date: 2019/06/24 15:06:42 $
