W3C

– DRAFT –
APA TPAC Thursday

14 September 2023

Attendees

Present
AvneeshSingh, CharlesL, Chris_Needham, David_Ezell, Fazio, harry_, Irfan, Janina, jasonjgw, JohnRiv, mahda-noura, matatk, Matthew, Michael_McCool, mitch, Nigel_Megitt, Roy, Sebastian_Kaebisch, tidoust
Regrets
-
Chair
-
Scribe
cpn, Irfan, mahda-noura, matatk, nigel, nigelm, tidoust

Meeting minutes

Scribe+ Irfan Ali

<Roy> Breakout: https://www.w3.org/2023/09/13-haptics-minutes.html

Tencent's work on Haptics

<matatk> w3c/tpac2023-breakouts#19

Slideset: https://www.w3.org/2023/07/breakout_haptics_TPAC/haptics.pdf

harry_: Enhancing haptics for accessibility


<Fazio> Haptics are now used in dental schools to simulate cavities

Haptics augment human-machine interaction; "haptics" refers to all the technology that provides the sensation of digital touch feedback.

Examples include the new DualSense PS5 gamepad, immersive gaming experiences, etc.

There are various types of haptics, such as smartphone vibration.

The sense of touch provides a powerful channel for communication and engagement.

Diverse mobile devices and components make it difficult to achieve a consistent experience.

Types of haptic effects: independent design, auditory-based design, etc.

The pattern of the system is based on events, with type, timing, control, parameters, and curve.

Use cases: firearms, AWM, vehicles, etc.

We’ve talked to 100+ visually impaired users

Haptics can help visually impaired users to gain more information and understanding.

Accessibility use case: what do visually impaired people use their mobile phones for when accessing the internet?

The most commonly used case is chat/social networking; second is music; third is news. These are the most common use cases of this application.

harry_: presents a solution for keyboard and braille users

using an input method

We have a solution to simulate 6-dot braille,

which produces results based on intensity, frequency, and time.
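For comparison, the closest existing web-platform hook is the Vibration API. A minimal sketch of the 6-dot braille idea using it follows; note `navigator.vibrate()` controls timing only, not intensity or frequency, so it can only approximate the richer system-level solution described here. All durations and the encoding are illustrative assumptions, not Tencent's actual design:

```javascript
// Sketch only: the Web Vibration API (navigator.vibrate) controls
// timing alone, not the intensity/frequency/time triple described in
// the talk, so this can only approximate the 6-dot braille idea.
// All durations below are illustrative assumptions.
const DOT_MS = 200;   // long pulse = raised dot
const NO_DOT_MS = 50; // short pulse = absent dot
const GAP_MS = 100;   // silent gap between dots

// dots: six booleans, dots 1..6 of a braille cell.
// Returns a pattern usable as navigator.vibrate(pattern).
function buildBraillePattern(dots) {
  const pattern = [];
  for (const raised of dots) {
    pattern.push(raised ? DOT_MS : NO_DOT_MS); // vibration segment
    pattern.push(GAP_MS);                      // pause segment
  }
  pattern.pop(); // drop trailing pause
  return pattern;
}

// In a browser (not available in Node):
// navigator.vibrate(buildBraillePattern([true, false, false, false, false, false])); // braille "a"
```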

Haptic solution for maps and navigation: when using walking navigation, if the user veers off the designated route, they are alerted through vibrations and an audio announcement, guiding them back in the right direction.

Through customized vibration effects, users are reminded in four different scenarios: waiting at the station, about to arrive at the station, arriving at the station, and arriving at the destination.

Fazio: what does it do that other devices don't do?

Fazio: it sounds super useful

matatk: For finding direction, I use haptic feedback. I agree with Fazio that haptics can be super useful

Fazio: I do have some disabilities and I find it very useful

matatk: the use cases that Harry has presented are very useful

harry: content creators will be able to fine-tune the haptics frequency and timing, which will address problems for users who get confused by haptics

<Fazio> Good Haptics info: https://www.ted.com/talks/katherine_kuchenbecker_the_technology_of_touch?language=en

Promoting haptics as a common human language is useful, through common understanding, standard APIs and data, unified user experience, and evaluation and metrics.

Some standards work is ongoing on attaching haptic devices to different body parts.

Avatar: body (or part of body) representation

Perception: haptic perception containing channels of a specific modality

Device: physical system having one or more actuators configured to render haptic sensation corresponding with a given signal

matatk: there are some areas of related work at W3C that we are aware of. I would like to share those that come to mind

Fazio: haptics are an underutilized technology

Fazio: I support your work

matatk: a few specifications are relevant. RQTF has the XR Accessibility User Requirements document

harry: WebXR is a lot like the OpenXR API.

matatk: a document like this can be useful for working on some use cases

<Roy> https://www.w3.org/TR/xaur/

matatk: one thing we should be talking about is including support for haptics

We are about to start on the Media Accessibility User Requirements; it's good timing for this haptics information.

The Immersive Web Working Group will have a spec that you might be able to contribute to.

harry: we discussed this during another session and talked to the editor of this document

matatk: there is another community group which is focusing on haptics

A lot of things go on to become specifications this way. You can start with a community group and ask for consensus.

harry: what if the community group gets closed without any output?

matatk: I am not sure if that is a problem.

matatk: if such a thing does not exist and you want to bring it, other parts of W3C will help. Immersive Web has an active community group

You have to get consensus from the community to publish any standard at W3C.

If you want to go down that route, we will support seeking the engagement of the community.

Some people in the room will be interested and will follow up on this.

matatk: you mentioned haptics in relation to WCAG. I didn't find haptics in the actual guidelines

harry: maybe you can search for "tactile"; that's the word used in the WCAG doc

matatk: it is mentioned in relation to sign language

The WCAG principles are general. We have a whole range of techniques for meeting WCAG criteria; they depend on the technology being used.

<Fazio> It can also simulate braille

Perceivable is one of the WCAG principles, and haptics can be a use case.

Haptics could come into play in meeting WCAG requirements.

Roy: Harry is looking to get input from our standards documents. Also, Harry mentioned that they want some extension of ARIA for haptics

I wonder if there is a possibility of exploring this idea.

janina: Now is a good time to put something on their agenda, since they have just finished 1.2

timing is good right now to talk about this

matatk: if we imagine that we do something with ARIA on haptics, I do see some possibility.

janina: if the Vibration API is too simple, then we need to ask them to consider these use cases. In that case we would not need a community group

matatk: we can take an action to ask the Devices and Sensors group about their plans

harry: if developers want to create a method for haptics, what should they do? How to design, implement, and test? There are a couple of solutions.

We do have some haptics guidelines, and are creating common UI elements.

matatk: we should ask our RQTF

As for standardizing haptics, I am not sure it will go that deep. It may be that we could have some stuff like ARIA.

matatk: the first step is to get the high-resolution API question resolved; then we can work on that

matatk: we would have to check if there was consensus. The biggest challenge is that we do not have the resources to do it quickly, but your help would be useful

harry: my understanding is that we don't need to wait for the high-resolution API. System APIs already provide that kind of capability.

Fazio: it will be nice to explore simulated braille

harry: I can share my experience on this

We are not able to provide a satisfactory solution for braille.

matatk: it is very fascinating work. The challenge for us is that if W3C publishes this, we need to make sure the Web API work is there; we have to make sure it is possible to do this in a browser.

matatk: I am excited by the possibilities. There is a big focus on making the web platform as capable as native platforms

I don't want to promise that we can solve those things in parallel, but don't take that as discouragement.

Devices and Sensors is not an immediate path forward, but we will do our best to attract people to participate in this process.

appreciate your time and presentation.

we will close it now. last comment

harry: thanks for listening and for discussing this topic. The API work is a little behind, but accessibility is much more important. Let's keep in touch and talk about what we should do and how we can work together

Epub

<matatk> https://github.com/w3c/epub-specs/wiki/Image-Missing-When-Described-in-Context

<CharlesL> need specific examples from publishers of the most common cases of images described in context.

MEIG + TT

matatk: Would like to start with introductions

[round of introductions]

Nigel Megitt, BBC, co-Chair TTWG, chair ADCG

<matatk> Matthew Atkinson, TPGi, co-chair APA

Chris Needham, BBC, co-chair Media WG, co-chair Media & Entertainment IG

François Daoust, W3C, staff contact for the Media WG

<matatk> Janina: APA co-chair

<Roy> Roy, staff contact for APA WG

<mitch11> Mitchell Evan, TPGi, observer today at APA, Accessibility Guidelines WG, WCAG2ICT task force

<jasonjgw> Jason: co-facilitator Research Questions Task force.

<matatk> https://www.w3.org/WAI/APA/wiki/Meetings/TPAC_2023#MEIG_.2B_Timed_Text

<matatk> w3c/media-and-entertainment#95 (comment)

matatk: Agenda includes Media accessibility user requirements update, really interesting thing that came up this morning
… [goes through the rest of the agenda]
… 20 min for each of the agenda items for our 2h meeting.
… For some of them, we have some updates from earlier in the week.
… Some overlap with other groups, such as ePub and Internationalization, with which we discussed and agreed on things.

MAUR update

<matatk> v1: https://www.w3.org/TR/media-accessibility-reqs/

<Roy> New repo: w3c/maur

janina: We have a new repo. I haven't recently re-read the spec.
… The old 1.0 is in the repository

matatk: A bit of context. Media Accessibility User Requirements, we changed the URL to be MAUR. When this was originally made, this was referring to a series of user requirements related documents.
… [mentions the list]
… MAUR is about 8 years old, so time to update it.
… You can find the list online.

<Roy> w3c/apa#important-note

matatk: This started a real trend.
… Any thoughts on which part needs to be updated?

janina: Let's do a quick tour. The overall structure. Back in the days, we wanted to figure out requirements for HTML5.
… From then on, what can you do in the specifications.
… Relationship with HTML5 was frictional on other topics. Media was an exception. We wrote this document collaboratively.
… And we got pretty much what we wanted from HTML5.
… I wrote a lot of this document.
… The first section describes the needs of people with various disabilities
… Second part on how these needs could be addressed.
… Most of this was invented before there was a web.
… We're not inventing something new, just bringing requirements in the world that the Web has created.
… Third section is a series of special considerations that seemed relevant at the time.
… One of them is support for multiple languages, being able to consume the same sentence again and again in different languages.
… In a world where there is a lot of movement of human population, language used in the host country may not be the native one. I remember my mom when I was young wishing that she could slow down the news.

cpn: W3C has a publishing community. They have a similar set of requirements. Should the future version of this document be an agglomeration of both documents?

janina: That's a very good open question. We'll discuss with them.
… There may be good reasons to do that.

<Zakim> nigel, you wanted to mention we implement time-scale modification, ask about "disability" and what other changes planned?

matatk: We started talking with Publishing about interlinear text, but progressively about broader topics, including comics and manga, and realized that there's alignment needed on a broader set.

nigel: BBC does implement time-scale modification, at least the first few ones.
… Summary of accessibility requirements per type of disability.
… I wonder whether that could be reformulated.
… There's a common understanding that users have something within themselves that creates specific needs, but the barrier to access is largely environmental.
… In terms of change, I would suggest that.
… I would like to mention synchronization accessibility user requirements.
… I don't know whether it makes sense to keep them separate.
… It would be nice to have them in one place.

matatk: Excellent points.
… Thing we discussed this morning: we had a presentation from Tencent about haptics.
… Apparently, MPEG is specifying a haptic track for MPEG media.
… There seems to be a gap between OS-level availability and Web-level availability.

nigel: I wouldn't know how that fits in with text description. BBC implementation always exposes captions to Assistive Tech.
… When you have video and text descriptions, it's true that they can be considered separate things.
… What we develop in the Timed Text Working Group is a format that can do both. Render as audio or as text.
… Or as haptics, possibly.
… Trying to understand better where the gap is.

matatk: We definitely need to research the rationale.

https://www.mpeg.org/standards/Explorations/40/ ?

jasonjgw: We've had conversations about multi-level hierarchies of media content and making sure that they are appropriately navigable and usable.
… Multiple tracks of sign-language interpretation.
… Need to check current document, perhaps other opportunities to change that will show up.

matatk: Sectioning of content, change of scenes, is also a requirement in common with publishing.

janina: We need to be able to do more than add a string, also add markup.

<Zakim> nigel, you wanted to ask about the timeline of updating the MAUR

nigel: I was about to ask about practicalities. Any sense of timeline for updating MAUR?

janina: For this to be useful, if we were pretty much done by next year's TPAC, that would seem reasonable to me.

jasonjgw: That sounds good to me.

nigel: I think it would be useful to promote the fact that you're updating the document and giving a timeline.
… What about the Note status, is that enough? Should you go further?

janina: I'd love this to go beyond Note. W3C Statement, perhaps?

nigel: We reference these requirements.

matatk: If we did combine with the synchronization user requirements, we might have trouble making it normative, because some of these cannot be.

janina: Relatively soon, we could go through the editor's draft and flag places where we feel an update might be needed. Then reach out to other groups.

+1

Identified needs for markup in chapter titles

matatk: For those of you who are here in person, there is a flyer about using symbols to make things easier to understand.
… The challenge with symbols is that different symbol sets could be used.
… We could apply symbols to particular parts of web pages.
… They are used across the board. Focusing on the key challenges, people might need symbols to navigate a video.
… Right now, we cannot use these as chapters in a video and we have a pending request around this.
… We're going to be asking WHATWG about that.

janina: And if we open that can of worms, why not fold nested chapters into it.

nigel: I'm a bit confused about the technical content.
… Different from choosing a font?

janina: This kind of use of symbol is a long standing feature in accessibility tools.
… We have a registry spec where we're going to use a key, then the user identifies what symbols they want to see. Numerical value triggers the right symbol to be displayed.
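The mechanism janina describes might look something like the following; this is an illustrative sketch only (the attribute name and concept ID here are hypothetical; see the WAI-Adapt work for the real proposal):

```html
<!-- Illustrative only: a numeric concept ID from a shared registry is
     attached to content; the user agent then substitutes the symbol
     from whichever symbol set the user has chosen. Attribute name and
     ID value are hypothetical. -->
<span adapt-symbol="13621">cup of tea</span>
```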

matatk: Why are we not using Unicode is probably your question?

<Roy> AAC Registry

<matatk> w3c/adapt#240

matatk: There are reasons that experts raise. There is a dedicated discussion about that.
… We want to get this right, I encourage you to review these reasons.
… Assuming that we cannot use Unicode, that's why we need to use markup.

cpn: What you're describing is a general need in HTML markup. Is there a specific media related need?

matatk: The reason for this specific request is that I believe that there are places where you may define video chapters, and that's where we're going to need HTML.

<Zakim> nigel, you wanted to say I'm not sure if the TTWG would consider this question in scope

nigel: It seems the expectation is that, with the video element, people will use track to list chapters.
… If something is added in HTML, then come back to WebVTT to say "we want this here too, please".
… Definitely get into HTML first.

matatk: That's our plan. We're not asking you to do it in particular.
… We wanted to ask if you can nest chapters to skip at different granularity levels.

nigel: I don't know to what extent chapter formats are used in practice, but this makes a lot of sense, and as far as I know there isn't a way to nest navigation units.

<matatk> https://www.w3.org/TR/media-accessibility-reqs/#contentnav

Interlinear publications

matatk: We should consider this one done.
… We just went through it.

I see that WebVTT has nested chapter cues https://www.w3.org/TR/webvtt1/#chapter-cues

I can't work out _how_ that nesting is supposed to work in non-overlapping cues!

"It is further possible to subdivide these intervals into sub-chapters building a navigation tree."

jasonjgw: There was work going on to replace SMIL in the DPUB world.
… I'm not sure about epub, but DAISY considers older things.
… Need to check exactly who's interested.

nigel: There was some question about whether their requirements overlapped with requirements for audio description.
… We were hoping they would be the same thing.
… Turned out they're not.

Specialised handling/rendering of embedded media

janina: there are types of media, appearing especially in textbooks, that won't get accessible handling in a general browser to meet accessibility needs: e.g., MusicXML in a harmony or music history textbook

<matatk> [ general reference: Music Notation CG: https://www.w3.org/community/music-notation/2016/05/19/introducing-mnx/ ]

janina: playing the sample isn't enough; you might want the left-hand part to be panned left
… multi-line braille displays are coming
… we think it's definable, but won't initially be part of browsers
… Muse Score 4 is a scoring application for music
… Having a shortcut key that changes the beats per minute
… Could be part of a UA that can do this. Music is an example, but there are other applications in other domains
… TPAC2017 breakout on this
… People agreed it was very valuable. We'll do it with publishing, not sure how it applies with media

matatk: Another domain is chemical diagrams in chemistry
… Some things you'll be able to do in the browser, some with a polyfill, but for the foreseeable future it won't be as good as a native app
… So how to make the handoff as seamless as possible
… Our idea is that there's an API you can use in browser extensions to hand over to native apps. It has constraints, but may be usable
… We could make a prototype to see how well it works

Nigel: On a markup language basis, you need a way to hand the markup over to produce something accessible, without violating privacy principles about not exposing use of assistive technology?

janina: yes

Nigel: I may struggle to persuade TTWG it's in scope there. You want to take markup and turn it into something that feels like media, we usually go the other way

matatk: it's not just an accessibility thing; these domains may not have as good a rendering in the browser as in a native tool

johnriv: The existing API?

matatk: I believe you serialise your thing to JSON to send to the other process, it may be two-way. But it's always a separate process and renders outside the browser
… Could be sufficient, but we don't know yet

jasonjgw: One case is very clear where we have an app using WASM in the browser that provides the appropriate multimedia rendering for the domain specific content
… with ePub we discussed making it possible with APIs
… the case of handing it over to an external process, not clear what the UI requirements are, could be a copy/paste operation the user needs to perform, or press a button in a UI to move the material across
… JSON inline in a document, the user has to select it and choose to transfer it somehow
… UI requirements for that, assuming the tech can make it possible, need work
… An app in browser context needs to be supported by the assistive technologies, need to capture requirements for those

<Zakim> nigel, you wanted to say that there are architectural parallels with a way to render captions

nigel: there are architectural parallels with other requirements
… last year I ran a breakout, if you do subtitles or captions the way the browser wants, you can't get data on usage or on user customisations
… So having some trusted mediated actor which can aggregate and anonymise the data and report back
… to enable product improvement
… A weird consequence of the requirement to not expose use of assistive tech leads to providers not having better data

janina: It's an interesting point

SSML in DAPT and HTML

nigel: this relates to the Spoken HTML spec

Spoken HTML

nigel: It's published by APA WG, last updated 2 years ago
… when I was working on DAPT I realised it would be nice to have a good way to extend the SSML vocabulary to improve the spoken variant, if used for text to speech
… I came across this document. It proposed two approaches; it's not obvious which was adopted. Any plans to continue work on it?

matatk: It has stalled because we don't have enough industry consensus. Our use cases mostly come from educational assessment
… If broadcasting can use it as well, it would help move it along
… We have a gap analysis, 7 requirements from education partners needed to make it work
… 4 use cases are critical and we have proposals for them

<Roy> Pronunciation Gap Analysis and Use Cases

matatk: We've taken the proposals to other W3C groups for the major platforms, and they see what we're trying to do but don't see a path to implementation that covers all the 4 use cases
… We're at an impasse, looking for ways ahead. Trying to engage industry, but the big platforms don't see it as in their remit
… We may now have a proposal we can go ahead with, but we don't know yet

nigel: I would like to align the approach if possible
… DAPT has two attributes. A pattern we could use is the same as we did with visual styling elements in TTML, took things from CSS and created a new vocabulary and referenced the CSS semantics
… Painful to maintain and update
… I haven't entertained adding SSML features until we have the right markup process
… If you can help us decide how to do it, it would help us

matatk: Apologies we haven't completed the formal review of DAPT. It looks clear, good examples. We want to make sure we have expert feedback if anything is missed
… It's a priority for us

nigel: We appreciate your thorough review and look forward to receiving your feedback

Accessibility of Canvas based TV User Interfaces

chrislorenzo: A lot of the TV apps we're building are browser based, but use canvas and WebGL for rendering
… JavaScript is used to create a node tree for rendering. But we don't have an HTML tree, so how do we provide that UI to accessibility engines
… We add strings to nodes with alt-text, go through the tree and read it to the end user
… We could add haptic feedback or sounds at ends of rows
… Use case is TV devices

matatk: would it support interfacing with a normal keyboard?

chrislorenzo: yes

matatk: there are general approaches you can take. depending on the browser, different levels of support, different APIs
… You're almost writing half a screen reader yourself, handling all the focusing and keyboard interaction
… You can make it accessible. It's a more limited environment, like a kiosk, it can work. More complicated if you run the same code on a computer that doesn't provide the same information
… It's not a great UX if you create a screen reader as they don't have their personal settings
… Test it with people to see if they use it. Libraries like LightningJS are canvas solutions, they make proxy elements behind the scene, which do expose the right accessibility information
… so users get the AT experience they expect
… Another thing which is coming is AOM (Accessibility Object Model), which is what you're describing. A subset of the DOM that goes to the accessibility tree
… You can look at it in devtools
… The a11y tree is a subset of the DOM, so there's a problem if there is no DOM
… The AOM provides an API to create nodes in the a11y tree as if there were a DOM
… Something to think about, but it's not near adoption. The two engines implementing are Webkit and Chrome

chrislorenzo: I think you're right, I'd like to be able to build an a11y tree, so AOM seems like what we want
… We have the same concepts, active elements such as buttons, with properties
… That would be relatively easy to implement

matatk: It could be possible to write your code so that when AOM becomes available you can use it
… It's being used in Web Platform Tests to test browser internals now
… WCAG compliance is a vital benchmark, treat it as a minimum
… Some developers, without UX guidance, want to provide as much information as possible, to be helpful. But that's not always the right thing to do
… I think you're on the right lines

jasonjgw: For 2D canvas we currently have support for associating ARIA roles with the canvas element, but no corresponding support for 3D canvas
… The only solution is to create hidden HTML elements
… From a standards point of view, I'm concerned about the TV use cases, but there the OS provides the assistive technologies
… but if this comes to more general browser environments, suddenly you're deprived of the benefits of HTML
… We'll need to consider to what extent to recommend HTML or SVG as fallback, that conversation hasn't happened yet
… AOM, when I last looked, your ability to create your own tree nodes was later in their plan, so probably not implemented
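The hidden-HTML fallback mentioned above can be as simple as placing focusable elements inside the canvas element: they are not painted, but they are exposed to the accessibility tree and can receive focus while the visual UI is drawn on the canvas. A minimal sketch (element names and IDs are illustrative):

```html
<!-- Fallback content inside <canvas> is not rendered visually, but is
     exposed to assistive technology; the app keeps these proxy
     elements in sync with what it draws on the canvas. -->
<canvas id="tv-ui" width="1280" height="720">
  <button id="play">Play</button>
  <button id="settings">Settings</button>
</canvas>
```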

chrislorenzo: It'll be more of a problem in the future with WebGPU, WASM, especially on TV devices where performance is a concern

nigel: you can create DOM objects in JS, but that doesn't address the performance question

matatk: It sounds like the accessibility is the easy part. The hard part is going to the different WGs and explaining the performance issues, but please do
… we want you to use it across all devices

+1

DVB liaison on accessibility signaling

nigel: W3C received a liaison from DVB
… on signalling and accessibility services. DVB specifies broadcast related technologies for getting audiovisual media to TVs
… They shared some commercial requirements, and lists of things they want broadcasters to signal - the accessibility components
… so users can decide if they want to consume that content or match to user preferences
… Various audio options

DVB liaison on member-tt

nigel: they want feedback on the list of accessibility services
… I introduced CTA to a group in the ITU who are working on a profile exchange format for expressing a user's accessibility requirements
… A portable document that expresses the user's needs
… CTA got in touch with ITU to tell them about a similar document being prepared in CTA
… It's challenging, as it's sensitive data about the user, and also because it seems difficult to come up with a list of the accessibility services
… That's where APA could help
… Also, design an algorithm to match up the user preferences to the available content options
… Having some cross industry agreement on how an algorithm would work, seems valuable. But needs careful review on how to do that
… One specific thing that's tricky is, when it comes to a11y services such as alternate audio mixes
… it's hard to work out, particularly with object based media coming where audio and video objects are combined in the client
… that's even more difficult
… Let's coordinate to produce a single W3C response across the groups that received the liaison

janina: We've been considering portable profiles in FAST, trying to enumerate functional needs and build useful profiles
… and analysing edge technologies (a11yEdge CG), we created an initial inventory
… where there's a similar need for general web content, to avoid users having to reconfigure
… Make it privacy preserving, use DIDs to do that
… But you'd be able to match a set of desired features against available content
… Would be nice to find out those options before starting playback

matatk: We need to figure out how to respond. I'm pleased the work is going on, and for Nigel's thoughtful analysis
… Where do we take it to respond?

Francois: Usually groups respond to liaison statements. It's enough for groups to agree among themselves

Nigel: Which groups have received it?

Roy: APA, Timed Text, MEIG

matatk: So we'll coordinate, and talk offline

Nigel: Could the Team to take the lead on coordinating?

Francois: Groups can each send their own response. So it makes sense for the team contacts to collect the responses
… I'll coordinate with the team contacts

jasonjgw: If object based media is coming to the web, we'll have to look at the capabilities and accessibility requirements
… I don't have any reference on those object based formats yet

Web of Things

<kaz> matatk: joint discussion on the collection of use cases

<kaz> ... there is a TF of APA WG, Research Questions TF

<kaz> wot-usecases issue 226

<kaz> mm: would like to filter out what we really can do

<kaz> matatk: great

<kaz> ... accessibility for WoT deliverables

<kaz> WoT use cases on APA wiki

<kaz> ... would like to see which one is applicable

<kaz> matatk: general sort of use cases on climate, etc.

<kaz> ... compatible with home automation service

<kaz> ... another issue is onboarding

<kaz> mm: we can't constrain the UI for apps

<kaz> ... would be easier to use if we provide UI for voice, etc.

<kaz> matatk: yes, would be good

<kaz> ... accessibility consideration section is really good

<kaz> ... might not be in scope of mandatory technology

<kaz> ... people generally rely on their own preferences

<kaz> ... make devices more helpful to people

<kaz> ... we have some academic expert as well

<kaz> ... also have Jason

<kaz> jason: clarify the scope of the work of WoT

<kaz> ... involved in the technology work but also involved in accessibility purposes as well

<kaz> ... we have broad scope

<kaz> ... what W3C WoT should take into its consideration?

<kaz> ... there are different audiences

<kaz> ... probably larger scope than the original one

<kaz> seb: would provide some more technical background as well

<kaz> ... Thing Description is a landing page for various IoT devices

<kaz> ... should be nice for accessibility purposes

<kaz> ... the chance to provide semantic interoperability too

<kaz> ... how to interoperate with each other by following the Thing Description

<kaz> ... rely on some expectation of how to control the device

<kaz> ... e.g., voice interface

<kaz> ... should be possible

<kaz> ... not rely on the device providers but we ourselves

<kaz> ... if the device providers can provide Thing Description, would allow us to handle the devices in more open manner

<kaz> ... heavily depends on device vendors, though

<kaz> mm: yeah

<kaz> ... Thing Description is a variation of JSON-LD

<kaz> ... so could use RDF to extend the capability

<kaz> ... separate group could do that

<kaz> ... SSN can be also applied

<kaz> ... very interesting ontologies to be attached

<kaz> ... we don't know what physical capability of each device

<kaz> ... but could use ontology for that purpose

<kaz> ... mode of Thing Description currently

<kaz> ... there is a way to add annotation

<kaz> matatk: there is a metadata to be used

<kaz> ... you can make devices accessible

<kaz> ... for example, EPUB also could improve accessibility

<kaz> ... the process mixing the capability

<kaz> ... it's very aligned with our experts as well

<kaz> ... getting some way to make the entire mechanism

<kaz> ... we also got from JSON-LD and RDF viewpoint

<kaz> ... accessibility extensions

kaz: WoT TD is metadata to describe devices capabilities
… Can be combined with any existing accessibility mechanism
… I've been working with voice browser and multimodal groups, with Janina and Michael Cooper.
… Lots of opportunities for existing standards to support accessibility use cases. Important for WoT 2.0 charter.
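For context, a minimal Thing Description along the lines kaz describes might look like this; everything here is invented for illustration (the `@context` URI is the TD 1.1 context, and the device, property, and URL are hypothetical):

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "Door Sensor",
  "description": "Illustrative Thing Description; all values are invented",
  "security": ["nosec_sc"],
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "properties": {
    "open": {
      "type": "boolean",
      "readOnly": true,
      "forms": [{ "href": "https://device.example/door/open" }]
    }
  }
}
```

Because a TD is JSON-LD, annotations like the accessibility-related ontologies discussed here could be layered on via additional context terms.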

<Zakim> dezell, you wanted to suggest TD adjustment

<kaz> dezell: we talked about door sensor

<kaz> ... may actually have properties about human interface

<kaz> ... standard qualification around actions?

<kaz> ... some of them are controlled outside of human interfaces

<kaz> ... door sensor here as an example

<kaz> ... one of the requirements is preset password

<kaz> ... candidate for a human interface device

<kaz> ca: basic observation

<kaz> ... use of ontology to link related resources

<kaz> ... wanted to link the discussion on linting too

<kaz> ... might be possible for developers to use linter

<kaz> ... for accessibility as well

<kaz> kaz: btw, "linter" is a kind of validator or checker :)
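
The linter idea raised here could be sketched as a toy checker over a TD dict. The specific checks (missing `title` and `description` fields that assistive tech could announce) are illustrative assumptions, not rules from any published accessibility guideline:

```python
# Toy "accessibility linter" sketch for Thing Descriptions.
def lint_td(td: dict) -> list[str]:
    """Return a list of human-readable warnings for a TD dict."""
    warnings = []
    if not td.get("title"):
        warnings.append("TD has no 'title' for assistive tech to announce")
    if not td.get("description"):
        warnings.append("TD has no 'description'")
    for name, prop in td.get("properties", {}).items():
        if not prop.get("description"):
            warnings.append(f"property '{name}' has no 'description'")
    return warnings

example = {"title": "Door Sensor", "properties": {"open": {"type": "boolean"}}}
for w in lint_td(example):
    print("warning:", w)
```

A real linter would consume the official TD JSON Schema and WAI guidance; this only shows the shape of the idea.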

<kaz> matatk: like the idea

<kaz> ca: related to the mechanism McCool mentioned

<kaz> mm: thinks a similar kind of strategy would work

<kaz> ... guidelines to follow the profile, etc.

<kaz> ... using public exposure

<kaz> ... we could summarize the discussion to move ahead

<kaz> matatk: accessibility consideration section to be considered

<kaz> mm: probably should look that closer

<kaz> ... need more detail?

<kaz> ... we have ontology or best practices on voice assistants

<kaz> ... let me ask about detail

<kaz> ... onboarding

<kaz> ... we have constraints

<kaz> ... generally hard to process

<kaz> ... only suggest best practices

<kaz> ... very often we have a hub for dashboard

<kaz> ... could think about accessible dashboard

<kaz> ... dealing with the context like languages

<kaz> ... should describe what's needed

<kaz> ... would like to clarify work items

<kaz> ... e.g., voice interface with AI

<kaz> matatk: should switch to that

<kaz> mm: propose some actions too

<kaz> matatk: fantastic

<kaz> mm: regular meeting to be organized

<kaz> (Janina has just arrived)

Applications of LLMs in IoT

<kaz> mm: talk about voice interface and AI

<kaz> ... using AI to generate TD

<kaz> ... we can use documentation for devices

<kaz> ... MS is interested

<kaz> ... generate other codes for voice agents as well, e.g., Alexa

<kaz> ... voice systems connected with IoT devices

<kaz> ... suitable AI systems on the local side

<kaz> ... these days voice agents have good quality

<kaz> ... some technical work needed

<kaz> ... think it's very promising

<kaz> ... AIs have no problem handling JSON

<kaz> ... need a few incubation projects

<kaz> ... can make it better

<kaz> ... need appropriate information within TD

<kaz> ... end-user documentation needed
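
The validation concern implied here (a machine-generated TD is untrusted output and should be checked before use) can be sketched with a stdlib-only required-field check. The field list below is an illustrative assumption, not the normative TD validation rules:

```python
import json

# Fields an illustrative post-generation check might require.
REQUIRED_FIELDS = ("@context", "title", "security", "securityDefinitions")

def validate_generated_td(raw: str) -> tuple[bool, list[str]]:
    """Parse a generator-produced JSON string and report missing required fields."""
    try:
        td = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, [f"not valid JSON: {e}"]
    missing = [f for f in REQUIRED_FIELDS if f not in td]
    return (not missing), [f"missing field: {f}" for f in missing]

ok, problems = validate_generated_td('{"title": "Fan"}')
print(ok, problems)
```

In practice the output would be validated against the official TD JSON Schema; this sketch just shows where such a gate sits in the pipeline.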

<kaz> matatk: let me check my understanding

<kaz> ... very interesting

kaz: Thanks McCool for the info. W3C should work on standards for speech agent implementation (interop between them; how to change the voice and so on).
… We need the interop work in parallel with this work.
… There are many components and layers, and each requires more standardization.

<kaz> janina: air flow is terrible sometimes

<kaz> ... trying to configure the air conditioner

<kaz> ... need to read the text

<kaz> ... can talk with that nicely

<kaz> ... for some reason, can't handle it remotely

<kaz> ... would like to turn off it or change the temperature

<kaz> mm: people from manufacturers are working on improvement

<kaz> ... but the problem is technical expertise required

<kaz> ... easier set up for smart home purposes needed

<kaz> ... you can do a lot if you have expertise

<kaz> ... but not easy now

<kaz> ... kind of chicken and egg problem

<kaz> ... nice thing is AI could handle complicated part

<kaz> janina: seems good

<kaz> mm: yeah

<kaz> ... the problem to solve is interoperability of devices

<kaz> matatk: really interesting stuff

<kaz> ... wanted to relay principles on ethical machine learning

<kaz> ... shifting the burden

<kaz> ... e.g., auto-generated captions on YouTube videos

<kaz> ... sometimes good, sometimes dangerous

<kaz> ... if you rely on the captions, verifying the content is important

<kaz> ... seems to me generating things would be good

<kaz> ... and generating TD would be also good

<kaz> ... but there is a risk

<kaz> mm: generating TD could be totally automated

<kaz> ... we need to think about how to validate it

<kaz> ... Digital Twins is another viewpoint

<kaz> ... need to investigate problems

<kaz> sk: want to add that AI would be helpful

<kaz> ... may not be much context there

<kaz> ... AI can be used to handle requests

<kaz> ... e.g., additional explanation to the users

<kaz> ... can be also applied to TD

<kaz> ... to get more information

<kaz> mm: translation is also good

<kaz> ... let's wrap-up

<kaz> ... 1. use cases

<kaz> ... we're restarting use cases discussion

<kaz> ... should capture some accessibility use cases

<kaz> ... regular discussion, maybe once a month

<kaz> ... requirements arising

<kaz> ... 2. should discuss exploratory topics

<kaz> ... there is the WoT-IG in addition to the WoT-WG

<kaz> ... define a project on ontology

<kaz> ... should discuss that

<kaz> ... recruit people who can work on that

<kaz> matatk: sounds good

<kaz> janina: sounds right

<kaz> matatk: thanks for coming

<kaz> ... really good discussion

<kaz> ... we love the accessibility consideration section

<kaz> sk: looking forward to proceeding with this direction

<kaz> mm: fyi, we'll have new/commercial use cases session tomorrow at 5:30pm

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

Diagnostics


Maybe present: Avatar, chrislorenzo, cpn, Device, Francois, harry, jasongw, kaz, nigel, Perception

All speakers: Avatar, chrislorenzo, cpn, Device, Fazio, Francois, harry, harry_, janina, jasongw, jasonjgw, johnriv, kaz, matatk, nigel, Perception, Roy

Active on IRC: aciortea, AvneeshSingh, CharlesL, cpn, cris_, dape, dezell, Fazio, harry, Irfan, jasonjgw, JohnRiv, kaz, ktoumura, mahda-noura_, matatk, McCool, minyongli, mitch11, Mizushima, nigel, Roy, sebastian, tidoust