Web Of Things Architecture Spec Review
The following is an initial review of the Web Of Things Architecture spec by Joshue O'Connor (APA Working Group) during July 2019. It outlines each 'vertical' that the spec references and asks related accessibility questions of same. Discussed in RQTF meeting July 2019.
Overview of Binding Templates, Scripting API, Security and Privacy
The WoT Thing Description is the central building block, as it describes the metadata and network-facing interfaces of Things.
The informational WoT Binding Templates provide guidelines on how to define so-called Protocol Bindings for the description of these network-facing interfaces, and provide examples for a number of existing IoT ecosystems and standards.
The WoT Security and Privacy Guidelines represent a cross-cutting building block, which should be applied to any system implementing W3C WoT. They focus on the secure implementation and configuration of Things.
This specification also covers non-normative architectural aspects and conditions for the deployment of WoT systems. These guidelines are described in the context of deployment scenarios.
Overall, the goal is to preserve and complement existing IoT standards and solutions. In general, W3C WoT is designed to describe what exists rather than to prescribe what to implement.
Q: To what degree is accessibility an implementation detail of WoT?
There is a question about how much support for accessibility, in terms of multimodal user requirements, is an implementation detail. That raises the question: "What is needed in the WoT Architecture spec to support accessibility implementations?"
Q: What are useful examples of Linked Data Vocabulary that can provide semantic accessibility extensions to thing descriptions?
What languages can we use in an extensible way: only JSON, or also HTML/ARIA? Try to build some examples. Or could we try something with Digital Twins?
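As a concrete starting point for the Linked Data question, a Thing Description could be extended with an accessibility vocabulary via its JSON-LD @context. The sketch below is purely illustrative: the "a11y" context URL and the "a11y:outputModalities" term are hypothetical assumptions, not part of the TD specification or any real W3C vocabulary.

```python
import json

# A minimal WoT Thing Description (JSON-LD) extended with a
# HYPOTHETICAL accessibility vocabulary. The extension context URL
# and its terms are illustrative assumptions only.
td = {
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        {"a11y": "https://example.org/wot-a11y#"},  # hypothetical extension
    ],
    "title": "SmartLight",
    "properties": {
        "status": {
            "type": "string",
            # Hypothetical annotation: which output modalities this
            # affordance can be rendered in for assistive clients.
            "a11y:outputModalities": ["speech", "symbol-set", "haptic"],
            "forms": [{"href": "https://light.example.com/status"}],
        }
    },
}

# A TD stays plain JSON, so the extension round-trips through serialisation.
print(json.dumps(td, indent=2))
```

Because JSON-LD contexts are additive, a consumer that does not know the extension can simply ignore the "a11y:" terms, which is one reason this route may be preferable to inventing a parallel description format.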
- Q/Note: I'm not sure about the ERCIM implementations, or whether the reference to "Other example applications include remote control of a cyber-physical system, a digital twin for a smart light, and multi-channel data streaming for remote diagnosis of cardiac problems. All of these applications combine an exposed thing with a web page for the associated user interface." means that the Area Client/Webhub are implementations of these examples, or that they could be used to facilitate these examples.
- Q: Do we want to suggest that dual-role sensors are 'accessibility aware' clients?
In that they are somehow made aware of a person with a disability needing support, multimodal UIs, or similar? The same question applies to the Overview of Device Controllers.
Q: Accessibility questions about Thing-to-Thing descriptions
- Q: How semantically capable are digital twin descriptions in supporting accessibility? While low level, they may provide a background data layer that could support an accessibility-related architecture.
- Q: Can thing descriptions provide accessibility-related flags that trigger supports for people with disabilities in sensor-aware or other environments?
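One way such a flag could be consumed: a client matches the user's preferred modalities against annotations it finds in a Thing Description affordance. This is a sketch only; "a11y:outputModalities" and the modality names are hypothetical assumptions, not defined by the TD specification.

```python
from typing import List, Optional

def pick_modality(affordance: dict, user_prefs: List[str]) -> Optional[str]:
    """Return the first user-preferred modality the affordance advertises.

    `a11y:outputModalities` is a HYPOTHETICAL annotation term; a real
    deployment would need an agreed vocabulary for it.
    """
    offered = affordance.get("a11y:outputModalities", [])
    for pref in user_prefs:
        if pref in offered:
            return pref
    return None  # no match: the client could fall back to a default UI

# Example: a deaf user prefers signed video, then symbol sets.
affordance = {"a11y:outputModalities": ["speech", "symbol-set"]}
print(pick_modality(affordance, ["signed-video", "symbol-set"]))  # symbol-set
```

The point of the sketch is that the architecture only needs to carry the flags; the negotiation itself can live entirely in the consuming client.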
The following is a review of each of the presented use cases in the architecture document and questions it raised about accessibility that is relevant for that use case.
Many of the technologies powered by the WoT architecture can be made accessible by a well-built UI, and much of this may be taken care of at the application level. But are there aspects of Smart Cities where this is not the case and the architecture needs specific structures that support a11y?
For example, consumer use cases and the 'Smart Home'. In this case, gateways are connected to edge devices such as sensors, cameras and home appliances through corresponding local communication protocols such as KNX, ECHONET, ZigBee, DECT ULE and Wi-SUN. Multiple gateways can exist in one home, while each gateway can support multiple local protocols.
- Q: Do the edge devices need specific protocols that can support multimodal accessibility requirements?
- Q: When they detect input/translate that into output via communication protocols - are current protocols sufficient for multimodal accessibility requirements?
Smart home provides consumer benefits such as remote access and control, voice control and home automation. Smart home also enables device manufacturers to monitor and maintain devices remotely. Smart home can realize value added services such as energy management and security surveillance.
- Q: Can smart home devices provide feedback and state/purpose information to the user, and not just be controlled by voice? Can symbol sets be used as feedback mechanisms, or signed videos that relay the same information to deaf or hard-of-hearing users?
This is a potentially rich area for use case generation for WoT powered tech.
For example, someone with a disability may wish to monitor themselves and use sensors etc. to do so.
- Q: Do the sensors need protocols that support multimodal accessibility requirements?
A user may need to be informed to take medication, if blood sugar is low or some other physical condition needs attention, in a modality that supports their needs.
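A minimal sketch of that idea: a monitoring rule fires when a reading crosses a threshold and renders the reminder in the user's preferred modality. The threshold value, modality names and rendering strings are all illustrative assumptions, not medical or spec-defined values.

```python
from typing import Optional

# Illustrative renderings of one alert in different modalities.
# A real system would hand off to a TTS engine, symbol library, etc.
RENDERERS = {
    "speech": lambda msg: "[spoken] " + msg,
    "symbol-set": lambda msg: "[symbol: medication] " + msg,
    "haptic": lambda msg: "[vibration pattern: long-short-long]",
}

def check_blood_sugar(mmol_per_l: float, modality: str) -> Optional[str]:
    """Return a rendered reminder if the reading is low, else None.

    4.0 mmol/L is an ILLUSTRATIVE threshold, not medical guidance.
    """
    if mmol_per_l < 4.0:
        return RENDERERS[modality]("Blood sugar low: take glucose now")
    return None

print(check_blood_sugar(3.2, "symbol-set"))
# [symbol: medication] Blood sugar low: take glucose now
```

The rule logic is identical across modalities; only the final rendering step changes, which is the sort of separation an accessibility-aware architecture could standardise.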
Environment monitoring typically relies on a lot of distributed sensors that send their measurement data to common gateways, edge devices and cloud services.
Monitoring of air pollution, water pollution and other environmental risk factors, such as fine dust, ozone, volatile organic compounds, radioactivity, temperature and humidity, to detect critical environmental conditions can prevent unrecoverable health or environmental damage.
- Q: Do the sensors need protocols that support multimodal accessibility requirements that can inform the user of some impending danger in a way/form that they can understand or require?
This is an area that is potentially rich with use cases.
Smart Parking optimises and tracks the usage and availability of parking spaces and automates billing/reservations. Smart control of street lights based on presence detection, weather predictions, etc. reduces cost. Garbage containers can be monitored to optimise waste management and the trash collection route.
- Q: Could waste containers contain transducers that broadcast their location, so a blind user knows they are on the path/road when they get near? Symbol sets or other alerts could be used in the UI.
This aims to monitor resource usage and minimise waste.
- Q: Could a user enter a destination into a device within a building, and could the building sensors then update the user constantly as to where they are in relation to that room/floor?
Potentially rich with use cases - discuss with RQTF.
@@ More use case feedback and questions will be inserted here
WoT Implementation report and Test Results
There is a [https://w3c.github.io/wot-thing-description/testing/report.html WoT Implementation Report] that may have some potential for accessibility-related use cases. There is a table in the document, but it is very hard to parse the results. They seem to relate to aspects of TDs themselves and do not contain useful implementation details, or qualitative data from the results of the tests themselves.
However, the WoT group has said that, for details on the implementations, there are some write-ups under the plugfests and the testfest (testing/online) directories; these need to be collected and better organised. There is also an “object identifier” by Michael McCool (Intel), an AI (vision) device that might be a useful accessibility device for blind people.