Meeting minutes
AT Automation Kickoff
ST: how should we make the meetings most effective? how can people know when they need to attend?
James Scholes: we try to keep all topics as issues, then all meeting notes become updates on issues (in case minutes are not complete)
ST: what about sprint updates?
JS: we tried doing that in issues; now we have direction from Matt to basically just go through the APG and test everything, so there's not the same requirement to open it for discussion...
ST: let's start with issues for open discussion topics or topics that cross over to other workstreams. still need to figure out how sprint updates fit into issues (esp. since at automation may have separate repos)
<michael_fairchild> MF: before we move to the next topic, can we get an overview of the goals and timeline for this project?
<michael_fairchild> ST: Goal 1 - Productionize the NVDA prototype. Get it working on ARIA-AT Tests and fully integrate it. Test ARIA-AT end-to-end with NVDA.
<michael_fairchild> ST: Goal 2 - explore ways forward with JAWS
<michael_fairchild> ST: Goal 3 - explore VO on mac/iOS
<michael_fairchild> ST: By the end of the year, we should have enough info to properly scope work on developing JAWS and VO automation
<michael_fairchild> ST: Goal 4 - draft a protocol around the AT driver, the same way there is a protocol for the web driver.
<michael_fairchild> ST: The bulk of our work will be with NVDA, and toward the end of the year we will further explore JAWS and VO.
<michael_fairchild> ST: right now, we care more about getting one AT complete than trying to juggle multiple ATs.
AT Automation cross-AT Windows driver research
jugglinmike: worked the past few weeks on figuring out how we can share as much code as possible between NVDA and JAWS
jugglinmike: Simon's NVDA prototype from last year relied on an NVDA-specific framework to work. we'd like to implement with an approach that is cross-AT
jugglinmike: shows a JSON-formatted version of a test file for checkbox
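The exact schema of the JSON test file wasn't captured in the minutes; a sketch of what a checkbox test might contain, with all field names and values assumed for illustration:

```json
{
  "task": "navigate to the first checkbox",
  "commands": ["Tab"],
  "assertions": [
    "Role 'checkbox' is conveyed",
    "Name 'Lettuce' is conveyed",
    "State 'not checked' is conveyed"
  ]
}
```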
jugglinmike: shows code snippet in VSCode that interacts with the Windows Speech API, a common subsystem shared by all ATs
jugglinmike: shows a Windows "Voice" prototype that can be registered with the Windows Speech API and capture textual information instead of synthesizing a waveform
jugglinmike: currently, interacting with the Windows Speech API requires writing C++; we'd like to use a higher-level language to process and perform business logic on the output of the Speech API
jugglinmike: shows node.js code that is interpreting JSON-formatted test file to perform business logic assertion checks
jugglinmike: node.js code also uses webdriver to launch browser and navigate to page. also uses robot.js to automate end-user input / keypresses
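The assertion-checking logic wasn't shown in detail; a minimal node.js sketch of the idea, assuming simple substring matching against the speech text captured by the automation voice (the test object and matching strategy here are illustrative, not the actual prototype code):

```javascript
// Hypothetical test definition; real ARIA-AT tests have richer structure.
const test = {
  title: "Navigate to an unchecked checkbox",
  assertions: ["checkbox", "Lettuce", "not checked"],
};

// Check each expected phrase against the captured speech output.
function checkAssertions(capturedSpeech, assertions) {
  const speech = capturedSpeech.toLowerCase();
  return assertions.map((expected) => ({
    expected,
    pass: speech.includes(expected.toLowerCase()),
  }));
}

// NVDA-style output satisfies every assertion; Narrator-style output
// fails the last one, since "unchecked" does not contain "not checked".
const nvda = checkAssertions("Lettuce checkbox not checked", test.assertions);
const narrator = checkAssertions("Lettuce checkbox unchecked", test.assertions);
console.log(nvda.every((r) => r.pass));     // true
console.log(narrator.every((r) => r.pass)); // false
```

This kind of exact-phrase matching is also what makes the Narrator demo below fail on wording alone, which is why the assertion matching strategy comes up as a next step for the test format.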
jugglinmike: shows demo on the command line. turns on NVDA with the new Automation Voice (instead of Microsoft David). runs the test from the terminal... script opens Firefox with webdriver, loads the aria-at example, exits without error
jugglinmike: now shows the same thing using Narrator! with no code changes. this time it fails, but only because Narrator output "lettuce checkbox unchecked" while the test expected "not checked". so the test automation worked, and correctly showed us that Narrator does not pass this aria-at test as written
JS: want to raise the issue that any AT users who want to test this code or contribute to this driver would need the text to be vocalized in order to proceed. want to make sure we prioritize adding voice output as a necessary feature, not a long-term wishlist item
westont: is there an issue with vocalizing speed? I think there was a now-fixed bug on iOS which only surfaced at a certain speed.
michael_fairchild: hard to say. we really can't know until we get started.
michael_fairchild: where does this code live?
jugglinmike: currently on bocoup's github. can open issue on aria-at repo to track until we have a proper w3c repo
s3ththompson: next steps for test format?
JS: determining the assertion matching strategy