Meeting minutes
Matt_King_: I didn't make an agenda because it looks like there are still quite a few unresolved conflicts in the test queue, and I'd like to focus on getting those resolved today
Matt_King_: We have other stuff waiting to add to the test queue, so we'll talk about that after
Matt_King_: Is there anything else that should be on the agenda?
Joe_Humbert: I reported a few issues. They are related to one of the test plans, but they're higher-level
Joe_Humbert: I don't know if they impact other test plans
Matt_King_: Got it. We'll add that to the agenda
Joe_Humbert: Also, Testers can no longer update the version of the software that they use. When I went to test, it was locked to the version that the bot used
Joe_Humbert: I did not raise an issue for this
Matt_King_: I noticed that when I opened a Safari test plan on Windows, it didn't even tell me I was using the wrong browser
Matt_King_: I think there used to be a button on the test page where you could change the version information about the browser, anyway... And I guess the AT, as well
Matt_King_: Do you mind raising an issue for that?
Joe_Humbert: Sure
Matt_King_: Thanks, then we can get that tracked and get it into the right place
Bot issues
VoiceOver bot providing incorrect output when testing Action Menu Button Example Using element.focus() (Test 1, V24.10.31)
github: w3c/
Matt_King_: This is the issue Joe_Humbert was referencing just now. I've renamed it to be focused on the bot
Joe_Humbert: I have updated the AT response reported by the bot, is that a problem?
howard-e: That's fine. Whatever we need, we can find in the logs
IsaDC: I have NOT updated the output reported by the bot for the test plan run assigned to me
Joe_Humbert: This is limited to the VoiceOver Bot. The AT responses reported by the NVDA Bot have been accurate
<Joe_Humbert> w3c/
Matt_King_: We use different OS APIs for different screen readers. I'm wondering at which layer these problems exist, and if it's the screen reader, if that could make it hard to solve the problem this way
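As a rough illustration of the layering Matt_King_ describes, the bot's capture path for each screen reader can be modeled as a stack of layers, only the last of which is shared. All names here are hypothetical and are not the actual ARIA-AT automation code; this is just a sketch of why accurate NVDA results narrow the suspects for VoiceOver:

```python
# Hypothetical sketch: each screen reader's output is captured through a
# different OS-level API, so a bug can live in the screen reader itself,
# in the OS speech-capture layer, or in the bot's shared adapter code.
CAPTURE_LAYERS = {
    "NVDA": ["NVDA add-on", "Windows speech capture", "shared bot adapter"],
    "VoiceOver": ["macOS Accessibility API", "macOS speech capture", "shared bot adapter"],
}


def possible_fault_layers(screen_reader: str, nvda_ok: bool) -> list:
    """If NVDA's reported responses are accurate but VoiceOver's are not,
    the shared bot code is less suspect than the macOS-specific layers."""
    layers = CAPTURE_LAYERS[screen_reader]
    if nvda_ok and screen_reader == "VoiceOver":
        # The shared adapter also serves NVDA, which works, so suspect
        # the VoiceOver-specific layers first.
        return layers[:-1]
    return layers


print(possible_fault_layers("VoiceOver", nvda_ok=True))
```

Under this (assumed) layering, Joe_Humbert's observation that the NVDA bot is accurate points the investigation at the macOS-specific layers rather than the shared bot logic.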
Testing status
Matt_King_: We have 8 plans in candidate review, 3 plans in draft review, and 2 plans ready to be added to the queue (I just don't want to add more to the queue until we're a little more caught up with the testing)
IsaDC: I agree on waiting
Matt_King_: This morning, I spent my time on conflicts rather than the slider test plan. We'll get it into the "ready to test" soon
Matt_King_: The question right now is: how far can we get these three waiting in the queue out of the queue?
Action menu button with element.focus
IsaDC: All the testers are here
IsaDC: I'm planning on finishing my testing by the end of this week
IsaDC: Luke from PAC told me that he will be testing during this week
IsaDC: And Joe_Humbert already finished
Matt_King_: So we could get this wrapped up this week
Matt_King_: I suppose that if there are any conflicts, you and Luke can work it out this week, IsaDC
IsaDC: Yes, and I can publish once it's ready
Matt_King_: Cool, that's simple. Thank you, IsaDC
Disclosure
Matt_King_: We have two screen readers totally done, and we're 85% complete with VoiceOver
IsaDC: The conflicts reported are the same across test plans
IsaDC: I'm going to re-run my tests because it seems to be an issue with the previous version of either Safari or VoiceOver
IsaDC: If I get the same results, I'm going to ask somebody else to run them, as well
Matt_King_: Dean, what version of macOS were you running when you got your results?
Dean: I was on 14.5, as I recall, which I believe is what the test specifies we were supposed to be using
Dean: I did upgrade. I don't have a comparison of the same test before and after the upgrade, but my experience matches IsaDC. It looks like the problem was addressed
Dean: I got through everything but three tests on the disclosure yesterday
Matt_King_: Last week, we were talking about test 14. Now, I see there are no conflicts in test 14.
Matt_King_: It was a case where some folks were getting output and other folks were not
Matt_King_: How did that get resolved? I was looking at that issue before the meeting, and the issue was still open. When I look at the conflicts page, there is no conflict for test 14
IsaDC: I think it got fixed with the latest version, as well
Matt_King_: I don't know if it's fixed. It says that there's no output.
IsaDC: Ah, not "fixed", but there is no longer a conflict
Matt_King_: The AT responses are now consistently bad
Dean: Correct
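In other words, the conflicts page compares testers' results to each other, not to a correct answer: uniformly empty output reads as "no conflict" even though it may still be a support failure. A minimal sketch of that distinction (hypothetical representation, not the actual app's data model):

```python
def has_conflict(reported_outputs: list) -> bool:
    """A conflict exists only when testers disagree with each other.
    'Consistently bad' results (e.g. everyone reporting no output)
    produce no conflict, even though support may still be broken."""
    return len(set(reported_outputs)) > 1


print(has_conflict(["", ""]))            # everyone reported no output
print(has_conflict(["collapsed", ""]))   # testers disagree
```

This is why test 14 can drop off the conflicts page without the underlying issue being fixed.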
Dean: I didn't re-test test number 14, though. I had done that before I upgraded. If I need to go back and run test 14 again, I will. Perhaps the new version fixed it and my results from before are simply out of date
Dean: I can run test 14 right now as an example...
Dean: But I saw several issues with the bot. I did kind of destroy some evidence as I went along because I changed what the bot reported to reflect what I observed locally
Matt_King_: Did you report that as an issue?
Dean: No. This is something I noticed last week, and I thought that I was just imagining it.
Dean: I will go back to test 4, raise an issue, paste in what the bot said and what I observed locally
jugglinmike: Along the lines of Joe_Humbert's report, could you re-title that to make it clear that the problem is likely with the bot rather than with the test itself?
Dean: Will do
Dean: We have conflicts on tests 3, 4, and 9
IsaDC: I have upgraded, and I will be changing my results. That doesn't guarantee that the conflicts will go away. Something tells me that I will be running the whole test plan again
Matt_King_: I think it would be unrealistic to expect that we will get this all resolved asynchronously. We'll likely be talking about this in the next meeting on December 4
Dean: I will have those done before tomorrow
Dean: I think what IsaDC reported initially--I think she probably got the correct test results and reported them correctly. I think Apple fixed something in between versions. I expect that she will observe different results without conflicts when she re-runs them
Matt_King_: I think that's it for this test plan. That's all we can talk about and resolve at this moment. There's just some more work to do, here
Navigation menu button
IsaDC: This is the one with the hints
Matt_King_: Right, but last week, we decided on a path forward
Matt_King_: We have 10 conflicts with VoiceOver, and we're not totally done with JAWS yet, either
Matt_King_: Hadi has two tests done. He is not present today
IsaDC: I have completely finished this with JAWS
Matt_King_: I didn't look at THESE conflicts this morning
IsaDC: Do we have conflicts? Oh, dear
Matt_King_: Hadi is recording the JAWS tutor message, and you didn't record the JAWS tutor message
Matt_King_: That shouldn't cause a conflict on its own, I guess...
Matt_King_: Hadi has rendered a passing verdict for the assertion regarding the "collapsed" state, but that information is definitely not in the AT response he reported
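A mismatch like this can often be caught mechanically: a passing verdict implies the asserted information appears somewhere in the reported AT response. A minimal sketch of such a check (hypothetical helper name, naive substring matching rather than whatever the real app uses):

```python
def verdict_is_consistent(verdict: str, assertion_phrases: list, at_response: str) -> bool:
    """Flag a 'pass' verdict whose asserted content never appears in the
    reported AT response (e.g. a pass on the 'collapsed' state assertion
    when 'collapsed' was never spoken)."""
    response = at_response.lower()
    conveyed = any(phrase.lower() in response for phrase in assertion_phrases)
    # A pass is only consistent if the asserted content was conveyed;
    # any other verdict is not checked by this rule.
    return verdict != "pass" or conveyed


# Hadi's case: a pass for the collapsed-state assertion, but the
# reported response never mentions the collapsed state.
print(verdict_is_consistent("pass", ["collapsed"], "Actions, menu button"))
```

A check along these lines could surface verdict/response mismatches before they reach human conflict review, though mapping assertions to expected phrases is the hard part in practice.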
IsaDC: I have noticed that with JAWS, sometimes it's easy to tick the wrong radio button. That has happened to me right after we switched from radio buttons to check boxes
Matt_King_: I found it easier with the radio buttons, myself, because it clearly says "yes" or "no"
IsaDC: Sometimes JAWS checks the wrong one
james: I have some concerns about that UI change. I know that ultimately, we don't have a binary state. But check boxes seem less prone to error
Matt_King_: Actually, I feel that radio buttons are less prone to error
Matt_King_: If Luke is available for this test plan, that's potentially really helpful. If Hadi isn't able to wrap it up before Thanksgiving
Matt_King_: I want to get this out there to collect feedback from Apple
Matt_King_: IsaDC, could you ask Hadi if he'll be able to get this done before Thanksgiving? If not, could you ask Luke to take over?
Matt_King_: So that's JAWS. Now, for VoiceOver
Matt_King_: There are 10 conflicts. Are all of these conflicts related to the hints?
IsaDC: They are
Matt_King_: Then we can fix this up
Matt_King_: Dean, are you okay if IsaDC and I edit your results to be consistent with our discussion last week?
Matt_King_: That is to say: it doesn't pass if it's only conveyed in this hint
Dean: I didn't know that we arrived at a decision
Matt_King_: Yeah, we arrived at it last week, and I took an action item to document the resolution. We will record the hints, but we will not consider the content of hints when rendering verdicts
Dean: I'm cool with that. Change away
Matt_King_: Okay
Matt_King_: There is one part of the rationale that we discussed last week which is slightly murky...
Matt_King_: In the rationale, we said that we can't consider it part of the assertion verdict because, if hints are turned off (which they often are in screen readers), the required information might not be present in the base output
Matt_King_: What if the assertion is optional? Then it really doesn't matter if the hint is on or off
Matt_King_: If we have an optional assertion, and it is only included as part of the hint?
Matt_King_: For consistency, we would say it is "not supported", but in that case, the rationale we used for not considering hint text doesn't really hold water
Matt_King_: Let me restate
Matt_King_: We have a reason for why we're not considering the hint when arriving at a verdict
Matt_King_: That reason is that even though hints are on by default, we know that (for certain screen readers), large portions of the user base turn them off
Matt_King_: So we're saying that the hint cannot be part of the "base output" (because people frequently turn hints off), so the output that is required for interoperability would not be available for most users
Matt_King_: We're saying that those users who turn off hints are not getting something that is required for interoperability
Matt_King_: However, any assertion that is a "may" is not required for interoperability. So it seems as though we shouldn't care about whether the hint is on or off
Matt_King_: The rationale for ignoring hints doesn't really hold water. The only reason we would ignore the hint for an optional assertion is to preserve consistency
james: If we say that "may" assertions can take hint output into account, then testers need to take assertion priority into account. We don't ask them to do that right now
james: I think this risks the human nature of "assuming a given assertion is a 'may' because it has been a 'may' so many other times" even when it changes for one edge case
Matt_King_: Yes, that's a good point. We shouldn't require Testers to consider assertion priority for exactly that reason.
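The resolution above reduces to a simple, uniform rule: hint output is recorded, but it is excluded from every verdict regardless of assertion priority. A sketch of that rule (hypothetical function and parameter names, not the actual ARIA-AT app code):

```python
def output_for_verdict(base_output: str, hint_output: str, priority: str) -> str:
    """Hints are recorded but never considered when rendering a verdict,
    even for 'may' (optional) assertions. Making an exception for 'may'
    would force testers to check each assertion's priority before
    deciding which output counts, which the group decided against."""
    del hint_output  # deliberately ignored: recorded elsewhere, never judged
    del priority     # deliberately ignored: the rule is uniform
    return base_output


print(output_for_verdict(
    "collapsed, Actions, menu button",
    "Press Control-Option-Space to open the menu",
    "may",
))
```

The uniformity is the point: a tester applies the same rule to every assertion without ever consulting its priority.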
Matt_King_: I definitely need to document this. I'm amazed at the number of decisions we've made; it can be difficult to remember the rationale for all the things we've decided. Documentation is really important, and I'm taking action items to keep it up-to-date.
Matt_King_: So, disclosure is going to be sitting with us for at least one more meeting
Matt_King_: For the other two plans, we at least have a path forward. I will be in conversation with IsaDC to move them along
Matt_King_: For the meeting on December 4, we'll have a bunch of stuff for people to do. We'll also hopefully be putting the "disclosure navigation menu" plan to bed
IsaDC: I can't believe it's almost December, already
Matt_King_: Same here!
Matt_King_: That's it for today, everyone. Thank you so much for your dedication to the project!