TestGL Issues List - February 21, 2004

Num Date Class Status Raised By Owner
001 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Introduction
Title: Need introductory material
Description: Need introductory material with structure similar to other Framework docs

Proposal: Class of Product is test materials (in later discussion we also realized that we must include test metadata, else we will not be able to address any of the "process issues"). Audience is (in priority order) test developers, WG members, and users of tests. TODO - yes, we do have a second COP.

DD and PC hoped there would be no need to define test metadata as a separate COP; they would rather define 'test materials' broadly enough to cover these metadata. On the other hand, including in the term "test materials" a test plan and the results of executing that plan to test those test materials (as in checkpoint 5.4) seems to stretch the definition too far (not to mention introducing a confusing element of recursion).

Resolution:  Placeholders have been inserted for all relevant sections. Additional Issues should be opened to address the content of these sections. The Class of Product proposal has been extracted into a separate issue. This issue can now be closed.
002 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Concepts - Types of testing
Title: Clarify and expand types of testing
Description: "Functional testing" is not a good term - should be renamed (to "other"?). Goal is to cover performance, usability, implementation-dependant features that are not covered by the spec (which is conformance testing)
Proposal:
Resolution: DONE
003 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Concepts - Types of testing
Title: Expand discussion of interoperability testing and compare with conformance testing.
Description: Interoperability is possible without conformance, but it needs prior agreement, and it doesn't scale in numbers of implementations, or with time.
Proposal: No - don't lose focus. We've already said that we are addressing conformance test materials. Let's not confuse the issue by introducing interop testing concepts. Issue 004 talks about applying the conf. testing principles to other areas. Enough said.
Resolution: CLOSED
004 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Concepts - Types of testing
Title: Emphasize usefulness of these guidelines for types of testing other than Conformance
Description: Note that many forms of testing can be performed by comparing actual behaviour with behaviour defined in a "specification". Conformance testing is only one such kind. Consequently, even though we will focus on conformance testing, much of what we say is applicable to other kinds of testing.
Proposal:
Resolution: DONE
005 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Concepts - Types of testing
Title: Focus of the guidelines
Description: Should we focus on conformance testing or also cover other kinds of testing? (After all, the group's name is "QA" not "Conformance".)
Proposal: State that we will focus on conformance testing (for this revision of the doc).
Resolution: DONE
006 Jan 04 Substantive Closed Jeremy Carroll Patrick Curran
Section: Concepts
Title: Don't mandate a 'waterfall model' of development
Description: Discuss alternative test-development strategies. Some examples:
  • help the user detect if the implementation is conformant or not
  • help the implementation developer improve his product
  • help the specification writer improve his specification

[MS] I don't see where these have anything to do with whether or not a "waterfall model" was used. [PC response] the first bullet in the list below does (test first, develop spec later)...

[PC] need a better definition of strategies for development. For example:

  • to clarify whether a proposed feature in spec is implementable
  • to verify whether implementations of spec are conformant
  • to verify whether implementations of spec are interoperable

We must stress that we don't necessarily require a "waterfall" process, and that the requirements described in the GL can be applied on a recursive basis; we should actually encourage this type of strategy (without requiring it) in this section.

[MS] I'd like to add the use case that good QA (and this guideline) will result in good test materials that will improve QA and thus result in fewer bugs, lower maintenance costs, etc. In other words, this use case illustrates voluntary use of tests as a carrot, rather than the stick as in certification, to improve software and save money.

[PC] This seems more like a general rationale for following these guidelines rather than a use-case. We should certainly incorporate this somewhere.

Proposal: We need a short, concise way of defining the "waterfall model". Once we've done that, all we need to say is that this guideline does not imply any one type of model. (We also need to ensure that we do not imply a waterfall model in the wording of guidelines and checkpoints.)

References:

  • http://asd-www.larc.nasa.gov/barkstrom/public/The_Standard_Waterfall_Model_For_Systems_Development.htm
  • http://www.ctg.albany.edu/publications/reports/survey_of_sysdev?chapter=5
  • http://www.convergsoft.com/contents/methodology_sdlc.htm
Resolution: Incorporated suggestions into revised text
007 Jan 04 Substantive Active QAWG Patrick Curran
Section: Use Cases
Title: Need use-cases
Description: Suggested use-cases:
  • testing lab/certification authority needs to test products; the spec and the products already exist, and it only needs to check whether the products conform
  • implementations are begun before the spec is finished; tests are needed to check if they conform
  • WG needs feedback on its specification, and uses test cases as a way to get this feedback
  • WG uses test cases as a way to explore new features (e.g. in OWL, SVG, CSS)
  • comparisons of the actual state of implementations independently of conformance [example of interop testing]

We define "use case" as "a specification mechanism or technique that captures the ways a specification would be used, including the set of interactions between the user and the specification as well as the services, tasks, and functions the specification is required to perform". The cases above address the uses to which the class of product of this spec (test materials) are put more than they address the uses to which this spec could be put. The "test development strategies" outlined in Issue 6 seem closer to this definition.

Proposal:
Resolution:  Incorporated suggestions into use-cases section of doc, but see concern above. Keeping this open until resolved...
008 Jan 04 Substantive Active QAWG TBD
Section: Guideline 2 - Test Assertions
Title: Test Assertions
Description: Despite overlap with specGL, we agreed that it's important to discuss assertions in Test GL. In specGL, TA are considered as output, while in TestGL, they are input.

Reviewing the TA definition in specGL, it appears we also want to move some of the verbiage of SpecGL GL10 to the definition in specGL glossary
and QA-Glossary.

The assertions section of TestGL should point to the definition, address why they are important, how they can be extracted/derived automatically
or not, address what makes a good/useful TA. Needs to emphasize again that there might be a feedback loop between the test assertions extraction and the spec development.

Checkpoint 2.1: Rewrite checkpoint to: Provide a list of test assertions. Change to priority 1. Add some verbiage to the discussion to include examples of derived assertions.

Checkpoint 2.2. The discussion for this checkpoint should include more detail including the possibility of extracting more than one assertion from the same quoted text. It is up to the test developer to decide the mechanism for assigning a unique assertion-ID. The metadata in this checkpoint should be tied with the metadata in checkpoint 3.2.

Replace location for the text from which the assertion was derived.
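
An illustrative sketch (hypothetical - the field names and ID scheme below are assumptions, not taken from the guidelines) of the checkpoint 2.2 discussion: two assertions extracted from the same quoted text, each with a unique assertion-ID and a pointer back to its source, expressed in Python:

    # Two hypothetical assertions derived from the same quoted sentence,
    # each carrying its own unique ID and the location of the source text.
    assertions = [
        {
            "id": "xyz-4.2-a1",                # unique assertion-ID (scheme chosen by the test developer)
            "source": "XYZ 1.0, section 4.2",  # location of the quoted text
            "quote": "A processor MUST accept UTF-8 and MAY accept UTF-16.",
            "assertion": "A conforming processor accepts UTF-8 input.",
            "required": True,
        },
        {
            "id": "xyz-4.2-a2",
            "source": "XYZ 1.0, section 4.2",
            "quote": "A processor MUST accept UTF-8 and MAY accept UTF-16.",
            "assertion": "A conforming processor may accept UTF-16 input.",
            "required": False,                 # optional behaviour
        },
    ]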

[PC] we've discussed assertions several times, and don't seem to be able to agree on whether a specific assertion list is required, or whether
it's sufficient to define the "test purpose" in metadata (explaining that assertions are a really good way to do this). Will re-open this as an email discussion, referencing relevant SpecGL Issues.

Proposal:
Resolution: Assertions discussion needs much work...
009 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Introduction - Class of Product
Title: Specify Class of Product
Description: We must specify the types of test materials to which these guidelines apply.

Proposal: Point to the class of product listed in SpecGL rather than duplicate the list here.

[MS] points out that the COP for SpecGL is "specification", in particular W3C Technical Reports.

[PC] We must be referring to Section 2.2 of SpecGL ("Specification category and class of product"), where we do classify different types of spec. (Should that section header really include the term "class of product"?)

Resolution: Incorporated a pointer to the list.  
010 Jan 04 Substantive Unassigned QAWG TBD
Section: Definitions
Title: Need a definition of "testable".
Description: [PC] We had an extensive email thread on this. Did we ever resolve?

Proposal: [MS] and [PC] believe that it's unnecessary to formally define this term. The intuitive understanding most readers will have should suffice.

Resolution:  
011 Jan 04 Substantive Unassigned QAWG TBD
Section: Checkpoint 1.1 - Define the test suite scope
Title: Move content from rationale to discussion
Description: Keep the rationale concise. Move examples and other material to the discussion section. Remove the example cited in the rationale and add the HTTP (client only, server only) example provided by Alex Rousskov.

[PC] Alex will need to document this - we have no minutes.

Proposal:
Resolution:  
012 Jan 04 Editorial Closed QAWG Patrick Curran
Section: Checkpoint 1.2 - Identify the specifications to be tested
Title: Use the term "explicitly tests other specifications" in the rationale.
Description:
Proposal:
Resolution: DONE
013 Jan 04 Substantive Active QAWG Patrick Curran
Section: Checkpoint 1.3 - Define a testing approach
Title: Define the term "testing approach"
Description: Define the term "testing approach" (refers to what kind of test is going to be developed) and provide examples (Validators, API testing, Protocol). Also, include partitioning as part of the discussion section.
Proposal:
Resolution:  
014 Jan 04 Editorial Closed QAWG Patrick Curran
Section: Checkpoint 1.3 - Define a testing approach
Title: Re-word conformance requirement
Description: Rewrite the conformance requirement to: "A testing approach must be identified as a result of the specification analysis".
Proposal: Close. [PC and MS] believe that the proposed wording unduly constrains the testing approach.
Resolution: CLOSED - no action. TODO - UNDO
015 Jan 04 Substantive Closed QAWG Dimitris Dimitriadis
Section: Guideline 3 - Define the process for managing test materials
Title: Guideline 3 addresses WG process rather than test materials
Description: Guidelines must address test materials, not WG processes.
Proposal: Change to "Support test material metadata".
Resolution: DONE
016 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Checkpoint 3.1 - Define the process for managing test materials
Title: Drop Checkpoint 3.1 (it addresses processes)
Description: See Issue 15
Proposal: Drop this checkpoint
Resolution: DONE
017 Jan 04 Substantive Unassigned QAWG TBD
Section: Checkpoint 3.2 - Define the metadata to be associated with test materials
Title: Clarify Checkpoint 3.2
Description: Include a statement clarifying that test materials are not referring to testcases.

[PC] I don't understand this statement. Metadata is associated with testcases.

Proposal: DO NOTHING
Resolution:  
018 Jan 04 Substantive Active QAWG DD
Section: Checkpoint 3.2 - Define the metadata to be associated with test materials
Title: Move some of the requirements and discussions back to Checkpoint 2.2 - Tag assertions with essential metadata
Description:

In the conformance requirement, the third bullet should go back to assertions. The fifth bullet is too ambiguous; it must be clarified.

In the discussion section:

  • The third bullet should go back to assertions as optional metadata
  • The fourth bullet should also go back to assertions, but as mandated
  • The fifth bullet should go back to assertions, but with "conformance level" reworded to "degree of conformance"
  • A new bullet should be added to include conditional tests as optional metadata.

[PC] see Issue #008

[MS] The distinction between tagging assertions with metadata and associating metadata with test materials is confusing to me. Since assertions ultimately result in test materials why do we need the metadata in two places? Wouldn’t it be simpler (and less confusing) if we just associated the metadata with either the assertions or the test materials, but not both?

[PC] There may be many test-cases associated with a single (complex) assertion. Under these circumstances, attributes like "optional", or "specific to version x.y of the spec" belong with the assertion (so that they aren't duplicated). On the other hand, attributes like "input arguments/data for test" and "expected results of test" clearly must be associated with tests. Continue to discuss this in the context of our ongoing discussion of assertions.
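
A minimal sketch of the split PC describes (all field names are illustrative assumptions, not from TestGL): shared attributes such as optionality and version-applicability sit on the assertion, while per-test attributes sit on each test case that references it:

    # Assertion-level metadata: shared by every test of this assertion.
    assertion = {
        "id": "xyz-4.2-a1",
        "optional": False,                # "optional" belongs here, not on each test
        "spec_versions": ["1.0", "1.1"],  # "specific to version x.y of the spec"
    }

    # Test-level metadata: many test cases may reference one complex assertion.
    tests = [
        {"id": "utf8-001", "assertion": "xyz-4.2-a1",
         "input": "ascii-only.txt", "expected": "accepted"},
        {"id": "utf8-002", "assertion": "xyz-4.2-a1",
         "input": "multibyte.txt", "expected": "accepted"},
    ]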

Proposal:

Resolution: Text has been updated - issue will be kept open until this section has been thoroughly reviewed
019 Jan 04 Substantive Closed QAWG DD
Section: Checkpoint 3.3 - Provide coverage information
Title: Modify conformance requirement of Checkpoint 3.3
Description: Modify conformance requirement to require publishing the list as well as the percentage.

[PC] The assertion list? This is covered elsewhere.

[MS] This requirement should also be modified so that "at a minimum ..." says "must" rather than "should". Since this is a minimum list, it's an absolute requirement, thus "must".

Proposal:
Resolution: DONE
020 Jan 04 Substantive Closed QAWG, Jeremy Carroll Patrick Curran
Section: Checkpoint 3.4 - Provide an issue-tracking system
Title: Drop Checkpoint 3.4

Description: This addresses process - inappropriate for TestGL.

[PC] have we captured this "good practice" somewhere in OpsGL?

Proposal:
Resolution: DONE
021 Jan 04 Substantive Closed QAWG DD
Section: Checkpoint 3.5 - Automate the test materials management process
Title: Checkpoint 3.5 inappropriately addresses process
Description: Once again, we're talking about process (belongs in OpsGL, not here). Reword to focus on metadata and on mechanisms for filtering, sorting, manipulating it.
Proposal: Once we refocus on metadata and on filtering, there's not much left to say. The checkpoint on metadata (3.1) can say it all (add something about the value of automation).
Resolution: Deleted this checkpoint
022 Jan 04 Editorial Closed QAWG Patrick Curran
Section: Checkpoint 4.1 - Define the test execution process
Title: Suggested change for conformance requirement of Checkpoint 4.1
Description: Rewrite conformance requirement to: "The process for executing tests must be well defined and must document how to execute the test".

[PC] how is this better than what we currently have ("The process for executing tests must be well defined and documented")? Note also that "well defined" is untestable. The essence of this checkpoint is: "tell me how to execute the tests. Do so unambiguously, so that if someone else follows your instructions they will get the same results as I do". (See Issue #023.)

Proposal: DO NOTHING
Resolution: CLOSED
023 Jan 04 Substantive Closed QAWG DD
Section: Guideline 4 - Define the process for executing tests
Title: Add a checkpoint requiring that test results be reproducible and repeatable
Description: In discussion we realized that the only testable requirement would be to require specification of whether test (results) are reproducible and repeatable. This is pretty weak, but what else is testable?

Alternate suggestion (preferred after discussion): materials must document where test results are not expected to be reproducible and repeatable, and explain why. If this info is specific to a particular test (as opposed to a group of tests or even the entire test suite) it should be contained in the test metadata.

Include in the rationale discussion of the order in which tests must be run (if indeed order is important). We must also ensure that every test that should be run is run, and that those that should be excluded from the test run are excluded.

Proposal:
Resolution:  DONE
024 Jan 04 Substantive Closed QAWG Patrick Curran
Section: QAWG Glossary (and/or Definitions section of this doc?)
Title: Define "reproducible" and "repeatable"

Description: Meeting minutes said define in "glossary". Also in definitions section?

Proposal: Definitions from [LR]:

The real definitions are from ISO 5725-1:1994/Technical Corrigendum 1, published 1998-02-15, "Accuracy (trueness and precision) of measurement methods and results - Part 1: General principles and definitions" and ISO 5725-2:1994, "Accuracy (trueness and precision) of measurement methods and results - Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method".

"3.13 repeatability: Precision under repeatability conditions.

3.14 repeatability conditions: Conditions where independent test results are obtained with the same method on identical test items in the same laboratory by the same operator using the same equipment within short intervals of time.

3.17 reproducibility: Precision under reproducibility conditions.

3.18 reproducibility conditions: Conditions where test results are obtained with the same method on identical test items in different laboratories with different operators using different equipment."

Resolution:   Incorporated these definitions into definitions section
025 Jan 04 Substantive Closed QAWG Patrick Curran
Section: Guideline 4 - Define the process for executing tests
Title: Add a checkpoint requiring that test results be reproducible and repeatable
Description: Rolled into issue #023
Proposal:
Resolution:   CLOSED
026 Jan 04 Substantive Closed QAWG DD
Section:
Title:
Description: Checkpoint 4.2 ("Automate the test execution process"). Essential requirements are: 1) test execution must be automated, 2) automation must be platform-independent. It is not always necessary or possible to provide a cross-platform framework. Platform-independence should be removed from the requirements and instead stated as a goal in the discussion.

[PC] what's the difference between "platform-independent" and "cross-platform"? (Are we really trying to say "don't write to a particular platform, but don't feel obliged to write for all platforms"?)

Proposal:
Resolution: DONE
027 Jan 04 Substantive Active QAWG TBD
Section: Checkpoint 4.2 - Automate the test execution process
Title: Do not mandate use of any supplied test harness
Description: Results reporting is the aggregation of the results. The more you expect the harness to do, the more issues you have with implementers' ability to use the harness, since they may want to use their own for their own reasons. So, require that there be a test harness, but allow people to substitute their own reporting mechanism or harness. If there are instructions for using the test suite (with the test harness), they should indicate that the harness is not required for making a conformance claim. Important issue, should be captured, but out of scope of this CP. Put in the generic discussion for Guideline 4, keeping it general, since this has broader applicability.

[PC] I'm not sure what the last two sentences mean.
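
One reading of the proposal, sketched under the assumption that tests are callables returning pass/fail (none of these names come from TestGL): supply a simple harness, but keep results aggregation behind a small interface so implementers can substitute their own reporting:

    # Minimal harness sketch with a pluggable reporter.
    class Reporter:
        def record(self, test_id, outcome):
            print(f"{test_id}: {outcome}")     # default plain-text reporting

    def run_suite(tests, reporter=None):
        reporter = reporter or Reporter()      # supplied default, not mandated
        for test_id, test_fn in tests:
            try:
                outcome = "pass" if test_fn() else "fail"
            except Exception:
                outcome = "fail"
            reporter.record(test_id, outcome)

    # An implementer's replacement only needs to honour record():
    class CsvReporter(Reporter):
        def __init__(self, path):
            self.rows = open(path, "w")
        def record(self, test_id, outcome):
            self.rows.write(f"{test_id},{outcome}\n")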

Proposal:
Resolution:  
028 Jan 04 Editorial Closed Dimitris Dimitriadis DD
Section: Checkpoint 5.1 - Review the test materials
Title: Suggested wording for description of Checkpoint 5.1

Description: Ideally, this means that the WG (or whatever body will in the end endorse the use of the test suite) approves the tests in the test suite
as well as test generation mechanisms (if applicable).

[PC] Once again, we're straying into OpsGL territory by addressing process. Can we sidestep this by requiring that test metadata include the results of reviewing? [MS] agrees.

[DD] Agreed, and I propose to reword to "make tests reviewable" (implies coming up with a scheme for reviewing, which is not process, but tech details).

[PC] Still sounds like process to me. We must focus on the output (results of review) and not on how that output is produced (the "scheme for reviewing").

Proposal:
Resolution:  DONE
029 Jan 04 Editorial Closed Dimitris Dimitriadis Patrick Curran
Section: Checkpoint 5.1 - Review the test materials
Title: Suggested wording for description of Checkpoint 5.1
Description: Rolled into issue #28
Proposal:
Resolution: CLOSED
030 Jan 04 Substantive Active Dimitris Dimitriadis TBD
Section: Checkpoint 5.3 - Package the test materials into a test suite
Title: Clarify components of test suite

Description: Need to spell out the parts that make up a test suite, as well as what parts are optional. Also need definitions of the terms used (for example "test harness", "test case").

[MS] This sounds like process to me. I’m not sure it should be a ckpt.

[DD] I think this is as much process as providing test materials to begin with, that is, not much. "Packaging" is perhaps a poorly chosen word, but it boils down to someone having to delimit relevant tests (used to test conformance) from irrelevant ones.

[PC] It's easy to reword the checkpoint to focus on the output (the package) rather than the process (the packaging). The essence of the checkpoint is that a website containing a bunch of links, some of which point to tests, others to docs, others to 'metadata', does not constitute a test suite. Until everything is packaged together, test execution runs are unlikely to be reproducible and repeatable. (See also new issue 049 below.)

Proposal:
Resolution:  
031 Jan 04 Substantive Closed Dimitris Dimitriadis DD
Section: Checkpoint 5.4 - Test the test suite
Title: Suggested description for Checkpoint 5.4 - Test the test suite

Description: This needs to be stressed either as the final proof that this is the real thing (being endorsed by the WG or similar body) or alternatively as one more step in making sure the test suite is designed to be of as high quality as possible (without making reference to it being officially approved).

[PC] Again, this checkpoint addresses process (the WG's activities) rather than the test materials. Require that a test plan (and/or test results) be published along with the test suite?

[MS] Also, as this ckpt now stands, it cannot be tested. (How ironic for a ckpt to test the test suite). The ckpt should require a test plan and test results, as PC suggests.

[DD] Again, Ops, since it requires something from the WG, but essential in order to be able to use the test materials for conformance. Include it in the workflow of providing test materials for it to be testable (as a checkpoint). I stress the need for it to be clearly delimited, as it is THE activity (I think) that discriminates "officially accepted" test materials from non-accepted ones.

Proposal:
Resolution: DONE
032 Jan 04 Substantive Active Dimitris Dimitriadis TBD
Section: Checkpoint 5.4 - Test the test suite
Title: Should have separate checkpoints for test materials and test suite

Description: I still think we should have two separate checkpoints for test materials and test suite, respectively, as these are two separate things and quite
different from one another (for example, tests are related to specifications, test suites are not).

[PC] We used to have separate checkpoints for testing the individual tests and for testing the test suite as a whole. They were rolled together in the interests of brevity. Should we separate again?

Proposal:
Resolution:  
033 Jan 04 Substantive Closed QAWG TBD
Section: Checkpoint 5.5 - Solicit feedback on the test materials
Title: Describe how feedback on test materials could be provided

Description: Describe the possible ways in which feedback is given: a mailing list, for example, requires readers who can act on it. Emphasize that feedback should be acted upon.

[PC] Another process-related checkpoint. Relatively easy to reword as a requirement to define and publish a feedback mechanism.

[MS] The important point is not to describe ways feedback can be given, but to obtain feedback, somehow, some way. The feedback mechanism is not important, just the results. Can we just require that feedback solicitation and feedback results be published? [DD] agrees

Proposal:
Resolution: DONE 
034 Jan 04 Substantive Active Dimitris Dimitriadis Patrick Curran
Section: Guideline 6 - Define the process for reporting test results
Title: Request clarification of Guideline 6

Description: Are we here speaking of the automated result reporting in the test suite, or of something separate? Since we indicate automation in the second paragraph of the introduction, we may want to specify that.

[MS] Again, this is a process requirement. Does it belong here?

[PC] Shouldn't have used the "process" word. The blurb under the guideline and the discussion text for checkpoints 6.1 and 6.2 make it clear that we're talking about the mechanisms that the tests use to report their results to the person executing them. If this isn't done in a clear, consistent, and understandable manner it will not be possible to accurately determine which tests have passed and which tests have failed, once again defeating the goal that test runs be reproducible and repeatable.

Proposal: Leaving this open, since we are conflating the reporting of individual test results and the publication of the results of a test execution run. Can we clarify?
Resolution:
035 Jan 04 Editorial Closed Dimitris Dimitriadis Patrick Curran
Section: Checkpoint 6.2 - Tests should report diagnostic information
Title: Suggested wording change for Checkpoint 6.2
Description: Provide diagnostic information where applicable (some implementations may not, for example, implement error reporting).
Proposal:
Resolution:  
036 Jan 04 Substantive Closed Dimitris Dimitriadis Patrick Curran
Section: Checkpoint 6.3 - Define and document the process for reporting test results
Title: Comment on Checkpoint 6.3

Description: Again, if automated, this relates more to the technical issues about the framework than the process as such (we want to allow for different
processes to lead to the same result, namely uniform results reporting).

[PC] whether or not we have automation we must still define and document a process for reporting results.

[MS] Again, this is process.

[PC] We can get around this in the same way as with 'review the test results' or 'gather feedback'. Focus on the output and not the process.

[DD, commenting earlier, but really addressing this point] Propose to change wording to "in the absence of existing mechanisms for reporting test results, create one and package together with test suite"

Proposal:
Resolution: Addressed, and Closed
037 Jan 04 Substantive Unassigned Dimitris Dimitriadis TBD
Section: Checkpoint 6.4 - Allow test results to be filtered
Title: Clarify wording of test results filtering in Checkpoint 6.4
Description: Perhaps we should change wording to indicate that filtering be made in accordance with specification modules, for example. I agree that filtering is more relevant in building or execution phases than after having been run.
Proposal:
Resolution:  
038 Jan 04 Substantive Closed Dimitris Dimitriadis Patrick Curran
Section: Checkpoint 6.5 - Automate the results reporting system
Title: Suggested consolidation of discussion of results reporting
Description: If we stress automation of results reporting, this should be discussed in one checkpoint (relates to issues #034 and #036 above).
Proposal: refer back to test execution automation rather than having a separate checkpoint
Resolution: Checkpoint dropped
039 Jan 04 Substantive Closed Dimitris Dimitriadis Patrick Curran
Section: Appendices
Title: Need explicit appendices
Description: Need to be made explicit.
Proposal:
Resolution:  Placeholders have been inserted for Appendices. Additional Issues should be opened to address the content of these sections. This issue can now be closed.
040 Jan 04 Editorial Closed QAWG DD
Section: Checkpoint 5.1 - Review the test materials
Title: Wording changes for Checkpoint 5.1
Description: Move the first sentence of the ConfReq to the discussion. The object of this CP is the test material management system. Both modules (objects) are required for conformance. A review could be satisfied by the materials having been in use for several years; that use would constitute a valid review. Review all test materials. Add 'all' in all applicable places; need to review the use of 'all' in the entire document.
Proposal:
Resolution:  DONE
041 Jan 04 Substantive Unassigned QAWG TBD
Section: Checkpoint 5.2 - Document the test materials
Title: Checkpoint 5.2 is redundant

Description: [PC] Why?

Proposal:
Resolution:  
042 Jan 04 Substantive Active QAWG Patrick Curran
Section: Checkpoint 5.3 - Package the test materials into a test suite
Title: Checkpoint 5.3 is not testable

Description: A test suite is all the pieces of materials needed, wrapped up together. As written, the checkpoint is not testable. Make a minimal list of what MUST be provided, including: user documentation, IPR, test harness if supplied, referenced output if defined. Need to make sure that 'test suite' is understood. A test suite is the package (sum) of all the components needed to test an implementation. Test materials are the components that make up the test suite. (These terms should be defined.)

Proposal:
Resolution:  
043 Jan 04 Substantive Unassigned QAWG TBD
Section: Checkpoint 5.4 - Test the test suite
Title: Suggested clarification of the objects of the TestGL Class of Product

Description: The management system is the object. David Marston to help define the two objects of TestGL. In the discussion, include the frequency of applying the test plan and testing.

[PC] ???

Proposal:
Resolution:  
044 Jan 04 Substantive Active QAWG DD
Section: Checkpoint 6.1 - Tests should report their status in a consistent manner
Title: Clarify meanings of test-result states

Description: Need to add definitions for the terms. Reword. Remove "Cannot Tell". In the discussion, reference that the terms came from EARL. Can you map these to other states, or must you use these 'states'? If the states apply, then they MUST be used. These are states and we have definitions of them; the definitions are normative. We are not providing labels for the states: if a state applies, use it. Recommend that materials written in English use these labels. Change 'status' to 'outcome'.
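
For illustration only (the labels follow the EARL-derived terms discussed above; whether "Cannot Tell" survives is exactly what this issue debates), the outcome could be modelled as a closed enumeration rather than a free-form status string:

    from enum import Enum

    class Outcome(Enum):                   # "change status to outcome"
        PASS = "pass"
        FAIL = "fail"
        CANNOT_TELL = "cannotTell"         # slated for removal per this issue

    def report(test_id: str, outcome: Outcome) -> str:
        # A closed enumeration enforces consistency: a test cannot invent
        # a new state, only use one whose definition is normative.
        return f"{test_id}: {outcome.value}"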

[MS] This ckpt, and others in Guideline 6 use “should” in describing the ckpt. The “should” should (or must - I’m going insane) be changed to “must”. Checkpoints should not leave any wiggle room. A better way to describe the ckpt is with an active verb (e.g., Report test status, Report diagnostic info.)

Proposal: Partially addressed in revised text. Keep open...
Resolution:  
045 Jan 04 Editorial Unassigned QAWG TBD
Section: Section 6.2 - Tests should report diagnostic information
Title: Simplify conformance requirement for checkpoint 6.2
Description: Simplify to "must provide diagnostic information". The remainder of the sentence is rationale.
Proposal:
Resolution:  
046 Jan 04 Editorial Unassigned QAWG TBD
Section: Checkpoint 6.3 - Define and document the process for reporting test results
Title: Checkpoint 6.3 already exists in OpsGL
Description: Rewrite as "Define an interface to allow publishing of results".
Proposal:
Resolution:  
047 Jan 04 Substantive Closed QAWG TBD
Section: Checkpoint - 6.4 Allow test results to be filtered
Title: Reword checkpoint 6.4
Description: "Have a results management system"
Proposal: This checkpoint talks about filtering out the results of executing invalid tests. This should be addressed by filtering out these tests before execution. Checkpoint 4.3 now addresses that, making this checkpoint unnecessary.
Resolution: Deleted this checkpoint.
048 Jan 04 Substantive Closed QAWG TBD
Section: Checkpoint 6.5 - Automate the results reporting system
Title: Reword checkpoint 6.5

Description: "automate the system"

[PC] ???

Proposal: Checkpoint 4.5 (Integrate results reporting into the automated test execution process) addresses this point. A separate checkpoint on this seems unnecessary.
Resolution:   Deleted this checkpoint.
049 Feb 5, 2004 Substantive Closed Patrick Curran DD
Section: Guideline 5
Title: Need a new checkpoint: releases of test suites must be versioned

Description: Test suites must be explicitly released (rather than "dribbled out" by constantly updating a website), and must be versioned. If they're not, test runs cannot be deterministic or repeatable and the results of different test runs cannot be compared.
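
A sketch of why this matters for comparability (the record layout is an assumption, not from the guidelines): if every result set is stamped with the released suite version, two runs can be meaningfully compared; with a constantly-updated website there is no such stamp:

    # Hypothetical result-set headers stamped with the released version.
    run_a = {"suite": "XYZ-TS", "version": "1.0.2", "passed": 412, "failed": 3}
    run_b = {"suite": "XYZ-TS", "version": "1.0.2", "passed": 410, "failed": 5}

    def comparable(a, b):
        # Results are comparable only if produced by the same release.
        return a["suite"] == b["suite"] and a["version"] == b["version"]

    assert comparable(run_a, run_b)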

Proposal:
Resolution: Done
050 Jan, 2004 Substantive Unassigned Jeremy Carroll TBD
Section: Checkpoints 2.1 and 2.2
Title: Objection to requiring assertions

Description: Wildly oversimplistic. Even the simplest OWL test relies on many parts of the recommendation. The idea that it is possible to tie a test to one or two parts of the recommendation is philosophically flawed (similar to the concept of causation, cf a huge body of literature). I do not believe this is uniquely a property of OWL.

Obviously one tries to structure the tests in such a way that assuming a system passes some set of easier tests, then this new test presents an interesting challenge, but ... Of course this also amounts to the issue that you lot seem to believe that it is possible to test for conformance whereas that is trivially incorrect. (Given any set of conformance tests for any system where each test is characterised as one or more inputs resulting in one or more outputs, the piece of software that is defined to precisely pass the test suite, by giving the determined output for the determined input, and otherwise to fail horribly, is a non-conformant piece of software that passes the conformance tests).

Proposal: Suggest dropping these requirements, and the related ones in Guideline 10 of SpecGL. Possibly weaken to: "It may be helpful to list the test assertions found within or derived from a recommendation".
Resolution:  
051 Jan, 2004 Editorial Active Jeremy Carroll Patrick Curran
Section: Checkpoint 3.2: Test Metadata
Title: Suggested clarification

Description: Metadata quality is crucial and is best ensured by having a fairly small number of people responsible - sure it's a lot of work. The *must* is too strong, suggest *may*. The list of test metadata omits "the type of the test" and "the files associated with the test".

Proposal:
Resolution: All suggestions except softening "must" to "may" have been adopted. Keeping issue open until this last point is resolved.
052 Jan, 2004 Substantive Unassigned Patrick Curran TBD
Section: Checkpoint 3.3 Provide coverage information
Title: It is impossible to measure test coverage

Description: Makework - this statistic is useless. Please do not waste other people's time in calculating it. Any test suite tests 0% of any plausible language worth specifying because the language is infinite and the test suite is finite. Any other number is simply a fib.
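
For context, the statistic under attack is presumably assertion coverage as checkpoint 3.3 intends it: the fraction of the (finite) published assertion list exercised by at least one test, not coverage of the language's infinite behaviour. A hypothetical computation:

    # Coverage over the finite assertion list (JC's point: this measures
    # the list, not the language).
    def assertion_coverage(assertion_ids, tested_ids):
        covered = set(assertion_ids) & set(tested_ids)
        return 100.0 * len(covered) / len(assertion_ids)

    # e.g. 90 of 120 listed assertions exercised -> 75.0 (percent)
    print(assertion_coverage(range(120), range(90)))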

Proposal: Drop this requirement and any related requirement.
Resolution:  
053 Jan, 2004 Substantive Unassigned Jeremy Carroll TBD
Section: Checkpoint 3.5 Automate the test materials management process
Title: Must is too strong for test materials automation

Description: The rationale is true but does not justify a must; the QA group could collect a set of tools that have been used to help automate test material management, and help try and spread best practice but a *must* here is ridiculous. This really should not be a checkpoint.

I note that the QAWG commits to AAA test conformance; please describe your automatic system for test material management. (Since SpecGL and OpsGL are in CR and TestGL is not, I would be happy with an answer that restricted itself to those two documents.)

Proposal:
Resolution:  
054 Jan, 2004 Substantive Unassigned Jeremy Carroll TBD
Section: Checkpoint 4.2. Automate the test execution process
Title: It is not possible for a WG to provide a universal automation mechanism

Description: WebOnt made it clear to its implementors that we expected test results to have been collected in an automated fashion, but it is not possible for a WG to provide such an execution environment for every conceivable setup.

Proposal:
Resolution:  
055 Jan, 2004 Editorial Closed Jeremy Carroll Patrick Curran
Section: Checkpoint 5.1 Review the test materials
Title: This Priority 1 checkpoint depends on a Priority 2 checkpoint (3.2)

Description: You cannot have a priority 1 depending on a priority 2.

Proposal: I think the "management system" is the problem; replace with "metadata".
Resolution: Reference to checkpoint 3.2 was incorrect due to renumbering. Should have been to 3.1 (a priority 1 checkpoint). Corrected
056 Jan, 2004 Substantive Closed Jeremy Carroll Patrick Curran
Section: Checkpoint 5.1 Review the test materials
Title: Wording is too strong

Description: In WebOnt we automated this part - every time the OWL Test Cases document is produced all the test material is verified to conform with the "stylistic" guidelines in OWL Test. Hence we meet the spirit of this without meeting the letter. Once again, your desire to have strong wording is inappropriate.

On the same checkpoint, I note that one test which I accepted, imports-014, had as its whole point that it did not conform to the stylistic preferences (using a superfluous suffix on a URI) and that this presented problems which were not exercised by the other tests.

So, it is important that there is adequate discretion in the process to accept tests that do not meet the submission requirements.

Proposal: Weaker wording that would be acceptable would be:

Checkpoint 5.1 Review the test materials [Priority 1] Conformance requirements: The test materials should be reviewed to ensure that they meet the submission requirements. The status of the review may be recorded in the test materials metadata, as discussed in Checkpoint 3.2 above.

Resolution: Discussion text has been altered to incorporate the note about discretion, and to soften the definition of "review".
057 Jan, 2004 Substantive Closed Jeremy Carroll Patrick Curran
Section: Checkpoint 6.2 Tests should report diagnostic information
Title: Unreasonable to expect test suite to provide diagnostics

Description: It is a huge amount of work for the WG to provide the implementors free of charge with a test suite. No way are the implementors entitled to a test suite with diagnostics. The cost is huge - developers get paid, they should put some sweat in, too.

Proposal: Wording should be clarified. "Diagnostic information" was intended to simply mean "tests should report, as best they can, what they were expecting to encounter and what happened". It was never intended that they should somehow try to diagnose the source of the problem in the implementation.
Resolution: Wording has been modified to clarify that there is no expectation that the tests will diagnose failures in the implementation.
058 Feb 21, 2004 Editorial Closed Patrick Curran Patrick Curran
Section: 1.4. Relationship to other specifications
Title: Ensure that section 1.4 is consistent with other GL docs

Description: The wording of this section should match that of other GL docs

Proposal:
Resolution: DONE
059 21 Feb, 2004 Substantive Active Patrick Curran Patrick Curran
Section: 1.2. Class of Product and Audience
Title: Should Class of Product include test materials metadata?

Description: DD and PC hoped there would be no need to define test metadata as a separate COP; they would rather define 'test materials' broadly enough to cover these metadata. On the other hand, including in the term "test materials" a test plan and the results of executing that plan to test those test materials (as in checkpoint 5.4) seems to stretch the definition too far (not to mention introducing a confusing element of recursion).

Proposal:
Resolution:  COP has now been defined to extend beyond "test materials", but this section probably needs further work...
060 Feb 22, 2004 Substantive Unassigned Patrick Curran TBD
Section: Checkpoint 3.1 Define the metadata to be associated with test materials
Title: Should checkpoint 3.1 be split?

Description: Several checkpoints from this section have been deleted (as too 'process-oriented'). Everything is now defined in terms of metadata. Are we overloading this checkpoint too much? Should it be split into two or more?

Proposal:
Resolution:  
061 Date Substantive Unassigned Patrick Curran TBD
Section:
Title:

Description:

Proposal:
Resolution:  

Table Legend

Num
Issue number
Title
Short title/name of the issue
Description
Short description of issue, possibly including link to origin of issue
Date
The date at which the issue was raised or initially logged.
Class
Substantive or Editorial
Status
One of: Unassigned, Active, Closed, Postponed
Section
Section of the document this issue applies to
Proposal
Current proposal for resolution of issue, possibly including link to further text
Resolution
Short description of resolution, possibly including link to a more elaborate description
Raised by
Person who raised the issue
Owner
QA WG Member responsible for the issue