LC Responses/ML2

From OWL
Revision as of 20:48, 24 February 2009 by PeterPatel-Schneider (Talk | contribs)


To: Marko Luther <luther@docomolab-euro.com>
CC: public-owl-comments@w3.org
Subject: [LC response] To Marko Luther

Dear Marko,

Thank you for your comment
     <http://lists.w3.org/Archives/Public/public-owl-comments/2009Jan/0048.html>
on the OWL 2 Web Ontology Language last call drafts.

We acknowledge the importance of implementations and tool support and, indeed, implementations supporting OWL 2 will be a condition for the standardization of OWL 2. It is, however, not in the scope of this working group to standardize communications protocols [1].

On the other hand, we are creating a collection of test cases [2] and would welcome help in the generation and testing of these cases. So, if you think that OWLlink would be a suitable tool for testing our test cases, then it would be great if you could coordinate with the working group, for example with Markus Kroetzsch and Mike Smith.

[1] http://www.w3.org/2007/06/OWLCharter.html

[2] http://km.aifb.uni-karlsruhe.de/projects/owltests/index.php/OWL_2_Test_Cases

Please acknowledge receipt of this email to <mailto:public-owl-comments@w3.org> (replying to this email should suffice). In your acknowledgment please let us know whether or not you are satisfied with the working group's response to your comment.

Regards,
Uli Sattler
on behalf of the W3C OWL Working Group



CUT AND PASTE THE BODY OF THE MESSAGE (I.E. FROM "Dear" TO "Group") INTO THE BODY OF AN EMAIL MESSAGE. SET THE To:, CC:, AND Subject: LINES ACCORDINGLY.

PLEASE TRY TO REPLY IN A WAY THAT WILL ALLOW THREADING TO WORK APPROPRIATELY, I.E., SO THAT YOUR REPLY CONTINUES THE THREAD STARTED BY THE ORIGINAL COMMENT EMAIL



While evaluating the potential use of OWL/DL technology in the (mobile) industry, I feel compelled to write to you. Given the limited time I am able to spend on OWL, it is not possible for me to provide a detailed comment on the rather large and complex OWL 2 specification. However, I would like to comment on OWL 2 from my perspective.

Besides the fact that the benefits of OWL technology are rather difficult to communicate to development departments because of its overall complexity (I really enjoyed your OWL article in the Communications of the ACM, 12/2008), we have lately discovered many issues (such as purely syntactic issues, wrong answers, non-termination, and incomplete coverage) in the currently available tools (OWLAPI, Protege4) and reasoners (Pellet, RacerPro, FaCT++, HermiT). Some could be identified as mere interfacing issues; others revealed problems within the reasoning kernel (mostly errors in the implementation of optimizations, I assume). Some issues even recurred in later versions after having been fixed.

Under the assumption that the (commercial) availability of reliable tools is important for such a technology to succeed in industrial applications, I think the idea of a certification process is important. Is this the idea behind the OWL 2 tests currently collected at <http://km.aifb.uni-karlsruhe.de/projects/owltests/>?

Why not combine these correctness tests with automatic performance tests along the lines of your work presented at DL'06 [1]? This would give "customers" at least an idea of which reasoning engine is best suited to their application. Being aware that the tool described in [1] is based on the outdated DIG protocol, which does not even cover all of OWL 1, I wonder whether the OWL 2 WG ever discussed the importance of a standardized communication protocol that, in contrast to the OWLAPI, is implementation-neutral and accompanies the language specification, such as the OWLlink protocol proposed by the DIG 2 coalition in [2].

In [1] it is stated that the best (and perhaps the only) way to check the correctness of reasoning engines on larger real-world examples is often to check for consistency with the results of other existing systems. I agree, but I also wonder whether the current approach to verifying the elements in the test queue <http://www.w3.org/2007/OWL/wiki/index.php?title=Test_Queue&oldid=17103> is to run the reasoners from within the OWLAPI. The advantage of using OWLlink instead for a generic benchmarking suite is that one does not depend on the OWLAPI's interpretation (and correct implementation) of OWL 2, since OWLlink relies directly on OWL 2 for the primitives of the modeling language. A second advantage is that OWLlink can be supported directly by engines outside the Java world.

Is it planned that OWLlink (after its final alignment with the final OWL 2 specification) will get the status of a W3C Note (as is now planned for the Manchester Syntax), or will it at least be mentioned somewhere in a description of the existing OWL 2 universe?
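The cross-checking idea from [1] referred to above — taking the agreement of several reasoners as a stand-in for a known correct answer — can be sketched roughly as follows. This is a minimal illustration only: the `Reasoner` interface, the stub reasoners, and the `cross_check` helper are assumptions for the sketch, not actual OWLAPI or OWLlink APIs.

```python
# Hedged sketch: majority cross-checking of reasoner answers on one test case.
# The Reasoner type and the stub reasoners below are illustrative assumptions,
# not real interfaces from the OWLAPI or the OWLlink protocol.
from collections import Counter
from typing import Callable, Dict

# Each "reasoner" is modelled as a function: ontology identifier -> bool
# (is the ontology consistent?).
Reasoner = Callable[[str], bool]

def cross_check(ontology: str, reasoners: Dict[str, Reasoner]) -> dict:
    """Run every reasoner on one ontology and flag disagreements.

    In the absence of a known correct answer, the majority verdict is
    taken as the reference, as suggested in [1]; dissenting engines are
    flagged for manual inspection rather than declared wrong.
    """
    answers = {name: engine(ontology) for name, engine in reasoners.items()}
    majority, _ = Counter(answers.values()).most_common(1)[0]
    dissenters = [name for name, ans in answers.items() if ans != majority]
    return {"answers": answers, "majority": majority, "dissenters": dissenters}

# Toy usage with stubbed reasoners (hypothetical names and answers):
stubs = {
    "A": lambda onto: True,
    "B": lambda onto: True,
    "C": lambda onto: False,  # disagrees with the majority -> flagged
}
result = cross_check("ex.owl", stubs)
print(result["majority"], result["dissenters"])  # True ['C']
```

A real harness would of course invoke the engines over their native interfaces (or an implementation-neutral protocol such as OWLlink) and would also compare entailment and classification results, not only a consistency bit; the majority-vote structure stays the same.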

Best Regards, Marko Luther


[1] Tom Gardiner, Ian Horrocks, and Dmitry Tsarkov. Automated Benchmarking of Description Logic Reasoners. In Proc. of the 2006 Description Logic Workshop (DL 2006), 2006.

[2] T. Liebig, M. Luther, O. Noppens, M. Rodriguez, D. Calvanese, M. Wessel, M. Horridge, S. Bechhofer, D. Tsarkov, and E. Sirin. OWLlink: DIG for OWL 2. In Proc. of the OWL Experiences and Directions Workshop at ISWC'08, November 2008.