Re: shapes-ISSUE-130 (rdf dataset assumption): SHACL should not assume that the data graph is in an RDF dataset [SHACL Spec]

I didn't find filters to be a problem.  I handled qualified cardinality
constraints using the filters of embedded shapes.  I also have a way of doing
templates, although it does not yet cover a couple of central parts of SHACL,
which will need special handling; but then that is the situation with hasShape
as well - a hasShape implementation requires that the query engine has code
for much of the SHACL infrastructure.

As I said in my earlier email, the reach of $shapesGraph is unclear in the
current editors' draft.  Right now, it appears that non-optional parts of
SHACL depend on its presence.
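
As an illustration of the sort of constraint that is affected, the following
sketch of a SPARQL-based constraint body only makes sense when $shapesGraph
(and $currentShape) are pre-bound, since it reads part of its configuration
from the shapes graph at validation time.  The ex:allowedStatus parameter is
invented for the example:

  PREFIX ex: <http://example.org/>

  # Reports values of ex:status that the shape, as described in the
  # shapes graph, does not list via ex:allowedStatus.
  SELECT ?this ?value WHERE {
    ?this ex:status ?value .
    FILTER NOT EXISTS {
      GRAPH $shapesGraph {
        $currentShape ex:allowedStatus ?value .
      }
    }
  }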

peter


On 03/21/2016 11:32 AM, Dimitris Kontokostas wrote:
> RDFUnit does work on endpoints, but it's not a complete SHACL implementation.
> 
> In short, I see filters and qualified cardinality as the main problems for
> enabling SHACL on SPARQL endpoints.
> Filters could be evaluated after the constraints, as Peter noted in another
> thread, but I haven't found a way for QCs yet.
> 
> Peter, do you have any suggestions for handling QCs in this case?
> 
> In any case this is independent of this issue and related to the $shapesGraph
> variable resolution.
> $shapesGraph is optional and people who use it outside of core will be aware
> that they are not interoperable.
> 
> On Mar 21, 2016 19:44, "Peter F. Patel-Schneider" <pfpschneider@gmail.com> wrote:
> 
>     I think that Dimitris's implementation does work on endpoints.
> 
>     The implementation that I am putting together will work on endpoints as well.
> 
>     peter
> 
> 
>     On 03/21/2016 04:12 AM, Dimitris Kontokostas wrote:
>     > Found it: https://www.w3.org/2015/08/27-shapes-minutes.html#resolution03
>     > The resolution does not say this, but IIRC the discussion (which is not
>     > 100% scribed) was about bnodes and how they can be identified with a
>     > remote call vs. in-memory.
>     > ARQ and Sesame do something clever with bnodes, which is not the case
>     > for all SPARQL engines, but I am not trying to re-open the old issue,
>     > only trying to close this one using that resolution.
>     >
>     > I propose we close this issue as: SHACL does not assume that the data
>     > graph is an RDF dataset, as addressed by the current editor's draft.
>     > This of course allows people to use datasets, but SHACL doesn't take any
>     > special care in this case.
>     >
>     >
>     > On Mon, Mar 21, 2016 at 12:59 AM, Holger Knublauch
>     > <holger@topquadrant.com> wrote:
>     >
>     >     On 18/03/2016 18:38, Dimitris Kontokostas wrote:
>     >>
>     >>     On Fri, Mar 18, 2016 at 9:41 AM, Peter F. Patel-Schneider
>     >>     <pfpschneider@gmail.com> wrote:
>     >>
>     >>         If it is always possible to construct the dataset, then I don't
>     >>         see a problem either.  However, is this always possible?  For
>     >>         example, a user who is just trying to validate a graph may not
>     >>         have permissions to create or modify a dataset.
>     >>
>     >>
>     >>     IIRC there was a resolution on supporting only in-memory validation
>     >>     (not my favorite, and I cannot find it), i.e. full SHACL may not run
>     >>     on remote datasets, e.g. SPARQL endpoints.
>     >>     With this in mind, an implementation could just copy the shapes and
>     >>     data graphs into memory and perform the validation there.
>     >
>     >     The resolution that we made a while ago was to not require support
>     >     for the SPARQL endpoint protocol. Note that this is different from
>     >     the question of in-memory vs. database. It means that implementations
>     >     can still work against databases, e.g. via an API such as ARQ or
>     >     Sesame (for which all major databases provide drivers), while the
>     >     SPARQL endpoint protocol is too limiting for what SHACL needs to do.
>     >
>     >     Holger
>     >
>     >
> 

Received on Monday, 21 March 2016 19:05:30 UTC