Re: Issue-57

On Wed, 2011-06-15 at 18:48 +0000, Jonathan Rees wrote: 
> I don't want to get into another argument - or, in reference to the
> Monty Python skit, contradiction - with David on these subjects, since
> in four years of attempting to engage we have yet to find any common
> ground, and I would not expect to suddenly find it today.

To be fair, we actually have quite a *lot* of common ground.  But we
also seem to have some important underlying differences that -- to my
mind -- have been difficult to identify.  Even if we cannot resolve
those differences by ourselves, I think it is helpful to do our best to
isolate what they are and clearly explain them to others.

> 
> However others reading this may be wondering whether, as David
> asserts, I am making a mistake, or making unnecessary class
> disjointness assumptions. David likes to bring this accusation of
> unnecessary disjointness assertions into discussions, but it's
> completely irrelevant here, and I continue to maintain that the "is it
> an information resource" meme continues to be one of the most stupidly
> incorrect and harmful bits of the debate. I'll provide just a bit more
> detail on what I meant about induced contradictions, which was part of
> my answer to Jeni's email and my defense of Ed Summers's and Richard
> Cyganiak's approach.
> 
> Suppose that we use URI U1 to refer to Document D1, and that D1
> implies that U1 refers to Person P1.
> Suppose that we use URI U2 to refer to Document D2, and that D2
> implies that U2 refers to Person P2.
> Suppose we believe D1 and D2 (we "take them at face value").
> Suppose that Document D1 is (provably) different from Document D2 -
> perhaps an author of one is not an author of the other.
> Suppose that Person P1 is (provably) the same as Person P2 - perhaps
> they have the same US social security number.
> 
> This is an inconsistent set of axioms. You could make it consistent by
> ditching consistent reference (i.e. a URI refers to at most one
> thing), but I'm not sure anyone's ready to do that.
> 
> I prefer to re-establish consistency by rejecting the last axiom, P1 =
> P2. To show consistency I reinterpret the class "Person" so that its
> members are not persons but rather documents. Then two distinct
> "Persons" can have the same social security number (interpreted as
> being the social security numbers of the persons that the documents
> are about) by virtue of being different documents. I reinterpret that
> "has social security number" property to mean "is about a person with
> SSN", and voila, P1 :ssn S and P2 :ssn S do not imply P1 = P2.

Okay, so you have eliminated the inconsistency by removing an assertion
like the following (in n3):

  :ssn a owl:InverseFunctionalProperty .

In other words, previously the :ssn property was inverse functional
(i.e., any two resources with the same :ssn value are entailed to be
the same individual, so two different people cannot share an :ssn
value), and now you have removed this constraint.
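For concreteness, the scenario might be written out roughly as follows
(in n3; the URIs, property names, and SSN value are invented here for
illustration):

  :ssn a owl:InverseFunctionalProperty .
  <U1> dc:creator "Alice" .        # metadata: an author of D1
  <U2> dc:creator "Bob" .          # metadata: an author of D2
  <U1> owl:differentFrom <U2> .    # D1 is provably not D2
  <U1> :ssn "123-45-6789" .        # from D1's content: U1 refers to P1
  <U2> :ssn "123-45-6789" .        # from D2's content: U2 refers to P2

Under OWL semantics the inverse functional property axiom entails
<U1> owl:sameAs <U2>, which contradicts the owl:differentFrom
assertion; dropping that axiom (as you did) removes the entailment and
restores consistency.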

But all this shows is that an inconsistent set of assertions can be made
consistent by removing one or more assertions.  How are you intending
this to shed any new light on the issue?  Removing assertions is a well
known way to eliminate inconsistencies.  For example, this set of
assertions is inconsistent (under OWL semantics):

  <a> owl:sameAs <b> .
  <a> owl:differentFrom <b> .

and it can be made consistent by removing either of the two
assertions.  

It seems to me that what is critically missing from the example is any
mention of exactly *what* sets of assertions -- each set consistent
within itself -- were merged to produce the resulting *inconsistent*
set, and hence *why* the resulting set became inconsistent.  That is
the interesting part, because merging assertions from independent
sources is where the difficulty lies.  For example, one set of
assertions may have
been obtained as a result of applying httpRange-14-rule-part-a:
http://lists.w3.org/Archives/Public/www-tag/2005Jun/0039.html
i.e., dereferencing U1 yielded a 200 status code, and thus we concluded
that U1 refers to document D1 (the document at that URI).  But a second
set of assertions was obtained from the *content* of D1.
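To make that concrete, the two sets might look something like this (in
n3 again; the class and property names are invented for illustration):

  # Set 1: derived from the 200 response, per httpRange-14 rule part a
  <U1> a :InformationResource ;
       dc:creator "Alice" .

  # Set 2: taken from the content retrieved at U1
  <U1> a foaf:Person ;
       :ssn "123-45-6789" .

Each set is consistent on its own; the merge becomes inconsistent as
soon as an axiom like ":InformationResource owl:disjointWith
foaf:Person" is also in play.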

If we can figure out a useful strategy for avoiding inconsistency when
two, individually consistent sets of assertions like this are merged
then I think we've gained something.  This is what I was trying to do
with the idea of "splitting" a URI's resource identity:
http://dbooth.org/2007/splitting/
and
http://dbooth.org/2010/ambiguity/paper.html#splitting
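Very roughly, the flavor of that idea (not the full proposal -- the
documents above give the details) is to use distinct URIs for the two
referents, e.g.:

  <U1> a :InformationResource ;          # the document
       foaf:primaryTopic <U1#person> .
  <U1#person> a foaf:Person ;            # the person the document is about
       :ssn "123-45-6789" .

With separate URIs for document and person, both sets of assertions
can be merged without forcing a single URI to denote two things.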

As it is, I don't see how the solution that you showed is a repeatable
strategy that can be done by machine.

David

> 
> Other interpretations are possible too; I'm just trying to show a way
> to interpret consistently, and in fact more than that, that the
> sender's intent can be decoded by a cooperating receiver, which is the
> important thing.
> 
> In any case this is a happy state of affairs, relatively speaking.
> Agents who only care about metadata need not change their behavior,
> and agents wanting to use U1 and U2 to refer to the people that the
> documents are about obtain a very good illusion that they are doing
> so, because all their properties "coerce" a document to a person as
> needed, in their model. In fact, there might be a theorem in here that
> would allow them to safely interpret the URIs as people after
> scrubbing the metadata axioms out of their knowledge base. And
> everything is consistent, since Harry Halpin tells me that no one who
> gets close to scenarios like this one is interested in doing inference
> (of the P1 = P2 sort), and I trust him.
> 
> (I am glossing over a detail when I say "about" but it's not
> important, contact me off list if you detect it.)
> 
> In essence this is a variant of the unique name assumption - you might
> call it the 'unique document assumption'.
> 
> I am not saying that this is to my taste; I'm just saying that it's a
> consistent way for the metadata composers and take-it-at-face-value
> people, who appear to be fighting to the death over the coveted
> linguistic territory of dereferenceable absolute URIs, to coexist. If
> accepted by them, then there would be no reason to retract the
> httpRange-14 resolution, because what Richard and Ed want to do would
> be perfectly consistent with it.
> 
> This is what the section of the issue-57 document is *supposed* to
> say. I hate to make the document longer - it's already at 20 pages -
> but apparently I'll have to.
> 
> Jonathan
> 
> 

-- 
David Booth, Ph.D.
http://dbooth.org/

Opinions expressed herein are those of the author and do not necessarily
reflect those of his employer.

Received on Sunday, 26 June 2011 01:29:33 UTC