Not needed? Distributed URI Discovery

>>At present, there is no formal, generalized mechanism whereby a Web Agent,
>>upon discovery of a URI, and lacking knowledge about that URI, can query the
>>Originator of the URI in order to obtain an RDF description of the URI.

Let's compare URIs with symbols. When I discover a new term, I cannot
ask the term what it means. I ask a knowledge source (friends, books,
a search engine) about it.
Why should things be any different on the Web? When I find
an RDF document with URIs I don't know, I just ignore them. If the RDF
document is well written, it should contain rdfs:seeAlso links to URLs
of RDF documents describing the terms. Might this be a solution?
It is inspired by the WWW approach of links and by the design
criterion of separating identity (URI) from location (URL). I
think something like "_:1 rdf:type foo:isCrawlable" would also be
helpful.
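
To make this concrete, here is a rough sketch in Python (using rdflib) of
an agent that collects statements about an unknown term by following
rdfs:seeAlso links instead of asking the URI's originator. All URIs in it
are made-up examples.

  from rdflib import Graph, URIRef
  from rdflib.namespace import RDFS

  def describe(term_uri, start_doc):
      """Collect statements about term_uri from the document we found it in
      and from any RDF documents it points to via rdfs:seeAlso."""
      g = Graph()
      g.parse(start_doc)                      # the RDF document we discovered
      for see_also in list(g.objects(URIRef(term_uri), RDFS.seeAlso)):
          try:
              g.parse(str(see_also))          # location (URL) of a describing document
          except Exception:
              pass                            # broken link: just ignore it, like an unknown term
      return list(g.triples((URIRef(term_uri), None, None)))

  # Hypothetical usage -- both URLs are illustrative only:
  # describe("http://example.org/vocab#Foo", "http://example.org/data.rdf")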

My conclusion is thus that we need no index.rdf, no URI originator, no MGET, etc.
__ Location is not identity. __


OK, what if somebody else adds statements about a URI? Well, then I need
either
a) a search engine = centralized infrastructure, or
b) something like traceback = distributed, networked infrastructure
   -> we need a standard for RDF traceback servers!
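
There is no such standard today, so what follows is only a sketch of what
an RDF traceback ping could look like: when I publish statements about
somebody else's URI, I notify a traceback service associated with that URI,
so that others can find my document. The endpoint URL, payload shape and
media type below are all my own assumptions, and the Python is just for
illustration.

  import urllib.request

  def ping_traceback(traceback_endpoint, described_uri, my_document_url):
      # Minimal hypothetical payload: "my_document_url describes described_uri",
      # expressed as an rdfs:seeAlso triple in RDF/XML.
      payload = (
          '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
          '         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">\n'
          '  <rdf:Description rdf:about="%s">\n'
          '    <rdfs:seeAlso rdf:resource="%s"/>\n'
          '  </rdf:Description>\n'
          '</rdf:RDF>\n' % (described_uri, my_document_url)
      )
      req = urllib.request.Request(
          traceback_endpoint,
          data=payload.encode("utf-8"),
          headers={"Content-Type": "application/rdf+xml"},
          method="POST",
      )
      with urllib.request.urlopen(req) as resp:
          return resp.status                  # e.g. 201 if the server accepted the ping

  # Hypothetical usage (all URLs illustrative):
  # ping_traceback("http://example.org/traceback",
  #                "http://example.org/vocab#Foo",
  #                "http://example.net/my-statements.rdf")

The payload deliberately reuses the rdfs:seeAlso idea from above, so a
traceback server could simply merge the ping into the description it
serves.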

I hope I inspired some people,

Kind regards,

Max
--
University of Karlsruhe, AIFB, Knowledge Management Group
room #258, building 11.40                      www.xam.de
