
=== from Peter Vojtas === My general question is: we have to concentrate on the Web. Our charter states: "The mission of the Uncertainty Reasoning for the World Wide Web (URW3) Incubator Group, part of the Incubator Activity, is to better define the challenge of reasoning with and representing uncertain information available through the World Wide Web and related WWW technologies." I think the Web makes the problem even more difficult.

=== from Giorgos Stoilos === In my view this is not only related to the Semantic Web; it is one of the biggest challenges of the Semantic Web: publishing database content on the Web and then having (probably) some Semantic Web agent searching and accessing that content. Do you believe there will be any Semantic Web without databases?

=== from Peter Vojtas === Of course the Semantic Web has much to do with databases; what I am asking is what is Web-specific about this problem. Remember, in the end we have to propose extensions of W3C standards. I am not against this, I would just like it to be more "web specific".

=== from Giorgos Stoilos === What is Web-specific is that the knowledge base has to be published in some Semantic-Web-specific language in order to be accessible over the (Semantic) Web. This could be OWL, but it might be something else (since there might be other standards).

P.S. I don't think it is within the scope of the XG to propose extensions of W3C standards. On the contrary, that is orthogonal to the role of a W3C XG.

=== from Peter Vojtas === Second, I am not convinced that fuzzy conjunction is appropriate for combining the degrees of Slim and Tall (especially in queries with more than two properties); aggregation operators are better suited, see my paper on the corresponding DL.

=== from Giorgos Stoilos === I guess this is debatable. To my understanding it is just like having the intersection of classes like Person and Man. Can you give me a precise pointer to your paper?

=== from Peter Vojtas === Modeling preference according to properties (like a Prefered_Camera_For_Peter which is Cheap, Big_Optical_Zoom, Big_Display, Quick_Shot, Big_Memory, ...) is better described as a fuzzy aggregation of fuzzy values (see the discussion of the Soft shopping agent use case). The intuition is: the more properties are fulfilled to a higher degree, the better. Achieving this with a fuzzy conjunction is difficult to learn by an inductive procedure (by the way, a single zero makes the whole conjunction zero; is this what we want?). The paper can be sent upon request (I do not know whether I can publish a link, and how it stands with copyright).
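To illustrate the contrast Peter describes, here is a minimal sketch (not part of the original discussion; the property names, degrees, and weights are invented for illustration) comparing a t-norm conjunction with a weighted-average aggregation. A single zero degree collapses the conjunction, while the aggregation still rewards the properties that are satisfied:

{{{
# Illustrative sketch: combining fuzzy membership degrees with t-norm
# conjunctions vs. an aggregation operator (weighted arithmetic mean).
# All degrees and weights below are hypothetical example values.

def goedel_conjunction(degrees):
    # Goedel t-norm: the minimum of the degrees
    return min(degrees)

def product_conjunction(degrees):
    # Product t-norm: the product of the degrees
    result = 1.0
    for d in degrees:
        result *= d
    return result

def weighted_average(degrees, weights):
    # A simple aggregation operator: weighted arithmetic mean
    return sum(d * w for d, w in zip(degrees, weights)) / sum(weights)

# Hypothetical degrees for one camera: Cheap, Big_Optical_Zoom, Big_Display
degrees = [0.9, 0.8, 0.0]   # the display is not big at all
weights = [0.5, 0.3, 0.2]   # price matters most in this made-up preference

print(goedel_conjunction(degrees))         # 0.0  -- one zero kills the conjunction
print(product_conjunction(degrees))        # 0.0  -- same here
print(weighted_average(degrees, weights))  # 0.69 -- still a usable preference score
}}}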

=== from Giorgos Stoilos === I totally agree with you. If someone asks "Give me all Tall and Slim persons", then interpreting this 'and' as a t-norm is the worst choice (compared to fuzzy aggregations, and possibly other choices). But in my use case this is not what happens. The users ask "Give me all Thin persons", where Thin has a definition (probably in OWL) like Thin <= Tall \and Slim. In that case the usual semantics for intersection in fuzzy DLs (for example) are t-norms.
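For reference, a sketch of the standard fuzzy DL reading touched on here (textbook semantics; the example degrees are invented): the intersection of fuzzy concepts is interpreted through a t-norm $t$,

$(Tall \sqcap Slim)^{\mathcal{I}}(x) = t\big(Tall^{\mathcal{I}}(x),\, Slim^{\mathcal{I}}(x)\big)$,

and, under the usual reading of fuzzy inclusion axioms, Thin <= Tall \and Slim requires the degree of Thin for x to be at most this value. For instance, with Tall at 0.7 and Slim at 0.6, the intersection evaluates to min(0.7, 0.6) = 0.6 under the Goedel t-norm and to 0.7 * 0.6 = 0.42 under the product t-norm.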