3.2. Presentation by Jonathan Marsh
[11:24]
Jonathan Marsh
[slides at ?!?]
XInclude and XML Processing
Is XInclude a low-level or a higher-level technology?
The processing model describes how to do it but not when to do it.
XSLT and XInclude
1) XInclude prior to XSLT
there is no template in the XSLT to handle xinclude elements.
Daniel Veillard: you can have multiple sources.
Jonathan Marsh: yes, this is a simplification.
2) XInclude after XSLT
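A minimal sketch of option 1 (XInclude prior to XSLT), using Python's standard-library ElementInclude module as a stand-in for a full XInclude processor. The document contents and the in-memory loader are hypothetical; a real pipeline would hand the resolved tree to an XSLT engine afterwards, so the stylesheet never needs a template for xi:include.

```python
# Sketch: resolve xi:include elements as a pre-processing step, before any
# downstream processing (XSLT, validation, ...). Stdlib ElementInclude is a
# simplified XInclude processor; documents and loader are hypothetical.
import xml.etree.ElementTree as ET
import xml.etree.ElementInclude as EI

DOCS = {  # hypothetical in-memory "filesystem"
    "chapter.xml": "<chapter>Included text</chapter>",
}

def loader(href, parse, encoding=None):
    # Custom loader so the example needs no real files on disk.
    if parse == "xml":
        return ET.fromstring(DOCS[href])
    return DOCS[href]

src = """<doc xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="chapter.xml"/>
</doc>"""

tree = ET.fromstring(src)
EI.include(tree, loader=loader)   # step 1: XInclude
# The xi:include child has now been replaced by <chapter>, so a later
# XSLT pass never sees xi:include elements.
print(ET.tostring(tree, encoding="unicode"))
```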
XML Schema processing model
1) XInclude prior to XML Schema
XML Schema already has flexibility
in choosing the schema; xinclude is just
one more step.
XML Encryption/Decryption
2) XInclude after XML Schema
if we do produce a PSVI, what schema does it correspond to?
(the current spec is broken)
Jonathan Marsh: strip off the PSVI
Dan Connolly: no, don't do XInclude after XML Schema at all.
Paul Cotton: yes, if somebody includes more constraints in the infoset, it's even more painful.
3) XML Schema processing within XInclude
XInclude and SOAP
XInclude processed by each SOAP Actor?
or just passed along the path?
Conclusion
- a "default processing model" is insufficient
- applications need freedom
- classes of applications may constrain processing models.
- XInclude is getting a bum rap.
Jonathan Marsh: I don't think there should be a
default processing model. Browsers and SOAP have different
requirements.
Joseph Reagle: if you want this flexibility,
what does it mean for signature? enumerated signatures?
Jonathan Marsh: in my model, as part of
decrypting the signature, you can do some processing, gluing
signatures as a whole block into the processing model.
David Cleary: I understand the position
regarding a "default processing model"; what about a SOAP processing model?
Asir Vedamuthu: what about a processing model for an initial infoset? (reference to
Paul Grosso)
Jonathan Marsh: good idea. I believe it's already built into the infoset.
Chris ?: I'd be careful about defining a processing model for SOAP. If you apply the
signature processing model to it, what will happen? Several things apply in the
processing model.
Jonathan Marsh: maybe layers of processing models, orchestrating all the processing
models. It's important to be able to predict SOAP content with xinclude.
Richard Tobin: if you type a URI in a browser, the same sequence should happen; there
should be a default processing model.
Jonathan Marsh: I would say yes, but let's wait and see the interoperability problems.
No browser supports xinclude yet. User agent guidelines could define that.
Richard Tobin: a default processing model for a specific application, accepted by all applications.
Daniel Veillard: if it's only an xpointer-defined processing model, but [should have a
canonical processing model]
[11:49]
Steve Zilles: is there consensus that the basic definition of namespaces and xml:base
is done?
Henry Thompson: suppose I write a schema which declares a reference to xml:base with
a fixed value; that fixed value won't have the desired effect. By saying it's
fixed, it's done once.
Eve Maler: there is some confusion about xml:base processing. xml:base in the infoset
doesn't mean absolutizing URIs; every transformation can mess up something not
meant for that.
Michael Sperberg-McQueen: xinclude must change the properties of the formal parent, not
only augment the infoset. [reference to Henry's example] I don't think there is
a desired effect.
David Cleary: different specs create and change infosets. If you apply a set of specs,
you create a new infoset. If we don't put in a rule to discard the infoset, we
will not be interoperable. Don't use schemas to modify an original infoset.
Daniel Veillard: yes, once you produce a new infoset, you don't have a base URI for
the documents anymore. It's an interaction of specifications.
Henry Thompson: pending problem: dissatisfaction with XLink; it is not used by other
specs. SMIL uses an unqualified href for linking semantics, compatible with
XLink, but cannot leverage links in a consistent way. The proposed solution
is: we will define infoset properties that carry linking semantics, with
different ways to obtain them (from SMIL, XLink, XML Schema types, ...). A
range of different processors is bound to a particular set of linking
properties. It is a requirement that we provide what is necessary for existing
XML applications.
Richard Tobin: Daniel said "as soon as you transform the infoset, you lose the URI". You lose the
URI, not the base URI.
Daniel Veillard: yes, but don't forget to propagate the base URI; otherwise it doesn't work.
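Daniel's point about propagating base URIs can be illustrated with a small sketch. The document URIs below are hypothetical, and urljoin stands in for the standard relative-URI resolution an XML processor would perform: without an xml:base recording the included fragment's original base, its relative references resolve against the wrong document.

```python
# Sketch: why XInclude must propagate base URIs (hypothetical documents).
# A relative href inside an included fragment only keeps resolving
# correctly if the inclusion records the fragment's original base
# via xml:base.
from urllib.parse import urljoin

including_base = "http://example.org/main/doc.xml"
included_base  = "http://example.org/lib/chapter.xml"
relative_href  = "figure.png"   # appears inside chapter.xml

# Without xml:base: resolved against the including document -> wrong.
print(urljoin(including_base, relative_href))  # .../main/figure.png
# With xml:base="http://example.org/lib/chapter.xml" on the included
# element, the original base survives the merge -> right.
print(urljoin(included_base, relative_href))   # .../lib/figure.png
```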
Michael Sperberg-McQueen: once you serialize and create a new infoset, all the derivative
properties are consistent. There are infoset items that describe core
properties; others are derived from them (namespaces in scope, ...). We cannot
prevent people from designing extensions that derive the properties. All the
properties at level n+1 should be [derivable] from level n, so as to be able to
regenerate the items.
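Michael's core-versus-derived distinction can be illustrated with a toy model. The function and data shapes here are hypothetical; the point is only that a derived property such as [in-scope namespaces] can always be regenerated from the core namespace declarations on an element and its ancestors, so it need not be carried across a transformation.

```python
# Sketch: a "derived" infoset property recomputed from core properties.
# Toy model (hypothetical): each element's namespace declarations are a
# {prefix: uri} dict; the in-scope namespaces of an element are derived
# by folding the declarations from the root down, inner shadowing outer.
def in_scope_namespaces(path_of_decls):
    # path_of_decls: list of {prefix: uri} dicts, root-first
    scope = {}
    for decls in path_of_decls:
        scope.update(decls)   # inner declarations shadow outer ones
    return scope

# Corresponds to: <a xmlns:x="urn:one"><b xmlns:y="urn:two"/></a>
print(in_scope_namespaces([{"x": "urn:one"}, {"y": "urn:two"}]))
```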
Asir Vedamuthu: after you build the initial infoset, you can recursively apply xinclude,
schemas, ...
Steve Zilles: to answer: you can put multiple schemas on top of it.
Eric Prud'hommeaux: don't rely on transformations that augment the infoset; people
will do it anyway. We should resolve the problem now.
Eve Maler: sympathetic to Henry's problem and to fixing it. I get nervous about adding
new kinds of items to the infoset, not just transforming it. I'd feel more
comfortable if we were always able to serialize it.
Jonathan Marsh: one of my wishes: the further we advance, the more restrictions will be
added on the infosets. In-scope namespaces you just copy across with
xinclude; copying all the PSVI carries more meaning, so let the application do that.
Steve Zilles: for any given property, the dependencies should be identified, and we
need to understand how to recompute them.
Jonathan Marsh: the schema hierarchy might also need to be recomputed.
Dan Connolly: on serialization of the infoset: the infoset has an RDF schema that
actually works, even with additions. There are names in the PSVI that are not
URIs; the same for XML Query. If you give them URIs, then you can serialize
them in RDF.
Richard Tobin: there is also an XML schema to define the serialization of the
infoset; every character turns into an element. Should we consider serializing
closer to the original document?
[steve closes the queue]
Henry Thompson: there is a distinction between a reserialization of the infoset and a
reflection of the infoset. It's one thing to go back to the serialized
document; it's another thing to produce an XML representation of this infoset.
That needs to be distinguished from a strict reserialization.
Eve Maler: what I had in mind was something vaguely similar to the original document.
Pure round-tripping will never be possible in any case.
Paul Cotton: a criterion for queries: will we be able to do that? I would like to be
able to get the original document back after the processing model. I expect that
the second time, you'll get exactly the same thing.
Steve Vernon: on preserving extensions: if you take arbitrary extensions, unless the
things are self-describing, we cannot know what to do with them (should you
relay them?). It needs to be in the document, not external.
Steve Zilles: whether or not there is an initial infoset, I think the answer is yes.
[12:30 adjourned for lunch]