See also: IRC log
http://www.w3.org/2007/xmlsec/ws/slides/13-williams/Overview.pdf
Hal: Would XQuery be more efficient in EXI?
Stephen: The short answer is yes.
<fh> noted that XML is not fully self-contained or self-describing, e.g. it may depend on external schema information. EXI also records additional external information about the redundancy that was removed
<fh> XMS records additional info to be shared
Frederick: How will you use XML Dsig and Enc with EXI since they don't work with binary?
Stephen: First step would be to convert things to XML and roundtrip them into EXI.
... Next step is to look at EXI specific canonicalization.
Konrad: Is there some kind of DOM API in EXI?
Stephen: Initial API is STAX.
<fh> konrad asks about using fidelity options for c14n
Stephen: EXI has adjustable fidelity where the default is most compact and options can be enabled to increase fidelity.
<fh> konrad asks whether signing EXI is consistent with see what you sign
Thomas asked about signing the binary information as a blob with an external reference.
<EdSimon> 30 min should be fine for me.
<EdSimon> Sounds like the current discussion is a good prelude to my presentation.
Henry asked if EXI could be used to replace canonicalization.
<klanz2> As EXI is based on the Infoset and normalizes the information in XML documents, do you have a comprehensive set of normalization use cases that can be used as input to future c14n work?
Frederick asked if there was an interoperability issue because of the need to interpret EXI with a separate piece of software.
<klanz2> williams: there is a substantial amount of work on fidelity
Stephen noted that he thought it was more of a library adoption issue.
<klanz2> could you sign the EXI directly, accompanied by some schema? Is that sufficiently equal to a text representation of XML to fulfill "sign what you see"?
Michael Leventhal noted that EXI should have security built in and said he does not believe that is the current direction.
<EdSimon> Ed: I strongly agree with Michael. The W3C must be thinking about security in all its specifications, especially core ones. Not sufficient to think security can just be patched on afterwards.
slides: http://www.w3.org/2007/xmlsec/ws/slides/18-edsimon-xmlsec/
Jimmy does not understand what XML canonicalization has to do with security and asked for some clarification.
Ed explained that it is not required if the XML will not be modified in transmission.
Ed explained that schema-aware canonicalization can recognize "10" and "+10" as the same integer value.
Thomas asked how floating point rounding issues may be handled.
Ed suggested that the rounding issues should be handled as part of the business process and should be done before signing.
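To make Ed's example concrete, a minimal Python sketch (illustrative only; the element name and code are not from the workshop): a byte-level digest distinguishes the two lexical forms of the same xs:integer, while a schema-aware canonicalization step could map both to one canonical form before hashing.

```python
# Illustrative sketch: "10" and "+10" are the same xs:integer value, but a
# byte-level c14n/digest treats them as different documents.
import hashlib

def digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def canonical_integer(lexical: str) -> str:
    # Canonical xs:integer form: no leading '+' or redundant zeros.
    return str(int(lexical))

doc_a = "<qty>10</qty>"
doc_b = "<qty>+10</qty>"

assert digest(doc_a) != digest(doc_b)                       # byte-level: different
assert canonical_integer("10") == canonical_integer("+10")  # schema-aware: same
```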
Konrad asked how wildcards should be treated in schemas.
Ed has not thought about it.
Konrad postulated that there should be application aware namespace canonicalization.
Konrad: How does EXI deal with namespace prefixes?
Stephen: Namespaces are fully computed for every element and every attribute.
Hal: Preservation of the prefix is a requirement of XML digital signature in order to support XPath etc.
Jimmy does not think schema validation is a good thing and is of the opinion that the application should handle the normalization.
Stephen: EXI is tunable and can be configured to preserve prefixes.
Phillip agrees with Jimmy that schema based canonicalization is not a good idea.
Phillip does not think the 10 vs. +10 question is a security issue; it should be handled by application developers.
Konrad disagrees with Phillip strongly.
Konrad noted that transforms are not allowed to change the document so schema based canonicalization should be used for normalization.
<Pratik> This is in defence of schema-aware canonicalization. One big reason our signatures break is that some people put line breaks in base64-encoded values whereas others don't
<EdSimon> Hal is correct in that the final version of Canonicalization v1.0 eliminated namespace rewriting which was present in the prior drafts. Version 1.1 of Canonicalization summarized the namespace rewriting issue as such:
<Pratik> if c14n were schema aware, it would know that line breaks in base64 encoded values are not significant
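A small Python sketch of Pratik's point (illustrative, not from the discussion): RFC 2045-style base64 decoders ignore line breaks, so two lexically different encodings carry the same octets, yet a byte-level digest distinguishes them.

```python
# Illustrative: the same base64 value with and without a line break.
import base64
import hashlib

wrapped = "SGVsbG8g\nd29ybGQ="    # one toolkit wraps long base64 lines
unwrapped = "SGVsbG8gd29ybGQ="    # another does not

# The decoded octets are identical ...
assert base64.b64decode(wrapped) == base64.b64decode(unwrapped) == b"Hello world"
# ... but a byte-level digest over the lexical forms differs:
assert (hashlib.sha256(wrapped.encode()).digest()
        != hashlib.sha256(unwrapped.encode()).digest())
```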
<EdSimon> The C14N-20000119 Canonical XML draft described a method for rewriting namespace prefixes such that two documents having logically equivalent namespace declarations would also have identical namespace prefixes. The goal was to eliminate dependence on the particular namespace prefixes in a document when testing for logical equivalence. However, there now exist a number of contexts in which namespace prefixes can impart information value in an XML document. Fo
<EdSimon> My objection to c14n 1.1 is that while not rewriting namespaces for that particular document preserves that document's integrity, it does not allow comparison with a completely equivalent document that happens to use a different namespace. There should be a way of canonicalizing namespaces that neither corrupts a document but also allows logical equivalence. Not rewriting does the first but not the second. Namespace-aware canonicalization, with namespace can
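The prefix question Ed raises can be shown with a short Python sketch (illustrative; the namespace URI is made up): the two documents are logically equivalent under the Infoset but differ byte-for-byte, which is exactly the case c14n 1.1 declined to normalize by dropping prefix rewriting.

```python
# Illustrative: logically equivalent documents with different prefixes.
import xml.etree.ElementTree as ET

doc_a = '<a:msg xmlns:a="urn:example">hi</a:msg>'
doc_b = '<b:msg xmlns:b="urn:example">hi</b:msg>'

ta, tb = ET.fromstring(doc_a), ET.fromstring(doc_b)
# After namespace expansion the names are identical ...
assert ta.tag == tb.tag == "{urn:example}msg"
# ... but the serialized bytes (and hence any byte-level digest) differ:
assert doc_a != doc_b
```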
http://www.ximpleware.com/security/
<fh> design issues versus implementation issues
Henry: The goal of the infoset spec is to provide a standard terminology for spec writers.
<klanz2> http://tools.ietf.org/html/rfc4051#page-9
<klanz2> Minimal Canonicalization
Link to slides: http://www.w3.org/2007/xmlsec/ws/slides/12-mishra-oracle/
<fh> XML signature expensive because of node sets, node sets requires DOM
<fh> expands 20x in memory
<fh> each transform requires 2 nodesets, each of which has to be in memory
<fh> streaming can address memory issue
<fh> if not needing DOM for doc processing
<hal> RFC 4051 (and RFC 3275) only define the identifier for Minimal C14N
<hal> the description is in 3075
<hal> http://tools.ietf.org/html/rfc3075#page-43
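As commonly summarized (a sketch, not the normative text), Minimal Canonicalization per RFC 3075 normalizes only the character encoding (to UTF-8) and line endings; nothing XML-specific, which is why Ed notes below it is usable for many text documents.

```python
# Illustrative sketch of Minimal Canonicalization: convert the input to UTF-8
# and normalize CRLF / lone CR line endings to LF (#xA).
def minimal_c14n(data: bytes, encoding: str = "utf-8") -> bytes:
    text = data.decode(encoding)
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    return text.encode("utf-8")

assert minimal_c14n(b"line1\r\nline2\rline3\n") == b"line1\nline2\nline3\n"
assert minimal_c14n("caf\xe9".encode("latin-1"), "latin-1") == "caf\xe9".encode("utf-8")
```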
Frederick asked if the document information would be stored for every pipeline.
Pratik said that it was not an issue because most XML documents pass through no more than 20 pipelines.
<bhill> another good case for a minimal XPath profile
konrad: xpath data model does not say what it means when you remove namespace node from nodeset
<EdSimon> What Phill says works for cases where there are not intermediary rewriters.
<fh> discussion of layering, whether to verify sigs before other processing, varying application needs for xml security etc
konrad: nodesets have underlying document available
phill: transform changes instance, also concurrently schema changes, so you have new document effectively
konrad: +1, need to distinguish transformations from selections. This should be on WG agenda,
<EdSimon> To paraphrase Konrad (I hope), transformations require full document; selections may not.
<fh> how to do both look ahead and behind, to allow both sig generation and verification?
Pratik: 2 pass applies here, still good ...
... concern expressed that in network 2 pass might not be good
sean: apache implementation has pipelining model, but some limitations
konrad: jsr 105 constrains order in documents (?)
slides: http://www.w3.org/2007/xmlsec/ws/slides/20-thompson/
ht: proposal, use XMill encoding and digest that
<fh> http://www.liefke.com/hartmut/xmill/xmill.html
<fh> http://sourceforge.net/projects/xmill
ht: consider replacing c14n in subsequent wg
konrad: sign what you see, need to be able to examine what is digested
ht: avoid issues of namespaces, simplify by avoiding
EdSimon: +1 to ht
phill: remove requirement to end with well formed xml can simplify c14n
ht: focus on c14n output to only be input for digest, not viewable etc
Brad: concern about actual usage, ability to view c14n output
konrad: +1, concern about ability to view, legal requirements etc
tlr: cannot display much on a 2 line card reader
mike: output of c14n is octet stream, also can sign anything, not just xml
<sdw> You can display things like transaction total. This is key.
<sdw> Not using the C14N version of data means that unsigned information can leak.
phill: need to interpret output of transforms to avoid attacks, but not necessarily output of c14n, which is part of digest/sig processing
... want integrated serialize+sign operation
sdw: make sure you use data that was actually signed
... regarding see what you sign, don't see raw xml, so other processing happens. Summary might be displayed, such as "transaction total"
jimmy: argues for output of well-formed xml in transform. Don't assume DOM byte level processing
<EdSimon> There is a difference between signing to prevent attacks and signing to provide a trustable, semantically-meaningful application action. The view Henry and I share is focused on the latter. This does not exclude the use of say, even canonicalization-free, signing for the issues Brad and Konrad raised. XML Signature has this flexibility.
erik: trust in engine for signing
konrad: the question is whether to add another form of c14n for non-xml
<EdSimon> Just some comments relating to previous topics...
<EdSimon> Pratik wrote (above, after Ed's presentation): "This is in defence of schema-aware canonicalization. One big reason our signatures break is that some people put line breaks in base64 encoded values whereas others don't. If c14n were schema aware, it would know that line breaks in base64 encoded values are not significant."
<EdSimon> Ed: Yes, that is a good example where schema-aware canonicalization is necessary.
<EdSimon> Ed: Minimal canonicalization only really addresses the character set -- not really XML-specific -- usable for many text documents not just XML.
<EdSimon> Ed: About XML Canonicalization vs. minimal canonicalization...XML Canonicalization is necessary, perhaps a necessary evil, when there is the chance that an XML instance has been read and rewritten by an intermediate application (otherwise don't canonicalize!) or is being read and parsed by an end application (e.g. signature validator). That application may produce an XML instance that is not the same as the original on a per-character basis, but may be
Konrad: xmldsig relies on xpath data model but does not specify it for 1.1
... put on agenda for future work
henry: xml 1.1 - encourage specs to move to it
konrad: need to find out who owns xpath 1.0 and ask if they want to shift to xml 1.1
brad: xpath 2.0 has more security issues
henry: use xpath 2.0 data model, not necessarily everything else
<EdSimon> I believe the suggestion is to profile XPath, using its data model, to the limited functionality necessary for XML Signature.
henry: use xproc to specify transforms?
konrad: xpath 2.0 main difference is namespace declarations
<fh> sean: node filters in apache is optimization to avoid creating node sets
Pratik: can extend pipelining model to work on octet streams
tlr: c14n part of discussion
... overall c14n not satisfactory
phil: applications act on the output of transforms (the pre-hash)
tlr: interaction with other groups
... interact with exi early
konrad: add relation to wrapping attacks to sign what you see
frederick: have we captured need for security considerations by other groups (saml, etc)
tlr: refactoring slide - what a future charter could look like
... 3 ways: 1) work on profiles and implementation guidance, constraining but backwards compatible
... 2) no constraints, refactor specs; incorporate lessons learned
... 3) backwards compatible as much as possible; break it only if you have to
pratik: have to be b.c. so many specs dependent on xml dsig
konrad: predefined sets to enforce different profiles good approach
<EdSimon> I'm less concerned with backwards compatibility; if specs want backward compatibility with the old spec, they can use the old spec. If specs want the new features, they will be rewriting their specs anyway.
jimmy: adoption for wss is modest; b.c. will hold us back; take long term view
hal: b.c. should not be requirement
<EdSimon> +1 to hal
<rdmiller> +1 Hal, Jimmy, and Ed
<EdSimon> Hal: apps can continue to accept old signatures but may generate new version signatures
konrad: stay as close to current specification if we can
phil: don't think new apps need to read existing sigs
<fh> need clear requirements generation as part of potentially chartered wg, clarity on tradeoffs as part of work item
hal: signed stuff out there that already exists; so still need to verify it
<EdSimon> +1 to Phill
jimmy: performance has to be good enough
everyone: +1
jcc: don't forget requirements for keeping signed docs for 5 or more years
rdmiller: thinks jcc's comment is an application issue
frederick: old spec is not going away. may need 2 implementations
<EdSimon> Ed: I imagine there are archival requirements for decades; I do not see how these requirements affect us. The archives in question need to archive the specifications for the archive, including old XML Signature, and, perhaps, old systems for validating them.
<EdSimon> +1 to Henry from Ed
henry: if consensus, we need to say we are not deprecating xml dsig 1.0 in charter
<rdmiller> +1 to Henry
phil: archival document requirement is overrated
<EdSimon> It is up to the archivists to ensure their data continues to be usable; not us.
tlr: lot of complexity is about optional parts of spec
... lot of pain caused by optional features
<rdmiller> +1 to Thomas
henry: should charter also say what of xml dsig 1.0 should be in scope for WG?
frederick: what is the outcome of this workshop, how is charter accomplished?
tlr: existing wg chartered to do maint of spec and c14n 1.1 and input into new charter proposal
... useful to participate in maint. wg on charter development
... but wg should take broad/public view into account
... thinks we are hearing lets do a xml sig v.next
... one way to add layers to 1.0
mike: a lot of times applications want a streamlined set of algs; specify something efficient
... if libraries want to implement more, that is ok
frederick: we need to be specific about what features are needed
jimmy: 20% of complexity can cover 80% of use cases
... wants to see core cover as many use cases as possible
frederick: 20/80 can also apply to business
konrad: dss has basic processing
<EdSimon> <Manifest> is used by the Office Open XML specification (e.g. Microsoft Office). That, in itself, would constitute fairly wide use.
konrad: look at dss for ideas
jimmy: 5 minute tutorial should be goal
stephen: may not have luxury of doing this many times
henry: get one chance to do spec that breaks compatibility
tlr: one final pass through all slides to see which topics are of most interest
... Profiles area
... looking for rough numbers of who is interested in working on basic profile
<EdSimon> I am interested in working on a number of topics (though my time is limited).
many hands for basic profile
deterministic-time processing: should be linear in the size of the document; robust against DoS attacks
scribe: scaling behavior ... O(n)
sean: should be part of basic profile
jimmy: do we need to bring discussions to other WGs?
tlr: WGs should work together on topics of mutual interest
... Profiles (II) slide
<EdSimon> Ed is keen on profiling work.
frederick: other groups have worked on profiles; we need to include others
... even WGs whose charters have ended
tlr: use-case profiles - who is interested?
... is there interest in working on guidelines for profiles? 4 hands
<EdSimon> +1
<bhill> edSimon - to which question are you +1?
tlr: interest in profiles beyond basic one? 1 hand?
<EdSimon> +1 for the use-case profiles beyond the basic (I might have missed some audio).
tlr: efficiently verifiable profile - fine tune later
... impl. guidance slide
4-5 interested in working on this
key mgmt slide
<EdSimon> Yes to having XKMS spec on the table.
Interest in working on XKMS?
konrad: can we attract others to work on xkms?
frederick: would this be a broader xml security WG if we include XKMS?
6-7 interested in XKMS
frederick: goal is to get everything of interest on table; decide later what to do
phil: version of XKMS that use new version of xml dsig and some additions for symmetric keys
<EdSimon> Just a reminder that Ed thought it might be good move KeyInfo out of XML Signature -- ideally into XKMS.
tlr: who is interested in refactoring KeyInfo?
7 people interested
tlr: referencing model slide
interest in improving id-based approaches?
<EdSimon> Ed: yes
eric: we need to make ids more efficient and work in the absence of a schema
mike: doesn't like idea of transform specifying id attributes
hal: want to fix references but don't know best way to do it
<MikeMc> not sure I like or don't like - but see potential pitfalls in changing data that the signature depends on
11 people interested in fixing IDness problems
tlr: Algorithms slide
work on adding new algorithms
mike: need a standard mechanism for adding new algs
hal: how to anticipate adding alg parameters, not just alg URI
10 people interested in working on this
tlr: Other issues slide
frederick: excl c14n is just an algorithm issue
<EdSimon> If by algorithms, one is including more than just crypto algorithms, then I'm interested.
konrad: would we be open to amend c14n to include the excl c14n parameter list
<EdSimon> Maybe one should distinguish between the crypto and non-crypto algorithms. (If we are, I missed it on the audio.)
I think it is all algorithms
hal: mechanism to encapsulate xml?
henry: where is the requirement for this coming from?
<EdSimon> Is not base64'ing the XML and putting it into an <Object> sufficient?
<ht> Ed, yes, but you shouldn't have to do that
6 people interested in encapsulating xml
<ht> HST hears MM as requesting a chroot functionality :-)
Transform model slide
<EdSimon> If base64'ing the XML is not the right solution because you want it parsable, then you may need a special XML packaging element that tells the processor that the contained XML is to be processed as if it was an independent document.
<ht> Thanks -- between you and Mike I now get the point, I think
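Ed's base64-in-<Object> suggestion can be sketched in a few lines of Python (illustrative; the Encoding URI follows the xmldsig base64 identifier, but the surrounding markup is simplified): the signature then covers exact octets, at the cost of the payload no longer being directly parsable in place.

```python
# Illustrative: carry XML as base64 inside an <Object> so the signed octets
# survive any intermediary rewriting untouched.
import base64

payload = b'<doc xmlns="urn:example"><total>42</total></doc>'
encoded = base64.b64encode(payload).decode("ascii")
obj = ('<Object Encoding="http://www.w3.org/2000/09/xmldsig#base64">'
       f'{encoded}</Object>')

# Byte-exact round trip: what was signed is what is recovered.
assert base64.b64decode(encoded) == payload
```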
<EdSimon> Ed is interested in the Transform model
10 people interested in transform work
C14N Slide
tlr: interest in revisiting mandatory to implement c14n?
<EdSimon> Ed is interested in C14N
14 people interested
<EdSimon> Thanks everyone and VeriSign for making an excellent conference.
<tlr> rrsagent, please draft minutes