W3C

XML Security Working Group Teleconference
14 Jan 2009

Agenda

See also: IRC log

Attendees

Present

Frederick Hirsch, Shivaram Mysore, Thomas Roessler, Pratik Datta, Rob Miller, Sean Mullan, Philip Hallam-Baker, Kelvin Yiu, Brian LaMacchia, Gerald Edgar, Hal Lockhart, Scott Cantor, Brad Hill, Ken Graf

Ed Simon, Konrad Lanz, Chris Solc, John Wray (tel), Bruce Rich (tel)

Regrets
Chair
Frederick Hirsch
Scribe
BAL, Shivaram, Thomas, Frederick

Contents


Agenda Bashing

<fjh> 9-9:30 doc status, 9:30-10:30 transform doc review, break 10:30-10:45, 10:45-12:00 requirements then tech discussion

jccruellas: ETSI has launched a project for standardizing advanced electronic signatures w/in the PDF framework.

jcc: no standard as yet for how to use XAdES (XML adv. electronic sigs) in PDF, but there's been work on other types of signatures in PDF
... I have raised the question of using XML Sigs within regular PDF docs within this group
... requesting use cases from this group for XML sigs within PDF docs
... and also to ask whether we know of other sources of use cases
... Adobe Reader also incorporates the ability to recognize & verify CMS signatures; this could be a way to spread XMLDSIG signatures more broadly

fjh: jcc, what you're suggesting is that people be aware of this work in another SDO and folks should submit use cases to JCC in email


jcc: if members of this WG could contact other folks within their organizations and solicit use cases, esp. pertaining to document flow

<fjh> perhaps better to share on our public list than direct

<scribe> scribenick: shivaram

fjh: providing a summary of yesterday's meeting

today we will discuss Transforms

today we will also discuss Requirements, Best Practices

Signature 1.1 and Encryption 1.1 are ready to go

Algorithms may need a little bit of tweaking before it is ready to go

<fjh> Everyone should review XML Signature 1.1 and XML Encryption 1.1 and Algorithms before the next call, 27 January

<fjh> Agree to publish on next call, 27 January

<jccruellas> interop on what?

<fjh> decision re interop was we have some time, 3-4 months, but desirable to start earlier

<fjh> also for agenda on next call

Transform document Review

<fjh> http://www.w3.org/2008/xmlsec/Drafts/transform-note/Overview.html

<fjh> http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0030.html

fjh: suggests adding a couple of sections on "Selection" and "Canonicalization" in the Transform Note
... need to review the structure of the transform document and fix editorial issues which have been documented in http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0030.html

<fjh> http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0029.html

tlr: do we want to publish this Transform Note as a Working Draft?
... WD sets an expectation that this doc is going to change more

fjh: people may get confused if we publish it as a note. If it is published as a WD, then we may get feedback which can be incorporated into Requirements
... the need to get feedback is the only reason for calling it a WD as compared to a Note

<fjh> prefer working draft to set expectation as draft for feedback

... a WD can be turned into a Note at any time
... a Note can also be turned into a WD

fjh: now discussing http://www.w3.org/2008/xmlsec/Drafts/transform-note/Overview.html section 3 - Requirements

<fjh> Scott notes if we want feedback we should have a Note

<fjh> question - note or first public working draft

<fjh> comment - backward compatibility in requirement section

<bhill> c14n requirements around DTDs and entity expansions

<fjh> question - acceptable to use new attributes for extensibility

<bhill> need to be decided in order to achieve some goals

<fjh> maybe first public working draft since canonicalization material is TBD

there are some editorial fixes required in section 2: Usage Scenarios

tlr: be clear about reusing implementations vs interoperability on the wire

<fjh> scott notes could have profile of 2nd edition model by restricting transforms etc

<fjh> scott suggests could define a selection transform using the new syntax noted in the draft, but use in the existing transform model, etc

phb: this is too complicated for anyone to implement

<fjh> not sure I understand, this seems to be a simplification

<jccruellas> yes, it was long before and I made the question ....

<fjh> is it worth attempting to put new declarative syntax into existing transforms, eg a selection transform, etc with restrictions

tlr: define a set of transforms that addresses a large number of common cases; wants to understand what PHB has in mind

<fjh> tlr: old model more general, this model handles most common cases and benefits from being less general

<fjh> phill: backward compatibility not an issue, have existing code for that

<scantor> goal is to be able to expose the full content but annotate it with the information about whether it was signed

<tlr> ScribeNick: tlr

<fjh> discussion of need to have a single tree representation that notes what portions were signed and which were not

hal: pencil vs printing press... do you know what letters you want to write?
... we're currently solving the problem in too much generality ...
... most of the interesting problems can be solved with a more restricted piece of technology ...

scantor: what information do apps need to get at?

bal: requirements informed by existing implementations and their problems

fjh: reasonable to have a single tree instead of making a new one
... do we address that in this proposal?

bal: we wouldn't normally talk about that ...
... we have an implementation requirement to ideally enable one-pass processing ...
... want to be cognizant of not putting anything in that would preclude efficient implementations ...

fjh: declarative

scantor: don't want to have a procedural step in there
... in the end of the process, app needs to answer "is this bit signed" ...
... can you do that without re-executing entire step of code ...
... or can you do it by examining the stuff ..
... this solution facilitates answering that question ...

phb: understand how to do it right in C ;-)

bal: so what?

tlr: RATHOLE

brad: this does not prevent wrapping attacks
... do you work with original tree or tree that comes out of processing?
... the latter addresses wrapping attacks ...
... this makes it easier to avoid wrapping attacks ...

<fjh> isn't a proper URI and XPath approach the way toward avoiding wrapping attacks?

brad: that's the class of attack that isn't completely addressed ...
... a lot of other attacks are addressed ..
... this is a fact of our requirements

pdatta: can address wrapping by phrasing the right xpath

scantor: that seems addressed

<shivaram> wrapping attacks can be addressed by providing the complete XPath

scantor: lots of pushing for ID approaches because XPath problematic
... this makes XPath less problematic ...
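
[Scribe note, illustrative only: one way to pin a selection to document position, using the existing XPath Filter 2.0 transform; the po: prefix and element names in the path are hypothetical examples, not text from the draft.]

  <ds:Reference URI="">
    <ds:Transforms>
      <!-- intersect with an absolute path, so a copy of the signed element
           relocated elsewhere in the document (a wrapping attack) falls
           outside the selection -->
      <ds:Transform Algorithm="http://www.w3.org/2002/06/xmldsig-filter2">
        <dsig-xpath:XPath xmlns:dsig-xpath="http://www.w3.org/2002/06/xmldsig-filter2"
                          Filter="intersect">/po:Order/po:Items</dsig-xpath:XPath>
      </ds:Transform>
    </ds:Transforms>
    <!-- DigestMethod/DigestValue omitted -->
  </ds:Reference>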

fjh: rough consensus to go forward with this?

hal: well, we want to solicit use cases that can't work this way

<tlr> that is what we're doing the note for

scantor: wondering if some of the real big problems can be addressed without changing the schema or without changing it radically ...
... does that outweigh the downside of trying to stuff this into existing syntax ...
... we all know that radically changing syntax is going to lead to adoption trouble ...

hal: could we achieve the same thing by profiling and tweaking the current spec?

fjh: turn this into a selection transform?

<fjh> tlr: asking community, do you need transforms beyond selection

<fjh> tlr: separate question is whether syntax should be old or new

<shivaram> tlr: what amount of power do we really need in the Transforms

<shivaram> phb: need guidance for protocol authors

<shivaram> tlr: that may need to go into the best practices doc

<shivaram> [back to the doc]

<shivaram> fjh: add section on selection and canonicalization in section 5: Design

fjh: we now have a document that argues in favor of declarative syntax ...
... would like to send clearer message ...

<shivaram> fjh: section 5.3 is a little confusing

fjh: it is not clear how to get the mime headers

scantor: when I write a spec, I stick in a string where I would want to use a URI

<fjh> scott notes might want examples

pdatta: there is some extra information needed to fetch the URI; need to know the doc root
... combination of URI and type

fjh, scantor: not convinced that Type is required

pdatta: just given a URI it does not know how to fetch it.

fjh, scantor: the URI is a reference, it is not required to fetch it. When and how to fetch it is left to the implementation

pdatta: dividing up the impl into 2 parts - signature part which does not know about context and then the second part is context

scantor: there is an example in the STR-Transform case

fjh and scantor: are not convinced that URI is dependent on Type

scantor: the question is this a complete solution or what more needs to be done

<jccruellas> Frederick, I must leave now.... I do not want to interrupt the discussion....have a fruitful meeting....see you at the mail list

scantor: does not need to move type into URI selection
... there needs to be a discussion on how to implement this

fjh: pdatta to make the necessary changes so that it can be reviewed in the next meeting so that we can get to public draft

section 5.5: see what you sign

pdatta: this is actually sign what you see
... ex. from Konrad's case: get data from DB and present it in HTML to user. HTML data needs to be signed and not the raw data


bhill: don't add decryption transform - as it needs to be in post signature workflow

fjh: we discussed this before. raw data can be signed
... logically what bhill says makes sense
... does not want to leave out a large community

pdatta: thinking of using declarative language to incorporate this requirement

<fjh> suggest section on selection, section on canonicalization, with key points for each

<jccruellas> All, I must leave the meeting now....did not want to interrupt discussions...have a fruitful meeting.

<fjh> agree with thomas on avoiding generic extensibility discussion

tlr: we have a reasonable question and strawman - cut down on discussing extensibility. Can we simplify this

<jccruellas> you are welcome Frederick, see you

scantor: want to clearly understand what we want to get out of the Transform Note

<tlr> ACTION: scantor to contribute additional text for transform note, to make clear what this document gets at [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action01]

<trackbot> Created ACTION-168 - Contribute additional text for transform note, to make clear what this document gets at [on Scott Cantor - due 2009-01-21].

ACTION pdatta to update draft - Transform Note

<trackbot> Created ACTION-169 - Update draft - Transform Note [on Pratik Datta - due 2009-01-21].

Canonicalization in 2.0

<fjh> some material in transform note

pdatta: bulk transforms for white space

<fjh> http://www.w3.org/2008/xmlsec/Drafts/transform-note/Overview.html

<fjh> section 5.1

pdatta: ignoring comments is another thing.
... use of attributes instead of transforms for the above
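
[Scribe note, purely illustrative: the kind of attribute-based option being described might look roughly like the sketch below; the element and attribute names here are invented for illustration and are not taken from the transform note draft.]

  <!-- hypothetical 2.0-style canonicalization step expressed as declarative
       attributes rather than as a chain of transforms -->
  <Canonicalization IgnoreComments="true" TrimWhitespace="true"/>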

scantor: does C14n impact streamability

pdatta: written code to do the same. there are some things that cause problems. XML base does affect C14n. Namespaces are better if you use a SAX parser

<fjh> pratik will add material describing more details of canonicalization related to canonicalization element in new syntax to transform document

<fjh> need to note streaming issues with canonicalization in transform document?

scantor: looks like pratik wants to present a new C14n method

<fjh> scott notes it might be better to write up new c14n draft, need clarity of change, using inclusive/exclusive in new method might be confusing

fjh: do we need to make a separate doc

tlr: the main point of this doc is to reduce C14n. The C14n really belongs to ??

scantor: is the goal to reduce where C14n can occur? and what to do when it occurs

<fjh> prefer simpler canonicalization, fewer choices, if leading only to digest shouldn't this be possible?

bal: the problem with current c14n is that it is close, but not perfect.
... it is also not fast

<brich> is there a conflict between the "sign what you see" and simple whitespace treatment? it seems that signing what is seen may require other than simple treatment of whitespace...

bal: when you have 2 endpoints which know what they are doing, what is the set of things we can allow? Assume that they go through multiple hops which do various things to the doc - add spaces, etc

fjh: when you are done with c14n you have a new XML doc - another source doc

bal: in practice it does not really happen ...

tlr: effect on where namespace bindings can happen

requirement - the output of c14n can be XML

<fjh> original requirement of output of c14N to be XML can be removed in transform proposal, this should enable simplifications

<fjh> tlr notes this could enable removing need to propagate namespace information

tlr: what do we need to do with QNames and content

bal: originally in C14N remove prefixes - this was dropped as we could not get good conformant XML
... fjh is correct

we may need inclusive and exclusive on C14n

scantor: use XSI types
... we don't include QNames and Content
... in schema design

fjh: isn't c14n tied to selection

scantor: it is serialization
... selection happens first

tlr: selection implies that we need to fix namespace

bal: if we do a subtree selection, then what is the context of signed content and verification of signature context

scantor: inclusive c14n pulled in all the prefixes
... with exclusive, we need to be explicit about prefixes
... where do the inefficiencies lie in C14n? - are they separate? in this proposal, focus on selection and leave c14n for later
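
[Scribe note, illustrative only: a minimal example of the prefix behaviour being discussed. Given the input below, inclusive c14n of the a:child subtree copies both in-scope namespace declarations, while exclusive c14n emits only the visibly utilized one.]

  <!-- input document -->
  <a:root xmlns:a="urn:example:a" xmlns:b="urn:example:b">
    <a:child>text</a:child>
  </a:root>

  <!-- inclusive c14n of the a:child subtree -->
  <a:child xmlns:a="urn:example:a" xmlns:b="urn:example:b">text</a:child>

  <!-- exclusive c14n of the same subtree: xmlns:b is dropped because the
       b prefix is not visibly utilized inside the subtree -->
  <a:child xmlns:a="urn:example:a">text</a:child>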

pdatta: there are simplifications we get - the input we get is only a restricted set of nodes.

<fjh> pratik notes transform doc enables a simplification by reducing the allowed inputs into canonicalization

pdatta: earlier model: all namespace nodes are listed. Now you cannot muck around with namespace nodes

fjh: our charter said we do solve C14n problems

bal: not guarantee attribute ordering, but guarantee white space, new line -- simple C14n

scantor: is this a huge performance benefit? if not, then don't touch this, create a new one

bal: compliance is a requirement. Simplification has benefits for security, but not for perf gains
... has received complaints about schema validation

tlr: requirement - c14n output needs to be XML

fjh: now we are asking what are the benefits of this XML

phb: we should rename c14n as hash preparation

<tlr> concerning XSL:

<tlr> This transform requires an octet stream as input. If the actual input is an XPath node-set, then the signature application should attempt to convert it to octets (apply Canonical XML) as described in the Reference Processing Model (section 4.3.3.2).

phb: c14n implies that we are creating XML, prehashing implies that the output may not be XML

csolc: we need to control what input and outputs are. Output is XML, input is a dataset
... we get better perf if we use datasets for input

pdatta: you get a subtree w/ exclusions and some more re-inclusions

fjh: we had a discussion w/ EXI where they had a perf measuring tool. Can we do this?

<csolc> to get a faster canonicalization algorithm we need a simpler input data set.

<csolc> xml nodesets require the canonicalization algorithm to reconstruct an xml hierarchy.

shivaram: no marketing dept would allow publishing of perf numbers

tlr: this was addressed elsewhere -- numbers were published, but which implementation each number represented was not shown

pdatta: we need to have c14n code. How do we verify this?

<csolc> that reconstruction can be expensive especially when you have to rationalize the namespace axis

scantor: we have a question of timing. We need these now not during interop.

pdatta: do we want to leave the white space c14n in the doc?

fjh: we need text for this

scantor: in the doc there are some sections that are marked as mixed content

should the output of c14n be EXI?

fjh: if the output is hash - then should we bother?

tlr: there is a set of parameters that preserves all info from c14n and hence suitable

phb: it does make a big perf difference if we have a schema. If we do something generic like .NET or Web Services, then things can be tricky
... folks love that EXI exists, but don't have an intention to use it :-(

fjh: modulo some changes from Pratik, we can publish the transform doc

bhill: how to deal with DTDs as part of C14n?
... what to do and when to do it

phb: kill dtds

bhill: currently, we expand entities as a part of C14n

fjh: simplification is good

scantor: make this some one else's problem ;-)
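
[Scribe note, illustrative only: the entity-expansion behaviour Brad is referring to. Canonical XML drops the document type declaration and replaces parsed entity references with their replacement text, so the signed octets depend on how DTD processing was done.]

  <!-- input as authored -->
  <!DOCTYPE doc [ <!ENTITY greet "hello"> ]>
  <doc>&greet;</doc>

  <!-- canonical form: the DOCTYPE is gone and the entity is expanded -->
  <doc>hello</doc>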

<scribe> ACTION: bhill to write about C14n and DTD processing [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action02]

<trackbot> Created ACTION-170 - Write about C14n and DTD processing [on Bradley Hill - due 2009-01-21].

<fjh> http://www.w3.org/2008/02/xmlsec-charter.html

pdatta: requirements: backwards compatibility does not include on the wire bits
... for type use full length URIs
... make it clear if it is the only algorithm or if we allow extensions, etc
... eliminate requirements around implicit C14n

2.0 Requirements

tlr: start converging on the requirements draft

<klanz2> are we @22) Requirements Review (9:00 - 11:00) in http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0026.html

*we are looking at the charter and trying to understand where we stand in terms of timelines and goals

<klanz2> thx

<fjh> http://www.w3.org/2008/xmlsec/Drafts/xmlsec-reqs/Overview.html

<tlr> I think a bunch of the discussion this morning feeds into the requirements.

we may remove the Scope section if we can't articulate it

Introduction needs work

Principles Section looks ok for now

section 4: usage requirements and design

section 4.1 looks ok

section 4.2 looks ok

fjh: suggest remove section 4.3

section 4.4 may need work after this morning's discussion

<klanz2> could capture in the minutes in what direction this would go ...

<klanz2> please?

<fjh> section 4.4 comments

tlr: we have not drilled down on serialization of ds:SignedInfo

<fjh> update to indicate that use of canonicalization no longer supported to convert from nodeset to binary - see transform doc

<fjh> update to indicate no requirement to use generalized c14n for SIgnedInfo , limited need here

<klanz2> what shall be the concisely specified INTEROPERABLE document subset serialization ... is there a section in "transform doc"?

bal: if we had raw Unicode URLs, then we may have a problem - SignedInfo object

tlr: if the transform model is broken down to such a level that there are no attributes, then c14n becomes very simple

<fjh> konrad, take a look at this http://www.w3.org/2008/xmlsec/Drafts/transform-note/Overview.html

scantor: attribute extensibility is still on the table

pdatta: should we move all the C14n to Transform Note doc

RESOLUTION: move all C14n to Transform Note doc

scantor: starting from a fixed version of transform and c14n will greatly benefit the requirements
... c14n for hashing operations only and no need to provide XML as output.
... no binary octet stream and produce only one output

<fjh> requirement list for c14n: 1 only used as input to hashing, 2 no need to produce XML as output, 3 no need to use as extension point, only limited canonicalization

<fjh> 4, not used for nodeset - octet stream conversion, 5 simplify what in infoset is preserved

<fjh> 6 still required to deal with namespaces, QNames in content

<fjh> 7 still required to produce output that can be verified and generated, interop

<klanz2> Have a good lunch:

<klanz2> http://preview.tinyurl.com/C14n-Intro

<klanz2> Some general stuff about C14n:

<klanz2> 2.5 Canonicalization

<klanz2> Canonicalizing XML is hard! (Tim Bray)

<klanz2> To be able to digest XML we need a binary representation or serialization, because only a series of bytes

<klanz2> (aka. octets) can be signed. Certain aspects of XML's serial representation are left open and a canonical

<klanz2> and reproducible representation is hence required.

<klanz2> The goal of canonicalization is to remove any information that is considered certainly insignificant and

<klanz2> to define an unambiguous representation for aspects that can be represented in various ways. Such negligibilities

<klanz2> range from character encoding, line breaks, order of attributes, whitespace in tags and between

<klanz2> attributes, unutilized namespaces to value normalizations based on a DTD or Schema.

<klanz2> Higher forms of canonicalization include the more primitive ones.

<klanz2> The following forms of XML canonicalization currently can be found in standards, drafts and other

<klanz2> sources. They are presented here by their level of sophistication and ordered from simple to complex:

<klanz2> • Minimal Canonicalization (MC14n) [50] [51]

<klanz2> • Canonical XML Version 1.0 (C14n) [52]

<klanz2> • Canonical XML Version 1.1 (C14n11) [53] fixing issues analyzed by us [54] and the XMLCORE

<klanz2> working group (WG).

<klanz2> • Exclusive XML Canonicalization Version 1.0 (Exc-C14n) [55]

<klanz2> • Schema Centric XML Canonicalization Version 1.0 (ScC14n) [56]

<klanz2> http://tinyurl.com/Why-C14n-is-inefficient :

<klanz2> Namespace Nodes - A namespace node N is ignored if the nearest ancestor element of

<klanz2> the node's parent element [O] that is in the node-set and has a namespace node in the

<klanz2> node-set with the same local name and value as N. Otherwise, process the namespace [. . . ]

<klanz2> .

<klanz2> replacing this text with :

<klanz2> .

<klanz2> Namespace Nodes - To process a namespace node [N], find the first output ancestor element [A] of

<klanz2> the node's owning element [O] in reverse document order having an output namespace node [Na]

<klanz2> with the same local name as [N] (declaring the same prefix) and [A] and [Na] are in the node-set.

<klanz2> If [N] and [Na] have the same value [N] is ignored otherwise, process the namespace [. . . ]

<klanz2> .

<tlr> ScribeNick: bal

klanz2: simple spec changes to c14n would help w/ namespace handling
... (ns handling is the big problem)
... consider adding some constraints on how nodes are connected in the input to C14N, that could help simplify things too
... there are always some types of nodesets that require that you keep all the namespace prefixes. Can't just use a simple stack model b/c of these edge cases
... this spec change targets the problems w/ canonicalizing namespace nodes

<klanz2> https://online.tu-graz.ac.at/tug_online/voe_main2.getVollText?pDocumentNr=90836#page=60

klanz2: suggests that maybe there could be a C14N v1.2 that is smarter w/ handling namespace nodes

fjh: Konrad, can you pls draft something for the list?

klanz2: will try to bring something to the list after XAdES plugfest end-of-Feb

<scribe> ACTION: klanz2 to draft a proposal for canonicalization improvements [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action03]

<trackbot> Created ACTION-171 - Draft a proposal for canonicalization improvements [on Konrad Lanz - due 2009-01-21].

<klanz2> Exc-C14n suffers from not inheriting xml:base, xml:space, and other inheritable attributes ....

<klanz2> Exc-C14n however is good at processing namespace nodes

scantor: need to keep in mind that there are two dimensions: making implementations more efficient and making it easier to use

<klanz2> C14n is bad at namespace processing

scantor: these aren't necessarily the same

klanz2: whitespace handling should be dropped in the general case.
... try to establish some principles on how information should be dropped
... when doing C14N

<klanz2> https://online.tu-graz.ac.at/tug_online/voe_main2.getVollText?pDocumentNr=90836#page=101

<klanz2> Be liberal in what you require but conservative in what you do [73].

<klanz2> Translated to XMLDSIG this means: Refer only to what is necessary, and canonicalize as much as

<klanz2> possible by default!

<klanz2> Saying something is application dependent or expensive is a mere excuse of engineers not trying hard

<klanz2> to figure out how to make it robust and efficient. Designers of user agents such as browsers or

<klanz2> XMLDSIG applications have to be a proxy for their end users. OASIS-DSS allows them to do this centrally

<klanz2> in office environments, but such should apply for decentralized application developers as well:

<klanz2> • Signers should be conservative in what they consider as being the information they want to have

<klanz2> secured.

<klanz2> • Intermediaries are invited to process signatures with whatever tools they find appropriate. Be

<klanz2> conservative in what you have to touch for processing, especially do not touch signed documents

<klanz2> and use opaque containers (subsection 3.2.3 on page 57). If yet available <xml> ... </xml>

<klanz2> (subsection 4.1.1 on page 79).

<klanz2> • Intermediaries and verifiers: do not touch what was meant to be signed, and hence has been

<klanz2> signed, or the signature breaks.

<klanz2> • Verifiers: only what is signed (i.e. DigestInput) should be shown as signed or processed as

<klanz2> signed.

<klanz2> Balancing the trade-off between robustness, efficiency and simplicity cannot mean only to resign and

<klanz2> hide behind a "do not touch signed documents at all" principle. This will hinder the spreading, processing

<klanz2> and passing on of signed content, yes, signed information entities that can be trusted, across the

<klanz2> Internet.

fjh: these might imply some requirements document changes

Wiki

fjh: We *do* have a wiki
... copied from the Widgets wiki template

http://www.w3.org/2008/xmlsec/wiki/Main_Page

tlr: Wiki is public

fjh: but you can only edit the Wiki if you have a Wiki account
... could use Wiki for C14N discussion

<fjh> http://www.w3.org/2008/xmlsec/wiki/PublicationStatus

<scribe> ACTION: fjh to update the Publication Status page on the Wiki [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action04]

<trackbot> Created ACTION-172 - Update the Publication Status page on the Wiki [on Frederick Hirsch - due 2009-01-21].

Derived Keys

Magnus sent a draft proposal to the list

http://www.w3.org/2008/xmlsec/Drafts/derived-key/derived-keys.html

Please review and comment

<klanz2> @best practices: It is good practice to use Exc-C14n only for connected node-sets and declare all used prefixes in the

Best Practices

<klanz2> InclusiveNamespacePrefixList.

<klanz2> In general it is good practice to use Exc-C14n whenever possible, especially if applications use namespace

<klanz2> prefixes only to qualify elements and attributes whose owning element is also in the document

<klanz2> subset. Despite the fact that document sub-sets (node-sets) containing attributes and not their owning

<klanz2> elements have questionable semantics and hence should be avoided, they are nonetheless allowed in

<klanz2> XPath and accepted by Exc-C14n. Such node-sets are however not suitable for Exc-C14n with respect

<klanz2> to the definition of visibly utilized namespace declarations.

<klanz2> Adding #default will assure the correct interpretation of QNames without prefix.
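
[Scribe note, illustrative only: the existing Exclusive XML Canonicalization syntax for declaring prefixes (including #default) to be treated inclusively; the saml prefix is just an example.]

  <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
    <!-- prefixes listed here are handled inclusively, which protects the
         default namespace and QNames in content from being dropped -->
    <ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"
                            PrefixList="#default saml"/>
  </ds:Transform>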

fjh: We do have a published Best Practices document but it doesn't reflect any of the edits we've made since publication

<klanz2> from https://online.tu-graz.ac.at/tug_online/voe_main2.getVollText?pDocumentNr=90836#page=60

fjh: we have a bunch of open actions & issues associated with the BP document

Smullan will send an email about action 125 (xpath 2 best practices) and fjh will put into the doc

<klanz2> @best practices there is related work that never made it back to the W3C:

<klanz2> work has been performed by Geuer-Pollmann, who edited results of a workshop [70] where several issues in XMLDSIG were identified; they are in German and in keyword style.

<klanz2> http://web.archive.org/web/20070820153600/http://www.nue.et-inf.uni-siegen.de/~geuer-pollmann/publications/20030403_XMLSignaturWorkshop/

<klanz2> the bad thing is that it's only in German

tlr: to what extent are companies using XML firewalls? Do they run arbitrary XSLT?

<fjh> http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0003.html

<klanz2> http://tinyurl.com/XSLT-in-XMLDSIG

(no they only run the common transforms)

Gerald: went out about 1.5 years ago to get an XML firewall, didn't verify that it would do signature validation, but did want to make sure it would run XML transforms

Hal: hasn't seen any behavior around dynamic transforms

smullan: Java API allows you to plug in custom transforms

Shivaram: XSL firewalls are mostly used for partner communication

tlr: so they verify that the XSLT transform in the firewall is the one they expect?

klanz2: wants to add some things to the best practices document
... there have been workshops in the past (in Germany) that have identified some of these issues, but they never made it back to W3C

tlr: if you take a declarative approach to avoid XSLT, you have to deploy additional stuff to your firewall

kgraf: most XML firewalls are just XML appliances

(Moving on to Issue-52)

<tlr> ISSUE-52?

<trackbot> ISSUE-52 -- Rules for syntax of KeyInfo child elements should be unambiguous -- OPEN

<trackbot> http://www.w3.org/2008/xmlsec/track/issues/52

tlr: is X509Certificate in KeyInfo under-specified? If so, is this a bad thing?
... notes last paragraph of Section 4.4.4 in v1.1 draft

(Moving on to Issue-56)

<fjh> http://lists.w3.org/Archives/Public/public-xmlsec/2008Dec/0033.html

<tlr> ACTION-124?

<trackbot> ACTION-124 -- Frederick Hirsch to follow up with Juan Carlos on ISSUE-56 -- due 2008-12-16 -- CLOSED

<trackbot> http://www.w3.org/2008/xmlsec/track/actions/124

<tlr> ACTION-53?

<trackbot> ACTION-53 -- Pratik Datta to review comments from Scott and propose document change, http://lists.w3.org/Archives/Public/public-xmlsec/2008Aug/0054.html -- due 2008-09-09 -- CLOSED

<trackbot> http://www.w3.org/2008/xmlsec/track/actions/53

scantor: w.r.t. schema normalization, I have experience with only a couple of implementations; since most people don't do schema validation, I can't really say if a particular implementation mutates the DOM when it schema validates

shivaram: need to only look at schema validation in the context of xmldsig

scantor: sounded like the best practice is "always signature validate first, then schema validate"
... but note that some implementations may not allow you to do that today

kgraf: we are baking into HW the ability to validate XML automatically. if you get XML, you get validated XML

scantor: that suggests we shouldn't dismiss schema-validate-first

<scribe> ACTION: scantor to draft some text in response to ISSUE-51 [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action05]

<trackbot> Created ACTION-173 - Draft some text in response to ISSUE-51 [on Scott Cantor - due 2009-01-21].

<klanz2> @ schema validation: https://online.tu-graz.ac.at/tug_online/voe_main2.getVollText?pDocumentNr=90836#page=99

<klanz2> For whitespace, namespace, attribute value and other normalization as described in ScC14n to work in

<klanz2> an interoperable fashion, signers and verifiers need to perform Schema assessment equally, for which

<klanz2> XMLDSIG once had a Schema validation transform [71] [11]. Until today it is only a working draft and

<klanz2> hence not supported by implementers of XMLDSIG. Schema Instance can be used to associate a Schema

<klanz2> with a document. XMLDSIG does not specify that the presence of hints for document validation

<klanz2> against a Schema or DTD have to be respected and validation to be performed. XMLDSIG only warns

bhill: schema validation is the moral equivalent of XSLT in terms of "see what is signed"

<klanz2> to use Schema consistently, but does not provide any means to enforce this [27][29]:

<klanz2> Note, if the Signature includes same-document references, [XML] or [XML-schema] validation

<klanz2> of the document might introduce changes that break the signature. Consequently,

<klanz2> applications should be careful to consistently process the document or refrain from using

<klanz2> external contributions (e.g., defaults and entities).

<klanz2> For increased interoperability XMLDSIG should specify a default. For example that validation hints if

<klanz2> present in the document must be respected and processed.

<klanz2> Further a very simple parameter for dereferencing and parsing could clarify the situation. Current

<klanz2> markup however does not allow parsing to be parametrized. Hence a <ds:Transform> would be required

<klanz2> and have the sole purpose of describing whether parsing should be performed with validation and was performed

<klanz2> when parsing the resource during signing. It could have one element parameter indicating one of the

bhill: don't see a logically consistent reason to do one but not the other

<klanz2> three options for a <ds:Reference>:

<klanz2> • DoNotValidate - just process as a well-formed XML document.

<klanz2> • ValidateWithHintsInDocument - use hints in the document.

<klanz2> • Validate - validate with supplied hints, ignoring potential hints in the document.

<klanz2> http://www.w3.org/Signature/Drafts/xmldsig-transform-xml-validation.html

<klanz2> Having such a

<klanz2> discrimination would further enable system architects to locate signature validation either below or next

<klanz2> to the Schema validation and an application logic.

(Reviewing Issue 64)

<klanz2> who raised this issue?

scantor: there's a piece of OAUTH solely to get the piece that relates to signing of content of HTML forms

<fjh> it is from the first F2F discussion, unclear of issue source

scantor: (OAUTH is http://oauth.net/)

<fjh> scott - ability to reference form controls might be useful

scantor: not sure this is the right venue to write the response/spec for this

<tlr> is there value to us spending time on this?

<klanz2> JSR 105 offers a URIDereferencer that allows dereferencing to be extended

scantor: AOL came to the SAML TC with use cases involving bundling SAML assertions into HTTP

Hlockhar: what are the assumptions about signing?

<klanz2> https://online.tu-graz.ac.at/tug_online/voe_main2.getVollText?pDocumentNr=90836#page=38 :

<klanz2> URIs as References or address for Resources

<klanz2> The use of URIs as a reference or an address employs the term resource defined in [25] as

<klanz2> [. . . ] whatever might be identified by a URI. [. . . ]

<klanz2> Which is to some extent circular and mainly lives from the examples provided:

<klanz2> [. . . ] an electronic document, [. . . ] a service [. . . ] human beings, corporations, and bound

<klanz2> books [. . . ] Likewise, abstract concepts [. . . ], such as the operators, and operands of a

<klanz2> mathematical equation, the types of a relationship [. . . ], or numeric values [. . . ].

<klanz2> This broad definition of a resource is useful to link from one resource to another and hence this document

<klanz2> can link to an email addressee by using mailto:Konrad.Lanz@iaik.tugraz.at, to a telephone

<klanz2> by using tel:+433168730 (callto:+433168730), a Vo-IP connection like skype:echo123,

<klanz2> for instance if read electronically.

scantor: there are a few mechanisms, including RSA

<klanz2> However, when it comes to signing, URIs are per se not very useful and we have the additional requirement,

<klanz2> that they need to be dereferenceable to an octet-stream, which can be digested and signed.

<klanz2> This is commonly true for URIs using the http: scheme, when retrieving data from the web like in

<klanz2> expression 2.3.13.

<klanz2> http://www.example.org/index.html (2.3.13)

<klanz2> Applications may even dereference URIs under the tel: scheme to binary data objects; for example

<klanz2> by requiring the called party to answer by modem, using the dialpad or simply recording the voice. This

<klanz2> is however not commonly used and such examples just serve the purpose of demonstrating that resource

<klanz2> retrieval will have to be reproducible.

<klanz2> Such is necessary, so that verification can be performed at a later time especially as RFC 3986 [. . . ]

<klanz2> does not require that a URI persists in identifying the same resource over time, though that is a common

<klanz2> goal of all URI schemes [25].

<klanz2> In the case of a phone call and recorded voice this becomes immediately apparent; two voice recordings

<klanz2> of the same sentence are unlikely to be binary-equivalent data objects. This points to one of the inherent

<klanz2> strengths of an OASIS-DSS request, where arbitrary data can be associated with arbitrary URIs. So the

<klanz2> actual data retrieval is detached from the process of signing or verifying. Hence data stored during

<klanz2> signing can be supplied for verification at a later time, which can be useful if the resource has ceased

<klanz2> to exist or changed.

(Moving on to Issue-69)

<scribe> ACTION: pdatta to update the transforms related to ISSUE-69 [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action06]

<trackbot> Created ACTION-174 - Update the transforms related to ISSUE-69 [on Pratik Datta - due 2009-01-21].

klanz2: two issues to discuss. 1) using exclusive c14n whenever possible, and when is it possible

fjh: could you please send it in email?

<scribe> ACTION: klanz2 to send an email summarizing the points he's raised with the cut-and-paste in the chat. [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action07]

<trackbot> Created ACTION-175 - Send an email summarizing the points he's raised with the cut-and-paste in the chat. [on Konrad Lanz - due 2009-01-21].

fjh: We will try to approve publication of the Best Practices at the next conf call, so please try to resolve your actions before then.

RESOLUTION: Plan to agree to publish the Best Practices document during the next conf call on 1/27/09

<fjh> http://lists.w3.org/Archives/Public/public-xmlsec/2009Jan/0033.html

(we're now looking at the proposal Brad sent to the list regarding DTD text)

klanz2: need some way to track our own comments on best practices

bhill: I think it's pretty well known that DTD processing should be disabled for security processing -- that's pretty well known in the security community at least
... but you can't turn it off in XMLDSIG right now

<klanz2> can hardly hear brad sorry ...

bhill: between selection and canonicalization in the new processing model, need to indicate whether DTDs need to be included
... maybe eliminate last paragraph

pdatta: would like to make the statement a bit stronger

fjh: seems to be agreement on the point raised in the last paragraph, so can we eliminate it

RESOLUTION: to accept the proposal from Brad with the last paragraph removed

<fhirsch> currently discussing agenda

fjh: Agenda bashing for the remainder of the meeting

tlr: do we have any idea whether Transforms are actually used in RetrievalMethod?

scantor: believe the answer to the question is "no"
... for remote references, if the key is standalone you can just use the URI
... for samedoc reference or for referencing into a remote doc, then you need a transform
... (these were sent to the mailing list previously)
... really don't want to use transforms at all

<fhirsch> http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-11/Overview.htm#sec-RetrievalMethod

scantor: not used because it's not a fun thing to implement. use case it was meant to address has value
... (specifically the use case of being able to reference a key in multiple places in a single document)

<fhirsch> scott - get back the box or the thing in the box, using RetrievalMethod you get box, then use transform to get content

scantor: Issue is whether you get back "the box" or "the thing in the box" via the URI deference in RetrievalMethod

<fhirsch> note you don't need generic transform for this, just clear definition in core

tlr: the spec is relatively loose on this point
... if RM gives you back the box, why can't the app do the unboxing?

scantor: because the spec says the types that refer to existing key material that can go into the RM are specific to the children

fjh: need to clarify in the spec what should be going on

scantor: but it's going to be a breaking change
... spec is clear but backwards
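
[Scribe note, illustrative only: the "box vs. thing in the box" ambiguity. In a sketch like the one below, the question is whether dereferencing the URI yields the referenced KeyInfo/X509Data wrapper or the certificate octets inside it; today a Transform chain is the only hook for expressing the difference. The Id value and base64 content are hypothetical placeholders.]

  <!-- key material published once in the document -->
  <ds:KeyInfo Id="SharedKeyInfo">
    <ds:X509Data>
      <ds:X509Certificate>MIIC...base64...</ds:X509Certificate>
    </ds:X509Data>
  </ds:KeyInfo>

  <!-- a second KeyInfo elsewhere refers back to it -->
  <ds:KeyInfo>
    <ds:RetrievalMethod URI="#SharedKeyInfo"
        Type="http://www.w3.org/2000/09/xmldsig#X509Data"/>
  </ds:KeyInfo>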

tlr: why do I need a transform?
... there needs to be some magic, and that needs to be written down in a transform (today)

scantor: can't add ID elements to the schema without breaking it

<fhirsch> scott - issue if multiple children, which one to select,

fjh: why do you need a transformchain?

<fhirsch> prefer simple and clear straightforward approach

csolc: ID management is really hard when you're doing document assembly
... if we could come up with a way of annotating the data with IDs ...

fjh: pdatta mentioned work in WS-*

pdatta: KeyIdentifier

<fhirsch> KeyIdentifier

fjh: think it would be a mistake to have a generic transform mechanism

bhill: and it's inherently unauthenticatable

fjh: you point to something that has the keys
... or you point to something that contains them

scantor: if you use fixed IDs, are there substitution attacks?

csolc: another example where IDs don't necessarily get you what you want: workflow, P.O.s, doing a selection for items over $1000

fjh: but we're just talking about KeyInfos here

csolc: how do you find the key for the item that's over $1000

(discussion of this use case)

tlr: you want to do a self-reference where you have no idea what your own ID is

scantor: willing to believe it's necessary to have this capability, but a fully-generalized Transform chain probably isn't what we want

bhill: should be able to simplify it, constrain what is valid in that location

fjh: don't you really just need to be able to say "I want the contents of the box, or the nth thing in the box"

scantor: it would be really nice if we could not need more of the XML stack to solve this
... the more we end up in a situation where we're going to produce a new schema, then this becomes more of a non-issue

fjh: doesn't seem like there's value in trying to retain

scantor: lots of use cases that come up w/ sig & enc, e.g. encrypting many things to the same individual

<fhirsch> issue: Determine approach to RetrievalMethod in 2.0 with regard to transforms, if any, or if revised transform approach

<trackbot> Created ISSUE-87 - Determine approach to RetrievalMethod in 2.0 with regard to transforms, if any, or if revised transform approach ; please complete additional details at http://www.w3.org/2008/xmlsec/track/issues/87/edit .

smullan: asks if anyone has ever seen a signature with a retrievalmethod

<fhirsch> use case - retrievalmethod to point to encrypted key

tlr: asks whether anyone has seen a deployment of RetrievalMethod w/ a non-empty TransformChain

<fhirsch> use case - x509 cert

smullan: even the interop tests use the raw X.509 type that doesn't use the Transform in RM

tlr: maybe v1.1 should say something about it being a bad idea to use Transforms w/ RetrievalMethod

fjh: in 2.0 we either use the same transform model or make it a lot simpler

bhill: should be able to be a lot simpler in 2.0

scantor: could say use xpointer, or use this restricted syntax

<fhirsch> bhill: e.g. selection only portion of transform simplification

tlr: need for a "blessed" xpointer schema
... there is a Text Encoding Initiative spec for xpointer
... w3c's initial spec for xpath-based xpointer went nowhere
... don't know what the best way to fix right now
... within the scope of that WG, found a narrow fix that solved their specific problem in their timeframe. not a general solution
... no Recommendation or other useful spec for the "xpointer" xpointer scheme

fjh: what do we do to make progress? Should there be a proposal?

scantor: no proposal yet since there's no 2.0 framework yet

tlr: suggests pointing out the problems in the 1.1 spec
... signal possible deprecation

<scribe> ACTION: tlr to draft text for v1.1 signaling possible deprecation in 2.0 [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action08]

<trackbot> Created ACTION-176 - Draft text for v1.1 signaling possible deprecation in 2.0 [on Thomas Roessler - due 2009-01-22].

<fhirsch> what are the requirements for RetrievalMethod -

<scribe> ACTION: scantor to document requirements for RetrievalMethod [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action09]

<trackbot> Created ACTION-177 - Document requirements for RetrievalMethod [on Scott Cantor - due 2009-01-22].

pdatta: should we put a warning in 1.1

tlr: we should call out the difficulties with using what currently exists.

scantor: which is pretty much what's in the best practices text

tlr: don't want to use RM in any complex cases

<tlr> ACTION-176?

<trackbot> ACTION-176 -- Thomas Roessler to draft text for v1.1 signaling possible deprecation in 2.0 -- due 2009-01-22 -- OPEN

<trackbot> http://www.w3.org/2008/xmlsec/track/actions/176

<tlr> ACTION-176?

<trackbot> ACTION-176 -- Scott Cantor to draft text for v1.1 signaling possible deprecation in 2.0 -- due 2009-01-22 -- OPEN

<trackbot> http://www.w3.org/2008/xmlsec/track/actions/176

scantor: one more point for the minutes: apparently one of the reasons that the SAML text around encryption got changed was because IBM had an implementation that depended on RetrievalMethod for some automation

bhill: works in a lot of cases because it's just "plug it in and use the existing code you have"

(now looking at KeyInfo on the whiteboard)

bal gives some history of the components of KeyInfo

fjh: why isn't OCSP there?

bal: wasn't done at the time

phb: verisign CRL is currently >4MB, would be nice to have OCSP support

fjh: any reason why we couldn't define an OCSP element in 1.1?

<scribe> ACTION: smullan to draft text for v1.1 on OCSPResponse subelement for X509Data [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action10]

<trackbot> Created ACTION-178 - Draft text for v1.1 on OCSPResponse subelement for X509Data [on Sean Mullan - due 2009-01-22].

phill: encoding is Base64 of whatever the OCSP spec says

one response per <OCSPResponse> element

<Zakim> tlr, you wanted to tell PHB that DER does appear in the OCSP spec

<tlr> http://www.ietf.org/rfc/rfc2560.txt

smullan: can't find anything in the spec saying it's limited to EE certs

tlr: OCSP spec is written in terms of ASN.1, HTTP binding in the appendix, that it's DER and then Base64 encoded. Questions: a) what other transports are used b) what encodings do they use c) how common?

scantor: want to say that it's RECOMMENDED that consumers can consume BER & DER
... some variation of the text from the certificate section should be OK

phb: says to not mention the encoding because then you need to trace down where it's referenced

scantor: point is to tell implementer what to expect
... we have that text for certificates

<fhirsch> The OCSPResponse element contains a base64 encoded OCSP response

<fhirsch> [ OCSP ]

<tlr> While in principle many certificate encodings are possible, it is RECOMMENDED that certificates appearing in an X509Certificate element be limited to an encoding of BER or its DER subset, allowing that within the certificate other content may be present. The use of other encodings may lead to interoperability issues. In any case, XML Signature implementations SHOULD NOT alter or re-encode certificates, as doing so could invalidate their signatures.

<fhirsch> <element OCSPResponse base64binary>
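
[Scribe note, illustrative only: a possible shape for the markup being sketched here; the element name, its placement under X509Data, and the namespace all remain provisional pending ACTION-178 and the namespace discussion below.]

  <ds:X509Data>
    <!-- one base64-encoded (DER) OCSP response per OCSPResponse element -->
    <dsig11:OCSPResponse xmlns:dsig11="http://www.w3.org/2009/xmldsig11#">
      MIIB...base64-encoded-OCSP-response...
    </dsig11:OCSPResponse>
  </ds:X509Data>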

tlr: should we have a separate 1.1 namespace?

<fhirsch> use 1.1 namespace

<fhirsch> generic

<fhirsch> therefore update ecc namespace

<fhirsch> http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-11/Overview.htm#sec-ECKeyValue

<fhirsch> eg change "http://www.w3.org/2009/xmldsig-ecc#ECKeyValue" to xmldsig11#

<fhirsch> eg replace xmldsig-ecc with xmldsig11 in that URI

<scribe> ACTION: bal to either update the v1.1 draft with a consistent namespace URI suitable for ECC and OCSPResponse, or come back with technical reasons why that isn't possible [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action12]

<trackbot> Created ACTION-179 - Either update the v1.1 draft with a consistent namespace URI suitable for ECC and OCSPResponse, or come back with technical reasons why that isn't possible [on Brian LaMacchia - due 2009-01-22].

KeyInfo syntax for v2.0

scantor: bal & I went back and forth on the last call to add an optional encoded key syntax
... seemed to me that for subject public key info you wouldn't need a full ASN.1 stack

<fhirsch> http://www.w3.org/2009/01/06-xmlsec-minutes.html#item06

scantor: additional optional child that would contain the SPKI

<fhirsch> http://lists.w3.org/Archives/Public/public-xmlsec/2008Dec/0031.html

scantor: wouldn't be imposing an additional constraint
... tried to find some specific references to other groups that are working only with the KeyInfo element and not all of XMLDSIG
... can't find an example at the moment

<fhirsch> scott: ws-federation is using saml metadata, so that is another example of need in addition to saml

bal: would like to understand the use case first ...
... scenario: dsig itself not involved ...
... want to move data betw 2 xml consuming applications ...
... considerations:
... spki is self-describing in the sense that there's always OID + octet-stream ...
... OID tells you what algorithm ...
... based on OID, go back to parser ...
... technically speaking, if you want to pass SPKI around ...
... do the same we just did for OCSPResponse ...
... if we have an ASN.1 library, it will just take care of this
... one could consider doing that sort of thing
... if we get to the point where we need other mechanisms ...
... could be beneficial to do KeyHash ...
... you'd like to pass around the key hash version ...
... unfortunately, no standard inside X.509 in that

hal: It's in WSS
... X.509 profile
... original thing: various kinds of references were ambiguous ...
... somebody raised the orthodoxy -- issuer + serial unique ...
... but matching issuers, oh well

<fhirsch> bal notes could also define X509SubjectPublicKeyInfo element with base64 encoded value, of asn1 structure

hal: by 1.1, everybody wanted to have key hash

bal: I shall not dive into the rathole of name-based vs key-based chaining
... could see value in defining SKI mechanism.
... for interop, would need to pick one ...
... hash a key representation ...
... then what serialization do you use?
... could use XML serialization
... there is an orthodoxy question here
... we have a way to pass keys around
... the fact that there is a certain toolset that deals with one format and not another
... I don't have a lot of sympathy for that

scantor: I have never seen a crypto library that knew about the XML Syntax
... I have seen XML Signature code that handles the mapping between the two syntaxes
... there's always a glue layer
... what I'm arguing is that it would be nice to do this without a glue layer

bal: the other difficulty here is that they don't use dsig
... they rip off a piece of dsig
... we don't really like digital signatures, we take a random element
... and we don't like XML, so we'd rather have something else

fjh: keyInfo intentionally reusable?

bal: that didn't extend beyond encryption and xkms

fjh: ... and wherever else I wanted it

hal: that was a decision made in the first six months

scantor: there are places in SAML assertions where keys are represented using KeyInfo

phb: saml assertion layer came from very close to XKMS

hal, bal, scantor: interference

bal: looked at this further -- you could do SPKI very easily, but that won't work with OpenSSL
... OpenSSL requires that you identify algorithm on the command line
... the structure does the switching properly internally, thereby making two elements superfluous

fjh: not superfluous -- could pull out of XML, do on command line

bal: now, every time you do a new key type, need to touch the schema
... would like to have a single element ...

scantor: there's precedent for people hacking OpenSSL to look at the first two bytes of the structure -- ugh

bal: other question -- looks like we're going to reopen KeyInfo for 2.0
... want key hash, etc

scantor: if pushback is we shouldn't specialize for each algo, let spki deal with it -- if all we need to do is work around broken APIs, we can do that

bal: trying very hard to not do key serialization at the XML layer
... in .net framework, XML is native serialization format for key
... that's not true elsewhere
... not sure what Java native serialization format is

bal: Java has ASN serialization?

scantor: there's Java code that serves as a glue layer

bhill: depends on your key store implementation
... can use PKCS#11 as key store

frederick: maybe we can get the OpenSSL folks to fix their API?

scantor: I can live with that. I can hear why it makes no sense to have separate elements for each algorithm
... if I wanted to go through the motions of getting OpenSSL to fix things, that would be a choice
... the raw processing is there, don't need significant glue code

fjh: how does this affect SAML community?

bal: you can't make this mandatory

scantor: is ASN.1 mandatory in the SAML community, in practice? yes

bal: they could use it as long as we put it in
... does SAML do a profile of dsig?

scantor: for signing
... the context here has less to do with signatures; there are evolving profiles that relate to "if you find this in KeyInfo, do that"
... some of us are nailing down profiles for dealing with keyInfo when it occurs in SAML context ...
... we're getting to the level of "if you find X509Data, do this",

bal: if you have to apply security policy against keys

scantor: I would be shocked if anybody did what you describe

bal: we do some of that for symmetric key stuff where we're required to check for weak keys

scantor: interesting point; hasn't come up

bal: if you're trying to defend against chosen * attacks

scantor: in my use case, once the key gets where it's going, that kind of work has been done
...
... sense is that if this is something that people would do, they likely have mechanisms in either place

bal: there's the whole ?? key thing
... and there's also, back with DSA, you had to have the parameters generated with a particular PRNG ...
... so you had to send the seeds, and the relying party had to verify that with a NIST-mandated process

fjh: ??

bal: ??
... you must support parsing a number of keyValues
... then there's the alternative that's optional ...

fjh: why is there a problem having some optional things?

bal: we haven't yet had an optional that duplicates something that's mandatory
...

fjh: you're ok to have the optional ones for the communities where it makes sense

bal: would not make it mandatory

fjh: scott, required?

scantor: no

fjh: we don't have a problem, then

bal: don't want to require that everybody implement an ASN.1 parser

fjh: sounds like we have a list of optional features here

bal: as long as we're fine specifying an alternate encoding that doesn't adhere to the XML religion...
...
... you want all the X.509 stuff, and have a standardised extension
... given that there's other stuff to do in keyInfo, it's probably a good idea that we do it

scantor: we would probably have done the algorithm discrimination bit in XML wrong if done elsewhere

fjh: general agreement here?

scantor: would like to add optional algorithm type

bal: probably ok
... HOWEVER, when you're using it for a public key value
... you want to say what's the key type
... right now, we have signature algorithm types
... we don't have URIs defined for that

scantor: use RetrievalMethod types?

hal: what's the problem with inventing more URIs?
... it's not a deep issue ...

bal: see the point -- could put the OID there

sean: why want this?

scantor, bal: probably don't want to have the additional algorithm thing

bal: a single element X509SubjectPublicKeyInfo

scantor: wouldn't want it in X509Data

bal: is it X509 or PKCS#8?

sean: that's private key info

bal: what's the reference?

scantor: found it in RFC 5280

bal: thought they imported it from ??

scantor: don't remember precisely

fjh: 3279?

scantor: if X509Data is right place to put it, put it there

sean: a little weird
... because X509Data is for certificate...

scantor: schema lets me repeat KeyValue

scribe: looks like the interesting pieces are in prose ...

bal: so you want to hang it off KeyValue

scantor: or put it right under the root

bal: there's a purity question here
... if KeyValue is always the XML encoded one, put it in parallel ...
... It is X509SubjectPublicKeyInfo ...
... the text we wrote (which was a fight at the time) was about what could go into X509Data

fjh: want a new element in this list, key value of a different kind, but don't have the right name

bal: PKIXPublicKeyInfo?
... if we are going to do a key hash, that's at the same level

fjh: 1.1 or 2.0?

scantor: we're already cracking KeyInfo open for 1.1

phb: one of the things that struck me as strange is that we're only doing public key...

bal: if we do it in 1.1, want to keep it in the same spot for 2.0

phb: actually, we can move it around, but not change it, the way the schema is designed

sean: if we want to do KeyInfo in 2.0, maybe don't do it in 1.1

fjh: dependencies?

scantor: probably need it in 1.1, time-wise

bal: if we want to get it into 1.1 before FPWD, then that would delay FPWD
... we want to look at implementation stuff, scantor has to go back to his...
... can we put it into 1.1 at a later point?

tlr: it makes sense to put out FPWD as quickly as possible, can add features later

bal: would like to get this and key hash in

scantor: specific person for the key hash stuff

bal: problem with the hash is what serialization we would want to hash
... difference on implementation side possible; I'd like to have that information

hal: are we talking about a hash of the cert?

bal: no, hash of the key!
... which would be the subject key identifier, if there was a standard for it ...

scantor: there's lots of federations that are moving to self-asserted keys
... push-back is often "you're using self-signed certificates, but not as certificates"
... if we could use keys stand-alone, a lot of that would go away
... the sooner we know what we're dealing with, the sooner we can push out an implementation

fjh: I hear I don't need to be frantic about timing

scantor: I think having support for this is something that I can take back

fjh: does kelvin have anything new about ocsp?

bal: umh

fjh: what do we do with the RNG schema?

RelaxNG Schema

<bal> tlr: (attachment to 1.1)

<bal> ... we're not touching the existing schema

<bal> ... we could touch the new RelaxNG schema

<bal> ... and be more restrictive

<bal> scantor: could use the new schema to restrict the set of valid documents further
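
A rough, purely hypothetical RELAX NG fragment (XML syntax) of the kind of tightening being discussed, for example requiring KeyInfo children to be drawn from a fixed list rather than allowing arbitrary content; it assumes the usual xmldsig patterns (KeyName, KeyValue, X509Data, RetrievalMethod) are defined elsewhere in the grammar.

  <!-- Hypothetical fragment, illustration only; not a proposed schema -->
  <define name="KeyInfo" xmlns="http://relaxng.org/ns/structure/1.0"
          ns="http://www.w3.org/2000/09/xmldsig#">
    <element name="KeyInfo">
      <optional>
        <attribute name="Id"/>
      </optional>
      <oneOrMore>
        <choice>
          <ref name="KeyName"/>
          <ref name="KeyValue"/>
          <ref name="X509Data"/>
          <ref name="RetrievalMethod"/>
        </choice>
      </oneOrMore>
    </element>
  </define>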

fjh: we're going to FPWD, so this is a great way to get feedback

tlr: there was an optional RNG schema floating around; there was a bug

fjh: focusing on 1.1 in my mind

<fhirsch> ACTION: fjh answer re status of RNG schema [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action13]

<trackbot> Created ACTION-180 - Answer re status of RNG schema [on Frederick Hirsch - due 2009-01-22].

tlr: we should defer the update question

fjh: we should update the RNG schema for 1.1 and try to get feedback on it

<scribe> ACTION: fjh to update RelaxNG schema for 1.1 [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action14]

<trackbot> Created ACTION-181 - Update RelaxNG schema for 1.1 [on Frederick Hirsch - due 2009-01-22].

fjh: RNG is easier than XSD
... easier to work with
... we have a schema that seems to make sense
... do we set incorrect expectations to include RNG schema in 1.1?

tlr: if we issue an RNG schema for 1.0 we could still issue a different one for 1.1

scantor: why would the schemas be different?

<fhirsch> close ACTION-181

<trackbot> ACTION-181 Update RelaxNG schema for 1.1 closed

<fhirsch> Kelvin ok with update to 1.1 namespace name

suggestion is to put the new KeyHash element as a child of KeyInfo

and to put the new X509SubjectPublicKeyInfo (name subject to change) as a child of KeyValue
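
Put together, and with the caveat that both names are explicitly subject to change, the proposal might look roughly like the following hypothetical sketch (namespace placement is deliberately glossed over, and the values and DigestAlgorithm attribute are invented for the sketch):

  <!-- Hypothetical sketch; both element names are subject to change -->
  <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
    <KeyValue>
      <!-- proposed child of KeyValue: the key as a base64 DER-encoded
           SubjectPublicKeyInfo (the RFC 5280 structure) -->
      <X509SubjectPublicKeyInfo>MIIBIjANBgkq... (truncated placeholder) ...</X509SubjectPublicKeyInfo>
    </KeyValue>
    <!-- proposed child of KeyInfo: a digest of the key; which serialization
         gets hashed was left as an open question in the discussion -->
    <KeyHash DigestAlgorithm="http://www.w3.org/2001/04/xmlenc#sha256">
      q3mj... (truncated placeholder) ...
    </KeyHash>
  </KeyInfo>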

Actions review

<tlr> ACTION-113 closed

<trackbot> ACTION-113 Suggest text re versioning and namespaces for XML Signature closed

<tlr> ACTION-129 closed

<trackbot> ACTION-129 Update signature properties based on feedback closed

<tlr> ACTION-130 closed

<trackbot> ACTION-130 Create template for algorithm note closed

<tlr> ACTION-136 closed

<trackbot> ACTION-136 Propose stronger language on MD5 for 6.2 closed

<tlr> ACTION-170 closed

<trackbot> ACTION-170 Write about C14n and DTD processing closed

<tlr> ACTION-88 closed

<trackbot> ACTION-88 Look at the EXI use cases closed

<tlr> ACTION-90 closed

<trackbot> ACTION-90 Provide a draft for the requirements document of the simple signing requirements. closed

<tlr> ACTION-121 closed

<trackbot> ACTION-121 Add new algorithms to XML Encryption for 1.1 closed

<tlr> ACTION-132 closed

<trackbot> ACTION-132 Add wording around DSAwithSHA1 for 1.1 to draft closed

<tlr> ACTION-134 closed

<trackbot> ACTION-134 Update conformance document for RSA-SHA256 closed

<tlr> ACTION-139 closed

<trackbot> ACTION-139 <complexType name="NameCurveType" must be "NamedCurveType" closed

<tlr> ACTION-138 closed

<trackbot> ACTION-138 Compare 4.4.2.3 to RFC 3279 to determine if they are consistent closed

<tlr> ACTION-140 closed

<tlr> ACTION-141 closed

<trackbot> ACTION-140 Change text in 6.2 to remove statement that there is only one digest closed

<trackbot> ACTION-141 Change URI in 6.2.3 to be SHA-384 closed

<tlr> ACTION-142: tickler for when FIPS-186-3 comes out

<trackbot> ACTION-142 Come up with identifiers and add to the algs doc for the new DSA algorithms notes added

<tlr> ACTION-143 closed

<tlr> ACTION-144 closed

<trackbot> ACTION-143 Summarize the i2os function and put it in the doc closed

<trackbot> ACTION-144 Drop the addresses section closed

<tlr> ACTION-145 closed

<tlr> ACTION-146 closed

<trackbot> ACTION-145 Change language in 4.4.2.3.2 to say if you need to interoperate with 4050 implementations then you may do this. closed

<tlr> ACTION-148 closed

<trackbot> ACTION-146 Add rfc 3279 to references closed

<trackbot> ACTION-148 Close the complex type element in 5.5.4 closed

<tlr> ACTION-149 closed

<trackbot> ACTION-149 Change reference X.9.63 to section G closed

ACTION-149: changed x9.63 reference to SEC1 and SEC2 references

<trackbot> ACTION-149 Change reference X.9.63 to section G notes added

<tlr> ACTION-154: overtaken by events

<tlr> ACTION-154 closed

<tlr> ACTION: bal to update 1.1 namespace [recorded in http://www.w3.org/2009/01/14-xmlsec-minutes.html#action15]

<trackbot> ACTION-154 Fix the URI in the document to www.w3.org/2009/xmldsig-ecc# notes added

<trackbot> ACTION-154 Fix the URI in the document to www.w3.org/2009/xmldsig-ecc# closed

<trackbot> Created ACTION-182 - Update 1.1 namespace [on Brian LaMacchia - due 2009-01-22].

<tlr> ACTION-155 closed

<trackbot> ACTION-155 Update text on 6.2 to reflect contemporary cryptanalysis on MD5, SHA1 closed

<tlr> ISSUE-84?

<trackbot> ISSUE-84 -- What should the best practices say about defenses against collision generation? -- OPEN

<trackbot> http://www.w3.org/2008/xmlsec/track/issues/84

<tlr> ACTION-161 closed

<trackbot> ACTION-161 Fix AES 256 reference in block cipher table closed

<tlr> ACTION-162: probably subsumed by ACTION-165

<tlr> ACTION-162 closed

<trackbot> ACTION-162 Section 9 to be headed RetrievalMethod Type Values notes added

<trackbot> ACTION-162 Section 9 to be headed RetrievalMethod Type Values closed

<tlr> ACTION-163 closed

<trackbot> ACTION-163 Section 3 to be headed MAC closed

RESOLUTION: Thanks to Oracle, Pratik and Hal for hosting the meeting!

Summary of Action Items

[NEW] ACTION-168: Contribute additional text for transform note, to make clear what this document gets at [on Scott Cantor - due 2009-01-21].
[NEW] ACTION-169: Update draft - Transform Note [on Pratik Datta - due 2009-01-21].
[NEW] ACTION-170: Write about C14n and DTD processing [on Bradley Hill - due 2009-01-21].
[NEW] ACTION-171: Draft a proposal for canonicalization improvements [on Konrad Lanz - due 2009-01-21].
[NEW] ACTION-172: Update the Publication Status page on the Wiki [on Frederick Hirsch - due 2009-01-21].
[NEW] ACTION-173: Draft some text in response to ISSUE-51 [on Scott Cantor - due 2009-01-21].
[NEW] ACTION-174: Update the transforms related to ISSUE-69 [on Pratik Datta - due 2009-01-21].
[NEW] ACTION-175: Send an email summarizing the points he's raised with the cut-and-paste in the chat. [on Konrad Lanz - due 2009-01-21].
[NEW] ACTION-176: Draft text for v1.1 signaling possible deprecation in 2.0 [on Thomas Roessler - due 2009-01-22].
[NEW] ACTION-177: Document requirements for RetrievalMethod [on Scott Cantor - due 2009-01-22].
[NEW] ACTION-178: Draft text for v1.1 on OCSPResponse subelement for X509Data [on Sean Mullan - due 2009-01-22].
[NEW] ACTION-179: Either update the v1.1 draft with a consistent namespace URI suitable for ECC and OCSPResponse, or come back with technical reasons why that isn't possible [on Brian LaMacchia - due 2009-01-22].
[NEW] ACTION-180: Answer re status of RNG schema [on Frederick Hirsch - due 2009-01-22].
[NEW] ACTION-181: Update RelaxNG schema for 1.1 [on Frederick Hirsch - due 2009-01-22].
[NEW] ACTION-182: Update 1.1 namespace [on Brian LaMacchia - due 2009-01-22].

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.134 (CVS log)
$Date: 2009/02/03 15:09:20 $