DID WG Telco — Minutes
Date: 2021-05-18
See also the Agenda and the IRC Log
Attendees
Present: Daniel Burnett, Ivan Herman, Shigeya Suzuki, Justin Richer, Markus Sabadello, Manu Sporny, Geun-Hyung, Chris Winczewski, Dmitri Zagidulin, Amy Guy, Adrian Gropper, Brent Zundel, Drummond Reed, Ted Thibodeau Jr.
Regrets: Charles Lehner
Guests:
Chair: Daniel Burnett
Scribe(s): Dmitri Zagidulin, Amy Guy
Content:
- 1. Agenda Review, Introductions, Re-introductions
- 2. No Special Topic Call
- 3. ITU and DID
- 4. Status of Implementation Feedback
- 5. Report on documents
- 6. Status of registries
- 7. Resolutions
1. Agenda Review, Introductions, Re-introductions
Daniel Burnett: we’re going to talk about the Special Topic Call, Brent will mention our communication with the ITU, then the status of implementations … then registries, then DID Core issues & PRs … any requests for changes/additions to the agenda?
2. No Special Topic Call
Daniel Burnett: summary - no special topic call this week. We’re going to be switching to some other work, but at the moment, no particular items
… what we need is for people to get their implementations in
… so, in case you missed it - no special topic call this week.
3. ITU and DID
Daniel Burnett: Ivan, can you give us a summary?
Manu Sporny: btw, SG17 is their Security Study Group – https://www.itu.int/en/ITU-T/studygroups/2017-2020/17/Pages/default.aspx
Manu Sporny: … and here’s what they’re publishing: https://www.itu.int/md/T17-SG17-R
Ivan Herman: so, we were contacted by the ITU Working Group
… they collect references to a number of identity-related standards all over the world
… not only formal standards, but also other various groups like FIDO
… Brent was invited to go to one of their calls
… to give a presentation about DIDs, which he did, brilliantly
… and now they have contacted us to ask – if we understand their request correctly – whether we would be ok with them listing the DID spec on their list of identity standards
… the only thing they need is a 1-2 paragraph description (of what this group does, wants to do)
… and of course reference to the standard. And an authorization/blessing from us to include our spec on the list.
… I think the right time to do that would be in a few weeks or months, when we begin to discuss the future of this WG
… what’s our longer-term plan for spec maintenance, etc.
… Brent - anything I’m missing?
Brent Zundel: no, +1 to what you said
… it was fascinating how much they already knew about DIDs, and are planning to use the tech (and encourage their members)
… (which are telcos)
… so, I encourage us to look at this positively.
Markus Sabadello: personally, I think it would be great if they reference our work. one question - if I remember correctly, we were contacted about a year ago, by ITU Study Group 17 (?)
… which was about security aspects for distributed ledger tech. (and they asked for a liaison?). Was that the same group?
Ivan Herman: I don’t remember.
Brent Zundel: 17 is the Study Group for Identity management
Ivan Herman: ah ok, so.. seems like the same group, but not exclusively about ledgers
Daniel Burnett: Ivan, do we need a decision by the group? Or just, hear if there are any objections?
Ivan Herman: let’s do a resolution
Daniel Burnett: ok, I’ll put a draft proposal.
Ted Thibodeau Jr.: Question back to them: is this something they’ll publish in stone at the end of the week, a static list once published? Or something we can be listed in later (once we’re done)?
Ted Thibodeau Jr.: so that it can just be part of the PR when we publish
Ivan Herman: good question. My proposal is - let’s tell them we’ll give them information about our longer-term plans; it’ll be a combination of a reference to the current WG, plus an idea of the longer term plans
… so, we can send them something in 1-1.5 months
Brent Zundel: that sounds like half-agreement with what Ted said?
Ivan Herman: yeah
Drummond Reed: do we really have to approve someone else referencing our spec?
Ivan Herman: that was my question too. our legal advisory people said - institutions like ITU are 10x as cautious about this kind of thing; they always ask for authorization
Drummond Reed: sure, but that’s the whole point of a public spec :)
Brent Zundel: drummond, that was my first question too, re approval :) (but, these large orgs seem to like that)
Drummond Reed: yep, got it
Daniel Burnett: how about - instead of 1-2 months, we can just say “when the spec is complete”
Drummond Reed: Looks good
Ted Thibodeau Jr.: I think we should also use W3C labels for these things – e.g., not “specification”, but CR (or PR or TR)
Drummond Reed: looks good
Proposed resolution: We approve ITU SG 17 referencing our DID Core Technical Report in their report on identity standards when it is completed. (Daniel Burnett)
Ted Thibodeau Jr.: +1
Daniel Burnett: +1
Ivan Herman: +1
Dmitri Zagidulin: +1
Manu Sporny: +1
Chris Winczewski: +1
Shigeya Suzuki: +1
Amy Guy: +1
Markus Sabadello: +1
Brent Zundel: +1
Drummond Reed: +1
Geun-Hyung: +1
Resolution #1: We approve ITU SG 17 referencing our DID Core Technical Report in their report on identity standards when it is completed.
4. Status of Implementation Feedback
Daniel Burnett: how are we doing on implementations?
Manu Sporny: (sharing screen)
… ok, implementations! we have a ton of them in, which is great
… we have a ton of Resolver implementations. (unfortunately, it’s all the Universal Resolver :) )
… but still, that’s great
… we have 3Box and ConsenSys resolvers, as well
… only one dereferencer still. (so, problematic)
… and a number of DID Method submissions.
… now, this is only after a week, so, that’s still good.
… and we have another week to take implementation reports, to determine whether we have enough to exit CR
… for us to know that, we need to generate a report (that tells us if there’s at least 2 independent impls, for each feature)
… there are upwards of 150 normative statements
… we needed an automated way to do that. Shigeya started on an automatic report generator (in this PR), made great progress
… here’s a preview of the Test Suite report
Shigeya Suzuki: actually, just finished working on it (besides the matrix)
Manu Sporny: organized by sub-suites
… layout is - here’s the normative statement being tested, then every single DID Method & thing that conforms to it
… we don’t have anything checked in that is failing yet.
… the idea is, we’re going to go down through the report, and look for things that only have one implementation.
… for example - JSON Production
… for that, we don’t have anything, other than an example, generating a Number, a Null Value, a Double, that sort of thing
… we may choose to overlook this, because - it may be obvious to the WG, that clearly we want to support integers in the data model, etc.
… but it’s just an example implementation
Shigeya Suzuki: https://github.com/w3c/did-test-suite/pull/105
Manu Sporny: https://pr-preview.s3.amazonaws.com/shigeya/did-test-suite/pull/105.html#conformance-by-test-suites-3
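For orientation, here is a minimal sketch of the aggregation step such a generator performs - counting distinct implementations per normative statement and flagging the ones below the “at least two” bar. The data shapes below are hypothetical illustrations, not the actual did-test-suite structures from PR #105:

```typescript
// Hypothetical sketch of the report-generator aggregation. The TestResult
// shape is assumed for illustration; the real did-test-suite defines its own.

interface TestResult {
  statement: string;      // the normative statement under test
  implementation: string; // name of the submitted implementation
  passed: boolean;
}

// Collect the set of distinct passing implementations per normative statement.
function implementationsByStatement(
  results: TestResult[]
): Map<string, Set<string>> {
  const byStatement = new Map<string, Set<string>>();
  for (const r of results) {
    if (!r.passed) continue;
    const impls = byStatement.get(r.statement) ?? new Set<string>();
    impls.add(r.implementation);
    byStatement.set(r.statement, impls);
  }
  return byStatement;
}

// Split statements into those meeting the CR exit criterion (at least two
// independent implementations) and those that fall short (zero or one).
function summarize(byStatement: Map<string, Set<string>>) {
  const sufficient: string[] = [];
  const insufficient: string[] = [];
  for (const [statement, impls] of byStatement) {
    (impls.size >= 2 ? sufficient : insufficient).push(statement);
  }
  return { sufficient, insufficient };
}
```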
Manu Sporny: so, basically, thanks a ton to Shigeya for fixing this mechanism
… there’s a couple more things we need to add to it that are being tracked
… shigeya, would you like to add anything?
Shigeya Suzuki: basically, I just finished
… and it statically generates the HTML
… I still need to do/figure out - how to create an NxN matrix
… we have several test suites here, but they don’t match the section numbers, for example
… I’ve seen the VC test suite using the matrix structure, for the tests
… I’m not sure how to structure this - thoughts from the WG?
Ted Thibodeau Jr.: is it possible to have each test linked, within, e.g., https://pr-preview.s3.amazonaws.com/shigeya/did-test-suite/pull/105.html#conformance-by-test-suites-1
Ivan Herman: (question about the right column, the Parameters) - whether this will be useful info for the W3C Director
… also, there has to be some sort of back-link to what these implementations are
… just some metadata on implementations, so one can see that easily
… for the matrix, what I would expect to see is a big table which says “these are the tests, and you can categorize that, one per row”
… and for each row, every implementation that does implement that item
… I realize that I don’t know how many implementations we’ll have in that list
Manu Sporny: couple of things. first, this report won’t be too useful to the Director; this is just for our reference
Ted Thibodeau Jr.: +1 to ivan’s question about need to include parameters column … though I now see it containing media types (and 2 such for 1 implementation)
Manu Sporny: shigeya - I don’t think we should do a full table/matrix, will be too large
… what the Director is more likely to want to see is - which features DIDN’T make it
… so, being able to generate that programmatically (the features that didn’t make it) - that would be super useful
… and then if you really wanted to dig down into the details, you can view the full implementation tables
… also, this is still very much experimental; some of the output requires explanation. A lot of this has to do with the complexity of the spec
… since we’re testing all kinds of things - syntax, data model, resolution, dereferencing, metadata structs, etc
… and that just results in a very complex report
Shigeya Suzuki: 47 test inputs (files in the implementation directory). 32 of them are Universal Resolver-based tests
Shigeya Suzuki: (some of them are examples)
Ted Thibodeau Jr.: summary of summary table good; huge summary table and ability to drill down to per-implementation and/or per-tested-feature details also good
Manu Sporny: Shigeya - so, don’t work on a 2d matrix. Ivan - what we need to show the Director specifically is the list of features that didn’t make the cut. (and, obviously, that list should be 0 items long :) )
Daniel Burnett: from what I’ve been through in Director meetings - you’re right, Manu. For whatever categories we are testing, where we say we need at least two implementations, we need to be able to clearly summarize at the top the number of features for which we don’t have sufficient implementations
… it’s also helpful to say what we do have sufficient for
… and to click in to see details
… and it should be easy to tell visually which don’t have enough, e.g. a bright red title
… so it’s easy to skim
… one of the reasons it’s good to have the link to sufficient implementations is because another important check for the director (sometimes, depending on things) is to see how independent they are
… this example is a great example because it looks like everything is great, but when you click on the list of implementations and you see ‘example, example, example’, that could be a red flag
… yes, summary at the top saying how many features have at least two, how many don’t, but a click to get to details
Ivan Herman: agree
… only a minor thing, more psychological than anything
… we will have 30-40 implementations
… it might be more impressive to give the number of positive implementations
… not only to say yes we are over 2 or not
… but give the number
Manu Sporny: we’re converging
… with optional things that might be helpful
… we’re only really missing one thing in the report - that is a breakdown of all the features that don’t have more than one implementation
… or have 0 or 1
… and then links to take you to those detail sections
… and another section that says here are all the features that have at least two independent implementations
… here are the number of implementations
Ted Thibodeau Jr.: 37 uses of “universal resolver” are not 37 implementations … and it seems like there isn’t another resolver? that’s a pretty big gap.
Manu Sporny: and here’s a link to that section if you want to look in more detail
… the expectation is the director will look at those two things first and if they have questions will drill down
… we will have things in each list
… I don’t think anyone is going to submit a json implementation that has a null value
… as the right hand side value
… i don’t think anyone has a use case for that yet
… so we’ll be able to prove both lists are being generated properly
… did I say anything that you didn’t agree with?
Ivan Herman: it’s fine
Daniel Burnett: additional comment - when you get to that last step: you first have the summary, then click to see how many implemented, then go to that list. I like the idea of a per-feature summary, what the blue bars are
… it has a feature with a count
… but when you go to see details of the implementations I’d prefer a link to the individual implementation report
… that contained that
… and the reason is that it can be helpful, when you click through, to see that an implementer implemented one feature and nothing else
… to see the full report for somebody is sometimes helpful and interesting
Ted Thibodeau Jr.: echoing that
… being able to drill down on either axis at any given point
… per implementation, but also per feature
… see who implemented it
… if the blue bars included a count of… just count successes, no failures? failures wouldn’t be in the table?
Manu Sporny: they would
Ted Thibodeau Jr.: the bars could summarize, e.g., one success out of 37 testers - that would be useful
… I’m concerned in the early comments you said there were a bunch of resolver implementations which aren’t independent
… they’re a bunch of uses of the thing
… that looks like a big gap
Manu Sporny: correct
Shigeya Suzuki: agreed.
Justin Richer: +1 to the gap
Manu Sporny: I ack Dan - that would be nice to have
Daniel Burnett: I don’t want to overload Shigeya either
Manu Sporny: Shigeya has done a ton of work and I want to be respectful and not overload him
… want to make sure we get to the minimum possible; beyond that it’s up to Shigeya
… I’d like to ask you Shigeya does everything make sense and do you feel it’s actionable? Are the priorities clear?
… and what you need to implement is clear?
Shigeya Suzuki: yes, actionable
… depends on the deadline
… and my time
… not so much work I believe
… when do we need the report?
Manu Sporny: whenever you can do it
… ideally everything ready by next monday, but I don’t think that’s realistic
… within the next two to three weeks for the most basic thing - that would be decent shape
Shigeya Suzuki: that’s doable
Daniel Burnett: agree
… first, thank you Shigeya, we should give Shigeya a lot of thanks
Manu Sporny: yes, +1 – this is a big job and it’s very much appreciated.
Daniel Burnett: all we need is the summaries
… and individual reports if you can. Doesn’t have to be beautifully linked
… positive and negative are important
Manu Sporny: on the negatives, people don’t like submitting negative tests to show an implementation fails or doesn’t conform
Daniel Burnett: I meant features for which we don’t have at least two
… where we are failing the report
Manu Sporny: +1
Daniel Burnett: any other comments?
Shigeya Suzuki: there are a bunch of examples here
… so we need to eliminate them
… at some point
… any good ideas to handle this?
Ted Thibodeau Jr.: +1000000 thanks to Shigeya – setting these reports up is vital, never gets enough resources, and often falls flat. This is one of the more challenging groups to set up reports for, and these reports are far beyond what might have been delivered.
Manu Sporny: it is very important that the examples remain in the test suite to make sure we are testing everything
… we needed the test suite to run against something
… all of that needs to remain, but Shigeya, you’re right - when we generate the report, it should not include the examples
Ivan Herman: +1 to manu on not including did:example
Manu Sporny: it probably needs code that you run to eliminate the things with ‘example’ in them
… I don’t know how exactly, might have to be a regex
… I don’t think we have the word ‘example’ in any of the normative statements, so you would check the implementation, and I don’t think we have ‘example’ in any real implementation names
… so you might be able to filter out those test results by doing a regex against the implementation name
Shigeya Suzuki: I will try
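A minimal sketch of the regex-based filtering Manu describes - dropping results whose implementation name contains “example” before generating the report. The result shape below is assumed for illustration, not the actual did-test-suite structure:

```typescript
// Hypothetical sketch of filtering out example-based entries before report
// generation, per the regex-against-implementation-name suggestion above.

interface TestResult {
  implementation: string; // e.g. a resolver name, or an "example" placeholder
  statement: string;
  passed: boolean;
}

function withoutExamples(results: TestResult[]): TestResult[] {
  // Keep only results whose implementation name does not contain "example".
  return results.filter((r) => !/example/i.test(r.implementation));
}
```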
5. Report on documents
Daniel Burnett: report on implementation guide or rubric?
Manu Sporny: I have a list, but happy to defer to editors of each doc
5.1. Use cases
Manu Sporny: Use cases would be Joe and Phil
… it’s done.
… We published it. Hurrah!
5.2. DID Core
Manu Sporny: DID Core has a few issues remaining but they’re being handled
… one issue from SecureKey that we want to have a light discussion around in the next week or two
… around the construction of DID URLs
… and we’ve got a number of PRs, many editorial
… one is moving the CBOR thing out into its own note
… other ones having to do with cleanup in the appendices
… that’s happening and will continue
… slower than I’d like but we’re addressing them
… The outstanding scary thing is the MIME type discussion at IETF
… Need to email about a decision
… concerned our fallback is unworkable
… We may need to put aside some time to talk about what we’re going to do
… waiting on sending an email to IETF
Ivan Herman: why do you think the alternative is not workable?
Manu Sporny: I don’t think people want to implement it… the profile thing on ld+json feels unworkable
… having had a fresh discussion about it
… we thought people would be okay with it, but they thought surely IETF would have a decision by this point, so they didn’t consider us having to implement it
… do other people feel that way?
Ivan Herman: separate discussion
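For context, the fallback under discussion is requesting generic JSON-LD with a media type profile parameter instead of a dedicated DID media type. The sketch below is a hypothetical illustration; the resolver endpoint and the profile URI are assumptions, as no values had been agreed at the time of this call:

```typescript
// Hypothetical illustration of the "profile on ld+json" fallback: a client
// asks for application/ld+json with a profile parameter rather than a
// DID-specific media type. Endpoint and profile URI are made up for
// illustration only.

async function resolveDid(did: string): Promise<unknown> {
  const response = await fetch(
    `https://resolver.example/identifiers/${encodeURIComponent(did)}`,
    {
      headers: {
        Accept: 'application/ld+json;profile="https://www.w3.org/ns/did/v1"',
      },
    }
  );
  return response.json();
}
```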
5.3. DID Spec registries
Manu Sporny: based on decisions in past weeks, design is mostly stable
… still a large number of issues
… most are that we don’t have contact information for the authors of specs, except for GitHub handles, which have worked out well
… we can get a list of all the contributors
… many are DID method creators; that’s how we pinged 90+ of them when we needed implementations, and they responded
… we just don’t have email addresses
… no big blockers I can think of
5.4. DID Rubric
Manu Sporny: Joe reported out on that two weeks ago, in good shape, waiting on a PR from Joe to update with 20+ new considerations
… work continues
5.5. DID+CBOR note
Manu Sporny: that spec was created this weekend by Ivan and me
… we have a repo with all the latest in it, spec has been cleaned up
… we’re close to being able to publish that as a note
… we’ll do it when we’ve got many of the DID Core editorial updates to the appendices done
5.6. Implementation guide
Manu Sporny: finally the implementation guide has been updated to use Orie’s language
… Orie did a bunch of suggestions
… some were controversial
… Markus has put in a PR on @context and his concerns around that
… the decision we made is that we’re going to allow people to take positions that are controversial and document all the different arguments on that controversy
… so it’s documented and crosslink them to each other
… so people can read and understand that there are some things the group couldn’t agree on
… take a look at Orie’s new language and Markus’ PR, we’re going to merge that
… and we’ll have a note by the time the WG ends
… that’s in good shape
… General takeaway is we’re in good shape, have been for a while
… Good job as a WG
… looks like we’re going to have a number of useful documents that are in our charter and a couple that weren’t that are going to help the ecosystem
Ivan Herman: did we get any feedback from Jonathan on the CBOR stuff?
Manu Sporny: none
Daniel Burnett: questions or comments?
6. Status of registries
Manu Sporny: We never came to ground on if we were going to have a description field in the Registries or not
… Currently every entry is meant to have the thing that’s being registered and a link to where it’s defined; there’s an open question about whether we should have a field where the person registering can put in useful information about the entry
… eg. this property will work across all representations
… and the CBOR tag for this is value 63
… that’s been a proposal
… the counter-argument against it is that because there’s no standard structure, people can put in whatever
… but maybe that’s okay
… maybe we allow people to put in guidance around specific properties that they feel are important
… the real thing that matters is the link to the spec
… the managers of the registry don’t have to care that much about the description
… they may do some high-level check, but we’re not requiring the registry maintainers to evaluate the truth of what the registrant writes
… Do we want a general description field? Or forbid that and only provide a link?
… and the json-ld context
… Would there be any objections to providing a description field that registrants can use for purposes that they deem are important?
… free form text field
… light vetting
… but up to the people managing the registry to determine
… (switching to proposal-writing voice)
Proposed resolution: The DID Specification Registries will support a “description” field for all registry entries that can be used at the discretion of the registrant to express items of importance to the registered item. The registry maintainers are not required to check the truthfulness of the statement, but may exercise their best judgment when processing the registration request. (Manu Sporny)
Justin Richer: the problem I have is that it is a very long-winded way to say this is a human-readable field with no implied impact on interoperability
… but given that it has been the intent of the WG for the registry to not have a normative impact on interop… we can make this proposal, but it is ultimately meaningless
… I object to the statement but I don’t think my objection matters
Ted Thibodeau Jr.: +0.5 but may need “how to get changes made” if/when something gets past “best judgement” and similar…
Manu Sporny: the effect is .. I half agree
… it is true, but we’re recording that the group contemplated this.
… yes we have the field, no it doesn’t really mean anything, has no normative effect, but might guide people in the right direction
Daniel Burnett: the way I heard what Justin said is that there was a different decision that you disagree with and this continues to be part of that same disagreement. You can object but there’s no point in objecting because it’s at a different level
… unless we’re going to revisit the broader statement I’d rather see if we get agreement on a statement today that presumes that prior broader decision
… maybe it requires manu a couple of adjustments
… something like “because these statements do not have normative effects, this doesn’t either”
… …this description field will not have a normative effect because the DID Spec Registries are not normative
Justin Richer: I’m not trying to throw a process wrench
… take the proposal as is go ahead
Ted Thibodeau Jr.: I can’t form a considered opinion on this… maybe table?
Proposed resolution: The DID Specification Registries will support a “description” field for all registry entries that can be used at the discretion of the registrant to express items of importance to the registered item. The registry maintainers are not required to check the truthfulness of the statement, but may exercise their best judgment when processing the registration request. (Manu Sporny)
Manu Sporny: +1
Justin Richer: +0 (it’s a bad idea)
Adrian Gropper: +0
Amy Guy: 0 .. seems like everything should go in external specs, and also ‘description’ is maybe misleading and could duplicate spec content
Ted Thibodeau Jr.: I like “description” field, but unclear this is doing what is wanted
Markus Sabadello: +0 has pros and cons
Daniel Burnett: not seeing overwhelming support
Manu Sporny: if nothing else happens we’re just not going to have a description field, and the registration will have a link to the spec and to the JSON-LD context
Daniel Burnett: needs more work..
Manu Sporny: but we need to implement something
… if the group changes its mind that’s fine
… or we add in future that’s fine
… for now the decision is clear.. no description yet
… editors shouldn’t move on that
… all we have is a link to the spec and to the JSON-LD context, and you’re done
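Under that decision, a registry entry reduces to something like the following. The field names are assumed for illustration only; the actual DID Spec Registries define their own layout:

```typescript
// Hypothetical shape of a registry entry given the decision above: just links
// to the defining specification and the JSON-LD context, no description field.

interface RegistryEntry {
  property: string;       // the term being registered
  specification: string;  // link to the normative definition (the part that matters)
  jsonLdContext: string;  // JSON-LD context in which the term is defined
  // description?: string; // the proposed free-form field - not adopted
}

const entry: RegistryEntry = {
  property: "verificationMethod",
  specification: "https://www.w3.org/TR/did-core/#verification-methods",
  jsonLdContext: "https://www.w3.org/ns/did/v1",
};
```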
Daniel Burnett: we’re over time
… thanks everyone
7. Resolutions
- Resolution #1: We approve ITU SG 17 referencing our DID Core Technical Report in their report on identity standards when it is completed.