DID WG Telco — Minutes

Date: 2021-02-09

See also the Agenda and the IRC Log

Attendees

Present: Daniel Burnett, Ivan Herman, Ted Thibodeau Jr., Brent Zundel, Shigeya Suzuki, Justin Richer, Amy Guy, Manu Sporny, Adrian Gropper, Markus Sabadello, Jonathan Holt, Phil Archer, Kaliya Young, Orie Steele, Michael Jones, Dave Longley, Chris Winczewski, Drummond Reed, Joe Andrieu, Charles Lehner, Kristina Yasuda, Daniel Buchner

Regrets:

Guests:

Chair: Daniel Burnett

Scribe(s): Amy Guy

Content:

1. Agenda Review, Introductions, Re-introductions
2. Special Topic Call
3. CR Exit Criteria
4. Issue Deferring
5. Working Mode
6. Resolutions

1. Agenda Review, Introductions, Re-introductions

Daniel Burnett: today’s agenda is very short
… special topic call reminder
… determining the exit criteria, and deferring of issues without PRs
… would anyone like anything else on the agenda?

Manu Sporny: I’m sure we’re going to be able to talk about what the work mode is after today? Rapid editorial cycling

Daniel Burnett: let’s see how the CR exit conversation goes

Phil Archer: joe and I have been doing our best to make sure the UCR is ready at the same time as the Core
… I’m doing a bit of final housekeeping, but in terms of the content that is all done, thanks to people who have helped us recently
… I’m hoping the group will want to publish that at the same time

Daniel Burnett: the chairs will make sure that a vote on that gets into the agenda in future
… we will need a link to review, and that we’re choosing to publish

2. Special Topic Call

Daniel Burnett: the special topic call is this Thursday at noon eastern, 6pm CET
… 9am pacific
… it will be just like the others, to work on PRs; just because we are closing our issues today does not mean we will not have PRs that are still being edited and in progress
… that time is available for anyone who needs to come and get help on PRs they are working on

3. CR Exit Criteria

Daniel Burnett: brief overview of this topic, then will ask manu to talk about the spec and interesting questions for this group
… in general, to enter CR you need to have a document that is complete enough to be implemented
… and that you believe it’s complete
… you don’t enter CR until you believe you’re done
… the goal of CR is to get implementation reports
… that’s the way we demonstrate that we have a specification that is not only implementable but implementable by multiple independent parties, ideally interoperably
… if only one person implements it then that’s no indication that someone else would come to the same conclusions about how to implement it
… when we enter CR we are required to state what our exit criteria are
… there are some standard ones that are usually used
… normally two independent implementations of each feature is pretty common for most groups
… we have to state them when we enter CR and meet them when we exit CR
… if we don’t meet them that means more discussion, maybe another CR
… the topic now is to discuss what are the exit criteria that we wish to require for the DID Core spec
… some general guidelines: in this conversation, if we get suggestions that the chairs or editors believe the w3c management and AC are not likely to accept, we will let you know
… theoretically we can set any criteria we want, but we will be challenged on anything strange

Markus Sabadello: when it comes to the exit criteria, if we say we need at least two independent implementations of every feature, I assume we’re doing that
… we may want to define what it means to support a ‘feature’
… one case that always comes to mind is: is it sufficient to have an implementation that creates DID documents which have certain properties in them, or do you also have to use them in your implementation?
… I’m thinking about alsoKnownAs or equivalentId or some of these
… if I write a DID method implementation that can put these properties in a DID doc maybe the bar is not high enough
… to say the feature is supported you’d need to write code that does something with that
… maybe we should have that requirement
… that the semantics and the functionality of a property must be supported and used and it’s not sufficient to just have a DID doc that just has them in it
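
For illustration, a minimal sketch of the distinction being raised here, assuming hypothetical DIDs and values: a DID document can carry alsoKnownAs (a real DID Core property) without the producing implementation ever acting on it, so the higher bar would require code that consumes the property, not just emits it.

```typescript
// A minimal DID document sketch (hypothetical DIDs; alsoKnownAs is a real
// DID Core property). Merely emitting this document says nothing about
// whether the implementation ever acts on the alsoKnownAs relationship.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123",
  alsoKnownAs: ["did:example:456"], // asserted, but is it ever used?
};

// The higher bar being suggested: code that consumes the property rather
// than just producing it. A purely illustrative helper:
function isAlsoKnownAs(doc: typeof didDocument, otherDid: string): boolean {
  return (doc.alsoKnownAs ?? []).includes(otherDid);
}

console.log(isAlsoKnownAs(didDocument, "did:example:456")); // true
```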

Manu Sporny: to build off that, let me try and suggest what we do
… I think that may address markus’ concern
… there are many ways to do candidate rec
… some groups are fairly lax, others tend to be stringent
… what we have tried to do in this group is make sure that we fall a little on the stringent side
… our test suite that orie has put together tests normative statements in the document, there are real did docs involved, it is expected that there are implementations that generate these DID docs
… in general the path we’ve been on is to provide a test suite that tests real implementations
… vs people just saying they’re going to implement a feature
… the minimum bar is the two independent interoperable implementations per feature in the spec
… that means that when we have a normative statement in the spec, we are expected to have a test in the test suite for it
… and we expect to see two independent implementers, people that don’t coordinate with each other, don’t share the same codebase
… two of these implement the same feature in a way that passes the test suite
… generally speaking that is what we’re trying to do with the vast majority of the normative statements in the spec
… and we have changed, modified, updated, the normative statements where this applies so they are testable
… we’ve made several passes through the spec already to set us up to be able to do that
… we could set the bar higher and say three or four implementations
… I don’t think I know a WG that sets the bar that high
… that’s the base suggestion
… we also have statements that are not machine testable, they have to be testable by a human being
… these are having to do with DID method specs
… we provided DID method authoring guidelines in the spec and tell people writing method specs they must have a privacy section, have a security section, must define how read, update, deactivate works, etc
… there are a class of statements that apply to did method specs and that’s the other class of things
… in general everything in our spec falls into those two categories
… there is a third class of things that we could talk about
… we probably shouldn’t dwell on this too much
… there are features in the spec like nextUpdate and nextVersionId and canonicalId that are fairly new
… it’s not clear how many organisations are going to implement it
… but there are people going that’s really important, we really want it in the spec
… they’re marked as at risk right now, but we could say for those we’re willing to keep it in the spec if we have 5 independent organisations say they are going to use the feature, they are going to implement it. That’s always a slippery slope, I suggest we don’t take that route
… but putting it out there as a way to keep things in the spec that people feel are important but they may not have the time to implement it
… Three broad categories. I am suggesting that for the machine testable statements, it’s two independent interoperable implementations
… for the things that are human testable, we keep them and make sure humans would have an easy time seeing if a DID method spec supports them

Orie Steele: -1 to “planned to implement” being kept….

Manu Sporny: For the third class, I suggest we don’t go down that path, but we could have a third class of exit criteria where we say it’s okay for people to come forward and say they would implement it

Ivan Herman: what is an implementation? how do we define it? is it a method? is it an application that uses DIDs by relying on methods?
… and the other point, I don’t understand what exactly interoperability means in this respect
… if I have two DIDs on different methods, they are on different methods so I don’t know how interoperability operates
… if I have a combination of applications and methods then I can see what interoperability means
… that comes back to the previous question

Daniel Burnett: that category 3, that is a world of hurt and end of the world dragons
… to go down that road.. a promise to implement is a failure because it does not demonstrate that the spec is implementable
… just because you say you are going to doesn’t mean you actually can
… or that you will implement in the same way as someone else
… we do this phase for the specification, not for implementers
… we are testing the specification to ensure that it is implementable
… that third category doesn’t give us any confidence whatsoever that something is implementable by multiple parties

Manu Sporny: to answer “what is an implementation”
… ivan is right DID methods don’t necessarily interoperate with one another. The ecosystem isn’t quite there yet.
… what does the test suite test?
… that when you produce a DID document that it follows the normative statements in the spec
… that is, from an interop perspective, we have interop theoretically at the abstract data model (ADM) and at the representation layer
… you can test that these statements match the spec
… have you written software that is capable of producing a DID doc in a particular representation?
… that’s the DID doc tests
… we have another set of tests for resolution and dereferencing
… that’s more of a work in progress
… but it comes down to you serialize something and we test that serialization in a certain way in the test suite to make the DID resolution section testable
… most of it boils down to have you written software that can produce a DID doc, the test suite consumes a DID doc
… and applies the normative statements in the spec, that’s all we’re testing
… the interop is at the data model and representation layer
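
For illustration, a minimal sketch of what a data-model-level check of this kind might look like, assuming a hypothetical harness; the actual test suite’s structure and APIs may differ.

```typescript
// A sketch of a data-model-level check over a produced DID document
// (hypothetical harness; the real test suite's structure may differ).
// An implementation submits a DID document it produced, and the suite
// checks normative statements from the spec against that artifact.
type DidDocument = {
  "@context": string | string[];
  id: string;
  [property: string]: unknown;
};

// One normative statement, e.g. that the value of id must be a string
// conforming to the DID syntax (paraphrased), becomes one check. The
// regular expression is a simplification; the ABNF in the spec is stricter.
function checkIdIsDid(doc: DidDocument): boolean {
  return typeof doc.id === "string" && /^did:[a-z0-9]+:.+$/.test(doc.id);
}

// The suite consumes the document and reports pass/fail per statement.
const submitted: DidDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123",
};
console.log("id conforms:", checkIdIsDid(submitted)); // id conforms: true
```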

Ivan Herman: I am fine with that
… but this must be written down
… because the question is obviously something the director would ask
… what we may want to do between now and CR is to have a skeleton of what our report will be
… when we are at the end of CR
… that skeleton would include what you describe so it must be clear when we get to the report what we mean by implementation, interoperability, etc
… this should not be hidden
… On category 3: I am not sure that dragon is such a dangerous dragon in this case because if my understanding is correct what we are talking about here are additional metadata elements on resolution
… and on dereferencing
… if that is correct then a metadata element is just a string, there is no testable statement to be done on there, the real question is whether it is a metadata element that has a real usage out there
… if we introduce some sort of a measure, maybe it’s 5 who say this metadata element is important for my type of implementation, this is the equivalent of standardizing a vocab where there is no testable statement in the procedural sense
… I’m not that worried about that one in this particular case

Orie Steele: to describe how the test suite works in its current form, it’s evolving
… you provide a configuration for your DID method that you want to test. The test suite runs on those json inputs
… the json can be used to represent other representations
… you do a count of normative statement coverage for the features you’ve provided
… if you don’t use the versionId param you don’t show up as being counted for that
… if there are multiple DID methods in the configuration that is submitted to generate the report, there will be a counter for each feature and you can see which DID methods use that feature
… it aligns to a certain degree with what ivan was saying regarding testability of metadata
… obviously vendors can lie about what features they are implementing vs what they are submitting in the test vectors
… a number of vendors could say they are providing canonicalId and provide examples, but not actually be doing that
… we can’t prevent that in the way the test suite is built today

Daniel Buchner: We are using canonicalId right now, for the record

Manu Sporny: ivan makes a very good point about some of the DID resolution metadata fields
… several companies fall into that category, they’re not going to implement a resolver this time around but would like to see those metadata values
… dan mentioned that it is a very slippery slope
… dragons aren’t big but there can be many of them and there’s power in numbers
… those type of things tend to eat up a lot of WG time whereas someone who implements a feature and can show code, that’s a clear indication
… vs I promise to implement this in the future
… I would very much like the group to not go down the “I promise to implement” route
… we will burn up a lot of WG time discussing and talking
… but there are good reasons for it
… the DID resolution metadata fields are an example of that
… As to what Orie said: it is possible to cheat the test suite
… the WG will be looking
… don’t feel like you can get away with that

Orie Steele: -1 to “promise to implement”… but with the acknowledgement that vendors can “claim to support” a feature… and cheat…

Manu Sporny: there are cases where there are companies that don’t want to make their implementations public so you can never really know if they’re running code, or generating artisanal DID docs
… but if we see a feature that nobody can provide implementations for, there will be a very good argument to say sorry, we did not get interoperable implementations for this feature
… be ready for that to happen to features that can’t show implementations
… there are 85 DID methods, 32 DID driver implementations, we should be able to provide two implementations for all the features in the spec
… Let’s not do that… people will be watching and we will be looking to see where you implement the feature in code.

Ivan Herman: still on the dragons/cats
… I agree with manu that if the only report is saying “yeah this is something we will implement in two years because we like it”, I would not accept that
… however if there is a method who says “I use that feature (today) in my implementation”, that should be enough for the metadata items
… there is a big difference between the statement “this is a feature I rely on in my application” and something which says “I may want to do that”
… the former should be okay, the latter should not be

Orie Steele: -1 to “i use that feature”…. +1 to “tests that show you use the feature”.

Ivan Herman: As for the testing thing, I agree with manu, but there is something we have to be careful about
… we have to be careful in our external communication that when we are talking about testing here and making a test suite and a test report
… our goal here is not to provide some sort of validation stamp on the implementations out there
… they should not consider it that way
… we are not providing authoritative tests
… what we are doing is asking our dear friends the implementers to help us finalise the spec and ensure the spec is okay
… if you put it that way you can have more trust that they have no reason to lie; what would they gain by lying? they help us by being implementers, not to get any brownie points

Daniel Burnett: there’s an analogy here to what I know has been done in a number of specs in the past
… a notion of optional features
… sometimes there are features that not everyone is going to implement and you want to make sure that someone is going to implement it
… I know that I have been involved with specs that required one implementation of each optional feature
… doesn’t demonstrate interop, but features like that may not be ready, but someone is. There’s not a good reason to say no, but you’re going to need to have at least one implementation if possible
… and the goal of the candidate rec is to test the spec not to test the implementations
… our goal is to make sure the spec is implementable
… that’s the reason for everything

Justin Richer: +1 for testing the spec not the implementations. This is important to not lose sight of.

Daniel Burnett: when you talk about cheating, in the end what we’re looking for is enough implementers who give us feedback that we can use to make sure the spec is as good as it can be

Brent Zundel: +1 we are testing the spec, not the implementations

Manu Sporny: i’ll write the first proposal…

Amy Guy: manu, some features are made up of more than one normative statement, surely? like MUST be a string and MUST be a date, etc (for example)

Manu Sporny: I thought about saying where each feature is defined as one or more normative statement, but then the question becomes what if you only pass a subset?

Ivan Herman: then you haven’t implemented

Justin Richer: +1 to 1 or more

Manu Sporny: would people feel better about one or more?

Orie Steele: +1 to 1 or more

Amy Guy: +1 one or more
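
For illustration, a sketch of how one feature can span more than one normative statement, using a datetime-valued metadata property as a plausible example; the statements are paraphrased, and the checks below are simplified.

```typescript
// A sketch of one feature backed by more than one normative statement,
// both of which must pass for the feature to count (the property name and
// statements are paraphrased for illustration). Passing only a subset
// means, per the discussion, you haven't implemented the feature.
function isString(value: unknown): value is string {
  return typeof value === "string";
}

function isXmlDatetime(value: string): boolean {
  // Simplified datetime shape; the spec's requirement is stricter.
  return /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/.test(value);
}

// Feature "created": statement 1 (must be a string) AND statement 2
// (must be a datetime) both have to hold.
const metadata: Record<string, unknown> = { created: "2021-02-09T12:00:00Z" };
const created = metadata.created;
console.log(isString(created) && isXmlDatetime(created)); // true
```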

Manu Sporny: changed. And the second one:
… two or four… this applies to DID methods: if it’s human testable, there have to be four demonstrations that a DID method spec has put that section in
… eg. DID methods must provide a section detailing how the read function should work
… we would have to be able to point to 4 DID method specs that did that
… bar is higher because it’s a human thing
… anyone object, or put the bar higher?

Drummond Reed: I’d never thought about testability of the human requirements for something like that
… how do we account for that? are we going to keep a list someplace?

Manu Sporny: yes

Drummond Reed: a matrix?

Manu Sporny: yep
… we will take every single human testable statement and a human being along with a bunch of other human beings will get together and link to every single spec that shows that people are implementing that normative statement
… (I hate this)

Drummond Reed: we have 80 odd?

Manu Sporny: we just need to do 4

Ivan Herman: what we are doing here is two different categories that we test, which is fine, because they are different
… I don’t see why we are making it a higher bar than for the mechanical ones
… two is a general number
… we can live with that
… I don’t see the reason for having 4, let alone higher. What’s the difference?

Justin Richer: +1 to 2 not 4

Manu Sporny: we can change it to 2. we can lower the bar
… the reason is because sometimes it’s debatable. Did they really implement it? it’s subjective
… humans have many more bugs
… I’m happy to lower it, that’s the only reason
… we may come up with a list and tell everyone who has implemented a DID method to go and put the applicable sections into their DID method spec
… that might be the easier thing to do
… it should be a higher bar because humans are more fallible

Phil Archer: how deterministic are these tests?
… can you give an example? if they are deterministic then 2 is enough. If it’s a bit maybe, then 4

Manu Sporny: maybe 30% fall into the latter

Phil Archer: there’s no gain in making a rod for your own back.. don’t make life harder for ourselves than it needs to be
… if you and those doing this are confident that the letter and spirit of the spec has been implemented at least twice by different people who ideally don’t know each other, that’s good enough for me

Manu Sporny: would anyone object for the human testable bar to be 2 demonstrations of implementations?

Drummond Reed: I agree with that

Daniel Burnett: not hearing objections

Manu Sporny: I put that proposal in ^

Proposed resolution: To exit the DID Core Candidate Recommendation phase, the DID WG will require two things: 1) for normative statements that are machine testable, at least two interoperable implementations per feature, where each feature is defined as one or more normative statement in the specification, and 2) for normative statements that are only human-testable, at least two demonstrations of implementation per feature, where each feature is defined as one or more normative statement in the specification. (Manu Sporny)

Manu Sporny: +1

Ivan Herman: +1

Drummond Reed: +1

Phil Archer: +1

Charles Lehner: +1

Joe Andrieu: +1

Orie Steele: +1

Dave Longley: +1

Shigeya Suzuki: +1

Ted Thibodeau Jr.: +1

Brent Zundel: +1

Markus Sabadello: +1

Amy Guy: +1

Daniel Burnett: +1

Resolution #1: To exit the DID Core Candidate Recommendation phase, the DID WG will require two things: 1) for normative statements that are machine testable, at least two interoperable implementations per feature, where each feature is defined as one or more normative statement in the specification, and 2) for normative statements that are only human-testable, at least two demonstrations of implementation per feature, where each feature is defined as one or more normative statement in the specification.

Manu Sporny: do we want anything else for the exit criteria? do we need anything more process wise?

Ivan Herman: from that point of view it’s fine

Manu Sporny: for the chairs, do we want to put a resolution down to say we’re not going to do an “I promise to implement” category

Daniel Burnett: we don’t need it, we have the requirement

Manu Sporny: i think we’re done then

Ivan Herman: what do we do with the resolution metadata? under which category?

Manu Sporny: it’s very clearly machine testable, when this last PR goes in

Ivan Herman: then it’s fine

Daniel Burnett: anything else on this topic?

4. Issue Deferring

Daniel Burnett: I want to remind the group that for a couple of months we’ve been saying to get your issues and PRs in
… almost a month ago we warned that at any moment we were going to require that any issues without PRs would be deferred, unless it’s editorial
… and two weeks ago today the chairs formally notified the group that today was the deadline for that
… if there is an issue by the end of the day today that does not have a PR associated with it, it will be deferred
… this is the deadline
… no misunderstanding here
… Hawaii time is fine…

5. Working Mode

Manu Sporny: we’re going to see a flurry of activity on the specs over the next 2 weeks
… the editors are going to make multiple cleaning passes through the spec
… they are largely meant to be editorial but we may find something that is normative
… the working mode is going to change
… no longer keeping things open for debate for 7 days
… if the editors find an editorial change we will give 24 hours for feedback
… that’s the shortest
… more than likely it will be multiple days
… but when you see a PR hit over the next two weeks get your comments in immediately
… you can view this as us getting together at a f2f, as an intense work stream activity, but everyone is supposed to be on call for the next 2 weeks with a 24 hour turnaround on feedback
… what’s going to happen is, in this order:
… people will always rush and get some last minute PRs in
… we’re going to work as a group to get all of those PRs wrestled as quickly as we can
… that is what the special topic calls are going to be for
… ideally through the github PR mechanism
… as quickly as we can we get those resolved
… then the editors will make multiple passes from the top to the bottom
… you’ll start seeing PRs in two categories - editorial changes, nothing major, and we are expecting people to respond within 24 hours if they disagree with it being editorial
… they will be section by section, bite sized chunks
… we will go all the way through the document, that’ll take a couple of days
… if people object to the change, that PR is going to languish
… going through it once, and then again, and then again.. 5 or 6 reviewers lined up to do that
… once that is done, the normative statements in the spec will be frozen for CR
… we are going to start in earnest implementing those tests in the test suite
… we’re going to expect people to be writing tests and submitting their implementations
… this whole process will probably take until the end of Feb
… and at the end of Feb we’ll do the transition call, which is chairs, staff and a subset of editors who meet with the w3c director to say we transition from our working draft to CR
… we’ll prove we’ve done everything we’ve needed, horizontal review, addressed all the issues
… there’s a big a process there
… then if it’s approved, the spec is published as a candidate recommendation
… any questions?

Daniel Burnett: for anyone who thinks that this sounds rushed:
… how things used to work before github… the expectation was that we get to the point where we think we’re ready for CR, the editors go away for a week and just fix all this stuff
… clean up, add links, clean up typos, they’d just do it
… and present that to the group and say okay here it is
… we’re operating in a github world where people have an opportunity to comment
… but understand that is the stage we’re at now
… really the editors need to do the non substantive cleanup work that they’ve been holding off on because it’s a nuisance to do it and then have to maintain it
… I hope that helps anyone worried about the 24 hours
… that’s just there in case you’re concerned, you can follow along on the editorial changes
… for those of you who are pretty comfortable with the spec, do your reviews quickly because we don’t want to have to undo things
… if you’re someone who thinks the spec has been fine for a few weeks, it’s probably still going to be fine

Ivan Herman: two things
… one is, the way you presented it manu, it sounded like by the CR transition you want all the tests to be present
… that’s not necessarily a requirement, it’s nice, but it’s fine if we go to the director with the request with a clear plan for the test suite
… and tests will come
… if there are already some that’s fine, but we’re not required to have all of them
… lets’ not make it too hard on ourselves

Manu Sporny: yes, agree with Ivan.

Ivan Herman: the other point is actually, things have changed the last few years
… today what happens is the CR transition request means filing an issue on a specific repo with a template
… the chair and I have already looked at the template last week
… all the information has to go there; you raise that issue, and if the issue’s contents are proper and well written and precise then there is no call
… the issue will be commented on by the director to say it’s fine
… if there is a ceremony that means we have a problem
… that means Philippe and Ralph cannot judge based on the issue and may ask questions
… then they have a call, then we have a problem
… we don’t talk about a meeting because we won’t have a problem will we? :-)

Manu Sporny: +1
… I didn’t know that we don’t have a call any more, that’ll be faster
… everything that is not in is going to be deferred, in general
… there are some things that are horizontal review items that we may need to talk about
… leaving that up to the chairs and staff to decide what to do with those
… but for any other issues assume they’re going to be deferred
… for v2
… there are some PRs that are outstanding that i want to give a heads up on
… not necessarily asking if people are objecting, an expectation of when they’ll get in
… there are multiple cbor and dag cbor prs that are waiting for jonathan’s feedback
… if jonathan_holt can provide feedback that would be greatly appreciated
… one has to do with deterministic language, moving dag cbor into its own spec, marking cbor as at risk
… cbor as it stands right now is problematic
… it has language in there that the group decided they didn’t want in there and there’s an open issue on how to address that
… markus said he might be working on that
… the only other one that’s a fairly big set of changes is the DID resolution section
… I want to confirm, Ted, Markus, Orie and Justin have reviewed that
… I believe I’ve implemented everything, I’m unsure if the section as it stands right now is acceptable. Can you confirm? what are you feeling?

Orie Steele: I will review in the next 10 minutes and either remove my request for change or approve or tell you I still want it to be changed

Manu Sporny: that’s it for the issues and PRs
… if you have an issue assigned to you and you don’t do a PR for it by the end of today it is going to be deferred

Brent Zundel: great call
… we have exit criteria for CR
… a few people frantically working on PRs before the international date line times out this day
… look forward to seeing you all on our special topic call 12pm ET this thursday
… thanks all!


6. Resolutions

Resolution #1: To exit the DID Core Candidate Recommendation phase, the DID WG will require two things: 1) for normative statements that are machine testable, at least two interoperable implementations per feature, where each feature is defined as one or more normative statement in the specification, and 2) for normative statements that are only human-testable, at least two demonstrations of implementation per feature, where each feature is defined as one or more normative statement in the specification.