
DID WG Telco — Minutes

Date: 2020-05-19

See also the Agenda and the IRC Log

Attendees

Present: Daniel Burnett, Ivan Herman, Jonathan Holt, Dave Longley, Orie Steele, Manu Sporny, Daniel Buchner, Chris Winczewski, Markus Sabadello, Kaliya Young, David Ezell, Brent Zundel, Dmitri Zagidulin, Justin Richer, Kim Duffy, Eugeniu Rusu, Michael Jones, Drummond Reed, Ganesh Annan

Regrets: Phil Archer, Amy Guy

Guests:

Chair: Dan Burnett

Scribe(s): Markus Sabadello, Chris Winczewski

Content:


Brent Zundel: I’m going to ask dmitriz to re-introduce himself

1. Agenda Review, Introductions, Re-introductions

Dmitri Zagidulin: I’m Dmitri Zagidulin, software developer at Digital Bazaar, I also worked with Solid. Currently one of the co-chairs of the Secure Data Storage Working Group at DIF/CCG.
… I take a hands-on, development-oriented approach to specs.

Brent Zundel: Any proposed changes to the agenda?

2. Next Topic Call

Daniel Burnett: The next topic call will be at the regular Thursday time

Daniel Burnett: https://github.com/w3c/did-core/labels/contract

Daniel Burnett: The topic is going to be the DID Resolution contract
… There’s a relationship between DID Resolution and today’s topic about testing

3. DID Parameters Update

Markus Sabadello: The WG has been discussing DID parameter syntax as part of a DID URL

Markus Sabadello: https://github.com/w3c/did-core/pull/285

Markus Sabadello: Query parameters have been proposed in PR 285; looking for feedback.
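
[Editorial illustration, not part of the call: a minimal sketch of splitting a hypothetical DID URL with query parameters using only the Python standard library. The parameter names below are illustrative assumptions, not anything agreed in PR 285.]

```python
# Split a hypothetical DID URL into DID, query parameters, and fragment.
from urllib.parse import urlparse, parse_qs

did_url = "did:example:123456?service=agent&versionTime=2020-05-19T00:00:00Z#keys-1"
parsed = urlparse(did_url)

did = f"{parsed.scheme}:{parsed.path}"   # "did:example:123456"
params = parse_qs(parsed.query)          # {"service": ["agent"], "versionTime": [...]}
fragment = parsed.fragment               # "keys-1"
print(did, params, fragment)
```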

Daniel Burnett: manu can you give an update on spec structure?

4. Spec Structure

Manu Sporny: https://github.com/w3c/did-core/pull/288

Manu Sporny: Drummond and rhiaro have been working on a rewrite of the front matter of the spec
… It has been reviewed by a subset of editors+chairs, we believe it improves the introduction

Manu Sporny: https://pr-preview.s3.amazonaws.com/w3c/did-core/pull/288.html#introduction

Manu Sporny: Please review the PR, it basically consolidates some content in the Introduction
… We would really like to get this into the document so we can start Horizontal Review

5. Registries Repo Renaming Straw Poll

Daniel Burnett: We will then prepare for Horizontal Review (privacy review, TAG explainer, etc.). If you’re familiar with this and would like to help, please let us know
… There has been some discussion about the name of the Github repo, it currently is “did-core-registries”. There is a proposal to change it to “did-spec-registries”.
… If there is anyone with a particular concern, please let us know

Michael Jones: It should really just be “did-registries”. I realize there is a conflicting term in the spec, but I think there is agreement to change that term

Dave Longley: “did core spec” is not the same thing as “did spec”

Michael Jones: The registries will contain more than just things from our spec. It will contain entries from other specs.

Dave Longley: there will be many specs that talk about DIDs

Orie Steele: I am starting to agree with that, if we can get a shorter name that won’t change

Daniel Burnett: Let’s allow a bit of discussion now

Justin Richer: At first I agreed with selfissued that “did registries” is a more generic name, but I think there’s a problem with that sounding like a “place for registering DIDs”
… “did-spec-registries” (spec = not just the core spec) does a better job

Dave Longley: +1 to what Justin is saying, there will be many specs talking about DIDs beyond the did-core spec.

Justin Richer: It’s generally applicable

Orie Steele: I agree with selfissued that there is a reputational issue. As the registry gets filled with information that’s not necessarily reviewed by this WG, it’s going to contain a lot of material, and this may reflect on the reputation of the WG

Dave Longley: “where do i go to find specs on DIDs” <– “The DID spec registries”

Orie Steele: We should consider what this will look like a few years from now. Calling it “DID spec” may make it sound like “spec stuff”, when in fact it’s about extensions including method names, etc.

Manu Sporny: We did contemplate “DID registries”, and there were multiple -1s during the special topic call.
… One of the important things to consider is that “registries” is plural. What the document does: if you care about implementing, it gives you links to all your options. You can look up items (properties, methods, extensions, etc.) in the registries documents. You will get a pointer to the specification.
… All other names we have considered so far (e.g. “DID registries”) are too generic, and people come to wrong conclusions about what the names mean

Drummond Reed: +1 to “DID Specification Registries”

Daniel Burnett: Which of the following options do you like the LEAST?

Daniel Burnett: Please list which of these three options for the repo you like the least: did-core-registries, did-spec-registries, did-registries

Michael Jones: did-core-registries

Drummond Reed: did-registries

Manu Sporny: did-registries the least

Orie Steele: did-core-registries

Ivan Herman: -1 to did-core

Daniel Buchner: did-registries

Justin Richer: did-core-registries

Dmitri Zagidulin: did-core-registries

Eugeniu Rusu: did-registries

Kim Duffy: did-core-registries

Ganesh Annan: did-registries

Kaliya Young: did-core-registries

Brent Zundel: did-core-registries

Dave Longley: did-core-registries

Daniel Burnett: did-core-registries

Jonathan Holt: 0, I’ve already finished painting

Daniel Burnett: Winner for “least liked” seems to be did-core-registries

Daniel Burnett: please list which you prefer of did-spec-registries or did-registries

Drummond Reed: did-spec-registries

Daniel Buchner: did-spec-registries

Michael Jones: did-registries

Dave Longley: did-spec-registries

Manu Sporny: did-spec-registries

Kim Duffy: did-spec-registries

Ganesh Annan: did-spec-registries

Kaliya Young: did-spec-registries

Eugeniu Rusu: did-spec-registries

Markus Sabadello: did-spec-registries

Orie Steele: did-spec-registries

Brent Zundel: did-spec-registries

Justin Richer: 0

Dmitri Zagidulin: did-spec-registries

Ivan Herman: 0

Proposed resolution: Rename the did-core-registries GitHub repo to did-spec-registries (Daniel Burnett)

Daniel Burnett: There’s a pattern here

Daniel Burnett: +1

Manu Sporny: +1

Ganesh Annan: +1

Orie Steele: +1

Dave Longley: +1

Kim Duffy: +1

Markus Sabadello: +1

Drummond Reed: +1

Ivan Herman: +1

Eugeniu Rusu: +1

Michael Jones: 0

Brent Zundel: +1

Resolution #1: Rename the did-core-registries GitHub repo to did-spec-registries

Action #1: Rename the GitHub registries repo (Ivan Herman)

6. Testing

Daniel Burnett: manu, I think you wanted to talk about testing; we haven’t had this discussion yet

Manu Sporny: we’re trying to figure out how we are going to test the specification going into CR phase
… we’ve had some assumptions about testing, but recent PRs suggest we needed to have a group discussion
… There are many ways to test a spec when going through W3C standardization; I’m going to talk about 2 ways
… We’re trying to get WG feedback on what they would prefer; this isn’t an A or B decision. There are many variations. We’d like to know what results the WG would like to see from testing.
… Once we have some feedback from the WG, it may make future discussions about testing easier; once we know what kinds of results we want.
… This will help the editors to determine if certain statements in the spec are testable.
… (going to share my screen, but will also share links in IRC)
… Regarding the specification we have right now: we thought it would largely be a data model with concrete representations, and that we would test syntax and data formats.
… Since the WG started, there has been a desire to talk about the resolution process, and potentially testing against actual implementations.
… In the VC WG, there were a number of things that people were not happy about in the test suite. We tested the data model, but there was still a lot missing for real interop (as discovered during the recent DHS SVIP program)
… There were some gaps regarding concrete tooling.
… In W3C, there are many groups that only do vocabulary testing

Manu Sporny: https://w3c.github.io/wot-thing-description/testing/report.html#impl-ercim

Manu Sporny: (showing Web of Things Implementation Report)
… The WoT WG released this Implementation Report. Their spec is a data model called “Thing Description”, used by some corporations to express things in the world (dimmers, signs, etc.)
… It was a vocabulary spec, looked a bit like the DID Core spec, with an abstract data model, attributes, etc.
… In their report, the first thing they have is statements by companies on whether they support the specification (self-reported; there is no test suite).
… Another thing they did: for each normative statement in the specification (e.g. “… the type of … MUST be a JSON array”), they had an automated test, e.g. to check whether the value is indeed a JSON array
… They didn’t run it against a live algorithm, they just tested static documents.
… There were both automatic and manual tests.
… E.g. “… each string MUST be processed independently”. This was tested manually via self-reporting. Companies reported whether or not they did this.
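
[Editorial illustration of the per-statement, static-document style of automated test described above; a sketch only, not the WoT group’s actual harness, and the file name is hypothetical.]

```python
# One small check per normative statement, run against a static submitted
# document rather than a live implementation.
import json

def type_is_json_array(doc: dict) -> bool:
    """Check for a statement like: the value of "type" MUST be a JSON array."""
    return isinstance(doc.get("type"), list)

with open("submitted-example.json") as f:  # hypothetical static submission
    document = json.load(f)

print("PASS" if type_is_json_array(document) else "FAIL")
```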

Daniel Burnett: Just want to point out, if you look at some of these statements, not everyone reported as passing. If you’re worried about cheating, this is typically not a big issue during spec development.

Manu Sporny: There are tests that are only passed by a single company.
… Another example of testing is Activity Streams.
… Activity Streams is for decentralized social networking activity (articles, audios, likes, etc.)
… There is vocabulary of terms.
… There is a table with “P” (producing) and “C” (consuming) entries.
… (showing an Implementation Report)
… This is self-reported; there is only “y” (we implement this) or “n”

Daniel Burnett: What self-reporting tells you is whether the implementer cares about the feature

Jonathan Holt: I suppose most of this is looking at the structure of the data model. Can this be automated with JSON schema and CDDL?

Manu Sporny: Yes; the question is whether you want humans to determine this, or to determine it automatically

Justin Richer: It’s a bit of a false dichotomy that we have to choose between human reporting, and automatic testing. There are options to combine them.

Manu Sporny: To underscore Justin_R’s point, WoT is an example of that. They have automated tests, and manual tests. They use JSON schema.
… They passed the W3C Recommendation Track.
… Another example is JSON-LD 1.1. This has a fairly brutal test suite. There are hundreds of tests, they are all programmatic. You need to implement an actual driver into the test suite, and you have to run every single test.
… Some tests pass, some fail (boolean yes/no result).
… You take your implementation, and the test suite runs every single one of the tests against your implementation.
… Everything in the spec is a testable statement that is reflected in the test suite.
… The question to the group is: do we want self-reporting only, purely test-driven testing, a hybrid, or something else?
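
[For contrast, an editorial sketch of the “driver” style described above for JSON-LD 1.1, where the suite runs every test case against an implementation-provided function; the harness and test cases below are invented for illustration, not the JSON-LD WG’s actual suite.]

```python
# The suite calls an implementation-supplied driver for each test case and
# records a boolean pass/fail result per test.
def my_implementation(input_document: dict) -> dict:
    """Hypothetical implementation under test."""
    return dict(input_document)  # placeholder behaviour

def run_suite(test_cases: list) -> None:
    for case in test_cases:
        actual = my_implementation(case["input"])
        status = "PASS" if actual == case["expected"] else "FAIL"
        print(f"{case['id']}: {status}")

run_suite([
    {"id": "t0001", "input": {"a": 1}, "expected": {"a": 1}},
    {"id": "t0002", "input": {"a": 1}, "expected": {"a": 2}},
])
```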

Daniel Burnett: Our actual minimum required goal for the specification to exit the Candidate Recommendation phase (later in our process) is to test the spec, not implementations. We need to at least see 2 implementations of each feature. Ideally we may also want to show some interop.

Ivan Herman: I think before making the choice we have to understand what testing really means and what its goal is in the process.
… From a W3C PoV, the goal of CR testing is to test the specifications. It’s not necessarily a tool to test or rank implementations.
… Its goal is to test if the specification is correct, to test if it can be implemented independently.

Manu Sporny: forgot to put links in IRC

Manu Sporny: https://www.w3.org/2017/02/social/implementations/as2/

Manu Sporny: https://github.com/w3c/activitystreams/blob/master/implementation-reports/pubstrate.md

Manu Sporny: https://w3c.github.io/json-ld-api/reports/

Ivan Herman: The spec must be implementable based on the spec text alone, without additional knowledge.
… It has to be such that each feature based on the spec text is unambiguous.
… This has to be reported back by the implementation.

Orie Steele: 2 independent implementations of the test suite? or of a did method?

Ivan Herman: We also have to prove that the feature we have there makes sense, and that implementations want to use it.
… E.g. if a vocabulary has terms that nobody uses, then that doesn’t make sense.
… Looking at manu’s WoT example, this seems to say “yes we want to use these features and we plan to deploy them”, and this has been done by more than one implementation
… I was surprised to see an entry that was only supported by a single implementation.
… In the programmatic JSON-LD case (I implemented RDFa a while ago), the test suite was useful for the implementation itself.
… It’s fine to have this, but it’s not a requirement for CR-phase testing.

Dave Longley: Orie: 2 independent implementations of every normative statement in the spec(s)

Ivan Herman: So the voluntary reports are fine. We rely on the fact that implementations come in because they want to tell us that the specification is correct. We believe they are not motivated to “cheat”.

Daniel Burnett: What dlongley said

Dave Longley: We need 2 implementations of each normative statement.

Ivan Herman: Exactly

Orie Steele: so … that’s a test suite for normative statements.

Orie Steele: and that’s 2 test suite implementations.

Justin Richer: It’s not necessarily obvious what is under test when we say we need to test a statement. We are defining what sits on either side of a process.
… We are testing DIDs, DID URLs, DID documents; potentially DID URL dereferencing
… This does not imply that we have to test DID resolution implementations in any automatic way.

Dave Longley: Orie: or self-asserted/manual claims from implementers that they implement the statements – there’s a wide range of acceptable “types of tests” – it’s up to the WG to decide how to test

Justin Richer: https://www.certification.openid.net/plans.html?public=true

Justin Richer: This goes into the DID resolution contract discussion. We need to test that the contract is defined correctly (what goes in, what comes out). But this doesn’t mean we have to plug in an actual implementation. It can be a useful tool to do that, but that’s not actually what we’re testing. We’re testing how well the specification is defined.
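
[Editorial sketch of what testing the contract definition, rather than a live resolver, could look like; the field names below are illustrative assumptions, not taken from the spec text discussed on this call.]

```python
# Check the shape of a captured (static) resolution result against the
# contract: what goes in, what comes out. No live resolver is invoked.
def contract_violations(result: dict) -> list:
    errors = []
    if "didDocument" not in result:
        errors.append("missing didDocument")
    metadata = result.get("didResolutionMetadata")
    if not isinstance(metadata, dict):
        errors.append("missing didResolutionMetadata")
    elif "contentType" not in metadata:
        errors.append("didResolutionMetadata missing contentType")
    return errors

sample = {  # supplied by an implementer as a static artifact
    "didDocument": {"id": "did:example:123456"},
    "didResolutionMetadata": {"contentType": "application/did+ld+json"},
}
print(contract_violations(sample) or "contract shape OK")
```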

Dave Longley: Orie: obviously many of us (including you and me) would want to see two independent implementations passing each test in the test suite (note that for each test, the two implementations could be different, so long as each test gets at least two)

Justin Richer: These are OpenID Foundation tests. People have published their results from them.
… You can view the logs and see details.
… You can see in a lot of detail everything that has been automatically tested, plus results.
… These are technically self-asserted, but there is a process for making the automatable portions available. This is a hybrid approach.

Jonathan Holt: I’m a big fan of test-driven (incl. behavior-driven) development.
… I’m worried we’re chasing our tails. The VC spec was changing, the core architecture was changing (e.g. proofs, signatures). I think the challenge is that if we start driving with tests, we have to finalize the spec first and then decide what is in Core and what is in the registries.
… I think it would be valuable to automate validating proofs

Daniel Burnett: jonathan_holt, right now we are not proposing test-driven development. In new specifications, it is common to do the work the way we have been doing it so far.
… There will come a point when we are going to require tests for any changes, but this is usually after feature freeze.
… We do need to be talking about testing in general. E.g. with the DID Resolution contract, it’s important that the group understands what testing means.

Manu Sporny: I wanted to highlight what we have today. We do have a DID test suite, many people don’t realize this. It’s out of date, but we do have one.

Michael Jones: When is the next special topic call?

Daniel Burnett: Next topic call is this Thursday at the regular noon ET / 9am PT time.

Manu Sporny: We approached this by saying, this is a data model test suite, so we’re going to have automated tests for syntax and data model.
… More recently, the DID resolution contract has come up.

Manu Sporny: https://w3c-ccg.github.io/did-test-suite/

Manu Sporny: https://w3c-ccg.github.io/did-test-suite/implementations/

Manu Sporny: Another thing I wanted to mention, as an editor, at some point the editors will go through the document and determine the type of testing each normative statement is going to receive.
… You look at a normative statement and ask “How am I going to test this? In an automated way, or by self-reporting”?
… We did this for VC, and for JSON-LD, and we will do it for this spec.

Michael Jones: The OpenID virtual workshop is at 9am Thursday. So I’ll be skipping that one.

Manu Sporny: Now as an implementer, I want to make sure we build a test suite that generates yes/no results on whether or not an implementation supports a specific normative statement.
… When it comes to things like the hybrid models Justin_R pointed out, fundamentally we want this type of testing.
… We want people to use the automated test suite and demonstrate that they are 100% conformant with the spec. I want our group to aim for this.
… With VCs, we may have missed the mark; people wrote custom test code instead of testing their actual implementations.
… We have real DIDs, we have real DID documents; I would like our test suite to pull in these real systems and test them, as opposed to a test suite that only tests the spec itself.
… I’d like to see broad testability for real interop.
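
[Editorial sketch of the “pull in real systems” idea; the HTTP resolver endpoint below is purely hypothetical and not something agreed on this call. A harness could fetch a live DID document and then run the same static per-statement checks over it.]

```python
# Fetch a DID document from a hypothetical HTTP resolver endpoint and apply
# the same kind of per-statement checks used for static submissions.
import json
from urllib.request import urlopen

RESOLVER = "https://resolver.example.com/1.0/identifiers/"  # hypothetical

def resolve(did: str) -> dict:
    with urlopen(RESOLVER + did) as response:
        return json.load(response)

result = resolve("did:example:123456")
doc = result.get("didDocument", result)  # shape depends on the resolver
print("PASS" if isinstance(doc.get("id"), str) else "FAIL")
```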

Manu Sporny: Yes, I think we should do far more than what we’re required to do.

Daniel Burnett: Reminder: what ivan told us is what we’re required to do; what manu told us can be very good for the industry.


7. Resolutions

Resolution #1: Rename the did-core-registries GitHub repo to did-spec-registries

8. Action Items

Action #1: Rename the GitHub registries repo (Ivan Herman)