DID WG Topic Call on the DID Rubric — Minutes

Date: 2021-05-06

See also the Agenda and the IRC Log

Attendees

Present: Ivan Herman, Brent Zundel, Daniel Burnett, Joe Andrieu, Shigeya Suzuki, Manu Sporny, Charles Lehner, Justin Richer, Orie Steele, Adrian Gropper

Regrets:

Guests:

Chair: Brent Zundel

Scribe(s): Manu Sporny, Brent Zundel

Content:


Brent Zundel: let’s go ahead and get started.
… I’ll run the queue.
… Topic of the day is DID Rubric, Joe will be guiding us through that work item. Let’s get started.

Joe Andrieu: http://legreq.com/pres/v1.rubric.didwg.2021.05.06.pdf http://legreq.com/media/rubric.v1.2021.05.06.pdf

Joe Andrieu: You can get this Rubric and the Veres One Rubric evaluation from both of those links.
… What did we do, what did we learn?
… This is a presentation that I gave at IIW – to get to a conversation… DID Method Rubric… started as an RWOT paper, now a W3C Note… DHS SVIP has been funding DID and VC interop… Digital Bazaar (DB) presented its rationale for why one might want to use Veres One… that was the starting point for the evaluation.
… We reviewed the current W3C draft, found criteria expressing DB’s rationale, identified potential new criteria… refined criteria to better address Veres One’s distinguishing features, shared it with the DHS SVIP cohort – generated 10 or so new criteria.
… Things that came out of the work – for example, is PII used to generate entropy as part of the cryptography?
… The main thing we learned… there were 20 or so existing criteria that we applied, we created 20 or so more, and modified six or seven… it showed that the Rubric was in its infancy, how a ledger might work… Veres One has some interesting architectural features that we had to adjust to… we had to come up with a structured variable feedback mechanism…
… For example… separation of power, consensus layers, enforcement (initial draft of real questions).
… In previous work, we had a section on enforcement, but didn’t quite know what to do with it…
… We also learned that design itself is a separable concern… distinct from governance… We may need separate evaluations for implementations, especially wallets. Adversaries – how does the method handle particular adversaries?
… We had several cases where the systems that people wanted evaluated weren’t just the methods… for example, many people are building an application and a wallet… the wallet is not determined by the method, but is still part of this coherent system.
… It was hard to figure out how to separate those things… so you end up addressing all of these things… but this approach was all about the method.
… The other thing that came up, possibility of evaluating adversaries… 27 adversaries worth considering wrt. digital assets… people forgetting stuff, old software, DID Method Rubric doesn’t address that very well.
… There is still a HUGE learning curve, you have to learn how the rubric works and then learn each Method… it’s a complicated space here.
… Need better tools for community engagement… criteria discussion, custom rubric development, shared rubric evaluations… how Veres One works is out there on the Internet, but bylaws don’t exist yet.
… There’s just a lot you have to understand to apply this.
… There needs to be a more open and fluid discussion… this is a static document, but the reality is – we only used the portion that applied to the use case… in almost every situation, only some subset of the criteria applies.
… One of the criteria that is of that nature – a requirement for compliance with US Federal Agency regulations… endemic to DHS – they cannot recommend deployment if you’re not using NIST standards… most applications don’t have those requirements/restrictions.
… Hopefully we can work on that… what are next steps – going to ask the Chairs about … Daniel Hardman / co-editor… the draft that I’m going to share on screen… the URL for the method is available… all new questions and new criteria will go into a PR, to put into the Note.
… I expect we’ll have a PR within a couple of weeks.
… Separately, other items have come up with the cohort… that’s the short version… with that, we can open up for questions.
… Here’s the rubric, all criteria – specific evaluations – Veres One in testnet, not in full production… one is forward looking, the other is what’s actually deployed.

Manu Sporny: thank you Joe, this is great. I’m wondering what the timeline is, and what level of agreement we’re looking for from the group.
… when should we expect the rubric to be stable and published?
… second question, what is plan for updating?

Joe Andrieu: Yes, a stable version for critical feedback with the PR I’m about to put in, in a couple of weeks… we can adopt relatively quickly – specific things can be improved… Daniel Hardman had some feedback that we should incorporate… on operational reliability, the question was inverted.
… Those sorts of feedback need to be addressed across all of the criteria; when the PR gets in, maybe one month… then 10 or so more… within a month… hopefully.
… The type of feedback we’re looking for: “This criterion is important to me, because my DID Method uses it… so add it.”
… On to the second question – dealing with the complexity of updating: come up with a tool that automates what I’ve been doing here, exposing it as a public shared resource on a Rubric wiki…
… Want to create a mechanism where people can talk, update, fork, etc… that’s forward-looking work that Legendary Requirements is looking at… we do have Registry mechanisms at W3C, but they don’t provide the robust conversational mechanism we need for discussing criteria… usually it’s just a PR, some discussion, and it goes in.
… Open to suggestions on how that could go, but that’s my current thinking.

Orie Steele: would be cool to document how a did method spec might link to the rubric wiki… that way we don’t need to update the registry, but we can encourage folks to comment on both.

Ivan Herman: I just wanted to understand this whole thing… from a W3C approach, when we publish the NOTE, we can set up auto-publishing of updates… that’s it, it just works. This WG will be out of charter sometime this year, but I expect that there will be a maintenance one, as for VC.
… What you described is a bit different, still have to understand what you mean by that, but don’t see a big problem here.

Joe Andrieu: What happened when I worked through this exercise, the use case with DHS SVIP and the analysis of Veres One – it was a very bespoke process, and that surprised me… there might be dozens of federal agencies and hundreds to thousands of companies, and they need to be able to look at all of their options in an apples-to-apples way.
… This is really a database that is live

Ivan Herman: yes, understand what you mean, don’t have an answer for that right now.
… I know we had discussions with the systeam about setting up a live system; the question that comes up is – who maintains that stuff in six years?
… The first year has plenty of people… six years later, less so.
… What happens then? We need an answer to that.

Joe Andrieu: Where Legendary Requirements is as a company - this feels useful to us, as a way to build our brand, but that may not be true in 5 years.
… My mental model was Freebase: “open data” with a license so anyone can use it.

Ivan Herman: Freebase might not be the best example; it had issues and then Google acquired it.

Joe Andrieu: Yes, how do we handle this – if someone offers us $10M for Rubric wiki, then we might take it.

Ivan Herman: The question is then, what is the note that we would publish?

Joe Andrieu: We definitely are going to publish a note with these sets of criteria.
… There may be an appropriate role for an export from that system… some of the criteria memorialized and published, that might be a way to do it – a snapshot of what’s in that system.

Ivan Herman: Yes, I see.

Joe Andrieu: Once we have something working, perhaps we can do something around that.
… What’s in a criteria… mapping that, that’s where the research work is right now… building that system.

Brent Zundel: Any other comments, questions?

Manu Sporny: I have a question around perceived fairness of the rubric.
… I know you’re talking to a lot of people, many reactions positive.
… what are some negative reactions?

Joe Andrieu: Yes, there have been a few – there was some sense that rubric wasn’t well structured for particular methods.
… It often wasn’t clear how you apply the Rubric to did:web, which has multiple different layers going on… the DNS substrate as part of the analysis… some questions in the initial draft don’t deal with multiple layers of governance… we don’t know how to apply the Rubric to that.
… There was pushback on the “Use Case Centric” approach – disagreement around whether governance is independent and exists on its own, or depends on the use case… if you are a US Federal Agency, you are not going to want to use a DID Method that is controlled by a foreign interest you don’t trust.
… So a federal government might want to ensure that they’re free from those sorts of entanglements… there’s a learning curve on the Rubric and how it creates value… we haven’t gotten any negative feedback on perceived fairness… for example, we say we did the evaluation on behalf of Digital Bazaar, and it was paid for by DHS… some criteria are objective, some are not; being transparent about it is important.
… We have six methods that we presented examples for; for the new criteria, we only have Veres One Testnet and Production as examples, and we need to get other examples… we reached out to the Method owners we could find; where there was no way to reach them, we raised an issue asking them to add it.
… We haven’t heard back from DID Method authors… no one has proposed new criteria, so we could use help from the group; once we have something stable, we need other examples in there… otherwise it becomes too much of a promotion piece for the DID Methods that have examples in the Rubric… no one has complained, but it’s an obvious misalignment with our purpose.

Charles Lehner: Just wanted to clarify, DID Rubric repo or the report?
… Where does the collaboration happen?

Joe Andrieu: It happens on the DID Method Rubric repo

Joe Andrieu: https://github.com/w3c/did-rubric

Joe Andrieu: We need more examples… but we’re about to put in 20 new criteria that only talk about Veres One… we need to fix that.

Brent Zundel: How can we best marshal the resources of the WG to help this work move forward?

Joe Andrieu: Once we get the new PR in, with new criteria, it would be helpful to bang the drum a bit to get recommended examples for different methods.
… Concerted effort to get people to submit stuff as examples in rubric.

Brent Zundel: That PR is 1-2 weeks out?

Joe Andrieu: I expect a PR in existence within the week, then make changes after the fact… happy to engage in whatever way.

Ivan Herman: Let’s say the PR is in and merged – can we then make an official publication or not?
… At this moment, this document doesn’t exist.

Orie Steele: Doesn’t the group need to agree to publish a note?

Joe Andrieu: Maybe publish a First Public NOTE? 2-4 weeks from now?

Brent Zundel: yes, the group must agree before publishing

Brent Zundel: but strong consensus is not required

Ivan Herman: That would have the advantage that it gets registered, easier to find, then we can set up automatic publication pipeline, which makes it easy to republish things.

Joe Andrieu: If we can get that merged, and execute, that’s good.

Orie Steele: seems like we need that PR; the PR needs review, the PR gets approved, the group agrees to publish, registration is complete, and updates can be made later.

Brent Zundel: Jump on the queue for final comments…

Charles Lehner: Not sure if this is relevant, there was another session about DID Method evaluation report, Markus was involved in that – anything to say about that at this point?

Joe Andrieu: Yes, is Markus on the call? Markus worked with a team that did an evaluation of six or so DID Methods in support of DHS work – not sure what the downstream of that is… don’t know how they’re going to publish it, or what their intentions are.
… I did sit in on that session; it was good work by smart people – it raised a lot of healthy issues.
… They provided some analysis of dealing with that ambiguity.

Brent Zundel: That was a good discussion, we have a plan for next steps, look forward to PR, ping mailing list to review/look at… we’ll keep moving forward. Thanks all.

Orie Steele: Thank you Joe!

Charles Lehner: thanks