Meeting minutes
Extensibility of DID Resolution
<decentra_> 5 min warning
<TallTed> it seems like there should be some liaison between DID and FedID. https://
<TallTed> pchampin - Did you see my note of last evening, about IRC log and minutes? (RRSAgent rolled over at midnight GMT.)
<TallTed> pchampin - That's good for today, but what happens to *yesterday*'s data?
wip: any changes to the agenda?
[discussion of possible adjustments]
wip: First up, Markus to talk about extensibility of DID Resolution
markus_sabadello: What I wanted to do with this session is to start a discussion and exchange thoughts on extensibility and the algorithm of resolution, when you give a DID to a resolver or a DID URL to a dereferencer
… this has to do with the discussion from previous meeting, around primary and secondary resources and fragments
… can you resolve to something other than the DID document?
… What's already in DID core (showing basic syntax) for resolve: the input is the DID, plus a map of options. The output is primarily the DID document plus two types of metadata.
… We've had that for a long time.
… A bit of innovation in places. Regarding extensibility, most of these elements are points of extensibility
… described in the specifications, but they are ways to add features
… resolution options haven't been used much. I don't know of any particular implementations.
… the one that is definitely being used is the request for a particular representation of the did document
… the result is guaranteed to have metadata, but that metadata may be customized
… So how flexible should the resolution spec be?
… How constrained or open should the generic did resolution process be?
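[For reference, a minimal TypeScript sketch of the two functions discussed in this session. The shapes follow the abstract resolve/dereference functions as Markus describes them; all field names and details are illustrative, not normative:]

```typescript
// resolve(): DID in, a map of options in; the DID document (primary
// resource) plus two kinds of metadata out.
interface ResolutionResult {
  didResolutionMetadata: Record<string, unknown>; // e.g. contentType, error
  didDocument: Record<string, unknown> | null;    // the primary resource
  didDocumentMetadata: Record<string, unknown>;   // e.g. created, versionId
}

// dereference(): a full DID URL in; a content stream out that *may* be a
// DID document but may be something else.
interface DereferencingResult {
  dereferencingMetadata: Record<string, unknown>;
  contentStream: unknown;
  contentMetadata: Record<string, unknown>;
}

declare function resolve(
  did: string,
  resolutionOptions: Record<string, unknown>
): Promise<ResolutionResult>;

declare function dereference(
  didUrl: string,
  dereferencingOptions: Record<string, unknown>
): Promise<DereferencingResult>;
```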
<Zakim> Wip, you wanted to ask about how you might support sidecar data
wip: I wanted to ask, because of did:btc1, we have a need for those options, but we weren't sure if it fit
… we want to pass in the did and some additional data to help resolve the DID
markus_sabadello: absolutely. I remember your conversation and also KERI methods where the DID itself isn't sufficient to fully resolve.
… I think absolutely that's something to put in there.
decentra_: I think a common resolution option would be a version of the did document
<Wip> Version or time
markus_sabadello: yes. I have a few more examples also
… Dereferencing a DID URL is when you have a DID with an optional path, query, and fragment parts
… the tension is that the result *may* be a DID document, but it may be something else
… you can have parameters in the query stream, but the interesting question is what should the spec say about the dereferencing algorithm
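[A hypothetical helper for splitting a DID URL into the parts named above; the regex is illustrative, not the normative ABNF from the spec:]

```typescript
// Split a DID URL into its DID, path, query, and fragment parts.
function parseDidUrl(didUrl: string) {
  const m = didUrl.match(
    /^(did:[a-z0-9]+:[^/?#]+)([^?#]*)(?:\?([^#]*))?(?:#(.*))?$/
  );
  if (!m) throw new Error('not a DID URL');
  const [, did, path, query, fragment] = m;
  return {
    did,                                     // e.g. did:example:123
    path: path || '',                        // e.g. /whois
    query: new URLSearchParams(query ?? ''), // e.g. service=..., versionTime=...
    fragment: fragment ?? null,              // e.g. key-1
  };
}

// parseDidUrl('did:example:123/whois?service=agent#key-1')
// → { did: 'did:example:123', path: '/whois', query: ..., fragment: 'key-1' }
```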
ChristopherA: One of the things we've run into is you may need to speak to a ... you may need to have a more interactive back & forth with the resolver
… and there is no way to have an id for "this particular conversation"
<dlongley> +1 for a simpler/modular design so that you either get back a DID document, a portion of it, or another URL that could be resolved via another resolver -- rather than trying to have `dereference()` do everything (including resolving that other URL to get its content, etc.)
ChristopherA: so if the resolver needs more information, we don't have a standard way to relate the next resolution to the prior one
… lots of advanced use cases, you won't immediately get the answer.
… So can we have some sort of id for a series of transactions?
markus_sabadello: we've done something very much like that in the did registration specification. I think it could be done. Through the options object and metadata returned.
… The metadata could communicate that more is needed and the additional information can be sent in the options
… You could send something like a state identifier or cookie to keep track
… But also, we could discuss extending the interface to have more inputs
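[A sketch of the state-identifier/cookie idea, reusing the resolve() shape above. The option and metadata names (resolutionState, additionalInputRequired, requiredInputs) are hypothetical, not from any spec:]

```typescript
// Loop until the resolver stops asking for more input, replaying a state
// token so the resolver can tie the calls together.
async function interactiveResolve(did: string): Promise<ResolutionResult> {
  let options: Record<string, unknown> = {};
  for (;;) {
    const result = await resolve(did, options);
    const meta = result.didResolutionMetadata;
    if (meta.error !== 'additionalInputRequired') return result; // done
    options = {
      ...options,
      resolutionState: meta.resolutionState,        // "cookie" for this conversation
      ...(await gatherInputs(meta.requiredInputs)), // e.g. sidecar data
    };
  }
}

// Assumed to exist for the sketch: asks the caller for whatever the
// resolver said it still needs.
declare function gatherInputs(needed: unknown): Promise<Record<string, unknown>>;
```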
<Zakim> JoeAndrieu, you wanted to say DID URL deref to did document != resolution
JoeAndrieu: The way you introduced this, Markus, a DID URL could resolve to a DID Document -- I think that's correct.
JoeAndrieu: We need to communicate that that is not a form of resolution. Just because a DID Document resolves doesn't mean it is the DID Document for the DID.
JoeAndrieu: For the DID Document that is canonical for the DID -- we need to be clear about that.
markus_sabadello: I think we're aligned there.
… Resolving a did != dereferencing a did URL
decentra_: I'm wondering if there's language in the spec today about trusting the resolution process?
markus_sabadello: there is a section about that
… it's called DID Resolution architecture. we can probably expand that
… Let's talk about the inputs and outputs. And definitely the metadata can help you make some trust decisions
<Zakim> Wip, you wanted to ask about examples of where you would use a path in the DID url
markus_sabadello: Maybe we can save the trust conversation and get to the next slide
wip: do you have examples?
markus_sabadello: Yes, on my next slide
… Here are a bunch of examples of DID Urls
… Some of these combinations are very generic. Maybe could be defined in the spec itself. Some could be defined in extensions. Some may not work with all DID methods.
… Let's look at them one-by-one
… First is a #fragment to refer to a key.
… Fairly common.
<burn> Examples: https://
markus_sabadello: So what should the spec say about it?
… I hope we agree that the result of dereferencing that is one of the verification methods from the DID document.
… because the fragment is a related resource.
… I was using the language of primary resource and secondary resource.
… when that first DID URL is dereferenced, you first dereference the DID to get the primary resource, the DID Document, which you can use to get the secondary resource: the verification method with that id
<manu_> Agree that it sounds right to me
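[A sketch of the two-step dereference just described, reusing resolve() and parseDidUrl() from the earlier sketches; the matching logic is illustrative:]

```typescript
// Dereference the DID to the primary resource, then select the secondary
// resource (the verification method) whose id matches the fragment.
async function dereferenceVerificationMethod(didUrl: string) {
  const { did, fragment } = parseDidUrl(didUrl);
  const { didDocument } = await resolve(did, {});
  if (!didDocument || !fragment) return didDocument;
  const methods =
    (didDocument.verificationMethod as Array<{ id: string }> | undefined) ?? [];
  // Ids in the document may be absolute or relative to the DID.
  return methods.find(
    (vm) => vm.id === `${did}#${fragment}` || vm.id === `#${fragment}`
  );
}
```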
<Zakim> decentra_, you wanted to ask about the relationship b/w fragments and options
<Wip> Me too
decentra_: I'm wondering. Is it worth having an understanding that these fragments should be coordinated or aligned with resolution options?
markus_sabadello: I don't think so.
… If we look at the next example, with version time as a query parameter.
… The specification currently says that to dereference, the versionTime must be passed as a resolution option
… so when you do resolution, you do it differently because of the parameters.
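[A sketch of what that spec text implies, reusing the earlier helpers: DID parameters found in the query part become resolution options. versionTime and versionId are parameters from DID Core; the mapping code itself is illustrative:]

```typescript
// Pull version-related query parameters off the DID URL and pass them
// through as resolution options.
async function resolveWithVersion(didUrl: string): Promise<ResolutionResult> {
  const { did, query } = parseDidUrl(didUrl);
  const options: Record<string, unknown> = {};
  for (const name of ['versionTime', 'versionId']) {
    const value = query.get(name);
    if (value !== null) options[name] = value;
  }
  // e.g. 'did:example:123?versionTime=2024-06-01T00:00:00Z' resolves the
  // document as it existed at that time.
  return resolve(did, options);
}
```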
<Zakim> dlongley, you wanted to say if any DID URL resolves to another "intermediate URL", i would recommend that just be the result -- rather than automatically trying to resolve that... for example, i imagine the did:tdw DID URLs with `.../governance/issuers.json` maps to an HTTPS URL, so ideally (to me) the result of resolution is that HTTPS URL, not whatever content lives at that HTTPS URL.
markus_sabadello: so, yes a relationship for query terms, but not fragments
dlongley: for many of these, resolving the DID URL probably returns an HTTP URL
… if that points to something other than the VDR, then dereferencing should RETURN that URL, not attempt to handle all possible URLs
… a boundary: are the resolvers just returning content they generate (per method resolution) or is it something that is from somewhere else (and they should just return the http URL)
<manu_> +1 to what Dave said.
markus_sabadello: that is what happens when we use the service parameter.
<dlongley> +1 to a compositional design
markus_sabadello: so dereferencing a DID URL with a service parameter yields not the final resource (the content) but just the URL, which can be separately dereferenced
… however with did:tdw (we can discuss that)... in my mind, (with the governance example), it should return the content, because the webserver *is* the VDR
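[A sketch of the service-parameter behavior described above, reusing the earlier helpers: the result is a URL built from the selected service endpoint (plus relativeRef, if present), not the content behind it. Matching the service by its id fragment is illustrative:]

```typescript
// Select the service named by ?service=... and return its URL for the
// client to dereference separately.
async function dereferenceToServiceUrl(didUrl: string): Promise<string> {
  const { did, query } = parseDidUrl(didUrl);
  const name = query.get('service');
  if (!name) throw new Error('no service parameter');
  const { didDocument } = await resolve(did, {});
  const services =
    (didDocument?.service as Array<{ id: string; serviceEndpoint: string }> | undefined) ?? [];
  const svc = services.find((s) => s.id.split('#')[1] === name);
  if (!svc) throw new Error('service not found');
  // The (possibly extended) URL is the dereferencing result, not the content.
  return svc.serviceEndpoint + (query.get('relativeRef') ?? '');
}
```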
<Zakim> manu_, you wanted to note that different rules in each Method specification could be challenging.
markus_sabadello: look at did:cheqd, you also get something back other than the DID document, but the result is not the URL.
<dlongley> having to mirror every possible thing you can do with HTTP into `didResolutionOptions` (or whatever that param is called) ... is not something we want to do (IMO)
manu: I'm concerned if DID methods can define how things are resolved differently, where it's really up to the did method, and different did methods do it differently.
… for example, if it's a service, it just returns the URL back. But did methods could do something completely different.
<shigeya> Consistency from the point of view of Developer is important.
<dlongley> +1 for consistency
markus_sabadello: yes, we should put some things in the spec that methods can't or shouldn't override
… At one point we had different syntax for method-specific and generic parameters, and we decided that was too complicated
… if people come up with new parameters, but make sense to be consistent across different methods, they can be registered
… the did:cheqd example is a good model
<Zakim> JoeAndrieu, you wanted to talk about DID URL as http replacement in HTML
JoeAndrieu: Wanted to push back a bit on something Dave Longley said.
JoeAndrieu: In Linked Resource for did:cosmos, it was designed so that if the DID URL has a path part, we return the resource itself, so you can put that URL as the src in an image tag in HTML. If, when you put a DID URL in, what you get back is not a resource, we miss the opportunity to update the Web to use DIDs.
JoeAndrieu: I would like to use DID URL as src in image tag. I'd like to be able to support that use case.
markus_sabadello: I agree
manu_: +1 to supporting the use case
<dlongley> +1 to what Manu is saying.
manu_: there are two things going on here. The question is what do you send to the resolver api to get one of two things:
1) the fully resolved URL; 2) what you do to get the actual content
<dlongley> (notably, you can return a data URL as well)
JoeAndrieu: The way we navigated it (though not how did:cheqd did it): we treated it so that if there is a path part, you get back the resource. That was definitive, one way or the other. If you don't have a path, you want the DID Document; if you have a path, you get back a secondary resource.
JoeAndrieu: Maybe service query pattern gives you URL -- maybe we can break it into different responses based on URL structure.
dlongley: I support the use case. a number of different ways to do it.
… it may be that if a URL is always returned, then browsers are told they should continue dereferencing
… but adopting design when we can use current tooling would be powerful
JoeAndrieu: +1
markus_sabadello: keep in mind, like with did:cheqd, there are other did methods where there isn't a URL. We could use a data URL... but then why have an interface to return a content stream in the first place?
… regarding the path. the resolution specification doesn't tell you anything about how to dereference the path. Maybe that is defined by the did method, or by an application, but other than that is completely open and flexible.
… So it doesn't say that if it's a path, it must be content.
… In current design its left up to did method
<Zakim> Wip, you wanted to ask about tdw examples
wip: About the tdw examples, could you explain the difference
<markus_sabadello> did:tdw:Qma6mc1qZw3NqxwX6SB5GPQYzP4.example.com#whois
<markus_sabadello> did:tdw:Qma6mc1qZw3NqxwX6SB5GPQYzP4.example.com/whois
markus_sabadello: the one with the hash is just a DID URL with a fragment. in this case, when you dereference, you first get DID document, then process fragment according to rules of the media type.
… Assuming this is a service, the result would be the service with the id that matches the hash
… The second example with /whois, the DID method would have to define that.
… I realize it's tempting to come up with a generic set of rules, but I don't think that would be right.
… In the did:tdw spec, it says that in order to dereference, you find the endpoint, then you look for the service and return the content
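[A sketch of that rule as paraphrased here (not quoted from the did:tdw spec), reusing the earlier helpers: resolve the DID, find the matching service, then fetch and return the content itself, since in this method the web server *is* the VDR:]

```typescript
// Method-defined /whois dereferencing: the result is a content stream,
// not a URL.
async function dereferenceWhois(did: string): Promise<Response> {
  const { didDocument } = await resolve(did, {});
  const services =
    (didDocument?.service as Array<{ id: string; serviceEndpoint: string }> | undefined) ?? [];
  const whois = services.find((s) => s.id.endsWith('#whois'));
  if (!whois) throw new Error('no #whois service');
  return fetch(whois.serviceEndpoint); // return the content, not the URL
}
```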
<Zakim> dlongley, you wanted to say: i'm not against returning a content stream, but with no clear guard rails on when that would happen, we run the risk of having to wrap every possible protocol for retrieving other content
dlongley: you mentioned returning a content stream and making sure we have reason to do that. I'm not against that, but we need to figure out guardrails
… there's an impedance mismatch where you'd have to know what is behind the URL to pass the right options for further resolution
… I'm concerned that we are trying to enable a generic solution but only having a limited mechanism to do it
… I'm worried that if we don't give people any guardrails, then clients will need to have magical super powers to know what to do.
markus_sabadello: I agree.
… these examples would theoretically work with no additional parameters in the dereferencing
… some of it is defined by DID methods, some by the spec
<markus_sabadello> did:example:123?service=DecentralizedWebNode&queries=W3sgTUV
<Zakim> JoeAndrieu, you wanted to say I think interop between methods is the better options
JoeAndrieu: Wanted to provide a counterpoint on the advocacy to leave the bulk up to the DID Method. We have a challenge where did:cheqd is doing what did:cosmos did, but differently; that is going to limit interop -- what is the common pattern that these DID Methods upgrade to? Can we figure out how we could have a mechanism to get a resource back in a common way?
<dlongley> +1 to Joe for finding consistency and creating more well-defined boundaries / layers
markus_sabadello: I would love to learn more about the did:cosmos approach.
<drummond> +1 to defining a common pattern that any DID method can adopt for DID URL construction.
markus_sabadello: the objective for this discussion is the question of how much we should standardize
<JoeAndrieu> +1
<drummond> +1
<ivan> +1
manu: +1
<denkeni> +1 to defining the scope of whether dereferencing should happen on the DID resolution level,
<denkeni> Or the did method, etc
<markus_sabadello> did:example:123?transformKeys=JsonWebKey
markus_sabadello: we have a mechanism to define common parameters (did core and extension registry)
… this JsonWebKey example is defined by the method, but maybe it could be elevated to be worth putting in did-core
… So we might want to create a section for path patterns
<drummond> +1 to extension registry for common path patterns such as /whois
manu: to note the plus ones further up. I think making it so that we don't know how to interpret path patterns is going to lead to a lot of problems.
… I'd go a bit further. Maybe we do need path patterns, but I'm concerned that they might be totally different. So you see similar patterns, but handled wildly differently
… can we not just standardize the path pattern?
<smccown> +1 for standardizing the path pattern
manu: I am not certain about the ramifications of that option, but the other way feels like we are buying ourselves trouble
markus_sabadello: the counter-argument would be that that isn't how HTTP URLs work
<dlongley> and maybe it ends up being ok for the paths to be wildly different, but we need to make sure it's really clear where the lines are and that we can *construct* (in a compositional way) the pieces we need to do interesting things, using existing interfaces, without having to wrap every interface to integrate existing technologies with DIDs.
<Zakim> JoeAndrieu, you wanted to say the linkedResources is in the extension registry
JoeAndrieu: linkedResources is in the extension registry. That particular use of that property determines how the path is used, because it's method specific. That's the dangerous pattern we're in: anyone can add that pattern to the registry, but no one outside the DID IXO community understands it. It creates the problem Manu is raising a concern about.
<burn> Agree with Markus that expectations from HTTPS URLs need to be met as much as we can to reduce confusion
JoeAndrieu: We have a path property that is a property of linked resource that would be looked up... bespoke way to do it, but we just thought through it in our own internal conversations.
<Wip> LinkedResources - https://
manu: I didn't mean to imply that the same path on two different did methods would give you the same resource. I agree. The path on one DID URL may give you a totally different resource than that same path on another DID
<dlongley> agree that that would be a big problem
<drummond> also agreed
manu: If in one situation it results in getting the DID document and in the second a direct resource, then the DID method fundamentally overrides the fundamental expectations
<burn> Sounds like the key is not so much what is returned but whether behavioral actions differ
manu: there needs to be some level of predictability
… the linked resources is a great example, I would expect that to behave the same way across DID methods
… So we might restrict extensions to prevent that kind of duplication / ambiguity
<drummond> What would be ideal is if you can request the same type of resource (or action) with a specific path construction that works across all DID methods that adopt it. Linked Resources is a good example.
<dlongley> (can't put the requirement on DID registry administrators though :) )
<drummond> +1
<denkeni> +1 to manu, probably we could have stricter resolution rules, rather than just pulling in everything we've used from URL dereferencing
markus_sabadello: I've organized those examples into patterns we've been seeing
<dlongley> we need different categories that can each be reasoned about similarly -- and DID method authors can put their extensions in the right place/category.
<TallTed> I wonder whether people would be feeling the same way if these URLs were fully opaque/obfuscated (as URLs are specified to be), rather than using human-friendly-ish parameters, values, etc.
markus_sabadello: The method independent ones don't require any method-specific logic. Selecting a service endpoint from a did document that doesn't have to be encoded in a method specific way
<dlongley> TallTed -- i think the point Manu is raising starts to involve something like verbs / actions and how those interact with URLs (or if they are "part of" the URLs)
markus_sabadello: The processing of the service parameter is completely independent of the DID method
… Some of this could be changed or improved
… Where these things are specified is the heart of this discussion
manu_: looking at the list, my gut is that anything that's a query parameter feels right. It's the path part, for /whois, that concerns me; leave that method-specific
… if that were a query term, it would feel less confusing
<decentra_> +1 to confusion b/w path and query param
manu_: in the extensions, if query and path are interchangeable that could become confusing
<denkeni> And that has happened everywhere
markus_sabadello: the difference between /whois and ?whois: a path like /governance/issuers could be anything.
… you can have any arbitrary path, and how to dereference it is open.
… because the DID method may want to support arbitrary paths
TallTed: I'm concerned people seem to be interpreting URLs based on their intuitive interpretation of the words they are seeing there. URLs are by definition opaque.
… so if we used opaque identifiers, would that change our sense of how these things work?
… replaceing "whois" within "xc13" would shift expectations, but *it shouldn't*
<dlongley> `did:method:abc/p1/p2.ext?x=y&z=1#f` <-- need clear boundaries around how each of these "things" work :)
TallTed: There's an intuitive understanding of how things work because of an incorrect belief that we understand how things work. Because the services don't have any intuitive anything
ChristopherA: I'm struggling with a trust model problem. Which is, how much do I rely on the resolver that I trusted
… versus a resolver going and asking something else for an opinion?
… In a bunch of these there is zero trust, because there aren't mechanisms to ensure it.
… Data that comes from my browser's resolver I am forced to trust, but if it calls some other resolver, how/why should I trust it?
markus_sabadello: there is some language about this in the architecture section. It's incomplete, but it tries to describe that it is possible to have a setup where one resolver talks to another resolver, even at different parts in the process
… that has implications on trust model
… For example, a resolver could be used to get the document, with you then processing the fragment yourself.
<TallTed> How do you know to trust the HTTPS server your browser sends a GET to? How do you know that HTTPS server isn't parsing your GET and passing chunks of it to a remote process, that does the same, etc., until something gets returned to you?
<dlongley> also something to note... if you can't resolve a DID URL to an intermediate URL, it becomes harder to figure out what to update :), i.e., people will need/want to update "pointers" to things and they might need to know what those pointers are, not only get back the fully dereferenced content.
<Zakim> burn, you wanted to agree with Dave Longley that this is about verbs/actions
<dlongley> +1 to TallTed, when we hide the HTTPS calls behind a DID resolver we also don't necessarily know what's being used there for trust
burn: I want to go back to Manu's comment on the example of what concerns him with path.
… I think it really is about verbs and actions. Even with HTTP urls, it feels wrong when the path has "actiony" things like "next" or "prev".
<TallTed> I don't believe there's a suggestion of that in HTTP URLs that are not strictly Locators
burn: So that is something not mandated with HTTP urls but if we can restrict/enforce/control that, that would be valuable
<Zakim> manu_, you wanted to note that URLs are supposed to be opaque, but the patterns we build on top of them are not. and to speak to "don't know what is handled by DID Core / DID Resolution vs. DID Method"
<denkeni> +1 to burn
manu: Agree with Ted, yes, URLs are supposed to be opaque and you get something back.
<TallTed> you don't necessarily get anything back. You *should* usually get a response code, but that may be all!
manu: in that way, they are opaque, but we also have design patterns like .well-known which we now depend on for certain protocols
… I'm concerned about not knowing which systems resolve which parts of a path
… To really understand what happens, you have to have the entire registry in your head of every possible parameter and path to understand what applies
… Yes, this stuff is supposed to be opaque
<burn> Correct, Ted, but there is an implication that a path leads to a location and not an action. Maybe something at the location is returned, or maybe not, but path-as-action is problematic.
manu: but it's a high cognitive load. that increases the chances that people will come to the wrong conclusions, leading to security mistakes
<TallTed> It's no less complex with an HTTP(S) URL and all the things that might be behind /cgi/...
<Zakim> JoeAndrieu, you wanted to talk about fragments not being sent to resolvers and dereferencers
<TallTed> Fragments are not sent to servers by clients. It's in the RFCs.
JoeAndrieu: Wanted to speak to a conceptual boundary -- typical pattern in browser is you don't send fragment to server. You process w/ results of resolution or dereferencing. We haven't written that up.
markus_sabadello: I think you do pass the entire DID URL to the dereferencing process, but different things can happen in different components
… all of which could happen locally
shigeya: I'm confused about which type of software is using this API
<dlongley> "the server" gets ambiguous, "the resolver" is perhaps not "the server", but the client (even if it also happens to function as a server just to enable you to use its interface), i.e., there is a server/client when resolving and a server/client that provides a resolution interface.
shigeya: There is some confusion about differences of assumptions
… Some think the end result should be resources, but sometimes it's not
… we could put this onto the browser to get there.
markus_sabadello: all of these are implemented somewhere. Maybe not exactly to the interfaces we have in the spec, but these are live uses
wip: It'd be good for everyone to think about next steps. We need some issues and actions we are going to take
<TallTed> (Sorry, yes, my bad, I'm usually a stickler about "qualifier server", "qualifier client".)
markus_sabadello: with did resolution you don't always have client and servers
Sure you have clients and servers!
<KevinDean> From RFC 3986 (https://
<KevinDean> agent, regardless of the URI scheme.
markus_sabadello: it's also a question of which application is treating "dereferencing" to call a dereferencer
wip: off to a 30 min break, after an optional group photo
<decentra_> back at 10:55
<shigeya> We're experiencing a power outage at the venue hotel. We may not be able to resume at 10:55...
<decentralgabe> We are delayed - power is out in the building. No estimate of return just yet.
<decentralgabe> the local power co's website shows a 3PM return. please follow the mailing list, we will post an update when we're able to resume.
<Aaron> W3C in the dark
<markus_sabadello> In case it makes anyone feel better, here in Europe it's dark as well :)
<decentralgabe> 😂
<TallTed> The sun has set on Europe.
<ChristopherA> I’m hanging out at pool patio, much cooler there.
burn: the agenda has adjusted. we will skip issue/PR processing for resolution, which we normally do on calls.
… we were going to do editor onboarding for resolution editors. to help with the process for doing that. we will skip it for now. we are going to run a resolution for the FPWD for DID Resolution, so we can begin making edits under the proper structures
… I will turn it over to Will to run that process.
… we have draft text. please adjust it
<Zakim> manu_, you wanted to ask about using echidna for publishing.
manu_: I don't know if we've made a resolution to use echidna to auto publish, but we should do it.
burn: I would like Will to run the proposal first for this, then Manu you can run one for echidna
manu_: the other thing ... we need to suggest the short name, which I presume is did-resolution
<TallTed> s| the DID WG will publish the following https://
Wip: Does anyone have any concerns for the proposed text before we move forward?
ChristopherA: we have people that haven't been in a W3C process before. we may want to give a recap of what it means to be in a FPWD, who can vote, and what this is ...
burn: there are several stages in the W3C rec track process. the first stage is called a First Public Working Draft. it begins the intellectual property disclosure process. when certain triggering events occur, this being one of them, you will review the document to make sure you do not have IP conflicts
… please read the IPR policy. only invited experts and W3C members of this group can vote. please do not vote if you are not! we know there are sometimes guests.
… this is a 'continuing group' in a sense ... not everyone has participated so we are giving people context!
TallTed: I added the revised proposal with the addition of 'DRAFT' -- it is proposed, then we vote, then it becomes resolved (or not)
Wip: hearing no objections to the current proposed text
<Wip> PROPOSED: Publish the DID Resolution specification (https://
<manu_> +1
<JoeAndrieu> +1
<burn> +1
<decentralgabe> +1
<Wip> +1
<TallTed> +1
<markus_sabadello> +1
<dlongley> +1
<dmitriz> +1
<ChristopherA> +1
<shigeya> +1
<Wonsuk> +1
<smccown> +1
<JennieM> +1
<denkeni> +1
<danpape> +1
RESOLUTION: Publish the DID Resolution specification (https://
burn: please vote 1 per org. do not want to have a disproportionate representation
… we declare it resolved! Manu, would you like to make a proposal about echidna
manu_: echidna is a document publishing system the W3C uses to automate the publishing process. it used to be a lot more of a complex process to publish new versions. this meant new versions were only published every few months. echidna was put in place so that any time the editors merge something to the main branch, echidna auto-publishes a version of that document.
<TallTed> for future... I would appreciate announcement of "one vote per member org" before or with the PROPOSED, as that can change my opinion of the wording.
manu_: this proposes the WG use echidna for all documents the WG publishes after the FPWD
JoeAndrieu: I want to clarify, this echidna stops when we get to CR? then we have a different process?
manu_: echidna needs to be disabled at certain points in the process. we should enable it after FPWD. when we go to CR, we turn it off. then go to CR. then re-enable it to publish CR drafts. then turn it off before PR. etc.
… re: pushing back on the proposed resolution amendment from Dan. hopefully this proposal works for all documents the group publishes, so that we do not need to agree to use echidna for future documents.
burn: that is fine. I want to make sure that everyone understands that this is for the rubric and all other documents. I don't have an objection.
ChristopherA: are there some documents we have inherited that we may not want to auto-publish? like the plain CBOR document. I don't think that's a good starting point.
manu_: that's a good point ... we don't have to enable echidna ... we can be clear what we're talking about
<dlongley> could say echidna by default unless otherwise stated
JoeAndrieu: the use case document could also go in there. for the VCWG we thought we were on echidna but we were not. it would be good to have an explicit list.
The list of WG deliverables is here: https://
ChristopherA: the only ones we haven't mentioned are the plain CBOR representation and the implementers guide. I will vote against the CBOR representation. the only questionable one is the implementers guide.
<Zakim> JoeAndrieu, you wanted to ask about registries & echidna
JoeAndrieu: do we have any visibility into the interaction between echidna and a registry?
manu_: my expectation is that echidna should operate normally for a registry. it should auto-publish. don't see any reason why it wouldn't work like that.
… +1 to Christopher's proposal to not apply it to the CBOR representation or implementation guide until someone pushes it forward. it won't be auto-published until someone works on the documents.
burn: doesn't that mean .. anything not intended as a note can use echidna?
manu_: where is the use cases document?
<dlongley> "use echidna for everything except docs X and Y"
burn: let's list the docs it applies to.
pchampin: we have precedent of registries using echidna, should be no problem with that
PROPOSED: The Working Group will use Echidna to publish the following documents: DID Extensions, DID Method Rubric, DID Core, and DID Resolution.
burn: any questions? recommendations for change? [ hear none ]
<manu_> +1
<JoeAndrieu> +1
<decentralgabe> +1
<burn> +1
<smccown> +1
<markus_sabadello> +1
<denkeni> +1
<Wonsuk> +1
<shigeya> +1
<TallTed> +1
<ChristopherA> +1
<Wip> +1
<dmitriz> +1
burn: resolved!
RESOLUTION: The Working Group will use Echidna to publish the following documents: DID Extensions, DID Method Rubric, DID Core, and DID Resolution.
burn: is there anything else related to this topic of publication processes that needs to be addressed at this time?
manu_: ... is did use cases not in our charter?
pchampin: we can make new notes
decentralgabe: the use cases doc is currently an editors draft and will need to be changed https://
Wip: I noticed some URLs are not working properly
There is a published Note of the use cases https://
<Wip> https://
other link is here: https://
manu_: maybe PA you know the answer to this ... there is a subdirectory structure, and when published through echidna it is not publishing the sub-directory stuff for extensions
… the HTML5 spec is multi-document, so this should work
pchampin: I was not aware of this and will check.
manu_: in the editor's draft it works just fine. in TR space the links get broken.
<smccown> For clarification: Use Cases are covered by the W3C process doc (https://
<smccown> 6.4 The Note Track (Notes and Statements) for details."
<Wip> This old link still works - https://
manu_: TR means "Technical Report" which is the official home of a specification. when we publish something we pick a short name (like did-resolution). when we publish a document it is put on the w3c site under /TR/<short-name>
… it is continually published when we use echidna. we will publish to TR space .. then when the WG shuts down the document will in theory live there forever until a new WG comes along to update the document. there are some other ways this can change.
<Wip> https://
Wip: flagging the did spec registries link ... should it still resolve?
pchampin: the links in the section resolve to a 404. we will dig into it
burn: PA will take an action to resolve this
pchampin: does someone have a link to the HTML5 spec that follows this pattern?
burn: we are past our original half hour for the 5 min decision. anything else?
… our next topic is the DID Test Suite and Resolver Test suite. there are no slides. just discussion.
Wip: we touched on how we would test interoperability. it is connected. this is really - what actions are we taking as a group to sort out our test suite so we can demonstrate interoperability.
… does anyone have context on what exists currently? I'd like to call on Benjamin since he runs the VC stuff
<ChristopherA> @pierre w3c/
https://
<ChristopherA> I assigned it to GitHub @pchampin
manu_: here is a link to our current test suite (https://
… we have 102 methods which submitted tests for this. for each method we can see if it passes a particular test for each normative statement in the specification. we are largely a data model specification ... which is what the tests cover.
… section 4.2 covers the normative statements we are covering. not all methods have implemented all normative statements.
<Zakim> manu_, you wanted to speak to what test suite we have now
manu_: it is fairly comprehensive. not a lot of methods have problems in passing our test suite. in the future we want this to be far more interactive and dynamic. in the past we got a DID string and a document which we checked into a repository. who knows if these methods are still conformant or not.
… in the VCWG we have made this more active.
markus_sabadello: It is 102 test reports, not methods. there can be multiple reports per method. there is a separate DID resolution report in the CCG.
<markus_sabadello> w3c-ccg/
markus_sabadello: the resolution test suite we have goes beyond the data model. it also tests the resolution and dereference functions by using the HTTPS binding. it connects to an endpoint, sends DID resolution or dereference requests, and checks aspects of the responses. it hasn't received as much attention as the DID core test suite.
… there are no automated test reports. it is possible to download it .. you can configure the repository to generate test reports locally. can be useful as a starting point.
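[A sketch of the kind of check such a suite makes over the HTTPS binding. The /1.0/identifiers/&lt;did&gt; path follows the Universal Resolver convention and the Accept profile follows the DID Resolution draft; both are assumptions about the endpoint under test:]

```typescript
// Send a resolution request to an HTTPS resolver endpoint and assert a few
// aspects of the response.
async function checkHttpsBinding(baseUrl: string, did: string) {
  const res = await fetch(
    `${baseUrl}/1.0/identifiers/${encodeURIComponent(did)}`,
    { headers: { Accept: 'application/ld+json;profile="https://w3id.org/did-resolution"' } }
  );
  if (!res.ok) throw new Error(`resolution failed: HTTP ${res.status}`);
  const body = await res.json();
  console.assert(body.didDocument?.id === did, 'didDocument.id equals the input DID');
  console.assert(typeof body.didResolutionMetadata === 'object', 'resolution metadata present');
  return body;
}
```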
<Zakim> Wip, you wanted to ask if it should be a requirement to pass the test suite as part of DID method registry submission
<JoeAndrieu> +1
Wip: should it be a requirement to pass the test suite as part of DID method registry submission?
JoeAndrieu: we have talked about this. I think it is a really good idea. there is a burden on the editors, but if it is self-reported, and that is what we accept, maybe it reduces that burden. I don't know why we would put a DID method in a registry if it is not conformant.
ChristopherA: which level of conformance?
TallTed: we have talked about this before. it is a substantial burden on the editors. we do have a new group of editors who are maintaining the original registry. I am not on that team (for good reason). It is a substantial burden of time.
… getting the information that is requested is like pulling teeth. I don't feel like the burden is worth it. sometimes automatic tooling can help generate results. this could be a substantial burden as well.
ChristopherA: I am OK with some kind of field in the method section of the registry that someone can self-report. I do not believe it should be a requirement. I concur with Ted that it's a real burden on the approving team. It has downstream complications.
… I would like to see a list of parties that conform to resolution. DID resolver conformance is not required. using a resolver or the API does not prohibit your use of a DID or DID Document.
<Zakim> manu_, you wanted to speak to https://
<bigbluehat> w3c/
bigbluehat: how the test suite is used is a great conversation which we should come back to. the W3C test suites are testing the spec's statements. they do that by testing implementations, but that is a secondary concern. that doesn't mean we can't turn them into something better. it is better to have an underpinning CG to maintain the suites long term.
… the VCWG/CCG test suites, parallel to what we have here, do specification testing (MUST statements). the data model suite is up to date. shows all MUST statements, a link to the spec, and who passed. there is also an interop grid, where the suite was implemented through a minimal VC API.
… there are live tests that run every Sunday, which is beyond the W3C requirements, but does mean we have up-to-date stats on how everyone is doing. the interop tests have gotten less attention than the spec tests. these move VCs in and out of issuers and verifiers to promote cross-consumption.
https://
bigbluehat: not sure how much will be applicable here. but this does go beyond a JSON-schema evaluation of the data model. this is something we built at Digital Bazaar to aggregate results to understand what implementers have covered.
… there is a spider chart per implementation for issuers and verifiers. we can see across multiple specs how one implementation is doing. there are 15 or so implementations listed here. an aggregate of VCWG and CCG tests in one place. a useful tool to understand the ecosystem.
ChristopherA: what does the dark blue mean?
bigbluehat: that it is unimplemented. you have to opt-in to the test. see it at https://
https://
bigbluehat: question for this WG ... how dynamic do we want the tests to be? there is a test suite that is semi-dynamic for did:key. if there is a known way to do resolution we can test conformance more often.
… we have integration tools for mocha-js, for ocaps/zcaps. there are 2-3 community implementations in js, python, rust. can work even if you have static tests behind an API server
… we are working on surfacing reporting better so that you know what went wrong. this is beyond what the W3C requires. but more implementations are better!
<Zakim> TallTed, you wanted to note that it's important to track *implementations*, not *companies*
TallTed: these are all company names, which is great. but also problematic. companies may have different implementations. it can have the company name but should also have the implementation name.
bigbluehat: there is a PR called 'implementation details' that I could use your help with, Ted, which adds additional information
… there is also no link to the implementation. there should be a link to open source implementations so the community can use it. haven't had time to remodel it yet.
Wip: how does this VC example apply to DIDs and DID Resolution? each method will implement resolution in their own way. we want to test the results of resolution. are we testing interop across DID methods?
… it would be great to get something similar for DIDs
bigbluehat: one of the things we kicked around was using Docker containers. you can implement behind that container. resolution methods could hide implementations ... the container could do the resolution, which would be more dynamic than committing DIDs one time. we wouldn't have to implement resolvers for each method.
decentralgabe: Wondering if we could use the universal resolver to do this, it has a lot of docker containers for drivers for many methods.
markus_sabadello: yes, the universal resolver is at DIF. there is a docker container that exposes a HTTP endpoint for each method. each can be run locally and tested using this HTTP interface. it should be included in the testing. I would also argue that it is just one project/implementation.
… even though the universal resolver is well known it should be treated as just one implementation. I do like the Docker approach. it illustrates that you do not need to rely on a remote resolver service. we do not need to require DID methods have a HTTPS endpoint.
… local Docker images, for some methods, need heavy infra (like blockchains ... which you don't want to start locally), which raises a question of how you can trust a result. for example running a BTC node to test did:btcr. there are pros/cons.
<Zakim> JoeAndrieu, you wanted to suggest self attesting... clients to resolvers
kaz: there is a possibility that multiple vendors can use the same codebase. we need to be able to see the codebase as well.
<Zakim> manu_, you wanted to note that there are probably two test suites here?
JoeAndrieu: people writing clients will be motivated to test which resolvers they interoperate with. there is also some client software which we need to test. e.g. is a wallet using the resolver spec correctly?
manu_: there are at least 2 classes of test suites here. 1 - it has to test the resolution interface (the resolver, parameters, MUST statements). 2 - the automation of DID methods and whether they're conformant to DID Core is another set of tests
… we want to replace the current test suite that we have. it is static but does not guarantee continuous conformance.
… unfortunately these two test suites may be different. we are definitely testing two different things between core and resolution specifications. it is a significant amount of development effort.
… we will be writing two brand new test suites
bigbluehat: does anybody know if we can add resolution to the current test suite, even using static files?
manu_: I think so ... from what I remember our current test suite is duct-taped together a bit. it needs some TLC. it just reads a DID Document from disk. instead it should be read from network/docker container.
<Zakim> JoeAndrieu, you wanted to say is there anything in did-core that isn't just syntax or the DID and data model of the did document?
Wip: similar to this, we could apply a suite of tests to a DID method for it and its resolver. is that testing conformance to the resolution spec across methods ... maybe that is the same implementation of resolution
JoeAndrieu: re:Manu, are there tests that you anticipate that are not about syntax/the data model? I think there is a nice separation between the data model and resolution
manu_: exactly right
shigeya: as an implementer of the original DID core tests ... it is a duct-taped thing. the restriction at the time was that the tests should be very static, not dynamic ... that was the assumption. the good news is that is very stable. of course we can plug in something dynamic.
<dlongley> once we have some "discovery" mechanism whereby a particular resolver implementation can specify which DID methods it can resolve, we can build a matrix where there is commonality to compare results when resolving the same DID URLs, perhaps even without having any particular restrictions/requirements on which DID methods to use/include in the test suite.
shigeya: my observation is that the DID Core part of the test still requires static ... but also requires something dynamic like resolution. my gut says that the tests should be separated.
<burn> (nods in the room)
<ChristopherA> +1 to separation
Wip: maybe we should set up a CCG item to set up these flows? to manage the lifecycle past this group. Should this be on our roadmap?
bigbluehat: the WG is the only group that can officially sanction the test suite. it has to be delivered by the WG members according to the patent policy. then the CCG members can help out.
… make sure that it passes as we go to TR. 2+ impls have to pass each statement for those statements to stay in the spec. after that the CG can take it over and do what they want. the report that went up with the spec has to be stable. there are two modes of existence. depends on the WG/CG combo within the W3C. there are also web platform tests that run all the time.
burn: who owns the testing piece now? (which individuals?) Benjamin, do you have a recommendation of which steps we should take in which order based on the conversation we've had so we can assign people.
bigbluehat: great that we're having this conversation now as opposed to much later. I would recommend the group not lean heavily on a single provider. For the VCWG it has been a lonely, singular job. But it is under-reviewed. People don't care until they don't pass.
… there have been lots of false-positives. It would be great to have multiple people involved.
… example of the test suite -- https://
… there is a way in respec to link from the spec back to the test suite. we can explore using web annotations to link back to the tested statements.
… it should be multi-constituent. there are key decisions on how we do resolution (how dynamic/static). could be a good topic for special topic calls.
decentralgabe: Editors should be responsible for contributing to the test suite. It's a risk to only have one company maintain them. My intuition is that Editors have an implementation that they're working on, and part of that duty should be to contribute to the test suite.
ChristopherA: there are tests for core and resolution. we were talking yesterday that there are many interesting edges on resolution. we care most about the simpler cases where there isn't a test of rotation ... let's make sure we have the simpler set solid by the time we close.
… the more complex tests are out of the scope. maybe there are a few lines of what we deliver with respect to the test suites. there is core, a subset of resolution, and then also supersets.
<Zakim> burn, you wanted to talk about history, champion, and QA
burn: thanks Benjamin. what you've presented looks really great and what I've seen in other WGs that have been successful. there is an opportunity here ... I have seen businesses come out of this (testing opportunity). each group has a champion. it is great if we have multiple champions.
… please engage! I have seen a lot of success when quality assurance people in your organization participate. they are used to testing anyway. they are ideal for being sticklers for what's wrong/right.
bigbluehat: it's good if editors are not responsible. they are the ones that wrote it and may have a harder time spotting issues. good to have other eyes on it! agree on having QA people.
<decentralgabe> +1 Benjamin great point
Wip: it feels like the DID Core stuff is fairly ok. we can update the test cases, but what we have is OK. we can make some updates. for DID resolution ... we need updates. What tests are we going to create? Those test cases should be applicable to every resolver.
<Zakim> burn, you wanted to say that someone sets up the infrastructure that everyone else uses
Wip: many can use the universal resolver. To Christopher's point--many DIDs will be more complex to resolve (multiple updates, rotations). We may have to categorize the types of DIDs people should be submitting and expectations of what we should be testing.
burn: everywhere I've seen this be successful, there has been someone setting up the infrastructure. at some point that will need to happen. usually not by assigning, but by volunteering. that could be you and that would be positive. opening the queue for final comments.
Wip: Markus pointed to a link where we can start. do we need to start from scratch or can we iterate on that?
markus_sabadello: we can start with what we have. I will take it as a responsibility to check the current state. it needs some upgrades but does not need to be re-done from scratch.
burn: thank you all this is a good intro conversation for this serious topic
… moving on to our next topic: CBOR/CBOR-LD representation
CBOR/CBOR-LD Representation
ChristopherA: there has been demand in the past for a CBOR-based document. it might be interesting ... CBOR is binary, can be very concise when you do it correctly. it is self-describing. wide variety of platform and language support. it is standardized at the IETF.
… we have been working extensively at the IETF with the CBOR group about 'what it means to be deterministic'. there are drafts for dCBOR, CDE, cbor-packed, cbor-ld, gordian envelope.
… it is not a triple store by default. self describing is not the same as context. so what's required for this group? the charter says we have a plain CBOR representation as an 'other deliverable' we can choose to not do it.
… the draft we have is problematic. I do not think anyone has implemented it. originally it had an IPFS CBOR tag but that was taken out. everything is either hex or base64. it is neither concise nor binary. it is very inefficient. there are no deployments as far as I know.
… CBOR-LD has had maturity and growth in the last couple years. it is deployed in some specific areas related to linked data.
manu_: [slides go into CBOR-LD] ... agree with what Christopher said about what we have on CBOR in here. doesn't feel like the right thing for us to do. A little on CBOR-LD - a generalized semantic compression mechanism. it will take a document like an LD document, compress that document into a CBOR representation
… a CBOR document can be an LD document. you can apply CBOR-LD to it to compress it. ... what type of compression can we expect? we tried compressing a JSON-LD document in many ways (starting at ~1200 bytes).
<dlongley> "sc" stands for "semantic compression" on the CBOR-LD slides
manu_: the CBOR-LD compressed payload gets to about ~350 bytes from ~1200 bytes. and you can get smaller. it is a generalized semantic compression scheme.
… most people use 'artisanal CBOR' and you could get a 10-20% improvement if you hand pick things. if you GZIP the output of CBOR-LD with semantic compression it actually gets larger. shows how efficient it is.
… if you have questions ask Wes, he is working on the specification.
… as an example we used it on a did:key document. it has 7 keys (ed25519) about 1kb in size. running it through standard CBOR-LD compression we can get it to 534 bytes. with GZIP it gets down to 129 bytes (lots of duplication in did:key).
… the slide shows a CBOR diagnostic representation of the compression.
… so, what would this group need to do? the pitch is not 'we should all use CBOR-LD' ... but we may want to use some form of CBOR. I know there are multiple people in the group that are doing CBOR things with DID Documents. they are good experiments to run and see the outcome.
… the benefits of CBOR-LD is that is a generalized solution applied to DID Documents. there are other uses with disclosable DID Documents that can be compressed too.
… if we wanted to do CBOR-LD the group would have to establish a media type for it like application/did+cbor-ld. CBOR-LD will be standardized at W3C and available for us to use. all this group has to do is say this is a media type for us to use.
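[A sketch of the compression round trip described above, assuming the @digitalbazaar/cborld JavaScript implementation (linked a few lines below); the exact signatures are recalled from its README and should be treated as assumptions:]

```typescript
import { encode, decode } from '@digitalbazaar/cborld';

// Compress a JSON-LD DID document into CBOR-LD bytes and expand it back.
// documentLoader supplies the @context documents used for compression.
async function roundTrip(didDocument: object, documentLoader: any) {
  const bytes: Uint8Array = await encode({ jsonldDocument: didDocument, documentLoader });
  const restored = await decode({ cborldBytes: bytes, documentLoader });
  return { bytes, restored }; // e.g. the ~1kb did:key document -> ~534 bytes
}
```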
markus_sabadello: Manu, you said that the JSON-LD DID Document can be transformed, or the CBOR-LD doc is created from the JSON-LD doc. In theory we have an abstract data model...so we should not be converting JSON-LD to CBOR-LD, instead convert any conformant implementation to another.
… this could help us answer the open question on the abstract data model
ChristopherA: [to Manu] what language do you use for CBOR-LD today?
<Zakim> JoeAndrieu, you wanted to speak to the goal of getting rid of the abstract data model
manu_: right now it is JS. another company is working on a Java implementation.
<bigbluehat> Java CBOR-LD implementation filip26/
<bigbluehat> JavaScript CBOR-LD implementation digitalbazaar/
JoeAndrieu: I wasn't sure where Markus was going but I like how we finished. This is a good argument for how an abstract data model is working against us.
<bigbluehat> Rust CBOR-LD implementation spruceid/
decentralgabe: can we transform the note to support multiple CBOR representations?
decentralgabe: I wanted to support using CBOR to encode DID Documents. We shouldn't pick one, we should provide options in the CBOR document.
ChristopherA: if we want to do so, yes, we do have some decisions to make ... I believe there is a Java not a TS impl, and a Ruby one. it has numeric reduction and unicode text (NFC). the advantage of numeric reduction is to remove concerns around numbering differences ... the same number will always be the same number based on ANSI math
… if you want to do math larger than what ANSI supports we do not support it
… there are other formats that could be an IETF standard. there is a packed compression format. there is one with tags. and what Gordian envelopes does. The other option is to create a profile using Gordian Envelopes, which is already a dCBOR triple store.
… it already has subject/predicate things and supports compression. it also adds radical new things like the ability to elide. but you don't have to do it. you can do just the abstract data model and Gordian Envelope.
… I think we should do the LD one but if there is interest/demand in a non-LD model we can do it.
<Zakim> JoeAndrieu, you wanted to say you can't "use the abstract data model" for CBOR. CBOR uses a JSON data model. Which is not the abstract data model.
<Zakim> manu_, you wanted to note "it doesn't have to be LD?"
JoeAndrieu: I do not think we can do it, Chris. The abstract data model is in JSON and CBOR uses a different data model. CBOR can work with any JSON.
manu_: just a note on CBOR-LD and the type of data it has. generally DID Documents are simple things. they store keys and some other information ... but in general the fields you have are limited. storing more is a bad idea.
… we could create an artisanal format for the most basic things. it would be specialized but not hard to implement.
… CBOR-LD is meant to be generalized for any JSON-LD document. but what about plain JSON stuff? you can use CBOR-LD. even for a JSON based document -> CBOR you can inject a context in and proceed with the compression.
… it is possible to compress plain JSON documents using CBOR-LD by injecting a context before compression and reversing the process during decompression. a bit of a hack, but a way to do it.
… doing the Linked Data stuff is important for extensions. but we have a core that will have massive compression but does not necessarily need linked data
<decentralgabe> +1 to artisanal CBOR
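[A sketch of the context-injection hack Manu describes, reusing encode()/decode() from the previous sketch; the context URL here is hypothetical:]

```typescript
// Inject a context so plain JSON can ride through CBOR-LD compression,
// then strip it again on the way out.
const INJECTED_CONTEXT = 'https://example.org/did-plain-json/v1'; // hypothetical

async function compressPlainJson(doc: Record<string, unknown>, documentLoader: any) {
  const jsonldDocument = { '@context': INJECTED_CONTEXT, ...doc };
  return encode({ jsonldDocument, documentLoader });
}

async function decompressPlainJson(bytes: Uint8Array, documentLoader: any) {
  const { ['@context']: _context, ...plain } = await decode({ cborldBytes: bytes, documentLoader });
  return plain; // the original plain JSON document
}
```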
ChristopherA: so what does Gordian Envelope add to this? we already have a claim. you can put context information on any attribute. it already has an extension mechanism. you can add proprietary extension too. it is similar to other formats with salts, but has a merkle tree where each predicate set can have a salt.
… the salt can be a small value (4 bytes) does not need to be 256 byte salts. this is better than the SD model with a single salt. you can salt individual items.
also +1 to supporting both CBOR-LD and supporting development of an artisanal version that works with WG defined properties
ChristopherA: there is value longer term in being able to offer elision. only reveal what the user/app needs. this could easily be something for DID 2.0 to support elision
<Zakim> manu_, you wanted to ask about did:dht and what could be used there when we have larger key sizes and service descriptions.
manu_: for did:dht there is a limit on DID Document sizes (1000 bytes) ... there is concern when your DID Document grows past this. if we are using BBS keys or post quantum keys you can be much larger than 1KB. we could use elision for this!
… Gabe, thoughts on what happens when we have larger keys or post-quantum keys? Has there been thought about this?
decentralgabe: Yes, we have a couple of thoughts today -- might be in spec -- chaining did:dhts together, chaining secondary documents together... or point to external resource that contains authenticated documented. Less familiar with elision mechanisms, good to explore that idea as well.
manu_: I do not think I heard anyone defending our current CBOR document. Maybe we should update it and put the current thoughts of the group into it. There is support for supporting multiple types of conversion to CBOR. some support for artisanal CBOR. defining that will get us well down the road. similar support for CBOR-LD.
… we would probably need a mechanism to signal which CBOR scheme you are using. it is a bit in the CBOR payload. we should explore Gordian Envelope and explore other elision mechanisms.
ChristopherA: we as W3C are very web oriented. we don't concern ourselves as much with difficulties of doing this in embedded hardware. I have been working with NFC cards...using a constrained subset of Java. With CBOR and embedded systems/chips, it's much easier to deal with this representation.
… as our usage expands to embedded hardware or IoT devices I would like to see that community speak up. Very small sensors could have a DID.
burn: any other comments?
<denkeni> +1 to the use cases that could be expanded to the embedded system
<JoeAndrieu> +1 denkeni agreed.
burn: thank you for a good session. we are about to go on our last break of the day. we will have some announcements at the very end.
… back at :25
<denkeni> We actually have hardware vendors in Taiwan investigating DID solutions
Brett Banfe on BSV
Brett Banfe: enjoyed the session. we are hiring on our team to extract business requirements that this group would be interested in implementing. would love advisors on that; reach out to me if interested. BSV Alliance.
Open Topics
Wip: remainder of session will be on open topics
Minimum Criteria for DID Method Standardization at the W3C (Manu)
manu: asking the group if there are any opinions on which did methods should be standardized at W3C (in another group).
… another group will be proposed tomorrow to do that. need preferred criteria to meet for standardization. think of this as antipatterns and patterns for what should go into W3C.
… a blockchain based DID method would be hard to do here because of some opposing member companies.
… even if we proposed a well-done one, it would likely create formal objections. So maybe not best to try that here.
… just one example.
… wrt a good place, did:web or did:tdw use web tech.
… should have multiple implementations, well-vetted spec, broad deployment and production (ideally). what does the group think?
<ChristopherA> I still want to see did:onion as a decentralized did:web
manu: criteria for being standardized here.
KevinDean: 1. standardizing a did method with broad deployment. 2. the idea of a quick start. interested in VCs at GS1. did:web was implemented knowing that for any production solution we needed something better, but it was a quick start. ease of implementation is important.
ChristopherA: intrigued by the antipattern comment. several people are looking for human-readable names, for example; this is my interest area.
… doing a did:web their own way (ignorantly)
TallTed: this would be a good use of the rubric.
… tick off the things that are w3c-like as important. Should be high on those scales, for example.
… don't think ease of implementation is in the rubric, but that could still be important.
decentra_: agree with Ted. Is it a mistake to standardize ones that are not fully decentralized, unless they are part of a set of DID methods that meet different use cases? For example: did:key, did:web, did:tdw.
<Zakim> JoeAndrieu, you wanted to say independent verification of proofs without reliance on a central party
decentra_: opening ourselves up to critique if we only standardize one or two of these
JoeAndrieu: independently verifying proofs without relying on a separate authority would be important.
… are we looking for criteria for the group, or for the world? uncomfortable setting up group that is curating this activity.
<Zakim> brent, you wanted to ask an impertinent question
brent: are there many methods clamoring for this standardization?
… doesn't seem to be an issue at the moment
<Zakim> manu_, you wanted to speak to "categories that might work" -- and who is using this criteria? Group to curate activity?
manu_: if there is a new group just to standardize new methods, W3C would likely object. If we don't cover all bases, the group will get rejected.
… an antipattern would be a W3C group that blesses and anoints did methods.
… we are the ones who will use this info.
… if there is a formal objection on the charter, that's useful info.
… brent's point is good. How many methods actually have agreement? only a handful
… many showed up to a did method standardization group meeting.
… some are thinking they can get their method standardized despite no spec, no impl., etc.
… we know there are certain did methods in production with no vetting, etc. nation states are looking for at least some vetting before using.
… without official standardization they consider it a problem.
ChristopherA: a challenge here is that W3C has not been considered a friendly venue. Some went to IETF. would love to see KERI back here.
<brent> minimum criteria: one's we can actually agree to standardize
manu_: would really love input from this group on what would make good criteria to help folks know whether it's worth doing the work here.
… if there is a better venue for some methods, definitely go there
… if you can think of a reason for a method not to be done here, let us know.
Controller Property
When a controller property is present in a DID document, its value expresses one or more DIDs. Any verification methods contained in the DID documents for those DIDs SHOULD be accepted as authoritative, such that proofs that satisfy those verification methods are to be considered equivalent to proofs provided by the DID subject.
JoeAndrieu: ^ definition for controller property, likely without sufficient thought. Maybe I wrote it.
… Manu agreed that this is not how anyone uses the property.
… we should discuss and figure out what to use it for
… manu, what do you think it should have been
manu_: we needed a way for the did document to express what other entity could make changes to it
… controller for the DID document, authorized to update and make changes
… language makes no sense today!
JoeAndrieu: property we discussed today can't be used for certain methods.
… there's a pattern in did:key where the did is put into the controller property, indicating it controls itself. But that doesn't match the definition either (sketched just below)
… not sure how to proceed
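A minimal sketch of the self-controller pattern Joe describes, with placeholder values rather than a real did:key identifier:

    DID = "did:key:zExample"  # placeholder identifier, not a real key
    did_document = {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": DID,
        "controller": DID,  # the DID is listed as its own controller
        "verificationMethod": [{
            "id": DID + "#key-1",
            "type": "Multikey",
            "controller": DID,
            "publicKeyMultibase": "zExample",  # placeholder
        }],
    }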
ivan: even more complex, that property is now specified in the VCWG, right?
… in a way, it's the other WG that would have to define its meaning in a general way that would be usable for DIDs
… this discussion is moot
<Zakim> dlongley, you wanted to say that the current language, while hard to parse, does do what manu says -- provided that the VDR supports updates via proofs created by the controller
dlongley: this text is hard to read but does say what Manu says he wants, assuming the VDR accepts proofs
… nothing says VDR has to accept proofs.
… But if it does, it should accept ones that are created by the controller of the identifier of the document.
… this language lets you do what we want: if the VDR allows updates via proofs this way, it should allow the one listed in the controller property to do the updates.
… we need to leave room for specialized implementations specific to a VDR.
brent: this is not moot, has not come up in VCWG
… the controller property as written allows control of the DID (as if you were the subject). If we want to make that clear, we should
<Zakim> JoeAndrieu, you wanted to say the language is more transclusion than alternate updaters
JoeAndrieu: the subject is not involved in control of the document.
<dlongley> David = David Chadwick from the VCWG.
JoeAndrieu: the language that confused Dave doesn't do what he thought. it is currently meant to do a transclusion: verification methods should be treated as if they were in the document.
… probably bad security idea.
brent: (missed)
<Zakim> dlongley, you wanted to say i don't think it's a bad security idea nor do i think a VDR *has* to allow updates using proofs, but if it does allow proofs that way...
dlongley: not a bad security idea. a VDR does not have to allow updates using proofs, but if it does, it would be bad not to allow the controller of the DID to update the document.
<brent> brent: there is no guarantee that the proof method in the controller's did, which may be a completely different method, is even one that the VDR can understand.
dlongley: if you can create proofs on behalf of the did, it's an antipattern to not allow that controller to do the updates
<Zakim> manu_, you wanted to note there are lots of options here, and that can be bad.
JoeAndrieu: still talking past each other. we would want the VDR to respect who is allowed to update. but if signed by did A and ... (joe, fill in)
manu_: we have a lot of optionality we never planned for. the subject can update the did doc.
JoeAndrieu: disagree
manu_: sometimes the subject can update, sometimes not. the controller field gives someone control over the did document. dislike the transclusion term. it only says who can update it.
JoeAndrieu: we are talking about an auth method, going to the other did method and using that other proof
manu_: it's up to the VDR to determine how to
… use that controller field. in our implementation, the VDR lists who can update the document. the controller would have to generate a proof using one of their auth keys: they create an update, add a proof, and send it to the VDR, which confirms that their auth keys were used to create the proof, so it is valid.
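A sketch of that flow, assuming the VDR accepts proof-based updates; resolve and verify_proof are hypothetical stand-ins, not any particular VDR's API.

    def vdr_accept_update(current_doc, update, proof, resolve, verify_proof):
        # collect the DIDs allowed to update: the controller(s), else the subject
        controllers = current_doc.get("controller", current_doc["id"])
        if isinstance(controllers, str):
            controllers = [controllers]
        # try each controller's authentication methods against the proof
        for controller_did in controllers:
            controller_doc = resolve(controller_did)
            for vm in controller_doc.get("authentication", []):
                if verify_proof(update, proof, vm):
                    return True  # proof made with a controller's auth key
        return False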
<Zakim> Wip, you wanted to speak to btc1 and resolution
Wip: in BTC1, using controller property to invoke a proof to update the did.
… resolution - how does it handle controller properties. Maybe that needs to be addressed.
<dlongley> -1 "update to the VDR what to do with the controller property" ...
<dlongley> i think manu was saying that how an update is performed to a VDR is up to a VDR
KevinDean: the idea that it should be up to the VDR to decide what to do with the controller property is concerning. 'controller' to him means that it has control of the did itself, not an ability to make statements about the did itself.
… you could have a did for each entity in a company, but only one controller. only the controller can update.
… saying a controller means a verifier has to compare keys in a VC extends the definition too far.
<dlongley> we might want to limit this based on proof purpose.
KevinDean: maybe we need a new attribute indicating that someone can make a statement on my behalf.
<dlongley> +1 the language in the spec is weird! but it is kinda right in the abstract but not really right concretely
JoeAndrieu: dave and manu were explaining how they thought the property worked. I think the language in the spec has a different notion (transclusion). and that's a security issue and I want to get rid of it.
… more work to do.
… we have an issue on it; someone needs to try to write new language. I will, in the issue.
… will try to move in your direction.
Primer on Decentralization
decentra_: the rubric has a lot of good content, but the structure doesn't help
<JoeAndrieu> +1 yes, it got muddied as we went beyond decentralization
decentra_: maybe would be good to create a new doc
… good to define things that indicate decentralization, also that there are times when centralization is useful.
… e.g., establishing an identifier independent of anything else. an identifier in a centralized institution might not be good, but may be needed
… having many methods is a facet of decentralization
… custodial or centralized usage we can explain is not good
… any interest in this kind of thing?
manu_: yes. Would be good to summarize our learnings around d14n
<dlongley> +1 to also talk about decentralization as a spectrum
<dlongley> (in addition to having many axes)
manu_: there are many axes here. it can help W3C to talk about d14n themselves (say, the TAG), and we can have a section in the did core spec that mentions things you should know about d14n. So we can point to that section.
<dlongley> (or dimensions)
manu_: this is something our group could contribute to the larger w3c
<Zakim> denkeni, you wanted to discuss decentralization definition and solution for what
<pchampin> also, interesting read about decentralization: https://
<dlongley> ^ thanks, was trying to remember that RFC... we should reference that and ensure not to repeat.
denkeni: we have projects (digital wallet) where they talk about d14n often. when people think about their API they think it is too centralized. the govt issues VCs and hosts the verification service; people think this is a problem. also, some info has been put on a blockchain, but not the VCs or DIDs, only the trust registry of the issuers. but people think it's not decentralized.
… many ways to address single point of failure.
<Zakim> JoeAndrieu, you wanted to +1
JoeAndrieu: we need some sort of primer. the rubric is good, but the ending narrative is insufficient.
<brent> +1 to referencing the RFC
JoeAndrieu: in this doc we should celebrate that the web and DNS are decentralized in a nice way.
… there were some central naming authorities established, we are trying to solve some of the final issues here.
decentra_: sounds like there is interest. I will put together some text and the group can decide where it belongs. Thanks.
Future proofing to support MPC based multisig, in particular FROST.
ChristopherA: We have many assumptions based on 40-year-old crypto tech. If you keep your private key safe you can safely use your public key.
… property law says internet access is rented to you. only if you lock it with a private key do you have ownership.
… but that's not true anymore. new crypto challenges these assumptions.
… for example, with distributed key generation, only one device needs to be honest
… we can still trust the key despite most devices being dishonest
… do we really need a hardware-locked device to secure a key when cooperating entities can make something stronger?
… your first identity is as a child. that's an edge. but now a public key can represent that edge.
… advantages because we can say more complex things about relationships.
… with FROST you can have large networks where a public key could represent multiple entities that cooperated, but you can't tell
… in some cases the math protects us
… multiparty computation. BBS is only a small subset of what is possible.
… merkle trees can be huge. but now there is a ZKP that is 72 bytes to represent a whole path of the tree.
… we are assuming a public key is a single party in a single node
… better not to lock ourselves into that concept
Wip: we want to be able to support this new tech. maybe we already can.
… e.g., did:ring. there was work done to get DIDs to support ring signatures
… (comments about ring signatures already being possible)
<Zakim> manu_, you wanted to note that these are all useful things and I hope we're not preventing these use cases.
manu: yes, we want to support them. I don't think we prevent them today.
<decentra_> https://
<dlongley> +1 to not prevent new interesting primitives ... but also noting that it's hard when we don't know or understand them all -- and there is always another new one :)
manu: need to use the words "proof", "verification method", etc.
… we should be allowing all of this to work. MPC, multi-party keys, etc.
… i think we support all of these options today, but am not sure of it.
… does anyone know of something that is impossible to use today
… e.g. some new zkEVM thing that doesn't fit into proof
ChristopherA: subjects become more complex, especially when they are edges.
… no way to express that.
… non-provability of voting one way or another. but sometimes you want accountability.
<Zakim> JoeAndrieu, you wanted to ask Chris for a reference to the ZKP merkle proof technique
ChristopherA: what if it's multiple parties?
JoeAndrieu: do you have a reference for the zkp merkle proof?
ivan: when you refer to relationships as an entity, this is exactly what RDF 1.2 will have. a link has its own identity. people are working on how to reflect this in syntax, but that issue does have a path for us.
ChristopherA: also object capabilities. the issuer has the choice to accept the keys.
… or never requires talking with original issuer to verify.
… more papers at blockchaincommons.com/frost
… these libraries are becoming more mature and numerous, closer to deployability.
… listen to the videos and transcripts there.
Wip: with a Schnorr key there is no way to tell whether it is 1 of 1 or a threshold of several keys. I can use the key but don't know how many keys secure it (for good or ill)
… this can be useful, of course.
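A toy illustration (tiny numbers, not cryptographic, and the additive combination is conceptual rather than actual FROST) of Wip's point: an aggregate public key is just another group element, indistinguishable from a single-party key.

    P = 2**13 - 1   # toy prime modulus, NOT secure
    G = 3           # toy generator

    def pub(secret):
        return pow(G, secret, P)

    shares = [1234, 5678, 4321]             # three cooperating parties
    aggregate = pub(sum(shares) % (P - 1))  # combined key (conceptual sketch)
    single = pub(777)                       # ordinary single-party key

    # both are plain group elements; nothing marks one as multi-party
    print(single, aggregate)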
burn: I enjoyed dinner last night; it was good. we should talk about repaying for it. some paid out of the kindness of their hearts, and they need to be repaid. if you had dinner last night and you don't know who paid for you, you need to speak with some folks.
burn: Please reimburse them.
burn: It's traditional to not have regular meeting after this week. It's a tiring week, so no meetings next week.
burn: What do we want to do for a F2F meeting next year? You get to stay however many days you want.
burn: We can do that. rough idea of who thinks such a meeting might be valuable?
danpape: In a year?
burn: More like 6 months.
ChristopherA: I know budgets are difficult now, travel is difficult, especially overseas.
JoeAndrieu: There is a sample bias, those that could make it are here.
ivan: Next TPAC may be in Japan.
burn: Next logical place to have this is in Europe.
burn: Reason we're asking now is that you need to plan in advance -- cannot wait until 2 months before.
burn: Thanks for the input.
<dlongley> thanks everyone!
decentra_: Thanks for coming!
Everyone thanks the Chairs!!!