00:00:10 deiu2 has joined #did 15:17:48 RRSAgent has joined #did 15:17:48 logging to https://www.w3.org/2020/07/01-did-irc 15:17:50 RRSAgent, make logs Public 15:17:51 please title this meeting ("meeting: ..."), ivan 15:18:04 Meeting: DID WG (Virtual) F2F, 2nd day 15:18:04 Chair: burn, brent 15:18:04 Date: 2020-07-01 15:18:04 Agenda: https://tinyurl.com/ycyhrv8w 15:18:04 ivan has changed the topic to: Meeting Agenda 2020-07-01: https://tinyurl.com/ycyhrv8w 15:18:05 Session slides: https://tinyurl.com/y87mqtlf 15:50:13 root has left #did 15:56:37 agropper has joined #did 15:56:59 present+ 15:58:31 phila_ has joined #did 15:58:52 burn has joined #did 15:58:53 present+ 15:58:59 present+ manu, rhiaro, adrian 15:59:15 present+ 15:59:36 present+ phila, wayne, ChristopherA 16:00:13 present+ 16:00:45 present+ jonathan_holt 16:00:58 present+ brent 16:01:03 present+ justin 16:01:07 present+ 16:01:12 scribe: phila_ 16:01:20 scribeNick: phila_ 16:01:22 present+ wayne 16:01:50 brent: opens the meeting 16:02:10 scribe+ phila 16:02:12 brent: We're starting from slide 40 at https://docs.google.com/presentation/d/1UHDgw5Q_8-y8AS-E2cPuf9egUyuWAqAZQ24p9E0dqqw/edit#slide=id.p23 16:02:16 scribe+ drummond 16:02:18 present+ 16:02:38 burn: Reminds everyone to present+ themselves 16:02:57 selfissued has joined #did 16:03:01 present+ 16:03:14 Topic: Review and agenda 16:03:21 brent: Let's get started 16:03:37 ... will spend the bulk of this session on the test suite, what the plans are etc. 16:03:43 ... then a 30 minute break 16:03:54 ... then we get into Key Representations and Crypto Algorithms/Specs 16:04:03 ... if we have time, we'll have a working session at the end 16:04:12 ... please use the q+ system 16:04:14 present+ orie, dmitriz, JoeAndrieu 16:04:14 JoeAndrieu has joined #did 16:04:18 present+ 16:04:19 drummond has joined #did 16:04:20 q+ 16:04:23 present+ 16:04:24 present+ 16:05:04 I have an agenda comment 16:05:12 ack selfissued 16:05:24 selfissued: I was thinking about where we left things wrt JSON and CBOR being at risk 16:05:44 dmitriz has joined #did 16:05:50 ... they may be perceived at risk... our registries have not been uniformly filled out so that there are entries for each representation 16:05:59 present+ 16:06:10 ... A we need to define registry instructions - a gating criterion is that it defines representations for all data types 16:06:16 ... we need registry instructions 16:06:16 oliver_ has joined #did 16:06:21 present+ oliver_terbu 16:06:21 +1 16:06:37 selfissued: Not suggesting that we discuss this now, but think registry instructions should be added to a future agenda topic 16:06:43 +1 to Mike's proposal 16:06:46 brent: We are hoping to discuss just that at the end of the day 16:06:58 dmitriz: And I'll touch on that now too 16:07:02 we actually need people to do the work (and not just write criteria) 16:07:17 Topic: test suite 16:07:27 dmitriz: We need a test suite. Let's talk about the design and the challenges 16:07:35 ... What we know from ourselves and elsewhere 16:07:51 ... We're testing implementations 16:08:01 ... Our main spec is a data model spec, not a protocol 16:08:06 ... That brings challenges 16:08:16 dmitriz: Also, it's an abstract data model 16:08:32 ... that brings other challenges 16:08:39 dmitriz: We can fall back to the command line 16:08:55 ... we can pipe output to the test suite 16:09:13 ... The universal resolver, Docker containers and HTTP as a way to provide interop 16:09:37 dmitriz: There's the format of the DID URL - we have the ABNF 16:09:47 ...
we have the data model of the DID doc 16:09:54 ... and we now have the contract 16:10:29 dmitriz: We need to step back and determine what the goal of the test suite is 16:10:40 ... traditionally, need to make sure that the test suite is implementable. 16:10:49 ... Ideally our MUSTs must be machine-testable 16:11:01 ... There's going to be a certain amount of non-machine testable 16:11:16 ... To get out of CR, we need to know which fearures might be at risk before going into CR 16:11:19 q+ on the goals vs. the spec 16:11:22 ... What are people implementing? 16:11:31 dmitriz: Let's start with the DID URIs 16:11:56 ack ivan 16:11:56 ivan, you wanted to comment on the goals vs. the spec 16:12:10 ivan: I have no problem with what you said 16:12:28 ... We have to emphasise one thing that came up yesterday. It must be implementable based on the spec only 16:12:38 ... and not need some sort of background knowledge 16:12:55 ... And that's why CR exit criteria needs more than one implementation 16:13:01 dmitriz: Thank you 16:13:12 Orie has joined #did 16:13:15 dmitriz: Data Model testing 16:13:16 present+ 16:13:35 dmitriz: At base layer - we pass in a DID doc, via HTTP or command line, and validate it 16:13:45 ... We'll check things are there that must be and not that must not be 16:13:58 ... For the JSON-LD, we need to test that terms are not redefined etc. 16:14:19 dmitriz: A test suite can give us an idea of what else other implementations are doing 16:14:42 ... for example, we said that service endpoints might be at risk. The TS a way to see what is actually being implemented 16:14:55 q? 16:15:35 dmitriz: The nest question - if we're validating and taking a census of these DID docs - a simple solution could be that people submiited examples of their method's DID docs 16:15:47 s/nest/next 16:16:01 oliver has joined #did 16:16:01 dmitriz: That doesn't test how implementations evolve 16:16:07 present+ oliver_terbu 16:16:11 markus_sabadello has joined #did 16:16:13 present+ 16:16:25 dmitriz: So we really want to generate those DID docs - which is what the current rough draft TS does 16:16:39 dmitriz: But we don't have a formal notion of generating a DID doc in our spec 16:16:44 ... We have Create 16:17:07 ... but that traditionally has a component of registration but we don't want to test that - it often requires payment 16:17:22 dmitriz: Alt - since we do have this contract in our spec, we can use that as our generation mechanism 16:17:25 justin_r has joined #did 16:17:28 present+ 16:17:33 q+ to ask about another option 16:17:38 Eugeniu_Rusu has joined #did 16:17:38 ... The downside is to do with connectivity, sample DIDs, and we're mostly a data model spec 16:17:47 dmitriz: We'll come back to this 16:17:52 present+ 16:17:57 q+ to ask if we can break this up into some smaller building blocks, not necessarily APIs. 16:18:01 dmitriz: or maybe we allow people to do both 16:18:05 q? 16:18:07 ack justin_r 16:18:07 justin_r, you wanted to ask about another option 16:18:08 ack js 16:18:10 ack justin_r 16:18:28 the registries builds a set of fixtures from the universal resolver... https://github.com/w3c/did-spec-registries/blob/master/package.json#L8 16:18:49 justin_r: Is the TS... I assume that the TS is independent of the thing being tested 16:18:58 and then uses JSON-LD and JSON Schema to test conformance 16:19:01 q? 16:19:01 ... 
I assume people will write a pipe into the TS to see what it does 16:19:31 dmitriz: Yes, we want a skeleton test harness and we're inviting all the implementations to pipe their implementations into the TS. But what are they piping? 16:19:31 q+ 16:19:39 ... Is it something they've generated? 16:19:59 justin_r: Still confused. Why should the TS care how somebody using it is generating or fetching these things 16:20:09 the JSON-Schema tests are painful... and I strongly suggest removing them... 16:20:26 justin_r: I'm assuming it's running and is passive. I give it a DID and it passes the parser. I give it a DID doc and it tells me it parsed 16:20:41 justin_r: Why would the TS care whether it was generated or static? 16:20:50 ack manu 16:20:50 manu, you wanted to ask if we can break this up into some smaller building blocks, not necessarily APIs. 16:21:02 q+ 16:21:06 manu: I think my mental model is the same as Justin's. The TS doesn't care how it's generated 16:21:09 manu: your mic volume is super low again 16:21:29 q+ 16:21:30 "its helpful to be able to auto generate test vectors / fixtures..." because doing anything by hand is painful... 16:21:30 s/manu: your mic volume is super low again// 16:21:40 manu: There may be reasons why we do care, but at the first layer - at the interface - the TS generates some data that it can test 16:21:55 manu: It'll say 'generate a DID' and I'll tell you whether it's good or not 16:22:15 present+ markus_sabadello 16:22:34 manu: I would expect that we may do this for the primitive things we want to check. A DID is one, a DID doc is another. A call to the resolution process might be another 16:22:36 present+ Eugeniu_Rusu 16:22:44 manu: There we might care what sort of process is going on 16:23:07 manu: So I wonder... rather than call the resolve function, we can break it down into more fine-grained building blocks 16:23:21 manu: test the syntax, test the validity of the DID doc, then get into resolution 16:23:49 manu: I don't know how we force that to happen - we may not care. Maybe the resolver can be faked, but you generated the appropriate data 16:24:03 manu: How are we invoking this driver that people are testing 16:24:29 jonathan_holt: I want to be careful to delineate between... [missed, sorry] 16:24:40 ack jonathan_holt 16:24:45 ack markus_sabadello 16:24:59 markus_sabadello: I like the way Manu just described breaking this down into components 16:25:24 markus_sabadello: You could start with the DID syntax, then test the DID data model, can it produce a valid DID doc 16:25:31 ... and then maybe testing the resolve contract 16:25:56 ... In that case, we might not only check the validity of the returned DID doc, but could also check that it matches the type of DID doc 16:26:05 ... that requires choosing certain DID methods 16:26:16 markus_sabadello: But the first 2 parts are the most important 16:26:25 ack dmitriz 16:26:25 dmitriz, you wanted to agree with Justin and Manu 16:26:31 dmitriz: Thanks everyone 16:26:56 dmitriz: I agree with all three. We shouldn't care how the implementation got the DID doc that gets piped in 16:27:08 ... I agree with manu that it should be a stepped process 16:27:46 dmitriz: Jonathan makes a really good point that we want to keep in mind that these aren't DID methods. These are DID libraries that can probably handle multiple DID methods 16:28:00 dmitriz: So we should not only specify the library but which methods it supports 16:28:34 CBOR / Dag-CBOR / CBOR-LD...all different afaik...
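[Editorial note: an illustrative sketch, added for clarity and not discussed verbatim in the meeting, of the kind of "DID syntax" building-block test Manu and Markus describe above. It assumes the ABNF in the did-core draft (did = "did:" method-name ":" method-specific-id, lower-case method name, percent-encoding allowed in the method-specific-id); the function name is made up.]

```typescript
// Minimal DID syntax check, assuming the did-core ABNF:
//   did                = "did:" method-name ":" method-specific-id
//   method-name        = 1*(%x61-7A / DIGIT)
//   method-specific-id = *( *idchar ":" ) 1*idchar
//   idchar             = ALPHA / DIGIT / "." / "-" / "_" / pct-encoded
const PCT = "%[0-9A-Fa-f]{2}";
const IDCHAR = `(?:[A-Za-z0-9._-]|${PCT})`;
const DID_REGEX = new RegExp(`^did:[a-z0-9]+:(?:${IDCHAR}*:)*${IDCHAR}+$`);

export function isValidDid(candidate: string): boolean {
  return DID_REGEX.test(candidate);
}

// A harness could pipe candidate strings (generated or static) through a check like this:
//   isValidDid("did:example:123456789abcdefghi") -> true
//   isValidDid("did:EXAMPLE:123")                -> false (method name must be lower case)
```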
16:28:38 dmitriz: So not only do we have the URL format and the DID data model and possible protocol tests. But we have to test concrete implementations of an abstract model 16:28:46 ... so that means JSON-LD, JSON and CBOR 16:28:59 dmitriz: The point is that we have multiple representations of the DID doc to test 16:29:26 q+ 16:29:33 ack jonathan_holt 16:29:34 dmitriz: So we can list the DID methods supported but also the content-type of the representations that are supported 16:30:22 jonathan_holt: I think in the CBOR spec, I was concerned with whether CBOR-LD could be supported... it gets more into... this is not a default format. The native canonical format is DAG CBOR that can be exported into JSON or JSON-LD as requested 16:30:34 jonathan_holt: So we need to use the Accept header. 16:30:41 q+ to note concerns around CBOR testing... and "default formats" for CBOR. 16:30:48 ack manu 16:30:48 manu, you wanted to note concerns around CBOR testing... and "default formats" for CBOR. 16:30:49 ... I'm still struggling to get to semantic interop 16:31:12 manu: The community is going to have to discuss this in more depth. We need to not preclude other encodings 16:31:48 manu: So, for example, if we say all keys must be strings, that prevents representation as integers which can be really good for compression in CBOR 16:31:49 q+ to talk about representation encodings 16:31:51 q+ 16:32:18 manu: They're all workable - we can get there - but some groups provide alternative representations, like JSON and JSON-LD 16:32:37 manu: When we have one representation - if we get to multiple sub-representations, we harm interop 16:32:59 manu: What tests are going to be written for JSON only, CBOR only - will they preclude any advances in those? 16:33:05 ack justin_r 16:33:05 justin_r, you wanted to talk about representation encodings 16:33:31 justin_r: As this is an abstract document spec, saying that the keys are strings in the abstract, does not preclude their encoding in any binary format 16:33:57 justin_r: The spec can and should say how to translate a binary tag into an appropriate string for the data model 16:34:16 justin_r: So there's parsing the implementation and then understanding in the abstract 16:34:40 +1 16:34:40 q+ to note that that's not what's written in the specification at present for CBOR. 16:34:50 justin_r: The parsing should always provide a valid DID doc. It can then be processed into the appropriate strings - so this shouldn't be a problem. 16:35:02 ack jonathan_holt 16:35:04 jonathan_holt: That's why we need to keep the representations separate from the model 16:35:09 jonathan_holt: +1 to Justin 16:35:43 ack manu 16:35:43 manu, you wanted to note that that's not what's written in the specification at present for CBOR. 16:35:53 manu: +1 to Justin and Jonathan 16:36:13 to be pedantic: I didn't mean to extract to another representation, but to the abstract model 16:36:17 manu: I'm pointing out that the spec currently says it has to be a string, so we have to make it clear that translation is acceptable 16:36:25 manu: The TS is going to bring this to the fore 16:36:41 manu: There's a MUST statement - that's another way that people will discover what the spec actually says 16:36:44 q+ 16:36:50 So yes this needs to be written into the "Representations" section for all representations. You have to tell it explicitly how to map property names and values throughout. That's why it's so specific.
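[Editorial note: an illustrative sketch of the explicit property-name mapping justin_r and dlongley call for above, for the case where a representation (e.g. CBOR) uses integer keys but the abstract data model uses strings. The integer assignments and function name are hypothetical placeholders, not registered values from any spec.]

```typescript
// Hypothetical mapping from integer property keys (as a CBOR representation might
// use for compactness) back to the abstract data model's string property names.
const HYPOTHETICAL_INTEGER_KEYS: Record<number, string> = {
  1: "id",
  2: "verificationMethod",
  3: "authentication",
  4: "service",
};

type AbstractDidDocument = Record<string, unknown>;

// Assumes the CBOR bytes have already been decoded into a Map with integer keys.
export function toAbstractModel(decoded: Map<number, unknown>): AbstractDidDocument {
  const doc: AbstractDidDocument = {};
  for (const [key, value] of decoded) {
    const name = HYPOTHETICAL_INTEGER_KEYS[key];
    if (name === undefined) {
      throw new Error(`Unknown integer property key: ${key}`);
    }
    doc[name] = value; // the abstract model only ever sees string property names
  }
  return doc;
}
```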
16:36:55 ack jonathan_holt 16:37:01 present+ dirk 16:37:10 jonathan_holt: I got the CBOR section in at the last minute. Done in a hurry - apologise if it's not complete - let's work on it 16:37:13 DirkBalfanz has joined #did 16:37:46 dmitriz: Sounds as if there's agreement that we will need to test multiple representations. Is this a valid JSON string, does it conform to our higher DID data model 16:37:55 dmitriz: I wanted to echo the call for help that's coming 16:38:19 dmitriz: For some of the less-implemented formats like CBOR, we def need implementers to step up and help implement the test suite that tests those things 16:38:35 dmitriz: We have general agreement on what we're testing. 16:38:57 dmitriz: We're recording the usage of various features to see if they should be kept or removed from the spec 16:39:20 dmitriz: Much like other WGs, we want a page to look at. An automated implementation report 16:39:40 dmitriz: We're dealing with GH repos and closed source proprietary implementations 16:40:02 dmitriz: we need as a WG to make a couple of tech decisions about the workflow 16:40:39 dmitriz: We want to make the invocation of the open source libraries as easy as possible. A manifest/config doc that can automatically use the... via the command line or via Docker hub 16:41:04 dmitriz: Invoke the latest versions, installs them... but thinking of cross-languages, it would have to be Docker hub 16:41:10 ... so dependencies can be included 16:41:21 dmitriz: There's a lot to be said for self-contained Docker containers 16:41:26 q+ to note burden on tester for Docker -- need buy in... Docker might not be in everyone's toolkit. 16:41:29 +q to talk about how the OIDF Test Suite manages some of this today 16:41:39 q- later 16:41:44 dmitriz: We have to just say "you've run this report" - at least give us the test output so we can tally it 16:41:47 ack justin_r 16:41:49 justin_r, you wanted to talk about how the OIDF Test Suite manages some of this today 16:42:01 q+ to note continuous runs might be a good thing. 16:42:05 justin_r: I just wanted to second the notion of having an external config file that says this is how you run my code against the test suite 16:42:17 q+ 16:42:37 justin_r: When I was helping to design what became the OpenID Foundation's test - everything you would pass in for a test, would be in a JSON object 16:42:47 ... that allowed us to automate and integrate a lot 16:43:15 q+ to ask Justin about how the oidc test suite handled the language/environment setup 16:43:22 jonathan_holt: An instance of the test suite could test these configurations over HTTP. Scripts would post JSON to the right place and results would be put into a DB from which the report wans generated 16:43:48 yes, we need structured inputs and structured expected outputs :) ... it needs to run offline... 16:43:55 justin_r: Keeps things separate. Want this to be able to run in an offline mode as not everything will be on an open Web 16:43:55 ack manu 16:43:55 manu, you wanted to note burden on tester for Docker -- need buy in... Docker might not be in everyone's toolkit. and to note continuous runs might be a good thing. 16:44:53 manu: The concern that I have... some W3C test suites ask implementers to expose HTTP endpoints. As test suites change, as long as you have your config in a file, you can update and run against those same endpoints 16:44:59 s/jonathan_holt: An instance of the test suite could test these configurations over HTTP.
Scripts would post JSON to the right place and results would be put into a DB from which the report wans generated/justin_r: An instance of the test suite could test these configurations over HTTP. Scripts would post JSON to the right place and results would be put into a DB from which the report wans generated 16:45:07 manu: Also +1 to offline. Some companies don't want to put anything online 16:45:24 manu: What is the deployment mode? HTTP, offline, Docker 16:45:46 manu: The issue with Docker is that we have to make sure everyone's comfortable with it 16:46:12 manu: The biggest +ve is that... it would be really nice if we could run the test against the latest version of the code in a repo 16:46:23 q? 16:46:29 q+ to respond to manu 16:46:38 manu: People tend not to be as good at updating their open source code with the latest version of what they've done 16:47:05 manu: It would be good to give feedback on the latest version. That's the strongest argument for moving to Docker 16:47:07 ack ivan 16:47:51 ivan: let's be careful not to over-engineer things. The 1st question - how big will the TS be? When you have a complex test object, you might have huge test suite 16:48:11 ivan: But in other cases the test suite can be small because the chances of getting it wrong are not that hight 16:48:23 ivan: I've seen WGs going into overkill 16:48:31 +1 to ci support for test suite conformance... lets make it easy. 16:48:55 ivan: The last slide... yes, we need a report. but we should separate the generation of the results of testing, which can be complicated but it ends up as a manifest 16:49:04 ivan: Where you have the results in an accepted format 16:49:12 +1 to structured output 16:49:14 ivan: Simple JS hacking can create the human readable page 16:49:41 ivan: If the goal of the WG is to go beyond what the CR phase requires and set up an environment to be used in many years' time, then things become more complicated 16:49:47 +1 to structured output 16:49:54 ivan: But if you just want to get through CR, KISS 16:49:57 ack dmitriz 16:49:57 dmitriz, you wanted to ask Justin about how the oidc test suite handled the language/environment setup 16:50:14 dmitriz: I do want to reply to both Ivan and Manu. 16:50:20 q+ 16:50:36 dmitriz: You're right Ivan - the data model is actually very simple. There is one required field and several optional ones. 16:50:48 dmitriz: Lots of room for over complicated 16:51:11 +1 to submitting the results 16:51:12 dmitriz: Option 2 - that we just ask people to submit the results so we don't actually run the tests ourselves, just send us the results 16:51:18 "all my tests are passing, just trust me ; )" 16:51:20 dmitriz: The Docker approach does need buy-in 16:51:28 Orie: you have to submit proof too ;) 16:51:47 dmitriz: But without that, people might have to set everything up - and that can be a nightmare. Docker can be hard, but the alternative can be worse 16:51:47 q+ 16:51:51 ack justin_r 16:51:51 justin_r, you wanted to respond to manu 16:52:14 q- 16:52:21 justin_r: I was going to start with the last point - you're asking people t buy into Docker but that might be easier than buying into a load of other tech 16:52:47 +1 to dockerized local and hosted versions :) 16:52:49 jonathan_holt: At OIDC, We've made it so that yo can download the java and run it, or the Docker, or run it online 16:53:04 orie - but who is gonna host it? 
:) 16:53:19 dif lol 16:53:20 s/jonathan_holt: At OIDC, We've made it so that yo can download the java and run it, or the Docker, or run it online/justin_r: At OIDC, We've made it so that yo can download the java and run it, or the Docker, or run it online 16:53:48 justin_r: I'm not saying that the config should be in the test suite. The config is outside - the TS gets updated and then if people want to run their code against the new TS it runs again and they get the result 16:53:58 https://www.certification.openid.net/log-detail.html?log=jWmOc7GX94&public=true 16:54:00 justin_r: +1 to Ivan to the results being in a structured format 16:54:15 https://www.certification.openid.net/api/log/jWmOc7GX94?public=true 16:54:16 fwiw, (that part was just assumed. Not really in question - that the test results outputs in structured format) 16:54:24 q? 16:54:25 justin_r: This is a publicly visible result - it's just a JS web page rendering what's in the 2nd link 16:54:29 q+ 16:54:31 justin_r: That's just JSON 16:55:03 justin_r: That tries to address some of the issues seen in the chat. You need the log - that's been really valuable 16:55:23 ack jonathan_holt 16:55:31 q+ 16:55:43 justin_r: [please type what you just said, justin_r - I missed a really important bit at the end] 16:56:07 -> Example for a report for JSON-LD WG https://w3c.github.io/json-ld-api/reports/ 16:56:09 q+ 16:56:13 ack ivan 16:56:23 q+ 16:56:43 ivan: Just for example, for the JSON-LD test suite -it's the kind of report that was generated. Based on participants submitting their results using a specific RDF format 16:56:49 ivan: I think it's generated on the fly 16:57:21 ivan: A different issue... not on the testing itself, but on the point when we go to the director and ask for Proposed Rec. A question that might come... 16:57:39 ivan: Are all the features necessary for the uses? 16:58:02 ivan: Are there real use cases for all the features that we have? Are they in use? Or are we dreaming up features that no one wants 16:58:18 ivan: This is the kind of thing that we need to report on 16:58:45 kdenhartog has joined #did 16:58:53 ivan: All those discussions about the parameters and metadata. If it's normative in the doc, we must have an argument why we have that and back it up with real use case 16:58:56 ack selfissued 16:58:56 present+ 16:59:14 jonathan_holt - we're testing just the DID-Core data model. Signatures etc are /not/ in there, so we're not testing it. 16:59:17 selfissued: I wanted to agree with Justin that people shold create their own config nad be able to run the test tool themselves against their own code 16:59:18 ack markus_sabadello 16:59:22 dmitriz : so, just to restate my question awaiting a response: what exactly are we testing? The syntax? The signatures? The serialization from one format to another? The resolve function? 16:59:51 markus_sabadello: A response to Jonathan - what are testing? So far we've talked about 3 areas - the DID syntax, the DID doc data model andits representations 17:00:05 markus_sabadello: and the 3rd was potentially the resolver contract 17:00:11 s.andits/and its/ 17:00:15 s/andits/and its/ 17:00:40 q+ 17:00:52 ack brent 17:00:53 markus_sabadello: COnverting between different implementations - see if diff representations were equivalent 17:00:57 markus_sabadello: thanks! 17:01:11 brent: We have 30 mins left for this conversation tht I think need to happen 17:01:21 brent: I think Dmitri needs time to finish up 17:01:34 ... 
I'd like the group to focus on making decision, if possible 17:01:54 brent: So that we can be as concrete moving forward so that when Dmitri calls for help, we can be specific 17:02:12 ack dmitriz 17:02:50 dmitriz: We should probably move the current TS to the DID WG (from CCG) 17:02:54 +1 17:03:06 dmitriz: What we're going to test will depend a lot on what you implementers need and want 17:03:20 dmitriz: For example, do we want to test the conversion of representations? 17:03:40 dmitriz: That means having a conversion library which not everyone has, but that depends on the implementers 17:03:45 dmitriz: We need... 17:04:05 -1 to requiring testing of a conversion function unless an implementation supports it. 17:04:08 dmitriz: We need to resolve a couple of these issues. Docker vs 'follow instructions' 17:04:36 ... IS the list of implementations to be tested, doe it live in one place on the repo or does each implementation run on their won and submits hte results? 17:04:45 dmitriz: And if we do that, what do we do about the log format? 17:04:51 ... Do we find the results sufficient? 17:05:01 ... Do we need a hosted version of this and if so, who will host? 17:05:19 q? 17:05:26 brent: We resolved to move the test suite into the DID WG at hte kick off meeting I think. 17:05:32 can we propose to use docker as first step? 17:05:38 brent: So maybe you'd like to propose some steps 17:05:40 +1 to Dmitri's list of questions 17:06:00 -> https://github.com/w3c/did-test-suite converted CCG test repository on the wg site 17:06:18 dmitriz: For the non-hosted version - do we use Docker, Docker Hub, or rely on here's the link to my GitHub repo, or maybe a shell script to set things up? 17:06:31 present+ 17:06:36 PROPOSED: the did test suite will use docker 17:06:39 +1 17:06:39 +1 17:06:40 +1 17:06:41 q+ to clarify 17:06:42 +1 17:06:45 +1 17:06:45 +1 17:06:47 +1 17:06:53 ack justin_r 17:06:53 justin_r, you wanted to clarify 17:07:07 PROPOSED: the did test suite will use docker to containerize the test suite 17:07:10 hrm... not what I was thinking... 17:07:12 justin_r: Yes, +1, but to be clear, we're containerising the test suite itself, not that it requires the test subject to be containerised 17:07:21 q+ 17:07:25 dmitriz: This is about containerising the implementation 17:07:27 q+ 17:07:54 +1 17:07:58 dmitriz: How are we structuring the test suite? Does it have the list of implementations and then run against them all 17:07:58 0 17:08:01 q+ 17:08:10 q+ to say what?? 17:08:12 q- 17:08:14 dmitriz: or do we have the test suite and everyone runs against that 17:08:20 q? 17:08:20 q? 17:08:25 ack Orie 17:08:46 Orie: It's great that we've agreed to use Docker - good. I propose that we take a phased approach to the TS 17:09:03 Orie: I'd like to see a set of structured input and outputs that we can get by running the Docker container locally 17:09:12 ... no requirement to spin up a BitCoin node 17:09:22 ... Inputs, output, Dockerised TS 17:09:39 q? 17:09:41 ack selfissued 17:09:42 dmitriz: It's not a binary decision, true. We can do it in stages 17:10:22 selfissued: This is half philosophy.. to the extent that we're doing decentralized work, it would be shocking if we created a list of all known implementations 17:10:43 q+ 17:10:48 +1 to everything that selfissued is saying. 17:10:58 selfissued: There will be implementations that are not done, but you want them to run the test suite and they'll fail at first. 
So we should allow that without recording those failures 17:11:00 +1 to mike 17:11:07 +1 to proposal 17:11:08 +1 17:11:18 ack justin_r 17:11:18 justin_r, you wanted to say what?? 17:11:46 This is not the straw poll. It is getting the language of the poll understood and agreed to first before running the poll. 17:11:51 justin_r: I agree with Mike that having a containerised test suite so that people can run it outside the hosted version is going to be vital. 17:11:56 +1 you must be able to run the test suite independently against your own implementation 17:12:02 ... The experience in the OpenID Foundation shows how valuable that is 17:12:17 ack kdenhartog 17:12:31 kdenhartog: Does having a Docker instance that pulls from other people assume open source and available? 17:12:37 q+ 17:12:41 ack ivan 17:12:41 dmitriz: No, only the available ones. Closed source will have to run it themselves 17:13:17 ivan: I'm not sure I understand... the TS will be containerised in Docker... if I was on the implementer side, the TS meant that I got a bunch of small HTML files that I could get 17:13:33 ivan: And for each of those I was supposed to produce a set of RDF that was checked 17:13:35 1. pull the docker container... 2. build your configuration.... 3. run the test suite.... 4. get test results. 17:13:39 q+ 17:13:40 re proposal - we should clarify that the Docker file for the test suite is a nicety (people can always install it manually) 17:13:46 ivan: Is this what you mean or do you mean more than that? 17:14:02 dmitriz: yes, you can ignore docker and go direct at any point. If you hate yourself. :) 17:14:07 ivan: Mine as in Python, Gregg's was in Ruby etc... 17:14:22 q+ to propose the thing above. 17:14:48 dmitriz: I want to be clear - Dockerising the test suite is just a convenience. You can still download the code and run it yourself manually. 17:14:57 ack jonathan_holt 17:14:57 ivan: That's a tech detail that doesn't matter so much 17:15:24 jonathan_holt: I'm still curious as to what we're testing? if it's just the syntax? You can do a lot with JSON schema 17:15:31 ... testing resolve functions is harder 17:15:36 ack manu 17:15:36 manu, you wanted to propose the thing above. 17:15:51 ROPOSED The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs 17:15:56 PROPOSED The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs 17:16:05 +1 17:16:08 +1 17:16:10 +1 17:16:11 +1 17:16:12 +1 17:16:16 manu: Can we get a stake in the ground... I think what Justin just suggested was right. The test suite will be containerised and use structured data for input and output 17:16:17 +1 17:16:21 +1 17:16:21 can i understand what the inputs are 17:16:28 jonathan_holt: that's the next question 17:16:28 dmitriz: Yes, as long as people can still run the test suite without Docker 17:16:35 +1 17:16:36 0 (I am not sure what 'docker' gives me at this point, it could be just a zip file for what I care) 17:16:47 0 see ivan 17:16:51 +1 17:16:55 brent: I'm not seeing any -1s 17:16:56 ivan - docker gives you the excuse not to install Node.js to run the suite :) 17:16:59 ivan: docker gives you a way to manage all the pre-installed dependencies that you'd need to get that zip file to run 17:17:01 RESOLVED: The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs 17:17:04 +1 17:17:14 q?
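[Editorial note: an illustrative sketch of what the resolution just recorded could look like from an implementer's side: run the containerized suite against a structured config file and read back structured results. The image name ("did-test-suite"), config fields, and report path are assumptions for illustration, not the WG's actual tooling.]

```typescript
// Hypothetical invocation of a containerized test suite with structured input/output.
import { execFileSync } from "child_process";
import { readFileSync, writeFileSync } from "fs";

const config = {
  implementation: "example-did-library",                  // hypothetical name
  didMethodsSupported: ["did:example"],                    // methods this library handles
  representationsSupported: ["application/did+ld+json"],   // content types it can emit
};
writeFileSync("suite-config.json", JSON.stringify(config, null, 2));

// Mount the config read-only and an output directory read-write, then run the suite.
execFileSync("docker", [
  "run", "--rm",
  "-v", `${process.cwd()}/suite-config.json:/suite/config.json:ro`,
  "-v", `${process.cwd()}/out:/suite/out`,
  "did-test-suite",
]);

const report = JSON.parse(readFileSync("out/report.json", "utf8"));
console.log(`Passed ${report.passed} of ${report.total} tests`);
```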
17:17:24 q+ 17:17:47 dmitriz: Zip does not allow you to skip installing node and all the dependencies. 17:18:11 ack Orie 17:18:14 brent: The test suite's dependencies will either have to be in the Docker, or installed. 17:18:26 Orie: I think the next step is what's the structure of the input files? 17:18:45 Orie: There's a trade off between using YAML or JSON vs a folder structure 17:19:02 PROPOSAL: Use YAML for config for inputs 17:19:07 +1 17:19:13 +0.5 17:19:13 +1 17:19:20 what are the input representing? 17:19:22 +1 (it's structured, it's fine) 17:19:31 +1 17:19:36 dmitriz: This is not the important part, We can switch to JSON if people protest 17:19:46 +0 use whatever most implementers want, lowest burden 17:19:46 dmitriz: The important part is the decision to Dokerise 17:19:58 dmitriz: We're testing the DID URI ABNF, the DID Doc data model 17:20:03 q+ 17:20:12 dmitriz: if there is time and volunteer labour, then we can test resolve 17:20:14 ack markus_sabadello 17:20:16 q+ 17:20:50 markus_sabadello: I fully agree with these areas... I guess for some of these areas that there are different ways of modelling the DID URI syntax. 17:20:56 counter-proposal: outputs in YAML :) 17:21:05 markus_sabadello: The input is a DID URI and the TS has to parse and report the result 17:21:38 markus_sabadello: Anotehr approach would be that the input would be an instruction to generate a DID URI and then the TS tests that it is valid. 17:21:59 markus_sabadello: The input could be a DID doc and an output could be whether it's valid or not 17:22:30 q? 17:22:31 markus_sabadello: Or the input could be to generate a DID doc in a given representation and the output must match the input instructions 17:23:00 markus_sabadello: In both cases, you can test either the implementation's ability to generate or the ability to parse 17:23:06 markus_sabadello: Perhaps we do both 17:23:16 RESOLVED: Use YAML for config for inputs 17:23:58 dmitriz: I recommend that we take these discussions to the issues and PRs of the test suite itself 17:24:06 ack justin_r 17:24:09 justin_r: +1 to that 17:24:27 q+ 17:24:35 q+ 17:24:51 justin_r: A lot of that is implementation detail. To Markus's comment. The OIDF suite has a lot of tests Each test can have its own config input 17:24:54 to clarify: structured output + a script to generate an HTML visual version of the structured output 17:25:23 justin_r: Somebody's implementation, through a script or whatever, can put stuff into the test suite and run that and get the result 17:25:54 justin_r: So to Markus's point - yes, a string as the input is a definite requirement, in addition, we need a callback-style ability 17:26:12 justin_r: We're in danger of testing the ability to generate, rather than to test the generated thing 17:26:17 +1 to what just is saying... they are 2 separate tests... 17:26:25 lets do the easy string based one first. 17:26:33 q? 17:26:35 justin_r: Tell something to generate an input for me that I can then test, as opposed to generating the thing and then testing that 17:26:37 ack ivan 17:27:07 ivan: I realise that there is an aspect we've not discussed... 
in genera when I take other WGs - there is a critera that we have to define which is what it means to be successful 17:27:38 ivan: The usual thing that is done is that the WG defines a number of features and then states that each feature must be implemented by at least 2 or 3 things 17:27:53 +1 to what item said - we're opening an issue that says 'produce a list of MUST features to test' 17:28:04 s/what item/what ivan/ 17:28:10 ivan: JSON-LD has lots of features. Each of those features have a number of tests, the report shows what's been tested and by which implementations 17:28:53 ivan: JSON, JSON-LD and CBOR are different features - we don't necessarily expect each implementation to implement every feature 17:28:57 ack brent 17:29:02 +1 17:29:35 brent: We've had a good discussion here. A lot of people have brought up issues. Please take some time soon to raise these as issues in the test suite rpository 17:30:02 brent: Thanks to dmitriz for setting the stage for this 17:30:07 rrsagent, draft minutes 17:30:07 I have made the request to generate https://www.w3.org/2020/07/01-did-minutes.html ivan 17:30:08 ... We need to keep this momentum 17:30:21 brent: We're at time. We have a 30 minute break 17:30:21 Links again for breakout rooms? 17:30:34 == session ends == 17:30:49 @dmitry - breakout room to talk about sameAs? 17:35:13 justin_r can you add comments to the proposals here: https://github.com/w3c/did-test-suite/issues 17:43:31 oliver_ has joined #did 18:01:42 scribe+ drummond 18:01:44 scribe+ 18:01:59 brent: we are on slide 56 18:02:03 Topic: Key representation and crypto algorithms 18:02:08 See [slides](https://docs.google.com/presentation/d/1UHDgw5Q_8-y8AS-E2cPuf9egUyuWAqAZQ24p9E0dqqw/edit#slide=id.p27) 18:02:38 Orie: the slides will provide an overview of the topic, then considerations, then issues 18:03:11 ...one key reason for this session is different questions that have come up around algorithms, key encodings, etc. 18:03:50 mnzaki has joined #did 18:04:22 ...slide 57 covers the requirements of great security: excellent documentation, tranparency, academic analysis, observability, blessings from authorities 18:04:37 ...e.g., NIST, FIPS, governmental approvals 18:04:54 ...we need to ask the question about our authority when we public registries of crypto 18:05:28 q+ 18:05:28 ...another aspect is formal verification, frequent key rotations, short expiration, performance, and paranoia 18:05:43 ...slide 58 is some broad questions for the group 18:05:44 q+ to mention controversy about key agility 18:06:08 ......Should modern cryptographic tooling / data models or standards support legacy crypto? 18:06:19 ...Who decides when it’s not a good idea to use something anymore? 18:06:32 ...How do(?) we communicate risks associated with specific key types and algorithms? 18:07:16 ...How do we encode “key purpose” or “key use”?... (again not everyone uses JOSE). 
18:07:37 ack ChristopherA 18:07:37 ChristopherA, you wanted to mention controversy about key agility 18:07:40 jonathan_holt has joined #did 18:07:49 present+ jonathan_holt 18:08:08 ChristopherA: Wants to add to the previous page of good security: 1) hardware support 18:08:29 hardware is related to preventing key exfiltration/direct key material access 18:08:35 ...some parties will not use a security solution without hardware-based support 18:08:58 ....the second one is algorithmic agility, which Christopher did not see on the list 18:09:39 Algorthimic agility is controversial is a more accurate scribe 18:09:40 Orie: Key use or key purpose is important even if you don't use JOSE 18:09:57 and we'll get there later, but "key use" and "proof purpose"/"verification relationships" are actually different concepts 18:10:01 ...How will I get the features I want if I can’t implement them myself? 18:10:11 ...Who watches the watchmen? 18:10:24 ...who is holding the recommenders accountable? 18:10:38 ...Why would we trust that this standards governance process won’t be compromised to the advantage of interested parties? (is this happening right now) 18:10:56 ...on this point, people may challenge us very directly 18:11:02 ...slide 58 18:11:30 ...JOSE - main spec - https://github.com/panva/jose 18:12:03 ...wide support for JOSE, but that alone is not enough (example of outdated widely support infrastructure: MD5) 18:12:16 ...https://safecurves.cr.yp.to/rigid.html 18:13:08 ...has a property called "rigidity" -- "do you believe that there may be constraints on elliptic curve constants that may be more susceptible to attacks" 18:13:25 ...Orie invites everyone to read about it 18:13:43 ...“American security is better served with unbreakable end-to-end encryption than it would be served with one or another front door, backdoor, side door, however you want to describe it.” - Gen. Michael Hayden 18:13:54 ...“I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry.” - Bruce Schneier 18:14:08 ...slide 60 18:14:22 ...“No Way, JOSE! Javascript Object Signing and Encryption is a Bad Standard That Everyone Should Avoid” - https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-bad-standard-that-everyone-should-avoid 18:14:36 ...“The most blatant way to make your app vulnerable is to get the alg header, and then immediately proceed to verify the JWT's HMAC or signature, without first checking if that JWT alg is permitted. What will happen to your app if it gets an unsecured JWT with alg = none?” - https://connect2id.com/products/nimbus-jose-jwt/vulnerabilities 18:14:56 ...If you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT or jose4 with ECDH-ES please update to the latest version. RFC 7516 aka JSON Web Encryption (JWE) Invalid Curve Attack. This can allow an attacker to recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES), where the sender could extract receiver’s private key. - https://blogs.adobe.com/security/2017/03/critical[CUT] 18:15:05 ...slide 61 18:15:23 q+ 18:15:24 ...What is "crypto-agility"? 18:15:46 ...Wikipedia: “Crypto-agility (cryptographic agility) is a practice paradigm in designing information security systems that encourages support of rapid adaptations of new cryptographic primitives and algorithms without making significant changes to the system's infrastructure. 
Crypto-agility acts as a safety measure or an incident response mechanism when a cryptographic primitive of a system is discovered to be vulnerable.[1] A security system[CUT] 18:16:14 ...agile if its cryptographic algorithms or parameters can be replaced with ease and is at least partly automated.[2][3]” 18:16:52 ...the key point is that new crypto algorithms should be able to be evolved quickly 18:17:09 ...Names of algorithms should be very clear 18:17:21 ...“The names of the algorithms used should be communicated and not assumed or defaulted.” 18:17:30 - https://en.wikipedia.org/wiki/Crypto-agility 18:17:52 ...“as written, this library greatly expands the attack surface on anyone using it and goes against the whole reason that cryptosuites exist (that is, people that know better should be picking an extremely limited set of options, not enabling a kitchen sink approach, which this library does).” 18:18:02 ...- https://github.com/w3c-ccg/lds-jws2020/issues/4 18:18:35 q? 18:18:42 ...final thought: "Why is IANA recommending NIST Curves, why are node and go libraries supporting a “kitchen sink”… are we dancing around just saying “JOSE considered unsafe?” 18:18:59 ...Orie asks, "Are DID WG members the “people that know better”? 18:19:09 ack ChristopherA 18:19:11 q+ 18:19:47 ChristopherA: One of my problems with the term "crypto-agility" is that some people think that's what's really needed and others it scares to death. 18:20:01 ...but what really matters is future-proofing 18:20:21 q+ to comment on when cryptoagility goes bad... when you give someone a choice that's not capable of making the decision. 18:20:32 ...you need to pop-up one or two layers higher and try to future-proof the higher layers 18:20:52 ...I am pretty against the lower-layer "agility" 18:21:06 q+ to speak to complexity. 18:21:13 ...as the co-author of SSL, I don't even trust my own work. I need to get it peer reviewed. 18:21:25 ...I don't believe there's anyone in this WG that qualifies at that level. 18:21:35 ack selfissued 18:21:37 ...we need to get the best advice we can from the best people 18:22:14 selfissued: I am obviously biased in this regard. We did design JOSE to support cryptographic agility. 18:22:26 ...but there are both engineering and political factors. 18:22:46 ...there are different requirements in the U.S. and Russia and China. 18:23:19 +1 to creating profiles for limiting agility appropriately and for interop... which specific verification methods enable 18:23:24 ...which is why there are particular profiles whose purpose is to constrain JOSE to a specific set of recommended algorithms that have been vetted by actual cryptographic experts 18:23:44 ...if you use those constrained sets of algorithms, using JOSE should be fine 18:24:24 ...I have commented about having us profile JOSE but we will still see other nations use their own sets of algorithms 18:24:24 ack manu 18:24:24 manu, you wanted to comment on when cryptoagility goes bad... when you give someone a choice that's not capable of making the decision. and to speak to complexity. 
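[Editorial note: an illustrative, library-free sketch of the mitigation for the "alg": "none" class of JWT attacks Orie quotes on slide 60, and of the point above that algorithm names should be communicated and not assumed: check the JOSE protected header against an explicit allow list before any verification. The allow list shown is only an example profile.]

```typescript
// Reject a compact JWS/JWT unless its "alg" is on an explicit allow list.
const ALLOWED_ALGORITHMS = new Set(["EdDSA", "ES256"]); // deliberately small example profile

export function assertAllowedAlg(compactJws: string): string {
  const [protectedHeaderB64] = compactJws.split(".");
  // base64url -> base64, then decode the protected header JSON
  const b64 = protectedHeaderB64.replace(/-/g, "+").replace(/_/g, "/");
  const header = JSON.parse(Buffer.from(b64, "base64").toString("utf8")) as { alg?: string };
  if (!header.alg || header.alg === "none" || !ALLOWED_ALGORITHMS.has(header.alg)) {
    throw new Error(`Algorithm not permitted: ${header.alg ?? "(missing)"}`);
  }
  return header.alg; // only now hand off to a vetted JOSE library for verification
}
```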
18:25:02 manu: RE crypto-agility being "good" or "bad", I don't think anyone is saying we shouldn't listen to the experts 18:25:21 ...I agree we should be using recommendations from IETF experts 18:25:46 ...and we should not be creating our own algos or doing our own vetting that requires experts 18:26:01 ...we already have too much optionality in front of us, and we should try to limit it 18:26:19 ...in order to reduce the attack surface and the complexity of doing the analysis 18:26:20 q+ 18:26:34 ...it also increases interop 18:27:02 ...the biggest mistake that has been made is "giving options to people who don't know how to decide between the options" 18:27:13 ...that's the problem with many common cryptosuites 18:27:24 ...developers don't know what to choose 18:27:36 ...this can be avoided by narrowing those choices 18:27:43 +1 to limiting where possible 18:27:55 allow list / deny list 18:27:57 ack jonathan_holt 18:28:18 jonathan_holt: Clarifying - is the problem white-listing or blacklisting algos? Or shouldn't that be done via governance models? 18:28:34 Orie: slide 62 18:29:00 ...How is that stuff in a did document used? 18:29:01 Reminder: Try not to use needlessly racial language... use more accurate language -- allow / deny ... not white / black. 18:29:30 ...Verify Digital Signatures - https://w3c-ccg.github.io/security-vocab/#assertionMethod 18:29:42 ...https://www.iana.org/assignments/jose/jose.xhtml#web-signature-encryption-algorithms 18:29:58 ...How much of JOSE will we actually support? 18:30:05 ...slide 63 18:30:21 ...Reminder that PGP exists... 18:30:33 ...which is why we need support for plain "public keys" 18:30:37 ...slide 63 18:30:53 ...Reminder that Minimal Cipher exists... 18:31:12 ...slide 64 18:31:29 ...slide 64 18:31:44 ...Reminder that DID Comm is being built (at DIF) 18:31:59 ...slide 66 18:32:22 ...Encoding a key type and purpose in the name and verification relationship. 18:32:43 ...the point is that the key type includes the purpose and name 18:32:57 ...slide 67 18:33:09 ...Json Web Key 2020 Proposal 18:33:23 ...W3C CCG will maintain a JSON-LD Signature Suite which documents how to use JOSE with DIDs… the key representation will support JWA / JWS / JWT / JWE. 18:33:36 ...Keep the JSON-LD side of working with JOSE as simple as possible.... While actually supporting interoperability on both fronts. 18:34:21 ...can this approach work for support both JOSE and JSON-LD in an easy-to-implement way 18:34:45 ...The suite will provide better guidance than IANA does, on what “recommended” means. 18:34:54 ...The suite will support the needs of “Pure JSON” / “JOSE only” folks. 18:35:04 ...JOSE features that are not documented / contributed to will NOT be included. 18:35:13 ...The suite will be year timestamped, and updated as the security landscape changes. 18:35:22 ...The DID WG will reference this suite, as it does for others, via the spec registries. 
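[Editorial note: an illustrative DID document fragment, written here as a TypeScript object, of roughly what the Json Web Key 2020 proposal on slide 67 aims at: a verification method typed JsonWebKey2020 carrying a publicKeyJwk, referenced from a verification relationship. The identifiers and key material are fabricated placeholders.]

```typescript
const didDocumentFragment = {
  id: "did:example:123",
  verificationMethod: [{
    id: "did:example:123#key-1",
    type: "JsonWebKey2020",
    controller: "did:example:123",
    publicKeyJwk: {
      kty: "OKP",
      crv: "Ed25519",
      x: "...base64url-encoded-public-key...", // placeholder, not a real key
    },
  }],
  // the verification relationship ("key purpose") references the key by id
  assertionMethod: ["did:example:123#key-1"],
};
```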
18:35:29 ...slide 68 18:35:45 ...Encoding a key purpose in verification relationship 18:36:25 ...one of the major criticisms of JOSE is that the key representations are easy to screw up 18:36:53 ...there's also controversy about some of the curves supported by JOSE 18:37:34 ...this example also does not follow the best practice of the previous slides 18:37:48 ...slide 69 18:38:03 ...Next steps for JOSE 18:38:13 ...https://docs.microsoft.com/en-us/microsoft-edge/dev-guide/windows-integration/web-authentication#authenticate-your-user 18:38:23 ...WebAuthN uses `publicKeyJwk` … but we’ve seen almost 0 contribution on this front from DID WG members…. It’s still not even supported in CCG vocabulary.... 18:38:34 ...https://github.com/microsoft/VerifiableCredentials-Crypto-SDK-Typescript/issues/12 18:38:49 ...Does this make sense given the near complete lack of support for JOSE today? 18:39:20 ...why are we seeing libraries like this show up but not making it clear for how this should be composed 18:39:30 ...Who is going to do the work to make JOSE and DIDs work together? 18:39:43 ...slide 70 18:39:47 q+ 18:39:57 +1 for multicodec / multibase! :) 18:40:01 ...Is the future Multicodec / Multibase? 18:40:18 ...Compact double clickable string and binary representations. Friendlier towards other programing languages (not everyone owns a browser) Growing adoption within the blockchain ecosystem Registries are easier to update quickly and safely Already used to describe bls12_381-g1 and g2 for JSON-LD ZKPs Key representations less of a foot gun...what does “jwk.d” do? Fingerprint algorithms less of a foot gun… JWK has no canonical representation.[CUT] 18:40:39 ...Sidetree forced to rely on JCS as well. NIST Curves aren’t even registered… (is this a good thing?) 18:40:50 ...Is base58 the “defacto” standard key representation for DLT keys today? 18:41:35 +1 for multibase/multicodec although keys in IPFS are moving to base36, which is in the table 18:42:23 ...Multicodec / Multibase does not support the NIST curves -- but for government work they are required 18:42:51 ...we should all be thinking about things that are more future-facing 18:42:57 q+ 18:42:59 ack wayne 18:43:13 q+ 18:43:18 wayne: Are we only considering JOSE? There are successors. Should we consider them? 18:43:28 ack selfissued 18:43:29 q+ to support the proposal with some concerns -- aggressive reduction of alg agility - once choice for each curve type... aggresively ensure that 'd' isn't allowed in DID Documents... concern about name... verification key vs. just key. 18:43:34 ack selfissued 18:43:37 Orie: We will need champions for them. 18:44:12 selfissued: Tony Nadalin, relative to mobile driver's license work, has been working on zero-knowledge proof support for JOSE 18:44:13 ack markus_sabadello 18:45:07 markus_sabadello: mention an idea that a resolver could perform the job of transforming between key representation types 18:45:21 ...that could return keys to the client in the form the client prefers. 18:45:31 ...but someone has to do the work 18:45:36 q+ 18:45:39 ack manu 18:45:39 manu, you wanted to support the proposal with some concerns -- aggressive reduction of alg agility - once choice for each curve type... aggresively ensure that 'd' isn't allowed in 18:45:42 ... DID Documents... concern about name... verification key vs. just key. 
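[Editorial note: an illustrative sketch of the multibase/multicodec approach raised on slide 70: a leading 'z' means base58btc, and the decoded bytes begin with a multicodec varint identifying the key type (0xed 0x01 for ed25519-pub). Assumes the "bs58" npm package; the function name is made up.]

```typescript
import bs58 from "bs58";

// Decode a multibase (base58btc) string carrying a multicodec-prefixed Ed25519 public key.
export function decodeMultibaseEd25519(multibaseKey: string): Uint8Array {
  if (!multibaseKey.startsWith("z")) {
    throw new Error("Only base58btc ('z') multibase strings are handled in this sketch");
  }
  const bytes = bs58.decode(multibaseKey.slice(1));
  if (bytes[0] !== 0xed || bytes[1] !== 0x01) {
    throw new Error("Not an ed25519-pub multicodec prefix");
  }
  return bytes.slice(2); // the raw 32-byte Ed25519 public key
}
```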
18:46:09 manu: wanted to speak in general support of the JSON Linked Data crypto suite 18:46:25 A bit outdated experiments on transforming public key representations during DID resolution: https://hackmd.io/XmL-Bjh5TdqV4fj6nwdPEQ 18:46:26 ...those algos are meant to come out every 3-5 years 18:47:39 ...in getting support for JSON Web Key into the DID spec, we could remove support for older formats 18:47:40 ack kdenhartog 18:48:20 kdenhartog: Two things of note. 1) when implementing did:key:, had to do a mathematical conversion between the two forms 18:48:21 All, please be brief with comments. At 2-3 minutes per comment we only have time for 4-6 people talking and nothing else. 18:50:35 Orie: slide 71 18:50:48 ...Recommendations 18:50:58 ...Recommend we allow base58 key representations to be valid for all key types, in addition to JWK 18:51:01 q+ 18:51:14 ...Recommend we include disclaimers about JOSE and algorithmic agility in the Security Considerations section of the DID Core Spec. 18:51:20 q+ 18:51:31 ...this will be necessary to get support for JOSE 18:51:54 ...Recommend we work together to ensure that Json Web Key 2020 is usable. Meets the needs of “Pure Json”, legacy crypto, NIST / FIPS crypto… that some backward facing support for DID interoperability exists. 18:52:08 ...Recommend the did spec registries provide some indication of security implications / independent vendor implementations for registered crypto. Ideally a green, yellow, red scale. 18:52:29 The best way to see what I'm after with this is in this comment: https://github.com/w3c-ccg/lds-jws2020/issues/11#issuecomment-642308779 18:52:35 ...we are having real trouble to get everyone to contribute to even one repo. Let's work together to solve this. 18:53:27 Who will update green/yellow/red over time? 18:53:57 ...Orie recommends trying to find a way to apply warning or recommendations to the DID Specification Registries 18:54:09 ack jonathan_holt 18:54:20 jonathan_holt: asked about base58? 18:54:46 q+ 18:54:53 Orie: it is the multi-codec format supported by Protocol Labs 18:55:09 jonathan_holt: Would like to see the same 18:55:36 +1 to "most common decision might not be the right decision" 18:55:48 Continue for maybe 5-10 minutes 18:55:55 Aww I like both 18:56:02 ...is concerned about the green/yellow/red scale for crypto algorithms/suites as it can be very difficult to make lasting recommendations 18:56:04 I think move on, but not a huge deal 18:56:10 ack selfissued 18:56:22 brent: cut off of discussion will be at 10 after the hour 18:57:00 selfissued: As far as IETF is concerned, there is already "green/yellow/red" in the registry (by other labels). 18:57:16 ...this is informed by a group of experts within the IETF. 18:57:38 ...so the extent that we want those signals in the DID Specification Registries, we should use those as the defaults 18:58:04 ...second point is one of clarification: Mike is willing to help do the JSON Web Key 2020 work if it's done in this WG. 18:58:15 It's not "let's use what's in the registries"... it's "let's limit the things in the registries to a smaller subset that's easier to do a security analysis on" 18:58:20 Orie: We have some of this work going on here but need to do more. 18:58:59 ...the #1 takeaway from this presentation should be that this work should be done in this WG and that we need more focus on it 18:59:08 ...we just need to figure out how to do it 18:59:09 Translation: Not enough people are helping Orie do this work, and people need to step up and do the work. 
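[Editorial note: recommendation 1 above would let the same public key appear either as publicKeyBase58 or as a JWK. For Ed25519 keys that is, to the editor's understanding, a re-encoding of the same raw 32 bytes (base58btc vs. unpadded base64url) rather than the harder curve conversion (e.g. Ed25519 to X25519) kdenhartog mentions. A hedged sketch, assuming the "bs58" npm package:]

```typescript
import bs58 from "bs58";

// Re-encode an Ed25519 publicKeyBase58 value as the "x" member of an OKP JWK.
export function base58ToJwk(publicKeyBase58: string) {
  const raw = Buffer.from(bs58.decode(publicKeyBase58));
  if (raw.length !== 32) {
    throw new Error("Expected a 32-byte Ed25519 public key");
  }
  const x = raw
    .toString("base64")                        // standard base64...
    .replace(/\+/g, "-").replace(/\//g, "_")   // ...made URL-safe...
    .replace(/=+$/, "");                       // ...and unpadded (base64url)
  return { kty: "OKP", crv: "Ed25519", x };
}
```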
18:59:12 ack ChristopherA 18:59:12 https://github.com/BlockchainCommons/Research/blob/master/papers/bcr-2020-003-uri-binary-compatibility.md#comparison 18:59:28 selfissued: feel free to backchannel me if you want my eyes on something quickly 19:00:07 ChristopherA: The link I just posted is about Bitcoin's usage of base58. The primary motivation was length. 19:00:21 ...it is not a standard, and there are multiple versions of it out there. 19:00:31 ...BTCR does not use it at all. 19:00:41 ...Bitcoin Core uses hex 19:01:21 ...what Blockchain Commons has been doing is just using the binary form and using that with CBOR 19:01:38 ...CBOR is an international standard 19:02:17 ...the reasons for IPLD and other codecs were having to tackle this was because we didn't have tools like CBOR 19:02:34 https://www.iana.org/assignments/cose/cose.xhtml 19:02:52 ...so we could go get one or two-byte codes for these 19:03:08 zakim, close the queue 19:03:08 ok, brent, the speaker queue is closed 19:03:18 Orie: part of the reason you see base58 is because people have done the work on it 19:04:06 ...the main point of this presentation is that there are lots of people using JOSE but little support for it in this WG. With base58, there is a lot of work around it in the WG but little format support for it. 19:04:10 We've begun figuring what key formats we need CBOR tags for: https://github.com/BlockchainCommons/Research/blob/master/papers/bcr-2020-006-urtypes.md#registry 19:04:38 ...slide 72, Issues 19:04:47 ...https://github.com/w3c/did-spec-registries/issues/66 19:05:21 present+ identitywoman 19:05:36 ..."Add transform-keys=jwks parameter for use with OIDC SIOP" 19:05:42 ...needs support 19:06:00 ...https://github.com/w3c/did-spec-registries/issues/46 19:06:57 ...Add JsonWebKey2020 to allow for algoright agnostic key usage in DID documents" 19:07:35 ...this doesn't apply to the DID Core spec but still needs support 19:08:07 ...https://github.com/w3c/did-core/issues?q=is%3Aissue+is%3Aopen+jose 19:08:10 +1 looking for others to comment on these topics 19:08:24 It's often Orie and I agreeing in silence 19:08:34 ...there are a lot of issues tagged JOSE 19:09:02 ...my hope is that if we can get collaboration on these issues in DID Specification Registries, then we can go close a bunch of these issues 19:09:14 ...thank you to everyone 19:09:18 zakim, open the queue 19:09:18 ok, brent, the speaker queue is open 19:09:35 ...#1 takeaway is please contribute to these issues around JOSE and key representations in DID Core 19:09:37 Topic: working session 19:09:48 subtopic: registries 19:10:15 brent: Should we require principled requirements for inclusion of properties? 19:10:25 q+ to provide some background for "principled requirements"... 19:10:30 ...and What registry instructions are needed? 19:10:37 ack manu 19:10:37 manu, you wanted to provide some background for "principled requirements"... 
19:10:42 q+ 19:10:47 https://w3c.github.io/did-spec-registries/#the-registration-process 19:11:23 manu: this is a link to the current list of issues for the registration process 19:12:09 ...the issue is an attack for the registries that the name of an extension can be a personal attack 19:12:26 q+ 19:12:27 ...IETF has a solution in the form of expert review 19:12:52 ack selfissued 19:13:11 ...but if we have this, then there's the counter-charge of "a centralized judge" that's a gatekeeper 19:13:27 q- 19:13:43 q+ 19:14:01 selfissued: In the Amsterdam F2F, the consensus was that the registry would be used to support interoperability across representations. 19:14:18 ...have we been requiring that each extension supports that? 19:14:32 ack Orie 19:14:43 Orie: We've seen almost no contributions to the DID Specification Registries since the F2F 19:15:13 ...that makes it an almost intractable requirement that is preventing people from registering properties 19:15:14 q+ 19:15:23 q+ 19:15:37 ack burn 19:15:42 ...so my concrete recommendation is to only require JSON-LD and not require JSON-only or CBOR translations or testing 19:15:52 q+ 19:16:41 also, we don't need JSON Schema to have "pure json" 19:16:59 its actually easier if we just have "Pure JSON" use "JSON-LD". 19:16:59 ack jonathan_holt 19:17:04 burn: Clarifying that the properties defined in DID Core that need JSON-only and CBOR representation definitions, those people who know how to do it must get in there and do it. 19:17:16 +1 to Orie's "Council of Elders" approach as a "big red button" when things get out of hand... ideally, objection is kicked up to DID Maintenance WG or W3C CCG. 19:17:48 @self-issued, which document did secp256k1 for COSE get in? 19:17:52 jonathan_holt: It should be pretty easy to do the mapping to CBOR, and I can do it 19:17:54 ack selfissued 19:18:15 and to be clear I'm not an expert! 19:18:18 q+ 19:18:29 which i'm not keen on doing it 19:18:35 selfissued: I agree with Manu that we need to have a set of designated experts. You can't write into the registration instructions to prevent all the stupid ways people will try to abuse it. 19:18:37 q+ 19:18:51 +1 mike with respect to designated experts for "Expert Review" 19:19:11 ack kdenhartog 19:19:15 ...in the short term, editors should be the designated experts, and then we should nominate others. 19:19:18 I believe this could be useful. 19:19:20 ...Mike is willing to be one. 19:19:42 ack drummond 19:19:54 scribe+ manu 19:20:25 Just want to say that I'd prefer that we not set the requirement that expert review doesn't mean that all proposed requests must come from a WG. With JOSE IANA registries this has made it difficulty for me to get xchacha registered. 19:20:40 drummond: I want to second the idea that we do need expert/community review... the "big red button" method -- in general, process is straightforward, but if someone attempts to abuse the process, spam is easy to process... a dangerous extension, requires expert review... We need it, as an Editor happy to do it between now and when spec gets out. 19:20:42 Dangerous extensions are another reason to have experts. 19:20:46 Expert Review, Expert Review, Expert Review! 19:20:53 drummond: Spec registries should include process. 
19:21:02 scribe+
19:21:09 For instance, an extension that uses "alg":"none" for "signing" is dangerous, and needs human intervention
19:21:32 We should expect the registry gatekeepers to have political interests, and design in defense against that.
19:21:37 q+
19:21:37 brent: believes we need a mechanism for how proposed additions to the DID Specification Registries can be reviewed
19:21:54 I will not add your feature for you.
19:21:59 +1 to that ^
19:22:01 you must open PRs ; )
19:22:22 q+
19:22:24 ...I don't think it should be expected that the current editors be the ones to add support for properties in all representation formats
19:22:43 q+
19:22:54 zkaim, close the queue
19:22:56 ...but those who want to see support for those representations should do the work.
19:23:01 zakim, close the queue
19:23:01 ok, brent, the speaker queue is closed
19:23:10 ack wayne
19:23:29 Orie: I agree that a "Council of Elders" approach or similar type of review is needed, and that we need to define how it can work to avoid bias by the experts
19:23:36 ack jonathan_holt
19:23:46 q+ to say we don't know yet...
19:23:48 wayne: Wanted to say that the CCG is interested in assisting with this
19:24:07 jonathan_holt: What is the CBOR contribution that is needed?
19:24:09 Please start in github issues, seek feedback
19:24:27 manu: it depends on what is needed by the CBOR portion of the spec
19:24:27 and once it's clear that people have given you feedback, make a PR
19:24:44 Please don't just open a PR without gathering feedback on issues first.
19:24:55 ack selfissued
19:25:00 jonathan_holt: just trying to use vanilla tags, so hopefully it shouldn't be much work
19:25:41 subtopic: ethereumAddress
19:25:54 selfissued: all the registries that he's involved in require that multiple experts be appointed, and that any experts who have a conflict of interest with regard to a proposed extension must abstain.
19:26:04 brent: taking the rest of the time now
19:26:18 q+
19:26:20 topic: ethereumAddress
19:26:35 q-
19:26:43 https://github.com/w3c/did-spec-registries/pull/73
19:26:55 ...please review, this is important
19:26:59 q+ can we have 1-2 weeks and then feature freeze :)
19:27:00 Topic: can we declare feature freeze?
19:27:26 brent: can we declare feature freeze now?
19:27:36 Can we have 1-2 weeks and then feature freeze?
19:27:36 Note that ethereumAddress, for example, has already been proposed, along with the various key approaches, etc.
19:27:43 drummond: 2 weeks
19:27:44 +1
19:27:49 I am hoping a little longer
19:27:49 +1 for 2 weeks
19:27:52 +1
19:27:53 give people a heads-up ... feature freeze in 2 weeks, get your stuff in!
19:27:55 I have 2 PRs to submit on properties
19:27:55 +1
19:28:39 Which repo?
19:29:04 brent: let it be known, you have TWO WEEKS - by end-of-day Pacific Time on July 15th - we will no longer accept new features
19:29:08 https://github.com/w3c/did-core
19:29:15 ...this is your FINAL WARNING
19:29:17 I just e-mailed registry instructions language about dealing with conflicts of interest
19:29:20 topic: tpac
19:29:34 brent: TPAC will be fully virtual this year
19:29:58 ...October 2020, format still being worked out
19:30:06 Great job Chairs!
19:30:10 rrsagent, draft minutes
19:30:10 I have made the request to generate https://www.w3.org/2020/07/01-did-minutes.html ivan
19:30:13 ...we will get you more details
19:31:07 Thank you to the Chairs.
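For the "vanilla" JSON-to-CBOR mapping mentioned in the registries discussion above, a rough sketch follows; it assumes the cbor2 Python package, and the DID document below is an illustrative example, not an entry from the registries:

    # Sketch of encoding an example DID document as plain CBOR (maps, arrays, and
    # text strings only -- no custom tags), assuming the `cbor2` PyPI package.
    # The DID and key values are illustrative, not taken from the registries.
    import json
    import cbor2

    did_doc = {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": "did:example:123456789abcdefghi",
        "verificationMethod": [{
            "id": "did:example:123456789abcdefghi#keys-1",
            "type": "Ed25519VerificationKey2018",
            "controller": "did:example:123456789abcdefghi",
            "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV"
        }]
    }

    encoded = cbor2.dumps(did_doc)          # plain CBOR, no tags required
    assert cbor2.loads(encoded) == did_doc  # lossless round trip for this document
    print(len(json.dumps(did_doc)), "bytes as JSON,", len(encoded), "bytes as CBOR")

Binary key material could additionally be carried as CBOR byte strings rather than base58 or base64url text, which is where registered tags or COSE key structures would come into play.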
19:31:08 Ouch, I wanted to propose representations and cose
19:31:57 zakim, end meeting
19:31:57 As of this point the attendees have been ivan, rhiaro, manu, adrian, burn, phila, wayne, ChristopherA, jonathan_holt, brent, justin, phila_, dlongley, selfissued, orie, dmitriz,
19:32:00 ... JoeAndrieu, chriswinc, drummond, oliver_terbu, markus_sabadello, justin_r, Eugeniu_Rusu, dirk, kdenhartog, agropper, identitywoman
19:32:00 RRSAgent, please draft minutes
19:32:00 I have made the request to generate https://www.w3.org/2020/07/01-did-minutes.html Zakim
19:32:02 I am happy to have been of service, ivan; please remember to excuse RRSAgent. Goodbye
19:32:06 Zakim has left #did
19:33:00 rrsagent, bye
19:33:00 I see no action items