15:54:15 RRSAgent has joined #did
15:54:15 logging to https://www.w3.org/2021/02/09-did-irc
15:54:17 Meeting: DID WG Telco
15:54:17 RRSAgent, make logs Public
15:54:17 Chair: burn
15:54:17 Date: 2021-02-09
15:54:17 Agenda: https://lists.w3.org/Archives/Public/public-did-wg/2021Feb/0002.html
15:54:17 ivan has changed the topic to: Meeting Agenda 2021-02-09: https://lists.w3.org/Archives/Public/public-did-wg/2021Feb/0002.html
15:54:18 please title this meeting ("meeting: ..."), ivan
15:54:41 burn has joined #did
15:55:30 present+
15:55:51 TallTed has joined #did
15:57:55 jonathan_holt has joined #did
15:58:07 present+
15:58:12 brent has joined #did
15:58:18 present+
15:59:18 present+
16:00:04 justin_r has joined #did
16:00:08 present+
16:00:12 present+
16:00:41 present+
16:00:56 agropper has joined #did
16:01:06 scribe+
16:01:09 present+
16:01:14 present+
16:01:16 Topic: Agenda Review, Introductions, Re-introductions
16:01:22 burn: today's agenda is very short
16:01:24 markus_sabadello has joined #did
16:01:29 ... special topic call reminder
16:01:36 zakim, who is here?
16:01:36 Present: burn, ivan, TallTed, brent, shigeya, justin_r, rhiaro, manu, agropper
16:01:39 On IRC I see markus_sabadello, agropper, justin_r, brent, jonathan_holt, TallTed, burn, RRSAgent, Zakim, tzviya, phila, chriswinc, ivan, dlehn1, Travis_, shigeya, hadleybeeman,
16:01:39 ... ChristopherA, bigbluehat, wayne, cel, dlongley, manu, rhiaro
16:01:39 present+
16:01:41 ... determining the exit criteria, and deferring of issues without PRs
16:01:45 present+ jonathan_holt
16:01:55 present+ markus_sabadello
16:01:55 ... would anyone like anything else on the agenda?
16:01:55 present+
16:02:09 present+ identitywoman
16:02:16 manu: I'm sure we're going to be able to talk about what the work mode is after today?
... Rapid editorial cycling
16:02:16 q+
16:02:28 burn: let's see how the CR exit conversation goes
16:02:35 ack phila
16:02:35 selfissued has joined #did
16:02:39 present+ justin_r
16:02:46 phila: joe and I have been doing our best to make sure the UCR is ready at the same time as the Core
16:02:50 present+ orie
16:02:54 present+
16:02:59 ... I'm doing a bit of final housekeeping, but in terms of the content that is all done, thanks to people who have helped us recently
16:03:01 present+ dlongley
16:03:07 ... I'm hoping the group will want to publish that at the same time
16:03:21 burn: the chairs will make sure that a vote on that gets into the agenda in future
16:03:35 ... we will need a link to review, and that we're choosing to publish
16:03:42 Topic: Special Topic Call
16:03:43 present+
16:03:55 burn: the special topic call this thursday at noon eastern, 6pm CET
16:04:00 ... 9am pacific
16:04:10 present+ chriswinc
16:04:17 ... will be just like others, to work on PRs, because we are closing on our issues today, does not mean we will not have PRs that are still being edited and in progress
16:04:25 ... that time is available for anyone who needs to come and get help on PRs they are working on
16:04:26 present+ agropper
16:04:37 Topic: CR Exit Criteria
16:04:50 drummond has joined #did
16:04:58 burn: brief overview of this topic, then will ask manu to talk about the spec and interesting questions for this group
16:04:58 present+
16:05:06 JoeAndrieu has joined #did
16:05:06 ... in general, to enter CR you need to have a document that is complete enough to be implemented
16:05:10 ... and that you believe it's complete
16:05:16 ... you don't enter CR until you believe you're done
16:05:21 present+
16:05:21 ... the goal of CR is to get implementation reports
16:05:39 ... that's the way we demonstrate that we have a specification that is not only implementable but implementable by multiple independent parties, ideally interoperably
16:05:54 ...
if only one person implements it then that's no indication that someone else would come to the same conclusions about how to implement it
16:06:03 ... when we enter CR we are required to state what our exit criteria are
16:06:09 ... there are some standard ones that are usually used
16:06:19 ... normally two independent implementations of each feature is pretty common for most groups
16:06:29 ... we have to state them when we enter CR and meet them when we exit CR
16:06:37 ... if we don't meet them that means more discussion, maybe another CR
16:06:46 ... the topic now is to discuss what are the exit criteria that we wish to require for the DID Core spec
16:07:07 ... some general guidelines - there are, in this conversation what may happen is if we get suggestions that the chairs or editors believe the w3c management and AC are not likely to accept we will let you know
16:07:17 q?
16:07:19 ... we can set theoretically any criteria we want, we will be challenged on anything strange
16:07:19 q+ to confirm timing of special call ... agenda email for said 6pm ET not 6 pm CET
16:07:25 ack TallTed
16:07:25 TallTed, you wanted to confirm timing of special call ... agenda email for said 6pm ET not 6 pm CET
16:07:28 kristina has joined #did
16:07:37 present+ JoeAndrieu
16:07:48 TallTed: call time? said thursday at 6pm
16:08:06 brent: the subject line of the email was incorrect, the body of the email was correct, I sent out a corrected email subject line yesterday
16:08:19 ... thursday noon eastern
16:08:28 present+ charles_lehner
16:08:37 brent: dang it
16:08:44 ... will send an update
16:08:52 burn: Thursday at noon Eastern Time
16:08:56 ... as it usually is when it's on thursday
16:09:02 q?
16:09:13 present+ Kristina
16:10:15 q+
16:10:20 q+
16:10:32 ack markus_sabadello
16:10:50 markus_sabadello: when it comes to the exit criteria and we say we need at least two independent implementations of every feature, I assume we're doing that
16:10:58 ...
we may want to define what it means to support a 'feature'
16:11:19 ... one case that always comes to mind is: is it sufficient to have an implementation that creates DID documents which have certain properties in them or do you also have to use them in your implementation
16:11:26 ... I'm thinking about alsoKnownAs or equivalentId or some of these
16:11:39 ... if I write a DID method implementation that can put these properties in a DID doc maybe the bar is not high enough
16:11:46 ... to say the feature is supported you'd need to write code that does something with that
16:11:56 ... maybe we should have that requirement
16:12:10 ack manu
16:12:11 ... that the semantics and the functionality of a property must be supported and used and it's not sufficient to just have a DID doc that just has them in it
16:12:19 manu: to build off that, let me try and suggest what we do
16:12:27 ... I think that may address markus' concern
16:12:31 ... there are many ways to do candidate rec
16:12:36 ... some groups are fairly lax, others tend to be stringent
16:12:49 ... what we have tried to do in this group is make sure that we fall a little on the stringent side
16:13:08 q?
16:13:08 ... our test suite that orie has put together tests normative statements in the document, there are real did docs involved, it is expected that there are implementations that generate these DID docs
16:13:18 ... in general the path we've been on is to provide a test suite that tests real implementations
16:13:24 ... vs people just saying they're going to implement a feature
16:13:41 ... the minimum bar is the two independent interoperable implementations per feature in the spec
16:13:51 ... that means that when we have a normative statement in the spec, we are expected to have a test in the test suite for it
16:14:04 ... and we expect to see two independent implementers, people that don't coordinate with each other, don't share the same codebase
16:14:11 ...
two of these implement the same feature in a way that passes the test suite
16:14:18 q+
16:14:21 ... generally speaking that is what we're trying to do with the vast majority of the normative statements in the spec
16:14:32 ... and we have changed, modified, updated, the normative statements where this applies so they are testable
16:14:33 dbuc has joined #did
16:14:36 Orie has joined #did
16:14:36 present+
16:14:41 ... we've made several passes through the spec already to set us up to be able to do that
16:14:42 present+
16:14:47 ... we could set the bar higher and say three or four implementations
16:14:53 ... I don't think I know a WG that sets the bar that high
16:14:57 ... that's the base suggestion
16:15:11 ... we also have statements that are not machine testable, they have to be testable by a human being
16:15:17 ... these are having to do with DID method specs
16:15:38 ... we provided DID method authoring guidelines in the spec and tell people writing method specs they must have a privacy section, have a security section, must define how read, update, deactivate works, etc
16:15:46 q+
16:15:50 ... there is a class of statements that apply to did method specs and that's the other class of things
16:15:57 ... in general everything in our spec falls into those two categories
16:16:03 ... there is a third class of things that we could talk about
16:16:09 ... we probably shouldn't dwell on this too much
16:16:26 ... there are features that are new in the spec like nextUpdate and nextVersionId and canonicalId and things like that that are fairly new
16:16:32 ... it's not clear how many organisations are going to implement it
16:16:42 ... but there are people going that's really important, we really want it in the spec
16:17:09 ... they're marked as at risk right now, but we could say for those we're willing to keep it in the spec if we have 5 independent organisations say they are going to use the feature, they are going to implement it.
That's always a slippery slope, I suggest we don't take that route
16:17:22 ... but putting it out there as a way to keep things in the spec that people feel are important but they may not have the time to implement it
16:17:42 ... Three broad categories. I am suggesting we do for the machine testable statements, it's two independent interoperable implementations
16:17:56 ... for the things that are human testable, we keep them and make sure humans would have an easy time seeing if a DID method spec supports them
16:18:14 ack ivan
16:18:18 -1 to "planned to implement" being kept....
16:18:23 ... For the third class, I suggest we don't go down that path, but we could have a third class of exit criteria where we say it's okay for people to come forward and say they would implement it
16:18:39 ivan: what is an implementation? how do we define it? is it a method? an application that uses DIDs by relying on methods?
16:18:39 q+ to say "what is an implementation"
16:18:52 ... and the other, I don't understand, is what exactly interoperability means in this respect
16:19:01 ... if I have two DIDs on different methods, they are on different methods so I don't know how interoperability operates
16:19:08 ... if I have a combination of applications and methods then I can see what interoperability means
16:19:12 ack burn
16:19:13 ... that comes back to the previous question
16:19:47 burn: that category 3, that is a world of hurt and end of the world dragons
16:19:58 ... to go down that road.. a promise to implement is a failure because it does not demonstrate that the spec is implementable
16:20:00 q+
16:20:06 ... just because you say you are going to doesn't mean you actually can
16:20:11 ... or that you will implement in the same way as someone else
16:20:16 ... we do this phase for the specification, not for implementers
16:20:22 ...
we are testing the specification to ensure that it is implementable
16:20:33 ack manu
16:20:33 manu, you wanted to say "what is an implementation"
16:20:34 ... that third category doesn't give us any confidence whatsoever that something is implementable by multiple parties
16:20:41 manu: to answer "what is an implementation"
16:20:56 ... ivan is right DID methods don't necessarily interoperate with one another. The ecosystem isn't quite there yet.
16:21:02 ... what does the test suite test?
16:21:10 ... that when you produce a DID document that it follows the normative statements in the spec
16:21:26 ... that is from an interop perspective, we have theoretically at the ADM and at the representation layer
16:21:32 ... you can test that these statements match the spec
16:21:42 ... have you written software that is capable of producing a DID doc in a particular representation?
16:21:45 ... that's the DID doc tests
16:21:52 ... we have another set of tests for resolution and dereferencing
16:21:54 ... that's more of a work in progress
16:22:11 ... but it comes down to you serialize something and we test that serialization in a certain way in the test suite to make the DID resolution section testable
16:22:22 ... most of it boils down to have you written software that can produce a DID doc, the test suite consumes a DID doc
16:22:31 ... and applies the normative statements in the spec, that's all we're testing
16:22:36 ... the interop is at the data model and representation layer
16:22:41 q?
16:22:43 ack ivan
16:22:46 ivan: I am fine with that
16:22:51 ... but this must be written down
16:23:01 ... because the question is obviously something the director would ask
16:23:12 ... what we may want to do between now and CR is to have a skeleton of what our report will be
16:23:16 ... when we are at the end of CR
16:23:31 ...
that skeleton would include what you describe so it must be clear when we get to the report what we mean by implementation, interoperability, etc
16:23:33 q+ to describe how the test suite generates a report
16:23:33 ... this should not be hidden
16:23:58 ... I am not sure that dragon is such a dangerous dragon in this case because if my understanding is correct what we are talking about here are additional metadata elements on resolution
16:24:02 ... and on dereferencing
16:24:25 ... if that is correct then a metadata element is just a string, there is no testable statement to be done on there, the real question is whether it is a metadata element that has a real usage out there
16:24:54 ... if we introduce some sort of a measure, maybe it's 5 who say this metadata element is important for my type of implementation, this is the equivalent of standardizing a vocab where it's not a testable statement in the procedural sense
16:25:01 ... I'm not that worried about that one in this particular case
16:25:04 q?
16:25:06 q+ to speak to DID Resolution metadata, and how you can cheat the test suite.
16:25:07 ack orie
16:25:07 Orie, you wanted to describe how the test suite generates a report
16:25:19 Orie: to describe how the test suite works in its current form, it's evolving
16:25:33 ... you provide a configuration for your DID method that you want to test. The test suite runs on those json inputs
16:25:39 ... the json can be used to represent other representations
16:25:48 ... you do a count for normative statement coverage for the features you've provided
16:25:55 ... if you don't use the versionId param you don't show up as being counted for that
16:26:12 ... if there are multiple DID methods in the configuration that is submitted to generate the report there will be a counter for each feature and you can see which DID methods use that feature
16:26:24 ... it aligns to a certain degree with what ivan was saying regarding testability of metadata
16:26:36 ...
obviously vendors can lie about what features they are implementing vs what they are submitting in the test vectors
16:26:48 ... a number of vendors could say they are providing canonicalId and provide examples, but not actually be doing that
16:26:49 ack manu
16:26:49 manu, you wanted to speak to DID Resolution metadata, and how you can cheat the test suite.
16:26:54 ... we can't prevent that in the way the test suite is built today
16:26:56 We are using canonicalId right now, for the record
16:27:02 manu: ivan makes a very good point about some of the DID resolution metadata fields
16:27:17 ... several companies fall into that category, they're not going to implement a resolver this time around but would like to see those metadata values
16:27:21 ... dan mentioned that is a very slippery slope
16:27:29 ... dragons aren't big but there can be many of them and there's power in numbers
16:27:43 ... those type of things tend to eat up a lot of WG time whereas someone who implements a feature and can show code, that's a clear indication
16:27:46 ... vs I promise to implement this in the future
16:27:52 q+
16:27:56 ... I would very much like the group to not go down the "I promise to implement" route
16:28:05 ... we will burn up a lot of WG time discussing and talking
16:28:07 ... but there are good reasons for it
16:28:15 ... the DID resolution metadata fields are an example of that
16:28:26 ... What orie said, it is possible to cheat the test suite
16:28:31 ... the WG will be looking
16:28:34 ... don't feel like you can get away with that
16:28:34 -1 to "promise to implement"... but with the acknowledgement that vendors can "claim to support" a feature... and cheat...
16:28:44 q+ to talk about cheating
16:28:54 ... there are cases where there are companies that don't want to make their implementations public so you can never really know if they're running code, or generating artisanal DID docs
16:29:14 ...
but if we see a feature and all of those things that nobody can provide implementations for, there will be a very good argument to say sorry, we did not get interoperable implementations for this feature
16:29:22 ... be ready for that to happen to features that can't show implementations
16:29:37 ... there are 85 DID methods, 32 DID driver implementations, we should be able to provide two implementations for all the features in the spec
16:29:44 q-
16:29:50 ack ivan
16:29:51 ... Let's not do.. people will be watching and we will be looking to see where you implement the feature in code.
16:29:56 ivan: still on the dragons/cats
16:30:15 ... I agree with manu that if the only report is saying yeah this is something we will implement in two years because we like it, I would not accept that
16:30:27 ... however if there is a method who says I use that feature in my implementation, that should be enough for the metadata items
16:30:40 ... there is a big difference between the statement that "this is a feature I rely on in my application" and something which says "I may want to do that"
16:30:40 q+ to mention optional features analogy
16:30:45 ... the former should be okay, the latter should not be
16:30:54 -1 to "i use that feature".... +1 to "tests that show you use the feature".
16:30:56 ... As for the testing thing, what we have to be careful about, I agree with manu
16:31:09 ... we have to be careful to our external communication that when we are talking about testing here and making a test suite and a test report
16:31:18 ... our goal here is not to provide some sort of validation stamp on the implementations out there
16:31:22 ... they should not consider it that way
16:31:27 ... we are not providing authoritative tests
16:31:40 ... what we are doing is ask our dear friends implementers to help us finalise the spec and ensure the spec is okay
16:31:55 q+ to reiterate that we are testing the spec, not the implementations
16:31:56 ...
if you put it that way you can have more trust that they have no reason to lie, what would they gain by lying? they help us by being implementers, not to get any brownie points
16:31:58 ack burn
16:31:58 burn, you wanted to mention optional features analogy and to reiterate that we are testing the spec, not the implementations
16:32:12 burn: there's an analogy here to what I know has been done in a number of specs in the past
16:32:14 ... a notion of optional features
16:32:26 ... sometimes there are features that not everyone is going to implement and you want to make sure that someone is going to implement it
16:32:37 ... I know that I have been involved with specs that required one implementation of each optional feature
16:33:01 ... doesn't demonstrate interop, but features like that may not be ready, but someone is. There's not a good reason to say no, but you're going to need to have at least one implementation if possible
16:33:12 ... and the goal of the candidate rec is to test the *spec* not to test the implementations
16:33:19 ... our goal is to make sure the spec is implementable
16:33:22 ... that's the reason for everything
16:33:34 +1 we are testing the spec, not the implementations
16:33:37 q?
16:33:37 +1 for testing the spec not the implementations. This is important to not lose sight of.
16:33:38 ... when you talk about cheating, in the end what we're looking for is enough implementers who give us feedback that we can use to make sure the spec is as good as it can be
16:34:02 manu: I'll write the first proposal...
16:34:52 manu, some features are made up of more than one normative statement, surely? like MUST be a string and MUST be a date.. etc (for example)
16:35:32 manu: I thought about saying where each feature is defined as one or more normative statement, but then the question becomes what if you only pass a subset?
16:35:36 ivan: then you haven't implemented
16:35:39 +1 to 1 or more
16:35:45 manu: would people feel better about one or more?
16:35:46 +1 to 1 or more
16:35:49 +1 one or more
16:36:04 manu: changed. And the second one:
16:36:26 ... two or four .. this applies to DID methods, if it's human testable, there have to be four demonstrations that a DID spec has put that section in
16:36:35 ... eg. DID methods must provide a section detailing how the read function should work
16:36:43 ... we would have to be able to point to 4 DID method specs that did that
16:36:47 ... bar is higher because it's a human thing
16:36:48 q+
16:36:53 ack drummond
16:36:54 ... anyone object, or put the bar higher?
16:37:07 drummond: I'd never thought about testability of the human requirements for something like that
16:37:10 q+
16:37:22 ... how do we account for that? are we going to keep a list someplace?
16:37:23 manu: yes
16:37:26 drummond: a matrix?
16:37:27 manu: yep
16:37:43 q+
16:37:46 ... we will take every single human testable statement and a human being along with a bunch of other human beings will get together and link to every single spec that shows that people are implementing that normative statement
16:37:47 Manu just doesn't like human beings
16:37:50 ... (I hate this)
16:37:53 drummond: we have 80 odd?
16:37:56 manu: we just need to do 4
16:38:07 ack ivan
16:38:12 q+
16:38:22 ivan: what we are doing here is two different categories that we test, which is fine, because they are different
16:38:30 ... I don't see why we are making it a higher bar than for the mechanical ones
16:38:33 ... two is a general number
16:38:36 ... we can live with that
16:38:42 +1 to 2 not 4
16:38:42 ... I don't see the reason for having 4, let alone higher. What's the difference?
16:38:52 manu: we can change it to 2. we can lower the bar
16:39:04 ... the reason is because sometimes it's debatable. Did they really implement it? it's subjective
16:39:16 ... humans have many more bugs
16:39:21 ... I'm happy to lower it, that's the only reason
16:39:34 ...
we may come up with a list and tell everyone who has implemented a DID method to go and put the section in their DID method spec in there that applies
16:39:38 ... that might be the easier thing to do
16:39:41 ack phila
16:39:45 ... it should be a higher bar because humans are more fallible
16:39:53 phila: how deterministic are these tests?
16:40:04 ... can you give an example? if they are deterministic then 2 is enough. If it's a bit maybe, then 4
16:40:09 manu: maybe 30% fall into the latter
16:40:27 phila: there's no gain in making a rod for your own back.. don't make life harder for ourselves than it needs to be
16:40:47 ... if you and those doing this aren't confident that something has been done and that the letter and spirit of the spec has been done at least twice by different people who ideally don't know each other that's good enough for me
16:40:48 q?
16:40:50 q+
16:40:54 ack drummond
16:40:56 q+ to ask for objections to two
16:41:01 q+ drummond
16:41:03 ack manu
16:41:03 manu, you wanted to ask for objections to two
16:41:17 manu: would anyone object for the human testable bar to be 2 demonstrations of implementations?
16:41:19 drummond: I agree with that
16:41:26 burn: not hearing objections
16:41:29 manu: I put that proposal in ^
16:41:45 ack drummond
16:41:55 PROPOSAL: To exit the DID Core Candidate Recommendation phase, the DID WG will require two things: 1) for normative statements that are machine testable, at least two interoperable implementations per feature, where each feature is defined as one or more normative statement in the specification, and 2) for normative statements that are only human-testable, at least two demonstrations of implementation per feature, where each feature is defined as one or
16:41:55 more normative statement in the specification.
16:41:58 +1
16:42:00 +1
16:42:01 +1
16:42:02 +1
16:42:02 +1
16:42:04 +1
16:42:05 +1
16:42:06 +1
16:42:06 +1
16:42:07 +1
16:42:08 +1
16:42:09 +1
16:42:15 +1
16:42:20 +1
16:42:27 RESOLVED: To exit the DID Core Candidate Recommendation phase, the DID WG will require two things: 1) for normative statements that are machine testable, at least two interoperable implementations per feature, where each feature is defined as one or more normative statement in the specification, and 2) for normative statements that are only human-testable, at least two demonstrations of implementation per feature, where each feature is defined as one or
16:42:28 more normative statement in the specification.
16:42:53 manu: do we want anything else for the exit criteria? do we need anything more process wise?
16:43:01 ivan: from that point of view it's fine
16:43:21 manu: for the chairs, do we want to put a resolution down to say we're not going to do an "I promise to implement" category
16:43:28 burn: we don't need it, we have the requirement
16:43:32 manu: I think we're done then
16:43:37 q+
16:43:41 ack ivan
16:43:50 ivan: what do we do with the resolution metadata? under which category?
16:43:59 manu: it's very clearly machine testable, when this last PR goes in
16:44:02 ivan: then it's fine
16:44:24 burn: anything else on this topic?
16:44:25 Topic: Issue Deferring
16:44:55 burn: I want to remind the group that for a couple of months we've been saying to get your issues and PRs in
16:45:14 ... almost a month ago we warned that any moment now we were going to require any issues without PRs would be deferred, unless it's editorial
16:45:25 ... and two weeks ago today the chairs formally notified the group that today was the deadline for that
16:45:38 ... if there is an issue by the end of the day today that does not have a PR associated with it it will be deferred
16:45:42 ... this is the deadline
16:45:47 ...
no misunderstanding here
16:45:51 ... Hawaii time is fine..
16:46:23 Topic: working mode
16:46:46 manu: we're going to see a flurry of activity on the specs over the next 2 weeks
16:46:52 ... the editors are going to make multiple cleaning passes through the spec
16:47:03 ... they are largely meant to be editorial but we may find something that is normative
16:47:06 ... the working mode is going to change
16:47:12 ... no longer keeping things open for debate for 7 days
16:47:21 ... if the editors find an editorial change we will give 24 hours for feedback
16:47:22 ... that's the shortest
16:47:26 ... more than likely it will be multiple days
16:47:36 ... but when you see a PR hit over the next two weeks get your comments in immediately
16:47:56 ... you can view this as us getting together at a f2f, as an intense work stream activity, but everyone is supposed to be on call for the next 2 weeks with a 24 hour turnaround on feedback
16:48:01 ... what's going to happen is, in this order:
16:48:07 ... people will always rush and get some last minute PRs in
16:48:17 ... we're going to work as a group to get all of those PRs wrestled as quickly as we can
16:48:24 ... that is what the special topic calls are going to be for
16:48:28 ... ideally through the github PR mechanism
16:48:33 ... as quickly as we can we get those resolved
16:48:40 ... then the editors will make multiple passes from the top to the bottom
16:48:54 q+
16:49:00 ... you'll start seeing PRs in two categories - editorial changes, nothing major, and we are expecting people to respond within 24 hours if they disagree with it being editorial
16:49:07 ... they will be section by section, bite sized chunks
16:49:18 ... we will go all the way through the document, that'll take a couple of days
16:49:30 ... especially if people object to the change, that PR is going to languish
16:49:41 ... going through it once, and then again, and then again.. 5 or 6 reviewers lined up to do that
16:49:49 ...
once that is done, the normative statements in the spec will be frozen for CR
16:49:57 ... we are going to start in earnest implementing those tests in the test suite
16:50:03 ... we're going to expect people to be writing tests and submitting their implementations
16:50:09 ... this whole process will probably take until the end of Feb
16:50:10 q+
16:50:29 ... and at the end of Feb we'll do the transition call, which is chairs, staff and a subset of editors who meet with the w3c director to say we transition from our working draft to CR
16:50:38 ... we'll prove we've done everything we've needed, horizontal review, addressed all the issues
16:50:44 ... there's a big qa process there
16:50:54 ... then if it's approved, the spec is published as a candidate recommendation
16:50:56 ... any questions?
16:51:00 ack burn
16:51:13 burn: anyone who thinks that this sounds rushed
16:51:31 ... how things used to work before github... the expectation was that we get to the point where we think we're ready for CR, the editors go away for a week and just fix all this stuff
16:51:36 ... clean up, add links, clean up typos, they'd just do it
16:51:41 ... and present that to the group and say okay here it is
16:51:49 ... we're operating in a github world where people have an opportunity to comment
16:51:54 ... but understand that is the stage we're at now
16:52:08 ... really the editors need to do the non substantive cleanup work that they've been holding off on because it's a nuisance to do it and then have to maintain it
16:52:16 ... I hope that helps anyone worried about the 24 hours
16:52:23 ... that's just there in case you're concerned, you can follow along on the editorial changes
16:52:36 ... for those of you who are pretty comfortable with the spec, do your reviews quickly because we don't want to have to undo things
16:52:41 q+ to note a few last minute issue resolution expectations.
16:52:42 identitywoman has joined #did
16:52:45 ...
if you're someone who thinks the spec has been fine for a few weeks, it's probably still going to be fine
16:52:48 q?
16:52:49 present+
16:52:50 ack ivan
16:52:53 ivan: two things
16:53:11 ... one is the way you presented it manu sounded like by the CR transition you want all the tests to be present
16:53:25 ... that's not necessarily a requirement, it's nice, but it's fine if we go to the director with the request with a clear plan for the test suite
16:53:28 ... and tests will come
16:53:36 ... if there are already some that's fine, but we're not required to have all of them
16:53:39 yes, agree with Ivan.
16:53:40 ... let's not make it too hard on ourselves
16:53:52 ... the other point is actually, things have changed the last few years
16:54:06 ... today what happens is the CR transition request means filing an issue on a specific repo with a template
16:54:13 ... the chair and I have already looked at the template last week
16:54:32 ... all the information has to go there, and raise that issue and if that issue's contents are proper and well written and precise then there is no call
16:54:41 ... the issue will be commented on by the director to say it's fine
16:54:46 ... if there is a ceremony that means we have a problem
16:54:57 ... that means philippe and ralph cannot judge based on the issue and may ask questions
16:55:03 ... then they have a call, then we have a problem
16:55:04 q+
16:55:11 ... we don't talk about a meeting because we won't have a problem will we? :)
16:55:16 manu: +1
16:55:21 ... I didn't know that we don't have a call any more, that'll be faster
16:55:28 ack manu
16:55:28 manu, you wanted to note a few last minute issue resolution expectations.
16:55:49 manu: everything that is not in is going to be deferred, in general
16:55:56 ... there are some things that are horizontal review items that we may need to talk about
16:56:04 ... leaving that up to the chairs and staff to decide what to do with those
16:56:12 ...
but for any other issues assume they're going to be deferred
16:56:14 ... for v2
16:56:22 ... there are some PRs that are outstanding that I want to give a heads up on
16:56:33 ... not necessarily asking if people are objecting, an expectation of when they'll get in
16:56:42 ... there are multiple cbor and dag cbor prs that are waiting for jonathan's feedback
16:56:51 ... if jonathan_holt can provide feedback that would be greatly appreciated
16:57:03 ... one has to do with deterministic language, moving dag cbor into its own spec, marking cbor as at risk
16:57:04 q-
16:57:08 ... cbor as it stands right now is problematic
16:57:18 ... it has language in there that the group decided they didn't want in there and there's an open issue on how to address that
16:57:22 ... markus said he might be working on that
16:57:34 ... the only other one that's a fairly big set of changes is the DID resolution section
16:57:42 ... I want to confirm, Ted, Markus, Orie and Justin have reviewed that
16:57:59 ... I believe I've implemented everything, I'm unsure if the section as it stands right now is acceptable. Can you confirm? what are you feeling?
16:58:20 Orie: I will review in the next 10 minutes and either remove my request for change or approve or tell you I still want it to be changed
16:58:24 manu: that's it for the issues and PRs
16:58:34 ... if you have an issue assigned to you and you don't do a PR for it by the end of today it is going to be deferred
16:58:44 brent: great call
16:58:49 ... we have exit criteria for CR
16:58:58 ... a few people frantically working on PRs before the international date line times out this day
16:59:09 ... look forward to seeing you all on our special topic call 12pm ET this thursday
16:59:15 ... thanks all!
16:59:37 rrsagent, draft minutes
16:59:37 I have made the request to generate https://www.w3.org/2021/02/09-did-minutes.html ivan
16:59:42 zakim, end meeting
16:59:42 As of this point the attendees have been burn, ivan, TallTed, brent, shigeya, justin_r, rhiaro, manu, agropper, markus_sabadello, jonathan_holt, phila, identitywoman, orie,
16:59:45 ... selfissued, dlongley, chriswinc, drummond, cel, JoeAndrieu, charles_lehner, Kristina, dbuc
16:59:45 RRSAgent, please draft minutes
16:59:45 I have made the request to generate https://www.w3.org/2021/02/09-did-minutes.html Zakim
16:59:47 I am happy to have been of service, ivan; please remember to excuse RRSAgent. Goodbye
16:59:51 Zakim has left #did
17:00:11 rrs, make logs public
17:00:20 RRSAgent, make logs public
17:01:34 rrsagent, please excuse us
17:01:34 I see no action items
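[Editor's note: the exit-criteria counting that Orie describes above (per-method JSON configurations, a counter per feature, at least two independent implementations per the group's resolution) can be sketched roughly as follows. This is a minimal illustration only; the function names and the `method`/`features` field names are hypothetical and are not the actual did-test-suite API.]

```python
# Hedged sketch: tally feature coverage across DID method submissions
# and check the "two independent implementations per feature" bar.
from collections import defaultdict

def feature_counts(configs):
    """Count, per feature, how many distinct DID method submissions
    claim to exercise it (e.g. use of the versionId parameter)."""
    methods_per_feature = defaultdict(set)
    for config in configs:
        for feature in config.get("features", []):
            methods_per_feature[feature].add(config["method"])
    return {feature: len(methods) for feature, methods in methods_per_feature.items()}

def meets_exit_criteria(configs, minimum=2):
    """Per the group's resolution: a feature passes only if at least
    `minimum` independent implementations exercise it."""
    return {feature: n >= minimum for feature, n in feature_counts(configs).items()}

# Hypothetical submissions, in the spirit of the suite's JSON inputs.
configs = [
    {"method": "did:example", "features": ["alsoKnownAs", "versionId"]},
    {"method": "did:other", "features": ["alsoKnownAs"]},
]
print(meets_exit_criteria(configs))
```

A report generator along these lines also makes the "cheating" concern visible: the counts reflect only what submissions claim, so the WG still has to review whether the submitted test vectors come from running code.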