DID WG (Virtual) F2F, 2nd day — Minutes

Date: 2020-07-01

See also the Agenda and the IRC Log

Attendees

Present: Ivan Herman, Manu Sporny, Amy Guy, Adrian Gropper, Daniel Burnett, Phil Archer, Wayne Chang, Christopher Allen, Jonathan Holt, Brent Zundel, Justin Richer, Dave Longley, Michael Jones, Orie Steele, Dmitri Zagidulin, Joe Andrieu, Chris Winczewski, Drummond Reed, Oliver Terbu, Markus Sabadello, Eugeniu Rusu, Dirk Balfanz, Kyle Den Hartog, Kaliya Young

Regrets:

Guests:

Chair: Daniel Burnett, Brent Zundel

Scribe(s): Phil Archer, Drummond Reed, Manu Sporny

Content:


Session slides: https://tinyurl.com/y87mqtlf

Brent Zundel: opens the meeting
… We’re starting from slide 40 at https://docs.google.com/presentation/d/1UHDgw5Q_8-y8AS-E2cPuf9egUyuWAqAZQ24p9E0dqqw/edit#slide=id.p23

Daniel Burnett: Reminds everyone to present+ themselves

1. Review and agenda

Brent Zundel: Let’s get started
… will spend the bulk of this session on the test suite, what the plans are etc.
… then a 30 minute break
… then we get into Key Representations and Crypto Algorithms/Specs
… if we have time, we’ll have a working session at the end
… please use the q+ system

Michael Jones: I have an agenda comment

Michael Jones: I was thinking about where we left things wrt JSON and CBOR being at risk
… they may be perceived as at risk… our registries have not been uniformly filled out so that there are entries for each representation
… We need to define registry instructions - a gating criterion is that an entry defines representations for all data types
… we need registry instructions

Jonathan Holt: +1

Michael Jones: Not suggesting that we discuss this now, but I think registry instructions should be added to a future agenda topic

Drummond Reed: +1 to Mike’s proposal

Brent Zundel: We are hoping to discuss just that at the end of the day

Dmitri Zagidulin: And I’ll touch on that now too

Daniel Burnett: we actually need people to do the work (and not just write criteria)

2. Test suite

See slides

Dmitri Zagidulin: We need a test suite. Let’s talk about the design and the challenges
… What we know from ourselves and elsewhere
… We’re testing implementations
… Our main spec is a data model spec, not a protocol
… That brings challenges
… Also, it’s an abstract data model
… that brings other challenges
… We can fall back to the command line
… we can pipe output to the test suite
… The universal resolver, Docker containers and HTTP as a way to provide interop
… There’s the format of the DID URL - we have the ABNF
… we have the data model of the DID doc
… and we now have the contract
… We need to step back and determine what the goal of the test suite is
… traditionally, we need to make sure that the spec is implementable.
… Ideally our MUSTs must be machine-testable
… There’s going to be a certain amount that is not machine-testable
… To get out of CR, we need to know which features might be at risk before going into CR
… What are people implementing?
… Let’s start with the DID URIs
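
A minimal sketch of what a machine-testable DID syntax check could look like, in TypeScript. It assumes the ABNF from the DID Core draft (did = "did:" method-name ":" method-specific-id); the regex is a rough approximation for illustration, not normative:

```typescript
// Approximates the DID Core ABNF: did = "did:" method-name ":" method-specific-id
// method-name is lowercase letters and digits; idchar is ALPHA / DIGIT / "." /
// "-" / "_" / pct-encoded. Illustrative only.
const DID_PATTERN =
  /^did:[a-z0-9]+:(?:(?:[A-Za-z0-9._-]|%[0-9A-Fa-f]{2})*:)*(?:[A-Za-z0-9._-]|%[0-9A-Fa-f]{2})+$/;

function isSyntacticallyValidDid(candidate: string): boolean {
  return DID_PATTERN.test(candidate);
}

console.log(isSyntacticallyValidDid("did:example:123456")); // true
console.log(isSyntacticallyValidDid("did:Example:123456")); // false: method-name is lowercase
```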

Ivan Herman: I have no problem with what you said
… We have to emphasise one thing that came up yesterday. It must be implementable based on the spec only
… and not need some sort of background knowledge
… And that’s why CR exit criteria needs more than one implementation

Dmitri Zagidulin: Thank you
… Data Model testing
… At base layer - we pass in a DID doc, via HTTP or command line, and validate it
… We’ll check that things that must be there are there, and that things that must not be there are not
… For the JSON-LD, we need to test that terms are not redefined etc.
… A test suite can give us an idea of what else other implementations are doing
… for example, we said that service endpoints might be at risk. The TS is a way to see what is actually being implemented
… The next question - if we’re validating and taking a census of these DID docs - a simple solution could be that people submit examples of their method’s DID docs
… That doesn’t test how implementations evolve
… So we really want to generate those DID docs - which is what the current rough draft TS does
… But we don’t have a formal notion of generating a DID doc in our spec
… We have Create
… but that traditionally has a component of registration but we don’t want to test that - it often requires payment
… Alt - since we do have this contract in our spec, we can use that as our generation mechanism
… The downside is to do with connectivity, sample DIDs, and we’re mostly a data model spec
… We’ll come back to this
… or maybe we allow people to do both
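
As a sketch of the base-layer validation described above (a DID doc is piped in via HTTP or the command line and checked for members that must be present), something like the following could work. Only the `id` member is required by the draft data model; the function shape and error strings are illustrative:

```typescript
// Minimal base-layer check: required members present and well formed.
interface DidDocumentLike {
  id?: unknown;
  [member: string]: unknown;
}

function validateBaseLayer(doc: DidDocumentLike): string[] {
  const errors: string[] = [];
  if (typeof doc.id !== "string") {
    errors.push("DID document MUST contain an `id` member of type string");
  } else if (!doc.id.startsWith("did:")) {
    errors.push("`id` MUST be a DID");
  }
  return errors; // an empty array means the base layer passed
}
```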

Orie Steele: the registries repo builds a set of fixtures from the universal resolver… https://github.com/w3c/did-spec-registries/blob/master/package.json#L8

Justin Richer: Is the TS… I assume that the TS is independent of the thing being tested

Orie Steele: and then uses JSON-LD and JSON Schema to test conformance

Justin Richer: I assume people will write a pipe into the TS to see what it does

Dmitri Zagidulin: Yes, we want a skeleton test harness and we’re inviting all the implementations to pipe their implementations into the TS. But what are they piping?
… Is it something they’ve generated?

Justin Richer: Still confused. Why should the TS care how somebody using it is generating or fetching these things

Orie Steele: the JSON-Schema tests are painful… and I strongly suggest removing them…

Justin Richer: I’m assuming it’s running and is passive. I give it a DID and it tells me whether it passed the parser. I give it a DID doc and it tells me whether it parsed
… Why would the TS care whether it was generated or static?

Manu Sporny: I think my mental model is the same as Justin’s. The TS doesn’t care how it’s generated

Orie Steele: “its helpful to be able to auto generate test vectors / fixtures…” because doing anything by hand is painful…

Manu Sporny: There may be reasons why we do care, but at the first layer - at the interface - the TS generates some data that it can test
… It’ll say ‘generate a DID’ and I’ll tell you whether it’s good or not
… I would expect that we may do this for the primitive things we want to check. A DID is one, a DID doc is another. A call to the resolution process might be another
… There we might care what sort of process is going on
… So I wonder… rather than call the resolve function, we can break it down into more fine-grained building blocks
… test the syntax, test the validity of the DID doc, then get into resolution
… I don’t know how we force that to happen - we may not care. Maybe the resolver can be faked, but you generated the appropriate data
… How are we invoking this driver that people are testing

Jonathan Holt: I want to be careful to delineate between… [missed, sorry]

Markus Sabadello: I like the way Manu just described breaking this down into components
… You could start with the DID syntax, then test the DID data model, can it produce a valid DID doc
… and then maybe testing the resolve contract
… In that case, we might not only check the validity of the returned DID doc, but could also check that it matches the type of DID doc
… that requires choosing certain DID methods
… But the first 2 parts are the most important

Dmitri Zagidulin: Thanks everyone
… I agree with all three. We shouldn’t care how the implementation got the DID doc that gets piped in
… I agree with manu that it should be a stepped process
… Jonathan makes a really good point that we want to keep in mind that these aren’t DID methods. These are DID libraries that can probably handle multiple DID methods
… So we should not only specify the library but which methods it supports

Orie Steele: CBOR / Dag-CBOR / CBOR-LD…all different afaik…

Dmitri Zagidulin: So not only do we have the URL format and the DID data model and possible protocol tests, but we also have to test concrete implementations of an abstract model
… so that means JSON-LD, JSON and CBOR
… The point is that we have multiple representations of the DID doc to test
… So we can list the DID methods supported but also the content-type of the representations that are supported

Jonathan Holt: I think in the CBOR spec, I was concerned with whether CBOR-LD could be supported… it gets more into… this is not a default format. The native canonical format is DAG-CBOR, which can be exported into JSON or JSON-LD as requested
… So we need to use the Accept header.
… I’m still struggling to get to semantic interop

Manu Sporny: The community is going to have to discuss this in more depth. We need to not preclude other encodings
… So, for example, if we say all keys must be strings, that prevents representation as integers which can be really good for compression in CBOR
… They’re all workable - we can get there - but some groups provide alternative representations, like JSON and JSON-LD
… When we have one representation - if we get to multiple sub-representations, we harm interop
… What tests are going to be written for JSON only, CBOR only - will they preclude any advances in those?

Justin Richer: As this is an abstract document spec, saying that the keys are strings in the abstract does not preclude their encoding in any binary format
… The spec can and should say how to translate a binary tag into an appropriate string for the data model
… So there’s parsing the implementation and then understanding in the abstract

Jonathan Holt: +1

Justin Richer: The parsing should always provide a valid DID doc. It can then be processed into the appropriate strings - so this shouldn’t be a problem.

Jonathan Holt: That’s why we need to keep the representations separate from the model
… +1 to Justin

Manu Sporny: +1 to Justin and Jonathan

Justin Richer: to be pedantic: I didn’t mean to extract to another representation, but to the abstract model

Manu Sporny: I’m pointing out that the spec currently says it has to be a string, so we have to make it clear that translation is acceptable
… The TS is going to bring this to the fore
… There’s a MUST statement - that’s another way that people will discover what the spec actually says

Justin Richer: So yes this needs to be written into the “Representations” section for all representations. You have to tell it explicitly how to map property names and values throughout. That’s why it’s so specific.
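
To make that mapping concrete: a compact CBOR-style representation might use integer keys on the wire and map them back to the abstract model's string property names on parsing. The integer assignments below are invented for illustration; the real table would be defined in the spec's Representations section:

```typescript
// Hypothetical wire-to-abstract mapping for a compact binary representation.
const WIRE_KEY_TO_PROPERTY: Record<number, string> = {
  0: "id",
  1: "verificationMethod",
  2: "service",
};

function toAbstractDataModel(wire: Map<number, unknown>): Record<string, unknown> {
  const abstract: Record<string, unknown> = {};
  for (const [key, value] of wire) {
    const property = WIRE_KEY_TO_PROPERTY[key];
    if (property === undefined) {
      throw new Error(`No abstract property defined for wire key ${key}`);
    }
    abstract[property] = value; // parsing always yields string-named properties
  }
  return abstract;
}
```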

Jonathan Holt: I got the CBOR section in at the last minute. Done in a hurry - apologise if it’s not complete - let’s work on it

Dmitri Zagidulin: Sounds as if there’s agreement that we will need to test multiple representations. Is this a valid JSON string? Does it conform to our higher-level DID data model?
… I wanted to echo the call for help that’s coming
… For some of the less-implemented formats like CBOR, we definitely need implementers to step up and help implement the test suite that tests those things
… We have general agreement on what we’re testing.
… We’re recording the usage of various features to see if they should be kept or removed from the spec
… Much like other WGs, we want a page to look at. An automated implementation report
… We’re dealing with GH repos and closed source proprietary implementations
… we need as a WG to make a couple of tech decisions about the workflow
… We want to make the invocation of the open source libraries as easy as possible. A manifest/config doc that can automatically use the… via the command line or via Docker hub
… Invoke the latest versions, install them… but thinking cross-language, it would have to be Docker Hub
… so dependencies can be included
… There’s a lot to be said for self-contained Docker containers
… We may have to just accept “you’ve run this report” - at least give us the test output so we can tally it

Justin Richer: I just wanted to second the notion of having an external config file that says this is how you run my code against the test suite
… When I was helping to design what became the OpenID Foundation’s test - everything you would pass in for a test, would be in a JSON object
… that allowed us to automate and integrate a lot
… An instance of the test suite could test these configurations over HTTP. Scripts would post JSON to the right place and results would be put into a DB from which the report was generated

Orie Steele: yes, we need structured inputs and structured expected outputs :) … it needs to run offline…

Justin Richer: Keeps things separate. Want this to be able to run in an offline mode as not everything will be on an open Web

Manu Sporny: The concern that I have… some W3C test suites ask implementers to expose HTTP endpoints. As test suites change, as long as you have your config in a file, you can update and run against those same endpoints
… Also +1 to offline. Some companies don’t want to put anything online
… What is the deployment mode? HTTP, offline, Docker
… The issue with Docker is that we have to make sure everyone’s comfortable with it
… The biggest +ve is that… it would be really nice if we could run the test against the latest version of the code in a repo
… People tend not to be as good at updating their open source code with the latest version of what they’ve done
… It would be good to give feedback on the latest version. That’s the strongest argument for moving to Docker

Ivan Herman: let’s be careful not to over-engineer things. The 1st question - how big will the TS be? When you have a complex test object, you might have a huge test suite
… But in other cases the test suite can be small because the chances of getting it wrong are not that high
… I’ve seen WGs going into overkill

Orie Steele: +1 to ci support for test suite conformance… lets make it easy.

Ivan Herman: The last slide… yes, we need a report. But we should separate out the generation of the test results, which can be complicated, but ends up as a manifest
… Where you have the results in an accepted format
… Simple JS hacking can create the human readable page
… If the goal of the WG is to go beyond what the CR phase requires and set up an environment to be used in many years’ time, then things become more complicated
… But if you just want to get through CR, KISS

Justin Richer: +1 to structured output

Orie Steele: +1 to structured output

Dmitri Zagidulin: I do want to reply to both Ivan and Manu.
… You’re right Ivan - the data model is actually very simple. There is one required field and several optional ones.
… Lots of room for over-complicating things

Justin Richer: +1 to submitting the results

Dmitri Zagidulin: Option 2 - that we just ask people to submit the results so we don’t actually run the tests ourselves, just send us the results

Orie Steele: “all my tests are passing, just trust me ; )”

Dmitri Zagidulin: The Docker approach does need buy-in

Justin Richer: Orie: you have to submit proof too ;)

Dmitri Zagidulin: But without that, people might have to set everything up - and that can be a nightmare. Docker can be hard, but the alternative can be worse

Justin Richer: I was going to start with the last point - you’re asking people to buy into Docker but that might be easier than buying into a load of other tech

Orie Steele: +1 to dockerized local and hosted versions :)

Justin Richer: At OIDC, we’ve made it so that you can download the Java and run it, or the Docker, or run it online

Dmitri Zagidulin: orie - but who is gonna host it? :)

Justin Richer: I’m not saying that the config should be in the test suite. The config is outside - the TS gets updated and then if people want to run their code against the new TS it runs again and they get the result

Justin Richer: https://www.certification.openid.net/log-detail.html?log=jWmOc7GX94&public=true

Justin Richer: +1 to Ivan to the results being in a structured format

Justin Richer: https://www.certification.openid.net/api/log/jWmOc7GX94?public=true

Dmitri Zagidulin: fwiw, (that part was just assumed. Not really in question - that the test results outputs in structured format)

Justin Richer: This is a publicly visible result - it’s just a JS web page rendering what’s in the 2nd link
… That’s just JSON
… That tries to address some of the issues seen in the chat. You need the log - that’s been really valuable
… [please type what you just said, justin_r - I missed a really important bit at the end]

Ivan Herman: Example for a report for JSON-LD WG

Ivan Herman: Just as an example, for the JSON-LD test suite - this is the kind of report that was generated, based on participants submitting their results using a specific RDF format
… I think it’s generated on the fly
… A different issue… not on the testing itself, but on the point when we go to the director and ask for Proposed Rec. A question that might come…
… Are all the features necessary for the use cases?
… Are there real use cases for all the features that we have? Are they in use? Or are we dreaming up features that no one wants
… This is the kind of thing that we need to report on
… All those discussions about the parameters and metadata. If it’s normative in the doc, we must have an argument why we have that and back it up with a real use case

Dmitri Zagidulin: jonathan_holt - we’re testing just the DID-Core data model. Signatures etc are /not/ in there, so we’re not testing it.

Michael Jones: I wanted to agree with Justin that people should create their own config and be able to run the test tool themselves against their own code

Jonathan Holt: dmitriz : so, just to restate my question awaiting a response: what exactly are we testing? The syntax? The signatures? The serialization from one format to another? The resolve function?

Markus Sabadello: A response to Jonathan - what are we testing? So far we’ve talked about 3 areas - the DID syntax, the DID doc data model and its representations
… and the 3rd was potentially the resolver contract

Markus Sabadello: Converting between different implementations - see if diff representations were equivalent

Jonathan Holt: markus_sabadello: thanks!

Brent Zundel: We have 30 mins left for the conversations that I think need to happen
… I think Dmitri needs time to finish up
… I’d like the group to focus on making decision, if possible
… So that we can be as concrete as possible moving forward, so that when Dmitri calls for help, we can be specific

Dmitri Zagidulin: We should probably move the current TS to the DID WG (from CCG)

Drummond Reed: +1

Dmitri Zagidulin: What we’re going to test will depend a lot on what you implementers need and want
… For example, do we want to test the conversion of representations?
… That means having a conversion library which not everyone has, but that depends on the implementers
… We need…

Drummond Reed: -1 to requiring testing of a conversion function unless an implementation supports it.

Dmitri Zagidulin: We need to resolve a couple of these issues. Docker vs ‘follow instructions’
… Is the list of implementations to be tested - does it live in one place in the repo, or does each implementation run on its own and submit the results?
… And if we do that, what do we do about the log format?
… Do we find the results sufficient?
… Do we need a hosted version of this and if so, who will host?

Brent Zundel: We resolved to move the test suite into the DID WG at the kick-off meeting I think.

Orie Steele: can we propose to use docker as a first step?

Brent Zundel: So maybe you’d like to propose some steps

Justin Richer: +1 to Dmitri’s list of questions

Ivan Herman: See converted CCG test repository on the WG repo

Dmitri Zagidulin: For the non-hosted version - do we use Docker, Docker Hub, or rely on “here’s the link to my GitHub repo”, or maybe a shell script to set things up?

Proposed resolution: the did test suite will use docker (Brent Zundel)

Orie Steele: +1

Manu Sporny: +1

Eugeniu Rusu: +1

Adrian Gropper: +1

Markus Sabadello: +1

Justin Richer: +1

Drummond Reed: +1

Proposed resolution: the did test suite will use docker to containerize the test suite (Brent Zundel)

Manu Sporny: hrm… not what I was thinking…

Justin Richer: Yes, +1, but to be clear, we’re containerising the test suite itself, not that it requires the test subject to be containerised

Dmitri Zagidulin: This is about containerising the implementation

Adrian Gropper: +1

Dmitri Zagidulin: How are we structuring the test suite? Does it have the list of implementations and then run against them all

Eugeniu Rusu: 0

Dmitri Zagidulin: or do we have the test suite and everyone runs against that

Orie Steele: It’s great that we’ve agreed to use Docker - good. I propose that we take a phased approach to the TS
… I’d like to see a set of structured input and outputs that we can get by running the Docker container locally
… no requirement to spin up a Bitcoin node
… Inputs, output, Dockerised TS

Dmitri Zagidulin: It’s not a binary decision, true. We can do it in stages

Michael Jones: This is half philosophy… to the extent that we’re doing decentralized work, it would be shocking if we created a list of all known implementations

Orie Steele: +1 to everything that Mike is saying.

Ivan Herman: +1 to mike

Michael Jones: There will be implementations that are not done, but you want them to run the test suite and they’ll fail at first. So we should allow that without recording those failures

Dmitri Zagidulin: +1 to proposal

Orie Steele: +1

Daniel Burnett: This is not the straw poll. It is getting the language of the poll understood and agreed to first before running the poll.

Justin Richer: I agree with Mike that having a containerised test suite so that people can run it outside the hosted version is going to be vital.

Dave Longley: +1 you must be able to run the test suite independently against your own implementation

Justin Richer: The experience in the OpenID Foundation shows how valuable that is

Kyle Den Hartog: Does having a Docker instance that pulls from other people assume open source and available?

Dmitri Zagidulin: No, only the available ones. Closed source will have to run it themselves

Ivan Herman: I’m not sure I understand… the TS will be containerised in Docker… For example, when I was on the implementer side for the RDFa specification some years ago, the TS meant that I got a bunch of small HTML files
… And for each of those I was supposed to produce a set of RDF that was checked in the final report. Is this what you mean or do you mean more than that?
… Mine was in Python, Gregg’s was in Ruby etc…

Orie Steele: 1. pull the docker container… 2. build your configuration…. 3. run the test suite…. 4. get test results.

Dmitri Zagidulin: re proposal - we should clarify that the Docker file for the test suite is a nicety (people can always install it manually)

Justin Richer: dmitriz: yes, you can ignore docker and go direct at any point. If you hate yourself. :)

Dmitri Zagidulin: I want to be clear - Dockerising the test suite is just a convenience. You can still download the code and run it yourself manually.

Ivan Herman: That’s a tech detail that doesn’t matter so much

Jonathan Holt: I’m still curious as to what we’re testing. If it’s just the syntax, you can do a lot with JSON Schema
… testing resolve functions is harder
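
For the syntax-level checking Jonathan mentions, a JSON Schema fragment could look like the following (illustrative only, not the suite's actual schema; the `id` pattern is a loose approximation of the DID ABNF):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["id"],
  "properties": {
    "id": {
      "type": "string",
      "pattern": "^did:[a-z0-9]+:.+$"
    }
  }
}
```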

Proposed resolution: The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs (Brent Zundel)

Orie Steele: +1

Dmitri Zagidulin: +1

Kyle Den Hartog: +1

Eugeniu Rusu: +1

Justin Richer: +1

Manu Sporny: Can we get a stake in the ground… I think what Justin just suggested was right. The test suite will be containerised and use structured data for input and output

Manu Sporny: +1

Dave Longley: +1

Jonathan Holt: can i understand what the inputs are

Justin Richer: jonathan_holt: that’s the next question

Dmitri Zagidulin: Yes, as long as people can still run the test suite without Docker

Brent Zundel: +1

Ivan Herman: 0 (I am not sure what ‘docker’ gives me at this point, it could be just a zip file for what I care)

Jonathan Holt: 0 see ivan

Markus Sabadello: +1

Brent Zundel: I’m not seeing any -1s

Dmitri Zagidulin: ivan - docker gives you the excuse not to install Node.js to run the suite :)

Justin Richer: ivan: docker gives you a way to manage all the pre-installed dependencies that you’d need to get that zip file to run

Resolution #1: The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs

Michael Jones: +1

Dmitri Zagidulin: Zip does not allow you to skip installing node and all the dependencies.

Brent Zundel: The test suite’s dependencies will either have to be in the Docker, or installed.
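
One plausible shape for the structured result output just resolved on, loosely modeled on the OpenID certification logs Justin links above (the field names are illustrative and were not agreed by the WG):

```typescript
// Illustrative shape for one structured test result.
interface TestResult {
  suite: string;            // e.g. "did-test-suite"
  implementation: string;   // the library under test
  testId: string;           // e.g. "did-url-syntax-001" (hypothetical ID scheme)
  status: "PASSED" | "FAILED" | "SKIPPED";
  log: string[];            // per-test log lines, which Justin notes are valuable
}

type ImplementationReport = TestResult[];
```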

Orie Steele: I think the next step is what’s the structure of the input files?
… There’s a trade-off between using YAML or JSON vs a folder structure

Proposed resolution: Use YAML for config for inputs (Dmitri Zagidulin)

Ivan Herman: +1

Manu Sporny: +0.5

Orie Steele: +1

Jonathan Holt: what are the inputs representing?

Justin Richer: +1 (it’s structured, it’s fine)

Eugeniu Rusu: +1

Dmitri Zagidulin: This is not the important part. We can switch to JSON if people protest

Dave Longley: +0 use whatever most implementers want, lowest burden

Dmitri Zagidulin: The important part is the decision to Dockerise
… We’re testing the DID URI ABNF, the DID Doc data model
… if there is time and volunteer labour, then we can test resolve

Markus Sabadello: I fully agree with these areas… I guess for some of these areas there are different ways of modeling the DID URI syntax.

Dmitri Zagidulin: counter-proposal: outputs in YAML :)

Markus Sabadello: The input is a DID URI and the TS has to parse and report the result
… Another approach would be that the input would be an instruction to generate a DID URI and then the TS tests that it is valid.
… The input could be a DID doc and an output could be whether it’s valid or not
… Or the input could be to generate a DID doc in a given representation and the output must match the input instructions
… In both cases, you can test either the implementation’s ability to generate or the ability to parse
… Perhaps we do both

Resolution #2: Use YAML for config for inputs
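
A hypothetical YAML configuration along these lines; none of the field names below were agreed, this only illustrates the kind of manifest an implementation might submit:

```yaml
# Hypothetical per-implementation config; all field names are illustrative.
implementation: example-did-library
didMethods:
  - did:example
representations:               # content types the implementation supports
  - application/did+json
  - application/did+ld+json
runner:
  type: docker                 # the suite itself is containerized per Resolution #1
  image: example/did-test-driver:latest
```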

Dmitri Zagidulin: I recommend that we take these discussions to the issues and PRs of the test suite itself

Justin Richer: +1 to that
… A lot of that is implementation detail. To Markus’s comment: the OIDF suite has a lot of tests. Each test can have its own config input

Dmitri Zagidulin: to clarify: structured output + a script to generate an HTML visual version of the structured output

Justin Richer: Somebody’s implementation, through a script or whatever, can put stuff into the test suite and run that and get the result
… So to Markus’s point - yes, a string as the input is a definite requirement, in addition, we need a callback-style ability
… We’re in danger of testing the ability to generate, rather than to test the generated thing

Orie Steele: +1 to what Justin is saying… they are 2 separate tests…

Orie Steele: lets do the easy string based one first.

Justin Richer: Tell something to generate an input for me that I can then test, as opposed to generating the thing and then testing that

Ivan Herman: I realise that there is an aspect we’ve not discussed… in general, looking at other WGs - there is a criterion that we have to define, which is what it means to be successful
… The usual thing that is done is that the WG defines a number of features and then states that each feature must be implemented by at least 2 or 3 implementations

Dmitri Zagidulin: +1 to what ivan said - we’re opening an issue that says ‘produce a list of MUST features to test’

Ivan Herman: e.g., JSON-LD has lots of features. Each of those features has a number of tests, and the report shows what’s been tested and by which implementations
… JSON, JSON-LD and CBOR are different features - we don’t necessarily expect each implementation to implement every feature

Justin Richer: +1

Brent Zundel: We’ve had a good discussion here. A lot of people have brought up issues. Please take some time soon to raise these as issues in the test suite repository
… Thanks to dmitriz for setting the stage for this
… We need to keep this momentum
… We’re at time. We have a 30 minute break

Brent Zundel: we are on slide 56

3. Key representation and crypto algorithms

See slides

Orie Steele: the slides will provide an overview of the topic, then considerations, then issues
… one key reason for this session is different questions that have come up around algorithms, key encodings, etc.
… slide 57 covers the requirements of great security: excellent documentation, transparency, academic analysis, observability, blessings from authorities
… e.g., NIST, FIPS, governmental approvals
… we need to ask the question about our authority when we publish registries of crypto
… another aspect is formal verification, frequent key rotations, short expiration, performance, and paranoia
… slide 58 is some broad questions for the group
… Should modern cryptographic tooling / data models or standards support legacy crypto?
… Who decides when it’s not a good idea to use something anymore?
… How do we communicate risks associated with specific key types and algorithms?
… How do we encode “key purpose” or “key use”?… (again not everyone uses JOSE).

Christopher Allen: Wants to add to the previous page of good security: 1) hardware support

Dave Longley: hardware is related to preventing key exfiltration/direct key material access

Christopher Allen: some parties will not use a security solution without hardware-based support
… .the second one is algorithmic agility, which Christopher did not see on the list

Christopher Allen: “Algorithmic agility is controversial” is a more accurate scribing of what I said

Orie Steele: Key use or key purpose is important even if you don’t use JOSE

Dave Longley: and we’ll get there later, but “key use” and “proof purpose”/”verification relationships” are actually different concepts

Orie Steele: How will I get the features I want if I can’t implement them myself?
… Who watches the watchmen?
… who is holding the recommenders accountable?
… Why would we trust that this standards governance process won’t be compromised to the advantage of interested parties? (is this happening right now)
… on this point, people may challenge us very directly
… slide 58
… JOSE - main spec - https://github.com/panva/jose
… wide support for JOSE, but that alone is not enough (example of outdated but widely supported infrastructure: MD5)
… https://safecurves.cr.yp.to/rigid.html
… has a property called “rigidity” – “do you believe that there may be constraints on elliptic curve constants that may be more susceptible to attacks”
… Orie invites everyone to read about it
… “American security is better served with unbreakable end-to-end encryption than it would be served with one or another front door, backdoor, side door, however you want to describe it.” - Gen. Michael Hayden
… “I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry.” - Bruce Schneier
… slide 60
… “No Way, JOSE! Javascript Object Signing and Encryption is a Bad Standard That Everyone Should Avoid” - https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-bad-standard-that-everyone-should-avoid
… “The most blatant way to make your app vulnerable is to get the alg header, and then immediately proceed to verify the JWT’s HMAC or signature, without first checking if that JWT alg is permitted. What will happen to your app if it gets an unsecured JWT with alg = none?” - https://connect2id.com/products/nimbus-jose-jwt/vulnerabilities
… If you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT or jose4 with ECDH-ES please update to the latest version. RFC 7516 aka JSON Web Encryption (JWE) Invalid Curve Attack. This can allow an attacker to recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES), where the sender could extract receiver’s private key. - https://blogs.adobe.com/security/2017/03/critical[CUT]
… slide 61
… What is “crypto-agility”?
… Wikipedia: “Crypto-agility (cryptographic agility) is a practice paradigm in designing information security systems that encourages support of rapid adaptations of new cryptographic primitives and algorithms without making significant changes to the system’s infrastructure. Crypto-agility acts as a safety measure or an incident response mechanism when a cryptographic primitive of a system is discovered to be vulnerable.[1] A security system[CUT]
… agile if its cryptographic algorithms or parameters can be replaced with ease and is at least partly automated.[2][3]”
… the key point is that systems should be able to move to new crypto algorithms quickly
… Names of algorithms should be very clear
… “The names of the algorithms used should be communicated and not assumed or defaulted.”
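
The "alg" header pitfall quoted from connect2id above is easy to see in a sketch; this is the anti-pattern being warned about, not code from any particular library:

```typescript
// Anti-pattern: the attacker controls the JOSE header, so branching on
// header.alg before checking an allow list accepts unsecured tokens.
function verifyJwtNaively(jwt: string): boolean {
  const [headerB64] = jwt.split(".");
  const header = JSON.parse(Buffer.from(headerB64, "base64url").toString("utf8"));
  if (header.alg === "none") {
    return true; // BUG: an unsecured JWT is accepted as "verified"
  }
  // ... verify the HMAC/signature using whatever header.alg names ...
  return false;
}
// Fix: reject any token whose header.alg is not in an explicit allow list
// chosen by the verifier, before any other processing happens.
```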

Drummond Reed: - https://en.wikipedia.org/wiki/Crypto-agility

Orie Steele: “as written, this library greatly expands the attack surface on anyone using it and goes against the whole reason that cryptosuites exist (that is, people that know better should be picking an extremely limited set of options, not enabling a kitchen sink approach, which this library does).”
… - https://github.com/w3c-ccg/lds-jws2020/issues/4
… final thought: “Why is IANA recommending NIST Curves, why are node and go libraries supporting a “kitchen sink”… are we dancing around just saying “JOSE considered unsafe?”
… Orie asks, “Are DID WG members the “people that know better”?

Christopher Allen: One of my problems with the term “crypto-agility” is that some people think it’s exactly what’s needed, and it scares others to death.
… but what really matters is future-proofing
… you need to pop up one or two layers higher and try to future-proof the higher layers
… I am pretty against the lower-layer “agility”
… as the co-author of SSL, I don’t even trust my own work. I need to get it peer reviewed.
… I don’t believe there’s anyone in this WG that qualifies at that level.
… we need to get the best advice we can from the best people

Michael Jones: I am obviously biased in this regard. We did design JOSE to support cryptographic agility.
… but there are both engineering and political factors.
… there are different requirements in the U.S. and Russia and China.

Dave Longley: +1 to creating profiles for limiting agility appropriately and for interop… which specific verification methods enable

Michael Jones: which is why there are particular profiles whose purpose is to constrain JOSE to a specific set of recommended algorithms that have been vetted by actual cryptographic experts
… if you use those constrained sets of algorithms, using JOSE should be fine
… I have commented about having us profile JOSE but we will still see other nations use their own sets of algorithms

Manu Sporny: RE crypto-agility being “good” or “bad”, I don’t think anyone is saying we shouldn’t listen to the experts
… I agree we should be using recommendations from IETF experts
… and we should not be creating our own algos or doing our own vetting that requires experts
… we already have too much optionality in front of us, and we should try to limit it
… in order to reduce the attack surface and the complexity of doing the analysis
… it also increases interop
… the biggest mistake that has been made is “giving options to people who don’t know how to decide between the options”
… that’s the problem with many common cryptosuites
… developers don’t know what to choose
… this can be avoided by narrowing those choices

Kyle Den Hartog: +1 to limiting where possible

Manu Sporny: allow list / deny list

Jonathan Holt: Clarifying - is the problem white-listing or blacklisting algos? Or shouldn’t that be done via governance models?

Orie Steele: slide 62
… How is that stuff in a did document used?

Manu Sporny: Reminder: Try not to use needlessly racial language… use more accurate language – allow / deny … not white / black.

Orie Steele: Verify Digital Signatures - https://w3c-ccg.github.io/security-vocab/#assertionMethod
… https://www.iana.org/assignments/jose/jose.xhtml#web-signature-encryption-algorithms
… How much of JOSE will we actually support?
… slide 63
… Reminder that PGP exists…
… which is why we need support for plain “public keys”
… slide 64
… Reminder that Minimal Cipher exists…
… slide 65
… Reminder that DID Comm is being built (at DIF)
… slide 66
… Encoding a key type and purpose in the name and verification relationship.
… the point is that the key type includes the purpose and name
… slide 67
… Json Web Key 2020 Proposal
… W3C CCG will maintain a JSON-LD Signature Suite which documents how to use JOSE with DIDs… the key representation will support JWA / JWS / JWT / JWE.
… Keep the JSON-LD side of working with JOSE as simple as possible… while actually supporting interoperability on both fronts.
… can this approach work to support both JOSE and JSON-LD in an easy-to-implement way?
… The suite will provide better guidance than IANA does, on what “recommended” means.
… The suite will support the needs of “Pure JSON” / “JOSE only” folks.
… JOSE features that are not documented / contributed to will NOT be included.
… The suite will be year timestamped, and updated as the security landscape changes.
… The DID WG will reference this suite, as it does for others, via the spec registries.
… slide 68
… Encoding a key purpose in verification relationship
… one of the major criticisms of JOSE is that the key representations are easy to screw up
… there’s also controversy about some of the curves supported by JOSE
… this example also does not follow the best practice of the previous slides
… slide 69
… Next steps for JOSE
… https://docs.microsoft.com/en-us/microsoft-edge/dev-guide/windows-integration/web-authentication#authenticate-your-user
… WebAuthN uses publicKeyJwk … but we’ve seen almost 0 contribution on this front from DID WG members…. It’s still not even supported in CCG vocabulary….
… https://github.com/microsoft/VerifiableCredentials-Crypto-SDK-Typescript/issues/12
… Does this make sense given the near complete lack of support for JOSE today?
… why are we seeing libraries like this show up without making it clear how this should be composed
… Who is going to do the work to make JOSE and DIDs work together?
… slide 70
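
Putting slides 66 through 68 together, a DID document fragment using the proposed JsonWebKey2020 representation with a verification relationship might look like the following (based on the CCG draft suite; the DID is an example and the key value is elided):

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:example:123",
  "verificationMethod": [{
    "id": "did:example:123#key-1",
    "type": "JsonWebKey2020",
    "controller": "did:example:123",
    "publicKeyJwk": { "kty": "OKP", "crv": "Ed25519", "x": "..." }
  }],
  "assertionMethod": ["did:example:123#key-1"]
}
```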

Manu Sporny: +1 for multicodec / multibase! :)

Orie Steele: Is the future Multicodec / Multibase?
… Compact double-clickable string and binary representations. Friendlier towards other programming languages (not everyone owns a browser). Growing adoption within the blockchain ecosystem. Registries are easier to update quickly and safely. Already used to describe bls12_381-g1 and g2 for JSON-LD ZKPs. Key representations less of a foot-gun… what does “jwk.d” do? Fingerprint algorithms less of a foot-gun… JWK has no canonical representation.[CUT]
… Sidetree forced to rely on JCS as well. NIST Curves aren’t even registered… (is this a good thing?)
… Is base58 the “defacto” standard key representation for DLT keys today?
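
As a sketch of the multicodec/multibase layering being described, assuming the js-multiformats library and the registered 0xed multicodec for Ed25519 public keys (the helper name is invented):

```typescript
import { base58btc } from "multiformats/bases/base58";

// Multicodec: prefix the raw key bytes with the varint for 0xed (Ed25519
// public key). Multibase: base58-btc encoding adds the leading "z" character.
const ED25519_PUB_PREFIX = Uint8Array.from([0xed, 0x01]); // varint(0xed)

function encodeEd25519PublicKey(rawKey: Uint8Array): string {
  const prefixed = new Uint8Array(ED25519_PUB_PREFIX.length + rawKey.length);
  prefixed.set(ED25519_PUB_PREFIX);
  prefixed.set(rawKey, ED25519_PUB_PREFIX.length);
  return base58btc.encode(prefixed); // e.g. "z6Mk..." as used by did:key
}
```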

Jonathan Holt: +1 for multibase/multicodec although keys in IPFS are moving to base36, which is in the table

Orie Steele: Multicodec / Multibase does not support the NIST curves – but for government work they are required
… we should all be thinking about things that are more future-facing

Wayne Chang: Are we only considering JOSE? There are successors. Should we consider them?

Orie Steele: We will need champions for them.

Michael Jones: Tony Nadalin, relative to mobile driver’s license work, has been working on zero-knowledge proof support for JOSE

Markus Sabadello: mentions an idea that a resolver could perform the job of transforming between key representation types
… that could return keys to the client in the form the client prefers.
… but someone has to do the work

Manu Sporny: wanted to speak in general support of the JSON Linked Data crypto suite

Markus Sabadello: A bit outdated experiments on transforming public key representations during DID resolution: https://hackmd.io/XmL-Bjh5TdqV4fj6nwdPEQ

Manu Sporny: those algos are meant to come out every 3-5 years
… in getting support for JSON Web Key into the DID spec, we could remove support for older formats

Kyle Den Hartog: Two things of note. 1) when implementing did:key:, I had to do a mathematical conversion between the two forms
… The best way to see what I’m after with this is in this comment: https://github.com/w3c-ccg/lds-jws2020/issues/11#issuecomment-642308779

Orie Steele: slide 71
… Recommendations
… Recommend we allow base58 key representations to be valid for all key types, in addition to JWK
… Recommend we include disclaimers about JOSE and algorithmic agility in the Security Considerations section of the DID Core Spec.
… this will be necessary to get support for JOSE
… Recommend we work together to ensure that Json Web Key 2020 is usable. Meets the needs of “Pure Json”, legacy crypto, NIST / FIPS crypto… that some backward facing support for DID interoperability exists.
… Recommend the did spec registries provide some indication of security implications / independent vendor implementations for registered crypto. Ideally a green, yellow, red scale.

Orie Steele: we are having real trouble getting everyone to contribute to even one repo. Let’s work together to solve this.

Daniel Burnett: Who will update green/yellow/red over time?

Orie Steele: Orie recommends trying to find a way to apply warning or recommendations to the DID Specification Registries

Jonathan Holt: asks about base58

Orie Steele: it is the multi-codec format supported by Protocol Labs

Jonathan Holt: Would like to see the same

Jonathan Holt: is concerned about the green/yellow/red scale for crypto algorithms/suites as it can be very difficult to make lasting recommendations

Justin Richer: I think move on, but not a huge deal

Brent Zundel: cut-off of discussion will be at 10 after the hour

Michael Jones: As far as IETF is concerned, there is already “green/yellow/red” in the registry (by other labels).
… this is informed by a group of experts within the IETF.
… so to the extent that we want those signals in the DID Specification Registries, we should use those as the defaults
… second point is one of clarification: Mike is willing to help do the JSON Web Key 2020 work if it’s done in this WG.

Manu Sporny: It’s not “let’s use what’s in the registries”… it’s “let’s limit the things in the registries to a smaller subset that’s easier to do a security analysis on”

Orie Steele: We have some of this work going on here but need to do more.
… the #1 takeaway from this presentation should be that this work should be done in this WG and that we need more focus on it
… we just need to figure out how to do it

Manu Sporny: Translation: Not enough people are helping Orie do this work, and people need to step up and do the work.

Christopher Allen: https://github.com/BlockchainCommons/Research/blob/master/papers/bcr-2020-003-uri-binary-compatibility.md#comparison

Michael Jones: feel free to backchannel me if you want my eyes on something quickly

Christopher Allen: The link I just posted is about Bitcoin’s usage of base58. The primary motivation was length.
… it is not a standard, and there are multiple versions of it out there.
… BTCR does not use it at all.
… Bitcoin Core uses hex
… what Blockchain Commons has been doing is just using the binary form and using that with CBOR
… CBOR is an international standard
… the reason IPLD and other codecs had to tackle this was because we didn’t have tools like CBOR

Jonathan Holt: https://www.iana.org/assignments/cose/cose.xhtml

Christopher Allen: so we could go get one or two-byte codes for these

Orie Steele: part of the reason you see base58 is because people have done the work on it
… the main point of this presentation is that there are lots of people using JOSE but little support for it in this WG. With base58, there is a lot of work around it in the WG but little format support for it.

Christopher Allen: We’ve begun figuring what key formats we need CBOR tags for: https://github.com/BlockchainCommons/Research/blob/master/papers/bcr-2020-006-urtypes.md#registry

Orie Steele: slide 72, Issues
… https://github.com/w3c/did-spec-registries/issues/66
… “Add transform-keys=jwks parameter for use with OIDC SIOP”
… needs support
… https://github.com/w3c/did-spec-registries/issues/46
… “Add JsonWebKey2020 to allow for algorithm-agnostic key usage in DID documents”
… this doesn’t apply to the DID Core spec but still needs support
… https://github.com/w3c/did-core/issues?q=is%3Aissue+is%3Aopen+jose

Kyle Den Hartog: +1 looking for others to comment on these topics

Orie Steele: there are a lot of issues tagged JOSE
… my hope is that if we can get collaboration on these issues in DID Specification Registries, then we can go close a bunch of these issues
… thank you to everyone
… #1 takeaway is please contribute to these issues around JOSE and key representations in DID Core

4. Working session

4.1. Registries

Brent Zundel: Should we require principled requirements for inclusion of properties?
… and what registry instructions are needed?

Manu Sporny: https://w3c.github.io/did-spec-registries/#the-registration-process

Manu Sporny: this is a link to the current list of issues for the registration process
… one issue is an attack on the registries where the name of an extension itself can be a personal attack
… IETF has a solution in the form of expert review
… but if we have this, then there’s the counter-charge of “a centralized judge” that’s a gatekeeper

Michael Jones: In the Amsterdam F2F, the consensus was that the registry would be used to support interoperability across representations.
… have we been requiring that each extension supports that?

Orie Steele: We’ve seen almost no contributions to the DID Specification Registries since the F2F
… that makes it an almost intractable requirement that is preventing people from registering properties
… so my concrete recommendation is to only require JSON-LD and not require JSON-only or CBOR translations or testing

Orie Steele: also, we don’t need JSON Schema to have “pure json”

Orie Steele: its actually easier if we just have “Pure JSON” use “JSON-LD”.

Daniel Burnett: Clarifying that for the properties defined in DID Core that need JSON-only and CBOR representation definitions, the people who know how to do it must get in there and do it.

Manu Sporny: +1 to Orie’s “Council of Elders” approach as a “big red button” when things get out of hand… ideally, objection is kicked up to DID Maintenance WG or W3C CCG.

Christopher Allen: @self-issued, which document did secp256k1 for COSE get in?

Jonathan Holt: It should be pretty easy to do the mapping to CBOR, and I can do it

Jonathan Holt: and to be clear I’m not an expert!

Jonathan Holt: which I’m not keen on doing

Michael Jones: I agree with Manu that we need to have a set of designated experts. You can’t write into the registration instructions to prevent all the stupid ways people will try to abuse it.

Daniel Burnett: +1 mike with respect to designated experts for “Expert Review”

Michael Jones: in the short term, editors should be the designated experts, and then we should nominate others.

Christopher Allen: I believe this could be useful.

Michael Jones: Mike is willing to be one.

Kyle Den Hartog: Just want to say that I’d prefer that we not set the requirement that expert review mean that all proposed requests must come from a WG. With JOSE IANA registries this has made it difficult for me to get xchacha registered.

Drummond Reed: I want to second the idea that we do need expert/community review… the “big red button” method - in general the process is straightforward, but if someone attempts to abuse the process: spam is easy to process, but a dangerous extension requires expert review… We need it, and as an Editor I’m happy to do it between now and when the spec gets out.

Michael Jones: Dangerous extensions are another reason to have experts.

Daniel Burnett: Expert Review, Expert Review, Expert Review!

Drummond Reed: Spec registries should include process.

Michael Jones: For instance, an extension that uses “alg”:”none” for “signing” is dangerous, and needs human intervention

Orie Steele: We should expect the registry gate keepers to have political interests, and design in defense against that.

Brent Zundel: believes we need a mechanism for how proposed additions to the DID Specification Registries can be reviewed

Orie Steele: I will not add your feature for you.

Manu Sporny: +1 to that ^

Orie Steele: you must open PRs ; )

Brent Zundel: I don’t think it should be expected that the current editors be the ones to add support for properties in all representation formats

Brent Zundel: zakim, close the queue

Brent Zundel: but those who want to see support for those representations should do the work.

Orie Steele: I agree that a “Council of Elders” approach or similar type of review is needed, and that we need to define how it can work to avoid bias by the experts

Wayne Chang: Wanted to say that the CCG is interested in assisting with this

Jonathan Holt: What is the CBOR contribution that is needed?

Orie Steele: Please start in github issues, seek feedback

Manu Sporny: it depends on what is needed by the CBOR portion of the spec

Orie Steele: and once its clear that people have given you feedback, make a PR

Orie Steele: Please don’t just open a PR without gathering feedback on issues first.

Jonathan Holt: just trying to use vanilla tags, so hopefully it shouldn’t be much work

Michael Jones: all the registries that he’s involved in require that there be multiple experts appointed, and that any experts who have a conflict of interest with regard to a proposed extension must abstain.

4.2. ethereumAddress

Brent Zundel: taking the rest of the time now

Brent Zundel: please review, this is important

Drummond Reed: https://github.com/w3c/did-spec-registries/pull/73

Daniel Burnett: Note that ethereumAddress, for example, has already been proposed, along with the various key approaches, etc.

5. Can we declare feature freeze?

Brent Zundel: can we declare feature freeze now?

Manu Sporny: Can we have 1-2 weeks and then feature freeze?

Christopher Allen: -1 I’m still working on some

Drummond Reed: 2 weeks

Daniel Burnett: +1

Christopher Allen: I am hoping a little longer

Dave Longley: +1 for 2 weeks

Kyle Den Hartog: +1

Manu Sporny: give people a heads-up … feature freeze in 2 weeks, get your stuff in!

Drummond Reed: I have 2 PRs to submit on properties

Markus Sabadello: +1

Christopher Allen: Which repo?

Brent Zundel: let it be known, you have TWO WEEKS — by end-of-day Pacific Time on July 15th — we will no longer accept new features

Dave Longley: https://github.com/w3c/did-core

Brent Zundel: this is your FINAL WARNING

Michael Jones: I just e-mailed registry instructions language about dealing with conflicts of interest

Christopher Allen: Ouch, I wanted to propose representations and jose

6. TPAC 2020

Brent Zundel: TPAC will be fully virtual this year
… October 2020, format still being worked out

Brent Zundel: we will get you more details

Manu Sporny: Great job Chairs!

Markus Sabadello: Thank you to the Chairs.


8. Resolutions

Resolution #1: The test suite will be containerized (using docker), will allow structured configuration input, and produce structured result outputs
Resolution #2: Use YAML for config for inputs