<rhiaro> slides: https://docs.google.com/presentation/d/1_AKEYKWqaiMIUb6tlo3yVONTl9Z-71H4XfdTtgUF88U/edit
<rhiaro> scribenick: rhiaro
burn: welcome back survivors from
last night. It becomes clear who the committed people
are..
... We need one more scribe for today
... This morning we're starting with the context discussion, a
big one
dmitriz: Would it be possible to switch the context discussion and the MIME type discussion? I would like a bit more time for preparation
burn: possibly yes. We don't want
to move the context to too late
... We could do MIME type and then.. we need an hour for each.
If we can get started now with the MIME type then we might have
enough time to get it done before the break
gannan: I thought there was a dependency between the two?
burn: anything else we can
switch
... Let's talk about the DID working group and try to keep it
short
burn: The advanced notification
of working on the charter has been sent out [to the AC]
... We are expecting that there will be a DID WG sometime
within the next couple of months. There is definitely strong
support. It's a matter of process and working through
objections
... We think it's reasonable to expect that a DID WG will begin
before this group finishes
... There is very strong overlap expected between this group
and the DID WG participants. How can we minimise negative
impact of that?
... For some people they might find it challenging to schedule
and attend multiple meetings like this one
... I was in a WG that had 6 specs at once and most of us were
interested in all of them and that was a challenge, that was
exhausting, and that was all in the same WG
... So I want to hear any thoughts on the logistics, how we
might make that work better
<Zakim> manu, you wanted to discuss some thoughts
manu: I really prefer that we..
we've already talked about rechartering, I'm -1 on rechartering
VCWG while DID WG is going on. If the DID WG is chartered in
the next 3 months, we should be mostly through CR at that
point, and then it's autopilot once you get to PR. Seems like
the timing is great.
... So just don't recharter this group with anything new.
Handing management off to CCG if possible, and then
... (missed a bit)
<stonematt> zakim who's here
yancy: I have some concerns that
many of the decisions we make in this WG will be directly
impacted by the work done in the DID WG
... I do think there is a possible reason to recharter the work
done in this group because of that
... Some things being proposed in the implementations are
dependent on which resolver you choose, as in which vendor you
choose to help with your DID resolution
... I think that's potentially the conflict
... It's something I need to know about before being an
implementor for VCs
burn: could you be more specific?
yancy: Two of the new
propositions such as ZKP can directly depend on which DID
resolution model you choose
... ZKP is one of the two new features being proposed that
didn't exist in the previous VC version
JoeAndrieu: I want to concur with
the conflict of interest issue, which to me has been
productive. It's been a gift to have folks dealing with
different methods bring their concerns with how credentials
manifest. I think it's an inherent complexity. I want to
acknowledge the problem, I think it's constructive
... the zkp stuff has been an implementer driving some
features, that's largely from sovrin
... I want to be able to support zkp but that's also a conflict
of interest in doing DID work in isolation and VC work in
isolation
rgrant: it's a conflict of interest in the separation of WGs? it's not about how the WG gets to agreement? It's a poor structure if you're trying to use that feature?
DavidC: it's the wrong term, conflict of interest
manu: it's a coordination issue
stonematt: It's about
whether/when we recharter, vs how we manage the
transition
... Many people will leave this group and go to DID. We will
have to decide in the short term is there a logistical impact
of that
... and a tactical nature of that question
... The second topic is more strategic, about how do we
progress in both lanes?
... DIDs are gonna go and we don't want the whole VC community
to diverge without continuing to work together because we don't
have a group
... I think part of what I took away from yesterday's
discussion was that we will rely heavily on the CG to be the
vessel for the VC work that continues until we get to the point
that we have something that we can clearly claim as a standard
that needs to be standardised. Right now we have a set of
problems and a set of things that might be problems
... and we need to go incubate that for a bit and continue
working together while we determine the nature of that problem,
so we can express it as a thing that needs to be
standardised
... We're not ready for that. We can't stop, but the w3c
formality of a WG is not the right construct to have that
discussion
ken: I want to clarify what yancy
was saying first
... You feel like there's a dependency so that one needs to go
in front of the other so you can make decisions?
yancy: Yes I believe there's a dependency, and that DIDs are a main driver for the use of VC. A critical driver
ken: Joe, yours was about being torn in two different directions?
JoeAndrieu: Mostly I was trying
to ack where yancy was coming from, but that I saw it as
constructive
... At TPAC, your collective concerns about privacy led me to
support removing the id from presentation. I got that put in in
the first place. But your concerns about privacy convinced me
to change that
... The method point is constructive to the data model
<Zakim> burn, you wanted to say manu may be right that the timing could work out now
ken: there are differences and approaches that need to be resolved, and that's constructive?
JoeAndrieu: yes
burn: I don't know how much I need to say on this. I agree with manu that the timing might work out with us wrt completion of the current charter of this group with the DID WG. As long as we wrap up on the time schedule shown with the CR, and we don't need a second one that requires a lot of work. The DID WG is not forming tomorrow, or in a month, it takes time. However within a couple of months it could be, and we could be largely done.
Practically it may not be much of an issue. Unless something goes terribly wrong we probably won't need another face to face of this group, that would be my hope
drummond: for the reasons you
just articulated, and a couple more, I don't think there's
gonna be a conflict
... I think it's going to be a fairly smooth transition
... I look at this as a layering issue. DIDs operate at a
lower layer. They're an enabling tech for VCs
... VCs set up a killer use case
... so we're going down a level for a group that's going to
focus on that. Seems like the phasing is pretty good
... The truth is it won't really wrap up.. that is contingent
on it not taking more than one extension. Seems like there's a
whole lot of things that want that to be the case
... I don't think it's going to be that big of a deal. Plus
everyone who wants VCs to be successful wants DIDs to be
successful too
... we need to keep in mind that DIDs seem to have attracted a
much larger interest base than I ever expected
... there's a whole area of SSI generally that is really getting
exciting
... Seeing the whole decentralised PKI aspect of DIDs
... So that would also be a rising tide that helps float the
boats, and it will help float this boat
yancy: VCs have been around for a while and this is the second charter correct?
stonematt: it's the first charter as a WG, but we did work as a CG
yancy: the first implementation was based solely on CG?
burn: there was not a standards track WG. there was pre-standards track work. There was a task force. Just like the DID spec now
yancy: we have implemented
previous versions of VC and there hasn't been a lot of
traction. I think DIDs will help give VCs that traction.
There's going to be a lot of decisions that come out of the DID
WG that might affect the VCs
... that's the main argument I wanted to point out.
<Zakim> JoeAndrieu, you wanted to offer ccg as continuing forum
stonematt: I want to reinforce
both of what drummond and yancy just said. I think for us in
this community it's very important for us to stay engaged with
the DID work and to keep our voice clear and even if there's
dissent among us, unified outside of us
... Specifically because there's a broader community getting
interested in DIDs and we don't want to accidentally have an
evolution that has an outcome that makes VCs harder to
implement. I don't know what that means but we have to stay
engaged to make sure that doesn't happen
... To yancy's point, when we started this, I hoped this would
all be done in 2 years and there was a lot more. DIDs are the
secret sauce required to enable this technology
... I suspect there's another one after that
... It's going to be wallet management that's an ecosystem that
VCs eventually sit upon
... This is my education, my learning experience: we're
really really early movers and thinkers in this space and the
actual product in the market place will take some time
... There's a reliance lifecycle that we're moving through, from
cool new tech that thinktanks are thinking about, to early market
adoption, to can't-imagine-life-without-it. If you look at
GPS and mapping: when we all got our first smartphones we thought
it was kinda neat, and now we're looking for vegan ice cream in
Barcelona and it draws a line
... No-one thought twice about saying I'm gonna follow the map
and use all the technology that was there
... We're at the very beginning of having a blue dot on the
crummy map. We have to do a lot of steps to get to the point
where no-one recognises what went into it
yancy: I'm trying to avoid the situation where if we want to implement VCs and we want to choose zkp or jwt support, we're gonna have to choose which DID method to go with. Whether it's sovrin, ibm or whatever. Before the DID WG is complete, that causes a conflict
brent: I don't see the DID method determining what signature types you're allowed to use
JoeAndrieu: speaking as one of
the cochairs of the ccg, I want to anchor that we welcome
bringing conversations there, whether it's proposing work items
or asking for time on the agenda to work on VC issues after the
WG wraps up. We're all welcome there
... You're also welcome at RWOT to write papers about these
issues, and IIW
... each of those are more and more open
<Zakim> burn, you wanted to warn group about dropping off early
JoeAndrieu: Those are places the community can continue the conversation after the WG wraps
burn: to contradict myself from
earlier and irritate all of you. I do have to warn all of you
very strenuously not to drop off this group because you're
excited about the DID work. It is a common problem. Everyone
wants the new shiny. It's easy to look at a spec and think
we're done, but often at that point you still have a huge
amount of work.
... The webrtc spec is an example, it's still not done, and it
was 2 or 3 years ago they did feature freeze. But then they
added more.. and more.. and people got interested in other work
and they started working on 2.0. But there weren't many people
left to finish 1.0
... It's like putting out the next release of your product
before you've advertised the first one.
<Zakim> deiu, you wanted to mention separation of concerns and future proofing
deiu: I want to reiterate again
the importance of separation of concerns wrt the DID work
and the signature work. In its current form this wg has
produced 1/3 of the work required to make VCs useful and
valuable to companies today
... VCs without identity and proofs have very low value to
anyone who is trying to use VCs. Being future proof for this
particular spec means we have to create the DID WG as its own
entity, but also a group that handles signatures and proofs.
Merging them and tightly coupling them with the current spec
will make this current spec less strong or much more difficult
to be upgraded later when a new technology that is more
resilient, interesting or
nicer comes along and we want to use that as well
<manu> +1, yes, absolutely.
drummond: it seems there's an era
of VCs before protocol, and an era of VCs after or with
protocol, it seems inevitable
... I am a proponent of doing this work before protocol, but I
think that era is coming
... it will inform a lot of what needs to be done
... And to respond to yancy's point about the layering.. the
capabilities for DIDs and DID docs to support things above it
is independent
... You need what's necessary to support the protocol layer,
but they're independent layers
... you should be able to get support for what the protocol
needs from any DID method. Some DID methods are very
constrained, so you may have cases, but in more cases what
you'll have is as long as you support extensibility of DID docs
you'll get what you need to support high level protocol
credential exchange. Doesn't mean you'll get the same trust
guarantees, that's a big difference eg. from DNS, but that's
the way i've been approaching DIDs for a
long time
oliver: I have a similar point, to emphasise that the VC spec as it reads now is DID agnostic, it's even possible to use plain URLs instead of DIDs, so there's no real dependency, so you should be able to use any DID method with any proof or data format. In uPort's case we have the ethr DID method and it should be no problem to use ld-proofs or jwts
brent: I agree with drummond and oliver
DavidC: In our implementation we
don't currently use DIDs, it's important to keep that
separation
... The point about signatures and proofs.. if the proof has a
type, does that not give the flexibility you need to be able to
future proof with new types of proofs?
deiu: I'm saying the flexibility is there and we should maintain it
brent: what it seems like we're talking about is that we agree the data model itself is not enough. In order for the data model to succeed we need to build protocols. In order to do that we need to keep working together and stay in touch to make them interoperable. Once we've done that we can maybe take all these things trying to interoperate and make a 2.0 that describes formally how they are doing that
deiu: brent, your point at this
stage I wonder whether this group needs a recharter if we could
have the work that's still missing, that's piecing the puzzle
together based on DID and proofs work, could we have the CG be
in charge of maintaining and producing docs that show
implementers how to build VC systems using emerging technology
all the time
... so we have.. I don't want to say a living spec, it's not a
spec, but a guide to how we use those technologies together
with the spec that has the model definition to build the actual
system
... a VC 2.0 could be obsolete by the time it's out
<Zakim> burn, you wanted to start wrap up and to mention impl guide
<JoeAndrieu> +1000 for CCG to write/publish guides & best practices (proposals and leads welcome)
burn: We already have an implementation guide and we talked about that being something we need to start, but the CG will want to take that and go beyond whatever we write in this group. What you said fits into that. There's implementation, but it's not just implementation. When the privacy interest group came and joined us at tpac, they asked us all the hard questions about how we intended it to be used. They said just write what you're thinking.
There's an awful lot that can be written that's not ready for spec yet and may never be. The CG is a great place for that, just as it is a great place to incubate work that may go standards track
scribe: I want to encourage us to
wrap this up now
... If we get some conclusions out of this this is nice. This
is one of the few sessions where we don't need a decision,
this is a discussion about what are the factors.
yancy: I want to second the 2
comments made that VC 2.0 could be obsolete due to the work
coming out of the DID WG, and I understand you can use
URLs
... but I do think that the work coming out of the DID WG is
going to be the secret sauce that makes VCs worthwhile
drummond: I want to be clear what steps are left and what should any of us around this table be doing to help the DID WG happen?
<ken> I agree with Yancy's comments
drummond: I know manu you're close to the chartering process. what timeframe should we expect? People keep asking
<Zakim> burn, you wanted to remind that this is VC 1.0
burn: I heard yancy refer to our current work as VC 2.0 and I want to be very clear, this is VC 1.0. Everything before that was VC 0.
<Zakim> manu, you wanted to respond to drummond
<kaz> DID WG draft charter
manu: We don't know, it's a black box. It's over the wall with w3m, we have no assigned staff contact. Anything I hear from wendy that is okay to relay, I relay. What went out was a heads-up to the W3C AC. Now the members know it's coming, they can look at the charter, we're gathering feedback. There was a workshop, the conclusion was let's do a DID WG. Or let's float a charter that can be worked on. I expect it's gonna take two more months. Once
the actual charter goes out to vote, we'll all have to pitch in and get votes for that charter
scribe: We have access to all of
the AC reps, reach out to any you know. Go down every single
company and see if the technologies would be interesting and
write them directly, say you think it's going to be useful
technology because of xyz, if we can split the list up and
write personal direct emails to AC reps. We already have 65
commitments to vote in favour, but it doesn't hurt to remind
everyone the charter is out for review.
... When the charter goes out for official vote, all of us
should mobilise
drummond: if the vote is successful, how long?
manu: a month to two, it depends
on if a staff contact is ready
... I'm pretty sure wendy is lining that up
... the vote is a month and a half, and then maybe another 30
to 60 days before the first official meeting of the group.
Could be like May/June timeframe
burn: Any last comments?
dmitriz: for some of us this is
the main event.
... This is the source of much of the argument and CR blocking
issues behind the scenes of the spec
... Soooo what is a context? It is an attribute.. we have two
serialisations right now, the JSON and the JSON-LD
... in the form of a JSON web token
... The thing to note about them is that they are 1-1
mappable
... there is a nice set of steps provided by oliver and team,
pre and post processing, that shows how to translate from the
JSON-LD context attributes to the JWT attributes. There are
predefined ones that are one to one and everything else gets
stuffed into the VC attribute
... first of all I want to point out that even though there's
this major philosophical rift that we have two syntaxes in
implementation, it's not that bad, it's almost trivial. That's
outside of the question of the context
drummond: it's 100% lossless, exactly?
dmitriz: yes, that's the
difference between the syntax
... some signatures require canonicalisation, some do not. The
difference between the syntaxes is orthogonal to the
cryptographic signature method. You can use zkps with either
one, you could use ld sigs with either one
... there you get into a canonicalisation problem, but that
does not touch on the difference between the two
syntaxes.
... The issue now is the context attribute
... It's an array
... When translating between two syntaxes, where does the
context go? It doesn't really belong in the JWT.. in the
standard attributes
... That's fine.
... The JWT format explicitly provisions for this, because they
understand that all actual implementations will add
app-specific attributes; we're in the process of registering
the vc ?? with iana
... In translating between the syntaxes, the context attribute
gets put into the VC claim with all the other credential
stuff like type, subject
brent: is this how it's happening now?
dmitriz: that's what's in the spec right now
manu: this is what is happening right now
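The pre/post-processing translation described here might be sketched roughly as follows; the two-entry REGISTERED table and the example credential are illustrative assumptions, not the spec's full claim mapping:

```python
# Illustrative sketch of the JSON <-> JWT-claims mapping discussed above.
# A couple of VC properties map to registered JWT claims; everything
# else (including @context) is carried inside a "vc" claim.

REGISTERED = {      # VC property -> registered JWT claim (assumed subset)
    "issuer": "iss",
    "id": "jti",
}

def vc_to_jwt_claims(credential: dict) -> dict:
    claims, vc = {}, {}
    for key, value in credential.items():
        if key in REGISTERED:
            claims[REGISTERED[key]] = value
        else:
            vc[key] = value  # @context, type, credentialSubject, ...
    claims["vc"] = vc
    return claims

def jwt_claims_to_vc(claims: dict) -> dict:
    credential = dict(claims["vc"])
    for prop, jwt_name in REGISTERED.items():
        if jwt_name in claims:
            credential[prop] = claims[jwt_name]
    return credential

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "id": "urn:uuid:0001",
    "credentialSubject": {"alumniOf": "Example University"},
}
claims = vc_to_jwt_claims(credential)
```

Because nothing is dropped in either direction, the round trip is lossless, which is the "1-1 mappable" point made earlier.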
dmitriz: Why are we doing this?
what is context for?
... it's to prevent global attribute collisions. We're going to
have interoperable verifiable CREDENTIALs, what my app means by
'name' will, if we just go with the typical JSON style claims,
there's a potential for name collision. What my app means by
'name' might be different by what your app means by 'name'. You
might mean first name, I mean something else
... There are ways to make json attributes in the credentials
globally unique and collision resistant without reducing
developer experience. No 'name-dmitrizapp' prefixes needed. It's
an easy way, just adding the attribute, to not step on each
others' toes
brent: and to not step on our own toes
dmitriz: you also write docs for
yourself, YES
... That attribute there is for interop.
... What's the action item?
... We're asking to clarify the spec. It was assumed in the
spec but it was not read that way by implementers, so we're
asking to explicitly mark the context attribute as required
... What we're not asking is that anybody has to use JSONLD
processors, or any sort of library stack, aka JWT developers
can continue what they're doing without changing tooling
... The technical reason? If we include this context, we have
machine processable interop between two syntaxes
... What are the downsides? The main reason put forth against
it is that it locks us into having to use exotic libraries.
Or it's a philosophical battle
... it doesn't have to be a battle. It's just an attribute, you
don't have to use other libraries
... The main argument is political. But are there actual valid
reasons against it as developers?
... The two things.. what's the dev experience going to be
like? What is it going to look like to develop with VCs? And
what about storage?
... There are ways around these issues
... How do developers know what to put in the context
array?
... Will developers have to create their own vocabs, their own
schemas, where will they publish them?
... We have massive experience in the field, eg. from google,
of teaching all the devs in the world, who profoundly don't
care, how to use context. They will cut and
paste from examples. From stackoverflow, better yet from our
guide
... Usually from app-specific guides. The educational
credential developers will be copypasting from those specs and
articles
... It can be even easier. We can add some text about it in the
implementation docs. We can in our vast spare time make some
tooling and graphical wizards to make it even easier.
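The collision-resistance argument above might be illustrated with a toy sketch like this; the context URLs, the in-memory CONTEXTS table, and the vocab IRIs are all invented for illustration (a real stack would use a JSON-LD processor instead):

```python
# Toy sketch: why @context makes short JSON keys collision resistant.
# Two apps both use the key "name", but their contexts map it to
# different globally unique IRIs, so the meanings never collide.
# All URLs and IRIs below are made up for the example.

CONTEXTS = {
    "https://example.edu/contexts/alumni/v1": {
        "name": "https://example.edu/vocab#fullName",
    },
    "https://example.org/contexts/hr/v1": {
        "name": "https://example.org/vocab#givenName",
    },
}

def expand_term(credential: dict, term: str) -> str:
    """Map a short JSON key to the IRI its context defines."""
    for ctx_url in credential["@context"]:
        mapping = CONTEXTS.get(ctx_url, {})
        if term in mapping:
            return mapping[term]
    return term  # no mapping found: the term stays ambiguous

alumni_vc = {"@context": ["https://example.edu/contexts/alumni/v1"],
             "name": "Alice Liddell"}
hr_vc = {"@context": ["https://example.org/contexts/hr/v1"],
         "name": "Alice"}
```

Both credentials use the developer-friendly key "name", yet each expands to a different IRI, so neither app steps on the other's toes.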
jonnycrunch: I've filed the issue about security concerns, the MITM attacks, and we've talked about the mitigation. I want to add .. I have a demo to do DNS spoofing. I just want to voice that legitimate concern. Even if it's mitigated with caching, the first-time request is still an issue. Implementers will cache it, but the first time you're going out to get it, the guy in a boat who is the dive instructor can subvert that and redirect you to my
own context. And most crypto libraries validate the empty string as true; you should have smart code that prevents that, but all I would need to do in the context is to redirect you to an override
DavidC: a practical question. In the context we've got these URIs for the types. in type we say VerifiableCredential, which isn't a URI because the URI is in the context. You said the devs will copy and paste, and will extend it to their own application use. so they need a new type. Now they're gonna know what to do, see the strings and add their own strings on the end, but of course it won't work because the context doesn't have their string in it, so
it won't work
scribe: is it possible to cut and
paste the context, totally ignore it, then put URIs in the
type. So instead of having VerifiableCredential, put the
URI
... so even though the context says you can replace the URI
with the string to save you typing, you can just ignore it and
put the full URIs in, and then the implementors will see
they're all URIs and put their own URIs in there
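The suggestion above could be sketched as a toy check like the following; base_context_terms, the helper name, and the example extension URIs are hypothetical:

```python
# Toy sketch of the full-URI-in-type suggestion: the copied base context
# defines the short name "VerifiableCredential", and a custom type the
# context doesn't know about is written as a full URI instead of a bare
# string, so no new context entry is needed. URIs are invented examples.

base_context_terms = {"VerifiableCredential"}  # terms the copied context defines

def type_is_resolvable(type_entry: str) -> bool:
    """A type resolves if the context defines it or it is already a full URI."""
    return type_entry in base_context_terms or "://" in type_entry

credential_types = [
    "VerifiableCredential",                     # short name, defined by context
    "https://example.org/vocab#DivingLicence",  # extension type as full URI: fine
    "DivingLicence",                            # bare string, not in context: breaks
]
```

The last entry is exactly the copy-paste failure mode DavidC describes: a bare string the context has never heard of.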
dmitriz: very good point, we
should consider opening an issue to tweak the examples
... Storage requirement. You may hear in the arguments against
context: if I'm storing billions of VCs (a great problem to
have), what about the literal extra bytes that the context
attributes will have. By the time you have a sophisticated
application with billions of VCs, there are ways to not
actually have to store the context but to add it on
serialisation based on your own application logic. Storage is
cheap and in the very valid cases where you
just don't want to pay for those extra bytes there are ways around it. Storage should not be a dealbreaker for requiring the context of the output, of the serialisation
scribe: What about MITM
attacks?
... We have this array of contexts, they are URLs, which we
fetch over HTTP
... what about the server hosting the context being
compromised?
... This is a solvable problem in ipfs, via hashlinks
... we have recommendations on how to link to contexts in
verifiable ways
... For even simpler experience, embedding and versioning the
context. You can npm install the context as a dependency and
refer to the context locally
... But please understand that for the very security conscious,
cryptographically locking it is possible
... We're asking to mark the attribute as required in the
spec.
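The "cryptographically locking" idea might be sketched as a simple digest pin over a locally bundled copy of the context; the context bytes and the pinning scheme here are invented for illustration, not the hashlink spec's actual mechanism:

```python
import hashlib

# Toy sketch of integrity-protecting a context against MITM/tampering:
# bundle the context bytes with the app and pin their SHA-256 digest at
# build time, so a swapped context is detected before it is ever used.
# The context body below is a made-up stand-in for a real published one.

cached_context = b'{"@context": {"name": "https://example.org/vocab#name"}}'
PINNED_DIGEST = hashlib.sha256(cached_context).hexdigest()  # recorded at build time

def load_context(raw: bytes) -> bytes:
    """Refuse to use context bytes whose digest does not match the pin."""
    if hashlib.sha256(raw).hexdigest() != PINNED_DIGEST:
        raise ValueError("context integrity check failed")
    return raw
```

A first-time fetch guarded this way no longer trusts the transport: even a spoofed DNS answer would deliver bytes whose digest fails the pin.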
ken: The immutability issue,
we're trying to address that already with hashlink etc. There
are a couple of ways we can address that, I agree with jonny's
concern
... I'm gonna call it turtles all the way down.
... That's one of the problems we run into with some of the
contexts, some include others, which include others.. it's not
an insurmountable problem but some people think oh I'm just
including the VC context, which includes other contexts, and
then I need to create my own context side by side with it.
There's a referential problem that makes this more complex than
just including the VC context. I'm not opposed to having it
there, but it's a bigger
camel than we look at on the surface
scribe: Other issue is tools. In
order to process all the turtles we need tooling.
... Some type of tooling or libraries support will be required
to properly process the context. I don't object to coming up
with a solution but we need to be aware of it, then chances of
adoption will go up
... If we require a context to be included as an outward
gesture of including the context, but then I ignore everything
in it, then you don't really accomplish the purpose, which is to
establish the meaning of the words. Just including it and then
violating everything that's in it is misleading and
dangerous
... If I'm going to include a context with all the definitions
and then deviate from it, I put up a false flag about what I claim
to have followed and haven't actually followed
dmitriz: if we teach the
developers to do this, we point out it makes attribute name
collision resistant.. to make it clear we should address how
you should take advantage of the tooling, but that is
optional
... We need to validate so that, for members of the ecosystem who
*do* want to perform jsonld processing, it doesn't break their
stuff
<drummond> Ken just covered the point I was going to make, which is that if we require the context, but developers don't actually know why it's there, they are going to ignore it and thus create invalid JSON-LD.
oliver: I agree with jonny's
concern but it was addressed multiple times. I'm not sure what
the solution looks like, or which spec it's in
... Another thing, do we assume that contexts can be hosted on
any webserver. Isn't there a problem that we have a dependency
on the availability of the servers? Do verifiers have a choice?
You can always cache them, that's true.
dmitriz: I want to make offline
capable apps. I want apps to use VCs and not have to fetch them
from the net
... it's possible today.
oliver: some people might not be able to connect to any type of external web service. There are closed loop systems.
<Zakim> manu, you wanted to note that we only need two implementers to support @context, and we have many more than that now. and to note some of the discussion we are having assumes the
manu: Some of the discussion we're having assumes that we've already said we're going to put context in there. And this discussion is about whether we're going to put it in there at all. The other discussions are secondary to whether we make context mandatory. The other point is that we only need two implementers to put context in the spec. If it says MUST we only need two to implement it. Right now, are there going to be a whole bunch of implementers who
object to it being in the spec at all? Then it becomes who is going to object more vigorously to having it in vs not having it in. The proposal on the table is do we put context in, are there two implementers who are going to support it (I think we will)
scribe: Then the question is how
many objections are there gonna be to that
... are the objections to it being mandatory going to be more
than the objections for it not being mandatory
... If one person objects to the MUST and 8 object to it not
being MUST, it goes in the spec
... Consensus means the least number of objections
jonnycrunch: I love JSON-LD, it's
lightweight, it really helps with interop. Ultimately it's the
turtles all the way down issue
... Ultimately in ipld we implement this as a hashlink, in ipfs
it's a /
... it's a hash reference. It's this rooting problem.. it's
turtles all the way down until it's what tim says
... I think it's a phenomenal way to get on the same page, but
I have some concerns about the way we get there. I don't want
to push an IPFS solution, but IPLD is taking JSON-LD and taking
it to the next level which is hash based linking
... I know there are concerns about IPFS or IPLD, it's all CC
license
burn: jonny is not currently an
IE or Member. Keep that in mind wrt this suggestion
... Not in any way demeaning what he's saying, but members must
understand this
dmitriz: Personally I'm really grateful for IPLDs experience in this
brent: requiring that it goes in
does not mean anybody has to use it. The goal of the inclusion of
the context is to enable a shared vocab and enable
interoperability
... Perhaps opposition to including it is that as a solution to
interop it may be a solution that doesn't work for some
people
... Saying you have to put it in might be detrimental to
interop, the way they would go about being interoperable wouldn't
be through the context. Just a thought?
<Zakim> rgrant, you wanted to address specific overrides and to ask if contexts are hashed and to ask what consensus is
brent: There's this @context thing and everyone assumes this is how interop will happen but someone needs to do it a different way (hypothetical)
rgrant: I want to ask about specific overrides. I think it is useful if there is a context, and it's okay if specific terms have overrides via URI because you can look it up. It's like importing a module, specifying the name of the module, and then fully specifying it, in different programming languages. Each call can be traced; in this case the meaning of each term can be traced to a parent context or a URI. I think it is useful and not a
problem.
scribe: I want to ask if contexts
are hashed? If they are then we know after 1 web hit whether
the rest need to be followed. This would mean a change in how
context is listed in the document. You would say here's my
context and here's the hash, the deterministic normalised thing
that I would sign, after getting it
... Is this happening? I'm not aware of it happening. Is it
part of the background?
manu: that's what the hashlink spec is about. It's experimental, we need to further it
brent: it could be part of the dereferencing but it's not necessarily
manu: that's an option open to people who want to use it
rgrant: does this data model say when you have it here's where to put it?
manu: that's a different spec
<Zakim> drummond, you wanted to make a suggestion (at or near the conclusion of this discussion)
rgrant: so there's a way to do it? it's not precluded by our inaction?
everyone: correct
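The hashlink idea discussed above can be sketched in miniature: content-address the context document with a cryptographic hash, then verify any fetched (or cached) copy against the pinned digest before trusting it. This is only the underlying principle; the hashlink spec itself is experimental, and the function names and the sample context below are invented for illustration.

```python
import hashlib

def sha256_hex(document_bytes):
    # Deterministic digest of the exact bytes of a context document.
    return hashlib.sha256(document_bytes).hexdigest()

def verify_context(document_bytes, pinned_hash):
    # If the fetched or cached context doesn't match the pinned digest,
    # treat it as tampered and refuse to process the credential.
    return sha256_hex(document_bytes) == pinned_hash

# Hypothetical context document and its digest, recorded at publish time.
context = b'{"@context": {"name": "https://schema.org/name"}}'
pinned = sha256_hex(context)

assert verify_context(context, pinned)
assert not verify_context(b'{"@context": {}}', pinned)
```

With the digest pinned, the "one web hit" question answers itself: if the first fetch matches the hash, no further dereferencing is needed to trust the bytes.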
rgrant: what is consensus? It's not about votes, it's about valid technical concerns
drummond: the discussion here in this room, we need to capture it in the implementation guide; ten thousand devs will ask this question. It's a critical point of understanding. All the considerations about security issues, hashlinks, etc, they need to be in there. If it's mandatory and developers cut and paste and ignore it, are we perpetuating a problem of a bunch of broken JSON-LD documents that don't actually follow it? If the implementation
guide says declare URIs in there, it's a good solution, not a problem
scribe: At RWOT we specifically
said there's going to be a specific syntax for addressing
hashlink in the context of a DID
... cool, huh
<Zakim> burn, you wanted to comment on manu's explanation of W3C process
DavidC: If someone votes against it and says we don't need it, we have to say: how do you solve the interoperability, what is your solution? If they have a good technical solution then we can say contexts are a MAY
burn: the discussion should have been framed the way manu said, whether we're going to do this or not.
<jonnycrunch> Would you please invite me to join the working group as an invited expert. I just went to sign the disclosure and says that I am "not authorized to join" and I have "bogus category" jonnycrunch@me.com
burn: Wrt consensus. manu is correct, he explained the end of the process. The goal at w3c is to get consensus. The question is what do you do when there is not consensus. Every organisation has this problem. Ultimately you need to understand what happens when you have a disagreement even when you have tried everything. Ultimately the director makes a decision. That is considered a failure of consensus. That is a black mark on an effort, to have that
happen
scribe: We try really hard not to
do that. It still happens sometimes
... manu is correct, he can absolutely encourage that, the
chairs can say we don't see any other way forward than to do
that, and let objections fall where they may. I was hopeful
from conversations last night at dinner that we were not going
to go down this road.
manu: I'm not saying let's put this to a vote. i misspoke
burn: we do try to get consensus, the reason we discuss it for so long is to get consensus. Sometimes you get pushed on time and energy and it falls to what manu said. In the end, that's what happens when you fail to get consensus. Every group in w3c has a definition; ultimately a failure results in what he said
yancy: about deeply nested links. It seems like a classic parser/compiler problem, which isn't necessarily unsolvable, but fetching contexts iteratively could be a concern. Although if there's a way to cache it so it works offline then it's not a major issue
dmitriz: we have very easy precedent for exactly that: Gemfile.lock, Maven, npm lock files
<Zakim> ken, you wanted to discuss the bottom of the turtles is possible
dmitriz: this question of: I have a deeply nested structure, it could change underneath me. That's why we have package locks; we have a lot of experience with infra, including caching, to deal with this.
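dmitriz's lockfile analogy above can be sketched concretely: pin each context URL to the digest of the document that was reviewed, the way package-lock.json or Gemfile.lock pins dependency versions, so a changed remote document is detected rather than silently used. The file layout and names here are made up for illustration, not from any spec.

```python
import hashlib
import json

def make_lockfile(resolved):
    # resolved: mapping of context URL -> the exact bytes reviewed.
    # The lockfile records a digest per URL, like a dependency lockfile.
    lock = {url: hashlib.sha256(body).hexdigest()
            for url, body in resolved.items()}
    return json.dumps(lock, indent=2, sort_keys=True)

def check_against_lockfile(lockfile, url, body):
    # True only if the document at this URL still matches what was pinned.
    lock = json.loads(lockfile)
    return lock.get(url) == hashlib.sha256(body).hexdigest()

ctx = b'{"@context": {"@version": 1.1}}'
lockfile = make_lockfile({"https://example.org/contexts/v1": ctx})

assert check_against_lockfile(lockfile, "https://example.org/contexts/v1", ctx)
assert not check_against_lockfile(lockfile, "https://example.org/contexts/v1", b"{}")
```

This also answers yancy's offline concern: once the lockfile and the pinned documents are cached locally, no iterative fetching is needed at all.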
ken: the fact that I said it's
turtles all the way down.. there is actually ground at the
bottom. I have walked the turtle chain. I've been down the
whole way, there is an end to it. It's just more complex than
the simplest approach.
... it's not an infinite tree
<Zakim> manu, you wanted to note Lighthouse and to clarify that this is about technical issues
ken: the second thing is that the fact I brought up 4 concerns may be perceived as meaning I oppose the context. I do not oppose the context. I think we have reasonable solutions to 3 out of the 4; I think we should press forward, and making it mandatory is a good idea
manu: i was NOT suggesting we put
it to a vote
... It is about technical merit. I am not hearing the reason
why we shouldn't do it from a technical standpoint; what is the
technical argument against doing it? dmitriz went through a
number of them. What is the argument against it? If we don't
have a technical argument against it, the objection has no
technical merit
... For those of you with chromium.. we've been talking about
dev tooling. Ctrl+shift+J, click on the audit tab. It's a
project called Lighthouse, built into your web browsers. They've
built JSON-LD support in. It'll be in 2.6 million people's
hands
... the reason they're pulling in jsonld.js is because the
schema.org folks want to do full json-ld processing. They found
it useful.
... Developers were copy-pasting from schema.org to get it to show
up in the search rankings, people were getting it wrong, and
now this tooling helps people extend it in the right way. We
can build on top of this tooling
stonematt: The technical merit of
the objection to making context required. Asking the group to
respond: we have not heard a strong voice or advocate for not
making it required, other than for some perceived future
objection purpose
... Anyone in the room who would clearly object to making it
required?
... Who will object? We need to make a decision, we have very
strong advocacy for it should be required with technical
reasons. we don't want to shut down the discussion, but we need
the voice of dissent to be in the room
... Can we get that?
DavidC: the added complexity of
it, rather than just simply using URLs. We didn't use context.
If you go back through X.500 and LDAP, they had the exact same
problem of interop
... but I recognise the value in using it. But I didn't know
about the multiple levels of depth of fetching and
retrieving
ken: it's not insurmountable, and it's not as deep as one might think
manu: it's 2 deep right now
burn: are you making a formal objection?
DavidC: no!
burn: if you have dissent now is the time to own up to it
oliver: I raised some concerns earlier, but I'm really not opposed.. we are fine with adding the context as mandatory. But do you think it makes sense to provide a section in the spec to say it's fine if you don't really validate or use the context?
dmitriz: we're making it required at the spec level. In the implementation guidance: it's required to include it, but you don't have to take advantage of it
drummond: if there's a direct pointer to the implementation guide, because it's such a deep issue.. if someone chooses to ignore it they're going to ignore lots of other things; if that's there it takes a lot of that problem away
burn: remember this when we get to the test suite
Ned: I would like to understand what the security assumptions are.. we're making assumptions about the security of the cache
<Zakim> manu, you wanted to "you don't hae to process it"
manu: we're dealing with stuff that's pretty new, experimentally. We know that we can do a cryptographic hash over the context. Implementations can hardcode the hash so they'll never go out to the network. The spec could say this is the hash, but we can't do it now because it's too new. Eventually specs can say this is the hash of the context, and implementations can include the context by hash, so your system never has to go out to the network even once.
It's not necessarily a caching thing. The idea here is that if your implementation includes the context file and knows the hash, it never has to go out to the network and there's no MITM attack
scribe: That is one way of solving it
Ned: there are other scenarios, local attacks on caches
stonematt: this seems like.. we're assuming other standards get it right
dmitriz: we can note it in the implementation guidance security section
manu: to oliver, you don't have
to process it. JSON-LD 1.1 is being written such that JSON devs
never have to install JSON-LD tooling, never have to fetch
from the network
... the minor caveat is that you can't just extend it in
whatever way you want and expect to be compatible with the rest
of the ecosystem that is using JSON-LD. You can put it in and
ignore it, but you're not going to interop with other people
who do that. If you just hardcode the values in the context,
there's an implementation guide for an extended context, and you
hardcode the values, that's fine. You never have to use the
tooling or processing.
You fall back to what JSON devs fall back to: that there's documentation out there
oliver: it's more a problem of the clients. uPort is a generic platform, you can put any data you want in there. They might have to take care of that. We can't anticipate what schemas they will use.
burn: it would be good for oliver to come back to the group and say what is still not clear
ken: I want to second what manu said. And if you don't actually verify what's in the context and you just put it in there, it's still valid, but your root of trust, you're using out of band context to rely on what fields should be named what. You're weakening the credential, it's for the issuer to take that on
brent: the discussion of hashing the context - there are even more ways to guarantee the immutability of the context than a hashed file
burn: the question still stands..
we can proceed with making it required. Unless and until we
receive a direct objection, at which point we will be expecting
along with the objection the reasons for it.
... And in particular the technical reasons for it
... any objections to *that*?
manu: can we do a formal proposal and approval?
burn: do +1 or -1 in irc, and if you are not in IRC raise your hand
<brent> +1 to zakim needing to keep up
manu: we haven't said anything about the value options yet, we haven't said anything about hashlinks in the spec, we can't yet
jonnycrunch: there is text I had objection to that I filed issues to
manu: different issue
jonnycrunch: I filed issues in JSON-LD and DID about the same issues.. those are my reservations because you're committing down this path to documentation that already says it must be a URL
manu: I understand what you're saying, I feel like it's a different issue
jonnycrunch: it's interrelated, it's already in the spec what the value is
manu: you're suggesting it should be a js object, they're not interrelated
jonnycrunch: my concerns are well documented
manu: noted, it's not this issue
burn: the complaint about saying
the current spec does not solve a problem is acceptable.
Proposing a specific solution is not [for a non-member]
... what I don't want to do is kick the can down the road
... any other questions?
<stonematt> PROPOSAL: Make the expression of the @context property mandatory in the data model
<brent> +1
<burn> +1
<manu> +
<drummond> +1
<JoeAndrieu> +1
<manu> +1
+1
<dmitriz> +1
<stonematt> +1
<DavidC> +1
<ken> +1
<deiu> +1
<kaz> +1
<grantnoble> +1
<oliver> +0
<Ned> +0
RESOLUTION: Make the expression of the @context property mandatory in the data model
<burn> Visitor/guest opinions:
<pmcb55> +1
<jonnycrunch> +1
<rgrant> +1
yancy: I'm neutral on this because obviously you can include it but not use it. I do think it's a bit.. members saying they're okay to include it but they're not going to use it is tantamount to saying they don't necessarily agree with it being in the spec. It's a weak endorsement
<yancy> +0
burn: any other questions or comments?
*** applause ***
** httpRange-14 jokes **
<kaz> [break till 12:30]
<scribe> scribenick: yancy
<inserted> scribenick: Yancy_
brent: how to do progressive trust
<rhiaro> scribenick: Yancy_
brent: we know we need to open an
issue in the guidance repo
... still needs to be done
... ping has been pinged again
davidc: we've sent the
questions
... what has been verified is the new payments program
<Yancy__> brent: can we close this?
<Yancy__> burn: doesn't see a problem with closing
<Yancy__> brent: has this been done?
<Yancy__> manu: security context hasn't been locked down and it needs to be
<Yancy__> ... need to put a jsonld context and vocab in the context
<Yancy__> brent: what else needs to be done to move forward?
<Yancy__> johnnyc: how do you deal with future schemes
<Yancy__> manu: they will continue to publish new ones
<Yancy__> brent: doesn't disagree that it's not an issue but no time to raise a different issue
<rhiaro> scribenick: Yancy__
brent: wasn't marked as editorial
burn: please check through all your sections
ken: will volunteer to read and
look
... volunteers Amy
davidc: has gone through normative sections
burn: worried something big is coming
davidc: there are a number of issues that have been talked about
<inserted> (Charles joins)
burn: requests name and animal from random person
brent: ken is going to join Amy
to make sure normative text not in normative sections
... a PR associated has been raised and merged
... some language has been added to the verification
section
... doesn't know how normative the section is because of
protocol
manu: I forget if that has made
it in
... agrees that it is editorial
... Amy gets another issue assigned to her
Brent: next one, we're doing
great
... there were some changes to a PR that went in.
... that's my summary
manu: suggests strongly that
this PR is closed
... someone else should review it
... modified section
burn: to review and close
brent: this issue 414 is
mine
... if anyone else wants to take a look to see if i'm on the
right track
... only PR that has Brents name on it
... kind of owns this one
... should there be a holder ID in the presentation
<burn> ?
davidc: there should not be a holder ID in the presentation
davidc: the proof may be missing due to conflict in text
brent: doesn't say proof section itself is mandatory
manu: dave longley says we should do it but hasn't figured out why
brent: there is a different section where the verifier could hold the ID
manu: you link to a key which links to the holder
davidc: wants to go back to the
text
... reads the property must be present
stone: thought that was the definition, that this was verifiable
davidc: it was
brent: if that's the problem we need a new issue
davidc: the suggestion was we create one issue about the proof
brent: david l will look at 419
and look at it
... 422 there is no refresh service in the vocabulary
manu: will take that and add it
brent: hopefully before it's
immutable
... the resolution of the url that says example.com doesn't
resolve to an actual resource
... was pointed out they should actually resolve to
something
... we've already talked about the url doesn't point
anywhere
davidc: there's a hanging term in
the context
... it has to be in a context and it isn't
johnny: shows that in the example
section there is a context
... must be an array of strings
... this should be a way of extending to the future
Brent: this issue is about the example context
manu: thinks this should be fixed
<Zakim> manu, you wanted to discuss example contexts not being retrievable
Johnny: asks how to resolve
manu: we have an example context
checked into repo which is loaded by the document loader
... that is used in the test-suite which can't be put in
example.org
... anyway there is a website out there called example.org
brent: the rest of the issue is that if it's copied and pasted into a browser it goes no place
manu: we talked about using
schema.org
... doesn't feel like there is much that we can do
chaals: what is the term?
... which says everything can be an array of strings
<gannan> a- chaals
brent: will we close this or leave it as is
manu: will volunteer to say why it should be closed
brent: says we should close it
johnny: has reservations
<rhiaro> scribenick: rhiaro
chaals: I suggest we change the
examples to a thing we can control and resolve. Github
pages?
... it's editorial, since these are examples, but it's a useful
thing to do
... I can volunteer to do it
burn: quick check that the group agrees
<Zakim> manu, you wanted to say this is a multifaceted issue.
<Yancy_> rhiaro I can pick back up after manu
manu: this is a multifaceted
problem. We should do that, but it does not address
jonnycrunch's problem, and there are 3 other issues we haven't
addressed that are related
... Let's do this, I think it should go at our w3c url. We have
a /vc/example/v1 that will always resolve
... We change the example url to that
... all of the examples in the spec are valid and the json-ld
processor will handle them
<scribe> scribenick: Yancy_
gannan: for the issue of moving this along is it to act
manu: lets capture the
issue
... the list of urls is wrong because we need to support
objects
... we need to allocate and publish jsonld
... thinks the others are resolved because of that, and says
two minus-ones because of added complexity
burn: we only have two issues here
manu: minus one to using schema.org
<rhiaro> scribenick: rhiaro
brent: the list of URLs one is a
functional change, manu will do that
... 429, we're expecting 2 PRs
... 430, multiple proofs in the proofs section
... I say this is editorial
manu: Correct, I'll take it
<inserted> scribenick: Yancy__
brent: we have three more slides
<rhiaro> scribenick: Yancy__
brent: is it mandatory or not
manu: someone needs to write a pr to specify why ids are required or not
brent: davidc will create a pr to
address this issue
... one of the zkp section PRs will be announced
... there are a number of language or base direction issues that
require metadata
... wants his comment first
... a VC is issued and the audience is not global
... the information in the credential may be distributed
further by the holder
chaals: all user facing text is
global
... needs to be able to handle two languages and needs to be
able to tag that information
... can't do graphics and not do accessibility
manu: working on direction and
language
... it may be good to have something in the spec for how to
do it
... coordinate on how to address this in 1.1
burn: jsonld is not the only
syntax we have
... must be handled
ken: there are some international
things that can happen
... what is crypto-signed must be assured
... that needs to be communicated
brent: what I was hoping to get
at in my initial comment
... understand that the data model must support this
<Zakim> manu, you wanted to note the problem
brent: what needs to change in the spec to support
manu: nothing
... does not support text direction
... in the period of time when that gets into jsonld 1.1
chaals: make your own random property
manu: we can
... now we do it for all the w3c
chaals thinks you can make a proposal
chaals: it's ugly
manu: we need an example in the
json section for how to do language and text direction
... also this opens a can of worms that is not supported in
json
... this is the whole reason we said to use jsonld in the
beginning
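The language and direction point above can be illustrated with a JSON-LD 1.1-style value object: `@direction` was added in JSON-LD 1.1 (it is not in 1.0, which is manu's point), and a consumer needs to accept both bare strings and value objects. The claim values here are invented examples, not from the spec.

```python
# A hypothetical claim carrying per-value language and base direction,
# in the JSON-LD 1.1 value-object style.
claim = {
    "name": {
        "@value": "مثال",       # invented Arabic example value
        "@language": "ar",
        "@direction": "rtl",
    }
}

def extract_text(value):
    # Accept either a bare JSON string or a JSON-LD value object,
    # returning (text, language, direction); None when untagged.
    if isinstance(value, dict):
        return (value.get("@value"),
                value.get("@language"),
                value.get("@direction"))
    return value, None, None

assert extract_text(claim["name"]) == ("مثال", "ar", "rtl")
assert extract_text("plain string") == ("plain string", None, None)
```

A plain-JSON consumer that treats the value object as opaque still round-trips it; only display logic needs the tags, which is the accessibility concern chaals raises.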
burn: this falls in the category
of things we must do
... could become a very long conversation
... is hopeful
brent: expects two prs
... one from manu and one from chaals
... dates defined in iso 8601
... may not be sufficient
... points out that practical use of the dates may be
problematic
burn: we need to be clear on this issue for what we must do and what we encourage
chaals: has a solution it's not
iso 8601
... everyone uses it and it works
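The practical gap being discussed can be shown in a few lines: VC date fields are ISO 8601 strings, but common tooling parses only a subset of ISO 8601, so consumers need a deterministic normalization step. This sketch uses Python's stdlib; the trailing-`Z` normalization keeps it working on versions whose `datetime.fromisoformat` does not accept `Z`.

```python
from datetime import datetime, timezone

def parse_vc_datetime(value):
    # Normalize the RFC 3339 "Z" suffix to an explicit UTC offset so
    # datetime.fromisoformat accepts it on older Python versions too.
    if value.endswith("Z"):
        value = value[:-1] + "+00:00"
    return datetime.fromisoformat(value)

issued = parse_vc_datetime("2019-01-01T19:23:24Z")
assert issued.tzinfo is not None
assert issued == datetime(2019, 1, 1, 19, 23, 24, tzinfo=timezone.utc)
```

Timezone-aware parsing also makes comparisons (e.g. issuance before expiration) well defined, which is the kind of interop detail the implementation guide could spell out.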
oliver: yesterday we discussed
the tradeoff sections
... do we want to record the decision we did yesterday
brent: just create an issue
oliver: wants manu to be the issue creator
manu: we need it in the appendix
for non-normative text
... separate appendix for jsonld vs jsonld proofs
oliver: either appendix or implementors guidance
manu: prefers to keep it in the
spec
... it has its benefits and drawbacks
burn: can do both
... give in implementation guides
... it really doesn't belong in the spec
manu: point readers to the
guidance
... implementation guide notes will be referenced in the syntax
section
jonnycrunch: the hash-based
linking, IPLD approach, solves many problems but I keep on
seeing how I would solve it differently
... want it to be considered as future work
... just for you guys to consider
burn: does not have scribe for
the last session
... chaals to scribe after lunch
manu: we could pull up the issue
brent: this is my boss's
issue
... what it would take to try and get a specific MIME
type
... the first decision we need to make is: is this a good
idea
burn: this is an appropriate discussion
<kaz> issue 421 on MIME type
burn: the value of having a mime type
is that if you do an http request and get a document back,
there is some hope you'll get the mime type with it
... servers often don't set their types properly
<Zakim> manu, you wanted to provide background.
manu: I was against this because
of CR
... it is something we can do later
... if you scroll down you can see the options
... two different signatures
... to answer the question it has to be all of those
things
... if json-ld as jwt you use the second from the bottom
... it balloons very quickly because of the options we have in the
spec
... when you get the json-ld you know what it is
... you could still determine it
... lets say you want to save and open
... if we just did a simple one option application vc
... maybe we do it for vcs or specific types of vcs
... and at that point it explodes to a number of mime
types
... probably a large discussion of where this lands
... the wrong application will open it on the desktop
... your VC will open in a code editor instead of a
wallet
... you then know it's a vc and your wallet can mess with
it
... at this point you can content sniff
... there is another discussion around this
... if we do that there are only two mime types
dmitriz: wants to second
that
... both serializations require that
... the json-ld requires the json-ld type
... could determine the content of the credential just off of the type
<Zakim> rgrant, you wanted to note that we have two different kinds of VCs
dmitriz: lets wait until this is a pain point
rgrant: we def have two different
types
... we have implementations that do not deal with json-ld
<Zakim> burn, you wanted to say that these can be done completely independently in IETF since managed by IANA. and to recommend separating from our spec work for timing reasons
rhiaro: can have a profile attribute on content type which point to a specific profile of the content type, eg. to the vc spec
burn: doesn't want to have the
spec fail because of this
... doesn't think we'll finish this
... defer is a strong statement
pmcb55: there was a great article about just using the standard
<Zakim> rgrant, you wanted to say content-transfer-encoding may handle jwt
manu: except we have json-ld, and it doesn't work with that
rgrant: content type and separate
field
... however jws is in jwt and so this is open
<kaz> Media Type registration guide
chaals: if it's ready in the spec add it, otherwise don't
kaz: w3c spec has different
procedure
... let me check within the W3C Team to see if this is still
the case
stone: so I find this discussion
valuable
... we could just defer this
... let some future group make the final MIME type
decision
... any opposition to defer
burn: our spec does not say
... wild west until it's defined
<Zakim> burn, you wanted to demand that we not tie this to the spec
deiu: so we can bring up some text in the implementation guide
<kaz> mime type section of the Web Annotation Data Model REC
kaz: we have web annotation model based on jsonld and this specification talks about mime type, so we should look into this as well
manu: do we have a higher priority we could talk about
davidc: done task and now raised it as an issue
<Zakim> rgrant, you wanted to say implementations will default to json unless they can lift to json-ld
davidc: the implementation won't know to demand that
<rhiaro> scribenick: rhiaro
manu: the first thing you sniff
for is @context, if it has the one you expect, you
automatically know that you can continue to process it
... if you want to make absolutely sure you run it through an
LD processor
rgrant: so run everything through an ld processor, if it succeeds you have jsonld, if it fails you have json?
manu: if it fails you don't know
jonnycrunch: VCs will be used in did resolution. Each resolver will claim that it resolves this DID for you. This is gonna be where the rubber meets the road, how do the DID resolvers handle the claim for the DID document?
rgrant: that's one application right?
jonnycrunch: it ties in with the crossover between the two groups
rgrant: on the internet.. some people do things right, many people do things wrong
jonnycrunch: the beauty is the cross pollination between the did resolution people, to get on the same page
rgrant: I heard you if json-ld
parsing fails you don't know what you have
... but certainly I know more than nothing
manu: you can't determine
anything from it, you need to fall back to a process
... it depends why it failed
burn: you do not know that it is json-ld. It may be that you can sniff internally for the type attribute and conclude that it's the JSON VC format by that value
rgrant: certainly if it says it's
the JSON then we know that
... what if some random person thought they were making their
own VC and they used something outside of the data model? in
that case do we say it's valid JSON so we could treat it like
one, or since it was supposed to be JSON-LD you've failed
manu: it's up to the
implementation, we don't define it
... the right thing to do is to say it failed. the JSON-LD
processor is erroring for a very specific reason; could be you
overrode a core term so the semantics are different
... so they have made a mistake in publishing their credential,
you should not process this credential
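The sniffing strategy the discussion keeps circling back to can be sketched as a small decision function: look at `@context` first, fall back to the `type` member, and treat parse failure as "unknown". The credentials v1 context URL is the real one; the return labels and the overall flow are illustrative, not a normative algorithm from the spec.

```python
import json

VC_CONTEXT_V1 = "https://www.w3.org/2018/credentials/v1"

def sniff(document_text):
    # Step 1: must parse as JSON at all.
    try:
        doc = json.loads(document_text)
    except ValueError:
        return "not-json"
    if not isinstance(doc, dict):
        return "not-json"
    # Step 2: the context is the strongest signal (manu's point).
    context = doc.get("@context")
    if isinstance(context, list) and context and context[0] == VC_CONTEXT_V1:
        return "probably-vc-jsonld"
    # Step 3: fall back to the type member (burn's point).
    if "VerifiableCredential" in doc.get("type", []):
        return "vc-shaped-json"
    return "unknown-json"

assert sniff('{"@context": ["https://www.w3.org/2018/credentials/v1"]}') == "probably-vc-jsonld"
assert sniff('{"type": ["VerifiableCredential"]}') == "vc-shaped-json"
```

As manu notes, a positive sniff is only a hint: to be sure, you still run the document through a JSON-LD processor, and a processing error means reject, not guess.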
<Yancy_> rhiaro I can take back over now
rgrant: that's a great opportunity for a conversation with the user
<scribe> scribenick: Yancy_
burn: any objections for another
topic before this
... now a discussion of which topic
<rhiaro> scribenick: rhiaro
<inserted> issue 438
oliver: I created a new issue 438. It's about how the spec does not allow the VC to be verifiable without a proof property, but that's not the case because JWTs have JWS, which doesn't require a proof property. That's an issue DavidC found, but I created a separate issue because I think it's a longer conversation
manu: I think this is easy. We
have to allow the JWT stuff
... We need to change the language so that's allowed. That's
fairly easy change
burn: is it an either/or?
manu: it's not, you can do
both
... you can have an ldproof and secure it with a jwt
burn: but the requirement is you
must have at least one
... you must have either this, or this, instead of making a
proof section just optional.
... that fits with the spirit of what verifiable means
brent: I like the idea of a proof section regardless, would it be possible if the JWT is serving as the way to prove that we .. is there a way in the proof section to say this is being signed by a jwt therefore look at that signature for verification? once it's unwrapped and I get the credential I see where it's signed, not just a thing with no proof section at all
burn: almost a new feature..
manu: DavidC raised that
today
... I think.. the jwts got in trouble, the jose stack got
in trouble, by allowing the algorithm type of none
... there were a lot of implementations that misimplemented it
and raised security issues
... we want to try and avoid that. None of the ldproof stuff
has a none thing. If you're going to use a proof tell us the
crypto you're using. If we add a proof section that says none
or don't worry about it it's external, someone will write some
code that will check it and see if the proof says jwt
brent: what would stop someone from doing something similar for the other proof?
DavidC: you can always say type none
JoeAndrieu: we're talking about credentials that don't have a proof part?
brent: if I'm understanding, it's either we have a proof that specifies jsonld or zkp stuff, or it's signed by a jwt, or possibly both. The jwt doesn't have a proof part
JoeAndrieu: at TPAC we had talked about a detached verifiable credential that didn't have a proof, eg. for testing reasons. If it's not verifiable it's not a VC. At TPAC we said we should specify a different type that's not a verifiable credential
burn: we have that already. It's only a verifiable credential if there is a proof section
JoeAndrieu: that's not what we agreed at TPAC; 'credential' was supposed to be something we proposed
manu: a credential is a set of one or more claims; a VC is a tamper-evident credential that can be..
JoeAndrieu: the value of the type field in the credential has to be verifiablecredential.. I don't know how that plays into the desire for JWT that doesn't have a proof field
<kaz> terminology section
dmitriz: I'd like to propose a concrete spec change. We do either/or. We check the type field. If it is a JWT, then we phrase it as: it must be validated as a JWT; that covers all the use cases of signature/no signature. If the type is not JWT then make proof mandatory
brent: each node can have a type
dmitriz: outer type
... if the outer type is jwt, it must be validated as jwt. If
it's not jwt, it means it's json-ld, then we make proof
mandatory
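dmitriz's either/or proposal can be sketched as a validation rule: if the envelope is a JWT, validation is the JWT/JWS machinery's job; otherwise a `proof` member is mandatory. The shapes and return labels below are invented for illustration, and real JWS signature verification is out of scope here.

```python
def looks_like_jwt(serialized):
    # Compact JWS serialization: three dot-separated base64url segments,
    # which also can't be a bare JSON object.
    return serialized.count(".") == 2 and not serialized.startswith("{")

def check_verifiability(serialized, parsed):
    # parsed: the JSON object form of the credential (or JWT claims set).
    if looks_like_jwt(serialized):
        return "validate-as-jwt"   # signature lives in the JWS envelope
    if "proof" in parsed:
        return "validate-proof"    # embedded linked-data proof
    return "reject"                # neither envelope nor proof: not verifiable

assert check_verifiability("eyJhbGci.eyJzdWIi.c2ln", {}) == "validate-as-jwt"
assert check_verifiability("{}", {"proof": {"type": "RsaSignature2018"}}) == "validate-proof"
assert check_verifiability("{}", {}) == "reject"
```

Note there is deliberately no `proof: jwt` branch: per manu's concern, a proof value that merely delegates to an external check invites the same class of bug as JOSE's `alg: none`.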
<Zakim> manu, you wanted to expected developer behavior
JoeAndrieu: a detached credential means the jwt is handling it
manu: in our implementations, a couple, large companies may hand this packet off between various systems. When systems receive things they do or don't do things based on what's in the credential. It could be possible to.. I'm concerned about sloppy programming. It's not the developers that do the right thing; it's the ones who do the wrong thing even though the spec says to do something else, and what results from that. My concern is that if we say
proof: jwt, whatever section of the pipeline gets that is going to move it on - oh, something else checked this, I'm going to keep moving - not .. I'm concerned about an attack where somebody injects proof jwt and it shortcuts the process, vs it always having to have proof on it, or not at all. If it has proof and it's an rsasignature etc you know you have to check it and it's never removed. In the jwt case it's removed at the beginning and then sent
through the system with nothing inside
scribe: I don't think the proof: jwt thing buys us anything
brent: I was going to propose
what manu just spoke so elegantly against
... now I'm having an internal argument.
oliver: to dmitriz's point, was your suggestion to add an additional type? Or just have jwt as the only type?
dmitriz: jwt as the only type. in the jwt serialisation, it says the type must be the string jwt
DavidC: so inside the LD it would still be the verifiablecredential
dmitriz: correct
ken: I have a question for manu - instead of having a proof with a type that says jwt, you would take that out, and now you're looking at the pipeline; if it has nothing there how is that any different, what's the security model difference?
manu: thinking about this, in the processing pipeline you get to a stage where you have no proof. It feels like.. if you have a system with all these different types of inputs, jwts over here and VCs over here, at some stage in your pipeline the issue is when you get both jwts and jsonld into your system. The jwts strip the outermost thing out and it doesn't have a proof. The jsonld stuff always keeps the proof in the system. The developer is just
going to let the things that don't have a proof go through
brent: so what are you proposing?
manu: I don't know.
brent: I misunderstood where dmitriz was talking about the type. If the credential type says verifiablecredential, and that's all I have, there has to be a way for it to be verifiable. Which would go in the proof section. If we're saying that with a jwt, once it's stripped off, the resulting thing doesn't have a proof, we're saying that with just the jwt it's not a verifiable credential. Once the outer envelope has been stripped there's no way of
verifying it
oliver: in an enterprise where the credential gets verified and then passed to the next thing in the pipeline, they won't just strip away everything that is jwt specific; they will pass the whole jwt and the next in the pipeline can decide whether they want to verify it. I'm not saying everyone is doing this
<Zakim> manu, you wanted to argue against himself and to say keep it the same
oliver: Once you verify the proof once at the gate you could then .. that might happen, there are multiple possible architectures
manu: the concrete proposal is to say one or the other, and I don't think that putting proof: jwt buys you anything
brent: except for consistency
manu: I don't think we need to be consistent in that way, it's not buying us anything and we have to define a whole bunch of stuff and it opens us up to misimplementations
<burn> DavidC
DavidC: because the proof is defined as a type and the type can be anything, then you're never going to stop anyone defining type as external or x509 or jwt or anything. You can't stop that. You can't say we won't have it for jwt because you won't stop anyone from doing it
brent: I agree with DavidC and we are kind of straying into protocol, how it's going to be used, but we have to have some understanding of potential protocol. But the more I think about the jwt as an envelope the more it's like something we might want to use in Evernym and Sovrin as an envelope to a zkp-signed VC, so we'd strip off the jwt and use the zkp to do the verification. What happens after the verification step I'm not concerned about, if they want it to be verified further they need to provide the means. They probably don't need it to remain verifiable anyway
scribe: The credential is received. if it's wrapped in a jwt, is it recognised as a credential that needs to be verified in that way? At what point is the verification process done within the jwt case?
<Zakim> manu, you wanted to say but we're not defining it
manu: DavidC is right, we can't stop anyone from doing that, but we're not doing that, that's the point
<Zakim> dmitriz, you wanted to ask about the types VerifiableCredential vs Credential
manu: people can do that but the WG isn't saying this is the way to do it
dmitriz: what Joe was saying.. is this correct? In the terminology section we differentiate credential vs verifiable credential? but in the types we do not?
DavidC: correct
brent: but if we do that then we're saying a VC is verifiable without a proof section
drummond: because it's verifiable in a different way
brent: so if we're all okay with that
burn: we're saying it's verifiable without a proof property, not without a proof
manu: what Joe said earlier about
the detached credential thing at TPAC.. I thought we said we may
have credential and verifiablecredential, and credential may
exist in the context, but I thought we agreed to not put any
examples anywhere to not plant that idea
... I thought in the context we may have a credential type? but
we don't give any examples for it
JoeAndrieu: that was definitely
not my recollection. I was arguing we shouldn't have to have a
proof
... but the consensus seemed to go, and I support it, that
having another type that whether it's detached or
externallyverified, will allow a bunch of use cases. Kim said
yeah in testing we often pass data around without the proof
<Zakim> burn, you wanted to argue against detached credential type
JoeAndrieu: It sounded like what brent just proposed is a detached credential and we should specify that in a type
brent: I'm saying we need to have
an understanding. If we're saying verifiable credential doesn't
need proof because it could be verified externally.. or we need
to have another type that says verifiablecredential definitely
has proof, and we need another type
... JWT is in the data model
burn: arguing against the
detachedcredential type. I don't disagree with the use case.
it's that visibly it's not clear what the difference is between
a credential that is missing a proof and one that is a
detachedcredential and has no proof with it
... from a practical programming standpoint
... either it has a proof right here with it so the entire
document can be verified. Or you have something which has
credential information and you need something else
... and you need something else or you don't. Either way this
thing is not internally verifiable
... I don't think it makes any difference to call it a
detachedcredential
JoeAndrieu: it's about redundancy
and understanding that it's supposed to be there or it's
not
... subject-bearer agreement
brent: my fear is that if someone
receives a VC without a proof they're going to think it's
already verified and they're good and nothing else is
required
... if the proof was jwt, you have some assurance it was
already verified
... you know it wasn't just stripped out
... that's my fear. If we allow it to not be present it allows
for potentially incorrect assumptions if someone just pulls it
out
<Zakim> manu, you wanted to note that in testing, we just turn sig checking off. and to note that empty proof could be an attack vector
DavidC: I would say the toss up is between having the type VC with a proof, where the type can refer to different things, one could be jwt. Or we have the credential type with no proof, and verifiablecredential type which does have a proof
manu: the whole
detachedcredential.. in testing we just turn signature checking
off. If we want to skip that whole thing, the dev comments a
line out and it's done
... I don't think testing is an argument for that
feature
... If having a proof is a MUST and you should put a jwt in
there, I would use that as an attack vector. That gives me
something I can send to a system and if there's no error back I
can see they misimplemented. it's immediate feedback that I can
attack that system
brent: that's also the attack vector of being able to strip proof out
manu: if the system is not even checking proof they're insecure
dmitriz: as far as the VC without verification, I would like to hear more use cases for it aside from testing which is not valid. If there are valid use cases I'd like to propose a type for it, in the proof section, intentionallynotverified or something, a tombstone object, a negativeproof object
burn: anything else we need to announce..?
<kaz> [lunch until 15:30]
<kaz> scribenick: kaz
dz: yesterday we mentioned some
concern
... 2 serializations
... JSON/JWT
... the latest consensus is the test suite needs no change
db: two syntax
ot: it's possible for now
... reconfirm it
... should work
pm: test still fails?
dz: the idea is hook up your
language
... will take a look
pm: can talk with you offline
dz: questions?
<Zakim> manu, you wanted to explain
manu: (explains the mechanism)
<gannan> manu what is "it"? the test suite? or vc lib?
dz: currently hard coded
... can take it offline
ken: looking at the test
suite
... one test credential should leak the data
... how to test it?
... not link but leak
dz: the test currently skips
it
... in the zkp test suite
... 38
<gannan> link https://github.com/w3c/vc-test-suite/blob/gh-pages/test/vc-data-model-1.0/60-zkp.js#L38
dz: probably should open an issue
manu: (explains the text in the
spec at the zkp section)
... may reword it
... the language in the spec
ot: looking at the test
suite
... conformance statement
manu: need to update
... some kind of MUST/SHOULD
bz: need an issue?
manu: already one
... will raise an issue
ot: already MUST in the test
suite
... is it mandatory?
burn: anything else?
ot: created an issue on vc-data-model
@@
ot: should update the test suite
accordingly
... proof property mandatory or not
manu: the spec is the authority
burn: anything else?
(none)
burn: action items
ms: review last 2 days
... feedback
... outstanding issues
... maybe for the next call
<dmitriz> btw, https://github.com/w3c/vc-imp-guide is transfered & ready for issues.
ms: getting CR is the clear
goal
... people to remember what to do
<Zakim> manu, you wanted to comment on getting to CR publication -- when do we vote, when do editors have to be done, when does test suite have to be done?
ms: what else?
manu: there is a question when we
vote
... anything outstanding
... my assumption is editors work aggressively to close
items
... the question is when to finish the test suite
... makes us nervous
... like the zkp things
... we need to figure out the language, e.g., MUST/SHOULD
burn: we do have one more review
issue
... small and editorial
... TAG is scheduling our review
... March 12
... scheduled for this call
... someone to be on the call
manu: volunteer for the call
ms: there are conformance related issue
dc: quickly mention what I found
@@@ issue 440
dc: (explains the issue)
bz: I volunteered
burn: how do you test?
... you must use the property
dc: you may have issue on state
burn: you can't say "could
semantically mean", etc.
... we can provide guidance
dc: the evidence didn't have an
id
... would say the id is optional
... make schema optional?
manu: optional
dc: can create a big PR for this
issue
... if you go to the repo
... conformance pr
sm: anything else?
manu: evidence is pretty shaky
<gannan> Conformance branch is here https://github.com/w3c/vc-data-model/tree/Conformance
manu: we don't have
implementation
... still need to read through
dc: change to nontransferable
burn: it's time to check everything
ot: is this a cr blocker?
manu: yes
bz: can create a pr?
dc: meant to do so
ms: already on a branch
... and you need to create a PR
bz: will make a PR
... just did
<gannan> here's the conformance PR https://github.com/w3c/vc-data-model/pull/442
burn: (shows CR blockers)
... CR blocker issues
sm: all the PRs are also CR blockers
@@ CR blocker issues
sm: how do we do that?
burn: can do this here
... nobody believes there is anything outstanding
<burn> Rough straw poll showed no one who would object to publishing as CR once all items are completed as agreed at this meeting.
YouTube video!
Real World Crypto 2019 - day 2 - session 1 - morning
by Brent
bz: over 600 people there
https://youtu.be/pqev9r3rUJs?t=12278 without any explanation :)
<deiu> Here's the URL https://www.youtube.com/watch?v=pqev9r3rUJs
<rhiaro> Public service announcement: I happened to have lunch at BarCeloneta so while I was there I took the liberty of making us a reservation for tonight to be on the safe side (they seemed happy to not be surprised by a crowd)
[break till 17:00]
<chaals> scribe: chaals
stonematt: Join it.
<deiu> /me manu, by "cover" do you mean finish?
<Zakim> manu, you wanted to ask for volunteers to cover the event food.
stonematt: This WG is winding down at least for now, DID work will start up soon, and there is more stuff we want to do in this space so the Community Group will be the way to participate.
Manu: Also, we're looking for contributions to cover the costs here - thanks to Digital Bazaar, @@ for contributing.
Ken: I have gathered stuff from
the community - there might be more I missed, it isn't intended
as a slight.
... how do we prioritise? Broad adoption and interoperability
are attractive characteristics in any feature.
... What would improve security? That's generally a nice thing
to be doing.
... What would make verifiable Credentials easier to use?
Manu: Does security come last in terms of how we prioritise? I think we have done a decent job and in terms of explaining what we are doing, maybe we should focus on uptake and other stuff more.
stonematt: If we have good security there will be no work to do there.
Brent: Think security should be first priority as a principle
Burn: Let's please use the queue. Chaals isn't that good at figuring out what people say.
<inserted> scribenick: manu
chaals: Security, like i18n, is not something you put in the priority list, it's an interrupt. If there is a problem, you fix it as fast as you can.
Ken: There is distinction about fixing security problems, or taking it to the next level.
<inserted> scribenick: chaals
Ken: There is a distinction between fixing security, and looking at things that might actually be enhancements we can afford to spend time on because they aren't a problem.
Manu: User experience often gets neglected, and I think there are things around moving credentials from A to B that we haven't discussed. And of course protocol.
[slide: potential dispositions]
<inserted> scribenick: manu
chaals: These slides are available, yes?
dan: Yes, they are.
<inserted> scribenick: chaals
Ken: Changes to the current
version will be unpopular. But there is also stuff we might
want to put in a next version, there are things we might ask
the CG to noodle on until we know more about them, and there
may be things we want to send "somewhere else"
... when we are framing things for a spec, we should be looking
for stuff where there is already a reasonable base of
consensus, because that will get through the standards process
faster.
... A data model without a protocol is kinda missing
something.
... in Sovrin we had a connect-a-thon.
... we have people working with credentials, 11 different
agents with 6 codebases, 2 *completely* independent. 6 of them
got it all right. And we have seen people working on protocol
so they can achieve this. Maybe we should be incubating this for
now.
burn: I like people doing implementation work before we standardise. but it is good to encourage people to think we might standardise a protocol, so they should bear in mind there will be standards coming. It is the right thing to do if people know the direction.
yancy: Want to understand what the protocol work entails, and how that connects to DID
<Zakim> manu, you wanted to note CHAPI
ken: There is no protocol work planned and DID is trying to spin up a group. There are multiple communities exchanging credentials but no planned standards work so far.
Manu: there are other people looking at protocols and the goal is to standardise something. So we need to make sure we are talking to each other. Credential Handler API would be a good thing to look at and see if it's a good base since we have been building on it, and e.g. IBM and Digital Bazaar have implementation experience…
<Zakim> chaals, you wanted to noodle on protocols
Ned: protocol can mean lots of
things. In security it has another meaning ("security
protocol") so please be specific about what you are meaning -
RESTfulness, subscription messaging models, ...
... PubSub is useful in various places as a model, think about
IoT, ...
<Zakim> manu, you wanted to note the plan
Ned: Note that getting timely updates is important to security - PubSub can help that
manu: This is like the refresh
service. We want an automatic way to do this, if we can agree
on one it's less work than trying to connect disparate
systems.
... Goal for credential handler API is to get more interop and
deployment, share it as open source project, we want to be able
to deploy in production.
DavidC: Alternative viewpoint, that we took, was to use FIDO - now WebAuthN - we would look at going in that direction, supplemented with IETF token binding. That is another possible pathway.
Kaz: I work on WoT and they have an API draft and a group note on protocol bindings, and that is another useful input.
Dimitri: Token Binding is only in Edge, not in any ongoing browser work.
stonematt: Not to completely derail, where do we introduce other things like crypto technologies for selective disclosure.
<kaz> wot protocol binding template
Ken: It's on the list for talking about here.
<kaz> wot scripting api
<kaz> @@ will move the above urls under my comment
DavidC: We are looking at verifying through OIDC. On your own system you have an issuer. OIDC is widely supported, so I don't need Google to identify me I can say "it's me". Again, alternatives
Dimitri: I highly recommend my paper from Rebooting (Oliver also contributed to it)
<DavidC> -> Dimitri. send us the URL of your paper please
Ken: I have one example of
cross-platform interoperability in the wild - we started with
LD signatures a year ago, now we have added some stuff for JWT
and Zero Knowledge Proofs.
... The ZKP are in turn looking at newer approaches to ZKP…
there is presumably more on the horizon, some of which we
should be looking at.
Ned: Clarify interop in terms of signatures please?
Ken: How do you take credentials
with one type of signature and work with them in a system based
on different kinds of signature? How do you make libraries that
deal with multiple signature types?
... so consume-only, produce multiple signatures, ...
<Zakim> manu, you wanted to discuss plan on LD-Proofs, LD-Signatures, RDF Dataset Normalization... also, hashlinks (eg cross-platform specs)
Ken: ZKP project in hyperledger is trying to bring security/crypto work together to help people share benefits (and bugs)
Manu: There is a plan to get LD proofs and ZK to a W3C spec. It's in early stage of exploration.
<dmitriz> DavidC: Here are the current snapshots of our drafts of the OIDC papers (by myself, Oliver, and others) - https://github.com/WebOfTrustInfo/rwot8-barcelona/blob/master/draft-documents/did-auth-oidc.md and https://github.com/WebOfTrustInfo/rwot8-barcelona/blob/master/draft-documents/did-auth-vc-exchange.md. There will be ongoing work in the next few weeks to finish them.
Manu: For this sort of thing you need decent formal analysis. This is related to other Linked Data work, so there might be a group that handles a few pieces. Or there could be a signature packaging group. With any luck there might be a group working formally on this in a year.
<Zakim> achughes, you wanted to say goodbye to Alex from Caelum Labs
Manu: The other bit of work is to look at cross-ledger compatibility for proof.
Alex: thank you - we want to make sure that people in Europe can continue to participate
Manu: Is there a plan for the CL and bullet proofs stuff?
Ken: CL has had academic review.
Brent: Want to get from RSA to elliptic curve but that means we can't do predicate proofs. Maybe we don't need them if we go around another circle
Ken: In more security, we discussed immutability of signed meaning. We talked about immutable storage options, or using hashlinks as a mechanism.
Manu: Think the next thing for hashlink is to see if it is broken, can be implemented, solves actual problems…
Ken: You can describe alternate locations for a resource with hashlinks, as well as immutability guarantee
Ned: How does meaning mutate?
Ken: Someone updates a schema, which can change what a particular field is used for. e.g. drivers license changes birthday field to mean saint's day, so credentials mean something different.
Ned: Like tying versioning of schema to your credential.
<Zakim> kaz, you wanted to check who just joined webex
Ken: This can be by accident, or done deliberately as an attack.
Ken: this work is not new. Do we do it here? Should we try to standardise it in the next version, does it need more incubation, …
<Zakim> manu, you wanted to comment on multi-sig. and to also mention multi-proof
Ken: Another type of security is allowing multi-signature (e.g. require 3 of 5 possible signatories)
Manu: Multi-sig and multi-proof
has a lot of interest. LD enables sets of signatures, chain
signatures which adds the ordering of who signed.
... (like Notaries do with ink)
... Multiproofs are people proving that something existed -
e.g. is anchored on a blockchain somewhere
... Cuckoo cycle is being used to build ASIC-resistant proof.
You can mix and match these things. Some of this work might go
to wherever the linked data signature stuff happens.
Ken: In terms of making things
more useful...
... how to use multibase, ...
Manu: It's a way to declare
binary encodings. So you can describe what the rest of the data
uses. Multihash tells you what the following bytes are going to
be (SHA-1, Keccak-256, …) and how many of them there are.
... these are used in hashlink spec, and come from IPFS
community. They're handy tools.
Ken: These seem like low-hanging
fruit.
... There are lots of people around who want a compact form of
the data model. ZKP for example gives a lot of data, so e.g.
CBOR can help reduce the storage and transmission
requirements.
<Zakim> manu, you wanted to note another benefit of compact representation
Manu: Also compactness helps to hide data application developers shouldn't be messing with. E.g. key information you don't want to expose.
Ned: Does compact representation have similar goals to multibase or are they orthogonal?
Manu/Ned: There is some overlap.
Dimitri: They are complementary. Compact representation describes encoding, multi* helps integrate that for use.
Ken: Are there other things we should put on a list for consideration?
Manu: SGML (or some specialised compact version of it)
Ned: CDDL, as an analogue to CBOR/JSON*
Ken: Some of the spec could use
deeper definition - e.g. terms of use, evidence, status and
revocation info, etc.
... We have MIME type work somewhere on our list of things we
could do…
Ned: I can see status being pretty deep information / lifecycle.
Manu: Sure.
stonematt: There is the language
of a status, and whether it is active. This is pretty
contextual to the issuer.
... e.g. how do you represent a post-expiry grace period where
you still apply the value of an expired credential.
Dmitri: Authorisation framework based on credentials? In the past we said Don't do this! Should we suggest how you actually can?
<Zakim> manu, you wanted to note ocaps!
[nodding...]
Manu: Object Capabilities - using these as a transport mechanism for an auth framework.
<burn> DavidC said yes to auth framework :)
Manu: how long can this website control or access your driver's license credential …
Joe: A request framework for
requesting credentials. We know that is part of what we
need.
... how do we do that to allow privacy as well as choice of
which issuer I refer to in the presentation I give to a
verifier
stonematt: My brain just
broke.
... At rebooting workshop, claims manifest came up. Is it
related?
Joe: You are asking for the proof of a claim. That's how we get consistency with ZKP terminology
Oliver: It's claims manifest,
being explored in DIF, some implementors are looking at
it.
... the issuer provides information that can be used in an SSI
wallet to present to the user how to find their credential
information to answer a request.
DavidC: How do we manage this one hour a week?
Joe: We prioritise.
... (and do work. And apologise, when we didn't)
DavidC: can we increase the meeting rate?
Joe: Sure… but that relies on people having the bandwidth.
<Zakim> manu, you wanted to note Credential Handler API has an example of this.
manu: Credential Handler API has
a query by example thing. Here is what I want, here are the
issuers I trust, …
... this is agnostic to query/response formats.
... partly because we don't expect version 1 to be the exact
thing we really want, partly because we might want to be
running different formats over the same API
Brent: There is an agent wallet project that might help here, hopefully going into HyperLedger.
<Zakim> burn, you wanted to talk about CCG scheduling
Burn: standards groups are volunteer work, There is more work than resources available, so the work that gets done is the work that people step up and do. You can't order it to happen.
<Zakim> achughes, you wanted to say this might tie into work at ISO SC27 ID proofing
Burn: And there are even fewer constraints than in a WG. So if you care, it shows because you did the work.
achughes: ISO SC27 are working on identity proofing standards - seems a bit like the request response is going to be a necessary part of this in future.
Oliver: Decentralised Identity
Foundation have started their interop project. Currently
looking at Educational use case.
... there is no membership required, so you can readily
join.
Ken: So there is a lot going on. The results will be seen from the work people do.
Burn: We need to be out of here
soon. I do want to say thank you to everyone here for a
productive meeting, staying focused. Especially Brent and Ken
for really good preparation that made a lot of that
possible.
... Thank you for being prepared to compromise.
Manu: Thank you to the chairs for doing the thankless work of making it go on.
Burn: Thank you to the sponsors
who helped make this possible. Caelum Labs, Digital Bazaar
(because food is more important than many people realise),
University of Kent and Brightlink for contributing to
that.
... Thank you to Payment Innovation Hub.
Silvia: We hope you felt comfortable. We were very happy to have you here.
[meeting adjourned]