W3C

– DRAFT –
Linked Web Storage - Face-to-Face meeting (day 3)

10 October 2025

Attendees

Present
acoburn, bartb, bendm, ericP, gibsonf1, jeswr, laurens, pchampin, ryey, tobin, Wonsuk
Regrets
-
Chair
-
Scribe
acoburn, pchampin, ryey, jeswr, ericP

Meeting minutes

Access Requests

tobin: introduction, working with autonomous agents, co-chairs OpenID working group

acoburn: let's start by focusing on authentication and identity
… we are focusing on 2 flows:
… 1. users interact via a browser, standard OpenID connect flow
… 2. client able to manage its own secrets (agent, bot, script)
… there are a couple of features that we need in the context of LWS
… globally unique agent identification: the sub claim should be a full URI
… I believe that sub claims can be full URIs; I wonder about size restrictions.
… we need global identity for clients (azp claim, client id)
… typically a string scoped to an issuer, but we want that to be global, decoupled from an individual ID provider
… we want to avoid the challenge of having to register any client to any ID provider
… and we don't see dynamic client registration as a solution; an ephemeral client identity is of no use for authorization
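
As an illustration only (not text from the meeting), token claims along these lines could carry a URI-valued subject and a globally resolvable client identifier:

    // Hypothetical access-token claims; claim names follow standard JWT/OIDC usage,
    // but the URI-valued identifiers are assumptions based on the discussion above.
    interface LwsTokenClaims {
      iss: string;            // token issuer
      sub: string;            // globally unique agent identifier (a full URI)
      azp?: string;           // client identifier, also a global URI rather than
                              // a string scoped to one issuer
      aud: string | string[];
      exp: number;
      iat: number;
    }

    const exampleClaims: LwsTokenClaims = {
      iss: "https://issuer.example",
      sub: "https://alice.example/id#me",
      azp: "https://app.example/client-metadata.json",
      aud: "https://storage.example",
      exp: 1760083200,
      iat: 1760079600,
    };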

tobin: the problem of getting a global ID is tough
… in MCP, using the metadata URL as an ID

laurens: is that spec'ed already?

tobin: I think it was merged today

acoburn: the Solid community has a "Solid OIDC" spec
… that defines a "webid" scope, we think we can get rid of that
… Solid OIDC requires DPoP, which is sketchy; we think we can get around that
… the third thing is a client identifier described by a JSON file sitting somewhere
… if we can avoid specifying that ourselves, that would be great

tobin: the client ID metadata was spec'ed today

<jeswr> https://datatracker.ietf.org/doc/draft-parecki-oauth-client-id-metadata-document/

<jeswr> announcment https://www.linkedin.com/posts/aaronparecki_the-ietf-oauth-working-group-has-adopted-activity-7382203049647230976-HnIp/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAB5334sBEoBFzCtubbRVzVnbgik6Sxn60y4
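
For orientation, a rough sketch of the kind of client identifier metadata document that draft describes, where the client_id is the URL at which the JSON document is hosted; the field names below are borrowed from OAuth dynamic client registration metadata (RFC 7591) and may differ from the adopted draft:

    // Hypothetical contents of https://app.example/client-metadata.json.
    // Field names and values are illustrative assumptions.
    const clientIdMetadataDocument = {
      client_id: "https://app.example/client-metadata.json",
      client_name: "Example LWS App",
      client_uri: "https://app.example/",
      redirect_uris: ["https://app.example/callback"],
      grant_types: ["authorization_code"],
      token_endpoint_auth_method: "none",
    };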

ericP: we need to determine what we need to write in our own spec, vs. a shim spec pointing to something else

jeswr: noting that Emelia Smith (formerly Inrupt) is an editor of that spec, and that the spec mentions Solid OIDC :-D

acoburn: this is funny. This is currently a draft. We can normatively reference it once it is stable.

tobin: in MCP, this is the minimally viable solution that we need.
… it does not solve the problem of authorization.

acoburn: in LWS we need global identifiers
… without requiring this mechanism for all client identification, this gives us an example of how this can be done
… Tobin, are you familiar with the notion of a CID (Controlled Identifiers) document, published by W3C in May?
… It defines a common data model for describing public keys and other ID info.
… In Solid we use WebID, that plays a similar role.
… WebID has a long history, but it is not an official spec.
… We could replace it with CID. We could have metadata in the CID document including the public portion of a JWK.
… That could be used by clients to sign tokens and produce identity assertions.
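
A rough sketch (the shape is assumed, not quoted from the CID spec) of a controlled identifier document carrying the public portion of a JWK that a client could use to sign tokens:

    // Hypothetical CID document for a client agent. Property names follow the
    // DID-Core-style data model that CID builds on; treat the details as a sketch.
    const clientCid = {
      "@context": ["https://www.w3.org/ns/cid/v1"],
      id: "https://app.example/id",
      verificationMethod: [
        {
          id: "https://app.example/id#key-1",
          type: "JsonWebKey",
          controller: "https://app.example/id",
          publicKeyJwk: {
            kty: "EC",
            crv: "P-256",
            x: "…",   // public key coordinates elided
            y: "…",
          },
        },
      ],
      assertionMethod: ["https://app.example/id#key-1"],
    };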

jeswr: for the problem of agents self-certifying, we were proposing "use a CID with a public key"
… is there in the MCP world / OAuth 2.1 an existing self-certifying mechanism?

tobin: a constant question with MCP is "what is an agent?"
… the spec is opinionated but that is going to change
… I don't know how to plug in whatever robust ID you want into that setup

jeswr: we use the term "agent" very generally; it could be a simple script that is authorized to access a server
… it could be an MCP server accessing a pod. We want to address a range of use-cases.

tobin: in AI, the permissions are expressed in natural language
… that sounds strange, but AI folks are doing this more and more
… I want to nudge you to leave room for that. This is moving faster than you might like.

acoburn: this is a useful nudge when we come to specify the ACL language.
… we know what we have today is insufficient, we are reluctant to define our own.
… Yesterday we talked about "Authorization details", as a way to not reinvent something.
… In your opinion, does it seem like a reasonable approach?

tobin: I have no issues with that. People with a more decentralized approach might.

laurens: is there any way that we could liaise with your group?

tobin: a cool touchpoint: we have a meeting with a bunch of OAuth people

acoburn: the timing could be convenient for me

Authorization Requests/Grants in LWS

laurens: there should be some way for an agent to express that it wants access to a resource
… prior art in Interop Panel, Inrupt (Access Grants)
… authorization_details could also be useful
… this relates to topic of notifications, since some notification is important for access requests
… looking ahead, we can discuss test suite in afternoon
… 2 scenarios
… (1) user with client application, accessing own data on RS
… could be client id matcher (e.g. in ACP), setting ACLs
… hard to put restrictions on clients when editing ACLs (e.g. time horizons, purpose, etc)
… authorization_code flow easier in centralized applications, harder with decentralized apps
… in GDrive, there are a set of pre-defined scopes
… problem with decentralization is that you need a universal way to express these operations

pchampin: to clarify, existing (centralized) apps have their own defined scopes

laurens: naive approach: scopes matcher, but this just shifts the problem
… these are strings and can lead to interop problems, since each impl can have its own approach
… this scenario typically includes a synchronous flow
… existing solutions in OAuth could be good solutions
… for async flows, the device code flow may be more appropriate
… but this doesn't really solve the access control problem

acoburn: seems that the first (single-agent) flow could be built today without new features in Solid/LWS
… async, multi-agent flow would need new features beyond what solid defines today

laurens: two agents, agent A wants access to data controlled by B
… A and B each have their own RS
… linked data notifications defines an inbox. Security considerations are significant
… LDN, webhooks (service-to-service) possibilities
… would like to see some sort of Inbox, hard to provide strong security guarantees

acoburn: not to suggest that we do what Inrupt does,
… but we do things to make inboxes tractable
… it has to do with HTTPsig
… in almost all cases where entities exchange messages, the security issues are not well considered
… token exfiltration is not a solution we should encourage
… Alice performs some action, that triggers a notif that may go to an inbox (or a specialized one)
… the server that delivers this can act on behalf of Alice, with its own signature
… let's assume it is the RS that sends this message
… it does not send an authorization header; instead it sends signature headers that need to be validated by Bob's RS
… Bob's RS needs to validate those signatures; we don't need to invent anything, this is spec'ed already
… we would just have to define a few required fields
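
For illustration, assuming HTTP Message Signatures (RFC 9421), the delivery request might carry headers along these lines instead of an Authorization header; the covered components, key identifier, and header values below are assumptions, not anything the group decided:

    // Sketch of an RS-to-RS delivery signed with HTTP Message Signatures (RFC 9421).
    const headers = {
      "Content-Type": "application/ld+json",
      "Content-Digest": "sha-256=:X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=:", // illustrative digest of the body
      "Signature-Input":
        'sig1=("@method" "@target-uri" "content-digest");created=1760083200;keyid="https://alice-storage.example/id#key-1"',
      Signature: "sig1=:MEUCIQDTGzczBQ…:",   // detached signature value, elided
    };

    // e.g. await fetch("https://bob-storage.example/inbox", { method: "POST", headers, body });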

laurens: I agree [mentions a few specs complementing HTTPsig]

ryey: would it make sense that the message is between ASs rather than RSs?

laurens: I think it would make sense because such servers already have signature material available

acoburn: to push back on this a little
… in LWS, we are focusing on Resource Servers
… Wednesday we talked about storage metadata
… it could be a CID, it could say something like "this storage has an agent"

laurens: you could make this argument for the Authorization Server
… some specs in this space could be reused
… this kind of exchange could be about other things than resources in the RS

acoburn: I know that everyone in Solid is using OIDC today
… but I don't think we should tie ourselves to OpenID or OAuth

laurens: granted; the only reason I mention it is that someone else may want to spec it

jeswr: a hesitation I have with Resource Servers communicating:
… you may have several storages

laurens: you may also have several ID providers

jeswr: yes, but when you make an access request it is bound to a single identity

acoburn: the access request would be sent to a server, asking "send your response there"
… with some proof that you control "there", not causing some spam to someone else

bartb: does this imply that the Resource Server should have an inbox as well?

laurens: possibly

acoburn: I think there needs to be a URL somewhere. It does not need to be a container in your storage.
… It is a kind of service that you can describe in the storage description metadata.
… It could also be described in the user's CID.
… It should not be a generic Inbox. We need to think carefully about it (shape validation?) to avoid it being flooded.

bartb: would it be one per storage or one per server?

laurens: I don't think we must decide that
… but what is being exchanged should be addressed to someone

acoburn: I think this is a deployment consideration

laurens: should we start discussing the content of such access requests?

acoburn: there are two levels for this; I would start with an envelope.

laurens: yes; this could be a JWT, a VC...
… an envelope and a payload
… the envelope would contain a recipient (or "addressee"?)

acoburn: important things for the envelope are a signature and information about the management of that credential
… how you can verify it, revoke it
… things like the issuer

jeswr: revocation here is different from the one you have in VC
… in VC, this is to prevent you from using a signed document after it has been revoked
… here, you know the audience of the credential, so you can send them a new credential cancelling the previous one

laurens: I don't have a strong opinion
… what else in the envelope

acoburn: I see it as "how to manage the credential"
… the rest could go in the payload

jeswr: looking at the access request that Dokieli currently uses (the payload)
… type: AuthzRequest
… target: what resource we want access to
… agent requesting access
… mode of access that is requested (here: read/write)
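
A minimal sketch of such a payload; the property names and URIs are illustrative rather than the actual Dokieli vocabulary:

    // Hypothetical access-request payload mirroring the fields listed above.
    const accessRequestPayload = {
      type: "AuthzRequest",
      target: "https://bob.example/photos/",   // resource (or category) access is requested to
      agent: "https://alice.example/id#me",     // agent requesting access
      mode: ["Read", "Write"],                  // requested access modes
    };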

laurens: I would want this to live under "target" (noting that target is not an ideal name for it)

acoburn: I think there is a time element (this may expire)

laurens: I would put this in the envelope
… for example: issued at/not before/expires

laurens: target may not uniquely identify a specific resource; it could be a category of resources or a service

wonsuk: could the target be a type index or other service?

laurens: not sure about the authZ model yet
… chicken-and-egg issue if you don't know where the data lives (i.e. the target)

pchampin: not sure it's a problem. App says "I'd like to see PhotoAlbum types"

laurens: there may still be certain categories of information shared out-of-band
… target needs to express more than just resource URIs
… e.g. user-defined type

acoburn: in the context of "Authorization details" discussed yesterday
… you may send some type to the AS, which does not know anything about it
… it still includes it in your token, then the RS uses it to decide to allow/forbid access (based on the actual type of the resource)
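
A rough sketch of how that could look using the authorization_details structure from OAuth Rich Authorization Requests (RFC 9396); only "type" is required by that RFC, and the type URI and values here are assumptions for illustration:

    // Hypothetical authorization_details entry passed through the AS and
    // interpreted by the RS.
    const authorizationDetails = [
      {
        type: "https://example.org/lws/access",        // assumed LWS-specific type URI
        locations: ["https://storage.example/photos/"],
        actions: ["read", "write"],
        datatypes: ["https://schema.org/PhotoAlbum"],   // e.g. a user-defined resource type
      },
    ];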

laurens: [refining the target field on the flip chart]

acoburn: compared to the flow we discussed yesterday,
… what you are doing here is capturing part of that flow, but deferring it
… because this is now an asynchronous flow

laurens: yes
… this should be aligned with the "Authorization details" spec
… that would give a consistent model across the different components
… Another thing we might want to consider:
… the requester may be the CID of an agent, the application description of the app,
… could also be a combination of both

acoburn: this can be mirrored in OAuth concepts
… authorized client, audience
… I had not conceptualized these similarities before

laurens: there are many similarities, with interesting possibilities

acoburn: we needed to move on to Notifications 12 minutes ago
… to wrap up: this is the request
… what about the grant?

laurens: could be very similar
… I don't have a full blown mechanism for how the grant is pushed back to the requester

acoburn: aside from the inbox/notif mechanism
… we also need a way to look up the requests/grants

laurens: that's why I am not a fan of a push based mechanism for this

acoburn: ESS has 2 parallel mechanisms: ACP and Access Grants
… if I want to know who has access to my pod, I need to look in two places
… this is where your earlier comment about managing this at the AS level makes a lot of sense
… if we find a way to push them into the AS (a lot of hand-waving here), that will be much better for auditing etc.
… Also, it is important to have a purpose associated with requests and grants

laurens: what I would like is some way to associate metadata with this request
… to clarify the context in which it was made
… not necessarily in the payload, but something that could be referenced and persisted

bartb: yes, this is needed to give a legal status to those requests and grants

laurens: one example is what ISO did with the consent receipt

bartb: is this something where ODRL could come into play?

laurens: some aspects of the access request could be evaluated against an ODRL policy on the AS

acoburn: at present, Inrupt uses VC and G-Consent
… we did consider ODRL, but it does not simplify the representation at all

pchampin: ODRL is very generic, other groups that use ODRL define a profile to refine how it is used for specific communities

ryey: ODRL in its current state is not meant to be used to represent an access request/grant
… instead it is used to evaluate an AR/AG

Notifications


[After break, topic moves to Notifications]

laurens: an app may be interested in what has been updated, created, deleted... eventually, a lifecycle of resources

laurens: lifecycle: from creation, to update (update to the resource itself, of the metadata, or associated ACLs), to deletion
… in Solid, this is done via WebSockets
… a web browser app subscribes to resources and receives notifications
… but the problem is it only works in the browser

acoburn: if you lose connection (etc.), you face issues. It also does not work at scale.

laurens: Another existing solution is Solid Notification Protocol
… it relates to discovery of resource server capabilities
… there are some interesting aspects from SNP
… quite a lot of interface flexibility; we may want that here
… the lifecycle events are emitted *after* things have happened
… but we may want other ones in addition, prior to occurrence

… e.g. in k8s, you have such prior events
… (k8s) if the hook doesn't respond successfully in time, the event fails
… this may allow integrity or consistency checks for us, e.g. shape trees, etc

acoburn: one complication: when posting a resource, I'm asking "can I do it *right now*?"
… but here, for events, it happens after something, or on a recurring basis
… so the timespan is much longer
… something to notice and take care of

laurens: notification access may be specified separately from regular access modes

acoburn: we can do initial authorization at subscription time
… next, some time later (e.g. a day), something will change with regard to the subscription. How do you ensure the access is still valid?
… different ways to model it
… 1. every time the authorization changes, you update the subscription
… 2. watch how things stand with authorization; when things change for the resource, update it

bartb: for case 1, when the notification happens, does the sender do it every time?

acoburn: when I watch for something, I can pull. Rather than pulling, I want push. If my authorization changes in the meantime (to no longer being authorized to access the resource), I should not receive the next notification.

laurens: so we have two options, either by the receiver side, or the sender side

acoburn: this is managed separately from storage. For a given subscription, we have a delivery location; we associate a status as well -- instead of just making it disappear, we send an error (or other similar messages).

laurens: we may leave the details out of scope for our spec; we could have only abstract descriptions of what is needed and, if time allows, specify it

acoburn: If we are going to specify anything about notifications, we should specify how AuthZ on notifications works

laurens: We should also specify MUST have requirements for events in the resource lifecycle, and some of the content data
… with an abstract definition of the data model in the core protocol
… this leaves the question of whether there is true agent-agent notifications. We have discussed resource server-resource server notifications. I would be against a generic agent-agent notification spec
… you could have a container live somewhere with public write permissions on it to satisfy that use case
… I would be hesitant to specify an inbox

acoburn: I have pulled up the docs for Inrupt's webhook notifications. The data that we include is:
… it is a JSON object. Every notification has a UUID,
… reference to a subscription - this is a UUID
… a date for when the notification itself was published
… a type - which is a string that is a URI to capture e.g. resourceUpdated, resourceDeleted, etc.
… when you create a subscription, you can subscribe to multiple types
… there is a purpose, which is set at subscription time and is optional. This purpose is carried with the notification so the webhook processor can do something with it.
… the webhook recipient cannot necessarily access all of the metadata about the subscription - this is why purpose is needed.
… Then we have a field for the URI of the resource
… then we have "controller" and "audience"
… audience defines the intended recipient of the notification
… controller defines the controller of the resource that changed. In the case of an access grant being issued; the controller is the entity that created it
… for an access grant expiring or being revoked, the controller is the entity that created it
… for a resource in a storage, if you create a resource in your own storage you are the controller
… if someone else has write access and creates a resource - they are not the controller. The owner of the Pod is still the controller
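
As a rough sketch only (property names assumed, not copied from Inrupt's documentation), a notification along the lines described might look like:

    // Hypothetical webhook notification body reflecting the fields described above.
    // Names and values are illustrative, not the actual Inrupt schema.
    const exampleNotification = {
      id: "urn:uuid:7b1c2b1e-3d7a-4f8e-9c6b-0a1d2e3f4a5b",           // notification UUID
      subscription: "urn:uuid:0f9e8d7c-6b5a-4c3d-2e1f-0a9b8c7d6e5f", // subscription UUID
      published: "2025-10-10T14:30:00Z",
      type: "https://example.org/ns/resourceUpdated",                 // event type as a URI
      purpose: "https://example.org/purposes/sync",                   // set at subscription time (optional)
      object: "https://storage.example/photos/photo1.ttl",            // URI of the affected resource
      controller: "https://alice.example/id#me",                      // controller of the resource that changed
      audience: "https://bob.example/id#me",                          // intended recipient of the notification
    };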

laurens: I would also want the actor that triggered the update.

acoburn: Sometimes the actor is relevant - sometimes there is no actor (e.g. access grant expiring)
… The expiry notification is useful if I issued the notification and want to know about that

laurens: Can the ID be used to define idempotency on events
… can we say that an ID can never be used again

acoburn: With UUIDs you can be pretty certain they are unique, but this is not guaranteed
… with webhooks, we need to be able to account for intermittent network failures - and occasionally need to retry
… there will be cases where there is a network failure on the response which means that you send the same request twice

laurens: You could have a weaker guarantee, which is that if the same event is re-transmitted it keeps the same identifier
… to clarify, the weak guarantee is: if it is the same event, it must use the same identifier
… We also need to still define who the actor and controller are for different events

acoburn: There is also an audience field - which defines the recipient of the message
… in effect it is metadata on the subscription. If it is an access grant, there is the entity who created the access grant, then some target of that - that is the audience.
… when a resource is created, the audience is whoever created the subscription

laurens: We just need an event modelling session around this

acoburn: Another element, which is optional - is data minimisation
… these are rules on how to minimise the data.
… it is an object, which includes things like a retention period to say "you can keep this object for 30 days"
… in all messages, headers are signed using HTTPSig
… there is a Content-Type and then a Content-Digest header. The Content-Digest header is what is signed.

pchampin: What is the state of the art in Solid today

acoburn: The current notifications model is based on WebSockets, which is largely deprecated already - this is part of the reason Inrupt has not implemented that particular notifications standard
… This came out of the Solid Notifications panel.

jeswr: cxres produced PREP https://cxres.github.io/prep/draft-gupta-httpbis-per-resource-events.html - is this an input we should be considering

acoburn: It is not an input in the charter, but I wouldn't ignore it

ryey: A reason this was written was for efficiency of notifications
… A question I have is - if I want to subscribe to updates on many resources; what would be the best way?

acoburn: In ESS you would create a subscription with pointers to all of the different resources you want to listen to. This means you can also subscribe to a container and get notifications of updates to any resource in that container.

ryey: What if the resource update is frequent, is there a way to limit noise

acoburn: Not in our implementation, we acknowledge that it is an issue.
… It is part of the reason that there is a back-off retry on events. It is also not something that you can necessarily spec for. You do need to assume that there is a high rate of notifications coming in; that may exceed your ability to send them out. In turn, this may exceed the ability of your webhook receiver to process them. So you do need to account for things like backpressure in implementations.

pchampin: On WebSocket vs. webhook considerations: in a standard client-side app, are webhooks an option?

acoburn: Webhooks are not very useful for a client-side SPA with no server component
… because a webhook delivery mechanism has to have some kind of server that can receive it

pchampin: My assumption is that some kind of simple client-side mechanism is still needed

acoburn: If I were writing a highly scalable system, I would set up a webhook that becomes a resource sink. This is the server-side portion of my app. The sink then publishes a WebSocket API that makes use of standard AuthZ (e.g. cookies), which addresses the case of someone who is actively using and logged into that application.
… then when someone is disconnected from the network, the webhook continues to get messages; and when they reconnect, you catch up.
… from the point of messages being sent from the LWS server - it is all out of scope.
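
A minimal sketch of that pattern (illustrative only; the Node http and ws APIs are real, but the paths, port, and omitted signature verification are assumptions):

    // Server-side "sink": receives webhook deliveries and relays them to connected
    // WebSocket clients.
    import { createServer } from "node:http";
    import { WebSocketServer, WebSocket } from "ws";  // assumed dependency

    const server = createServer((req, res) => {
      if (req.method === "POST" && req.url === "/webhook") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          // A real receiver would verify the HTTP message signature here first.
          for (const client of wss.clients) {
            if (client.readyState === WebSocket.OPEN) client.send(body);
          }
          res.writeHead(202).end();
        });
      } else {
        res.writeHead(404).end();
      }
    });

    // Browser clients connect here, authorized by the app's own session (e.g. cookies).
    const wss = new WebSocketServer({ server, path: "/events" });

    server.listen(8080);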

Test suite

acoburn: before we get to the content of the test suite
… one issue I have seen: after a WG finishes their work, the test suite sits there and is not used anymore

bendm: this is a side effect of the WG process

pchampin: there are "maintenance" modes that are possible

w3c-ccg/mocha-w3c-interop-reporter

https://canivc.com/

laurens: the infrastructure that runs the test suite on a regular basis needs maintenance
… this is something I'm not sure we can solve
… one way to address that would be to have the test suite include an abstraction layer
… the Solid test suite does that. I'm on the fence about that, as it adds a lot of complexity.

ericP: I want to keep in mind that testing keeps you honest wrt providing interop.
… testing tells you when you have been too flexible.
… As far as what format you use for the test reports,
… the SPARQL manifest has been used by many groups
… ShEx has moved away from it, but it has a lot of implementations

pchampin: this is complementary to what I presented earlier

laurens: it requires us to write our test suite using Mocha
… I'm not enough of a proficient typescript developer to do that

ericP: we should have a look at caniuse and what infrastructure they use

laurens: requiring every server implementation to have an instance up-and-running for testing purposes is probably a lot to ask

ericP: that's an advantage that caniuse people have; they have funding to install the latest version of each browser
… test suites on W3C have always been based on honor: we trust the implementation report provided by implementers

laurens: as long as the results they push to us respect the expected format, I would be ok
… even if they do not use the same test suite

ericP: something analogous to caniuse would require that every time somebody runs a new release, either they or we run the test suite again

laurens: this could work through a PR

ericP: when a test harness is useful enough, people integrate it in their own toolchain

jeswr: we can ask them to provide a link to their implementation
… I'm pretty sure a company in VC land is doing that

laurens: then again, that's a big thing to ask
… as an example, for our infrastructure, we know how to set up eIDAS for authentication, but not everyone may have that skill
… I could imagine a test suite with some deliberate blank nodes, left up to implementers to fill in

ericP: that raises an interesting point
… SPARQL, Turtle, have purely declarative test suites
… in LDP, we had a test suite for servers that was implemented in code, that was managed by one person in the WG
… it was not declarative at all
… may require a specific "test-mode" for implementation

pchampin: such a test-mode is an invitation to optimize for the test suite

laurens: ericP, you talked about facets of test suites. Can you say more?

ericP: in SPARQL 1.0, we had a lot of tests, people had partial coverage, we didn't know how to interpret it
… I parsed the implementation report, and identified certain patterns (for "facets"), giving names to them
… It was a fair amount of work to do this retroactively, but a lot of fun
… provided a way to say "this implementation is failing here because they are missing feature A"

laurens: these features could be defined in the spec

ericP: some of these features were actually quite complex

laurens: could be used as an affordance for implementers to understand why given tests fail

ericP: in ShEx, the manifest is ordered; a feature is tested before it is used in further tests
… we were not so diligent about that in SPARQL

pchampin: compliance is about meeting the normative statements; the test suite is a proxy
… the expected report should maybe have the granularity of the normative statements rather than the tests
… "features" may be defined as set of normative statements; possibly the sections of the spec

laurens: sections may not be the right grouping

<ericP> https://github.com/shexSpec/shexTest/blob/main/validation/manifest.ttl#L1382

laurens: also note that the VC test reports refer to normative statements

ericP: the purpose of the facet tree was to have names for things, but then you can have maps to the normative language

pchampin: I agree; the sections provide a "natural" grouping, but we may find a more relevant one in special cases

ericP: we can also demand hooks in e.g. a server implementation to return a T/F for whether a requester is recognized as authorized

Next steps

laurens: I feel that we have general agreement on the discussion of abstract entities
… we are in a position to write text and refine from there
… acoburn you were willing to start writing something?

acoburn: yes

laurens: regarding Query / Data discovery

acoburn: I think we agreed to start with Type Index, and time-permitting continue with Metadata query
… I believe that both still need discussion

laurens: yes, I think we need more discussion about this before we can start writing
… a lot of the structure of the data is not going to be specified; so query will be more at the metadata level

acoburn: also there are dependencies between those

bendm: in terms of timeline, it seems that we need to agree on step 1 before we can go to step 2
… I'm afraid we could go into a rabbit hole of LD vs non-LD in the type index

laurens: also we need to rename Type Index to something else
… my experience is that reaching agreement in the abstract is harder than reaching agreement on a concrete thing, considering alternatives
… so I hope to discuss Query / Data discovery with some concrete examples

acoburn: I would be cautious about having too many discussions open at the same time
… another item is Resource Metadata
… we had a good conversation and some level of alignment

laurens: we also discussed Containers
… for Resource Metadata and Containers, I would like to have concrete examples
… Erich Bremer's PR is a good start for Resource Metadata
… then we can discuss the specifics

acoburn: there is also Storage Metadata

laurens: we had less discussion on that

gibsonf1: type index search can be very simple; metadata query is an open search
… the scope of the metadata description determines the complexity of the metadata search

acoburn: Wednesday we talked about 3 levels of searches
… one was considered completely out-of-scope
… type-index (or other name) search was considered as in-scope
… metadata query was considered between the two, so lower priority
… I doubt that it is going to happen before most of the other things listed here

laurens: about Storage Metadata
… we might want to start by writing something, because the scope is narrow enough
… then we have Authentication
… on the one hand we have this abstract definition; we discussed cert-asserted identities vs. IdP/OP-issued identities
… go to the identity part with CID

acoburn: I propose 3 topics under Authentication
… 1. abstract concepts, which we can start immediately
… 2. bindings to OpenID connect
… 3. bindings to OAuth

bendm: also a list of what functionality you need and why, and what bindings you should use for each

laurens: I think before we can start writing on 2 and 3 above, we still need some investigation

acoburn: I think that for authentication itself, we have a good understanding
… by OpenID and OAuth, I mean "an entity that delegates its identity" vs "an entity that asserts its own identity"
… you can put my name under Authentication

laurens: I already volunteered for Query / Data discovery; I can propose something to start the discussion

bendm: I volunteered for the Resource Metadata

laurens: then we have Authorization

acoburn: again, we would have to first describe abstract concepts
… cf. the diagram we had with the boundaries between RP, AS, RS
… it will be described here at a high level
… then protocols for the RP-RS, RP-AS, RS-AS, respectively

bendm: some shared assumptions would need to be described as well

laurens: would access request be another subitem of Authentication?

acoburn: I would consider it as a separate item

laurens: I have some ideas for that; acoburn we could work together on this?

acoburn: yes
… again, start with abstract concepts

bendm: for me the responsibilities of the AS and RS are still a bit hazy

laurens: we had interesting discussions about this this morning

acoburn: short story: we had the same question, but we found some ideas
… the question being: can we align what's in access request / access grant with the actual Authz protocol

laurens: we can start to write something about the abstract concepts in Authz and Access request
… there will be some interactions between the two
… then we have Notifications

acoburn: I suspect this will still be "abstract concepts / data model / protocol"
… the question is whether the protocol will include authz or whether this is something different

bendm: sorry for missing the discussion this morning
… how do notifications relate to access grants?
… are they managed by AS or RS?

acoburn: still in discussion
… access requests and grants are resources that one can access, but also tied to the AS

laurens: finally: test suite
… and Use Cases and Requirements
… I want the UCR to reflect the state of our discussions
… thanks to Wonsuk for proposing to help

acoburn: what do people think of keeping in the document only those requirements that we decided to address?

laurens: I'm in favour of that. I don't think the document is supposed to be a changelog of the WG's decision process
… people can look at the github history for that
… the document should be a more concise view of the current state

acoburn: may I propose that eventually, each requirement will point to the normative text that satisfies this requirement
… all requirements that point to nothing should go
… likewise, use-cases linked to no requirements should go

ericP: as we aim to be a foundational spec, there is value in keeping linkable references to the requirements that we do not address directly, so that other people could link to them
… that being said, that's more work

laurens: I'm in favour of that for partially satisfied requirements
… but we are not exhaustive anyway, so I don't think we should aim for this document to be a "lay of the land"

bendm: what we can do is have a version of the spec with all the requirements, then the next version removes them with a link to the last draft containing them
… "there are the requirements that we chose to not address"

laurens: I would accept that, as long as the final product is more concise
… I would also like to consolidate some issues by the end of December

ericP: another level of diligence we could go to is close the issues we know we don't want to address
… we need agreement for the WG to do that

ryey: I made a PR some time ago with additional requirements
… some use-cases do not belong to any requirements, we would like to address this

laurens: I agree, I'll have a look into that
… also I would like the UCR document to reach a state that is mature enough that we can refer to it when there is discussion on another topic
… which is a different requirement from checking that all use-cases are covered
… we can make these adjustments afterwards

acoburn: we don't have names in front of each item, and we need timeframes

laurens: I would like to settle the UCR document by December

acoburn: I intend to start the work on Abstract Entities next week, then turn to Authentication
… I really would like to have a first draft by the end of this year
… I would like to have a lot of these in really good shape by April

pchampin: April is a good deadline, because that's the time when we will have to request a charter extension / rechartering

laurens: Notification is something that can probably wait until Q1 2026

acoburn: keeping in mind the general April deadline
… also, we mentioned a Spring F2F, jeswr?

jeswr: the Solid Symposium is scheduled 30 Apr - 1 May
… I would suggest we meet 27-29 Apr

acoburn: we still don't have names on some items
… I volunteer ericP for the test suite

laurens: we still need someone for Notifications

pchampin: how much could we reuse from Solid Notifications?

jeswr: how much could we reuse from Activity Streams?

laurens: not opposed to reuse parts of Activity Streams, but I don't know enough about it

jeswr: I can have a look at it
… what I like about the past few days is that we are bringing LWS closer to a number of existing specs
… by bringing it closer to Activity Pub, we would bring it closer to Mastodon / BlueSky

jeswr: I'm happy to have a look at Containers, and the data model side of Authorization

laurens: time to wrap up; thanks to all who joined, physically and remotely

[all: thanks for hosting]

Minutes manually created (not a transcript), formatted by scribe.perl version 246 (Wed Oct 1 15:02:24 2025 UTC).
