From W3C Wiki

Social Web Working Group Teleconference

17 Nov 2016

See also: IRC log


Present: evanp, ben_thatmustbeme, rhiaro, sandro, tantek, aaronpk, julien, cwebber2, mattl
Scribes: ben_thatmustbeme, rhiaro, sandro


<ben_thatmustbeme> scribenick: ben_thatmustbeme

<scribe> chair: tantek

<cwebber2> (but text only currently, my own fault)


<Loqi> Social Web WG Face to Face Meeting at MIT (F2F8)

tantek: we have 5 clusters of topics to discuss, PR transitions, CR transitions, WD updates or note transitions, group continuity or transition to CG, or other business
... we have one request from csarven to discuss LDN tomorrow, that seems fine since he is only here tomorrow

<csarven> Not present completely. Will watch out for keyword highlights.

evan: if its okay i'd like to discuss AS2 tomorrow as well so i can handle some issues first

<csarven> Yes, I'd appreciate if LDN is tomorrow.

tantek: are there any other specific scheduling concerns?

<cwebber2> I have to drive my wife to an event tomorrow, but let me check the time

tantek: i think we should sort by priority, but ask if there are any preferences

eprodrom: i think we should discuss continuity to inform our other decisions on what we are doing with these in the future

tantek: that sounds good, lets put that down for 10 today

<cwebber2> the thing is at 5pm tomorrow, so it should be fine

<aaronpk> cwebber2, I opened an URL (see above) if you want to try that

tantek: another way to look at things is to tackle harder things in the morning since we are all fresh
... the one obvious candidate is pubsub

julien: there is one issue that is a bit complicated, but it would be good for group feedback
... the correct solution might be to do nothing

sandro: Is cwebber2 on audio, wondering how much stuff we have on activitypub

<cwebber2> I'm not on audio unfortunately, I can't seem to get my setup working

tantek: i'm going to suggest after continuity, we do pubsub next

<cwebber2> my fault for running a fringe GNU/Linux distribution using modified browsers and missing my webcam :P

tantek: and then activitypub and micropub after that
... so i guess all the pubs first

<cwebber2> I will watch irc closely and participate here, sorry for the non-ideal participation :(

<mattl> cwebber2: try your phone with talky? assuming you have a camera there

<cwebber2> mattl, my phone's microphone broke! but I'll try my wife's

<cwebber2> everything is breaking!

tantek: we had some comments from mozilla (publicly) on webmention
... not officially yet but on their internal messaging

eprodrom: for those of us not as familiar with the process, what's the process from here?

tantek: the voting period is open, it closes on the 30th; one formal objection can slow it or potentially kill it
... usually it's pretty bad form to have problems raised at that point unless it's more bureaucratic
... or possible "good, but please fix X"
... if you do know any w3c members, encourage them to take a look


<sandro> (access controlled -- only for W3C AC reps)

tantek: how long for continuity?

<aaronpk> swwg_laptop_:

eprodrom: 30 minutes should be good

tantek: julien, how long for pubsub?

julien: i think 30 minutes

tantek: cwebber2, aaronpk, eprodrom, any preference for which we talk about first and how much time?

aaronpk: i don't care, i don't have any open issues

tantek: it's also to discuss what you need to move forward

cwebber2: i'd prefer to go second


<rhiaro> ActivityPub ^


tantek: ok, i think we have everything through lunch scheduled, let's go ahead and start with that, if we need more time on things we can take more time and move things around

group continuity / SWICG

tantek: we already resolved to create the group and Ann offered to do that, and consolidate all the other community groups
... there are several of them


tantek: first question, where are we with that group, is it created yet?

aaronpk: cwebber2 and I offered to chair the group. I am happy to do the actual submission, write up the group description
... we need a short name

tantek: SWICG?

aaronpk: CG probably shouldn't be in there, it's like saying ATM machine

tantek: some other groups do that already
... web platform incubator community group. look at them for an example, look at existing groups and try to incorporate them

<tantek> FYI:

<tantek> (still up)

eprodrom: there is ostatus group, activitypub, others
... the activitypub one should probably be closed. the activity streams one could still be open since we have work to do
... well these are at CR, but if there are things we think should be included and incubated, those would be good for the community group.
... the incubator group can continue to add extensions and add features to the specs we've defined
... with AS, when we talked about extensibility, we talked about adding it to the namespace, which means there is some document maintenance

tantek: so part of it should probably be messaging all those other groups that are not active to tell them there is a community group they may want to look at

sandro: I can imagine people being annoyed by that because they only want to follow one technology

tantek: i don't think anyone will really care since these groups are pretty much dead

sandro: if someone complains i think it would be good to keep the group open and maybe just have some major updates

tantek: i think that makes it worse as it looks like the discussion is there, but it isn't
... i think we should include that in the description
... in this group most of the discussion takes place on github, is that something we want to keep for the CG?
... how does cwebber2 feel about that?

<rhiaro> scribenick: rhiaro

<cwebber2> I'm fine with it

<cwebber2> but, the laptop disconnected

<cwebber2> so I can't hear/speak

tantek: The web platform incubator group is not using the email list for the CG as well. That's clearly an option for us
... We also had tension in this group over use or non-use of email, so we should be explicit rather than ambiguous about it

<tantek> we are not hearing you cwebber2

<aaronpk> one sec

<tantek> apparently both talky and appear in didn't work for us for different reasons

<ben_thatmustbeme> scribenick: ben_thatmustbeme

<cwebber2> I see a "mute" icon on both participants

<cwebber2> nope

<cwebber2> is the microphone enabled on yours?

<aaronpk> it's saying it's not able to send audio

<aaronpk> cwebber2, does hangouts work for you? or even a phone number?

<cwebber2> I can call in

<cwebber2> how about I call in for activitypub and then closely track irc otherwise

<aaronpk> call me directly, i'll PM you my number

tantek: i think that gives you some next steps for the incubator group?

aaronpk: we were asking if cwebber2 was okay with using github and no mailing list

<mattl> use GitLab ;)

cwebber2: i'm okay with it, but its a little ironic

tantek: we are just saying we are specifically NOT using the mailing list

<sandro> existing CG that look related to me: activitypub, fed soc web, pubsubhubbub, ostatus, microposts, social business

tantek: i think the community group gets to specify an IRC channel

aaronpk: i think we should continue to use #social
... it does mean the logs are shared though, if thats good or bad i don't know

tantek: since its a continuation of us, i think thats fine
... i think that gives you enough time to write everything up and we can take a look tomorrow possibly
... or later today maybe
... that takes care of the community group portion. now for continuity

sandro: one thing is that when we took on pubsub, we became legally required to extend the group
... the waiting period requirements makes it required
... we don't actually have to do anything other than the mechanical things (assuming no issues come up)
... that makes it pretty easy to continue some things if we need to
... the administration is ok with extending the charter for that reason

aaronpk: the winter break also messes things up as well

sandro: i think as long as we got to PR before christmas we were going to be fine

tantek: the publication moratorium starts the 18th, which means the last day ........ etc etc

sandro: yeah, letting that slide to january makes a lot of sense

tantek: you need at least a week before to schedule transition calls, plus the CRs don't end until the 13th for LDN and AP

rhiaro: we could make it happen but its a lot less stressful to do it in january

tantek: so we have to request an extension, do we have to specify additional things?

sandro: we can probably find out more about what exactly we need for that in the next day or two
... mostly its for us, do we want to be at a point where we need telcons?

rhiaro: also that will overlap with the CG, is that okay or do we want to wait?

tantek: i think its good to have some overlap

sandro: do we want to try to do a webinar? tutorials on what exists already, "this is what webmention is, here's how to use it" etc for each of our specs?
... its a lot of energy

tantek: i'd say we wait until after PR

aaronpk: this sounds like my talk at open source bridge, i went through a brief overview of the group

tantek: i think for the rest of the agenda, we talk about how much we want to go past the end of the year

eprodrom: so we are expecting to go to at least april with our group, and we expect to have a CG that will continue indefinitely

tantek: i think we should be clear that everything else should be finished up by january, and the rest is really just for pubsub mainly. it's not like we are trying to cram a bunch of other things in there
... and we're not expecting to be doing an F2F after now, right?

sandro: I don't think so, the big question, what happens if someone points out a BIG issue with one of our specs that has to go through the whole cycle again

eprodrom: for us as a group, would it be fair to say, after jan 1, we have telcons as needed

tantek: i'm going to guess we'll want to do them at least monthly?

rhiaro: monthly 4am phone calls will be preferable to weekly 4am phone calls for me

tantek: presumably we can still get staff contact time for the extension? and the chairs will be able to commit to having time?

eprodrom: yes, i can commit to being around for that

sandro: staff contact time will be fine

cwebber2: we can get to this in the AP time, but as AP has the most amount of work to do, i don't want to completely discount work on AP after Jan 1

tantek: thats certainly something we need to talk about, and making that kind of request is important, we'll go over that schedule when we get to AP
... anything else on group continuity / incubation group?
... as we discussed, any impact on continuity is something the editor should bring up when we discuss them

10 minute break

<csarven> [16:22:41] <ben_thatmustbeme> eprodrom: so we are expecting to go to at least april with our group, and we expect to have a CG that will continue indefinitely --- Is this agreed? April is the extension?

<aaronpk> we still have to request it

<csarven> What will be requested?

<aaronpk> it's not just up to us

<tantek> csarven: we will be requesting a charter extension for PubSub in particular

<csarven> "We'd like to request to extend until May 31?"

<tantek> until April-ish

<csarven> Ok

<tantek> per IP requirements for PubSub FPWD + CR disclosure periods

<tantek> given that, we need to be specific about how much time (if any) any of our other specs need to complete CR & PR

<tantek> and we will be discussing that, per spec, while we are discussing next steps for each spec during its slot on the agenda

<tantek> there was a vague discussion about trying to get everything else (not yet in PR) into PR in January

<tantek> but we'll figure out the specifics for each spec in depth during today & tomorrow

<csarven> ok, so all specs need to wrap up by April or whatever

<csarven> and that it is PubSub that needs the most time.. hence the extension being that long..

<csarven> otherwise it is maybe another month or so.. did I understand that correctly?

<csarven> Sorry I'm not on the call.

<tantek> csarven, no problem.

<tantek> that's roughly correct. there's a strong preference to wrap up our other specs in January, however we will discuss each spec in particular and figure out its particular needs.

<ben_thatmustbeme> it's not like we arbitrarily chose April, we have to give a certain amount of time for IP exclusions, since that period is still going on for pubsub, we need to extend

<ben_thatmustbeme> that gives us the opportunity to finish dotting i's and crossing t's on other specs, but we want to be clear that we are not just trying to cram a bunch of other new stuff in by extending

<cwebber2> btw, I assume #social is a pretty welcome place in general to invite people hoping to implement socialwg specs?

<cwebber2> or should I encourage people to join another channel

<scribe> scribenick: none

<ben_thatmustbeme> that will make things a bit easier on me

<ben_thatmustbeme> cwebber2, i would say thats fine, especially if we are going to suggest this as the IRC for the CG

<ben_thatmustbeme> and you will be co-chair

<cwebber2> ben_thatmustbeme, cool, thx for your input

<rhiaro> scribenick: rhiaro


tantek: Most complicated issue?

julien: the signature one

<aaronpk> and

julien: How do we ensure that requests coming from the hub to the subscribers (that are not content) are actually coming from the hub
... It was suggested that we use a signature mechanism, except we don't have a body, so nothing to sign
... I think (and aaronpk agrees) the solution is to not provide a signature mechanism but strongly incentivise subscribers to use complex urls that are not guessable

sandro: capability urls

julien: if we go in that direction, which I think is safe in terms of security, is why do we even need to sign notifications? If we have https and complex urls, that should be secure enough

sandro: https protects the url?

julien: yes

tantek: no

julien: it exposes only the domain

sandro: only the ip address

aaronpk: you know the domain because of SNI

julien: you don't even know the method

tantek: so you want to make https a MUST?

julien: I still think we should allow for non-https, that adds complexity, and I think we need the signature for the case where you could have a man in the middle thing where someone could alter the content and you would not know that they had

sandro: there's nothing to sign because..?

aaronpk: two things
... The confirmation that the hub sends is a GET request saying 'yes the subscription has been activated'
... For that it's using a complex URL, because there's no body you can't sign the payload
... Some alternatives could be: come up with a signature method based on query strings.. sounds like oauth1, or sign the headers... whatever..
... But a much simpler solution is use a 128 bit capability url
... That's probably fine over https
... That url is never exposed because it's sent in the POST body over https to the hub
... With that in mind, totally separate issue: why do we even have signatures on the body?
... If you assume the URL is secret and nobody can send forged requests to it, why does the hub need to sign the payload?
... One reason to continue using signatures is it does allow subscribers to not support https if they are willing to take the risk of forged confirmations
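The capability-URL idea discussed above can be sketched in a few lines. This is a hypothetical helper, not anything from the spec: the base URL is made up, and `secrets.token_urlsafe(16)` supplies the 128 bits of entropy mentioned in the room.

```python
import secrets

def new_callback_url(base="https://subscriber.example/callback"):
    """Mint a unique, unguessable callback URL for one subscription.

    token_urlsafe(16) encodes 16 random bytes, i.e. 128 bits of
    entropy, matching the strength suggested in the discussion.
    """
    return base + "/" + secrets.token_urlsafe(16)
```

A subscriber would mint a fresh URL per subscription and retire it when the lease ends, rather than reusing it.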


<Loqi> [Jeni Tennison] Good Practices for Capability URLs

aaronpk: but when they get the delivery they know the payload was not forged so they can check the signatures

<cwebber2> I guess if you don't trust certificate authorities too :)

<cwebber2> yeah

aaronpk: Second, it secures against mitm over https, which is a thing in corporate network environments

sandro: how do you know the public key..?

aaronpk: the subscriber sent the secret to the hub during subscription over https
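As a rough sketch of that flow (URLs and function name are made up; the `hub.*` form fields are the ones the spec defines): the capability URL and the shared secret travel only in the body of an HTTPS POST to the hub, so neither is exposed on the wire.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_subscription_request(hub_url, topic, callback, secret):
    """Build the subscription POST; the capability URL and the shared
    secret appear only in the request body, sent to the hub over HTTPS."""
    body = urlencode({
        "hub.mode": "subscribe",
        "hub.topic": topic,
        "hub.callback": callback,
        "hub.secret": secret,
    }).encode("ascii")
    return Request(hub_url, data=body, headers={
        "Content-Type": "application/x-www-form-urlencoded",
    })
```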

julien: since it's async it's harder to mitm

aaronpk: it's not pgp level safe, but it's..

tantek: the other reason even in https is the general concept of defense in depth
... Multiple levels of things to use as defenses
... One more thing to break through
... Good security architecture

julien: technically the payload is more important than the confirmation

<tantek> FYI:

aaronpk: if you intercept a confirmation the worst thing that happens is the system thinks it is subscribed and it's not
... whereas the payload could have real consequences

julien: so the right thing is to not change the spec at this point, but to use capability urls

aaronpk: it's essentially a bearer token in a URL
... Not change the normative spec, but the suggestion in security considerations would be to put a strong recommendation to use these URLs
... It doesn't affect interop, but for your own good you should be doing this

sandro: are people doing that now?

aaronpk & julien: I think so

aaronpk: subscribers

julien: hub is blind to everything

aaronpk: woodwind does and my switchboard does

tantek: strong enough that you can make it a must?

aaronpk: it doesn't affect interop

tantek: if you can make it a MUST it's good for the future

aaronpk: you don't want to put a specific length.. how do you define what's strong enough?

sandro: you could say it has to be unique for every subscription?

julien: should we enforce it at the hub?

aaronpk: the hub could deny a request if it's seen the URL before
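The idea being floated can be sketched as below. This is illustrative only: names are made up, a real hub would persist the set and would still need to allow renewals of an active subscription at the same callback.

```python
class Hub:
    """Minimal sketch of a hub refusing reused callback URLs."""

    def __init__(self):
        self.seen_callbacks = set()

    def accept_subscription(self, callback_url):
        # Deny the request if this capability URL was used before.
        if callback_url in self.seen_callbacks:
            return False
        self.seen_callbacks.add(callback_url)
        return True
```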


<Loqi> [Jeni Tennison] Good Practices for Capability URLs

eprodrom: W3C note on capability URLs that is very thorough


eprodrom: WD from 2014, so maybe not quite... oh now I found Jan 7 2015
... Can't be normative, we can use as informative reference

aaronpk: So then the issue is do we require unique URLs for subscription, which would be a testable implementation change

julien: it would break things

aaronpk: It's a strong change

tantek: you could make it a SHOULD

julien: I think it should be a SHOULD
... These are things that protect you, do them, if you don't you're the one who is going to suffer

tantek: for new implementations and new subscribers it is a MUST

aaronpk: I have a hard time justifying it as a MUST
... It doesn't change the protocol
... It works just as well with all the pieces together whether or not subscribers do this
... I have a hard time saying it's a MUST because of that
... but if someone is implementing this.. I opened this issue because I was writing the test subscriber, and I was like 'wait a second what is stopping someone else from making this request?' there wasn't something in the spec to tell me how to protect myself
... I think just at that point the implementer will go to the spec and see the recommendation to make it a unique URL
... I can make the test suite measure that too
... whether that's different
... could check the entropy as well

tantek: if it becomes common practice we can change it to a MUST and tighten the security of the whole system
... What about signatures?

aaronpk: Right now signatures are optional for the subscriber

julien: the subscriber can take the risk to receive forged content

aaronpk: or they can ignore the payload and go fetch the content themselves

julien: very few implementations actually check the signatures
... so in practice even though the signature is safer, people will use the URL
... as a way to cover their base

<tantek> (mitigate)

julien: the reason is that I felt people have had trouble coding the signature mechanism
... People will not use the right encoding
... It's harder to debug
... The hub will compute the signature at the byte level, the subscriber will use a different encoding and compute the signature on the string version of this, and get a different result

aaronpk: shouldn't the hub compute it on the string level then?

julien: the hub doesn't know the encoding
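The encoding pitfall described here can be sidestepped by always computing the HMAC over the raw body bytes, never over a decoded string. A sketch, assuming the `X-Hub-Signature: sha1=<hexdigest>` header format used by PubSubHubbub 0.4 (the function name is made up):

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, secret: str, header_value: str) -> bool:
    """Check a hub signature against the raw POST body.

    The hub signed the exact bytes it sent, so the subscriber must
    hash those same bytes -- decoding and re-encoding the body can
    silently change them and produce a different digest.
    """
    algo, _, sent_digest = header_value.partition("=")
    if algo != "sha1" or not sent_digest:
        return False
    expected = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, sent_digest)
```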

eprodrom: the issue is signatures are hard.

julien: PHP doesn't help

sandro: In JS you couldn't do it until recently

aaronpk & julien: something about gzip

scribe: good test case

julien: the hub is the party signing the content, not the publisher
... it would be more secure for the publisher to sign and the hub just transmit it, and be agnostic of the content

ben_thatmustbeme: but then you're starting to define the publishing

julien: right that's a completely different thing

sandro: In the current model the hub has to sign it differently for each user (per secret)

eprodrom: one of the problems with capability urls is as soon as you publish anything across security boundaries you've started your clock ticking. Somebody is going to get to it at some point, whether it's 100 years from now or next week
... Once you've published it, ...

julien: We should have a requirement, when the subscription expires, don't reuse your callbacks
... So you are exposed for the duration of your lease, when the lease is over you can start fresh

eprodrom: and if you do a large enough key

julien: start with 20, then 50, then 200, then 2000...

eprodrom: if you can do it in a way that your window is small enough then you have enough time

julien: someone has asked about unbound leases before, I think no this is wrong. You need to provide a maximum period
... the hub MUST have a period

sandro: how does the hub reply?

aaronpk: it's part of the confirmation

sandro: does the spec now say 'we know it's ugly?'

julien: no

sandro: I would like it to say that it recognises that it's in violation of web arch

aaronpk: it's important to put that in there because other people will think that and raise issues or try to reinvent

julien: I think it's elegant to have a different verb for the handshake thing and the notification
... On the implementer level it's easier to just say it's a GET, or a POST

aaronpk: It is easier. And for the payload, the post body is the document, it's not a wrapper
... If it was one of the form params it would be much easier on implementer level to use the hub.mode to switch on that
... but as an implementor if I have to handle reading the raw POST body to understand what type of ..

sandro: and it's the same URL?

aaronpk: it has to be

sandro: if they were both POST I'd use two URLs or something

aaronpk: that would be even crazier

julien: So I understand it's not the nice way of doing http things, but it makes the implementer's job much easier
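The single-URL, two-verb design can be sketched as a dispatcher (the function shape is illustrative; the `hub.*` query fields are the spec's): a GET is the hub's verification of intent, whose response echoes `hub.challenge`, while a POST is content delivery whose body is the document itself, with no wrapper.

```python
from urllib.parse import parse_qs

def handle_callback(method, query_string="", body=b""):
    """Dispatch on the HTTP verb at the single callback URL."""
    if method == "GET":
        # Verification of intent: echo the challenge, note the lease.
        params = parse_qs(query_string)
        challenge = params.get("hub.challenge", [""])[0]
        lease_seconds = int(params.get("hub.lease_seconds", ["0"])[0])
        return 200, challenge.encode("ascii"), lease_seconds
    if method == "POST":
        # Content delivery: the raw body *is* the document.
        return 200, body, None
    return 405, b"", None
```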

sandro: how do you tell whether you're getting a diff or full content

aaronpk: different issue

sandro: I'd use http PATCH verbs
... and Accept-Patch
... Not a custom header
... You can have that in the subscription request
... For instance when you do the confirmation, on that GET the return header could be Accept-Patch

julien: Oh I see

sandro: could be an extension
... You could also, the hub could just send a patch and see what status code it gets back
... A server is not going to give a 200 on a patch unless it's handling it

julien: I was hoping to be using a content type header

sandro: or you could do it after the first post to a subscriber
... they can say they accept patch for future reference

julien: there's a continue http code, 100 or something, the subscriber can say hey send me the patch

sandro: I think continue is in response to a get

julien: I dunno

aaronpk: All these things are nice to do, but nobody has implemented any of them

sandro: It can cleanly go in as an extension

julien: when we went with fat ping we talked about ability to have extensions

aaronpk: Okay so
... Recommend SHOULD use capability URLs
... SHOULD not reuse capability URLs
... Hubs MUST NOT issue unlimited subscriptions?
... MUST enforce lease

julien: if the subscriber does not submit one the hub will tell you what it is

sandro: but we're okay with 20 year leases?

aaronpk: I don't think the hub should decide the limit

sandro: in security considerations

julien: Should be short

tantek: how much of this can you test?

aaronpk: All testable
... Can test that a subscriber is using unique urls, that the hub is returning lease seconds

sandro: if it's more than a year you can give a warning

aaronpk: That's basically going to resolve these two issues
... 43 and 36
... And then signatures on the notification not changing, because it is implemented some people are doing things with it, if you don't want to use it you don't have to, and you can always go fetch the original content if you don't trust the hub's payload

tantek: are any of these SHOULDs currently unimplemented?

aaronpk: we don't know about renewal

julien: we don't check that you're using a different URL

aaronpk: there's no survey of all of them
... but I know that mine does use a unique one
... We don't know how widely, but we can test it

tantek: that sounds like we can make a SHOULD and see what happens
... if everyone is doing it, we can make it a MUST
... without discussing again

julien: I don't think everyone does it

aaronpk: could you check that on superfeedr's end?

julien: wordpress and google don't do it either

tantek: we should gather data
... and raise it with them as a security concern

<cwebber2> hi

<cwebber2> not sure why I was on the queue

<cwebber2> I think that was from way before

julien: I have a process question
... People open issues, I close them, what should I do?

tantek: what we've been doing so far is to try to reach a conclusion that the person who opened the issue has been happy with, and ask them to close it

sandro: We don't want to have the situation where someone feels like they haven't been heard
... If you do exactly what they ask for it should be fine to close

julien: Same with PRs, I just merge them myself

<tantek> FYI julien:

tantek: You said you're opening PRs. If you get PRs from people who are outside of the WG we actually need to get them to agree to the contributor's agreement before we merge them

julien: okay I think I merged a couple of things..

sandro: there's the patent and copyright question
... if it's editorial it could be a copyright issue

aaronpk: you can put a
... people generally agree to that

tantek: can we put the IE agreement there?
... if the person belongs to a W3C member company that's okay too

julien: and when they're not?

sandro: the people who have already contributed, we can get them to confirm it
... It's part of the IE agreement, but not the whole IE sign up

tantek: We should put the IE agreement into the
... rhiaro can find the links to the agreements
... You need to ask tony to check he's agreed to the following
... If it's not okay you'll have to remove his changes. Hopefully it won't come to that


aaronpk: Host meta thing
... I opened this one as I was writing the subscriber

<tantek> in general

aaronpk: Right now the discovery steps in the specs say check the http headers and then if it's xml or html then look for the link tag, and then lastly check host-meta
... The spec doesn't say much about how that part works
... It's sort of left as go read about host meta and figure it out
... I suspect there's not much implementation of it
... I suggest to drop it if there's no known implementations
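The first two discovery steps just listed (Link header, then a link tag in the document) can be sketched as below; the host-meta fallback under discussion is deliberately left out, and the regexes are simplifications rather than a full Link-header parser.

```python
import re

def discover_hub(link_header=None, html=None):
    """Find a rel="hub" URL: HTTP Link header first, then <link> tags.

    Simplified sketch: real Link headers allow multiple rel values,
    varied quoting and parameter order, which a proper parser handles.
    """
    if link_header:
        m = re.search(r'<([^>]+)>\s*;\s*rel="?hub"?', link_header)
        if m:
            return m.group(1)
    if html:
        m = re.search(r'<link\b[^>]*rel="hub"[^>]*href="([^"]+)"', html)
        if m:
            return m.group(1)
    return None
```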

julien: I initially agreed and then changed my mind. There's a use case that's very useful - github pages

aaronpk: a hosting environment that doesn't let you set http headers, for document types that don't support embedded links

sandro: eg. json data on github

aaronpk: host meta turns out to be a weird rabbithole
... hostmeta is actually an xml document
... hostmeta is a specific part under .well-known
... In my opinion, the best way to do it for pubsub (and possibly webmention) is to define a pubsub .well-known
... where the pubsub spec defines the document inside there

tantek: that's not unique to pubsub, adding one more step to discovery

aaronpk: we solved for webmention with not supporting wellknown, and whole domain delegation, at an http header level
... We don't support per-document discovery on non-xml content types, but the assumption was the majority of the use cases for that could be solved with headers

<cwebber2> that requires that you control the web server, so it wouldn't work with something like github pages right? which is probably fine

<cwebber2> oh there we go :)

eprodrom: What we're trying to do is do discovery on name.example/something.jpg
... Can we say that if you can't do any discovery on something.jpg you can do it at the host name level?
... Is that a fair way to do it?
... If you can't do discovery on name.example/something.jpg try doing discovery on name.example/

aaronpk: you wouldn't find the self url
... and doesn't solve it for github
... oh right subdomains on github

sandro: you don't separate subscribing to root from subscribing to everything else

eprodrom: good point

julien: why do we have to define discovery, why isn't there a discovery spec?

aaronpk: that is kind of host meta...

tantek: we did roughly define our own discovery guidelines in this group, across all the different approaches

aaronpk: that doesn't solve this particular use case

tantek: I think that was considered, this isn't a new use case
... There's not enough new information to bring it up from two years ago

eprodrom: Link header and then link tag is going to discover somewhere north of 95% of cases
... So it might not be worth doing anything more than punting and saying go look around the web and see what other discovery mechanisms there are

aaronpk: Here's my concern. It makes writing subscribers harder. As a subscriber it's nice to know when you've checked all the possible ways to find a thing

julien: a long time ago, the discovery mechanism itself can be extracted into a custom service. Can just be a library that people can reuse
... I wrote a service a couple of years ago called feed discovery..
... It's harder to implement, but it's a matter of just using a library that does it

aaronpk: the reason that we have to talk about it is because it is in the spec already
... We're not talking about adding it
... hostmeta has been in the spec since the beginning

julien: a lot of people who host a jekyll site on github have asked about it

sandro: one other discovery technique, used in csv on the web recs, is
... if you have a csv file you're supposed to look for -metadata.json
... and also csv-metadata.json

tantek: wat

various: wat?
... *descends into silliness*

tantek: So who implements hostmeta discovery for pubsub?

julien: no-one
... I have email of people asking me how I do this on github with jekyll
... I say you can't. it's excluding them, and then saying oh well nobody is doing it

ben_thatmustbeme: It's the subscribers we need to know if they do it
... With webmention we don't want to add it cos every single implementation would have to update
... But with pubsub if it's been in the spec since the beginning that's not the case

aaronpk: My issue is as an implementor I look at this sentence and I don't immediately see how to implement it
... I want to see the URL I request, the response body that I get, and then I'll understand the question

tantek: I'm asking are there any implementations of subscribers that do discovery this way
... We knew for webmention was 0 so we could easily make a decision

julien: I don't know.

tantek: are there any publishers that implement this?

julien: We don't know

aaronpk: hard to know, we'd have to survey the whole web..

tantek: follow up is that are these publishers depending on it

julien: people are asking me about it because it's the only option for them

sandro: it's a thing that's in the spec but there's not enough guidance

julien: we can definitely put it as at risk

aaronpk: looks like hostmeta was added in 0.4
... The other thing is that it doesn't link to the hostmeta spec, it links to the wellknown spec

julien: that's not right

aaronpk: So I suspect there are few implementations because of that

tantek: so it's underspecified. Implementation status is unknown. We can specify it in more detail and put it at risk.
... that's an exception from the prior resolution of FYN

aaronpk: It's an exception because it was already in the spec
... It's not being added

<cwebber2> of course, At Risk, it's not very at risky, because it has implementations :)

aaronpk: Putting it at risk is bringing it closer to our earlier resolution
... and I'll check in the test suite if people are using it

<ben_thatmustbeme> cwebber2, do you have examples of where it has implementations?

<cwebber2> oh

<cwebber2> nm maybe I'm wrong

<tantek> cwebber2: in the room we have no knowledge of any PuSH publishers or subscribers that implement host meta discovery

<tantek> if you know of one, definitely say something!

<cwebber2> I misunderstood, sorry!

<eprodrom> I need to step away at noon for 15 minutes

aaronpk: My problem with hostmeta is the spec gives you 5 different ways to find the hostmeta
... Conneg, there may be a .json version of the file
... I don't feel comfortable recommending that to people using pubsub
... pubsub right now does not recommend that
... there are so many ways it can work, makes discovery very difficult
... I think there are much better ways to solve the use case that hostmeta is trying to do
... One of those is to define our own pubsub .well-known
... It's much clearer
... BUT
... that is a big change to the spec
... We know nobody is doing it
... Adding that is a big deal
... That breaks every existing subscriber
... But that is a better solution to the technical problem
... However completely delegating to the hostmeta spec is also a terrible solution
... and will also break subscribers
... The only solution that doesn't break *possible* implementations that we haven't confirmed, is to harden the aspect of hostmeta that the spec does *already* refer to
... Only looking for the xml format in the hostmeta file
... If there are implementations they did it that way, cos that's all the spec hinted at
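(Aside for context: the narrowed behaviour aaronpk proposes — look only at the XML host-meta document, no conneg and no `.json` fallback — could be sketched roughly like this. The `rel="hub"` value and example URLs are illustrative assumptions, not quoted from the spec.)

```python
import xml.etree.ElementTree as ET

# XRD namespace used by RFC 6415 host-meta documents
XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

def find_hub_in_host_meta(xrd_bytes):
    """Parse an XML (XRD) host-meta document and return the rel="hub" href,
    if present. Only the XML form is considered, per the discussion:
    no content negotiation, no host-meta.json variant."""
    root = ET.fromstring(xrd_bytes)
    for link in root.findall(f"{XRD_NS}Link"):
        if link.get("rel") == "hub":
            return link.get("href")
    return None

# Illustrative host-meta document
example = b"""<?xml version="1.0"?>
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <Link rel="hub" href="https://hub.example.com/"/>
</XRD>"""
```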

tantek: or we drop it right now.

aaronpk: Or because we don't know any implementations we drop it from the spec

sandro: Sounds like the right solution is a .well-known extra headers
... as a workaround for these stupid things that don't let you provide headers

aaronpk: where you can put in literal text of http headers

various: lol

aaronpk: I would rather drop it, but am okay with restricting to xml, to support the use case that has technically been supported before this group adopted it

julien: I want the mechanism to exist for these people
... Opposed to dropping

ben_thatmustbeme: I'm okay with any of them

rhiaro: I agree with julien

<cwebber2> no objections

tantek: cwebber2 do you have an objection to either way?
... I'm going to declare consensus on aaronpk's proposal, of restricting the scope to what it seems the pubsubhubbub spec intended, and marking it as at risk
... And indicate in the spec that we know of no known implementations, and if you have an implementation the group strongly requests your input on this issue in particular

ben_thatmustbeme: I think the most important feedback is from subscribers to know that they're all checking for it
... If nobody is checking for it, what's the point of specifying it?
... In this version

aaronpk: we can solve it a better way in a future version

tantek: we're asking for an extension for this spec in particular, so if anyone decides to look through with a fine tooth comb, one they will look for is narrowing scope, not adding new features


<Loqi> [@sandhawke] Any workaround for sites (@github) which don't let you set HTTP Link headers? I find myself sadly wanting .well-known/extra-http-headers.txt

tantek: So.. what do we need to get to CR? Have we covered all issues?

julien: yes

tantek: Continue with taking pubsub to CR discussion after lunch


<cwebber2> yup

<aaronpk> k

<cwebber2> ty

<aaronpk> 👍

<tantek> aside: Snowden advocated federation:

<Loqi> [Ben Werdmüller] I missed this: Snowden advocated federation as an antidote to filter bubbles. #indieweb #decentralize...

<cwebber2> tantek, nice to see

hey cwebber2 we're doing the group photo, any chance we can get you on video?

otherwise we'll hold a laptop with a blank screen and shop you in later :)

<cwebber2> rhiaro: shop me in later!

<cwebber2> no viable webcam option right now, so if yer gonna fake it, post-production is just as good

<cwebber2> rhiaro: or, hilariously, you could load this image:

<cwebber2> gavroche-wip15.jpg

<cwebber2> just as good, looks just like me

<wilkie> you're looking a bit untextured

<tantek> wilkie are you joining us remotely?

<tantek> we are about to resume

<tantek> cwebber2 can you reconnect?

<tantek> call aaronpk

<cwebber2> yep

<cwebber2> wilkie: yeah it was for a 3d print from the mediagoblin campaign rewards

<cwebber2> never got around to texturing

<wilkie> looks good

<tantek> cwebber2: focus :)

<scribe> scribenick: rhiaro

julien: what to do to get to CR

tantek: A test suite, or plan, or start, which we have

aaronpk: yeah the publisher and subscriber tests are done, the hub is in progress

tantek: Plan?

aaronpk: yep

tantek: where?

aaronpk: github issues on tests repo

tantek: Implementation report template?

aaronpk: It is part of the test plan

tantek: You can use the CR to go to implementors and say you need to either implement this to the spec or say why you can't
... The implementation report template is a requirement to enter CR in other specs
... So that as soon as we ask for implementations they have a way to provide reports

aaronpk: It is better to have it that way

tantek: Do we have conformance criteria?

aaronpk: I don't think so

sandro: Do we have the three conformance classes of publishers, subscriber and hub?

aaronpk: yep

sandro: different kinds of hubs?

aaronpk: Not in this spec
... It's actually pretty small. Not a lot of options for each of them. Conformance criteria is not much more than the spec itself

sandro: *explains conformance criteria*

tantek: should be a summary of features

<Loqi> good riddance

aaronpk: sometimes it's more obvious that some parts of the spec are optional or only apply to certain roles

We missed you Zakim

tantek: Is there CR exit criteria in the spec?

sandro: Our standard boilerplate is two implementations of every feature passing the tests

tantek: interoperable implementations

sandro: the higher bar we could go for is two implementation with all the features
... 2 hubs that do everything a hub is supposed to do

aaronpk: a great example of criteria for the hub is that it must support signatures, whereas the subscriber doesn't

tantek: file issue for CR exit criteria in the spec

aaronpk: example of feature is hostmeta discovery
... we might not get two implementations of that
... i actually do have the feature list
... 4 discovery methods, headers xml tags html tag, hostmeta
... Let's set the bar of two implementations of each feature

<cwebber2> ben_thatmustbeme: softflex gloves from the ergoguys store; you can also get them on amazon

various: *discussion about publication process*

various: *discussion of github's partial implementation of pubsub*

tantek: it's almost like a different class of conformance that's beyond the scope of the spec
... authenticated subscriptions
... a future version of pubsub could specify if you want to only allow authenticated subscribers, here are some mechanisms
... sounds like their use case is different from what we're speccing

julien: requiring authentication makes a lot of the things we do irrelevant
... facebook do the verification step even though they know you did it with your credentials
... In the past, it works to say this will make their thing work with other apis
... Github could let you subscribe to a repo by html without authentication
... But currently the API is behind authentication

sandro: for github pages, they could provide a link header to their own hub

aaronpk, julien: that would be great

julien: medium has its own hub for all of the feeds (superfeedr)
... We don't have an API for reading, we just have feeds
... I can see where we might eventually have some things not available as feeds to be available through pubsub. I'm fighting hard against auth for that

aaronpk: There was text modified to make it clear to send the full page as the notification payload. Are we leaving it open to send diffs?

julien: diffs are the widest thing implemented

aaronpk: you still send an atom feed, it just only has one item in it
... you send a wrapped feed
... So the actual diffing mechanism is not in the spec

julien: I think we should remove it from the spec, but I still want a trace in here because people will ask about it

aaronpk: Add it to the test tool to check if it's being done, so we can keep track of it
... If the actual diffing mechanism is not in the spec, how do people know what to do and what to expect from the payload
... Can we say here is where to go to learn about what to expect?

sandro: in practice, when people get the new version of an rss feed, they don't actually get it, they get a stripped down version that only has the new stuff?

julien: yes, only the new entries

aaronpk: 0.3 specified diffing for rss and atom
... it said send the feed with only new items
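(Aside for context: the 0.3-style behaviour aaronpk describes — "send the feed with only new items" — amounts to serving a wrapped feed with previously-seen entries stripped. A minimal sketch, assuming Atom and using entry `<id>`s as the identity test; the feed content is illustrative.)

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)  # keep the default namespace on output

def trim_feed(feed_xml, known_ids):
    """Return the Atom feed with entries the subscriber has already seen
    (matched by <id>) removed -- the '0.3 diff' is just this subset feed."""
    root = ET.fromstring(feed_xml)
    for entry in root.findall(f"{{{ATOM}}}entry"):
        if entry.findtext(f"{{{ATOM}}}id") in known_ids:
            root.remove(entry)
    return ET.tostring(root, encoding="unicode")

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example</title>
  <entry><id>urn:1</id></entry>
  <entry><id>urn:2</id></entry>
</feed>"""
trimmed = trim_feed(feed, {"urn:1"})  # only urn:2 remains
```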

eprodrom: people tend to mess that up

aaronpk: it was written in 0.3
... it got explicitly taken out in 0.4
... so now there's no clue what to expect about a diffed payload
... Where do we point people to?
... if it's a MAY be a diff

julien: ..define a diff. It's a subset
... I think it's still fine

aaronpk: Right now as the spec is written it's not clear what implementors should expect
... to receive or send

sandro: sounds like it's not in conformance to the spec
... I imagine it says the fat ping is the content that is being published
... It should say it's either the content being published, or the subset appropriate for that media type

<julien> <p>A content distribution request is an HTTP !RFC7231 POST request from hub to the subscriber's callback URL. The HTTP body of the POST request MUST include the payload of the notification. This request MUST have a <samp>Content-Type</samp> Header corresponding to the <samp>Content-Type</samp> of the topic, and SHOULD contain the full contents of the topic URL. The hub MAY reduce the payload to a diff between two consecutive versions if its format allo[CUT]

<aaronpk> if its format allows it

sandro: I'd use the word subset not diff

aaronpk: the point is that the actual, now you can't even tell what the payload is going to be
... "subset or diff" are not actual spec words (not defined)

<wilkie> RSS/Atom is fairly straightforward, too. if you know what RSS is, then you can tell when an entry is an entry you've never seen before. So you just always treat the incoming data from PuSH as a subset and just take what you need.

sandro: so, for formats that are a set of items, this may be reduced to only the changed items

aaronpk: to better define the 'diffing mechanism' in generic terms?

eprodrom: what about defining the diff for... it's such a common use case that we've got, using rss items and atom entries, it seems worthwhile to.. it would take two sentences to define it

aaronpk: the problem is that for the content types that aren't those, what is expected?

eprodrom: figure something out

tantek: the answer is not implemented

eprodrom: it could be a diff

julien: a subset

eprodrom: it could be a subset. For rss it's an item, for atom it's an entry, for AS2 it could be a single activity

<wilkie> if you let people know that a subset that is based on the content type is how it is expected to work, I don't think people will be confused to that purpose

<wilkie> if you understand the content type (RSS or AS2) you won't find a subset surprising or hard to deal with, basically

sandro: if the topic is a json document and the top level is an object and one key value has changed can you just send that key value?

julien: no
... we're back to diffing

aaronpk: what I would like to see from the spec and as an implementor, I want to see exactly what to do and I don't want any wriggle room

<ben_thatmustbeme> scribenick: ben_thatmustbeme

rhiaro: i was queued to continue answering what is needed for CR

<rhiaro> scribenick: rhiaro

eprodrom: RSS, Atom and undefined would get us pretty far

aaronpk: I would say RSS, Atom and "don't do it" is a better option

eprodrom: I'm fine with that

julien: Do we even want to open the door for RSS/Atom?

eprodrom: yeah. Subscribers SHOULD be able to handle the whole content, or a subset

sandro: but the subscriber doesn't get to pick, and can't tell what it's getting


<wilkie> yeah, a general diff may just be too complicated to get right. noting that a formalized subset defined elsewhere is to be expected seems to be a good note in the spec, giving RSS/Atom as an example.

julien: you have a feed with ten entries, all new, you get ten. You have a feed with 100, 10 new, you get 10. The subscriber doesn't know

eprodrom: I think we always did exactly one

julien: superfeedr as well

tantek: sounded like consensus forming about what to specify for rss/atom vs other formats

aaronpk: I want clarity on the notification payload when it's not the full contents
... We can describe what to do for RSS/Atom
... We know that we have implementations of hubs that send individual entries within an RSS/Atom feed
... We don't know and I doubt there are implementations of other diffing mechanisms
... I think it would be very reasonable to put in a sentence or two for RSS/Atom
... Subscriber MUST be able to handle that
... But the hub doesn't have to

julien: practically it doesn't change anything for the subscriber

sandro: they have to not delete what they've already got and replace it

<wilkie> brand new subscriptions only got the latest 3 entries for my implementation as their initial data, and then just the new ones.

julien: depends what they're doing
... sometimes it's a mirror or an archive

aaronpk: very application specific

sandro: I could have a completely broken implementation without realising it. I might think the hub is broken because it's sending only some of it
... Every consumer has to be written with this awareness, that if it's RSS/Atom it might not be getting the full content

aaronpk: the alternate is that you assume you're getting the complete feed and don't dedup

sandro: I could be building a subsystem, a fetch module, you give it a url and it gives you back the content, and I want to add pubsub discovery rather than a polling mechanism
... not rss/atom. A web mirroring system. It turns out that pubsubhubbub on two media types do something different than pure mirroring
... I have to know that

tantek: if you're looking to mirror you have to go get the actual resource

sandro: the point of fat pings in pubsub is I don't have to do that

tantek: but the point of fat pings is not to support mirrors

julien: you're never able to guarantee you get the full mirror of a feed

sandro: as someone writing a consumer I just have to know that for rss/atom the hub is not bit true

tantek: the fat pings are not bit true

sandro: there are no bits if it's not fat pings
... fat pings for feed formats, the hub messes with you (in a good way)

julien: if you ask for a feed then you should expect a window in a stream
... whether one or ten is not something you can control

<cwebber2> I forget, can pubsubhubbub be reasonably used for delivering private data? it needs salmon for that to work, right?

julien: It's hard to build a mirror of an infinite stream
... you can only get mirrors as windows

<wilkie> this has nothing to do with the protocol itself, either. my implementations may only choose to publish a max number of entries to a hub.

<cwebber2> it's pretty much only for public info right?

tantek: sounds like an informative note in the section on fat pings could address this

cwebber2: can pubsubhubbub be reasonably used for delivering private data?

aaronpk: implementations so far have only used it for public feeds

julien: no, there's a lot of private stuff

tantek: this is a new topic

cwebber2: if there's an expectation that you have to jump back to correctly get the content if it's something that was private I guess it has to be both private and transient for that to be an issue
... But if that's an issue there's not any guarantee you can fetch it again
... But maybe that's not a pubsub issue
... It's an issue we have in AP, but I don't think pubsub is being used for this

julien: there is a lot of private data, obfuscated not behind authn
... you had an rss feed in your gmail inbox, that was a public url, a capability url
... people plugged that into services like superfeedr
... we were technically accessing thousands of people's emails as rss
... it's private because of the content, but not behind authentication

tantek: facebook, foursquare, still have stuff like this

<cwebber2> so, the reason I raised this was we realized if you had private *and* transient data flying across the wire, you won't be able to fetch it again... which is possible in activitypub

aaronpk: Summary

<cwebber2> which means that a partial update would have to be sure how it worked

<cwebber2> which, we dropped partial updates for federation so its no problem

aaronpk: We will add to notification payload section describing how to send changes in the feed for RSS and Atom specifically

<ben_thatmustbeme> cwebber2, thats what i was going to say, but i figured you were getting to understand that as well

aaronpk: For any other content type you must send the full original content

<cwebber2> but just saying to the "if you're mirroring you should go back and fetch"

tantek: I heard a different suggestion from eprodrom

<ben_thatmustbeme> correct, you cannot re-fetch the content other than going to the source

tantek: In addition to those content types, to consider putting in at risk how to also do it for AS2
... And frankly any other format that pubsub consumers are consuming right now
... eg. h-feed
... You might get a subset h-feed with only h-entries
... but at risk

aaronpk: the way sandro described it was a generic term for all of these things
... If the URL is a collection of items
... but it's very clear for implementors what to do for their content type
... If you are an ical feed of a bunch of events
... it's very obvious that you can send only the feed with the new events
... if you're an RSS feed it's very obvious that the items are the individual things to send

tantek: i half agree with that
... ical there is the implicit assumption that if an event is not there it's not in your calendar
... it would just delete the event, it wouldn't treat it as an update mechanism

aaronpk: that was a bad example
... The concrete example of rss/atom would be useful, and then the generic text about items in collections

tantek: again I'm going to say that generic text is a bad idea

julien: If people go away from RSS/Atom we can remove it from the spec?

sandro: It'll be hard for people to go away from that

julien: it was implemented because it was in the spec

tantek: they could have always provided the full feed, they didn't need to do the subset thing

julien: My point is in the end it's the same amount of processing for the subscriber whether it's a diff or the full feed

<sandro> "If the content type represents a 'feed' of items, such as RSS, Atom, and AS2, then the hub MAY trim pre-existing items from the feed."

julien: Except that it introduces uncertainty as sandro said
... The subscriber doesn't know if it's the content of the resource at this point

tantek: that warning is useful
... that's where the generic terminology might be good

<sandro> tantek: unchanged

sandro: If the content type represents a 'feed' of items, such as RSS, Atom, and AS2, then the hub MAY trim unchanged items from the feed.

aaronpk: I would still like to see the concrete description of RSS and Atom since they have been implemented

tantek: for anything else we include we list as at risk

aaronpk: we won't describe the specific mechanism for any other formats

<Loqi> Scapadis made 2 edits to Socialwg/2016-11-17

<Loqi> Tantekelik made 7 edits to Socialwg/2016-11-17

<Loqi> Tantekelik made 1 edit to Socialwg/DocumentStatus

<Loqi> Abasset made 1 edit to Socialwg/2016-11-17

<Loqi> Rhiaro made 1 edit to Socialwg/push-name

julien: we're adding more uncertainty

sandro: we have the uncertainty of all feeds, or only rss and atom

<sandro> eprodrom: Nah, don't trim AS2.

eprodrom: one possibility is that people do implement it with one Activity in a feed, at which point we document it

julien: there are things where the size of the feed is not fixed
... It's hard to know what the 'full feed' is
... I wish for RSS and Atom we consider it a subset of the global stream no matter what

eprodrom: If I get 5 entries, i check each of their ids against what I already have, if I don't have it if it's new, if I do see if it changed
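(Aside for context: eprodrom's subscriber rule — check each incoming entry's id against what you already have; unseen means new, seen-but-different means changed — might look like this. The entry shape is an illustrative assumption.)

```python
def ingest(store, entries):
    """Apply the subscriber rule eprodrom describes: for each incoming
    entry, an unseen id is an addition; a known id whose content differs
    from the stored copy is an update. Unchanged entries are ignored."""
    new, updated = [], []
    for entry in entries:  # each entry assumed to be {'id': ..., 'content': ...}
        eid = entry["id"]
        if eid not in store:
            store[eid] = entry
            new.append(eid)
        elif store[eid] != entry:
            store[eid] = entry
            updated.append(eid)
    return new, updated

store = {"urn:1": {"id": "urn:1", "content": "hello"}}
new, updated = ingest(store, [
    {"id": "urn:1", "content": "hello, edited"},  # known id, changed content
    {"id": "urn:2", "content": "brand new"},      # unseen id
])
```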

julien: the subscriber can't rely on just getting the new version from the hub

tantek: I would agree with syndication feeds and search results
... I would disagree with calendar events

<sandro> tantek: syndication feeds and search results, not calendar events

sandro: a flag would be great

tantek: now we're talking new features

julien: you can have very large requests that need trimming

tantek: I think the assumptions you have for rss and atom consumers for pubsub
... probably the same for h-feed

aaronpk: more challenging with h-feed because consumers of h-feed are usually dealing with the parsed result of the page, not the html itself
... however it's challenging for the hub to reduce the html minus specific items, can't do it at the json level

tantek: it could

aaronpk: no
... it has to send html if the topic is that
... It's a lot more work

ben_thatmustbeme: extensions for how to handle certain content types in smarter ways

<wilkie> the publisher would do that, not a hub?

tantek, julien: subscriber has the same code no matter what

aaronpk: maybe that's the way to word it
... If the hub is going to manipulate the contents, it must do so in a way that the subscriber does not have to change its behaviour

sandro: if the subscriber in the subscription request could say trim feeds yes/no/don't care
... Can we add that?

tantek: trying to add more features?

julien: adds complexity
... Doesn't matter if the hub sends a full or truncated version
... as long as it does it in a subset way

sandro: if I get an html page I didn't know it has h-entries on it, but the hub knows that and trims it, but I don't know it has h-entry, I'm screwed

julien: yeah. Subscribers SHOULD NOT care whether the feed is truncated or full. For RSS and Atom

aaronpk: subscribers should not make any assumptions about whether the feed has been truncated or not

tantek: reducing the scope to two specific content types

<sandro> -1 to trimming HTML h-entry

aaronpk: yes

<sandro> because it would screw up HTML without me knowing it

<wilkie> subscribers would just parse it and pull out anything it sees that's new

<tantek> sandro - sounds reasonable

aaronpk: The only potential situation where we may want to reconsider the feature is with AS2 feeds because they are intended to work like RSS/Atom feeds, only json

sandro: we could do an at risk thing around as2 and get some experience

aaronpk: An AS2 object has the same content type as a Collection

julien: don't include it
... only RSS/Atom

aaronpk: just saying AS2 this is probably going to come up again

julien: RSS/Atom are not a good role model, do not mimic them

eprodrom: just document existing practice where people send entries, and don't recomend it for other mime types
... I think that's fine

sandro: maybe a sentence about a patch based extension some day

various: no

sandro: to head off all the comments

julien: I can see how we might have comments but it's the opposite of a good idea, it adds so much complexity

eprodrom: It in no way makes up for the bandwidth

sandro: where do we say we've thought about it and that it's a bad idea

ben_thatmustbeme: we can say we included it for legacy reasons, but don't do it for other content types
... in a note

aaronpk: we're going to describe what RSS/Atom are doing for fat pings, describing actual things people may be receiving, and say they are there for legacy reasons and say you must not modify the topic URL for any other content types
... Hub should never modify it
... and subscribers should never assume it's truncated or not

<sandro> where "should" should be "must"

<Zakim> tantek, you wanted to mention implementations of specific subset items vs at-risk for anything else

<ben_thatmustbeme> scribenick: ben_thatmustbeme

<Zakim> rhiaro, you wanted to say evidence of wide review

rhiaro: one of the things for getting to CR is we need evidence of getting wide-review
... we need to count the number of issues from outsiders
... i've done a lot of this after the fact, and its easier to do it now

tantek: to that i would add that we should start the wiki page for that

<tantek> the CR transition wiki page for PubSub

rhiaro: activitypub has a really good wiki page for that, that might be good to copy
... it would be good to start capturing those now so we don't lose track of any of them

<sandro> The group is clearly RESOLVED even though Tantek doesn't want it on the record as a RESOLUTION. Aaron has now documented it at

<ben_thatmustbeme> sandro++

tantek: this means we can go to outside groups and get them looking at the groups

<sandro> For RSS and Atom, we will add a sentence like 0.3 had, describing how to deliver partial feeds with only the new items.

<sandro> For other content types, the hub MUST NOT modify the document that it retrieved from the topic URL.

rhiaro: we need to find all the review from other groups and get those links together at least

RESOLUTION: aaronpk's proposal as documented at the comment (and as copy/pasted above by sandro)

tantek: note that this is in the IRC, and if that comment changes, they can see it there

discussion of getting reviews from other groups

rhiaro: i asked internationalization for review of the spec, we haven't done others yet
... i can send the emails to accessibility and security

tantek: i think that brings us to the last big issue for pubsub



tantek: last time we had no consensus
... would anyone like to choose a name to advocate?

rhiaro: hubbub? it was a joke before but now i kinda like it


aaronpk: what was the original problem with pubsubhubbub?

julien: it sounds like a joke, its hard to pronounce
... it kind of was a joke

<wilkie> I like hubbub. it has grown on me immensely. or just keeping it PuSH. anything but pubsub lol

aaronpk: thats obviously totally valid criticism

<cwebber2> I'm ok with a change but I can live with pubsubhubbub :)

julien: i can live with anything at this point

<cwebber2> pshh

rhiaro: typing pubsubhubbub or PuSH are both horrible to minute

aaronpk: i agree with the points that have been brought up against pubsub
... we can drop things that are have multiple people against

<eprodrom> Unfortunately I have to step away for 1h

<aaronpk> oh yeah sorry. i can try calling into the conference line

<eprodrom> tantek: unfortunately I have to step away until after 4

<aaronpk> i can't find the conference number, hang on

<aaronpk> found it

<aaronpk> sorry cwebber2 sandro is starting the meeting

<aaronpk> i didn't realize it wasn't always active

<aaronpk> cwebber2, i'm in now


<ben_thatmustbeme> +1 to any item not crossed out on the board

<cwebber2> wilkie, btw, not sure you noticed ^^^

<KevinMarks2> Hm, I'm late here, but didn't as1 solve pagination of a list of items with the subset expressed?

<aaronpk> what issue is that about?

<KevinMarks2> The discussion of feeds of items in pubsub

<aaronpk> oh, sure, but it's a new feature from the perspective of PubSub and we are trying to not add new features if possible

<wilkie> I think PuSH just needs to operate at the item level and not the feed level and that "fat pings" are just the sending of multiple yet distinct entries instead of trying to syndicate a feed which is apparently impractical/unnecessary

<wilkie> so maybe that's an extension

tantek: i've updated the wiki page with candidates and rejected

<cwebber2> I don't really care anymore :)

<cwebber2> pick a name!

tantek: we are agreeing to reject all those others and we are not going to revisit those

<wilkie> I like hubbub. it sounds reasonably ambiguous and apolitical

<wilkie> anything but pubsub

tantek: we will choose from those 3 or any new ones

<cwebber2> is it time to bring out my dice

<cwebber2> we can narrow this down fast with some dice rolls

PROPOSED: We reject all pre-existing names before today, and we will pick from between WebSub, WebSubscribe, and WebFollow barring any new better names

<julien> WebSubscribe++

<Loqi> websubscribe has 1 karma

<tantek> +1

<julien> +1

<rhiaro> +1

<aaronpk> +1

<sandro> +1

<wilkie> +1

<cwebber2> +1

<mattl> +1

RESOLUTION: We reject all pre-existing names before today, and we will pick from between WebSub, WebSubscribe, and WebFollow barring any new better names

PROPOSED: add aaronpk as a co-editor for PubSubHubbub / PubSub / whatever we name it

<ben_thatmustbeme> +1

<rhiaro> +1

<aaronpk> +1

RESOLUTION: add aaronpk as a co-editor for PubSubHubbub / PubSub / whatever we name it

tantek: i'm not hearing any objections
... could you prepare a new WD?

aaronpk: i think after we get through the stuff we discussed today, I publish a new WD

julien: i can probably get the changes proposed done for next tuesday which will then be available to publish a new WD


tantek: we should resolve the new name before we publish the new WD

break until 3:45


<cwebber2> pubsubdubstep

<cwebber2> I said I wanted to go after micropub

<sandro> do you still?

<cwebber2> I don't care really now

<scribe> scribenick: ben_thatmustbeme


<sandro> scribenick: sandro

tantek: test suite status?


aaronpk: It covers every feature
... of servers
... you could test clients

tantek: do you have a plan?
... for testing clients

aaronpk: I have a rough outline, in an issue

tantek: timeline?

aaronpk: I've let this sit while working on pubsub

tantek: getting to CR seems like the priority
... implementation report templates set up, etc

aaronpk: I'd like it to be automatic
... but that's an additional step
... I could make a manual checklist quickly

sandro: why get to CR quickly
... I suggest we have the test suite in nice shape before we outreach to existing implementations

<cwebber2> oops

sandro: make the story be "There's finally a test suite! Try it!" No need to even read the spec.

ben_thatmustbeme: Do we have an impl report for mp that's not in the test suite?

aaronpk: If you go to the site you can see what everyone's done.

ben_thatmustbeme: do we want one?

sandro: *shrug*

ben_thatmustbeme: if we have an offline one, a template, then people have more options

aaronpk: I have to write it first anyway; I can publish it

<tantek> eprodrom returns

aaronpk: I'll write pubsub impl template first, then finish test suite for pubsub, then finish automatic impl report submission for pubsub,
... THEN go back to micropub and add client tests
... with impl report
... I'm moving soon
... so next week is out

tantek: goal one - get pubsub to CR, maybe with a nice test system
... try by end of year
... so resolve CR within 19 days - by Dec 6

aaronpk: that's a bit of a stretch

tantek: with publication Dec 15

julien: Google has a hub, and WordPress has multiple hubs
... AMP might be a way to show Google the value here
... AMP has no distribution mechanism -- it's aggressive fetching

tantek: google is crawling amp pages, and if they could subscribe, that could be good

julien: because they're serving the cached content, they crawl often, like on average four times per day

<KevinMarks> Well, Google is crawling everything.

tantek: I don't want to hold up for hub testing

aaronpk: Yes, I'll focus on auto submission of consumer and producer tests.

tantek: and the hub tests can be done during CR
... closer to pubsub CR in December?

<tantek> by Dec 6

aaronpk: Yes. I can have this ready for users by Dec 15

tantek: document edits too?

aaronpk: yep

tantek: pubsub to CR is priority. then micropub.

aaronpk: then I can go back to hubs
... 1.5 - 2 weeks
... client test -- walks you through what you need to tell the client to do, and then knows on the server what's been done.

sandro: sounds fun/compelling

aaronpk: yes, check boxes are very motivating

tantek: where are we on server tests?

aaronpk: five implementation reports. four other than me.
... three outside wg
... interop: every feature has two or more implementations.
... everyone supports everything, except my server doesn't do alt text yet.

tantek: any non-editorial issues raised?

aaronpk: no


aaronpk: still waiting for response about alt-text from a11y group


tantek: Can you add to issue-34 what we just discussed about MP server implementations, and see if that spurs a response
... Our expectation is to exit CR on MP in January
... because we're there on the servers; don't yet know about the clients

aaronpk: There are several clients, but I don't know if we have two that support editing

ActivityPub CR

eprodrom: How are we on testing?

cwebber2: I still need to write the test infrastructure
... or impl report template
... we just hit CR and I haven't done that stuff

eprodrom: Are you planning to use that for the testing?

cwebber2: I might have folks download stuff and run things locally, but I like what Aaron's been doing,

eprodrom: esp with automated implementation report
... so you're building that out. do you need any guidance?

<tantek> chair: eprodrom

cwebber2: I think I just need to do it. If I need to, I'll reach out.
... I haven't started, so not sure

eprodrom: test for clients and server?
... web based client

aaronpk: my testing tool can't always tell if the right end result happened, so it asks the user, with a check box, like "Does the photo now appear...."
... the test tool sends a consistent payload to the server, then has the human check the box if it can't tell automatically
... still very very helpful

cwebber2: Okay, I'm planning to move forward, now that other things are cleared off

eprodrom: implementation report template?

cwebber2: yep, will do

eprodrom: Outstanding issues?

cwebber2: all editorial


cwebber2: except for change tracking, which we said is out-of-scope, all of these are editorial
... I plan to handle them before PR

eprodrom: Anything we can help with?

cwebber2: could use clarification on questions from
... they're thinking of doing the re-write -- would that count as independent implementation?

eprodrom: yes

sandro: yes

cwebber2: got a pile of things to do

eprodrom: implementations?

cwebber2: pubstrate is mine

eprodrom: as a second one

<rhiaro> hackertribe

cwebber2: someone said something, ... hackertribe
... dunno if they've started
... other than that, we also have Amy's work, and BenGo started something.
... those are what I know of
... I haven't started converting MediaGoblin

sandro: connection between MG and pubstrate?

<csarven> beep beep beep

cwebber2: None. pubstrate is a clean codebase.

sandro: what about gnusocial?

cwebber2: I dunno

eprodrom: Diaspora? Friendica?

<KevinMarks> Mastodon?

cwebber2: Diaspora -- we had Jason Robinson for a while -- I think he lost some time, and was disappointed we didn't do signatures, but we did address some things, so they COULD implement them
... no commitment they will
... but I heard they were maybe doing their own stuff
... I got the impression they had gone a different direction

eprodrom: one way to find out!

<KevinMarks> interoperates with gnu social

cwebber2: yes, worth a try, I'll reach out

<KevinMarks> And supports pubsubhubbub

eprodrom: wordpress, other publication apps

cwebber2: gitlab maybe

<rhiaro> cwebber2: mattl is in the WG and works for gitlab. He was here earlier but just left

sandro: (

cwebber2: Great, I'll do some outreach!

eprodrom: timing?
... to implementations of each feature.... pubstrate + ...


cwebber2: strugee sounds energized to run with it in

eprodrom: think we'll have the testing system up in a couple weeks?

<KevinMarks> You typoed it, sandro

cwebber2: that would be nice. I also have some contracting work.
... a couple weeks is probably over optimistic, but I'll try
... January for AP CR-exit, not impossible, but I wouldn't give a confident commitment.
... the whole gamut probably by the end of January, but probably not the beginning of January
... I think February is feasible

eprodrom: that's it for AP, and we're done for the day

rhiaro: We've all been invited to an art party thing after dinner

<wilkie> but what do we call dinner?

<aaronpk> PubSubYumYum

<wilkie> aaronpk++

<Loqi> aaronpk has 69 karma in this channel (1141 overall)

<wilkie> anything but PubSub

<tantek> time for YumYum

<wilkie> enjoy

rhiaro: dinner at VeggieGalaxy, then people can hop on T to party or wherever

<KevinMarks> I was going to suggest "flowpast" as a pubsubhubbub name, but looks like Google expired the domain on me

<cwebber2> strugee: eprodrom said he'd stay out of your way so it can be an independent implementation :)

<strugee> excellent. I read some of the log but must have missed that part

<cwebber2> eprodrom might have just said it on the call :)

strugee, eprodrom says: "please thunder forward as fast as you can!"

(reading over my shoulder)

<strugee> hahahahaha

<Loqi> nice

<strugee> I LOVE it

<strugee> eprodrom++

<Loqi> eprodrom has 41 karma in this channel (42 overall)

Summary of Action Items

Summary of Resolutions

  1. aaronpk's proposal as documented at the comment (and as copy/pasted above by sandro)
  2. We reject all pre-existing names before today, and we will pick from among WebSub, WebSubscribe, and WebFollow barring any new better names
  3. add aaronpk as a co-editor for PubSubHubbub / PubSub / whatever we name it