14:44:03 RRSAgent has joined #social 14:44:03 logging to https://www.w3.org/2021/01/23-social-irc 14:48:33 Today's meeting in 12 mins, info: https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305 14:52:16 bobwyman has joined #social 14:54:11 Zakim, please start meeting 14:54:11 RRSAgent, make logs Public 14:54:12 please title this meeting ("meeting: ..."), rhiaro 14:54:26 Meeting: Social Web Incubator CG 14:54:41 bobwyman_ has joined #social 14:54:46 chair: nightpool[m] 14:54:51 scribenick: rhiaro 14:56:06 Note: the BBB is down for unknown reasons, we'll fall back to jitsi https://meet.jit.si/ScatteredConsequencesActRegardless 14:58:01 Erik has joined #social 14:59:15 present+ 14:59:22 present+ sl007 14:59:25 present+ bobwyman_ 15:01:45 mathew has joined #social 15:01:48 Cristina has joined #social 15:01:48 BBB is back up everyone! 15:03:02 present+ 15:04:13 Erik_ has joined #social 15:04:40 present+ jarofgreen 15:04:50 present+ Cristina 15:04:56 present+ mathew 15:05:02 present+ 15:05:11 present+ erik 15:06:19 present+ sandro 15:07:30 Annette: I've been working with the credibility group 15:07:38 join audio at https://bbb.w3c.social/b/rhi-vp1-fv6-vn7 15:07:45 ... I'll take responsibility for pushing the email chain on the credweb public email 15:07:53 ... with the idea of trying to develop some sort of spec for social media post vetting 15:08:30 bobwyman_: also from credweb 15:08:31 annette_g has joined #social 15:09:11 Cristina: this is the third meeting I've attended, and I'm still trying to understand better what activities are happening around this place. I know about AP from the conferences 15:09:22 ... 
interested in general in decentralisation 15:09:34 sl007: and cristina had 2 talks at the conferences 15:10:03 hans: Free software person, want to see what is going on 15:10:23 erik: also received an invite from sebastian, curious what's happening, first time 15:10:52 jarofgreen: I'm a software developer, open data around events and activitypub 15:11:25 mathew: hi, second meeting, brussels bubble, EU policy sphere, help institutions with online strategy, getting up to speed on fediverse 15:12:25 paul: interested in DIDs in the fediverse 15:12:52 sl007: doing redaktor, a content management system and tool for journalists on the fediverse 15:13:07 ... actively seeking funding at the moment, and in respect to the topic of today we are investigating what can help with content moderation 15:13:15 ... things we can use immediately like external tools 15:13:25 ... or blockchain based solutions like agoric alpha 15:13:33 ... and how the eunomia project can help 15:14:03 nightpool[m]: I'm a cochair, been involved as a mastodon developer and behind the scenes with the CG 15:14:08 FLOX_Advocate has joined #social 15:14:12 ... my computer crashed as the meeting was starting... 15:14:22 Topic: New fediverse users 15:14:40 nightpool[m]: Specifically the moderation challenges brought on by the influx of right wing users away from twitter in the past month or so 15:15:04 https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305 15:15:04 https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305 15:15:38 ... there are a couple of interesting suggestions left on the socialhub ^ 15:15:45 ... There have been discussions of specs of different moderation tools 15:15:58 ... the initial AP standard launched as a spec for how you communicate messages and left a lot of further extension to implementations 15:16:05 ... this is one of the things where we can talk about what people have come up with 15:16:13 ... 
what challenges still need to be addressed, and what specs are helpful in the future 15:16:19 q? 15:16:20 q? 15:17:37 mathew: I'm new to this, but not new to the 'splinternet' 15:17:41 ... when everyone was using blogs and forums 15:17:46 ... before everything centralised 15:17:52 q+ 15:18:10 ... was wondering how much people who were involved in the fediverse debate now have the experience of the blogosphere 15 years ago and the fears back then about the emergence of a fragmented space with echo chambers 15:18:17 ... this was before the idea of filter bubbles 15:18:38 ... trying to get up to speed about what ideas people have about trying to avoid the bad things which happened in the blogosphere and the splinternet 1.0 15:18:50 ... can anybody share anything about what has been written about that today? 15:19:07 nightpool[m]: I was mostly involved in phpbb forums 15 years ago 15:19:22 ... one thing is that it seems like we've been lurching between two extremes. Everyone had their own blogs, very fragmented, no interaction 15:19:34 ... there are standards that address some of that, early on wordpress pingbacks 15:19:45 ... with the centralised thing everybody is in the same social place 15:19:57 ... it seems that what we're trying to do in the conversations I've heard is find a happy medium between those two places 15:20:01 mathew: that's what I'm hoping we do 15:20:10 ... you have to be careful what you wish for 15:20:15 ... let's not go into this blind to what happened in the past 15:20:22 ... best of both worlds in the future, and not swing between extremes 15:20:31 ... any proposals, so relevant to the subject of moderation and trust levels 15:20:39 ... a couple of people from the credibility group, sounds intriguing 15:20:40 q? 15:20:49 ack sl 15:20:55 sl007: my direct proposals to the fediverse 15:21:11 ... I doubt access control lists and blocking whole instances 15:21:33 ... 
if you have a large instance of 1mil users and half are fascists but the other half are within the democratic spectrum, then it is very hard to block the whole instance in terms of democracy and freedom of speech 15:21:45 ... this does not mean any political direction, it just means the boundaries of the democratic spectrum 15:22:01 You don't need algorithms to produce the effect of "filter bubbles." The same effect can be produced when people make individual decisions about the blogs they will follow, what email lists to join, etc. Filter bubbles are a problem whether or not you have algorithms. 15:22:02 ... my proposal is that the very diverse AP implementers who have now blocked gab for example are coming together to have another layer above the AP 15:22:09 ... which is a governance layer, which might be on a blockchain or something 15:22:50 ... my dream would be that, I know it is very hard to achieve, it begins with wanting election justice, where every human has only one vote, but apart from that I am proposing a governance model based on the free city of hamburg 15:22:55 ... for maximum level of transparency and justice 15:23:11 ... my idea is to have a blockchain based token system where every fediverse user joining an instance gets a citizen right and can elect moderators 15:23:30 ... and together with tools by eunomia for example we can have a thing like a trust layer where it's easier to react 15:23:40 ... the basic assumption must be that we limit instance sizes 15:23:53 ... an instance of 1mil users with moderators who are directly involved in the instance and not transparently elected is not acceptable 15:23:56 q? 15:24:22 nightpool[m]: some background info here, when sebastian is talking about instances he is talking about a server or collection of servers that share software, mod team and database 15:24:28 ... it is a common term in the fediverse today 15:24:34 @sebastian, using liquid democracy? 15:24:40 ... it's not a term defined by a standard. 
It's one installation of a piece of software 15:24:50 ... a lot of the current mod tools are based around working with individual user accounts and instance accounts 15:25:11 sandro_ has joined #social 15:25:16 ... on mastodon and pleroma those are the main tools that admins have to block content, restrict connection, mostly in a blocklist type situation, restrict content from being processed from other domain names 15:25:23 ... that's where we are today, there are other nuances 15:25:32 ... every instance can be different as far as its moderation policy. Some are heavily moderated, some are light 15:25:36 ... some are 5 people or one person 15:25:49 ... some instances as big as mastodon.social where you have a group of 10 moderators 15:25:56 q+ 15:26:05 ... if anyone has other questions about the state today, happy to answer those 15:26:37 sandro: I didn't intro earlier, I chair the credweb cg and I helped with AP creation 15:26:52 ... I'm working fulltime on credibility, connected with content moderation 15:27:12 ... the instance based approach with small instances is a good approach but I want to go in a slightly different direction which is ignore the instance and instead let everything be relative to each user 15:27:32 ... we have that with users able to block other users, doesn't scale well, but with additional tools like user controlled algorithmic blocking or up/downvoting of content 15:27:36 q+ to talk about the benefits of instance-based moderation 15:27:38 ... I pick the kind of sources I want to determine the kind of stuff I see 15:27:47 ... my intuition is that is the most powerful way to solve this problem 15:27:47 q+ 15:27:52 ... maybe the small instance level is 15:27:54 ack sandro_ 15:27:56 ack nightpool[m] 15:27:56 nightpool[m], you wanted to talk about the benefits of instance-based moderation 15:28:11 nightpool[m]: one of the reasons why instance based moderation is regarded as powerful on mastodon 15:28:19 ... 
is because instances are regarded as self organised communities 15:28:28 ... when people come to join the mastodon network they come to join a specific instance 15:28:42 sandro: that's one of the reasons it's a non starter, I'm not in one community, but in lots of different ones and go in and out 15:28:53 nightpool[m]: but when there's already that self organising principle there that's when it seems very powerful 15:29:01 ... and shared moderation from the instance, based on the policy of the server itself 15:29:15 sandro_: you may need instance moderation for legal reasons, we may never be able to get rid of instance moderation 15:29:20 ack sl007 15:29:22 ack sl 15:29:40 sl007: the reason to limit instance sizes was what nightpool[m] said, users new to the fediverse join based on topics interested in or because of their friends 15:30:01 ... I want to avoid that an instance like gab becomes as large as it is, but stays at the same level like local instances like for a village or something 15:30:05 ... I don't think what sandro described is the opposite 15:30:12 ... I think making everyone relative in terms of governance is one thing 15:30:20 ... another thing is that you can join based on your topics 15:30:27 ... that is exactly why things like a decentralised identifier are so important 15:30:42 ... one day we have just one identity on the internet but we can be part of many instances and many implementations with this identifier 15:30:49 q+ 15:30:51 I suggest that we should distinguish between 1) A user's desire or need to limit exposure to non-credible content and 2) An ability to moderate the content that is seen by one or more users. 15:31:13 nightpool[m]: one important thing is this current situation where the software you use is coupled to the server and the domain name, which is not exactly what the AP spec provides for 15:31:24 ... AP contemplates an authoritative server, but is agnostic to the content 15:31:31 ... 
and very opinionated clients, so many clients for the same server 15:31:44 ... decentralised identifiers have other benefits as well. What happens when the person who runs your server stops paying the bills? 15:31:51 ... an issue in the days of forums, and an issue now with federated social 15:32:07 sl007: we want to have another session about this generic servers and diverse clients 15:32:15 ... that is important 15:32:21 ack Cristina 15:32:55 Cristina: thinking that as we see the fediverse as a group of different communities with the core values of diversity, inclusion, freedom of expression 15:33:10 ... the way that intuitively I see it is that the base of what brings together these communities is relationships 15:33:30 ... when you're developing some sort of rapport and that should be based on core values that are shared 15:33:38 ... if two communities don't share their values, we have a conflict and that's not okay 15:33:48 sandro has joined #social 15:33:55 ... I was wondering if it could be technologically feasible, thinking about also the blockchain idea sebastian mentioned, to define some sort of policy layer 15:34:16 ... so when you as an admin were peering with another instance you are showing your set of values, and if that other instance believes that they are sharing those values, that instance can peer with you 15:34:24 ... in this way, when that instance is not following those values you can close the connection 15:34:35 q+ 15:34:38 q+ 15:34:47 ... otherwise it's kind of impossible to envision a situation when you have decentralisation and you are also trying to centralise an entire way of doing things for all instances 15:34:54 ... what you can do is not peer with an instance that doesn't share your values 15:34:57 ... can this be automatic? 15:35:17 nightpool[m]: thank you! some of what you said with the tech details, the way the fediverse works now is an actor to actor connection 15:35:28 ... 
while these can be thought of as peering between instances, they happen naturally as users follow other users 15:35:39 ... we already have some of that, the rapport, that happens as users follow other people 15:35:42 There is a difference between filtering based on the speaker's identity and filtering based on the content of a specific speech act. 15:35:45 q+ 15:35:53 bobwyman_, I'm not quite following your distinction. Is it about user-for-themself vs someone-else-protecting-users, or is it about credibility vs other aspects of content quality? 15:36:05 ... as users follow other people, they subscribe to their updates and as those updates come up we can think of those follower connections as being two instances connected by 3 followers 15:36:07 q+ 15:36:30 ... definitely there are other ways you can learn about a post, if someone boosts it or if somebody can send you a piece of content out of nowhere, they can write a reply without your instance ever knowing anything about them 15:36:42 q+ 15:36:45 ack bobwyman_ 15:36:55 bobwyman_: trying to understand the focus of what you're trying to accomplish 15:37:15 ... cristina was talking about filtering based on identity or history of individuals? essentially blocklists, arbitrarily interesting technology there 15:37:32 ... another problem which is not focussed on speakers but on what is said, on measuring the credibility or content of the messages themselves 15:37:47 ... curious, is the focus in this group more on filtering the people or is it on making statements about content, or both? 15:37:56 nightpool[m]: that's a good question, the group probably has varied opinions 15:38:22 ... the work done currently is more about watching the types of software people implement, the moderation seems to be more based on filtering users because that's the pattern we have looking at the types of moderation examples in the past 15:38:42 ... we ban people from irc rooms, twitter bans people from its platform. 
If I'm on a discord server with people, a person is kicked out, not some of their messages 15:38:53 ack annette_g 15:39:26 Would someone mind linking the email thread in question? 15:39:31 Sandro, one view relies on users making their own choices, the other view delegates decision making to others. I prefer systems that allow users to craft their own "filters" rather than those that facilitate the ability of others (or software) to make decisions about what should be seen. 15:39:42 annette_g: I want to start out by circling back to what I was proposing on the email thread, which is coming from the point of view of seeing what happened with the US presidential race recently, where it took multiple platforms deciding to block trump before they all did. There was a groundswell of decision before they decided they should do it 15:40:00 ... the platform mods were probably holding back to see what the others would do. Feeling if they were the first to block they'd take a hit in terms of how attractive their platform is to their users 15:40:07 ... how true those concerns are and how they should be weighted is a different question 15:40:15 ... the dynamic I'm seeing is it helps to have some sort of an agreement 15:40:23 ... it might make sense to develop a standardised approach to these things 15:40:43 ... to have the right set of people, with expertise in sociology, psychology, politics, all the things that w3c doesn't necessarily have currently 15:40:57 ... and get some sort of agreement between providers to say this is the minimum criteria that we're going to use to block somebody 15:41:02 ... or to kick somebody off a platform 15:41:12 ... aiming more at dealing with the most extreme behaviour and making it so it's an easy decision 15:41:45 ... 
but different groups will have different values, so maybe the best approach is to define levels, saying maybe at a level 1 protection system you will block with this particular stimulus to do so and at another level you have a higher bar that someone has to reach before you block them 15:42:04 ... and it also occurs to me that defining these levels could be akin to what was suggested earlier of having different instances that have the same level of values 15:42:08 ... those could speak to each other more readily 15:42:35 ... it could be that we would want to define values as these different levels and allow maybe more freedom, or if people from different levels are trying to connect then their posts are marked 15:42:45 ... so users see something that gives a guarantee of what level of enforcement they're seeing 15:42:51 q+ 15:42:56 ... and those running instances can have assurance that what they're doing is acceptable with the communities they're working with 15:43:25 nightpool[m]: one thing to note is about twitter and facebook, both were watching each other act, and facebook took the first move and twitter had to do it 15:43:30 You may detest my political views or "values," but still find listening to me to be useful if we are talking about software design not social issues. 15:43:34 ... those platforms already have very strict guidelines but they are interpreted very subjectively 15:43:44 ... a struggle is it's always going to be up to a person to subjectively implement those levels 15:44:02 ... it's one of those things that seems like as objective as it can be, always twisted for political or commercial ends 15:44:08 ack sl 15:44:25 sl007: I would like to give Cristina and Annette full acknowledgement first, I speak for my own activitypub software redaktor 15:44:31 ... I'd like to translate into the fediverse 15:44:45 ... imagine we establish a common set of linked data code of conduct principles or terms of service principles 15:44:52 ... 
the minimum set would be the human rights declarations 15:45:06 ... the major set might be established values from other orgs, like the associated press or NGOs with codes of conduct 15:45:30 ... imagine you come from a country like romania and [??] becomes a dictator in the country, to join the fediverse to have a voice there he would have to agree to the human rights at least 15:45:42 ... that would be my solution based on a linked data vocabulary for code of conduct and terms of service 15:45:51 ack Cristina 15:46:18 Cristina: from the policy perspective, the way I mentioned it was a policy in terms of moderation 15:46:25 ... the social web incubator can define best practices 15:46:43 ... if we want to go into human rights, we need to discuss the topic and define it further, but what we can do I believe is define a set of best practices for moderating your own instance 15:46:56 ... I'm sure that small instances might be very interested, maybe they don't know how to do this kind of policy work for their own instance 15:47:07 ... Regarding the policy aspect more from the point of view of how instances are peering with each other 15:47:15 ... would be great to make it such that this is automatic 15:47:40 ... defining a set of values which are agreed on or not agreed on at the higher level in terms of this instance will peer with this instance and if they do not peer in that situation 15:48:04 ... A small remark about individuals - I would be in general a bit reluctant to promote censorship at their level, and let them free to do whatever they want as long as they agree to a certain set of conduct on the platform 15:48:10 ack mathew 15:48:23 mathew: coming back to bob about focussing on the person or the content 15:48:29 ... we have this legacy of focussing on the person or the account 15:48:36 ... interesting to look at it the other way 15:48:51 ... 
defining certain levels, the trouble with levels is as with any standard, twitter had standards and ignored them when it came to Trump until they had no choice 15:49:03 (That was Annette, I believe, who brought up the subject of levels) 15:49:06 ... there's an interpretation of the standard, does this content meet our standards or not, two people can have a different answer for the same content 15:49:13 ... the idea of having servers that set a certain level of tolerance? 15:49:29 ... then people on the server presumably respect that level. If they see content that breaks that level of tolerance they can register a vote on it 15:49:52 ... and the collective votes of the users on the instance inform the algorithm on the instance towards whether the content does respect the server's stated level, and that affects whether it can travel to other instances 15:50:00 ... if content comes from an instance that says we are at level 3, but it doesn't, that's a problem 15:50:09 q+ 15:50:17 ... is anybody talking about using liquid democracy? Most people do not have time to set filters and play with settings, but might trust someone else 15:50:33 ... other people can adopt someone else's model, that's a form of liquid democracy 15:51:16 nightpool[m]: when mastodon first formed there were shared blocklists and chained blocklists, especially in the aftermath of the blocktogether plugin, the initial queer and lgbtq communities who formed mastodon were on the receiving end of a lot of blocking due to conflicting with bigger social media personalities 15:51:24 ... there's an article about why Wil Wheaton has me blocked on medium 15:51:40 ... historically that is why there has been resistance to that liquid democracy subject, when things get out of hand there are a lot of failure modes 15:51:42 ack FLOX_Advocate 15:51:53 FLOX_Advocate: annette got me thinking about instance filtering as moderator weighting and keywording 15:52:00 ... 
mods could block if something comes in that the instance says we don't like 15:52:10 ... there's a gardening instance, someone is posting non gardening stuff, the mods of that instance would block it 15:52:20 ... or they're posting things about growing weeds and they don't like that as a subject 15:52:27 ... but at the client level, I could choose to apply those filters completely or partially 15:52:37 ... maybe on weekends I like to read about weeds so I'd allow those things to come through anyway 15:52:50 ... for me as a user I'd love a client that supports procmail on the backend so I can do the same as I do with my email 15:53:10 ... On a different topic, applying it, a friend of mine has refused to join the fediverse due to the inability to block all content from a stalker, no matter how that comes in 15:53:23 ... if it's boosted by someone you trust and someone is commenting on it that content can still show up, that is a problem 15:53:37 ... I understand where that person is coming from but I don't know enough, it might be I don't know enough to explain how the tools work 15:53:56 How can anyone block all content from a stalker on any platform that doesn't have mandated one-identity-per-human? 15:53:59 nightpool[m]: totally, a valuable perspective. For mastodon specifically, in all of the areas you mentioned we still block the user to prevent the content, but possibly there's a bug, we're a small team 15:54:02 ack sl 15:54:15 sl007: about what mathew said about liquid democracy, we are investigating liquid feedback and such tools to use 15:54:28 ... my only other level was that the moderators should be elected 15:54:33 ... to have a better level of transparency 15:54:48 https://liquidfeedback.org 15:54:56 nightpool[m]: there is a spot on our agenda for discussing the next meeting 15:55:15 q+ 15:55:27 Topic: Next meeting 15:55:34 ack sl 15:55:58 sl007: I would propose we do the session about .. 
we had a lot of policy meetings, we should do the generic servers and diverse clients problem together with pleroma, mastodon, kaniini who was interested, immer.space 15:56:04 ... immerspace is an awesome project 15:56:16 ... that is a technical thing where we can speak technical again 15:56:27 nightpool[m]: that's a great topic, I can't make friday 15:57:26 rhiaro: we can get some demos lined up for two weeks today 15:57:33 nightpool[m]: any further short statements on the main topic? 15:57:38 q? 15:58:10 https://socialhub.activitypub.rocks 15:58:12 https://socialhub.activitypub.rocks/ 15:58:16 sandro, would it also be necessary to have one-human-per-identity? 15:58:37 see you here or on https://socialhub.activitypub.rocks 15:58:56 present+ FLOX_Advocate 15:59:41 hi all 15:59:55 the meeting's here or on another platform? 16:00:03 Zakim, end meeting 16:00:03 As of this point the attendees have been rhiaro, sl007, bobwyman_, nightpool[m], jarofgreen, Cristina, mathew, paul, erik, sandro, FLOX_Advocate 16:00:05 RRSAgent, please draft minutes 16:00:05 I have made the request to generate https://www.w3.org/2021/01/23-social-minutes.html Zakim 16:00:08 gekk, the meeting just finished I'm afraid 16:00:09 I am happy to have been of service, rhiaro; please remember to excuse RRSAgent. Goodbye 16:00:13 Zakim has left #social 16:00:22 RRSAgent, make logs public 16:03:44 what would you guys say were the most important topics/conclusions during this meeting? 16:04:30 gekk, you can read the notes here https://www.w3.org/2021/01/23-social-minutes.html 16:04:55 gekk, we discussed moderation policies, how to automate that, whether to filter people and/or messages, how to personalise moderation vs having a group of moderators, that sort of thing 16:05:10 ok interesting. 16:05:42 decentralised moderation based on social affiliations of sorts instead of algorithms 16:06:04 it's a hot topic now, I hope we'll find a standard. 
16:11:22 nightpool: can I ask a Mastodon question - I’m sending HTTP signed follow requests to Mastodon accounts but getting an HTTP error code back that indicates there was some access or permission problem. 16:11:40 nightpool: Is there some test instance that would give me more logs on the exact problem or a guide for people who are trying to get their software to federate? 16:12:32 jarofgreen: if you're getting an error message, it should be pretty detailed, but I can help you interpret it 16:13:39 if you're getting to the point where you're not getting an error message but things still aren't showing up, you can ask Claire (@Thibg@sitedethib.com), she's another core developer who's pretty easy to get ahold of and can spend some time debugging things with you 16:14:19 as far as guides go, I think https://blog.joinmastodon.org/2018/06/how-to-implement-a-basic-activitypub-server/ is still one of the best, we update it occasionally 16:15:36 Part 2 (about receiving posts and accepting follow requests) is here: https://blog.joinmastodon.org/2018/07/how-to-make-friends-and-verify-requests/ 16:17:44 (sorry, up to date link is this one: https://social.sitedethib.com/@Claire) 16:26:03 nightpool[m]: ok, got it - I realised I wasn’t checking for error messages properly. Now I can see: Mastodon requires the Digest header to be signed when doing a POST request 16:26:11 nightpool[m]: thanks for the prod 16:29:14 no problem! 16:34:59 Just wanted to add that this is a recent change — I didn't send this header and some instances started rejecting my requests even though nothing changed on my side 16:35:34 GregoryKlyushnikov: Ahhhhhhhh. 
I thought the docs didn’t mention that last time I checked them 16:35:48 yep, we added it in late 2020 as part of our hardening against insecure requests 16:36:31 After I added it on my side everything magically fixed itself 16:39:59 it's a pretty critical requirement — if you exclude it, anyone you send a payload to can impersonate you to anyone else within the next 5 minutes 16:42:01 Can they? The signature already includes the host header, so the receiver could only replay it to itself 🤔 16:52:06 nightpool[m]: Well, I now get a 202 response from Mastodon but if I follow a Mastodon account I don’t appear in “Followers” - if I installed my own Mastodon instance and looked in local logs would I see more? 16:52:54 nightpool[m]: Follows the other way seem to work fine. That might be enough for now - I’ll try sending notes later and seeing what happens 16:54:03 Sure, let me know. By "the other way", do you mean Mastodon accounts can follow you, but you can't follow Mastodon accounts? 16:54:17 I'll admit I haven't heard of that happening, maybe something about your Follow payload is a little wonky? 16:55:14 nightpool[m]: “Mastodon accounts can follow you” Yes 16:55:28 nightpool[m]: “Follow payload is a little wonky?” very possibly :-) 16:55:58 nightpool[m]: would that be in local logs if I installed my own Mastodon instance? 16:57:24 almost certainly 16:59:22 nightpool[m]: great, thanks, I’ll try that later 17:52:52 bobwyman has joined #social 18:24:32 h_ll_k_n has joined #social 18:25:03 [@bengo] The @bluesky Ecosystem Report feels like the 2021 edition of @rhiaro’s “Social Web Protocols” note from the w3c Social WG. 18:25:03 18:25:03 2013-2016: https://www.w3.org/TR/social-web-protocols/ 18:25:03 2021: https://matrix.org/_matrix/media/r0/download/twitter.modular.im/981b258141aa0b197804127cd2f7d298757bad20 (https://twitter.com/_/status/1353045950302810112) 18:52:06 cwebber2 has joined #social 20:44:18 Grishka has joined #social
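
[Editor's note] The Digest requirement discussed in the 16:26 exchange above can be sketched in a few lines. This is a minimal, hedged illustration of building the `Digest` header and a draft-cavage-style HTTP Signatures signing string; the host (`mastodon.example`), inbox path, and sample Follow body are invented for illustration, and the actual RSA-SHA256 signing step (which needs a crypto library and the actor's private key) is omitted.

```python
import base64
import hashlib

def digest_header(body: bytes) -> str:
    # Mastodon expects Digest: SHA-256=<base64 of the SHA-256 hash of the raw body>
    return "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")

def signing_string(method: str, path: str, host: str, date: str, digest: str) -> str:
    # Assemble the HTTP Signatures signing string. Including the digest
    # pseudo-header binds the signature to the request body, which is the
    # requirement nightpool's error message pointed at.
    return "\n".join([
        f"(request-target): {method.lower()} {path}",
        f"host: {host}",
        f"date: {date}",
        f"digest: {digest}",
    ])

# Hypothetical Follow payload and target, for illustration only
body = b'{"type": "Follow", "actor": "https://example.org/users/alice"}'
digest = digest_header(body)
to_sign = signing_string("POST", "/inbox", "mastodon.example",
                         "Sat, 23 Jan 2021 15:00:00 GMT", digest)
```

The resulting `to_sign` string would then be signed with the actor's private key and sent in the `Signature` header with `headers="(request-target) host date digest"`; omitting `digest` from the signed headers is what triggers the rejection described in the log above.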