W3C

– DRAFT –
Social Web Incubator CG

23 January 2021

Attendees

Present
bobwyman_, Cristina, erik, FLOX_Advocate, jarofgreen, mathew, nightpool[m], paul, rhiaro, sandro, sl007
Regrets
-
Chair
nightpool[m]
Scribe
rhiaro

Meeting minutes

Today's meeting in 12 mins, info: https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305

Note: the BBB is down for unknown reasons, we'll fall back to jitsi https://meet.jit.si/ScatteredConsequencesActRegardless

BBB is back up everyone!

Annette: I've been working with the credibility group

<sl007> join audio at https://bbb.w3c.social/b/rhi-vp1-fv6-vn7

Annette: I'll take responsibility for pushing the email chain on the credweb public email
… with the idea of trying to develop some sort of spec for social media post vetting

bobwyman_: also from credweb

Cristina: this is the third meeting I've attended, and I'm still trying to better understand the activities happening around this place. I know about AP from the conferences
… interested in general in decentralisation

sl007: and Cristina gave 2 talks at the conferences

hans: Free software person, want to see what is going on

erik: also received an invite from sebastian, curious what's happening, first time

jarofgreen: I'm a software developer, open data around events and ActivityPub

mathew: hi, second meeting, brussels bubble, EU policy sphere, help institutions with online strategy, getting up to speed on fediverse

paul: interested in DIDs in the fediverse

sl007: doing redaktor, a content management system and tool for journalists on the fediverse
… actively seeking funding at the moment, and in respect to the topic of today we are investigating what can help with content moderation
… things we can use immediately like external tools
… or blockchain based solutions like agoric alpha
… and how the eunomia project can help

nightpool[m]: I'm a co-chair, been involved as a mastodon developer and behind the scenes with the CG
… my computer crashed as the meeting was starting...

New fediverse users

nightpool[m]: Specifically the moderation challenges brought on by the influx of right-wing users away from twitter in the past month or so

https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305
… there are a couple of interesting suggestions left on the socialhub ^
… There have been discussions of specs of different moderation tools
… the initial AP standard launched as a spec for how you communicate messages, and left a lot of further extension to implementations
… this is one of the things where we can talk about what people have come up with
… what challenges still need to be addressed, and what specs are helpful in the future

<jarofgreen> https://socialhub.activitypub.rocks/t/2021-01-23-socialcg-meeting-new-fediverse-users/1305

mathew: I'm new to this, but not new to the 'splinternet'
… when everyone was using blogs and forums
… before everything centralised
… was wondering how much the people involved in the fediverse debate now have the experience of the blogosphere 15 years ago, and the fears back then about the emergence of a fragmented space with echo chambers
… this was before the idea of filter bubbles
… trying to get up to speed about what ideas people have about trying to avoid the bad things which happened in the blogosphere and the splinternet 1.0
… can anybody share anything about what has been written about that today?

nightpool[m]: I was mostly involved in phpBB forums 15 years ago
… one thing is that it seems like we've been lurching between two extremes. Everyone had their own blogs, very fragmented, no interaction
… there are standards that address some of that, early on wordpress pingbacks
… with the centralised thing everybody is in the same social place
… it seems that what we're trying to do in the conversations I've heard is find a happy medium between those two places

mathew: that's what I'm hoping we do
… you have to be careful what you wish for
… let's not go into this blind to what happened in the past
… best of both worlds in the future, and not swing between extremes
… any proposals, so relevant to the subject of moderation and trust levels
… a couple of people from the credibility group, sounds intriguing

sl007: my direct proposals to the fediverse
… I have doubts about access control lists and blocking whole instances
… if you have a large instance of 1mil users and half are fascists but the other half are within the democratic spectrum, then it is very hard to block the whole instance in terms of democracy and freedom of speech
… this does not mean any political direction, it just means the boundaries of the democratic spectrum

<bobwyman_> You don't need algorithms to produce the effect of "filter bubbles." The same effect can be produced when people make individual decisions about the blogs they will follow, what email lists to join, etc. Filter bubbles are a problem whether or not you have algorithms.

sl007: my proposal is that the very diverse AP implementers who have now blocked gab for example are coming together to have another layer above the AP
… which is a governance layer, which might be on a blockchain or something
… my dream would be that, I know it is very hard to achieve, it begins with wanting election justice, where every human has only one vote, but apart from that I am proposing a governance model based on the free city of hamburg
… for maximum level of transparency and justice
… my idea is to have a blockchain-based token system where every fediverse user joining an instance gets a citizen right and can elect moderators
… and together with tools by eunomia for example we can have a thing like a trust layer where it's easier to react
… the basic assumption must be that we limit instance sizes
… an instance of 1mil users with moderators who are directly involved in the instance and not transparently elected is not acceptable

nightpool[m]: some background info here, when sebastian is talking about instances he is talking about a server or collection of servers that share software, mod team and database
… it is a common term in the fediverse today

<mathew> @sebastian, using liquid democracy?

nightpool[m]: it's not a term defined by a standard. It's one installation of a piece of software
… a lot of the current mod tools are based around working with individual user accounts and instance accounts
… on mastodon and pleroma those are the main tools that admins have to block content and restrict connections, mostly in a blocklist-type situation, restricting content from other domain names from being processed
… that's where we are today, there are other nuances
… every instance can be different as far as its moderation policy. Some are heavily moderated, some are light
… some are 5 people or one person
… some instances as big as mastodon.social where you have a group of 10 moderators
… if anyone has other questions about the state today, happy to answer those
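
[Scribe note: a minimal Python sketch of the domain-blocklist pattern nightpool describes. The policy table and function are hypothetical illustrations, not Mastodon's or Pleroma's actual code.]

    # Hypothetical per-domain moderation policy, keyed on the remote domain.
    from urllib.parse import urlparse

    # "suspend" drops all content; "silence" hides it from shared timelines.
    DOMAIN_POLICIES = {
        "spam.example": "suspend",
        "freeze-peach.example": "silence",
    }

    def policy_for(actor_id: str) -> str:
        """Return the admin-set policy for the actor's home domain."""
        domain = urlparse(actor_id).hostname or ""
        return DOMAIN_POLICIES.get(domain, "accept")

    assert policy_for("https://spam.example/users/eve") == "suspend"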

sandro: I didn't intro earlier, I chair the credweb cg and I helped with AP creation
… I'm working fulltime on credibility, connected with content moderation
… the instance-based approach with small instances is a good one, but I want to go in a slightly different direction, which is ignore the instance and instead let everything be relative to each user
… we have that with users able to block other users, doesn't scale well, but with additional tools like user controlled algorithmic blocking or up/downvoting of content
… I pick the kind of sources I want to determine the kind of stuff I see
… my intuition is that is the most powerful way to solve this problem
… maybe the small instance level is

<Zakim> nightpool[m], you wanted to talk about the benefits of instance-based moderation

nightpool[m]: one of the reasons why instance based moderation is regarded as powerful on mastodon
… is because instances are regarded as self organised communities
… when people come to join the mastodon network they come to join a specific instance

sandro: that's one of the reasons it's a non-starter, I'm not in one community, but in lots of different ones and go in and out

nightpool[m]: but when there's already that self organising principle there that's when it seems very powerful
… and shared moderation from the instance, based on the policy of the server itself

sandro_: you may need instance moderation for legal reasons, we may never be able to get rid of instance moderation

sl007: the reason to limit instance sizes is what nightpool[m] said: users new to the fediverse join based on topics they're interested in or because of their friends
… I want to avoid an instance like gab becoming as large as it is; it should stay at the same scale as local instances, like for a village or something
… I don't think what sandro described is the opposite
… I think making everyone relative in terms of governance is one thing
… another thing is that you can join based on your topics
… that is exactly why things like a decentralised identifier is so important
… at some point we have just one identity on the internet, but we can be part of many instances and many implementations with this identifier
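
[Scribe note: an illustrative DID document in the shape of W3C DID Core, showing one stable identifier pointing at accounts on several instances; the service "type" value here is invented.]

    # One portable identifier; the service endpoints list per-instance accounts.
    did_document = {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": "did:example:alice",
        "service": [
            {
                "id": "did:example:alice#home",
                "type": "ActivityPubActor",  # hypothetical service type
                "serviceEndpoint": "https://instance-a.example/users/alice",
            },
            {
                "id": "did:example:alice#photos",
                "type": "ActivityPubActor",  # hypothetical service type
                "serviceEndpoint": "https://instance-b.example/users/alice",
            },
        ],
    }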

<bobwyman_> I suggest that we should distinguish between 1) A user's desire or need to limit exposure to non-credible content and 2) An ability to moderate the content that is seen by one or more users.

nightpool[m]: one important thing is that the current situation, where the software you use is coupled to the server and the domain name, is a little bit, not exactly, what the AP spec provides for
… AP contemplates an authoritative server, but is agnostic to the content
… and very opinionated clients, so many clients for the same server
… decentralised identifiers have other benefits as well. What happens when the person who runs your server stops paying the bills?
… an issue in the days of forums, and an issue now with federated social

sl007: we want to have another session about this: generic servers and diverse clients
… that is important

Cristina: thinking that as we see the fediverse as a group of different communities with the core values of diversity, inclusion, freedom of expression
… the way that intuitively I see it is that the base of what brings these communities together is relationships
… where you develop some sort of rapport, and that should be based on core values that are shared
… if two communities don't share their values, we have a conflict and that's not okay
… I was wondering if it could be technologically feasible, thinking also about the blockchain idea sebastian mentioned, to define some sort of policy layer
… so when you as an admin are peering with another instance you are showing your set of values, and if that other instance believes that they share those values, that instance can peer with you
… in this way, when that instance is not following those values you can close the connection
… otherwise it's kind of impossible to envision a situation where you have decentralisation and you are also trying to centralise an entire way of doing things for all instances
… what you can do is not peer with an instance that doesn't share your values
… can this be automatic?
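
[Scribe note: a minimal sketch of the automatic value-based peering Cristina asks about; the value names and check are hypothetical.]

    # Each instance publishes the policy values it commits to; peer only
    # when every value we require is declared by the other side.
    OUR_VALUES = {"no-hate-speech", "no-harassment", "human-rights"}

    def may_peer(their_values: set) -> bool:
        """Peer only if the other instance declares every value we require."""
        return OUR_VALUES.issubset(their_values)

    assert may_peer({"no-hate-speech", "no-harassment", "human-rights", "cats"})
    assert not may_peer({"no-hate-speech"})  # missing values: close the connection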

nightpool[m]: thank you! on some of what you said with the tech details: the way the fediverse works now is an actor-to-actor connection
… while these can be thought of as peering between instances, they happen naturally as users follow other users
… we already have some of that, the rapport, that happens as users follow other people

<bobwyman_> There is a difference between filtering based on the speaker's identity and filtering based on the content of a specific speech act.

<sandro> bobwyman_, I'm not quite following your distinction. Is it about user-for-themself vs someone-else-protecting-users, or is it about credibility vs other aspects of content quality?

nightpool[m]: as users follow other people, they subscribe to their updates, and we can think of those follower connections as, say, two instances connected by 3 followers
… definitely there are other ways you can learn about a post: if someone boosts it, or if somebody sends you a piece of content out of nowhere, they can write a reply without your instance ever knowing anything about them
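
[Scribe note: for reference, the actor-to-actor edge nightpool describes is an ActivityStreams Follow activity delivered to the other actor's inbox, e.g.:]

    # A standard ActivityPub/AS2 Follow activity; instance "peering" emerges
    # from many of these edges rather than an explicit server handshake.
    follow = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": "https://instance-a.example/users/alice",
        "object": "https://instance-b.example/users/bob",
    }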

bobwyman_: trying to understand the focus of what you're trying to accomplish
… cristina was talking about filtering based on identity or history of individuals? essentially blocklists, arbitrarily interesting technology there
… another problem is not focussed on speakers but on what is said, on measuring the credibility or content of the messages themselves
… curious, is the focus in this group more on filtering the people, or is it on making statements about content, or both?

nightpool[m]: that's a good question, the group probably has varied opinions
… the work done currently is more about watching the types of software people implement; the moderation seems to be more based on filtering users because that's the pattern we have, looking at the types of moderation examples in the past
… we ban people from irc rooms, twitter bans people from its platform. If I'm on a discord server with people, a person is kicked out, not some of their messages

<nightpool[m]> Would someone mind linking the email thread in question?

<bobwyman_> Sandro, one view relies on users making their own choices, the other view delegates decision making to others. I prefer systems that allow users to craft their own "filters" rather than those that facilitate the ability of others (or software) to make decisions about what should be seen.

annette_g: I want to start out by circling back to what I was proposing on the email thread, which is coming from the point of view of seeing what happened with the US presidential race recently, where it took examples of multiple platforms deciding to block trump before they all did. there was a groundswell of decision before they decided they should do it
… the platform mods were probably holding back to see what the others would do. Feeling if they were the first to block they'd take a hit in terms of how attractive their platform is to their users
… how true those concerns are and how they should be weighted is a different question
… the dynamic i'm seeing is it helps to have some sort of an agreement
… it might make sense to develop a standardised approach to these things
… to have the right set of people, with expertise in sociology, psychology, politics, all the things that w3c doesn't necessarily have currently
… and get some sort of agreement between providers to say this is the minimum criteria that we're going to use to block somebody
… or to kick somebody off a platform
… aiming more at dealing with the most extreme behaviour and making it so it's an easy decision
… but different groups will have different values, so maybe the best approach is to define levels, saying maybe at a level 1 protection system you will block with this particular stimulus to do so, and at another level you have a higher bar that someone has to reach before you block them
… and it also occurs to me that defining these levels could be akin to what was suggested earlier of having different instances that have the same level of values
… those could speak to each other more readily
… it could be that we would want to define values as these different levels, and maybe allow more freedom, or if people from different levels are trying to connect then their posts are marked
… so users see something that gives a guarantee of what level of enforcement they're seeing
… and those running instances can have assurance that what they're doing is acceptable with the communities they're working with
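
[Scribe note: a rough sketch of annette_g's levels idea, assuming an ordered scale that instances could declare; the level names and marking rule are invented.]

    from enum import IntEnum

    class EnforcementLevel(IntEnum):
        MINIMAL = 1   # block only the most extreme, clear-cut behaviour
        STANDARD = 2  # block per an agreed baseline of criteria
        STRICT = 3    # lower bar to block; heavily moderated space

    def mark_cross_level(origin: EnforcementLevel, reader: EnforcementLevel) -> bool:
        """Mark posts travelling from a laxer instance into a stricter one."""
        return origin < reader

    assert mark_cross_level(EnforcementLevel.MINIMAL, EnforcementLevel.STRICT)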

nightpool[m]: one thing to note about twitter and facebook: both were watching each other act, and facebook made the first move and twitter had to do it

<bobwyman_> You may detest my political views or "values," but still find listening to me to be useful if we are talking about software design, not social issues.

nightpool[m]: those platforms already have very strict guidelines but they are interpreted very subjectively
… a struggle is it's always going to be up to a person to subjectively implement those levels
… it's one of those things where, as objective as it can seem, it's always twisted for political or commercial ends

sl007: I would like to give Cristina and Annette full acknowledgement first, I speak for my own activitypub software redaktor
… I'd like to translate this into the fediverse
… imagine we establish a common set of linked data code of conduct principles or terms of service principles
… the minimum set would be the human rights declarations
… the major set might be established values from other orgs, like the associated press or NGOs with codes of conducts
… imagine you come from a country like romania and [??] becomes a dictator in the country; to join the fediverse to have a voice there he would have to agree to the human rights at least
… that would be my solution, based on a linked data vocabulary for code of conduct and terms of service
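
[Scribe note: a hypothetical shape for sl007's linked-data code-of-conduct proposal; the vocabulary URL and term names are invented for illustration.]

    # An instance publishes, as JSON-LD, the principle sets it agrees to.
    instance_profile = {
        "@context": "https://example.org/ns/coc/v1",  # hypothetical vocabulary
        "id": "https://instance-a.example/about/policy",
        "agreesTo": [
            "coc:UniversalDeclarationOfHumanRights",  # the proposed minimum set
            "coc:PressCodeOfConduct",                 # an optional larger set
        ],
    }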

Cristina: from the policy perspective, the way I mentioned it was a policy in terms of moderation
… social web incubator can define best practices
… if we want to go into human rights, we need to discuss the topic and define it further, but what we can do, I believe, is define a set of best practices for moderating your own instance
… I'm sure that small instances might be very interested, maybe they don't know how to do this kind of policy work for their own instance
… Regarding the policy aspect more from the point of view of how instances are peering with each other
… would be great to make it such that this is automatic
… defining a set of values which are agreed on or not agreed on at the higher level, in terms of whether this instance will peer with that instance, and if not, they do not peer in that situation
… A small remark about individuals - I would in general be a bit reluctant to promote censorship at their level, and would let them be free to do whatever they want as long as they agree to a certain set of conduct on the platform

mathew: coming back to bob about focussing on the person or the content
… we have this legacy of focussing on the person or the account
… interesting to look at it the other way
… defining certain levels; the trouble with levels is, as with any standard, twitter had standards and ignored them when it came to Trump until they had no choice

<nightpool[m]> (That was Annette, I believe, who brought up the subject of levels)

mathew: there's an interpretation of the standard, does this content meet our standards or not; two people can have a different answer for the same content
… the idea of having servers that set a certain level of tolerance?
… then people on the server presumably respect that level. If they see content that breaks that level of tolerance they can register a vote on it
… and the collective votes of the users on the instance inform the algorithm on the instance towards whether the content does respect the server's stated level, and that affects whether it can travel to other instances
… if content comes from an instance that says we are at level 3, but it doesn't meet that, that's a problem
… is anybody talking about using liquid democracy? Most people do not have time to set filters and play with settings, but might trust someone else
… other people can adopt someone else's model, that's a form of liquid democracy

nightpool[m]: when mastodon first formed there were shared blocklists and chained blocklists, especially in the aftermath of the blocktogether plugin; the initial queer and lgbtq communities who formed mastodon were on the receiving end of a lot of blocking due to conflicting with bigger social media personalities
… there's an article on medium about why Wil Wheaton has me blocked
… historically that is why there has been resistance to that liquid democracy subject, when things get out of hand there are a lot of failure modes

FLOX_Advocate: annette got me thinking about instance filtering as moderator weighting and keywording
… mods could block if something comes in that the instance says we don't like
… there's a gardening instance; someone is posting non-gardening stuff, so the mods of that instance would block it
… or they're posting things about growing weeds and they don't like that as a subject
… but at the client level, I could choose to apply those filters completely or partially
… maybe on weekends I like to read about weeds so I'd allow those things to come through anyway
… for me as a user I'd love a client that supports procmail on the backend so I can do the same as I do with my email
… On a different topic, applying it: a friend of mine has refused to join the fediverse due to the inability to block all content from a stalker, no matter how that comes in
… if it's boosted by someone you trust and someone is commenting on it, that content can still show up; that is a problem
… I understand where that person is coming from, but it might be that I don't know enough to explain how the tools work
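
[Scribe note: a small sketch of the client-side partial filtering FLOX_Advocate describes, procmail-style; the rule shape and weekend override are invented.]

    from datetime import date

    # Filters published by the instance's moderators.
    INSTANCE_FILTERS = [
        {"keyword": "weeds", "action": "hide"},
        {"keyword": "off-topic", "action": "hide"},
    ]

    def visible(post_text: str, today: date) -> bool:
        """Apply instance filters, but let 'weeds' through on weekends."""
        for rule in INSTANCE_FILTERS:
            if rule["keyword"] in post_text:
                if rule["keyword"] == "weeds" and today.weekday() >= 5:
                    continue  # user override: allow this topic on weekends
                return False
        return True

    assert visible("growing weeds", date(2021, 1, 23))  # a Saturday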

<sandro> How can anyone block all content from a stalker on any platform that doesn't have mandated one-identity-per-human?

nightpool[m]: totally, a valuable perspective. For mastodon specifically, in all of the areas you mentioned we still block the user to prevent the content, but possibly there's a bug, we're a small team

sl007: about what mathew said about liquid democracy, we are investigating liquid feedback and such tools to use
… my only other level was that the moderators should be elected
… to have a better level of transparency

https://liquidfeedback.org

nightpool[m]: there is a spot on our agenda for discussing the next meeting

Next meeting

sl007: I would propose we do the session about .. we had a lot of policy meetings, we should do the generic servers and diverse clients problem together with pleroma, mastodon, kaniini who was interested, immer.space
… immerspace is an awesome project
… that is a technical thing where we can speak technical again

nightpool[m]: that's a great topic, I can't make friday

rhiaro: we can get some demos lined up for two weeks today

nightpool[m]: any further short statements on the main topic?

<jarofgreen> https://socialhub.activitypub.rocks

<nightpool[m]> https://socialhub.activitypub.rocks/

<bobwyman_> sandro, would it also be necessary to have one-human-per-identity?

<sl007> see you here or on https://socialhub.activitypub.rocks

<gekk> hi all

<gekk> the meeting's here or in another platform?

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).


Maybe present: Annette, annette_g, hans, sandro_