Consent Communication on the Web
Facilitator: Lauren Lee McCarthy
Speakers: Evelyn Masso, Aarón Montoya-Moraga, and Xin Xin

How do we foster consentful communication on the web? Curated and moderated by Lauren Lee McCarthy of the p5.js project, this session will look at different tools and approaches. Panelists Evelyn Masso, Aarón Montoya-Moraga, and Xin Xin will share their experiences working to welcome newbies on GitHub, centering developer discussions around accessibility, considering the cultural implications of internationalization, and building open-source chat tools that think through security and memory in conversation. Brief presentations will be followed by open discussion among panelists and Q&A with the breakout participants.
It's so nice to be here and thank you all for joining us today.
I know it's sort of late for some of you, depending on where you are.
And I'm really excited about, this is the third panel in a series that we've been curating sort of on behalf of p5.js to share some of the work that our contributors and people working in adjacent communities have been doing, and hopefully open up a conversation with all of you.
This last panel here, or this current panel is all around consentful communication on the web.
And we were talking, and planning for this panel about how consentful is not actually a word, but it was one that we felt represented what all of us were really interested in when it comes to communication.
So I've been leading the p5.js project since 2013.
And working on that project, there was a lot of technical work that happened, and a lot of many different aspects of the project from design to software to documentation.
But, you know, I just passed on the role to someone else.
And so I've been doing a lot of reflecting around those seven years of work.
And one of the things that just kept coming up for me was thinking about how much the project really depended on communication and how essential that was and how much I learned about it through all the contributors over the years.
And so I'm really happy to introduce this panel, which features three of them who have taught me so much.
So, before I introduce them, I've realized I forgot to go over some basics here.
So if you have questions as the presentations are going, you can put them in the chat and then we'll have some time for open conversation afterwards.
You can also use your raise hand feature at that time and join the conversation directly.
It doesn't need to be all mediated through chat.
And then also to describe myself for anyone that can't see, I'm a mixed race, Chinese American woman with short dark hair, and I'm wearing a collared shirt and a kind of black and white sweater right now, and I'm sitting in my home.
And so this is also just a reminder to the panelists to please introduce, or describe yourself and describe your slides if there's visual content on them as we're going along to make it more accessible.
So I think we'll get right into it.
As I mentioned, there's three presentations, and then we'll go into a more open discussion.
And first up, we have Evelyn Masso, who is a person all the time, an engineering manager on weekdays and a poet on weekends.
She's been working in and around Open-source software since 2017.
Primarily on p5.js and GitHub Desktop.
And she likes to talk about mixed identities, queer poetry, and her recent love for running.
Originally from Ohio, she currently lives on unceded Tongva land near Los Angeles, and uses she and they pronouns.
Take over Evelyn.
Hi, I'm Evelyn.
I'll do a quick, short little description of myself and my surroundings.
I am a light-skinned, non-binary, femme-ish person with like wavy brown hair.
And I'm sitting in like a room with like white walls and a funky orange deer, a stuffed animal head on the wall behind me.
And I'm wearing like a collared shirt with stripes on it.
I'm going to talk a little bit about some things that I've been thinking about with regards to consent lately, especially in Open-source software.
As I mentioned in the intro, a lot of my experience that's relevant to this is, yeah, through working in open-source on GitHub Desktop, and then also on p5.js.
And that work has taken a couple of different forms over the years.
In some cases it's been writing workshops for how to get involved in contributing to Open-source, especially with p5.js.
Sometimes it's been writing code and patches, or like helping moderate a repo on GitHub, responding to issues, doing pull requests, writing code, documentation stuff, things all along that axis, I suppose.
But I think one thing I want to talk about today, especially in relation to this topic, is the role of boundaries and consent in a lot of those activities, but especially in communication on a platform like GitHub, or any other online, primarily text-based collaboration platform, especially when it's involved in the production of software of some sort.
And so I'm gonna start sharing a couple of slides now.
I wanted to start by sharing this quotation from Prentis Hemphill.
It says, boundaries are the distance at which I can love you and me simultaneously.
Prentis is a healer and a somatic practitioner.
I'm not sure where they're based, I forgot after putting this together, but I think this is a really nice way of crystallizing the importance of boundaries, beyond consent in the narrow sense of saying, yes, I agree to this thing or I don't, whether it's explicit or implicit.
The thing that I think is lost when we talk about communication on platforms like GitHub and open-source in general, is that there are two people on each side of most interactions, maybe more than two people.
But they can be roughly grouped into like maintainers and contributors, right?
Maintainers are the people that are running a project, putting in usually more time, and kind of responsible for stewarding it.
And contributors are the people that are bringing something to share.
And I put together a couple of examples here of different issues and responses to them.
But I think it's best summarized by saying that, as someone who's been a maintainer of an open-source project, there's definitely been, for myself, an interplay between what my boundaries are as a maintainer, and negotiating that with the boundaries of people that are contributing to the project or with other maintainers.
And so these two examples sort of go with that.
I'm actually going to skip through for the sake of time.
Sort of starting with how, if someone opens an issue or communication in a repo that I'm maintaining and there's kind of a bad faith tone to that issue.
I probably won't spend as much time on it because I want to prioritize my energy for people that are acting in good faith and for people that are marginalized or like not well-represented in open-source spaces.
So there's kind of a process I feel like I sometimes go through in responding to things, about how much energy am I really willing to spend on this kind of interaction with this person in this particular situation.
And so I'll skip ahead to just like a couple of little graphics I made.
One being a Venn diagram of maintainers' boundaries and contributors' boundaries, and how finding the overlap there, that space where you can both participate in a way that feels good for both of you, is a really important thing to negotiate.
And a lot of times in open-source, I think it can happen implicitly as well as explicitly.
And that there's like, yeah, I think I'll just summarize.
There's like a lot of different ways to like, communicate those different kinds of boundaries you have.
And framed in another sense, it would be that finding the meeting point between maintainers' and contributors' consent is where you find open-source collaboration.
This is an image of a silly internet meme with two very muscular people, like grasping hands.
And each arm is labeled, one with maintainer's consent and one with contributor's consent.
And where their hands meet is labeled open-source collaboration.
So I'll go ahead and stop there and invite whoever is next to talk about their work.
So next up we have Aarón Montoya-Moraga, who is a Chilean graduate student at the MIT Media Lab's Opera of the Future and Future Sketches research groups.
And Aarón uses synthesizers, machine learning, and computers for audiovisual art projects.
And Aarón is a contributor and open-source enthusiast, regularly teaching with and contributing to the project.
Go for it.
Hi, thank you.
So my description will be: I'm wearing glasses, I have short black hair, and a kind of greenish t-shirt.
Behind me there's a door and some computers.
I'll turn on my slides right away.
So today I wanted to share some thoughts about consent and i18n, which stands for Internationalization and also about media arts.
And I want to say that I'm currently a graduate student at the MIT Media Lab in media arts.
So I'm super lucky that all day my work is thinking about media arts and the tools and the practice.
I center my practice on machine learning nowadays, focusing on open-source and algorithmic justice and biases.
So with the Processing Foundation, I have worked on the internationalization project.
That's a mouthful.
In particular, I led the translation of the p5.js book.
There was an introduction to p5.js book that we released some years ago, and also the website.
The whole reference is in Spanish, and we do our best to translate to Spanish any new content that we upload.
And also it's been expanded to more languages.
I work on the Contributors Conference, which is intentionally named as Contributors Conference and not Developers Conference, to make it more inclusive.
And also I'm always on the repositories trying to moderate on the forums.
So that's where my experience with consent comes from.
And I wanted to share some of my personal interfaces and challenges.
Like, I am not a native English speaker.
I deal with having to think in English and all of the day in Spanish.
I've been out of my country for the most part, for the last five years.
So I feel like even my Chilean Spanish is kind of outdated.
There are all sorts of varieties of Spanish.
There are a lot of doubts there about whether I am saying the right things, which also has to do with consent in the sense of, I don't want to overstep people's boundaries and I want to actually be true to what I want to say.
So that's an important interface for me.
And also I use a ton of tools, and I am concerned about the ethics behind these tools: who is making them, am I exploiting other people with some of the tools I'm using, with tracking?
I'm also always thinking about the expectations from other people, about their own biases, my own biases, how I am fair with myself and with others and also about access and the data.
Like we're living in a world where we have both access and we produce a lot of data.
And those are very easy to go over by corporations, by people, by ourselves.
So these are some doubts I usually have about consent when dealing with open public forums such as GitHub, or working in the open and in open-source in my work.
I always deal with like, am I doing enough or too much?
Am I being too extra to somebody in the internet?
Can they opt out of what I'm doing?
Am I annoying people with notifications?
So it's really important for me to have an opt out.
Am I using the right tone and words?
Am I being too condescending?
Am I being too nice?
Am I being encouraging?
And also, right now I'm talking in English and I use it on a daily basis.
And these are some aspects that we talk about in i18n: is it a way of colonizing, to write in English?
Is it a transparent way of communicating?
We try to reflect on the side effects of everything we do.
And in order to not be overwhelmed and not to be paralyzed with fear and anxiety, and to actually do some work, there's some guiding documents that I wanted to share today that are really helpful for me or my practice.
I'm really proud and happy about the community statement of the p5.js project.
The community is what actually led me to start working there on the first place.
We also uploaded some contributor docs for people that want to dive deeper.
And these are always being updated.
Aside from the p5.js and Processing realm, I'm really excited about the Berlin Code of Conduct and also the Chatham House Rule, which I encourage people to look up.
And in terms of my own practice, I would say that, as a very rough version of how I consent, this is my minimum.
I add licenses and disclaimers to everything I do.
I say, this is what I did, this is the license.
If you run the software that I wrote, these are the guarantees.
I also always try to add a context.
I say, it was me.
It was this date.
'Cause I know that, for example, it's so easy to cancel somebody for their past, but everything, all the work, comes from a context.
So I try to give it like this: it was me on this date.
That's very particular for me in 2020.
Like I feel like most probably, I'm going to come out of the quarantine.
I'm gonna say like, this idea was really dumb.
Like, oh wait, I was isolated in my house in 2020.
So like, it's really important to have a context.
Also I use open-source in the sense that I consent: I read the licenses of the tools I use.
Everything I do I upload in tiny increments.
I open my process to the world.
I try to pick the licenses right so that nobody gets conflicted with using what I do.
That's been important for me.
I mean, in particular, that's why I brought it in parentheses here at the end.
That has led me to, nowadays, working a lot with microcontrollers, because they have no backdoors.
Even if I do self-surveillance on myself and I have microphones in my room and some sensors, I know that there's no backdoor for anyone to work around my consent.
And that is a very tiny computational experience that is slow.
It's a microcontroller, but it has a different flavor that I'm exploring this current year, that I'm really excited about.
That's why I wanted to share it today.
And as a final thought, I wanted to say that, these are my rules of thumb.
That when I'm in doubt about whether I'm acting consentfully or doing the right thing, I try to do the research.
I try to look up what I understand.
I ask people, if I cannot find the research.
I try to listen to them, read them.
I mostly try to slow down.
I find that when I'm in doubt, it's really better to slow down, and also practice self-care so I can have a clear head space that lets me, as Evelyn also pointed out, assume good faith.
And that's also a way I operate.
If I ever acted in a way that somebody doesn't like, at least I want to be given the benefit of the doubt that I was acting in good faith.
And those are my strategies for being consentful and trying to work in the open and in organizations like this.
Thank you so much.
I love the kind of guidelines or these grounding points that both you and Evelyn have laid out.
I have some more specific questions too I want to dig into.
But I want to go through each of you first and then we'll dive a little deeper.
So last, I'm excited to introduce Xin Xin who is an interdisciplinary artist and community organizer, working at the intersection of technology, labor and identity.
Xin co-founded voidLab, an LA-based intersectional feminist collective dedicated to women, trans, and queer folks.
They were the Director and lead Organizer for Processing Community Day in 2019, a worldwide initiative, celebrating art code and diversity.
And they currently serve on the advisory board for the Processing Foundation.
Their work has been exhibited and screened at Ars Electronica, DIS, Gene Siskel Film Center, Tiger Strikes Asteroid and Machine Project.
And they received their M.F.A. from UCLA Design Media Arts and teach at Parsons School of Design as an Assistant Professor of Interaction and Media Design.
Hi, I am Xin.
Thank you for the introduction and thank you for the invitation.
I just feel so honored to be here and be with such a wonderful group of people on this panel.
I think I would start with a visual descriptor of myself.
I am a non-binary Han Taiwanese descendant with short asymmetrical haircut.
There are a few strands of white hair embedded in my black hair that are probably hard to see through the video chat.
I have a diamond-shape face, light olive skin, wearing a black and white striped shirt.
And I am using a virtual background.
It is a white background with a pattern of black Sharpie strokes overlapping it.
So, I'm going to share my screen right now.
I'm also going to share the link to my slideshow.
There are a couple embedded hyperlinks in here, so I just want to make sure that you have access to this.
So today I am going to talk about a project I have been working on since July, called Togethernet.
Before talking about the project, before getting into it, I first want to talk about what brought me to this project, what the background of it was.
And basically I'm gonna start with the two questions that I had basically around the time of when quarantine started.
And the first question I had was, what would social media look like if it prioritized user intentions behind every instance of online communication?
In a way this question has been sort of cooking and brewing in my mind for a very long time.
And for some reason, the quarantine really allowed me to slow down.
And very similar to what has been mentioned already, this like slowing down process actually had helped me to acknowledge, you know, the consent or the lack of consent in the relationship I've had with social media.
And the second question I had in May was like, what are ways to transform digital rights policies into an embodied practice?
So, in a way, when I say embodied practice in a UI interface context, I'm thinking about a lot of the things that some of us might inherently know how to do, or what to notice, just by growing up or living through a certain era of digital practices.
So for instance, being able to recognize what's a spam mail and what isn't, and being able to recognize different kinds of dark patterns that are embedded in the user interface design.
I wonder, what is the opposite way of thinking about this?
Like if we were to transform digital rights policies, such as the right to be forgotten or the right to erasure, into embodied user interface design, what would that look like?
So I do not come in as an expert of consent by any means.
I am very much a baby, I'm learning all the time.
And I have definitely made mistakes in the past in terms of crossing boundaries.
And I might very likely mess up again because it is an ongoing practice.
That is where I am right now in terms of how I understand this.
So here are some readings that I have been looking into.
I would very much encourage you, if you're interested in diving in deeper into this topic to check out some of these readings.
The first inspiration is the "Consentful Tech Zine" by Una Lee and Dann Toliver, which was really the biggest inspiration for this project.
And later on, I dived more into the specific topic of consent and how consent has been understood and practiced.
And I think that actually led me to looking at the BDSM and kink community and seeing that there's just a wealth of knowledge and a wealth of information that is really highly applicable to all kinds of communities.
So I'm going to just try to encapsulate as much as I can in this very short presentation.
So in order to consent to something, we have to fully and profoundly know that we don't have to do that thing, now or ever, right?
So I think that the sense of being able to opt out but still have other options to participate is not necessarily something that we're very used to when it comes to thinking about building software, or even the way we approach user agreements, right?
As a continuation of this quotation, Meg-John Barker also said that this applies whether the thing in question is having sex with a partner, doing the task we said we'd do on a particular day, hanging out with a friend, or being in a certain relationship or group.
We have to know that nothing is contingent on it, that we're not bound by entitlement or obligation, that there will be no punishment if we don't do it, and that there's no assumed default script or path that we're expected to follow here.
So continuing this thought, and referencing back to the "Consentful Tech Zine," it actually offers a very nice framework.
It's called the FRIES framework (Freely given, Reversible, Informed, Enthusiastic, Specific), which is also borrowed from Planned Parenthood's notion of what practicing consent means.
And if we were to think about the kinds of boundary and consent practices that have already been developed in other contexts, in the physical world, and start thinking about how they would map onto the digital world, I think a lot of very interesting dialogue and inspiration comes out.
So for instance, Freely Given is based around the notion of making sure that there is no coercion and manipulation.
And that is essentially what dark patterns are all about.
Like tricking someone into clicking on something, or assuming that if someone agreed to one thing, like signing up for a free trial, it means that they're going to pay at the end of the free trial, right?
That is a very specific kind of strategy that is also crossing consent.
So also, I think another really interesting one is like thinking about reversibility.
If someone agreed to something, they should be able to change their mind at any time.
It shouldn't be this kind of like contract that they sign themselves off to.
So, in a way, I sort of brought a lot of these keywords and thoughts back into the project Togethernet, which is a communication software that has both a P2P chat feature, as well as an archival chat mode.
What you will see on the screen right now is three different kinds of networks.
And one is centralized, which is in the arrangement of a star shape.
And the middle one is distributed, which is in the shape of a net.
And the one on the right is decentralized, which looks like a couple of different star shapes with some loose connections between each other.
And so, in thinking about how each of these network might respond or not respond to consentful practices, I eventually ended up with thinking about, okay, so, I do need a portion of this chat to operate on a peer-to-peer basis.
And what protocol would I use?
And what you see on the screen right now is three different icons.
There's the WebRTC icon, and there's the Hypercore icon in the center.
And there's also the Secure Scuttlebutt icon on the right.
And so I essentially built my own matrix for evaluating protocols, using the five keywords that I've mentioned, and thought about which one would work best for a consentful practice and for building consentful tech.
Eventually I ended up with WebRTC.
On the screen, there are two different nodes, two different computers' routers connected to two different users' browsers.
And the idea of WebRTC is that two different browsers, two different end users, make an initial connection with each other through WebSocket.
They exchange peer IDs on something called the signaling server.
And after that, they are able to essentially learn each other's addresses, find each other directly, and communicate through a data channel without going through a server.
So that is part of what this software is.
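The handshake described above can be sketched as a small in-memory simulation. All names here (SignalingServer, Peer, and so on) are illustrative assumptions, not Togethernet's actual code, and real WebRTC would use RTCPeerConnection with SDP offers and answers; this only traces the flow: peers introduce themselves through the signaling server, then talk directly.

```javascript
// Minimal simulation of the WebRTC-style handshake described above:
// two peers exchange offer/answer through a signaling relay, then
// talk directly over a "data channel" with no server in between.
// (Names are illustrative, not Togethernet's actual code.)

class SignalingServer {
  constructor() { this.peers = new Map(); }
  register(peer) { this.peers.set(peer.id, peer); }
  relay(fromId, toId, message) {
    // The server only forwards introductions; it never sees chat data.
    this.peers.get(toId).receiveSignal(fromId, message);
  }
}

class Peer {
  constructor(id, server) {
    this.id = id;
    this.server = server;
    this.directLinks = new Map(); // peerId -> Peer (the "data channel")
    this.inbox = [];
    server.register(this);
  }
  connectTo(peerId) {
    this.server.relay(this.id, peerId, { type: 'offer' });
  }
  receiveSignal(fromId, message) {
    if (message.type === 'offer') {
      this.server.relay(this.id, fromId, { type: 'answer' });
    }
    // Either way, we now "know the address" of the other peer.
    this.directLinks.set(fromId, this.server.peers.get(fromId));
  }
  send(peerId, text) {
    // Direct peer-to-peer delivery: does not pass through the server.
    this.directLinks.get(peerId).inbox.push({ from: this.id, text });
  }
}

const server = new SignalingServer();
const ana = new Peer('ana', server);
const ben = new Peer('ben', server);
ana.connectTo('ben');
ana.send('ben', 'hello, directly');
console.log(ben.inbox[0].text); // hello, directly
```

Once the offer/answer exchange completes, the signaling server drops out of the picture entirely, which is the property that matters for the consent model here.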
So what you're seeing on the screen right now is an interface, and we are in the peer-to-peer ephemeral mode.
And in this mode, which I am calling, sitting at a park right now, sitting at a park is currently activated, and you see a couple of different avatars, essentially representing different users in the space.
And they are able to communicate with each other in a way that's very much spatial.
It's almost like a hybrid between a game space where you can walk around and a text chat.
So there are many different things I get into, but I think I'll just be brief.
Basically in this space, every single chat communication is ephemeral.
Once the browser closes, everything is erased.
Obviously if someone wants to, or, you know, decides to take a photo of the computer or copy and paste certain things, there's no way to get around that.
But as much as possible, I'm trying to keep it within the browser and keep it ephemeral.
Like the whole idea of the software is that, in order to archive content, in order for data to live on a server, the group that is meeting with each other needs to achieve consent.
And this is a feature called Consent to Archive.
When a message has been left on the screen, one will need to hover their cursor over the message and hold on to it.
And a menu called Ask for Consent to Archive is going to come out.
And once that happens, all the other avatars will have to overlap that message and hit enter in order to achieve consent.
Consented messages can also be revoked at any time.
So something that gets pushed to the archive can also be deleted and taken back.
So this is one of the features that I thought I would share.
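The ephemeral-by-default and Consent to Archive behavior described above could be modeled roughly as follows. This is a hypothetical sketch, not Togethernet's implementation: a Room class (an assumed name) keeps messages in an ephemeral list, archives a message only once every peer present has consented, and lets consent be revoked afterward.

```javascript
// Sketch of the "Consent to Archive" flow: messages are ephemeral
// by default, archived only with unanimous consent, and revocable.
// (Illustrative names; not Togethernet's actual implementation.)

class Room {
  constructor(peerIds) {
    this.peers = new Set(peerIds);
    this.ephemeral = [];      // erased when the session ends
    this.archive = [];        // persists only with unanimous consent
    this.pending = new Map(); // msgId -> Set of peers who consented
  }
  say(from, text) {
    const msg = { id: this.ephemeral.length, from, text };
    this.ephemeral.push(msg);
    return msg.id;
  }
  askToArchive(msgId, requester) {
    this.pending.set(msgId, new Set([requester]));
  }
  consent(msgId, peer) {
    const votes = this.pending.get(msgId);
    votes.add(peer);
    if (votes.size === this.peers.size) { // unanimous consent only
      this.archive.push(this.ephemeral[msgId]);
      this.pending.delete(msgId);
    }
  }
  revoke(msgId) {
    // Reversibility: consent can be withdrawn at any time. The entry
    // stays as a visible placeholder showing something was revoked.
    const i = this.archive.findIndex(m => m.id === msgId);
    if (i !== -1) this.archive[i] = { id: msgId, revoked: true };
  }
  endSession() { this.ephemeral = []; } // ephemerality by default
}

const room = new Room(['ana', 'ben']);
const id = room.say('ana', 'shall we keep this?');
room.askToArchive(id, 'ana');
console.log(room.archive.length); // 0 — ben has not consented yet
room.consent(id, 'ben');
console.log(room.archive.length); // 1 — unanimous, now archived
room.revoke(id);
console.log(room.archive[0].revoked); // true
```

The key design choice mirrored here is that the server-side archive is opt-in per message, while deletion is the default path, inverting the usual retain-everything pattern.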
On this screen, you're seeing the archival mode, where once messages get passed and stored into the archive, the goal here is to visualize and align these messages in a way that mirrors the database infrastructure as much as possible.
So on the left, you see kind of like a history window where different messages are lined up in multiple rows.
And for one of the messages, there's a sign that it's been revoked.
So even though there's no content, there is in a way an empty message, but you would still see that, oh, this person has revoked a message.
Sorry, how much time do I have?
Oh, we're fine.
(mumbles) very quick.
So you can take whatever time you need.
I'll try to be quick.
So then this line of research and thinking through Togethernet really sort of put me into a wormhole.
Like the deeper I think about it, the deeper I go into it, the more expansive it gets in a way.
So eventually I had to like really map out and categorize, like, what are the different kinds of consent I'm really thinking about here?
And ultimately I ended up with three different layers.
The first layer is the consent between peer and peer.
And the second layer is peer to server.
And the third layer is server to server.
And very interestingly, at least from my current perspective, I think peer to server, which is the consent that I feel like I'm most violated by on a day to day basis in my interaction with Instagram or Facebook or whatnot.
That layer of consent in a way is the most plausible to achieve.
Or, you know, it's the most plausible to think about: okay, you have to build the feature to allow users to take their data back, or to find other ways to use the software without agreeing to certain things; in a way, that's just thinking about how you're going to arrange different features together. Whereas how software encourages consentful peer-to-peer communication is a very big and complicated question that I don't have any answers for.
But I'm very excited to continue to explore.
And the third layer, which is the idea of server-to-server consent, is something I'm very excited about, and maybe a little more directly related to W3C, because I am planning on using ActivityPub, which is a social networking protocol that gives applications a common language to be able to communicate with each other.
So the idea for this is that there would be a Code of Consent specification that will be hosted on top of ActivityPub.
And so that, you know, very similar to how different email servers can connect and communicate with each other, in a way different instances of Togethernet can communicate with each other by evaluating whether they have matching or different Codes of Consent.
So this is basically the world I'm imagining Togethernet would be joining, which is The Fediverse.
This is a list of different, you know, social media, social networks on The Fediverse that all use the ActivityPub protocol.
And so if you see two instances, let's say, you know, you have an instance for your group, I have an instance for my group.
If we have matching Codes of Consent, then it will be possible to think about data portability and think about ways for communication to cross.
Versus if you have mismatched Codes of Consent, a layer of protection would be in place, where maybe additional information will be needed from both sides in order for the two to communicate.
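One way to picture this server-to-server idea is a simple comparison of two instances' Codes of Consent before federating. The function and field names below are hypothetical sketches, not part of ActivityPub or any published Code of Consent specification: matching codes allow communication, and mismatches surface the rules that would need review.

```javascript
// Hypothetical sketch of matching Codes of Consent between two
// federated instances. A mismatch does not block communication
// outright; it flags which rules need review from both sides.

function federationPolicy(instanceA, instanceB) {
  const a = new Set(instanceA.codeOfConsent);
  const b = new Set(instanceB.codeOfConsent);
  // Rules present on one side but not the other, in either direction.
  const mismatches = [...a].filter(rule => !b.has(rule))
    .concat([...b].filter(rule => !a.has(rule)));
  return mismatches.length === 0
    ? { allow: true }
    : { allow: false, needsReview: mismatches };
}

const groupA = { codeOfConsent: ['consent-to-archive', 'right-to-erasure'] };
const groupB = { codeOfConsent: ['consent-to-archive', 'right-to-erasure'] };
const groupC = { codeOfConsent: ['consent-to-archive'] };

console.log(federationPolicy(groupA, groupB).allow); // true
console.log(federationPolicy(groupA, groupC).allow); // false, 'right-to-erasure' needs review
```

In a real federation setting this comparison would happen during the server-to-server handshake, sitting between fully centralized moderation and purely per-instance moderation.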
In a way, this is very much responding to, you know, some of the issues that social media on the federated network might be experiencing in terms of feeling like, since everybody is hosting their own instances on their own server, it becomes very difficult to think about centralized moderation, you know, the kinds that, you know, maybe (mumbles) course.
So, this for me is a way of coming in between a centralized moderation model versus, like, you know, an instance-based moderation model.
And I think, last but not least, I would like to give my special thanks to Dorothy Santos, who I believe is on this call right now.
Just giving me the inspiration and idea on how, basically I remember when I had the first phone call with Dorothy, Dorothy was like, you shouldn't call this a chat room, chat room is not consentful.
And I was like, you're totally right.
And so, yeah, that led to Play Is Not Fun Without Consent, right?
Like the idea that, oh, we should be able to just casually chat or just have fun.
That imagination of play is not always going to be fun for everybody if consent is not communicated beforehand.
And for the second idea, I want to give my special thanks to Maxwell Mutanda and Lauren McCarthy.
These two people really helped me, also like highlight the notion of speed.
And highlight the notion of how, like being slow is very much part of consentful communication.
And that, more than anything, it's important to think about moving at the speed of trust and what it takes for that to happen, even on the level of interface design.
And here, this is, I think a quote that I was drawn to from consent culture.
It's an article on the blog of Pervocracy.
So, I think part of the reason we have trouble drawing the line, it's not okay to force someone into sexual activity, is that in many ways, forcing people to do things is part of our culture in general.
So cut that shit out of your life.
If someone doesn't want to go to a party, try a new food, get up and dance, make small talk at the lunch table, that's their right.
Stop the 'oh come on' and 'just this once' and the games where you playfully force someone to play along.
Accept that no means no, all the time.
And obviously as the extension, yes means yes.
So, this is, I guess, where I would leave this, in the sense of, I think making this project is also very much simultaneously a reflection of how it can go beyond the normative default scripts of how users ought to interact, or how user and server ought to interact.
And I very much look forward to this, the discussion and to think about, you know, what else is out there?
How can we imagine outside of this framework?
If you're interested in learning more about this project, there is a website.
And you can also email me.
I am going to begin hosting some co-designing workshops and I would very, very much love anyone's participation.
Also, I have to mention that this project was produced at Eyebeam.
So thank you my Eyebeam team and organizer supporters.
That was fantastic.
Yeah, these are all really wonderful.
And you could see people silently clapping.
So let's see.
I know the panelists have a number of questions for each other.
If you have questions as a participant here, you can put them in the chat or click the raise hand button, and I can call on you.
But first I had a couple of questions of my own.
And what I really love about kind of what all three of you are getting at, and then Xin, you just gave us a really nice case study, is this idea of like basically encoding some of these values directly into the system.
So starting with, what are the values and thinking about how does that play out in the technical decisions you're making or the documentation that you're writing around it?
And so I just wanted to invite Evelyn in the room to expand on their work a little bit.
Evelyn, I was thinking particularly of, I feel like that's a lot of the work that you've been doing over the last couple of years, at least with p5 and probably with GitHub too.
It's been about trying to feel out, or help guide, some of the values of the community, and then finding ways to document them or encode them.
I'm thinking specifically of the accessibility or access statement that you drafted, or your work on the friendly error system.
And I'm wondering if you want to talk about either of those and maybe some of your thinking around it.
Yeah, I was thinking it'd be good to mention the access statement.
I think maybe the thing that comes up for me around including things in a technology is, actually, this is something that Shan and Michelena Holloway mentioned in an earlier panel that I saw, or maybe it was a different talk of hers, and I could be wrong.
But it was about how technology, in a broad sense, includes lots of things that aren't just computers, certain kinds of practices. Yeah, practice, I guess, is another way of saying technology.
And so I think bureaucracy, and documenting policies and things like that, is also a form of technology, a way of encoding or reinforcing certain kinds of values or practices.
And so I think the access statement for p5 is sort of a document, a statement, that says that as a community, we're deferring building new features that aren't related to increasing access to p5.js and creative coding, or coding in general, while also maintaining things that currently exist.
So it's kind of an agreement: we are agreeing to keep fixing things, maintaining what already exists, but we're not going to add new features that aren't directly tied to access.
I think Aaron actually added some reference to that in some of our issue templates and PR templates.
Like, just basically a question:
how does this feature request or enhancement relate to accessibility?
And I think it's been really nice to have those conversations really foregrounded as we discuss taking on new things, or building new things, or accepting contributions of certain kinds.
I think there are a couple of ways that technology can reinforce those kinds of agreements.
But I think constructing them, and building consensus around them, and then recording them in some way, is also a way of pushing that forward too.
I remember when we first raised that idea, that we won't add any new features except those that increase access, I think there was some moment of worry, because one of our goals, right, is to be as inclusive and inviting as possible in terms of new people coming to the project.
So is this like raising the barrier or making it harder for people to contribute or to suggest their idea?
And I think where we landed was that the point is not about shutting people down and saying, no, we're closing your pull request, but opening up a conversation about what access is, and whether there's a way to adjust the feature request so that it takes into account access and its many dimensions.
I think one of the things I'd mention about that is, part of my motivation for doing it is that, as someone who is helping maintain the project, I was spending a lot of energy on issues that I felt weren't as important to what I felt the values of the community were.
And so I wanted us to have a way to say, hey, these things are not as important as spending time on these other things.
And that makes me think of just how helpful documentation and setting out priorities can be, especially for new people coming into a project, because you don't feel like you're stumbling around in the dark, trying to figure out where you won't get shut down or turned away.
And it makes me think, Aaron, of your work with the internationalization.
And I'm remembering specifically your work on the global contributors toolkit that you were working on this summer with some collaborators.
I think Noto is one of them.
The link may be here in the chat too.
And I wonder if you want to talk about that a little bit.
I thought what was really interesting about it was that you were creating a document aimed at helping onboard people to be able to contribute to internationalization efforts.
But then your group really went much further, thinking about what that actually means in terms of the project, or in terms of the larger dynamics of society.
So yeah, last year at the Contributors Conference, we started from the idea that everybody who was there was very different.
And we tried to come together with language, and also from a place of trust, of being able to talk about things that might be uncomfortable.
Like, we talked about colonization; we tried to go super deep into the topics and how we felt about any aspect of p5, about promoting a tool that was invented in the US, built on computers and running on operating systems.
I remember I found language to explain really difficult things that I think about on a daily basis, or gut feelings.
Like, I feel like sometimes the correct answer is to give up.
To say, I'm going to burn my computer, move to the mountains, and never touch it again.
But at the end I also want to be part of the world.
And how do I not feel guilty about engaging with that world?
How do I not promote things that might be hurtful for other people?
How do I keep on listening?
So I think that was the basis of the conversation that led to this document that Lauren posted in the chat.
And the most important thing is that it's an ongoing conversation, and also acknowledging that it's a very difficult conversation and a necessary conversation.
And then I know that all three of you had some questions for each other.
You shared them with me ahead of time.
They were all really great.
So would any one of you want to jump in here with a question for the group?
I'm kind of dying to ask one right now, especially with what Aaron just mentioned around, yeah, the fact that we can choose not to use computers, etc., but there's a cost to that, a loss of access to certain kinds of communities and information and things like that.
So I was actually curious to hear both Xin and Aaron talk a little bit about how you think about power differentials and differences in access if you opt out of a particular kind of platform or community or forum.
Like, for instance, in the application you're talking about, I can opt out of using the application, but it depends on where communities are too, right?
I was thinking about the same question and I think it's very difficult when, you know, like the team I have right now is me and one other person.
So, it's a question of balance as well.
I think one of the goals I have is to, you know, first of all, there's the open-source project.
And the goal is to think about how we can pluralize the way the software works for different communities as much as possible, while also recognizing that, you know, a lot of times when we leave it at that, the communities who are able to iterate and create their own versions are already very tech savvy and have deep roots in tech.
So it's already cutting out a lot of access, and situating the project in a very particular culture.
I don't have an answer, but one thing I'm thinking about is to try to make really good, highly accessible tutorials on how to make some of that pluralization and remixing happen.
You know, I think p5 and Processing Foundation set very good precedents on what accessible tutorials look like.
And on the other hand, possibly, as this project goes on, instead of building the base template and expanding that, I could focus on using the funding or support that I might get to work with other communities, be their tech support, and co-design with them.
I guess, yeah, Aaron, I'd like to hear from you too, especially around, how does consent work when there's such a huge amount of access tied to a decision like that?
My first thought, to start answering that, is that I have an unpopular opinion: computers are too amazing nowadays.
I decided at some point, like, I went into electrical engineering for undergrad, and there's a lot of talk about productivity, and Moore's Law, of let's make things faster for speed's sake.
And I'm a big proponent of, let's stop investing in faster computers.
Let's understand what we have right now.
And that's something that I think resonates highly with the Processing Foundation.
'Cause the Processing Foundation is a way of paving the way for things that you can already do with computers, to make them in a collaborative way; it's like paving a road over these very difficult mountains to climb.
And I think a lot of my work is like that: I'm never looking to optimize or to reach the unknown.
It's mostly about understanding, and making these things more available to society, so that we can all discuss whether we think they're right or not.
And that's like consent on a citizen level. I feel like there's something about my experience in Latin America, about how classist it can be.
It's like, you're super cool if you have the device.
And here, I see people from a hacker culture, where you're cool when you can actually program it.
So there's like a different realm.
I've always thought that the transistor made a huge change, because before the transistor, you had these big radios, or these big cars where you could see the parts and what was broken, and you could actually manipulate them and modify them.
But then at some point, they became these tiny chips that you cannot open.
So you start to feel like they're so magical and powerful that you just try not to worry about it.
And then somebody starts making decisions for you, and you give away your rights.
You give away a lot of your agency.
So a lot of my work is about making things that are not flashy, not fast, but where you actually know what's going on inside.
That's how I am reclaiming that space.
That's how I feel comfortable.
And I feel like I'm never going to have enough time to...
And just to recap a little of what's happening in the chat, there are a few people responding to all of your ideas.
But one thread I see is thinking about how these ideas of revocability or consent actually play out, you know, if we're thinking about specifications for the web: what might you add, or what APIs or user interface changes do we want to see?
But I think I would open that up a little more and just, (mumbles) So Burt and Jeffrey, I hope I'm recapping your questions properly here, but it's, you know, thinking about models for working with data, like the GDPR in Europe.
But like, what do you want to see?
And then to open it up, I liked this question from Jeffrey, which is, what are the questions you ask before adding something?
Or maybe, what are the values that you're trying to keep in mind as you're making those decisions?
If you're less familiar with the particular spec or API, I would put it to you like that.
Yeah, what are the questions that you think are important to ask when you're thinking about adding a new feature or going into a new direction or building a new tool?
I think one example that I often think about is how in Signal, there's a way to turn on a feature that prevents screenshots.
I believe there was a time when it was automatically on.
Now it's automatically off.
And I think it's, you know, not Signal specific, but just in general, thinking about the boundary between security and consent is really interesting to me.
Because when you turn screenshot prevention on automatically, although it is more secure, there's also almost an illusion of security, because someone can still just take a photo of the phone.
Or someone can take a screenshot in some other way, right?
So I think sometimes, meeting that need for security is not always the most consentful way to go about it. That doesn't really directly answer the question.
But I guess I'm just thinking a lot about, you know, what is the kind of expectation I'm presenting through adding a new feature?
And how sometimes, you know, adding a new feature, even though it might seem like a very good idea, might actually take some of that agency away, or take away that transparency of what could actually happen.
So yeah, it's just an example and a half answer, but it's something that I think about a lot.
Yeah, I would say also that any request for new features assumes that somebody is going to read it.
And, as part of what she was explaining, you need to also understand that maybe it won't be read.
Maybe a lot of the way that I am still able to participate in the Processing Foundation over the years is because I value people first.
So I've been called on to work less, to not burn out.
And in the end, no matter how oppressive or consentless a computer can be, they're made by people.
So just as the laws of physics allow weapons to be fabricated, we need to work on education and self-respect and love for each other.
So that even though awful things can be done, nobody is going to have the heart to do them, because we care about each other.
Yeah, that's my hippie message.
There was something I saw in the chat that I think was related to this, around how you communicate what someone needs to know to consent to something, especially around APIs and in software.
And mostly, I just wanted to say that it's really hard.
Like, I used to work as a UX designer for Linksys, working on the UX flows for setting up your home wifi network, and trying to explain to someone who doesn't want to spend that much time on this, what the implications are of using a particular kind of security, or putting the router into bridge mode, or whatever.
There's only so much you can explain.
But also, like, I think a lot of it has to do with the context and like the situation that the user, the person who you're communicating with is in.
It's a lot of effort to encode all of that, to know what that situation and context is.
And also to, you know, create the complete diagram of logic: given these conditions, we think this person is in this situation, therefore we want to tell them these things.
I think I'm just saying it's really hard.
It's a lot of work.
But I mean, it's obviously important.
It's just, it's so much easier when it's like in your head and you're like in person with someone else or on Zoom with someone else, and you have some context on this person or what the situation is.
I'm going to bring us to a close unless any of the panelist wanted to share any last thoughts that they didn't get to yet?
I mean, everything that you all are saying has inspired a lot of reflection and conversation in the chat as well.
I think it's just such a huge topic.
You know, everything from thinking about how do you onboard new contributors to how do you document the values or principles of a project, to how do you build that into the technology and then how do you communicate it through notifications or UI decisions or APIs.
And I think (mumbles) one of the things I'm excited about here is that it points to the need to actually, I think a lot of times communication gets thrown by the wayside, like, oh yeah, it's going to happen.
And to actually take the time to have explicit conversations about it, I think, is one starting point to unpack these things.
So I'm really glad that we could do that today.
I wanted to just add an edit to something I said earlier, 'cause I gave a shout-out to Noto Hedo, who is helping along with the global contributors toolkit, but I see that Xin Xin is also here.
I wanted to say thank you to her as well.
But I think that's our session.
I'm sorry I couldn't ask all of the questions here, but I hope the conversations can keep going.
And yeah, thank you again to the panelists and to TPAC and W3C for letting us join with this panel today and for all of you for being here.
And especially to Don for helping organize all of this.