
Accessing WebXR Through Art

Facilitator: Lauren Lee McCarthy

Speakers: Evelyn Eastmond, Stalgia Grigg, and Valencia James

WebXR is a quickly developing area of the web, with the potential to draw in many new users with its creative possibilities. Panelists Evelyn Eastmond, Stalgia Grigg, and Valencia James will present their work developing platforms and tools that open access to WebXR through art, taking into consideration COVID-19 and the need for remote experimentation. Topics will include the development of the p5.XR library to lower the barrier to entry for new coders, a volumetric performance toolkit that reimagines immersive web spaces as a site for more equitable representation of humans in mediated performance, and using WebVR to connect art students and their studio practices in a remote-learning context. Brief presentations will be followed by open discussion among panelists and Q&A with the breakout participants. This session is curated and moderated by Lauren Lee McCarthy of the p5.js project.



Transcript

Lauren Lee McCarthy: My name is Lauren Lee McCarthy. I am a mixed-race Chinese American woman. I have short dark hair. I'm wearing a brown shirt. I'm sitting in my studio.

I'm excited to introduce this panel today. I came into this scenario from the p5.js project, which is an open source library for making creative work online and learning to code. It's used a lot by designers and also in educational settings.

The people in this panel I have met through that project, or they're working in adjacent communities. All three of the panelists are artists, designers and tool makers themselves. So they'll talk about the different tools and projects they're working on related to thinking about XR on the web.

The only other thing I'll say is that it feels exciting to do this panel now, because WebXR is an evolving standard, and given the COVID situation there have been questions about how to do things remotely.

So the format is that each person will give a short presentation, and then we'll have time at the end for questions and discussion. If you have questions while it's going, you can throw them into the chat and I'll do my best to weave them into the conversation, or you can use the raise-hand feature.

We didn't decide on an order, but I will call on Evelyn first. Evelyn Eastmond is a multidisciplinary artist exploring trauma and embodiment through sculpture. She's been at the [indistinct speech] center for the arts, [indistinct speech] at the residency program in Canada, and taught at the [indistinct speech] school of design. She's held positions at the [indistinct speech] MIT lab. She's a senior design researcher at Microsoft and will talk about her work there. Welcome. If you can, describe yourself and your slides as you go along.

Evelyn Eastmond: Hi, I'm Evelyn Eastmond. She/her. I'm a Hispanic woman. I'm wearing a green shirt and I'm in my guest room.

For my slides I'm going to read from a script and the video that I have will have some captioning. I'm going to share my screen. I'm going to go to my slides. We'll give it a second.

Hi, everyone. My name is Evelyn Eastmond. I'm here to share a little bit today about my experiences with WebXR and its use in remote art education.

In my current position as a design researcher at Microsoft, I'm running user studies on technologies such as VR and AR. I'm interested in how these enable active learning and active participation in ways that PowerPoint cannot. I'm interested in learning whether we can open up what open sharing means, and how or whether this opens up new pathways to accessibility, openness and inclusivity.

Earlier this year a professor asked if I could work with her students. I created an [indistinct speech] and populated it with the students' works. Using this space and donated headsets from [indistinct speech], the students were able to meet and talk about their objects in a way that Zoom wasn't suited for. I will show a video of one student presenting her 3D sculpture.

So back to the slides.

As a result of that experiment in the spring, the students said they felt they could find a sense of connection to their peers and artwork again, and regain a sense of community that Zoom wasn't offering.

Some went as far as to say their tuition value was redeemed by having a more remote classroom experience.

This semester we decided to push the work further. The social VR tool was difficult to use and not accessible. We wanted the students to be able to easily create their own environments suited to their visions and artwork, instead of me creating their studio for them like I did in the spring.

So we decided to switch to a social VR tool from Mozilla based on WebXR, called Mozilla Hubs. It's an open source tool; it's inherently spatial and has head tracking. It's multiplatform: anyone can join with any computer or device. It's very easy to use. You drag and drop content in or use the built-in tools. You can share a space with a web link, like a website, and there's an active community on Discord for learning more.

We're only halfway through the semester, but we're already seeing advantages in addition to the limited in-person meetings that the students are having.

Here are some pictures of what the current in-person studio critiques are like, with social distancing, masks and limited outdoor table space to show artwork. Here's a student working on a project based on a bag of cookies; she made a copy of it that's soft.

This is another student artwork based on the scent of lavender.

Here's a student with shiny balloons and a lattice structure.

I'll finish by showing a short montage of some of the artwork students have been able to make. I will let the video play. It has no audio. I will talk over it sharing what we hope to learn next.

So far the students have been able to create rooms, landscapes and architecture to give a sense of place to their work.

They've been able to play with scale and mixed-media installation art, which would have been limited if done in person. What's interesting is not just enabling the spatial experience for those not on campus, but also for those who have not been able to return to the U.S.

For now in the next couple of months our goal is to collect data about the classroom experience and incubate new tools to help foster more explorations.

There are questions to explore, not only about free and open media that connect people at a distance, but also questions about Microsoft's involvement in this space. As some of you know, Microsoft owns GitHub. I'm wondering if Microsoft wants to support not just the future of the open web for developers, but also the future of the open web for spatial and embodied making.

My last slide is just the same slide. I'm learning about the politics of making that case for support from Microsoft, and finding it interesting and challenging given that I have little experience in business structuring.

I'm excited to explore these ideas today. Thank you.

Lauren McCarthy: Thank you Evelyn. We're going to move to the next presentation.

This is Valencia James who is a Barbadian freelance performer, maker and researcher interested in the intersection between dance, theater, technology and activism.

She believes in the power of the arts to make change. She cofounded the AI am project, which explores machine learning and dance. The project has been presented in several international forums and at multiple TEDx events. She's also currently a rapid response fellow for a digital [indistinct speech] in journalism.

Valencia James: Thank you, Lauren for this wonderful opportunity to share my work. And just to be in community with all of you at the W3C forum and community. Thank you so much.

So thank you for that introduction, Lauren. I'm from Barbados. My main medium is dance. I'm not a coder; the idea of XR and working digitally is still a new medium for me. I want to say my work is really informed by the tension that I have felt in my relationship with society. I'm coming from Barbados; I grew up under British colonialism, and I spent time in Hungary. It was a positive experience, but there was also an othering element to it.

My exploration has been led by curiosity and a sense of urgency to extend the limits of my creativity, interrogating what that means in performance and in creation in the studio.

In this slide you see on the left my first taste of working with dance in a virtual environment. This is a project called AI am.

I founded this with a group of technologists from Sweden and Hungary. This is our first prototype of a system for working with an intelligent avatar, thinking about what it might be like to have a duet with myself, a 3D replica. This was a start in thinking about machine learning, but also about human-computer interaction.

I found my movement practice was enriched; rather than me being the teacher, I was being taught.

I realized there was this awkward divide between my world and the one on the screen. There was always this strange boundary with the physical that bothered me.

This slide is from 2017, the culmination of that project, which was an evening-length work called I am here. You see me on the left. I'm in a slightly bent-over pose in a motion capture suit. The avatar has evolved to a blue androgynous figure that hovers on the screen.

I was looking at empathy between myself and this agent but there's always this kind of boundary there.

So I was wondering, how can I supersede that? I took space from the project, moved to the U.S. and had a baby, and I didn't know how I would continue. But during the lockdown this year it came to me: seeing that theaters were closed, I thought, what if I go into the virtual?

That's where I figured out WebXR could be a way to bridge that divide. But how?

This is what started the new project that I'm researching through Eyebeam. It's called the Volumetric Performance Toolbox.

It's a collaboration between myself, Sorob and Glowbox. We want to see how a performer can give a performance from their own living space, with a virtual audience and minimal equipment. We've been working on this for the last couple of months.

Our approach has been centering black representation in online spaces, with me being the performer.

I think I forgot to describe myself at first. I have dark brown skin, and I have thin locks pulled back by a multicolored scarf. It's important that I be represented in virtual space as I am and not as an avatar.

There's already [indistinct speech] in these spaces. There's not enough representation of black people in virtual space, or even performance space for that matter. So we decided volumetric capture would be our approach.

We wanted accessibility for audience and artist. For the audience, we wanted a platform that would be accessible across devices, so we decided to start experimenting with Mozilla Hubs. We see from Evelyn's work that it really is a place that is most accessible.

And for the artist as well: how can we do this using minimal equipment, without expensive heavy-duty computers? So we started off by trying to make the thing.

How do we make a performance? On the left side you see a picture of my setup on my dining table. Thomas Wester is seen on one computer. We would collaborate via Zoom. Glowbox sent me equipment so I could start experimenting: an Azure Kinect and an Android computer.

On the right you're seeing some experimenting with [indistinct speech] kit as I improvise in my living room. I'm thinking not just about black fem representation, but about what it would be like to create a virtual space to perform in, and whether these spaces, historically places of pain and injustice, could be places of healing.

I thought about ruins. That brought me to [indistinct speech] work, where they have photo scans, and we found one of the [indistinct speech] sugar mill. This is a big deal for me because of my connection with my ancestors, and finding power and strength from within when the world is clearly showing hostility towards black people.

How can these acts of reclamation in virtual space be a source of healing? I'm going to pause and show you the proof of concept that we have developed in Mozilla Hubs, with guests as a test audience.

So, in these sessions we realized how exhilarating the experience was, and the feedback that we got from the guests was that there was a sense of liveness and community in this experience, and also a sense of intimacy.

I felt this was very exciting and has a lot of potential even beyond the pandemic.

First of all, bringing artists the possibility to create their own worlds and their own ways to tell their stories and connect with audiences. I think the possibilities are huge, and I feel like this is pointing towards a new art form, maybe.

A new way of creating and of sharing, beyond even human performers. What if there's interaction with algorithmic dancers, thinking of AI as co-creators?

So, this brings me full circle to how to bridge the divide. There could be the possibility for new dances where algorithmic movement and human movement connect people together. So this is just the beginning.

We're looking forward to developing a community around this, because it's something that needs a wide range of people to collaborate on. If you're interested, please do contact me at volumetricperformance@gmail.com.

Lauren McCarthy: Thank you Valencia. And then lastly we're going to go to Stalgia.

Stalgia Grigg is an artist using simulation. Stalgia works to build open source software tools and serves as a project steward for p5.js and p5.xr. They are dedicated to building tools that interfere with normative tools that [indistinct speech]. They're an assistant professor of multimedia design in [indistinct speech] New York.

So welcome Stalgia.

Stalgia Grigg: Hi. Thank you for the introduction Lauren.

I am a white person with a pinkish hue to my face and, believe it or not, this mullet combination of a bowl cut and pigtails. I'm sitting in front of an off-white wall with two blank post-it notes behind me.

This slide that we're viewing now has the title of the talk, the panel and has a picture of kudzu. As an artist I'm thinking of how tools set value for creative expression.

Today to think through this together I want to look at two examples of XR pedagogy. I want to read an excerpt by Jessy and Selwa.

“In this closing workshop, we take key takeaways from the readings (Escobar, Massey, Mignolo, Kelly) and engage in a thinking-feeling-making exercise that tackles our basic inquiry: how we might Decolonize AR. Using our bodies, personal objects, and physical space, we will individually deconstruct, understand and prototype our own cosmology and its relationship to others' in the pluriverse. Through sensory awareness, performance and the capturing of our own configuration of reality, we'll take notice of little moments of interaction, feelings or relationships - seeing how they do or don't translate digitally. Finally, we'll share and discuss what's being left out and what we wish we could include when we digitize our cosmologies into this coming mirrorworld.”

We're now viewing a slide with a large landscape-orientation picture with a colorful background and multiple smaller pictures inside; in these smaller pictures, which are animated, people are playing with different objects.

Decolonizing AR was a free course in LA. The course invited participants to explore the history of their city and bring representations of personal cosmology into a digital layer of the city.

They also have two cool related projects in the works. If you're in the audience, you can drop info into the chat.

Moving on to the second example. This is from a tutorial from later.com on how to create filters with Spark AR.

“Custom Instagram Stories filters are no longer limited to the Kylie Jenners and Nikes of the world, giving artists and brands of every size the opportunity to create a viral moment on Instagram. Keep reading to learn more about the future of Instagram and best practices for creating your own Spark AR effects for Instagram, including how AR filters work as a growth hack to getting more followers.”

Spark AR is a free proprietary tool for making effects for Instagram. It disallows content that is critical of Instagram or Facebook.

The outcomes for this tutorial are aligned with platform-based personal brand growth, and progress can be measured by metrics.

This is hinged on the new forms of meaning and criticality.

I'm not offering these examples to make a value judgment. We want tools that help people critique power.

This is the p5.js logo: a magenta square with the letters p and 5 and an asterisk.

In 2018 I began making a wrapper library for p5.js. p5.js is a web library for creative coding. It's taught in many schools. p5.js isn't the only library for rendering graphics in browsers, but they consistently direct their aim at accessibility. This happens through making the canvas accessible to screen readers. They also propose and refine an understanding of what justice looks like, both in and outside of software.

We know that tools are not neutral. This doesn't limit the use of the tool or the kinds of things that can be made with it, but it works to orient the meaning of the tool: when people encounter p5, they encounter its sense of justice.

We're now viewing a slide with 3 lines of JavaScript code. Each of them calls createCanvas.

The first one has the number 500 passed as the first parameter and the number 500 passed as the second parameter. The comment says “create a canvas with a width of 500 pixels and a height of 500 pixels”.

The next line says “createCanvas(VR)”; the comment says “create an immersive VR canvas”. And the next line says “createCanvas(AR)”; the comment says “create an AR canvas”.

For these reasons I was excited to bridge P5 to an immersive context. The library was to make it possible to easily move a sketch from a flat interpretation to an immersive one.
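As a rough sketch of what that move looks like (an illustrative example of the pattern described on the slide, not code from the library's documentation; it assumes p5.js and the p5.xr wrapper are loaded in the page and supply createCanvas, VR, background, rotateY, box and millis as globals):

```javascript
// A flat p5.js sketch becomes immersive by changing the createCanvas call:
// createCanvas(500, 500) would draw on a flat 500x500 canvas, while
// createCanvas(VR) instead requests an immersive VR canvas.
function setup() {
  createCanvas(VR);
}

function draw() {
  background(200);            // clear the scene each frame
  rotateY(millis() / 1000);   // slowly spin the view
  box(50);                    // a cube rendered in immersive space
}
```

The same draw loop runs in both cases, which is what makes moving a sketch from a flat interpretation to an immersive one roughly a one-line change.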

This would let someone make an immersive project without me as the project initiator [indistinct speech], and share it with their friends.

I was a bit overly eager to start this project, given that the spec wasn't stabilized. I ended up rewriting the library a number of times. The ability to release this project hung in suspension as we waited for the browsers to catch up to the spec and patch features through a web-independent app.

This made me aware of the importance of timing. In my experience as an educator, there's a magical wow moment when students realize their power.

Moments like putting a 3D model in the browser that their friends can see. Platforms are aware of these moments of empowerment. This means they are working to become the first and only place people think of when creating content.

Why does it matter whether there are alternatives to XR content production?

These tools prioritize accessibility. Instagram's AR features are available for the lion's share of devices. Spark AR is easy to use. I would argue this is why accessibility and ease of use for authoring tools are not enough.

Tools created by for-profit enterprises will transmit the priorities of profit. In this way accessibility can actually lead to a homogenous web.

We have to build and share tools without constraints and rigid definitions of value. When I think about the web I think of how it was used in the web [indistinct speech].

There are many resources that favor over privileged authorship. There's a relationship that appears unique in hindsight.

The slide shows an artist. Olia is a white fem person wearing an Internet Explorer sweater.

To paraphrase artist Olia:

“To be blunt, it was bright, rich, personal, slow and under construction. It was a web of sudden connections and personal links. It was a web of amateurs soon to be washed away by .com ambitions and guidelines designed by usability experts. The amateur web is hidden. Search engine ranking mechanisms rank the amateur web low, because the www of today is a developed and regulated space. You wouldn't get on the web just to tell the world, welcome to my home page.”

Now looking at the last slide it is an ASCII bunny holding a sign that says towards a boundless and messy immersive web.

As this becomes more common, we can see search engine rankings begin to matter less: a web that prioritizes presence.

This gives us an opportunity to reorganize the power relationships of the web. In order to do this accessible and stable alternatives for content authoring have to be made numerous.

It is my hope that in the coming year, with WebKit indicating a move towards implementing the WebXR API, we can proliferate these tools so that anyone can follow.

I'm encouraged by the work of the people on this panel, and I believe the immersive web can grow into something bright and personal, something always under construction.

Lauren McCarthy: Thank you. All three of those presentations were amazing. I'm excited about it. Thank you.

So we're going to move into a little more conversation and back-and-forth. If you are a participant and have a question, you can put it in the chat or use the raise-hand feature to ask it directly. I will ask questions, and the panelists have questions for each other too. We'll just try to throw it together and see what we can do in the last 15 minutes or so.

I wanted to start by picking up a question I think Simon asked in the audience. They directed it at Evelyn, but I would like to open it up to everyone. The question is: I would love to hear about the experience of teaching students with little experience.

And I would be interested in hearing about your experience of bringing people that are new to the space into it, and what the considerations are there in terms of access.

Maybe, Valencia, you have firsthand experience coming into the space that you could speak more about, as you already started to. So that's my question: that entry experience. How do you think about it? What have you learned? What are the considerations we want to take with us going forward?

Valencia: For me that's still a tough question, as I think about how to bring my community and other artists into the possibilities of creating for the immersive web. Right now it's been a big learning curve.

To be honest, I'm still very much dependent on my collaborators, who work as the technical artists. I'm now exploring the different tools, so I would like to pose that question back to my fellow panelists.

I started with just getting to understand how Hubs works and looking at Sketchfab, but in terms of implementation I'm still very dependent on my collaborators with more technical skills.

Evelyn, I realize you showed amazing work from students who didn't have a VR background but made amazing work.

Evelyn: Thanks for the question. Yeah, those examples from the presentation today come from students in a class taught by a collaborator of mine called Joy [indistinct speech], who may or may not be on this call actually.

That class teaches the students some 3D modeling, so they're primed in the language of digital 3D assets already through that course.

But then we are giving them the VR experience as an option. I'm not in the classroom to walk them through it. Joy also doesn't have a lot of experience or history with VR. So one thing I did was make these things I call one-pagers.

These are PDFs where one page gets the students into the space. The second page gets the students to bring in one image. That's another page. It's a way of chunking the activities into bite-size things that make it feel light and easy to do.

It was an experiment this semester, being remote: making the one-pagers, handing them off to Joy and seeing how it went. I was bracing for the emails from the students, and they never came. That's one strategy I'm using at the moment.

Stalgia: I've been using Mozilla Hubs for students who want to host online art shows, and it's worked very well.

I think that the documentation is fantastic, and it’s got an easy entry point. I think just in terms of having these kinds of tools that are drag and drop and don't require coding skill in order to get something that maybe shows off a 3D model or scan is important.

I'm also trying to work in, you know, advocating for other kinds of tools that do require a little bit of coding knowledge, demystifying them a little bit, like AR.js or p5.xr if they have access to an Android device.

But I think in general I want to see this kind of drag and drop feature across many different content authoring platforms. I think that Mozilla has a certain look and feel but this feeling that you could drag something in and make it appear in AR can really exist in many different authoring platforms.

Lauren McCarthy: I found myself wishing for code drop ins that you could mix into hubs.

To kind of break down the barrier: it feels like some experiences are very plug-and-play and some are code-it-yourself.

I get excited when you can move one to the other. I saw a question from Ada. Did you still want to ask your question?

Ada Rose Cannon: Sure. So I was just thinking: the WebXR Device API has the capability of providing access to augmented reality or virtual reality depending on what's available on the hardware. I was wondering if that's a feature anyone has experimented with or tried out, or whether you think it's useful at all.

Stalgia: I would like to take a shot at that if I may.

This was something where I was really impressed by the far-reaching vision of the Immersive Web Working Group, realizing that the technical and experiential qualities are very similar. When developing this wrapper library, I have worked across both.

I think as it stands now you can't really use the AR capacity unless you're on an Android phone running the latest Chrome. So there are extreme limitations on that.

In terms of being able to make something and quickly switch from wearing a headset and being amongst it to seeing the background of your camera feed and imagining it in the space can be as simple as changing a couple lines of code.
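As a sketch of that capability (illustrative code, not taken from Stalgia's library; the mode strings 'immersive-ar', 'immersive-vr' and 'inline' are the ones defined by the WebXR Device API), feature detection can fall back from AR to VR to a flat inline session:

```javascript
// Pick the richest WebXR session mode the current device supports,
// falling back from AR to VR to a flat "inline" session.
// navigator.xr only exists in WebXR-capable browsers, so we guard for it.
async function pickSessionMode() {
  const xr = (typeof navigator !== 'undefined') ? navigator.xr : undefined;
  if (!xr) return 'inline'; // no WebXR at all: render flat
  if (await xr.isSessionSupported('immersive-ar')) return 'immersive-ar';
  if (await xr.isSessionSupported('immersive-vr')) return 'immersive-vr';
  return 'inline';
}

// In a browser this would then be used on a user gesture, e.g.:
// const session = await navigator.xr.requestSession(await pickSessionMode());
```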

Evelyn: When it comes to interest, I would say from my perspective it's definitely there for handheld devices.

The support could be ramped up for low-end devices with cameras on them. I think the interest is still there, because one thing I'm learning at Microsoft is that the market for expensive headsets is a slow thing that will be slow for a while, it sounds like.

AR on mobile devices is just this area that wants to grow now, and you want to bring it into schools. So interest is there from me.

Ada: Cool thank you so much.

Lauren McCarthy: Thanks. I was wondering, as you all were developing your tools and testing these out: if we're thinking about the WebXR spec, what do you hope to see?

I know, Stalgia, you spoke to this in a very philosophical sense, but how does that play out?

What are the barriers that you are coming up against as you try to develop things, that you hope might unlock in the future?

This could be [indistinct speech] with the particular tools you're using or with the spec in general. I'm interested in the whole WebXR space.

Stalgia: Yeah. I think that I'm really happy with the spec and the kinds of conversations I've seen happening around the spec.

I think there's this other layer that's important to us as tool makers or end users, which is how that gets implemented. I think the degree to which the rollout at that layer has been uneven has further mystified the technology for people. That part has been frustrating for me.

Lauren McCarthy: Ben, I think you want to make a comment about that?

Ben Erwin: Hi. I'm Ben Erwin and I'm deep into researching WebXR.

The answer that I would give to what I would expect from WebXR is adoption. I think the fact that the Quest is really the only fully immersive device right now gives us a glimpse of where the web is right now.

All of the controversy surrounding Facebook and the merging of logins aside all we have right now is [indistinct speech].

I think it's essential that we as developers, artist, anybody interested in this space do what we can to push the hardware manufacturers to bring it in.

We need HP to have an immersive web browser. We need Vive. We need it to be a standard implementation. I think what's holding us back is the walled garden model.

That will always exist, and I don't have a problem with that, but the best way to distribute any content is over a URL. What I want to see is a future where everybody can get their content over a URL.

That starts with VR support on the existing VR devices, and, as AR glasses roll out and as the AR module of the WebXR specification rolls out, that gets baked into all of that.

I can't wait to see what Unreal Engine will do with the nebula web launcher. But to boil it down: adoption.

Lauren McCarthy: Thank you.

Evelyn: Can I follow up with a comment? That resonates with me, the adoption thing. I've only been at Microsoft for a year. I'm in the [indistinct speech] office, which would be a place where maybe I can help drive, at least from the corporate side, the conversations around adoption and openness and working with communities outside the company.

One thing I'm finding interesting and frustrating is that the discussion about adoption is sometimes a nonstarter in the company because of politics. So it's fascinating how slow the conversation is.

[Unknown speaker]: That's going to be the fight.

Evelyn: Yeah. It's... yeah.

Lauren McCarthy: And Valencia do you have anything you want to add? Things that you've come up against or hope to see in the future?

Valencia: Coming from outside of this world, just peeping in and trying to explore it, I think the thing would be for devices to be able to do volumetric capture with a webcam.

I've seen a couple of experiments that involve machine learning lately.

But I also notice capabilities like depth sensing coming into our mobile devices.

I was wondering if anyone has thoughts about where that can go, and how it could be connected to actually creating your own content, with more people from all walks of life getting creating capabilities.

Lauren McCarthy: Thank you. I know I see a question from Boaz, but I know you had questions for each other too.

I want to open up this space if any of you want to ask any of those.

Evelyn: Mine have been mostly covered. My last one overlaps with Boaz's.

Lauren McCarthy: Maybe we want to go into that then if you would like. Okay.

Boaz is asking how you all are thinking about XR as an assistive technology. I'm scanning the comments here. How are we thinking about disability and access in terms of XR, if that's come up in your work or is something you're thinking about at all?

Valencia: That's a main research question for us now, in terms of getting these technologies out there for artists with disabilities to even approach creating volumetric work in-house.

I've been in conversation with a blind ballet dancer. Her comment is she wouldn't approach these spaces because they're not built with her in mind.

So what I've been asking is: are these spaces actually accessible via screen reader, and what needs to change? Maybe a new platform needs to be built completely.

I wonder your thoughts, Stalgia and Evelyn, and people on the call. Do you think maybe a whole new platform needs to be built from the ground up, integrating assistive technology?

Stalgia: I think that's a good question.

Something I think about a lot is the work that's been done on the p5 web editor to make it easier to work with for people who have vision disabilities, so they can get auditory feedback for what they're making graphically.

Part of me is overly eager to see where this can be spatialized, because I think a lot more information can be conveyed through spatialized sound in an immersive context. So I feel like there's this possibility for an authoring tool that uses the specific capacities of XR to actually make it easier to make things.

Lauren McCarthy: Yeah. To build on that, as Stalgia mentioned, we've been thinking about accessibility across the p5 library.

As I get deeper into that, I feel excited. There's a certain approach in some areas of the web where accessibility is treated like this minimum set of guidelines that you need to meet to be okay, or to say we checked that box.

p5 is primarily a visual coding tool, so we have gotten into the question of how it works for people not seeing the screen, and that opens up many more questions about other disabilities.

It becomes a very creative question where it starts to think about what if you're working in multiple modalities at the same time. So I'm excited about that conversation.

We're just about out of time.

Dominique Hazaël-Massieux: I wanted to mention a couple of pieces of work in this space that may be interesting to you.

We ran a workshop last year on inclusive design for XR on the web, which looked at many of these questions: both how to make the immersive web accessible and how to use immersive web technologies to create new accessibility experiences.

And there is ongoing work on the XR user access requirements, which tries to go into more depth on what it means to make an XR experience accessible.

We are very far from having all the answers. As you said, I think a lot of it is not well-defined answers but more questions of research and creativity.

That's a topic we're keen on getting more insight and input on as you explore these spaces.

Lauren McCarthy: Thank you. I think we should wrap up here unless anyone had anything they want to say as a closing comment. Are you good? Boaz were you about to say something?

Boaz Sender: Just to my friends at the W3C: I think the p5.js community has a remarkable way of thinking about assistive technology, which a few folks on the panel touched on today, as we think about this for XR and generally about different modes of experiencing and interacting with content.

I don't know what the links are, but there's wonderful material out there and work from p5 to look for after this session.

Lauren McCarthy: Sorry I wish I could dig them up right now but that would take a minute.

I want to say thank you to all the panelists. It was amazing to see your work. It felt like there were nice overlaps, and also things going in totally different directions.

So thank you for grounding us in this space today. I think that's all. We'll be posting a recording and notes later. So you can catch up there too. Thank you for coming.

