Minutes of the fifth live session:
WCG & HDR: Canvas, WebGL, WebGPU

Live, real-time transcription services were provided by Caption First, Inc. and these minutes are derived from those captions.

Present: Alexis Tourapis (Apple), Andrew Cotton (BBC), Andrew Pangborn (Apple), Captioner Tina (Caption First), Chris Lilley (W3C), Chris Needham (BBC), Christopher Cameron (Google), Karen Myers (W3C), Ken Greenebaum (Apple), Lea Verou (MIT), Marc Jeanmougin (Télécom ParisTech), Max Derhak (Onyx Graphics), Pekka Paalanen (Independent/Wayland), Rafael Cintron (Microsoft), Sebastian Wick (Wayland), Simon Fraser (Apple), Simon Thompson (BBC), Takio Yamaoka (Yahoo! JAPAN), Vitaly Prosyak (AMD), Zachary Cava (Disney)

Chris Lilley As with the other sessions I'm going to wait until 5 past, to allow other people to join. (Pause).

Canvas, WebGL, and WebGPU

View presentation by Christopher Cameron (Google)

Chris Lilley Let's get started. I'm slightly worried that Ken hasn't shown up yet. But let's get going. So Christopher, you did your introductory talk and it is pretty straightforward. You are adding new color spaces with the color space property and you are adding higher bit depths. So this is pretty clear. But there was a question about what bit depth is needed. I mean you have added 10 bits and you have added half float. And from the slides it looks like the 10 bit is for the gamma-encoded stuff and half float is for linear light, is that correct?

Christopher Cameron In practice that's the most likely use of these. With that said, in the spec I don't think there is anything that says you can only use 10 bit with gamma-encoded and half float with linear. So in practice the bit depth and the color space are independent, and it's up to the user to decide what's the best bit depth for the color space they have chosen to work in. In practice, if you have something like Rec. 2020 primaries and you are using gamma encoding, 10 bits is what you are going to want there. But in terms of what the spec allows, those are two independent properties and all mix-and-match combinations are allowed.

Chris Lilley The values are restricted to 10/16. You can't say hey, I want 12 bits or something like that?

Christopher Cameron That's still a proposal. It is not a spec. What ends up being exposed may not be that. Likely it is going to be the various pixel formats that all GPUs support. There is going to be a 10-10-10-2, which is something that's very popular for wide color gamut. As for other things that people might think about, there are a whole bunch of things that GPUs can do that aren't on that list. 5-6-5 is one that comes to mind, if you want to cut down on memory usage. But in practice, we're probably going to start small with the common set of things and expand them, because it is always easier to add than to realize you made a mistake and have to emulate, or that sort of thing. We're targeting what – I'm sorry.

Chris Lilley No, no. I was just agreeing with you. Go ahead.

Christopher Cameron Oh, yeah, this was to say we're targeting the things that have universal support on all GPUs; float 16 and 10-10-10-2 are things that have been around for decades and decades.
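To illustrate the two independent parameters being discussed, a minimal sketch: `colorSpace` follows the shipped canvas context-creation API, while `pixelFormat` and the option names shown are from the proposal, not a shipped spec, and may change.

```js
// Sketch of the proposal: color space and bit depth are independent
// context-creation options. `colorSpace` matches the shipped canvas API;
// `pixelFormat` and its values are proposal names, not shipped API.
const wcgCanvas = document.createElement('canvas');
const wcgCtx = wcgCanvas.getContext('2d', {
  colorSpace: 'display-p3',        // shipped value
  pixelFormat: 'unorm-10-10-10-2', // proposed: 10-bit, for gamma-encoded WCG
});

const linearCanvas = document.createElement('canvas');
const linearCtx = linearCanvas.getContext('2d', {
  colorSpace: 'srgb-linear', // proposed value: linear-light working space
  pixelFormat: 'float16',    // proposed: half-float, for linear light / HDR
});
// Any combination is allowed; these pairings are just the likely ones.
```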

Chris Lilley So are there other questions about Chris Cameron's proposal? If not, I have some. But I would like to hear other people's opinions.

Andrew Pangborn I have a question about the HDR canvas proposal. When it is specified as HLG or PQ for the compositing space, sorry, for the color space, does that mean that all of the blending itself takes place in that space as well?

Christopher Cameron So this is an issue which is under active discussion, about what the most appropriate thing to do is. At the moment – I should actually double check what the exact spec says now. As currently written, with HLG as your color space, that puts you into HLG compositing, with the tone mapping you would get for HLG, and your pixel values are also HLG. Now, in earlier versions of the proposal I tried to separate the two out, so that you have a color space that you are working in, and then you have the compositing space that is being used to get it to the monitor.

So one example of the thing you might do with that: I'm going to be working in the linear space, but treat this as HLG for the purposes of compositing it. That ended up causing a lot of back-and-forth discussion and complicating the spec, so for the moment I pulled it out. But it is something that's near and dear to my heart and something I want to see happen.

Now, the way this works on macOS would be: there is the pixel format of the actual Metal layer or IOSurface, and then there is the HDR metadata – I forget the exact term that's applied to the layer. You can say, okay, this is a linear thing; use HLG for the compositing. That's a combination that a lot of people are going to want. It is not in the proposal, but it is something that I want to make sure is an option. Does that answer the question?

Andrew Pangborn It brought up a second question. There is composition within the canvas itself. So let's say I'm making a video playback app. I don't only have HLG videos or photos or whatever, but I also have image assets with antialiasing on them. I have subtitles that might have a drop shadow, and maybe translucent buttons. All of these things would have been designed and proofed for the sRGB compositing space. I guess I'm struggling to see how you map that kind of a design into, all of a sudden, compositing in PQ. Would that have to be done today with a second canvas on top, and then the two canvases themselves composited together in sRGB? I presume the same would apply to HLG canvases. It has to blend with the rest of the web content in sRGB, right?

Christopher Cameron Before I get to that, I hear reports that I have some echoes. Is it better? Is it worse now?

Chris Lilley It is the same.

Christopher Cameron Okay. I'm going to turn off my Bluetooth and turn on again. I will be back in just one second. Any improvement?

(Yes)

Christopher Cameron Yes? Okay. So one thing – when it comes to compositing elements of the page together, that has been de facto done in sRGB space, with sRGB pixel values. If you have #000000 and six Fs, all 0s and all 255s, those are the pixel values that get deposited. The proposal says that, excluding HDR stuff, that's still the space that web elements are composited together in. Because that's just what the web has decided. We're not going to rewrite every single web page because we feel we should be blending in a different space.

So that's excluding the HDR case. There is something that wasn't quite clear, so I will ask you to clarify the question. Is the question about having, say, an HDR canvas element and then a non-HDR canvas element on top of it, and what sort of blending happens there? Is that the question?

Andrew Pangborn Yeah, I was saying that's the only way I can imagine making an existing design, one designed with sRGB blending in mind, work on top of an HDR canvas. I guess I'm struggling to see how you can do that sort of blending in, say, an HDR space and have it look right. True, you could redesign your whole player if you know you are going to be in a PQ canvas. I'm curious – is that the direction people see, or am I missing something?

Christopher Cameron That I think is the easiest way to get things to work off the bat. The other option that is a possibility is that you could have a float 16 buffer that's sRGB encoded – to translate into Apple terms, it could be the extended sRGB color space. Work in that; specify that. And then it is possible to read all of the properties of the display – you have to ask permission for them – but then you could do all that composition however you would ordinarily do it, and then do your own manual tone mapping of, say, an HLG or a PQ image into that buffer. That way everything that your legacy application has just works, and then if you want to throw something that's HDR in there, we have a well defined mapping between this extended sRGB space and what's going to appear on the screen, and you can choose your own tone mapping.

If you want auto tone mapping deferred to the operating system – where it becomes an operating system design decision – then you have to do the mode you are discussing: you sort of sequester HDR into its own separate thing, to be tone mapped by the OS or by the browser, depending on which one is capable of it.

I'm a bit afraid that I'm not answering the question. Let me know if I'm only circling around it.
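To make the "do the math yourself" option above concrete, a minimal sketch – assuming the proposed float16 / extended-sRGB canvas options, plus a hypothetical permission-gated headroom query and tone-mapping helper; the proposal-specific and hypothetical names are labeled in the comments.

```js
// Hedged sketch: an extended-sRGB float16 canvas where SDR UI keeps its
// usual [0, 1] values and the app tone-maps HDR sources itself.
// 'srgb-linear' and `pixelFormat` are proposal values; getDisplayHeadroom()
// and drawToneMappedHlg() are hypothetical stand-ins.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d', {
  colorSpace: 'srgb-linear', // proposed: extended-range linear sRGB
  pixelFormat: 'float16',    // proposed: values may exceed 1.0
});

async function compose(hlgImage) {
  const headroom = await getDisplayHeadroom(); // hypothetical, e.g. 4.0
  // Legacy SDR UI just works: [0, 1] values mean what they always did.
  ctx.fillStyle = 'rgba(255, 255, 255, 0.8)';
  ctx.fillRect(0, 0, canvas.width, 40); // translucent button bar
  // The app does its own tone mapping of the HLG source into extended
  // sRGB values (> 1.0 allowed), clamped to the reported headroom.
  drawToneMappedHlg(ctx, hlgImage, headroom); // hypothetical helper
}
```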

Andrew Pangborn That makes sense to me. Once you have the HLG canvas, does the specification say anything about how that gets blended by the rest of the browser, or the operating system for that matter? Or is it, as with video playback today, up to the OS?

Christopher Cameron HLG video is a good example. How the HLG signal is mapped to the brightness that comes out of the display is delegated to the operating system now. On macOS, if I hand it an HLG buffer it is going to do a very particular thing, and presumably people will just want the platform default unless the application has a strong preference. So for how that gets blended – we kind of just ask the operating system to do it for us, please, there is the screen – that I think is the most reasonable thing to do. Of course, when it comes to the HDR parts of the proposal, this is part of a proposal. So while I'm giving answers about the direction I personally want the spec to move in, this is a great forum for the Q and A to actually be a little bit reversed: does that sound like a good idea to all of you, on that point of delegating this exact mapping to the operating system?

Andrew Pangborn I think from an implementation point of view, that's a practical place to start. Trying to view it through the lens of a designer or web developer, it seems like if you have an operating-system-dependent interpretation of your content, it makes it hard to design and deliver something that looks reasonable across a wide variety of devices. Ideally you want a well defined way that those different HDR and sRGB systems interchange. One of the talks mentioned conformance to ITU-R BT.2408, which provides a bunch of different options.

Christopher Cameron The mode that I personally prefer, in terms of iterations of the spec, is the idea that when it comes to pixel values there is a well defined mapping between all the different color spaces, so the math there is always well defined. But then there is a separate knob that you can turn on for an element, one that says: composite this using HLG, or with this HDR10 metadata, or any of those options. Keeping those two things independent – again, to translate into the terms of iOS and macOS, it is the separation between the pixel formats and the EDR metadata.

If you are working within the same EDR metadata, you know the mapping it will have. If you have different elements with different metadata, tone mapping is going to do something different for each. There are two different use cases that web developers have that we need to keep in mind. One is where they really want something to be standardized across all operating systems. The other is where people say: I want this HLG image to look the same in Windows Explorer preview, in the Finder and macOS Preview, and in the web browser – make this look right for the OS.

And the thought with delegating things is that, well, that's an option that you have: it is going to be done the same way that it would be done there. And then, you know, if the developer has a particular idea as to what tone mapping they want, they can query the browser for the properties of the monitor they are on, do the math, and create their own custom thing – which is not necessarily going to match another web page or the operating system, but which is going to deliver a consistent experience.

Chris Lilley That's quite clear. I don't think it was as clear from the slides. You have two options: you can handle it yourself, or hand it off. You mentioned something about fingerprinting, and there might be a permission prompt. Because if you look at the mapping – the stuff from ITU-R BT.2408 – it assumes that you know what the peak luminance is and you do these soft toe clippings. Do you think this can work even in the case where we have no idea what the display capabilities are?

Christopher Cameron So with respect to querying the display capabilities, there exists a spec, the Window Placement API. And it has a prompt where you can say: hey, are you going to let this web page have access to fingerprintable data about my display? For the moment that is things like the size and arrangement of all monitors on the system. That would be the very natural place to stuff in: here are the primaries. And it has callbacks for when the properties change.

So we would add all the HDR parameters into that API. But it is something that needs user permission, and it needs to be able to work without this information. And a little bit of clarification for people who were in on a lot of the earlier discussions about the spec: I wasn't aware of that API, and I was trying to avoid exposing fingerprintable properties, so I was doing all sorts of weird contortions. If people don't remember them, then best they don't. And if they do, please purge them from your memory :).
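The shipped entry point for this today is `window.getScreenDetails()` from the Window Placement (now Window Management) API; the HDR fields sketched in the comments below are assumptions about where the proposal would add them, not shipped API.

```js
// Permission-gated display query. getScreenDetails() and the fields used
// are real API surface; the HDR/primaries names in the comments are
// assumptions about what the proposal would add.
async function watchDisplays() {
  try {
    const details = await window.getScreenDetails(); // may prompt the user
    for (const s of details.screens) {
      console.log(s.label, s.width, s.height, s.devicePixelRatio); // shipped
      // Proposed additions would live here, e.g.:
      // s.redPrimary, s.greenPrimary, s.bluePrimary, s.whitePoint,
      // s.peakLuminance (all hypothetical names)
    }
    details.addEventListener('screenschange', () => {
      // Monitors were added/removed; re-read details.screens here.
    });
  } catch (e) {
    // Permission denied or unsupported: the page must work without this.
  }
}
```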

Simon Fraser This is the kind of spec that WebKit doesn't normally implement, because we don't believe that users will understand the implications of a permission prompt, and we want to avoid permission prompts. But more generally, one of my questions is: we've talked about HDR and canvas, but what about just HDR and images, or even HDR CSS colors? Why is canvas special is my question.

Christopher Cameron That's a very good question. I would say – CSS colors is something I want to leave a bit to the side for the moment. But the goal that I have with the canvas is to make it so that you can create a canvas that can do anything that an image can. And so the idea is that however an HDR image is treated by the browser, you can opt in to having a canvas treated the same way. So whatever tone mapping would be applied to an HLG image in an image tag, the idea with the spec is to give a mechanism by which you can create a canvas, draw that image into that canvas, and have it appear pixel for pixel on the screen as that image would. Does that answer the question with respect to image versus canvas?
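A sketch of that parity goal, assuming a proposed 'rec2100-hlg' canvas color-space value (not shipped API):

```js
// Sketch: give the canvas the same color space as the HDR image and the
// browser should tone-map both identically. 'rec2100-hlg' is a proposed
// canvas color-space value, not a shipped one.
async function canvasCopyOf(imgUrl) {
  const img = new Image();
  img.src = imgUrl;
  await img.decode();
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d', { colorSpace: 'rec2100-hlg' }); // proposed
  ctx.drawImage(img, 0, 0); // should match the <img> pixel for pixel
  return canvas;
}
```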

Simon Fraser I think it does. Except I think collectively we've thought more about how HDR and canvas works; we haven't decided how HDR images composite with the rest of the page.

Christopher Cameron Yes. Exactly. I would say it is that we've sort of delegated a lot of the behaviors of how an HDR image in the page should composite with the rest of it. And there is the idea that we are sort of sequestering each HDR element.

To move away from HLG for a moment: take an HDR10 video with metadata. Suppose you have two on the page, one on the left, one on the right. Those two are going to have totally different compositing modes. Each HDR element ends up composited independently and is going to have its own tone mapping applied independently, based on the parameters that it has. And the only place where things get tricky is: what if it is transparent, and how do you blend those things? That's the one area where I have been very intentionally vague. I know that right now all the browsers say we are going to do something close to sRGB pixel value blending for transparency, but in practice, well, we might wiggle it a bit if using something close is going to make a difference of a 50% power improvement. The one area where I feel we're being vague is if you have transparency, or something where you need to interpolate between HDR pixels and non-HDR pixels. Do we feel like that's in a well defined box for you? And if so, should we take a moment to discuss CSS colors?

Simon Fraser I think HDR images bring up some interesting power questions and performance questions – I mean HDR canvas too. If somebody has a thousand tiny HDR images on the page, we have to delegate rendering each one of those on Mac by making custom layers, and because they are HDR they ramp up screen brightness and cause additional power use. Is that okay? I am not sure.

Christopher Cameron Yeah. Creating a thousand separate layers – there are ways that pages can trigger a bazillion and a half layers already, and we can cut them off and say we are going to manually composite this and be close to what the operating system does. There could be that issue of a transition. Now, one interesting thing that you bring up that I think is worth digging into a bit more: right now, if you have a page that has HDR content in it, that's not benign in some sense. That's going to change the display into a mode that uses a lot more power, and a lot of times there is a big system clunk that happens as all the frame buffers are reallocated. And, you know, is that something that we want to allow any page to do, by putting a canvas out there or by having an image there?

And we actually live in this world now already. You can have a one-pixel HDR video hidden somewhere on your page that's going to do that. So it is something that can be done right now, and we're living with that aspect of it now. For that reason it doesn't feel like we are doing anything new. Should we rethink whether or not that should be allowed? Personally I think it's okay: if a page has bad performance, the user's usual response is, well, let's close that page.

That's my feeling on that area. So I don't feel like we're opening up anything new that wasn't already opened a while ago by video.

Chris Lilley That's an interesting tradeoff between offering too many permission prompts on the one hand, and letting the page do random stuff like strobing attacks or whatever on the other.

Christopher Cameron Yes. So I mean there is a question to be asked: should there be a permission for the ability to show anything in HDR? And my take is no. If the page is going to do something that's visually unpleasant, they can do that today. They can strobe things between black and white. HDR can make things slightly more unpleasant, but it is not a fundamentally different scenario – unless we get displays that can cause damage to the user, and that's a whole separate area to be in. There is a bit of silence after I made that comment. Did people feel comfortable in this world? We are in reversed Q and A: do people feel comfortable with a page being able to show HDR content, in the form of video and canvas?

Simon Fraser Comfortable? No. But we will have to learn through experience to some extent. WebKit traditionally has very strict no-power-regression goals, and if this ends up being a big power thing we might have to take steps to ameliorate it. One of the things we would probably want to avoid is things like ads and iframes using HDR images and triggering this. So something we have done in the past is to allow certain things in the origin of the main page and not allow them in cross-origin frames. We might apply those rules here.

Christopher Cameron That sounds like something quite reasonable that we should consider codifying more broadly. I agree.

Chris Lilley So you mentioned learning through experience. There is always a risk to early standardization, and also a risk to late standardization, because people are already doing stuff. It sounds like the general feeling is: try this out, see what happens, and correct course if it is going badly wrong. Does that sound right?

Christopher Cameron I think that's reasonable. I mean, we don't have HDR support anywhere except in videos yet, and I'm not sure that's going to change in the near future. I guess when it does happen I would like to see equal progress on images and canvas. I think it is okay if colors lag behind. But I think we should apply the same thought processes to images and canvas.

Christopher Cameron And to continue in the vein of what we were discussing, with respect to the tradeoff between the spec going too far ahead and wanting to ramp it back versus not: I have staked out my flag in the sand well beyond where the front line is, in saying that I believe we should have an independent HDR compositing mode parameter, separate from the pixel format that's backing a canvas. One way to be cautious about that, which is where the spec is right now, is to say: if your color space is Rec. 2100 HLG or something like that, then you get the compositing you would get if this were an image. The canvas can't express anything that couldn't be expressed by a dynamically updating image. That may be a good first step. I think it is a little bit on the restrictive side, but that may be the way to go, at least in the beginning.

One thing that I would like to take a moment to discuss is HDR and CSS colors. But we should probably round up these areas before we go in there.

Chris Lilley Right. There was a question in chat from Dmitry about web games, which applies to WebGPU and canvas. Games are using HDR, so won't web games want it? And I guess the obvious answer is yes, they will.

Christopher Cameron Yes. This has all been canvas-centric, but the idea – thinking about a game that uses physically based rendering – would probably be that you want to use linear sRGB with float 16, and that would be the mechanism through which one could specify not only wide color gamut but also HDR. And, to return unfortunately to this topic of the independent pixel format versus compositing mode, that's something where I feel the default of sRGB linear would allow wide color gamut, and you have a separate bit to allow HDR. And there are PRs: WebGL has a PR for adding color space and pixel format properties. WebGPU I'm less familiar with; I think a lot of that is already kind of there – the place or the parameter slot is already there, especially with the way they handle swap chains.
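A sketch of the back-buffer setup a physically based renderer might use. `navigator.gpu`, `requestAdapter`/`requestDevice`, `getContext('webgpu')`, and `configure({ format })` are real WebGPU API; 'srgb-linear' as a canvas color space and a separate HDR opt-in bit are assumptions from the proposal, not shipped values.

```js
// Float16 linear back buffer for a physically based renderer.
async function initHdrSwapChain(canvas) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const context = canvas.getContext('webgpu');
  context.configure({
    device,
    format: 'rgba16float',     // half-float back buffer (real format)
    colorSpace: 'srgb-linear', // assumption: proposed linear working space
    // hdr: true,              // assumption: separate bit to allow HDR
  });
  return { device, context };
}
```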

Chris Lilley So what you were talking about there brings us on to another point. So, yeah, this is a proposal. How do we move it forward? What venue, or multiple venues, does this get progressed in?

Christopher Cameron I have been making, and not keeping, various promises about a prototype. In the working group, the state of things since basically the summer has been: we have an idea of the API we want to expose; prototype this; and one day, when we can get in rooms together, we can look at what this looks like on various operating systems. That would be the idea, I would imagine. I definitely think this is something where people need to look at the actual results that come out of it before ratifying it in any form.

Chris Lilley Yeah, I agree. We need to check on different hardware. A prototype implementation sounds like a good way forward, honestly. So there were some questions from Dmitry about how this fits in with CSS color, and I guess I could take my chair hat off and talk about that briefly. We have CSS Color 4. That's wide gamut, but explicitly SDR only. And then CSS Color 5, which is currently mainly about color mixing and color contrast and that sort of thing – relative color syntax. And then I do have an unofficial draft of something called CSS Color HDR, which basically adds BT.2100 PQ and HLG and that sort of stuff, and starts talking more about the compositing of SDR and HDR. It has been shown to the CSS Working Group but it hasn't been adopted yet; it is a little bit early to ask them to adopt it and move it forward. I guess that's where that is happening. I drafted that spec mainly to convince myself, as I was developing CSS Color 4, that we wouldn't be painting ourselves into a corner – that there was a path forward to HDR as well.

Christopher Cameron So sorry. I had sort of a question and a comment on that. Did I cut you off?

Chris Lilley No, no, please go on.

Christopher Cameron So one thing that's been a property of all this HDR stuff has been the idea that each source of HDR is sort of sequestered: you have your image element and canvas element and video element. And my feeling with respect to CSS colors is exactly what you are outlining there: we are going to allow Rec. 2100 HLG or PQ, or you could say sRGB-linear. You could have (2, 2, 2), twice as bright as paper white.

The thing that would be needed there – my thought of what would be needed to allow, you know, text on a page to be twice as bright, if one wanted to do such a thing – would be a way to specify, for an element, the compositing mode for that element. So you would say: okay, this div is HDR and allows extended values, or this div is HLG, and that way it will match what HLG colors are. This starts moving in a direction that I have seen a lot of discussion on, but I don't think there have been any proposals in the area, which would be similarly having a concept of a working space for a div. Right now the working space of the whole page is implicitly sRGB, and that's where the blending is done. And this moves us to the idea that we can sequester out individual parts of the page, or an entire page if an entire page wants this. So that's sort of the roadmap where I see the proposal that you had fitting in.

Picking up some of these other comments that have been floating around: does that feel like a natural progression to you as well?
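For illustration, what the HLG and extended-value CSS colors just described might look like from script, per the unofficial CSS Color HDR draft (draft syntax, not shipped CSS; the per-element compositing knob is purely hypothetical):

```js
// Draft syntax from the unofficial CSS Color HDR document; not shipped CSS.
const div = document.querySelector('.hdr-caption');
div.style.color = 'color(rec2100-hlg 0.75 0.75 0.75)'; // HLG-encoded color
// An sRGB-linear extended value, "twice as bright as paper white":
div.style.backgroundColor = 'color(srgb-linear 2 2 2)';
// The per-element compositing-mode knob discussed above does not exist yet;
// something like the following is purely illustrative:
// div.style.dynamicRange = 'high';
```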

Chris Lilley Ahh. Okay. About five years ago, in 2016, there was a big issue about what should be the working color space, and so on, and we got stuck on that issue for quite a while. The problem is that it depends on what you are doing. If you are compositing, then you want to do that in linear light. If you are doing gradients, then you want that to be perceptually uniform. And if it is a legacy color, then it has to be sRGB. We ended up with something more nuanced than that, and ditched the idea of a single working color space that you set per page or per element. That was also contentious, because some people wanted one thing for the entire page – so the whole thing has to be opt-in – while other people said, no, I just want this little bit here to be HDR or wide gamut and the rest is sRGB. Which is fine, but also means that if you are doing compositing on a GPU, the GPU doesn't know what color space it is in; it is just getting pixel values. At the moment it is an sRGB-only world with little windows chopped into it to do other stuff.

Christopher Cameron Yeah. To talk about canvas 2D and the stuff that's ratified: one thing that ended up happening there was this question that on canvas there are all these operations that require interpolation between color A and color B. The choice that everyone sort of ended up on, and that is now there, is that whatever the actual back buffer's format and color space are, those pixel values define the space in which that interpolation happens, for better and for worse. So if one is using sRGB, that ends up being the space it is done in. If one wants to sometimes use perceptually uniform interpolation, sometimes physical linear light, and sometimes sRGB blending like everything else in the past, then your two choices are: write your own shaders everywhere, or have a separate element for each, each one doing its own thing. So that was where we ended up with the canvas element. It may be natural to have divs also do that sort of thing. I'm in no particular rush to make that happen.
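The interpolation behavior just described is observable today with the shipped canvas `colorSpace` option: the same gradient, interpolated in two different backing spaces, produces different midpoints.

```js
// Gradient interpolation happens in the canvas backing store's color
// space, so the same two stops interpolate differently in an 'srgb'
// canvas versus a 'display-p3' canvas (both shipped colorSpace values).
function gradientStrip(colorSpace) {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 32;
  const ctx = canvas.getContext('2d', { colorSpace });
  const g = ctx.createLinearGradient(0, 0, canvas.width, 0);
  g.addColorStop(0, 'red');
  g.addColorStop(1, 'lime');
  ctx.fillStyle = g;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  return canvas;
}
document.body.append(gradientStrip('srgb'), gradientStrip('display-p3'));
```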

Chris Lilley I'm interested to hear any other comments on that aspect. Okay. Not hearing any.

Andrew Pangborn Another sort of dangling thing from this talk, Chris, was the mention of ITU-R BT.2408 as an option, but being very complicated. I wonder if we can chew into that a bit. I downloaded BT.2408 and had another look at it, and yes, some of it – especially the Hermite splines – is a bit harder, especially in a shader. Could you explain more about what you want? And maybe Christopher, or someone from the BBC, could talk a bit about their experience in using that sort of stuff.

Christopher Cameron The principal concern – I don't remember in which context I said it was complicated. My guess is that what I viewed as complicated was any strategy for mixing – let's put it a different way. SDR and HDR content can be mixed in a smart way when you know everything about the content and the display at that particular moment. And that's something that can be done by the browser, which knows exactly how that display is doing things, or what displays are connected and what brightness they are at. It is not something you can do in an offscreen canvas that can't know anything about the display. It is not going to happen automatically: when you have an HLG image and an SDR image and you draw them next to each other, it is not going to be meaningful for all displays, or even for any particular display, because the canvas doesn't even know what display it is on. One of the reasons I was arguing to keep BT.2408 out of the default math was because of this scenario. I'm against trying to have the drawImage canvas math do something smart, and instead I'm saying: well, the user can either do all the math themselves, or delegate to the operating system – hey, here is my SDR and HDR, and you know what's going on with the display. I didn't want to add anything that did it automatically inside a 2D context. I wanted it to be the simplest math available. I think that's the context in which I made that comment.

Chris Lilley Yeah, I think so, too. It seemed to be in the context of converting one color space to the other. And I guess it is for compositing primarily.

Christopher Cameron Yeah. Converting one to the other inside a 2D context. Once HDR is pulled in, there is no way that it is going to be done right for all scenarios. So it ends up being something where my approach is: make the math simple, and make it so people can do their own math.

Chris Lilley Okay. So any other questions about this?

(None)

Deep Dive on HDR, WCG and Linear Rendering in WebGL and WebGPU

View presentation by Kenneth Russell (Google) & Kelsey Gilbert (Mozilla)

Chris Lilley We do have another presentation, from Ken Russell and Kelsey Gilbert, but sadly they are not here today. Although maybe Chris could talk about it a bit. It is mainly extra detail about the specific proposal, especially WebGPU. I don't think there is anything conceptual that isn't in the earlier talk; it is just more detail about specifically how you would do it at a coding level.

Christopher Cameron I can talk briefly to that. The main difference between canvas 2D and WebGL/WebGPU is that in canvas 2D, everything that you write to the canvas has a known color space. If you have a CSS color, it is in a known color space. An image has color space metadata. With video, all the inputs are in a known color space, and when you draw them they are converted, at the very beginning of the pipeline, to the color space of the canvas. All this conversion is done for you. There is no sense in which you can say you want a pixel value of 0.2, 1.3 in the back buffer – no raw access to the back buffer.

With WebGL/WebGPU you are writing numbers into the buffer. The general scheme is that there is color management when you pull things into a texture: when you are reading something into a texture, it might get converted, with a number of options. But the main property is that when you write values into the back buffer, the color management API says how the compositor is going to interpret those values. So you can say these values are sRGB, they are P3, they are HLG – all those various options. In WebGPU you attach that data to the swap chain; the WebGL proposal is to have it be a property on the rendering context. This is the way to interpret these colors. So that's the big difference in what the API looks like on the way in. On the way out, in terms of compositing, it is all the same: it is how the browser takes that buffer once it is handed off. And I might be able to answer WebGL questions; if I'm asked questions about WebGPU, I will certainly tell you when I don't know the answer.
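The shape of the WebGL PR described here, for illustration: a property on the rendering context that tells the compositor how to interpret the numbers you wrote. `drawingBufferColorSpace` has since shipped with 'srgb' and 'display-p3'; HDR values such as 'rec2100-hlg' remain proposals.

```js
// Tell the compositor how to interpret the rendered back-buffer values.
const gl = document.querySelector('canvas').getContext('webgl2');
gl.drawingBufferColorSpace = 'display-p3'; // interpret back buffer as P3
// WebGPU attaches the equivalent information when configuring the canvas
// (originally on the swap chain) rather than as a context property.
```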

Chris Lilley That's always better, yeah. So any questions on this aspect?

Andrew Pangborn So my main take-away was that it is essentially exposing the sRGB accelerated pixel formats. Did I understand the premise?

Christopher Cameron Yes and no. There are sort of two separate things. There is sRGB color encoding, which in effect makes the working space of WebGL, or any GPU, a linear space, with automatic conversion to sRGB. These are the sRGB-encoded pixel formats, which are variously supported and sometimes not. That's one thing, and it is related in that in these WebGL and WebGPU proposals you can specify that pixel format, which lets you work in a physically linear space and keep everything in eight bits. That's one aspect of it. And that's the main thrust of the talk: it is adding these pixel formats so that, in addition to using them as intermediate render targets as you can today, you can have your back buffer be that as well.

So there is that, which is to allow physically based rendering in eight bits without having to go up to float 16. That's one of the things that's enabled, through the pixel format option. The second component is the color space, and those are two independent parameters.
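The intermediate-render-target case that already works today, as a sketch using real WebGL2 enums; only the final sentence's comment (extending this to the back buffer) reflects the proposal rather than shipped behavior.

```js
// An SRGB8_ALPHA8 render target stores 8-bit sRGB-encoded pixels, while
// shader reads/writes and blending happen in linear light: the hardware
// encodes on write and decodes on read.
const gl = document.querySelector('canvas').getContext('webgl2');
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.SRGB8_ALPHA8, 1024, 1024);
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);
// Render linear-light math here in 8 bits per channel.
// The proposal extends this so the back buffer itself can be sRGB-encoded.
```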

Andrew Pangborn But you can have an incompatible combination then, right?

Christopher Cameron You can have a nonsensical configuration. For instance, you could say: I have an sRGB-encoded back buffer, and I want to interpret it as, you know, Rec. 2100 HLG as the color space. That doesn't make very much sense. My personal opinion is that there is no particular reason to disallow it. I think on all the platforms where I have attempted such a thing it has worked, but it is a foot gun, and it is a question whether we should allow or disallow it. Is that the sort of configuration you had in mind as something that doesn't work too well? And does the idea of leaving that foot gun available sound reasonable to you or not?

Andrew Pangborn To your point, I guess it is pretty much like that in any other GPU or compositing framework. Display P3 in particular matches nicely. But thinking about this, going back to a PQ or HLG canvas, it kind of makes you wonder if it makes sense.

Christopher Cameron Yes, if you are in PQ or HLG you probably want ten bits per pixel, so that wouldn't apply in that area. And indeed there are many ways you can specify these two parameters that are not particularly meaningful. I can say I want sRGB-linear with sRGB gamma encoding, which would do something, but one would have to do a lot of math to figure out what would show up on the screen.

Chris Lilley Yeah, it is not possible to make something totally idiot-proof, because idiots are very ingenious. Sometimes the effort to prevent them is not worth the trouble.

Christopher Cameron That's my take as well.

Chris Lilley Any closing questions about this?

Marc Jeanmougin I have kind of a related question. For WebGL, there is a common library used by Firefox and Chrome, ANGLE, as a back end for the implementation. Is there such a library for WebGPU? Or is it implemented in every browser individually?

Christopher Cameron So I can't speak to – okay, I probably shouldn't say anything, because the probability that what I'm going to say is a lie is very high. But I do believe that a lot of things are based on something called Dawn. I can't even say with confidence that's what's going on in Chromium, and I can't speak to any other browser for what's used for WebGPU. I will leave it at that. If one is looking for such a common back end for WebGPU implementations, I believe it is Dawn. Don't trust me, but that's where to start looking.

Rafael Cintron Yes, that's right.

Christopher Cameron Oh, okay. There is confirmation from Rafael. Chromium uses Dawn. Firefox uses a different one, which would also use Metal. Early days.

And Dawn has a Metal back end and all that stuff. That's how Chromium gets to Metal.

Marc Jeanmougin Another question: why isn't there a common library, like there is with ANGLE?

Christopher Cameron That's really moving outside of my realm of competence. My understanding is that Dawn has the aspiration to be something similar, but honestly I should probably keep my mouth shut when I don't know what I'm talking about.

Chris Lilley Okay. So thank you, everyone, for the discussion. This is the last of our live sessions. The next things that will come out will be the published minutes and a report of the workshop with calls to action, because it is very clear that this isn't the end of any discussion. We are just finding out which questions need to be asked and deciding where the answers should be worked on. I'm very interested to get anyone's input on where they think the next steps should be. Feel free to e-mail if you wish, or there is a GitHub repo for this workshop, and you can raise issues there.

Thank you to everyone. Thank you to the captioner, Tina, for doing closed captions for us. And I'll see you in whatever venue this work continues in next.

