Canvas, WebGL, and WebGPU
Presenter: Christopher Cameron (Google)
Duration: 9 min
Hi, my name is Chris Cameron.
I'm a software engineer at Google working on Chrome, and today we're going to be talking about 2D Canvas's upcoming support for wide color gamut and high dynamic range.
So first we're going to start by looking at wide color gamut, and a good place to start is all the wide color gamut content that's already out there on the web.
So the first place where one is likely to encounter wide color gamut content is in media images: PNGs, JPEGs, and other image types include within them, or can include within them, a color space.
And that color space can be anything.
And so that's one place where wide color gamut content already exists on the web.
Another place where wide color gamut content already exists on the web is in CSS colors.
So most people are used to writing colors in CSS using the #840628 syntax or using the rgb(132, 6, 40) syntax.
One thing that's clear about this way of writing color is that it's 8-bit.
But one thing that people may not be aware of is that they're specifying an sRGB color.
It's not in some nebulous color space.
These colors are in sRGB. And in CSS Color Level 4, which has been around for quite some time, there's the ability to express CSS colors that are not just sRGB.
So CSS Color Level 4 introduces the snazzy syntax in these last two bullet points where you say color and then a color space and then a bunch of floating point values.
And one can say sRGB as the color space, or one can say display-p3 as the color space; there are a number of other predefined color spaces that could be used as well.
This is something that should be kept in mind because a lot of the color spaces that we're adding for Canvas are coming from this syntax here.
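To make those syntaxes concrete, here are the forms side by side; the first two are the classic 8-bit sRGB notations, and the last two use the CSS Color Level 4 color() function (selector names and channel values here are illustrative):

```css
.classic-hex { color: #840628; }                          /* 8-bit, sRGB */
.classic-rgb { color: rgb(132, 6, 40); }                  /* 8-bit, sRGB */
.level4-srgb { color: color(srgb 0.52 0.02 0.16); }       /* floating point, sRGB */
.level4-p3   { color: color(display-p3 0.52 0.02 0.16); } /* floating point, wide gamut */
```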
So moving onto Canvas, what's the situation today?
Well, 2D Canvas as an API is color managed, and the bitmap that's created for the Canvas, the output of the Canvas, is de facto sRGB.
Now, what do I mean by color managed?
I mean that every input, everything that's drawn to a Canvas has a color space associated with it.
Oftentimes that color space is sRGB but in the case of images or other things it may not be.
The color managing means that all those inputs are converted from whatever color space they're in to sRGB when they're drawn.
So as a concrete example, let's take this pseudo-code where I have my Canvas and I create a 2D context to draw into it.
I have myDisplayP3Image that I'm drawing and I'm going to draw a CSS color.
What happens today?
Everything that doesn't fit in sRGB is just chopped off.
So you can draw wide color gamut content to a Canvas, but it's just going to be clipped to sRGB, because Canvas is de facto sRGB.
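The pseudo-code described above looks roughly like this (a browser-only sketch; myDisplayP3Image stands in for any decoded wide-gamut image):

```js
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');        // backing bitmap is de facto sRGB

ctx.drawImage(myDisplayP3Image, 0, 0);      // P3 colors are clipped to sRGB
ctx.fillStyle = 'color(display-p3 0 1 0)';  // a green outside the sRGB gamut
ctx.fillRect(0, 0, 100, 100);               // also clipped to sRGB
```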
The big change coming to Canvas is that we can say we want that backing bitmap to be in something besides sRGB, like, for instance, display-p3.
And so this is the new syntax.
And when you specify this, all those colors that were getting clipped before are now going to appear as actual, full display-p3 colors.
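Concretely, the new syntax is a colorSpace option passed to getContext (browser-only sketch):

```js
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

ctx.fillStyle = 'color(display-p3 0 1 0)';  // no longer clipped:
ctx.fillRect(0, 0, 100, 100);               // drawn as a full P3 green
```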
Very small change.
Similarly, for the ImageData API: this is an API that lets you directly represent a bitmap image, mostly for CPU-side pixel manipulation.
It too is de facto sRGB.
So if I were to have this code where I create an ImageData and I write these pixel values, the pixel values that I'm writing are sRGB values.
And when I call putImageData, which is going to draw this image to the context, this data is considered sRGB, and it's going to be converted to whatever color space the Canvas is.
And one thing that people would probably want to do is to be able to specify ImageData that isn't stuck in being sRGB, and the syntax for doing that is just this.
And when you create this ImageData, the ImageData has an attribute that's the color space, and all the data is pixel data in that color space.
So the meaning of these pixel values changes: before, when there was no color space specified, they were sRGB by default; after, these are display-p3 pixel values.
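In code, the ImageData color space looks like this (browser-only sketch; the pixel values written are illustrative):

```js
const imageData = new ImageData(256, 256, { colorSpace: 'display-p3' });
const px = imageData.data;                       // pixel data, now in display-p3
px[0] = 255; px[1] = 0; px[2] = 0; px[3] = 255;  // the widest display-p3 red
ctx.putImageData(imageData, 0, 0);               // no sRGB reinterpretation
```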
The final API that's being changed to add support for wide color gamut is ImageBitmap.
Now, ImageBitmap is used for efficient, asynchronous bitmap representations; there are a number of different uses.
I'm going to focus on WebGL just for the purpose of example, and it too is de facto sRGB.
In this example, I'm loading a Blob, which is going to fetch data from this URL; ordinarily this would be done asynchronously.
And then when I get that Blob, I call createImageBitmap to decode whatever that Blob image was into a bitmap, and this is done asynchronously.
And then finally, I take this bitmap data and pass it into texImage2D to upload it to a texture.
Well, the new parameter that's coming for ImageBitmap is a color space parameter, where I can say: when you're decoding that Blob, also convert it into the following color space.
And because it's being done at decode time, and being done asynchronously, it can be extra efficient to do here.
And then this bitmap will be whatever was in the image at this URL, converted to display-p3.
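Put together, the flow might look like this; the colorSpace option on createImageBitmap is the new parameter being described here, so treat its exact name as provisional, and the URL is illustrative:

```js
const response = await fetch('textures/ball.png');   // placeholder URL
const blob = await response.blob();
// Decode and color-convert in one asynchronous step:
const bitmap = await createImageBitmap(blob, { colorSpace: 'display-p3' });
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
```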
So that's it for wide color gamut; that work is pretty well baked and on its way out the door.
Something that is more in progress is high dynamic range for 2D Canvas.
Now let's start where we did for wide color gamut and talk about where people encounter high dynamic range content either on the web or elsewhere.
So on the web there's already support for HDR media, images or videos.
And that content comes in either using Hybrid Log-Gamma (HLG) as its transfer function (as its color space, to abuse the terminology a bit) or PQ, the Perceptual Quantizer.
And it's usually 10 bits per pixel.
Meanwhile, video games or things doing physically based rendering have been using extended linear sRGB for a long time.
And they use 16-bit floating point.
And so they allow for writing values that are greater than one.
So if you write the value two, that's just going to be twice the brightness of one, whatever one is.
And so what's the proposal?
Well, we need more color spaces and we need more bits.
So the proposal is to add new color spaces, with rec2100-hlg and rec2100-pq as color spaces, and also sRGB-linear as a color space; and for creating the 2D context, we're going to want more than the 8 bits we get by default.
So one option would be to have 10 bits per pixel, or alternatively 16 bits per pixel.
These are all options that we're considering.
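If the proposal lands, context creation might look something like this; both the option names and the values here are hypothetical and subject to change:

```js
const ctx = canvas.getContext('2d', {
  colorSpace: 'rec2100-pq',   // or 'rec2100-hlg', or 'srgb-linear'
  pixelFormat: 'float16',     // more than the default 8 bits per channel
});
```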
Now, high dynamic range has some complications. For one, it uses a lot more power.
You have more colors, more power, more bits.
You don't want to use it unless you really need it.
So we're going to make sure that's something that the user opts into rather than making it something that we try to intelligently detect.
Fingerprinting makes things a lot more complicated for high dynamic range.
A lot of times, when people want to generate HDR content, something they want to know, in order to use it all throughout the pipeline, is, for instance, the maximum brightness of the display.
And that's something that we can't actually give in all of its bits to web applications, because that's a lot of bits for fingerprinting.
And then a third thing that makes life more complicated is: how do you convert between these HDR color spaces?
We have PQ, we have HLG we have just regular SDR and we have this extended linear space.
There is a spec, ITU-R BT.2408, that is an option to be considered.
It may not work for everyone, and for the people it doesn't work for, it adds a lot of complicated math.
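To give a flavor of that math, here is a sketch of just the PQ transfer function (SMPTE ST 2084), the kind of building block any of these conversions needs; the constants come from the PQ specification, and a full BT.2408-style mapping between HLG, PQ, and SDR involves considerably more than this:

```javascript
// PQ (SMPTE ST 2084) transfer function constants.
const M1 = 2610 / 16384;        // 0.1593017578125
const M2 = (2523 / 4096) * 128; // 78.84375
const C1 = 3424 / 4096;         // 0.8359375
const C2 = (2413 / 4096) * 32;  // 18.8515625
const C3 = (2392 / 4096) * 32;  // 18.6875

// PQ-encoded signal in [0, 1] -> absolute luminance in nits (0..10000).
function pqEotf(signal) {
  const p = Math.pow(signal, 1 / M2);
  const num = Math.max(p - C1, 0);
  const den = C2 - C3 * p;
  return 10000 * Math.pow(num / den, 1 / M1);
}

// Absolute luminance in nits -> PQ-encoded signal (inverse EOTF).
function pqInverseEotf(nits) {
  const y = Math.pow(nits / 10000, M1);
  return Math.pow((C1 + C2 * y) / (1 + C3 * y), M2);
}
```

For example, pqInverseEotf(100) comes out near 0.51, the PQ signal level commonly quoted for 100-nit SDR reference white.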
So that brings us back to the front page, which is where we started.
We talked about wide color gamut, and we talked about a high dynamic range in the context of 2D Canvas and what's coming down the pipeline for that.
Thanks for listening.
And I also want to thank a number of people who've helped a lot with educating me and with coming up with these specifications; that includes, but is definitely not limited to, Joe Drago at Netflix, Lars Borg at Adobe, Ken Russell at Google, and Jeff Gilbert at Mozilla.