Minutes of the fourth live session:
WCG & HDR: Color creation and manipulation

Live, real-time transcription services were provided by Caption First, Inc. and these minutes are derived from those captions.

Present: Alexis Tourapis (Apple), Andrew Cotton (BBC), Andrew Pangborn (Apple), Captioner Kelly (Caption First), Chris Lilley (W3C), Chris Needham (BBC), Dmitry Kazakov (Krita), Felipe Erias (Igalia), Ken Greenebaum (Apple), Lea Verou (MIT), Marc Jeanmougin (Télécom ParisTech/Inkscape), Max Derhak (Onyx Graphics), Mekides Assefa Abebe (NTNU), Mike Bremford (BFO), Pekka Paalanen (Independent/Wayland), Sebastian Wick (Wayland), Simon Fraser (Apple), Simon Thompson (BBC), Takio Yamaoka (Yahoo! JAPAN), Timo Kunkel (Dolby),

Chris Lilley All right. Let's go. Kelly, for the captions, I hope you're all set? All right.

HDR for 2D content creators: HDR support in Krita painting application

view presentation by Dmitry Kazakov (Krita)

Chris Lilley Okay. Let's get started. So, Dmitry, your presentation is first, talking about the Krita application. Do you want to quickly summarize what you said, then we will go onto the discussion?

Dmitry Kazakov Yes. We implemented the HDR support in 2018, so it may be that some of my information is outdated.

My presentation covers two sides: the first is the user interface and how users can work with HDR as content creators. The second part, which is more interesting for me, is the technical implementation of how we did it.

As for platform support in Krita: on Linux, there is no HDR support at the moment; on Windows, there is no support in the OpenGL drivers. Some drivers may support it, but basically, if you want HDR, you need to use DirectX. We used a library by Google called ANGLE (Almost Native Graphics Layer Engine), which provides OpenGL on top of DirectX; this library is used by all modern browsers for this layer.

We implemented two extensions to the library, standard EGL extensions to select the surface format for the Krita window, so that we can render HDR content.

Basically we implemented it and it works, and there are probably three topics I would like to raise. First, actually profiling HDR displays and, as a content creator, the viewing environment. For a normal SDR workflow we have ICC profiles, color proofing and so on, so I can just load the profile of my printer and see how the image will look there.

For HDR content there is just no such thing. It doesn't exist, not even technically; by design, I think.

The second problem is that if I'm a content creator and I switch on the HDR mode of my display, I can no longer use SDR color proofing or display profiles. Display profiles rely on the Video Card Gamma Table (VCGT) tag being loaded into the video card, and this block is simply disabled in HDR mode. So as a content creator I have to either work in the SDR world, then switch to HDR to work with HDR content, and then switch back. It is not very convenient.

And the third question is: what is the future of the ANGLE library and OpenGL? But I think that's one for Friday's workshop.

Chris Lilley So thank you for that summary. You mentioned it is not possible on Linux, and on Windows you are using a converter from OpenGL; what do you do on OS X?

Dmitry Kazakov We don't have HDR on OS X. I think we'll have to use some Metal interface, but we don't do that at the moment.

Chris Lilley Ken, could you comment on whether this is possible and how they could get this going?

Ken Greenebaum For a long while, even longer than your application, we have had support for what we call EDR; it was something of a secret until recently. I gave the WWDC presentation on EDR last year, but it has been available for six or seven years already.

I rather like our approach. I think it solves some of the problems you are describing; if you noticed, I was nodding along, because I think you would get the ICC profile support and the profiling support. More generally, EDR is an extension of SDR for HDR. It's not something that's modal, and we don't take the SDR aspects away from you, including the soft proofing, the ICC profile support, and even the calibration and other mechanisms.

Dmitry Kazakov How does it do that? I mean, the compositor of the operating system should somehow compose SDR content into the HDR surface; how does it do that?

Ken Greenebaum You're right, that's required. The HDR surface is very simple: logically, not always practically, it is a floating point surface where the nominal 0 to 1 range is the SDR, the same as it always has been, including all of the color management we have all been accustomed to. The HDR is contained in the values that exceed 1.0, and they can go all the way up to floating point max; you can represent lasers and things that would damage your vision. It is up to the display system to figure out, based on the display and the brightness of the environment that you're in, how much headroom you actually have.
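
For readers less familiar with this model, here is a minimal sketch of the idea Ken describes, with illustrative names and a simple clamp standing in for whatever tone mapping a real display system would apply; it is not Apple's API.

    # Sketch of an extended-range ("EDR"-style) surface: 0..1 is SDR as before,
    # values above 1.0 carry HDR highlights, limited by the headroom the display
    # system currently reports. `display_headroom` and the clamp are illustrative.
    def encode_edr(linear_value: float, display_headroom: float) -> float:
        return min(max(linear_value, 0.0), display_headroom)

    print(encode_edr(0.5, 2.0))   # 0.5: ordinary SDR value, untouched
    print(encode_edr(4.0, 2.0))   # 2.0: highlight limited to the available headroom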

Dmitry Kazakov But it means, if you use it, that it is like the scRGB color space, I think? It is also possible on Windows to use values higher than 1 on the surface. Actually, no: it is a Rec.709 linear surface, and you can use values higher than 1, but since it is linear, the gamma table is disabled, because the table is supposed to produce a gamma response, not a linear one.

Ken Greenebaum I'm really unfamiliar with the mechanisms that Microsoft uses. We also use the VCGT, and the VCGT does a few things: it allows you to calibrate your display, so at least on Macs the actual display response, including any warts or imperfections, is captured as the native response. Then our compositing space has a tone response curve, and the VCGT is calculated to reconcile the two, in such a way that anything you have in your compositing space will be corrected by the VCGT so that it displays with the intended response on the display. That still works when EDR is enabled.

Dmitry Kazakov Okay.

Ken Greenebaum You may want to look at last year's WWDC; there is a presentation on that.

Dmitry Kazakov WWDC, EDR.

Chris Lilley Dmitry, it looks from the screenshots like you have a single-monitor solution, where the canvas and the dialogs and everything are displayed on the same screen. Is that it? Or do you need a two-monitor setup, where one is the preview and the other one has all of your system dialogs?

Dmitry Kazakov It works on the same display. That's one of the problems: we need to compose the GUI into the same HDR surface. So right now we just use the dumb approach of setting the GUI white to, I don't remember, I think 80 or 100 nits, so if you have a really bright HDR image, all of the GUI will look dimmed. I'm not sure we can do better; I'm not sure Windows can do that.

I mean, having two displays, one in HDR mode and the other one in SDR mode, I don't know if we support it but it would be fun to check it.

Chris Lilley So we have a question in the chat, why did you choose the PQ system for Krita and what would need to change if you used HLG? Or any problems with one system and not with the other?

Dmitry Kazakov The problem with HLG, as far as I understand it, is that it doesn't have any absolute definition of the colors; it is about transmitting raw data from the publisher to the device.

So I don't really know where to get HLG content as an image. If I get an image tagged as HLG, I don't know what those colors mean. At least that's how I understood the specification.

PQ is an absolute color space, so we can convert it into any other color space. When loading PQ files we just convert them into a linear color space, Rec.2020 linear, and work with them as a normal image. And just to add to the discussion we had on GitHub: when working with these images in a linear colour space the blending modes work, normal alpha compositing, addition, subtraction, multiply, so the basic compositing operations work fine, even with values higher than 1.0.
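
For context, a minimal sketch of the kind of decoding Dmitry describes: converting PQ (SMPTE ST 2084) code values to linear light, which can then be treated as linear Rec.2020 with values allowed to exceed 1.0. The constants are from ST 2084; the normalization of diffuse white to 100 cd/m² is an assumption for illustration only.

    # SMPTE ST 2084 (PQ) constants.
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32

    def pq_to_nits(e: float) -> float:
        """PQ non-linear value in 0..1 -> absolute luminance in cd/m^2 (0..10000)."""
        p = max(e, 0.0) ** (1 / m2)
        return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

    def pq_to_linear_relative(e: float, diffuse_white_nits: float = 100.0) -> float:
        """Express the decoded value relative to an assumed diffuse white level."""
        return pq_to_nits(e) / diffuse_white_nits

    print(pq_to_linear_relative(0.51))  # roughly 1.0, i.e. about 100 cd/m^2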

Chris Lilley Hmm. In other sessions we have had quite a bit of information on HLG. The basic difference is that PQ tells you what your display should show when you're in a standard viewing environment; if you're not in a standard viewing environment then you have to make some changes. HLG tells you what the scene looked like, and then you have to display it appropriately, so for different viewing environments you essentially scale up and down. That's how it works.

We have a couple of people with us from the BBC, so maybe they would like to say a little more.

Andrew Cotton Yes. Good morning. I'm certainly happy to add some more; I missed some of the context, so Simon Thompson, chip in if I am misunderstanding.

HLG is exactly as Chris explained, and it has more in common with traditional television than PQ does: it is based on BT.709, which is a camera OETF, a scene-referred system, just as HLG is.

We were actually fortunate in the days of cathode ray tubes because the behaviour of a cathode ray tube in different brightness viewing environments is, by a fluke, pretty much exactly what you want: as you adjust the brightness control on the CRT for a brighter viewing environment, its gamma changes. That is what happens with HLG as well, so the two are very, very close.

PQ is actually somewhat different, in that the code value tells you the exact luminance of each pixel on the reference mastering monitor. I would say anything you think you could do with PQ you could certainly do with HLG; if you're doing a format conversion, then you choose the reference viewing environment for HLG, and the specification of how to display that signal in that reference viewing environment, on that reference monitor, is clearly described in BT.2100.

Simon, I don't know whether I have missed a point there. If you would like to add?

Simon Thompson Not that I can think of.

Dmitry Kazakov I think I can add something. When we work with DirectX, we need to provide data to the display either in the PQ color space or in Rec.709 linear; we have only those two options. I don't think there is an HLG option, at least two years ago there was none. So we just chose what was available and converted into those color spaces using standard LCMS.

Andrew Cotton I think it is fair to say that because PQ was standardized first, in SMPTE ST 2084, a lot of early equipment only supported PQ; it was probably a couple of years later that HLG was standardized through the ITU. It is finding its way into all professional broadcast monitors for sure, and into more and more computer monitors as well. You should find it easier to work with HLG these days than perhaps a couple of years ago.

Chris Lilley I see the following question in chat. Pekka says "it sounds to me like an HLG display would be better defined than an arbitrary PQ mode display. How about proofing images with a HLG system." Andrew, do you want to answer that one?

Andrew Cotton Most definitely the HLG display is better defined than PQ. PQ just assumes the 10,000 candelas per square metre display, and it doesn't tell you what to do if your display has a lower peak luminance; some implementations may do a good job of tone mapping to lower-luminance displays and others may do a less good job, and absolutely nowhere is there any specification of what you should do with a PQ signal if you want to view it in an environment brighter than the mastering environment.

All of that is provided for HLG: the adaptation of the signal to the peak luminance of the display is in BT.2100, and the adaptation to different viewing environments is based on work that Simon Thompson did here, and that's in BT.2390. Definitely HLG displays are better defined than PQ displays.
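
For reference, a minimal sketch of the display adaptation Andrew mentions. BT.2100 specifies an HLG reference system gamma of 1.2 for a 1000 cd/m² nominal peak display and, in a note, a formula for adjusting it for other peak luminances; the surround adaptation from BT.2390 is not shown here.

    import math

    def hlg_system_gamma(peak_luminance_nits: float) -> float:
        """Extended HLG system gamma per the note in BT.2100:
        1.2 at 1000 cd/m^2, varying with the display's nominal peak luminance."""
        return 1.2 + 0.42 * math.log10(peak_luminance_nits / 1000.0)

    print(hlg_system_gamma(1000.0))  # 1.2 (reference display)
    print(hlg_system_gamma(2000.0))  # about 1.33 for a brighter display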

Chris Lilley Are there other questions? I have a couple myself but I will take other questions first.

Dmitry Kazakov I have the link to the Microsoft documentation on what color spaces we have for a Windows surface. Basically we can paint onto that, and the operating system is supposed to take care of the rest. We absolutely have no control over what happens to the data after that.

Chris Lilley I do have a question. You mentioned that there is an extension to PNG which can supposedly be used to encode HDR images. I was slightly involved in that, and my main involvement was trying to tell people, A, please don't do it like this, and B, if you absolutely have to, then please make it very clear what it is you're doing: it embeds an ICC profile and says, please, please don't actually use this ICC profile; instead, pay attention to the special name and use it as a flag to do whatever processing you do. As you showed in your slides, if you do use that profile, it just masks the entire range. There is a new working group which is going to maintain the PNG specification, and it just started a week ago. One of its work items is proper signaling of different color spaces. There is already a very small chunk to say that this is an sRGB image without embedding the profile, and the new cICP chunk is a similar thing, but it lets you say that this is one of the various color spaces defined in ITU-T H.273, so that should hopefully be a much better target.
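
As background, a minimal sketch of what the cICP chunk carries: four one-byte code points from ITU-T H.273 (colour primaries, transfer characteristics, matrix coefficients, and a full-range flag) in the usual PNG chunk framing. The particular code values shown (BT.2020 primaries, PQ transfer, identity matrix, full range) are just one example; check the PNG and H.273 specifications for the values you need.

    import struct, zlib

    def make_cicp_chunk(primaries: int, transfer: int, matrix: int, full_range: int) -> bytes:
        """Build a PNG cICP chunk: length, type, four H.273 code points, CRC-32."""
        data = bytes([primaries, transfer, matrix, full_range])
        chunk_type = b"cICP"
        crc = zlib.crc32(chunk_type + data) & 0xFFFFFFFF
        return struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)

    # Example: Rec.2020 primaries (9), PQ transfer (16), identity/RGB matrix (0), full range (1).
    print(make_cicp_chunk(9, 16, 0, 1).hex())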

Dmitry Kazakov I think we not only use the standard: we took this profile, and I think we use it to flag our own images in our own format, to check that an image is Rec2020PQ. Yes, we only use the name of this ICC profile, we don't use the profile itself.

Actually, LCMS, the color management system we use, does not support HDR, and so we have a special conversion that first linearizes this color space, and then passes it through.

It is perfectly fine to have values higher than 1.0 for LCMS. We just need to make the color space linear and that's it.

Chris Lilley Okay, thank you very much Dmitry.

Better than Lab? Gamut reduction CIE Lab & OKLab

view presentation by Chris Lilley (W3C)

Chris Lilley I'm in a slightly awkward position now, I'm trying to chair and also I'm the next speaker.

So I put together a presentation about the new color space called OKLab. I investigated it, I think it gives better results for gamut mapping, and I explained why.

Are there questions about this? Did people get a chance to look at this? I did upload the presentation rather late.

Max Derhak I have a question for you. It is a good presentation, a good introduction to the hazards of CIE Lab and what can be done about them. Looking at OKLab, it just uses a cube root, which ends up having a zero or an infinite slope depending on which direction you're going. Is that a problem? And what can be done about it if it is?

Chris Lilley I was honestly surprised they did that. I think it is partly that they're looking at implementing it on GPUs and such, so it is easier. Every so often I see people arguing that simple power laws are better, and also people saying, well, the linear segment is only really there to stop the noise from cameras and that sort of thing, because you can't have that infinite slope in an analogue system, it is obviously terrible. Yes, I was surprised not to see any linear segment there.

I made some notes on it. I asked Björn Ottosson, the originator of this, some questions; that wasn't one of the things I asked him about yet, but I can certainly ask. It does surprise me a little. Basically what he did was make some pairs of colors and use CAM16, then convert both of them to the candidate color space, swap the coordinates that were supposed to be identical, sum the deltaE2000, and numerically optimize until that sum was as low as possible. He also did that for another function, and he ended up with, I think, 0.323 or something as the optimum, and said, well, that's so close to 0.333 we're just going to use the cube root, which seems to be working.

The other thing I noticed was that if you look at the sRGB gamut, at least, and display gradients, say from white to yellow or to cyan, that gives you gamut mapping issues, and I found there were fewer in OKLCH and it converged more quickly. The other difference is that I found I had to use deltaE2000 to get a decent result in CIE Lab; just using deltaE76 was giving bad results. I put the implementation into CSS Color 4 in JavaScript so people can see it done, because implementations are often wrong on this.

But in OKLab you can use a simple Euclidean distance, a root sum of squares and it works fine. So obviously that's faster.
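
To make the comparison concrete, here is a minimal sketch of the conversion from linear-light sRGB to OKLab together with the plain Euclidean difference just mentioned. The coefficients are copied from Björn Ottosson's published reference implementation, so verify them against the original post before relying on this.

    import math

    def linear_srgb_to_oklab(r: float, g: float, b: float):
        """Linear-light sRGB -> OKLab (coefficients from Ottosson's reference code).
        Assumes non-negative inputs; out-of-gamut values would need a signed cube root."""
        l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
        m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
        s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
        l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
        return (0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
                1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
                0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_)

    def delta_e_ok(c1, c2) -> float:
        """In OKLab a simple Euclidean distance works as a colour difference."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

    white = linear_srgb_to_oklab(1.0, 1.0, 1.0)    # expect L near 1, a and b near 0
    yellow = linear_srgb_to_oklab(1.0, 1.0, 0.0)
    print(white, delta_e_ok(white, yellow))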

I see some questions in the comments.

Let's see. Okay. So some things may end up in Little CMS because Wayland is using LCMS? Several people from Wayland registered here, I wondered. There is a comment about using that in Weston. Can you explain what Weston is? I don't know it.

Dmitry Kazakov I think Weston is the reference implementation of a Wayland compositor for Wayland's protocol.

Chris Lilley Any other questions about this?

Dmitry Kazakov Could we perhaps use OKLab for mixing colors, for gradients? I mean, is there any benefit, from the user's point of view, to mixing in OKLab rather than in HLS, for example?

Chris Lilley Vastly better than doing it in HLS. Compared to doing it in CIE LCH, for example, you don't get the purpling and you get more even gradients. There was a good review of OKLab by Raph Levien which concentrated more on that aspect, how it works in gradients. His review (linked from my slides) has an interactive calculator so that you can pick some colors and see how they mix in different color spaces, and in general OKLab seems to be much better. It is much better behaved, I think. It is also simple.

Dmitry Kazakov I mean, we have, like, a dozen different ways to set the color in CIE Lab; perhaps it could make life a bit easier for users.

Chris Lilley Yeah. I think so.

I think there are two forms: there is OKLab, which is like CIE Lab, and there is the angular version, the polar form. So if you have two widely differing hues and you want to interpolate between them, you can go through the middle, so you pass by the gray axis, or you can go around the edge on a curve so you maintain chroma. If they're very far apart, you get the rainbow effect, which you may or may not want. I would certainly encourage investigating it; others have investigated and implemented it and it seemed to be pretty good.
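
A minimal sketch of the two interpolation choices Chris describes, assuming colors already expressed in OKLab: mixing in the rectangular form takes a straight line that can pass near the gray axis, while mixing in the polar (LCH-style) form, here along the shorter hue arc by assumption, maintains chroma.

    import math

    def oklab_to_oklch(L, a, b):
        return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360.0

    def oklch_to_oklab(L, C, H):
        h = math.radians(H)
        return L, C * math.cos(h), C * math.sin(h)

    def mix_oklab(c1, c2, t):
        """Rectangular interpolation: a straight line that can pass near gray."""
        return tuple((1 - t) * x + t * y for x, y in zip(c1, c2))

    def mix_oklch(c1, c2, t):
        """Polar interpolation along the shorter hue arc, preserving chroma."""
        L1, C1, H1 = oklab_to_oklch(*c1)
        L2, C2, H2 = oklab_to_oklch(*c2)
        dh = ((H2 - H1 + 180.0) % 360.0) - 180.0   # shorter arc
        return oklch_to_oklab((1 - t) * L1 + t * L2,
                              (1 - t) * C1 + t * C2,
                              H1 + t * dh)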

Other questions?

Max Derhak What about planes of constant lightness? You know, if we have CIE Lab lightness being constant like this, how is that mapped into OKLab? I would assume that IPT and CIECAM have a shift, so it's basically not the same thing.

Chris Lilley Yes. And there is the chroma compression that CAM16 has; if you do blends in CAM16 you end up with a strange unevenness, although it is supposed to be perceptually uniform. You don't see that in OKLab.

Other comments or questions? Okay.

Using iccMAX for HDR color management

view presentation by Max Derhak (Onyx Graphics)

Chris Lilley Max, would you like to explain what your talk was about, what conclusions you bring and what questions you would like to raise?

Max Derhak Okay. I was talking a little bit about iccMAX, or version 5 ICC. We have talked a lot in these sessions about various ways of approaching and talking about color. In my presentation I first identified that one way of going about things is just to have some identifier that says this is what it is: P3, BT.2100, BT.2020, whatever.

And if you want to use that kind of thing, then you have to understand how to go from the encoding into what you want to do, and it is up to the implementer how they do that. So making things consistent requires a consistent implementation of the different ways of doing things.

ICC color management basically provides the transformation from encodings to a representation of how the color appears, and that's maybe an advantage; in some cases it may be a disadvantage, because you only have one rendering, you don't have arbitrary renderings.

A challenge with ICC color management, as it exists in version 4 profiles, is that it is all energy based; it is really built around the idea of SDR color, and you can potentially tweak it a little bit to get to an HDR presentation.

It's a bit awkward, and challenging. In the ICC we have noticed aspects of color that didn't really get encapsulated in what you could do with an ICC profile. So we spent quite a bit of time creating extensions to color management, based on the principles of how ICC works: allowing better encoding using floating point, being able to express colors under different light sources, and making it so that profiles can have more robust definitions of transforms, as opposed to just simple curves, matrices and look-up tables.

In the process we have come up with a very extensive specification; it can do lots of things. So we also have the ability to define subsets of everything, so you don't have to implement everything if you don't really care about it.

So we have come up with some Interoperability Conformance Specifications (ICSs), and specifically for HDR we have the extended range ones, at different levels. The basic one allows you to define a version 5 profile that's essentially a version 4 profile without integers, so you have the full floating point range on the Profile Connection Space (PCS) without the confusion about the limits of energy-based encodings; that can be extended in the PCS, and further extended to allow more functional programming of the transforms themselves.

I think that pretty much covers what I was talking about: basically, ICC version 5 is available for people to use to define things. And having an image file format that uses an ICC v5 profile allows you to specify the entire dynamic range in the PCS, and you can potentially tie in to different aspects of the spectral nature of light and variation between observers, which has a lot of power to it.

Chris Lilley Okay. That's a good summary. Thank you. I would like to take apart those points one by one if I could.

So the first one is that ICC versions 2 and 4 require you to use a D50 white point, which means you have to do chromatic adaptation to and from the PCS even if your start and end points are D65. How much of a problem is that in practice? Are we talking about round-tripping errors, or simply that it is easier to do in iccMAX?

Max Derhak The big aspect of it is conceptual. I mean, you're basically going into a D50 XYZ, so the matrix conversion is there. Beyond rounding errors, it really depends on whether you're using an integer-based transform or floating point; with integers there is potential for additional errors as a result of that. So with the ability to have a D65-based PCS, when you're going through, there is no transform in the middle, you're just passing data straight through.
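
For readers unfamiliar with the step Max describes, here is a minimal sketch of the chromatic adaptation a v2/v4 workflow needs to reach the D50 PCS from D65 material, using the common Bradford method; the choice of Bradford and the white point values below are conventional but stated here as assumptions.

    import numpy as np

    # Bradford cone-response matrix.
    M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                           [-0.7502,  1.7135,  0.0367],
                           [ 0.0389, -0.0685,  1.0296]])

    # White points as XYZ with Y = 1.
    XYZ_D65 = np.array([0.95047, 1.00000, 1.08883])
    XYZ_D50 = np.array([0.96422, 1.00000, 0.82521])

    def bradford_adaptation(src_white, dst_white):
        """3x3 matrix adapting XYZ colours from src_white to dst_white."""
        src_cone = M_BRADFORD @ src_white
        dst_cone = M_BRADFORD @ dst_white
        return np.linalg.inv(M_BRADFORD) @ np.diag(dst_cone / src_cone) @ M_BRADFORD

    M_D65_TO_D50 = bradford_adaptation(XYZ_D65, XYZ_D50)
    print(M_D65_TO_D50 @ XYZ_D65)  # round-trip check: reproduces the D50 white point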

Chris Lilley You also mentioned spectral processing, which isn't directly related to your talk, but I know that spectral processing and also custom observers can be very useful for extremely wide gamut displays because of the sensitivity to observer metamerism. Could you maybe address this point a little?

Max Derhak Let me talk a little bit about that. Fairchild and colleagues did some investigation with observers, and they discovered that when your monitors have very, very narrow primaries, heading towards lasers, then if you look at the color matching functions, even the Z color matching function, they have steep slopes. So if there is any variation in the observer's matching functions and you have a very narrow primary, you're going to catch that curve at a very different place, and you get extreme differences in how the color appears.

So even among people who are color normal there is variation between observers, and that variation gets pulled out by the wide gamut displays that use narrow primaries. For a color management system, the two-degree standard observer is nice, but it's based on a lot of assumptions, and even within the CIE there is work towards replacing the standard observer with something based more on the cone fundamentals, which would potentially predict things a whole lot better.

I don't know if that quite answers the question you're asking. It is an important concept. The real key is that in doing the spectral stuff in ICC version 5, you specify the spectral characterization of the display, you have the characterization of the monitor, and it applies those to do the colorimetric calculation, but observer-specific; so you're not processing hundreds and hundreds of wavelengths per pixel, you're still processing an LMS-type value, which is still fast.

It is really a start-up step: the spectral stuff is calculated once, and from that point on you're doing essentially normal RGB processing.
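
A minimal sketch of the one-time setup Max outlines: integrate the display's measured channel spectra against an observer's colour matching functions once to obtain a 3x3 matrix, after which every pixel is an ordinary matrix multiply. The spectra and CMFs below are random placeholders, not real measurements, and the 5 nm sampling is an assumption.

    import numpy as np

    # Placeholder spectral data on a common wavelength grid (not real measurements).
    wavelengths = np.arange(380, 781, 5)                    # nm, 5 nm spacing
    display_spectra = np.random.rand(3, wavelengths.size)   # R, G, B emission spectra
    observer_cmfs = np.random.rand(3, wavelengths.size)     # observer-specific CMFs

    # One-time setup: integrate spectra against the CMFs to get a 3x3 device matrix.
    rgb_to_observer = observer_cmfs @ display_spectra.T * 5.0   # 5 nm sample interval

    # Per-pixel processing is then just a matrix multiply; no spectral data involved.
    pixel_rgb = np.array([0.2, 0.5, 0.8])
    print(rgb_to_observer @ pixel_rgb)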

Chris Lilley I brought this up in the same question because I know that some people get confused and think that doing it like this means doing full spectral processing, which is not the case. Also, in the context of content creation, I thought of an analogy with audio mastering, where the mastering engineer really knows their own particular monitoring setup, their speakers and so on, and listens to reference tracks so that they know their own little biases, how their own changes are reflected in the mix. They're not making the mix for themselves but for the public, and they want to know how it will sound to them. It sounded like this is a similar thing: if someone doing color grading switches on their own particular observer, they would produce better results, which would then be translated to the standard observer for distribution.

Max Derhak It will be critically important to have these dynamic ranges because there are wide variations between the observers.

Chris Lilley Any other questions for Max about this? Yes, I see one in the chat: how does the iccMAX encoding space differ from the gamut space, if we think of display profiles?

Mike Bremford With the display profiles, for instance, just thinking about the simple case of the extended range profile, it will essentially have a matrix and some curves in there, floating point encoded. In addition to that, it specifies essentially the luminance level of the relative white, how many nits it is, and the encoding can go above that. It is somewhat similar to the idea of HLG, where you basically have the middle ground, and it has a really nice, well-defined relationship to SDR and to the relative intent of ICC profiles for other SDR and print things.

However, if you really want to have matching of the luminance levels, having that white luminance in the profiles, then you can tell it to do scaling, so values are actually scaled to preserve the luminance, to achieve what you would want to do with something like PQ.

Max Derhak One of the things that we're currently exploring in the ICC is how you do tone mapping. A lot of this stuff gets complicated, and if all you really have is a number that says, hey, this is HLG, this is whatever, then the system has to figure out how to do it. It would really be nice to be able to say, well, I have the display profile, I have the profile for the image, and I would like to be able to put an abstract profile in the middle that does the tone mapping; that lets you define the tone mapping in terms of profiles, as opposed to having some logic built into every system to do it. It makes it easier to standardize something like that, because one of the things about ICC is that we're standardizing how you communicate the transforms, as opposed to standardizing the transforms themselves. That's a clear distinction: in the future, if you want to do something different, you provide a different profile that provides that transformation for you.

As opposed to: we want to add something new, so I have to add an identifier and tell everybody in the universe that they have to implement something else to support that new identifier.

Ken Greenebaum I like using abstract profiles a lot, but they seem to be pretty static, unless you can dynamically generate them.

Max Derhak You made a good point. The real key is that iccMAX has the ability to put an element in the profile. So the thing we're doing is, with an abstract profile, you have access in that abstract profile to the luminance level of the profile before it and the one after it, so that it can dynamically do the transform for you.

Ken Greenebaum That's very interesting. Thank you.

Chris Lilley Max, you're talking about the calculator element, right? Maybe you could say a few words about that, for people who are unfamiliar with it.

Max Derhak The calculator element: we spent time on this, and decided that there are a lot of things where, using just matrices and look-up tables, you come up limited. If you want to have spectral processing, we have much more dimensionality, and the look-up tables will blow up or become inaccurate.

So we came up with a method of encoding transforms with a very simple scripting language. Now, one key to this is that it is not a general purpose programming language. We were very keenly aware that it needs to be powerful enough to get done what needs to be done, but not so powerful that it can really mess up the system, become an overall performance problem, or even carry malicious code that would do bad things. There are some limitations to it. It doesn't allow you to do loops; every code path is known beforehand, so you can validate whether or not there is something bad going on in a calculator element. You can also determine the memory usage, and there is no ability for incoming pixel values to be used as references or indices into arrays and things like that. All of the calculator element operators are vector-based, so you can do things with multiple channels at a time. That makes it much simpler, and some of the things for which you would normally need loops you don't, because it is matrix and vector based.

An additional thing with the calculator element is that it has access to operations like matrix inversion, as well as to CMM environment variables, variables outside of the profile that it can access. So if you want to take into account the ambient luminance and you have a sensor, that value would be available to the CMM, and the profile would do the right thing to account for the ambient lighting. And that same mechanism, I think, is how you would have variables that accurately represent the values from the previous and next profiles, so that the tone mapping can be done.

Right now we have a group looking at the whole calculator element, and actually at ICC profiles in general, to identify what sort of security risks there are; Safe Docs is doing that, and we're working with them to give guidance from ICC to anyone who has reason to be implementing it. It is my hope, and I do believe, that at some point the calculator element could be implemented directly on GPUs too, so that you keep the speed; some sort of translation needs to be done.

Chris Lilley Okay. Any other questions for Max or for any of the other speakers?

Dmitry Kazakov I have a question for Max; it is probably a note rather than a question, about standard transformations versus dynamic ones defined with a scripting language. I think it is quite important to still have some standard way to express a standard transformation like PQ, because in Krita we can optimize it by hand using special processor instructions, for example, and GPU instructions, to optimize for that specific transformation; handling the common case with a scripting language is either too much work or it will be slow.

Max Derhak Yeah, there are considerations for that. One thing we're working on is a processing element that uses the script to define a look-up table. The real key there is that you're not necessarily running the script on every single pixel, but using it to get dynamic behaviour when defining the transform; the transform itself is then optimized for high performance. Performance will be the key issue. I also agree that there may be some cases where having the actual transform defined, as in "this is what it is", is what people want; that's really the video industry, cinema, they just have these things and all of the devices along the way know them. But when you talk about doing things for the web and so on, you may actually want people to have some flexibility to do new and interesting things.
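
A minimal sketch of the pattern Max describes, in generic terms rather than ICC calculator syntax: evaluate an arbitrary (possibly scripted) transform once to fill a look-up table, then apply the table with interpolation per pixel, so the expensive code never runs in the per-pixel path. The table size and the example power curve are illustrative.

    import numpy as np

    def build_lut(transform, size=1024):
        """Evaluate an arbitrary transform once over [0, 1] to fill a 1D LUT."""
        return transform(np.linspace(0.0, 1.0, size))

    def apply_lut(values, lut):
        """Per-pixel path: just a table lookup with linear interpolation."""
        x = np.linspace(0.0, 1.0, lut.size)
        return np.interp(values, x, lut)

    # Example: a LUT for a simple 2.4 power curve, standing in for a scripted transform.
    lut = build_lut(lambda x: x ** 2.4)
    pixels = np.array([0.0, 0.25, 0.5, 1.0])
    print(apply_lut(pixels, lut))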

Dmitry Kazakov In some cases a look-up table could be much slower than just implementing the conversion in the straightforward way, for example on the CPU or the GPU. That may be a problem if there is no standard transformation for it.

Max Derhak Well, in those cases, having the metadata in the profile saying this is what it is, that is useful.

Dmitry Kazakov If this profile name is …

Max Derhak No, no, not the profile name. As I mentioned in the presentation, there are actually tags that hold metadata specifying what it is; you look at the tag and you can do what you want and ignore the transform.

Dmitry Kazakov I had a second question: is there any implementation of iccMAX, at least a profile processing library? Because the authors of Little CMS, I don't know if you saw the message in the GitHub wishes, declared that iccMAX is too complicated and that they are not going to support it in any future features. How can we solve that? We cannot use it without an implementation.

Max Derhak Yeah, that's a good point. I think you're not necessarily going to see a lot of implementations of everything in iccMAX. I do believe that for the extended range profiles, if they have implemented the version 4 floating point tags, they could potentially implement that immediately, and then you have a very robust way of at least describing this, where you don't have to worry about the limitations of the fixed, energy-based profiles being required.

Dmitry Kazakov Perhaps iccMAX could have some levels of support?

Max Derhak We have that. That's the whole idea of the ICSs; we have the ICSs, defined from the beginning of the year, so that people can implement the subsets they feel are important. I would actually go back to Marti and folks and say, hey, how about implementing this particular ICS, because it would be very, very helpful for us.

Dmitry Kazakov Oh. Do you have a link to these levels?

Max Derhak It was in the presentation.

Dmitry Kazakov Okay.

Max Derhak My presentation has a link to the ICSs on the website. They're available and so they can be used. You know, I would really like to see implementations of it.

Dmitry Kazakov So there was a – there's a separate extension for HDR, right? Can we implement something that is HDR only?

Max Derhak Yeah. Yeah.

Dmitry Kazakov Okay.

Max Derhak The real key is to go to the people who are actually doing the color management and ask them what their feeling is about that. I can't speak for the various parties involved.

Chris Lilley So I see a question from Marc Jeanmougin in the chat: is there a way CSS Color 4 could specify how interpolation works? So yes, there is a section about interpolation, and it says how to do the interpolation and so on. Also, this use of OKLab in CSS is new; I just opened a new issue on the CSS GitHub about OKLab, pointing to the presentation from today. We will see whether that is accepted; it will be discussed at the next face-to-face meeting.

We have some material about interpolation; the idea was to put everything in one place for interpolating colors, and have the other specs reference that component, so they're all doing the math the same way. That's the idea.

Marc Jeanmougin Thanks. I missed the paragraph.

Chris Lilley No problem. Yeah. The spec is rapidly changing, it is getting very close to being a candidate recommendation. People just keep raising new issues, including me, eventually we'll get there.

Any other questions? We have a few minutes remaining.

Marc Jeanmougin It is still piecewise linear interpolation; it cannot be exponential or things like that?

Chris Lilley I see. Right. The idea of transfer functions and so on is certainly being discussed for gradients; some people want to use curves like those in animations, and some people want to have those sorts of things, yeah. Doing straight, simple linear interpolation when you have more than one step just gives you discontinuities, and that is currently being addressed as well.

Marc Jeanmougin Thank you.

Chris Lilley Any more questions? All right. Well, thank you very much, everyone. Thank you to the captioner. Thank you everyone. There is one more session remaining on Friday, then that's it.

