Minutes of the third live session:
HDR: Compositing and tone mapping

Live, real-time transcription services were provided by Caption First, Inc. and these minutes are derived from those captions.

Present: Alexis Tourapis (Apple), Andrew Pangborn (Apple), Captioner Kelly (Caption First), Chris Lilley (W3C), Chris Needham (BBC), Dmitry Kazakov (Krita), Felipe Erias (Igalia), Ken Greenebaum (Apple), Lea Verou (MIT), Marc Jeanmougin (Télécom ParisTech/Inkscape), Max Derhak (Onyx Graphics), Mekides Assefa Abebe (NTNU), Mike Bremford (BFO), Pekka Paalanen (Independent/Wayland), Sebastian Wick (Wayland), Simon Fraser (Apple), Simon Thompson (BBC), Takio Yamaoka (Yahoo! JAPAN), Timo Kunkel (Dolby)

Chris Lilley For those that just joined, we are going to start at 5 past the hour. If you didn't have time to get a coffee, feel free to; if it is the end of the day, have a glass of wine, whatever is appropriate. Feel free to raise any points in the chat, that's fine. The session is also being live captioned by a real human, not AI, and you can tune in to that on Zoom as well: in the Zoom client, if you click on More there is a live transcript option that shows the captions.

A couple more minutes and then we'll start at 5 past the hour.

Comparison of linear vs composited HDR pipelines

view presentation by Timo Kunkel (Dolby)

Chris Lilley Let's get started. So the first presentation we will be discussing is Timo's talk about linear versus composited HDR pipelines. Composited pipelines, that's pretty much what we do on the Web. Timo, do you want to quickly summarize your two take-aways? And then we can get into the discussion.

Timo Kunkel Good morning, good afternoon, good evening, wherever you are in the world. Thanks for joining. So the presentation I submitted was trying to discuss the differences between linear pipelines, which is what we see in the marketplace at the moment, with TVs and also cinema now supporting HDR in lots of different flavors, where we get the quality increase over SDR; this is a fairly well established ecosystem now. The next steps obviously are getting us into the world of the web. And there are a couple of challenges that we need to look into that are different from when we deal with linear HDR.

So what is linear HDR? I discuss it in the talk. It is content, like a movie, that starts at a certain point, continues with a sequence of frames, and then ends at some point. It occupies the whole screen; in the context of HDR that means the content can take over how the image is presented and portrayed on the display, and that makes it easier.

If you go into the domain of composited content, with graphical user interfaces, we end up having to put together lots of content elements, place them somewhere spatially on the screen, and hope they still work. With SDR-coded content that is worked out; we have experienced it over the last 25 years with the web.

But with HDR we have some unique challenges. For example, how do we map content? You could end up with something that is pitch black on the left of your screen and a very bright image on the right. The two image elements by themselves are fully contained and mapped right, but now they end up being shown on the same screen at the same time.

And then you add all the layout and the graphics, text, white backgrounds and things like that on top. You can imagine that this gets complicated fairly quickly. And I have just talked about the spatial properties; now add temporal properties to that, animations or videos that are playing. You don't know at which point in time things will be presented. That's what I have been explaining in my quick presentation, just to give a kind of introduction and a high-level view of the differences.

And there are other challenges we have to overcome, and we are all working on it, that's the great thing. I'm looking forward, as a community, to tackling those issues and hopefully coming up with some good solutions.

I also added a lot of links and references at the end of the slide deck, just for further reading. I am coming from the linear HDR side, and that material is much more versatile, but a lot of that information is very valuable for the compositing case, in particular if you look into human vision; that's where we are heading with composited content.

Okay. Thanks, that's a quick overview. And now I open the mic for questions I guess. Or discussion.

Chris Lilley So one thing occurred to me, and I offer the question to get us started: where you have a full-screen presentation of a movie or something and you are in sort of a dark home theater or cinema environment, the content is the primary adaptive stimulus. On the Web, the assumptions about your viewing conditions are now broken; you don't know, and the room itself matters. Has there been any work on being able to deal with that, especially if you are re-rendering so it looks correct?

Timo Kunkel This is the field of ambient light compensation, and most TVs offer something that can do a compensation. There are lots of preference modes in TVs that give you a cinema dark or cinema home or movie light, whatever they call it, that adjust the mid tones. Just to add a plug, some televisions offer a mode that analyzes the content and the luminance in the environment and can adapt to that, and we (Dolby) sell other technologies that look into that.

But this is just the beginning; there are many more scenarios and it is a complex situation. Certainly we could group it into the mixed illumination situation. In reading through the literature, most of the time we assume we have two illumination sources, two different worlds: one inside the TV and one outside of the TV. You could be in winter in your dark house or living room and you watch a picture of a Caribbean beach in the summer and it is very bright. From a natural evolution point of view, that wouldn't happen. How does perception deal with that? We are mimicking it by looking at the screen, but from a vision point of view it is still a mismatch.

The other one is obviously how close you are to the screen. The closer you get, the larger the visual field of the screen becomes and the more you get immersed in your content. So there is a difference between you sitting close to an 18-inch TV versus having a 20-inch TV in your kitchen that you are not really paying attention to while you are cooking. It changes how you perceive the content. And everything we do with the Web, like going from a big-screen monitor to your mobile phone, is a similar kind of mix of environments.

Chris Lilley Other questions for Timo?

Dmitry Kazakov I would like to ask whether there are any approaches to compositing an application GUI together with HDR content. When listening to this right now, I got the idea that probably some games have already addressed this problem; I personally didn't investigate this topic. But it is really a problem in our project (Krita), because we have the GUI and the HDR image on the same screen, and in some cases the GUI might be just indistinguishable because it is made to sRGB lightness.

Timo Kunkel So obviously there has been work done, and it is still ongoing research. I think one key thing, as you mention it: what happens a lot is that you take the signal from 0 to 1 and just map it, and then you composite content as an overlay, and you will end up with graphics white being literally 10,000 nits if your display will do it. It will give you a headache after a while. So getting the mappings right is one of the current challenges.

So we have the interoperability point, the 203 nits that is in the BT.2100 family of standards, which is used as a reference for placing graphics white. There are other projects at 100 nits, or even going to somewhere like 400 nits, but you shouldn't go much higher than that if you have full-screen content and put graphics on top of it. What do you do if your video is shown at, let's say, maximum peak luminance? That will kind of blast you. So these are the kinds of perceptual details that we need to look into.
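
As a concrete anchor for the 203-nit level just mentioned, here is a minimal sketch (not from the talk) of where that graphics white lands in a PQ signal, using the SMPTE ST 2084 inverse EOTF; the constants are the standard PQ constants, and the HLG comment reflects the usual 75%-signal convention on a 1000 cd/m² reference display.

```python
# Minimal sketch: SMPTE ST 2084 (PQ) inverse EOTF, absolute luminance in cd/m^2
# to a non-linear signal value in [0, 1].

def pq_inverse_eotf(luminance_cd_m2: float) -> float:
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = max(luminance_cd_m2, 0.0) / 10000.0      # PQ is defined up to 10,000 cd/m^2
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

print(f"graphics white, 203 nits    -> PQ {pq_inverse_eotf(203):.2f}")    # ~0.58
print(f"diffuse SDR white, 100 nits -> PQ {pq_inverse_eotf(100):.2f}")    # ~0.51
print(f"nominal peak, 10000 nits    -> PQ {pq_inverse_eotf(10000):.2f}")  # 1.00
# In HLG the equivalent convention is graphics white at 75% of the signal range,
# roughly 203 cd/m^2 on a 1000 cd/m^2 reference display.
```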

And people are looking into that. It is just that I don't think there are commercial solutions rolled out yet that do all that stuff; these are just things to keep in mind. And there is a big difference between operational factors in HDR and human vision in the context of HDR. Both are important, but we can't equate them to be the same thing.

Dmitry Kazakov There is one more problem. With current HDR displays, the display can't brighten separate pixels; instead it brightens areas. If you have a small dot of light then the surrounding area is also brightened. That is okay for specially designed content, like fields. But if you have, for example – it was a bug originally, actually – a single cursor in the middle of the screen, then you will have a halo around this cursor. And that should be addressed somehow as well, I guess.

Timo Kunkel So the cursor example is a very good example, and that's a thing we don't have in the TV world because we don't really have small cursors. The problem you are addressing, with the small bright field, is about small features on the display device. There are display technologies – full-array LED displays, they are large – and hopefully in the future you will have micro LEDs that will do that. You have a process called dual modulation that does the illumination, and the coarser it is the more you potentially get these issues with halos. It works fairly well, especially with natural scenes, but the mouse cursor is a problem. So one solution is: don't make your mouse cursor 10,000 nits!

It is fairly well handled now with mini-LED backlights and everything that we are seeing coming up right now; the problem is mitigated. As a side note there is another issue, which is glare in the visual system. If you have pitch black backgrounds, you will still get glare in the eye – you could call it a halo as well. You can't do too much about that, but it shouldn't bug us because that's natural behavior. These are all things that we need to keep in mind. So glare is a problem if the content is not presented right on the screen.

Chris Lilley I think the VESA standards for HDR have different conformance levels, and basically as you go up it is not just increasing brightness but also an increasing number of backlight zones, so halo size is reduced.

Timo Kunkel Yes, it is getting better. In the beginning of HDR displays we had a limit of 20 to 50 LEDs in the backlight, and that's not enough to get a good experience. The LED driver pitch couldn't handle more because of the temperature. Mini LEDs can easily have thousands, and now we are getting to a spatial pitch that's acceptable. The problems will get less, and we are in the middle of that.

Chris Lilley Okay. Thank you. Unfortunately, I think it's time we should move on to discussing the next talk.

An Introduction to Hybrid Log-Gamma HDR Part 2: Format conversion and compositing

view presentation by Andrew Cotton (BBC)

Chris Lilley So Simon is going to be taking questions and summarizing this one. Andrew Cotton sadly can't be here today. Simon, do you want to introduce what the talk was about?

Simon Thompson Yeah. So I'm Simon Thompson from the BBC. We've spent the last three or four years trying to perfect workflows for doing HDR in live television. What Andrew was presenting was the sort of building blocks that you need in your system to be able to do conversions between sRGB or, in this case, BT.709 and HLG, and from HLG HDR back to SDR. One thing to note is that going from standard dynamic range to high dynamic range is relatively simple.

So in the slides Andrew gives a step-by-step guide of how to build a converter that goes in that direction.

I said relatively simple because there are actually two ways of doing it, and I think Andrew shows the main one. You have to decide whether you want the colors that would appear on a monitor at the end of the chain to match, or all the signals that would come out of the camera to match. If you take an sRGB camera or an Adobe RGB camera or one of the various proprietary production formats and point it at a natural scene, the look of the natural scene is different in all those systems.

sRGB seems to exaggerate the saturation if you don't apply a desaturation curve to it. If you are trying to match cameras at an event you have to do one thing; if you are trying to match the look on the TV in the home then you do something very slightly different.

And then also in the slides, Andrew goes through the opposite process, the process of going from high dynamic range down to sRGB.

Basically you can choose how complex you want to get on that. You have to do something to the HDR to force it to fit within the SDR color volume – you need to do some highlight or color tone mapping. Standard dynamic range color volumes have a much smaller gamut, so you have to do gamut mapping or clipping to get to the target format.

And in quite a lot of cases you can get quite close, but you do get psycho-visual effects that you may need to deal with as well. So in the slides Andrew shows two different ways of doing it. One of them is quite accurate. The other one uses the technique of just assuming that the HDR signal is a BT.2020 or BT.709 signal and then applying a gamma of 2.2, which is what you would expect a TV displayed in the home to do. The good thing about the second one is that it is quite easy to implement, and that's one of the techniques that the W3C is looking at at the moment (in the Color on the Web CG, for HDR canvas) for doing a simple rendering to SDR.
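
A minimal sketch of that simple route, as I read it (an illustration under assumptions, not the BBC's reference implementation): reinterpret the non-linear HLG code values as SDR code values for a gamma-2.2 consumer display, with an optional BT.2020-to-BT.709 primaries conversion done in the resulting pseudo-linear light.

```python
# Minimal sketch (illustrative, under the assumptions stated above).
import numpy as np

# Standard linear-light matrix from BT.2020 primaries to BT.709 primaries (D65).
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def hlg_signal_to_sdr_gamma22(hlg_rgb: np.ndarray) -> np.ndarray:
    """hlg_rgb: non-linear HLG code values in [0, 1], shape (..., 3).
    Returns non-linear code values for a gamma-2.2 BT.709 display."""
    # Treat the HLG code values as if they were display-referred SDR values
    # and decode with the assumed consumer-display gamma of 2.2 ...
    pseudo_linear = np.clip(hlg_rgb, 0.0, 1.0) ** 2.2
    # ... convert primaries and clip whatever falls outside BT.709 ...
    linear_709 = np.clip(pseudo_linear @ BT2020_TO_BT709.T, 0.0, 1.0)
    # ... then re-encode for the gamma-2.2 display.
    return linear_709 ** (1 / 2.2)

print(hlg_signal_to_sdr_gamma22(np.array([0.75, 0.75, 0.75])))  # neutral grey stays put
```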

And at the moment the W3C is also looking at the issue of how you render all of these various HDR and SDR formats into a single working canvas, and then how you get out of this working canvas onto multiple different displays.

So hopefully that work could lead to something that addresses some of the things the previous speaker – I'm afraid I didn't catch your name [Timo] – was highlighting with GUIs. And that's it for me. Any questions?

Chris Lilley Questions? (pause). Simon mentioned something called max RGB. Would you clarify that? That's taking the highest of the three components as a maximum?

Simon Thompson Yes. There is that, or the norm, which I think is something like the sum of the components to the fourth power over the sum to the third power. Either way you come up with something like the maximum of the RGB components, so you know how far out of gamut it is and just how much compression needs to be applied.
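
To make the two options concrete, here is a minimal sketch (my reading of the norms described above, not BBC reference code): maxRGB versus a power norm of the form sum(x^4)/sum(x^3), which behaves as a smooth approximation of the maximum.

```python
# Minimal sketch of the two norms discussed above (illustrative only).

def max_rgb(r: float, g: float, b: float) -> float:
    """The maxRGB norm: the largest of the three linear components."""
    return max(r, g, b)

def power_norm(r: float, g: float, b: float, p: int = 4) -> float:
    """Smooth approximation of max: sum(x^p) / sum(x^(p-1)).
    With p = 4 this is the 'x^4 over x^3' norm mentioned above."""
    num = r ** p + g ** p + b ** p
    den = r ** (p - 1) + g ** (p - 1) + b ** (p - 1)
    return num / den if den > 0 else 0.0

# For a saturated highlight the two agree closely; either value tells you how far
# the pixel sits above the target range and therefore how much compression to apply.
print(max_rgb(4.0, 0.5, 0.2), round(power_norm(4.0, 0.5, 0.2), 2))  # 4.0, ~3.99
```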

Chris Lilley Other questions?

Mekides Assefa Abebe I have another question which is very much related to what you just asked, actually. In Andrew's presentation I saw that he is saying tone mapping in RGB will affect the color more, so we should do it in a luminance space. Which luminance space are you using? Which color space? How are you computing the new values?

Simon Thompson No. So for the tone mapping, there are a number of spaces that we tried and we found that the one that worked best was Y′u′v′. There are other ways of doing it; you can do it in lightness or intensity. One of the problems that we have is bringing in things like score graphics and flags for sporting events. So if you are overlaying – you get your sports video feed from the host broadcaster and then you have to add in the score and the players' names and all that sort of stuff – you have to be very, very careful with the colors on those. If you are doing the Olympics, the colors of the Olympic rings have to be exactly correct, otherwise you have the Olympic committee in each country up in arms and measuring your content. And we found that in order to keep their design correct, working in Y′u′v′ was what we needed. We found that if we moved too far away from this you can get yourself into various bits of trouble with regulators.

But certainly – we did an experiment with quite a few, and they all worked reasonably well. The big issue that you see: in Andrew's paper he uses clipping of RGB to get back into gamut, which will induce hue errors. We did quite a lot of experimentation with different color spaces to see which ones are hue linear, so that when you are desaturating you don't get a change in hue. What we wanted was to ensure that the desaturation was similar amongst all the colors and highlights, so you didn't end up with red and orange staying quite saturated but blue and green becoming desaturated. We have come up with a compromise using tone mapping in one space and gamut compression in another space, just to try and match the amount of desaturation in the highlights that we were seeing.
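
As a rough illustration of the general idea of tone mapping in a hue-preserving space (my own sketch, not the BBC's actual operator; the tone curve is a placeholder): compress only the luminance while holding the CIE 1976 u′, v′ chromaticity coordinates fixed, so hue does not drift during the compression.

```python
# Minimal sketch (illustrative only): luminance-only tone mapping at constant u'v'.
import numpy as np

# Assumed linear-light BT.709 RGB <-> CIE XYZ matrices (D65).
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def tone_curve(y: float, peak: float = 10.0) -> float:
    """Placeholder global curve compressing [0, peak] into [0, 1] (Reinhard-style)."""
    return y * (1 + y / peak ** 2) / (1 + y)

def tone_map_at_constant_uv(rgb_linear: np.ndarray) -> np.ndarray:
    X, Y, Z = RGB_TO_XYZ @ rgb_linear
    d = X + 15 * Y + 3 * Z
    u, v = 4 * X / d, 9 * Y / d                # CIE 1976 u', v' chromaticities
    Y_new = tone_curve(Y)                      # compress the luminance only
    X_new = Y_new * 9 * u / (4 * v)            # rebuild XYZ from new Y, old u'v'
    Z_new = Y_new * (12 - 3 * u - 20 * v) / (4 * v)
    return np.clip(XYZ_TO_RGB @ np.array([X_new, Y_new, Z_new]), 0.0, None)

print(tone_map_at_constant_uv(np.array([4.0, 0.5, 0.2])))  # a bright, saturated red
```

Note that individual channels can still come out above 1, which is exactly the "get back into gamut" problem that the clipping or gamut compression step then has to solve.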

Chris Lilley That's an interesting constraint, the consistency across different colors. You mentioned you tried lots of different color spaces and I assume these ones are hue linear. Could you say which color spaces you used – did you try the constant-luminance ICtCp, for example?

Simon Thompson We did test ICtCp and Y′u′v′, and I can't remember which one my colleague used to implement it. But yes – Y′u′v′, Luv and YCbCr are not very linear in hue when you are desaturating; the other ones are much better.

One of the issues you get – I will use bright blue as an example because I think that's the one Andrew uses in his talk – is that if you actually look at the shape of the gamut at the intensity or luminance or lightness value of bright blue and then desaturate it, you have to desaturate a long way before you reach white. You end up with quite washed-out blues in your image, but the rest of the highlights remain saturated.

So yeah, there are various techniques you can use to try and change that. One of the topics that one of our students is looking at at the moment is: can you use some of the published appearance models rather than doing it mathematically based on hue and saturation? Is there a way of creating a smaller color volume in a color appearance model and mapping that way? But I think there are quite a few techniques out there.

Chris Lilley What you are describing sounds a bit like what the ICC does for perceptual rendering. They have a thing called perceptual reference medium gamut, and they first match to that. It gives them sort of consistent downscaling of the color gamut.

Simon Thompson Yes, that's a similar technique to where we ended up, I think.

Chris Lilley Is that written up in any more detail on the project website at the BBC?

Simon Thompson Not sure if it is at the moment. We do provide conversion lookup tables. But I don't think we are providing exact details of how they work at the moment.

Chris Lilley Okay. I would encourage you to do so if someone gets the time and energy. Other questions?

Andrew Pangborn Yes, can everybody hear me better today so far?

Simon Thompson Yes, I can

Andrew Pangborn Perfect. I want to expand on this example you brought up of blending player names or flags or other graphics into the HLG video. I'm curious – first of all, did any of those assets have transparency? Did they have anti-aliased edges, or straight opaque pixels that you were blending together?

Simon Thompson Some of them do have transparencies, yes.

Andrew Pangborn When you apply that transparency, what space are you doing that in? Typically on the Web you would have sRGB assets and then interpolate in gamma-encoded space. Were you doing that with HLG, or anything else? And what issues might that have caused?

Simon Thompson All of these transparencies are applied in hardware. The vision mixer is treating them as sRGB; it is doing what it normally does in the sRGB space. We are relying on the sort of 10-bit vision mixer to pass high dynamic range through – none of the current mixers on the market know about high dynamic range, they are agnostic. There are units that you can buy that do the conversion into a color space, but they are all either assuming sRGB, or they are taking an sRGB asset, doing a conversion to HLG, and then doing the blend in the nonlinear domain.
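
For context on why the blending domain matters, here is a minimal sketch (illustrative only) of the difference between alpha blending on gamma-encoded code values, as the mixer described here does, and blending in linear light; the 2.2 gamma simply stands in for whichever transfer function the signal actually carries.

```python
# Minimal sketch: alpha blending in the non-linear domain vs. in linear light.

def blend_nonlinear(fg: float, bg: float, alpha: float) -> float:
    """Blend directly on gamma-encoded code values (what most mixers and browsers do)."""
    return alpha * fg + (1 - alpha) * bg

def blend_linear(fg: float, bg: float, alpha: float, gamma: float = 2.2) -> float:
    """Decode to linear light, blend, then re-encode."""
    linear = alpha * fg ** gamma + (1 - alpha) * bg ** gamma
    return linear ** (1 / gamma)

# A 50% anti-aliased edge between a bright overlay (0.9) and a dark background (0.1):
print(blend_nonlinear(0.9, 0.1, 0.5))          # 0.50
print(round(blend_linear(0.9, 0.1, 0.5), 2))   # ~0.66
# The two disagree most on anti-aliased edges and soft shadows, which is where
# compositing SDR graphics into an HDR programme can show dark or light fringing.
```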

Chris Lilley Other questions? (silence) Okay. Hearing none, let's go on to Mekides' presentation.

Investigation of current color spaces for HDR content reproduction over the web

view presentation by Mekides Assefa Abebe (NTNU)

Chris Lilley Could you quickly summarize what you said and what your main points are so we can start discussing them?

Mekides Assefa Abebe Okay. So my presentation is actually very similar to what Timo and Andrew presented, particularly what Timo presented; Andrew's presentation was mainly focused on the HLG pipeline. My presentation followed the kind of color management process which has already been built into the color management modules of current browsers that support color management.

So first I tried to investigate what level of support is in current browsers. It is only intended for SDR content, and basically it does gamut mapping, from the SDR BT.709 gamut to other kinds of gamut. So I tried to briefly explain what's available in some of the browsers, and I tried to simulate the pipeline they are following in MATLAB. I see that some of them use ICC profile color management.

So they convert from the color specification of the input profile into the profile connection space, which in my ICC profile is the XYZ color space, and then from XYZ to the destination color space. If we follow that without any tone mapping, then we see very dark or inappropriate content, even when it is rendered as an SDR image. So just converting from color space to color space doesn't work for HDR, because the dynamic range is very high; it gives a very dark image.
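
A minimal sketch of why that happens (an illustrative assumption, not the MATLAB simulation from the talk): if the absolute HDR luminance coming out of the XYZ connection space is simply rescaled so that the HDR peak maps to SDR white, everything at normal scene levels lands very low; the 1000-nit peak and the example scene levels below are assumptions.

```python
# Minimal sketch: colorimetric-only rescaling of HDR to SDR, with no tone mapping.

HDR_PEAK_NITS = 1000.0    # assumed mastering peak
SDR_WHITE_NITS = 100.0    # nominal SDR reference white

def naive_relative_luminance(hdr_nits: float) -> float:
    """Scale so the HDR peak maps to SDR white; no tone mapping at all."""
    return min(hdr_nits / HDR_PEAK_NITS, 1.0)

for label, nits in [("skin tone", 30), ("diffuse white", 203), ("specular peak", 1000)]:
    out = naive_relative_luminance(nits)
    print(f"{label:14s} {nits:5.0f} nits -> relative {out:.2f} "
          f"(about {out * SDR_WHITE_NITS:.0f} nits on the SDR display)")
# Skin tones end up around 3 nits and diffuse white around 20 nits, which is why a
# tone-mapping step is needed on top of the plain ICC color space conversion.
```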

So we need somehow to composite properly, as Timo said in his presentation. Also the other way around: if we have SDR content and the user has an HDR display, then we have to do the reverse. Like Andrew presented with the reverse gamma, just doing a linear expansion of the dynamic range might harm the color appearance. There are some studies on how to preserve saturation when we do inverse tone mapping, which is similar to tone mapping. So we need this kind of process.

In my experience, I know that this is mainly related to where we are doing these processes – in which color space. As Simon said just now, while desaturating, Y′u′v′ was the best from the comparison, but that is only among the color spaces that were compared.

But in my experience we have seen that the dependence between the channels involving the luminance is very important. When we apply tone mapping, or reverse tone mapping, we are stretching the luminance part, and in these Cartesian color spaces the channels are really correlated. Of the color spaces that are available, most of them are Cartesian, and there is not very pure channel independence between the luminance and the chrominance; I tried to present that in my presentation. So yeah, if you have any questions.

Chris Lilley Are there questions? (no)

You mentioned the difference between global and local tone mapping operators, but also said that local is much more complex, and I assume, if it is done in hardware as Simon said, that it won't have been investigated at the BBC because it requires a lot more computational complexity. Do you want to talk about that a little?

Mekides Assefa Abebe Yes. For tone mapping, there are so many operators proposed over the past years, and there has also been a lot of research comparing different tone mapping operators. Most of the standards propose global ones because they are efficient, but in terms of image or video quality, of course, the local ones are found to be better because they preserve more detail and more contrast than global tone mapping.

But if we want to apply local tone mapping for applications like the web, for example, the computational complexity might be too high, and it might be challenging to implement them in real time, because most of them use expensive algorithms like (inaudible) – like bilateral filtering, and also median-cut kinds of algorithms, which are a bit expensive. But if we could manage to implement them in a very efficient way, then they are better than the global operators.
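
To make the global/local distinction concrete, here is a minimal sketch (illustrative only, not any of the operators referenced above): a global operator applies one curve to every pixel, while a simple local operator splits the log-luminance into base and detail layers – a Gaussian blur stands in here for the more expensive bilateral filter – and compresses only the base, which is why detail and contrast survive better.

```python
# Minimal sketch: global vs. base/detail local tone mapping (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_curve(y):
    """Same simple Reinhard-style curve used by both variants."""
    return y / (1.0 + y)

def global_tone_map(luminance: np.ndarray) -> np.ndarray:
    # One curve for every pixel, regardless of its neighbourhood.
    return tone_curve(luminance)

def local_tone_map(luminance: np.ndarray, sigma: float = 16.0,
                   eps: float = 1e-6) -> np.ndarray:
    log_l = np.log(luminance + eps)
    base = gaussian_filter(log_l, sigma)       # cheap stand-in for bilateral filtering
    detail = log_l - base                      # local detail layer, left untouched
    base_compressed = np.log(tone_curve(np.exp(base)) + eps)
    return np.exp(base_compressed + detail)    # detail rides on the compressed base
```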

Simon Thompson Just a quick comment on local versus global. There are a couple of local tone mappers available on the market at the moment – I'd have to look up the name of the company, it is a German company. One of the issues: if you imagine a sports game, you want a local tone mapper running on the video content and fixed (global) tone mapping running on the graphics overlays, and that's changing dynamically throughout the TV program. So it is an area that I think needs a little bit more work to be used in live production.

Mekides Assefa Abebe That's correct. I mean, if it is dynamically changing content it would be very challenging for consistency, because there would be flicker in the video due to the different parameters of these local algorithms. So yeah, there has actually been a lot of work on doing local tone mapping for video.

But as you said, if the content is changing and highly dynamic, that's a challenge. It is a research area.

Simon Thompson Yeah, it is not really the dynamic changes of the video. If you imagine it is a sports match and you have got a score in the top corner, you want the score to stay the same level all the time irrespective of what the video around it is doing. So you want the video to be dynamically adapted but not the graphics overlays that you have put on to the video.

Dmitry Kazakov I'm not sure the score should have the same lightness. It should adapt to the brightness of the main video. No?

Timo Kunkel I have thought about that. I think what we want to maintain is that the appearance of any kind of superimposed graphics stays the same. That does not mean that the code words or the luminance stay the same, so it needs to be handled in the context of what is shown. But obviously if the content changes so much that the overlay also changes drastically, then we can see that something changed. Ideally you want to implement it so that you can't see that things are changing, even though they are changing.

To follow that thought: local tone mappers – we will probably be seeing them more and more, and the question now is how they are implemented. The split between local and global is a bit of a binary separation; the truth is really somewhere in between.

In the early days of Flickr you had “HDR” images where the dynamic range was compressed to almost nothing, and you get extreme halos and super contrasty images. The challenge is that you are destroying the appearance framework, and then people say something is wrong: I might not be able to put my finger on it, but it looks fake.

We want to avoid altering the content to a level where it is completely different, so we need to find ways to do it without introducing additional artifacts. Fundamentally, local tone mapping is something worth continuing to investigate, but we need to find the right way; compressing stuff to almost nothing is probably not helping, and doing it in a well balanced manner is probably going to be difficult.

Chris Lilley Yeah, we have got color appearance modeling and image appearance modeling and also creative intent. In a film production environment, you have the best situation: you have got time, you know what's coming later, you can balance things up, and if it doesn't work out, you can do it again.

And in a broadcast situation, the match is playing, the footballers are running around the field, but also it is very brightly lit. You can say "I want this score to be these code values". You have a highly constrained problem, even with well-lit scenes.

Then you have the Battle of Winterfell, with people fighting zombies in pitch blackness. Suddenly you can't see anything, unless people have well set-up, high-end equipment and something close to the reference viewing environment.

Mekides Assefa Abebe Yes, I think you are right. The problem will get even more difficult, as Timo said, when complex time-based rendering is involved.

Most of my experience is on the media side; with the web side I am not that familiar. All my experience is with video, not canvas-based content. Timo very clearly explained that the complexity is much worse in that case, both perceptually and technically.

Timo Kunkel I think the one thing we can predict about the future is that compute power will go up, either locally on your chip or in the cloud. We know we will have the compute power; we need to identify what we want to do with it. There are a lot of considerations we need to look into. Most of the big color appearance models, like the CIECAMs, are spot-based models. They are great tools, but we need to find solutions for assessing color appearance temporally – video appearance models. There are lots of research papers out there, but we need to bring these things together. And there are tasks under way, as Mekides already knows, with the CE project we are working on at the moment; we are looking into all these things and making good progress. So I'm hopeful there are going to be some really great things to come in the next couple of years.

Simon Thompson With the W3C creating canvases for rendering HTML or similar onto HDR displays, you have also got the added complexity of a far larger range of devices and viewing environments that need to be dealt with. There is a world of difference between matching something on a huge screen in a dark environment and matching something on a tiny screen outside in the tropics at the height of summer.

Mekides Assefa Abebe Yeah, and there are also things related to individual-based adjustments, because every individual has different perception – it can be slight deficiencies or full color blindness, or simply that by nature every individual has different color perception. So individual-based automation or individual-based adjustments are also one part of the research. That's very true, and it tends to get almost overlooked in discussions of HDR. Once you get on to BT.2020 with laser primaries, then observer metamerism starts to become significant.

Timo Kunkel And on any device, display technology changes. So there are plenty of issues we have to tackle; we need to identify them first and then we can work on them. One risk, as you said, is this kind of latent perception that we have had HDR for the last six years, in consumer displays it is mostly done, move on to the next thing. There is still plenty out there that we can tackle.

Most displays only go to a thousand nits max; there are others that go brighter. We have to showcase how we can creatively use that, or compensate for your living-room-like viewing environment. And that is, as we heard, a big challenge if you are outside on the beach versus in a dark room.

A good example: if you are outside and the ambient light is high, you might have a perfect 2000-nit display, and things are still washed out even though you have an HDR display. Even if it is an immersive display, it can't match up with the outside. These are still technological challenges.

Could we build a display that can work on the beach or on a glacier? I have these glasses that chemically change to turn into sunglasses. You go somewhere too bright, your display is struggling and your glasses get dark – forget about seeing anything on the display. These are funny interactions with modern technology, but we see how these kinds of things all interact together. Hopefully we can at least learn how to put them together and then work on models to compensate for it.

Chris Lilley Another aspect that doesn't get talked about is calibration. We have SDR calibration, and a display matches a print. But if you have a box that says "I'm going to take some video in, and I might look at some metadata, with a histogram, and then I'm going to do things, trust me, good things", how do you calibrate that box? Control the backlight so you get the full range, define what's on the full screen, and then under that condition a small patch has changed. In general, calibration seems like a hard and almost abandoned problem. I heard someone in an ICC meeting say, oh, it is just not possible to calibrate an HDR screen. It is simply not possible.

Timo Kunkel Maybe I can comment on that. SDR calibration is already a challenge, and it gets more complex with HDR displays. Before, you had an unbounded signal, a signal between (inaudible). Now you have to ask: do I send it in HDLC(??) or HDR, or do I have to look into a format? How does my TV react to that?

Then another one is that HDR formats try to use every little capability of the display – all the color volume the display can do – also because of power management and thermal management. So if you calibrate, a common approach is a 10% window patch: you get an answer and you can make it track perfectly. Now change that patch size and you are looking at completely different behavior, because the linearity or nonlinearity has changed.

It is a big challenge to define what to measure and make it comparable. There are a lot of different measurement approaches out there. The ideal approach is to make it technology agnostic and make it repeatable and understandable to a user. One good resource is the International Committee for Display Metrology (ICDM). They just released a new 1.1a update to the Information Display Measurements Standard that includes a new chapter on HDR.

And the contribution they added in there might sound like it is not much yet, because they have only defined the standard, the format descriptions, and how to actually set up the measurements.

They are at the moment developing the test methods, which will be in the next update of the standard. If you want to encompass all those challenges, it is much more complex than coming up with something quickly. There are some really good examples by VESA as well, with DisplayHDR.

There are some good test approaches. And then obviously there are independent tool manufacturers, like I think Light Illusion in the UK and Portrait in the U.S. They have tools that simplify things considerably, to get some consistency into displays, and they also work with the display manufacturers to make it easier, for example with auto-calibration. That's a big help. Anyone interested in calibration, read the new ICDM standard, or join if you have the capacity – I'm a member, and we need any kind of help we can get.

And there are many more calibration issues outside of HDR; for things like display geometry there are open issues. I'm advocating that display calibration is important, but if you talk to the general public, for them it is not important. So if we can make it easy and graspable – maybe you can do it like what Apple did with the phone, where you hold it against your screen and it is done in a few minutes; that's a basic calibration, but it makes people aware that calibration is something they actually should think about – then that's good momentum.

Simon Thompson One of the big issues that we have seen is getting people to understand that you can't buy a display that shows all of the BT.2020 space or color volume. We were getting lots of questions, with people asking about this in various meetings: people were putting up full-saturation color bars, doing a hue mapping down to SDR, and the color changes. But the monitor can't actually display the color bars correctly in the first place when they are at 100% BT.2020, and most displays are quite a lot smaller than BT.2020, which is basically designed as a container format rather than a format that is displayed accurately.

So trying to get people to change that mindset – from "I'm looking at my color bars and they don't look right" or "the color looks different post-transform" to understanding that even with a calibrated display they still can't show those color bars correctly – is quite a big leap for people to take.

Chris Lilley We are coming up to the top of the hour, and I'm glad to see that we have solved all the problems :) This is clearly the beginning of the conversation, not the end. One of the outputs of this workshop will be what we do next, and it doesn't have to be W3C that does it: what is happening, where is it happening, who do we liaise with, and how can we make ourselves a road map, so that in the next few years we design the things that are needed and figure out where we are going. So thank you, everyone, this was a great discussion; I wish we had another hour because it was only getting started. There will be another session on Monday, and then on Friday. Since we mentioned hue linearity I should point out that my talk (that was recorded today) is about color linearity and mapping. There we are. Thanks everyone, thanks to the captioner and the attendees and speakers, and hopefully see you on Monday.

Mekides Assefa Abebe Thank you

Simon Thompson Thanks, bye!

