Meeting minutes
Introduction
ChrisN: Welcome all,
today's topic is HDR support on the Web
… Thank you Pierre for proposing the topic, and we
can figure out next steps
HDR support on the web
Pierre:
https://
… This has been a long adventure, the goal has been
to bring HDR imagery to the web platform
… I'm sure you're all familiar with HDR. It's in
widespread use in consumer streaming, media, and TVs
… It's available in the video element in narrow use
cases. It works on Edge with specific monitor and video card
combinations
Nigel: When you say imagery, is that video or still photos too?
Pierre: It's both
Pierre: There are
different ways of doing HDR, the way in photography is different
than that used for motion picture and TV content
… And different for authoring of motion picture and
TV content. That's part of the complexity of doing HDR
… The fundamental limitation on the web today is
the canvas element, which allows drawing of arbitrary images but lacks
three main features for HDR images:
… Color spaces used for HDR images, today it only
supports Display P3 (on some platforms) and SRGB (on all
platforms)
… Limited to 8 bit per color channel, so you see
banding. That becomes intolerable with HDR images
… It's also missing metadata that's used in some
HDR applications
… The goal of the Color CG is to address that, how
to modify the HTML canvas element to allow HDR images to be
drawn
… Typical canvas use cases: drawing images not
supported by <img> tag, collage of images of different
sources, adding graphics to an HDR image, or draw an HDR image that
matches a video in a <video> tag
… With video, there's a path to supporting HDR
video. It's basically an opaque element, you just give it a video
stream, and it plays
… Same thing with the <img> tag, but HDR
isn't supported there at all, AFAICT
… The canvas element, by contrast, lets you
interact with the image and modify it at will
… As a group, we've been through multiple
iterations on a strawman proposal, over the years. It's gone from
simple to being complicated
… Over last few weeks, I've been working to bring
it back to a minimal proposal for HDR in HTML canvas.
… Not intended to be the ultimate answer for doing
HDR on the web, but based on current practices in the M&E
industry, where there are standards in widespread use
… Not intended to preclude other ways of doing HDR
on the web
… Any questions so far?
ChrisN: What's driven you towards simplifying the proposal?
Pierre: We're three
years into the process, but still don't have a concrete proposal
that people are implementing; it's been a moving target
… We wouldn't be discussing this today if there were
some level of support in HTML canvas. So it's an attempt to get to
consensus on a minimal starting point, and be less ambitious than we
were at the beginning
… Initially, people were looking at new ways of doing
HDR images, using linear color spaces, things closer to what
might be done in photography
… So the starting point was more complex,
speculative, risky - hence people that wanted to implement
concluded it's complicated so wanted to wait
… So proposal is to go back to a more well
understood path
… The proposal is straightforward: add colorspaces
for HDR images - I mean colorimetry systems. HTML canvas today only
supports srgb and display-p3. Add rec2100-hlg and
rec2100-pq
… These are non-linear spaces, using r', g', b' as
specified in ITU BT2100. These aren't supported in CSS Color 4,
unfortunately
… But many things in CSS Color 4 aren't supported;
maybe there are prototype implementations?
… One thing more controversial is how to increase
the bit depth of canvas beyond 8-bit. Proposal to start with
float16 - that's not super controversial, it's widely used for GPU
operations
… The part that is controversial, surprisingly, is
the format used when exporting the pixel array outside the canvas
element
… An interesting issue is float16 arrays aren't
supported in JS today, only float32 arrays
… There seem to be two camps. It seems hard for
the community to decide whether to export as a float32 array of
numbers or a float16 array. float16 requires modifying the language;
no objections to that, but it requires going to ECMA TC39
… Some see that as a bridge too far. Others argue for
using a float32 array, but that's a waste of memory bandwidth - which
becomes an argument against processing float16 data
… This proposal doesn't care, anything greater than
8 bit would work
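The float16-versus-float32 export tradeoff can be made concrete: since JS (at the time of this discussion) has Float32Array but no Float16Array, half-float canvas pixels either get widened to a float32 array (doubling memory) or packed into a Uint16Array by hand. A minimal sketch of that packing, illustrative only (no rounding, no denormal encoding):

```javascript
// Pack a float32 value into IEEE half-float bits (simplified: truncates the
// mantissa, flushes subnormals to signed zero, saturates overflow to infinity).
function f32ToF16Bits(val) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = val;
  const x = u32[0];
  const sign = (x >>> 16) & 0x8000;
  const exp = ((x >>> 23) & 0xff) - 127 + 15; // rebias exponent
  const mant = (x >>> 13) & 0x3ff;            // top 10 mantissa bits
  if (exp <= 0) return sign;                  // underflow -> signed zero
  if (exp >= 31) return sign | 0x7c00;        // overflow -> infinity
  return sign | (exp << 10) | mant;
}

// Unpack half-float bits back to a JS number.
function f16BitsToF32(h) {
  const sign = (h & 0x8000) ? -1 : 1;
  const exp = (h >>> 10) & 0x1f;
  const mant = h & 0x3ff;
  if (exp === 0) return sign * mant * Math.pow(2, -24);     // subnormal
  if (exp === 31) return mant ? NaN : sign * Infinity;
  return sign * (1 + mant / 1024) * Math.pow(2, exp - 15);
}
```

Exactly representable values such as 0.5 or -2.0 round-trip losslessly, which is the sense in which a Uint16Array view is "the same pixels" as a native float16 array would be.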
… Last item, which requires some work and isn't
explored in detail by existing use cases: it's customary for some
kinds of HDR images, e.g., PQ, to use metadata to indicate what
mastering display was used in creating the images
… Color volume, primaries, white point of the
display. There's no spec for how to use or tone
map with that data. But without it, it's not possible to match the
content of a video element that carries similar
metadata
… Another fundamental issue identified is what
happens if you draw multiple images, each with different metadata,
to the canvas. The canvas has only one set of metadata - so do you
ignore it, or take the superset?
… But most use cases in M&E are single full
screen content, so there's only one set of metadata. Needs
investigation, figure out how the metadata is used, or decide to
completely ignore it
… Something else that's a significant part of the
complexity: there's a good understanding of how to convert
between SDR color spaces, but a lot of questions about converting
from SDR to HDR color spaces and the reverse
… and between HDR spaces. Progress made in recent
years. ITU and SMPTE have recommendations, so it felt important to
include in the strawman proposal
… How to tone map is an inexact science, as it
loses information. But there are guidelines and recommendations on
how to do that
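The ITU and SMPTE recommendations themselves aren't reproduced here, but the shape of the problem can be shown with the simplest textbook curve, extended Reinhard: compress luminance so a chosen peak maps to 1.0 and highlights are rolled off rather than hard-clipped. This is purely an illustrative stand-in, not either of the methods in the demo:

```javascript
// Extended Reinhard tone curve (illustrative stand-in, NOT the ITU or SMPTE
// method). Y is relative input luminance; `peak` is the input luminance that
// should map to 1.0 in the SDR output. Values approaching `peak` are softly
// compressed instead of clipped, preserving some highlight detail.
function reinhardExtended(Y, peak) {
  return (Y * (1 + Y / (peak * peak))) / (1 + Y);
}
```

By construction reinhardExtended(peak, peak) is 1.0, and the curve is monotonic, so detail up to the peak survives in compressed form; the information loss Pierre mentions is exactly what this compression discards.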
… I put together a demo of tone mapping HDR to SDR
using various methods
… https://
… In the first image the colors are wrong, it looks
flat. But when tone mapped it has more contrast
… This uses two methods: SMPTE and ITU
… So it shows it's not impossible to do it, e.g.,
if you're doing a collage with multiple images
… That covers the proposal, I'm presenting it to
get consensus among various groups. Progress on issues such as how
to combine metadata from multiple images
… Can discuss now, raise issues, or contact me
privately
ChrisN: Simon also has a demo
ChrisN: [shares screen with BBC demo]
SimonT: We took Pierre's demo and added some more methods.
SimonT: We wanted to
look at HLG as well.
… Still from a camera test chart as the HLG
original image.
… Simple tone mapping method. The colours look as
you'd expect on an sRGB image.
… Question of an HLG canvas with PQ, or vice versa,
can you do the transform?
… We took PQ into HLG, relatively straightforward
transform, undo one and apply the other.
… If you've watched e.g. the NFL Superbowl recently
you'd have seen this transform in action.
… The SMPTE and ITU transforms are both very nice,
a couple of hundred lines of code,
… whereas the simple transform to HLG we put into
HTMLCanvas, with a colour space transform only,
… takes only a couple of lines of
code.
… Then we wondered if we could do PQ -> HLG
-> sRGB, and we found we could.
… It could be improved, it doesn't have tone
mapping and it does clip colors, but as
… a basic demonstration, it works, and is
computationally a lot simpler.
… As a minimum viable product, we'd probably end up
with a different but slightly better appearance.
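The "undo one and apply the other" step rests on the two BT.2100 transfer functions. Here is a sketch of the PQ EOTF (signal to display light) and the HLG OETF (scene light to signal), with constants from ITU-R BT.2100; note that a full PQ-to-HLG conversion per BT.2408 also involves the HLG system gamma / OOTF, which this sketch omits:

```javascript
// ITU-R BT.2100 PQ EOTF: non-linear signal E' in [0,1] -> display luminance in cd/m^2.
function pqEotf(Ep) {
  const m1 = 2610 / 16384, m2 = (2523 / 4096) * 128;
  const c1 = 3424 / 4096, c2 = (2413 / 4096) * 32, c3 = (2392 / 4096) * 32;
  const x = Math.pow(Ep, 1 / m2);
  return 10000 * Math.pow(Math.max(x - c1, 0) / (c2 - c3 * x), 1 / m1);
}

// ITU-R BT.2100 HLG OETF: scene-linear E in [0,1] -> non-linear signal in [0,1].
function hlgOetf(E) {
  const a = 0.17883277, b = 1 - 4 * a, c = 0.5 - a * Math.log(4 * a);
  return E <= 1 / 12 ? Math.sqrt(3 * E) : a * Math.log(12 * E - b) + c;
}
```

As a sanity check, pqEotf(1.0) gives the 10000 cd/m² PQ peak, and hlgOetf(1/12) is exactly 0.5, the HLG knee between the square-root and logarithmic segments.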
Nigel: Pierre, in your presentation you showed something about a canvas mastering display. Does that mean that there isn't a canonical exchange format for the data? It seems strange to send data about the mastering display.
Pierre: It's often
called SMPTE 2086 metadata
… The latest PNG spec draft that includes HDR, as
well as every codec and file format that includes HDR will include
that metadata
… It describes the mastering display, the
implication is that none of the pixels in the image extend beyond
that color volume. Intended to simplify tone mapping
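The ST 2086 fields can be sketched as a plain JS object; the values below are illustrative (a BT.2020-primary, D65, 1000-nit mastering display), and the actual serialized encoding in codecs and file formats uses scaled integers rather than this shape:

```javascript
// SMPTE ST 2086 mastering-display metadata, sketched as plain JS.
// Values are illustrative: BT.2020 primaries, D65 white point,
// 1000 cd/m^2 peak and 0.0001 cd/m^2 black level.
const masteringDisplayMetadata = {
  // CIE 1931 xy chromaticities of the mastering display's primaries
  primaries: {
    red:   [0.708, 0.292],
    green: [0.170, 0.797],
    blue:  [0.131, 0.046],
  },
  whitePoint: [0.3127, 0.3290], // D65
  maxLuminance: 1000,           // cd/m^2 (max_display_mastering_luminance)
  minLuminance: 0.0001,         // cd/m^2 (min_display_mastering_luminance)
};
```

The implication Pierre describes (no pixel extends beyond this color volume) is a promise about the content, not an instruction for how to tone map; that part is left unspecified.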
… But most applications ignore the metadata today.
And it's not specified how to use the metadata
… I'm not a big fan of the metadata, but the reason
it's in the strawman proposal is to match the content in a video
tag. So if there's a Netflix video in the video tag, and the
Netflix logo in an image - if you don't use the same metadata, the
two may not be rendered the same
… This is an opportunity to get to what the
metadata is for, and potentially ignore it
Nigel: On the technical argument that it's important to convey for presentation, I can see it working if there's a non-linear mapping between the mastering display and rendering display - so it depends on capabilities of both displays. Is that the case?
Pierre: In terms of
luminance, the tone mapping uses knowledge of the max and min luminance
of the image to map it to SDR, so you map between the max and
min
… So that's an example where it helps do a better
tone mapping.
Nigel: But that's sourced from the mastering display and not the image itself
Pierre: [shares screen
again] The ITU method assumes the HDR has a typical average and
median luminance to do the mapping, so it uses no information about
the image nor the mastering display
… The SMPTE method tries to map only the parts of
the HDR image that actually matter. So this method takes as input
the max, min, and avg luminance when mapping to SDR, so you don't have
to compute it
… Can use the mastering display min/max luminance
as a proxy for the image luminance
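Absent the metadata proxy, those statistics could be computed directly from the image. A minimal sketch over scene-linear BT.2100 RGB samples (the 0.2627/0.6780/0.0593 luma coefficients are from BT.2020/BT.2100):

```javascript
// Compute the max/min/avg luminance statistics that the SMPTE-style tone
// mapping takes as input, from a flat [r,g,b, r,g,b, ...] array of
// scene-linear BT.2100 RGB samples.
function luminanceStats(linearRGB) {
  let max = -Infinity, min = Infinity, sum = 0, n = 0;
  for (let i = 0; i + 2 < linearRGB.length; i += 3) {
    // BT.2020/BT.2100 luma coefficients
    const Y = 0.2627 * linearRGB[i] + 0.6780 * linearRGB[i + 1] + 0.0593 * linearRGB[i + 2];
    if (Y > max) max = Y;
    if (Y < min) min = Y;
    sum += Y;
    n++;
  }
  return { max, min, avg: sum / n };
}
```

This per-image scan is the work the mastering-display metadata lets you skip, at the cost of the max/min describing the display rather than the actual pixels.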
<kaz> demo page
Pierre: So knowing some characteristics of the display can help with tone mapping
Nigel: So having this standardised is beneficial, rather than proprietary?
Pierre: Yes, metadata
without a normative spec to use it isn't super useful
… I'm not a browser implementer, but the debate
about float16 and float32 is probably a lot more complex
Francois: Thanks for the proposal and discussion. So this would extend canvas 2d contexts, but canvas can also use WebGL or WebGPU. Each would have to define its own way of handling HDR. Have you aligned with the people behind those specs?
<tidoust> canvas.getContext()
Pierre: Earlier
versions of the proposal link to discussions in the WebGL and
WebGPU communities. Don't think anything is incompatible, float16
is uncontroversial with WebGPU. Goal is to have a unified approach,
use HDR canvas as a test case
… Trying to resolve all those questions, including
those about metadata, then propose extending it to WebGL and
WebGPU, as an approach
… Have the same approach applied to those other
contexts
Francois: I value this progress, but asking just to check about thinking about those aspects
Pierre: Because of the nature of W3C, those communities are somewhat independent, hard to wrangle, so start simple
ChrisN: What can our group do to help?
Pierre: Please review
the proposal, and send feedback. Please let us know now. Talk to
your web developers, would it help them? If you have a strong view
of whether HDR metadata is important, that would help
… Next quarter is about gathering feedback from a
range of groups
… At some point the TAG should look at
this
ChrisN: Is the complexity coming from general web pages combining images from different sources?
Pierre: I don't think
having multiple images or canvases is a big issue.
You can do display-p3 and srgb, and mix canvases. On Windows I have a
monitor in HDR mode, with SDR and HDR windows, and it works
… I don't think the complexity comes from that.
From a page author point of view, it's a different issue
Alicia: I don't have an
HDR display myself; I have a display that's more than sRGB. I've
dealt with colorspaces in photography. How could we handle, either
now or with this proposal, colorspaces attached to image files that
we draw in a canvas?
… For instance, I can have an image in display-p3
or a custom monitor profile, embed ICC profile in a JPEG or PNG
image. Generally if an application does color profiles correctly,
you'll see the image somewhat OK depending on capabilities of your
display
… I read a post from WebKit, which I think was the
only browser that did this in the img element. Not sure how much
traction it got
… One thing is rendering in an img element, but
what happens when you try to blit that element to a canvas, then
try to fetch the pixels back from the canvas? Or what happens in
between, what kind of pixels could browsers be expected to store in
the canvas?
Pierre: If you instantiate a rec2100-pq canvas, and the img tag has a PQ image, I'd expect the same pixels to be blitted
Alicia: But that's different from other pixels inside the canvas; the canvas needs to be aware that in some rectangle there are pixels outside sRGB?
Pierre: So you pick the color space for the canvas. If you instantiate the canvas with rec2100 color space and write an sRGB image, you won't get sRGB pixels out, there'll be a mapping
Alicia: I think of it as the color space for the canvas
Pierre: You can pick between display-p3 and srgb for the canvas today. So we're proposing to add ability to create as a rec2100-pq and rec2100-hlg, and the implementation tone maps
Alicia: Is this already implemented? There has to be some kind of API
Pierre: Today, when you
create the canvas there's a parameter that specifies the color
space
… It's not supported universally
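The parameter Pierre mentions is the `colorSpace` member of the 2D context settings ('srgb' everywhere, 'display-p3' where supported). A sketch of today's usage alongside the proposed additions; note that 'rec2100-pq'/'rec2100-hlg' are proposal values, not shipped, and the float16 option name here is hypothetical:

```javascript
// Canvas context settings: shipped today vs. the strawman proposal.
const settingsToday = {
  colorSpace: 'display-p3', // 'srgb' (all platforms) or 'display-p3' (some)
};
const settingsProposed = {
  colorSpace: 'rec2100-pq', // proposed new value ('rec2100-hlg' likewise)
  pixelFormat: 'float16',   // hypothetical option name for >8-bit storage
};
// In a browser this would be:
//   const ctx = canvas.getContext('2d', settingsProposed);
```

Per Pierre's answer to Alicia, an sRGB image drawn into a rec2100 canvas would be converted on the way in, so the pixels read back are in the canvas's color space, not the source's.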
Alicia: So we'd be giving it new values so that the color spaces can be used.
Chartering MEIG
Kaz: We discussed the draft charter in a previous call. We haven't added a Background section, so I suggest moving ahead without it, to get reviews
<kaz> Draft Charter
ChrisN: I agree. Proposed resolution is to go ahead with the charter as currently drafted, but remove the template Background and Motivation section (which is just a placeholder currently)
<igarashi__> +1
ChrisN: Any objections?
(none)
… So that's our group's decision,
resolved
RESOLUTION: MEIG would like to move forward with the current proposed charter
<kaz> proposed Charter
<kaz> [adjourned]