Minutes of the first live session:
HDR Introduction

Live, real-time transcription services were provided by Caption First, Inc. and these minutes are derived from those captions.

Present: Alexis Tourapis (Apple), Andrew Cotton (BBC), Andrew Pangborn (Apple), Captioner Kelly (Caption First), Chris Lilley (W3C), Chris Needham (BBC), Karen Myers (W3C), Lea Verou (MIT), Marc Jeanmougin (Télécom ParisTech), Pekka Paalanen (Independent/Wayland), Sebastian Wick (Wayland), Simon Fraser (Apple), Simon Thompson (BBC), Takio Yamaoka (Yahoo! JAPAN), Zachary Cava (Disney)

An Introduction to Hybrid Log-Gamma HDR Part 1: Mixed Display and Viewing Environments

view presentation by Andrew Cotton (BBC)

Chris Lilley It is five past. Let's get started. Andrew, it strikes me that there are two differences in the HDR systems: first, relative or absolute and secondly, to what extent they require re-rendering in a non‑standard viewing environment. From what I understand from your talk, it (HLG) is a relative system, it is not pegged on absolute luminances and it almost doesn't require re-rendering or it requires a very simple gamma function re-rendering rather than a complex tone mapping. Is that a correct characterization?

Andrew Cotton That's exactly right. From the outset, we understood that for television broadcast we needed to target a wide range of environments. We had no control over the devices and the rooms of people watching TV at home, and that essentially means you need such a system; that is the difference between PQ and HLG.

It was actually only by accident that we discovered that we would need this gamma adjustment for different brightness displays and different viewing environments. Early on, we thought we could just stretch the signal linearly for brighter displays and environments. As I explained in the slides, happily, it turned out that a simple adjustment of the gamma of the display was all you needed for the different brightnesses, and I explained why that should be the case.
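(A minimal sketch of the adjustment described here, assuming the formula noted in ITU-R BT.2100: the HLG system gamma is 1.2 for a 1000 cd/m² display and is adjusted for other nominal peak luminances; the exact applicable range is an assumption.)

```typescript
// HLG system (OOTF) gamma as a function of nominal peak luminance,
// per the adjustment noted in ITU-R BT.2100 (roughly valid for displays
// in the few-hundred to few-thousand cd/m2 range).
function hlgSystemGamma(peakLuminanceCdM2: number): number {
  return 1.2 + 0.42 * Math.log10(peakLuminanceCdM2 / 1000);
}

// hlgSystemGamma(1000) === 1.2, hlgSystemGamma(2000) ≈ 1.33, hlgSystemGamma(500) ≈ 1.07
```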

Chris Lilley It is easy to talk about gamma and changing the gamma with a simple power function for the entire display. Many systems have a linear toe, and here you have the hybrid: you have the log part and the gamma part. So, when you talk about increasing the gamma, could you explain how that affects the higher and lower end?

Andrew Cotton The little toe is usually in the camera OETF, effectively. I think sRGB is slightly different in why it is included, but for broadcast systems it reduces noise.

Chris Lilley It does, yeah.

Andrew Cotton We have the toe in the camera. It is largely there for noise suppression. As it crushes any noise in the black area, where noise used to be prevalent, it also has a nice artistic effect, giving the pictures more punch, and it is not reproduced in the display EOTF.

We had a long discussion in the standardization process as to whether we needed something like that for HLG, and camera manufacturers said they no longer needed it for noise suppression. Indeed, if you have it in a camera, it does quite a lot to limit the dynamic range in the shadows. We decided not to put it there.

So now camera manufacturers are having trouble reproducing the same effect. Essentially it is no longer necessary for technical reasons in HLG, but you may do something similar to give the right artistic effect: nice-looking pictures.

Chris Lilley The retro button.

Andrew Cotton Yeah.

Chris Lilley And then for the top part, the log part: when you adjust the gamma over the whole slope, what does that translate to?

Andrew Cotton We're talking then just about the opto-optical transfer function, the OOTF. That is independent of the actual shape of the display EOTF or the camera OETF; they're there for different reasons.

The OETF and the EOTF non-linearities are there simply to get the best noise performance out of the channel – analogue originally, then 8 bits, and now a 10-bit channel. That non-linearity is designed to minimize the visibility of banding artifacts. On top of that, we need an end-to-end non-linearity so that what is displayed looks subjectively similar to what was in front of the camera. With SDR it is confusing because both are roughly a gamma; with HDR that's not the case. In fact, for both PQ and HLG the OOTF is a gamma – not quite with PQ, but more or less – and the EOTF and OETF non-linearities are somewhat different from that.
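(A minimal sketch of the HLG OETF – the channel non-linearity being discussed, as distinct from the end-to-end OOTF – using the constants from ITU-R BT.2100 / ARIB STD-B67.)

```typescript
// HLG OETF: E is normalized linear scene light, returns the non-linear
// signal E'. Square-root segment below 1/12, log segment above it.
function hlgOetf(E: number): number {
  const a = 0.17883277;
  const b = 1 - 4 * a;                  // 0.28466892
  const c = 0.5 - a * Math.log(4 * a);  // 0.55991073
  return E <= 1 / 12 ? Math.sqrt(3 * E) : a * Math.log(12 * E - b) + c;
}
// hlgOetf(0) === 0, hlgOetf(1/12) === 0.5, hlgOetf(1) ≈ 1.0
```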

I don't know whether I have answered your question.

Chris Lilley I think so.

Maybe you and I should stop talking and let others chime in if they have any comments on this. I was trying to get the discussion started. We'll have others chime in here or say you're completely confused, whatever it is. Or that we're all wrong.

Lea Verou There is a question in the chat.

(someone) What is the difference between PQ and HLG?

Andrew Cotton So actually the way that HLG and PQ operate is fundamentally different. They're not just different shaped curves trying to do the same thing. It took us probably two years to understand what those differences were. There's a diagram at the back of ITU-R BT.2100, the ITU standard that specifies both; Annex 1 shows it – and maybe I can bring that up, I might be able to find it.

It shows that the OOTF for HLG is in the display; for PQ, it is effectively in the camera. Let me find the diagrams here.

Am I able to share my screen?

Chris Lilley Give it a go and we'll see if it works.

Andrew Cotton (shares screen) Let me start – this is the back of BT.2100, the end-to-end system: we have the scene light captured by the camera and the display light produced by the display, with an opto-optical transfer function which is there to ensure that the images on the display are subjectively similar to those in front of the camera. That is not unity; it is most often a gamma function of some sort.

For a system based on scene light – let me find the right set of equations - okay.

For HLG, the end‑to‑end system actually looks like this.

We have got an OETF in the camera, whose job it is to introduce a non-linearity such that, after quantization to 10 bits, you optimize the noise performance of the channel.

Then in the display, you apply the OOTF to make the pictures look right.

For PQ, the signal itself – this is a display-light system – represents the light produced by the display.

What that means is that in the case of PQ, because the OOTF is not in the display, it must effectively be in the camera. So this is the architecture of PQ: you have the scene light captured by the camera, you apply the right OOTF for the production environment and the display used in production, and then you apply the inverse EOTF to make good use of the 10 or 12 bits in the channel, before going through the opposite function in the display.

These are two fundamentally different architectures, and this is why we would say that with PQ, because the OOTF for the grading environment is burnt into the signal, the adaptation you need to do for a viewing environment or a display that is different from that environment is not specified, and it is possibly quite complex.

For HLG, because the OOTF is in the display and is not burnt into the signal for a specific environment, it is quite straightforward to adapt for different displays and environments, and we needed to effectively specify that as part of the standardization process.

The adaptation for different brightness displays is already specified in BT.2100, and then Simon did some work to work out what we need to do in different environments.

The two systems, they are really quite different.

Maybe you can just see it there.

For PQ, the OOTF for the mastering environment is burnt into the signal; for HLG, it is in the display, which knows what its own capabilities are and may have knowledge of the viewing environment.
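(A sketch of the HLG display-side chain just described – signal to scene light to display light – following BT.2100 but ignoring the black-level lift term; the PQ chain is not sketched, since its OOTF is applied in production rather than in the display.)

```typescript
// Inverse HLG OETF: non-linear signal E' back to normalized scene light.
function hlgInverseOetf(Ep: number): number {
  const a = 0.17883277, b = 1 - 4 * a, c = 0.5 - a * Math.log(4 * a);
  return Ep <= 0.5 ? (Ep * Ep) / 3 : (Math.exp((Ep - c) / a) + b) / 12;
}

// HLG OOTF: per-component display light (cd/m2) from scene light, using the
// scene luminance Ys and a system gamma that depends on the display peak.
function hlgOotf(
  [Rs, Gs, Bs]: [number, number, number],
  peakLuminance: number
): [number, number, number] {
  const gamma = 1.2 + 0.42 * Math.log10(peakLuminance / 1000);
  const Ys = 0.2627 * Rs + 0.6780 * Gs + 0.0593 * Bs;
  const k = peakLuminance * Math.pow(Ys, gamma - 1);
  return [k * Rs, k * Gs, k * Bs];
}

// HLG display chain: displayLight = OOTF(OETF^-1(signal)); the OOTF stays in
// the display, so the same signal adapts to different display peaks.
function hlgDisplayLight(
  signal: [number, number, number],
  peakLuminance: number
): [number, number, number] {
  return hlgOotf(
    signal.map(hlgInverseOetf) as [number, number, number],
    peakLuminance
  );
}
```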

Does that answer the question in chat? I will ask the person who raised the question whether that's now clear?

(someone) What is the difference in the signal, though?

Andrew Cotton The signal is the same for many different displays, but the displays need to do some mapping: not only for the different brightness displays, they also have to do something to adjust the midtones for different viewing environments, and if they want to make it brighter they have to do additional processing.

Things like Dolby Vision have mechanisms to do all of that; the way to do it is not specified.

The OOTF is burnt into the signal, and chances are that it is the wrong OOTF for the home viewing environment, or for many people viewing in that domestic environment.

You have to do adaptation, and it is not just adjusting highlights; you have to do something for the midtones as well, for the brightness of the display or the viewing environment.

Chris Lilley Right.

I think what this brings up is the difference between a broadcast or movie environment and a web environment. It is easy, especially if you are making feature films, to assume people have a home cinema set-up, blackout curtains, whatever – to say they will use the reference viewing environment, otherwise they're not going to get the full benefit of their great expenditure.

The web isn't consumed like that. The web is consumed in brightly lit offices and outside and so on.

The system which is more flexible with regard to the viewing conditions, and is also simpler to implement with regard to viewing condition changes, seems a better fit to me.

Andrew Cotton That was a motivation for us developing HLG; we didn't do it just because we wanted interesting work to keep us busy over the next ten years.

Well, of course, Dolby did great work.

After one fantastic demonstration back in 2013, we looked to see what we would need to do to deliver that type of service to the home. There were a handful of things that we thought were not quite right for television.

And it was not only the BBC that felt that: if you look at the scope of ST 2084, the standard for PQ which came before the ITU standards, it says it is primarily for mastering non-broadcast content. So we weren't the only people that felt it would be a problem.

Any other questions?

(someone) I have a question. The goal is to adapt to different viewing conditions, right? There must be a way to access the ambient brightness – sensors. Would you expect it to be a sensor in the viewing device, like in the monitor, or should that be at the software level, which could take a brightness reading from another sensor?

Andrew Cotton I think we don't particularly mind. There are some TVs that do this now and they have a sensor, some in the front, some in the back; it is the surround luminance you need to measure. If you're not measuring that directly – there are problems with both solutions, to be honest; neither is perfect. If you're measuring the light falling on the front of the screen, you have to make assumptions about the color of the walls behind the TV; if you measure the light at the back of the TV, then there may be things in the way of the sensor. Some current TVs have sensors; the first TVs actually just had a user control to allow you to change the gamma, and that has been available in standard dynamic range TVs for quite a few years.

I think we don't really mind, that's the ideal.

You could have presets for different expected environments, right now my laptop adjusts to what it thinks is daytime or evening, it could be something, you know – I was going to say as crude as that, that's the wrong word, I don't wish to upset anyone on the call, as basic as that or as simple as that.

Chris Lilley Yeah. I think simple controls have a long history in television. I mean, contrast and brightness – black and white TVs all had those, right? They were strangely labeled the opposite way round, but apart from that, there were two knobs and you turned them until it looked right.

Andrew Cotton Yeah. You're right. Particularly brightness.

Chris Lilley W3C has some APIs – the Ambient Light Sensor – and it is controversial whether you should have access to that or not.
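(A hedged sketch of reading the W3C Ambient Light Sensor via the Generic Sensor API. Support is uneven and often permission- or flag-gated; the type declaration is provided here because it is not in every TypeScript DOM lib, and adjustSystemGamma() is a hypothetical hook, not a real API.)

```typescript
// Minimal declaration for the Generic Sensor API's AmbientLightSensor.
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null; // ambient light level in lux
  start(): void;
  stop(): void;
}

function adjustSystemGamma(lux: number): void {
  // Hypothetical: feed the surround level into whatever performs the
  // HLG OOTF gamma adjustment (display, compositor, or application).
  console.log(`surround ~${lux} lux`);
}

if ('AmbientLightSensor' in globalThis) {
  const sensor = new AmbientLightSensor({ frequency: 1 });
  sensor.addEventListener('reading', () => {
    if (sensor.illuminance !== null) adjustSystemGamma(sensor.illuminance);
  });
  sensor.addEventListener('error', (e) => console.warn('sensor error', e));
  sensor.start();
}
```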

Andrew Cotton I just wonder whether this is really a display function – would others need to know that? As you know, my colleague Simon has been joining most of the meetings.

Are there instances Simon, where you think that the browser would need to know the ambient viewing environment?

Simon Thompson An issue I have been discussing with Google recently is that it seems to be impossible at the moment to pass the actual signal all the way through the browser and the operating system to the display output.

That's one area where we probably need to try to liaise with manufacturers or operating system writers.

At the moment, all of the different operating systems seem to have a slightly different rendering format that they want to see getting passed to them.

Andrew Cotton I was thinking that in format conversion we assume a reference viewing environment, and if we're doing a display conversion we would use a gamma of 1.2; and there are tricks that can be applied, as you talk about in the next presentation, when going to SDR.

Just trying to think, off the top of my head, whether there is an instance where you would want to take the viewing environment into account, and I'm not sure that's the case.

I would have thought that's handled at the display level, possibly the operating system.

There are probably others on the call that know more about that than I do.

Chris Lilley That's another difference between web and broadcast media. Instead of the one person sitting down, looking at the content, there are multiple pieces of content, multiple places that are composited together. To do that, you may need to undo something to get back into the scene-linear lighting and composite and then go back again. You may need that information.

Andrew Cotton For HLG, the signal is always the scene light and it has no OOTF burnt into it.

That should always be possible.

If you are compositing with PQ, then potentially some knowledge of the mastering environment may be useful.

Chris Lilley If I could, I would like to raise an issue that is surprising to me. Brighter viewing environments call for a lower gamma, while brighter displays call for a higher gamma – that's brighter at the peak, not brighter across the full screen.

It looks like one is going down as the other is going up, but they're talking about different things: one is the average, one is the peak.

Andrew Cotton We were surprised by the results when we first came across this issue that the midtones looked wrong on bright displays. At the time everyone said the brighter the display, the more like real life it is, so the OOTF gamma should drop towards 1. Actually I think that analysis took no account of the viewing environment, which has a bigger effect on adaptation than a screen occupying a small area of the field of view.

When you think about it, it makes sense. If you think about the television at home and you want to see more detail in the shadows, you can increase the contrast, which increases the peak luminance of the display – you may get clipping depending on its capabilities – and if you increase the contrast you will see more detail in the shadows.

If on that brighter display you want the picture to look subjectively as it did before, you need to suppress some of that detail in the shadows, and an increase in the end-to-end gamma will do that: it will crush the low lights.

You can do a similar thought experiment with brighter viewing environments. If you increase the brightness of the viewing environment, detail in the shadows becomes harder to see. You need to lift the detail out of the shadows so it is just as visible in a bright environment as it was in a darker environment, and reducing the end-to-end (OOTF) gamma does exactly that.

When you think of it in those terms, it begins to make sense. And if you look at how the human visual system works – it has roughly a log response to light intensity – then it makes sense in those terms as well.

We didn't have those insights at the very start.
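(A tiny numeric illustration of the thought experiment above: for a normalized shadow value of 0.05, a higher end-to-end gamma crushes it and a lower gamma lifts it.)

```typescript
// Relative display light for a normalized scene value of 0.05 under
// different end-to-end (OOTF) gammas.
const shadow = 0.05;
for (const gamma of [1.0, 1.1, 1.2, 1.3]) {
  console.log(`gamma ${gamma}: ${(shadow ** gamma).toFixed(3)}`);
}
// gamma 1.0 -> 0.050, 1.1 -> 0.037, 1.2 -> 0.028, 1.3 -> 0.020:
// raising the gamma crushes the shadows, lowering it lifts them.
```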

Chris Lilley There is one last thing I would like to point out from the presentation, about the recording side. It is all very well to have a huge movie studio with enormous budgets, the television broadcast sort of thing, but on the web it is individual consumers putting content up, and the fact that they can shoot this with their phone and post it – and it is not SDR – is interesting. The fact that the iPhone uses HLG is kind of hidden: it uses Dolby Vision profile 8.4 with HLG. Can you talk about that a little?

Alexis is on the call, so perhaps Alexis has something to say on that. You are muted, by the way.

Alexis Tourapis My colleague Andrew is on the call. He could probably speak more about this.

Andrew Pangborn Broadly speaking, I do believe one of the main benefits was that we felt HLG made better use of the code space than a PQ transfer function would in an 8-bit context. To add to what was said in the chat before, from a display point of view we don't view PQ and HLG as being fundamentally different – they both require some type of operation on the display (poor audio quality, missed) – but the backwards-compatible nature of HLG, and having a more standardized way of conversion, is appealing.

Andrew Cotton Thank you. That was – that's good to hear.

Chris Lilley Are there any other questions on the HLG intro?

(someone) Yes. I have some questions on –

Chris Lilley You're very, very quiet by the way.

(someone) Apologies. I will move to the chat then. There must be something wrong with my headset.

Andrew Pangborn What about the influence of viewing environment on non-HDR media?

Andrew Cotton Do we look at the ambient adaptation outside the context of HLG media, for example sRGB media in the browser? Not specifically. HLG has been our primary focus and the three sets of subjective tests were done to establish this relationship for HLG. What I can say, however, is that if we map SDR content into an HLG container then these changes, at least in terms of adapting the OOTF for different luminance displays, seem to track pretty well.
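(A heavily hedged sketch of placing SDR in an HLG container by scene-light mapping, as I understand the approach in ITU-R BT.2408: undo the SDR non-linearity, scale so SDR reference white lands at 75% of the HLG signal range, then apply the HLG OETF. The simple gamma-2.4 de-gamma and the exact scaling are assumptions for illustration, not a normative recipe.)

```typescript
const a = 0.17883277;
const b = 1 - 4 * a;
const c = 0.5 - a * Math.log(4 * a);

// HLG OETF and its inverse, as in the earlier sketches.
const hlgOetf = (E: number): number =>
  E <= 1 / 12 ? Math.sqrt(3 * E) : a * Math.log(12 * E - b) + c;
const hlgInverseOetf = (Ep: number): number =>
  Ep <= 0.5 ? (Ep * Ep) / 3 : (Math.exp((Ep - c) / a) + b) / 12;

// Scene light corresponding to a 75% HLG signal (about 0.265); SDR
// reference white (signal 1.0) is scaled to land there, which is roughly
// 203 cd/m2 on a 1000 cd/m2 HLG display.
const SDR_WHITE_SCENE_LIGHT = hlgInverseOetf(0.75);

function sdrSignalToHlgSignal(sdrSignal: number): number {
  const sceneLight = Math.pow(sdrSignal, 2.4); // assumed simple SDR de-gamma
  return hlgOetf(sceneLight * SDR_WHITE_SCENE_LIGHT);
}
```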

We haven't tested I would say with SDR content the change for different viewing environments.

I don't see any reason why it should be very different, there may be some small differences.

It would be interesting to know, but these are quite complex tests to set up, as Simon will attest.

There is something else I was going to say on that. In terms of the display of content, we have known since the 1940s that brighter environments need a lower gamma, and there is a lot of research on that. Happily – and this was new to me – CRTs did the right thing. In a brighter environment, you tended to increase the brightness control, which changed the black-level offset of the CRT, and that actually adjusted the effective gamma as well. We were very lucky with CRTs: they did roughly the right sort of thing for different viewing environments. For panels, obviously, we didn't have that fortuitous luck and we had to do the fundamental research.

Simon, is there anything you would add? Am I right in thinking you have not tested SDR content specifically, is that correct?

Simon Thompson To explain how we did the test: we had a small TV studio and some software written which adjusted the DMX lighting, so as you went between the images you changed the gamma of the image and you changed the lighting of the TV studio you were in.

It is a bit harder to sort of set up.

We ran the test with about 50 or 60 viewers at the European Broadcasting Union and we got quite small errors on the predictions. We're quite happy that it seems to work for most people.

Andrew Cotton Does that answer your question, Andrew?

Andrew Pangborn Yeah, thanks.

Chris Lilley There probably is more to be done in that space, looking at that.

A final thing before we wrap up. There are now some color models such as Jzazbz, and others that use the PQ transfer function as part of the encoding.

I wondered whether making a very similar model using HLG had been looked at, or whether that approach is specific to absolute luminance encoding.

Andrew Cotton PQ does a couple of things: it separates the light intensity from the color information quite effectively, and it is more perceptually uniform than other color representations, which is certainly useful when it comes to format conversion and color volume reduction from HLG wide color gamut. Jzazbz tweaks the PQ non-linearity, and I think the authors of those papers would suggest it improves upon it.

That is important for a display-referred system, where the code value represents the absolute brightness of the pixels on the screen. For relative light, the brightness differences between code values vary depending on the screen capabilities, so anything similar for HLG would be kind of rough and ready. Actually the gamma law is a good overall match to our sensitivity to banding artifacts – at high luminance, with the rods in effect responding at low luminance. It is a good match but not precise, and there is no point in a precise match: as soon as you display on anything slightly different, all of the code values change.

The other point to note is that we're talking about sensitivity to banding artifacts, not contrast sensitivity; it is slightly different. That's perhaps a bigger discussion.

Chris Lilley Okay. Thank you very much.

Keeping W3C Relevant in an HDR / WCG Living Room Environment

view presentation by Zachary Cava (Disney)

Chris Lilley I would like to move to the other presentation. Zack, do you want to quickly run over the overall point of the talk, just to remind people – they should have already seen it – and then we can take specific questions?

Zachary Cava Sure. A little bit of background there.

One of the things we primarily do at this new streaming service is deploy to an environment that's characterized as a 10-foot lean-back experience, where upwards of 80 percent of consumption happens, across a number of applications – primarily premium movie and TV content: not short-form, but long-form episodic and movie-based content.

That space in particular is always trying to keep up with what filmmakers and movie studios are doing with the brightest and best of film at the time.

So what we have seen in that space is a proliferation of new and advanced video and audio features.

HDR is a great example: as HDR becomes mainstream on the consumer side – the iPhone case, doing that – it becomes a lot more real; people have HDR pictures on their phones.

We really are seeing actual grading and things happening in movies now that, 5, 6, 7 years ago, we were trying to push the overall technology towards.

Positional sound too, the immersive kind of things.

It is making its way into linear.

One of the biggest platforms that you may not expect to find there is actually the web. A lot of smart devices – the players, the boxes – instead of bringing a proprietary rendering environment, will bring a web-like environment for you to put your application in.

The problem with that is that a lot of the functionality, the things you wanted to implement, didn't exist in the web environment – they do exist a little bit now – and we're moving beyond, further out than, the level of implementation we want as an application producer.

There's been a lot of fracturing within the ecosystem, where different content providers and content distributors have worked with television manufacturers to implement custom APIs on top of the web standards, or to bypass the web environment altogether and go fully native on the boxes, both to address these more advanced feature sets and at the same time to address the race to the bottom in hardware.

You know, TVs are being produced with about 256 megabytes of memory; you don't get a whole lot of computational performance out of that, essentially, when trying to composite.

In order to offset that, you make the application more and more performant, that application being the web browser or the native stack.

There is a huge, huge consumption of time and eyeballs, people sitting and watching.

It is and was a web space, and the great thing about having it be a web space is that you had industry expertise that could come into a team and build, and you had the efficiency of portability across devices. As we have continued to advance features and functionality and tried to stay ahead of the performance curve, it has become a very fractured space, to the point where people are abandoning it and going down a custom native route that decreases interoperability and decreases openness; we would like to change that conversation back.

Chris Lilley So that's interesting.

As you say, it is faster if you're targeting 2 or 3 platforms and working directly with them: you have what you want, you pay for it, it is in the box. But when you go to do the other 30 or 40 platforms, you have to do it all again.

It is faster to start, but slower to finish.

I guess the other thing, since you mentioned hardware: on the web it is very easy to push an update – pushing the browsers and that sort of thing – and you're constantly changing what it is; you can even remove APIs. With hardware, people are buying a piece of hardware and they don't expect it to stop working three months later. It is like, sorry, you need to buy a new one. People get very annoyed when that happens.

You either have a long tail of legacy stuff – where you tried to do things fast, you wish you hadn't, but you still have to support it forever – or it has to be sufficiently adaptable so that it can be changed and updated when you later discover that no, that was wrong and you need the other thing.

What do you – what are the feelings on that?

Zachary Cava That's another great case for having the web environment: even though it may be a snapshot in time – the model year released – it gives you a target. You have that updatability, that deployability.

You're absolutely right: if you go the native route, you're also taking on the burden of being able to remotely deploy everything – download it, verify it, all of the additional signatures and signing – and on top of enabling that you need to ensure the security of the platform. Otherwise you won't get the 4K features in the video or audio stream.

So, yeah. You have the additional updatability or you have the update cadence.

You know, right now it is kind of split across the board.

We do have one version of the platform that is actually browser-based; it is a native port intended to kind of replace the browser on systems that are not up to snuff. That one, the one we deploy out, basically never changes, and then the web applications deploy on top of it; those are updated very, very frequently.

Every two weeks actually is the cadence.

Then there is the next level, a hybrid solution, where the main application is written and parts of it are deployed, and we are very, very limited in what we can do. The actual interpretation layer is not allowed to run as native code.

It is not able to be executable; it has to remain interpreted, otherwise it is a violation of the trusted execution environment.

Finally, there is the fully baked native solution. It is the most performant, but we're promised maybe a yearly update, if that.

After something is launched, there is no guarantee that we will ever get another update to it.

It immediately puts your users and your customers on an island of the device that they purchased.

You know, you don't want the answer to "I just bought this LG TV, this Samsung TV last year, why is it not getting the latest and greatest?" to be "you can buy a set-top box and plug it in and get better features". That's not an answer anybody in this ecosystem wants to hear.

Chris Lilley In some ways, the need for urgency alongside stability and maintainability reminds me of when W3C started to get into e-books and that sort of thing. We had people coming from a traditional publishing model, where they were used to having complete control of the page and all of the fancy features and typographic niceties which, at the time, the web had very little of.

They wanted it all, and they wanted it all now.

They would sometimes make standards that referenced, you know, a first working draft of something that then changed three weeks later, and they baked what it had said into their standard. There were references to things that used to be in the specification and weren't any more, or had totally changed and the name was different. The content providers were having to write to this strange snapshot, and it sounds similar here. I understand the urgency, and I also understand wishing, ten years later, that you hadn't had such urgency and there had been a better, more flexible solution.

Zachary Cava You have to be practical about it. Playback actually went through the exact same path in the linear space: at adoption, people were referencing the candidate specs rather than the Recommendation, and it wasn't actually stable.

If we go back pre-2016, even 2017 in a lot of cases, it is a complete minefield as to what kind of support you're going to get at this level.

You have a lot.

If you look at a lot of the open-source JavaScript-based players, you see a bunch of shims and fixes and things to try to bridge that gap. That's the reality of where we're at.

That hardware will naturally cycle off sooner or later. We'll get to a better state.

You are absolutely right, there is urgency now, but we're not going to be able to fix it in a year and have everything greenfield, right?

What is more important is to start looking at what the delta is now, where we expect there to be deltas, and how we build towards that instead.

The two biggest things being, first, feature detection: it is really getting there – capabilities detection is great – but we start to get more into privacy concerns because of how detailed you get about the media; you know, we want to know if this hardware will do this profile with that window function, and you get into the very, very large permutation nature of it.

When you constrain that to the living room device space, though, the risk is not as high as in a PC or laptop scenario.

Given the number of sets manufactured, any given configuration still corresponds to hundreds of thousands or millions of people in many cases.

That's the flexibility of privacy depending on the deployment environment, and that's something we have discussed.

Then you go deeper than that, to real-time rendering capabilities feedback. We have the first thing, dropped frames, but we need so much more, and then just more of that advanced functionality and support.

That is just about helping tie this very robust video ecosystem that exists in the industry more into the web environment and the web space, so that as things come up – what was the latest one? The latest thing out there is 8-axis or 9-axis movement for VR; there are already product use cases trialling that out, from teams I have talked to.

We want to see that deployed very, very deeply. Very interesting.

I think it is about building better communication, so that when we get to 2027 we're ready for it.

Chris Lilley It sounds to me like, instead of focusing on "this is what the next version will do, but it will take 8 years", what you're looking for is a roadmap: what the nearer versions will do, with increasing uncertainty the further out it goes, and you narrow that down so you have an idea of where it is headed and you get to the intermediate states quicker, with reliability and precision.

Is that the sort of thing that you're looking for?

Zachary Cava I think so; it would give us a cadence. And there is prior work that could be taken from CTA WAVE and the members there, where a lot of the constraints on performance and on solving these problems have already been dealt with in various ways.

A great example I will give you from our web browser environment, for performance, is that we don't want to actually pull the media segments into the JavaScript environment and then push them back into native code along that path.

So you fetch, pull the segments down, and push them into MSE; done that way, it is a double allocation of memory, which is very, very expensive on these devices. Instead, what we have done internally is bring in the Streams API, enable servicing of emsg messages out of it, and provide a fetch with a direct pull into the source buffer, which internally we recognize and optimize to be a direct memory reference.

So we're no longer having to deal with the double allocation, which lets us get to a number of devices that we couldn't reach before because of the memory constraints.
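(A sketch of the general pattern being described, using the standard Fetch, Streams, and Media Source Extensions APIs: stream a fetched segment into a SourceBuffer chunk by chunk rather than buffering it whole in JavaScript first. Whether the engine can avoid an internal copy is an implementation detail; the optimization described above is not part of the standard APIs.)

```typescript
// Stream a media segment from `url` into an MSE SourceBuffer chunk by chunk.
async function appendSegment(sourceBuffer: SourceBuffer, url: string): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error('streaming body not available');
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Wait for any in-flight append to finish before appending the next chunk.
    if (sourceBuffer.updating) {
      await new Promise<void>((resolve) =>
        sourceBuffer.addEventListener('updateend', () => resolve(), { once: true })
      );
    }
    sourceBuffer.appendBuffer(value);
  }
}
```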

I think there are lovely little gems here and there like that, which we could use as primary examples to push forward changes like that.

We have had discussions around it; things like streaming appends into MSE have been discussed again and again and left in a limbo state because there is not a primary driver for them. We could provide some of those drivers from the ecosystem.

Chris Lilley I know I look sheepish as we're discussing this. That's great – a single buffer rather than multiple buffers – and I think sometimes implementers are reluctant to expose details like that, either because they think it is an advantage or whatever. It does help the entire ecosystem move forward.

I'm reminded of a discussion where some engineers were very worried that for wide color gamut you need more than 8 bits, and everything is optimized for 32-bit colors; if we have to make them all full floats, it will be so huge. And I have seen the response: well, at Apple we have one bit, and if it is off it is a regular color, and if it is on, it is not – it is a pointer to where the color is actually stored. So they only store the WCG colors they are actually using, the rest stay as the legacy ones, and we don't have a huge explosion of memory.
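(Illustrative only: a rough sketch of that tagging idea – one bit distinguishing an ordinary packed color from a reference to extended color data. The names and layout here are hypothetical, not any vendor's actual implementation.)

```typescript
// Common case: a packed 8-bit-per-channel sRGB color in 32 bits.
type PackedColor = { kind: 'packed'; rgba: number };        // 0xRRGGBBAA
// Rare case: a reference to out-of-line extended (wide-gamut, float) data.
type ExtendedColorRef = { kind: 'extended'; index: number };
type Color = PackedColor | ExtendedColorRef;

// Table of extended colors, e.g. [r, g, b, a] floats in some wide-gamut space.
const extendedColors: Float32Array[] = [];

function makeExtended(r: number, g: number, b: number, a = 1): ExtendedColorRef {
  extendedColors.push(Float32Array.of(r, g, b, a));
  return { kind: 'extended', index: extendedColors.length - 1 };
}

// Resolve either representation to float components.
function resolve(color: Color): Float32Array {
  if (color.kind === 'packed') {
    const { rgba } = color;
    return Float32Array.of(
      ((rgba >>> 24) & 0xff) / 255,
      ((rgba >>> 16) & 0xff) / 255,
      ((rgba >>> 8) & 0xff) / 255,
      (rgba & 0xff) / 255,
    );
  }
  return extendedColors[color.index];
}
```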

Little things like that, being shared, I think it helps move it forward collaboratively because often they're applicable to more than one implementation, more than one specific circumstance.

There is a need for that more collegial sharing I think.

Are there more questions about the issues that have been raised?

(someone) Not so much a question but I feel I should say something in response.

We would love to hear more from others in the media industry who are encountering these kinds of issues.

I think we have a forum to talk about this: we have the Media Interest Group, and this is exactly its purpose.

The issue that we're facing there is that certain other industry bodies have become the center of gravity where the industry is gathering, so a lot of this conversation is happening outside of the W3C and it is not necessarily being communicated back to us through ongoing conversations.

So at this year's TPAC, for example, we have responded to what Zack has presented here, and there will be a session on web application performance on consumer electronic devices. We'll be holding sort of a joint industry meeting between groups like CTA WAVE and HbbTV, where we can hopefully pull out some of these issues and look at what the roadmap is, essentially.

Using that as a way to discuss where and how we go next.

As Zack pointed to, emsg (event message) handling is an example. It is outside of color and the topic of this workshop, but it is a relevant one in terms of the W3C's positioning in relation to consumer electronic devices.

It is because of the nature of these lower-performance devices that these kinds of concerns are much more important.

You know, maybe this is something the major browser vendors have less of an interest in. If W3C is a place where we need to get alignment between the major browser vendors, but you also have an industry group that is separate from that, with its own requirements, how do we make sure that that voice gets heard and that we end up with a solution that ultimately benefits all? A more performant implementation should be of benefit across the board, I would expect.

To some extent, if we're proposing features that WebKit or Chromium sort of need to implement, we're asking for collaboration on defining the functionality and then working on implementations.

There is something here, beyond putting things in specifications, about how to collaborate on implementation, because after all these are open-source engines that we're using.

Then I think there is something, perhaps process-wise, for the W3C. The group I'm probably more familiar with is HbbTV, and they have an annual specification cycle; how do we get to a place where W3C has something similar? I have been in working groups where the roadmap is a couple of years out – to get from MSE today to MSE version 2, with a couple of big important features – and we're looking at a few years of development cycle, I think, to get to that point.

That will really be about the interoperability that exists between the major browser implementations, and we may end up in exactly the same place that Zack described from a few years ago, where you have the desire to implement the new capabilities, but they're referencing work-in-progress drafts and you end up with a compatibility issue when you get to a stable Recommendation.

Yeah. There's an awful lot to unpack in what Zack is describing from a general kind of media ecosystem point of view I think.

Zachary Cava Just to add on to that: I think I recorded this back in March or something like that, so a little bit of time has passed. Even in that time, the discussions we have had to better connect WAVE and W3C – and, personally, when I have had time to join the meetings and read through the notes – I think those are the right steps to get us more connected and thinking through the scenarios. I'm looking forward to that.

Chris Lilley Okay. Any last points before we wrap up? (none)

A quick note to the speakers: we have had Kelly doing captioning for us, and as soon as we get that rough transcript I'll send it out so that people can make any adjustments; there are a lot of technical terms being thrown around and I'm sure they were very hard to capture.

Thank you, Kelly, for your work.

It has been very much appreciated.

Thank you very much. We'll see some of you tomorrow at the next session.


What is W3C?

W3C is a voluntary standards consortium that convenes companies and communities to help structure productive discussions around existing and emerging technologies, and offers a Royalty-Free patent framework for Web Recommendations. We focus primarily on client-side (browser) technologies, and also have a mature history of vocabulary (or “ontology”) development. W3C develops work based on the priorities of our members and our community.