W3C

Media WG - TPAC 2024 - Day 1/2

26 September 2024

Attendees

Present
Anssi Kostiainen, Chris Needham, Cyril Concolato, dana, Eric Carlson, Eugene Zemtsov, Francois Daoust, Jer Noble, John Riviello, Louay Bassbouss, Marcos Caceres, Mark_Foltz, Mark Watson, Nigel Megitt, Paul Adenot, Wolfgang Schildbach (observing), Xiaohan Wang
Regrets
-
Chair
Chris, Marcos
Scribe
tidoust, cpn

Meeting minutes

Slideset: https://lists.w3.org/Archives/Public/www-archive/2024Sep/att-0013/Media_WG_Meeting_26_27_Sep_2024.pdf

Media WG introduction

[Slide 6]

cpn: We're building on the media foundations developed in former groups.
… Goal is to improve the overall media playback experience on the web.
… Beyond media playback with WebCodecs.
… Current charter runs until May 2025. Marcos and I co-chair. François is our team contact.

[Slide 7]

cpn: Looking at the publication status of our specs, there's a mixed bag of maturity and implementation levels.
… We want to help drive interoperability.
… We'll talk about each spec in turn. To progress them: for specs that are at WD, the question is what needs to be done to move to CR.
… Re. Media Playback Quality, the decision was to migrate this to HTML. It used to be part of MSE, then moved to a separate spec. We have not yet done the work to migrate the spec to WHATWG. We may discuss when the right time is to do that. There are a number of open issues against the spec to look at, perhaps here, before we migrate.

Marcos: In parallel, among chairs and team contact, we've been looking at the specs and low-hanging fruit that we could address to help move things forward.
… Things that can help spec editors maintain the quality of their specs.

cpn: Some of the specs were authored with older versions of the spec authoring tools. We refreshed some of them, it took some time.

Marcos: That also helped uncover some things, such as eventing models that were wrong.
… There is machinery available to help detect whether in-parallel steps properly queue tasks before they resolve promises or fire events.

Anssi: Do you have a pipeline of new stuff coming into your group for the next cycle? For example, for Picture-in-Picture, there is Document Picture-in-Picture.

cpn: Discussed in the past. Document Picture-in-Picture not being focused on media, we felt that the Media WG would not be the right home for it.
… In terms of other incubations, there are none that I'm aware of with plans to adopt in the WG.

Marcos: The other part that we looked into is test coverage. We found that one of them had odd test coverage that did not match the spec. I think that was Media Capabilities. The tests have odd steps around MIME type interpretation that affect interoperability.
… Some open issues got filed years ago by Boris.
… Quite fun to read over.

Marcos: The main point being that, if you're a spec editor, you're also responsible for making sure that things that get merged in specs have corresponding tests.
… Make sure that's all good. If you need a hand or are unsure, I'm happy to review things.

Picture-in-Picture

[Slide 9]

[Slide 10]

cpn: Picture-in-Picture has shipped in at least two engines, and has not really changed in a long time. There are a bunch of open issues. What we need here is help from editors to move these things over. Issues seem somewhat easily resolvable.
… The spec is in Working Draft state, and there's no real reason why it would be stuck at that level. I'd really want us to move forward.
… Horizontal reviews: accessibility and TAG reviews were done. We haven't done internationalization, privacy and security reviews. Any objection for us to initiate those?
… Help wanted to complete the relevant preliminary questionnaires.
… Current editor is François Beaufort

markafoltz: Is there an issue open to finish the Privacy and Security considerations?

cpn: We can create one if it's missing.
… What I've done is create a tracking issue for CR requirements as a whole

Marcos: With the issues that are still open, the work is probably to check what implementations currently do, and adjust the spec with that.

Autoplay Policy Detection

[Slide 12]

cpn: For Autoplay Policy Detection. Alastor has done a great job at getting horizontal reviews.
… There are a couple of pieces of privacy-related feedback that need addressing.
… We have a single implementation for now.
… My question is around the level of interest for moving the spec forward.
… There was a lot of discussion on what the spec currently says.

jernoble: WebKit is supportive of the API. No time to work on implementation yet. To be honest, I had forgotten about it. No particular objection on the design.

[Slide 13]

cpn: The main open issue is on whether the API result should be binding or indicative?

jernoble: Was the issue opened before we switched to a sync API?

cpn: I think that was after.
… Having more implementation experience with that would help.

jernoble: In the time since this was discussed, WebKit moved from requiring a user event to using a transient activation status of the document.
… The answer will depend on when you ask the question since the API is sync.
… We may have found a way around having to answer this question.
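
As a reference, a minimal sketch of the synchronous query as currently drafted (enum values per the Autoplay Policy Detection spec; only one engine ships this today):

  // Ask what the UA would allow for media elements right now.
  const policy = navigator.getAutoplayPolicy("mediaelement");
  const video = document.querySelector("video");
  if (policy === "allowed") {
    video.play();
  } else if (policy === "allowed-muted") {
    video.muted = true; // only muted playback is permitted
    video.play();
  } // "disallowed": wait for a user gesture / transient activation
  // Because the call is synchronous, the answer reflects the moment of
  // the query and can change, e.g. after transient activation.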

Media Playback Quality

[Slide 15]

cpn: For Media Playback Quality, one thing I noticed is that we don't have great tests for it. I don't know how interoperable this is in practical terms.
… What we heard in the IG on Monday is that CTA WAVE has designed a testing framework with capture cameras that can be used to detect the frames actually rendered in a video.
… It makes me wonder whether we can leverage some of this to create more useful tests.
… We'll come back to the interoperability issues if we have time during the WebRTC discussion.

bernard: Also added a discussion on "corruption" in the agenda of the joint meeting.
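
For reference, the API surface under test is small; a minimal sketch (attribute names per the Media Playback Quality draft):

  // Sample the metrics the spec exposes on HTMLVideoElement.
  const video = document.querySelector("video");
  const q = video.getVideoPlaybackQuality();
  console.log(`dropped ${q.droppedVideoFrames} of ${q.totalVideoFrames}`,
              `frames; sampled at ${q.creationTime} ms`);
  // A camera-based harness such as CTA WAVE's could compare these
  // counters against the frames actually observed on screen.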

Media Capabilities

[Slide 34]

cpn: A number of open pull requests.
… Anne commented that we had a concept of valid MIME type which wasn't properly defined. We re-worked the spec to make use of algorithms defined in MIME Sniff instead.
… In doing so, we managed to simplify the specs quite a bit.
… Bernard, I'd like your review on this.
… It references RFCs for registration requirements.
… I propose changing these to reference the IANA registries themselves.

Bernard: We're going to close the RTP payload registry. We'll only keep the MIME type registry.

hta: It turned out that it was a badly maintained part of the overall MIME type registry.

Bernard: Right, just reference the main MIME type registry.

ACTION: cpn to update the Media Capabilities PR to refer to the main MIME type registry

cpn: OK, I'll update the pull request.
… Then I think it's good to merge. Thank you!
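
For context, the contentType strings affected by the MIME Sniff rework are the ones passed to decodingInfo(); a minimal sketch (codec string illustrative):

  // Query decode support, smoothness and power efficiency.
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/mp4; codecs="avc1.640028"', // must parse as a valid MIME type
      width: 1920, height: 1080,
      bitrate: 5_000_000, framerate: 30,
    },
  });
  console.log(info.supported, info.smooth, info.powerEfficient);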

cpn: On Media Capabilities, ability to detect Dolby HDR support

Timo: The proposal: at the moment, the spec has 3 enum values. They are not sufficient to identify all cases, including Dolby Vision support.
… Last year, we discussed the possibility of turning the enum into a registry. Both approaches would work for us.
… Any comments or feedback on how to proceed and hopefully move to the next step?

jernoble: Is there a public spec we can point to?

Timo: Yes. There is enough information out there.

jernoble: That was a problem previously.

Timo: We also put a lot of resources into providing a test suite to the community.

cpn: What Media Capabilities does is report on the ability to decode a stream. Display and rendering capabilities are not really in scope.
… Does the Dolby Vision metadata need to be paired with some rendering capabilities?
… Does a yes answer imply some rendering capabilities?

jernoble: What is the goal that apps would want from an answer?
… The second question, on rendering, is what we punted to CSS.

Timo: You could argue that there's still value in going to the Dolby pipeline even if you're on SDR.

jernoble: Yes, but that's what you get from CSS already.
… I'm curious what use cases are left unsupported, between CSS reporting these capabilities and Media Capabilities providing decode capabilities.

Timo: I'll circle back on that with colleagues.

jernoble: It would be good to have that argumentation written down.

jernoble: I'm not saying that we shouldn't add those, but I'm curious about missing ones from Media Capabilities which I'm hearing might exist.

Timo: I guess we want to know what kind of homework you'd like us to do.

markafoltz: Understanding the pipeline model would be great. Where you see the information coming from, API and CSS.
… If CSS says display is SDR but you still want to use Dolby Vision, do you need to know something else?

Timo: I guess it's a choice at this point because you then don't necessarily need Dolby Vision, but you may still want to use it.

cpn: I don't know that we completed everything on the CSS front.
… We talked about the possibility to query the video rendering plane capabilities independent of the graphics rendering plane.
… What we didn't complete is [missed].

jernoble: I don't remember there being different resolution for the video plane and the content plane. Certainly for color support.

cpn: I think that may be the case for some TV devices for which the video resolution may be higher than for content.
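
For reference, the CSS-side queries referred to above, from Media Queries Level 5 (a sketch; video-dynamic-range is the feature aimed at devices with a separate video plane):

  // Graphics plane: can the display render HDR?
  const displayHdr = matchMedia("(dynamic-range: high)").matches;
  // Video plane: some TVs composite video separately from graphics.
  const videoHdr = matchMedia("(video-dynamic-range: high)").matches;
  console.log({ displayHdr, videoHdr });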

xhwang: I have some experience with Dolby Vision in browsers. My understanding is that there is a certification process. We basically ask the OS in Chromium. I'm trying to see whether the browser needs to have these capabilities or whether it can delegate to the OS.
… Instead of browsers making a decision, that may be more of an OS decision.

wschi: That makes sense to me.

xhwang: More a passthrough.

jernoble: In WebKit, it's the responsibility of the underlying OS as well, not the browser.

xhwang: My understanding is that certification covers both decoding and rendering.

cpn: Is there a concern that answering the decoding question tells you something about the display?

xhwang: Fingerprinting? Possibly.

jernoble: The problem arises if you tie the answer to the display the browsing window is tied to.
… The answer is supposedly immutable. Regardless of the current device context.

jean-yves: On some low-end devices, there can still be cases where the UA tries to reserve a decoder and falls back, so the response can change.

greg_f: For the dynamic range of protected content in video playback, I have seen that it's key-system dependent. Some browsers implement multiple key systems, and whether a stream can be decoded and rendered depends on the pipeline you use.

jernoble: Was it a technical limitation where some CDM cannot support HDR, or a policy limitation?

xhwang: I think it's mostly technical.

jernoble: Is the solution to say "no" to HDR for such streams then?

xhwang: That particular key system is implemented using a different pipeline altogether.

jernoble: I'm suggesting that if the CSS media query is unusable, then we should push that problem down to media queries as a bug report. That would solve the problem that people are seeing. Really, it's a problem of the EME pipeline.

padenot: In some instances, you need a particular combination of hardware and driver. If everything says yes and the provider is happy with the level of protection, you might get HDR bits. But all of these moving pieces need to agree.

xhwang: Do we still have an issue with clear Dolby support?

jernoble: We'd move away from Greg's question, specifically targeting protected video pipelines.

wschi: Most browsers do not deal with immersive audio in the clear. If you deal with protected audio, then suddenly things work because the pipeline is different.

padenot: There could be an answer that says "in theory, we should be able to play this content", and then a check on the key system.
… Do we want to integrate, one call / one answer, or are consecutive requests good enough?

jernoble: If you, as a user agent, know that the system is capable of displaying HDR, and the app still wants to render SDR?

padenot: Yes, per policy because the web site feels that it does not have enough protection.

jernoble: I'm trying to figure out how we would answer this policy question?
… Is there not a way to ask that question with the EME API already?

xhwang: Level of robustness.

jernoble: In FairPlay, robustness is not really used.
… Media Capabilities won't give you an answer. You will only know at the key exchange step.
… At least, it seems doable to request, and to know.

markafoltz: It seems there is a path already in Media Capabilities to query about clear and protected paths.
… I don't think that we need to give the right answer all the time, because that may depend on the current context. The CSS approach is a good one. Stick to separating decoding and rendering.

jernoble: Except for audio ;)

xhwang: If content moves from one screen to another, then JS needs to ask again.

jernoble: Yes.

markafoltz: Video may not always be rendered on screen.

padenot: Even in encrypted scenarios?

cpn: Do we need an issue to capture that? Or do you think we'll capture that as part of the overall enum discussion?

markafoltz: Tracking the enum seems good.

cpn: Wondering whether we've come back around to not using a registry for this?

xhwang: Still a proprietary solution.

Francois: From a Process perspective, a spec merely existing is not enough to normatively reference it; we need to consider whether it comes from an open community and whether it's stable. We can't easily normatively reference a spec just because it's there

greg_f: I'm wondering whether we need a Dolby-specific MIME type and an HDR MIME type.
… I'm wondering whether having one would be sufficient?

Timo: That was discussed before. I think the MIME type is not enough.

wschi: This is for cases where the MIME type does not specifically say it's Dolby, because it does not explicitly list all sets of metadata that you can use.

xhwang: In what case would applications actually query the HEVC MIME type + HDR Dolby Vision?

wschi: If you want to be backward-compatible, the MIME type has to be the base thing.

eric_carlson: Why not make more than one query with different MIME types then?

wschi: Maybe that is a solution. I think that has some repercussions on how the media needs to be specified. I'm sketchy on the details.

[Slide 35]

cpn: Looking at other issues. First, thanks for stepping up as editor, Mark. Any specific issue that you feel would be worth discussing?

markafoltz: I only looked at issues we discussed over email, the ones that looked straightforward.
… I prepared pull requests for some of them.
… #152 has a PR now, I think. With a side comment that if there is a conflict between the MIME type and the transfer function, it will be reported as "not supported".
… The PR reflects the current implementation in Chrome.

cpn: I'd like a reviewer from each engine to check whether it matches their implementation.

tidoust: A corresponding test in WPT could help assess interoperability as well.
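
A sketch of the behaviour the PR describes: a contentType that conflicts with the HDR-related fields makes the whole configuration unsupported (values illustrative; which combinations count as conflicting is up to the UA):

  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/mp4; codecs="avc1.640028"', // illustrative codec string
      width: 3840, height: 2160,
      bitrate: 20_000_000, framerate: 30,
      transferFunction: "pq", // HDR field the UA deems contradictory
    },
  });
  console.log(info.supported); // expected: false under the PR's rules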

markafoltz: For #146, I debated whether this was worth a PR. It's about splitting the audio/video configurations into decoding dictionaries where most properties are shared, with a couple of properties specific to decoding.
… I think the changes should be compatible, but I cannot make a 100% guarantee right now. The meta question is: to move the two specific properties out, is it worth complicating the IDL?

cpn: I guess the proposal is to close this with no action, then.

markafoltz: It might be worth it if we foresee adding a lot of properties, but for two, I'm indeed not sure.
… I did a PR anyway just to see how the result would look.

cpn: The final thing I wanted to talk about is what to do about additional features that have been raised.
… Whether we want to do work to address those or set a scope for what we consider suitable for a Candidate Recommendation, possibly to be extended in a v2 spec.
… What I would propose is that we treat it that way. I'm not hearing strong support for capabilities for codec switching, for example, so I propose to defer.

Media / Web RTC Joint Meeting

See joint meeting minutes

Media Source Extensions

Detachable MediaSource

[Slide 21]

Jean-Yves: w3c/media-source#357
… Use case is loading content for ads; long latency to resume playback otherwise
… Current solution is multiple players, hiding one behind the other. It's complex
… Idea is to have a MediaSource that is marked as detachable when you create it
… When you detach from the HTMLMediaElement, e.g., by setting srcObject to null
… The buffers attached to the SourceBuffers are retained, so you can seek to where you were before and restart playback
… Concern about leaking MediaSources
… Currently, most MS are created by URL, and need to be manually revoked
… It doesn't matter much today: remove the media element, the data gets cleared, and you only leak a small amount of data. But this would mean more data
… Use srcObject instead, to track via refcount, etc.
… Attach to an MMS, so if out of memory you can flush content. Can't do this with MS
… I believe it's simple to implement
… Added behind a feature flag, so you can test it

Jean-Yves: [ Explains the behaviour ]
… When you go to HAVE_NOTHING and NETWORK_EMPTY, the currentTime is set to zero. If you reattach, you're back into fetching and you'd seek to where you were before
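
A hypothetical sketch of the flow being proposed in w3c/media-source#357 (the detachable-at-creation marker is assumed; the exact API shape isn't settled):

  const video = document.querySelector("video");
  const ms = new ManagedMediaSource(); // assumed: marked detachable at creation
  video.srcObject = ms;                // attach via srcObject, not an object URL
  // ... create SourceBuffers and append segments ...
  const resumeTime = video.currentTime;
  video.srcObject = null;  // detach: under the proposal, buffered data is kept
  // ... e.g. play an ad in another element ...
  video.srcObject = ms;    // reattach
  video.currentTime = resumeTime; // seek back into the retained buffers
  video.play();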

Xiaohan: Would implementation require a bigger context switch in the media pipeline?

Jean-Yves: In WebKit it ended up being a bigger change. The renderer has been torn down
… Could consider keeping the renderer and reusing it later. That's an implementation detail

Xiaohan: Feedback from content providers?

Jean-Yves: Joey commented how they switch players by hiding. He described the workarounds

Joey: Wasn't specific to Shaka. Alternatives are to have two video elements and show and hide
… On platforms without the memory to do that, you have to tear down the stack and rebuild it
… In that situation, a detachable media source may not help as you still don't have memory
… If you do have more memory, there's no technical limitation meaning you need a detachable MS in the first place. Might be convenient

Jer: Another use case from FOMS: sometimes platforms have random decoder errors, and sites work around them by flushing. They want to avoid re-parsing the media data

Jean-Yves: May need to add to the proposal that an error doesn't tear down the MS

Chris: Doing this on both MS and MMS?

Jer: That's a question to the group.

Jean-Yves: If you leak the blob, you can't force it to clear. If that's a concern, having it only on MMS allows you to correct the situation

Joey: I'm fine with it, despite my complaining. Don't think it solves memory pressure
… The idea of HTML fragments: there could be an interesting link between that work and this
… Imagine that on the rendering end of the MSE pipeline you create TextTrackCues, and automatically prune them
… They'd be dealt with as HTML fragment cues
… Question about whether you pass your preferred caption format on the way into the buffer, or there's a hook on the way out

Eric: Why would it be any different from the way a browser handles in-band text tracks now?
… Muxed with audio and video data, as a separate track. QuickTime movies, MP4, they have well-defined schemes. HLS allows WebVTT and IMSC to a certain extent
… WebKit handles it by creating TextTracks when it sees those inside a file, or in the case of WebVTT, invoking the existing VTT parser
… For other formats, transcribe into WebVTT. From the page's perspective it looks like a WebVTT track

Nigel: So what goes into MSE is the same format. For the BBC use case, you couldn't translate IMSC to VTT.

Eric: For an unsupported format, script would need to parse the samples and do what it wants. The page could provide script
… From our point of view, it's sensible to separate buffer management from how to do presentation

Jer: For inband 608/708 captions, we have requests to elevate those as subtitles. One issue is you don't know there'll be a subtitle track at init time
… So the spec says throw it away as there's no track buffer
… Would need a change to the ISOBMFF spec. I think the HTML inband text track spec says that

Eric: Depends how 608 and 708 captions are carried?

Cyril: SEI messages in the video track

Eric: So we wouldn't know

Cyril: It has been proposed to add this to sample groups in the init segment

Eric: That would be helpful. HLS had this problem; they handle it with every stream appearing to have a text track, but not necessarily any captions in it

Jer: In the sourcing inband tracks section of HTML. In MPEG-4 there's a requirement that the track boxes in the movie box are maintained

Jer: We'd need to update that document if that's what we want

Jer: The MSE spec requires the same number and types of all the tracks. For inband tracks, maybe you can't do ad insertion unless the ads also have matching subtitle tracks
… We'd need spec language to work around that problem

Cyril: What's the level of support for the inband track sourcing?

Jean-Yves: Not much is supported

Jer: Current solution is to demux the content just to get the 608 and 708

Eric: What do you mean, JYA?

Jean-Yves: Not supported within Media Source

Jer: How you represent a buffered range of a media source would be a question we have to answer
… If you can append fragmented VTT or IMSC, the cues are sparse, they don't have start or end time of the fragment

Eric: Would have to be parsed

Jer: One way is to not include text tracks in the buffered range

Nigel: ISOBMFF samples have known begin/end times from the box structure
… The concern is it might come from a different source, so you don't know, but you could have a data source

MarkW: An fMP4 has samples...

Cyril: Question is with plain text VTT files

Jer: If you implement HLS.js you have fragmented plain-text VTT; you need your own streaming VTT parser. Would be nice not to have to do that, just to append. We have a streaming VTT parser, but no way to expose that to the web

Eric: We should do this, and do it now

ACTION: Nigel to raise an issue against MSE, one for inband, and another for plain text

Cyril: Need to distinguish the various forms of inband
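
A hypothetical sketch of the out-of-band case from script (no engine supports text SourceBuffers today; the MIME type is illustrative):

  // Append fragmented plain-text WebVTT through MSE and let the UA's
  // streaming parser surface the cues on an inband TextTrack.
  const sb = mediaSource.addSourceBuffer("text/vtt");
  const seg = await (await fetch("captions-0001.vtt")).arrayBuffer();
  sb.appendBuffer(seg);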

EME

[Slide 32]

#251 Mixed encrypted/unencrypted content

Xiaohan: Transitioning is complicated. If the app knows it's encrypted content, set the media keys before playback starts, so the browser can set up the pipeline properly
… Setting media keys without doing a license exchange is low cost.
… This would make things easier

Joey: I'm in favour

Greg: I am also

Wolfgang: What if you don't setup the keys beforehand?

Xiaohan: Playback could fail, quality of implementation issue

Xiaohan: Looks like we have consensus, I'll update the issue
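
For reference, the low-cost setup being described: attach MediaKeys before playback and defer the license exchange until encrypted data is actually seen (standard EME calls; key system and codec strings illustrative):

  const access = await navigator.requestMediaKeySystemAccess("org.w3.clearkey", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys); // cheap: no license request yet
  video.play(); // clear content plays; the license exchange happens on the
                // first encrypted initialization data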

#132 Continuous key rotation

Xiaohan: Issue from before EME v1
… One example of key rotation is playready license chaining
… A few types of keys, one is a root key, you need to fetch during license exchange
… App sees the first initData, generates a license request, gets the root key
… subsequent init data has info in encrypted form
… The subsequent PSSHs can have different content keys
… For playback, you don't need to fetch; for the whole playback, just do the license exchange once to get the root key
… EME spec doesn't support the use case
… In previous discussion: generateKeyRequest(), not compatible with the current spec. Create a new session without requiring generateKeyRequest(), or hijack update()
… Proposal is: there are cases where root and content keys are managed in one session
… The proposal is simple: relax the requirement in generateRequest(), so it doesn't have to generate a request
… CDM can generate renewal request by itself without calling generateRequest()
… Support the single session case
… Or manage in the root key session
… [Shows the requests/responses]
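
A hypothetical sketch of the single-session flow under the proposal (fetchLicense is an illustrative helper; the relaxed generateRequest() behaviour is the proposed change, not current EME):

  const session = mediaKeys.createSession();
  session.addEventListener("message", async (event) => {
    // Only the root key involves a round trip to the license server.
    const response = await fetchLicense(event.message); // hypothetical helper
    await session.update(response);
  });
  await session.generateRequest("cenc", firstInitData); // root-key exchange
  // Subsequent PSSH boxes carry content keys encrypted under the root key;
  // under the proposal the CDM unwraps them in-session, with no further
  // license requests and no new generateRequest() call.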

Francois: Note that per charter, we can't work on new features unless they're listed in the charter. This looks new. One could argue it's maintenance; otherwise it requires rechartering

Xiaohan: Not trying to bring this into V2

Francois: EME scope can cause tension

Jer: I don't think Fairplay has something matching this
… We have a way to force a key request
… Client sends a renew message that results in a key request to the server. If fulfilled, it resets the timer. It's outside the spec
… Write an explainer, then consider during rechartering
… Our custom extension to EME to renew an expiring key

Xiaohan: There was discussion of adding a new method. Could be too intrusive?
… Bringing this up as apps will try to use similar features and do it in a hacky way
… If we have consensus on the workflow, people have some standard way to do it, then consider for the charter next year

Cyril: You send a new PSSH box, but an initial PSSH box could have multiple keys. Could that be used as an intermediate solution?

Xiaohan: Use case comes from TV people

Cyril: WOuld it work for a timeboxed live event today?

Xiaohan: That's not the normal use case for this, where you can do a normal time-based exchange

Mark: There are two different flows here? PSSH is inband?
… Why doesn't the PSSH just flow with the media data?

Joey: There's no backchannel

ACTION: Xiaohan to write an explainer for key rotation

EME reference to MPEG CENC

Xiaohan: Issue 563
… Historically, there are v1, v3, v4 versions of the CENC spec. v4 is relatively new
… Current spec references v3
… The difference: in v1, video slice headers can be encrypted, but they are in the clear with v3
… Today it looks like there's a lot of existing content using CENC v1 with encrypted video slice headers
… Causes headaches in implementation
… Need to decrypt the video slice headers and then do the decoding process. Causes implementation complexity
… Users report things don't work
… Nothing in the EME spec says we don't support CENC v1
… Can we do something in the spec to have a clear way to tell applications we don't support encrypted slice headers
… Reduce the burden of supporting CENC v1 in new implementations
… We try to support it for all H.264 implementations; complexity
… Cyril replied about MPEG discussion about feature support.
… Could be a feature detection issue
… There'll be more CENC versions in future. EME anchored in v3. How should EME move forward as new versions come out?

Cyril: MPEG is aware of the complexity of identifying the features in CENC. Looking at a way to describe them more granularly, or to have profiles
… CENC keeps growing features. So far, backwards compatible, except for that

Xiaohan: Need to look at the v4 thing

Cyril: We should talk about how MPEG can provide the signalling you need

Xiaohan: Need an API for feature detection

Cyril: Also need to update the MC API?

Xiaohan: If you have feature flags for all content, not just this, could go in MC API

Cyril: Recommend referencing MPEG specs undated, with the assumption they will be backwards compatible
… Then you can say which features are supported, but don't say the date

Xiaohan: Difficulty as maybe there aren't features
… And in this case it wasn't backwards compatible

Jer: We run on defined hardware with defined behaviour

Xiaohan: To summarise, we can look at the MPEG signalling, and add more feature detection in the EME spec

Cyril: MPEG will have an online group meeting, October 3, not restricted to MPEG members. On this issue, versioning of CENC, everyone is welcome.

WebCodecs

VideoFrame orientation

[Slide 26]

Eugene: These are old issues. One is VideoFrame orientation

Eugene: Not a new issue. We propose a similar approach to CSS: expose image data similarly to image-orientation
… Two attributes: rotation and flip
… In addition, we'd want the same attributes added to VideoFrame constructor
… CopyTo and VideoEncoder don't need to take the orientation into account. This usually gets passed in the container
… Needs to be taken into account during rendering
… We got feedback to say it's important
… It's driven by community feedback
… I'm ready to send a PR unless there are objections or comments?

[Slide 27]

Harald: WebRTC has this attribute... but doesn't expose it as metadata

Eugene: Not sure it will benefit from it, as you don't expose VideoFrames?

Harald: There are places. It should be in metadata

Eugene: We don't want to add it to VideoFrameMetadata, instead two attributes on VideoFrame
… I'll send a PR along those lines
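
A hypothetical sketch of the attributes being proposed (the rotation/flip names come from the discussion; semantics assumed to mirror CSS image-orientation, not yet in the spec):

  const canvas = new OffscreenCanvas(1280, 720);
  canvas.getContext("2d"); // ensure the canvas has a bitmap
  const frame = new VideoFrame(canvas, {
    timestamp: 0,
    rotation: 90, // assumed: degrees clockwise, as signalled by the container
    flip: false,  // assumed: horizontal mirroring
  });
  console.log(frame.rotation, frame.flip);
  // copyTo() and VideoEncoder would ignore these; they are rendering
  // metadata, applied when the frame is drawn.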

[Slide 28]

Eugene: Another longstanding issue is resource exhaustion
… Accelerated video codecs. Some apps only use if they can get accelerated encoder, or use WASM instead
… Can be a limited number of encoders. If users use more, they won't be able to
… Web developers get an encoding error. We want to tell them what's going on
… We want to provide a QuotaExceeded exception
… What we want to do is declare newly created codecs as inactive. The point of contention is at what point we declare a codec active. For me, until it provides its first output you can't ensure it's fully active
… Paul mentioned that first output from the codec might not be a good criterion

Paul: I proposed marking as active when first input is accepted

Eugene: Practically, it's not clear what that means
… You can get an error within a few milliseconds. At what point will you not get QuotaExceeded?

Paul: Doesn't matter. Have an onerror callback?

Eugene: To be defined, we'd need to say what it means for an input to be accepted?

Paul: For audio, it's step 4.1 in the algorithm

Eugene: It's not observable to users

Eugene: So it seems we agree you get the error until first output

Paul: You can mark the codec as active when it starts processing media. Consider the scenario where you're close to exhaustion
… The first starts encoding; for the second you can fire the error callback

Eugene: You're right, there are races. I'll add a note
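
A hypothetical sketch of how the exhaustion signal might surface to apps (a QuotaExceededError through the error callback is the proposal under discussion, not current spec):

  const encoder = new VideoEncoder({
    output: (chunk, metadata) => {
      // First output: under the proposal, the codec is now "active".
    },
    error: (e) => {
      if (e.name === "QuotaExceededError") {
        // Hardware encoders were exhausted before this codec became
        // active: fall back, e.g. to a WASM software encoder.
      }
    },
  });
  encoder.configure({
    codec: "avc1.640028",
    width: 1280, height: 720,
    hardwareAcceleration: "prefer-hardware",
  });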

Summary of action items

  1. cpn to update the Media Capabilities PR to refer to the main MIME type registry
  2. Nigel to raise an issue against MSE, one for inband, and another for plain text
  3. Xiaohan to write an explainer for key rotation
Minutes manually created (not a transcript), formatted by scribe.perl version 229 (Thu Jul 25 08:38:54 2024 UTC).

Diagnostics

Maybe present: Anssi, bernard, Chris, cpn, Francois, greg_f, hta, jean-yves, Jer, jernoble, Joey, markafoltz, tidoust, Timo, wschi, Xiaohan

All speakers: Anssi, bernard, Chris, cpn, eric_carlson, Francois, greg_f, hta, jean-yves, Jer, jernoble, Joey, Marcos, markafoltz, padenot, tidoust, Timo, wschi, xhwang, Xiaohan

Active on IRC: anssik, cpn, cyril, dana, eric_carlson, eugene, greg_f, jernoble, JohnRiv, louay, Marcos, markafoltz, markw, nigel, padenot, tidoust, wschi, xhwang