WebTransport/TPAC 2021

From W3C Wiki

TPAC 2021 meeting for the WebTransport Working Group

Date: Tuesday October 26

Times:

CEST 23:00 - 00:30

UTC 21:00 - 22:30

EDT 17:00 - 18:30

PDT 14:00 - 15:30

AEDT 08:00 - 09:30 (Oct 27th)

JST 06:00 - 07:30 (Oct 27th)

Meeting connection details

The meeting will be held via Webex.


Meeting number: 2554 322 8197

A chat room will be available at https://irc.w3.org/ in the #webtransport channel


Agenda

  • 5 min: Welcome
  • 35 min: Presentation on WebTransport covering goals, purpose, current API structure, code examples, and an examination of core differences between WebTransport, WebSockets, and WebRTC.
  • 10 min: Presentation on Chrome M97 support of WebTransport
  • 15 min: Presentation by the W3C Multicast Community Group Chair (10 min) and discussion (5 min)
  • 30 min: Discussion of key issues, including
    • Issue #365 - how to address real-time bidirectional A/V use cases
    • Stats
    • Prioritization of streams and datagrams
    • Connection sharing and pooling

Minutes and meeting notes

The meeting was recorded and is available here, along with an audio transcript. The recording is only available to WebTransport Working Group members; for password access, contact the co-chairs.

The full slide deck is available here.

Present: (members) Adam Rice, David Schinazi, Harald Alvestrand, Jan-Ivar Bruaroey, Martin Thomson, Philippe Le Hegaret, Victor Vasiliev, Will Law, Yves Lafon. And (non-members) Chris Needham, Daisuke Nakayama, Hiroshi Kajihata, Jake Holland, Jungkee Song, Karen Myers, Kazuhiro Hoya

  • Will presenting (slides 7-16) on WebTransport covering goals, purpose, current API structure, code examples, and an examination of core differences between WebTransport, WebSockets, and WebRTC. (slides are self-explanatory)
    • Slide 14: WebTransport transfer modes: 1) ordered & reliable (within a stream), 2) unordered & unreliable (datagrams), 3) “partial reliability” = unordered between streams but reliable (unless aborted). Used to send objects larger than one packet while avoiding head-of-line blocking.
    • Slide 15: Correction: We’re in "Working Draft” status now (not FPWD).
  • Jan-Ivar presenting (slides 17-26) going through basic code examples
    • Slide 17: New since last TPAC: Datagrams now have local back pressure sender-side, so you can measure how fast your browser is sending datagrams.
    • Slide 21: Use single stream for data you need to arrive in order. Use separate streams for data that should arrive in parallel and can arrive out of order.
    • Slide 22: You can simulate WebSocket semantics (distinct ordered & reliable messages) by e.g. wrapping text messages in JSON, requiring the receiver to parse/deserialize them.
    • Questions:
      • Harald: Is the number of bytes you can put into a stream unlimited? Jan-Ivar: Yes. Harald: Good
      • Harald: If you create two streams, there’s no way to tell the connection which one should arrive first? Jan-Ivar: That is correct, yes. Harald: So you have to bring your own sequencing (yes).
      • Harald: Is there back-pressure from congestion? That is you will be blocked from writing because the connection can’t handle it? Jan-Ivar: Yes. Harald: Thank you, I understand more.
      • Jan-Ivar (clarifying slide 21): Networking is timing-sensitive, which means messages may often appear in order and properly segmented purely because enough time passed between the arrival of packets. This is tricky because things might look fine before production, but once you go to production you see all these problems. So this is definitely a low-level API, and we’re curious how people feel about these semantics.
      • Martin (confirming): Sometimes these things get out of order, and you’re not going to see that reliably. Because it’s going to depend on things like what prioritization is being applied from the browser, and what the network does to your packets when you send them. Small messages might end up in the same packets, but they might not depending on circumstance like other networking happening.
      • Jan-Ivar: That’s a good way to say it: If you need things in order put them in the same stream. If you need things in parallel put them in different streams. And bring your own framing.
      • Bernard: On the slide that said “Difficulty HARD” (slide 26). This just shows the sender side, which pretty much works as shown. The hard part is the receiver side. You have stuff coming in out of order, which potentially may not arrive at all. You have to have a receive queue and fill in the holes, or WebCodecs will error. And if you’re sending a discardable frame, even if subsequent frames don’t depend on the frame that was lost, WebCodecs will still error today (a bug). Still a work in progress.
      • Jan-Ivar: Yes that’s true. There’s still a potential sender side issue with whether back pressure works reliably in practice with partial reliability. Need more people to play with this.
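The framing advice above (slides 21-22: a stream delivers ordered bytes, not messages, so the receiver must bring its own framing) can be sketched in plain JavaScript. Everything here is illustrative: the length-prefix layout and the names encodeFrame/FrameParser are assumptions, not part of the WebTransport API. In a real app the parser would be fed chunks read from a WebTransport stream's reader.

```javascript
const encoder = new TextEncoder();
const decoder = new TextDecoder();

// Wrap one JSON message in a [4-byte big-endian length][payload] frame.
function encodeFrame(obj) {
  const payload = encoder.encode(JSON.stringify(obj));
  const frame = new Uint8Array(4 + payload.length);
  new DataView(frame.buffer).setUint32(0, payload.length);
  frame.set(payload, 4);
  return frame;
}

// Stateful parser: feed it arbitrary chunks (as they arrive from a
// stream reader) and it returns complete messages, buffering partials
// until the rest of a frame shows up.
class FrameParser {
  constructor() { this.buffer = new Uint8Array(0); }
  push(chunk) {
    const merged = new Uint8Array(this.buffer.length + chunk.length);
    merged.set(this.buffer);
    merged.set(chunk, this.buffer.length);
    this.buffer = merged;
    const messages = [];
    while (this.buffer.length >= 4) {
      const len = new DataView(this.buffer.buffer, this.buffer.byteOffset).getUint32(0);
      if (this.buffer.length < 4 + len) break; // incomplete frame: wait for more bytes
      const payload = this.buffer.subarray(4, 4 + len);
      messages.push(JSON.parse(decoder.decode(payload)));
      this.buffer = this.buffer.subarray(4 + len);
    }
    return messages;
  }
}
```

Because the parser is indifferent to chunk boundaries, it behaves the same whether the network delivers a message in one read or split across several, which is exactly the production behavior the slide warns may not show up in local testing.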
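Bernard's receiver-side point, that out-of-order arrival means you need a receive queue that fills in the holes before handing frames to a decoder such as WebCodecs, can be sketched as a minimal reorder buffer. This is a hypothetical illustration (the class name and sequence-number scheme are invented); a production version would also need loss timeouts and skip logic for discardable frames.

```javascript
// Buffers out-of-order frames and releases only contiguous runs,
// so a downstream decoder always sees frames in sequence order.
class ReorderQueue {
  constructor() {
    this.next = 0;            // next sequence number to release
    this.pending = new Map(); // seq -> frame, held until the hole fills
  }
  // Insert a frame; returns the (possibly empty) list of frames that
  // are now deliverable in order.
  push(seq, frame) {
    if (seq < this.next) return []; // duplicate or late frame: drop it
    this.pending.set(seq, frame);
    const ready = [];
    while (this.pending.has(this.next)) {
      ready.push(this.pending.get(this.next));
      this.pending.delete(this.next);
      this.next++;
    }
    return ready;
  }
}
```

Note what this sketch deliberately omits: with partial reliability a hole may never fill, so a real receiver must eventually give up on a missing frame, which is where the WebCodecs error behavior Bernard mentioned becomes a problem.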

Will presenting (slide 27) What’s exciting about WebTransport. A chance to unify the transport and API between what have been disparate applications: Video conferencing, telephony, gaming, low-latency and live media delivery. I work at a CDN, we would much rather put WebTransport on our edge servers knowing it can satisfy all these use cases, rather than having some protocol that only satisfies a narrow niche of applications. Because it’s going to look like HTTP/3 to firewalls, proxies and network switches, this is going to greatly facilitate its reach and robustness. The web is moving to HTTP/3, and WebTransport will arrive with it. Browser support gives us billions of addressable clients.

Datagrams in JS. And when you combine WebTransport with WebCodecs and WebAssembly, you start to close the gap between what you can do with native apps and what you can do with browser-based realtime communication apps. Which is exciting. A lot to like.

Status: WD. Chrome has an intent to ship WebTransport, so we can finally experiment in the field. There will be an echo server set up for WPT. Please take these early implementations, try using them for simple proofs of concept, and give feedback to the WG. We are planning an interop event in conjunction with the IETF; it will happen as soon as we have servers available. Hope for something in Q1.

Bernard: Should have it in M97 shipping in January

  • Victor presenting (slide 30-34) Presentation on Chrome M97 support of WebTransport
    • Chrome M97 is expected to ship at the beginning of January. It will have the core features over HTTP/3, implementing the -02 version of the draft. It supports 8-bit error codes for stream resets and error messages for closing the connection, and includes full support for cleanly opening and closing the connection.
    • It is shipping to release without any origin trial, over HTTPS, in both window contexts and workers.
    • Some features are not currently shipping: write prioritization, and stats/bandwidth estimation, which could be useful for media transfer. Hash-based certificates (instead of the Web PKI) may still make it, or land in M98.
    • Implementation is fairly mature. An origin trial started in April, extended up until shipping. Before that there were different iterations of the protocol. Glad to finally see it shipping.
    • Covered by Web-Platform-Tests (WPT), using an echo server based on aioquic. https://wpt.fyi/results/webtransport
    • Server implementations: because draft -02 was implemented just last month, a very limited number of servers currently work. The earlier servers used in the Chrome origin trial are unfortunately not compatible, so we cannot point to working servers at this very moment. This should resolve itself in a couple of months.
  • Jake Holland presenting (slide 35-38) Presentation by W3C Multicast Group Chair and discussion
    • Hi, I’m chairing the Multicast Community Group, and doing some work in the IETF as well. I’ve been trying to get multicast on the web into a viable state. The main purpose is the efficiency gains (~1% of the global carbon footprint). We have a proof of concept running: a ReadableStream that can subscribe to and process UDP payloads. It doesn’t have a security model yet. We have a multicast-based product that plays video for live TV in a walled-garden situation. We ported the receiver to WebAssembly, plugged the receive path into this API, and can get it to play video today: FEC builds reliable segments that are fed into MSE. It can be a real system with the proper kind of security. The gaps are being dealt with in the IETF (slides self-explanatory). We’re aiming to get good consensus to do this safely. We believe it requires separating the encryption and authentication functions, and we want to make sure packets are authenticated before they are processed. Doing something over QUIC may be viable, and we think WebTransport datagrams are a good fit: run something that takes raw UDP packets from the sender, encapsulates them in WT datagrams, and ships them as multicast to the browser receiver, which unpacks them and does the rest of the processing. Port it to WebAssembly and you can rebuild any existing application, if we have this path available. Back-pressure has to be considered differently, since we can’t tell a multicast sender to slow down; maybe switch to a lower-rate channel or a fallback path. We’re looking at WebTransport as a phase-1 target, to port existing applications into a web context. There are 6 or 7 competing products. We think we can get large scalability gains, and once it’s running it has the scope to continue improving. My main ask is that we add multicast datagrams as a use case for WebTransport, in the roadmap.
    • Questions:
      • Bernard: How would you construct a WebTransport with a multicast address, so you can send datagrams to that address?
      • Jake: We might need to do some kind of signaling over an existing connection to establish it, so the server knows there is a multicast (S,G) that is a viable way to receive traffic, and would send a signal to the receiver to enable it to fetch the multicast traffic. It may also be possible for the receiver to specify which (S,G) it expects to receive from with some URL definition (not defined yet). The HTTP-over-QUIC multicast draft from the BBC a few years ago touches on some of these points, but probably needs updating for the WebTransport context.
  • Bernard presenting (slides 39-46): How to address real time bi-directional A/V use cases (issue #365)
    • Sending media from client to server: low-latency video ingestion, e.g. for bidirectional server-based conferencing, maybe sending simulcast or SVC. We’ll talk about some of the challenges for congestion control in those scenarios. Both involve multiplicative increase: the layers are multiplicative, so as you add layers you double the bandwidth used, which is anathema to the transport.
    • In a conventional video upload scenario, typically only one stream goes up, at the highest possible quality. But in low-latency scenarios you could be sending simulcast or SVC, in the hope of lowering latency by bypassing the transcoding step.
    • Some of the obstacles in these client-to-server use cases: JS is constrained by the encoder and congestion controls of the browser; you can always send less than what’s allowed (issue #21). There’s an issue of getting precise timing, and what could you do with the timing you get back? Average bandwidth estimation doesn’t handle the keyframe bandwidth spike. If a keyframe is lost, what do you do? Send it again and risk another spike? You could scale down resolution and try again, or make subsequent frames smaller. H.264 has long-term reference frames: send another delta frame that references a long-term reference frame, and try to recover that way.
    • The app can’t send more than the congestion window permits. In transports we have additive increase, to try to find what the available bandwidth is. But when we have drops, we have a multiplicative decrease, so how do we know when we can scale back up? If you don’t have a probing mechanism, you’re not filling your congestion window, and you never really know when you can do a multiplicative increase (re-enable the dropped layers). Example (see slide 45).
    • X = the resolution enhancement layer didn’t get through… bandwidth spike at time 0… what do I do now? I’m going to keep sending the lower resolution, but I won’t try the resolution enhancement layer. Let’s assume everything gets through with no frame loss: at what point do I decide I’m feeling lucky? At time 2 I can’t just decide to send the high-res layer, because it depends on the frame I lost at time 0. Similarly, at time 3 I can’t either; it depends on time 0 and time 2. The only thing I can do is send another keyframe, which likely won’t work any better than at time 0, unless I start lowering the resolution of my keyframe. If I keep doing the same stuff I’ll be stuck at the low resolution and never re-enable S1.
    • (Slide 46) The triangle describes what YouTube Live did. They found the WebRTC congestion control not ideal: they wanted one stream coming out at the highest quality possible, were not looking for low latency, and were willing to accept queues. This is a comparison of WebRTC and other non-low-latency approaches. We need to address the metrics issue for the case where we’re willing to send more, if we have better information, to lower the latency.
    • Questions:
      • Will: What are the solutions? Better stats?
      • Bernard: YouTube Live (slide 46) selected a different congestion control algorithm to better fit the classic video upload case. The low-latency upload case, like WISH, could use the existing WebRTC congestion control. The option might be some parameter to influence which congestion control to use: I want low latency, or I don’t, for the entire connection.
      • Jan-Ivar: Is basic video upload different from file upload?
      • Bernard: In this use case it was convenience. There are also live sporting events that fall somewhere in the middle. YouTube live didn’t need absolute lowest latency.
      • David: (expanding on YouTube uploads) “Video upload” may be somewhat of a misnomer, since this is live streaming from the person recording to the uplink. You would rather everyone have the latest data, even if there is loss.
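The additive-increase / multiplicative-decrease dynamic Bernard walked through can be shown with a toy model: the congestion window creeps up by a fixed step each round trip, but a single loss halves it, which is why re-enabling a dropped (bandwidth-doubling) enhancement layer right after a drop is risky. The function name, step sizes, and event encoding below are all illustrative, not taken from any real congestion controller.

```javascript
// Toy AIMD model: each round trip either acks (window += step) or
// signals loss (window *= factor, never below 1). Returns the window
// after each event so the sawtooth shape is visible.
function aimdWindow(events, { start = 10, step = 1, factor = 0.5 } = {}) {
  let cwnd = start;
  const history = [];
  for (const ev of events) {
    cwnd = ev === "loss" ? Math.max(1, cwnd * factor) : cwnd + step;
    history.push(cwnd);
  }
  return history;
}
```

The asymmetry is the point of the slide-45 example: after a loss, many additive steps are needed to regain the halved window, so without a probing mechanism the sender never knows when the dropped layer's extra bandwidth is safe to use again.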