This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.

Bug 23992 - WebSocket: API to apply TCP backpressure
Summary: WebSocket: API to apply TCP backpressure
Status: RESOLVED WONTFIX
Alias: None
Product: WHATWG
Classification: Unclassified
Component: HTML
Version: unspecified
Hardware: Other
OS: other
Importance: P3 enhancement
Target Milestone: Needs Impl Interest
Assignee: Ian 'Hixie' Hickson
QA Contact: contributor
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-12-04 00:12 UTC by Ian 'Hixie' Hickson
Modified: 2016-03-16 14:01 UTC
CC List: 4 users

See Also:


Attachments

Description Ian 'Hixie' Hickson 2013-12-04 00:12:01 UTC
Right now if the client is receiving data faster than it can handle it, there's no way (short of not returning from the onmessage handler) to indicate to the user agent that it should stop reading from the socket (and thus apply backpressure at the TCP level so the server stops sending so much data).

ack Michael Meier
Comment 1 Anne 2015-09-05 17:26:39 UTC
Domenic, would this be something that would be solved if we created a stream-based version of the API?
Comment 2 Domenic Denicola 2015-09-05 23:03:40 UTC
Yes. However, it's unclear what benefits a streaming version of web sockets would have over a bidirectional streaming HTTP API. E.g. when we add uploads to fetch, you could do something like

function fetchPair(url) {
  // A TransformStream with no transformer is an identity pipe:
  // whatever is written to pipe.writable comes out of pipe.readable.
  const pipe = new TransformStream();
  return fetch(url, {
    method: "POST",
    body: pipe.readable,
    duplex: "half" // required for streaming request bodies
  }).then(res => ({ readable: res.body, writable: pipe.writable }));
}

fetchPair("http://example.com/api").then(({ readable, writable }) => {
  // write to writable to send data to http://example.com/api
  // as data comes in from http://example.com/api, you can read it
  // from readable.
});

This gives basically the same capabilities as a web socket, with the addition of backpressure. But over HTTP.
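The backpressure signal the identity pipe provides can be observed directly. A minimal, runnable sketch (assuming the default queuing strategy; `TransformStream` is global in Node 18+ and in browsers), showing how `writer.desiredSize` tells a producer when to pause:

```javascript
async function demo() {
  const pipe = new TransformStream();
  const writer = pipe.writable.getWriter();
  const reader = pipe.readable.getReader();

  const before = writer.desiredSize;     // 1: one chunk of room by default
  writer.write("hello");                 // deliberately not awaited
  const during = writer.desiredSize;     // 0: queue full, producer should pause

  const { value } = await reader.read(); // consumer drains the chunk;
                                         // desiredSize will rise again
  return { before, during, value };
}
```

A producer that checks `writer.desiredSize` (or simply awaits each `write()`) before sending more data gets end-to-end flow control for free, which is exactly what the event-based WebSocket API lacks.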
Comment 3 Anne 2015-09-06 14:57:41 UTC
No, HTTP is not bidirectional. With HTTP/2 you could do it through many requests/responses, though. Not sure if folks would still want to use WebSocket then.
Comment 4 Takeshi Yoshino 2015-09-09 08:30:01 UTC
> No, HTTP is not bidirectional.

My understanding is that the HTTP protocol (including HTTP/2) technically allows full-duplex body exchange, but nothing guarantees that such communication works on the current Web.

Wenbo discussed this before in:
https://tools.ietf.org/html/draft-zhu-http-fullduplex-02

From our experience developing fetch() + Streams in Chromium, we feel there are several roadblocks we need to remove to enable full-duplex streaming and make it fast enough, e.g. caches and proxies, both inside the user agent and on the Internet.

That WebSocket uses a completely new protocol has been a disadvantage for reusing existing infrastructure, but as adoption grows it is becoming an advantage that nothing as sophisticated as HTTP happens on WebSocket frame traffic.

> With HTTP/2 you could it through many requests/responses though.

Right. With HTTP/2, the overhead and limitations that motivated the introduction of WebSocket became very small. Some say Comet-style JS libraries perform quickly enough over HTTP/2.
Comment 5 Takeshi Yoshino 2015-09-09 09:03:45 UTC
Integration between Streams and WebSocket would look like one of the following:


(1) Return a ReadableByteStream for each message received.

This doesn't allow controlling the amount to pull beyond message boundaries. Internally we could reuse the quota given for the previous message to pull the bytes of the next message, or we could add a method on the WebSocket API itself (not on the per-message stream) to control the quota for future messages. To make this work perfectly, we need to return a message object without waiting for its headers to be received.

ws.onmessagev2 = function (msg) {
  // consume msg here
};

We might want to grant a big quota immediately after the start of the connection attempt, because by the time onmessagev2 is invoked, one RTT has already passed. So:

var p = ws.receive();
// At this point we can call some flow control function on p.body if we want
p.header.then(function (header) {
  // process header
  var bodyStream = p.body;
  // start reading bodyStream
});


(2) Return a ReadableStream which streams message objects

This doesn't allow the consumer to pull only as many bytes as it can consume.
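A hypothetical sketch of this shape (the wrapSocket name is an assumption, not a proposed API). Note that with the current event-based API, enqueue can only buffer: the handler has no way to stop the socket read, which is exactly the gap this bug describes. A native implementation inside the user agent could instead stop reading the TCP socket whenever the stream's desiredSize drops to zero.

```javascript
// Wrap a WebSocket as a ReadableStream of whole messages (option 2).
function wrapSocket(ws) {
  return new ReadableStream({
    start(controller) {
      // With the event API this only buffers; a UA-native version
      // could pause the underlying socket read when the queue fills.
      ws.onmessage = (e) => controller.enqueue(e.data);
      ws.onclose = () => controller.close();
      ws.onerror = (e) => controller.error(e);
    },
    cancel() {
      ws.close();
    }
  });
}
```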


(3) Return a ReadableStream which streams a header object, and then chunks, again, a header object, and then chunks, ...

This doesn't work well with other byte streams (they don't understand the header object, so we cannot simply pipe; we have to filter the generated objects manually).
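To illustrate the manual filtering option (3) forces on consumers, here is a sketch of the demultiplexing step (the { boundary: true } marker shape is an assumption for illustration): the mixed stream of boundary markers and data chunks must be stripped before it can be piped into a plain byte sink.

```javascript
// Drop inter-message boundary markers so the remaining chunks can be
// piped to a sink that only understands raw data.
function stripBoundaries() {
  return new TransformStream({
    transform(chunk, controller) {
      if (chunk && chunk.boundary) {
        // A new message starts here. A real consumer would record this,
        // but a byte sink cannot understand it, so we drop it.
        return;
      }
      controller.enqueue(chunk);
    }
  });
}
```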
Comment 6 Domenic Denicola 2015-09-10 22:27:12 UTC
Takeshi, those seem way too complicated, so I must be missing something. Why do we have to include header objects at all? They are not currently exposed through the API and are not conceptually part of the data stream, are they? Just the text or bytes.

I am not sure whether the best chunk is a data frame or a message, but I would think the stream could take care of that for you. Then you get backpressure the normal way, as the TCP receive buffer builds up.
Comment 7 Takeshi Yoshino 2015-09-11 03:31:38 UTC
(In reply to Domenic Denicola from comment #6)
> Takeshi, those seem way too complicated, so I must be missing something. Why
> do we have to include header objects at all? They are not currently exposed

Ah, sorry for the confusion. I missed that we don't have any metadata to deliver. So we just need to insert something to mark the boundary between messages.

> through the API and are not conceptually part of the data stream, are they?
> Just the text or bytes.

Right. Sorry. But to fully deliver WebSocket semantics, we need to signal the message boundary.

> 
> I am not sure whether the best chunk is data frame or message, but I would
> think the stream could take care of that for you. Then you get backpressure
> the normal way as the TCP receive buffer builds up.

It's (2), but asking the user to use message boundaries for flow control (frame boundaries should be hidden from the API user, so messages).

If the application's messages are small enough that the WebSocket message boundary provides sufficient granularity for flow control, they can just use it. If not, they need to change the layering a bit: big application messages would be split across multiple WebSocket messages, and the semantics that were associated with the boundary would need to be encoded in a different way.
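The re-layering step could be sketched as a TransformStream that splits large application messages into smaller ones (the chunk size is an assumption; the original boundary would then have to be re-encoded separately, e.g. with a final-fragment flag, which this sketch omits):

```javascript
// Split each incoming message into fixed-size pieces so that the
// WebSocket message boundary is fine-grained enough for flow control.
function splitter(chunkSize) {
  return new TransformStream({
    transform(message, controller) {
      for (let i = 0; i < message.length; i += chunkSize) {
        controller.enqueue(message.slice(i, i + chunkSize));
      }
    }
  });
}
```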
Comment 8 Anne 2016-03-10 08:47:14 UTC
Do we still care enough about WebSocket to solve this?
Comment 9 Anne 2016-03-16 14:01:10 UTC
Please file a GitHub issue if this continues to be a problem. Or possibly switch to H2 + fetch().