If the connection is slow, it is probably quite easy to fill the send buffers,
which would (possibly) per spec lead to closing the connection.
That is bad: the connection is working, just a bit slow.
If send() can't immediately process the data (e.g. buffer it),
it should, IMO, either throw an exception or, perhaps better, return false.
I think throwing is better than returning false. We had a boolean return value for send() before, but it was removed because we didn't see how it would be useful.
Failing to send is an unexpected situation. It will be easier to debug for authors with an exception compared to a return value.
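To illustrate the debuggability argument, here is a minimal sketch of a send() that throws when its buffer is full. ThrowingSocket and its maxBuffer limit are invented for illustration and are not spec text:

```typescript
// Hypothetical socket illustrating the "throw on full buffer" style.
class ThrowingSocket {
  private buffered = 0;
  constructor(private readonly maxBuffer: number) {}

  send(data: string): void {
    if (this.buffered + data.length > this.maxBuffer) {
      // Unlike a false return value, this cannot be silently ignored:
      // it either hits a try/catch or surfaces in the page's error reporting.
      throw new Error("send buffer full");
    }
    this.buffered += data.length; // pretend the data was queued
  }
}
```

An author who never anticipated the failure still gets a visible error in the console, which is the debugging advantage argued for above.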
This has been brought up a large number of times in the past.
The initial assumption is that authors will not be expecting the implementation to run out of buffer space and will thus not check for errors. I think this is a reasonable assumption.
I've never seen anyone using XHR to write code that attempts to deal with the implementation running out of buffer space. Neither the buffer for sending nor the buffer for receiving. Nor have I seen script dealing with OOM errors when creating new JS objects or when concatenating strings.
So, under that assumption, what is the best API to deal with that situation? First of all, we should require implementations to have a very large sending buffer. Anything less will result in more intermittent errors, which means more intermittent bugs in pages.
Second, throwing an exception would likely have very bad consequences. I agree that throwing an exception makes debugging easier, but only if the exception happens in front of the developer. An intermittent error that happens very rarely, likely mostly when there are connectivity issues, is unlikely to be seen by the developer.
So throwing an error will just mean that whatever the script does after the call to websocket.send() will intermittently not happen.
So how about returning a boolean value? The problem is that unless the script checks the boolean value, packets of data will intermittently "get lost", since they are never put in the sending buffer.
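The silent-loss hazard can be sketched directly. LossySocket is invented for illustration and implements the rejected return-false behavior:

```typescript
// Hypothetical socket that returns false instead of buffering when full.
class LossySocket {
  delivered: string[] = [];
  private buffered = 0;
  constructor(private readonly maxBuffer: number) {}

  send(data: string): boolean {
    if (this.buffered + data.length > this.maxBuffer) return false;
    this.buffered += data.length;
    this.delivered.push(data);
    return true;
  }
}

// A caller that ignores the return value, as most real-world code would:
const sock = new LossySocket(8);
for (const msg of ["seq1", "seq2", "seq3"]) {
  sock.send(msg); // return value dropped on the floor
}
// sock.delivered is now ["seq1", "seq2"]: "seq3" vanished with no signal.
```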
Having intermittent message loss seems very bad for data integrity. Who knows what the application logic does if a message is simply missing, especially if this happens rarely enough that the developer likely hasn't tested for it.
We chose TCP exactly because it provides integrity guarantees; otherwise we could have built WebSocket over UDP.
This leaves us with the option of simply closing the connection. This seems much less likely to cause "random" behavior on the server due to dropped messages. It also seems more likely to get testing since TCP connections do drop at times.
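Independently of what the spec decides here, authors can already bound the problem by polling the WebSocket's standard bufferedAmount attribute before queuing more data. A minimal helper sketch follows; sendWithBackpressure and the highWaterMark threshold are invented names, while bufferedAmount and send() are real WebSocket members:

```typescript
// Anything with the two WebSocket members we rely on.
interface BufferedSender {
  bufferedAmount: number; // bytes queued by send() but not yet transmitted
  send(data: string): void;
}

// Queue messages only while the unsent backlog stays under highWaterMark;
// return how many messages were actually queued.
function sendWithBackpressure(
  sock: BufferedSender,
  messages: string[],
  highWaterMark: number
): number {
  let queued = 0;
  for (const m of messages) {
    if (sock.bufferedAmount >= highWaterMark) break; // stop before overflowing
    sock.send(m);
    queued++;
  }
  return queued;
}
```

The caller can retry the remaining messages later, e.g. from a timer that re-checks bufferedAmount.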
On IRC Jonas suggested that perhaps we could add a new event which is fired
when send() fails because the buffer is full.
The default handling for the event would be to cut the connection, but
calling .preventDefault() would let the web app handle the case itself.
That sounds like a reasonable solution to me.
Such an event would also be a nice complement to the event I proposed in bug 15210.
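Jonas's proposal maps naturally onto the DOM's cancelable-event pattern. A sketch using the standard EventTarget/Event machinery follows; the "bufferfull" event name and the OverflowingSocket class are invented, not spec text:

```typescript
// Sketch of the proposed API: fire a cancelable event when send() would
// overflow; cut the connection unless a handler calls preventDefault().
class OverflowingSocket extends EventTarget {
  closed = false;
  private buffered = 0;
  constructor(private readonly maxBuffer: number) { super(); }

  send(data: string): void {
    if (this.buffered + data.length > this.maxBuffer) {
      const ev = new Event("bufferfull", { cancelable: true });
      // dispatchEvent returns false when a listener called preventDefault().
      const useDefault = this.dispatchEvent(ev);
      if (useDefault) this.closed = true; // default action: cut the connection
      return;
    }
    this.buffered += data.length;
  }
}
```

A page that wants to handle overflow itself would do sock.addEventListener("bufferfull", e => e.preventDefault()) and, say, retry later.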
Another possibility would be to have a WebSocket attribute ('errorHandlingType') that determines the type of error handling used: it could default to the current behavior (silently failing the WebSocket), with options to return an error, throw an exception, or fire an 'onempty' event.
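That attribute idea can be sketched as a mode switch on a hypothetical socket. The errorHandlingType values and the ModalSocket class are invented for illustration; the "close" default follows the connection-cutting behavior discussed above:

```typescript
type ErrorHandlingType = "close" | "return" | "throw";

// Hypothetical socket whose overflow behavior is selected by an attribute.
class ModalSocket {
  errorHandlingType: ErrorHandlingType = "close"; // assumed default
  closed = false;
  private buffered = 0;
  constructor(private readonly maxBuffer: number) {}

  send(data: string): boolean {
    if (this.buffered + data.length > this.maxBuffer) {
      switch (this.errorHandlingType) {
        case "throw": throw new Error("send buffer full");
        case "return": return false;
        case "close": this.closed = true; return false;
      }
    }
    this.buffered += data.length;
    return true;
  }
}
```

The downside, as noted for the boolean design, is that two of the three modes still rely on the author opting in and checking the result.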
Do we have any data on whether WebSockets ever run out of buffer in the wild?
The event approach proposed in comment 3 seems the best to me so far, if we need to do something. But without data on whether this ever actually happens, I don't think we should add it yet.
This should probably be done at the same time as bug 15210.
Marking this and bug 15210 as REMIND for now. Will reopen in a few months and see whether we've found that it's a real problem or not.