Specification: http://www.whatwg.org/specs/web-apps/current-work/multipage/commands.html
Section: http://www.whatwg.org/specs/web-apps/current-work/complete.html#stream-api
Comment: I need to send live data from a device to a URL on a remote system. It would be nice to connect a WebSocket, but simple access to a buffer callback to package and send data to a remote URL would do. Only having access to the data after recording has closed interrupts asynchronous processing of the audio/video data.
Posted from: 12.116.138.30
EDITOR'S RESPONSE: This is an Editor's Response to your comment. If you are satisfied with this response, please change the state of this bug to CLOSED. If you have additional information and would like the editor to reconsider, please reopen this bug. If you would like to escalate the issue to the full HTML Working Group, please add the TrackerRequest keyword to this bug, and suggest title and text for the tracker issue; or you may create a tracker issue yourself, if you are able to do so. For more details, see this document: http://dev.w3.org/html5/decision-policy/decision-policy.html
Status: Did Not Understand Request
Change Description: no spec change
Rationale: I don't understand. Could you elaborate?
The last draft I saw only had provision for spooling device data (in my case sound), with a spool file that can be accessed for the binary data once recording is stopped. My application requires a live stream of buffered recorded audio (voice) to be sent to a remote server, where the audio data can be immediately and asynchronously processed and an asynchronous result sent back to the client for each time segment of the continuous live stream. My application may send live data for anywhere from a few seconds to several minutes at a time; at the longer end, the file-based API would be unacceptable.
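(For illustration, a minimal sketch of the flow being requested, written against the later getUserMedia, MediaRecorder and WebSocket APIs rather than the draft under discussion; the wss://example.org/ingest endpoint and the 250 ms timeslice are purely hypothetical. Each buffered chunk is pushed to the server as it becomes available instead of being read from a spool file after recording stops.)

```ts
// Sketch only: assumes the (later) MediaRecorder and WebSocket APIs and a
// hypothetical wss://example.org/ingest endpoint that accepts binary frames.
async function streamMicrophone(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const socket = new WebSocket("wss://example.org/ingest");
  socket.binaryType = "arraybuffer";

  await new Promise<void>((resolve, reject) => {
    socket.onopen = () => resolve();
    socket.onerror = () => reject(new Error("socket failed"));
  });

  // Emit a buffered chunk every 250 ms instead of one blob at stop time,
  // so the server can process each segment asynchronously while recording runs.
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // each Blob goes out as a binary frame
    }
  };
  recorder.start(250);
}
```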
Ah, I see. Yeah, we'll need to support this.
Moving this to WebSockets. This will likely not be supported for some time. In the meantime, a server could just implement PeerConnection if you're willing to do unreliable audio over UDP (as opposed to reliable but potentially high-latency audio over TCP).
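(A rough sketch of the PeerConnection alternative suggested above, assuming the server itself terminates the peer-to-peer protocol; the RTCPeerConnection shape shown is the later standardized API, and the /signal endpoint is a hypothetical server that answers SDP offers. Once connected, the audio travels over UDP rather than TCP.)

```ts
// Sketch only: RTCPeerConnection as later standardized, with a hypothetical
// /signal endpoint on a server that accepts an SDP offer and returns an answer.
async function sendAudioOverPeerConnection(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const pc = new RTCPeerConnection();

  // Attach the live microphone track; media flows over UDP once connected.
  stream.getAudioTracks().forEach((track) => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hand the offer to the server and apply its answer (signalling is app-defined).
  const response = await fetch("/signal", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pc.localDescription),
  });
  await pc.setRemoteDescription(await response.json());
}
```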
Why are there both PeerConnection and WebSocket APIs? They have a great deal in common (the APIs, not the protocols). Could we just have one API that can use several communication channels/protocols?
Marking this "LATER" for now, because we need more implementation experience with binary data and multiplexing in WebSocket before we add this. Regarding comment 5, the APIs are as similar as makes sense, but in practice the peer-to-peer and client-server problems have quite different needs, and I don't think it would make sense to use the same API for both.
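(On the binary-data point, a minimal sketch of what client-side binary WebSocket frames could look like, assuming ArrayBuffer support; the 4-byte sequence-number prefix is purely illustrative framing, not anything WebSocket or a multiplexing extension defines.)

```ts
// Sketch only: assumes ArrayBuffer support on WebSocket and an illustrative
// framing where each audio chunk carries a 32-bit sequence number so the
// server can reorder or detect gaps.
let sequence = 0;

function sendChunk(socket: WebSocket, chunk: ArrayBuffer): void {
  const frame = new ArrayBuffer(4 + chunk.byteLength);
  new DataView(frame).setUint32(0, sequence++); // illustrative sequence header
  new Uint8Array(frame, 4).set(new Uint8Array(chunk));
  socket.send(frame); // sent as a single binary frame over TCP
}
```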
EDITOR'S RESPONSE: This is an Editor's Response to your comment. If you are satisfied with this response, please change the state of this bug to CLOSED. If you have additional information and would like the editor to reconsider, please reopen this bug. If you would like to escalate the issue to the full HTML Working Group, please add the TrackerRequest keyword to this bug, and suggest title and text for the tracker issue; or you may create a tracker issue yourself, if you are able to do so. For more details, see this document: http://dev.w3.org/html5/decision-policy/decision-policy.html
Status: Accepted
Change Description: enabled UDP-based client-to-server stream transfer using PeerConnection.
Rationale: WebSockets wasn't really the right solution, as it's TCP-based and media really needs UDP.