This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.
It occurs to me that it would be much simpler to get rid of the MIDIMessage object altogether, and promote timestamp and data to be on the MIDIEvent.

The only thing we're buying by having a separate MIDIMessage type is the ability to timestamp each individual message; with Windows as the underlying system, we're going to be pushing a single message through at a time anyway, and with CoreMIDI underlying, we'll get MIDIPackets, which have one timestamp with potentially multiple messages.

Currently, it's implied (but contentious) that a "message" represents a single MIDI message (e.g. no more than one note-on or note-off message), and we deliver an array of messages on a single MIDIEvent. In the Windows MIDI API, messages are delivered one at a time, and are pre-parsed. CoreMIDI, on the other hand, passes MIDIPackets, where a single data array may contain multiple complete messages (it is required to have parsed them to make sure the short messages are complete, but oddly passes them with no message boundaries, forcing the app developer to re-parse them). I would prefer to do the break-this-into-MIDI-messages work once, under the hood, rather than having that break-up code written in every single MIDI app. (Note: if you want to send "custom data" over MIDI, you can and should use sysex messages, as people do today; otherwise, it's not MIDI.)

I think it best to continue to parse into messages for the JS developer, but it seems that a single timestamp per group of messages is sufficient; with that in mind, I think the right model is one of the following (sketched below):

- a MIDIEvent with a timestamp and an array of data bytes, explicitly representing one message. This has no semantic loss from the previous design, but may be "chattier" on some systems, as it will fire the event system more frequently.

- a MIDIEvent with a timestamp and a sequence of data sequences, that is, an array of arrays of data bytes (i.e., the underlying system is responsible for parsing into MIDI messages), so we are effectively sending multiple messages, all with the same timestamp. This is akin to a MIDIPacket in CoreMIDI.

Either way, I'm inclined to get rid of MIDIMessage. I prefer the first design, as it's simpler. Feedback welcome.
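For concreteness, here's roughly what a handler would see under each model (the event shape and property names are illustrative only, not from any draft):

    // Model 1: one message per event.
    input.onmessage = function (e) {
      // e.timestamp is a single timestamp; e.data holds the bytes of
      // exactly one parsed MIDI message.
      console.log(e.timestamp, e.data[0] & 0xf0); // status nibble
    };

    // Model 2: one timestamp, several messages per event.
    input.onmessage = function (e) {
      // e.data is an array of byte arrays, all sharing e.timestamp,
      // akin to a CoreMIDI MIDIPacket.
      for (var i = 0; i < e.data.length; i++)
        console.log(e.timestamp, e.data[i][0] & 0xf0);
    };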
(In reply to comment #0)
> It occurs to me that it would be much simpler to get rid of the MIDIMessage
> object altogether, and promote timestamp and data to be on the MIDIEvent.

Sounds like a great idea, at least in theory! I'd very much like the simplicity of this. However, the idea of having multiple messages with different timestamps in the same event was designed with performance as the main concern. And while I'm not terribly concerned about the performance of sending messages, since that's not a proven problem, this is.

Firing events in the browser's main thread is not just susceptible to latency (setTimeout(..., 0), anyone? or rendering tasks blocking the event), but also quite expensive. On a medium-spec desktop, having an event listener for mousemove update a text box with the coordinates can take 50% of the CPU while the user is moving the mouse. And browsers throttle mousemove events. A sequencer might easily be receiving messages a lot more often than mousemove events fire, and might do even more expensive things; I don't even know how one would start throttling the MIDI events in JS (like you can for mousemove) without incurring even more cycles.
The current design, however, allows the UA to be aware of upcoming events ahead of time and to use that information to optimize the number of events fired and when they're fired, since the current design doesn't even dictate that the MIDI messages attached to an event have the same timestamp. This might be crucial for reliable scheduling of, for example, Web Audio API events related to the MIDI messages, especially in graphics-heavy sequencer applications.
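For example, scheduling a Web Audio note from a MIDI timestamp might look roughly like this (a sketch only: it assumes the event timestamp shares a clock with performance.now(), and uses current Web Audio names):

    var audioCtx = new AudioContext();

    input.onmessage = function (e) {
      // Translate the MIDI timestamp into AudioContext time, clamping to
      // "now" if the message is already late (e.g. the thread was blocked).
      var delta = (e.timestamp - performance.now()) / 1000; // seconds
      var when = audioCtx.currentTime + Math.max(delta, 0);
      var osc = audioCtx.createOscillator();
      osc.connect(audioCtx.destination);
      osc.start(when);
      osc.stop(when + 0.1);
    };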
(In reply to comment #1)
> Firing events in the browser's main thread is not just susceptible to
> latency (setTimeout(..., 0), anyone? or rendering tasks blocking the event),
> but also quite expensive. On a medium-spec desktop, having an event listener
> for mousemove update a text box with the coordinates can take 50% of the CPU
> while the user is moving the mouse. And browsers throttle mousemove events.

Actually, that's not really the case. (The latency, of course, is why we have timestamps at all, and I don't question that.) The problem in your mouse case is typically that the text box updates are taking a ton of time, not the event firing. (At least, that's my experience; I'd welcome further insight. My Chrome eng friends assure me the event firing is quite fast. IPC can be expensive, but in this particular case, if you get the events batched up (e.g. as a packet list from CoreMIDI), you could do IPC once and then dispatch JS events multiple times.)

> A sequencer might easily be receiving messages a lot more often than
> mousemove events fire, and might do even more expensive things; I don't even
> know how one would start throttling the MIDI events in JS (like you can for
> mousemove) without incurring even more cycles.

You couldn't, really. But that's a general concern; if you take so long processing MIDI events that you start growing a backlog, that's going to happen in any case. The real question is whether adding more event callbacks is going to contribute significantly to the problem or not.

Note that on Windows, you're likely not going to be batching these anyway: short messages are delivered individually, one at a time, as messages to the message proc. On CoreMIDI, the MIDIPacket has a single timestamp, but the packet list has multiple timestamps. Realistically, though, you're not likely to see THAT much incoming data batched together.

On USB-MIDI, same thing: you're getting an incoming stream of MIDI message bytes. You (the Web MIDI implementer) can batch messages, but there's no timestamping on the incoming data, so if you do, you're likely to batch with a single timestamp.

> The current design, however, allows the UA to be aware of upcoming events
> ahead of time and to use that information to optimize the number of events
> fired and when they're fired, since the current design doesn't even dictate
> that the MIDI messages attached to an event have the same timestamp. This
> might be crucial for reliable scheduling of, for example, Web Audio API
> events related to the MIDI messages, especially in graphics-heavy sequencer
> applications.

I'm confused by this set of statements, starting with "upcoming events ahead of time". Do you intend future events to be pushed through somehow? How would you ever be able to get a lookahead? That would only seem possible with virtual ports (which we dropped as a v1 goal), or am I missing something?
(In reply to comment #2)
> Actually, that's not really the case. [...] My Chrome eng friends assure me
> the event firing is quite fast. IPC can be expensive, but in this particular
> case, if you get the events batched up (e.g. as a packet list from CoreMIDI),
> you could do IPC once and then dispatch JS events multiple times.

Good to know! My statement was based on my own empirical experience of every quick-firing event being very expensive, but it's probably just what those handlers do that's expensive. Still, I doubt it's less expensive to fire a lot of events instead of batching them into one.

> I'm confused by this set of statements, starting with "upcoming events ahead
> of time". Do you intend future events to be pushed through somehow? How
> would you ever be able to get a lookahead?

Heheh, haven't you heard of my new AI algorithm that predicts any future MIDI event from the first two?
;) Jokes aside, I was referring to the UA having heuristics (e.g. around requestAnimationFrame) to determine when the main thread is going to be blocked for a serious amount of time. For example, say there's a drawing operation that triggers a reflow. That can take a serious amount of time, and during it MIDI messages may have piled up; they can then be batched into a single JS array and sent as one event. The memory footprint is smaller, and the messages are more likely to be processed sooner rather than later (what if, while all the separate events were being scheduled to fire, another task, such as another drawing callback, got added to the event queue? Then the processing of the MIDI messages would be interleaved with, and delayed by, the drawing operation).

Anyway, if you have cases where the current model would perform worse than the one you suggested, please let me know. Actually, the model you suggested was also on my mind when I stripped down the MIDIMessage interface to just data and timestamp, but I thought the current model gives more breathing room for potential optimizations.

This discussion actually reminds me... we should give some thought to exposing the MIDIAccess interface to a Web Worker. Outside the main thread, none of my concerns would be pressing enough to justify making the behavior a bit more complex.
> But I doubt it's less expensive to fire a lot of events instead of
> batching them into one.

Of course not! But I do note that this pattern will force every onmessage handler to be a loop:

    for (var i = 0; i < e.MIDIMessages.length; i++)
      processMIDIMessageData(e.MIDIMessages[i].data);

when on at least some systems (Windows), you'll only ever receive (and thus dispatch to the Web MIDI API) one message at a time. In fact, it wouldn't surprise me to see some code end up skipping that loop and dereferencing [0], even though that's wrong, because it will work (on Windows, at least).

Additionally, in the OS X and USB-MIDI implementations, we've set it up so the data stream should be deconstructed into MIDI messages, which means they'll have to copy the data into a different structure anyway.

In short: I agree with you that it's at best a wash in terms of performance (to not pass multiple messages in a single call), and at worst slightly worse. It depends on your implementation, of course, and how fast your event firing is, but my understanding is that event firing itself is quite fast, and the performance issues come from forcing relayouts in event handlers (or from crossing thread boundaries repeatedly, which shouldn't be the issue here).

However, I do think the pattern simplifies things quite a bit for the MIDI web developer, which is why I brought it up at all. The less we have in the v1 API, the easier it is to pick up and understand.

> Anyway, if you have cases where the current model would perform worse than
> the one you suggested, please let me know.

We could always, of course, add a more structured "onmessages" handler that takes batches later.

> This discussion actually reminds me... we should give some thought to
> exposing the MIDIAccess interface to a Web Worker.

I don't actually see a lot of problems with doing that. I'm not sure I'd want to sign up for it quite yet, as I'd like to get an initial implementation or two first, but logistically I'm not overly worried. I would want us to map out scenarios for this, however, as I'm not sure you have access to all the other APIs you might want to make it interesting inside a worker. (I.e.: sure, you could get a MIDI message in a worker. Now what do you want to do with it?)
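To make the "onmessages" idea above concrete, here's how the two could coexist someday (shapes purely hypothetical; an app would typically register one handler or the other):

    // Simple path: one message per event, as proposed in this thread.
    input.onmessage = function (e) {
      processMIDIMessageData(e.data); // no loop needed
    };

    // Possible future batched path, if an "onmessages" handler is added.
    input.onmessages = function (e) {
      for (var i = 0; i < e.messages.length; i++)
        processMIDIMessageData(e.messages[i].data);
    };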
(In reply to comment #4)
> when on at least some systems (Windows), you'll only ever receive (and thus
> dispatch to the Web MIDI API) one message at a time.

Like I said, this may not be true even for Windows, because you might receive several messages while the main thread is blocked, and these can be stacked into a single event.

> I would want us to map out scenarios for this, however, as I'm not sure you
> have access to all the other APIs you might want to make it interesting
> inside a worker. (I.e.: sure, you could get a MIDI message in a worker.
> Now what do you want to do with it?)

Yeah, it's probably not something we should worry about right now. Although, even without any interesting APIs, you can do stuff like arpeggiating MIDI messages, which would be more reliable if done in a worker.
(In reply to comment #5)
> Like I said, this may not be true even for Windows, because you might
> receive several messages while the main thread is blocked, and these can
> be stacked into a single event.

How are they stacked into a single event? We must not be looking at the same MIDI APIs. AFAICT, even in the questionably-deprecated DirectMusic MIDI API, short (non-sysex) messages are sent back one at a time: http://directmidi.sourceforge.net/documentation/creceiver.htm#CRRecv2. Note that I haven't developed Windows MIDI software for a long time, though, so if I'm missing the right API, please point me in the right direction. (In the old-school Windows MIDI APIs, messages are sent via MM_MIM_DATA messages, also one at a time.)

> Yeah, it's probably not something we should worry about right now. Although,
> even without any interesting APIs, you can do stuff like arpeggiating MIDI
> messages, which would be more reliable if done in a worker.

That's an excellent example!
(In reply to comment #6)
> How are they stacked into a single event? We must not be looking at the
> same MIDI APIs.

What I was referring to has nothing to do with how many messages the underlying API gives the implementation. Let's say there's a layout operation that takes 250 ms. During that time the underlying API relays multiple messages to the implementation. Now the implementation can decide whether it fires these messages in separate events or batches the messages into one event. The same as a silly pseudocode implementation:

    bool isEventScheduled = false;
    vector<MIDIMessage> messages;

    // Called by the underlying MIDI API for each incoming message.
    void onmidimessage(MIDIMessage msg) {
      messages.push_back(msg);
      if (isEventScheduled)
        return;
      isEventScheduled = true;
      // Defer firing until the main thread is free again; any messages
      // arriving in the meantime get batched into the same event.
      JS.onNextTick([] {
        isEventScheduled = false;
        midiPort.triggerEvent("message", { MIDIMessages: messages });
        messages.clear();
      });
    }
(In reply to comment #7)
> Now the implementation can decide whether it fires these messages in
> separate events or batches the messages into one event.

Hmm, I see. I'm not sure whether letting the MIDI implementation coarsen the granularity like that is a plus or a minus; you should be able to drive multiple messages through before releasing the main thread, anyway.
I like the proposal of getting rid of MIDIMessage. I can't access the prototype spec (server problem?), but will try again tomorrow to review it.
I reviewed the proposed spec. I like this simplification a lot. Today's computers/devices are so fast that the theoretical performance gain of delivering multiple messages at once is not significant. And the timestamp will ensure that there is no ambiguity for simultaneous events.
(In reply to comment #9)
> I like the proposal of getting rid of MIDIMessage. I can't access the
> prototype spec (server problem?), but will try again tomorrow to review it.

The repo hosting seems to be down for me, too. But to the point:

(In reply to comment #8)
> Hmm, I see. I'm not sure whether letting the MIDI implementation coarsen the
> granularity like that is a plus or a minus; you should be able to drive
> multiple messages through before releasing the main thread, anyway.

Here's the thing: it's a tradeoff. There's really no right or wrong choice here, and I'm not sure of my own preference, but I want us to make the decision informed.

The single-message-per-event model is far simpler, in terms of both implementation and usage. However, simplicity is often a tradeoff against power, and in this case on both of those fronts. On the implementation side, the batched model allows the implementation to take some liberties to optimize CPU usage (hopefully for the good; it would be sad to see an implementation as unreliable as setTimeout, for example), although not necessarily very significant ones.

As for the (API) user, when things don't go as planned (a re-layout takes a significant amount of time, blocking messages), the batched model gives you better tools to recover from the situation, because you have more information at hand. You can detect that you missed, for example, a noteon as well as its corresponding noteoff, and react in whatever way you want, say by not playing back that note but still recording it to a MIDI file. In the single-message model you don't know about the other messages you already missed, so you'll probably play a weird stub of a sound, off its mark. (See the sketch below.)

The complexity overhead for the API user is never to be underestimated either; in this case it means, at best, wrapping your MIDI message handler in a loop, i.e. another level of indentation, and at worst implementing countermeasures for missed messages.

Now, if we were designing this API for a (relatively) safe environment (e.g. a worker), the choice would be easy, but instead this API is most likely to be used in an extremely unstable environment: the browser main thread. So, what's it going to be? Like I said, I'm unsure.
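A rough sketch of the kind of recovery the batched model enables (the event shape follows the current draft's MIDIMessages array; the record/play helpers and the 100 ms staleness cutoff are made up for illustration):

    input.onmessage = function (e) {
      var now = performance.now();
      var pendingOn = {}; // note number -> noteon seen earlier in this batch
      for (var i = 0; i < e.MIDIMessages.length; i++) {
        var msg = e.MIDIMessages[i];
        var status = msg.data[0] & 0xf0;
        var note = msg.data[1];
        var isOn = status === 0x90 && msg.data[2] > 0;
        var isOff = status === 0x80 || (status === 0x90 && msg.data[2] === 0);
        record(msg); // always keep the message for the MIDI file
        if (isOn) {
          pendingOn[note] = msg; // defer the playback decision
        } else if (isOff && pendingOn[note]) {
          // The noteon and its noteoff were both delayed into this batch.
          if (now - msg.timestamp < 100) {
            play(pendingOn[note]); // not too stale: play both, in order
            play(msg);
          } // otherwise skip the stale stub of a sound entirely
          delete pendingOn[note];
        } else {
          play(msg);
        }
      }
      for (var n in pendingOn) play(pendingOn[n]); // notes still held
    };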
Hi Jussi,

OK, I see the point. In particular, it can make sense to coalesce control change messages (for many MIDI receivers, only the last control value is significant, so you can skip earlier, late messages). But for any such removal of messages, you must know how the receiver interprets the data. For example, if the control change is interpreted as a relative movement (a delta), as rotary encoders send, you'd better not coalesce such messages.

Still, it's quite an advanced usage, I'd say. Rather than cluttering up all webmidi receivers, we could provide a method on MIDIInput like "getQueuedMessageCount", which tells you whether there are still messages immediately following the current one. The basic webmidi receivers would not use it and would benefit from the simplifications proposed by Chris. And the advanced applications could cache all queued messages until there are no queued messages anymore, and then process them at once, possibly eliminating redundant or outdated messages (a sketch follows below).

Would that be a solution?
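Roughly how an advanced application might use it (getQueuedMessageCount is only a proposal at this point, single-message events are assumed, and the process helper is made up):

    var queued = [];

    input.onmessage = function (e) {
      queued.push(e);
      // Keep caching while more messages are immediately pending.
      if (input.getQueuedMessageCount() > 0)
        return;
      // Find the latest queued value per channel/controller pair.
      var lastCC = {};
      for (var i = 0; i < queued.length; i++) {
        var d = queued[i].data;
        if ((d[0] & 0xf0) === 0xb0) // control change (absolute values assumed!)
          lastCC[(d[0] & 0x0f) + ":" + d[1]] = i;
      }
      // Process everything, dropping control changes that a later queued
      // value for the same controller makes outdated.
      for (var i = 0; i < queued.length; i++) {
        var d = queued[i].data;
        if ((d[0] & 0xf0) === 0xb0 && lastCC[(d[0] & 0x0f) + ":" + d[1]] !== i)
          continue;
        process(queued[i]);
      }
      queued = [];
    };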
(In reply to comment #12)
> Rather than cluttering up all webmidi receivers, we could provide a method
> on MIDIInput like "getQueuedMessageCount", which tells you whether there are
> still messages immediately following the current one.
>
> Would that be a solution?

Or, as Chris suggested, we could go with the simplified single-message proposal for now, and consider adding an additional event handler called "onmessages" in a future version, letting the developer choose the tradeoff.
Yes, that will work, too (even now).
Resolved on the teleconference (http://www.w3.org/2012/12/05-audio-minutes.html): we'll remove the MIDIMessage interface and simplify as noted in this thread, with the understanding that if we need to, for real-world reasons (e.g. performance, also as noted in this thread), we can add another API (the "onmessages" event) to fill that gap.
Fixed: https://dvcs.w3.org/hg/audio/rev/c19d385a839f.
(In reply to comment #16)
> Fixed: https://dvcs.w3.org/hg/audio/rev/c19d385a839f.

Looks good. Closing.