Bugzilla – Bug 23332
Support Binary Keys
Last modified: 2014-02-24 12:39:45 UTC
Suggested by Joran Greef <email@example.com>
Mailing list discussion:
Summary: Seems like a good idea, but not for V1. Also tracked on:
Marking RESOLVED=LATER since we're not doing this in v1, but we can dump ideas here.
* APIs that accept keys accept any kind of ArrayBufferView
* Raw ArrayBuffer is not accepted as an input type
* The input type is not retained; i.e. it doesn't matter if you pass in a Uint8Array, a Float64Array or a DataView, it's just the backing bytes (as seen through the view's offset/byteLength) that form the key
* APIs that emit keys return a new Uint8Array backed by a new ArrayBuffer
* Binary keys sort between strings and arrays
* Binary keys are compared like strings/arrays: bytewise lexicographic comparison; if one key is a prefix of the other, the longer key sorts greater
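To make the proposed ordering concrete, here's a sketch of the comparison as described above (illustrative only, not the spec'd algorithm; `compareBinaryKeys` is a made-up name):

```javascript
// Sketch: bytewise comparison of two Uint8Array keys.
// Negative = a sorts first, positive = b sorts first, 0 = equal.
function compareBinaryKeys(a, b) {
  const len = Math.min(a.length, b.length);
  for (let i = 0; i < len; i++) {
    if (a[i] !== b[i]) return a[i] < b[i] ? -1 : 1;
  }
  // All shared bytes equal: the longer key is greater.
  return a.length - b.length;
}
```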
I think it would be confusing to accept a Float64Array, but then not sort according to float values.
Why not restrict to ArrayBuffers and Uint8Array?
Though it also feels strange to accept one type and then return another, so maybe limit to Uint8Array?
(In reply to Jonas Sicking from comment #3)
> I think it would be confusing to accept a Float64Array, but then not sort
> according to float values.
That's exactly why I tossed the straw-man up. :) It felt a little odd when I was prototyping it.
> Why not restrict to ArrayBuffers and Uint8Array?
> Though it also feels strange to accept one type and then return another, so
> maybe limit to Uint8Array?
I'd be fine with that restriction.
I think for the TextDecoder API we accept any type with the semantics I described (i.e. just consider the input type a byte buffer view, ignore the actual type), and ISTR there was discussion about moving away from consuming raw ArrayBuffers. We should probably evolve some consistency here.
I'll restrict our prototype to Uint8Array since it's easy to relax that later.
Thanks Joshua for filing the bug and getting discussion going.
Restricting to Uint8Array sounds like a good idea to start.
If it helps, one likely scenario for people using binary keys might be storing a few gigabytes in IDB, with some kind of sync process between client and server, starting with an initial download sync to initialize the client. On top of that the connection would be a binary WebSocket, and the key and value might be streamed to the client one after the other, i.e. a fixed-size key followed by the value, within the same WebSocket message.
In this scenario, if one had to pass the key in as a standalone Uint8Array, that would mean slicing, millions of small objects being created and released, and GC pressure, especially on mobile devices.
Therefore it would be useful to be able to pass the binary key in as an offset and size into an existing Uint8Array which might contain other data (i.e. the value itself), without forcing the end-user to have to slice that existing Uint8Array first. If there's no offset and size argument when passing in the binary key, then the offset would be 0 and the size would be the length of the Uint8Array.
(In reply to Joran Greef from comment #5)
> Therefore it would be useful to be able to pass the binary key in as an
> offset and size into an existing Uint8Array which might contain other data
> (i.e. the value itself), without forcing the end-user to have to slice that
> existing Uint8Array first. If there's no offset and size argument when
> passing in the binary key, then the offset would be 0 and the size would be
> the length of the Uint8Array.
A Uint8Array is already a view onto an ArrayBuffer. If you have an existing large Uint8Array called |big| you can use:
var slice = new Uint8Array(big.buffer, offset, length);
... to specify a subset without making a copy.
Behind the scenes, an IDB implementation is likely going to make a copy of the bytes of the key, but a caller should be able to do:
store.put(big, new Uint8Array(big.buffer, offset, length));
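Putting the streaming-sync scenario together, something like this should work without copying any bytes up front (a sketch: `KEY_SIZE` and `splitRecord` are hypothetical framing choices for illustration, not part of any spec):

```javascript
// Assume each binary WebSocket message is one record: a fixed-size
// key followed by the value. KEY_SIZE is a hypothetical choice.
const KEY_SIZE = 16;

function splitRecord(message /* ArrayBuffer */) {
  // Both results are views over the same buffer: no bytes are copied.
  const key = new Uint8Array(message, 0, KEY_SIZE);
  const value = new Uint8Array(message, KEY_SIZE);
  return { key, value };
}
```

The caller could then do `store.put(value, key)`; the implementation may still copy the key's bytes internally, but the caller avoids allocating a sliced copy per record.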