This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
Blob objects should be immutable: their data and size should never appear to change. Blob.close() should cause reads of the blob to behave as though a network resource is no longer available. This fulfills the goal of Blob.close() (discarding the underlying storage) without losing the immutability of the Blob interface itself.

Similarly, blob.slice() on a closed Blob shouldn't throw. Just create a new Blob exactly as you would if the original blob weren't closed, and mark the new blob closed. This is also consistent with bug 24576 (making createObjectURL not throw).

Another benefit of these changes is that the effects of Blob.close() are narrowed: its only script-visible effect is during fetch, instead of lots of little side effects scattered across the whole Blob API. That makes the API simpler for developers.

It may also make it easier to add a later Blob.close() variant that also closes all slices of the blob. Right now it's actually fairly hard to successfully close a Blob, since you need to close every sub-slice that you create, too. Doing a Blob.close({includingSlices: true}) may be easier later if we eliminate these synchronous side effects, since this could cross workers.

(Also related to bug 25240.)
I don't think we'll ever be able to cause all slices to be closed. For example, if a Blob is backed by a file, and you postMessage a slice of that Blob to another thread/process, it seems hard to make sure any reads from the slice fail. It would mean that any attempt to read from the blob couldn't just be implemented by opening the file and reading from it. Instead you have to proxy a message to the thread which the blob was originally sliced from and see if the blob has been closed. This might involve bouncing to multiple threads, since you can create slices from slices. This seems terrible performance-wise.

Also, why are you specifically calling out slices? What about things like |x = new Blob([myblob]); myblob.close();|? Would that also cause x to get closed? If not, wouldn't you be stuck with the same problem as you have with slices? So I don't think we should change the behavior of .close() for that reason.

That said, I wouldn't mind changing Blob.close() such that it causes fewer other operations to fail. Making Blob.slice() on a closed blob not throw, and instead return a closed Blob, seems fine and creates fewer exceptional cases, which is good.

I don't really feel strongly about whether the size should be set to 0. It does seem good to have a way to indicate that a blob is actually closed: you shouldn't need to attempt to read from it and see if you get an error back. However, setting the size to 0 isn't a great way to do that, since you still can't tell a closed blob from one that is truly empty.
(In reply to Jonas Sicking from comment #1)
> I don't think we'll ever be able to cause all slices to be closed. For
> example, if a Blob is backed by a file, and you postMessage a slice of that
> Blob to another thread/process it seems hard to make sure any reads from the
> slice fail. It would mean that any attempts to read from the blob couldn't
> just be implemented by opening the file and reading from it. Instead you
> have to proxy a message to the thread which the blob was originally sliced
> from and see if the blob has been closed. This might involve bouncing to
> multiple threads since you can create slices from slices. This seems
> terrible performance-wise.

I think my mental image of what would happen was based on a view of the top-level blob. That part's easy, since the closed state can be written to the file on disk as metadata; other threads can just look at that, and not communicate directly. But that doesn't work when closing slices, since you'd only want to close that slice and its sub-slices, not the "parent" blob. (This could probably be done with nested blobs, but it'd be a lot more complex and much more like IPC.)

> Also, why are you specifically calling out slices? What about things like
> |x = new Blob([myblob]); myblob.close();|? Would that also cause x to get
> closed? If not, wouldn't you be stuck with the same problem as you have with
> slices?

That's probably an issue, too. But the "close all sub-blobs" solution may not make sense there, since unlike slicing, nested blobs don't have a tree structure. It's not so obvious whether closing the containing blob should close the inner one. So maybe the solution is something else.

> That said, I wouldn't mind changing Blob.close() such that it causes fewer
> other operations to fail.
>
> Making calling Blob.slice() on a closed blob not throw and instead return a
> closed Blob seems fine and creates fewer exceptional cases which is good.
> I don't really feel strongly about if the size should be set to 0. It does
> seem good to have a way to indicate that a blob is actually closed. You
> shouldn't need to attempt to read from it and see if you get an error back.
> However setting the size to 0 isn't a great way to do that since you still
> can't tell a closed blob from one that is truly empty.

Exposing a separate Blob.closed property would be better than making it appear as though the Blob's data were mutable, I think. One simple example: a site displays "Processed: 450/5000 bytes", updating as it processes the file:

refresh() { processed.innerText = totalCompleted + "/" + blob.size; }

If the user cancels, it closes the blob, stops the operation, and refreshes one more time, which causes it to say "Processed: 450/0 bytes". It's just one more thing to have to work around.

(Structured clone should probably also just clone a new closed blob. That's in HTML, so I'll only file a bug on that if the close()-based exceptions in File API itself go away.)
(In reply to Glenn Maynard from comment #0)
> Blob objects should be immutable, as if their data and its size never
> change. Blob.close() should cause reads to the blob to behave as though a
> network resource is no longer available. This fulfills the goal of
> Blob.close (discarding the underlying storage), without losing immutability
> of the Blob interface itself.

Behaving as if a network resource is no longer available is the right behavior, but only for Blobs that are accessed by network APIs. This behavior sounds right for CLOSED blobs that are used to coin blob: URLs. In this case, this behavior makes sense.

> Similarly, blob.slice() with a closed Blob shouldn't throw. Just create a
> new Blob just as you would if the original blob wasn't closed, and mark the
> new blob closed. This is also consistent with bug 24576 (making
> createObjectURL not throw).

OK, I think this makes sense. That is, the overall technical consensus seems to be to simply NOT true. And this applies to Blobs that you call .close() on before you .slice() them.

> Another benefit of these changes is that the effects of Blob.close() are
> narrowed. Its only (script visible) effects during fetch, instead of having
> lots of little side-effects scattered across the whole Blob API. That makes
> the API simpler for developers.

But developers SHOULD see the effect of blob.close(), and in places outside of fetch! Maybe the idea of a restricted keepalive list in bug 25302 is the right approach.

> It may also make it easier to have a later Blob.close() variant that also
> closes all slices of the blob. Right now it's actually fairly hard to
> successfully close a Blob, since you need to close every sub-slice that you
> create, too. Doing a Blob.close({includingSlices: true}) may be easier
> later if we eliminate these synchronous side-effects, since this could cross
> workers.
>
> (Also related to bug 25240.)
(In reply to Arun from comment #3)
> OK, I think this makes sense. That is, the overall technical consensus seems
> to be to simply NOT true.

^^ throw, not "true".
(In reply to Arun from comment #3)
> (In reply to Glenn Maynard from comment #0)
> > Blob objects should be immutable, as if their data and its size never
> > change. Blob.close() should cause reads to the blob to behave as though a
> > network resource is no longer available. This fulfills the goal of
> > Blob.close (discarding the underlying storage), without losing immutability
> > of the Blob interface itself.
>
> Behaving as if a network resource is no longer available is right behavior,
> but only for Blobs that are accessed by network APIs. This behavior sounds
> right for CLOSED blobs that are used to coin blob: URLs. In this case, this
> behavior makes sense.

I think the analogy makes sense in general. Another way of looking at it: if a Blob is a File, closing the blob should act just like the user deleted the file.

> > Another benefit of these changes is that the effects of Blob.close() are
> > narrowed. Its only (script visible) effects during fetch, instead of having
> > lots of little side-effects scattered across the whole Blob API. That makes
> > the API simpler for developers.
>
> But developers SHOULD see the effect of blob.close(), and in places outside
> of fetch!

Can you give an example? There might be use cases for a Blob.closed property to find out if a blob is closed (if we're OK with being locked into this being a sync operation), but that's all I can think of.

(Of course, they should see the effect of disk space being freed up if things are working correctly, but that's not a *script-visible* effect.)

> Maybe the idea of a restricted keepalive list in bug 25302 is the right
> approach.

(Sorry, I lost the thread--is this related?)
(In reply to Glenn Maynard from comment #5)
> (In reply to Arun from comment #3)
> > But developers SHOULD see the effect of blob.close(), and in places outside
> > of fetch!
>
> Can you give an example? There might be use cases for a Blob.closed
> property to find out if a blob is closed (if we're OK with being locked into
> this being a sync operation), but that's all I can think of.

If a web app gets a file reference, allows it to be displayed, including with metadata extraction (ID3 / EXIF), and then posted to the server with FormData, but wants to close the object at some point. The keepalive list posited in bug 25302 might allow FormData to work even after the Blob has been neutered, but other operations on that neutered Blob, happening AFTER the closing, should know the Blob is closed, and maybe recreate the file picker.

Rather than throw on a subsequent read operation, we could return 0 bytes. This makes it hard to differentiate between a *real* 0-byte Blob that is still OPENED and a CLOSED Blob. That is the problem Jonas identifies, but I'm not sure what the use case would be for real 0-byte objects that aren't neutered.

> (Of course, they should see the effect of disk space being freed up if
> things are working correctly, but that's not a *script-visible* effect.)
>
> > Maybe the idea of a restricted keepalive list in bug 25302 is the right
> > approach.
>
> (Sorry, I lost the thread--is this related?)

It's related in that I don't think the right approach is to retroactively close all the parents of slices; I agree that a slice on a neutered Blob should also be neutered, but I don't think neutering a slice neuters the parent (original object).
(In reply to Arun from comment #6)
> If a web app gets a file reference, allows it to be displayed, including
> with metadata extraction (ID3 / EXIF), and then posted to the server with
> FormData, but wants to close the object at some point.

I think I see the confusion. You mean that FormData should succeed, even though it's not a fetch of the blob. I don't mean that the one and only thing that should fail is fetch. I mean that the failure should happen when you try to access the blob's data, fetch being the most common place that happens.

> The keepalive list posited in Bug 25302 might allow FormData to work even
> after the Blob has been neutered, but other operations on that neutered
> Blob, happening AFTER the closing, should know the Blob is closed, and maybe
> recreate the file picker.

Posting FormData should do the same thing as fetch, and grab a reference to the Blob so it's immune to the user closing the blob later. The approach I suggested in the other bug should work for this too.

> Rather than throw on a subsequent read operation, we could return 0 bytes.
> This makes it hard to differentiate between a *real* 0 byte Blob that is
> still OPENED, and a CLOSED Blob. That is the problem Jonas identifies, but
> I'm not sure what the use case would be for real 0 byte objects that aren't
> neutered.

This is the original thing that this bug is arguing against. Blobs should be immutable (or at least act as though their data is immutable) and never change size. If there are use cases for detecting if a blob is closed, then we should just add a property to expose that.

> It's related in that I don't think the right approach is to retroactively
> close all the parents of slices; I agree that a slice on a neutered Blob
> should also be neutered, but I don't think neutering a slice neuters the
> parent (original object).

Closing a slice absolutely shouldn't close the parent.
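[Editorial note: one way to model "grab a reference to the Blob" at submission time is snapshotting. This is a hedged sketch, not spec text: `snapshotForUpload` is a hypothetical helper that reads the bytes at the moment the blob is handed to the upload machinery, so a later (hypothetical) close() cannot affect the in-flight request.]

```javascript
// Hypothetical helper sketching snapshot-at-append semantics: copy the
// bytes when the blob is enqueued for upload, so later changes to the
// blob's availability cannot affect the request.
async function snapshotForUpload(blob) {
  const bytes = await blob.arrayBuffer(); // read the data now
  return new Blob([bytes], { type: blob.type });
}
```

A real implementation would hold a reference to the underlying storage rather than copying, but the observable behavior is the same: the upload sees the data as it was when enqueued.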
Earlier on I suggested that we might want to support the opposite (closing the parent optionally closes its slices), but Jonas brought up some difficulties with that, and it would be a separate feature anyway. (It was only a minor side-benefit of what I was suggesting, and derailed the discussion for a bit.) We can forget about that for now.
I think this is fixed:

1. blob.close() no longer sets size to 0: http://dev.w3.org/2006/webapi/FileAPI/#dfn-close
2. blob.slice() no longer throws: http://dev.w3.org/2006/webapi/FileAPI/#slice-method-algo