Last Call comment on SPARQL 1.1 Update

All,

This is a comment concerning the Last Call Working Draft 'SPARQL 1.1
Update' [1]. It is clearly written and, as far as I can tell, sound.
However, I have an issue with it, more on the conceptual level. I tried
to express my concerns in a blog post [2] and will do my best to
summarise them in the following.

While the proposed update language is, without any doubt, perfectly
suitable for small- to medium-sized setups, I fear that we will run
into trouble in large-scale deployments concerning the cost of
updating and deleting huge volumes of triples. Now, I wish I had
experimental evidence to prove this myself (and I have to admit I
don't), but I would like the WG to consider either including a
section discussing the issue, or setting up a (non-REC-track) document
that discusses it (which could be titled 'implementation/usage
advice for large-scale deployments' or the like). I do feel strongly
about this and would offer to contribute to such a document, if desired.
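To make the concern concrete, here is a minimal sketch of the kind of operation I have in mind (the graph IRI is purely illustrative): a pattern-based delete over a large named graph, which obliges the store to enumerate every matching triple before removing it.

```sparql
# Hypothetical example: remove every triple in a large named graph.
# Unlike CLEAR GRAPH, this pattern-based form requires the store to
# match (and typically index-update) each of the affected triples,
# which is where I suspect large deployments will feel the cost.
DELETE WHERE { GRAPH <http://example.org/archive> { ?s ?p ?o } }
```

Whether an implementation can recognise and optimise such a pattern (e.g. rewriting it to a bulk drop) is exactly the sort of question I would hope a 'large-scale deployments' document could address.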

I'd very much appreciate it if WG members could point me to their
own experiences in this area (experiments and real-world deployments
alike).

Cheers,
	Michael (with my DERI AC Rep and RDB2RDF WG co-chair hat off)

[1] http://www.w3.org/TR/2011/WD-sparql11-update-20110512/
[2] http://webofdata.wordpress.com/2011/05/29/ye-shall-not-delete-data/

--
Dr. Michael Hausenblas, Research Fellow
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

Received on Sunday, 29 May 2011 16:30:08 UTC