Feedback on content transformation guidelines

My comments below. I agree with Mark Baker's comments, and have tried  
not to repeat them here, although a few may have slipped through.

* Section 2.1 - "Alteration of HTTP requests and responses is not  
prohibited by HTTP other than in the circumstances referred to in  
[RFC2616 HTTP] Section 13.5.2."  This isn't true; section 14.9.5 needs  
to be referenced here as well.
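(For what it's worth, the check that 14.9.5 implies for intermediaries is simple to state; a minimal sketch in Python, function and argument names my own:)

```python
def may_transform(response_headers):
    """Return True only if no Cache-Control: no-transform directive
    is present (RFC 2616 Section 14.9.5). Illustrative only; a real
    proxy would parse the header field properly."""
    cc = response_headers.get("Cache-Control", "")
    directives = [d.strip().lower() for d in cc.split(",")]
    return "no-transform" not in directives

# A message carrying the directive must be forwarded unmodified.
assert not may_transform({"Cache-Control": "no-transform, max-age=60"})
assert may_transform({"Cache-Control": "max-age=60"})
```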

* Section 3.4 / 3.5 "A [Content|Transformation] Deployment conforms to  
these guidelines if it follows the statements..."  What does "follows"  
mean here -- if they conform to all MUST level requirements? SHOULD  
and MUST?

* Section 4.1.2 "If the request contains a Cache-Control: no-transform  
directive proxies must forward the request unaltered to the server,  
other than to comply with transparent HTTP behaviour and as noted  
below."  I'm not sure what this sentence means.

* Section 4.1.3 "Proxies must act as though a no-transform directive  
is present (see 4.1.2 no-transform directive in Request) unless they  
are able positively to determine that the user agent is a Web  
browser."  How do they "positively" determine this? Using heuristics is  
far from a guaranteed mechanism. Moreover, what is the reasoning  
behind this? If the intent is to only allow transformation of content  
intended for presentation to humans, it would be better to say that.  
In any case, putting a MUST-level requirement on this seems strange.

* Section 4.1.4 "Proxies should follow standard HTTP procedures in  
respect to caching..."  This seems a strange way to phrase it, and I  
don't think it's useful to use RFC2616 language here.

* Section 4.1.5 Bullet points 1 and 3 are get-out-of-jail-free cards  
for non-transparent proxies to ignore no-transform and do other  
anti-social things. They should either be tightened up considerably,  
or removed.

* Section 4.1.5 What is a "restructured desktop experience"?

* Section 4.1.5 "proxies should use heuristics including comparisons  
of domain name to assess whether resources form part of the same 'Web  
site'."  I don't think the W3C should be encouraging vendors to  
implement yet more undefined heuristics for this task; there are  
several approaches already in use (e.g., in cookies, HTTP, security  
context, etc.); please pick one and refer to it specifically.
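(As one concrete candidate: the cookie domain-match rule is already specified and widely implemented. A rough approximation of it in Python, for illustration only:)

```python
def cookie_domain_match(host, domain):
    """Approximation of the RFC 2965 domain-match rule: the host
    matches if it equals the domain, or the domain begins with a
    dot and the host ends with it (e.g. www.example.com matches
    .example.com). Not a substitute for the real rule."""
    host, domain = host.lower(), domain.lower()
    if host == domain.lstrip("."):
        return True
    return domain.startswith(".") and host.endswith(domain)

assert cookie_domain_match("www.example.com", ".example.com")
assert not cookie_domain_match("example.org", ".example.com")
```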

* Section 4.1.5.1 Proxies (and other clients) are allowed to and do  
reissue requests; by disallowing it, you're profiling HTTP, not  
providing guidelines.

* Section 4.1.5.2 Again, not specifying the heuristics is going to  
lead to differences in behaviour, which will cause content authors to  
have to account for this as well.

* Section 4.1.5.2 "A proxy must not re-issue a POST/PUT request..." Is  
this specific to POST and PUT, or all requests with bodies, or...?

* Section 4.1.5.4 Use of the term 'representation' is confusing here;  
please pick another one.

* Section 4.1.5.4 Using the same headers is often not a good idea.  
More specific, per-header advice would be more helpful.

* Section 4.1.5.5 This is specifying new protocol elements; this is  
becoming a protocol, not guidelines.

* Section 4.1.6.1 When a proxy inserts the URI to make a claim of  
conformance, exactly what are they claiming -- all must-level  
requirements are met? Should-level? What is the use case for this  
information?

* Section 4.2.1 Requiring servers to respond with 406 is profiling  
HTTP; HTTP currently allows the server to send a 'default'  
representation even when the headers say that the client doesn't  
prefer it.
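(To illustrate the point: a server conforming to RFC 2616 may fall back to a default representation when nothing matches, rather than sending 406. A toy negotiation sketch in Python, ignoring q-values and most wildcards:)

```python
def negotiate(accept_header, available):
    """Toy content negotiation: when no available media type matches
    the Accept header, HTTP permits the server to send a default
    representation instead of 406 (RFC 2616 Section 10.4.7 note)."""
    accepted = [t.split(";")[0].strip() for t in accept_header.split(",")]
    for media_type in available:
        if media_type in accepted or "*/*" in accepted:
            return 200, media_type
    # Sending a default is allowed; 406 is not required.
    return 200, available[0]

assert negotiate("application/xhtml+xml", ["text/html"]) == (200, "text/html")
```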

* Section 4.2.2 "Servers must include a Cache-Control: no-transform  
directive if one is received in the HTTP request." Why?

* Section 4.2.3.1 "Servers may base their actions on knowledge... but  
should not choose an Internet content type for a response based on an  
assumption or heuristics about behaviour of any intermediaries." Why not?

* Section 4.3.2 Why can't proxies transform something that has already  
been transformed?

* Section 4.3.3 Sniffing content for error messages is dangerous, and  
also unlikely to work. E.g., will you sniff for all languages and all  
possible phrases? How will you avoid false positives? Remove this  
section and require content providers to get it right. People may  
still do this in their products, but there's no reason to codify it.
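(The false-positive problem is easy to demonstrate; a naive phrase sniffer of the kind 4.3.3 implies, in Python, phrases invented for illustration:)

```python
def looks_like_error(body):
    """Naive error-page sniffer: flags a response body containing
    common error phrases. Shows why this approach misfires on
    legitimate content that merely discusses errors."""
    phrases = ["not found", "error"]
    return any(p in body.lower() for p in phrases)

# An article *about* 404s is misclassified as an error page:
assert looks_like_error("How to handle a 404 Not Found in your app")
```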

* Section 4.3.4 What's the purpose behind this behaviour?

Cheers,

--


Mark Nottingham     http://www.mnot.net/

Received on Friday, 29 August 2008 04:18:54 UTC