Re: public suffix list: when opacity meets security [metaDataInURI-31 siteData-36]

Hi Noah,

There may be some overlap between this and one of the solutions we  
have been discussing for ISSUE-50.

If we take the domain-suffix approach, a central site for browsers to  
retrieve information about the sub-scheme associated with particular  
domains would be a good thing.

Perhaps if one expands the scope of the sub-scheme to also include  
cross-site cookie policies etc., the same mechanism could be used both  
for http: URIs using DNS and for those using non-DNS resolution.

The thing is that we would then have a web site consisting of a list  
of special domains. True, that is a better solution than tucking it  
away in a database someplace, but it is still centralized in a sense.

I recommended that the descriptions of the sub-schemes should be  
retrieved from the domain itself.

Dare I say, by retrieving an XRDS document?

The XRDS document for a domain can have any number of service  
endpoints defined. This is what OAuth and OpenID use now for  
describing OPs and RPs.
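As a rough sketch of what such a document could look like (the outer XRDS/XRD structure follows the XRI 2.0 schema, but the service Type URI and endpoint shown here are hypothetical, invented purely for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <!-- Hypothetical service entry a domain might publish to
         describe its own sub-scheme / public-suffix policy. -->
    <Service priority="10">
      <Type>http://example.org/ns/sub-scheme/1.0</Type>
      <URI>http://co.uk/sub-scheme-policy</URI>
    </Service>
  </XRD>
</xrds:XRDS>
```

A browser that understood the (hypothetical) Type URI could fetch the policy from the endpoint the domain itself advertises, rather than from a central list.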

It is in the interests of the organizations defining and running these  
sub-schemes to publish accurate information. I personally prefer as  
decentralized an approach as possible.

There is nothing in DNS itself that prevents the return of an A record  
for a TLD. I admit, however, that I have never seen it done.

I will ask someone I know who has a TLD whether they have ever looked  
into it.

If DNS resolves the name, HTTP should be able to deal with it.

If browsers would, by default, retrieve metadata from the domain via  
XRDS, that could be very interesting.
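To make the idea concrete, here is a minimal sketch of how a browser might locate a domain's XRDS document. The `X-XRDS-Location` header and the `application/xrds+xml` media type are real Yadis-discovery conventions; the function name and the fall-back-to-domain-root rule are my own assumptions, not any specified behavior:

```python
# Sketch of Yadis-style XRDS discovery for a domain.
# Assumption: the browser first fetches http://<domain>/ and then
# decides where the XRDS document actually lives.

XRDS_LOCATION_HEADER = "X-XRDS-Location"  # defined by the Yadis spec


def xrds_location(domain: str, response_headers: dict) -> str:
    """Return the URL from which to fetch the domain's XRDS document.

    If the initial response carries an X-XRDS-Location header, the
    XRDS document lives at that URL; otherwise we assume (for this
    sketch) that it is served directly from the domain root.
    """
    # HTTP header names are case-insensitive, so normalize first.
    normalized = {k.lower(): v for k, v in response_headers.items()}
    location = normalized.get(XRDS_LOCATION_HEADER.lower())
    if location:
        return location
    return "http://%s/" % domain
```

For example, `xrds_location("uk", {"X-XRDS-Location": "http://meta.uk/xrds"})` would follow the advertised location, while `xrds_location("co.uk", {})` would fall back to `http://co.uk/`.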

For those of you whom I have offended with XRDS, just replace it with  
some new RDF metadata format.

However, that is exactly the sort of service discovery that XRDS is  
designed for.

Regards
John Bradley
OASIS IDTRUST-SC
http://xri.net/=jbradley
五里霧中


On 30-Jul-08, at 8:04 AM, noah_mendelsohn@us.ibm.com wrote:

>
> My gut feel is that this might better be done by retrieval of  
> hypermedia
> documents as opposed to through maintenance of a centralized list.    
> For
> example, what if HTTP GET from http://uk (are retrievals from top  
> level
> domains supported?) returned a document with a list of public suffixes
> such as "co.uk"?  You could, I suppose, also establish some standard
> subdomain so instead of retrieving from "uk" you'd retrieve from
> http://domain_description.uk.  Browsers could then use recursive
> retrievals to build up pertinent parts of the public domain table  
> locally.
> Seems much more scalable and appropriately distributed than a  
> centralized
> list.  Am I missing something obvious?
>
> Noah
>
> --------------------------------------
> Noah Mendelsohn
> IBM Corporation
> One Rogers Street
> Cambridge, MA 02142
> 1-617-693-4036
> --------------------------------------
>
>
> Dan Connolly <connolly@w3.org>
> Sent by: www-tag-request@w3.org
> 06/19/2008 12:01 PM
>
>        To:     www-tag <www-tag@w3.org>
>        cc:
>        Subject:        public suffix list: when opacity meets security
> [metaDataInURI-31       siteData-36]
>
>
>
> I wonder how the principle of opacity applies in this case...
> http://www.w3.org/TR/webarch/#pr-uri-opacity
>
> The proposal is:
>
> [[
> The Mozilla Project (http://www.mozilla.org/), responsible for the
> Firefox web browser, requests your help.
>
> We are maintaining a list of all "Public Suffixes". A Public Suffix  
> is a
> domain label under which internet users can directly register domains.
> Examples of Public Suffixes are ".net", ".org.uk" and  
> ".pvt.k12.ca.us".
> In other words, the list is an encoding of the "structure" of each
> top-level domain, so a TLD may contain many Public Suffixes. This
> information is used by web browsers for several purposes - for  
> example,
> to make sure they have secure cookie-setting policies. For more  
> details,
> see http://publicsuffix.org/learn/.
> ]]
> -- Gervase Markham (Monday, 9 June)
>  http://lists.w3.org/Archives/Public/ietf-http-wg/2008AprJun/0483.html
>
> arguments against include:
>
> [[
> By proper design you can easily make cross-site cookies be
> verifiable. Set out the goal that a site must indicate that cross-site
> cookies is allowed for it to be accepted, and then work from there.
> There is many paths how to get there, and the more delegated you  
> make it
> close to the owners and operators of the sites the better.
>
> The big question is what that design should look like, but it's
> certainly not a central repository with copies hardcoded into  
> software.
> ]]
> -- Henrik Nordstrom  10 Jun 2008
>  http://lists.w3.org/Archives/Public/ietf-http-wg/2008AprJun/0552.html
>
>
> tracker: ISSUE-31, ISSUE-36
>
> -- 
> Dan Connolly, W3C http://www.w3.org/People/Connolly/
> gpg D3C2 887B 0F92 6005 C541  0875 0F91 96DE 6E52 C29E
>

Received on Wednesday, 30 July 2008 16:37:57 UTC