In short, keep writing “http:” and trust that the infrastructure will quietly switch over to TLS (https) whenever both client and server can handle it. Meanwhile, let’s try to get Semantic Web software doing TLS+UIR+HSTS so that it’s as secure as a modern browser.
As we hope you’ve noticed, W3C is increasing the security of its own Web site and is strongly encouraging everyone to do the same. I’ve included an explanation from our Systems Team below, but the key technologies to look into, if you’re interested, are HTTP Strict Transport Security (HSTS) and Upgrade-Insecure-Requests (UIR).
Bottom line: we want everyone to use HTTPS and there are smarts in place on our servers and in many browsers to take care of the upgrade automatically.
So what of Semantic Web URIs, particularly namespaces like http://www.w3.org/1999/02/22-rdf-syntax-ns#?
Visit that URI in a modern, secure browser and you’ll be redirected to
https://www.w3.org/1999/02/22-rdf-syntax-ns#. Older browsers and, more importantly in this context, other user agents that do not recognize HSTS and/or UIR will not be redirected. So you can go on using
http://www.w3.org namespaces without disruption.
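The server-side behaviour just described can be sketched roughly as follows. This is a simplified illustration, not W3C’s actual configuration; the function name and the choice of status code are my own assumptions, though the Upgrade-Insecure-Requests request header and the Vary response header are the standard mechanism.

```python
def choose_response(request_headers: dict, path: str) -> dict:
    """Decide whether to upgrade a plain-HTTP request to HTTPS.

    A browser that supports UIR sends "Upgrade-Insecure-Requests: 1";
    the server answers with a redirect to the https:// form of the same
    URI. Other user agents (e.g. most RDF tooling) omit the header and
    are served over plain HTTP unchanged, so existing http:// namespace
    URIs keep working without disruption.

    Illustrative sketch only -- not W3C's real server configuration.
    """
    if request_headers.get("Upgrade-Insecure-Requests") == "1":
        return {
            "status": 307,  # status code chosen for illustration
            "Location": "https://www.w3.org" + path,
            "Vary": "Upgrade-Insecure-Requests",
        }
    return {"status": 200}  # serve the resource over plain HTTP

# A modern browser is redirected...
print(choose_response({"Upgrade-Insecure-Requests": "1"},
                      "/1999/02/22-rdf-syntax-ns")["status"])
# ...while a typical RDF library is not.
print(choose_response({}, "/1999/02/22-rdf-syntax-ns")["status"])
```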
This raises a number of questions.
Firstly, is the community agreed that if two URIs differ only in the scheme (http://, https:// and perhaps whatever comes in future) then they identify the same resource? We believe that this can only be asserted by the domain owner. In the specific case of http://www.w3.org/* we do make that assertion. Note that this does not necessarily apply to any current or future subdomains of w3.org.
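For domains whose owners do make that assertion, a comparison that treats two URIs differing only in scheme as the same resource might look like this. This is an illustrative sketch; the whitelist of asserting hosts is an assumption of mine, seeded only with www.w3.org per the assertion above.

```python
from urllib.parse import urlparse

# Hosts whose owners have asserted that http:// and https:// URIs
# identify the same resource. www.w3.org does, per the post; note the
# assertion does not automatically extend to other subdomains of w3.org.
SCHEME_EQUIVALENT_HOSTS = {"www.w3.org"}

def same_resource(uri_a: str, uri_b: str) -> bool:
    """True if the two URIs differ at most in scheme (http vs https)
    *and* the domain owner has asserted scheme equivalence."""
    a, b = urlparse(uri_a), urlparse(uri_b)
    if {a.scheme, b.scheme} - {"http", "https"}:
        return False
    return (a.netloc == b.netloc
            and a.netloc in SCHEME_EQUIVALENT_HOSTS
            and (a.path, a.query, a.fragment) == (b.path, b.query, b.fragment))

print(same_resource("http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
                    "https://www.w3.org/1999/02/22-rdf-syntax-ns#type"))
```

The point of the whitelist is that equivalence cannot be assumed globally: for an arbitrary host, the http:// and https:// URIs may be served by different parties and must be treated as distinct.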
Secondly, some members of the Semantic Web community have already moved to HTTPS (it was a key motivator for w3id.org). How steep is the path from where we are today to a more secure Semantic Web, i.e. one that habitually uses HTTPS rather than HTTP? Have you upgraded, or are you considering upgrading, your own software?
Unless and until the Semantic Web operates over more secure connections, we will need to be careful to pass around http URIs – which is likely to mean remembering to knock off the s when pasting a URI from your browser.
That’s a royal pain but we’ve looked at various workarounds and they’re all horrible. For example, we could deliberately redirect requests to things like our vocabulary namespaces away from the secure w3.org site to a deliberately less secure sub-domain – gah! No thanks.
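Knocking off the s can at least be automated. A tiny helper along these lines (the function name is my own) saves the manual edit when pasting from the browser:

```python
def to_http(uri: str) -> str:
    """Rewrite an https:// URI pasted from the browser back to the
    canonical http:// form used in Semantic Web data."""
    if uri.startswith("https://"):
        return "http://" + uri[len("https://"):]
    return uri  # already http:// (or some other scheme): leave untouched

print(to_http("https://www.w3.org/ns/dcat#Dataset"))
# http://www.w3.org/ns/dcat#Dataset
```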
Thirdly, a key feature of the HSTS/UIR landscape is that there is no need to go back and edit old resources – communication is carried out over HTTPS without further intervention. Can this be true for Semantic Web/Linked Data too, or should we be considering more drastic action? For example, editing definitions in Turtle files such as the one at
http://www.w3.org/ns/dcat# to make it explicit that
http://www.w3.org/ns/dcat#Dataset identifies the same thing as
https://www.w3.org/ns/dcat#Dataset (or, even worse, having to go through and actually duplicate all the definitions with the different subject).
I really hope point 3 is unnecessary – but I’d like to be sure it is.
Jose Kahan from W3C’s Systems Team adds:
HSTS does the client-side upgrade from HTTP to HTTPS for a given domain. However, that header is only sent over an HTTPS connection. UIR defines a header that, if sent by the browser, tells the server that the browser prefers HTTPS; the server then redirects to HTTPS, at which point HSTS (via the header in the response) kicks in. HSTS doesn’t handle the case of mixed content. That is the other part UIR does to complement HSTS: it tells the browser to upgrade the URLs of all content associated with a resource to HTTPS before requesting it.
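The client-side half of that interaction can be sketched as follows. This is a deliberately simplified model of an HSTS-aware user agent (all names are my own); real browsers also handle max-age expiry, includeSubDomains, and preload lists.

```python
from urllib.parse import urlparse, urlunparse

# Hosts from which this client has seen Strict-Transport-Security over HTTPS.
hsts_hosts = set()

def record_response(host: str, scheme: str, headers: dict) -> None:
    # The HSTS header is only honoured when received over a secure
    # connection -- it is ignored on plain-HTTP responses.
    if scheme == "https" and "Strict-Transport-Security" in headers:
        hsts_hosts.add(host)

def effective_url(url: str) -> str:
    # Before any request -- including subresources, which is the
    # mixed-content case UIR covers -- upgrade http:// to https://
    # for hosts known to support it.
    p = urlparse(url)
    if p.scheme == "http" and p.netloc in hsts_hosts:
        p = p._replace(scheme="https")
    return urlunparse(p)

# The first HTTPS response teaches the client about the host...
record_response("www.w3.org", "https",
                {"Strict-Transport-Security": "max-age=15552000"})
# ...after which plain-http links to it are upgraded before being fetched.
print(effective_url("http://www.w3.org/ns/dcat"))
```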