June 27, 2016

W3C Blog

Perspectives on security research, consensus and W3C Process

Linux Weekly News published a recent story called “Encrypted Media Extensions and exit conditions”, Cory Doctorow followed by publishing “W3C DRM working group chairman vetoes work on protecting security researchers and competition”. While the former is a more accurate account of the status, we feel obligated to offer corrections and clarifications to the latter, and to share a different perspective on security research protection, consensus at W3C, W3C’s mission and the W3C Process, as well as the proposed Technology and Policy Interest Group.

There have been a number of articles and blog posts about the W3C EME work, but we've not been able to offer counterpoints to every public post, as we're focusing on shepherding and promoting the work of 40 Working Groups and 14 Interest Groups – all working on technologies important to the Web such as HTML5, Web Security, Web Accessibility, Web Payments, Web of Things, Automotive, etc.

TAG statement on the Web’s security model

In his recent article, Cory wrote:

For a year or so, I’ve been working with the EFF to get the World Wide Web Consortium to take steps to protect security researchers and new market-entrants who run up against the DRM standard they’re incorporating into HTML5, the next version of the key web standard.

First, the W3C is concerned about risks for security researchers. In November 2015 the W3C Technical Architecture Group (TAG), a special group within the W3C, chartered under the W3C Process with stewardship of the Web architecture, made a statement (after discussions with Cory on this topic) about the importance of security research. The TAG statement was:

The Web has been built through iteration and collaboration, and enjoys strong security because so many people are able to continually test and review its designs and implementations. As the Web gains interfaces to new device capabilities, we rely even more on broad participation, testing, and audit to keep users safe and the web’s security model intact. Therefore, W3C policy should assure that such broad testing and audit continues to be possible, as it is necessary to keep both design and implementation quality high.

W3C TAG statements have policy weight. The TAG is co-Chaired by the inventor of the Web and Director of W3C, Tim Berners-Lee. It has elected representatives from W3C members such as Google, Mozilla, Microsoft and others.

This TAG statement was reiterated in an EME Factsheet, published before the W3C Advisory Committee meeting in March 2016 as well as in the W3C blog post in April 2016 published when the EME work was allowed to continue.

Second, EME is not a DRM standard. W3C does not make DRM. The specification does not define a content protection or Digital Rights Management system. Rather, EME defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems. We appreciate that to those who are opposed to DRM, any system which “touches” upon DRM is to be avoided, but the distinction is important. DRM is on the Web and has been for many years. We ask pragmatically what we can do for the good of the Web to both make sure a system which uses protected content insulates users as much as possible, and ensure that the work is done in an open, transparent and accessible way.

A several-month task force to assess the EFF's proposed covenant

Cory further wrote, about the covenant:

As a compromise that lets the W3C continue the work without risking future web users and companies, we’ve proposed that the W3C members involved should agree on a mutually acceptable binding promise not to use the DMCA and laws like it to shut down these legitimate activities — they could still use it in cases of copyright infringement, just not to shut down activity that’s otherwise legal.

The W3C took the EFF covenant proposal extremely seriously. The proposal was made as part of the EFF's formal objection to the Working Group's charter extension, and the W3C leadership made an extraordinary effort to resolve the objection and evaluate the proposed covenant by convening a several-month task force. Hundreds of emails were exchanged between W3C Members, and presentations were made to the W3C Advisory Committee at the March 2016 Advisory Committee meeting.

While there was some support for the idea of the proposal, the large majority of W3C Members did not wish to accept the covenant as written (the version they voted on was different from the version the EFF made public), nor a slightly different version proposed by another member.

Member confidentiality vs. transparent W3C Process

Cory continued:

The LWN writeup is an excellent summary of the events so far, but parts of the story can’t be told because they took place in “member-confidential” discussions at the W3C. I’ve tried to make EFF’s contributions to this discussion as public as possible in order to bring some transparency to the process, but alas the rest of the discussion is not visible to the public.

W3C works in a uniquely transparent way. Specifications are largely developed in public, and most groups have public minutes and mailing lists. However, Member confidentiality is a very valuable part of the W3C process. That business and technical discussions can happen in confidence between Members is invaluable in fostering broader discussion, trust and the opportunity to be frank. The proceedings of the HTML Media Extensions work are public; however, discussions amongst Advisory Committee members are confidential.

In his post, Nathan Willis quoted a June 6 blog post by EFF’s Cory Doctorow, and continued:

Enough W3C members endorsed the proposed change that the charter could not be renewed. After 90 days’ worth of discussion, the working group had made significant progress, but had not reached consensus. The W3C executive ended this process and renewed the working group’s charter until September.

Similar wording is found in an April EFF blog post, attributing the renewal to “the executive of the W3C.” In both instances, the phrasing may suggest that there was considerable internal debate in the lead-up to the meeting and that the final call was made by W3C leadership. But, it seems, the ultimate decision-making mechanism (such as who at W3C made the final decision and on what date) is confidential; when reached for comment, Doctorow said he could not disclose the process.

Though the Member discussions are confidential, the process itself is not.

In the W3C process, charters for Working Groups go to the Advisory Committee for review at different stages of completion. That happened in this case. The EFF made an objection. By process, when there are formal objections the W3C then tries to resolve the issue.

As part of the process, when there is no consensus, the W3C generally allows existing groups to continue their work as described in the charter. When there is a “tie-break” needed, it is the role of the Director, Tim Berners-Lee, to assess consensus and decide on the outcome of formal objections. It was only after the overwhelming majority of participants rejected the EFF proposal for a covenant attached to the EME work that Tim Berners-Lee and the W3C management felt that the EFF proposal could not proceed and the work would be allowed to continue.

Next steps within the HTML Media Extensions Working Group

Cory also wrote:

The group’s charter is up for renewal in September, and many W3C members have agreed to file formal objections to its renewal unless some protection is in place. I’ll be making an announcement shortly about those members and suggesting some paths for resolving the deadlock.

The group is not up for charter renewal in September; rather, its specifications are progressing on the timeline to “Recommendation”. A Candidate Recommendation transition will soon have to be approved, and then the spec will require interoperability testing and Advisory Committee approval before it reaches REC. One criterion for Recommendation is that the ideas in the technical report are appropriate for widespread deployment, and EME is already deployed in almost all browsers.

We also wish to clarify that veto is not part of the role of a Working Group Chair; indeed Cory wrote:

Linux Weekly News reports on the latest turn of events: I proposed that the group take up the discussion before moving to recommendation, and the chairman of the working group, Microsoft’s Paul Cotton, refused to consider it, writing, “Discussing such a proposed covenant is NOT in the scope of the current HTML Media Extensions WG charter.”

As Chair of the HTML Media Extensions Working Group, Paul Cotton’s primary role is to facilitate consensus-building among Group members for issues related to the specification. A W3C Chair leads the work of the group but does not decide for the group; work proceeds with consensus. The covenant proposal had been under wide review with many lengthy discussions for several months on the W3C Advisory Committee mailing lists. Paul did not dismiss W3C-wide discussion of the topic, but correctly noted it was not a topic in line with the chartered work of the group.


In the April 2016 announcement that the EME work would continue, the W3C reiterated the importance of security research and acknowledged the need for high level technical policy discussions at W3C – not just for the covenant. A few weeks prior, during the March 2016 Advisory Committee meeting the W3C announced a proposal to form a Technology and Policy Interest Group.

The W3C has, for more than 20 years, focused on technology standards for the Web. However, recognizing that the Web is getting more complex and its technology is increasingly woven into our lives, we must consider technical aspects of policy as well. The proposed Technology and Policy Interest Group, if started, will explore, discuss and clarify aspects of policy that may affect the mission of W3C to lead the Web to its full potential. This group had been in preparation since before the EME covenant was presented, and will address broader issues than anti-circumvention. It is designed as a forum for W3C Members to try to reach consensus on descriptions of varying views on policy issues, such as deep linking or pervasive monitoring.

While we tried to find common ground among our membership on the covenant issue, we have not yet succeeded. We hope that the EFF and others will continue to try. We recognize and support the importance of security research, and the impact of policy on innovation, competition and the future of the Web. Again, for more information on EME and frequently asked questions, please see the EME Factsheet, published in March 2016.

by Coralie Mercier at June 27, 2016 10:30 AM

June 24, 2016

W3C Blog

Subresource Integrity Becomes a W3C Recommendation

The fundamental line of trust on the Web is between the end-user and the Web application: individuals who visit a website rely on HTTPS to trust that they are getting the page or application put there by the site owner. Features of Web Application Security are designed to support that trust, protecting against cross-site scripting and content-injection attacks or unwanted snooping on Web traffic. If a Web application includes resources from third parties, however, it may effectively delegate its trust to all of those included resources, any of which could maliciously or carelessly compromise the overall security of the Web application and data shared with it.

Subresource Integrity (SRI), which just reached W3C Recommendation status, offers a way to include resources without that open-ended delegation. It lets browsers, as user agents, cryptographically verify that included subresources such as scripts and styles match, as delivered, what the requesting application expected.

As explained in the specification:

Sites and applications on the web are rarely composed of resources from only a single origin. For example, authors pull scripts and styles from a wide variety of services and content delivery networks, and must trust that the delivered representation is, in fact, what they expected to load. If an attacker can trick a user into downloading content from a hostile server (via DNS poisoning, or other such means), the author has no recourse. Likewise, an attacker who can replace the file on the Content Delivery Network (CDN) server has the ability to inject arbitrary content.

Delivering resources over a secure channel mitigates some of this risk: with TLS, HSTS, and pinned public keys, a user agent can be fairly certain that it is indeed speaking with the server it believes it’s talking to. These mechanisms, however, authenticate only the server, not the content. An attacker (or administrator) with access to the server can manipulate content with impunity. Ideally, authors would not only be able to pin the keys of a server, but also pin the content, ensuring that an exact representation of a resource, and only that representation, loads and executes.

This document specifies such a validation scheme, extending two HTML elements with an integrity attribute that contains a cryptographic hash of the representation of the resource the author expects to load. For instance, an author may wish to load some framework from a shared server rather than hosting it on their own origin. Specifying that the expected SHA-384 hash of https://example.com/example-framework.js is Li9vy3DqF8tnTXuiaAJuML3ky+er10rcgNR/VqsVpcw+ThHmYcwiB1pbOxEbzJr7 means that the user agent can verify that the data it loads from that URL matches that expected hash before executing the JavaScript it contains. This integrity verification significantly reduces the risk that an attacker can substitute malicious content.

This example can be communicated to a user agent by adding the hash to a script element, like so:

<script src="https://example.com/example-framework.js"
        integrity="sha384-Li9vy3DqF8tnTXuiaAJuML3ky+er10rcgNR/VqsVpcw+ThHmYcwiB1pbOxEbzJr7"
        crossorigin="anonymous"></script>

With SRI, WebApps can improve their network performance and security together. Read the Implementation Report for more examples and sites already using the feature.
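Authors typically generate the integrity value from a local copy of the resource. A minimal sketch using OpenSSL (the file name and contents here are stand-ins, not the real framework):

```shell
# Create a stand-in for the framework file; any local copy works the same way
printf 'console.log("hello");\n' > example-framework.js

# SHA-384 digest of the file, base64-encoded, prefixed with the algorithm name
echo "sha384-$(openssl dgst -sha384 -binary example-framework.js | openssl base64 -A)"
```

The resulting string is what goes in the script element's integrity attribute; if the file changes by even one byte, the digest no longer matches and the browser refuses to execute it.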

Thanks to editors Devdatta Akhawe, Dropbox, Inc.; Frederik Braun, Mozilla; François Marier, Mozilla; and Joel Weinberger, Google, Inc. and participants in the Web Application Security Working Group for successful completion of this Recommendation.

by Wendy Seltzer at June 24, 2016 04:34 PM

June 21, 2016

ishida >> blog

UniView 9.0.0 available

Picture of the page in action.
>> Use UniView

UniView now supports Unicode version 9, which is being released today, including all changes made during the beta period. (As before, images are not available for the Tangut additions, but the character information is available.)

This version of UniView also introduces a new filter feature. Below each block or range of characters is a set of links that allows you to quickly highlight characters with the property letter, mark, number, punctuation, or symbol. For more fine-grained property distinctions, see the Filter panel.

In addition, for some blocks there are other links available that reflect tags assigned to characters. This tagging is far from exhaustive! For instance, clicking on sanskrit will not show all characters used in Sanskrit.

The tags are just intended to be an aid to help you find certain characters quickly by exposing words that appear in the character descriptions or block subsection titles. For example, if you want to find the Bengali currency symbol while viewing the Bengali block, click on currency and all other characters but those related to currency will be dimmed.

(Since the highlight function is used for this, don’t forget that, if you happen to highlight a useful subset of characters and want to work with just those, you can use the Make list from highlights command, or click on the upwards pointing arrow icon below the text area to move those characters into the text area.)

by r12a at June 21, 2016 07:39 PM

June 08, 2016

W3C Blog

Wishing Tim Berners-Lee a happy birthday!

Today is Tim Berners-Lee's birthday and we'd like to wish him a very happy birthday and many happy returns of the day.

Happy birthday, Tim!

We are so grateful that you invented the Web 27 years ago, and that you are shepherding us at the W3C (the Team, our Members and all our 40+ Working Groups, in work including Security, Internationalization, Accessibility, Web Applications and more) in leading the Web to its full potential.

Thank you for all you do for the Web (in inventing it) and the world, in sharing it for free, advocating for an Open Web, working to protect it and to bring it to the whole world — to truly be for everyone; a rich, creative resource for all.

We are honored to know you, to work with and for you, and we wish you all the best this and every year.

(Anecdotally, the piece of cake icon has been on the Web for 19 years, 7 months. That this birthday cake was the first image in the W3C icon directory shows just a bit of the sense of caring, humanity and connectedness that are so much a part of this group and the work we do.)

by Coralie Mercier at June 08, 2016 02:59 PM

June 03, 2016

W3C Blog

Exciting Opportunity at TPAC 2016

The time since the last TPAC meeting has flown by, and we're in the process of putting the finishing touches on TPAC 2016 in Lisbon, Portugal. During the meeting in Sapporo we ran an experimental Demo Area. That was very successful, so we decided to expand on the idea and have an Exhibition Area in Lisbon!

The Exhibition Area will be open on all five days of TPAC. It is open to any W3C Member, and what you exhibit is up to you! The rule is that it needs to be something that leverages W3C's work. That can be a solution that implements our Standards. You may be an organization that offers consulting in accessibility (A11Y) and you want to promote that – fine! You may offer training on how to implement our standards or follow our best practices – wonderful!

The price is a mere 1500€, and space is limited to the first 14 organizations that apply. In fact we only have 12 spaces left, as Viacom is the Exhibition Sponsor (THANKS!) and will be taking two of the tables.

You can register by using this form, and you will be contacted by either Bernard Gidon or me.

A reminder that we'll hold a Developer Meetup on the Monday of the week – stay tuned – and that Wednesday is the day for unconference and breakout sessions.

I look forward to seeing everyone in Lisbon!

J. Alan Bird Global Business Development Leader, W3C

by J. Alan Bird at June 03, 2016 02:19 PM

June 02, 2016

W3C Blog

Finishing HTML5.1 … and starting HTML 5.2

Since we published the Working on HTML5.1 post, we’ve made progress. We’ve closed more issues than we have open, we now have a working rhythm for the specification that is getting up to the speed we want, and we have a spec we think is a big improvement on HTML5.

Now it’s time to publish something serious.

We’ve just posted a Call For Consensus (CFC) to publish the current HTML5.1 Working Draft as a Candidate Recommendation (CR). This means we’re going into feature freeze on HTML5.1, allowing the W3C Patent Policy to come into play and ensure HTML5.1 can be freely implemented and used.

While HTML5.1 is in CR we may make some editorial tweaks to the spec – for instance we will be checking for names that have been left out of the Acknowledgements section. There will also be some features marked “at risk”, which means they will be removed from HTML5.1 if we find during CR that they do not work in at least two shipping browsers.

Beyond this, the path from CR to W3C Recommendation is an administrative one. We hope the Web Platform WG agrees that HTML5.1 is better than HTML5, and that it would benefit the web community if we updated the “gold standard” – the W3C Recommendation. Then we need W3C's membership, and finally W3C Director Tim Berners-Lee, to agree too.

The goal is for HTML5.1 to be a W3C Recommendation in September, and to achieve that we have to put the specification into feature freeze now. But what happens between now and September? Are we really going to sit around for a few months crossing legal t’s and dotting administrative i’s? No way!

We have pending changes that reflect features we believe will be shipped over the next few months. And of course there are always bugs to fix, and editorial improvements to make HTML at W3C more reliable and usable by the web community.

In the next couple of weeks we will propose a First Public Working Draft of HTML5.2. This will probably include some new features, some features that were not interoperable and so not included in HTML5.1, and some more bug fixes. This will kick off a programme of regular Working Draft releases until HTML5.2 is ready to be moved to W3C Recommendation sometime in the next year or so.

As always please join in, whether by following @HTMLWG on Twitter, filing issues, joining WP WG and writing bits of the specification, or just helping your colleagues stay up to date on HTML…

… on behalf of the chairs and editors, thanks!

by Charles McCathie Nevile at June 02, 2016 01:18 PM

Invitation to upcoming GIPO sessions at EuroDIG

I participate in GIPO (Global Internet Policy Observatory), which helps frame the dialogue on Internet Governance. In the context of the upcoming European Dialogue on Internet Governance (EuroDIG), taking place in Brussels on 9-10 June 2016, a number of sessions will be devoted to GIPO, with experts and interested stakeholders. I'll be there!

You may register to attend this free event.

by Daniel Dardailler at June 02, 2016 12:48 PM

May 20, 2016

W3C Blog

HTTPS and the Semantic Web/Linked Data

In short, keep writing “http:” and trust that the infrastructure will quietly switch over to TLS (https) whenever both client and server can handle it. Meanwhile, let’s try to get SemWeb software to be doing TLS+UIR+HSTS and be as secure as modern browsers.

Sandro Hawke

As we hope you've noticed, W3C is increasing the security of its own Web site and is strongly encouraging everyone to do the same. I've included some details from our systems team below as an explanation, but the key technologies to look into, if you're interested, are HTTP Strict Transport Security (HSTS) and Upgrade-Insecure-Requests (UIR).

Bottom line: we want everyone to use HTTPS and there are smarts in place on our servers and in many browsers to take care of the upgrade automatically.

So what of Semantic Web URIs, particularly namespaces like http://www.w3.org/1999/02/22-rdf-syntax-ns#?

Visit that URI in a modern, secure browser and you’ll be redirected to https://www.w3.org/1999/02/22-rdf-syntax-ns#. Older browsers and, in this context more importantly, other user agents that do not recognize HSTS and/or UIR will not be redirected. So you can go on using http://www.w3.org namespaces without disruption.

This raises a number of questions.

Firstly, is the community agreed that if two URIs differ only in the scheme (http://, https:// and perhaps whatever comes in future) then they identify the same resource? We believe that this can only be asserted by the domain owner. In the specific case of http://www.w3.org/* we do make that assertion. Note that this does not necessarily apply to any current or future subdomains of w3.org.

Secondly, some members of the Semantic Web community have already moved to HTTPS (it was a key motivator for w3id.org). How steep is the path from where we are today to moving to a more secure Semantic Web, i.e. one that habitually uses HTTPS rather than HTTP? Have you/are you considering upgrading your own software?

Unless and until the Semantic Web operates over more secure connections, we will need to be careful to pass around http URIs – which is likely to mean remembering to knock off the s when pasting a URI from your browser.

That’s a royal pain but we’ve looked at various workarounds and they’re all horrible. For example, we could deliberately redirect requests to things like our vocabulary namespaces away from the secure w3.org site to a deliberately less secure sub-domain – gah! No thanks.

Thirdly, a key feature of the HSTS/UIR landscape is that there is no need to go back and edit old resources – communication is carried out using HTTPS without further intervention. Can this be true for the Semantic Web/Linked Data too, or should we be considering more drastic action? For example, editing definitions in Turtle files such as the one at http://www.w3.org/ns/dcat# to make it explicit that http://www.w3.org/ns/dcat#Dataset is owl:equivalentClass to https://www.w3.org/ns/dcat#Dataset (or, even worse, having to go through and actually duplicate all the definitions with the different subject).
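For concreteness, that explicit-equivalence option would amount to adding a triple like the following to each affected Turtle file (a sketch showing only the one relevant statement):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://www.w3.org/ns/dcat#Dataset>
    owl:equivalentClass <https://www.w3.org/ns/dcat#Dataset> .
```

Multiplied across every term in every vocabulary, this is exactly the kind of maintenance burden we would rather avoid.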

I really hope point 3 is unnecessary – but I’d like to be sure it is.


Jose Kahan from W3C’s Systems Team adds

HSTS does the client-side upgrade from HTTP to HTTPS for a given domain. However, that header is only sent when doing an HTTPS connection. UIR defines a header that, if sent by the browser, tells the server it prefers HTTPS; the server will then redirect to HTTPS, and HSTS (through the header in the response) will kick in. HSTS doesn't handle the case of mixed content. That is the other part that UIR does to complement HSTS: tell the browser to upgrade the URLs of all content associated with a resource to HTTPS before requesting it.

For browser UAs, if HSTS is enabled for a domain and you browse a document by typing its URL in the navigation bar or follow a link to a new document, the request will be sent as HTTPS, regardless of the URL saying HTTP. If the document includes a CSS file, JavaScript, or an image, for example, and that URL is HTTP, the requests for those resources will only be sent as HTTPS if the UA supports UIR.
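The exchange described above can be sketched as headers on the wire (illustrative only; the max-age value and exact status code are examples, not W3C's actual configuration):

```http
GET /ns/dcat HTTP/1.1
Host: www.w3.org
Upgrade-Insecure-Requests: 1

HTTP/1.1 307 Temporary Redirect
Location: https://www.w3.org/ns/dcat
Vary: Upgrade-Insecure-Requests

HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: upgrade-insecure-requests
```

The first response upgrades the navigation itself; the Strict-Transport-Security header makes future requests to the domain go straight to HTTPS, and the upgrade-insecure-requests CSP directive asks UIR-capable browsers to upgrade the page's subresource fetches as well.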

by Phil Archer at May 20, 2016 05:51 PM

April 06, 2016

W3C Blog

Working on HTML5.1

HTML5 was released in 2014 as the result of a concerted effort by the W3C HTML Working Group. The intention was then to begin publishing regular incremental updates to the HTML standard, but a few things meant that didn’t happen as planned. Now the Web Platform Working Group (WP WG) is working towards an HTML5.1 release within the next six months, and a general workflow that means we can release a stable version of HTML as a W3C Recommendation about once per year.


The core goals for future HTML specifications are to match reality better, to make the specification as clear as possible to readers, and of course to make it possible for all stakeholders to propose improvements, and understand what makes changes to HTML successful.


The plan is to ship an HTML5.1 Recommendation in September 2016. This means we will need to have a Candidate Recommendation by the middle of June, following a Call For Consensus based on the most recent Working Draft.

To make it easier for people to review changes, an updated Working Draft will be published approximately once a month. For convenience, changes are noted within the specification itself.

Longer term we would like to “rinse and repeat”, making regular incremental updates to HTML a reality that is relatively straightforward to implement. In the meantime you can track progress using Github pulse, or by following @HTML_commits or @HTMLWG on Twitter.

Working on the spec…

The specification is on Github, so anyone who can make a Pull Request can propose changes. For simple changes such as grammar fixes, this is a very easy process to learn – and simple changes will generally be accepted by the editors with no fuss.

If you find something in the specification that generally doesn't work in shipping browsers, please file an issue, or better still file a Pull Request to fix it. We will generally remove things that don't have adequate support in at least two shipping browser engines, even if they are useful to have and we hope they will achieve sufficient support in the future. In some cases, you (or we) may propose the dropped feature as a future extension – see below regarding “incubation”.

HTML is a very large specification. It is developed from a set of source files, which are processed with the Bikeshed preprocessor. This automates things like links between the various sections, such as to element definitions. Significant changes, even editorial ones, are likely to require a basic knowledge of how Bikeshed works, and we will continue to improve the documentation especially for beginners.

HTML is covered by the W3C Patent Policy, so many potential patent holders have already ensured that it can be implemented without paying them any license fee. To keep this royalty-free licensing, any “substantive change” – one that actually changes conformance – must be accompanied by the patent commitment that has already been made by all participants in the Web Platform Working Group. If you make a Pull Request, this will automatically be checked, and the editors, chairs, or W3C staff will contact you to arrange the details. Generally this is a fairly simple process.

For substantial new features we prefer a separate module to be developed, “incubated”, to ensure that there is real support from the various kinds of implementers, including browsers, authoring tools, producers of real content, and users, and, when it is ready for standardisation, proposed as an extension specification for HTML. The Web Platform Incubator Community Group (WICG) was set up for this purpose, but of course when you develop a proposal, any venue is reasonable. Again, we ask that you track technical contributions to the proposal (WICG will help do this for you), so that when it arrives we know that the people who had a hand in it have also committed to W3C's royalty-free patent licensing, and developers can happily implement it without much worry about whether they will later be hit with a patent lawsuit.


W3C’s process for developing Recommendations requires a Working Group to convince the W3C Director, Tim Berners-Lee, that the specification

“is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized”

This had to be done for HTML 5.0. When a change is proposed to HTML, we expect it to have enough tests to demonstrate that it does improve interoperability. Ideally these fit into an automatable testing system like the “Webapps test harness”. But in practice we plan to accept tests that demonstrate the necessary interoperability, whether they are readily automated or not.

The benefit of this approach is that except where features are removed from browsers, which is comparatively rare, we will have a consistently increasing level of interoperability as we accept changes, meaning that at any time a snapshot of the Editors’ draft should be a stable basis for an improved version of HTML that can be published as an updated version of an HTML Recommendation.


We want HTML to be a specification that authors and implementors can use with ease and confidence. The goal isn’t perfection (which is after all the enemy of good), but rather to make HTML 5.1 better than HTML 5.0 – the best HTML specification until we produce HTML 5.2…

And we want you to feel welcome to participate in improving HTML, for your own purposes and for the good of the Web.

Chaals, Léonie, Ade – chairs
Alex, Arron, Steve, Travis – editors

by Léonie Watson at April 06, 2016 01:05 PM

April 05, 2016

W3C Blog

HTML Media Extensions to continue work

The HTML Media Extensions Working Group was extended today until the end of September 2016. As part of making video a first class citizen of the Web, an effort started by HTML5 itself in 2007, W3C has been working on many extension specifications for the Open Web Platform: capturing images from the local device camera, handling of video streams and tracks, captioning and other enhancements for accessibility, audio processing, real-time communications, etc. The HTML Media Extensions Working Group is working on two of those extensions: Media Sources Extensions (MSE), for facilitating adaptive and live streaming, and Encrypted Media Extensions (EME), for playback of protected content. Both are extension specifications to enhance the Open Web Platform with rich media support.

The W3C supports the statement from the W3C Technical Architecture Group (TAG) regarding the importance of broad participation, testing, and audit to keep users safe and the Web’s security model intact. The EFF, a W3C Member concerned about this issue, proposed a covenant, to be agreed to by all W3C Members, that included exemptions for security researchers as well as for interoperable implementations under the US Digital Millennium Copyright Act (DMCA) and similar laws. After several months of discussion and a review at the recent W3C Advisory Committee meeting, no consensus has yet emerged on the EFF’s proposed covenant.

We recognize that issues around Web security exist, that the work of security researchers is important, and that these questions merit further investigation, but we maintain that the premises for starting the work on the EME specification still apply. See the information about W3C and Encrypted Media Extensions.

The goal for EME has always been to replace non-interoperable private content protection APIs (see the Media Pipeline Task Force (MPTF) Requirements). By bringing those mechanisms into W3C discussions, EME improves the security, privacy, and accessibility around them: it provides more secure interfaces for license and key exchanges by sandboxing the underlying content decryption modules. The only key system required by the specification actually performs no digital rights management (DRM) function at all, and uses fully defined and standardized mechanisms (the JSON Web Key format, RFC 7517, and algorithms, RFC 7518). While it may not satisfy some of the attack-resistance requirements of distributors and media owners, it is the only fully interoperable key system when using EME.
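For illustration only, here is a minimal sketch (in Python, with made-up key material) of the kind of license a Clear Key server returns: a JSON Web Key set per RFC 7517, with symmetric (`"kty": "oct"`) 128-bit content keys encoded as unpadded base64url, which is what the EME Clear Key system expects.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as Clear Key expects."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def clearkey_license(keys: dict[bytes, bytes]) -> str:
    """Build a Clear Key license: a JSON Web Key set (RFC 7517)
    mapping key IDs to 128-bit content keys, using the symmetric
    'oct' key type."""
    jwk_set = {
        "keys": [
            {"kty": "oct", "kid": b64url(kid), "k": b64url(key)}
            for kid, key in keys.items()
        ],
        "type": "temporary",  # session type echoed back to the client
    }
    return json.dumps(jwk_set)

# Hypothetical 16-byte key ID and content key, for illustration only.
license_json = clearkey_license({b"0123456789abcdef": b"fedcba9876543210"})
print(license_json)
```

Because the format is fully specified, any EME implementation can interoperate with such a server without a proprietary DRM component.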

We acknowledge and welcome further efforts from the EFF and other W3C Members to investigate the relationship between technology and policy. Technologists and researchers have indeed benefited from the EFF’s work in securing a DMCA exemption from the Library of Congress, which will help to better protect security researchers from the same issues the EFF has worked to address at the W3C level.

W3C does intend to keep examining the challenges related to the US DMCA and similar laws, such as national implementations of the EU Copyright Directive, with our Members and staff. We are currently setting up a Technology and Policy Interest Group and intend to bring challenges related to these laws to that group.

by Philippe le Hegaret at April 05, 2016 02:29 PM

March 31, 2016

W3C Blog

W3C Highlights and Advisory Committee meeting

AC 2016 logo

W3C had its annual meeting last week in Boston, and it was one of the most interactive meetings we have had in recent memory. During the meeting we released the W3C Highlights for Spring 2016, a comprehensive report on W3C’s vision and focus, and heard informative presentations and discussions from industry presenters and keynote speakers.

We also had an opportunity to discuss a proposal from the Electronic Frontier Foundation about a covenant related to our Encrypted Media Extensions specification; see the information on EME work at W3C we recently made available.

Participants were most energized by a discussion of what the Next Big Thing for the Web will be. Everyone seemed to think there was much more work to do in moving the Web forward. In a straw poll, the leading topics were Web Security, Web Payments, Web of Things, and (of course) the core platform. These definitely map to where we are seeing the most excitement in W3C groups. More details can be found in our recent security blog posts; my recent payments blog post; my recent WoT post; and the Web Platform Working Group co-chairs’ blog post on next steps for the core platform.

Other Highlights of the meeting:

  • We spent half a day looking at how the Web leads industry to its full potential and vice versa. The speakers – principally from industry – taught us a great deal about how Web technologies impact their industries. See our efforts in Telecommunications, Web Payments, Web of Things, Digital Publishing, Automotive and Entertainment.
  • We had about 10 BOF sessions over lunch as Advisory Committee representatives and the Team thought through diverse topics such as Digital Marketing, executive focus on Web technologies, and blockchain.
  • We spent an afternoon getting an update on technical topics and tooling. A highlight was Nigel Megitt’s professionally delivered video speech as we all celebrated getting our first Emmy. Several attendees were seen getting their pictures taken with the Emmy.
  • Bruce Schneier delivered an impressive keynote about the growing impact of what he called “the World-Sized Web” as the Internet of Things brings more parts of world infrastructure within the domain of Web technology. A particular focus was on how to secure this infrastructure.

by Jeff Jaffe at March 31, 2016 06:30 PM

March 25, 2016

ishida >> blog

Historical maps of Europe, 362-830 AD

Picture of the page in action.
>> See the chronology
>> See the maps

This blog post introduces the first of a set of historical maps of Europe that can be displayed at the same scale so that you can compare political or ethnographic boundaries from one time to the next. The first set covers the period from 362 AD to 830 AD.

A key aim here is to allow you to switch from map to map and see how boundaries evolve across an unchanging background.

The information in the maps is derived mostly from Colin McEvedy’s excellent series of books, in particular (so far) The New Penguin Atlas of Medieval History, but sometimes also brings in information from the Times History of Europe. Boundaries are approximate for a number of reasons: first, especially in the earlier periods, the borders were only ever approximate; second, I have deduced the boundary information from small-scale maps and (so far) only a little additional research; third, the sources sometimes differ about where boundaries lay. I hope to refine the data during future research; in the meantime, take this information as grosso modo.

The link below the picture takes you to a chronological summary of events that lie behind the changes in the maps. Click on the large dates to open maps in a separate window. (Note that all maps will open in that window, and you may have to ensure that it isn’t hidden behind the chronology page.)

The background to the SVG overlay is a map that shows relief and rivers, as well as modern country boundaries (the dark lines). As good as McEvedy’s maps are, these were things I always found missing when looking for useful reference points. Since the outlines and text are created in SVG, you can zoom in to see details.

This is just the first stage, and the maps are still largely first drafts. The plan is to refine the details for existing maps and add many more. So far we only deal with Europe. In the future I’d like to deal with other places, if I can find sources.

by r12a at March 25, 2016 11:37 AM

March 19, 2016

ishida >> blog

UniView now supports Unicode 9 beta

Picture of the page in action.
>> Use UniView

UniView now supports the characters introduced for the beta version of Unicode 9. Any changes made during the beta period will be added when Unicode 9 is officially released. (Images are not available for the Tangut additions, but the character information is available.)

It also brings in notes for individual characters where those notes exist, if Show notes is selected. These notes are not authoritative, but are provided in case they prove useful.

A new icon was added below the text area to add commas between each character in the text area.

Links to the help page that used to appear on mousing over a control have been removed. Instead there is a noticeable blue link to the help page, and the help page has been reorganised and uses image maps so that it is easier to find information. The reorganisation puts more emphasis on learning by exploration, rather than learning by reading.

Various tweaks were made to the user interface.

by r12a at March 19, 2016 10:22 PM

March 11, 2016

W3C Blog

An invitation to the free-software community for real dialog

This is an open invitation to all people in the free-software community for genuine person-to-person dialog with people in the W3C staff about DRM on the Web (and any other topics of importance to the Web we all have an interest in discussing).

We have a People of the W3C page that lists the names and e-mail addresses of all the W3C staff, and we always welcome you to contact us about the work we are doing together for the Web. Along with that we have a Contact page that includes more details about how to find us.

We believe this invitation from us to you for real person-to-person dialog is a much more constructive route to mutual understanding and change than approaches such as the recent campaign (under the apparent aegis of the Free Software Foundation) which you might have seen, that encourages you to instead go by a W3C office to just “take a protest selfie” in demonstration against “DRM in HTML”.

As the announcement about that campaign suggests, if you live near a W3C office, “you have a unique opportunity to make a difference”—but that opportunity is actually for much more than just snapping a selfie next to a W3C sign. Instead you have a chance to talk with real people who care a great deal about the Web and its future—just as you do—and to find out things we agree about with each other, and problems we can work on solving together.

We’re all real people. So let’s treat each other like real people, and let’s not let someone else shoehorn us into a narrative they want to construct about fearless activists doing battle against some faceless, uncaring entity.

So if you care enough to make time to visit a W3C office in person, please consider not doing it only to take a selfie in front of a W3C sign and then leave. Instead, make it an opportunity to actually meet the people at your nearby W3C office who care deeply about a lot of the same things you do, and chat with some of us person-to-person over a cup of coffee (or hey, maybe even some after-work drinks somewhere nearby).

The announcement about the “take a protest selfie” campaign claims to have “reliable advice” that it will be “very influential to the W3C’s leadership”. But I have a lot more reliable advice for you: The open invitation for real person-to-person conversation, that we as people are offering you right here, is an opportunity to be much more influential.


by Michael[tm] Smith at March 11, 2016 01:45 PM

March 09, 2016

W3C Blog

HTML: What’s next?

Since the end of last year the Web Platform Working Group has had responsibility for W3C’s HTML spec, as well as many other core specifications. What have we been doing with HTML, and what is the plan?

The short story is that we are working toward an HTML 5.1 Recommendation later this year. The primary goal is to provide a specification that is a better match for reality, by incorporating things that are interoperable and removing things that aren’t.

We also want more people and organisations to get involved and make sure the development of HTML continues to reflect the needs and goals of the broad community.

As an important step down that path, the editors (Arron Eicholz, Steve Faulkner and Travis Leithead) have published the Editors’ Draft on GitHub, and by using Bikeshed to build it we have made it easier for people to propose an effective edit. Different kinds of edit require different levels of effort, of course…

Fixing a typo, or clarifying some text so it is easier to understand, is an easy way to start contributing, to get used to the spec source and GitHub, and to improve HTML. This level of edit will almost always be accepted with little discussion.

Meanwhile, we welcome suggestions – ideally as pull requests, though sometimes raising an issue is more appropriate – identifying features that should not be in a Recommendation yet, for example because they don’t work interoperably.

Naturally proposals for new features require the most work. Before we will accept a substantial feature proposal as part of an HTML recommendation, there needs to be an indication that it has real support from implementors – browsers, content producers, content authoring and management system vendors and framework developers are all key stakeholders. The Web Platform Incubator Community Group is specifically designed to provide a home for such incubation, although there is no obligation to do it there. Indeed, the picture element was developed in its own Community Group, and is a good example of how to do this right.

Finally, a lot of time last year was spent talking about modularisation of HTML. But that is much more than just breaking the spec into pieces – it requires a lot of deep refactoring work to provide any benefit. We want to start building new things that way, but we are mostly focused on improving quality for now.

The Working Group is now making steady progress on its goals for HTML, as well as its other work. An important part of W3C work is getting commitments to provide Royalty-Free patent licenses from organisations, and for some large companies with many patents that approval takes time. At the same time, Art Barstow who was for many years co-chair of Web Apps, and an initial co-chair of this group, has had to step down due to other responsibilities. While chaals continues as a co-chair from Web Apps, joined by new co-chairs Adrian Bateman and Léonie Watson, we still miss both Art’s invaluable contributions and Art himself.

So we have taken some time to get going, but we’re now confident that we are on track to deliver a Recommendation for HTML 5.1 this year, with a working approach that will make it possible to deliver a further improved HTML Recommendation (5.2? We’re not too worried about numbering yet…) in another year or so.

by Charles McCathie Nevile at March 09, 2016 02:00 PM

March 08, 2016

W3C Blog

W3C Web of Things at Industry of Things World

Illustration of WoT interoperability layer and things interactions

Two weeks ago I had the pleasure to present our vision of the future of the Internet of Things at the Industry of Things World USA conference in San Diego.

Here in a nutshell is what I told them:

Conference chairman Jeremy Geelan had started the conference with a retrospective. It took 27 years from J. C. R. Licklider’s vision of a worldwide computer network until Tim Berners-Lee’s invention of the Web became the spark to make this network broadly useful.

I then said that history could repeat itself with this next driver on the Internet, the Internet of Things (IoT). That is, unless we have a model which makes broad sharing available for IoT, we are liable to delay progress for years or decades. Specifically, today’s architectures and initial implementations tend to be siloed. There are standards at the physical layer but insufficient interoperability at the higher layers.

For example, a person’s watch (as an IoT device) will want to participate in IoT wearable applications (since it is worn), IoT medical applications (as it takes one’s pulse and links into personal medical information), IoT Smart Homes (used to control the home), IoT Smart Cities (as the municipal infrastructure relies on data about weather and traffic), and IoT Smart Factories (to track its usage and condition). But participating across all of these silos, and building applications that leverage them, requires common data models, metadata, and an interoperable layered model.

From this I introduced the Web of Things model and Interest Group. This complements IoT by providing a higher level interoperability layer above IoT. Through the work W3C is doing in our task forces, we are addressing interoperability related issues: thing descriptions, API and Protocol bindings, discovery and provisioning, and security.

I spoke with several stakeholders – thing manufacturers, developers, and solution providers – all of whom seemed to agree that interoperability needs a greater level of attention in IoT.

by Jeff Jaffe at March 08, 2016 08:46 PM

March 07, 2016

ishida >> blog

More updates for the Egyptian hieroglyph picker

Picture of the page in action.
>> Use the picker

I’ve been doing more work over the weekend.

The data behind the keyword search has now been completely updated to reflect descriptions by Gardiner and Allen. If you work with those lists it should now be easy to locate hieroglyphs using keywords. The search mechanism has also been rewritten so that you no longer need to type keywords in a particular order for them to match. I also strip out various common function words, and do some other optimisation, before attempting a match.
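A sketch of how such order-independent matching might work (this is illustrative Python, not the picker’s actual code, and the stopword list is invented):

```python
# Example function words to ignore; the real list is an assumption here.
STOPWORDS = {"a", "an", "and", "of", "the", "with"}

def matches(query: str, description: str) -> bool:
    """Return True if every significant query term occurs somewhere in
    the description, in any order (order-independent keyword search)."""
    terms = [t for t in query.lower().split() if t not in STOPWORDS]
    return all(t in description.lower() for t in terms)

# The same description matches regardless of keyword order.
desc = "ox horns with stripped palm branch"
print(matches("palm ox", desc))
print(matches("ox palm", desc))
```

Splitting the query into terms and requiring each to match independently is what frees the user from typing keywords in the order they appear in the description.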

The other headline news is the addition of various controls above the text area, including one that will render MdC text as a two-dimensional arrangement of hieroglyphs. To do this, I adapted WikiHiero’s PHP code to run in javascript. You can see an example of the output in the picture attached to this post. If you want to try it, the MdC text to put in the text area is:
anx-G5-zmA:tA:tA-nbty-zmA:tA:tA-sw:t-bit:t- -zA-ra:.-mn:n-T:w-Htp:t*p->-anx-D:t:N17-!

The result should look like this:

Picture of hieroglyphs.
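In MdC, `-` separates sign groups (quadrats), `:` stacks signs vertically within a group, and `*` places signs side by side. A minimal Python sketch of this structural split (ignoring cartouches, shading, line breaks and the other MdC operators, and unrelated to WikiHiero’s actual code) might look like this:

```python
def parse_mdc(text: str) -> list[list[list[str]]]:
    """Split an MdC string into quadrats: '-' separates groups,
    ':' stacks rows vertically, '*' juxtaposes signs within a row."""
    quadrats = []
    for group in text.split("-"):
        group = group.strip()
        if not group:
            continue
        # Each quadrat becomes a list of rows; each row a list of codes.
        rows = [row.split("*") for row in group.split(":")]
        quadrats.append(rows)
    return quadrats

print(parse_mdc("sw:t-bit:t"))
```

A renderer would then lay out each quadrat as a little grid, which is how a linear code string becomes a two-dimensional arrangement of hieroglyphs.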

Other new controls allow you to convert MdC text to hieroglyphs, and vice versa, or to type in a Unicode phonetic transcription and find the hieroglyphs it represents. (This may still need a little more work.)

I also moved the help text from the notes area to a separate file, with a nice clickable picture of the picker at the top that will link to particular features. You can get to that page by clicking on the blue Help box near the bottom of the picker.

Finally, you can now set the text area to display characters from right to left, in right-aligned lines, using more controls > Output direction. Unfortunately, i don’t know of a font that under these conditions will flip the hieroglyphs horizontally so that they face the right way.

For more information about the new features, and how to use the picker, see the Help page.

by r12a at March 07, 2016 11:21 AM

March 04, 2016

W3C Blog

W3C Web Payments at NACHA Payments Innovation Alliance

Web Payments icons; shopping basket, padlock, shopping basket on handheld device and hand

Last week I had the pleasure to present our vision of the future of Web Payments and e-commerce at the NACHA Payments Innovation Alliance meeting in San Francisco. This alliance focuses on modernizing payments in view of technology trends, standards, and security.

It was a good opportunity to share our vision of web payments with the financial services and fintech communities. Here in a nutshell is what I told them:

  • The historic dichotomy between the design point of the Web and how it is used by merchants is striking. The Web is user-centric. It is designed to connect humanity to information. e-Commerce has historically been merchant-centric. A merchant makes their product available on their site for a customer to come in and buy.
  • But e-Commerce has changed. In today’s omni-channel model, the customer is in charge. They receive information from social media and recommendation websites. They combine in-store with on-line, and expect service from laptop and mobile. There are enormous changes and more flexibility in payment methods. Users are looking for, and finding, the best deal and, most importantly, the best overall experience.

A few blocks away from the NACHA meeting at the Google office, W3C’s Web Payments Interest Group and Web Payments Working Group were meeting face-to-face to discuss how the Web will need to expand to help the industry meet these challenges.

The Web Payments Working Group is looking at how to improve the e-Commerce user experience by streamlining the checkout process and improving security. Ian Jacobs has written a summary of the Working Group’s recent discussion and plan to publish a First Public Working Draft in early April of what we believe will be transformative new Web technology.

Ultimately, we will need a broad set of offerings. The Interest Group discussed a number of topics as fodder for new standardization efforts. These included the impact of regulatory changes such as the Payment Services Directive (PSD2) in Europe, as well as faster-payments initiatives such as those of the US Federal Reserve. The group also discussed use cases for “verifiable claims” on the Web, opened a discussion on blockchain and the Web, and discussed interoperability with the broader financial services ecosystem (e.g., through alignment with ISO20022).

It was great to brainstorm the changing face of e-Commerce with the Financial Services industry during the NACHA conference —joined by Working Group co-Chair Nick Telford-Reed (Worldpay)— and illustrate how the W3C community is addressing the challenge. The net of all of these changes is that the underlying payment and e-commerce infrastructure will be “web-like”. In addition to connecting humanity to information, we will better connect humanity with their economic potential.

by Jeff Jaffe at March 04, 2016 12:23 AM

February 29, 2016

ishida >> blog

Egyptian hieroglyph picker updated

Picture of the page in action.
>> Use the picker

Over the weekend I added a set of new features to the picker for Egyptian Hieroglyphs, aimed at making it easier to locate a particular hieroglyph. Here is a run-down of various methods now available.

Category-based input

This was the original method. Characters are grouped into standard categories. Click on one of the orange characters, chosen as a nominal representative of the class, to show below all the characters in that category. Click on one of those to add it to the output box. As you mouse over the orange characters, you’ll see the name of the category appear just below the output box.

Keyword-search-based input

The app associates most hieroglyphs with keywords that describe the glyph. You can search for glyphs using those keywords in the input field labelled Search for.

Searching for ripple will match both ripple and ripples. Searching for king will match king and walking. If you want to match only whole words, surround the search term with colons, i.e. :ripple: or :king:.

Note that the keywords are written in British English, so you need to look for sceptre rather than scepter.

The search input is treated as a regular expression, so if you want to search for two words that may have other words between them, use .*. For example, ox .* palm will match ox horns with stripped palm branch.

Many of the hieroglyphs have also been associated with keywords related to their use. If you select Include usage, these keywords will also be searched. Note that this keyword list is not exhaustive by any means, but it may occasionally be useful. For example, a search for Anubis will produce 𓁢 𓃢 𓃣 𓃤.

(Note: to search for a character based on the Unicode name for that character, e.g. w004, use the search box in the yellow area.)
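The colon convention and the regex passthrough described above could be implemented along these lines (an illustrative Python sketch, not the picker’s actual code):

```python
import re

def keyword_search(term: str, keywords: str) -> bool:
    """Match a search term against a hieroglyph's keyword string.
    ':word:' forces a whole-word match; otherwise the term is used
    as a regular expression, so 'ox .* palm' can match with other
    words in between."""
    if term.startswith(":") and term.endswith(":") and len(term) > 2:
        pattern = r"\b" + re.escape(term.strip(":")) + r"\b"
    else:
        pattern = term
    return re.search(pattern, keywords) is not None

print(keyword_search("king", "man walking"))    # substring behaviour
print(keyword_search(":king:", "man walking"))  # whole-word behaviour
```

The `\b` word boundaries are what stop :king: from matching inside walking, while a bare term passes straight through to the regex engine.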

Searching for pronunciations

Many of the hieroglyphs are associated with one-, two- or three-consonant pronunciations. These can be looked up as follows.

Type the sequence of consonants into the output box and highlight them. Then click on Look up from Latin. Hieroglyphs that match that character or sequence of characters will be displayed below the output box, and can be added to the output box by clicking on them. (Note that if you still have the search string highlighted in the output box those characters will be replaced by the hieroglyph.)

You will find the panel Latin characters useful for typing characters that are not accessible via your keyboard. The panel is displayed by clicking on the higher L in the grey bar to the left. Click on a character to add it to the output area.

For example, if you want to obtain the hieroglyph 𓎝, which is represented by the 3-character sequence wꜣḥ, add wꜣḥ to the output area and select it. Then click on Latin characters. You will see the character you need just above the SPACE button. Click on that hieroglyph and it will replace the wꜣḥ text in the output area. (Unhighlight the text in the output area if you want to keep both and add the hieroglyph at the cursor position.)
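Conceptually, Look up from Latin is a reverse lookup from a consonantal transliteration to candidate hieroglyphs. A toy Python sketch, seeded only with the wꜣḥ example above (the picker’s real data set is of course far larger):

```python
# A tiny illustrative slice of a transliteration-to-hieroglyph table;
# only the example from this post is included.
SOUND_TO_GLYPHS = {
    "wꜣḥ": ["𓎝"],
}

def lookup_from_latin(translit: str) -> list[str]:
    """Return the hieroglyphs matching a consonantal transliteration,
    or an empty list if none is known."""
    return SOUND_TO_GLYPHS.get(translit, [])

print(lookup_from_latin("wꜣḥ"))
```

Returning a list rather than a single glyph matters because several hieroglyphs can share the same consonantal value, which is why the picker displays all the matches for you to choose from.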

Input panels accessed from the vertical grey bar

The vertical grey bar to the left allows you to turn on/off a number of panels that can help create the text you want.

Latin characters. This panel displays Latin characters you are likely to need for transcription. It is particularly useful for setting up a search by pronunciation (see above).

Latin to Egyptian. This panel also displays Latin characters used for transcription, but when you click on them they insert hieroglyphs into the output area. These are 24 hieroglyphs represented by a single consonant. Think of it as a shortcut if you want to find 1-consonant hieroglyphs by pronunciation.

Where a single consonant can be represented by more than one hieroglyph, a small pop-up will present you with the available choices. Just click on the one you want.

Egyptian alphabet. This panel displays, directly as hieroglyphs, the 26 signs that the previous panel produces. In many cases this is the quickest way of typing these hieroglyphs.

by r12a at February 29, 2016 12:45 PM

February 25, 2016

ishida >> blog

New picker: Egyptian hieroglyphs

Picture of the page in action.
>> Use the picker

I have just published a picker for Egyptian Hieroglyphs.

This Unicode character picker allows you to produce or analyse runs of Egyptian Hieroglyph text using the Latin script.

Characters are grouped into standard categories. Click on one of the orange characters, chosen as a nominal representative of the class, to show below all the characters in that category. Click on one of those to add it to the output box. As you mouse over the orange characters, you’ll see the name of the category appear just below the output box.

Just above the orange characters you can find buttons to insert RLO and PDF controls. RLO makes the characters that follow it progress from right to left. Alternatively, you can select more controls > Output direction to set the direction of the output box to RTL/LTR override. The latter approach will align the text to the right of the box. I haven’t yet found a Unicode font that also flips the glyphs horizontally as a result. I’m not entirely sure about the best way to apply directionality to Egyptian hieroglyphs, so I’m happy to hear suggestions.
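In Unicode terms, RLO is U+202E RIGHT-TO-LEFT OVERRIDE and PDF is U+202C POP DIRECTIONAL FORMATTING: bracketing a run with the pair makes a bidi-aware display lay it out right to left. A small Python sketch (the sample hieroglyphs are arbitrary):

```python
RLO = "\u202E"  # RIGHT-TO-LEFT OVERRIDE
PDF = "\u202C"  # POP DIRECTIONAL FORMATTING

def rtl_override(text: str) -> str:
    """Wrap text in an RLO...PDF pair so an engine implementing the
    Unicode bidirectional algorithm renders the run right to left."""
    return RLO + text + PDF

wrapped = rtl_override("𓊪𓏏𓊖")
print(repr(wrapped))
```

Note that this only reverses the order of the signs; whether the glyphs themselves are mirrored to face the other way is up to the font, which is the limitation mentioned above.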

Alongside the direction controls are some characters used for markup in the Manuel de Codage, which allows you to prepare text for an engine that knows how to lay it out two-dimensionally. (The picker doesn’t do that.)

The Latin Characters panel, opened from the grey bar to the left, provides characters needed for transcription.

In case you’re interested, here is the text you can see in the picture. (You’ll need a font to see this, of course. Try the free Noto Sans font, if you don’t have one – or copy-paste these lines into the picker, where you have a webfont.)

The last two lines spell the name of Amenhotep using Manuel de Codage markup, according to the Unicode Standard (p 432).

by r12a at February 25, 2016 05:43 PM