August 17, 2016

Reinventing Fire

Topic of Cancer

I’m now officially a cancer survivor! Achievement unlocked!

A couple weeks ago, on July 27th, during a routine colonoscopy, they found a mass in my ascending colon which turned out to have some cancer cells.

I immediately went to UNC Hospital, a world-class local teaching hospital, and they did a CT scan on me. There are no signs that the cancer has spread. I was asymptomatic, so they caught it very early. The only reason I did the colonoscopy is that there’s a history of colon cancer in my family.

Yesterday, I had surgery to remove my ascending colon (an operation they call a “right colectomy”). They used a robot (named da Vinci!) operated by their chief GI oncology surgeon, and made 5 small incisions: 4 on the left side of my belly to cut out that part of the right colon; and a slightly larger one below my belly to remove the tissue (ruining my bikini line).

Everything went fine (I made sure in advance that this was a good robot and not a killer robot that might pull a gun on me), and I’m recovering well. I walked three times today so far, and even drank some clear liquids. I’ll probably be back on my feet and at home sometime this weekend. Visitors are welcome!

There are very few long-term negative effects from this surgery, if any.

They still don’t know for certain what stage the cancer was at, or if it’s spread to my lymph nodes; they’ll be doing a biopsy on my removed colon and lymph nodes to determine if I have to do chemotherapy. As of right now, they are optimistic that it has not spread, and even if it has, the chemo for this kind of cancer is typically pretty mild. If it hasn’t spread (or “metastasized”), then I’m already cured by having the tumor removed. In either case, I’m going to recover quickly.

My Dad had colon cancer, and came through fine. My eldest sister also had colon cancer over a decade ago, and it had even metastasized, and her chemo went fine… and cancer treatments have greatly improved in the past few years.

So, nobody should worry. I didn’t mention it widely, because I didn’t want to cause needless grief to anyone until after the operation was done. Cancer is such a scary word, and I don’t think this is going to be as serious as it might otherwise sound.

I’ll be seeing a geneticist in the coming weeks to determine exactly what signature of cancer I have, so I know what I’m dealing with. And I want to give more information to my family, because this runs in our genes, and if I’d gotten a colonoscopy a few years ago, they could have removed the polyp in the early stages and I’d have never developed cancer. (And because I’m otherwise healthy, I probably wouldn’t have gotten the colonoscopy if I hadn’t had insurance, which I probably wouldn’t have had if Obamacare didn’t mandate it. Thanks, Obama!)

Yay, science!

Future Plans

So, the cliché here is for me to say that this has opened my eyes to the ephemerality and immediacy of life, and that I’m planning to make major decisions in my life that prioritize what I truly value, based on my experience with cancer.

But the fact is, I’ve already been doing that recently, and while the cancer underscores this, I’ve already been making big plans for the future. I’ll post soon about some exciting new projects I’m trying to get underway, things that are far outside my comfort zone for which I’ll need to transform myself (you know, in a not-cancerous sort of way). I’ve already reduced my hours at W3C to 50%, and I’m looking at changing my role and remaining time there; I love the mission of W3C, which I see as a valuable kind of public service, so no matter what, I’ll probably stay involved there in some capacity for the foreseeable future. But I feel myself pulled toward building software and social systems, not just specifications. Stay tuned for more soon!

I’m optimistic and excited, not just about leaving behind this roadbump of cancer, but of new possibilities and new missions to change the world for the better in my own small ways.


by Shepazu at August 17, 2016 08:39 PM

August 16, 2016

W3C Blog

W3C Day in Spain: Web of Things to boost industrial productivity

Recently the W3C Spanish Office organized the W3C Day in Spain, an annual conference held in different cities across Spain. The objective of this event is to facilitate a collaborative environment that enables local stakeholders to share their knowledge of ICTs. The W3C Day is considered one of the major forums in Spain for discussing the future of Web technologies in industry, academia, the public sector, and society in general.

Dave Raggett during his talk at W3C Day in Spain 2016

This 13th edition, hosted by the CTIC Technology Centre in Gijón (Spain), focused on the Web of Things and its application in industry, aiming to raise awareness of the new IoT-related technologies that can boost the concept of Industry 4.0 in Spain. The event gathered over 220 experts, who interacted with the speakers during five dynamic panels. Most of the attendees came from Spain, although there were a few international representatives from Latin American and Eastern European countries.

The agenda was full of high-level speakers from leading Spanish corporations, the national government, and prestigious universities. All the keynotes and panels focused on the evolution of the Internet and the Web towards the Internet of Things (IoT) paradigm, framed by its potential interest for industry. The experts’ talks addressed the topics from a high-level perspective, introducing the challenges and opportunities in their sectors.

The first keynote speaker was Szymon Lewandowski (Policy Officer at DG CONNECT, European Commission), who presented the European Commission’s efforts to evolve industry towards a Digital Single Market. His speech was clear and concise, encouraging companies to evolve their strategies through the right use of data and the adoption of Web and Internet standards.

Dave Raggett (W3C Web of Things) presented W3C’s work on this promising new concept, which aims to solve the problem of interoperability in the IoT. His talk, titled ‘Web of Things: Web standards to bridge the IoT silos’, was a good starting point for specific discussions on subtopics such as interoperability, security, big data, cloud computing, and Industry 4.0 strategies.

Panelists during the W3C Day in Spain 2016

Over a full day, keynote speakers and panelists discussed how open standards could increase productivity, optimize business processes, and avoid silos of things (and information). The event served as a good starting point for Spanish industry to shift its mindset towards the Web of Things. We are already thinking about which topics to discuss next year.

Interested in learning more? Read a complete report about the W3C Day in Spain 2016 and join the W3C Web of Things Interest Group if you want to make (and change) the rules for a better Internet ecosystem.

by Martin Alvarez-Espinar at August 16, 2016 10:13 AM

August 09, 2016

W3C Blog

W3C China celebrated its 10th Anniversary in July

The W3C Beihang Host celebrated W3C China’s 10th anniversary at Beihang University on July 9th, 2016. To honor a fruitful first 10 years and look forward to a brighter future, the W3C China team invited the local Web community to celebrate this great moment together.

The event was organized in three sessions: Core Web Technology, Future of the Web, and Web & Industry. Eleven speakers from the W3C team, W3C members, and some noted researchers shared their insights. More than 200 participants attended the event on site, and about 20,000 remote attendees watched the video stream. The Core Web Technology session focused on the current achievements of the Open Web Platform, with presentations about Web design principles, Web applications, and Web accessibility. In the Future of the Web session, the speakers covered hot topics such as blockchain, virtual reality, and data visualization; Prof. Wei-Tek Tsai, who had just come back from the W3C Blockchain workshop, shared his experience there as well as his vision for blockchain. The Web & Industry session was mainly devoted to W3C’s efforts in vertical industries such as payments, automotive, and the Web of Things; Dr. Jie Bao, a former participant in the W3C Linked Data activity, talked about the use of linked data in the financial industry and offered the audience a fresh angle on linked data technologies.

Prof. Jinpeng Huai, former Beihang Host representative, ex-President of Beihang University, and Vice Minister of the Ministry of Industry and Information Technology, joined the event and expressed his best wishes for the future of W3C and the Web.

A brief history of W3C in China

In the spring of 2006, the W3C China Office was launched at Beihang University, and Beihang University has hosted W3C in China ever since. In 2008, the W3C China Office took over the related business from the W3C Hong Kong Office, which was then closed. The W3C China Office appreciated the contribution of the Hong Kong Office, especially the efforts and support of its Office Manager, Prof. Vincent Shen. With the continuous endeavors of the W3C team at home and abroad, as well as strong support from the Web community, W3C has grown robustly together with the Web industry in China. More and more noted Chinese ICT organizations, such as Alibaba, Tencent, Huawei, Baidu, China Mobile, China Unicom and the Chinese Academy of Sciences, have joined W3C as members. New Web technologies like HTML5 have gained increasing popularity among Chinese developers. In January 2016, W3C upgraded its China Office and launched its fourth international R&D center at Beihang, a.k.a. the W3C Host in China.

by Angel Li at August 09, 2016 04:15 AM

August 08, 2016

Reinventing Fire

In Praise of HB2

North Carolina House Bill 2 (aka “HB2”, or the “Public Facilities Privacy & Security Act”, or simply “the Bathroom Bill”), which among other things prohibits transgender people from using the bathroom designated for the sex of their identity, is going to force another step forward in civil liberties.

Four years ago, in the 2012 gubernatorial election season, the North Carolina General Assembly, controlled by Republicans, passed North Carolina Amendment 1 (aka, “SB514”, or “An Act to Amend the Constitution to Provide That Marriage Between One Man and One Woman is the Only Domestic Legal Union That Shall Be Valid or Recognized in This State”), which called for a public referendum on the issue of constitutionally banning same-sex marriage.

From its inception, this bill was doomed to have no long-term relevance; it was cast in the mold of the polemical 2008 California Proposition 8. Already, the battle lines were being drawn for the national legalization of same-sex marriage: the military’s restrictive “Don’t ask, don’t tell” policy had been repealed, and the Department of Defense was permitting military chaplains to perform same-sex marriage ceremonies; President Obama had announced his support for marriage equality; challenges to Prop 8 were wending their way to the Supreme Court; and public polling indicated that a slender-but-growing majority of Americans approved of same-sex marriage. Predictably, in July 2014, the 4th Circuit U.S. Court of Appeals overturned an equivalent bill in Virginia, declaring it unconstitutional, thus nullifying NC’s Amendment 1. Why did NC legislators waste so much time, money, and energy on a bill they had to know wouldn’t last?

Because this was about more than just the bill itself. It was a dog whistle, or maybe a bullhorn, to rally conservatives around the state to come to the polls. A well-funded campaign of anti-marriage-equality groups spread across rural NC, especially conservative Christian groups, from the famous evangelical pastor Reverend Billy Graham, to two NC Roman Catholic bishops, to the Christian-funded Vote for Marriage NC, to the pulpit activism of ministers around the state. The message wasn’t just “vote for Amendment 1”, it was “vote for conservatives”; Representative Mark Hilton (R-Catawba) said, “One of the issues [conservative groups] have come to me about is the marriage amendment. It’s important to the conservative groups that we get this passed this year because they need that to be able to get their ground game working to get the maximum effect to get out the vote.” It was a heavily divisive issue, one that played to the deepest emotions of conservatives, and the public debate energized the voters, and helped usher in a new conservative Republican governor, Pat McCrory, after 20 years of fairly progressive Democratic governors (and a longer history of less-progressive Democratic governors before that).

So, is it really a coincidence that 4 years later, in the 2016 gubernatorial election season, the North Carolina General Assembly, controlled by Republicans, passed a bill that limits the rights of a gender minority? Or that some of them are calling for a public referendum? Or that they diverted $500K from the state’s Emergency Response and Disaster Relief fund to defend the fore-doomed HB2 in court against the U.S. Department of Justice, maintaining the controversy and the press for the next several months until the November election? I don’t think it will have the saving grace for Pat McCrory that it did last time, however; it’s already cost the state millions of dollars in revenue, and it’s made us an international laughing-stock.

Like Amendment 1 before it, HB2 is destined to be overturned, a footnote in history. But in the meantime, it’s causing real harm to real people; phone calls to Trans Lifeline, the nonprofit transgender crisis hotline, doubled after the passage of HB2; and some bigots feel emboldened to mock or even harm transgender people in the name of this law. This must have been profoundly disappointing for the human rights activists in Charlotte who’ve spent years working to make NC more inclusive, and who scored a victory with the Charlotte City Council with the passage of Charlotte Ordinance 7056 (aka, “An Ordinance Amending Chapter 2 of the Charlotte City Code Entitled “Administration”, Chapter 12 Entitled “Human Relations”, and Chapter 22 Entitled “Vehicles for Hire””), only to have it struck down at the state level by HB2. So, why am I praising HB2, rather than Charlotte Ordinance 7056?

Because, as good as the intention was behind Charlotte Ordinance 7056, if left unopposed, it would have had minor and purely local effect, rather than the transformative societal effect of HB2.

California’s Prop 8, banning same-sex marriage, was the critical event that made same-sex marriage legal across the entire US, in three notable ways:

  1. The public debate forced people to form an opinion on the issue, and when pressed on it, most people decided that either they were in support of marriage equality or that it simply wasn’t their business to dictate what other adults did;
  2. It inspired contrary legislation in several other states, legalizing same-sex marriage there;
  3. It forced the issue to be resolved in the courts, rather than the timid Congress.

Federal laws are made in two ways in the USA: either they are enacted by Congress, or they are decided as interpretations of the Constitution by the Supreme Court (or its lower district courts). Though same-sex marriage was trending upward in favorability among Americans, it would likely have been decades before Congress acted on it; members of Congress are too afraid of strong action on contentious issues, lest it endanger their reelection, and no single party is likely to have a clear mandate to act unilaterally for the next several elections. (A cynical view might assert that controversial issues, like same-sex marriage, gun control, health care, and abortion, are kept unresolved so the parties have strong, emotional differentiators to garner voters, but I prefer to ascribe it to simple inability.) So, the courts brought in marriage equality at least a decade earlier, and probably much earlier, than would have been possible from Congress. And this has been a huge step forward in civil rights, positively affecting hundreds of thousands of lives, and giving millions of people their dignity.

And these laws do more than just determine how people are treated by the government. They set a normative expectation among the public. Same-sex marriage is enjoying more popular support now not only because the law reflects how people feel… people feel differently because of the law itself. At their best, laws are a reflection and reinforcement and declaration of shared social values.

So ask yourself, and be honest: Were you concerned about the rights of transgender people a year ago? Were you inspired to march in the streets, attend rallies, or even post on social media about it?

I wasn’t. Sure, if you’d asked me, I would have said truthfully that I thought transgender people should have the same rights as others. But I wouldn’t have felt that strongly about it.

And then HB2 happened. In my state. And I was forced to form an opinion.

And I took to the streets.

Because, who are we, as a state? Who am I, as a citizen? I’ll tell you, clearly, in the face of legal claims by representatives of my state government: “We are not this.”

We are not punching down. We are not petty. We are not oppressive. We are not exclusionary.

Still, if same-sex marriage was yet decades away, how long in the future were transgender rights? How many years and how many lives until we cared?

But now, around the country, around the world, people are defiantly defining themselves by what they are not, on an issue that had not even been on their radar: “We are not this.”

I may not know much about law, but I know what I don’t like.

“We are not this.”

I can’t predict if HB2, this bigoted bill, will help conservatives maintain control of the North Carolina state government for another term. But I do know its one inevitable effect: however hurtful it will be for transgender people in its short life, and though some of those affected may not live to see the long-term benefits, it will give transgender people their legal dignity ever after.

So, self-styled “social conservatives”, keep bringing us hateful, hurtful laws. Keep pushing against the tide of history. Keep forcing us to form an opinion. Please.

by Shepazu at August 08, 2016 04:20 AM

August 04, 2016

W3C Blog

25 years ago the world changed forever

6 August 1991 usenet post about the WorldWideWeb by Tim Berners-Lee

If you grew up thinking that the Web always existed since you were born, you may be right. If not, you may remember the very early days of the Web.

Two years ago we celebrated the invention of the Web on the anniversary of the March 1989 memo written by Tim Berners-Lee outlining his proposal for the World Wide Web. On Saturday we celebrate not only the brilliance of the Web’s conception but also the world-changing moment at which the Web was offered as a publicly available service.

25 years ago, on 6 August 1991, Tim Berners-Lee posted information about the WorldWideWeb project on the alt.hypertext newsgroup (similar to a message board) and invited wide collaboration – marking, in one email, the Web’s introduction to the wider world.

Even at the start of his work on the Web, Tim offered it to everyone, opening it for contribution from all. Because so many around the globe have taken him up on his offer and have helped to develop the Web, to create and share content as well as to build standards to keep it interoperable and innovative, the Web has become not just a repository for knowledge and sharing beyond the dreams of any library, but one of the most powerful tools in history.

W3C CEO Jeff Jaffe noted:

“With the Web we are trying to encapsulate all that civilization needs.  As needs and opportunities arise and new technologies facilitate addressing those needs, W3C focuses on improving the Web technology base.  We need everyone’s engagement to ensure that we are addressing the most important problems in the best way.”

The Web has changed all our lives and we are pleased to celebrate the historical occasion of its release to the public 25 years ago.  At W3C we continue to uphold our core values of openness, collaboration and innovation in our standards while we pursue our mission  of leading the Web to its full potential.

Thank you Tim and thanks to all who have, by their efforts, helped to create the Web  – from its earliest beginnings, to its inestimable impact on our lives now and for all the exciting ways it will continue to evolve in the future.

We are grateful to all of those who have made the Web what it is now: our W3C Members; Web developers and all Web users; those who work to make sure the Web is truly worldwide and for all of humanity; and those who are working to create what the Web will become.

We invite you to tell us in the comments about when you first came across the Web, your first Web site, the first W3C spec you implemented  or however the Web has positively impacted you.

For those interested in the early history of the Web:

In March 1989, while at CERN, the European particle physics lab, Tim Berners-Lee wrote “Information Management: A Proposal”, outlining his ideas for the Web as a global hypertext information-sharing space. In September 1990, Mike Sendall, Tim’s boss, gave him approval to go ahead and write the global hypertext system he had described, and approved the purchase of a NeXT cube computer on which to do so.

In October 1990 Tim wrote the first Web browser – or, more specifically, a browser-editor – called WorldWideWeb. When it was written in 1990 it was the only way to see the Web. This browser-editor was later renamed Nexus to avoid confusion between the program and the abstract information space (which is now spelled with spaces: World Wide Web).

An early color screenshot of the WorldWideWeb browser (1993)

The NeXT workstation used by Tim Berners-Lee to design the World Wide Web and the first Web server; a copy of “Information Management: A Proposal”; and a copy of the book “Enquire Within upon Everything”.

Later in October 1990, Robert Cailliau, a colleague of Tim’s at CERN, joined the project and helped rewrite and circulate Tim’s proposal for the Web. In November 1990, Nicola Pellow, then a student, joined the team and wrote the original line-mode browser. Bernd Pollermann also joined the team that month and worked on the “XFIND” indexes on the CERNVM node. By Christmas of that year the line-mode browser and the WorldWideWeb browser/editor were demonstrable, and access was possible to hypertext files, CERNVM “FIND”, and Internet news articles.

(Note: in 2013 CERN re-released an online version of the line-mode browser.)

In 1991, presentations and seminars were given within CERN and the code was released on central CERN machines. Then on 6 August 1991, the files were made available on the Internet by FTP — and posted, along with Tim’s email introducing the public to the WorldWideWeb, to the alt.hypertext newsgroup. (The fact that new users accessed the Web after 23 August is why that date is considered “Internaut Day”.)

In the autumn of 1991, Stanford Linear Accelerator Center (SLAC) physicist Paul Kunz met with Tim Berners-Lee and brought word of the Web back to SLAC. By December, the first WWW server at SLAC (and the first server outside Europe) was successfully installed. In the years that followed, more browsers were developed, more Web servers were put online, and more Web sites were created. The Web as we know it had begun. In November 1993, at a conference in Newcastle, U.K., Tim Berners-Lee discussed the future of the Web with MIT’s David Gifford, who suggested that Tim contact Michael Dertouzos, the head of the Laboratory for Computer Science at MIT.

By 1994, the load on the first Web server was 1000 times what it had been 3 years earlier. In February 1994 Tim met with Michael Dertouzos in Zurich to discuss the possibility of starting a new organization at MIT, and in April of that year Alan Kotok, then at DEC and later Associate Chairman of W3C, visited CERN to discuss the creation of a consortium. In October of that year the World Wide Web Consortium (W3C), the Web standards organization, was started at MIT as an international body. In April 1995 INRIA became the W3C Host in France (replaced by ERCIM in 2003); in September 1996 Keio University became the W3C Host in Japan; and in January 2013 Beihang University became the W3C Host in China.

For more information (including many links to Web pages and images), see also A Little History of the World Wide Web and the W3C timeline from 2005. For more information about the work of the Web Foundation, established in 2009 to help connect everyone, raise voices, and enhance participation through the open Web, see the Web Foundation site. For more information on W3C, about Membership, and how to participate, please see the W3C website.

by Amy van der Hiel at August 04, 2016 09:24 PM

W3C Web of Things meetings in Beijing, July 2016

Dave Raggett presenting

The W3C Web of Things Interest Group met 11-14 July 2016 in Beijing, China, hosted by the China Electronics Technology Group Corporation (CETC), the W3C/Beihang Host, and the China IoT Industry Technology Innovation Strategic Alliance (CIoTA). The event was co-located with CIoTA’s 2016 International Open IoT Technology and Standard Summit.

The first two days were open to local companies and institutions. We had talks from a broad range of participants, and many demonstrations from both CETC and W3C. We learned about CETC’s IoT open system architecture and its implementation by Beijing Wuliangang Ltd. as the cloud-based E-Harbour IoT platform, and enjoyed live demonstrations relating to smart homes, smart communities and smart buildings. W3C returned the favour with integrated demonstrations of the Web of Things by Siemens, Panasonic, Fujitsu, Lemonbeat and Samsung’s SmartThings.

Please read more in Matthias Kovatsch’s summary on the Web of Things Interest Group blog.

Joerg Hauer presenting

The Web of Things aims to counter the fragmentation of the IoT and enable an open market of services, spanning a wide range of standards and platforms from microcontrollers to cloud-based server farms. Our approach focuses on cross-platform APIs that simplify application development, and on the role of metadata in enabling different platforms to interoperate with each other.

The last two days were devoted to progressing the work items of the Web of Things Interest Group. We had sessions on a broad range of technical topics, e.g. protocol bindings, data types for application scripting, thing lifecycles, scripting APIs, and proposals for collaborative work with other organisations on building a shared understanding of how to enable semantic interoperability across different platforms. The Interest Group is now being rechartered for a further two years. We are also progressing plans for a Web of Things Working Group, which we hope will launch in October 2016 and which will seek to create standards from the ideas explored by the Interest Group. The Interest Group’s next face-to-face meeting will be in Lisbon, Portugal on 22-23 September 2016, as part of the W3C annual get-together (TPAC 2016).


by Olive Xu at August 04, 2016 09:04 AM

July 14, 2016

W3C Blog

Exploring Web platform cross-dependencies

Most of the JavaScript APIs exposed on the Web platform (both at W3C and elsewhere) rely on a formalization language called WebIDL (Web Interface Definition Language).

It provides a simple syntax for expressing common idioms needed when defining JavaScript APIs, and encompasses many of the specific behaviors required, expected, or inherited from the 20-year legacy of Web APIs. Its common usage across APIs has brought more consistency across specifications (for instance in error processing), and has helped streamline the testing of browser implementations of these APIs.

During the 2016 edition of the W3C Geek Week, my colleague François Daoust and I explored another way of using this formalism. We first built a crawler and scraper of specifications (based on the excellent jsdom library) which extracts the WebIDL definitions embedded in these specifications, along with the normative references that bind these specifications to the specifications they build upon.

With that data collected across all the specifications we could identify as using WebIDL, we built a first simple analyzer of the resulting WebIDL graph to flag potential bugs in specifications, including invalid WebIDL definitions, duplicated names in interfaces, references to undefined interface names, and missing normative references.

We also built a simple explorer of that WebIDL data, which makes it possible to determine which interfaces are defined where, and which other specifications reuse them.

This combination of tools has already allowed us to identify and file a number of specification bugs, and we hope to continue developing them and exploring the resulting data to inform future API development and provide opportunities for even more consistency across the Web Platform.

by Dominique Hazaël-Massieux at July 14, 2016 12:47 PM

July 05, 2016

W3C Blog

Celebrating 20th birthday in Japan – ASIA strong in Web standards –

W3C20 ASIA celebrated the 20th anniversary of the founding of the first W3C Asian host, at Keio University in Japan, in September 1996. The celebration consisted of three parts: a technical seminar, keynotes, and a reception. It gathered evangelists, specialists, and experts to talk about a variety of areas, use cases, and topics, including future visions.

One forward-looking highlight of the event was a presentation about how the Tokyo Organising Committee of the Olympics and Paralympics Games is placing high importance on the Web accessibility of their Web pages and digital media, showing how much Web accessibility continues to be vital in the world.

The keynote panel discussion showed a clear, strong tie among Asian countries. The speakers from mainland China, Hong Kong, Korea, and Japan discussed topics including accessibility, the IoT (Internet of Things), the WoT (Web of Things), data, Fintech (financial technology), RegTech (regulatory technology), and society-facing issues around trust on the Internet and Web.


That’s why W3C in Asia is so important. This population is coming online and we are all becoming connected. The speed of light is not fast enough; traditional distinctions are disappearing—things like analog vs. digital, physical vs. virtual, real-time vs. not real-time, human vs. not human (but application or agent)—and the disappearance of those distinctions will enable the next digital generation to overcome the speed of light.

An expectation for the W3C is to work with the rest of society to manage what’s coming next, since all people, life, industry segments, jobs, and even virtual worlds are on the Internet and Web. Though the next 20 years will be very challenging, the panelists were confident that the W3C is the place where that future can be brought to realization.

The reception opened with a Web visualization demonstrated by a budding artist and featured interesting talks from widely known speakers, as well as other demonstrations. This remarkable event made a strong impression about the importance of W3C in Asia.

by Naomi Yoshizawa at July 05, 2016 10:13 AM

July 01, 2016

W3C Blog

W3C holds events and creates opportunities at TPAC 2016, Lisbon in September

TPAC 2016 logo

W3C is creating numerous opportunities and events this year during the TPAC 2016 week in Lisbon, Portugal, 19-23 September 2016:

  • Monday Developer Meetup (Web Payment demo, a11y, Webauthn, Flexbox, IndieWeb, etc.)
  • Tuesday breakfast with CEO Jeff Jaffe for W3C group Chairs
  • Technical Plenary Day breakout sessions (Read and submit breakout topics in wiki)
  • Publishing Community meeting
  • W3C Member demos in prominent exhibition area
  • W3C Member Executive Session
TPAC 2016 is open to participants in W3C {Working, Interest} Groups (as well as selected Business or Community Groups scheduled to meet), W3C Members, W3C Offices, and invited guests. The Combined Technical Plenary / Advisory Committee Meetings Week brings together W3C Technical Groups, the Advisory Board, the TAG and the Advisory Committee for an exciting week of coordinated work.

As the summer holidays are about to start in the Northern hemisphere, please consider this update a reminder to book hotel rooms as soon as possible, as September is a busy period for Lisbon, and to register by 2 September.

by Coralie Mercier at July 01, 2016 11:41 PM

June 28, 2016

ishida >> blog

Unicode Converter v8

>> Use the converter

An updated version of the Unicode Character Converter web app is now available. This app allows you to convert characters between various different formats and notations.

Significant changes include the following:

  • It’s now possible to generate ECMAScript 6 style escapes for supplementary characters in the JavaScript output field, e.g. \u{10398} rather than \uD800\uDF98.
  • In many cases, clicking on a checkbox option now applies the change straight away if there is content in the associated output field. (There are four output fields where this doesn’t happen, because we aren’t dealing with escapes and there are problems with spaces and delimiters.)
  • By default, the JavaScript output no longer escapes the ASCII characters that can be represented by \n, \r, \t, \' and \". A new checkbox is provided to force those transformations if needed. This should make the JS transform much more useful for general conversions.
  • The code to transform to HTML/XML can now replace RLI, LRI, FSI and PDI if the Convert bidi controls to HTML markup option is set.
  • The code to transform to HTML/XML can convert many more invisible or ambiguous characters to escapes if the Escape invisible characters option is set.
  • UTF-16 code units are now always at least 4 digits long.
  • Fixed a bug related to U+00A0 when converting to HTML/XML.
  • The order of the output fields was changed, and various small improvements were made to the user interface.
  • The notes were revamped and updated.
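To illustrate the two JavaScript notations mentioned above, here is a small Python sketch (the helper functions are illustrative, not part of the converter’s actual code) showing how a supplementary character such as U+10398 maps to both forms:

```python
def es6_escape(cp):
    # ECMAScript 6 style escape: a single \u{...} for any code point.
    return "\\u{%X}" % cp

def surrogate_pair_escape(cp):
    # Pre-ES6 escape: supplementary characters (above U+FFFF) must be
    # written as a UTF-16 high/low surrogate pair.
    assert cp > 0xFFFF
    offset = cp - 0x10000
    high = 0xD800 + (offset >> 10)
    low = 0xDC00 + (offset & 0x3FF)
    return "\\u%04X\\u%04X" % (high, low)

print(es6_escape(0x10398))             # \u{10398}
print(surrogate_pair_escape(0x10398))  # \uD800\uDF98
```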

Many thanks to the people who wrote in with suggestions.

by r12a at June 28, 2016 04:50 PM

June 27, 2016

W3C Blog

Perspectives on security research, consensus and W3C Process

Linux Weekly News published a recent story called “Encrypted Media Extensions and exit conditions”, Cory Doctorow followed by publishing “W3C DRM working group chairman vetoes work on protecting security researchers and competition”. While the former is a more accurate account of the status, we feel obligated to offer corrections and clarifications to the latter, and to share a different perspective on security research protection, consensus at W3C, W3C’s mission and the W3C Process, as well as the proposed Technology and Policy Interest Group.

There have been a number of articles and blog posts about the W3C EME work, but we’ve not been able to offer counterpoints to every public post, as we’re focusing on shepherding and promoting the work of 40 Working Groups and 14 Interest Groups – all working on technologies important to the Web such as: HTML5, Web Security, Web Accessibility, Web Payments, Web of Things, Automotive, etc.

TAG statement on the Web’s security model

In his recent article, Cory wrote:

For a year or so, I’ve been working with the EFF to get the World Wide Web Consortium to take steps to protect security researchers and new market-entrants who run up against the DRM standard they’re incorporating into HTML5, the next version of the key web standard.

First, the W3C is concerned about risks for security researchers. In November 2015 the W3C Technical Architecture Group (TAG), a special group within the W3C, chartered under the W3C Process with stewardship of the Web architecture, made a statement (after discussions with Cory on this topic) about the importance of security research. The TAG statement was:

The Web has been built through iteration and collaboration, and enjoys strong security because so many people are able to continually test and review its designs and implementations. As the Web gains interfaces to new device capabilities, we rely even more on broad participation, testing, and audit to keep users safe and the web’s security model intact. Therefore, W3C policy should assure that such broad testing and audit continues to be possible, as it is necessary to keep both design and implementation quality high.

W3C TAG statements have policy weight. The TAG is co-Chaired by the inventor of the Web and Director of W3C, Tim Berners-Lee. It has elected representatives from W3C members such as Google, Mozilla, Microsoft and others.

This TAG statement was reiterated in an EME Factsheet, published before the W3C Advisory Committee meeting in March 2016 as well as in the W3C blog post in April 2016 published when the EME work was allowed to continue.

Second, EME is not a DRM standard. W3C does not make DRM. The specification does not define a content protection or Digital Rights Management system. Rather, EME defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems. We appreciate that to those who are opposed to DRM, any system which “touches” upon DRM is to be avoided, but the distinction is important. DRM is on the Web and has been for many years. We ask pragmatically what we can do for the good of the Web to both make sure a system which uses protected content insulates users as much as possible, and ensure that the work is done in an open, transparent and accessible way.

A several-month task force to assess EFF’s proposed covenant

Cory further wrote, about the covenant:

As a compromise that lets the W3C continue the work without risking future web users and companies, we’ve proposed that the W3C members involved should agree on a mutually acceptable binding promise not to use the DMCA and laws like it to shut down these legitimate activities — they could still use it in cases of copyright infringement, just not to shut down activity that’s otherwise legal.

The W3C took the EFF covenant proposal extremely seriously. It was made as part of EFF’s formal objection to the Working Group’s charter extension, and the W3C leadership made an extraordinary effort to resolve the objection and evaluate the proposed covenant by convening a several-month task force. Hundreds of emails were exchanged between W3C Members, and presentations were made to the W3C Advisory Committee at its March 2016 meeting.

While there was some support for the idea of the proposal, the large majority of W3C Members did not wish to accept the covenant as written (the version they voted on was different from the version the EFF made public), nor a slightly different version proposed by another member.

Member confidentiality vs. transparent W3C Process

Cory continued:

The LWN writeup is an excellent summary of the events so far, but parts of the story can’t be told because they took place in “member-confidential” discussions at the W3C. I’ve tried to make EFF’s contributions to this discussion as public as possible in order to bring some transparency to the process, but alas the rest of the discussion is not visible to the public.

W3C works in a uniquely transparent way. Specifications are largely developed in public, and most groups have public minutes and mailing lists. However, Member confidentiality is a very valuable part of the W3C Process. That business and technical discussions can happen in confidence between Members is invaluable to foster broader discussion, trust and the opportunity to be frank. The proceedings of the HTML Media Extensions work are public; however, discussions amongst Advisory Committee members are confidential.

In his post, Nathan Willis quoted a June 6 blog post by EFF’s Cory Doctorow, and continued:

Enough W3C members endorsed the proposed change that the charter could not be renewed. After 90 days’ worth of discussion, the working group had made significant progress, but had not reached consensus. The W3C executive ended this process and renewed the working group’s charter until September.

Similar wording is found in an April EFF blog post, attributing the renewal to “the executive of the W3C.” In both instances, the phrasing may suggest that there was considerable internal debate in the lead-up to the meeting and that the final call was made by W3C leadership. But, it seems, the ultimate decision-making mechanism (such as who at W3C made the final decision and on what date) is confidential; when reached for comment, Doctorow said he could not disclose the process.

Though the Member discussions are confidential, the process itself is not.

In the W3C process, charters for Working Groups go to the Advisory Committee for review at different stages of completion. That happened in this case. The EFF made an objection. By process, when there are formal objections the W3C then tries to resolve the issue.

As part of the process, when there is no consensus, the W3C generally allows existing groups to continue their work as described in the charter. When there is a “tie-break” needed, it is the role of the Director, Tim Berners-Lee, to assess consensus and decide on the outcome of formal objections. It was only after the overwhelming majority of participants rejected the EFF proposal for a covenant attached to the EME work that Tim Berners-Lee and the W3C management felt that the EFF proposal could not proceed and the work would be allowed to continue.

Next steps within the HTML Media Extensions Working Group

Cory also wrote:

The group’s charter is up for renewal in September, and many W3C members have agreed to file formal objections to its renewal unless some protection is in place. I’ll be making an announcement shortly about those members and suggesting some paths for resolving the deadlock.

The group is not up for charter renewal in September; rather, its specifications are progressing on the timeline to Recommendation. A Candidate Recommendation transition will soon have to be approved, and then the spec will require interoperability testing and Advisory Committee approval before it reaches REC. One criterion for Recommendation is that the ideas in the technical report are appropriate for widespread deployment, and EME is already deployed in almost all browsers.

We also wish to clarify that a veto is not part of the role of a Working Group Chair; indeed Cory wrote:

Linux Weekly News reports on the latest turn of events: I proposed that the group take up the discussion before moving to recommendation, and the chairman of the working group, Microsoft’s Paul Cotton, refused to consider it, writing, “Discussing such a proposed covenant is NOT in the scope of the current HTML Media Extensions WG charter.”

As Chair of the HTML Media Extensions Working Group, Paul Cotton’s primary role is to facilitate consensus-building among Group members for issues related to the specification. A W3C Chair leads the work of the group but does not decide for the group; work proceeds with consensus. The covenant proposal had been under wide review with many lengthy discussions for several months on the W3C Advisory Committee mailing lists. Paul did not dismiss W3C-wide discussion of the topic, but correctly noted it was not a topic in line with the chartered work of the group.


In the April 2016 announcement that the EME work would continue, the W3C reiterated the importance of security research and acknowledged the need for high level technical policy discussions at W3C – not just for the covenant. A few weeks prior, during the March 2016 Advisory Committee meeting the W3C announced a proposal to form a Technology and Policy Interest Group.

The W3C has, for more than 20 years, focused on technology standards for the Web. However, recognizing that the Web is getting more complex and its technology is increasingly woven into our lives, we must consider technical aspects of policy as well. The proposed Technology and Policy Interest Group, if started, will explore, discuss and clarify aspects of policy that may affect the mission of W3C to lead the Web to its full potential. This group had been in preparation before the EME covenant was presented, and will address broader issues than anti-circumvention. It is designed as a forum for W3C Members to try to reach consensus on descriptions of varying views on policy issues, such as deep linking or pervasive monitoring.

While we tried to find common ground among our membership on the covenant issue, we have not yet succeeded. We hope that EFF and others will continue to try. We recognize and support the importance of security research, and the impact of policy on innovation, competition and the future of the Web. Again, for more information on EME and frequently asked questions, please see the EME Factsheet, published in March 2016.

by Coralie Mercier at June 27, 2016 10:30 AM

June 24, 2016

W3C Blog

Subresource Integrity Becomes a W3C Recommendation

The fundamental line of trust on the Web is between the end-user and the Web application: individuals who visit a website rely on HTTPS to trust that they are getting the page or application put there by the site owner. Features of Web Application Security are designed to support that trust, protecting against cross-site scripting and content-injection attacks or unwanted snooping on Web traffic. If a Web application includes resources from third parties, however, it may effectively delegate its trust to all of those included resources, any of which could maliciously or carelessly compromise the overall security of the Web application and data shared with it.

Subresource Integrity (SRI), which just reached W3C Recommendation status, offers a way to include resources without that open-ended delegation. It lets browsers, as user agents, cryptographically verify that included subresources, such as scripts and styles, match, as delivered, what was expected by the requesting application.

As explained in the specification:

Sites and applications on the web are rarely composed of resources from only a single origin. For example, authors pull scripts and styles from a wide variety of services and content delivery networks, and must trust that the delivered representation is, in fact, what they expected to load. If an attacker can trick a user into downloading content from a hostile server (via DNS poisoning, or other such means), the author has no recourse. Likewise, an attacker who can replace the file on the Content Delivery Network (CDN) server has the ability to inject arbitrary content.

Delivering resources over a secure channel mitigates some of this risk: with TLS, HSTS, and pinned public keys, a user agent can be fairly certain that it is indeed speaking with the server it believes it’s talking to. These mechanisms, however, authenticate only the server, not the content. An attacker (or administrator) with access to the server can manipulate content with impunity. Ideally, authors would not only be able to pin the keys of a server, but also pin the content, ensuring that an exact representation of a resource, and only that representation, loads and executes.

This document specifies such a validation scheme, extending two HTML elements with an integrity attribute that contains a cryptographic hash of the representation of the resource the author expects to load. For instance, an author may wish to load some framework from a shared server rather than hosting it on their own origin. Specifying that the expected SHA-384 hash of https://example.com/example-framework.js is Li9vy3DqF8tnTXuiaAJuML3ky+er10rcgNR/VqsVpcw+ThHmYcwiB1pbOxEbzJr7 means that the user agent can verify that the data it loads from that URL matches that expected hash before executing the JavaScript it contains. This integrity verification significantly reduces the risk that an attacker can substitute malicious content.

This example can be communicated to a user agent by adding the hash to a script element, like so:

<script src="https://example.com/example-framework.js"
        integrity="sha384-Li9vy3DqF8tnTXuiaAJuML3ky+er10rcgNR/VqsVpcw+ThHmYcwiB1pbOxEbzJr7"
        crossorigin="anonymous"></script>

With SRI, WebApps can improve their network performance and security together. Read the Implementation Report for more examples and sites already using the feature.
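The integrity value itself is easy to generate: it is the digest algorithm name, a dash, and the base64 encoding of the raw digest of the resource’s bytes. A minimal Python sketch (the function name and sample script content are illustrative):

```python
import base64
import hashlib

def sri_integrity(data: bytes, alg: str = "sha384") -> str:
    # An SRI integrity value has the form "<alg>-<base64 of the raw digest>".
    digest = hashlib.new(alg, data).digest()
    return alg + "-" + base64.b64encode(digest).decode("ascii")

# Compute the value to paste into a script element's integrity attribute:
print(sri_integrity(b"console.log('hello');"))
```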

Thanks to editors Devdatta Akhawe, Dropbox, Inc.; Frederik Braun, Mozilla; François Marier, Mozilla; and Joel Weinberger, Google, Inc. and participants in the Web Application Security Working Group for successful completion of this Recommendation.

by Wendy Seltzer at June 24, 2016 04:34 PM

June 21, 2016

ishida >> blog

UniView 9.0.0 available

>> Use UniView

UniView now supports Unicode version 9, which is being released today, including all changes made during the beta period. (As before, images are not available for the Tangut additions, but the character information is available.)

This version of UniView also introduces a new filter feature. Below each block or range of characters is a set of links that allows you to quickly highlight characters with the property letter, mark, number, punctuation, or symbol. For more fine-grained property distinctions, see the Filter panel.

In addition, for some blocks there are other links available that reflect tags assigned to characters. This tagging is far from exhaustive! For instance, clicking on sanskrit will not show all characters used in Sanskrit.

The tags are just intended to be an aid to help you find certain characters quickly by exposing words that appear in the character descriptions or block subsection titles. For example, if you want to find the Bengali currency symbol while viewing the Bengali block, click on currency and all other characters but those related to currency will be dimmed.

(Since the highlight function is used for this, don’t forget that, if you happen to highlight a useful subset of characters and want to work with just those, you can use the Make list from highlights command, or click on the upwards pointing arrow icon below the text area to move those characters into the text area.)

by r12a at June 21, 2016 07:39 PM

June 08, 2016

W3C Blog

Wishing Tim Berners-Lee a happy birthday!

Today is Tim Berners-Lee’s birthday and we’d like to wish him a very happy birthday and many happy returns of the day.

Happy birthday, Tim!

We are so grateful that you invented the Web 27 years ago, and that you are shepherding us at the W3C (the Team, our Members and all our 40+ Working Groups, in work including Security, Internationalization, Accessibility, Web Applications and more) in leading the Web to its full potential.

Thank you for all you do for the Web (in inventing it) and the world, in sharing it for free, advocating for an Open Web, working to protect it and to bring it to the whole world — to truly be for everyone; a rich, creative resource for all.

We are honored to know you, to work with and for you, and we wish you all the best this and every year.

(Anecdotally, the piece of cake icon has been on the Web for 19 years, 7 months. That this birthday cake was the first image in the W3C icon directory shows just a bit of the sense of caring, humanity and connectedness that are so much a part of this group and the work we do.)

by Coralie Mercier at June 08, 2016 02:59 PM

June 03, 2016

W3C Blog

Exciting Opportunity at TPAC 2016

The time since the last TPAC meeting has flown by and we’re in the process of putting the finishing touches on TPAC 2016 in Lisbon, Portugal. During the meeting in Sapporo we ran an experimental Demo Area. That was very successful, so we decided to expand on the idea and have an Exhibition Area in Lisbon!

The Exhibition Area will be open on all five days of TPAC. It is open to any W3C Member, and what you exhibit is up to you! The rule is that it needs to be something that leverages W3C’s work. That can be a solution that implements our Standards. You may be an organization that offers consulting in accessibility (a11y) and you want to promote that – fine! You may offer training on how to implement our standards or follow our best practices – wonderful!

The price is a mere 1500€ and the space is limited to the first 14 organizations that apply.  In fact we only have 12 spaces left as Viacom is the Exhibition Sponsor (THANKS!) and will be taking two of the tables.

You can register until August 15 by using this form, and you will be contacted by either me or Bernard Gidon.

A reminder that we’ll hold a Developer Meetup on the Monday of the week – stay tuned – and that Wednesday is the day for unconference and breakout sessions.

I look forward to seeing everyone in Lisbon!

J. Alan Bird
Global Business Development Leader, W3C

by J. Alan Bird at June 03, 2016 02:19 PM

June 02, 2016

W3C Blog

Finishing HTML5.1 … and starting HTML 5.2

Since we published the Working on HTML5.1 post, we’ve made progress. We’ve closed more issues than we have open, we now have a working rhythm for the specification that is getting up to the speed we want, and we have a spec we think is a big improvement on HTML5.

Now it’s time to publish something serious.

We’ve just posted a Call For Consensus (CFC) to publish the current HTML5.1 Working Draft as a Candidate Recommendation (CR). This means we’re going into feature freeze on HTML5.1, allowing the W3C Patent Policy to come into play and ensure HTML5.1 can be freely implemented and used.

While HTML5.1 is in CR we may make some editorial tweaks to the spec – for instance we will be checking for names that have been left out of the Acknowledgements section. There will also be some features marked “at risk”, which means they will be removed from HTML5.1 if we find during CR that they do not work in at least two shipping browsers.

Beyond this, the path of getting from CR to W3C Recommendation is an administrative one. We hope the Web Platform WG agrees that HTML5.1 is better than HTML5, and that it would benefit the web community if we updated the “gold standard” – the W3C Recommendation. Then we need W3C’s membership, and finally W3C Director Tim Berners-Lee to agree too.

The goal is for HTML5.1 to be a W3C Recommendation in September, and to achieve that we have to put the specification into feature freeze now. But what happens between now and September? Are we really going to sit around for a few months crossing legal t’s and dotting administrative i’s? No way!

We have pending changes that reflect features we believe will be shipped over the next few months. And of course there are always bugs to fix, and editorial improvements to make HTML at W3C more reliable and usable by the web community.

In the next couple of weeks we will propose a First Public Working Draft of HTML5.2. This will probably include some new features, some features that were not interoperable and so not included in HTML5.1, and some more bug fixes. This will kick off a programme of regular Working Draft releases until HTML5.2 is ready to be moved to W3C Recommendation sometime in the next year or so.

As always please join in, whether by following @HTMLWG on Twitter, filing issues, joining WP WG and writing bits of the specification, or just helping your colleagues stay up to date on HTML…

… on behalf of the chairs and editors, thanks!

by Charles McCathie Nevile at June 02, 2016 01:18 PM

Invitation to upcoming GIPO sessions at EuroDIG

I participate in GIPO (Global Internet Policy Observatory) to help frame the dialog on Internet governance. In the context of the upcoming European Dialogue on Internet Governance (EuroDIG), taking place in Brussels on 9-10 June 2016, a number of sessions will be devoted to GIPO, with experts and interested stakeholders. I’ll be there!

You may register to attend this free event.

by Daniel Dardailler at June 02, 2016 12:48 PM

May 20, 2016

W3C Blog

HTTPS and the Semantic Web/Linked Data

In short, keep writing “http:” and trust that the infrastructure will quietly switch over to TLS (https) whenever both client and server can handle it. Meanwhile, let’s try to get SemWeb software to be doing TLS+UIR+HSTS and be as secure as modern browsers.

Sandro Hawke

As we hope you’ve noticed, W3C is increasing the security of its own Web site and is strongly encouraging everyone to do the same. I’ve included some details from our Systems Team below as an explanation, but the key technologies to look into, if you’re interested, are HTTP Strict Transport Security (HSTS) and Upgrade-Insecure-Requests (UIR).

Bottom line: we want everyone to use HTTPS and there are smarts in place on our servers and in many browsers to take care of the upgrade automatically.

So what of Semantic Web URIs, particularly namespaces like

Visit that URI in a modern, secure browser and you’ll be redirected to its https: equivalent. Older browsers and, in this context more importantly, other user agents that do not recognize HSTS and/or UIR will not be redirected. So you can go on using http: namespaces without disruption.

This raises a number of questions.

Firstly, is the community agreed that if two URIs differ only in the scheme (http://, https://, and perhaps whatever comes in future) then they identify the same resource? We believe that this can only be asserted by the domain owner. In the specific case of W3C’s own Web site, we do make that assertion. Note that this does not necessarily apply to any current or future subdomains.
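Where a domain owner does make that assertion, consuming software can compare such URIs under a single canonical scheme. A minimal sketch in Python (the helper name is illustrative, not an established convention):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_scheme(uri: str, scheme: str = "http") -> str:
    # Rewrite http/https to one canonical scheme so that URIs differing
    # only in scheme compare as equal. Only do this for domains whose
    # owner has asserted http/https equivalence.
    parts = urlsplit(uri)
    if parts.scheme in ("http", "https"):
        parts = parts._replace(scheme=scheme)
    return urlunsplit(parts)

# The two scheme variants of the same namespace compare as equal:
assert canonical_scheme("https://www.w3.org/ns/prov") == \
       canonical_scheme("http://www.w3.org/ns/prov")
```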

Secondly, some members of the Semantic Web community have already moved to HTTPS (it was a key motivator for How steep is the path from where we are today to moving to a more secure Semantic Web, i.e. one that habitually uses HTTPS rather than HTTP? Have you/are you considering upgrading your own software?

Until and unless the Semantic Web operates over more secure connections, we will need to be careful to pass around http: URIs – which is likely to mean remembering to knock off the “s” when pasting a URI from your browser.

That’s a royal pain but we’ve looked at various workarounds and they’re all horrible. For example, we could deliberately redirect requests to things like our vocabulary namespaces away from the secure site to a deliberately less secure sub-domain – gah! No thanks.

Thirdly, a key feature of the HSTS/UIR landscape is that there is no need to go back and edit old resources – communication is carried out using HTTPS without further intervention. Can this be true for Semantic Web/Linked Data too, or should we be considering more drastic action? For example, editing definitions in Turtle files to make it explicit that each http: term is owl:equivalentClass to its https: counterpart (or, even worse, having to go through and actually duplicate all the definitions with the different subject).

I really hope point 3 is unnecessary – but I’d like to be sure it is.


Jose Kahan from W3C’s Systems Team adds

HSTS does the client-side upgrade from HTTP to HTTPS for a given domain. However, that header is only sent when doing an HTTPS connection. UIR defines a header that, if sent by the browser, tells the server it prefers HTTPS; the server will redirect to HTTPS, and then HSTS (through the header in the response) will kick in. HSTS doesn’t handle the case of mixed content. That is the other part that UIR does to complement HSTS: tell the browser to upgrade the URLs of all content associated with a resource to HTTPS before requesting it.

For browser UAs, if HSTS is enabled for a domain and you browse to a document by typing its URL in the navigation bar or follow a link to a new document, the request will be sent as HTTPS, regardless of the URL saying HTTP. If the document includes a CSS file, JavaScript, or an image, for example, and that URL is HTTP, the requests for those resources will only be sent as HTTPS if the UA supports UIR.
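Concretely, the mechanisms described above are carried in a handful of standard headers (the max-age value below is illustrative):

```
# Response header: tell the client to use HTTPS for this domain from now on
Strict-Transport-Security: max-age=31536000; includeSubDomains

# Request header: the browser advertises that it prefers secure content
Upgrade-Insecure-Requests: 1

# Response header: ask the browser to upgrade subresource URLs to HTTPS
Content-Security-Policy: upgrade-insecure-requests
```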

by Phil Archer at May 20, 2016 05:51 PM

April 06, 2016

W3C Blog

Working on HTML5.1

HTML5 was released in 2014 as the result of a concerted effort by the W3C HTML Working Group. The intention was then to begin publishing regular incremental updates to the HTML standard, but a few things meant that didn’t happen as planned. Now the Web Platform Working Group (WP WG) is working towards an HTML5.1 release within the next six months, and a general workflow that means we can release a stable version of HTML as a W3C Recommendation about once per year.


The core goals for future HTML specifications are to match reality better, to make the specification as clear as possible to readers, and of course to make it possible for all stakeholders to propose improvements, and understand what makes changes to HTML successful.


The plan is to ship an HTML5.1 Recommendation in September 2016. This means we will need to have a Candidate Recommendation by the middle of June, following a Call For Consensus based on the most recent Working Draft.

To make it easier for people to review changes, an updated Working Draft will be published approximately once a month. For convenience, changes are noted within the specification itself.

Longer term we would like to “rinse and repeat”, making regular incremental updates to HTML a reality that is relatively straightforward to implement. In the meantime you can track progress using Github pulse, or by following @HTML_commits or @HTMLWG on Twitter.

Working on the spec…

The specification is on Github, so anyone who can make a Pull Request can propose changes. For simple changes such as grammar fixes, this is a very easy process to learn – and simple changes will generally be accepted by the editors with no fuss.

If you find something in the specification that generally doesn’t work in shipping browsers, please file an issue, or better still file a Pull Request to fix it. We will generally remove things that don’t have adequate support in at least two shipping browser engines, even if they are useful and we hope they will achieve sufficient support in the future. In some cases, you or we may propose the dropped feature as a future extension – see below regarding “incubation”.

HTML is a very large specification. It is developed from a set of source files, which are processed with the Bikeshed preprocessor. This automates things like links between the various sections, such as to element definitions. Significant changes, even editorial ones, are likely to require a basic knowledge of how Bikeshed works, and we will continue to improve the documentation especially for beginners.

HTML is covered by the W3C Patent Policy, so many potential patent holders have already ensured that it can be implemented without paying them any license fee. To keep this royalty-free licensing, any “substantive change” – one that actually changes conformance – must be accompanied by the patent commitment that has already been made by all participants in the Web Platform Working Group. If you make a Pull Request, this will automatically be checked, and the editors, chairs, or W3C staff will contact you to arrange the details. Generally this is a fairly simple process.

For substantial new features we prefer a separate module to be developed, “incubated”, to ensure that there is real support from the various kinds of implementers including browsers, authoring tools, producers of real content, and users, and when it is ready for standardisation to be proposed as an extension specification for HTML. The Web Platform Incubator Community Group (WICG) was set up for this purpose, but of course when you develop a proposal, any venue is reasonable. Again, we ask that you track technical contributions to the proposal (WICG will help do this for you), so we know when it arrives that people who had a hand in it have also committed to W3C’s royalty-free patent licensing and developers can happily implement it without a lot of worry about whether they will later be hit with a patent lawsuit.


W3C’s process for developing Recommendations requires a Working Group to convince the W3C Director, Tim Berners-Lee, that the specification

“is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized”

This had to be done for HTML 5.0. When a change is proposed to HTML we expect it to have enough tests to demonstrate that it does improve interoperability. Ideally these fit into an automatable testing system like the “Webapps test harness”. But in practice we plan to accept tests that demonstrate the necessary interoperability, whether they are readily automated or not.

The benefit of this approach is that, except where features are removed from browsers (which is comparatively rare), interoperability will increase consistently as we accept changes. At any time, a snapshot of the Editors’ Draft should therefore be a stable basis for an improved version of HTML that can be published as an updated HTML Recommendation.


We want HTML to be a specification that authors and implementors can use with ease and confidence. The goal isn’t perfection (which is after all the enemy of good), but rather to make HTML 5.1 better than HTML 5.0 – the best HTML specification until we produce HTML 5.2…

And we want you to feel welcome to participate in improving HTML, for your own purposes and for the good of the Web.

Chaals, Léonie, Ade – chairs
Alex, Arron, Steve, Travis – editors

by Léonie Watson at April 06, 2016 01:05 PM

April 05, 2016

W3C Blog

HTML Media Extensions to continue work

The HTML Media Extensions Working Group was extended today until the end of September 2016. As part of making video a first-class citizen of the Web, an effort started by HTML5 itself in 2007, W3C has been working on many extension specifications for the Open Web Platform: capturing images from the local device camera, handling of video streams and tracks, captioning and other enhancements for accessibility, audio processing, real-time communications, etc. The HTML Media Extensions Working Group is working on two of those extensions: Media Source Extensions (MSE), for facilitating adaptive and live streaming, and Encrypted Media Extensions (EME), for playback of protected content. Both are extension specifications to enhance the Open Web Platform with rich media support.

The W3C supports the statement from the W3C Technical Architecture Group (TAG) regarding the importance of broad participation, testing, and audit to keep users safe and the Web’s security model intact. The EFF, a W3C member concerned about this issue, proposed a covenant, to be agreed by all W3C members, that included exemptions for security researchers and for interoperable implementations under the US Digital Millennium Copyright Act (DMCA) and similar laws. After several months of discussion, including review at the recent W3C Advisory Committee meeting, no consensus has yet emerged on the EFF’s covenant.

We recognize that Web security issues exist, that the work of security researchers is important, and that these questions warrant further investigation, but we maintain that the premises for starting the work on the EME specification are still applicable. See the information about W3C and Encrypted Media Extensions.

The goal for EME has always been to replace non-interoperable private content protection APIs (see the Media Pipeline Task Force (MPTF) Requirements). By ensuring better security, privacy, and accessibility around those mechanisms, and by having those discussions at W3C, EME provides more secure interfaces for license and key exchanges by sandboxing the underlying content decryption modules. The only key system the specification requires, Clear Key, performs no digital rights management (DRM) function at all and uses fully defined and standardized mechanisms: the JSON Web Key format (RFC 7517) and algorithms (RFC 7518). While it may not satisfy some of the requirements of distributors and media owners for resisting attacks, it is the only fully interoperable key system when using EME.
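To make the Clear Key data format concrete, here is a minimal sketch in Python of a license object in JSON Web Key (RFC 7517) form, as exchanged through the EME key-exchange interface. The key bytes and key ID below are hypothetical placeholders, not real content keys:

```python
import base64
import json

def make_jwk(key_id: bytes, key: bytes) -> dict:
    """Build one symmetric JSON Web Key (RFC 7517) entry for Clear Key."""
    def b64url(data: bytes) -> str:
        # JWK uses base64url encoding without trailing '=' padding.
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")
    return {
        "kty": "oct",           # symmetric (octet sequence) key type
        "kid": b64url(key_id),  # key ID matching the media's key ID
        "k": b64url(key),       # the 16-byte content decryption key
    }

# A Clear Key license is a JSON object with a "keys" array.
license_obj = {"keys": [make_jwk(b"example-key-id-1", bytes(16))]}
print(json.dumps(license_obj))
```

Because every field is plain JSON and standard base64url, any EME implementation can interoperate on Clear Key licenses without any proprietary component.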

We acknowledge and welcome further efforts from the EFF and other W3C Members in investigating the relations between technologies and policies. Technologists and researchers have indeed benefited from the EFF’s work in securing a DMCA exemption from the Library of Congress, which will help to better protect security researchers from the same issues they worked to address at the W3C level.

W3C does intend to keep examining the challenges related to the US DMCA and similar laws, such as implementations of the EU Copyright Directive, with our Members and staff. W3C is currently setting up a Technology and Policy Interest Group to study these issues, and we intend to bring challenges related to these laws to that group.

by Philippe le Hegaret at April 05, 2016 02:29 PM