December 18, 2014

ishida >> blog

Thai character picker v15

I have uploaded a new version of the Thai character picker.

The new version uses characters instead of images for the selection table, making it faster to load and more flexible, and dispenses with the transcription view. If you prefer, you can still access the previous version.

Other changes include:

  • Significant rearrangement of the default selection table. The new arrangement makes it easy to choose the right characters if you have a Latin transcription to hand, which allowed the removal of the previous transcription view while also speeding up that type of picking.
  • Addition of Latin prompts to help locate letters (standard with v15).
  • Automatic transcription from Thai into ISO 11940-1, ISO 11940-2 and IPA. Note that for the last two there are some corner cases where the results are not quite correct, due to the ambiguity of the script, and note also that you need to show syllable boundaries with spaces before transcribing. (There’s a way to remove those spaces quickly afterwards.) See below for more information.
  • Hints! When switched on, mousing over a character highlights other similar characters, or characters incorporating the shape you moused over. This is particularly useful for people who don’t know the script well and may miss small differences, but also sometimes useful for finding a character when you first spot something similar.
  • It also comes with the new v15 features that are standard, such as shape-based picking without losing context, range-selectable codepoint information, a rehabilitated escapes button, the ability to change the font of the table and the line-height of the output, and the ability to turn off autofocus on mobile devices to stop the keyboard jumping up all the time, etc.
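The hints feature can be thought of as a lookup from each character to a set of visually similar characters. The sketch below illustrates that idea; the similarity data is a tiny hypothetical fragment invented for illustration, not the picker's actual tables:

```python
# A "hints" feature modelled as a lookup from a character to visually
# similar characters. The similarity data here is a hypothetical fragment
# for illustration only; the real picker ships its own curated data.
SIMILAR = {
    "ช": {"ซ"},          # cho chang / so so differ only by a small notch
    "ด": {"ต", "ค"},     # do dek / to tao / kho khwai share a basic outline
}

def hint_chars(ch):
    """Return the set of characters to highlight when `ch` is moused over."""
    return SIMILAR.get(ch, set())
```

A character with no entry simply highlights nothing, so unknown input degrades gracefully.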

For more information about the picker, see the notes at the bottom of the picker page.

About pickers: Pickers allow you to quickly create phrases in a script by clicking on Unicode characters arranged in a way that aids their identification. Pickers are likely to be most useful if you don’t know a script well enough to use the native keyboard. The arrangement of characters also makes it much more usable than a regular character map utility. See the list of available pickers.

More about the transcriptions: There are three buttons that allow you to convert from Thai text to Latin transcriptions. If you highlight part of the text, only that part will be transcribed.

The toISO-1 button produces an ISO 11940-1 transliteration, which latinises the Thai characters without changing their order. The result doesn’t normally tell you how to pronounce the Thai text, but it can be converted back to Thai, since each Thai character is represented by a unique sequence in Latin. This transcription should produce fully conformant output. There is no need to identify syllable boundaries first.
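The round-trip property, where each Thai character maps to a unique Latin sequence, can be sketched with a tiny mapping. The pairs below follow the spirit of ISO 11940-1 but are only a hypothetical fragment, not the standard table:

```python
# Round-trippable transliteration sketch: each source character maps to a
# unique Latin string, so the inverse mapping is well-defined.
# Only a few illustrative pairs are shown; the real ISO 11940-1 table is
# much larger and relies on diacritics to keep every sequence unique.
TO_LATIN = {"ก": "k", "ข": "k̄h", "ท": "th", "า": "ā"}
TO_THAI = {v: k for k, v in TO_LATIN.items()}

def transliterate(text):
    return "".join(TO_LATIN.get(ch, ch) for ch in text)

def reverse(latin):
    """Greedy longest-match scan back to Thai."""
    keys = sorted(TO_THAI, key=len, reverse=True)
    out, i = [], 0
    while i < len(latin):
        for k in keys:
            if latin.startswith(k, i):
                out.append(TO_THAI[k])
                i += len(k)
                break
        else:
            out.append(latin[i])  # pass through anything unmapped
            i += 1
    return "".join(out)
```

Because no Latin sequence is a prefix ambiguity of another in the full table, the greedy scan recovers the original Thai exactly.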

The toISO-2 and toIPA buttons produce output that is intended to approximately reflect actual pronunciation. It will work fine most of the time, but there are occasional ambiguities and idiosyncrasies in Thai which will cause the converter to render certain less common syllables incorrectly. It also doesn’t automatically add accent marks to the phonetic version (though that may be added later). So the output of these buttons should be treated as something that gets you 90% of the way. NOTE: Before using these two buttons you need to add spaces or hyphens between each syllable of the Thai text. Syllable boundaries are important for correct interpretation of the text, and they are not detected automatically.

The condense button removes the spaces from the highlighted range (or the whole output area, if nothing is highlighted).

Note: For the toISO-2 transcription I use a macron over long vowels. This is non-standard.

by r12a at December 18, 2014 02:35 PM

December 16, 2014

W3C Blog

OpenSocial Foundation Moves Standards Work to W3C Social Web Activity

W3C and the OpenSocial Foundation announced today that as of 1 January 2015, OpenSocial standards work and specifications beyond OpenSocial 2.5.1 will take place in the W3C Social Web Working Group, of which the OpenSocial Foundation is a founding member. The W3C Social Web Working Group extends the reach of OpenSocial into the enterprise, HTML5 and Indie Web communities.

In this post we talk about next steps for standards work at W3C and open source projects at Apache.

Note: As part of the transfer of OpenSocial specifications and assets to the W3C, requests to will be redirected to this blog post. For more information, please see the FAQ below.

Standards and Requirements at W3C

W3C launched its Social Web Activity in July 2014 with two groups:

  • The Social Web Working Group, which defines the technical standards and APIs to facilitate access to social functionality as part of the Open Web Platform.
  • The Social Interest Group, which coordinates messaging around social at the W3C and is formulating a broad strategy to enable social business and federation.

In addition, some OpenSocial work has moved (or will move) to existing W3C groups. Here is a summary of where you can get involved with different W3C standardization efforts and discussions.

Open Source Projects at Apache Foundation

In addition to the several leading commercial enterprise platforms that use OpenSocial, the Apache Software Foundation hosts two active and ongoing projects that serve as reference implementations for OpenSocial technology:

  • Apache Shindig is the reference implementation of OpenSocial API specifications, versions 1.0.x and 2.0.x, a standard set of Social Network APIs that includes Profiles, Relationships, Activities, Shared Applications, Authentication, and Authorization.
  • Apache Rave is a lightweight, open-standards-based, extensible platform for using, integrating and hosting OpenSocial and W3C Widget related features, technologies and services. It will also provide strong context-aware personalization, collaboration and content integration capabilities and a high-quality out-of-the-box installation, and will be easy to integrate into other platforms and solutions.


Note: We will add to this FAQ over time as questions arise. Please send questions to

Why is OpenSocial Foundation closing?

OpenSocial Foundation feels that the community will have a better chance of realizing an open social web through discussions at a single organization, and the OpenSocial Foundation board believes that working as an integrated part of W3C will help reach more communities that will benefit from open social standards.

What does it mean that OpenSocial Foundation is closing?

OpenSocial will no longer exist as a separate legal entity, but work will continue within the W3C Social Web Activity.

What will happen to development of the OpenSocial specification?

Development will continue within the Social Web Working Group.

What will happen to development of the reference implementations Apache Shindig and Rave?

Development will continue within the Apache Software Foundation.

Where do I go if I have questions about OpenSocial?

Members of the OpenSocial Community will be actively involved in the Social Web Working Group.

Will older versions of OpenSocial specifications remain available?

Yes, they will remain available on GitHub.

Will discussion archives be preserved?

Discussion archives are in Google groups. As long as those are allowed to remain, they will remain in place.

by Ian Jacobs at December 16, 2014 08:34 PM

December 12, 2014

W3C Blog

This week: Fire TV WebApp kit, Mike[tm] Smith on HTML validation, etc.

This is the 5-12 December 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)


articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at December 12, 2014 04:44 PM

December 11, 2014

W3C Blog

RXSS Security Audit Results

W3C recently submitted to a Web Application Penetration Test. It was conducted by researchers and testers of SBA Research within the context of the Mobsetip research project and specifically targeted reflected cross-site scripting (RXSS) vulnerabilities using combinatorial testing methodologies. SBA Research approached W3C since the size of our website and the nature of our organization made for an interesting test subject. W3C seeks to continually improve its security: it has submitted to penetration tests in the past, conducted its own audits, and welcomes community reports on its open collaborative infrastructure. An RXSS vulnerability was found in W3C’s online tidy service and corrected. Anyone running their own instance of this service is encouraged to upgrade.

W3C appreciates SBA Research’s effort and responsible vulnerability disclosure practices.

by Ted Guild at December 11, 2014 08:06 PM

December 06, 2014

ishida >> blog

Tibetan character picker v15

I have uploaded a new version of the Tibetan character picker.

The new version dispenses with the images for the selection table. If you don’t have a suitable font to display the new version of the picker, you can still access the previous version, which uses images.

Other changes include:

  • Significant rearrangement of the default table, with many less common symbols moved into a location that you need to click on to reveal. This declutters the selection table.
  • Addition of Latin prompts to help locate letters (standard with v15).
  • Hints (When switched on, mousing over a character highlights other similar characters, or characters incorporating the shape you moused over. This is particularly useful for people who don’t know the script well and may miss small differences, but also sometimes useful for finding a character when you first spot something similar.)
  • A new Wylie button that converts Tibetan text into an extended Wylie Latin transcription. There are still some uncommon characters that don’t work, but it should cover most normal needs. I used diacritics over lowercase letters rather than uppercase letters, except for the fixed form characters. I also didn’t provide conversions for many of the symbols – they will appear without change in the transcription. See the notes on the page for more information.
  • The Codepoints button, which produces a list of characters in the output box, now has a new feature. If you have highlighted some text in the output box, you will only see a list of the highlighted characters. If there are no highlights, the contents of the whole output box are listed.
  • Don’t forget, if you are using the picker on an iPad or mobile device, to set Autofocus to Off before tapping on characters. This stops the device keypad popping up every time you select a character. (This is also standard for v15.)
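The selection-or-whole behaviour of the Codepoints button described above can be sketched in a few lines. The function name and signature here are mine, invented for illustration:

```python
import unicodedata

def list_codepoints(output, selection=None):
    """List 'U+XXXX NAME' entries for the highlighted selection, or for the
    whole output box when nothing is highlighted."""
    text = selection if selection else output
    return [f"U+{ord(ch):04X} {unicodedata.name(ch, '<unnamed>')}"
            for ch in text]
```

Passing a highlighted substring restricts the listing; passing nothing lists everything in the output box.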

About pickers: Pickers allow you to quickly create phrases in a script by clicking on Unicode characters arranged in a way that aids their identification. Pickers are likely to be most useful if you don’t know a script well enough to use the native keyboard. The arrangement of characters also makes it much more usable than a regular character map utility. See the list of available pickers.

by r12a at December 06, 2014 10:39 PM

Mongolian variant forms

There is some confusion about which shapes should be produced by fonts for Mongolian characters. Most letters have at least one isolated, initial, medial and final shape, but other shapes are produced by contextual factors, such as vowel harmony.

Unicode has a list of standardised variant shapes, dating from 27 November 2013, but that list is not complete and contains what are currently viewed by some as errors. It also doesn’t specify the expected default shapes for initial, medial and final positions.

The original list of standardised variants was based on 蒙古文编码 by Professor Quejingzhabu in 2000.

A new proposal was published on 20 January 2014, which attempts to resolve the current issues, although I think that it introduces one or two issues of its own.

The other factor in this is what the actual fonts do. Sometimes they follow the Unicode standardised variants list, other times they diverge from it. Occasionally a majority of implementations appear to diverge in the same way, suggesting that the standardised list should be adapted to reality.

To help unravel this, I put together a page called Notes on Mongolian variant forms that visually shows the changes between the various proposals, and compares the results produced by various fonts.

This is still an early draft. The information only covers the basic Mongolian range – Todo, Sibe, etc still to come. Also, I would like to add information about other fonts, if I can obtain them.

by r12a at December 06, 2014 10:13 AM

December 05, 2014

W3C Blog

This week: W3C CEO on after HTML5, #bbd14, @W3C-PO protocol droid, No CAPTCHA ReCAPTCHA, etc.

This is the 28 November – 5 December 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)

4 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at December 05, 2014 05:06 PM

November 28, 2014

W3C Blog

This week: much html-json-forms wow such fame, Epub-Web, Webizen vote, European Parliament on competition, etc.

This is the 21-28 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & digital market

W3C in the Press (or blogs)

4 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 28, 2014 04:10 PM

November 21, 2014

W3C Blog

This week: Chrome HTML5 features, Service Workers, Net neutrality, etc.

This is the 14-21 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality

W3C in the Press (or blogs)

8 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 21, 2014 05:02 PM

November 17, 2014

ishida >> blog

Picker changes

If you use my Unicode character pickers, you may have noticed some changes recently. I’ve moved several pickers on to version 14. Most of the noticeable changes are in the location and styling of elements on the UI – the features remain pretty much unchanged.

Pages have acquired a header at the top (which is typically hidden), that provides links to related pages, and integrates the style into that of the rest of the site. What you don’t see is a large effort to tidy the code base and style sheets.

So far, I have changed the following: Arabic block, Armenian, Balinese, Bengali, Khmer, IPA, Lao, Mongolian, Myanmar, and Tibetan.

I will convert more as and when I get time.

However, in parallel, I have already made a start on version 15, which is a significant rewrite. Gone are the graphics, to be replaced by characters and webfonts. This makes a huge improvement to the loading time of the page. I’m also hoping to introduce more automated transcription methods, and simpler shape matching approaches.

Some of the pickers I already upgraded to version 14 have mechanisms for transcription and shape-based identification that took a huge effort to create, and will take a substantial effort to upgrade to version 15. So they may stay as they are for a while. However, pickers that are easier to handle, as well as new ones, will move to the new format.

Actually, I already made a start with Gurmukhi v15, which yanks that picker out of the stone-age and into the future. There’s also a new picker for the Uighur language that uses v15 technology. I’ll write separate blogs about those.


[By the way, if you are viewing the pickers on a mobile device such as an iPad, don't forget to turn Autofocus off (click on 'more controls' to find the switch). This will stop the onscreen keyboard popping up, annoyingly, each time you try to tap on a character.]

by r12a at November 17, 2014 10:51 PM

November 16, 2014

koalie’s contemplations in markup


These days, I wish I knew other things so I could consider a career change. Instead, I often long for something else, brood, and sweep the thought away to do what I have to do, because that is a better use of time and energy.

I suspect it would be easier if I knew what else I’d like to do. Even better if I could readily do other things. As to learning new things, well, I don’t feel like I’m up to the effort, and I have not the faintest idea what.

I like my work, however, and so find it puzzling that I should yearn for something else. The work is varied, challenging and interesting, the people are wonderful, and the mission is a constant inspiration.

Perhaps it’s the long hours. Budgets have been shrinking, and so has the size of our team. Our workload, on the other hand, hasn’t. Quite the opposite, it seems. Perhaps it’s the fact I have been around almost 16 years. I have been so lucky to progress in several teams and assume various positions. I’ve been in the team I’m in now for almost 10 years, full time for 7 years, and I have done so many different things and am doing so many other different things that it is truly mind-blowing. No, what I mean is the absolute time it represents.

The Consortium is twenty years old. It’s marvelous it’s still there, and its agenda is full to the brim. If I were to change jobs, wouldn’t it be perfect if it were before I’m in my forties?

Aha! I get it. This is a sort of mid-life work crisis, I’m having. Perhaps.

by koalie at November 16, 2014 01:48 AM

November 14, 2014

W3C Blog

This week: wide-review signal list, President Obama on Net Neutrality, etc.

This is the 7-14 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)

3 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 14, 2014 04:06 PM

November 11, 2014

W3C Blog

A Productive TPAC 2014 and W3C Highlights


TPAC 2014, W3C’s annual organization-wide meeting, was a milestone for W3C and the Web community on several levels. Thirty-four groups met face-to-face and held joint meetings the last week of October. Participants organized 30 breakout sessions on telecommunications, privacy, Web of things, payments, APIs, testing, robotics, W3C agility, and more. With that many meetings and so many attendees, I can’t speak to all the highlights of the week. But here were a few for me:

  • Nearly 550 people attended TPAC meetings, a record, and a great indicator of the vitality of our agenda. Several groups met for the first time face-to-face: the Social Web Working Group, Payments Interest Group, Web Annotations Working Group, and the RDF Data Shapes Working Group.
  • We celebrated the 20th anniversary of W3C, with an all-star slate of speakers and panelists. We live-streamed the event and have published some photos as well. Many thanks to our sponsors Intel, Ford Foundation, ICANN, Knight Foundation, Rakuten, and Tata Communications.
  • We announced the HTML5 Recommendation, emphasizing the work of the last two years to build a test suite of more than 100,000 tests to drive interoperability, and the Royalty-Free licensing commitments from more than 60 Members that make this the premier platform for innovation. As part of the announcement, we released a video on the value of standards that was viewed 65,000 times in less than a week.
  • With the completion of HTML5, and while so many people were gathered in Santa Clara, it was a great opportunity to reflect on what the community has accomplished and what lies ahead. The HTML Working Group spent some time at its face-to-face meeting planning next steps. There were also spirited discussions of developer needs, framed through the Application Foundations taxonomy.

The week was busy, and from all signs, productive. One Advisory Committee Representative expressed his appreciation for “an excellent week loaded with events,” and I hope that long-time and new Members alike found it a valuable opportunity to connect.

As part of preparation for TPAC we published for the Membership “W3C Highlights – October 2014,” now public, which I invite you to read.

We are already looking forward to TPAC 2015, 26-30 October in Sapporo, Japan (just prior to the IETF meeting 1-6 November in Yokohama).

by Jeff Jaffe at November 11, 2014 02:38 PM

November 07, 2014

W3C Blog

Payment Industry Priorities: Meeting Summary of Web Payments IG

2014 is the year of Web Payments for W3C. After a March Workshop to bring the community together, and a focused effort to draft a charter for a new steering group, we announced the launch of a new Web Payments Activity in October. The new Web Payments Interest Group began work in earnest the last week of October, during W3C’s annual big meeting called TPAC.

Despite the very short time between the launch of the group and the first face-to-face, more than 50 people participated in two days of good discussion. One major achievement of this meeting was to welcome representatives from the major stakeholder groups involved in the payment chain: people from the telecom industry (e.g. Orange, Verizon, AT&T, Deutsche Telekom), browser makers (e.g. Opera), big retailers (e.g. NACS, Walmart), Internet giants (e.g. Paypal, Verisign, Intel), the finance industry (e.g. Gemalto, Bloomberg), banks (e.g. Rabobank, World Bank), regulators (US Federal Reserve), and a few startups joined forces to start this new activity. The variety of the participants, their interests and their perspectives was recognized by the participants themselves as one of the greatest values of this initiative.

On the technical side, a first meeting is obviously dedicated to building common ground between participants and ensuring that we are all aware of the space in which we are working. A big part of the agenda was therefore dedicated to reviewing various specifications from ISO, X9 and a few other standardization bodies. We also reviewed existing work at W3C, on the Recommendation Track (Web Crypto WG, NFC WG, Sysapp WG), in Community Groups (Web Payments CG, Credential CG) and future work in areas like trust and permissions (see the recent workshop on this topic).

We then discussed our initial scope and, in particular, our focus on the wallet, which the group is for now calling the “payment agent.” The group decided it will first address the person-to-business case, where someone is paying a bill issued by an organization (private or public, which includes person-to-government payments).

Then the group decided to focus on convergent payment solutions, developing a wallet framework that will support both online and brick-and-mortar store payments. Finally, one of the key work items will be security: how to increase the security of credit card payments on the Web by enabling tokenized payments and push-based payments. Push-based payments are payments initiated by users: the merchant sends a bill to the customer, who then sends an order to their payment system provider to pay the merchant. All the parties in the room agreed on the need to move away from the exchange of credit card information for payments, and to enable these new approaches through open standards. It was clear in the room that secure hardware storage has a big role to play here, particularly secure elements, both for emulating credit cards and for managing identity and credentials securely.
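The push model inverts the usual flow: instead of the merchant pulling funds with the customer's card details, the customer instructs their own provider to push funds to the merchant. A toy sketch of that message flow, with entirely hypothetical types (the Interest Group had not defined any API at this point):

```python
# Toy push-payment flow: the merchant never receives the customer's raw
# credentials. All names here are hypothetical illustrations, not any
# proposed standard.
from dataclasses import dataclass

@dataclass
class Bill:                # merchant -> customer
    merchant_id: str
    amount_cents: int
    reference: str

@dataclass
class PaymentOrder:        # customer -> customer's payment provider
    bill: Bill
    account_token: str     # tokenized credential, never a raw card number

def pay(bill, account_token):
    """Customer-side step: turn a received bill into an order instructing
    the customer's own provider to push funds to the merchant."""
    return PaymentOrder(bill=bill, account_token=account_token)
```

Note that the card number stays between the customer and their provider; the merchant only ever sees the bill it issued and, eventually, the funds.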

Lastly, the group also held a number of discussions around privacy. Customers should be allowed to decide which information they wish to share with the various parties to a transaction. There are also external forces, such as what is required by regulation (e.g., minimum age to buy a specific product, or money-laundering detection) or by anti-fraud systems. There is clearly a tension between the various parties on this topic that we must address.

The group has created two task forces to begin work on a detailed roadmap identifying technology gaps and opportunities for standardization:

  • The Use Cases Task Force will take a bottom-up approach, identifying the list of scenarios that a payments framework should be able to address. The task force will work on requirements, design criteria and use cases that will enable the design of a wallet architecture. It will first review use-case documents produced by W3C and non-W3C groups, such as the W3C Web Payments Community Group, and X9 use cases for the ISO 12812 specifications.
  • The Payment Agent Task Force will take a more top-down approach and will work towards proposing a disaggregated architecture based on the discussions we had during the meeting.

We hope to accelerate results by approaching the question from these two angles. Now is a great time to join the group and help shape the roadmap, before the group’s next face-to-face meeting in Q1 2015.

by Stéphane Boyera at November 07, 2014 06:13 PM

This week: HTML5 is a Recommendation, #w3c20, Winamp in HTML5+JS, etc.

This is the 28 October – 7 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

W3C in the Press (or blogs)

57 articles since the last Digest, including 26 about HTML5 to Rec; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 07, 2014 03:10 PM

October 29, 2014

W3C Blog

Streaming video on the Web: a good example of more work to do

Yesterday we announced the HTML5 Recommendation. One of the most significant features of HTML5, and one that has been deployed for some time, is the <video> element, which will make it easier to include video in pages and applications without requiring users to download plug-ins.

There is already strong browser support for video today, but we have more work to do on interoperable support for streaming video. That is why we are working on a number of specifications to support streaming media interoperability, including Media Source Extensions, currently a Candidate Recommendation.

We ran into live-stream interop issues while planning our W3C20 Webcast today (from 3pm-6pm Pacific Time) and ensuring as wide an audience as possible. The deployed solutions we found (and will be using) rely on Flash plugins and other platform-specific approaches such as HTTP Live Streaming (HLS).

Despite that limitation, we are happy to offer the live stream with captions to those who cannot join us in Santa Clara.

Interoperable streaming is just one area where we want to make it easier for developers and users to play video and audio on the Web. We still need Royalty-Free codecs, the ability to play the content on second screens, improved support for accessibility, and more.

by Philippe le Hegaret at October 29, 2014 06:02 PM

October 18, 2014

ishida >> blog

Notes on Tibetan script

See the Tibetan Script Notes

Last March I pulled together some notes about the Tibetan script overall, and detailed notes about Unicode characters used in Tibetan.

I am writing these pages as I explore the Tibetan script as used for the Tibetan language. They may be updated from time to time and should not be considered authoritative. Basically I am mostly simplifying, combining, streamlining and arranging the text from the sources listed at the bottom of the page.

The first half of the script notes page describes how Unicode characters are used to write Tibetan. The second half looks at text layout in Tibetan (eg. line-breaking, justification, emphasis, punctuation, etc.)

The character notes page lists all the characters in the Unicode Tibetan block, and provides specific usage notes for many of them per their use for writing the Tibetan language.

See the Tibetan Character Notes

Tibetan is an abugida, ie. consonants carry an inherent vowel sound that is overridden using vowel signs. Text runs from left to right.
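The abugida mechanics described above are visible directly in the Unicode character properties: the base consonant is an ordinary letter, while a vowel sign is a combining mark attached to it. A quick check in Python:

```python
import unicodedata

ka = "\u0f40"       # TIBETAN LETTER KA, carries the inherent vowel /a/
vowel_i = "\u0f72"  # TIBETAN VOWEL SIGN I, overrides the inherent vowel

# The base consonant is an ordinary letter; the vowel sign is a
# nonspacing combining mark.
assert unicodedata.category(ka) == "Lo"       # Letter, other
assert unicodedata.category(vowel_i) == "Mn"  # Mark, nonspacing

# ཀ + ི renders as the single syllable "ki" but remains two codepoints.
syllable = ka + vowel_i
assert len(syllable) == 2
```

This is why pickers and keyboards treat vowel signs as separate keystrokes even though they render as part of one visual syllable.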

There are various different Tibetan scripts, of two basic types: དབུ་ཙན་ dbu-can, pronounced /uchen/ (with a head), and དབུ་མེད་ dbu-med, pronounced /ume/ (headless). This page concentrates on the former. Pronunciations are based on the central, Lhasa dialect.

The pronunciation of Tibetan words is typically much simpler than the orthography, which involves patterns of consonants. These reduce ambiguity and can affect pronunciation and tone. In the notes I try to explain how that works, in an approachable way (though it’s still a little complicated, at first).

Traditional Tibetan text was written on pechas (དཔེ་ཆ་ dpe-cha), loose-leaf sheets. Some of the characters used and formatting approaches are different in books and pechas.

For similar notes on other scripts, see my docs list.

by r12a at October 18, 2014 05:48 AM

October 14, 2014

W3C Blog

How to further improve the world of open standards

Today is World Standards Day (almost everywhere in the world ;) and, as I mentioned in an earlier Standards Day anniversary post, I like open standards and the benefits they bring to humanity.

To me, they are a first class public service. Much like people can take a public bus to go from some street to a stadium, they can also use Wifi, IP, TCP, HTTP, HTML, etc, to go from one place on the net to another.

These same folks pay, through their taxes and their passes, for the development of their bus lines and the tons of standards that surround the public transport sector (including the vehicles); yet although they pay for their ISP subscriptions, very little of their taxes goes towards net standards.

In other words: most if not all governments support the development of standards for public transport, and a myriad of other areas (housing, food, health, electricity, radio, etc), but their support for Internet and Web open standard development is close to nil.

Why is that so?

First, it sounds like a paradox, since funding for better or more net open standards would accelerate the growth of ICT altogether (ICT is largely based on the net being there, plus other communication technologies), something everybody agrees is good for society. Governments also receive a mandate from their citizens to promote and support standardization, in compliance with the WTO (World Trade Organization) TBT guidelines for standardization, and we think we are compliant, in terms of transparency, neutrality, etc.

Also, for end-users, the citizens, there is no difference: I know I can buy a tire for my car in a different shop than the one where I bought the car because there is an independent global standard for tire size; in the same way, I know I can buy a computer in one shop and connect to a computer bought from another shop in another country. Standards are hidden, just referred to by a name in a given context: tire size 170/55R14, or net protocols.

Looking at the business model of the de-jure standardization system, there is also the added “anomaly” that the organizations involved not only receive direct government support (each government has a budget line for official standard development), but their standard documents are almost never freely accessible, unlike those of the IETF or W3C for instance. So they have two revenue streams in their budget that we don’t have: government support and sales of standards.

I don’t want to spend much time on the sales aspect: it’s both historical and a stable situation, unlikely to change as long as people are ready to pay for important standard specifications in electronic form. It is also a source of revenue that is hard to give up once you have it (I heard once that it amounts to about one third of the global budget of de-jure standardizers). Our situation is also quite unique, since most net standards are and have always been available freely on the net, at no cost, as a way to further develop the net itself by various actors with no desire to pay for software specifications.

The absence of a government envelope for net standardization is also historical, and stems from the infrastructure itself (the cables, the antennas, etc.) being privatized from the start. But before going further, let me emphasize one important point: our model of voluntary standards not funded by governments has been extremely successful. With no government standardization funds, the IETF, W3C and others have created an infrastructure of enormous prosperity.

However, the Internet and the Web have become a core infrastructure of our societies, and we now have infrastructure standardization issues of a different variety. They may be longer term. They may be in the public interest, but not in the immediate interest of economic beneficiaries. They may require a long-term focus, beyond the interests of current funders. Areas such as security, privacy, internationalization, robustness, or accessibility come to mind.

It’s worth noting that the Internet and the Web have, through their couple of decades of evolution, received a reasonable amount of public funding through R&D grants, e.g. from DARPA, the EC or the Japanese MITI, but most would say it was for the “innovation” part of net development and the informal standardization coming with it. The net technologies clearly come from R&D, but they have now built their home in standardization land as well. W3C and IETF specifications have recently been made legally referenceable by government policies and procurements in Europe, for instance, which proves our seriousness in this business.

So we don’t get standards money from governments, which is reserved for de-jure/official standardization organizations, but we can somehow get R&D money, provided that we clearly show the innovation side of our work. This situation, unfortunately, doesn’t scale well, for various reasons. First, our standards agenda is not dictated by any government R&D planning, and although we have made efforts to get closer to the policy makers in charge of the various government R&D and standards agendas, there is no guarantee that our community will follow any of these policy needs when it comes to doing the work for real. Our agenda comes directly from the participants who decide to spend their time with us on a project.

For one thing, we’d need many more resources to close this gap and work on a useful government/fora agenda convergence on a global scale, that is, in all countries with policy priorities in terms of net development. But, as it happens, the standardization agenda in our sector is behind schedule relative to the potential needs of society at large, and as a result the priorities of government policy makers and of our more “private” communities are often the same. Everybody needs things like Web Payments, true privacy, and device independence to work (based on open standards), and needed them yesterday.

The other issue with most government R&D sources of funding is that they are open to everybody: academia, commercial companies, research labs, industries, etc., so the competition for a given grant is fierce, with no guarantee of getting anything from one year to the next. Why do we have to compete with everybody in the market, since we’re an SDO doing a public service, a necessity for the entire market to exist? Plus, as anyone who has prepared one knows well, it costs a lot of resources to apply for any single R&D public grant, and the time spent preparing applications is not paid for, so for SDOs this is time not spent on better or more standards.

For all these reasons, added government funding would allow W3C to more easily hold together and maintain what we have achieved so far (which everybody uses but not a lot of people want to fund), and more generally to do a better job of moving the Open Web Platform forward, against a wind of proprietary software platforms.

And it would cost a fraction of what governments give to the more official standardizers to get to a more balanced situation. The issue here is not so much one of unfair competition between standardization organizations, since there is work for everyone and we’re all busy; it’s more the loss of quality in fora/consortia deliverables, the risk of fragmentation of the net stacks, and a return to the pre-Internet days of online walled-garden services.

Fora and consortia net SDOs are small organizations: an order of magnitude less staff than the average de-jure SDO, and they also produce an order of magnitude fewer standards each year, but their impact is huge; nobody in any country will disagree with that. So my message to governments around the world is simple: please consider investing a tenth of what you give yearly to your local de-jure organizations (e.g. to your national ISO members, or to an ITU mirror), and you won’t be disappointed by the benefits that come back to your society.

by Daniel Dardailler at October 14, 2014 08:56 PM

Application Foundations for the Open Web Platform

Bringing HTML5 to the status of W3C Recommendation (in October 2014) is a defining moment in the development of the Open Web Platform (OWP), a set of technologies for developing distributed applications with the greatest interoperability in history. This year is also the 25th anniversary of the Web and 20th anniversary of W3C, making this an even more meaningful time to engage with the community about the Next Big Thing for the Web Platform.

My starting point for this discussion is that, now that HTML5 is done, W3C should focus on strengthening the parts of the Open Web Platform that developers most urgently need for success. I call this push for developers “Application Foundations.”

This is a new formulation, shaped in part by discussion at the September Extensible Web Summit in Berlin, as well as discussions within the W3C staff. I am planning further discussion at W3C’s TPAC 2014 meeting at the end of the month, and I welcome your feedback to this post and in the months ahead.

While this formulation is new, most of the work is not new. Rather, this represents a new way of looking at the considerable work that is already in the Web community, and placing some structure around the considerable work in front of us.

The Focus on Developers

The OWP is widely deployed, improving in function every day, and transforming industry after industry. According to a survey earlier this year, 42% of developers are using HTML5, CSS, and JavaScript when building applications. The promise of the Open Web Platform is to lower the cost of developing powerful applications to reach the most people, on any device.

As popular as the OWP is, it is still too challenging for developers to create some types of Web applications. Lack of broad interoperability for some features complicates development. Lack of standard features in the platform drives developers to create hybrid applications, implying a larger mix of tools, libraries, and interoperability issues. There is more work to meet growing expectations around privacy, security, and accessibility.

There are many ways to focus on developers. Many W3C activities outside of standards development are geared toward enabling developers, including tools (validator), documentation (Web Platform Docs), training (W3DevCampus, W3Conf), participation (Community Groups, draft Webizen program).

The question I want to get at in this post, however, relates to our open standards agenda: are we building the platform that developers need? How can we find out?

That is where the Application Foundations come in. They give us a way to think about the Open Web Platform that will make it easier for the W3C community to converge on the top priorities for developers.

Illustration of application foundation top-level categories and a few second-level topics

What are Application Foundations?

Platforms mature predictably in the following way: at a given time, some capabilities are core and “applications” rely on the core. Invariably, there comes a time when certain features are so pervasively used as services by other applications that the “next generation” of the platform must subsume some of those features (via libraries, platform services, daemons, etc.).

Operating systems provide a familiar example. Typically, an operating system kernel provides the key lower layer functions that a computer needs for its programs (aka applications): program execution, memory management, support for devices, etc. In early versions of many operating systems, there are also higher layer functions (such as networking, security, GUIs, etc.). Often these functions have some manifestation in the kernel, but also some manifestation in applications. Over time, given experience with the higher layer functions, people recognize that some must mature into major subsystems (aka foundations) that are above the kernel, leveraged by many applications. Modular design of these subsystems allows experts in different areas (security, communications protocols, and so on) to deliver solutions that will best serve all the other parts of the platform.

We see this pattern with the Open Web Platform as well. There was a time that video would have been viewed as an application of the Web, but in HTML5, video has unquestionably been absorbed into the core infrastructure (e.g., via the HTML <video> element). An apt metaphor is to call the programmable Open Web Platform of today the first generation operating system of the Web. In the past couple of years, important subsystems have already started to emerge, and in this post I propose a taxonomy of eight Application Foundations to focus our discussion on the next generation:

  • Security and Privacy
  • Core Web Design and Development
  • Device Interaction
  • Application Lifecycle
  • Media and Real-Time Communications
  • Performance and Tuning
  • Usability and Accessibility
  • Services

Each Foundation represents a collection of services and capabilities that should be available for all applications. For example, the Security and Privacy Foundation includes capabilities such as crypto, multi-factor authentication, and resource integrity.

We expect each Foundation to evolve independently, driven by experts in that topic. We also know that there will be interconnections, such as Security implications of Device Interactions, or Accessibility considerations of streaming Media.

Below I will begin to enumerate the capabilities we have associated with each Foundation, both long-standing and new or planned work that will substantially advance the capability of the OWP.

In our internal discussions there was quick consensus on the usefulness of an Application Foundations paradigm. There was also passionate debate about the taxonomy itself. Did we come up with one that will speak to developers? Did we neglect some important piece of functionality? Should this or that second-level item be a top-level category, or vice versa? To help structure the broader debate to come, I’d like to provide some background for the choices proposed here.

Principles for Thinking about these Foundations

Bearing in mind that we are looking for a best fit to structure discussion, not a perfect dissection, here are some key principles for thinking about these Foundations:

  • Although this exercise is motivated by the desire to meet developer needs, we sought labels that would be meaningful for end users as well. We looked for terms we thought would speak to those audiences about both desirable qualities of the platform and current pain points that we need to address.
  • These topics were derived by looking at W3C’s current priorities and discussions about upcoming work. W3C’s agenda is in constant motion, and this taxonomy will only be useful so long as it aligns with priorities. But the river bed shapes the river and vice versa.
  • Because the focus is on current developer pain points, we do not attempt to fit all of W3C’s work into the eight categories. We are committed to our entire agenda, but this particular exercise is limited in scope. For the same reason, we do not attempt to represent all completed work in this categorization. While we might want to see how broadly we could apply this taxonomy, our priority project is to enable developers today and tomorrow.
  • Because the focus is on W3C’s agenda, these Foundations do not attempt to represent all the things we think of as being important to the Web, HTTP and JavaScript being two notable examples. Moreover, many key IETF standards (HTTP, URL, IPv6) might more properly be defined as part of the kernel rather than a higher-level Foundation.

Putting the Foundations to Use

Although this framework is meant initially only as a communications vehicle —a way of describing the substantial work we have to do to enhance the OWP— we may find other uses later. Once fleshed out and road-tested, for example, the W3C Technical Architecture Group (TAG) might use this model for architectural discussions about the Open Web Platform.

Ultimately, with such a framework, it becomes easier to identify what is missing from the platform, because we will think more cohesively about its key components. And where there are similar capabilities (e.g. different functions that show up in the same Foundation), it will make it easier to identify where synergies can simplify or improve the platform.

By definition, Foundations are common subsystems useful not only for “horizontal applications”, but also for a variety of industries such as digital publishing, automotive, or entertainment. In a separate exercise we plan to work with those industries to create a view of the Foundations specific to what they need from the Open Web Platform.

So let’s get started. In each paragraph below, I outline why we think this area deserves to be a Foundation. I list some absolutely critical problems the community is currently addressing. This will help motivate why each Foundation was chosen, and the technology development required to give rise to the next generation Web.

Application Foundations

Security and Privacy

The Web is an indispensable infrastructure for sharing and for commerce. As we have created the OWP, we have become increasingly sensitive to making this a secure infrastructure. See, for example, our 2013 “Montevideo” statement calling for greater Internet security.

The vulnerabilities have become increasingly prominent. They vary in range and style. There are vulnerabilities that result from criminal exploitation of security holes for financial gain. There are numerous situations where information and communications that were intended to be private have found their way into unauthorized hands.

From a pure technical point of view, there is a tremendous amount of security research and there is knowledge on how to make an infrastructure secure. Many security advances are available in devices that are connected to the Web today. Nonetheless, security exposures abound: because it is too difficult for applications to leverage the security that is available; because new security techniques are not yet in place; and because users are not encouraged to rely on strong security.

We do not expect all developers to be security experts, so we must make it easier to use the security mechanisms of operating systems. The Crypto API provides access to some of those services from within JavaScript, and is already being deployed. This trend will be extended as platforms add stronger security such as multi-factor authentication, biometrics, smartcards, all discussed at our September Workshop on Authentication, Hardware Tokens and Beyond. We also need to add a more comprehensive Identity Management system which discourages weak passwords.

To strengthen this Foundation, we are working closely with a number of organizations, including the IETF, FIDO Alliance, and Smartcard Alliance.

Core Web Design and Development

Developers use many widely deployed front end technologies for structure, style, layout, animations, graphics, interactivity, and typography of pages and apps. HTML5 brought native support for audio and video, canvas, and more. Dozens of CSS modules are used for advanced layout, transformations, transitions, filters, writing modes, and more. SVG is now widely supported for scalable graphics and animations, and WOFF is beautifying the Web and making it easier to read.

Still, the work is not complete. Much new work will be driven by the adoption of the Web on a broader set of devices, including mobile phones, tablets and e-book readers, televisions, and automobiles. The responsive design paradigm helps us think about how we need to enhance HTML, CSS, and other APIs to enable presentation across this wider set of devices.

One exciting direction for this Foundation is Web Components, which will make it easier for developers to carry out common tasks with reusable templates and code modules, all leveraging standards under the hood.

Another area of anticipated work will be driven by a more complete integration of digital publishing into the Web. In the past, advanced styling and layout for magazines has remained an area where special-purpose systems were required. In this Foundation, we will ensure that we have the primitives for advanced styling and layout so that all publishing can be done interoperably on all Web devices.

Our Test the Web Forward activity, though relevant across the Foundations, has been particularly influential for Core Web Design and Development, and we invite the community to contribute to that active testing effort.

Device Interaction

Closely related to the Core Foundation is the Device Interaction Foundation, which describes the ways that devices are used to control or provide data to applications. New Web APIs are proposed weekly to give access to all of the features offered by supporting devices. For mobile phones, APIs exist or are in development for access to camera, microphone, orientation, GPS, vibration, ambient light, pointer lock, screen orientation, battery status, touch events, Bluetooth, NFC, and more.
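Because no single device supports every one of these APIs, applications typically probe for each capability before using it and degrade gracefully when it is absent. The sketch below shows that common feature-detection pattern; the helper name and the choice of probed fields are illustrative, not part of any specification.

```javascript
// Illustrative feature-detection sketch: probe a navigator-like object
// for device capabilities before using them, so an app can fall back
// gracefully where an API is absent (older browser, desktop, etc.).
function detectDeviceFeatures(nav) {
  return {
    vibration:   !!nav && typeof nav.vibrate === 'function',
    geolocation: !!nav && 'geolocation' in nav,
    battery:     !!nav && typeof nav.getBattery === 'function',
  };
}

// Outside a browser there may be no navigator object at all,
// in which case every capability reports false.
const nav = typeof navigator !== 'undefined' ? navigator : undefined;
console.log(detectDeviceFeatures(nav));
```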

The next generation of Web attached devices will introduce new challenges. For instance, the Automotive and Web Platform Business Group is developing APIs to access information about vehicle speed, throttle position, interior lights, horn, and other car data that could help improve driving safety and convenience. We anticipate some of that work will advance to the standards track. In general, wearables, personal medical equipment devices, home energy management devices, and the Internet of Things will drive developer demand for data in applications, and for Web abstractions to simplify what will be new complexity in underlying networks. To achieve that simplicity for developers, the TAG, Device APIs Working Group, Web Apps Working Group, and Systems Applications Working Group all have a role to play in capturing good practices for API design.

Application Lifecycle

The proliferation of environments —both mobile and non-mobile— in which an application may run has created new challenges for developers trying to satisfy user expectations. People expect their apps to be useful even when there is no network (“offline”), to do the right thing when the network returns (“sync”), to take into account location-specific information (“geofencing”), to be easy to launch on their device (“manifest”), to respond to notifications (from the local device or a remote server), and so on. The Application Lifecycle Foundation deals with the range of context changes that may affect an application. For example, developers have made clear that AppCache fails to meet important offline use cases, so we must come up with a superior solution.

The emerging approach (“Workers”) to addressing many of these lifecycle requirements involves spawning important tasks as asynchronous processes outside of an application. For instance, a Worker can be used to manage a cache and update it according to network availability, or to receive server-sent notifications, even when an application is not actively running. Enhancing this Foundation will enable developers to create superior user experiences.

Media and Real-Time Communications

A number of communications protocols and related APIs continue to serve developers well, from HTTP to XMLHttpRequest to Web Sockets. But to meet the growing demand for real-time communications and streaming media, we must add new capabilities, the focus of this Foundation.

The promise of WebRTC is to make every single connected device with a Web browser a potential communications end point. This turns the browser into a one-stop solution for voice, video, chat, and screen sharing. A sample use case driving interest in real-time in the browser is enabling “click-to-call” solutions for improved customer service. WebRTC has the potential to bring augmented reality to the Web and create a brand new class of user experiences – an exciting direction for this Foundation.

For audio and video, developers will have a variety of tools to manipulate media streams, edit audio input, and send output to multiple screens (“second screen”). This last capability is of particular interest to the entertainment industry. For example, in the US, a majority of people have a second device nearby while watching television, allowing for new interactive experiences such as social interactions or online commerce.

Performance and Tuning

Open Web Platform functionality has moved steadily to the client side, which creates a variety of new challenges related to security and application lifecycle management, but especially to performance. JavaScript engines have improved dramatically in the past few years. But for high-end games, media streams, and even some simple interactions like scrolling, we still have much to do so that developers can monitor application performance and code in ways that make the best use of resources. This is the focus of our Performance and Tuning Foundation.

Today we are working on APIs for performance profiling such as navigation timing and resource hints. In various discussions and Workshops, people have asked for a number of enhancements: for understanding load times, enabling automatic collection of performance data, providing hints to the server for content adaptation, improving performance diagnostics, managing memory and garbage collection, preserving frame rates, using the network efficiently, and much more.

The responsive design paradigm mentioned in the Core Web Design and Development Foundation also plays a role in the world of performance: we can make better use of the network and processing power if we can take into account screen size and other device characteristics.

Usability and Accessibility

The richness of the Open Web Platform has raised new challenges for some users. It is great to be able to create an app that runs on every device, but is it easy to use, or clunky? It’s great to offer streaming media, but do developers have the standards to include captions to make the media accessible?

Designers have pioneered a number of approaches (responsive, mobile first), that can improve accessibility and usability, and W3C’s Web Accessibility Initiative has developed some standards (such as WCAG2 and WAI-ARIA) to enable developers to build accessible applications. But we have more work to do to make it easier to design user interfaces that scale to a wide array of devices and assistive technologies. We have confidence that designers and developers will come up with creative new ways to use standards for new contexts. For example, the vibration API used by some mobile applications might offer new ways to communicate safely with drivers through the steering wheel in some automotive apps, and could also be used to create more accessible experiences for people with certain types of disabilities.

Less than one third of current Web users speak English as their native language and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the “World Wide” portion of its name, it must support the needs of world-wide users at a basic level as they engage with content in the various languages. The W3C Internationalization Activity pursues this goal in various ways, including coordination with other organizations, creation of educational materials, coordination on the work of other W3C groups, and technical work itself on various topics.


Services

Earlier I mentioned the pattern of widely used applications migrating “closer to the core.” While this is true for all the Foundations, it is especially clear in the Services Foundation, where today we are exploring the four most likely candidates for future inclusion.

The first is Web payments. Payments have been with us for decades, and e-commerce is thriving, predicted to reach $1.471 trillion this year, an increase of nearly 20% from last year. But usability issues, security issues, and the lack of open standard APIs are slowing innovation around digital wallets and other ways to improve payments on the Web. W3C is poised to launch a group to study the current gaps in Web technology for payments. The Payments group will recommend new work to fill those gaps, some of which will have an impact on other Foundations (e.g., Usability, Security and Privacy). Because a successful integration of payments into the Web requires extensive cooperation, the group will also liaise with other organizations in the payments industry that are using Web technology, to foster alignment and interoperability on a global scale.

The second is annotations. People annotate the Web in many ways, commenting on photos or videos, when reading e-books, and when supporting social media posts. But there is no standard infrastructure for annotations. Comments are siloed in someone else’s blog system, or controlled by the publisher of an e-book. Our vision is that annotations on the Web should be more Web-like: linkable, sharable, discoverable, and decentralized. We need a standard annotation services layer.

The third is the Social Web. Consumer-facing social Web services, combined with “bring your own device” (BYOD) and remote-work policies in the enterprise, have driven businesses to turn increasingly to social applications as a way to achieve scalable information integration. Businesses are now looking for open standards for status updates (e.g., Activity Streams) and other social data. These same standards will give users greater control over their own data, and thus create new opportunities in the Security and Privacy Foundation as well.

The fourth is the Web of Data. The Semantic Web and Linked Data Platform already provide enhanced capabilities for publishing and linking data. These services have been used to enhance search engines and to address industry use cases in health care and life sciences, government, and elsewhere. But we know that more is necessary for developers to make use of the troves of data currently available. One upcoming activity will be to collect ontologies of how linked data should be organized for different applications (notably for search).


Web technology continues to expand by leaps and bounds. The core capability is growing, the application to industry is growing, and we continually find new devices for web technology and new use cases. To be able to focus on this expansion we need modular design, and a principle in modular design is to be able to clearly and succinctly talk about categories of function. Hopefully this post begins a healthy discussion about the framework for the Open Web Platform going forward.

As part of that discussion, we will continue to develop a new Application Foundations Web page, and we invite feedback via our public Application Foundations wiki.


I acknowledge extensive discussions within the W3C Team, but especially with Phil Archer, Robin Berjon, Dominique Hazaël-Massieux, Ivan Herman, Ian Jacobs, Philippe Le Hégaret, Dave Raggett, Wendy Seltzer, and Mike Smith. At the Extensible Web Summit, I received great input from Axel Rauschmayer, Benoit Marchant, and Alan Stearns.

by Jeff Jaffe at October 14, 2014 05:45 PM

October 10, 2014

W3C Blog

This week: CSS 20th anniversary, autowebplatform progress, TimBL’s keynote, Physical Web, etc.

This is the 3-10 October 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Open Web & net neutrality

W3C in the Press (or blogs)

22 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at October 10, 2014 02:31 PM