November 21, 2014

W3C Blog

This week: Chrome HTML5 features, Service Workers, Net neutrality, etc.

This is the 14-21 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality

W3C in the Press (or blogs)

8 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 21, 2014 05:02 PM

November 17, 2014

ishida >> blog

Picker changes

If you use my Unicode character pickers, you may have noticed some changes recently. I’ve moved several pickers on to version 14. Most of the noticeable changes are in the location and styling of elements on the UI – the features remain pretty much unchanged.

Pages have acquired a header at the top (which is typically hidden) that provides links to related pages and integrates the style with that of the rest of the site. What you don’t see is a large effort to tidy the code base and style sheets.

So far, I have changed the following: Arabic block, Armenian, Balinese, Bengali, Khmer, IPA, Lao, Mongolian, Myanmar, and Tibetan.

I will convert more as and when I get time.

However, in parallel, I have already made a start on version 15, which is a significant rewrite. Gone are the graphics, to be replaced by characters and webfonts. This makes a huge improvement to the loading time of the page. I’m also hoping to introduce more automated transcription methods, and simpler shape matching approaches.

Some of the pickers I already upgraded to version 14 have mechanisms for transcription and shape-based identification that took a huge effort to create, and will take a substantial effort to upgrade to version 15. So they may stay as they are for a while. However, pickers that are simpler to handle, as well as new pickers, will move to the new format.

Actually, I already made a start with Gurmukhi v15, which yanks that picker out of the stone-age and into the future. There’s also a new picker for the Uighur language that uses v15 technology. I’ll write separate blogs about those.

 

[By the way, if you are viewing the pickers on a mobile device such as an iPad, don't forget to turn Autofocus off (click on 'more controls' to find the switch). This will stop the onscreen keyboard popping up, annoyingly, each time you try to tap on a character.]

by r12a at November 17, 2014 10:51 PM

November 16, 2014

koalie’s contemplations in markup

These days, I wish I knew other things so I could consider a career change. Instead, I often long for something else, brood, and sweep the thought away to do what I have to do, because that is a better use of time and energy.

I suspect it would be easier if I knew what else I’d like to do. Even better if I could readily do other things. As to learning new things, well, I don’t feel like I’m up to the effort, and I have not the faintest idea what.

I like my work, however, and so find it puzzling that I should yearn for something else. The work is varied, challenging and interesting, the people are wonderful, the mission is a constant inspiration.

Perhaps it’s the long hours. Budgets have been shrinking, and so has the size of our team. Our workload, on the other hand, hasn’t. Quite the opposite, it seems. Perhaps it’s the fact I have been around almost 16 years. I have been so lucky to progress in several teams and assume various positions. I’ve been in the team I’m in now for almost 10 years, full time for 7 years, and I have done so many different things and am doing so many other different things that it is truly mind-blowing. No, what I mean is the absolute time it represents.

The Consortium is twenty years old. It’s marvelous it’s still there, and its agenda is full to the brim. If I were to change jobs, wouldn’t it be perfect if it were before I’m in my forties?

Aha! I get it. This is a sort of mid-life work crisis, I’m having. Perhaps.

by koalie at November 16, 2014 01:48 AM

November 14, 2014

W3C Blog

This week: wide-review signal list, President Obama on Net Neutrality, etc.

This is the 7-14 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Net Neutrality & Open Web

W3C in the Press (or blogs)

3 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 14, 2014 04:06 PM

November 11, 2014

W3C Blog

A Productive TPAC 2014 and W3C Highlights

TPAC 2014, W3C’s annual organization-wide meeting, was a milestone for W3C and the Web community on several levels. Thirty-four groups met face-to-face and held joint meetings the last week of October. Participants organized 30 breakout sessions on telecommunications, privacy, Web of things, payments, APIs, testing, robotics, W3C agility, and more. With that many meetings and so many attendees, I can’t speak to all the highlights of the week. But here were a few for me:

  • Nearly 550 people attended TPAC meetings, a record, and a great indicator of the vitality of our agenda. Several groups met for the first time face-to-face: the Social Web Working Group, Payments Interest Group, Web Annotations Working Group, and the RDF Data Shapes Working Group.
  • We celebrated the 20th anniversary of W3C, with an all-star slate of speakers and panelists. We live-streamed the event and have published some photos as well. Many thanks to our sponsors Intel, Ford Foundation, ICANN, Knight Foundation, Rakuten, and Tata Communications.
  • We announced the HTML5 Recommendation, emphasizing the work of the last two years to build a test suite of more than 100,000 tests to drive interoperability, and the Royalty-Free licensing commitments from more than 60 Members that make this the premiere platform for innovation. As part of the announcement, we released a video on the value of standards that was viewed 65,000 times in less than a week.
  • With the completion of HTML5, and while so many people were gathered in Santa Clara, it was a great opportunity to reflect on what the community has accomplished and what lies ahead. The HTML Working Group spent some time at its face-to-face meeting planning next steps. There were also spirited discussions of developer needs, framed through the Application Foundations taxonomy.

The week was busy, and from all signs, productive. One Advisory Committee Representative expressed his appreciation for “an excellent week loaded with events,” and I hope that long-time and new Members alike found it a valuable opportunity to connect.

As part of preparations for TPAC, we published “W3C Highlights – October 2014” for the Membership; it is now public, and I invite you to read it.

We are already looking forward to TPAC 2015, 26-30 October in Sapporo, Japan (just prior to the IETF meeting 1-6 November in Yokohama).

by Jeff Jaffe at November 11, 2014 02:38 PM

November 07, 2014

W3C Blog

Payment Industry Priorities: Meeting Summary of Web Payments IG

2014 is the year of Web Payments for W3C. After a March Workshop to bring the community together, and a focused effort to draft a charter for a new steering group, we announced the launch of a new Web Payments Activity in October. The new Web Payments Interest Group began work in earnest the last week of October, during W3C’s annual big meeting called TPAC.

Despite the very short time between the launch of the group and the first face-to-face, more than 50 people participated in two days of good discussion. One major achievement at this meeting was to welcome representatives from the major stakeholder groups involved in the payment chain: people from the telecom industry (e.g. Orange, Verizon, AT&T, Deutsche Telekom), browser makers (e.g. Opera), big retailers (e.g. NACS, Walmart), Internet giants (e.g. Paypal, Verisign, Intel), the finance industry (e.g. Gemalto, Bloomberg), banks (e.g. Rabobank, World Bank), regulators (US Federal Reserve), and a few startups joined forces to start this new activity. The variety of the participants, their interests and perspectives, was recognized by the participants themselves as one of the greatest values of this initiative.

On the technical side, a first meeting is obviously dedicated to building common ground between participants and ensuring that we are all aware of the space in which we are working. A big part of the agenda was therefore dedicated to reviewing various specifications from ISO, X9 and a few other standardization bodies. We also reviewed existing work at W3C, on the Recommendation Track (Web Crypto WG, NFC WG, Sysapp WG), in Community Groups (Web Payments CG, Credentials CG) and future work in areas like trust and permissions (see the recent workshop on this topic).

We then discussed our initial scope, and in particular our focus on the wallet, which the group is for now calling the “payment agent.” The group decided it will first address the person-to-business case, where someone is paying a bill issued by an organization (private or public, which includes person-to-government payments).

Then the group decided to focus on convergent payment solutions, developing a wallet framework that will support both online and brick & mortar store payments. Finally, one of the key work items will be security: how to increase the security of credit card payments on the Web by enabling tokenized payments and push-based payments. Push-based payments are payments initiated by users: the merchant sends a bill to the customer, who then sends an order to their payment service provider to pay the merchant. All the parties in the room agreed on the need to move away from the exchange of credit card information for payments, and to enable these new approaches through open standards. It was clear in the room that secure hardware storage has a big role to play here, particularly secure elements, both for emulating credit cards and for managing identity and credentials securely.

Lastly, the group also held a number of discussions around privacy. Customers should be allowed to decide which information they wish to share with the various parties to a transaction. There are also external forces, such as what is required by regulation (e.g., a minimum age to buy specific products, or money-laundering detection) or by anti-fraud systems. There is clearly a tension between the various parties on this topic that we must address.

The group has created two task forces to begin work on a detailed roadmap identifying technology gaps and opportunities for standardization:

  • The Use Cases Task Force will take a bottom-up approach, identifying the list of scenarios that a payments framework should be able to address. The task force will work on the requirements, design criteria and use cases that will enable the design of a wallet architecture. It will first review use-case documents produced by various W3C and non-W3C groups, such as the W3C Web Payments Community Group and the X9 use cases for the ISO 12812 specifications.
  • The Payment Agent Task Force will have a more top-down approach and will work towards proposing a disaggregated architecture based on the discussions we had during the meeting.

We hope to accelerate results by approaching the question from these two angles. Now is a great time to join the group and help shape the roadmap, before the group’s next face-to-face meeting in Q1 2015.

by Stéphane Boyera at November 07, 2014 06:13 PM

This week: HTML5 is a Recommendation, #w3c20, Winamp in HTML5+JS, etc.

This is the 28 October – 7 November 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

W3C in the Press (or blogs)

57 articles since the last Digest, including 26 about HTML5 to Rec; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at November 07, 2014 03:10 PM

October 29, 2014

W3C Blog

Streaming video on the Web: a good example of more work to do

Yesterday we announced the HTML5 Recommendation. One of the most significant features of HTML5, and one that has been deployed for some time, is the <video> element, which will make it easier to include video in pages and applications without requiring users to download plug-ins.

There is already strong browser support for video today, but we have more work to do on interoperable support for streaming video. That is why we are working on a number of specifications to support streaming media interoperability, including Media Source Extensions, currently a Candidate Recommendation.
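
As a concrete illustration (a minimal sketch, not taken from the post), a page can feature-detect the <video> element and Media Source Extensions before falling back to a plugin- or HLS-based stream; the fallbackToHls helper below is hypothetical.

  // Feature-detect native video playback and MSE support.
  var video = document.createElement('video');
  var canPlayNatively = typeof video.canPlayType === 'function';
  var hasMse = typeof window.MediaSource !== 'undefined';

  if (canPlayNatively && hasMse) {
    // Feed the stream to the element through a MediaSource object;
    // SourceBuffers would then be appended with media segments.
    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);
  } else {
    // Fall back to a platform-specific approach such as Flash or HLS.
    fallbackToHls(video); // hypothetical helper wiring up the fallback player
  }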

We ran into live stream interop issues as part of planning our W3C20 Webcast today (from 3pm-6pm Pacific Time) and ensuring as wide an audience as possible. The deployed solutions we found (and will be using) rely on Flash plugins and other platform-specific approaches such as HTTP Live Streaming (HLS).

Despite that limitation, we are happy to offer the live stream with captions to those who cannot join us in Santa Clara.

Interoperable streaming is just one area where we want to make it easier for developers and users to play video and audio on the Web. We still need Royalty-Free codecs, the ability to play the content on second screens, improved support for accessibility, and more.

by Philippe le Hegaret at October 29, 2014 06:02 PM

October 18, 2014

ishida >> blog

Notes on Tibetan script

See the Tibetan Script Notes

Last March I pulled together some notes about the Tibetan script overall, and detailed notes about Unicode characters used in Tibetan.

I am writing these pages as I explore the Tibetan script as used for the Tibetan language. They may be updated from time to time and should not be considered authoritative. Basically I am mostly simplifying, combining, streamlining and arranging the text from the sources listed at the bottom of the page.

The first half of the script notes page describes how Unicode characters are used to write Tibetan. The second half looks at text layout in Tibetan (eg. line-breaking, justification, emphasis, punctuation, etc.)

The character notes page lists all the characters in the Unicode Tibetan block, and provides specific usage notes for many of them per their use for writing the Tibetan language.

See the Tibetan Character Notes

Tibetan is an abugida, ie. consonants carry an inherent vowel sound that is overridden using vowel signs. Text runs from left to right.
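
As a small illustration of the abugida principle (my example, not from the script notes), the letter ཀ (U+0F40 TIBETAN LETTER KA) reads “ka” on its own, and adding the vowel sign ི (U+0F72 TIBETAN VOWEL SIGN I) overrides the inherent vowel to give ཀི “ki”:

  var ka = '\u0F40';         // ཀ TIBETAN LETTER KA, read "ka"
  var vowelSignI = '\u0F72'; // ི TIBETAN VOWEL SIGN I
  console.log(ka);               // ཀ  → "ka" (inherent vowel)
  console.log(ka + vowelSignI);  // ཀི → "ki" (vowel sign overrides the "a")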

There are various different Tibetan scripts, of two basic types: དབུ་ཙན་ dbu-can, pronounced /uchen/ (with a head), and དབུ་མེད་ dbu-med, pronounced /ume/ (headless). This page concentrates on the former. Pronunciations are based on the central, Lhasa dialect.

The pronunciation of Tibetan words is typically much simpler than the orthography, which involves patterns of consonants. These reduce ambiguity and can affect pronunciation and tone. In the notes I try to explain how that works, in an approachable way (though it’s still a little complicated, at first).

Traditional Tibetan text was written on pechas (དཔེ་ཆ་ dpe-cha), loose-leaf sheets. Some of the characters used and formatting approaches are different in books and pechas.

For similar notes on other scripts, see my docs list.

by r12a at October 18, 2014 05:48 AM

October 14, 2014

W3C Blog

How to further improve the world of open standards

Today is World Standards Day (almost everywhere in the world ;) and as I mentioned in an earlier Standards Day anniversary post, I like open standards and the benefits they bring to humanity.

To me, they are a first class public service. Much like people can take a public bus to go from some street to a stadium, they can also use Wifi, IP, TCP, HTTP, HTML, etc, to go from one place on the net to another.

These same folks pay, through their taxes and their passes, for the development of their bus lines and the tons of standards that surround the public transport sector (including the vehicles); yet although they pay for their ISP subscriptions, very little of their taxes goes towards net standards.

In other words: most if not all governments support the development of standards for public transport, and a myriad of other areas (housing, food, health, electricity, radio, etc), but their support for Internet and Web open standard development is close to nil.

Why is that so?

First, it sounds like a paradox, since funding for better or more net open standards would accelerate the growth of ICT altogether (ICT is largely based on the net being there, plus other communication technologies), something everybody agrees is good for society. Governments also receive a mandate from their citizens to promote and support standardization, in compliance with the WTO (World Trade Organization) TBT guidelines for standardization, and we think we are compliant, in terms of transparency, neutrality, etc.

Also, for the end-users, the citizens, there is no difference: I know that I can buy a tire for my car in a different shop than the one where I bought the car, because there is an independent global standard for tire sizes; in the same way, I know that I can buy a computer in one shop and connect to a computer bought from another shop in another country. Standards are hidden, just referred to by a name in a given context: tire size 170/55R14 or net protocols http://www.org/doc.html

Looking at the business model of the de-jure standardization system, there is also the added “anomaly” that the organizations involved not only receive direct government support (each government has a budget line for official standard development) but their standard documents are almost never freely accessible, unlike those of the IETF or W3C, for instance. So they have two lines of revenue in their budget that we don’t have: government support and sales of standards.

I don’t want to spend much time on the sales aspect: it’s both historical and a stable situation, unlikely to change as long as people are ready to pay for important standard specifications in electronic form. This is also a source of revenue that is hard to give up once you have it (I heard once that it amounts to about one third of the global budget of de-jure standardizers). Our situation is also quite unique, since most net standards are and have always been available freely on the net, at no cost, as a way to further develop the net itself by various actors with no desire to pay for software specifications.

The absence of a government envelope for net standardization is also historical, and goes back to the infrastructure itself (the cables, the antennas, etc.) being privatized from the start. But before going further, let me emphasize one important point: our model of voluntary standards not funded by government has been extremely successful. With no standardization government funds, IETF, W3C and others have created an infrastructure of enormous prosperity.

However, the Internet and the Web have become a core infrastructure of our societies, and we now have infrastructure standardization issues of a different variety. They may be longer term. They may be in the public interest, but not in the immediate economic interest of economic beneficiaries. They may require long-term focus, beyond the interests of current funders. Areas such as security, privacy, internationalization, robustness, or accessibility come to mind.

It’s worth noting that the Internet and the Web have, through their couple of decades of evolution, received a reasonable amount of public funding through R&D grants, e.g. from DARPA, the EC or the Japanese MITI, but most would say it was for the “innovation” part of the net development and the informal standardization coming with it. The net technologies come from R&D, clearly, but they have now built their home in standardization land as well. W3C and IETF specifications have recently been made legally referenceable by government policies and procurements in Europe, for instance, which proves our seriousness in this business.

So we don’t get standard government money, reserved for de-jure/official standardization organizations, but we can somehow get R&D money, provided that we clearly show the innovation side of our work. This situation, unfortunately, doesn’t scale well, for various reasons. First, our standards agenda is not dictated by any government R&D planning, and although we have made efforts to get closer to the policy makers in charge of the various government R&D and standards agendas, there is no guarantee that our community will follow any of these policy needs when it comes to doing it for real. Our agenda comes live from participants who decide to spend their time with us on a project.

For one thing, we’d need far more resources to close this gap and work on a useful gov/fora agenda convergence on a global scale, that is, in all countries with policy priorities in terms of net developments. But, as it happens, the state of the standardization agenda in our sector is behind schedule in terms of the potential needs of society at large, and as a result the priorities of government public policy makers and of our more “private” communities are often the same. Everybody needs things like Web Payments, true privacy and device independence to work (based on open standards), and needed them yesterday.

The other issue with most government R&D sources of funding is that they are open to everybody: academia, commercial companies, research labs, industries, etc., so the competition to get a given grant is fierce, with no guarantee of getting anything from one year to the next. Why do we have to compete with everybody in the market, since we’re an SDO, doing a public service, a necessity for the entire market to exist? Plus, as anyone who has done one knows well, it costs a lot of resources to apply for any single R&D public grant, and the time spent preparing these applications is not paid for, so for SDOs this is time not spent on better or more standards.

Because of all these reasons, added government funding would allow W3C to more easily hold together and maintain what we have achieved so far (which everybody uses but few want to fund), and more generally to do a better job at moving the Open Web Platform forward, against a wind of proprietary software platforms.

And it would cost a fraction of what governments give to the more official standardizers to get to a more balanced situation. The issue here is not so much one of unfair competition between standardization organizations, since there is work for everyone and we’re all busy; it’s more the issue of loss of quality in fora/consortia deliverables, the risk of fragmentation of the net stacks, and going back to the pre-Internet days of online walled-garden services.

Fora and consortia net SDOs are small organizations: an order of magnitude less staff than the average de-jure SDO, and they also produce an order of magnitude fewer standards each year, but their impact is huge, and nobody in any country will disagree with that. So my message to governments around the world is simple: please consider investing a tenth of what you give yearly to your local de-jure organizations (e.g. to your national ISO members, or to an ITU mirror), and you won’t be disappointed by the benefits that come back to your society.

by Daniel Dardailler at October 14, 2014 08:56 PM

Application Foundations for the Open Web Platform

Bringing HTML5 to the status of W3C Recommendation (in October 2014) is a defining moment in the development of the Open Web Platform (OWP), a set of technologies for developing distributed applications with the greatest interoperability in history. This year is also the 25th anniversary of the Web and 20th anniversary of W3C, making this an even more meaningful time to engage with the community about the Next Big Thing for the Web Platform.

My starting point for this discussion is that, now that HTML5 is done, W3C should focus on strengthening the parts of the Open Web Platform that developers most urgently need for success. I call this push for developers “Application Foundations.”

This is a new formulation, shaped in part by discussion at the September Extensible Web Summit in Berlin, as well as discussions within the W3C staff. I am planning further discussion at W3C’s TPAC 2014 meeting at the end of the month, and I welcome your feedback to this post and in the months ahead.

While this formulation is new, most of the work is not new. Rather, this represents a new way of looking at the considerable work that is already in the Web community, and placing some structure around the considerable work in front of us.

The Focus on Developers

The OWP is widely deployed, improving in function every day, and transforming industry after industry. According to a survey earlier this year, 42% of developers are using HTML5, CSS, and JavaScript when building applications. The promise of the Open Web Platform is to lower the cost of developing powerful applications to reach the most people, on any device.

As popular as the OWP is, it is still too challenging for developers to create some types of Web applications. Lack of broad interoperability for some features complicates development. Lack of standard features in the platform drives developers to create hybrid applications, implying a larger mix of tools, libraries, and interoperability issues. There is more work to meet growing expectations around privacy, security, and accessibility.

There are many ways to focus on developers. Many W3C activities outside of standards development are geared toward enabling developers, including tools (validator), documentation (Web Platform Docs), training (W3DevCampus, W3Conf), participation (Community Groups, draft Webizen program).

The question I want to get at in this post, however, relates to our open standards agenda: are we building the platform that developers need? How can we find out?

That is where the Application Foundations come in. They give us a way to think about the Open Web Platform that will make it easier for the W3C community to converge on the top priorities for developers.

Illustration of application foundation top-level categories and a few second-level topics

What are Application Foundations?

Platforms mature predictably in the following way: at a given time, some capabilities are core and “applications” rely on the core. Invariably, there comes a time when certain features are so pervasively used as services by other applications that the “next generation” of the platform must subsume some of those features (via libraries, platform services, daemons, etc.).

Operating systems provide a familiar example. Typically, an operating system kernel provides the key lower layer functions that a computer needs for its programs (aka applications): program execution, memory management, support for devices, etc. In early versions of many operating systems, there are also higher layer functions (such as networking, security, GUIs, etc.). Often these functions have some manifestation in the kernel, but also some manifestation in applications. Over time, given experience with the higher layer functions, people recognize that some must mature into major subsystems (aka foundations) that are above the kernel, leveraged by many applications. Modular design of these subsystems allows experts in different areas (security, communications protocols, and so on) to deliver solutions that will best serve all the other parts of the platform.

We see this pattern with the Open Web Platform as well. There was a time that video would have been viewed as an application of the Web, but in HTML5, video has unquestionably been absorbed into the core infrastructure (e.g., via the HTML <video> element). An apt metaphor is to call the programmable Open Web Platform of today the first generation operating system of the Web. In the past couple of years, important subsystems have already started to emerge, and in this post I propose a taxonomy of eight Application Foundations to focus our discussion on the next generation:

  • Security and Privacy
  • Core Web Design and Development
  • Device Interaction
  • Application Lifecycle
  • Media and Real-Time Communications
  • Performance and Tuning
  • Usability and Accessibility
  • Services

Each Foundation represents a collection of services and capabilities that should be available for all applications. For example, the Security and Privacy Foundation includes capabilities such as crypto, multi-factor authentication, and resource integrity.

We expect each Foundation to evolve independently, driven by experts in that topic. We also know that there will be interconnections, such as Security implications of Device Interactions, or Accessibility considerations of streaming Media.

Below I will begin to enumerate the capabilities we have associated with each Foundation, both long-standing and new or planned work that will substantially advance the capability of the OWP.

In our internal discussions there was quick consensus on the usefulness of an Application Foundations paradigm. There was also passionate debate about the taxonomy itself. Did we come up with one that will speak to developers? Did we neglect some important piece of functionality? Should this or that second-level item be a top-level category or vice versa? To help structure the broader debate to come, I’d like to provide some background for the choices proposed here.

Principles for Thinking about these Foundations

Bearing in mind that we are looking for a best fit to structure discussion, not a perfect dissection, here are some key principles for thinking about these Foundations:

  • Although this exercise is motivated by the desire to meet developer needs, we sought labels that would be meaningful for end users as well. We looked for terms we thought would speak to those audiences about both desirable qualities of the platform and current pain points that we need to address.
  • These topics were derived by looking at W3C’s current priorities and discussions about upcoming work. W3C’s agenda is in constant motion, and this taxonomy will only be useful so long as it aligns with priorities. But the river bed shapes the river and vice versa.
  • Because the focus is on current developer pain points, we do not attempt to fit all of W3C’s work into the eight categories. We are committed to our entire agenda, but this particular exercise is limited in scope. For the same reason, we do not attempt to represent all completed work in this categorization. While we might want to see how broadly we could apply this taxonomy, our priority project is to enable developers today and tomorrow.
  • Because the focus is on W3C’s agenda, these Foundations do not attempt to represent all the things we think of as being important to the Web, HTTP and JavaScript being two notable examples. Moreover, many key IETF standards (HTTP, URL, IPv6) might more properly be defined as part of the kernel – rather than a higher level Foundation.

Putting the Foundations to Use

Although this framework is meant initially only as a communications vehicle —a way of describing the substantial work we have to do to enhance the OWP— we may find other uses later. Once fleshed out and road-tested, for example, the W3C Technical Architecture Group (TAG) might use this model for architectural discussions about the Open Web Platform.

Ultimately, with such a framework, it becomes easier to identify what is missing from the platform, because we will think more cohesively about its key components. And where there are similar capabilities (e.g. different functions that show up in the same Foundation), it will make it easier to identify where synergies can simplify or improve the platform.

By definition, Foundations are common subsystems useful not only for “horizontal applications”, but also for a variety of industries such as digital publishing, automotive, or entertainment. In a separate exercise we plan to work with those industries to create a view of the Foundations specific to what they need from the Open Web Platform.

So let’s get started. In each paragraph below, I outline why we think this area deserves to be a Foundation. I list some absolutely critical problems the community is currently addressing. This will help motivate why each Foundation was chosen, and the technology development required to give rise to the next generation Web.

Application Foundations

Security and Privacy

The Web is an indispensable infrastructure for sharing and for commerce. As we have created the OWP, we have become increasingly sensitive to making this a secure infrastructure. See, for example, our 2013 “Montevideo” statement calling for greater Internet security.

The vulnerabilities have become increasingly prominent. They vary in range and style. There are vulnerabilities that result from criminal exploitation of security holes for financial gain. There are numerous situations where information and communications that were intended to be private have found their way into unauthorized hands.

From a pure technical point of view, there is a tremendous amount of security research and there is knowledge on how to make an infrastructure secure. Many security advances are available in devices that are connected to the Web today. Nonetheless, security exposures abound: because it is too difficult for applications to leverage the security that is available; because new security techniques are not yet in place; and because users are not encouraged to rely on strong security.

We do not expect all developers to be security experts, so we must make it easier to use the security mechanisms of operating systems. The Crypto API provides access to some of those services from within JavaScript, and is already being deployed. This trend will be extended as platforms add stronger security such as multi-factor authentication, biometrics, smartcards, all discussed at our September Workshop on Authentication, Hardware Tokens and Beyond. We also need to add a more comprehensive Identity Management system which discourages weak passwords.
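
As a rough sketch (assuming the promise-based form of the API as it stabilized), hashing a message with the Web Crypto API from JavaScript looks like this:

  // Compute a SHA-256 digest of some data using window.crypto.subtle.
  var data = new TextEncoder().encode('message to protect');
  window.crypto.subtle.digest('SHA-256', data).then(function (digest) {
    // digest is an ArrayBuffer; 32 bytes for SHA-256.
    console.log('Digest length in bytes:', digest.byteLength);
  });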

To strengthen this Foundation, we are working closely with a number of organizations, including the IETF, FIDO Alliance, and Smartcard Alliance.

Core Web Design and Development

Developers use many widely deployed front end technologies for structure, style, layout, animations, graphics, interactivity, and typography of pages and apps. HTML5 brought native support for audio and video, canvas, and more. Dozens of CSS modules are used for advanced layout, transformations, transitions, filters, writing modes, and more. SVG is now widely supported for scalable graphics and animations, and WOFF is beautifying the Web and making it easier to read.

Still, the work is not complete. Much new work will be driven by the adoption of the Web on a broader set of devices, including mobile phones, tablets and e-book readers, televisions, and automobiles. The responsive design paradigm helps us think about how we need to enhance HTML, CSS, and other APIs to enable presentation across this wider set of devices.

One exciting direction for this Foundation is Web Components, which will make it easier for developers to carry out common tasks with reusable templates and code modules, all leveraging standards under the hood.
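
For a flavour of what that looks like, here is a minimal custom element sketch, written against the custom elements API as it later stabilized (the drafts current at the time of writing used a different registration call); the <user-card> element name is purely illustrative.

  // Define a reusable <user-card> element: a template plus behaviour,
  // all standards under the hood, no framework required.
  class UserCard extends HTMLElement {
    connectedCallback() {
      var name = this.getAttribute('name') || 'world'; // example element, illustrative only
      this.innerHTML = '<p>Hello, ' + name + '</p>';
    }
  }
  customElements.define('user-card', UserCard);
  // Used in markup as: <user-card name="Ada"></user-card>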

Another area of anticipated work will be driven by a more complete integration of digital publishing into the Web. In the past, advanced styling and layout for magazines has remained an area where special purpose systems were required. In this Foundation, we will ensure that we have the primitives for advanced styling and layout so that all publishing can be done interoperably on all Web devices.

Our Test the Web Forward activity, though relevant across the Foundations, has been particularly influential for Core Web Design and Development, and we invite the community to contribute to that active testing effort.

Device Interaction

Closely related to the Core Foundation is the Device Interaction Foundation, which describes the ways that devices are used to control or provide data to applications. New Web APIs are proposed weekly to give access to all of the features offered by supporting devices. For mobile phones, APIs exist or are in development for access to camera, microphone, orientation, GPS, vibration, ambient light, pointer lock, screen orientation, battery status, touch events, bluetooth, NFC, and more.
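
As a small sketch (my example, not from the post), two of the APIs listed above can be feature-detected and used like this:

  // Vibration API: buzz briefly where the hardware supports it.
  if ('vibrate' in navigator) {
    navigator.vibrate(200); // duration in milliseconds
  }

  // Battery Status API: report the current charge level.
  if ('getBattery' in navigator) {
    navigator.getBattery().then(function (battery) {
      console.log('Battery level:', Math.round(battery.level * 100) + '%');
    });
  }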

The next generation of Web attached devices will introduce new challenges. For instance, the Automotive and Web Platform Business Group is developing APIs to access information about vehicle speed, throttle position, interior lights, horn, and other car data that could help improve driving safety and convenience. We anticipate some of that work will advance to the standards track. In general, wearables, personal medical equipment devices, home energy management devices, and the Internet of Things will drive developer demand for data in applications, and for Web abstractions to simplify what will be new complexity in underlying networks. To achieve that simplicity for developers, the TAG, Device APIs Working Group, Web Apps Working Group, and Systems Applications Working Group all have a role to play in capturing good practices for API design.

Application Lifecycle

The proliferation of environments —both mobile and non-mobile— in which an application may run has created new challenges for developers to satisfy user expectations. People expect their apps to be useful even when there is no network (“offline”), to do the right thing when the network returns (“sync”), to take into account location-specific information (“geofencing”), to be easy to launch on their device (“manifest”), to respond to notifications (from the local device or remote server), and so on. The Application Lifecycle Foundation deals with the range of context changes that may affect an application. For example, developers have made clear that AppCache fails to meet important offline use cases, so we must come up with a superior solution.

The emerging approach (“Workers”) for addressing many of these lifecycle requirements involves spawning important tasks as asynchronous processes outside of an application. For instance, a Worker can be used to manage a cache and update it according to network availability, or to receive server-sent notifications, even when an application is not actively running. Enhancing this Foundation will enable developers to create superior user experiences.
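
A minimal sketch of that pattern is the Service Worker registration call; the '/sw.js' worker script path here is hypothetical.

  // Register a Service Worker that can cache resources and respond to
  // fetches or notifications even when the page is not open.
  if ('serviceWorker' in navigator) {
    // '/sw.js' is a hypothetical worker script path.
    navigator.serviceWorker.register('/sw.js').then(function (registration) {
      console.log('Service Worker registered with scope:', registration.scope);
    }).catch(function (error) {
      console.log('Service Worker registration failed:', error);
    });
  }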

Media and Real-Time Communications

A number of communications protocols and related APIs continue to serve developers well, from HTTP to XMLHttpRequest to Web Sockets. But to meet the growing demand for real-time communications and streaming media, we must add new capabilities, the focus of this Foundation.

The promise of WebRTC is to make every single connected device with a Web browser a potential communications end point. This turns the browser into a one-stop solution for voice, video, chat, and screen sharing. A sample use case driving interest in real-time in the browser is enabling “click-to-call” solutions for improved customer service. WebRTC has the potential to bring augmented reality to the Web and create a brand new class of user experiences – an exciting direction for this Foundation.
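
The entry point for such applications is getUserMedia; a minimal sketch (shown here in its later, promise-based mediaDevices form) captures the local camera and microphone before handing the stream to a peer connection.

  navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(function (stream) {
      // Show the local stream; a real call would also add it to an
      // RTCPeerConnection to reach the remote party.
      document.querySelector('video').srcObject = stream;
    })
    .catch(function (error) {
      console.log('Could not access camera/microphone:', error);
    });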

For audio and video, developers will have a variety of tools to manipulate media streams, edit audio input, and send output to multiple screens (“second screen”). This last capability is of particular interest to the entertainment industry. For example, in the US, a majority of people have a second device nearby while watching television, allowing for new interactive experiences such as social interactions or online commerce.

Performance and Tuning

Open Web Platform functionality has moved steadily to the client side, which creates a variety of new challenges related to security and application lifecycle management, but especially to performance. JavaScript engines have improved dramatically in the past few years. But for high-end games, media streams, and even some simple interactions like scrolling, we still have much to do so that developers can monitor application performance and code in ways that make the best use of resources. This is the focus of our Performance and Tuning Foundation.

Today we are working on APIs for performance profiling such as navigation timing and resource hints. In various discussions and Workshops, people have asked for a number of enhancements: for understanding load times, enabling automatic collection of performance data, providing hints to the server for content adaptation, improving performance diagnostics, managing memory and garbage collection, preserving frame rates, using the network efficiently, and much more.
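
For instance, a page can already use the Navigation Timing API to measure its own load performance; a minimal sketch:

  window.addEventListener('load', function () {
    var t = performance.timing;
    // Timestamps are in milliseconds since the epoch, so differences give durations.
    console.log('Time to first byte (ms):', t.responseStart - t.navigationStart);
    console.log('Page load (ms):', t.loadEventStart - t.navigationStart);
  });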

The responsive design paradigm mentioned in the Core Web Design and Development Foundation also plays a role in the world of performance: we can make better use of the network and processing power if we can take into account screen size and other device characteristics.

Usability and Accessibility

The richness of the Open Web Platform has raised new challenges for some users. It is great to be able to create an app that runs on every device, but is it easy to use or clunky? It’s great to offer streaming media, but do developers have the standards to include captions to make the media accessible?

Designers have pioneered a number of approaches (responsive, mobile first), that can improve accessibility and usability, and W3C’s Web Accessibility Initiative has developed some standards (such as WCAG2 and WAI-ARIA) to enable developers to build accessible applications. But we have more work to do to make it easier to design user interfaces that scale to a wide array of devices and assistive technologies. We have confidence that designers and developers will come up with creative new ways to use standards for new contexts. For example, the vibration API used by some mobile applications might offer new ways to communicate safely with drivers through the steering wheel in some automotive apps, and could also be used to create more accessible experiences for people with certain types of disabilities.

Less than one third of current Web users speak English as their native language and that proportion will continue to decrease as the Web reaches more and more communities of limited English proficiency. If the Web is to live up to the “World Wide” portion of its name, it must support the needs of world-wide users at a basic level as they engage with content in the various languages. The W3C Internationalization Activity pursues this goal in various ways, including coordination with other organizations, creation of educational materials, coordination on the work of other W3C groups, and technical work itself on various topics.

Services

Earlier I mentioned the pattern of widely used applications migrating “closer to the core.” While this is true for all the Foundations, it is especially clear in the Services Foundation, where today we are exploring the four most likely candidates for future inclusion.

The first is Web payments. Payments have been with us for decades, and e-commerce is thriving, predicted to reach $1.471 trillion this year, an increase of nearly 20% from last year. But usability issues, security issues, and lack of open standard APIs are slowing innovation around digital wallets and other ways to benefit payments on the Web. W3C is poised to launch a group to study the current gaps in Web technology for payments. The Payments group will recommend new work to fill those gaps, some of which will have an impact on other Foundations (e.g., Usability, Security and Privacy). Because a successful integration of payments into the Web requires extensive cooperation, the group will also liaise with other organizations in the payments industry that are using Web technology to foster alignment and interoperability on a global scale.

The second is annotations. People annotate the Web in many ways, commenting on photos or videos, when reading e-books, and when supporting social media posts. But there is no standard infrastructure for annotations. Comments are siloed in someone else’s blog system, or controlled by the publisher of an e-book. Our vision is that annotations on the Web should be more Web-like: linkable, sharable, discoverable, and decentralized. We need a standard annotation services layer.

The third is the Social Web. Consumer facing social Web services, combined with “bring your own device (BYOD)” and remote work policies in enterprise, have driven businesses to turn increasingly to social applications as a way to achieve scalable information integration. Businesses are now looking for open standards for status updates (e.g., Activity Streams) and other social data. These same standards will give users greater control over their own data and thus create new opportunities in the Security and Privacy Foundation as well.

The fourth is the Web of Data. The Semantic Web and Linked Data Platform already provide enhanced capabilities for publishing and linking data. These services have been used to enhance search engines and to address industry use cases in health care and life sciences, government, and elsewhere. But we know that more is necessary for developers to make use of the troves of data currently available. One upcoming activity will be to collect ontologies of how linked data should be organized for different applications (notably for search).

Conclusion

Web technology continues to expand by leaps and bounds. The core capability is growing, the application to industry is growing, and we continually find new devices for web technology and new use cases. To be able to focus on this expansion we need modular design, and a principle in modular design is to be able to clearly and succinctly talk about categories of function. Hopefully this post begins a healthy discussion about the framework for the Open Web Platform going forward.

As part of that discussion we will continue to develop a new Application Foundations Web page and invite feedback via our public Application Foundations wiki.

Acknowledgments

I acknowledge extensive discussions within the W3C Team, but especially with Phil Archer, Robin Berjon, Dominique Hazaël-Massieux, Ivan Herman, Ian Jacobs, Philippe Le Hégaret, Dave Raggett, Wendy Seltzer, and Mike Smith. At the Extensible Web Summit, I received great input from Axel Rauschmayer, Benoit Marchant, and Alan Stearns.

by Jeff Jaffe at October 14, 2014 05:45 PM

October 10, 2014

W3C Blog

This week: CSS 20th anniversary, autowebplatform progress, TimBL’s keynote, Physical Web, etc.

This is the 3-10 October 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Open Web & net neutrality

W3C in the Press (or blogs)

22 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at October 10, 2014 02:31 PM

October 08, 2014

koalie’s contemplations in markup

Before I started my day, I read Trouble at the Koolaid Point, by Seriouspony [who writes “I’m not linking it to the blog, and it won’t likely stay up long”]. I had not heard about her until very recently, and reading her account felt like a punch in the face. It has stayed with me since. I think it’s going to stay with me a while.

Midway through her account, Seriouspony wrote:

“This is the world we have created.”

Later in the day at work, I followed tweets and news of the Keynote on the future of the Web that Tim Berners-Lee gave at the opening of IPExpo in London. He said many inspiring things in his habitual humble manner, but one in particular resonated with me. It was in response to a question from the floor related to the Dark Web. I soon found it in Brian’s timeline:

(The Register also quoted Tim at the end of a piece they published after his keynote.)

Kevin read Seriouspony too; here is his advice, which I live by:

And finally, Amy retweeted this:

All is not white and all is not black, but there is some pretty dark grey stuff out there. Let’s be considerate of our fellow humans, please. Let’s stand up for ourselves. If the future is what we build, let’s build and nurture a world we can be proud of.

by koalie at October 08, 2014 08:26 PM

October 07, 2014

W3C Blog

Decision by consensus or by informed editor; which is better?

There has been discussion in the Web standards community about which is the better way to advance technical specifications: by a formal consensus process, or by having all decisions made by informed editors as they informally gather a consensus. After all, W3C has long considered consensus one of its core values. On the other hand, the WHATWG describes a process whereby the relevant editor makes the decisions by trying to see where the consensus is, and is explicit about eschewing formal consensus. Which approach is better?

False dichotomy!

In my view, there are advantages to either approach. Clearly when there is an excellent spec writer who works with colleagues there is tremendous efficiency to having decisions made by informed editors. People make excellent progress with this approach both in the WHATWG and in many W3C Groups, including Community Groups (which typically have a high degree of flexibility in how they approach spec writing). In W3C Working Groups it is often the case that an informed editor is able to rapidly make progress in writing a spec and produces Working Drafts with the results. A W3C Working Draft does not necessarily represent a consensus of the Working Group, as it is not a finished product of the group.

While rapid progress can be made by informed editors, W3C will not give its imprimatur that a specification is a “Standard”, or a W3C Recommendation, unless it goes through the formal consensus process.

Billions of people and millions of developers rely on Web standards. Since the Web was invented by W3C Director Tim Berners-Lee 25 years ago, it has become a core infrastructure for personal communications, commerce, education, and entertainment. While implementers of standards can rapidly interact with informed editors, it is important that the entire ecosystem has the confidence of due process and is assured that they have their say. This ecosystem includes those who implement standards in browsers, web site developers, app developers, users, people with disabilities, corporate IT, procurement officers, telecommunication firms, ISPs, publishers, news outlets, marketing professionals, on-line e-commerce sites, researchers, educators, textbook authors, and de jure standards organizations such as the International Organization for Standardization (ISO). Significantly, the community is global.

To appreciate why this is important and necessary, it is worthwhile to review some of the key principles of OpenStand and explain their motivation.

OpenStand’s five fundamental principles of standards development

  • Due process. Included in this requirement is the opportunity to appeal decisions. Even the best editors make mistakes. Even when they are sincere and knowledgeable, the process needs to give assurance that there is a fair mechanism for appeal to independent arbiters. Also included in due process are processes for periodic standards review. If the entire globe relies on the standards, and the ecosystem of people that are affected is diverse, it is essential that there be underlying due process.
  • Broad consensus. Included is that there are processes to allow all views to be considered across a range of interests. Again, it is the diverse range of interests that requires a consensus process. While much of the ecosystem might choose not to provide comments, they need to be assured that there are opportunities for the fair consideration of every issue.
  • Transparency. Included are public comment periods before approval and adoption. It is understood that experts in spec writing are able to iterate rather rapidly at a pace that the general public could not possibly appreciate. Hence it is essential that when a major level of progress has been made in some technical area that a Working Group declares that it is timely to adopt this major level of progress as a new standard – and provides comment periods of appropriate length for the ecosystem to weigh in.
  • Balance. Standards are not dominated by any particular person, company, or interest group. Hypothetically, a company, or a small number of related companies, might have the resources to invest in standards definition and (potentially) the market power to enforce their decisions. The web, being an infrastructure dedicated to the benefit of all people, needs assurance that it remains free of undue influence by interest groups. Again, although it is certainly possible for informed editors to shield web standards from such influence, it is critical that a process guarantees that there is openness to consensus by all.
  • Openness. The requirement that processes be open to all interested and informed parties appears to be one that can be achieved both by consensus processes as well as informed editor processes.

The consensus process has much to learn from decisions by informed editors

As described in OpenStand, the consensus process has the right properties for developing standards as critical and pervasive as web standards. However, there are tradeoffs, prominent among them that introducing review, and accountability to those review comments, typically has an impact on speed and agility. We have much to learn from other processes such as “decisions by informed editors”. We are currently looking at a prime example of that: HTML4 was standardized in 1999, and it has taken us 15 years to get to HTML5 – due to be standardized later this year. Three years ago we began revamping our processes in W3C to achieve much of the agility that the industry needs, without sacrificing OpenStand principles. Here are some of the key recent process innovations that W3C has taken to get the best of both worlds.

  • Community Groups. Three years ago we introduced Community Groups for topics not yet ready for standardization. Today over 4400 people work in 179 Community Groups in a far more agile fashion. We still withhold the imprimatur of standard, however, unless the work transitions to the formal Working Group process.
  • Process Revision. We have revised the W3C Process, eliminating unnecessary process steps, and giving WGs more latitude in how they achieve the key requirements of wide review.
  • Modularity. We recognize that large monolithic specs are difficult to advance. CSS2.1 took 13 years to complete after CSS2, so for further enhancements we have modularized the CSS spec. Several CSS Level 3 specs are already at the stable “Candidate Recommendation” level; some are already Recommendations.
  • Improving W3C’s plans for spec iteration. When the HTML group decided to build a plan to get HTML5 to Recommendation, they simultaneously planned for an HTML5.1 in 2016. There is also discussion of modularizing HTML to allow parts to proceed more rapidly.
  • Errata management. Implementations evolve continuously, and we discover bugs in our Recommendations. We need to improve our processes to more rapidly reflect fixes to our Recommendations among implementers. This is done well in the WHATWG, less well in W3C, and we need to improve. Based on community input, I recently raised an issue to work on improving this.

So in the end, which is better?

There are advantages to having decisions made by informed editors, but W3C must keep its commitment to the OpenStand consensus principles, for the good reasons described above. At the same time, we must improve our processes to foster agility, learning from other approaches.

by Jeff Jaffe at October 07, 2014 06:11 PM

October 05, 2014

koalie’s contemplations in markup

Drawing of Pomponette, ink and pastel

Pomponette is a sweet female cat that lives in the neighbourhood. I don’t know if she has a home or just owns the neighbours instead. Pomponette is what we call her; the neighbour calls her Mimine.

She showed up in our garden when, I remember, Adrien was only a few weeks old. She’s friendly, albeit a little wary, and she shows up most days. She very much likes to be petted but will not settle on anyone’s knees or be held. There is a lot in her face, size and build that reminds me of Emu, my own cat, who’s been gone more than a year now, so Pomponette does linger now. This is nice.

Yesterday we found her curled up in an empty flower pot, sleeping in the morning sun. As she heard my approaching footsteps, she raised a cocked head before yawning and stretching.

Made on paper (12×12 cm) with a Pentel Brush Pen, charcoal, and white, brown and terra cotta pastels.


by koalie at October 05, 2014 09:17 AM

October 03, 2014

W3C Blog

This week: W3C turned 20, Typography, Mozilla/Ford Open Web Fellowship, etc.

This is the 26 September – 3 October 2014 edition of a “weekly digest of W3C news and trends” that I prepare for the W3C Membership and public-w3c-digest mailing list (publicly archived). This digest aggregates information about W3C and W3C technology from online media —a snapshot of how W3C and its work is perceived in online media.

W3C and HTML5 related Twitter trends

[What was tweeted frequently, or caught my attention. Most recent first]

Open Web & net neutrality

W3C in the Press (or blogs)

9 articles since the last Digest; a selection follows. You may read all articles in our Press Clippings page.

by Coralie Mercier at October 03, 2014 03:35 PM

October 01, 2014

W3C Blog

Thank you for the warm 20th birthday greetings!

Look at a Little History of the World Wide Web and scroll down to 1994. Twenty years ago today:

“1 October World Wide Web Consortium founded.”

We are turning 20 today!

@Yogesh asked “What’s the plan for celebration? :-)”

@brucel nailed it almost entirely:

Please join us for the celebration. We’ll discuss the future of the Web and W3C on 29 October in Santa Clara, California. Register now for the W3C 20th Anniversary Symposium and Gala Dinner.

Finally, thank you all for the avalanche of birthday greetings. Creative, multilingual, nerdy, poetic, endearing; all warmed our collective heart, and at some point I could not keep up with fav’ing (but will catch up).

Cake!

Screenshot of 1998:

All the things proper cheer:

Japanese greeting, bubble text style:

18th birthday of PNG, our first REC:

From Inria, our second Host:

A suspiciously harmless and cute squirrel from our friends at @w3cmemes:

Thank you!

by Coralie Mercier at October 01, 2014 04:31 PM

September 28, 2014

koalie’s contemplations in markup

koaliemoon

I’m running in wide, long, carpeted corridors. The place is gigantic. I am in America, this is certain. The mix of people I see, however, is strange. Clusters of students carrying books, wealthy-looking elderly people in formal dress.

Even a whole burlesque party, progressing slowly through a massive hall, women laughing loudly at men’s jests and waving colourful feather boas. I slithered among them in a hurry. I reached three sets of double wooden doors leading to a bigger area. A dead end it seemed. I doubled back, paused.

High-heel shoes, glittering slender naked body, small firm breasts; she wore a costume made of golden silk and rhinestones, and two fine chains of gold crossed her chest and rippled on her shoulders. Two people at her sides held her graceful hands. “Make way!” one of them said. “Make way! The artist needs to prepare for the performance!” She seemed absent; is she there of her own will? Not my business, I must be running again.

More corridors. Marble halls and intricate wooden staircases. More young people in groups, dressed in everyday clothes. I’ll take a left here; it has to be this way.

“Are you the seven-bell Dane?” a young man asks behind me. I slow down, turn to him, frown and tilt my head. Seven-bell? I have no idea what he’s talking about. I tell him so in my mind. I am in such a rush, can’t he see that? He continues, unsure: “Err, are you… Dane?” Already he raises a hand in an apologetic gesture and turns away. I’m half French, half Italian, and I’m lost, dammit!

I take a left, running – I’m so late by now. The hallway leads me to a velvet-padded lecture hall. An elegant old woman comes forward. “Are you lost, child?” In my mind I tell her I left all my things in a meeting room and I’ve been running like crazy for too long to find it again. I don’t know where I am, or which room I left. I don’t even know where to start. Purple velvet – what is this room? “Sorry, I’ve got to go!” I jumped into what had become a thirty-foot purple velvety cliff. I dived in low gravity, bounced off a purple cushion at an angle, kicked at another one and landed at the bottom.

by koalie at September 28, 2014 10:59 AM

September 27, 2014

koalie’s contemplations in markup

Japanese woman at Stata Center, Jun-Sep 2014

A Japanese woman from another century walks on a path in front of the Ray and Maria Stata Center.

Made on iPad mini between June and September 2014, using ArtRage.


by koalie at September 27, 2014 09:34 PM

September 26, 2014

koalie’s contemplations in markup

20140926-233656-85016262.jpg

A clearing in the forest. A medium canvas on a wooden easel. I dreamt I was painting a woman.

She was standing, her back to me, in a pastel pink satin and organza dress. I was painting her neck, the fine strands of wavy hair rippling under her large-brimmed hat, around her silky shoulders.

My brush gave her life. She was free from the canvas and stood before me gazing at the forest and humming to herself, as I worked on her green-grey hat. It stirred gently in the wind and so did her auburn hair curling around her neck.

The forest murmured in the wind. The canopy swayed and rustled; patches of sunlight danced on the ground. I kept weaving intricate straw braids on her hat. In a strong gust of wind, leaves fell from the trees – we shivered.

I stepped back when I was done and contemplated the canvas. Such disappointment! I looked at my fat brush, grudgingly. This wasn’t the right tool for such delicate work! Yet it seemed so perfect, so real moments before.

It was a beautiful dream within a strange dream.

I don’t paint very often and I don’t have an illustration of the mysterious auburn belle in pink, so all I can think of is this yellow iris in my parents’ garden that I drew on iPad last May.


by koalie at September 26, 2014 08:37 PM