February 05, 2016

W3C Blog

Presentation on PWP planned at the EPUB Summit

As reported a few days ago, the European Digital Reading Laboratory (EDRLab) is organizing an EPUB Summit in Bordeaux, France, in early April 2016. I am happy to be presenting our work on Portable Web Publications, one of the major topics of discussion at W3C on the convergence of the Open Web Platform and the goals of the Digital Publishing community. Also, W3C Members are entitled to a 50€ discount on the registration fee if they register before 15 February. See you in Bordeaux!

by Ivan Herman at February 05, 2016 10:12 AM

February 03, 2016

W3C Blog

Summaries of TPAC2015 Breakout Sessions

During TPAC (“Technical Plenary / Advisory Committee”) every year, W3C hosts a Technical Plenary with panels and presentations that bring participants together. For a few years now, we’ve organized most of the plenary as “camp-style” Breakout Sessions, and all participants are invited to propose Breakout Sessions. The meeting attendees build the Breakout Sessions Grid early in the day, drawing from ideas socialized in advance and new ideas proposed on that day.

TPAC2015 was an extremely successful week of meetings. Nearly 580 people attended, 43 Working Groups met face-to-face, and participants organized 50 Breakout Sessions on topics including network interactions, device synchronization, Web Payments, the Social Web, testing, the Web of Things, distributed Web applications, video processing, Web-based signage, digital marketing, and privacy and permissions, to name just a few. Please learn more in the W3C CEO report on TPAC2015 and IETF94.

A few summaries

We invite you to read the summaries of a few of these breakouts, excerpted here:

Network Interactions

Network Interactions, proposed by Dominique Hazaël-Massieux, reviewed the outcomes of the recent GSMA/IAB MaRNEW workshop and looked at various cases where additional interaction with the network could be applied: WebRTC optimization, network usage adaptation based on user data allowance, and overall optimization of radio usage. The discussion of how and when network operators would want to accommodate more specific requests from the application layer for control of, or information about, their networks remained inconclusive on a way forward.

FoxEye – video processing

FoxEye – video processing, proposed by Chia-Hung Tai and Tzuhao Kuo, aimed at bringing more power to the Web to make it friendlier for video processing and computer vision. Issues raised during the session were filed in the GitHub repository for tracking.

Cross-device synchronization

Cross-device synchronization, proposed by François Daoust, explored cross-device synchronization scenarios, including shared video viewing, lip-sync use cases, distributed music playback, video walls, cross-device animations, etc. The Timing Object specification defines an API to expose cross-device sync mechanisms to Web applications. Interested parties are invited to join the Multi-Device Timing Community Group to continue the work on this specification.
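For readers unfamiliar with that draft, the sketch below shows roughly how a Web application might follow such a shared timeline. It assumes the query()/update() shape discussed in the Community Group's Timing Object drafts; the class declaration, the timing-provider wiring, and the drift threshold are illustrative assumptions rather than a confirmed API, and a real page would rely on an actual implementation (e.g. a polyfill) of the interface declared here.

```typescript
// A minimal sketch of cross-device media sync against a timing object,
// assuming the draft query()/update() API shape; not a confirmed interface.
declare class TimingObject {
  constructor(options?: { provider?: unknown });
  query(): { position: number; velocity: number; timestamp: number };
  update(vector: { position?: number; velocity?: number }): void;
  addEventListener(type: "change" | "timeupdate", cb: () => void): void;
}

const timing = new TimingObject(); // hypothetically synced across devices via a timing provider
const video = document.querySelector("video")!;

// Keep the local video element aligned with the shared timeline.
timing.addEventListener("change", () => {
  const { position, velocity } = timing.query();
  if (Math.abs(video.currentTime - position) > 0.1) {
    video.currentTime = position; // re-anchor if drift exceeds 100 ms
  }
  if (velocity === 0) {
    video.pause();
  } else {
    video.play();
  }
});
```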

How blockchain could change the Web-based content distribution

How blockchain could change the Web-based content distribution, proposed by Shigeru Fujimura and Hiroki Watanabe, covered the mechanics of blockchain and its potential for Web-based content distribution, followed by an open discussion of business models, in particular the incentives for continuing to maintain a blockchain.

Requirements for Embedded Browsers needed by Web-based Signage

Requirements for Embedded Browsers needed by Web-based Signage, proposed by Kiyoshi Tanaka and Shigeru Fujimura, started with a presentation of the features of Web-based signage and the requirements they place on browsers. API ideas such as an auto-pilot API and a rich presentation API were shown and discussed, along with the Working Groups in which such APIs might be considered. The results of this session were provided to the Web-based Signage Business Group and fed into its review of a draft charter for a proposed Web-based Signage Working Group.

HTMLCue

HTMLCue, proposed by Nigel Megitt, discussed the idea of a new kind of Text Track Cue which would allow any fragment of HTML+CSS to be used to modify the display on a target element in synchronization with a media element’s timeline. Different views were expressed during the discussion, and two actions were noted. Other next steps include summarizing the HTMLCue proposal in a clear document.

Webex – how’s it going?

Webex – how’s it going?, proposed by Ralph Swick and Nigel Megitt, was a feedback-gathering session to understand the experience, particularly of Working Groups, since W3C moved from Zakim to Webex for audio calls. Some of the issues raised can be resolved through best practices; Ralph Swick offered to handle the others off-line.

Distributing Web Applications Across Devices

Distributing Web Applications Across Devices, proposed by Mark A. Foltz, discussed the potential for creating a new class of Web applications that can be distributed among multiple devices and user agents, instead of executing within the context of a single device/user agent.

Read more

The well-attended Breakout Sessions during TPAC are an opportunity to meet and liaise with participants from other groups, brainstorm ideas, and coordinate solutions to technical issues. Although participation in TPAC is limited to those already in W3C groups, the TPAC proceedings are public, including the TPAC2015 Plenary Breakout Sessions records, which we invite you to read.

by Xueyuan Jia at February 03, 2016 10:47 AM

February 01, 2016

W3C Blog

Feedback on European Banking Authority Discussion Paper

In December 2015 the European Banking Authority (EBA) announced a consultation on RTS on strong customer authentication and secure communication under PSD2. The consultation states: “The revised Payment Services Directive (PSD2) will mandate the EBA to deliver Regulatory Technical Standards on this topic, which the EBA is required to deliver by January 2017. Prior to starting the development of these requirements, the EBA is issuing a Discussion Paper, with a view to obtaining early input into the development process. Responses can be submitted until 8 February 2016.”

Today Harry Halpin (W3C Team Contact) and Ian Jacobs (Web Payments Activity Lead) submitted a response to the discussion paper. We did so as individuals, not on behalf of the W3C. We hope that our response will foster a more comprehensive and ongoing collaboration between the EBA and W3C.

Below we list the questions for which we provided feedback. The Web Payments Interest Group also plans to discuss the potential impact of PSD2 on the Web at its February face-to-face meeting.

#2: Which examples of possession elements do you consider as appropriate to be used in the context of strong customer authentication, must these have a physical form or can they be data? If so, can you provide details on how it can be ensured that these data can only be controlled by the PSU?

In terms of possession elements, it is difficult for any possession element or even behavior-based characteristic to naturally fulfill strict security requirements. Indeed, as shown by various high-profile attacks on biometrics, most of these elements can be defeated by a determined attacker. The single most important characteristic is that any secret key material be adequately defended from malicious access and used appropriately. While embedding such secret key material in hardware tokens is usually a step forward, high-security storage is also possible in software. Note that hardware tokens with poor security that are difficult to update may pose their own problems (including signatures created using broken hash functions). Thus, we encourage open innovation via standards for possession elements both in hardware and in software.

#3: Do you consider that in the context of “inherence” elements, behaviour-based characteristics are appropriate to be used in the context of strong customer authentication? If so, can you specify under which conditions?

Privacy is a critical consideration when using behavior-based characteristics. In particular, behavior-based characteristics should be determined locally on the client side, and appropriate user controls and awareness should be provided. Failure to exercise this precaution could lead to violations of data protection regulations.

#5: Which challenges do you identify for fulfilling the objectives of strong customer authentication with respect to dynamic linking?

The EBA should favor standards for strong authentication that may be implemented on royalty-free terms, for the entirety of the authentication stack. In addition, solutions should allow for multiple types of possession elements as different applications will demand different levels of assurance.

#14: Can you indicate the segment of the payment chain in which risks to the confidentiality, integrity of users’ personalised security credentials are most likely to occur at present and in the foreseeable future?

Historically, risks are most acute at the weakest link in the transaction, particularly in terms of user control over their credentials. One of today’s most important attack vectors is phishing, since ordinary users may have trouble recognizing faked websites. In collaboration with the FIDO Alliance, W3C is working on standards to help address this problem. Cloud computing has given rise to another issue that must be considered: long-term storage of credentials. The FIDO approach to be standardized at W3C will reduce the risk of attacks on backend credential databases, as no symmetric authentication credentials (such as passwords) are stored on the server.

#17: In your opinion, is there any standards (existing or in development) outlining aspects that could be common and open, which would be especially suitable for the purpose of ensuring secure communications as well as for the appropriate identification of PSPs taking into consideration the privacy dimension?

W3C is currently developing (or expects to develop) open standards in several relevant areas:

  • Web cryptography, to enable developers to implement secure application protocols on the level of Web applications, including message confidentiality and authentication services, by exposing trusted cryptographic primitives from the browser (see the sketch after this list).
  • Web application security, to ensure that Web applications are delivered as intended, free from spoofing, injection, and eavesdropping.
  • Strong authentication, to reduce the use of shared secrets (i.e., passwords) as authentication credentials, facilitating instead multi-factor authentication support as well as hardware-based key storage.
  • Hardware security, to provide Web applications with access to secure services enabled by hardware modules.
  • Payment initiation, to make payments easier and more secure on the Web, by streamlining checkout and making it easier to bring new and secure payment methods to e-commerce.
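To make the first item above concrete, here is a minimal sketch (not part of our submitted response) of a Web application using the browser-exposed Web Cryptography API to compute a SHA-256 digest; the message and the hex formatting are illustrative only.

```typescript
// A minimal sketch using the W3C Web Cryptography API (crypto.subtle),
// computing a SHA-256 digest of a message in the browser.
async function sha256Hex(message: string): Promise<string> {
  const data = new TextEncoder().encode(message);            // UTF-8 bytes
  const hash = await crypto.subtle.digest("SHA-256", data);  // trusted primitive exposed by the browser
  return Array.from(new Uint8Array(hash))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Illustrative usage; the payload string is an assumption for the example.
sha256Hex("example payment payload").then(console.log);
```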

#18: How would these requirements for common and open standards need to be designed and maintained to ensure that these are able to securely integrate other innovative business models than the one explicitly mentioned under article 66 and 67 (e.g. issuing of own credentials by the AIS/PIS)?

The discussion paper refers to EBA requirements for “Common and open” standards and poses a challenge in provision 63 to ‘Define what makes a standard “common” and “open”‘. We would like to bring to the EBA’s attention the Open Stand statement of principles for open standards development published in 2012 by W3C, the IETF, the Internet Architecture Board, the Internet Society, and the IEEE. We hope that these principles can inform the EBA discussion about open standards.

In W3C’s experience, global participation in transparent processes, as expressed in the Open Stand principles, is critical to ensuring that regional needs are met, and that technology achieves global adoption. The discussion paper includes this statement:

“Article 98 states that EBA shall develop draft regulatory technical standards specifying “the requirements for common and secure open standards of communication.” However, PSD2 does not mandate EBA to develop or maintain these open and common standards of communication themselves or to appoint a central entity in charge of developing or maintaining these standards.”

Successful open standards reflect the needs and input of diverse global stakeholders. W3C recommends that the EBA adopt relevant existing open standards from W3C, IETF, and similar organizations, or join the efforts of those organizations to develop new global standards in support of PSD2. We believe that in collaboration with the EBA, we can help achieve the goal cited in the discussion paper to “ensure the interoperability of different technological communication solutions” where those solutions involve the Web.

#19: Do you agree that the E-IDAS regulation could be considered as a possible solution for facilitating the strong customer authentication, protecting the confidentiality and the integrity of the payment service users’ personalised security credentials as well as for common and secure open standards of communication for the purpose of identification, authentication, notification, and information? If yes, please explain how. If no, please explain why.

Given the volume of cross-border transactions (as well as the anticipated growth in that area), any solution should not be bound only to national or even European regulations, but should be ultimately compatible with them. Thus, E-IDAS will help, but is not sufficient. To help ensure compatibility of global technology with E-IDAS and other regional requirements, we recommend direct participation in the global standards processes.

by Ian Jacobs at February 01, 2016 09:05 PM

January 29, 2016

W3C Blog

Digital Publishing Event in Bordeaux

The newly created European Digital Reading Laboratory (EDRLab) is organizing an EPUB Summit in Bordeaux, France, in April 2016. It will be an important moment of collaboration among the various players of the Digital Publishing ecosystem, including the Readium Foundation, IDPF, experts and representatives of digital publishers, and also W3C. The relationship between Digital Publishing and the advances of the Open Web Platform will be an important topic on the agenda, closely related to the work of the W3C Digital Publishing Activity. For further details, please consult the Event’s Press Release (also available in French). I am looking forward to participating in the discussion!

by Ivan Herman at January 29, 2016 03:03 PM

January 14, 2016

ishida >> blog

What characters are in or not in encoding X?

I just received a query from someone who wanted to know how to figure out what characters are in and what characters are not in a particular legacy character encoding. So rather than just send the information to her I thought I’d write it as a blog post so that others can get the same information. I’m going to write this quickly, so let me know if there are parts that are hard to follow, or that you consider incorrect, and I’ll fix it.

A few preliminary notes to set us up: When I refer to ‘legacy encodings’, I mean any character encoding that isn’t UTF-8. Though, actually, I will only consider those that are specified in the Encoding spec, and I will use the data provided by that spec to determine what characters each encoding contains (since that’s what it aims to do for Web-based content). You may come across other implementations of a given character encoding, with different characters in it, but bear in mind that those are unlikely to work on the Web.

Also, the tools I will use refer to a given character encoding using the preferred name. You can use the table in the Encoding spec to map alternative names to the preferred name I use.

What characters are in encoding X?

Let’s suppose you want to know what characters are in the character encoding you know as cseucpkdfmtjapanese. A quick check in the Encoding spec shows that the preferred name for this encoding is euc-jp.

Go to http://r12a.github.io/apps/encodings/ and look for the selection control near the bottom of the page labelled show all the characters in this encoding.

Select euc-jp. It opens a new window that shows you all the characters.

picture of the result

This is impressive, but so large a list that it’s not as useful as it could be.

So highlight and copy all the characters in the text area and go to https://r12a.github.io/apps/listcharacters/.

Paste the characters into the big empty box, and hit the button Analyse characters above.

This will now list for you those same characters, but organised by Unicode block. At the bottom of the page it gives a total character count, and adds up the number of Unicode blocks involved.

picture of the result
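If you prefer to answer the question programmatically rather than through the apps, a rough sketch follows. It assumes a runtime whose TextDecoder follows the Encoding specification for legacy encodings (current browsers, or Node.js built with full ICU), and it only walks one- and two-byte sequences, so encodings with longer sequences (euc-jp has some three-byte ones) would need the loop extended. The resulting set can also answer the "not in encoding X" question below, by checking membership for each character of a Unicode block.

```typescript
// A rough sketch: enumerate the characters reachable from 1- and 2-byte
// sequences in a legacy encoding, using a spec-conformant TextDecoder.
function charactersIn(label: string): Set<string> {
  const decoder = new TextDecoder(label, { fatal: true });
  const found = new Set<string>();
  const tryBytes = (bytes: number[]) => {
    try {
      const text = decoder.decode(new Uint8Array(bytes));
      if ([...text].length === 1) found.add(text); // keep only whole single characters
    } catch {
      // not a valid byte sequence in this encoding
    }
  };
  for (let a = 0; a < 256; a++) {
    tryBytes([a]);
    for (let b = 0; b < 256; b++) tryBytes([a, b]);
  }
  return found;
}

console.log(charactersIn("euc-jp").size); // characters reachable with 1–2 bytes
```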

What characters are not in encoding X?

If instead you actually want to know what characters are not in the encoding for a given Unicode block you can follow these steps.

Go to UniView (http://r12a.github.io/uniview/) and select the block you are interested in where it says Show block, or alternatively type the range into the control labelled Show range (eg. 0370:03FF).

Let’s imagine you are interested in Greek characters and you have therefore selected the Greek and Coptic block (or typed 0370:03FF in the Show range control).

In the edit buffer area (top right) you’ll see a small icon with an arrow pointing upwards. Click on this to bring all the characters in the block into the edit buffer area. Then hit the icon just to its left to highlight all the characters and then copy them to the clipboard.

picture of the result

Next open http://r12a.github.io/apps/encodings/ and paste the characters into the input area labelled with Unicode characters to encode, and hit the Convert button.

picture of the result

The Encoding converter app will list all the characters in a number of encodings. If the character is part of the encoding, it will be represented as two-digit hex codes. If not, and this is what you’re looking for, it will be represented as a decimal HTML escape (eg. &#880;). This way you can get the decimal code point values for all the characters not in the encoding. (If all the characters exist in the encoding, the block will turn green.)

(If you want to see the list of characters, copy the results for the encoding you are interested in, go back to UniView and paste the characters into the input field labelled Find. Then click on Dec. Ignore all ASCII characters in the list that is produced.)

Note, by the way, that you can tailor the encodings that are shown by the Encoding converter by clicking on change encodings shown and then selecting the encodings you are interested in. There are 36 to choose from.

by r12a at January 14, 2016 08:29 PM

January 11, 2016

W3C Blog

Supporting HTTPS and HSTS on w3.org

Photo showing LED lights on the W3C servers at MIT.

W3C advocates that the Web platform “actively prefer secure communication”. Thanks to recent work in the Web Application Security Working Group and supporting client implementations, and the deployment work of the W3C Systems Team, we are now able to provide HTTPS access to all W3C resources. All W3C documents, including Recommendations, DTDs, and vocabularies, will be available with the authentication, integrity protection, and confidentiality that HTTPS provides.

HTTPS deployment posed some challenges based on our commitment to preserve substantial archives of historic material (for which we could not simply assume that all links were scheme-relative or convert all included content to HTTPS) and the desire to maintain availability to older clients in the field that might support only HTTP. Accordingly, our setup makes use of the Upgrade Insecure Requests spec, but does not force HTTPS on those who start from HTTP.

In detail

The upgrade to HTTPS/HSTS support involves the following:

  • Support of the Upgrade-Insecure-Requests HTTP request header for transparently requesting the upgrade of HTTP requests to HTTPS ones. Note that you will only get the benefits of this feature if your browser sends this header.
  • Support of the Strict-Transport-Security HTTP response header (HSTS) for instructing browsers that they should transparently transform all HTTP requests into HTTPS ones for all access to the www.w3.org domain. All recent browsers support this header. It allows a site to be converted to HTTPS without needing to revise the content of all the legacy resources that may have hard-coded HTTP links (see the sketch after this list). We have been supporting this header on lists.w3.org and other domains for a long time.
    This status is cached in the browser for a given time and is refreshed each time you browse an HTTPS link to www.w3.org.
  • We’re not planning at this point in time to enforce server redirection of all HTTP requests to HTTPS for public resources. This is to avoid breaking older software that can’t be upgraded, such as software built into devices. This may be done later, at another milestone. As a consequence, if your browser doesn’t send the Upgrade-Insecure-Requests header, you’ll need to follow an HTTPS link pointing to www.w3.org to get the benefits of using HTTPS on our site.
  • Note that this change has no effect on namespaces. The actual namespace will continue to use HTTP, even if it is also served over HTTPS. This applies as well to XML DTD, Schema, and SGML DTD resources.
  • There may be some side-effects you need to be aware of:

    • Infinite loops may happen if, due to server or proxy configuration, there are hard-coded redirects to an HTTP link after the HSTS header is cached in your browser. Please send the URL to sysreq@w3.org so that we can fix it.
    • Mixed-content warnings. They will be raised if you’re using a browser that doesn’t support Strict-Transport-Security. The solution here is to update to a more recent browser.
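The following sketch (mentioned in the list above) illustrates the two headers in a toy Node.js setup. It is not w3.org's actual configuration; the certificate paths, max-age value, and redirect status are illustrative assumptions.

```typescript
// A minimal sketch (not w3.org's configuration) of a server that honours
// Upgrade-Insecure-Requests and emits an HSTS header over HTTPS.
import * as http from "node:http";
import * as https from "node:https";
import { readFileSync } from "node:fs";

// Plain-HTTP listener: only upgrade clients that ask for it, matching the
// "don't force HTTPS on older clients" policy described above.
http.createServer((req, res) => {
  if (req.headers["upgrade-insecure-requests"] === "1") {
    res.writeHead(307, {
      Location: `https://${req.headers.host ?? "www.example.org"}${req.url ?? "/"}`,
      Vary: "Upgrade-Insecure-Requests",
    });
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Served over plain HTTP for legacy clients.\n");
}).listen(80);

// HTTPS listener: ask browsers to remember to use HTTPS from now on.
https.createServer(
  { key: readFileSync("server.key"), cert: readFileSync("server.crt") },
  (req, res) => {
    res.writeHead(200, {
      "Strict-Transport-Security": "max-age=15552000", // ~180 days, an assumed value
      "Content-Type": "text/plain",
    });
    res.end("Served over HTTPS.\n");
  }
).listen(443);
```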

by Coralie Mercier at January 11, 2016 06:08 PM

January 05, 2016

W3C Blog

Shaping the WCAG 2.0 extensions

A few months ago, we announced an updated charter for the Web Content Accessibility Guidelines (WCAG) Working Group that allows the group to develop extensions to WCAG 2.0. Extensions will be separate guideline documents that increase coverage of particular accessibility needs. The group set to work right away to establish the framework for these extensions, because three task forces are already working on material intended to become extensions. To address questions of scope, conformance, and relationships among extensions, the Working Group today published Requirements for WCAG 2.0 Extensions. We seek wide review of this draft, before the requirements are finalized, to ensure that extensions will strike an appropriate balance between meeting the needs of users with disabilities, working within existing WCAG 2.0 conformance policies, and implementability.

The extension requirements have the following goals:

  1. Satisfy pre-existing requirements for WCAG 2.0,
  2. Ensure that web pages which conform to WCAG 2.0 plus an extension also conform to WCAG 2.0 on its own,
  3. Ensure that all WCAG extensions are compatible with each other,
  4. Define a clear conformance model for WCAG 2.0 plus extensions, and
  5. Ensure the conformance structure utilizes the WCAG 2.0 A/AA/AAA model.

The first two requirements aim to maximize compatibility of extensions with the WCAG 2.0 base standard. This will help retain the benefits of harmonization of web accessibility practices that WCAG 2.0 has brought, and help ensure that extensions can be easily applied to sites that conform to WCAG 2.0. Extensions, like WCAG 2.0, need to provide clear conformance requirements that offer a clear accessibility benefit, are technology neutral, work on the existing and future Web, and can have their success objectively verified. Extensions can augment WCAG 2.0 success criteria, but only in a way that raises the conformance bar, not in a way that lowers it. This ensures that sites that adopt one or more extensions remain interoperable with sites that conform to WCAG 2.0 on its own.

The third requirement, that extensions are compatible with each other, was a challenge for the Working Group. If there are conflicts between extensions, sites could be unable to conform to more than one extension, thereby making the extensions less universally applicable. On the other hand, the currently anticipated extensions are designed to address specific issues that have come up, and sites may primarily adopt the extension that addresses the needs of their own audience. The current wording of the requirement is that extensions should strive to avoid conflicting requirements, working within the Working Group to address issues found, but does not completely forbid conflicts. There are considerations in favor both of allowing and of forbidding conflicts between extensions, and input on this topic from people who expect to use extensions would be especially helpful at this stage.

The final requirements address the conformance model of extensions. For a site to conform to a WCAG 2.0 extension, it must first conform to WCAG 2.0 on its own. If it goes further to conform to an extension, the conformance claim for that extension is at the same level (A, AA, or AAA) as the base WCAG 2.0 conformance level for the site. Extensions may or may not provide success criteria at all three conformance levels, but there will be a way to conform to WCAG 2.0 plus extensions at any given conformance level.

It is important to recognize that extensions are based on WCAG 2.0, and are not intended to radically change the landscape of accessibility guidelines. They address specific issues that now need consideration because of technology changes or knowledge that has emerged since WCAG 2.0 was finalized. Future accessibility guidelines may go beyond this, recasting the guidelines to better address the technology evolution we now see and the whole of the web accessibility picture. When this happens, some recommendations that first appear in WCAG 2.0 extensions could appear in these later guidelines, but some might not. As the extensions mature, the Working Group will turn its attention to the longer timeline and consider the shape of future guidelines.

Your input on these extension requirements is important to the ultimate success of the extensions. Please review and comment on these proposed requirements. The Working Group requests input by 5 February 2016 so it can address any issues raised and turn its focus fully onto the extensions themselves.

by Michael Cooper at January 05, 2016 01:47 PM

January 02, 2016

ishida >> blog

New picker: Old English

Picture of the page in action.
>> Use the picker

Following closely on the heels of the Old Norse and Runic pickers comes a new Old English (Anglo-Saxon) picker.

This Unicode character picker allows you to produce or analyse runs of Old English text using the Latin script.

In addition to helping you to type Old English Latin-based text, the picker allows you to automatically generate phonetic and runic transcriptions. These should be used with caution! The transcriptions are only intended as a rough guide, and there may occasionally be slight inaccuracies that need patching.

The picture in this blog post shows examples of Old English text, and phonetic and runic transcriptions of the same, from the beginning of Beowulf. Click on it to see it larger, or copy-paste the following into the picker, and try out the commands on the top right: Hwæt! wē Gār-Dena in ġēar-dagum þēod-cyninga þrym gefrūnon, hū ðā æþelingas ellen fremedon.

If you want to work more with runes, check out the Runic picker.

by r12a at January 02, 2016 11:02 PM

January 01, 2016

W3C Blog

Job: China Business Development Lead & Marketing Coordinator

The World Wide Web Consortium (W3C) is looking for a new, full-time staff member to be located at Beihang University in Beijing, where W3C has its China Host, to fill two roles: China Business Development Lead (CBDL) (60%) and a W3C Marketing Coordinator (40%).

Responsibilities:

In the role of China Business Development Lead (60%) the individual will work with W3C’s Global Business Development Leader and with the W3C/Beihang Site Manager on executing a strategic plan to recruit Member organizations in mainland China, Hong Kong, Taiwan and Macao. The CBDL will work to identify business development opportunities and make the plans required to spearhead the desired results. These plans will include, but not be limited to, recruiting new Members, developing and executing marketing campaigns designed to recruit new Members and raise overall W3C visibility within the W3C/Beihang territory, driving sponsorship opportunities within the region, and working with the W3C global organization to improve W3C Offices’ efficiency in driving these activities.

In the role of Marketing Coordinator you will be a part of W3C’s global Marketing and Communications Team. The individual will work on developing and managing global marketing initiatives and campaigns in close coordination with the W3C Staff. Topic areas will primarily be Digital Publishing, Web Payments, Digital Marketing, Telecommunications, TV and Entertainment, Automotive and Web of Things. You will develop strategic messages to reach multiple audiences, including C-level executives and software developers and create content on the W3C Web site to support these messages. You will identify, review, and assess speaking and events opportunities. You will also strengthen W3C’s marketing and communications programs globally.

Requirements:

A bachelor’s degree in management, marketing, communications or related business degrees; at least four years of proven business development and marketing success, preferably in the high tech industry or in a member funded consortium; and the ability to work proactively and think strategically. Must be comfortable providing leadership to a virtual team and setting and achieving a variety of business development goals with a focus on new Member Recruitment and Marketing goals with an emphasis on outward facing messaging. Ability to understand global trends in information technology, with an emphasis on Web technologies, and a solid understanding of the Web industry is highly desirable. Additionally you should have evidence of solid marketing and communication skills; highly developed organizational, planning, and writing skills. Must be fluent in English as well as Chinese.

The position is based with the W3C Beihang team in Beijing, China. Must be able to attend evening and/or weekend meetings/events, and travel within China and to global W3C events.

Work may begin March 1, 2016.

To apply, please send a motivation letter, your resume or CV and copies of your Diplomas (including High School or equivalent) in electronic form to team-beihang-position@w3.org.

by Coralie Mercier at January 01, 2016 06:43 PM

ishida >> blog

New pickers: Runic & Old Norse

Picture of the page in action.
>> Use the picker

Character pickers are especially useful for people who don’t know a script well, as characters are displayed in ways that aid identification. These pickers also provide tools to manipulate the text.

The Runic character picker allows you to produce or analyse runs of Runic text. It allows you to type in runes for the Elder fuþark, Younger fuþark (both long-branch and short-twig variants), the Medieval fuþark and the Anglo-Saxon fuþork. To help beginners, each of the above has its own keyboard-style layout that associates the runes with characters on the keyboard to make it easier to locate them.

It can also produce a Latin transliteration for a sequence of runes, or automatically produce runes from a Latin transliteration. (Note that these transcriptions do not indicate pronunciation – they are standard Latin substitutes for graphemes, rather than actual Old Norse or Old English text. To convert Old Norse to runes, see the description of the Old Norse picker below. This will soon be joined by another picker which will do the same for Anglo-Saxon runes.)

Writing in runes is not an exact science. Actual runic text is subject to many variations depending on chronology, location and the author’s idiosyncrasies. It should be particularly noted that the automated transcription tools provided with this picker are intended as aids to speed up transcription, rather than to produce absolutely accurate renderings of specific texts. The output may need to be tweaked to produce the desired results.

You can use the RLO/PDF buttons below the keyboard to make the runic text run right-to-left, eg. ‮ᚹᚪᚱᚦᚷᚪ‬, and, if you have the right font (such as Junicode, which is included as the default webfont, or a Babelstone font), make the glyphs face to the left as well. The Babelstone fonts also implement a number of bind-runes for Anglo-Saxon (but are missing those for Old Norse) if you put a ZWJ character between the characters you want to ligate. For example: ᚻ‍ᛖ‍ᛚ. You can also produce two glyphs mirrored around the central stave by putting ZWJ between two identical characters, eg. ᚢ‍ᚢ. (Click on the picture of the picker in this blog post to see examples.)
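If you want to build such sequences in a script or test page rather than through the picker, a tiny sketch follows; the rune choices simply repeat the examples above, and whether ligatures or mirroring actually appear still depends on the font.

```typescript
// A small sketch of building the rune sequences described above with explicit
// Unicode control characters, so they can be pasted into the picker or a page.
const ZWJ = "\u200D"; // ZERO WIDTH JOINER: requests a bind-rune ligature (font-dependent)
const RLO = "\u202E"; // RIGHT-TO-LEFT OVERRIDE
const PDF = "\u202C"; // POP DIRECTIONAL FORMATTING

const bindRune = "ᚻ" + ZWJ + "ᛖ" + ZWJ + "ᛚ"; // ligated sequence, if the font supports it
const mirrored = "ᚢ" + ZWJ + "ᚢ";             // two identical runes mirrored around one stave
const rightToLeft = RLO + "ᚹᚪᚱᚦᚷᚪ" + PDF;     // display the run right-to-left

console.log(bindRune, mirrored, rightToLeft);
```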

Picture of the page in action.
>> Use the picker

The Old Norse picker allows you to produce or analyse runs of Old Norse text using the Latin script. It is based on a standardised orthography.

In addition to helping you to type Old Norse Latin-based text, the picker allows you to automatically generate phonetic and runic transcriptions. These should be used with caution! The phonetic transcriptions are only intended as a rough guide, and, as mentioned earlier, real-life runic text is often highly idiosyncratic, not to mention that it varies depending on the time period and region.

The runic transcription tools in this app produce runes of the Younger fuþark – used for Old Norse after the Elder and before the Medieval fuþarks. This transcription tool has its own idiosyncrasies that may not always match real-life usage of runes. One particular idiosyncrasy is that the output always conforms to the same set of rules; others include the decision not to remove homorganic nasals before certain following letters. More information about this is given in the notes.

You can see an example of the output from these tools in the picture of the Old Norse picker that is attached to this blog post. Here’s some Old Norse text you can play with: Ok sem leið at jólum, gørðusk menn þar ókátir. Bǫðvarr spurði Hǫtt hverju þat sætti; hann sagði honum at dýr eitt hafi komit þar tvá vetr í samt, mikit ok ógurligt.

The picker also has a couple of tools to help you work with A New Introduction to Old Norse.

by r12a at January 01, 2016 01:43 PM

December 28, 2015

W3C Blog

W3C will be at CES – will you?

As we reach the end of 2015, the W3C team is putting plans in place for CES 2016, which takes place the week of 4 January 2016 in Las Vegas. We will be there with a Hospitality Suite in the Westgate Hotel and would love to meet with you to discuss how working with our Membership on W3C standards can help your organization while you help the Web reach its full potential.

We’ll have both our Technical and Business Development teams there to discuss our activities in Entertainment, Automotive and Web of Things. We can also cover our Digital Marketing and Security activities as well as any other Web Technologies that you would want to explore.

In addition to these teams, our CEO, Dr. Jeff Jaffe, is also available for discussion during the week. We’d love to hear from you, so send your meeting requests to team-contact@w3.org or directly to me at abird@w3.org.

by J. Alan Bird at December 28, 2015 11:29 AM

December 23, 2015

W3C Blog

Looking for Certified HTML5 Developers? The search is over!

In response to requests from both our Members and the general Web community, W3C has been offering training on Web standards for a number of years. The initial offerings were on W3DevCampus, which continues to offer unique instructor-led courses on a number of topics. One of the challenges we’ve faced is that we can’t scale that platform to reach a wider audience, and we took steps to address that in early 2015.

You may have seen our announcement of our partnership with edX at the end of March 2015. This exciting advancement in our overall training activity has resulted in us creating two courses on HTML5, and we’re working on several more. The first course, HTML5 Part 1: HTML5 Coding Essentials and Best Practices, has drawn over 150K learners in the two runs of the course! The second course, HTML5 Part 2: Advanced Techniques for Designing HTML5 Apps, is finishing its first run, and we have almost 20K students currently developing HTML5-based apps.

Example of a W3Cx Verified Certificate

One of the things that we are encouraging our learners to do is sign up for a Verified Certificate. This certificate is awarded when the learner has completed the work and achieved an overall grade of 70% or better. Over 1,000 learners have earned these certificates so far.

If your organization is looking to bring on HTML5 personnel, then we think you should ask candidates whether they have these certificates. Another option is to reach out directly to the people who have earned them via the W3Cx Verified Students LinkedIn Group. This group was formed by W3C to allow learners who have earned Verified Certificates to promote themselves. It is administered by my team, and we ensure that only learners who have earned the proper certificates are represented there.

I would encourage you to keep letting us know what other courses you would like us to develop and to follow our progress on W3Cx.

by J. Alan Bird at December 23, 2015 02:12 PM

Tokyo Web Payments Seminar Summary

A few days after W3C’s TPAC 2015 in Sapporo, nearly 70 people met in Tokyo on 2 November to discuss W3C’s Web Payments activities, with special consideration of their relevance to the Japanese payments ecosystem. Keio University hosted the event, attended by Rakuten, Yahoo Japan, NTT Data, ACCESS, DDS, ookami, NEC Corporation, Mitsubishi UFJ Financial Group, Softbank, Sony, NTT Communications, METI, Kyocera, NTT Comware, Fujitsu, Toshiba, So-net, NTT, Canton Consulting, SD Association, Casio, Daikanyama RED, KDDI, Sumitomo Mitsui Banking Corporation, Hightech Explore, Bank of Japan, TMI, the National Association of Convenience Stores (NACS), Ripple Labs, Monex, and KDDI Engineering.

W3C staff and Members shared progress about W3C’s payment-related activities. Prof. Jun Murai (W3C Associate Chair/Keio University) and Jeff Jaffe (W3C CEO) welcomed the attendees and provided an overview of W3C’s payment activities. Ian Jacobs (W3C Web Payments Activity Lead) gave a more detailed presentation on the recently launched Web Payments Working Group as well as the evolving agenda and priorities of the Web Payments Interest Group, which is looking at upcoming standardization opportunities. Prof. Keiji Takeda (Keio University / MIT) described W3C’s security activities, including development of W3C Recommendations for FIDO 2.0 in an upcoming Web Authentication Working Group charter as well as a charter for a Hardware Security Working Group. Adrian Hope-Bailie (Ripple) presented the Interledger Protocol (ILP) as a means to facilitate payments across disparate payment systems. This protocol is being discussed in the Interledger Payments Community Group. Jean-Yves Rossi (CANTON Consulting) shared a European perspective on payments, including discussion of upcoming European regulations and the importance of ISO 20022 to the industry. Jeff Jaffe, Adrian Hope-Bailie, Jean-Yves Rossi, and David Ezell (NACS) took questions from the audience.

We hope that this event and others like it will ensure that W3C’s payments activities reflect regional requirements and foster global participation in the development of Web standards.

Japanese Version

史上最大の参加者となった札幌でのTPAC2015に続く形で、11月2日 (月) に慶應義塾大学三田キャンパスにおいてW3C Web Payments Seminarが開催されW3CのWeb Payments活動について積極的な意見交換が行われました。特に日本でのペイメント・エコシステムに関連するトピックについて考える良い機会となりました。参加企業 (順不同) は、楽天、ヤフー!ジャパン、NTTデータ、ACCESS、DDS、ookami、NEC、三菱UFJフィナンシャル・グループ、ソフトバンク、ソニー、NTTコミュニケーションズ、経済産業省、京セラ、NTTコムウェア、富士通、東芝、So-net、NTT、Canton Consulting、SD Association、カシオ、代官山RED、KDDI、三井住友銀行、Hightech Explore、日本銀行、TMI、National Association of Convenience Stores (NACS)、Ripple Labs、マネックス証券、KDDIエンジニアリングなどから計70名のご参加をいただきました。

最初に村井純 (W3C/Keio Deputy Director・慶應義塾大学) およびJeffrey Jaffe (W3C CEO) より参加者への歓迎の挨拶とW3Cのペイメント活動の概要について説明を行いました。次にIan Jacobs (W3C Web Payments Activity Lead) より10月に立ち上がったばかりのWeb Payments Working Groupについての詳細説明、およびWeb Payments Interest Groupの今後の方針や優先課題の発表がありました。続いて武田圭史 (慶應義塾大学・MITフェロー)よりFIDO2.0をインプットとするWeb Authentication Working Groupの設立趣意書およびHardware Security Working Groupの設立趣意書の説明を含むW3Cのセキュリティ活動全般について発表をしました。それからRippleのAdrian Hope-Bailie氏より異種のペイメント・システム間でペイメントをスムーズに行う上でInterledger Protocol (ILP) が一つの有効手段であることを説明していただきました。このプロトコルについてはInterledger Payments Community Groupにて議論されています。その後にCANTON ConsultingのJean-Yves Rossi氏から近々発表されるヨーロッパでの法整備や業界におけるISO20022の重要性などを含むヨーロッパにおけるペイメントの展望について発表がありました。最後にJeffrey Jaffe、Adrian Hope-Bailie氏、Jean-Yves Rossi氏、David Ezell氏 (NACS) がパネルディスカッションを行い、参加者から質問などを受け付けました。

今回のようなW3Cのイベントは、国や地域ごとの仕様要求をW3Cのペイメント活動に反映させるのに貢献するだけでなく、Web技術標準化活動への国際的な参画を促進することに寄与できると確信しています。

(Translation by Sam Sugimoto)

by Ian Jacobs at December 23, 2015 01:45 PM

December 18, 2015

W3C Blog

From a world-wide web of pages to a world-wide web of things – interoperability for connected devices

W3C is seeking to unlock the potential of the Internet of Things and reduce its fragmentation. Jeff Jaffe (W3C CEO) will give a keynote at the Industry of Things World USA conference in San Diego on 25 February 2016.

He will explain how W3C is focusing on simplifying application development through a platform of platforms that integrates existing standards to reduce costs and enable open markets of services. W3C is bringing people together to work on the challenges posed by discovery, composition and monetization of services, along with security, privacy and resilience in the face of faults and cyber attacks. Jeff will further explain how W3C is addressing scaling across devices, platforms, and application domains, including the challenges involved in web scale services. He will describe the need for collaboration across industry alliances and standards development organizations and the steps that W3C is taking to achieve this.

We look forward to connecting with the Industrial Internet experts at IoTW USA 2016 to challenge current thinking, share best practices, and talk about future developments and the latest innovations. We hope to meet you there. W3C Members will receive a discount on the event ticket price.

by Dave Raggett at December 18, 2015 04:26 PM

December 10, 2015

W3C Blog

An array of tools to ensure security and privacy of the Open Web Platform

As noted in “Better specifications for the sake of the Web” last month, W3C conducts wide reviews for an ever-increasing number of specifications; and Virginie and Richard provided some tips to make those reviews more effective. We’re pleased to add more tools, focused on privacy and security on the Web.

Today, the Technical Architecture Group (TAG) published a Self-Review Questionnaire: Security and Privacy, a high-level tool to help editors and Working Groups spot security and privacy issues of a new feature or specification early on. This document will evolve based on feedback received by those who use it.

As one simple nudge for specification authors, we can also update the software commonly used for spec writing to remind editors to include a privacy and security considerations section, as has been done in Bikeshed and discussed for Respec.

The Privacy Interest Group (PING) has recently published a draft of Fingerprinting Guidance for Web Specification Authors, which aims to provide advice for mitigating browser fingerprintability in the development of new Web features. In addition, PING, working with the TAG, is developing a more detailed questionnaire for experts in privacy and security who are reviewing documents from W3C Working Groups, to supplement editors’ self-review.

Interest Groups, including the Privacy Interest Group and Web Security Interest Group, look at these privacy and security topics in particular, and welcome comments from all. Indeed, comments, issues or even GitHub pull requests are invited for all of the documents mentioned above; you can help all the W3C groups who will refer to these documents with your reviews now.

Experience shows that people, process and tools combined can make the Web more private and secure; see, for example, research presented earlier this year. Tips for spotting issues to review, tools for conducting in-depth reviews and a vibrant community of interested reviewers fit together to provide stronger security and privacy for the Open Web Platform.

by Nick Doty at December 10, 2015 06:37 PM

December 05, 2015

ishida >> blog

New app: Encoding converter

Picture of the page in action.
>> Use the app

This app allows you to see how Unicode characters are represented as bytes in various legacy encodings, and vice versa. You can customise the encodings you want to experiment with by clicking on change encodings shown. The default selection excludes most of the single-byte encodings.

The app provides a way of detecting the likely encoding of a sequence of bytes if you have no context, and also allows you to see which encodings support specific characters. The list of encodings is limited to those described for use on the Web by the Encoding specification.

The algorithms used are based on those described in the Encoding specification, and thus describe the behaviour you can expect from web browsers. The transforms may not be the same as for other conversion tools. (In some cases the browsers may also produce a different result than shown here, while the implementation of the spec proceeds. See the tests.)

Encoding algorithms convert Unicode characters to sequences of double-digit hex numbers that represent the bytes found in the target character encoding. A character that cannot be handled by an encoder will be represented as a decimal HTML character escape.

Decoding algorithms take the byte codes just mentioned and convert them to Unicode characters. The algorithm returns replacement characters where it is unable to map a given byte sequence to a character.

For the decoder input you can provide a string of hex numbers separated by space or by percent signs.
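For comparison, here is a rough sketch of the decoding direction in script form, assuming a runtime whose TextDecoder follows the Encoding specification (browsers, recent Node.js); the byte values are illustrative.

```typescript
// A small sketch of decoding space-separated hex bytes as euc-jp.
// Without { fatal: true }, unmappable bytes come back as U+FFFD replacement characters.
const bytes = Uint8Array.from("A4 A2 FF".split(" "), (h) => parseInt(h, 16));
console.log(new TextDecoder("euc-jp").decode(bytes)); // "あ" followed by U+FFFD for the invalid 0xFF byte
```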

Green backgrounds appear behind sequences where all characters or bytes were successfully mapped to a character in the given encoding. Beware, however, that the character mapped to may not be the one you expect – especially in the single byte encodings.

To identify characters and look up information about them you will find UniView extremely useful. You can paste Unicode characters into the UniView Edit Buffer and click on the down-arrow icon below to find out what they are. (Click on the name that appears for more detailed information.) It is particularly useful for identifying escaped characters. Copy the escape(s) to the Find input area on UniView and click on Dec just below.

by r12a at December 05, 2015 05:23 PM

December 03, 2015

W3C Blog

WAI-ARIA Graphics Module Published

By Amelia Bellamy-Royds

A First Public Working Draft of the WAI-ARIA Graphics Module was published today. This new vocabulary for describing graphical documents allows improved representation to and interaction with people with disabilities, and will be of interest to those working in any graphics format on the web or in XML documents. It is being developed by the SVG Accessibility Task Force, a joint effort of the Accessible Rich Internet Applications Working Group and the Scalable Vector Graphics (SVG) Working Group.

Graphics can be particularly problematic from an accessibility perspective, because they are oriented so strongly towards a holistic, visual perception of the content. Graphics are used to convey complex patterns, relationships, and details of shape and structure that cannot be easily conveyed in text—not even in the 1000 words a picture is allegedly worth. Numerous possible differences in abilities, from color blindness to complete blindness to alternative cognitive processing, can affect a person’s potential to extract information from an image. So can technological limitations, from black-and-white printing to tiny screens.

Nonetheless, graphics formats such as SVG allow authors to define an image, not as the final visual representation, but as a structured document with meaningful parts and embedded text descriptions. Unfortunately, current software tools make limited use of this information to assist end-users. Furthermore, there are no standards for encoding many important aspects of graphical structure and data, limiting the development of specialized accessibility tools. A truly accessible graphical document could be explored regardless of the medium used to access it. The person accessing it could identify meaningful parts, follow the relationships between them, and understand the concepts or data they represent.

In combination with other standards in development, the WAI-ARIA Graphics Module aims to establish a language with which graphics creators can communicate the structure of their document to assistive technologies, so those technologies can effectively present the content to users. It does so by introducing graphics-specific content roles to the WAI-ARIA (Accessible Rich Internet Applications) model.

The WAI-ARIA roles model is a standard vocabulary through which content creators can describe the structure and function of elements within a document. Web browsers, computer operating systems, and assistive technologies use this information to help users perceive, understand, and interact with the document—even if they do so using interfaces quite different from those used by the author. However, the established taxonomy of ARIA roles only addresses a subset of document features. In particular, it emphasizes landmarks within standard web page layouts (such as headers, footers, navigation menus, and articles) and user-input widgets (buttons, sliders, text boxes, and so on).

The WAI-ARIA Graphics Module proposes three new ARIA roles which will provide the foundation for a structured, semantic approach to graphical content. The graphics roles are being developed as part of a modular approach to extending ARIA with domain-specific vocabularies. For example, another new ARIA module in development is the Digital Publishing Module for describing parts of a book.

The new roles would allow authors to:

  • Declare a document, or section of a document, to be a complex graphic using the graphics-doc role. This would warn web browsers and assistive technologies that the visual presentation and layout of the content may convey information not contained in its plain text content. Users may wish to explore connections between content in a two-dimensional manner that does not follow the linear structure of the document source code. Alternative visual presentations—such as simplified “reader” modes—should respect the graphical nature of the content.
  • Describe a section of a document as a meaningful object, using the graphics-object role. The vocabulary used to describe the structure of text documents—sections, lists, and groups—can be confusing when applied to a graphic. The new graphical object role will allow authors to apply labels and descriptions to composite objects, while clearly distinguishing these from a grouping of distinct elements.
  • Identify an image or graphic as a standard symbolic representation of a concept or category, using the graphics-symbol role. Graphical symbols are used in many contexts: on maps, in charts, in weather reports. In each case, the specific appearance of the symbol is less important than the category or value it represents. With graphics clearly identified as symbols when appropriate, people experiencing the content through a screen reader can know whether a full description is required. Alternative visual or tactile representations of the graphic might even substitute all instances of the symbol with a simplified equivalent.

These roles would complement the existing img role, which tells assistive technologies to treat an element or section of a document as a single, indivisible complex image with an alternative text description.
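As a rough illustration of how the proposed roles might be applied from script when a graphic is generated dynamically, consider the sketch below; the element ids, labels, and chart structure are illustrative assumptions, and only the three role names come from the draft.

```typescript
// A hedged sketch of annotating a dynamically generated SVG chart with the
// roles proposed in the First Public Working Draft.
const svg = document.querySelector<SVGSVGElement>("#sales-chart")!;
svg.setAttribute("role", "graphics-doc");                  // the whole chart is a complex graphic
svg.setAttribute("aria-label", "Quarterly sales, 2015");

const series = svg.querySelector<SVGGElement>("#q4-series")!;
series.setAttribute("role", "graphics-object");            // a meaningful composite object
series.setAttribute("aria-label", "Q4 results");

for (const marker of svg.querySelectorAll<SVGElement>(".warning-marker")) {
  marker.setAttribute("role", "graphics-symbol");          // a symbolic marker whose meaning matters more than its shape
  marker.setAttribute("aria-label", "below target");
}
```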

Future extensions of this module will focus on information-heavy graphics such as data charts and maps, defining more specific roles for common graphical objects such as legends and axes, and providing standard ways for authors to annotate their graphical elements with the data and relationships they represent. With this additional information, advanced assistive technologies could be developed to convert and adapt the data into other formats more suitable for a particular user. Nonetheless, the core graphics roles should provide a framework for assistive technology developers to start addressing the particular needs of graphical content, for layout and for navigation.

In the near term, work on the WAI-ARIA Graphics Module will focus on developing a suitable mapping in the SVG Accessibility API Mappings of the new ARIA roles to roles and properties used by operating system accessibility APIs (which are, in turn, used by many assistive technologies). This may require new extensions to some of those application programming interfaces. The editors would also like feedback on the current draft specification, in particular:

  • Are proposed roles clear and appropriate to the needs of interactive graphics?
  • Is the use of the “graphics-” prefix in role names to avoid potential collision with other ARIA roles acceptable?
  • What mechanism would be suitable for addition of new roles?
  • Is the relationship of this specification to WAI-ARIA 1.1 clear?

Feedback may be submitted as an issue to the W3C’s ARIA specifications GitHub repository or via feedback to the SVG Accessibility Task Force’s e-mail list, public-svg-a11y@w3.org.

by Michael Cooper at December 03, 2015 01:19 PM

Media Accessibility User Requirements is a W3C Note

By Janina Sajka

Today the Protocols and Formats Working Group published Media Accessibility User Requirements (MAUR) as a W3C Note. This document describes the needs of users with disabilities to be able to consume media (video and audio) content. In development since late 2009, the MAUR has already been used to ensure that the HTML 5 specification can fully support traditional alternative media access technologies (such as captioning), and newer, digitally based approaches (such as simultaneous sign language translation). It is the most thorough and comprehensive review of alternative media support for persons with disabilities yet developed. In addition to HTML 5 support for traditional broadcast approaches, it also describes media accessibility user requirements related to newer technologies being developed specifically for the web.

Media accessibility is familiar to many from the closed captions used in television broadcasts. While captions are frequently used by the general public in noisy environments, it’s also generally understood that captions were initially created to allow persons who are deaf or hard of hearing to understand the audio content of television and movies–content they cannot hear, as indeed no one can in very noisy environs. Over the past 30 years, most people have come to an appreciation of captions in movies and television content.

Less well known, but widely established and equally successful, is the practice of describing the visual content of television and movies for those who cannot see it. The human-narrated descriptions are generally provided on the secondary audio programming (SAP) channel of television broadcasts, or via wireless headphones in movie theaters.

As HTML 5 based web technologies are increasingly used to deliver video content, the W3C Web Accessibility Initiative has stepped forward to develop a set of requirements for ensuring that media content delivered over the web can also leverage the power of the web to make that content accessible to persons with disabilities. The MAUR will be useful for user agent developers and media content developers alike as they exploit the power of HTML 5. It will aid broadcasters as they publish their content on their web sites, and it will aid governmental entities seeking to meet their legislated mandates to make governmental web content accessible.

by Michael Cooper at December 03, 2015 01:19 PM

December 01, 2015

W3C Blog

New Scholarly Coalition Embraces W3C Web Annotations

Today marks the launch of an informal annotation coalition, organized by the Hypothes.is Project, a W3C Member. W3C is excited to be part of this growing effort of over 40 leading organizations in the technology and scholarly publishing communities, including W3C Members IDPF, MIT Press, and Wiley.

The partners in this coalition share a vision of how annotation can benefit scholarly publishing, and of open collaboration for integrating web annotation into their platforms, publications, workflow, and communities.

W3C sees an important role for Web Annotations as a new layer of user-generated content and commentary on top of the Web, and across digital publications of all sorts. Today, comments on the Web are disjointed and often disruptive; a unified mechanism for creating, publishing, displaying, and sharing annotations and other comments in a decentralized way can aid in distributed curation, improving the quality of comments that a reader sees for Web content, and improving the reading experience. In parallel, Web users want to organize and remember useful sites on the Web, and want to synchronize their favorite sites across multiple devices, or to share their thoughts about a site with friends or colleagues; Web annotations enable all this by allowing users to make highlights or detailed notes about a site, to add tags for categorization and search, and to share these links and notes across multiple conforming social media services. This is ideal for casual users, or for focused reading circles or classrooms.

The W3C Web Annotation Working Group is working on a set of loosely related “building block” specifications to enable this functionality. The Web Annotation Model serves as a simple but full-featured data structure for interchange between browsers and different annotation-capable services. The Annotation Protocol defines behavior for publishing annotations to a service, for searching these services for annotation content, or for subscribing to annotation feeds. The FindText API lets a browser or plugin “re-anchor” an annotation to its original selection within a Web page, and a related URL fragment specification will leverage the FindText API to let you share a URL that navigates directly to the selection you shared. Together with a few other bits and pieces, these specifications, when implemented, will let you create a new annotation based on a specific selection on a page, share it to your preferred social media service, and let others (either a small group, or the world) discover and read your annotation right in the context of the page you commented on, or find other annotations in your feed.
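
To give a concrete sense of what such an annotation might look like, here is a minimal sketch written as a TypeScript object literal; the property names and vocabulary follow the draft Web Annotation Model, while the target URL, quoted text, and comment are invented for illustration.

    // Minimal sketch of a single annotation following the draft Web Annotation Model.
    // The target URL, quoted text, and comment below are invented for illustration.
    const annotation = {
      "@context": "http://www.w3.org/ns/anno.jsonld",
      "type": "Annotation",
      "body": {
        "type": "TextualBody",
        "value": "This paragraph could use a citation.",
        "format": "text/plain"
      },
      "target": {
        "source": "https://example.org/article.html",
        "selector": {
          "type": "TextQuoteSelector",
          "exact": "annotations are a new layer of commentary",
          "prefix": "Web ",
          "suffix": " on top of the Web"
        }
      }
    };

    // A client would publish this to an annotation service along the lines of the
    // Annotation Protocol, and a reader could later re-anchor it to the quoted
    // text in the page, for example with something like the FindText API.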

In addition to standardizing annotation technologies, W3C is experimenting with using the technology itself. Our standards process includes public review of all of our specifications, and we have enabled feedback via an annotation interface on some of our specifications; the expectation is that it will be easier for readers to provide feedback, and easier for Working Groups to understand, respond to, track, and process feedback that’s presented in the context of the specification document itself. If this experiment succeeds, we will spread this feedback mechanism across W3C’s specifications.

Before a full ecosystem develops in which multiple browsers and e-readers, content sites, and annotation services interoperate, the groundwork has to be laid in communities that already understand the power of annotation. The scholarly community has used annotations (and the related techniques of footnotes and citations) extensively for centuries. This annotation coalition brings that practice into the 21st century, with a solid technological underpinning that will empower this community to use the Web for review, copy-editing, collaboration, categorization, and reference. W3C welcomes technical feedback on its Web Annotation specifications, and the new annotation coalition welcomes all interested stakeholders to participate in all aspects of this effort.

We look forward to keeping the conversation going about how we can meet the needs of this community, and how we can spread this to other communities, from the next generation of “close reading” students who want to engage with content and not just consume it, to the professionals who want to organize their research, to the person who just wants to share their thoughts on content that excites them.

by Doug Schepers at December 01, 2015 09:00 AM

November 30, 2015

W3C Blog

New Draft for Portable Web Publications has been Published

As one of the results of the busy TPAC F2F meeting of the Digital Publishing Interest Group (DPUB IG) (see the separate reports on TPAC for the first and second F2F days), the group has just published a new version of the Portable Web Publications for the Open Web Platform (PWP) draft. This draft incorporates the discussions at the F2F meeting.

As a reminder: the PWP document describes a future vision on the relationships of Digital Publishing and the Open Web Platform. The vision can be summarized as:

Our vision for Portable Web Publications is to define a class of documents on the Web that would be part of the Digital Publishing ecosystem but would also be fully native citizens of the Open Web Platform. In this vision, the current format- and workflow-level separation between offline/portable and online (Web) document publishing is diminished to zero. These are merely two dynamic manifestations of the same publication: content authored with online use as the primary mode can easily be saved by the user for offline reading in portable document form. Content authored primarily for use as a portable document can be put online, without any need for refactoring the content. Publishers can choose to utilize either or both of these publishing modes, and users can choose either or both of these consumption modes. Essential features flow seamlessly between online and offline modes; examples include cross-references, user annotations, access to online databases, as well as licensing and rights management.

The group has already had many discussions on this vision, and published a first version of the PWP draft before the TPAC F2F meeting. That version already included a series of terms establishing the notion of Portable Web Publications and also outlined a draft architecture for PWP readers based on Service Workers. The major changes in the new draft (beyond editorial changes) include a better description of that architecture, a reinforced view and role for manifests and, mainly, a completely re-written section on addressing and identification.
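
To make the Service-Worker-based reading architecture slightly more concrete, here is a deliberately simplified sketch of the kind of caching logic a PWP reader might use; the cache name and file paths are invented, and the draft outlines an architecture rather than prescribing code like this.

    // Simplified sketch of a Service Worker caching a publication's resources so
    // the same content can be read online or offline. The cache name and file
    // paths are invented; the PWP draft does not prescribe this code.
    declare const self: ServiceWorkerGlobalScope;

    const PUB_CACHE = 'publication-v1';
    const PUB_RESOURCES = ['/pub/manifest.json', '/pub/chapter1.html', '/pub/styles.css'];

    self.addEventListener('install', (event: ExtendableEvent) => {
      // Cache the publication's constituent resources up front.
      event.waitUntil(caches.open(PUB_CACHE).then((cache) => cache.addAll(PUB_RESOURCES)));
    });

    self.addEventListener('fetch', (event: FetchEvent) => {
      // Serve cached resources when available, fall back to the network otherwise.
      event.respondWith(
        caches.match(event.request).then((cached) => cached ?? fetch(event.request))
      );
    });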

The updated section distinguishes between the role of identifiers (e.g., an ISBN or a DOI) and locators (or addresses) on the Web, typically an HTTP(S) URL. While the former is a stable identification of the publication, the latter may change when, e.g., the publication is copied, made private, etc. Defining identifiers is beyond the scope of the Interest Group (and indeed of W3C in general); the goal is to further specify the usage patterns around locators, i.e., URLs. The section looks at the issue of what an HTTP GET would return for such a URL, and what the URL structure of the constituent resources is (remember that a Web Publication is defined as a set of Web Resources with its own identity). All these notions will need further refinement (and the IG has recently set up a task force to look into the details), but the new draft gives a better direction to explore.
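
As a purely hypothetical illustration of the identifier/locator distinction (the manifest-like structure below is invented for this post and is not defined by the draft), a publication might carry a stable identifier such as an ISBN while being reachable at a URL that can change over time:

    // Hypothetical illustration only: the draft does not define this structure.
    interface PublicationDescription {
      identifier: string;   // stable identity, e.g. an ISBN or DOI expressed as a URN
      locator: string;      // where the publication currently lives on the Web
      resources: string[];  // the constituent Web Resources of the publication
    }

    const example: PublicationDescription = {
      identifier: 'urn:isbn:9780000000000',
      locator: 'https://publisher.example.org/books/example-title/',
      resources: ['cover.html', 'chapter1.html', 'images/figure1.svg'],
    };

    // Copying the publication elsewhere would change the locator (and the URLs of
    // the constituent resources relative to it) but not the identifier.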

As always, issues and comments are welcome on the new document. The preferred way is to use the GitHub issue tracker but, alternatively, e-mail can be sent to the IG’s mailing list.

(Original blog was published in the Digital Publishing Activity Blog)

by Ivan Herman at November 30, 2015 08:00 AM