The Encoding specification has been published as a Candidate Recommendation. This is a snapshot of the WHATWG document, as of 4 September 2014, published after discussion with the WHATWG editors. No changes have been made in the body of this document other than to align with W3C house styles. The primary reason that W3C is publishing this document is so that HTML5 and other specifications may normatively refer to a stable W3C Recommendation.
Going forward, the Internationalization Working Group expects to receive more comments in the form of implementation feedback and test cases. The Working Group
believes it will have satisfied its implementation criteria no earlier than 16 March 2015. If you would like to contribute test cases or information about implementations, please send mail to firstname.lastname@example.org.
The utf-8 encoding is the most appropriate encoding for interchange of Unicode, the universal coded character set. Therefore for new protocols and formats, as well as existing formats deployed in new contexts, this specification requires (and defines) the utf-8 encoding.
The other (legacy) encodings have been defined to some extent in the past. However, user agents have not always implemented them in the same way, have not always used the same labels, and often differ in dealing with undefined and former proprietary areas of encodings. This specification addresses those gaps so that new user agents do not have to reverse engineer encoding implementations and existing user agents can converge.
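The divergence described above can be seen with a short sketch. The example below is illustrative only and uses Python's codec registry, which differs from the Encoding specification in exactly the way the specification aims to resolve: the specification maps the iso-8859-1 label onto windows-1252, while Python keeps the two codecs distinct.

```python
# Hypothetical illustration: the same byte decodes differently under two
# closely related legacy encodings.
legacy_byte = b"\x80"  # falls in the C1 control range of ISO-8859-1

# Under windows-1252, 0x80 is the euro sign (U+20AC).
print(legacy_byte.decode("windows-1252"))  # €

# Under a strict ISO-8859-1 decoder, 0x80 is the C1 control U+0080.
print(repr(legacy_byte.decode("iso-8859-1")))  # '\x80'
```

A user agent following the Encoding specification would treat content labelled iso-8859-1 as windows-1252 and display the euro sign, which is why converging on one set of label-to-encoding mappings matters for interoperability.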
Industry speakers lined up to discuss use cases and requirements for linked data and content analytics
The agenda of the 4th LIDER roadmapping workshop and LD4LT event has been published. A great variety of industry stakeholders will talk about linked data and content analytics. Industry areas represented include content analytics technology, multilingual conversational applications, localisation and more.
The workshop will take place on September 2nd in Leipzig, Germany and it will be collocated with the SEMANTiCS conference. The workshop will be organised as part of MLODE 2014 and will be preceded by a hackathon on the 1st of September.
A report summarizing the MultilingualWeb workshop in Madrid is now available from the MultilingualWeb site. It contains a summary of each session with links to presentation slides and minutes taken during the workshop in Madrid. The workshop, together with the associated LIDER roadmapping workshop, was a huge success, attracting approximately 110 participants. The workshop was hosted by Universidad Politécnica de Madrid, sponsored by the EU-funded LIDER project, by Verisign and by Lionbridge.
A new workshop in the MultilingualWeb series is planned for 2015.
This document builds upon the Character Model for the World Wide Web 1.0: Fundamentals to provide authors of specifications, software developers, and content developers a common reference on string matching on the World Wide Web and thereby increase interoperability. String matching is the process by which a specification or implementation defines whether two string values are the same or different from one another.
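One classic string-matching pitfall the Character Model addresses is that Unicode allows the same visible text to be encoded with different code point sequences. A minimal sketch (not taken from the specification) using Unicode Normalization Form C:

```python
import unicodedata

# "é" can be stored as one precomposed code point or as "e" followed by a
# combining acute accent; a naive code-point comparison treats them as
# different strings even though they are canonically equivalent.
composed = "\u00e9"     # LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"  # "e" + COMBINING ACUTE ACCENT

print(composed == decomposed)  # False: raw code-point comparison

# Normalizing both sides to NFC before comparing makes the match succeed.
nfc = lambda s: unicodedata.normalize("NFC", s)
print(nfc(composed) == nfc(decomposed))  # True
```

Whether, where, and by whom such normalization should be performed is precisely the kind of question on which this document gives guidance to specification developers.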
The main target audience of this specification is W3C specification developers. This specification, in whole or in part, can be referenced from other W3C specifications, and it defines conformance criteria for W3C specifications as well as other specifications.
This version of this document represents a significant change from its previous edition. Much of the content is changed and the recommendations are significantly altered. This fact is reflected in a change to the name of the document from “Character Model: Normalization” to “Character Model for the World Wide Web: String Matching and Searching”.
The goal of the workshop is to gather input from experts and stakeholders in the area of content analytics, to identify areas and tasks in content analytics where linked data & semantic technologies can contribute. The workshop will be organised as part of MLODE 2014 and will be preceded by a hackathon on the 1st of September.
Version 7.0 of the Unicode Standard is now available, adding 2,834 new characters. This latest version adds the new currency symbols for the Russian ruble and Azerbaijani manat, approximately 250 emoji (pictographic symbols), many other symbols, and 23 new lesser-used and historic scripts, as well as character additions to many existing scripts. These additions extend support for written languages of North America, China, India, other Asian countries, and Africa. See the link above for full details.
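For example, the two new currency symbols mentioned above are U+20BD RUBLE SIGN and U+20BC MANAT SIGN. They can be inspected with Python's unicodedata module, assuming an interpreter whose Unicode tables are at version 7.0 or later:

```python
import unicodedata

# The two currency symbols assigned in Unicode 7.0; requires a Python
# build whose character database includes Unicode 7.0+ data.
for ch in ("\u20bd", "\u20bc"):
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# U+20BD RUBLE SIGN
# U+20BC MANAT SIGN
```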
Most of the new emoji characters derive from characters in long-standing and widespread use in Wingdings and Webdings fonts.
Major enhancements were made to the Indic script properties. New property values were added to enable a more algorithmic approach to rendering Indic scripts. These include properties for joining behavior, new classes for numbers, and a further division of the syllabic categories of viramas and rephas. With these enhancements, the default rendering for newly added Indic scripts can be significantly improved.
Unicode character properties were extended to the new characters. Existing characters received enhancements to their Script and Alphabetic properties, and to casing and line-breaking behavior. There were also nearly 3,000 new Cantonese pronunciation entries, as well as new or clarified stability policies for promoting interoperable implementations.
Two other important Unicode specifications are maintained in synchrony with the Unicode Standard, and have updates for Version 7.0. These will be released at the same time:
Industry stakeholders from many areas (localization, publishing, language technology applications etc.) and key researchers from linked data and language technology discussed promises and challenges around linguistic linked data. The report summarizes all presentations and includes an initial list of use cases and requirements for linguistic linked data. This and the overall outcome of the event will feed into work of the LD4LT group (see especially the LD4LT latest draft version of use cases), and the field of multilingual linked data in general.
A Last Call Working Draft of Encoding has been published.
While encodings have been defined to some extent, implementations have not always implemented them in the same way, have not always used the same labels, and often differ in dealing with undefined and former proprietary areas of encodings. This specification attempts to fill those gaps so that new implementations do not have to reverse engineer encoding implementations of the market leaders and existing implementations can converge.
The body of this spec is an exact copy of the WHATWG version as of the date of its publication, intended to provide a stable reference for other specifications. We encourage readers to review the specification and send comments about any technical areas that need attention (see the Status section for details).
Please send comments by 1 July 2014.
One more week to FEISGILTT – learn about and contribute to the future of Web and localization standards!
On 4 June and as part of the Localization World conference in Dublin, the FEISGILTT event will again provide an opportunity to discuss latest developments around localization and multilingual Web technologies. The event is sponsored by the LIDER project.
Highlights include updates about ITS 2.0 and XLIFF 2.0, and a session about usage scenarios for linguistic linked data in localization. Speakers include Kevin O’Donnell (Microsoft), Bryan Schnabel (Tektronix), Yves Savourel (Enlaso) and many more.
Register now to meet the key players behind standards that will influence today's and tomorrow's business.
The slides from the MultilingualWeb workshop (including several posters) and the LIDER roadmapping workshop are now available for download. Additional material (videos of the presentations, a workshop report and more) will follow in the next weeks – stay tuned.