May 20, 2016

W3C Blog

HTTPS and the Semantic Web/Linked Data

In short, keep writing “http:” and trust that the infrastructure will quietly switch over to TLS (https) whenever both client and server can handle it. Meanwhile, let’s try to get SemWeb software doing TLS+UIR+HSTS, so that it is as secure as a modern browser.

Sandro Hawke

As we hope you’ve noticed, W3C is increasing the security of its own Web site and is strongly encouraging everyone to do the same. I’ve included some details from our systems team below by way of explanation, but the key technologies to look into, if you’re interested, are HTTP Strict Transport Security (HSTS) and Upgrade-Insecure-Requests (UIR).

Bottom line: we want everyone to use HTTPS and there are smarts in place on our servers and in many browsers to take care of the upgrade automatically.

So what of Semantic Web URIs, particularly namespaces like http://www.w3.org/1999/02/22-rdf-syntax-ns#?

Visit that URI in a modern, secure browser and you’ll be redirected to https://www.w3.org/1999/02/22-rdf-syntax-ns#. Older browsers and, in this context more importantly, other user agents that do not recognize HSTS and/or UIR will not be redirected. So you can go on using http://www.w3.org namespaces without disruption.

This raises a number of questions.

Firstly, is the community agreed that if two URIs differ only in the scheme (http://, https:// and perhaps whatever comes in future) then they identify the same resource? We believe that this can only be asserted by the domain owner. In the specific case of http://www.w3.org/* we do make that assertion. Note that this does not necessarily apply to any current or future subdomains of w3.org.

Secondly, some members of the Semantic Web community have already moved to HTTPS (it was a key motivator for w3id.org). How steep is the path from where we are today to a more secure Semantic Web, i.e. one that habitually uses HTTPS rather than HTTP? Have you upgraded, or are you considering upgrading, your own software?

Unless and until the Semantic Web operates over more secure connections, we will need to be careful to pass around http URIs – which is likely to mean remembering to knock off the s when pasting a URI from your browser.

That’s a royal pain but we’ve looked at various workarounds and they’re all horrible. For example, we could deliberately redirect requests to things like our vocabulary namespaces away from the secure w3.org site to a deliberately less secure sub-domain – gah! No thanks.

Thirdly, a key feature of the HSTS/UIR landscape is that there is no need to go back and edit old resources – communication is carried out using HTTPS without further intervention. Can this be true for the Semantic Web/Linked Data too, or should we be considering more drastic action? For example, editing definitions in Turtle files, such as the one at http://www.w3.org/ns/dcat#, to make it explicit that http://www.w3.org/ns/dcat#Dataset is an owl:equivalentClass of https://www.w3.org/ns/dcat#Dataset (or, even worse, having to go through and actually duplicate all the definitions with the different subject).
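
For concreteness, such an edit might look something like this in Turtle (a sketch of the shape only, not a change we are proposing to make):

  @prefix owl: <http://www.w3.org/2002/07/owl#> .

  <http://www.w3.org/ns/dcat#Dataset>
      owl:equivalentClass <https://www.w3.org/ns/dcat#Dataset> .

Multiply that by every term in every vocabulary we publish, and the appeal of an approach that requires no editing becomes obvious.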

I really hope point 3 is unnecessary – but I’d like to be sure it is.

Background

Jose Kahan from W3C’s Systems Team adds

HSTS does the client-side upgrade from HTTP to HTTPS for a given domain. However, that header is only sent when making an HTTPS connection. UIR defines a header that, if sent by the browser, tells the server it prefers HTTPS; the server will then redirect to HTTPS, and HSTS (through the header in the response) will kick in. HSTS doesn’t handle the case of mixed content. That is the other part that UIR does to complement HSTS: it tells the browser to upgrade the URLs of all content associated with a resource to HTTPS before requesting it.

For browser UAs, if HSTS is enabled for a domain and you browse to a document by typing its URL in the navigation bar or following a link, the request will be sent as HTTPS, regardless of the URL saying HTTP. If the document includes a CSS file, JavaScript, or an image, for example, and that URL is HTTP, the requests for those resources will only be sent as HTTPS if the UA supports UIR.
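
To make the mechanics concrete, here is a sketch of the headers involved – illustrative values, not a transcript of w3.org’s actual configuration. A UIR-aware browser requesting an http URI announces its preference, the server redirects, and the secure response then switches HSTS on:

  GET /1999/02/22-rdf-syntax-ns HTTP/1.1
  Host: www.w3.org
  Upgrade-Insecure-Requests: 1

  HTTP/1.1 307 Temporary Redirect
  Location: https://www.w3.org/1999/02/22-rdf-syntax-ns
  Vary: Upgrade-Insecure-Requests

  (the browser retries over TLS; the HTTPS response includes:)
  Strict-Transport-Security: max-age=31536000
  Content-Security-Policy: upgrade-insecure-requests

From then on the browser goes straight to HTTPS for that domain, and the upgrade-insecure-requests directive tells it to rewrite any http subresource URLs to https before fetching them. A client that never sends the Upgrade-Insecure-Requests header simply keeps getting the http responses, as before.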

by Phil Archer at May 20, 2016 05:51 PM

May 15, 2016

ishida >> blog

Protected: Eurovision alternative results

This content is password protected.

by r12a at May 15, 2016 09:02 AM

April 06, 2016

W3C Blog

Working on HTML5.1

HTML5 was released in 2014 as the result of a concerted effort by the W3C HTML Working Group. The intention was then to begin publishing regular incremental updates to the HTML standard, but a few things meant that it didn’t happen as planned. Now the Web Platform Working Group (WP WG) is working towards an HTML5.1 release within the next six months, and a general workflow that means we can release a stable version of HTML as a W3C Recommendation about once per year.

Goals

The core goals for future HTML specifications are to match reality better, to make the specification as clear as possible to readers, and of course to make it possible for all stakeholders to propose improvements and to understand what makes changes to HTML successful.

Timelines

The plan is to ship an HTML5.1 Recommendation in September 2016. This means we will need to have a Candidate Recommendation by the middle of June, following a Call For Consensus based on the most recent Working Draft.

To make it easier for people to review changes, an updated Working Draft will be published approximately once a month. For convenience, changes are noted within the specification itself.

Longer term we would like to “rinse and repeat”, making regular incremental updates to HTML a reality that is relatively straightforward to implement. In the meantime you can track progress using Github pulse, or by following @HTML_commits or @HTMLWG on Twitter.

Working on the spec…

The specification is on Github, so anyone who can make a Pull Request can propose changes. For simple changes such as grammar fixes, this is a very easy process to learn – and simple changes will generally be accepted by the editors with no fuss.

If you find something in the specification that generally doesn’t work in shipping browsers, please file an issue, or better still file a Pull Request to fix it. We will generally remove things that don’t have adequate support in at least two shipping browser engines, even if they are useful to have and we hope they will achieve sufficient support in the future: in some cases, you (or we) may propose the dropped feature as a future extension – see below regarding “incubation”.

HTML is a very large specification. It is developed from a set of source files, which are processed with the Bikeshed preprocessor. This automates things like links between the various sections, such as to element definitions. Significant changes, even editorial ones, are likely to require a basic knowledge of how Bikeshed works, and we will continue to improve the documentation especially for beginners.
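
As a taste of what editing involves, here is a simplified sketch of Bikeshed-flavoured source – invented lines for illustration, not an excerpt from the actual HTML source files. A term is defined once with dfn, and the shorthand syntaxes link back to definitions automatically:

  A <dfn>valid integer</dfn> is a string consisting of ...

  The attribute's value must be a [=valid integer=].
  The {{HTMLImageElement}} interface reflects the <{img}> element.

Bikeshed expands the [=...=], {{...}} and <{...}> shorthands into cross-reference links when it builds the specification, which is why even editorial changes benefit from knowing the conventions.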

HTML is covered by the W3C Patent Policy, so many potential patent holders have already ensured that it can be implemented without paying them any license fee. To keep this royalty-free licensing, any “substantive change” – one that actually changes conformance – must be accompanied by the patent commitment that has already been made by all participants in the Web Platform Working Group. If you make a Pull Request, this will automatically be checked, and the editors, chairs, or W3C staff will contact you to arrange the details. Generally this is a fairly simple process.

For substantial new features we prefer a separate module to be developed and “incubated”, to ensure that there is real support from the various kinds of implementers, including browsers, authoring tools, producers of real content, and users, and then, when it is ready for standardisation, to be proposed as an extension specification for HTML. The Web Platform Incubator Community Group (WICG) was set up for this purpose, but of course when you develop a proposal, any venue is reasonable. Again, we ask that you track technical contributions to the proposal (WICG will help do this for you), so that when it arrives we know the people who had a hand in it have also committed to W3C’s royalty-free patent licensing, and developers can implement it without worrying that they will later be hit with a patent lawsuit.

Testing

W3C’s process for developing Recommendations requires a Working Group to convince the W3C Director, Tim Berners-Lee, that the specification

“is sufficiently clear, complete, and relevant to market needs, to ensure that independent interoperable implementations of each feature of the specification will be realized”

This had to be done for HTML 5.0. When a change is proposed to HTML we expect it to have enough tests to demonstrate that it does improve interoperability. Ideally these fit into an automatable testing system like the “Webapps test harness”. But in practice we plan to accept tests that demonstrate the necessary interoperability, whether they are readily automated or not.
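
For instance, a minimal test in that harness’s style looks like the following – an illustrative sketch, not a test taken from the actual suite:

  <!DOCTYPE html>
  <title>hidden attribute reflection</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <div id="d" hidden></div>
  <script>
  test(function() {
    // the hidden content attribute should be reflected by the IDL attribute
    assert_true(document.getElementById("d").hidden);
  }, "hidden content attribute is reflected in the DOM");
  </script>

Two shipping engines passing a test like this is the kind of evidence we are looking for.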

The benefit of this approach is that, except where features are removed from browsers (which is comparatively rare), we will have a consistently increasing level of interoperability as we accept changes. This means that at any time a snapshot of the Editors’ draft should be a stable basis for an improved version of HTML that can be published as an updated version of an HTML Recommendation.

Conclusions

We want HTML to be a specification that authors and implementors can use with ease and confidence. The goal isn’t perfection (which is after all the enemy of good), but rather to make HTML 5.1 better than HTML 5.0 – the best HTML specification until we produce HTML 5.2…

And we want you to feel welcome to participate in improving HTML, for your own purposes and for the good of the Web.

Chaals, Léonie, Ade – chairs
Alex, Arron, Steve, Travis – editors

by Léonie Watson at April 06, 2016 01:05 PM

April 05, 2016

W3C Blog

HTML Media Extensions to continue work

The HTML Media Extensions Working Group was extended today until the end of September 2016. As part of making video a first-class citizen of the Web, an effort started by HTML5 itself in 2007, W3C has been working on many extension specifications for the Open Web Platform: capturing images from the local device camera, handling of video streams and tracks, captioning and other enhancements for accessibility, audio processing, real-time communications, etc. The HTML Media Extensions Working Group is working on two of those extensions: Media Source Extensions (MSE), for facilitating adaptive and live streaming, and Encrypted Media Extensions (EME), for playback of protected content. Both are extension specifications to enhance the Open Web Platform with rich media support.

The W3C supports the statement from the W3C Technical Architecture Group (TAG) regarding the importance of broad participation, testing, and audit to keep users safe and the Web’s security model intact. The EFF, a W3C member concerned about this issue, proposed a covenant, to be agreed by all W3C members, which included exemptions for security researchers as well as interoperable implementations under the US Digital Millennium Copyright Act (DMCA) and similar laws. After several months of discussion, including review at the recent W3C Advisory Committee meeting, no consensus has yet emerged around the EFF’s proposed covenant.

We do recognize that issues around Web security exist, as does the importance of the work of security researchers, and that these necessitate further investigation; but we maintain that the premises for starting the work on the EME specification still apply. See the information about W3C and Encrypted Media Extensions.

The goal for EME has always been to replace non-interoperable private content protection APIs (see the Media Pipeline Task Force (MPTF) Requirements). By ensuring better security, privacy, and accessibility around those mechanisms, as well as hosting those discussions at W3C, EME provides more secure interfaces for license and key exchanges by sandboxing the underlying content decryption modules. The only required key system in the specification is one that actually does not perform any digital rights management (DRM) function and uses fully defined and standardized mechanisms (the JSON Web Key format, RFC7517, and algorithms, RFC7518). While it may not satisfy some of the requirements from distributors and media owners in resisting attacks, it is the only fully interoperable key system when using EME.
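
To illustrate just how far from a black box that required key system is: a Clear Key license is simply a JSON Web Key set. For example (the key value below is borrowed from the RFC 7517 examples; the key ID is made up for illustration):

  {
    "keys": [
      {
        "kty": "oct",
        "kid": "LwVHf8JLtPrv2GUXFW2v_A",
        "k": "GawgguFyGrWKav7AX4VKUg"
      }
    ]
  }

Everything in it is plain, documented JSON – there is nothing proprietary for an implementer to reverse-engineer.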

We acknowledge and welcome further efforts from the EFF and other W3C Members in investigating the relations between technologies and policies. Technologists and researchers indeed have benefited from the EFF’s work in securing an exemption from the DMCA from the Library of Congress which will help to better protect security researchers from the same issues they worked to address at the W3C level.

W3C does intend to keep looking at the challenges related to the US DMCA and similar laws, such as international implementations of the EU Copyright Directive, with our Members and staff. W3C is currently setting up a Technology and Policy Interest Group to examine those issues, and we intend to bring challenges related to these laws to that Group.

by Philippe le Hegaret at April 05, 2016 02:29 PM

March 31, 2016

W3C Blog

W3C Highlights and Advisory Committee meeting


W3C had its annual meeting last week in Boston, and it was one of the most interactive meetings we have had in recent memory. During the meeting we released the W3C highlights for Spring 2016, a comprehensive report of W3C’s vision and focus, and had informative discussions with industry presenters and keynote speakers.

We also had an opportunity to discuss a proposal from the Electronic Frontier Foundation about a covenant related to our Encrypted Media Extensions specification; see the information on EME work at W3C we recently made available.

Participants were most energized by a discussion about what the Next Big Thing for the Web is. Everyone seemed to think there was much more work to do in moving the Web forward. In a straw poll, the leading topics were Web Security, Web Payments, Web of Things, and (of course) the core platform. These definitely map to where we are seeing the most excitement in W3C groups. More details can be found in our recent security blog posts; my recent payments blog post; my recent WoT post; and the Web Platform Working Group co-chairs’ blog post on next steps for the core platform.

Other Highlights of the meeting:

  • We spent half a day looking at how the Web leads industry to its full potential and vice versa. The speakers – principally from industry – taught us a great deal about how Web technologies impact their industries. See our efforts in Telecommunications, Web Payments, Web of Things, Digital Publishing, Automotive and Entertainment.
  • We had about 10 BOF sessions over lunch as Advisory Committee representatives and the Team thought through diverse topics such as Digital Marketing, executive focus on Web technologies, and blockchain.
  • We spent an afternoon getting an update on technical topics and tooling. A highlight was Nigel Megitt’s professionally delivered video speech as we all celebrated getting our first Emmy. Several attendees were seen getting their pictures taken with the Emmy.
  • Bruce Schneier delivered an impressive keynote about the growing impact of what he called “the World-Sized Web” as the Internet of Things brings more parts of world infrastructure within the domain of Web technology. A particular focus was on how to secure this infrastructure.

by Jeff Jaffe at March 31, 2016 06:30 PM

March 25, 2016

ishida >> blog

Historical maps of Europe, 362-830 AD

Picture of the page in action.
>> See the chronology
>> See the maps

This blog post introduces the first of a set of historical maps of Europe that can be displayed at the same scale so that you can compare political or ethnographic boundaries from one time to the next. The first set covers the period from 362 AD to 830 AD.

A key aim here is to allow you to switch from map to map and see how boundaries evolve across an unchanging background.

The information in the maps is derived mostly from Colin McEvedy’s excellent series of books, in particular (so far) The New Penguin Atlas of Medieval History, but sometimes also brings in information from the Times History of Europe. Boundaries are approximate for a number of reasons: first, in the earlier times especially, the borders were only approximate anyway; second, I have deduced the boundary information from small-scale maps and (so far) only a little additional research; third, the sources sometimes differ about where boundaries lay. I hope to refine the data during future research; in the meantime, take this information as grosso modo.

The link below the picture takes you to a chronological summary of events that lie behind the changes in the maps. Click on the large dates to open maps in a separate window. (Note that all maps will open in that window, and you may have to ensure that it isn’t hidden behind the chronology page.)

The background to the SVG overlay is a map that shows relief and rivers, as well as modern country boundaries (the dark lines). These are things which, good as McEvedy’s maps are, I always found missing when looking for useful reference points. Since the outlines and text are created in SVG, you can zoom in to see details.

This is just the first stage, and the maps are still largely first drafts. The plan is to refine the details for existing maps and add many more. So far we only deal with Europe. In the future I’d like to deal with other places, if I can find sources.

by r12a at March 25, 2016 11:37 AM

March 19, 2016

ishida >> blog

UniView now supports Unicode 9 beta

Picture of the page in action.
>> Use the picker

UniView now supports the characters introduced for the beta version of Unicode 9. Any changes made during the beta period will be added when Unicode 9 is officially released. (Images are not available for the Tangut additions, but the character information is available.)

It also brings in notes for individual characters where those notes exist, if Show notes is selected. These notes are not authoritative, but are provided in case they prove useful.

A new icon was added below the text area to insert commas between the characters in the text area.

Links to the help page that used to appear on mousing over a control have been removed. Instead there is a noticeable, blue link to the help page, and the help page has been reorganised and uses image maps so that it is easier to find information. The reorganisation puts more emphasis on learning by exploration, rather than learning by reading.

Various tweaks were made to the user interface.

by r12a at March 19, 2016 10:22 PM

March 11, 2016

W3C Blog

An invitation to the free-software community for real dialog

This is an open invitation to all people in the free-software community for genuine person-to-person dialog with people in the W3C staff about DRM on the Web (and any other topics of importance to the Web we all have an interest in discussing).

We have a People of the W3C page that lists the names and e-mail addresses of all the W3C staff, and we always welcome you to contact us about the work we are doing together for the Web. Along with that we have a Contact page that includes more details about how to find us.

We believe this invitation from us to you for real person-to-person dialog is a much more constructive route to mutual understanding and change than approaches such as the recent campaign (under the apparent aegis of the Free Software Foundation) which you might have seen, and which encourages you instead to go by a W3C office just to “take a protest selfie” in demonstration against “DRM in HTML”.

As the announcement about that campaign suggests, if you live near a W3C office, “you have a unique opportunity to make a difference”—but that opportunity is actually for much more than just snapping a selfie next to a W3C sign. Instead you have a chance to talk with real people who care a great deal about the Web and its future—just as you do—and to find out things we agree about with each other, and problems we can work on solving together.

We’re all real people. So let’s treat each other like real people, and don’t let someone else shoehorn you into whatever narrative they want to construct about fearless activists doing battle against some faceless, uncaring entity.

So if you care enough yourself to make time to visit a W3C office in person, please consider not doing it only to take a selfie in front of a W3C sign and then leave. Instead, make it an opportunity to actually meet the people at your nearby W3C office who care deeply about a lot of the same things you do, and chat with some of us person-to-person over a cup of coffee (or hey, maybe even some after-work drinks somewhere nearby).

The announcement about the “take a protest selfie” campaign claims to have “reliable advice” that it will be “very influential to the W3C’s leadership”. But I have a lot more reliable advice for you: The open invitation for real person-to-person conversation, that we as people are offering you right here, is an opportunity to be much more influential.



by Michael[tm] Smith at March 11, 2016 01:45 PM

March 09, 2016

W3C Blog

HTML: What’s next?

Since the end of last year the Web Platform Working Group has had responsibility for W3C’s HTML spec, as well as many other core specifications. What have we been doing with HTML, and what is the plan?

The short story is that we are working toward an HTML 5.1 Recommendation later this year. The primary goals are to provide a specification that is a better match for reality, by incorporating things that are interoperable and removing things that aren’t.

We also want more people and organisations to get involved and make sure the development of HTML continues to reflect the needs and goals of the broad community.

As an important step down that path, the editors (Arron Eicholz, Steve Faulkner and Travis Leithead) have published the Editors’ Draft on GitHub, and by using Bikeshed to build it we have made it easier for people to propose an effective edit. Different kinds of edit require different levels of effort, of course…

Fixing a typo, or clarifying some text so it is easier to understand, is an easy way to start contributing, getting used to the spec source and GitHub, and improving HTML. This level of edit will almost always be accepted with little discussion.

Meanwhile, we welcome suggestions – ideally as pull requests, but sometimes raising an issue is more appropriate – for features that should not be in a Recommendation yet, for example because they don’t work interoperably.

Naturally proposals for new features require the most work. Before we will accept a substantial feature proposal as part of an HTML recommendation, there needs to be an indication that it has real support from implementors – browsers, content producers, content authoring and management system vendors and framework developers are all key stakeholders. The Web Platform Incubator Community Group is specifically designed to provide a home for such incubation, although there is no obligation to do it there. Indeed, the picture element was developed in its own Community Group, and is a good example of how to do this right.
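
For those who haven’t followed it, the result of that incubation is now straightforward responsive-image markup – a minimal illustrative snippet:

  <picture>
    <source media="(min-width: 40em)" srcset="wide.jpg">
    <img src="narrow.jpg" alt="The same scene, cropped for small screens">
  </picture>

The browser uses the first source whose media condition matches and otherwise falls back to the img element, which also keeps the markup working in browsers that don’t know about picture.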

Finally, a lot of time last year was spent talking about modularisation of HTML. But that is much more than just breaking the spec into pieces – it requires a lot of deep refactoring work to provide any benefit. We want to start building new things that way, but we are mostly focused on improving quality for now.

The Working Group is now making steady progress on its goals for HTML, as well as its other work. An important part of W3C work is getting commitments from organisations to provide Royalty-Free patent licenses, and for some large companies with many patents that approval takes time. At the same time, Art Barstow, who was for many years co-chair of Web Apps and an initial co-chair of this group, has had to step down due to other responsibilities. While chaals continues as a co-chair from Web Apps, joined by new co-chairs Adrian Bateman and Léonie Watson, we still miss both Art’s invaluable contributions and Art himself.

So we have taken some time to get going, but we’re now confident that we are on track to deliver a Recommendation for HTML 5.1 this year, with a working approach that will make it possible to deliver a further improved HTML Recommendation (5.2? We’re not too worried about numbering yet…) in another year or so.

by Charles McCathie Nevile at March 09, 2016 02:00 PM

March 08, 2016

W3C Blog

W3C Web of Things at Industry of Things World

Illustration of WoT interoperability layer and things interactions

Two weeks ago I had the pleasure to present our vision of the future of the Internet of Things at the Industry of Things World USA conference in San Diego.

Here in a nutshell is what I told them:

Conference chairman Jeremy Geelan had opened the conference with a retrospective: it took 27 years from J. C. R. Licklider’s vision of a worldwide computer network until Tim Berners-Lee’s invention of the Web became the spark that made this network broadly useful.

I then said that history could repeat itself with the next driver on the Internet, the Internet of Things (IoT). That is, unless we have a model which makes broad sharing available for IoT, we are liable to delay progress for years or decades. Specifically, today’s architectures and initial implementations tend to be siloed. There are standards at the physical layer but insufficient interoperability at the higher layers.

For example, a person’s watch (as an IoT device) will want to participate in IoT wearable applications (since it is worn), IoT medical applications (as it takes one’s pulse and links into personal medical information), IoT Smart Homes (used to control the home), IoT Smart Cities (as the municipal infrastructure relies on data about weather and traffic), and IoT Smart Factories (to track its usage and condition). But participating across all silos, and building applications which leverage all silos, requires common data models, metadata, and an interoperable layered model.

From this I introduced the Web of Things model and Interest Group, which complements IoT by providing a higher-level interoperability layer above it. Through the work W3C is doing in our task forces, we are addressing interoperability-related issues: thing descriptions, API and protocol bindings, discovery and provisioning, and security.

I spoke with several kinds of stakeholders – thing manufacturers, developers, and solution providers – who all seemed to agree that interoperability needs a greater level of attention in IoT.

by Jeff Jaffe at March 08, 2016 08:46 PM

March 07, 2016

ishida >> blog

More updates for the Egyptian hieroglyph picker

Picture of the page in action.
>> Use the picker

I’ve been doing more work over the weekend.

The data behind the keyword search has now been completely updated to reflect descriptions by Gardiner and Allen. If you work with those lists it should now be easy to locate hieroglyphs using keywords. The search mechanism has also been rewritten so that you don’t need to type keywords in a particular order for them to match. I also strip out various common function words and do some other optimisation before attempting a match.

The other headline news is the addition of various controls above the text area, including one that will render MdC text as a two-dimensional arrangement of hieroglyphs. To do this, I adapted WikiHiero’s PHP code to run in JavaScript. You can see an example of the output in the picture attached to this post. If you want to try it, the MdC text to put in the text area is:
anx-G5-zmA:tA:tA-nbty-zmA:tA:tA-sw:t-bit:t- -zA-ra:.-mn:n-T:w-Htp:t*p->-anx-D:t:N17-!

The result should look like this:

Picture of hieroglyphs.

Other new controls allow you to convert MdC text to hieroglyphs, and vice versa, or to type in a Unicode phonetic transcription and find the hieroglyphs it represents. (This may still need a little more work.)

I also moved the help text from the notes area to a separate file, with a nice clickable picture of the picker at the top that will link to particular features. You can get to that page by clicking on the blue Help box near the bottom of the picker.

Finally, you can now set the text area to display characters from right to left, in right-aligned lines, using more controls > Output direction. Unfortunately, I don’t know of a font that, under these conditions, will flip the hieroglyphs horizontally so that they face the right way.

For more information about the new features, and how to use the picker, see the Help page.

by r12a at March 07, 2016 11:21 AM

March 04, 2016

W3C Blog

W3C Web Payments at NACHA Payments Innovation Alliance

Web Payments icons; shopping basket, padlock, shopping basket on handheld device and hand

Last week I had the pleasure to present our vision of the future of Web Payments and e-commerce at the NACHA Payments Innovation Alliance meeting in San Francisco. This alliance focuses on modernizing payments in view of technology trends, standards, and security.

It was a good opportunity to share our vision of web payments with the financial services and fintech communities. Here in a nutshell is what I told them:

  • The historic dichotomy between the design point of the Web and how it is used by merchants is striking. The Web is user-centric. It is designed to connect humanity to information. e-Commerce has historically been merchant-centric. A merchant makes their product available on their site for a customer to come in and buy.
  • But e-Commerce has changed. In today’s omni-channel model, the customer is in charge. They receive information from social media and recommendation websites. They combine in-store with online, and expect service from laptop and mobile. There are enormous changes and more flexibility in payment methods. Users are looking for, and finding, the best deal and, most importantly, the best overall experience.

A few blocks away from the NACHA meeting at the Google office, W3C’s Web Payments Interest Group and Web Payments Working Group were meeting face-to-face to discuss how the Web will need to expand to help the industry meet these challenges.

The Web Payments Working Group is looking at how to improve the e-Commerce user experience by streamlining the checkout process and improving security. Ian Jacobs has written a summary of the Working Group’s recent discussion and plan to publish a First Public Working Draft in early April of what we believe will be transformative new Web technology.

Ultimately, we will need a broad set of offerings. The Interest Group discussed a number of topics as fodder for new standardization efforts. These included the impact of regulatory changes such as the Payment Services Directive (PSD2) in Europe, as well as faster-payments initiatives such as those of the US Federal Reserve. The group also discussed use cases for “verifiable claims” on the Web, opened a discussion on blockchain and the Web, and discussed interoperability with the broader financial services ecosystem (e.g., through alignment with ISO20022).

It was great to brainstorm the changing face of e-Commerce with the Financial Services industry during the NACHA conference – joined by Working Group co-Chair Nick Telford-Reed (Worldpay) – and illustrate how the W3C community is addressing the challenge. The net of all of these changes is that the underlying payment and e-commerce infrastructure will be “web-like”. In addition to connecting humanity to information, we will better connect humanity with their economic potential.

by Jeff Jaffe at March 04, 2016 12:23 AM

February 29, 2016

ishida >> blog

Egyptian hieroglyph picker updated

Picture of the page in action.
>> Use the picker

Over the weekend I added a set of new features to the picker for Egyptian Hieroglyphs, aimed at making it easier to locate a particular hieroglyph. Here is a run-down of various methods now available.

Category-based input

This was the original method. Characters are grouped into standard categories. Click on one of the orange characters, chosen as a nominal representative of the class, to show below all the characters in that category. Click on one of those to add it to the output box. As you mouse over the orange characters, you’ll see the name of the category appear just below the output box.

Keyword-search-based input

The app associates most hieroglyphs with keywords that describe the glyph. You can search for glyphs using those keywords in the input field labelled Search for.

Searching for ripple will match both ripple and ripples. Searching for king will match king and walking. If you want to only match whole words, surround the search term with colons, ie. :ripple: or :king:.

Note that the keywords are written in British English, so you need to look for sceptre rather than scepter.

The search input is treated as a regular expression, so if you want to search for two words that may have other words between them, use .*. For example, ox .* palm will match ox horns with stripped palm branch.
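
If it helps to picture what is going on, the matching boils down to building a regular expression from what you type – roughly like this hypothetical JavaScript, which is not the picker’s actual code:

  function matchesKeywords(keywords, query) {
    // ":word:" means whole-word matching: turn the colons into word boundaries
    var pattern = query.replace(/:([^:]+):/g, '\\b$1\\b');
    // everything else is already treated as a regular expression, so ".*" just works
    return new RegExp(pattern, 'i').test(keywords);
  }

  matchesKeywords('ox horns with stripped palm branch', 'ox .* palm');  // true
  matchesKeywords('walking', ':king:');                                 // false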

Many of the hieroglyphs have also been associated with keywords related to their use. If you select Include usage, these keywords will also be selected. Note that this keyword list is not exhaustive by any means, but it may occasionally be useful. For example, a search for Anubis will produce 𓁢 𓃢 𓃣 𓃤 .

(Note: to search for a character based on the Unicode name for that character, eg. w004, use the search box in the yellow area.)

Searching for pronunciations

Many of the hieroglyphs are associated with 1, 2 or 3 consonant pronunciations. These can be looked up as follows.

Type the sequence of consonants into the output box and highlight them. Then click on Look up from Latin. Hieroglyphs that match that character or sequence of characters will be displayed below the output box, and can be added to the output box by clicking on them. (Note that if you still have the search string highlighted in the output box those characters will be replaced by the hieroglyph.)

You will find the panel Latin characters useful for typing characters that are not accessible via your keyboard. The panel is displayed by clicking on the higher L in the grey bar to the left. Click on a character to add it to the output area.

For example, if you want to obtain the hieroglyph 𓎝, which is represented by the 3-character sequence wꜣḥ, add wꜣḥ to the output area and select it. Then click on Latin characters. You will see the character you need just above the SPACE button. Click on that hieroglyph and it will replace the wꜣḥ text in the output area. (Unhighlight the text in the output area if you want to keep both and add the hieroglyph at the cursor position.)

Input panels accessed from the vertical grey bar

The vertical grey bar to the left allows you to turn on/off a number of panels that can help create the text you want.

Latin characters. This panel displays Latin characters you are likely to need for transcription. It is particularly useful for setting up a search by pronunciation (see above).

Latin to Egyptian. This panel also displays Latin characters used for transcription, but when you click on them they insert hieroglyphs into the output area. These are 24 hieroglyphs represented by a single consonant. Think of it as a shortcut if you want to find 1-consonant hieroglyphs by pronunciation.

Where a single consonant can be represented by more than one hieroglyph, a small pop-up will present you with the available choices. Just click on the one you want.

Egyptian alphabet. This panel displays the 26 hieroglyphs that the previous panel produces as hieroglyphs. In many cases this is the quickest way of typing in these hieroglyphs.

by r12a at February 29, 2016 12:45 PM

February 25, 2016

ishida >> blog

New picker: Egyptian hieroglyphs

Picture of the page in action.
>> Use the picker

I have just published a picker for Egyptian Hieroglyphs.

This Unicode character picker allows you to produce or analyse runs of Egyptian Hieroglyph text using the Latin script.

Characters are grouped into standard categories. Click on one of the orange characters, chosen as a nominal representative of the class, to show below all the characters in that category. Click on one of those to add it to the output box. As you mouse over the orange characters, you’ll see the name of the category appear just below the output box.

Just above the orange characters you can find buttons to insert RLO and PDF controls. RLO will make the characters that follow it progress from right to left. Alternatively, you can select more controls > Output direction to set the direction of the output box to RTL/LTR override. The latter approach will align the text to the right of the box. I haven’t yet found a Unicode font that also flips the glyphs horizontally as a result. I’m not entirely sure about the best way to apply directionality to Egyptian hieroglyphs, so I’m happy to hear suggestions.

Alongside the direction controls are some characters used for markup in the Manuel de Codage, which allows you to prepare text for an engine that knows how to lay it out two-dimensionally. (The picker doesn’t do that.)

The Latin Characters panel, opened from the grey bar to the left, provides characters needed for transcription.

In case you’re interested, here is the text you can see in the picture. (You’ll need a font to see this, of course. Try the free Noto Sans font, if you don’t have one – or copy-paste these lines into the picker, where you have a webfont.)
𓀀𓅃𓆣𓁿
<-i-mn:n-R4:t*p->
𓍹𓇋-𓏠:𓈖-𓊵:𓏏*𓊪𓍺

The last two lines spell the name of Amenhotep using Manuel de Codage markup, according to the Unicode Standard (p 432).

by r12a at February 25, 2016 05:43 PM

W3C Blog

The CSVW Working Group has published three notes before closing

The CSV on the Web Working Group has published three Working Group Notes. These notes complement the set of Recommendations that were published in December 2015: the generic Model and the Metadata Vocabulary for Tabular Data on the Web, as well as the conversion of tabular data to JSON and RDF. The three notes are as follows.

  1. The work of the Working Group started with a rich collection of Use Cases and Requirements. Those requirements played an essential role in defining various aspects of the technology; it is therefore fitting that, at the end of the Working Group’s life, one should assess how the results of the group compare to the original goals. The final Use Cases and Requirements Note therefore completes the earlier drafts by systematically going through all the listed requirements and documenting how each requirement is covered, or not, by the final Recommendations. The results are actually quite satisfactory: there are only a few initial requirements that could not be answered, or only partially; the vast majority are covered by the new Recommendations!

  2. Whereas the Recommendations deal with CSV (or, more generally, tabular) data on the Web primarily in terms of data files served through HTTP(S), it is clear that tables embedded in HTML files are also a significant source of tabular data on the Web. The note on Embedding Tabular Metadata in HTML specifies how to add metadata, using the standard Metadata Vocabulary, to such data as well, thereby covering an obviously very important use case. Covering this case was not part of the group’s original charter, so this proposal remains only a Note; a possible future Working Group may pick it up and, based on usage experience, turn it into a bona fide Recommendation.

  3. Last but certainly not least, the Working Group has published a CSV on the Web Primer. While the specifications, beyond the systematic definition of all the features, also contain examples, they are nevertheless difficult to read and digest. In contrast, the Primer is a user-facing document, organized around the specific questions and problems a user may have when specifying metadata for their CSV data (see the illustrative snippet after this list). The WG hopes that this document will greatly help users to add metadata to their published data without getting lost in the arcane details of a formal specification. End users should certainly start here!
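
As a flavour of what the Primer walks you through, metadata for a simple two-column CSV file might look like this – a hand-written illustration, not an example copied from the Primer:

  {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "countries.csv",
    "tableSchema": {
      "columns": [
        { "name": "country",    "titles": "Country",    "datatype": "string"  },
        { "name": "population", "titles": "Population", "datatype": "integer" }
      ]
    }
  }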

With the publication of these notes the CSV on the Web Working Group has completed its chartered work and will officially be closed soon. However, recognizing that it is important to maintain an active community, a separate CSV on the Web Community Group has been set up, open to everyone, where usage patterns, technical issues and questions, implementation experiences, etc., can be discussed. Although the Community Group is not in charge of changing the published documents, it may still use the GitHub repository of the (then defunct) Working Group to record further issues or add new documents if needed. Those may then be used by a possible new incarnation of the Working Group in a few years, which can be set up if the community and the W3C membership so desire. If you are interested in this area in general, do join the Community Group!

As a final touch: the WG has published seven documents altogether, which is quite a lot. If you are a digital book user, you can also download an EPUB version that contains all seven specifications and notes in one place…

by Ivan Herman at February 25, 2016 03:52 PM

February 21, 2016

W3C Blog

W3C and Intel collaboration brings HTML5 Introduction course offering

The W3Cx MOOC training is a W3C initiative that started in June 2015. In just over six months, nearly 200K people enrolled in our HTML5 courses and the feedback has been overwhelmingly positive.


Building on these successful results, W3C is proud to strengthen its online course offering on edX with an HTML5 course at the beginner level. W3Cx and Intel Corporation put together the “HTML5 Introduction” course to encourage more students to enter the Web programming world.


Taught by Intel experts, this course presents the basic building blocks of Web design and style, using the basics of HTML5 and a few CSS features – the fundamentals for building modern Web sites.

HTML5 Introduction is a 6-week course starting on 4 April 2016. We encourage future Web developers to watch the course intro video and to enroll soon.

This introductory course nicely completes the “Learn HTML5 from W3C” XSeries group of three courses – all are open for registration:

  • HTML5 Introduction offers the basics of HTML5 to start writing your first Web page or site – starts 4 April 2016
  • HTML5 Part 1 is an intermediate level course requiring basic HTML5 and CSS knowledge – starts 16 May 2016
  • HTML5 Part 2 offers advanced techniques for creating apps and games – starts 27 June 2016

The HTML5 Introduction course allows participants to earn a verified certificate of achievement and as such be eligible to join the W3Cx Verified Students LinkedIn Group.

by Marie-Claire Forgue at February 21, 2016 05:33 PM

February 18, 2016

W3C Blog

The Evolving Web Security Strategy: The Web Authentication Working Group to end passwords

Passwords are one of the most irritating and least secure parts of our everyday Web experience. Users re-use passwords, so when a single server is hacked, millions are put at risk across multiple websites. We can’t expect users to remember long and complicated passwords. A new effort at W3C, the Web Authentication Working Group, is holding its first meeting on March 4th, next to the RSA conference. Working with the FIDO 2.0 Member Submission from members of the FIDO Alliance, W3C plans to help industry eliminate passwords and replace them with more secure and standardized ways of logging in, such as inserting a USB key into your device or activating a nearby smartphone. We at W3C believe these capabilities should be available to Web developers everywhere via open standards, just like the rest of the Web.

While it’s not the first attempt to get rid of passwords, this is the first attempt that looks like it will succeed, likely by virtue of being based on industry consensus and open standards rather than proprietary technology pushed by a single company. This new and exciting effort includes Google, Microsoft, Mozilla, Paypal, and many more – and is currently looking for new members. The W3C’s Working Groups get most of their work done by non-paid volunteers, so we in the Web Authentication Working Group are looking for people to put in the blood, sweat, and tears needed to get rid of passwords. As we’re holding our first face-to-face meeting next to the RSA conference on March 4th at Microsoft in San Francisco, we hope to get the right crowd. The meeting is already filling up, so sign up now via the Web form if you intend to join the Web Authentication Working Group as a W3C member, even if you haven’t joined quite yet. We are already in discussions with, and supported by, academics like the Prosecco team at INRIA, who are famous for breaking TLS. Academics and others who can’t reasonably become W3C members are welcome to join as Invited Experts if they have a background in security and cryptography. We’ll want as many eyes on these authentication standards as possible.

Fixing web authentication is part of a larger strategy for securing the Web, co-ordinated across many parts of the W3C. The Web of 2016 is no longer the Web of 1996 – password failures today can have exceedingly dangerous consequences when a bank account password or social media account is taken hostage. The Web was meant to share open data between researchers, but via new efforts at the W3C it is now increasingly used for monetary payments and even for interfacing with automobiles. So giant password breaches threaten everyone from ordinary web users to the ‘cybersecurity’ of nation-states.

Yet while lots of people are talking about cybersecurity, very few people know what to do! That is, except the technologists themselves, who at the W3C are busy fixing the fundamental protocols of the Web to make it more secure. The W3C, a global consortium founded by Tim Berners-Lee to safeguard the future of the Web, has taken on security as a topmost priority. Through the W3C’s myriad Working Groups – bottom-up groups that any member, or anyone able to demonstrate expertise, can join – a new array of security and cryptography capabilities is being built into the Web. This is necessary now more than ever, as the original Web was designed without a security model and, even worse, without privacy in mind. However, this can all be fixed. The core problem is that these Working Groups have to upgrade fundamentally insecure protocols – protocols designed before anyone even took security seriously – without breaking the Web that millions rely on.

The new Web Authentication Working Group is part of a larger security strategy at the W3C. In essence, there are two levels of problems facing security on the Web. The first level is the Network Level, the level that carries all Internet traffic, of which the Web is only a part. Network traffic is for the most part insecure, which allows not only nation-state actors with NSA-level capabilities but anyone in a cafe with open-source tools such as Wireshark and other tricks to snoop on and intercept your HTTP communications with a webserver when a ‘lock’ (TLS, formerly known as SSL) isn’t displayed in your browser. By co-ordinating with and building on standards from the Internet Engineering Task Force, who are in charge of upgrading the fundamental TLS protocol, the Web Application Security Working Group is making it harder for attackers to intercept traffic and send malicious code to your browser. So when you go to a website, you can be assured that you are really getting the website itself, not some malicious impostor trying to steal your data. From mature standards such as Content Security Policy to newer exciting work such as Subresource Integrity, the Web is step-by-step improving its fundamental security model: the Same Origin Policy.
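
As a small taste of the latter: Subresource Integrity lets a page insist that a script fetched from elsewhere has not been tampered with – the URL and hash value here are illustrative:

  <script src="https://cdn.example.com/framework.js"
          integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
          crossorigin="anonymous"></script>

If the downloaded file doesn’t match the hash, the browser refuses to run it.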

The second level is the Web Level: how can we secure not just the underlying network but the web applications themselves? Web applications like Gmail and Netflix run inside browsers (often in addition to being ‘native’ apps you can get through an app store) and are still one of the best ways to get cross-platform applications developed quickly. However, the primary programming language of the Web, JavaScript, rose to ascendancy rather by accident and didn’t have fundamental cryptographic functionality built in, ranging from generating random numbers to digital signatures. Thanks to the W3C Web Cryptography Working Group, three years after starting, every browser now has advanced cryptographic functionality built in via the Web Cryptography API – enough to create a whole new generation of cryptographically-aware Web applications. Over the next few months, the Web Cryptography Working Group will be testing interoperability and finalizing the specs to reflect the reality of implementation.
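
To give a feel for the API, here is a small illustrative sketch – not code from any particular application – that generates a signing key pair and signs a server-supplied challenge, the kind of primitive a password replacement is built from:

  var data = new TextEncoder().encode("challenge-from-server");
  crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false,                       // the private key is not extractable
    ["sign", "verify"]
  ).then(function (keyPair) {
    return crypto.subtle.sign(
      { name: "ECDSA", hash: "SHA-256" },
      keyPair.privateKey,
      data
    );
  }).then(function (signature) {
    // POST the signature (an ArrayBuffer) back to the server,
    // which verifies it against the user's registered public key
  });

Because the server only ever holds a public key, a database breach yields nothing an attacker can replay as a login.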

With the formation of the Web Authentication Working Group and the FIDO 2.0 Platform specifications, we finally have what rapidly appears to be industry consensus on a cryptographic replacement for passwords that will be both more secure and easier to use, and that will respect the privacy and security of users on the Web by following the Same Origin Policy. Via the W3C’s Royalty-Free Patent Policy, we’ll make sure these authentication standards are open and safe to implement in terms of patents, and hopefully by next year you’ll start seeing Web authentication without passwords in a browser near you.

We’d like to thank NGRC and the FIDO Alliance for their support, as well as everyone who attended the WebCrypto v.Next workshop.

by Harry Halpin at February 18, 2016 12:21 AM

February 14, 2016

Reinventing Fire

Justice in the End

Some of my international friends have asked what the recent death of Supreme Court Justice Antonin Scalia means for America, for our process, and for the election. I couldn’t fit it into a tweet, so I thought I’d share my understanding and opinions here. I don’t have any great insights or expertise, but I hope this is useful for those who haven’t delved into the peculiarities of US government and law.

The death of Justice Scalia leaves a seat open in the Supreme Court of the United States (SCOTUS), the third branch of government (the Judicial branch); the other two branches are Congress (the Legislative branch, comprised of the House and Senate), and the Presidency (the Executive branch).

The Supreme Court has 9 Justices, appointed for life. This means that whoever is appointed as a replacement for Scalia will likely affect the tone of American justice for decades after the President who appointed them has left office. Scalia was appointed in 1986, by President Reagan, and was a consistently conservative voice for 30 years, frequently writing scathing and sarcastic dissenting opinions (“minority reports”) for decisions he did not agree with, including the legalization of same-sex marriage, Obamacare, women’s right to abortion, civil rights, and many other progressive issues. Though he was intelligent, witty, and well-versed in the law, he was not kind in his judgments.

When Scalia was alive, the Supreme Court was almost evenly split between conservatives and progressives, with Chief Justice John Roberts, Clarence Thomas, Samuel Alito, and Antonin Scalia on the strongly conservative side, and Ruth Bader Ginsburg, Elena Kagan, Stephen Breyer, and Sonia Sotomayor on the moderate to strongly progressive side; the deciding vote has usually been the generally fair-minded, moderately conservative Anthony Kennedy. The death of Justice Scalia changes that balance. It’s expected that President Obama would nominate a progressive as Scalia’s replacement, and though he hasn’t yet named a candidate, conservative politicians have already attempted to block Obama’s appointment (in the true spirit of ♫whatever it is, I’m against it♫ ), leaving it to the next President to decide.

The Justices of the Supreme Court

It’s the duty and right of the sitting President to name replacement nominees to the Supreme Court (and Obama does intend to do so), and the duty and right of the Senate (not all of Congress) to approve these nominations. This has been highly politicized in the past few years, with more and more attempts by both conservatives and (to a lesser extent) progressives to block Supreme Court appointments, drawing out the debate, so there’s some wisdom in nominating a moderate Justice, in hopes of a speedy and non-contentious approval by the Senate. Notably, the nominee doesn’t have to be a current judge, or even a lawyer, but in reality, the Senate would be unlikely to approve anyone who isn’t a law professional (with good reason).

The nominee must get a simple majority in the Senate; currently, with 2 Senators from each of the 50 States, that means 51 approval votes.

The Republicans control the Senate, with 54 senators; the Democrats have only 44 senators; Independents make up the balance, with 2 senators (Bernie Sanders of Vermont and Angus King of Maine), who typically vote with the Democrats. While there are a few conservative Democratic senators, it’s likely that all Democratic and Independent senators will vote to appoint Obama’s nominee, whoever that might be. That’s only 46 votes, meaning at least 4 Republican senators will need to cross party lines to vote for the appointee… in an election year. That could be a tough sell for Obama.

But Obama has 342 days left in office, and the longest Supreme Court confirmation process, from nomination to resolution, was 125 days, back in 1916, when nominee Louis Brandeis “frightened the Establishment” by being “a militant crusader for social justice”. (Thanks, Rachel Schnepper!) In today’s sharply divided and fractured political system, I expect that we will set a new record for how long it takes to confirm a Supreme Court Justice, if it happens in the Obama administration at all.

If you did the math, you’ll have noticed that 46 + 4 = 50, not 51; luckily, if there’s a split vote in the Senate, the Vice President casts the deciding vote, and Joe Biden is closely aligned with President Obama.

If Obama can’t get the votes he needs for his nominee (a real possibility), he could wait until Congress adjourns for the year, and make a recess appointment, meaning a judicial selection while Congress is not in session; but this appointment would be temporary, less than 2 years, and the next President would certainly be the one to make the permanent appointment.

I’m reasonably confident (though not certain) that the next Supreme Court Justice will be a progressive, and will be appointed by President Obama, not the next President. But that wouldn’t mean that the implications of this for the 2016 Presidential election are any less notable! Other Justices (including the beloved Notorious RBG and “Swing Vote” Kennedy) may step down or even die during the term of the next President, meaning that the balance might shift yet again. We can’t ignore the fact that Bernie Sanders, a sitting Independent senator, will have a vote in the current Supreme Court nomination, while Hillary won’t, which will likely raise Bernie’s profile (for good or ill). And while the nomination process is underway, all the candidates will talk about who they’d appoint to the Supreme Court (keeping in mind that Obama probably doesn’t want the job), though I dearly hope they don’t get the chance. Finally, there’s the tiny chance that in a close race, the Supreme Court may decide who the next President is…

The Impact of the Supreme Court

It’s easy to underestimate the power the Supreme Court has on America’s domestic policy, and on people’s lives. What is legal or not is often (perhaps usually) decided not by Congress (the Legislative branch, which drafts, proposes, and votes on new federal laws) or the President (the Executive branch, which approves, implements, enforces, and administers those federal laws), but by the Supreme Court (the Judicial branch, which decides if federal laws adhere to the Constitution, and which acts as the final say on the application of federal laws, and on how state laws are affected by federal laws and the Constitution).

Some landmark policies that the average person associates generally with the US government were specifically decided by the Supreme Court:

  • the legality of a woman’s right to abortion (in the famous court case Roe v Wade)
  • whether states had the right to keep their schools segregated between black students and white students (and much earlier, whether African American slaves were entitled to citizenship)
  • whether same-sex couples can get married (and earlier, whether interracial couples could get married)
  • whether there is any limit on how much money a corporation or union can spend in elections (under the aegis of free speech)
  • the legality of some aspects of Obamacare (aka the Affordable Care Act), which determined if the law as a whole could be implemented
  • whether Florida could recount the ballots in the contested 2000 election between George Bush and Al Gore (though this was a rare instance… the Court doesn’t usually get involved in elections)

Some of these are issues that could have been addressed by specific federal laws, but which Congress never took up. For example, Congress has never made a federal law that makes same-sex marriage legal, and it probably would have been decades before that ever happened, if it ever did (politicians typically play it safe, because they have to try to get re-elected); but by reviewing lower (state and district) courts’ rulings on state laws through the lens of the federal Constitution, and by ruling on some federal laws, the Supreme Court made same-sex marriage legal in every state of the Union, with all the federal benefits of marriage. Congress could still make a law on this, one way or another, to settle details or try to overturn the Supreme Court’s decision (for example, by changing the Constitution itself), but for the foreseeable future, that ruling stands.

The Supreme Court decides which cases it will hear. On average, the Supreme Court of the United States (SCOTUS) hears 60–75 oral arguments (what we think of as a court case) per year, and decides another 50–60 cases on paper, without oral argument.

Every year, tens of millions of civil and criminal court cases are tried in US state courts; hundreds of thousands of those decisions are appealed to a higher state court. Matters of federal rather than state law are tried in the federal district courts, and appeals from those go to the federal courts of appeals, which are organized into regional circuits; in all, thousands of cases are appealed to the Supreme Court each year, of which it accepts a mere 1–2%. The federal courts also handle cases of major federal or interstate crimes, disputes between state governments or between a state government and the federal government, maritime cases where no state has jurisdiction, and cases involving bankruptcy or ambassadors.

So, the chance that the Supreme Court will hear any particular case is very slim, and it typically takes only the most important cases; but when it does rule on a case, that ruling sets the precedent for the rest of the country, at both the state and federal levels, and is rarely overturned.

Scalia’s Legacy

While it’s not polite to speak ill of the dead, and while I can mourn Scalia’s death as a person, I’ve long held a very low opinion of him, and I admit that I’m glad of any opportunity to shift the character of the Supreme Court in a more progressive, compassionate, and modern direction.

Many have painted Scalia as a patriot who’s made America better; here’s my dissenting opinion.

Scalia was clever, and I think it’s even more important for clever people to also strive to be good people; even more so if they are in a position of power. He may have been a good person to his friends and family, but he did not carry that over into how he served this country.

His writings struck me as insincere, and his claim to adhere to “Constitutional originalism” was belied by his whimsical interpretations of the US Constitution, such as his very modern stance that the 2nd Amendment ensured private ownership of guns, rather than the original emphasis on militias for national defense, and the absurd notion that “The Constitution is not a living document”, when the Constitution itself defines how to amend it.

And while he’s perhaps most famous for his dissenting opinions, it’s his majority rulings that have caused the most damage to America and Americans. Even beyond that, he used his judicial authority to step into the decisions of lower courts. For example, in the 2000 Presidential election, it was Scalia who personally intervened in the Florida court decision to halt the recount, and later the Supreme Court ruled not to let the votes be recounted, handing the election to George W. Bush.

Beyond his own rulings, his influence and legacy lie in giving voice, authority, and credibility to a radical conservatism that shaped a generation of legal thought, carried on by Alito and Roberts, which holds that interpreting the text of the law is more important than its applicability to modern society and technology. In other words, it claims that trying to imagine (in a ridiculous fantasy) the opinions of a person living over two centuries ago, when this country was yet unformed, is more relevant than a view informed by the country as it has since developed. Generously, this is truly “conservative”, preserving the prejudices and ignorance of bygone eras along with any wisdom; more pointedly, it was a convenient way to appear impartial while twisting the result to his own backwards ideological view.

Scalia’s rulings were often specious and inhumane, mere clever arguments based on selective interpretation of the wording of laws and the Constitution rather than attempts at applying justice. In dissenting from a ruling to reopen a death-penalty case, where most of the original witnesses had recanted their testimony, Scalia said, “Mere factual innocence is no reason not to carry out a death sentence properly reached.” For Scalia, it seems, the law was not a way to achieve social or personal fairness, but a pro-forma game whose rules were both strict and meaningless.

It’s hard to imagine someone as retrograde as Scalia being nominated or confirmed today, so I’m hopeful that we’ll have a more reasonable, just, and progressive Supreme Court in the next few months. This is how the Founding Fathers wanted this country to work… with each generation forging its own vision of a more perfect union, renewing the government to meet its own needs and desires, with the consistent thread of life, liberty, and the pursuit of happiness. And, in the end, justice.

by Shepazu at February 14, 2016 08:53 AM

February 09, 2016

W3C Blog

Added the ‘csvw’ prefix to the RDFa initial context

In accordance with the way the RDFa initial context is defined, the list of predefined prefixes has been extended by adding ‘csvw’. The corresponding vocabulary URI is http://www.w3.org/ns/csvw#; it refers to the metadata vocabulary for tabular data, like CSV or TSV, which was published as a Recommendation at the very end of 2015.

Reminder: the RDFa initial context provides a number of prefixes and a number of prefix-less properties that RDFa processors should recognize without the user being required to add those via a @prefix declaration.
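As a minimal illustrative sketch (the CSV file URL here is made up for the example), an RDFa processor that supports the initial context should extract triples from the following HTML fragment, even though no @prefix declaration for ‘csvw’ appears anywhere in the page:

  <!-- no prefix declaration needed: 'csvw' is now in the RDFa initial context -->
  <div typeof="csvw:Table">
    <a property="csvw:url" href="http://example.org/countries.csv">countries.csv</a>
  </div>

which should yield, in Turtle:

  [] a csvw:Table ;
     csvw:url <http://example.org/countries.csv> .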

by Ivan Herman at February 09, 2016 07:44 AM

February 05, 2016

W3C Blog

Presentation on PWP planned at the EPUB Summit

As reported a few days ago, the European Digital Reading Laboratory (EDRLab) is organizing an EPUB Summit in Bordeaux, France, in early April 2016. I am happy to be presenting our work there on Portable Web Publications, one of the major topics of discussion at W3C on the convergence of the Open Web Platform and the goals of the Digital Publishing Community. Also, W3C members are entitled to a discount of 50€ on the registration fee if they register before the 15th of February. Meet you in Bordeaux!

by Ivan Herman at February 05, 2016 10:12 AM