September 19, 2016

W3C Blog

Bringing Virtual Reality to the Web platform

The world of Virtual Reality in 2016 feels a lot like the world of Mobile in 2007: a lot of expectations around a new way to interact with users, a lot of innovations in hardware and software to make it a reality, and a lot of unknowns as to how these interactions would develop.

Having played a tiny role in making the Web a better platform for mobile devices, and with the feeling that Progressive Web Apps are finally bringing us where we need to be in that space, I have spent the past few months looking at where the Web needs to be to provide at least one of the major platforms for Virtual Reality experiences.

Beyond the expected application of VR to gaming and video consumption, many innovative use cases have emerged to make Virtual Reality a compelling platform for e-commerce, news (see for instance NYTVR), learning and discovery, communication and social inclusiveness, engineering, and many other use cases.

As such, VR feels to me like a big new set of opportunities for creativity and expression, with its more immersive and more direct interactions. The Web ought to be able to cater for this space of innovation.

The Web comes with well-known strengths in that space:

  • As the number of headsets and other associated VR devices grows by the day, the plasticity of the Open Web Platform, which adapts content and services to a great many device types varying in processing power, resolution, interactions and operating systems, no longer needs to be demonstrated, and is certain to give content and service providers a uniform platform on which to build.
  • As wearing a headset tends to isolate users from their external environments, there is a risk that VR experiences remain limited to intense but somewhat exclusive types of content or applications (e.g. games or videos); but the Web has proved excellent at providing an on-ramp to engaging users (as Progressive Web Apps demonstrate). While it’s hard to imagine immersing oneself in a real-estate VR experience while looking for a house, the idea of starting a VR experience while browsing a particularly appealing house on a real-estate Web site seems much more compelling.
  • The Web was created first and foremost to facilitate sharing, and the continued demonstration of the power of URLs to enable this some 25 years after its inception is a testament to the robustness and strength of that approach. VR would hardly be the first ecosystem to benefit from the reach and social effect enabled by the Web.
  • Finally, as a fundamentally open platform, that anyone can use, build on and contribute to building, the Web can ensure that the new space of creativity enabled by VR is not stifled by the rules and constraints of closed and proprietary platforms.

But to make these strengths applicable to VR, the Web obviously needs to provide the basic technical bricks that are necessary to build VR experiences.

Fortunately, many such technologies are already in place or are making good progress toward widespread deployment.

WebGL has provided the basic layer for 3D graphics for a number of years and has now reached widespread deployment.

The Gamepad API brings the necessary interface to the various types of devices used to navigate virtual experiences.

The Web Audio API features, among its many amazing capabilities, spatialized audio, providing a critical component to truly immersive experiences.

But critically, the ability to project graphics to VR headsets, taking into account their optical and geometrical specificities, was recently enabled experimentally via the WebVR API, which Mozilla started (and recently released in its nightly builds), soon joined by Google and Samsung in their respective browsers, and more recently by Microsoft.

While this collection of APIs can easily be perceived as a steep learning curve for many Web developers, another project pushed by Mozilla, A-Frame, demonstrates the expressivity gained by encapsulating many of these APIs in Web Components. With A-Frame, a few lines of HTML-like markup suffice to create a first VR-enabled scene, including all the plumbing needed to switch from regular browsing to the more immersive view.

WebVR is being developed in a W3C Community Group, but is not on the W3C standardization track yet. It will be one of the core topics of the upcoming W3C Workshop on Web & Virtual Reality I am organizing next month (October 19-20) in California. The goal of that event (open to all practitioners of the field) will be to establish the overall roadmap to standardization to make the Web a robust platform for Virtual Reality.

Let’s all work together to make sure Web & VR grow together harmoniously!

by Dominique Hazaël-Massieux at September 19, 2016 12:19 PM

September 16, 2016

ishida >> blog

New Persian character picker

Picture of the page in action.

A new Persian Character Picker web app is now available. The picker allows you to produce or analyse runs of Persian text using the Arabic script. Character pickers are especially useful for people who don’t know a script well, as characters are displayed in ways that aid identification.

The picker is able to produce UN transcriptions of the text in the box. The transcription appears just below the input box, where you can copy it, move it into the input box at the caret, or delete it. In order to obtain a full transcription it is necessary to add short vowel diacritics to places that could have more than one pronunciation, but the picker can work out the vowels needed for many letter combinations.

See the help file for more information.

by r12a at September 16, 2016 06:26 AM

September 15, 2016

W3C Blog

HTML – from 5.1 to 5.2

There is a First Public Working Draft of HTML 5.2. There is also a Proposed Recommendation of HTML 5.1. What does that mean? What happened this year, what didn’t? And what next?

First, the Proposed Recommendation. W3C develops specifications, like HTML 5.1, and when they are “done”, as agreed by the W3C, they are published as a “Recommendation”. Which means what it says – W3C Recommends that the Web use the specification as a standard.

HTML 5.0 was published as a Recommendation a bit over 2 years ago. It was a massive change from HTML 4, published before the 21st Century began. And it was a very big improvement. But not everything was as good as it could be.

A couple of years before the HTML 5 Recommendation was published, a decision was taken to get it done in 2014. Early this year, we explained that we were planning to release HTML 5.1 this year.

There is an implementation report for HTML 5.1 that shows almost all of the things we added since HTML 5.0 are implemented, and work out there on the Web already. Some things that didn’t work, or did but don’t any more, were removed.

HTML 5.1 certainly isn’t perfect, but we are convinced it is a big improvement over HTML 5.0, and so it should become the latest W3C Recommendation for HTML. That’s why we have asked W3C to make it a Proposed Recommendation. That means it gets a formal review from W3C’s members to advise Tim Berners-Lee whether this should be a W3C Recommendation, before he makes a decision.

Meanwhile, we are already working on a replacement. We believe HTML 5.1 is today the best forward-looking, reality-based HTML specification ever. So our goal with HTML 5.2 is to improve on that.

As well as fixing bugs people find in HTML 5.1, we are working to describe HTML as it really will be in late 2017. By then Custom Elements are likely to be more than just a one-browser project and we will document how they fit in with HTML. We expect improvements in the ability to use HTML for editing content, using e.g. contenteditable, and perhaps some advances in JavaScript. Other features that have been incubating, for example in the Web Platform Incubator Community Group, will reach the level of maturity needed for a W3C Recommendation.

We have wanted to make the specification of HTML more modular, and easier to read, for a long time. Both of those are difficult, time-consuming jobs. They are both harder to do than people have hoped over the last few years. We have worked on strategies to deal with making HTML more modular, but so far we have only broken out one “module”: ARIA in HTML.

We hope to break out at least one more substantial module in the next year. Whether it happens depends on sufficient participation and commitment from across the community.

We will further improve our testing efforts, and make sure that HTML 5.2 describes things that work, and will be implemented around the Web. We have developed a process for HTML 5.1 that ensures we don’t introduce things that don’t work, and remove things already there that don’t reflect reality.

And we will continue working to a timeline, with the HTML 5.2 specification heading for Recommendation around the end of 2017.

By which time, we will probably also be working on a replacement for it, because the Web seems like it will continue to develop for some time to come…

by Charles McCathie Nevile at September 15, 2016 11:45 AM

Just how should we share data on the Web?

The UK government is currently running a survey to elicit ideas on how it should update its open data portal. As one of the oldest such portals, despite various stages of evolution and upgrade, it is, unsurprisingly, showing signs of age. Yesterday’s blog post by Owen Boswarva offers a good summary of the kinds of issues that arise when considering the primary and secondary functions of a data portal. Boswarva emphasizes the need for discovery metadata (title, description, issue date, subject matter etc.), which is certainly crucial, but so too are structural metadata (use our Tabular Metadata standards to describe your CSV, for example), licensing information, the use of URIs as identifiers for and within datasets, information about the quality of the data, location information, update cycle, contact point, feedback loops, usage information and more.

It’s these kinds of questions that gave rise to the Data on the Web Best Practices WG, whose primary document is now at Candidate Recommendation. We need the likes of Owen Boswarva and data portals around the world to help us gather evidence of implementation, of course. The work is part of a bigger picture that includes two ancillary vocabularies that can be used to provide structured information about data quality and dataset usage, the outputs of the Spatial Data on the Web Working Group, in which we’re collaborating with fellow standards body OGC, and the Permissions and Obligations Expression WG that is developing machine readable license terms and more, beginning with the output of the ODRL Community Group.

A more policy-oriented view is provided by a complementary set of Best Practices developed by the EU-funded Share-PSI project. It was under the aegis of that project that the role of the portal was discussed at great length at a workshop I ran back in November last year. That showed that a portal must be a lot more than a catalog: it should be the focus of a community.

Last year’s workshop took place a week after the launch of the European Data Portal, itself a relaunch in response to experience gained through running earlier versions. One of the aims of that particular portal is that it should act as a gateway to datasets available throughout Europe. That implies commonly agreed discovery metadata standards, for which W3C Recommends the Data Catalog Vocabulary, DCAT. However, it’s not enough. What profile of DCAT should you use? The EU’s DCAT-AP is a good place to start, but how do you validate against that? Enter SHACL, for example.

Those last points highlight the need for further work in this area which is one of the motivations for the Smart Descriptions & Smarter Vocabularies (SDSVoc) workshop later this year that we’re running in collaboration with the VRE4EIC project. We also want to talk in more general terms about vocabulary development & management at W3C.

Like any successful activity, if data is to be discoverable, usable, useful and used, it needs to be shared among people who have a stake in the whole process. We need to think in terms of an ecosystem, not a shop window.

by Phil Archer at September 15, 2016 10:12 AM

September 14, 2016

ishida >> blog

Timeline: 5 dynasties & 10 kingdoms

This shows the durations of dynasties and kingdoms of China in the 900s. Click on the image below to see an interactive version that shows a guide that follows your cursor and indicates the year.

Chart of timelines

See a map of territories around 944 CE.

by r12a at September 14, 2016 08:54 AM

September 13, 2016

W3C Blog

Portable Web Publications Use Cases and Requirements FPWD

How do publications differ from web sites? What are the nuances of publishing on the web and making use of the tools of the Open Web Platform? Do publishers really need more than linked web sites? Yes, we do! Portable Web Publications Use Cases and Requirements provides detailed use cases and requirements from the Digital Publishing Interest Group, focusing on two primary issues. These use cases look at the portability of published works, which allow users to transfer their books, articles, and magazines from state to state and device to device. The document also seeks to define the book or publication as a rightful citizen of the Open Web Platform. Thousands of years of successful history, knowledge and information sharing in easily consumable, producible, and storable formats must be recognized as we focus on the tools of the Open Web Platform and what it means for Publishers, Authors, and Readers today. We welcome your feedback on GitHub.

by Tzviya Siegman at September 13, 2016 03:17 PM

September 12, 2016

W3C Blog

Dave Raggett at Industry of Things World

IoTW 2016 logo

It is our pleasure to announce that Dave Raggett, W3C lead for the Web of Things; and Georg Rehm, Manager of the German / Austrian Office of W3C, will hold a workshop at Industry of Things World in Berlin, Germany on Monday September 19, 2016.

Workshop description: The value of the Internet of Things will be in the services, especially those that combine different sources of information. However, today there is a severe lack of interoperability across different platforms. This will increase risks and lower the return on investment. We can expect the striking heterogeneity in standards to continue as different platforms serve different needs. This workshop will provide an opportunity to discuss properties and characteristics of overarching umbrella standards that are able to bridge the gaps between platforms. The big challenge is to enable semantic interoperability and end-to-end security. The workshop will introduce the work being done in the World Wide Web Consortium (W3C), with its Web of Things group, in collaboration with other alliances and standards development organisations. These future Web of Things standards are being designed on the foundation of the rich set of existing W3C standards such as RDF, XML, OWL and other Semantic Web technologies. To enable a high level of interaction among the attendees, the number of places at the workshop is strictly limited, so please register as soon as possible.

by Coralie Mercier at September 12, 2016 01:48 PM

September 07, 2016

ishida >> blog

Notes on case conversion

Examples of case conversion.

These are notes culled from various places. There may well be some copy-pasting involved, but I did it long enough ago that I no longer remember all the sources. But these are notes, it’s not an article.

Case conversions are not always possible in Unicode by applying an offset to a codepoint, although this can work for the ASCII range by adding 32, or by adding 1 for many other characters in the Latin extensions. There are many cases where the corresponding cased character is in another block, or in an irregularly offset location.
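This can be illustrated with a short sketch (Python; the `ascii_to_upper` helper name is made up for illustration):

```python
def ascii_to_upper(ch):
    """Uppercase a-z by subtracting 32 from the code point (ASCII only)."""
    if 'a' <= ch <= 'z':
        return chr(ord(ch) - 32)
    return ch

print(ascii_to_upper('a'))   # A

# Many pairs in the Latin extensions differ by 1 instead of 32:
print(ord('ć') - ord('Ć'))   # 1

# And elsewhere no fixed offset works at all; real conversion needs the
# Unicode case mappings, e.g. the one-to-two mapping of German ß:
print('ß'.upper())           # SS
```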

In addition, there are linguistic issues that mean that simple mappings of one character to another are not sufficient for case conversion.

In German, the uppercase of ß is SS. German and Greek cannot, however, be easily transformed from upper to lower case: German because SS could be converted either to ß or ss, depending on the word; Greek because all tonos marks are omitted in upper case, eg. does ΑΘΗΝΑ convert to Αθηνά (the goddess) or Αθήνα (capital of Greece)? German may also uppercase ß to ẞ sometimes for things like signboards.

Also Greek converts uppercase sigma to either a final or non-final form, depending on the position in a word, eg. ΟΔΥΣΣΕΥΣ becomes οδυσσευς. This contextual difference is easy to manage, however, compared to the lexical issues in the previous paragraph.
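Python 3’s built-in string methods apply the Unicode full case mappings, which is enough to illustrate two of these points (a sketch; the lexical and locale issues above still need dictionary or CLDR/ICU support):

```python
# German ß uppercases to SS: one character becomes two.
print('straße'.upper())             # STRASSE

# Lowercasing applies the contextual final-sigma rule automatically.
odysseus = 'ΟΔΥΣΣΕΥΣ'.lower()
print(odysseus[3], odysseus[-1])    # σ ς  (medial vs. final sigma)

# The reverse trip is lossy: without a dictionary there is no way to know
# whether STRASSE came from straße or strasse.
```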

In Serbo-Croatian there is an important distinction between uppercase and titlecase. The single letter dž converts to DŽ when the whole word is uppercased, but Dž when titlecased. Both of these forms revert to dž in lowercase, so there is no ambiguity here.

In Dutch, the titlecase of ijsland is IJsland, ie. the first two letters have to be titlecased.

In Greek, tonos diacritics are dropped during uppercasing, but not dialytika. Greek diphthongs with tonos over the first vowel are converted during uppercasing to no tonos but a dialytika over the second vowel in the diphthong, eg. Νεράιδα becomes ΝΕΡΑΪΔΑ. A letter with both tonos and dialytika above drops the tonos but keeps the dialytika, eg. ευφυΐα becomes ΕΥΦΥΪΑ. Also, contrary to the initial rule mentioned here, Greek does not drop the tonos on the disjunctive eta (usually meaning ‘or’), eg. ήσουν ή εγώ ή εσύ becomes ΗΣΟΥΝ Ή ΕΓΩ Ή ΕΣΥ (note that the initial eta is not disjunctive, and so does drop the tonos). This is to maintain the distinction between ‘either/or’ ή from the η feminine form of the article, in the nominative case, singular number.

A Greek titlecased vowel, ie. a vowel that is uppercased at the start of a word, retains its tonos accent, eg. Όμηρος.

Turkish, Azeri, Tatar and Bashkir pair dotted and undotted i’s, which requires special, language-specific handling for case conversion. For example, the name of the second largest city in Turkey is “Diyarbakır”, which contains both the dotted and dotless letters i. When rendered into upper case, this word appears like this: DİYARBAKIR.
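Generic (locale-independent) Unicode case mapping gets this wrong, which is why tailored, language-specific conversion (e.g. via ICU/CLDR) is needed. A sketch in Python, with a deliberately minimal, hypothetical `turkish_upper` helper:

```python
city = 'diyarbakır'   # contains both dotted i and dotless ı

# The generic mapping folds i to I, losing the Turkish distinction:
print(city.upper())          # DIYARBAKIR (wrong for Turkish)

# A minimal Turkish tailoring: i pairs with İ, ı pairs with I.
def turkish_upper(s):
    return s.replace('i', 'İ').replace('ı', 'I').upper()

print(turkish_upper(city))   # DİYARBAKIR
```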

Lithuanian also has language-specific rules that retain the dot over i when combined with accents, eg. i̇̀ i̇́ i̇̃, whereas the capital I has no dot.

Sometimes European French omits accents from uppercase letters, whereas French Canadian typically does not. However, this is more of a stylistic than a linguistic rule. Sometimes French people uppercase œ to OE, but this is mostly due to issues with lack of keyboard support, it seems (as is the issue with French accents).

Capitalisation may ignore leading symbols and punctuation for a word, and titlecase the first casing letter. This applies not only to non-letters: a letter such as the (non-casing version of the) glottal stop, ʔ, may be ignored at the start of a word, and the following letter titlecased, in IPA or Americanist phonetic transcriptions. (Note that, to avoid confusion, there are separate case-paired characters available for use in orthographies such as Chipewyan, Dogrib and Slavey. These are Ɂ and ɂ.)
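A sketch of that rule in Python (the `titlecase_word` helper is hypothetical; a character counts as cased here if uppercasing or lowercasing changes it):

```python
def titlecase_word(word):
    """Titlecase the first cased character, skipping leading caseless ones."""
    for i, ch in enumerate(word):
        if ch.upper() != ch or ch.lower() != ch:   # ch is cased
            return word[:i] + ch.title() + word[i + 1:].lower()
    return word                                     # no cased characters

print(titlecase_word('"quoted"'))   # "Quoted"
print(titlecase_word('ʔeʔ'))        # ʔEʔ  (caseless glottal stop skipped)
print(titlecase_word('ɂeʔ'))        # Ɂeʔ  (the case-paired ɂ is titlecased)
```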

Another issue for titlecasing is that not all words in a sequence are necessarily titlecased. German uses capital letters to start nouns, but not verbs or adjectives. French and Italian may expect to titlecase the ‘A’ in “L’Action”, since that is the start of a word. In English, it is common not to titlecase words like ‘for’, ‘of’, ‘the’ and so forth in titles.

Unicode provides only algorithms for generic case conversion and case folding. CLDR provides some more detail, though it is hard to programmatically achieve all the requirements for case conversion.

Case folding is a way of converting to a standard sequence of (lowercase) characters that can be used for comparisons of strings. (Note that this sequence may not represent normal lowercase text: for example, both the uppercase Greek sigma and lowercase final sigma are converted to a normal sigma, and the German ß is converted to ‘ss’.) There are also different flavours of case folding available: common, full, and simple.
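In Python this is exposed as str.casefold(), which implements full case folding; a sketch of the points above:

```python
# Folded strings are for comparison, not for display:
print('straße'.casefold())              # strasse
print('STRASSE'.casefold())             # strasse

# Capital sigma and final sigma both fold to the ordinary lowercase sigma:
print('Σ'.casefold(), 'ς'.casefold())   # σ σ

def same_ignoring_case(a, b):
    return a.casefold() == b.casefold()

print(same_ignoring_case('Maße', 'MASSE'))   # True
```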

by r12a at September 07, 2016 04:03 PM

August 31, 2016

W3C Blog

Memento at the W3C

The W3C Wiki and the W3C specifications are now accessible using the Memento “Time Travel for the Web” protocol. This is the result of a collaboration between the W3C, the Prototyping Team of the Los Alamos National Laboratory, and the Web Science and Digital Library Research Group at Old Dominion University.

The Memento protocol is a straightforward extension of HTTP that adds a time dimension to the Web. It supports integrating live web resources, resources in versioning systems, and archived resources in web archives into an interoperable, distributed, machine-accessible versioning system for the entire web. The protocol is broadly supported by web archives. Recently, its use was recommended in the W3C Data on the Web Best Practices, when data versioning is concerned. But resource versioning systems have been slow to adopt it. Hopefully, the investment made by the W3C will convince others to follow suit.

Memento is formally specified in RFC7089; a brief overview is available from the Memento web site. In essence, the protocol associates two special types of resources with a web resource, both made discoverable using typed links in the HTTP Link header. A TimeGate is capable of datetime negotiation, a variant on content negotiation. It provides access to the version of the web resource as it existed around a preferred datetime expressed by a client using the Accept-Datetime header; the version resource itself includes a Memento-Datetime header, which expresses the resource’s actual version datetime. A TimeMap provides an overview of versions of the web resource and their version datetimes. The need for datetime negotiation had already been suggested by Tim Berners-Lee in his W3C Note about Generic Resources, but it was not until 2009 that datetime negotiation was effectively introduced in a preprint, Memento: Time Travel for the Web.
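As a client-side sketch, the Accept-Datetime value is an ordinary HTTP date (RFC 1123 format), so it can be built with the Python standard library; the `accept_datetime_value` helper name is made up for illustration:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def accept_datetime_value(dt):
    """Format an aware datetime as an Accept-Datetime header value."""
    return format_datetime(dt.astimezone(timezone.utc), usegmt=True)

value = accept_datetime_value(datetime(2016, 9, 1, tzinfo=timezone.utc))
print(value)   # Thu, 01 Sep 2016 00:00:00 GMT

# Sent to a TimeGate, e.g.:
#   curl -L -H "Accept-Datetime: Thu, 01 Sep 2016 00:00:00 GMT" <timegate-uri>
# the response's Memento-Datetime header then tells the client the actual
# version datetime of the representation it received.
```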


Memento provides a bridge between the present and the past Web

Adding Memento support to versioning systems allows a client to uniformly access the version of a resource that was active at a certain moment in time (TimeGate) and to obtain its version history (TimeMap). When a version page in a system that supports Memento links to a resource that resides in another system that supports Memento, a client can uniformly access the version of the linked resource that was active at the same moment in time. If the linked resource is in a system that does not support Memento – it does not expose a TimeGate – the client can fall back to a default TimeGate that operates across web archives and retrieve an archived resource using the uniform datetime negotiation approach. Alternatively, the client can resort to the TimeGate of a specific web archive, such as that of the Internet Archive or the Portuguese Web Archive. But, while resource versioning systems hold on to their entire resource history, web archives merely store discrete observations of (some) web resources. As such, with pages retrieved from web archives, there is no certainty that the archived page was active at that same time, but rather only around that same time.

A variety of tools is available to add support to systems that handle resource versions and expose associated APIs. Memento support was added to the W3C Wiki pages by deploying the Memento Extension for MediaWiki. Memento support for W3C specifications was realized by installing a Generic TimeGate Server for which a handler was implemented that interfaces with the versioning capabilities offered by the W3C API.

Memento can be leveraged programmatically, for example, by adding Accept-Datetime headers to curl commands, or by using the Python Memento Client Library. The Time Travel portal exposes an API that covers web archives and resource versioning systems with Memento support. The API can, for example, be used to construct a URI that redirects to the version of a resource as it existed around a given date.

Browsers do not yet natively support Memento, but its cross-time and cross-server capabilities can be experienced by installing the Memento extension for Chrome. Try it out for yourself. Browse over to the W3C AWWW and pick some dates in the extension’s calendar between 1 September 2002 and 1 September 2004. Navigate to the version of the specification that was current at the selected dates by right clicking the page and choosing “Get near saved date …” from the context menu. Notice how the centrality of REST in the specification diminishes over time. In each version, find the reference to the IANA URI Registry and right click the link, this time choosing “Get near memento date …” to see the Registry as it existed around the time of the version of the AWWW specification you are on. You will retrieve versions of the Registry from web archives and notice its evolution over time, for example around 1 September 2002 and around 1 September 2004. Compare the archived state of the Registry with its current state by right clicking in an archived page and choosing “Get at current date”.

Further pointers:

by Herbert Van De Sompel at August 31, 2016 07:24 AM

August 30, 2016

ishida >> blog

Language subtag tool now links to Wikipedia

The language subtag lookup tool now has links to Wikipedia search for all languages and scripts listed. This helps for finding information about languages, now that SIL limits access to their Ethnologue, and offers a new source of information for the scripts listed.

Picture of the page in action.

by r12a at August 30, 2016 10:38 AM

August 26, 2016

W3C Blog

Building Blocks to Blockchains: a Report on the W3C Blockchains and the Web Workshop

In June, W3C hosted a workshop to determine whether there were any aspects of blockchains that intersected with Web technologies, and if there were any specific technologies that were mature enough to consider for incubation toward standardization. We had lots of promising discussions and identified several next steps. You can read more details in the W3C Blockchains and the Web workshop report, released today.

Following the success of the workshop, we have begun to coordinate blockchain activities in the newly-formed Blockchain Community Group, which has chairs from Asia (Youngwhan “Nick” Lee), Europe (Marta Piekarska), and North America (Doug Schepers). The Blockchain CG has a regular coordination meeting on Thursdays, and is planning to start topic-specific short-term technical teleconferences as needed.

Blockchain comprises a broad set of cross-domain technologies, and thus our two chief tasks are to:

  1. monitor the work of other groups (Web Payments, Internet of Things, etc.) to make sure we are aware of what is happening in those fields, as well as groups outside of W3C, so we can complement their work; and
  2. propose use cases beyond what is happening in existing working groups to ensure we identify the applications of blockchain that do and do not make sense. There is not yet consensus on which applications of blockchain technology are appropriate uses of the technology, but there is general agreement on a subset of useful applications, and these are the ones which we plan to dedicate resources to.

The Blockchain CG is intended partly to coordinate the activities of other topic-specific CGs, in addition to working on its own reports and deliverables. We will continue to work closely together with the new Blockchain Digital Assets Community Group (formed as an outcome of the workshop), and participate in the already-active Interledger Payments CG. Moreover we will collaborate with the Verifiable Claims task force of the Web Payments Interest Group.

As part of this ongoing coordination, we are planning an informal meeting during the W3C’s TPAC meeting in Lisbon, Portugal. It will take place on Tuesday 20th of September at 10:30–12:30, and we will also have a short session during the Web Payments Interest Group f2f on Friday 23 September; the Interledger Payments CG is also meeting at TPAC, on Thursday, 23rd of September. Leading up to TPAC, we will build an agenda to make the best use of the time we have together in Lisbon. If you wish to attend TPAC, you must be a member of one of the Community Groups or Working Groups meeting there, and register by September 2.

After the September face-to-face meeting, the Blockchain CG will continue with regular calls, and will incubate low-hanging fruit by beginning to draft specifications and build a community interested in Recommendation-track work, perhaps including the Chainpoint specification, which was discussed in detail at the workshop.

We are also considering a second blockchain workshop, possibly on the US West Coast, where we will work on more technical aspects and specifications that can contribute to W3C standardization, with particular focus on client-side technologies.

By the end of the year, we hope to have laid the groundwork for possible candidates for formal standardization.

We encourage interested people and organizations to join the W3C Blockchain Community Group to keep informed about future developments. We are expanding the scope of that group to include coordination for our various activities around blockchains, including links to related specific-topic community groups, such as the new Blockchain Digital Assets CG.

This post was co-written by Marta Piekarska and Doug Schepers.

by Doug Schepers at August 26, 2016 10:39 AM

August 25, 2016

ishida >> blog

Right-to-left scripts

These are just some notes for future reference. The following scripts in Unicode 9.0 are normally written from right to left.

Scripts containing characters with the bidirectional property Arabic_Letter (AL) are marked with an asterisk. The remaining scripts have characters with the bidirectional property Right_To_Left (R):
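These properties are Unicode’s bidirectional character classes, which Python’s unicodedata module can report (a sketch, one sample letter per script; Hebrew is included only as the classic R example):

```python
import unicodedata

# AL = Arabic_Letter (the asterisked scripts); R = Right_To_Left.
print(unicodedata.bidirectional('ا'))        # AL (Arabic alef)
print(unicodedata.bidirectional('ܐ'))        # AL (Syriac alaph)
print(unicodedata.bidirectional('\u0780'))   # AL (Thaana haa)
print(unicodedata.bidirectional('א'))        # R  (Hebrew alef)
```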

In modern use

Arabic *
Syriac *
Thaana *

Limited modern use

Mende Kikakui (small numbers)
Old Hungarian
Samaritan (religious)

Not in modern use

Imperial Aramaic
Old South Arabian
Old North Arabian
Old Turkic
Pahlavi (Inscriptional)
Parthian (Inscriptional)

by r12a at August 25, 2016 07:32 PM

August 17, 2016

Reinventing Fire

Topic of Cancer

I’m now officially a cancer survivor! Achievement unlocked!

A couple weeks ago, on July 27th, during a routine colonoscopy, they found a mass in my ascending colon which turned out to have some cancer cells.

I immediately went to UNC Hospital, a world-class local teaching hospital, and they did a CT scan on me. There are no signs that the cancer has spread. I was asymptomatic, so they caught it very early. The only reason I did the colonoscopy is that there’s a history of colon cancer in my family.

Yesterday, I had surgery to remove my ascending colon (an operation they call a “right colectomy”). They used a robot (named da Vinci!) operated by their chief GI oncology surgeon, and made 5 small incisions: 4 on the left side of my belly to cut out that part of the right colon; and a slightly larger one below my belly to remove the tissue (ruining my bikini line).

Everything went fine (I made sure in advance that this was a good robot and not a killer robot that might pull a gun on me), and I’m recovering well. I walked three times today so far, and even drank some clear liquids. I’ll probably be back on my feet and at home sometime this weekend. Visitors are welcome!

There are very few long-term negative effects from this surgery, if any.

They still don’t know for certain what stage the cancer was at, or if it’s spread to my lymph nodes; they’ll be doing a biopsy on my removed colon and lymph nodes to determine if I have to do chemotherapy. As of right now, they are optimistic that it has not spread, and even if it has, the chemo for this kind of cancer is typically pretty mild. If it hasn’t spread (or “metastasized”), then I’m already cured by having the tumor removed. In either case, I’m going to recover quickly.

My Dad had colon cancer, and came through fine. My eldest sister also had colon cancer over a decade ago, and it had even metastasized, and her chemo went fine… and cancer treatments have greatly improved in the past few years.

So, nobody should worry. I didn’t mention it widely, because I didn’t want to cause needless grief to anyone until after the operation was done. Cancer is such a scary word, and I don’t think this is going to be as serious as it might otherwise sound.

I’ll be seeing a geneticist in the coming weeks to determine exactly what signature of cancer I have, so I know what I’m dealing with. And I want to give more information to my family, because this runs in our genes, and if I’d gotten a colonoscopy a few years ago, they could have removed the polyp in the early stages and I’d have never developed cancer. (And because I’m otherwise healthy, I probably wouldn’t have gotten the colonoscopy if I hadn’t had insurance, which I probably wouldn’t have had if Obamacare didn’t mandate it. Thanks, Obama!)

Yay, science!

Future Plans

So, the cliché here is for me to say that this has opened my eyes to the ephemerality and immediacy of life, and that I’m planning to make major decisions in my life that prioritize what I truly value, based on my experience with cancer.

But the fact is, I’ve already been doing that recently, and while the cancer underscores this, I’ve already been making big plans for the future. I’ll post soon about some exciting new projects I’m trying to get underway, things that are far outside my comfort zone for which I’ll need to transform myself (you know, in a not-cancerous sort of way). I’ve already reduced my hours at W3C to 50%, and I’m looking at changing my role and remaining time there; I love the mission of W3C, which I see as a valuable kind of public service, so no matter what, I’ll probably stay involved there in some capacity for the foreseeable future. But I feel myself pulled toward building software and social systems, not just specifications. Stay tuned for more soon!

I’m optimistic and excited, not just about leaving behind this roadbump of cancer, but of new possibilities and new missions to change the world for the better in my own small ways.


Today (Friday, 26 August), I got the results of my biopsy from my oncologist, and I’m pleased to announce that I have no more colon cancer! The results were that the cancer was “well-differentiated, no activity in lymph nodes”, meaning that there was no metastasis, and I’m cured. This whole “adventure” emerged, played out, and concluded in just a month: I heard there was a tumor, was diagnosed with cancer, consulted an oncologist, had surgery, recovered, and got my cancer-free results all in 30 days. It felt much longer!

by Shepazu at August 17, 2016 08:39 PM

August 16, 2016

W3C Blog

W3C Day in Spain: Web of Things to boost industrial productivity

Recently the W3C Spanish Office organized the W3C Day in Spain, an annual conference held in different cities across Spain. The objective of this event is to facilitate a collaborative environment that enables local stakeholders to share their knowledge of ICTs. This W3C Day is considered one of the major forums in Spain for discussing the future of Web technologies in industry, academia, the public sector, and society in general.

Dave Raggett during his talk at W3C Day in Spain 2016

This 13th edition, hosted by the CTIC Technology Centre in Gijón (Spain), focused on the Web of Things and its application in industry, aiming to raise awareness of new IoT-related technologies and to boost the concept of Industry 4.0 in Spain. The event gathered over 220 experts who interacted with the speakers during five dynamic panels. Most of the attendees came from Spain, although there were a few international representatives from Latin American and Eastern European countries.

The agenda was full of high-level speakers from leading Spanish corporations, national government, and prestigious universities. All the keynotes and panels focused on the evolution of the Internet and the Web towards the Internet of Things (IoT) paradigm, framed by its potential interest for industry. The experts' talks addressed these topics from a high-level perspective, introducing the challenges and opportunities in their sectors.

The first keynote speaker was Szymon Lewandowski (Policy Officer at DG CONNECT, European Commission), who presented the European Commission's efforts to move industry towards a Digital Single Market. His speech was clear and concise, encouraging companies to evolve their strategies through the right use of data and the adoption of Web and Internet standards.

Dave Raggett (W3C Web of Things) presented the W3C's work on this promising new concept, which aims to solve the problem of interoperability in the IoT. His talk, titled ‘Web of Things: Web standards to bridge the IoT silos’, provided good motivation to start specific discussions on subtopics such as interoperability, security, big data, cloud computing, and Industry 4.0 strategies.

Panelists during the W3C Day in Spain 2016

Over a full day, keynote speakers and panelists discussed how open standards could increase productivity, optimize business processes, and avoid silos of things (and information). The event served as a good starting point for the Spanish industry to shift its mindset towards the Web of Things. We are already thinking about which topics to discuss next year.

Interested in learning more? Read a complete report about the W3C Day in Spain 2016 and join the W3C Web of Things Interest Group if you want to make (and change) the rules for a better Internet ecosystem.

by Martin Alvarez-Espinar at August 16, 2016 10:13 AM

August 09, 2016

W3C Blog

W3C China celebrated its 10th Anniversary in July

The W3C Beihang Host celebrated W3C China's 10th anniversary at Beihang University on July 9th, 2016. To honor a fruitful first 10 years and look forward to a brighter future, the W3C China team invited the local Web community to celebrate this great moment together.

The event was organized in three sessions: Core Web Technology, Future of the Web, and Web & Industry. Eleven speakers from the W3C team, W3C members and some noted researchers shared their insights. More than 200 participants attended on site, and about 20,000 remote attendees watched the video stream.

The Core Web Technology session focused on the current achievements of the Open Web Platform, with presentations on Web design principles, Web applications and Web accessibility. In the Future of the Web session, the speakers covered hot topics such as blockchain, virtual reality and data visualization; Prof. Wei-Tek Tsai, who had just returned from the W3C Blockchain workshop, shared his experience there as well as his vision for blockchain. The Web & Industry session was mainly devoted to W3C's efforts in vertical industries such as payments, automotive and the Web of Things. Dr. Jie Bao, a former W3C Linked Data Activity participant, talked about the use of linked data in the financial industry and brought the audience a fresh new angle on linked data technologies.

Prof. Jinpeng Huai, former Beihang Host representative, ex-President of Beihang University, and Vice Minister of the Ministry of Industry and Information Technology, joined the event and expressed his best wishes for the future of W3C and the Web.

A brief history of W3C in China

In the spring of 2006, the W3C China Office was launched at Beihang University, and Beihang University has hosted W3C in China ever since. In 2008, the W3C China Office took over the business of the W3C Hong Kong Office, which was then closed. The W3C China Office appreciated the contribution of the Hong Kong Office, especially the efforts and support of Prof. Vincent Shen, its Office Manager. With the continuous endeavor of the W3C team at home and abroad, as well as strong support from the Web community, W3C has grown robustly together with the Web industry in China. More and more noted Chinese ICT organizations such as Alibaba, Tencent, Huawei, Baidu, China Mobile, China Unicom and the Chinese Academy of Sciences have joined W3C as members, and new Web technologies like HTML5 have gained increasing popularity among Chinese developers. In January 2016, W3C upgraded its China Office and launched its fourth international R&D center at Beihang, i.e. a W3C Host in China.

by Angel Li at August 09, 2016 04:15 AM

August 08, 2016

Reinventing Fire

In Praise of HB2

North Carolina House Bill 2 (aka “HB2”, the “Public Facilities Privacy & Security Act”, or simply “the Bathroom Bill”), which among other things prohibits transgender people from using the bathroom corresponding to their gender identity, is going to force another step forward in civil liberties.

Four years ago, in the 2012 gubernatorial election season, the North Carolina General Assembly, controlled by Republicans, passed North Carolina Amendment 1 (aka, “SB514”, or “An Act to Amend the Constitution to Provide That Marriage Between One Man and One Woman is the Only Domestic Legal Union That Shall Be Valid or Recognized in This State”), which called for a public referendum on the issue of constitutionally banning same-sex marriage.

From its inception, this bill was doomed to have no long-term relevance; it was cast in the mold of the polemical 2008 California Proposition 8. Already, the battle lines were being drawn for the national legalization of same-sex marriage: the military’s restrictive “Don’t ask, don’t tell” policy had been repealed, and the Department of Defense was permitting military chaplains to perform same-sex marriage ceremonies; President Obama had announced his support for marriage equality; challenges to Prop 8 were wending their way to the Supreme Court; and public polling indicated that a slender-but-growing majority of Americans approved of same-sex marriage. Predictably, in July 2014, the 4th Circuit U.S. Court of Appeals overturned an equivalent bill in Virginia, declaring it unconstitutional, thus nullifying NC’s Amendment 1. Why did NC legislators waste so much time, money, and energy on a bill they had to know wouldn’t last?

Because this was about more than just the bill itself. It was a dog whistle, or maybe a bullhorn, to rally conservatives around the state to come to the polls. A well-funded campaign of anti-marriage-equality groups spread across rural NC, especially conservative Christian groups, from the famous evangelical pastor Reverend Billy Graham, to two NC Roman Catholic bishops, to the Christian-funded Vote for Marriage NC, to the pulpit activism of ministers around the state. The message wasn’t just “vote for Amendment 1”, it was “vote for conservatives”; Representative Mark Hilton (R-Catawba) said, “One of the issues [conservative groups] have come to me about is the marriage amendment. It’s important to the conservative groups that we get this passed this year because they need that to be able to get their ground game working to get the maximum effect to get out the vote.” It was a heavily divisive issue, one that played to the deepest emotions of conservatives, and the public debate energized the voters, and helped usher in a new conservative Republican governor, Pat McCrory, after 20 years of fairly progressive Democratic governors (and a longer history of less-progressive Democratic governors before that).

So, is it really a coincidence that 4 years later, in the 2016 gubernatorial election season, the North Carolina General Assembly, controlled by Republicans, passed a bill that limits the rights of a gender minority? Or that some of them are calling for a public referendum? Or that they diverted $500K from the state’s Emergency Response and Disaster Relief fund to defend the fore-doomed HB2 in court against the U.S. Department of Justice, maintaining the controversy and the press for the next several months until the November election? I don’t think it will have the saving grace for Pat McCrory that it did last time, however; it’s already cost the state millions of dollars in revenue, and it’s made us an international laughing-stock.

Like Amendment 1 before it, HB2 is destined to be overturned, a footnote in history. But in the meantime, it’s causing real harm to real people; phone calls to Trans Lifeline, the nonprofit transgender crisis hotline, doubled after the passage of HB2; and some bigots feel emboldened to mock or even harm transgender people in the name of this law. This must have been profoundly disappointing for the human rights activists in Charlotte who’ve spent years working to make NC more inclusive, and who scored a victory with the Charlotte City Council with the passage of Charlotte Ordinance 7056 (aka, “An Ordinance Amending Chapter 2 of the Charlotte City Code Entitled “Administration”, Chapter 12 Entitled “Human Relations”, and Chapter 22 Entitled “Vehicles for Hire””), only to have it struck down at the state level by HB2. So, why am I praising HB2, rather than Charlotte Ordinance 7056?

Because, as good as the intention was behind Charlotte Ordinance 7056, if left unopposed, it would have had minor and purely local effect, rather than the transformative societal effect of HB2.

California’s Prop 8, banning same-sex marriage, was the critical event that made same-sex marriage legal across the entire US, in three notable ways:

  1. The public debate forced people to form an opinion on the issue, and when pressed on it, most people decided that either they were in support of marriage equality or that it simply wasn’t their business to dictate what other adults did;
  2. It inspired contrary legislation in several other states, legalizing same-sex marriage there;
  3. It forced the issue to be resolved in the courts, rather than the timid Congress.

Federal laws are made in two ways in the USA: either they are enacted by Congress, or they are decided as interpretations of the Constitution by the Supreme Court (or its lower district courts). Though same-sex marriage was trending upward in favorability among Americans, it would likely have been decades before Congress would have acted on that; members of Congress are too afraid of strong action on contentious issues, lest it endanger their reelection; and no single party is likely to have a clear mandate to act unilaterally for the next several elections. (A cynical view might assert that controversial issues –like same-sex marriage, gun control, health care, and abortion– are kept unresolved so the parties have strong, emotional differentiators to garner voters, but I prefer to ascribe it to simple inability.) So, the courts brought in marriage equality at least a decade, and probably much longer, earlier than would have been possible from Congress. And this has been a huge step forward in civil rights, positively affecting hundreds of thousands of lives, and giving millions of people their dignity.

And these laws do more than just determine how people are treated by the government. They set a normative expectation among the public. Same-sex marriage is enjoying more popular support now not only because the law reflects how people feel… people feel differently because of the law itself. At their best, laws are a reflection and reinforcement and declaration of shared social values.

So ask yourself, and be honest: Were you concerned about the rights of transgender people a year ago? Were you inspired to march in the streets, attend rallies, or even post on social media about it?

I wasn’t. Sure, if you’d asked me, I would have said truthfully that I thought transgender people should have the same rights as others. But I wouldn’t have felt that strongly about it.

And then HB2 happened. In my state. And I was forced to form an opinion.

And I took to the streets.

Because, who are we, as a state? Who am I, as a citizen? I’ll tell you, clearly, in the face of legal claims by representatives of my state government: “We are not this.”

We are not punching down. We are not petty. We are not oppressive. We are not exclusionary.

Still, if same-sex marriage was yet decades away, how long in the future were transgender rights? How many years and how many lives until we cared?

But now, around the country, around the world, people are defiantly defining themselves by what they are not, on an issue that had not even been on their radar: “We are not this.”

I may not know much about law, but I know what I don’t like.

“We are not this.”

I can’t predict if HB2, this bigoted bill, will help conservatives maintain control of the North Carolina state government for another term. But I do know its one inevitable effect: however hurtful it will be for transgender people in its short life, and though some of those affected may not live to see the long-term benefits, it will give transgender people their legal dignity ever after.

So, self-styled “social conservatives”, keep bringing us hateful, hurtful laws. Keep pushing against the tide of history. Keep forcing us to form an opinion. Please.

by Shepazu at August 08, 2016 04:20 AM

August 04, 2016

W3C Blog

25 years ago the world changed forever

6 August 1991 usenet post about the WorldWideWeb by Tim Berners-Lee

If you grew up thinking that the Web has existed for as long as you have, you may be right. If not, you may remember the very early days of the Web.

Two years ago we celebrated  the invention of the Web on the anniversary of the March 1989 memo written by Tim Berners-Lee outlining his proposal for the World Wide Web.  On Saturday we celebrate not only the brilliance of the Web’s conception but the world-changing point at which the Web was offered as a publicly available service.

25 years ago, on 6 August 1991, Tim Berners-Lee posted information about the WorldWideWeb project on the newsgroup (like a message board) alt.hypertext and invited wide collaboration – marking, in one email, the Web’s introduction to the wider world.

Even at the start of his work on the Web Tim offered it to everyone, opening it for contribution from all. Because so many around the globe have taken him up on his offer and have helped to develop the Web, to create and share content as well as to build standards to keep it interoperable and innovative, the Web has become not just a repository for knowledge and sharing beyond the dreams of any library, but one of the most unique and powerful tools in history.

W3C CEO Jeff Jaffe noted:

“With the Web we are trying to encapsulate all that civilization needs.  As needs and opportunities arise and new technologies facilitate addressing those needs, W3C focuses on improving the Web technology base.  We need everyone’s engagement to ensure that we are addressing the most important problems in the best way.”

The Web has changed all our lives and we are pleased to celebrate the historical occasion of its release to the public 25 years ago.  At W3C we continue to uphold our core values of openness, collaboration and innovation in our standards while we pursue our mission  of leading the Web to its full potential.

Thank you Tim and thanks to all who have, by their efforts, helped to create the Web  – from its earliest beginnings, to its inestimable impact on our lives now and for all the exciting ways it will continue to evolve in the future.

We  are grateful for all of those who have made the Web what it is now:  for our W3C Members; for Web developers and all Web users;  for those who work to make sure the Web is truly worldwide and for all of humanity; and for those who are working to create what the Web will become.

We invite you to tell us in the comments about when you first came across the Web, your first Web site, the first W3C spec you implemented  or however the Web has positively impacted you.

For those interested in the early history of the Web:

In March 1989, while at CERN, the European particle physics lab, Tim Berners-Lee wrote “Information Management: A Proposal”, outlining his ideas for the Web as a global hypertext information-sharing space. In September 1990, Mike Sendall, Tim’s boss, gave him approval to go ahead and write the global hypertext system he had described, and approved the purchase of a NeXT cube computer for the work.

In October 1990 Tim wrote the first Web browser – or more specifically, a browser-editor – which was called WorldWideWeb. When it was written in 1990 it was the only way to see the Web. This browser-editor was later renamed Nexus to avoid confusion between the program and the abstract information space (which is now spelled with spaces: World Wide Web).

An early color screenshot of the WorldWideWeb browser (1993)

The NeXT workstation used by Tim Berners-Lee to design the World Wide Web and the first Web server; a copy of “Information Management: A Proposal”; and a copy of the book “Enquire Within upon Everything”.

Later in October 1990, Robert Cailliau, a colleague of Tim’s at CERN, joined the project and helped rewrite and circulate Tim’s proposal for the Web. In November 1990, Nicola Pellow, then a student, joined the team and wrote the original line-mode browser. Bernd Pollermann also joined the team that month and worked on the “XFIND” indexes on the CERNVM node. By Christmas of that year the line-mode browser and the WorldWideWeb browser/editor were demonstrable, and access was possible to hypertext files, CERNVM “FIND”, and Internet news articles.

(Note: in 2013 CERN re-released an online version of the line-mode browser.)

In 1991 presentations and seminars were given within CERN and the code was released on central CERN machines. Then on 6 August 1991, files were made available on the Internet by FTP, and posted, along with Tim’s email introducing the public to the WorldWideWeb, on the alt.hypertext newsgroup. (The fact that new users could access the Web after 23 August is why that date is considered “Internaut Day”.)

In the autumn of 1991, Stanford Linear Accelerator Center (SLAC) physicist Paul Kunz met with Tim Berners-Lee and brought word of the Web back to SLAC. By December, the first WWW server at SLAC (and the first outside of Europe) was successfully installed. In the years that followed more browsers were developed, more Web servers were put online and more Web sites were created. The Web as we know it had begun. In November 1993, at a conference in Newcastle, U.K., Tim Berners-Lee discussed the future of the Web with MIT’s David Gifford, who suggested that Tim contact Michael Dertouzos, the head of the Laboratory for Computer Science at MIT.

By 1994, load on the first Web server was 1000 times what it had been 3 years earlier. In February 1994 Tim met with Michael Dertouzos in Zurich to discuss the possibility of starting a new organization at MIT, and in April of that year Alan Kotok, then at DEC and later Associate Chairman of W3C, visited CERN to discuss the creation of a Consortium. In October of that year the World Wide Web Consortium (W3C), the Web standards organization, was started at MIT as an international body. In April 1995 INRIA became the W3C Host in France (in 2003 it changed to ERCIM); in September 1996 Keio University became the W3C Host in Japan; and in January 2013 Beihang University became the W3C Host in China.

For more information (including many links to web pages and images) see also: A Little History of the World Wide Web and a W3C timeline from 2005. For more information about the work of the Web Foundation, established in 2009 to help connect everyone, raise voices and enhance participation through the open Web, see: For more information on W3C, about Membership and how to participate, please see:

by Amy van der Hiel at August 04, 2016 09:24 PM

W3C Web of Things meetings in Beijing, July 2016

Dave Raggett presenting

The W3C Web of Things Interest Group met 11-14 July 2016 in Beijing, China, hosted by the China Electronics Technology Group Corporation (CETC), the W3C/Beihang Host and the China IoT Industry Technology Innovation Strategic Alliance (CIoTA). The event was co-located with the CIoTA’s 2016 International Open IoT Technology and Standard Summit.

The first two days were open to local companies and institutions. We had talks from a broad range of participants, and many demonstrations from both CETC and W3C. We learned about CETC’s IoT open system architecture and its implementation by Beijing Wuliangang Ltd. as the cloud-based E-Harbour IoT platform, and enjoyed live demonstrations relating to smart homes, smart communities and smart buildings. W3C returned the favour with integrated demonstrations of the Web of Things by Siemens, Panasonic, Fujitsu, Lemonbeat and Samsung’s SmartThings.

Please read more in Matthias Kovatsch’s summary on the Web of Things Interest Group Blog.

Joerg Hauer presenting

The Web of Things aims to counter fragmentation of the IoT and enable an open market of services, spanning a wide range of standards and platforms from microcontrollers to cloud-based server farms. Our approach focuses on cross-platform APIs for simplifying application development, and on the role of metadata in enabling different platforms to interoperate with each other.

The last two days were devoted to progressing the work items of the Web of Things Interest Group. We had sessions focusing on a broad range of technical topics, e.g. protocol bindings, data types for application scripting, thing lifecycles, scripting APIs, and proposals for collaborative work with other organisations on building a shared understanding of how to enable semantic interoperability across different platforms. The Interest Group is now being rechartered for a further two years. We are also progressing plans for a Web of Things Working Group which we hope will launch in October 2016, and which will seek to create standards from the ideas explored by the Interest Group. The Interest Group’s next face to face meeting will be in Lisbon, Portugal on 22-23 September 2016 as part of the W3C annual get together (TPAC 2016).


by Olive Xu at August 04, 2016 09:04 AM

July 14, 2016

W3C Blog

Exploring Web platform cross-dependencies

Most of the JavaScript APIs exposed on the Web platform (both in W3C and elsewhere) rely on a formalization language called WebIDL (Web Interface Definition Language).

It provides a simple syntax to express common idioms needed when defining JavaScript APIs, and encompasses many of the specific behaviors required, expected or inherited from the 20-year legacy of Web APIs. Its common usage across APIs has facilitated more consistency across specifications (for instance in error processing), and has helped streamline the testing of browser implementations of these APIs.

During the 2016 edition of the W3C Geek Week, my colleague François Daoust and I explored another way of using this formalism. We first built a crawler and scraper of specifications (based on the excellent jsdom library) which extracts the WebIDL definitions embedded in these specifications, and the normative references that bind these specifications to the specifications they build upon.
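As an illustration of the extraction step, here is a simplified, hypothetical sketch. The real tool parses the full DOM with jsdom; this version only matches the common `<pre class="idl">` convention with a regular expression, and the function name is ours, not the tool's.

```javascript
// Extract WebIDL blocks embedded in a specification's HTML.
// The real crawler parses the full DOM with jsdom; this simplified
// sketch only matches the common <pre class="idl"> convention.
function extractWebIDL(html) {
  const blocks = [];
  const re = /<pre[^>]*class="[^"]*\bidl\b[^"]*"[^>]*>([\s\S]*?)<\/pre>/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    // IDL embedded in HTML typically escapes <, > and &.
    const idl = match[1]
      .replace(/&lt;/g, '<')
      .replace(/&gt;/g, '>')
      .replace(/&amp;/g, '&');
    blocks.push(idl.trim());
  }
  return blocks;
}
```

Running this over a page containing `<pre class="idl">interface Foo { … };</pre>` yields the raw IDL text, ready to be handed to a WebIDL parser.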

With that data collected across all the specifications we were able to identify as using WebIDL, we built a first simple analyzer of the resulting WebIDL graph, to identify potential bugs in specifications, including invalid WebIDL definitions, duplicated names in interfaces, references to undefined interface names, and missing normative references.
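A minimal version of such an analysis might look like the following sketch. The input shape is hypothetical (the actual tool works on its own crawled-data format), but it illustrates two of the checks mentioned above: interface names defined by more than one specification, and references to interfaces that no specification defines.

```javascript
// Check a crawled WebIDL graph for two kinds of potential spec bugs:
// interface names defined in more than one spec, and references to
// interface names that no crawled spec defines.
// The input shape used here is illustrative, not the tool's actual format.
function checkIdlGraph(specs) {
  const definedIn = new Map(); // interface name -> list of spec URLs
  for (const [url, { defines }] of Object.entries(specs)) {
    for (const name of defines) {
      if (!definedIn.has(name)) definedIn.set(name, []);
      definedIn.get(name).push(url);
    }
  }
  const duplicates = [...definedIn].filter(([, urls]) => urls.length > 1);
  const undefinedRefs = [];
  for (const [url, { references }] of Object.entries(specs)) {
    for (const name of references) {
      if (!definedIn.has(name)) undefinedRefs.push({ spec: url, name });
    }
  }
  return { duplicates, undefinedRefs };
}
```

Feeding it a map of spec URLs to the interfaces each defines and references is enough to surface both classes of problem in one pass.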

We also built a simple explorer of that WebIDL data, which makes it possible to determine which interfaces are defined where, and which other specifications re-use them.

This combination of tools has already allowed us to identify and file a number of specification bugs, and we hope to continue developing them and exploring the resulting data to inform future API development and provide opportunities for even more consistency across the Web Platform.

by Dominique Hazaël-Massieux at July 14, 2016 12:47 PM

July 05, 2016

W3C Blog

Celebrating 20th birthday in Japan – ASIA strong in Web standards –

W3C20 ASIA celebrated the 20th anniversary of the founding of the first W3C Asian Host, at Keio University in Japan, in September 1996. The celebration consisted of three parts: a technical seminar, keynotes and a reception. It gathered evangelists, specialists and experts to talk about a variety of areas, use cases and topics, including visions of the future.

One forward-looking highlight of the event was a presentation about how the Tokyo Organising Committee of the Olympic and Paralympic Games is placing high importance on the accessibility of its Web pages and digital media, showing how vital Web accessibility continues to be around the world.

The keynote panel discussion showed clear, strong ties among Asian countries. The speakers from mainland China, Hong Kong, Korea, and Japan discussed topics including accessibility, IoT (Internet of Things), WoT (Web of Things), data, Fintech (financial technology), RegTech (regulatory technology) and society-facing issues around trust on the Internet and Web.


That’s why W3C in Asia is so important. This population is coming online and we are all becoming connected. The speed of light is not fast enough; traditional distinctions are disappearing (analog vs. digital, physical vs. virtual, real time vs. not real time, human vs. not human but application or agent), and the disappearance of those distinctions will enable the next digital generation to overcome the speed of light.

One expectation for the W3C is to work with the rest of society to manage what comes next, since people, life, industry segments, jobs and even virtual worlds are all on the Internet and the Web. Though the next 20 years will be very challenging, the panelists were confident that the W3C is the place where this can be brought to realization.

The reception opened with a Web visualization demonstrated by a budding artist and featured interesting talks from widely known speakers, as well as other demonstrations. This remarkable event made a strong impression about the importance of W3C in Asia.

by Naomi Yoshizawa at July 05, 2016 10:13 AM