The Future of Style

The Future of Style aggregates posts from various blogs that talk about the development of Cascading Style Sheets (CSS) [not development with Cascading Style Sheets]. While it is hosted by the W3C CSS Working Group, the content of the individual entries represents only the opinions of their respective authors and does not reflect the position of the CSS Working Group or the W3C.

Latest articles

Video of the Week–Denise Jacobs: Hacking the Creative Brain

Source: Web Directions Blog John • 05 February 2016 12:33 AM

I first met Denise Jacobs in 2005, at SxSW (back then it was much smaller, no one talked about Startups, and everyone seemed to go. Now it’s 10 times bigger, and no one goes–a paradox I can’t quite work out).

Over the years I followed her career as she became an increasingly respected Web designer, author of The CSS Detective, and much more besides. Then, in recent times, she has changed her focus (as many of us old-time Web designers have), in her case toward a deeper study of human creativity and how to nurture and develop it.

So it was a real privilege to have Denise come and speak at Web Directions 2015, the last Web Directions (but as we explained, it’s not really going away, just changing a little to become Direction; after a decade it was time to refresh and refocus, so watch this space for something special in November 2016).

Hacking the Creative Brain was one of those presentations that people mentioned to me as particularly engaging and valuable, and it’s great to be able to share it with a wider audience. If Denise is ever speaking or giving one of her workshops near you, drop everything and get there! Meanwhile, set aside your lunch break to hack your creative brain.

The post Video of the Week–Denise Jacobs: Hacking the Creative Brain appeared first on Web Directions.

Summaries of TPAC2015 Breakout Sessions

Source: W3C Blog Xueyuan Jia • 03 February 2016 10:47 AM

During TPAC (“Technical Plenary / Advisory Committee”) every year, W3C hosts a Technical Plenary with panels and presentations that bring participants together. For a few years now, we’ve organized most of the plenary as “camp-style” Breakout Sessions, and all participants are invited to propose Breakout Sessions. The meeting attendees build the Breakout Sessions Grid early in the day, drawing from ideas socialized in advance and new ideas proposed on that day.

TPAC2015 was an extremely successful week of meetings. Nearly 580 people attended, 43 groups met face-to-face, and participants organized 50 Breakout Sessions on topics including network interactions, device synchronization, Web Payments, the Social Web, testing, the Web of Things, distributed Web applications, video processing, Web-based signage, digital marketing, and privacy and permissions, to name a few. You can learn more in the W3C CEO report on TPAC2015 and IETF94.

A few summaries

We invite you to read the summaries of a few of these breakouts, excerpted here:

Network Interactions

Network Interactions, proposed by Dominique Hazaël-Massieux, reviewed the outcomes of the recent GSMA/IAB MaRNEW workshop and looked at various cases where additional interaction between applications and the network could be applied: WebRTC optimization, adapting network usage to the user’s data allowance, and overall optimization of radio usage. The discussion of how and when network operators would want to accommodate more specific requests for control or information from the application layer remained inconclusive on a way forward.

FoxEye – video processing

FoxEye – video processing, proposed by Chia-Hung Tai and Tzuhao Kuo, aimed to bring more power to the Web and make it friendlier for video processing and computer vision. Issues raised during the session were filed in the GitHub repository for tracking.

Cross-device synchronization

Cross-device synchronization, proposed by François Daoust, explored cross-device synchronization scenarios, including shared video viewing, lip-sync use cases, distributed music playback, video walls, cross-device animations, etc. The Timing Object specification defines an API to expose cross-device sync mechanisms to Web applications. Interested parties are invited to join the Multi-Device Timing Community Group to continue the work on this specification.

How blockchain could change the Web-based content distribution

How blockchain could change the Web-based content distribution, proposed by Shigeru Fujimura and Hiroki Watanabe, covered the mechanism of blockchain and its potential for Web-based content distribution, followed by an open discussion focused on business models and the incentives to keep maintaining a blockchain.

Requirements for Embedded Browsers needed by Web-based Signage

Requirements for Embedded Browsers needed by Web-based Signage, proposed by Kiyoshi Tanaka and Shigeru Fujimura, started with a presentation of the features of Web-based signage and the requirements they place on the browser. API ideas such as an auto-pilot API and a rich presentation API were shown, and the proper Working Groups in which such APIs could be considered were discussed. The results of this session were provided to the Web-based Signage Business Group and were reflected in the review of a draft charter for a proposed Web-based Signage Working Group.

HTMLCue

HTMLCue, proposed by Nigel Megitt, discussed the idea of a new kind of Text Track Cue which would allow any fragment of HTML+CSS to be used to modify the display on a target element in synchronization with a media element’s timeline. Different views were expressed during the discussion, and two actions were noted. Other next steps include summarizing the HTMLCue proposal in a clear document.

Webex – how’s it going?

Webex – how’s it going?, proposed by Ralph Swick and Nigel Megitt, was a feedback-gathering session to understand the experience, particularly of working groups, since W3C moved from Zakim to Webex for audio calls. Some of the issues can be resolved through best practices; others Ralph Swick offered to handle off-line.

Distributing Web Applications Across Devices

Distributing Web Applications Across Devices, proposed by Mark A. Foltz, discussed the potential for creating a new class of Web applications that can be distributed among multiple devices and user agents, instead of executing within the context of a single device/user agent.

Read more

The well-attended Breakout Sessions during TPAC are an opportunity to meet and liaise with participants from other groups, brainstorm ideas, and coordinate solutions for technical issues. Although participation in TPAC is limited to those already in W3C groups, the TPAC proceedings are public, including the TPAC2015 Plenary Breakout Sessions records, which we invite you to read.

Minutes Telecon 2016-01-27

Source: CSS WG Blog Dael Jackson • 28 January 2016 12:00 AM

Full Minutes

CSS Fragmentation L3: Candidate Recommendation

Source: CSS WG Blog fantasai • 23 January 2016 07:04 AM

The CSS Working Group has published a Candidate Recommendation and invites implementations of CSS Fragmentation Module Level 3. This module describes the fragmentation model that partitions a flow into pages, columns, or regions and defines properties that control breaking. Changes since the last Working Draft are listed in the Changes section.

New since CSS Level 2:

As always, please send feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-break]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

Minutes Telecon 2016-01-20

Source: CSS WG Blog Dael Jackson • 21 January 2016 12:17 AM

Full Minutes

The CSS Working Group, in cooperation with John Allsopp (Web…

Source: W3C's Cascading Style Sheets home page • 19 January 2016 03:30 PM

3 Feb 2016 The CSS Working Group, in cooperation with John Allsopp (Web Directions), organizes a Developer Meetup at the Google offices in Sydney on the evening of Wednesday 3 February. (Free, but registration required.)

CodeLobster publishes the CodeLobster PHP Edition version 5.…

Source: W3C's Cascading Style Sheets home page • 18 January 2016 12:00 AM

18 Jan 2016 CodeLobster publishes the CodeLobster PHP Edition version 5.8.1, an IDE for HTML, CSS, PHP and JavaScript, with code highlighting, a debugger, auto-complete, etc., and built-in support for several CMSes and tools (SVN, Git, SASS, LESS, Drupal, Joomla, WordPress, Symfony, etc.). (Windows; free version (without CMS support), Lite version and Professional version)

CSS Cascade 4: Candidate Recommendation

Source: CSS WG Blog fantasai • 15 January 2016 05:42 PM

The CSS Working Group has published a Candidate Recommendation and invites implementations of CSS Cascading and Inheritance Level 4. This CSS module describes how to collate style rules and assign values to all properties on all elements by way of cascading (choosing a winning declaration among many) and inheritance (propagating values from parent to child).

Additions to Level 4 include:

There have been no changes since the September LCWD.

As always, please send any feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-cascade]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

What characters are in or not in encoding X?

Source: ishida blog » css • r12a • 14 January 2016 08:29 PM

I just received a query from someone who wanted to know how to figure out what characters are in and what characters are not in a particular legacy character encoding. So rather than just send the information to her I thought I’d write it as a blog post so that others can get the same information. I’m going to write this quickly, so let me know if there are parts that are hard to follow, or that you consider incorrect, and I’ll fix it.

A few preliminary notes to set us up: When I refer to ‘legacy encodings’, I mean any character encoding that isn’t UTF-8. Though, actually, I will only consider those that are specified in the Encoding spec, and I will use the data provided by that spec to determine what characters each encoding contains (since that’s what it aims to do for Web-based content). You may come across other implementations of a given character encoding, with different characters in it, but bear in mind that those are unlikely to work on the Web.

Also, the tools I will use refer to a given character encoding using the preferred name. You can use the table in the Encoding spec to map alternative names to the preferred name I use.

What characters are in encoding X?

Let’s suppose you want to know what characters are in the character encoding you know as cseucpkdfmtjapanese. A quick check in the Encoding spec shows that the preferred name for this encoding is euc-jp.

Go to http://r12a.github.io/apps/encodings/ and look for the selection control near the bottom of the page labelled show all the characters in this encoding.

Select euc-jp. It opens a new window that shows you all the characters.

picture of the result

This is impressive, but so large a list that it’s not as useful as it could be.

So highlight and copy all the characters in the text area and go to https://r12a.github.io/apps/listcharacters/.

Paste the characters into the big empty box, and hit the button Analyse characters above.

This will now list for you those same characters, but organised by Unicode block. At the bottom of the page it gives a total character count, and adds up the number of Unicode blocks involved.

picture of the result
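If you prefer to script this kind of check rather than use the apps above, the rough Python sketch below does something similar. It uses Python’s built-in codecs as a stand-in for the Encoding spec data, so (as noted earlier for other implementations) the results may differ slightly from what the apps report; euc_jp is simply Python’s spelling of euc-jp.

import unicodedata

def characters_in(encoding):
    # Collect every Unicode character the codec can encode, skipping control,
    # surrogate, private-use, unassigned and separator code points.
    chars = []
    for cp in range(0x110000):
        ch = chr(cp)
        if unicodedata.category(ch) in ("Cc", "Cf", "Cs", "Co", "Cn", "Zl", "Zp", "Zs"):
            continue
        try:
            ch.encode(encoding)
        except UnicodeEncodeError:
            continue
        chars.append(ch)
    return chars

euc_jp = characters_in("euc_jp")
print(len(euc_jp), "characters")   # total count, as in the analyser app
print("".join(euc_jp[:40]))        # a small sample of the list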

What characters are not in encoding X?

If instead you actually want to know what characters are not in the encoding for a given Unicode block you can follow these steps.

Go to UniView (http://r12a.github.io/uniview/) and select the block you are interested in where it says Show block, or alternatively type the range into the control labelled Show range (eg. 0370:03FF).

Let’s imagine you are interested in Greek characters and you have therefore selected the Greek and Coptic block (or typed 0370:03FF in the Show range control).

On the edit buffer area (top right) you’ll see a small icon with an arrow pointing upwards. Click on this to bring all the characters in the block into the edit buffer area. Then hit the icon just to its left to highlight all the characters and then copy them to the clipboard.

picture of the result

Next open http://r12a.github.io/apps/encodings/ and paste the characters into the input area labelled with Unicode characters to encode, and hit the Convert button.

picture of the result

The Encoding converter app will list all the characters in a number of encodings. If the character is part of the encoding, it will be represented as two-digit hex codes. If not, and this is what you’re looking for, it will be represented as decimal HTML escapes (eg. &#880;). This way you can get the decimal code point values for all the characters not in the encoding. (If all the characters exist in the encoding, the block will turn green.)

(If you want to see the list of characters, copy the results for the encoding you are interested in, go back to UniView and paste the characters into the input field labelled Find. Then click on Dec. Ignore all ASCII characters in the list that is produced.)

Note, by the way, that you can tailor the encodings that are shown by the Encoding converter by clicking on change encodings shown and then selecting the encodings you are interested in. There are 36 to choose from.
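The same idea can be scripted for the “not in encoding X” question. The sketch below is again a rough approximation using Python’s codecs rather than the Encoding spec tables, with iso-8859-7 picked as an example Greek legacy encoding: it walks the Greek and Coptic block, collects the characters the encoding cannot represent, and shows how one of them would appear as a decimal HTML escape in the converter’s output.

import unicodedata

def missing_from(encoding, start, end):
    # Return the assigned characters in the range start..end that the
    # encoding cannot represent.
    missing = []
    for cp in range(start, end + 1):
        ch = chr(cp)
        if unicodedata.category(ch) == "Cn":   # skip unassigned code points
            continue
        try:
            ch.encode(encoding)
        except UnicodeEncodeError:
            missing.append(ch)
    return missing

greek_gaps = missing_from("iso-8859-7", 0x0370, 0x03FF)   # Greek and Coptic block
print(len(greek_gaps), "characters missing:", " ".join(greek_gaps))

# One unencodable character shown the way the converter shows it:
print("\u0370".encode("iso-8859-7", errors="xmlcharrefreplace"))   # b'&#880;'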

Minutes Telecon 2016-01-13

Source: CSS WG Blog Dael Jackson • 14 January 2016 12:40 AM

Full Minutes

The CSS WG published Candidate Recommendations of CSS Fragme…

Source: W3C's Cascading Style Sheets home page • 14 January 2016 12:00 AM

14 Jan 2016 The CSS WG published Candidate Recommendations of CSS Fragmentation Module Level 3 and CSS Cascading and Inheritance Level 4

Grid Layout Spec Workshops

Source: CSS WG Blog fantasai • 10 January 2016 08:32 AM

Since the Flexbox workshops seemed to be helpful, fantasai is planning to organize a few Grid Layout spec workshops. There will be one in SF this month; add your availability to the Doodle if you’re interested. NYC and/or Philadelphia next month probably. And if neither of those work, you can get a group together to run one yourself—there are instructions for running a CSS spec workshop that anyone can organize.

Minutes Telecon 2016-01-06

Source: CSS WG Blog Dael Jackson • 07 January 2016 10:38 AM

Full Minutes

New picker: Old English

Source: ishida blog » css • r12a • 02 January 2016 11:02 PM

Picture of the page in action.
>> Use the picker

Following closely on the heels of the Old Norse and Runic pickers comes a new Old English (Anglo-Saxon) picker.

This Unicode character picker allows you to produce or analyse runs of Old English text using the Latin script.

In addition to helping you to type Old English latin-based text, the picker allows you to automatically generate phonetic and runic transcriptions. These should be used with caution! The transcriptions are only intended to be a rough guide, and there may occasionally be slight inaccuracies that need patching.

The picture in this blog post shows examples of Old English text, and phonetic and runic transcriptions of the same, from the beginning of Beowulf. Click on it to see it larger, or copy-paste the following into the picker, and try out the commands on the top right: Hwæt! wē Gār-Dena in ġēar-dagum þēod-cyninga þrym gefrūnon, hūðā æþelingas ellen fremedon.

If you want to work more with runes, check out the Runic picker.

New pickers: Runic & Old Norse

Source: ishida blog » css • r12a • 01 January 2016 01:43 PM

Picture of the page in action.
>> Use the picker

Character pickers are especially useful for people who don’t know a script well, as characters are displayed in ways that aid identification. These pickers also provide tools to manipulate the text.

The Runic character picker allows you to produce or analyse runs of Runic text. It allows you to type in runes for the Elder fuþark, Younger fuþark (both long-branch and short-twig variants), the Medieval fuþark and the Anglo-Saxon fuþork. To help beginners, each of the above has its own keyboard-style layout that associates the runes with characters on the keyboard to make it easier to locate them.

It can also produce a latin transliteration for a sequence of runes, or automatically produce runes from a latin transliteration. (Note that these transcriptions do not indicate pronunciation – they are standard latin substitutes for graphemes, rather than actual Old Norse or Old English, etc, text. To convert Old Norse to runes, see the description of the Old Norse pickers below. This will soon be joined by another picker which will do the same for Anglo-Saxon runes.)

Writing in runes is not an exact science. Actual runic text is subject to many variations dependent on chronology, location and the author’s idiosyncrasies. It should be particularly noted that the automated transcription tools provided with this picker are intended as aids to speed up transcription, rather than to produce absolutely accurate renderings of specific texts. The output may need to be tweaked to produce the desired results.

You can use the RLO/PDF buttons below the keyboard to make the runic text run right-to-left, eg. ‮ᚹᚪᚱᚦᚷᚪ‬, and if you have the right font (such as Junicode, which is included as the default webfont, or a Babelstone font), make the glyphs face to the left also. The Bablestone fonts also implement a number of bind-runes for Anglo-Saxon (but are missing those for Old Norse) if you put a ZWJ character between the characters you want to ligate. For example: ᚻ‍ᛖ‍ᛚ. You can also produce two glyphs mirrored around the central stave by putting ZWJ between two identical characters, eg. ᚢ‍ᚢ. (Click on the picture of the picker in this blog post to see examples.)
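To make the mechanics of that paragraph concrete: the RLO/PDF and ZWJ tricks amount to inserting invisible Unicode control characters into the text, which you can also do outside the picker. A purely illustrative Python sketch, using the runes from the examples above:

# RLO (U+202E) ... PDF (U+202C) forces a run of text to display right-to-left;
# a ZWJ (U+200D) between two runes asks the font to form a bind-rune, if it can.
RLO, PDF, ZWJ = "\u202E", "\u202C", "\u200D"

right_to_left = RLO + "ᚹᚪᚱᚦᚷᚪ" + PDF        # the run displays right-to-left
bind_rune = "ᚻ" + ZWJ + "ᛖ" + ZWJ + "ᛚ"      # ligated only with a suitable font

print(right_to_left)
print(bind_rune)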

Picture of the page in action.
>> Use the picker

The Old Norse picker allows you to produce or analyse runs of Old Norse text using the Latin script. It is based on a standardised orthography.

In addition to helping you to type Old Norse latin-based text, the picker allows you to automatically generate phonetic and runic transcriptions. These should be used with caution! The phonetic transcriptions are only intended to be a rough guide, and, as mentioned earlier, real-life runic text is often highly idiosyncratic, not to mention that it varies depending on the time period and region.

The runic transcription tools in this app produce runes of the Younger fuþark – used for Old Norse after the Elder and before the Medieval fuþarks. This transcription tool has its own idiosyncrasies that may not always match real-life usage of runes. One particular idiosyncrasy is that the output always conforms to the same set of rules; others include the decision not to remove homorganic nasals before certain following letters. More information about this is given in the notes.

You can see an example of the output from these tools in the picture of the Old Norse picker that is attached to this blog post. Here’s some Old Norse text you can play with: Ok sem leið at jólum, gørðusk menn þar ókátir. Bǫðvarr spurði Hǫtt hverju þat sætti; hann sagði honum at dýr eitt hafi komit þar tvá vetr í samt, mikit ok ógurligt.

The picker also has a couple of tools to help you work with A New Introduction to Old Norse.

2015 Snapshot and Prefix Policy Update

Source: CSS WG Blog fantasai • 31 December 2015 10:24 PM

The CSS WG has finally published a new CSS Snapshot: the CSS 2015 snapshot. This includes the new experimental implementations policy aka “prefix policy”, which has been formalized from the 2012 San Diego discussions. (Finalized wording for this was the hold-up to publishing this update; fantasai kept finding less-scary things to do.)

The CSS Snapshot collects together the set of CSS specs that are known to be a stable implementation target, but may not yet have a full set of passing test results. It also includes guidelines on the responsible implementation of CSS, as agreed on by the implementer representatives in the CSSWG.

We hope this document helps people understand the current state of the CSS specs and find what they’re looking for among the many documents published by the CSSWG. As always, please send feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-2015]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

CSS Writing Modes Level 3 Updated CR

Source: CSS WG Blog fantasai • 31 December 2015 10:13 PM

The CSS Working Group has published an updated Candidate Recommendation of CSS Writing Modes Level 3. CSS Writing Modes Level 3 defines CSS handling of various international writing modes, such as left-to-right (e.g. Latin or Indic), right-to-left (e.g. Hebrew or Arabic), bidirectional (e.g. mixed Latin and Arabic) and vertical (e.g. Asian scripts).

This update fixes a bunch of problems found in the previous Candidate Recommendation, resulting in a small number of major substantive changes. Important ones include:

All substantive changes since the previous Candidate Recommendation are listed in the Changes section.

As always, please send feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-writing-modes]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

Minutes Telecon 2015-12-16

Source: CSS WG Blog Dael Jackson • 17 December 2015 12:37 AM

Full Minutes

The CSS WG updated the Candidate Recommendation of CSS Writi…

Source: W3C's Cascading Style Sheets home page • 15 December 2015 12:00 AM

15 Dec 2015 The CSS WG updated the Candidate Recommendation of CSS Writing Modes Level 3

Minutes Telecon 2015-12-09

Source: CSS WG Blog Dael Jackson • 10 December 2015 12:50 AM

Full Minutes

New app: Encoding converter

Source: ishida blog » css • r12a • 05 December 2015 05:23 PM

Picture of the page in action.
>> Use the app

This app allows you to see how Unicode characters are represented as bytes in various legacy encodings, and vice versa. You can customise the encodings you want to experiment with by clicking on change encodings shown. The default selection excludes most of the single-byte encodings.

The app provides a way of detecting the likely encoding of a sequence of bytes if you have no context, and also allows you to see which encodings support specific characters. The list of encodings is limited to those described for use on the Web by the Encoding specification.

The algorithms used are based on those described in the Encoding specification, and thus describe the behaviour you can expect from web browsers. The transforms may not be the same as for other conversion tools. (In some cases the browsers may also produce a different result than shown here, while the implementation of the spec proceeds. See the tests.)

Encoding algorithms convert Unicode characters to sequences of double-digit hex numbers that represent the bytes found in the target character encoding. A character that cannot be handled by an encoder will be represented as a decimal HTML character escape.

Decoding algorithms take the byte codes just mentioned and convert them to Unicode characters. The algorithm returns replacement characters where it is unable to map a given byte sequence to a character.
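As a rough illustration of those two directions, the sketch below imitates the app’s behaviour using Python’s codecs module (whose mapping tables, like other conversion tools mentioned above, are close to but not identical with the Encoding spec). The sample string and byte values are made up for the example.

# Encoding: characters -> two-digit hex byte values; a character the encoder
# cannot handle comes out as a decimal HTML character escape.
encoded = "café \u2603".encode("cp1252", errors="xmlcharrefreplace")   # windows-1252
print(" ".join(f"{b:02X}" for b in encoded))   # 63 61 66 E9 20 26 23 39 37 33 31 3B

# Decoding: bytes -> characters; bytes that cannot be mapped become U+FFFD
# replacement characters.
raw = bytes.fromhex("A4B3FF")                  # A4 B3 is こ in euc-jp, FF is invalid
print(raw.decode("euc_jp", errors="replace"))  # こ�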

For the decoder input you can provide a string of hex numbers separated by space or by percent signs.

Green backgrounds appear behind sequences where all characters or bytes were successfully mapped to a character in the given encoding. Beware, however, that the character mapped to may not be the one you expect – especially in the single byte encodings.

To identify characters and look up information about them you will find UniView extremely useful. You can paste Unicode characters into the UniView Edit Buffer and click on the down-arrow icon below to find out what they are. (Click on the name that appears for more detailed information.) It is particularly useful for identifying escaped characters. Copy the escape(s) to the Find input area on UniView and click on Dec just below.

Minutes Telecon 2015-12-02

Source: CSS WG Blog Dael Jackson • 03 December 2015 10:44 AM

Full Minutes

New Draft for Portable Web Publications has been Published

Source: W3C Blog Ivan Herman • 30 November 2015 08:00 AM

As one of the results of the busy TPAC F2F meeting of the DPUB Interest Group (see the separate reports on TPAC for the first and second F2F days), the group has just published a new version of the Portable Web Publications for the Open Web Platform (PWP) draft. This draft incorporates the discussions at the F2F meeting.

As a reminder: the PWP document describes a future vision of the relationship between Digital Publishing and the Open Web Platform. The vision can be summarized as:

Our vision for Portable Web Publications is to define a class of documents on the Web that would be part of the Digital Publishing ecosystem but would also be fully native citizens of the Open Web Platform. In this vision, the current format- and workflow-level separation between offline/portable and online (Web) document publishing is diminished to zero. These are merely two dynamic manifestations of the same publication: content authored with online use as the primary mode can easily be saved by the user for offline reading in portable document form. Content authored primarily for use as a portable document can be put online, without any need for refactoring the content. Publishers can choose to utilize either or both of these publishing modes, and users can choose either or both of these consumption modes. Essential features flow seamlessly between online and offline modes; examples include cross-references, user annotations, access to online databases, as well as licensing and rights management.

The group has already had many discussions of this vision, and published a first version of the PWP draft before the TPAC F2F meeting. That version already included a series of terms establishing the notion of Portable Web Documents, and also outlined a draft architecture for PWP readers based on Service Workers. The major changes in the new draft (beyond editorial changes) include a better description of that architecture, a reinforced view of and role for manifests and, mainly, a completely re-written section on addressing and identification.

The updated section draws a distinction between the role of identifiers (e.g., ISBN, DOI, etc.) and locators (or addresses) on the Web, typically an HTTP(S) URL. While the former is a stable identification of the publication, the latter may change when, e.g., the publication is copied, made private, etc. Defining identifiers is beyond the scope of the Interest Group (and indeed of W3C in general); the goal is to further specify the usage patterns around locators, i.e., URLs. The section looks at what an HTTP GET would return for such a URL, and what the URL structure of the constituent resources is (remember that a Web Publication is defined as a set of Web Resources with its own identity). All these notions will need further refinement (the IG has recently set up a task force to look into the details), but the new draft gives a better direction to explore.

As always, issues and comments are welcome on the new document. The preferred way is to use the GitHub issue tracker but, alternatively, mails can be sent to the IG’s mailing list.

(Original blog was published in the Digital Publishing Activity Blog)

Respond 2016, featuring Ethan Marcotte, now two days, and two cities!

Source: Web Directions Blog John • 30 November 2015 03:00 AM

Back in 2014, we tried something new, a “popup” conference: a single day, focussing on the challenges of Responsive Web Design, called “Respond”.

I pitched the idea to Maxine on November 30th, we launched it on December 15th, and it was held on February 5th. Despite the short lead time, and the fact that the entire period we were promoting it was Christmas/New Year and the quiet summer holidays, it was a great success. So the popup event became permanent.

This year, we kept the same formula, and grew the audience. But we weren’t 100% satisfied with the result (though it was definitely an excellent event, and the feedback we received was as good as for any event we’ve done).

And then it hit us. Respond was really a return to our roots in Web Design. After all, that’s where Web Directions began (we didn’t even have any JavaScript content in our early conferences, so little did people really use it back then).

Over time, Web Directions focussed less and less explicitly on Web design – on CSS and HTML, and the day-to-day issues people face designing for the Web. But we’ve realised that not only is there still a place for this content, the need has grown significantly as we as an industry have risen to the challenge of designs that respond to multiple screen sizes and use contexts.

So starting in 2016, Respond will be squarely a (Responsive) Web Design conference, covering the technologies, but most importantly, current patterns and practices in designing for the Web.

And not only has it grown from one day to two, it’s grown from one city (Sydney) to two (Sydney and Melbourne)*.

An amazing lineup

We’ve already lined up some incredible speakers, including a return from the inventor of Responsive Web Design himself, Ethan Marcotte, along with

and many more local and international speakers.

Rare Workshop with Ethan Marcotte and Karen McGrane

We’re also bringing you the genuinely rare chance to spend an entire day with Ethan and Karen with their incredible workshop created for designers, developers, content owners, and business stakeholders—anyone who participates in making a responsive redesign happen.

It all takes place in Sydney April 5 (workshop) and 6–7 (conference) and Melbourne April 11–12 (conference) and 13 (workshop).

We’ve got early bird pricing available now, with

And if you register before January 15th, you’ll also receive eBook copies of both Karen McGrane’s and Ethan Marcotte’s A Book Apart books (four in total), including the just released “Responsive Design: Patterns and Principles” by Ethan, and “Going Responsive” by Karen.

* in fact, it’s actually three, as we’ll also be visiting Tokyo with a number of the speakers the week after Sydney and Melbourne

The post Respond 2016, featuring Ethan Marcotte, now two days, and two cities! appeared first on Web Directions.

CSS Device Adaptation Draft Updated

Source: CSS WG Blog Florian Rivoal • 26 November 2015 02:06 PM

The CSS Working Group has published an updated Working Draft of the CSS Device Adaptation Module Level 1. This specification provides a way for an author to specify, in CSS, the size, zoom factor, and orientation of the viewport that is used as the base for the initial containing block.

This update contains changes accumulated since 2011, so there are quite a few of them.

Changes since the last Working Draft are listed in the Changes section.

As always, please send feedback to the (archived) public mailing list www-style@w3.org with the spec code ([css-device-adapt]) and your comment topic in the subject line. (Alternatively, you can email one of the editors and ask them to forward your comment.)

Mongolian picker updated: standardised variants

Source: ishida blog » css • r12a • 22 November 2015 01:03 PM

Picture of the page in action.
>> Use the picker

An update to version 17 of the Mongolian character picker is now available.

When you hover over or select a character in the selection area, the box to the left of that area displays the alternate glyph forms that are appropriate for that character. By default, this only happens when you click on a character, but you can make it happen on hover by clicking on the V in the gray selection bar to the right.

The list includes the default positional forms as well as the forms produced by following the character with a Free Variation Selector (FVS). The latter forms have been updated, based on work which has been taking place in 2015 to standardise the forms produced by using FVS. At the moment, not all fonts will produce the expected shapes for all possible combinations. (For more information, see Notes on Mongolian variant forms.)

An additional new feature is that when the variant list is displayed, you can add an appropriate FVS character to the output area by simply clicking in the list on the shape that you want to see in the output.

This provides an easy way to check what shapes should be produced and what shapes are produced by a given font. (You can specify which font the app should use for display of the output.)
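For the curious, the mechanism itself is very simple: the picker just appends one of the Mongolian free variation selectors (FVS1–FVS3, U+180B–U+180D) after the base letter, and the font then decides which glyph to display. A purely illustrative Python sketch (the choice of Mongolian letter A here is arbitrary):

# Append FVS1 (U+180B) to a base letter to request an alternate glyph form.
# Which shape actually appears depends on the font and on the ongoing
# standardisation work mentioned above.
MONGOLIAN_A = "\u1820"   # MONGOLIAN LETTER A
FVS1 = "\u180B"

variant = MONGOLIAN_A + FVS1
print([f"U+{ord(c):04X}" for c in variant])   # ['U+1820', 'U+180B']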

Some small improvements were also made to the user interface. The picker works best in Firefox and Edge desktop browsers, since they now have pretty good support for vertical text. It works least well in Safari (which includes the iPad browsers).

For more information about the picker, see the notes at the bottom of the picker page.

About pickers: Pickers allow you to quickly create phrases in a script by clicking on Unicode characters arranged in a way that aids their identification. Pickers are likely to be most useful if you don’t know a script well enough to use the native keyboard. The arrangement of characters also makes it much more usable than a regular character map utility. See the list of available pickers.

Minutes Telecon 2015-11-18

Source: CSS WG Blog Dael Jackson • 19 November 2015 10:45 AM

Full Minutes

Introducing EdgeHTML 13, our first platform update for Microsoft Edge

Source: IEBlog Kyle Pflug • 16 November 2015 06:00 PM

Last week, the first major update for Windows 10 began rolling out to over 110 million devices, including improvements in all aspects of the platform and experience. This update brings Microsoft Edge’s rendering engine to EdgeHTML 13, which Windows Insiders have been previewing for the last few months.

When we first introduced Microsoft Edge as “Project Spartan” back in January, we promised an evergreen browser. This means developers can rely on Microsoft Edge users always having the latest version of the rendering engine, and can expect frequent updates to the platform with new features and standards support. With EdgeHTML 13, we’re excited to deliver a broad set of major new platform features only a few months after the first public release of Microsoft Edge, as an automatic update to all Current Branch customers of Windows 10.

Feature updates in EdgeHTML 13

Back in August, we gave our first peek at our priorities for this release, as well as some longer term goals for future releases. If you have been watching the Microsoft Edge changelog, you may have seen these features lighting up build-by-build in the Insider Program. These updates bring Microsoft Edge to a score of 458 on HTML5Test – an improvement of 56 points in just a few months, and 117 points over Internet Explorer 11.

Screen capture showing HTML5Test scores for Microsoft browsers

HTML5Test measures declared support for features defined in the HTML5 specification as well as extensions and related specifications.

Here are the highlights of what’s now supported in EdgeHTML (for a full breakdown, visit the changelog):

CSS

File APIs

User Input

Graphics

Communication

Tools

Web Components

Feature updates in Chakra

In addition to the improvements in EdgeHTML, this release also includes major improvements and new feature support in Chakra, the JavaScript engine powering Microsoft Edge. Major features like asm.js are now enabled by default. With these updates, Microsoft Edge is by a wide margin the highest-scoring desktop browser in the Kangax ES6 compatibility table, which measures support for the component features of ES2015, perhaps the largest update in JavaScript history.

Screen capture showing EdgeHTML 13 leading in the Kangax ES6 compatibility table

Kangax ES6 scores for Microsoft Edge and other desktop browsers.

In addition, the “Experimental JavaScript features” flag under about:flags now includes experimental support for early ES2016 features, including Async Functions and the Exponentiation operator.

Here’s the full list of the major new features supported in Chakra:

Many Windows 10 devices, one web platform

This update marks a special moment for the Windows 10 web platform as we ship the same version of EdgeHTML to all Windows 10 devices: PCs, Windows 10 phones (via the Windows Insider Program), and even Xbox One. Whether it’s adaptive images on phones with the <picture> element and extended srcset, or even in-browser gaming on the Xbox One with WebGL and GamePad API, Microsoft Edge empowers users and developers alike to be confident in a consistent, modern, and powerful experience across devices.

Illustration showing Microsoft Edge running on Windows 10 phone, laptop, and Xbox One

New end-user features in Microsoft Edge

While this blog is focused on developer features, this release also updates the Microsoft Edge app to Microsoft Edge 25, which includes powerful new features like Tab Preview, synced Favorites and Reading List, and wireless multimedia casting for video, audio, and photos. You can learn more about the updates to Microsoft Edge and other Windows apps on the Windows Experience blog.

Tab Preview in Microsoft Edge


This update also includes major security improvements in Microsoft Edge, with industry-leading code integrity enforcement in the Windows kernel, and updates to SmartScreen to protect users from drive-by attacks in the browser. We’ll be exploring each of these features in separate posts soon.

What’s next?

This is just the first step in a journey of delivering Microsoft Edge updates regularly, and we’re excited to get right back to work on our next set of improvements. As always, the Windows Insider Program is the best way to preview our upcoming features early, and we’ll continue to share details on our plans as soon as we begin development. Expect to hear more about our next feature investments and our priorities for 2016 in the coming months. In the meantime, we welcome your feedback and look forward to seeing what you do with the powerful new capabilities in Microsoft Edge!

Kyle Pflug, Program Manager, Microsoft Edge

Better specifications for the sake of the Web

Source: W3C Blog Virginie GALINDO • 16 November 2015 01:00 PM

This post is co-authored by Virginie Galindo and Richard Ishida, currently working hand in hand to promote better wide review of W3C specifications.

The Open Web Platform is getting increased traction from new communities and markets thanks to the attractive portability and cross-device nature of its specifications – characteristics which are strengthened by horizontal and wide reviews. But the increase in specifications compounds a growing difficulty when it comes to ensuring that specs are adequately reviewed.

The number of specifications initiated in W3C is increasing every year. That growth is welcome, but we want to avoid ending up with a series of parallel technologies that lack coherence. That is one reason the W3C is putting effort into a campaign to ensure that all specifications benefit from wide review: reviews from the public and from experts, to ensure that all features and specifications create a trusted and sustainable Web for All.

Reviewing a specification is not an easy task, especially when a reviewer does so on a voluntary basis, squeezing it in between two or more high value tasks. One can appreciate that a prerequisite for asking for wide review is that the W3C specification is readable by the non-specialist who is affected by the features it addresses.

Think also about the scenario where an accessibility expert is reviewing an automotive API, or an internationalization expert is reviewing a brand new CSS feature, or a security expert is reviewing a new protocol. The spec needs to be understandable to these non-domain experts.

The basics dictate that a specification should contain use cases and vocabulary sections, that it should rely on plain English, etc. But you should also bear in mind that most reviewers have to produce feedback in a limited time, with limited experience, and having perhaps only read the spec through a couple of times.

Here are a few additional tricks for specification editors to keep your reviewers on track.

Summarize algorithms. Parts of the spec that are expressed as algorithmic steps can make it difficult to grasp the fundamental lines of what is being proposed (and sometimes it even takes a while to ascertain that it’s not anything particularly complicated in the end). Adding a summary of what the algorithm does can make a huge difference for those needing to get the bigger picture quickly.

Do not fragment information (or do use signage). When information is dispersed around the document to such an extent that one has to hold the whole spec in one’s brain to be able to find or piece together information on a particular topic, this is not good for reviewers. If it’s possible to reduce the fragmentation, that would be helpful. If not, please add plenty of signage, so that it’s clear to people who don’t hold the whole spec in their brain where they need to look for related information.

Use diagrams. Sometimes a large amount of textual information could be expressed very quickly using a railroad diagram, an illustration, or something similar. No-one wants to wade through (often pages of) tedious detail when reading a spec when a diagrammatic approach could summarise the information quickly.

Give examples. Examples are extremely useful and help people grasp even complex ideas quickly. Please use as many as you can. If you are describing a format, include an example of that format which includes most of the quirks and kinks that need to be described. If you are describing a result, show an example of the code and the result. If you are describing something you need the reader to visualise, use a picture. Etc. Basically, please use as many examples as possible.

Ensuring that W3C specifications are readable leads to better reviews and feedback. Better reviews and feedback lead to a more coherent Web and greater support for universal access and interoperability. These latter, in return, lead to greater attractiveness of W3C specifications for new communities and markets.

RealObjects released PDFreactor version 8, an XML-to-PDF for…

Source: W3C's Cascading Style Sheets home page • 16 November 2015 12:00 AM

16 Nov 2015 RealObjects released PDFreactor version 8, an XML-to-PDF formatter that runs either as a Web service or as a command line tool. It has support for, among other things, CSS Transforms, CSS Regions, Web Fonts, and running elements. Other features include support for HTML5 (including the <canvas> element), MathML, SVG, XSLT, JavaScript, and accessible PDF. (Java. Free personal version)

Feeds

The Future of Style features:

If you have a post you want to add to this feed, post a link (or the whole thing) on the CSS3 Soapbox. If you own a blog with frequent posts about the future of CSS, and want to be added to this aggregator, please get in touch with fantasai.

