Planet MathML

The Planet MathML aggregates posts from various blogs that concern MathML. Although it is hosted by W3C, the individual entries represent only the opinions of their respective authors and do not reflect the position of W3C.

Latest articles

RealObjects released PDFreactor version 8.1, an XML-to-PDF f…

Source: Ask.com News Search for "mathml" • June 27, 2016 • Permalink

W3C - Found 3 hours ago
Other features include support for HTML5 (including the element), MathML, SVG, XSLT, JavaScript, and accessible PDF. This version adds PDF/UA...

American Physical Society continues as MathJax Supporter

Source: MathJax • June 27, 2016 • Permalink

The American Physical Society (APS) continues to support the MathJax project as a MathJax Supporter.

Founded in 1899, the American Physical Society (APS) is the world’s largest organization of physicists and is involved in several activities to advance and diffuse the knowledge of physics, including a strong publication program with landmark titles such as Physical Review Letters, the Physical Review journals, and Reviews of Modern Physics. As an influential supporter of SGML-based math notation in the 1990s and an early adopter of MathML, the APS has long been furthering innovation in academic communication.

“APS is very pleased to continue our support of MathJax, both financially and through our participation on the new technical committee”, said Mark Doyle, Chief Information Officer, American Physical Society. “Recent efforts on accessibility and the current work on updating the core of MathJax are exciting enhancements that will benefit all readers of math on the web.”

“As one of the earliest MathJax sponsors, APS has helped push MathJax forward from Day 1,” said Peter Krautzberger, MathJax manager. “Thanks to their continued support, we are able to keep MathJax the most reliable, high-quality solution for math and science on the web.”

We look forward to continuing the collaboration with APS, and welcome their ongoing support for the MathJax project.

Pearson becomes a MathJax Supporter

Source: MathJax • June 23, 2016 • Permalink

Pearson is giving the MathJax project a boost by joining our sponsorship program as MathJax Supporter.

Founded in 1844 as a small building firm in Yorkshire, Pearson is today the largest education company in the world. With Pearson School, Pearson Higher Education, and Pearson Professional, its focus now lies solely on education.

“Pearson is proud to become a MathJax sponsor, as we often rely on MathJax to deliver our web-based math and science assessment and instructional content in our digital products,” says Wayne Ostler, VP Content Systems and Publishing. “We look forward to working with the MathJax community to develop and improve MathML rendering and accessibility tools in our digital products.”

“The support as a MathJax sponsor demonstrates Pearson’s commitment to being a partner to the math and science community on the web”, comments Peter Krautzberger, MathJax manager. “Becoming a MathJax Supporter allows Pearson to make optimal use of MathJax, and makes an important contribution to keeping MathJax the reliable, flexible, and open technology it is today.”

The MathJax team looks forward to the collaboration with Pearson, and welcomes their support for the MathJax project.

About Pearson

Pearson is the world’s leading learning company, with expertise in educational courseware and assessment, and a range of teaching and learning services powered by technology.

Pearson’s mission is to help people make progress through access to better learning. We believe that learning opens up opportunities, creating fulfilling careers and better lives.

About MathJax

MathJax was initiated in 2009 by the American Mathematical Society (AMS), Design Science, and the Society for Industrial and Applied Mathematics (SIAM) with the aim of developing a universal, robust, and easy-to-use solution to display mathematics on the web. MathJax’s open-source JavaScript library provides high-quality display in all browsers and on all platforms without requiring readers to install plugins or fonts. MathJax also enables copy and paste of equations and is compatible with accessibility tools for vision and learning disabilities. The MathJax Consortium is supported by numerous sponsors.

Re: Houdini and MathML in Blink

Source: public-digipub-ig@w3.org Mail Archives • Olaf Drümmer (olaf@druemmer.com) • June 22, 2016 • Permalink

So this seemingly implies the following:
> [...] it probably means that we won’t see MathML supported on Blink in the short-term [...]

Too bad…  (I do know that using MathJax can work around this in the meantime, but still...)

Olaf



> On 22.06.2016, at 01:07, Cramer, Dave <Dave.Cramer@hbgusa.com> wrote:
> 
> This article discusses what Blink is doing with Houdini and MathML:
> 
> https://blogs.igalia.com/mrego/2016/06/21/my-blinkon6-summary-grid-layout-houdini-and-mathml/
> 
> This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.
> 

Re: Houdini and MathML in Blink

Source: public-digipub-ig@w3.org Mail Archives • Ivan Herman (ivan@w3.org) • June 22, 2016 • Permalink

Both of these emails had the URL cut because your respective mail clients wrapped the lines…

Here is a short URL that can help:-)

http://bit.ly/28US66t


Ivan

> On 22 Jun 2016, at 03:58, George Kerscher <kerscher@montana.com> wrote:
> 
> Hi,
> Here is a link that should work:
> https://blogs.igalia.com/mrego/2016/06/21/my-blinkon6-summary-grid-layout-houdini-and-mathml/
> 
> Best
> George


----
Ivan Herman, W3C
Digital Publishing Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
ORCID ID: http://orcid.org/0000-0003-0782-2704





RE: Houdini and MathML in Blink

Source: public-digipub-ig@w3.org Mail Archives • George Kerscher (kerscher@montana.com) • June 22, 2016 • Permalink

Hi,
Here is a link that should work:
https://blogs.igalia.com/mrego/2016/06/21/my-blinkon6-summary-grid-layout-houdini-and-mathml/

Best
George

-----Original Message-----
From: Cramer, Dave [mailto:Dave.Cramer@hbgusa.com] 
Sent: Tuesday, June 21, 2016 5:07 PM
To: W3C Digital Publishing IG
Subject: Houdini and MathML in Blink

This article discusses what Blink is doing with Houdini and MathML:

https://blogs.igalia.com/mrego/2016/06/21/my-blinkon6-summary-grid-layout-h
oudini-and-mathml/


Houdini and MathML in Blink

Source: public-digipub-ig@w3.org Mail Archives • Cramer, Dave (Dave.Cramer@hbgusa.com) • June 21, 2016 • Permalink

This article discusses what Blink is doing with Houdini and MathML:

https://blogs.igalia.com/mrego/2016/06/21/my-blinkon6-summary-grid-layout-houdini-and-mathml/


Accessible Math Takes Another Step Forward

Source: Design Science News • Neil Soiffer • June 07, 2016 • Permalink

Wikipedia recently changed its default so that most math not only looks better but is also accessible out of the box. Previously, Wikipedia delivered math as PNG images by default. Here’s an example of what that looked like:

With PNGs, not only does the math not look very nice (wrong size and blurry), but it also isn’t accessible.

For the last few years, you could log into Wikipedia and switch the math to MathML with SVG or PNG fallback. That is now the default, so the math looks good and is accessible because the MathML is in the page. Here is what the page now looks like by default:

When reading this page with NVDA+MathPlayer, the first big equation reads as “the fraction with numerator a plus b and denominator a, equals, a over b, is defined to be, phi.”
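
As an illustration of how such a spoken rendering can be derived from the embedded presentation MathML, here is a toy sketch. It is not MathPlayer's actual speech engine; the MathML string and the simplification rules below are invented for illustration (for instance, the "defined to be" relation is replaced by a plain equals sign).

```python
# Toy spoken-text renderer for presentation MathML.
# This is an illustrative sketch, NOT MathPlayer's actual speech rules;
# the MathML string below is a hand-written stand-in for what the
# Wikipedia page embeds.
import xml.etree.ElementTree as ET

MATHML = """
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac><mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow><mi>a</mi></mfrac>
  <mo>=</mo>
  <mfrac><mi>a</mi><mi>b</mi></mfrac>
  <mo>=</mo>
  <mi>&#x3C6;</mi>
</math>
"""

NS = "{http://www.w3.org/1998/Math/MathML}"
WORDS = {"+": "plus", "=": "equals", "\u03c6": "phi"}

def speak(el):
    tag = el.tag.replace(NS, "")
    if tag == "mfrac":
        num, den = el
        # Simple fractions read as "a over b"; complex ones verbosely.
        if num.tag == NS + "mi" and den.tag == NS + "mi":
            return "%s over %s" % (speak(num), speak(den))
        return ("the fraction with numerator %s and denominator %s"
                % (speak(num), speak(den)))
    if tag in ("mi", "mo", "mn"):
        text = (el.text or "").strip()
        return WORDS.get(text, text)
    if tag == "mrow":
        return " ".join(speak(child) for child in el)
    # <math> and anything else: children separated by pauses.
    return ", ".join(speak(child) for child in el)

print(speak(ET.fromstring(MATHML)))
```

A real screen-reader pipeline applies far richer rules (pauses, semantics, language packs), but the structure is the same: walk the MathML tree and map tokens to words, which is why MathML in the page is accessible while a PNG is not.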

There are tens of thousands of pages on Wikipedia with accessible math in them. If you already have NVDA, all you need is MathPlayer, which you can download for free from the MathPlayer section of the Design Science website. There’s nothing else you need to do; math now “just works” on Wikipedia pages (mostly*).

If English isn’t your preferred language, Wikipedia has lots of pages with math in other languages, and MathPlayer supports many of them, including Chinese, Czech, Danish, Dutch, Finnish, French, German, Greek, Icelandic, Italian, Japanese, Norwegian, and Swedish.

*Because the images looked so bad, Wikipedia made it possible to author math as a combination of HTML tags (e.g., <sub> and <sup>) and text. This was harder to do, but for short inline equations it wasn’t horrible and the display looked better, so some authors did that. That math isn’t accessible yet, but it is often very simple and not too hard to understand.

 


Re: Manifest/Metadata requirements

Source: public-digipub-ig@w3.org Mail Archives • Leonard Rosenthol (lrosenth@adobe.com) • June 07, 2016 • Permalink

A system/process/application that would update an existing PWP.  It might be a publication system – for example, O’Reilly’s system knows about author updates to books, and might choose to push out updates.  Or it might simply be an end user that is creating a new PWP by combining pieces from existing ones (aka remixing or repurposing) using some desktop tool.

Leonard

From: "Siegman, Tzviya - Hoboken" <tsiegman@wiley.com>
Date: Tuesday, June 7, 2016 at 11:20 AM
To: Leonard Rosenthol <lrosenth@adobe.com>, Bill Kasdorf <bkasdorf@apexcovantage.com>, W3C Digital Publishing IG <public-digipub-ig@w3.org>
Subject: RE: Manifest/Metadata requirements

What’s an updating system?

Tzviya Siegman
Information Standards Lead
Wiley
201-748-6884
tsiegman@wiley.com<mailto:tsiegman@wiley.com>


RE: Manifest/Metadata requirements

Source: public-digipub-ig@w3.org Mail Archives • Siegman, Tzviya - Hoboken (tsiegman@wiley.com) • June 07, 2016 • Permalink

What’s an updating system?

Tzviya Siegman
Information Standards Lead
Wiley
201-748-6884
tsiegman@wiley.com<mailto:tsiegman@wiley.com>

From: Leonard Rosenthol [mailto:lrosenth@adobe.com]
Sent: Tuesday, June 07, 2016 10:51 AM
To: Bill Kasdorf; W3C Digital Publishing IG
Subject: Re: Manifest/Metadata requirements

I agree that in the best of all possible worlds, the updating system would update the manifest/metadata – however, in the real world that simply doesn’t happen reliably.  As such, reading systems can’t make assumptions and end up ignoring that type of info.

Leonard


Re: Manifest/Metadata requirements

Source: public-digipub-ig@w3.org Mail Archives • Leonard Rosenthol (lrosenth@adobe.com) • June 07, 2016 • Permalink

I agree that in the best of all possible worlds, the updating system would update the manifest/metadata – however, in the real world that simply doesn’t happen reliably.  As such, reading systems can’t make assumptions and end up ignoring that type of info.

Leonard

From: Bill Kasdorf <bkasdorf@apexcovantage.com>
Date: Tuesday, June 7, 2016 at 10:34 AM
To: Leonard Rosenthol <lrosenth@adobe.com>, W3C Digital Publishing IG <public-digipub-ig@w3.org>
Subject: RE: Manifest/Metadata requirements

Re item 2, I believe we had a use case along the lines of "As a reading system, I need to know that the manifest accurately and completely reflects the current version of the publication." So if you add such a chapter, you also have to update the manifest accordingly.

Agreed with item 1. I probably should have spoken up yesterday but I didn't want to interrupt the momentum we were on. I think clearly cover image can't be a requirement. Title, though, arguably could be: no matter what the nature of the publication is, it could be argued that a reading system needs some way to identify it to a user. That's of course veering into identifier land, so some wordsmithing to get at the issue of "designed for human readability" or something like that might be appropriate.

—Bill K


RE: Manifest/Metadata requirements

Source: public-digipub-ig@w3.org Mail Archives • Bill Kasdorf (bkasdorf@apexcovantage.com) • June 07, 2016 • Permalink

Re item 2, I believe we had a use case along the lines of "As a reading system, I need to know that the manifest accurately and completely reflects the current version of the publication." So if you add such a chapter, you also have to update the manifest accordingly.

Agreed with item 1. I probably should have spoken up yesterday but I didn't want to interrupt the momentum we were on. I think clearly cover image can't be a requirement. Title, though, arguably could be: no matter what the nature of the publication is, it could be argued that a reading system needs some way to identify it to a user. That's of course veering into identifier land, so some wordsmithing to get at the issue of "designed for human readability" or something like that might be appropriate.

—Bill K


Manifest/Metadata requirements

Source: public-digipub-ig@w3.org Mail Archives • Leonard Rosenthol (lrosenth@adobe.com) • June 07, 2016 • Permalink

Sorry I missed the call yesterday, but in reviewing the minutes on the various use cases, I see two of them that I would like to pick out for further discussions.

1 - As a reading system, I need to know the title and cover image to display the publication on a shelf without downloading all its content.

In the case of a formal publication – such as a book or magazine – this certainly makes sense. But as we consider the various informal use cases for PWPs, such things wouldn’t be present. So having a place for these things, should they exist, makes sense. But we need to ensure that they aren’t requirements.


2 - As a reading system, I need to know if I need additional processing instructions, such as with MathML.

This is an example of a general category of things that I class as “the dangers of duplicated data”.

Anytime you have a “feature list” for a document/publication, you run the risk that it will not be properly maintained to match the actual content. What happens if the original version of a publication doesn’t use MathML, but a chapter containing it is added later and the manifest isn’t updated? A reading system (in order to function properly) has to assume that the manifest’s list is wrong – and if it’s wrong, why bother having it at all?
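
Leonard's point can be made concrete with a small sketch. The manifest shape and chapter strings below are hypothetical, not part of any PWP specification; they only illustrate how a declared feature list and the ground truth in the content silently diverge.

```python
# Sketch of the "duplicated data" hazard: the manifest declares a
# feature list, but the authoritative answer lives in the content.
# The manifest dict and chapter markup here are hypothetical.

def content_uses_mathml(chapters):
    """Derive the answer from the content itself (ground truth)."""
    return any("<math" in html for html in chapters)

manifest = {"features": []}          # authored when there was no math
chapters = ["<p>Chapter 1</p>"]

# Later, a chapter containing MathML is added,
# but nobody updates the manifest.
chapters.append("<p>Chapter 2</p><math><mi>x</mi></math>")

declared = "mathml" in manifest["features"]   # False: stale metadata
actual = content_uses_mathml(chapters)        # True

# A reading system that trusts the manifest would skip its MathML
# renderer for this publication; to be robust it must rescan the
# content, at which point the declared list adds nothing.
assert declared != actual
```

This is exactly the argument above: once a reading system cannot trust the list, it has to recompute the answer from the content, and the list becomes dead weight.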

I would strongly recommend that we not go down this path.


Leonard

Deadline extension for MathUI 2016 - the Mathematical User Interfaces Workshop

Source: www-math@w3.org Mail Archives • Paul Libbrecht (paul@hoplahup.net) • May 30, 2016 • Permalink

We have extended the deadline for MathUI 2016 as follows:

  * Extended Abstracts: 6 June 2016 (extended)
  * Author Notification: 19 June 2016
    Please note the deadlines for early registration for CICM.
  * Final Version: 2 July 2016
  * Workshop Day: 25 July 2016

Thanks in advance.
Paul Libbrecht & Andrea Kohlhase

http://www.cicm-conference.org/2016/cicm.php?event=mathui

[CSSWG] Minutes San Francisco F2F 2016-05-09 Part IV: CSS Content, Testing [css-content]

Source: www-style@w3.org Mail Archives • Dael Jackson (daelcss@gmail.com) • May 24, 2016 • Permalink

=========================================
  These are the official CSSWG minutes.
  Unless you're correcting the minutes,
 Please respond by starting a new thread
   with an appropriate subject line.
=========================================


CSS Content
-----------

  - The conversation started based around an issue that, as written,
      cross-references have the potential to create infinite loops.
      - plinss suggested solving this using a similar approach to
          how footnotes are handled traditionally and Florian agreed
          to investigate it.
  - The group then moved onto a more philosophical discussion on the
      future of the spec. Topics discussed included:
      - How can we ensure that browser vendors are okay with the
          approach so that they can implement it in the future?
      - Is the spec too big to publish?
      - How much attention should this spec get as it's not likely
          to be prioritized and/or implemented by browsers? Or
          should browsers be more interested in this?
      - Should epub have a special time on a F2F agenda like the FX
          meeting slots?

Testing
-------

  - RESOLVED: Drop requirement for author or reviewer metadata
  - RESOLVED: Move to primary <link> to spec+section being inferred
              from directory structure. Supplemental <link>s must be
              inline.
  - RESOLVED: spec-shortname/N-levels-of-ignored-subdirectory-
              names/frag-id-of-section
  - RESOLVED: Remove any title requirement, other than having one
              (implied by validity of HTML requirement)
  - RESOLVED: testharness.js tests don't need a meta assert (but
              reftests still do)
  - Moving to Github, move all of our tests to WPT repo, and
      stopping future use of Shepherd were discussed, but tabled for
      future conversation.

===== FULL MINUTES BELOW ======

Agenda: https://wiki.csswg.org/planning/san-francisco-2016#proposed-agenda-topics

Scribe: fantasai

CSS Content
===========

Page Number References
----------------------

  Florian: With regard to cross-references
  Florian: It looks like this:
  <Florian> a::after { content: target-counter(attr(href, url),
            page); }
  Florian: Implemented in Antenna House, Prince, and PDFReactor
  Florian: Vivliostyle looking into implementing.
  Florian: Major issue right now is that as specified, it could be
           infinite passes.
  Florian: If you have something referenced at the very end of page 109
  Florian: You do re-layout, and then now it's at page 110.
  Florian: This goes back to page 109.
  Florian: In most cases you do N passes and it eventually stabilizes.
  Florian: 20 minutes is okay for print.
  Florian: 20 minutes is better than infinite minutes.
  Florian: I've never run into infinite loops in the wild, but
           multi-passes happens.
  Florian: It's already terrible.
  Florian: N passes of laying out several hundred pages is a bad idea.
  dauwhe: If N is 2 it's not too bad.

  <ojan> is there a spec for what we're discussing or just the
         mailing list threads?
  <fantasai> no spec for this issue
  <dauwhe> https://drafts.csswg.org/css-content/#cross-references

  plinss: It's only infinite if you allow the race condition to
          continue.
  plinss: There's other fun things in paged layout that can create
          race conditions.
  plinss: Can apply a generic solution, detect if you are racing and
          stop racing.
  Florian: Detecting a loop and stopping is one way out of the
           problem.
  plinss: Do layout, then check if it moved, then it stays there,
          don't bring it back.
  Florian: Problem is you end up with incorrect table of contents
           numbers.
  plinss: No, do second page layout. If it ends up on 110, you treat
          is as a widow.
  plinss: Leave it out at 110.
  plinss: Don't re-layout.
  dbaron: You do one pass, do references, say this has to be on at
          least this page
  dbaron: And shift things to later page.
  dbaron: But not earlier page.
  Florian: With a forced break priority.
  dbaron: Well, might want to do a priority with some ...

  Florian: Alternative solution, similar to manual fix-ups
  Florian: On first pass you have a placeholder.
  Florian: First do pass with a placeholder.
  Florian: Then do layout, fill in the text.
  Florian: If it's smaller, do some alignment within the placeholder
           space.
  Florian: If it's larger, might have ink overflow.
  [skepticism of this approach]
  ...
  plinss: This is a generic problem. Could have same problem with
          widows and orphans.
  plinss: This is a generic problem. Gonna run into the same problem
          in all sorts of other ways.
  plinss: Think we should have a generic solution.
  plinss: Not have an author-specified placeholder.
  dbaron: The solution is similar to standard solution to footnotes,
          which is if you push ...
  plinss: If your footnote grows, starts to push marker off, stop
          growing at that point.
  plinss: Same principle.
  plinss: Just before it would create a race condition, you stop.
  Florian: Okay, I hadn't thought about that one.

  Florian: We have this on our roadmap, not interested in infinite
           loop solution.
  Florian: We also think that having CSS properties that are
           acceptable for print formatters but not Web, is bad.
  Florian: Many passes is not good.
  plinss: There are other problems that require multiple passes.
  plinss: But most can be done in 2 passes, tops.
  Florian: So, way you suggested, no need to change syntax.

  Florian: One side advantage of placeholders, if you have long
           document and it takes awhile to render, have something to
           show in the meantime.
  dauwhe: Could do that as an author by putting placeholder content
          in HTML and then replace with generated content in CSS.
  [missed how]
  Florian: So most of the time you get 2 passes.
  plinss: Most of time you can do in 1 pass.
  plinss: Sometimes have to go back and do it again, then stop.
  plinss: You might have a rippling effect, but not a looping effect.
  plinss: Might have to do nlogn.
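
The multi-pass behavior discussed above can be modeled in miniature. Everything in this sketch (page size, item lengths, the size rule for the rendered reference text) is invented for illustration; it resolves a target-counter()-style cross-reference with the kind of bounded fixed-point loop plinss describes.

```python
# Miniature model of multi-pass layout for target-counter()-style
# cross-references. All numbers here are invented for illustration.

LINES_PER_PAGE = 10

def layout(doc, ref_text_lines):
    """Assign lines to items in order; return the page of 'target'."""
    line = 0
    target_page = None
    for item, lines in doc:
        if item == "ref":
            lines = ref_text_lines   # the reference's rendered size
        if item == "target":
            target_page = line // LINES_PER_PAGE + 1
        line += lines
    return target_page

# "see page N" takes 1 line while N has one digit, 2 lines once it
# has two; that size change is what can move the target between passes.
def ref_size(page_number):
    return 1 if page_number < 10 else 2

doc = [("filler", 89), ("ref", 0), ("target", 1)]

page, passes = 1, 0
while passes < 5:                    # bounded, not potentially infinite
    new_page = layout(doc, ref_size(page))
    passes += 1
    if new_page == page:
        break                        # layout has stabilized
    page = new_page
print("target on page", page, "after", passes, "passes")
```

In this toy document the layout stabilizes on the second pass; a real formatter would additionally cap the pass count (or freeze the placeholder size) to rule out pathological oscillation.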

  ojan: We won't implement any of this, ever.
  ojan: It involves laying out the entire document in order to
        figure out what page something is on.
  ojan: Never doing that.
  esprehn: We treat generated content as DOM.
  esprehn: So this is going to be an infinite loop for us.
  ekimber: Certainly the case that in context of print, potential
           for infinite loops is unavoidable.
  ekimber: We might not be doing it in this context, but then maybe
           you move stuff to a different page because there's not
           enough space, but that creates enough space, so you move
           it back.
  ekimber: So only solution is to have a "stop trying" solution.
  esprehn: In our case, at a fundamental level layout can't affect
           DOM.
  esprehn: You can't say "do layout, and then set attribute based on
           layout"
  plinss: Layout should never affect DOM. But you don't have to
          implement this as modifying the DOM.
  ...
  plinss: It's not DOM tree, it's layout tree.
  plinss: You have some situations, but you deal with it.
  plinss: But this is implementation details.

  myles: We're all agreeing here.
  plinss: Nobody is saying that layout affects DOM. I'm saying you
          don't have to implement generated content by affecting the
          DOM. If you do, can't implement these kinds of features.
  gsnedders: If we have 2 implementations, we're fine.
  <gsnedders> (sidenote: I was mostly being sarcastic about the fact
              that no browser will support it and our 2 impl
              requirement, given minutes don't capture tone)
  Florian: We have 3.

  TabAtkins: How do we deal with things that will not be implemented
             by browsers, but are implemented interoperably by other
             engines?
  TabAtkins: How do we indicate to authors, that this won't work in
             browsers?
  Florian: Making clear to authors is useful.
  Florian: But one of the goals of what we're doing at Vivliostyle
           is to explicitly stop having CSS for Web and CSS for
           Print being two different things.
  Florian: Want to be integrated.
  Florian: We know browsers won't prioritize these features, but
           want to make sure they are implementable if they become
           prioritized.
  ...
  hober: Florian wants the solution to be something we could
         theoretically accept.
  tantek: We will never implement vs. this is not a priority, those
          are two very different things.
  esprehn: They should just do it in Houdini.
  Florian: Risking a longer debate.
  TabAtkins: Similar to static vs dynamic profile of Selectors.
  TabAtkins: We have something that will fundamentally not be done
             in CSS processing.
  TabAtkins: But works fine in batch processors and JS.

Ebooks vs. Web
--------------

  Florian: I'm talking about ebook readers.
  Florian: Not batch processors.
  TabAtkins: We've already made a somewhat principled split between
             main Web stuff for CSS and other stuff.
  TabAtkins: Speccing that, and doing it well.
  TabAtkins: Fine to do that for ebook or whatever.
  TabAtkins: As long as a) it's clear, as in Selectors, that it's
             not expected to work in Web pages right now (though
             could change in future)
  TabAtkins: and also b) scope amount of time we spend on this.

  Florian: Jen has some good talks about this: Web design now is
           incredibly poor and boring compared to print.
  Florian: Fantastic design on magazines, not done on Web.
  Florian: Web designers should look for inspiration elsewhere than
           the Web.
  Florian: Maybe pagination isn't part of that.
  Florian: Maybe mostly print, but having on the Web might be fine.

  esprehn: We should provide primitives, so people can make
           libraries, and if it's super popular we can backport to
           the Web.
  esprehn: Core of Web should be small, and libraries should
           implement these things.
  Florian: That is okay, but saying "this is for print" is
           something we don't want to do.
  esprehn: Should say "this is for libraries"
  ojan: Being a standard and a spec isn't coupled to whether shipped
        in Web or print formatters
  ojan: Being a spec doesn't say where it gets implemented. That's a
        good thing.
  ojan: If we treat the whole room as caring about all the issues [??]

  plinss: I think I agree with all of you guys.
  plinss: What products implement what features? Depends on the
          market.
  plinss: Doesn't mean we can't standardize how these things should
          behave.
  plinss: Whether implemented in browser vendors or libraries or
          whatever.
  ojan: Why have browsers in the room?
  astearns: Because the CSS expertise is in this room.
  astearns: We want to avoid the situation like EPUB 3 where they
            made up stuff for CSS that was badly designed.
  Florian: Also obvious that browser vendors get priority on topics,
  Florian: Then again, a whole bunch of people here who are not
           browser vendors.
  gsnedders: Also worthwhile to point out that a lot of people in
             this room have a good idea what can be performantly
             implemented within the larger CSS model
  gsnedders: regardless of whether implemented in browser or not.
  gsnedders: And that's a good thing.

  jensimmons: It feels like some of this also is about giant billion
              dollar companies saying "this isn't in our interest,
              therefore shouldn't be in wg"
  jensimmons: But there are smaller vendors, smaller groups that
              have other interests
  jensimmons: I don't think it's fair to say that if Google will
              never implement, we shouldn't spend time on it.
  jensimmons: There are some things that are super worth our time to
              make sure specced well.
  jensimmons: Things that epub and print needs, and a lot of other
              things.
  jensimmons: Tension between reality of who pays us and let's make
              it the Web. Web is cool.

  dauwhe: I would just draw back and look at the question at hand:
          this spec is about generated content in general.
  dauwhe: Talking about the value of one particular type of counter.
          But this applies to a lot of different counters, and there
          are a lot of web stuff in this spec.
  dauwhe: If we reach some resolution on the particular issue, we
          might not end up specifying a width for the placeholder
          for page counters in this circumstance.
  dauwhe: As we gain implementation experience we can perhaps
          revisit the issue.

  Florian: plinss suggestion is maybe a more interesting solution to
           explore.
  Florian: Given obvious priorities of browsers I think it's fair to
           timebox this type of non-browser discussion.
  Florian: But definitely don't want it to be on a separate track.
           That has been a disaster.
  Florian: When liam came to us to say that "I'm sorry to shut down
           XSL:FO, but would be ok if CSSWG takes on our use cases"
  ojan: Don't have a problem with CSSWG taking on these specs
  ojan: Do have a problem with having this whole room discuss a
        spec where only 5 people are talking.
  ojan: We need to fix that.
  dauwhe: General problem.
  tantek: That's a scheduling problem.
  Florian: Also, Tab and fantasai are always in the discussion.

  ojan: I would feel very differently if this was a javascript
        library.
  ojan: Forcing whole page to re-layout would be fine for a JS
        library.
  ojan: Fine feature, serves user purpose
  Florian: Our UA is a giant JavaScript library.
  ojan: Understand about tracks... but how about splitting such
        things off into a separate day... houdini ...
  astearns: Houdini topics [...]
  Florian: Some are native implementations, not JS.

  esprehn: What about the rest of the spec?
  esprehn: This spec contains a bunch of stuff. Datetime and
           document-url, leaders, etc.
  esprehn: From our perspective belongs in a library
  esprehn: Think it should be in a JS library
  esprehn: Be really nice to publish a spec that describes what
           browsers really do.
  fantasai: That was 2.1.
  esprehn: But it didn't.
  fantasai: What's missing?
  Florian: I put this one on the agenda because we want to
           implement, but not how it's been implemented by the other
           print formatters.

  dauwhe: I came into the WG just as Håkon was leaving, and took
          over editorship of GCPM.
  dauwhe: Much of it was a .. junkyard.
  dauwhe: Stuff just collected in GCPM.
  dauwhe: Over past few years, resolutions to move things into their
          respective specs.
  dauwhe: Moved GC features into GC.
  dauwhe: Also, the spec hadn't been published in 13 years; it's
          full of cool and interesting ideas from Håkon and Ian
          Hickson.
  dauwhe: Many things important to the Web, would like to publish.
  dauwhe: What we have now is an early-stage ED.
  dauwhe: fantasai and I did some work on this to bring it to state
          where we can present to WG and publish a new WD.
  dauwhe: Beyond that it really is WG decides.

  tantek: I'm agreeing with what Elliott is saying: if we want to
          publish the WD, we need to shrink it.
  tantek: If we want to publish REC track document, need to have a
          concerted effort to split the features that are being
          actively developed vs. junkyard features into next level.
  fantasai: This *is* a trimmed down spec, it's just not trimmed
            down to "browsers only" set of features.
  fantasai: If the working group is not taking that as being trimmed
            down then the WG is not browser only.
  fantasai: We're working towards trimming it to get it to REC.
  dbaron: We shouldn't look at Ebook things as being separate from
          browsers. Ebooks being in a browser is maybe a good thing.
  dauwhe: W3C and IDPF are working towards more integration.

  skk: We are also creating an EPUB viewer, based on Blink,
       implementing page generation in C++.
  skk: I think the generated content spec is not for web browsers,
       but for Ebook viewers.
  skk: But if we have a chance, since we are using Blink, we want
       to talk with the Google web browser team about generating
       the page numbers.
  skk: Page numbers are generated asynchronously.
  skk: We already did the implementation, but having the
       collaboration API between Blink and us specified would be
       very useful.
  skk: Also for Gecko; better if the API is specified.
  skk: That kind of discussion between Web and Ebook is productive.
  skk: From an Ebook viewer implementer's perspective.

Alt Text for Generated Content
------------------------------

  dauwhe: One thing in the spec that hasn't been implemented is
          trying to design a mechanism to make generated content
          more accessible.
  dauwhe: Which is a critical point.
  fantasai: Not about exposing generated content to a11y--that's
            already required.
  fantasai: but about some generated content is symbols or pictures,
            needs different text for speech.
  dauwhe: Our idea was that the content property takes this
          endless series of various bits of strings or replaced
          elements.
  dauwhe: We're proposing that at the end you can put a slash and
          then alt text.
  dauwhe: Potentially empty, if the string is decorative (e.g.
          asterisks)
  TabAtkins: Previous proposal was an 'alt' property.
  fantasai: It was bad because it doesn't cascade together with
            'content'
  dauwhe: This would also allow us to provide alt text for images
          inserted via content.
  plinss: Instead of using a URL, maybe use the image() function?
  plinss: ... and put the alt text inside that function
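
  As a hedged illustration of the slash-separated alt text proposal
  just described (selectors, URL, and strings are invented for this
  sketch; the syntax follows the CSS Generated Content draft):

  ```css
  /* Image inserted via 'content', with alt text after the slash */
  .new-item::before {
    content: url(star.png) / "New!";
  }

  /* Decorative generated string: empty alt so it isn't spoken */
  .footnote-sep::before {
    content: "***" / "";
  }
  ```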

Ebooks vs web cont.
-------------------

  ojan: Ebooks as part of the Web is a good future.
  ojan: That's why I think speccing these things is important.
  ojan: But think of it similar to SVG-CSS day.
  ojan: Where it's a special cross-functional meeting.
  ojan: To have an explicit Ebook Track, not necessarily in
        parallel, but be explicit about that section of the F2F.
  Florian: I think that's fine, a bit concerned about the boundary
           being fuzzy.
  ojan: I think being explicit about that might change what's
        plausible, practical to implement.
  ojan: E.g. target-counter() is great as a feature, but not as a
        built-in browser feature.
  TabAtkins: Could have JS api about getting the counter value.
  Florian: I'm happy about being explicit about scheduling this in
           sections, less happy about putting it into the spec.
  TabAtkins: I don't like specs that web authors look at but don't
             know what to use.
  Florian: caniuse.com
  TabAtkins: I think authors should know whether such features will
             ever be useful.
  plinss: I don't want to be picky about language, but I'm going
          to be picky about your language.
  plinss: Have no problem with your fundamental point "this is
          what's expected to be in browsers today."
  plinss: But I have a problem with saying "this is never expected
          to be on the Web ever."
  plinss: Saying up front "We don't ever expect this to work in a
          browser", that's going to change the future.
  plinss: That's a bad way to scope the future.

  astearns: Has to be put in language saying that the current
            efforts are going to be in script, but that they may
            move into the browsers. And we don't have a good set
            of terms to talk about that migration process.
  Florian: I'm not sure it's bad to get peoples' hopes up.
  Florian: If there's a lot of people that think people really want
           it, then it pushes forward.
  esprehn: GCPM didn't move forward for many years.
  Florian: Because it was a lousy spec.
  Florian: I don't want to have that again.
  Florian: People just didn't pay attention to junky spec "for print"
  ...

Page Number References cont.
----------------------------

  Florian: target-counter(), when it doesn't refer to pages, I don't
           see any reason why it wouldn't be useful on the Web.
  Florian: Can do "See Figure 17", nothing wrong with having that on
           the Web.
  Florian: Until you care about pages, might not do page counter
           references, though.
  TabAtkins: Mark it at-risk
  ojan: What if we had two specs. "here's generated content as it is
        in browsers today" and "here's generated content for
        pagination"
  * fantasai thinks we're arguing over this wayyyy more time than
             we're spending on the technical work
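
  A hedged sketch of the non-page use of target-counter() that
  Florian describes (GCPM draft syntax; the class names, counter
  names, and attr() form are illustrative):

  ```css
  /* "See Figure 17" style cross-reference: works without pages */
  a.fig-ref::after {
    content: " (see Figure " target-counter(attr(href url), figure) ")";
  }

  /* Page-number reference: only meaningful in paginated layouts */
  a.page-ref::after {
    content: " (p. " target-counter(attr(href url), page) ")";
  }
  ```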

  plinss: If you say never gonna implement this, I think you're
          going to be wrong. Can't tell your business interests 5
          years from now.
  plinss: But also, you're not the only browser.
  esprehn: If you have something doing layout totally different, e.g.
           mathml
  esprehn: We're not going to spec that, you can't do that.
  esprehn: I think making the string thing take the text content
           value of a node creates a cycle in the algorithm; it
           violates fundamental constraints of how the system
           works.
  Florian: I agree with that, and that's why I want this discussed
           here so that we spec it in a way that doesn't violate
           fundamental constraints.
  [repetition of existing points]

  fantasai: Also, Elliott, I think you're missing the point:
            target-counter() can be used for other things than page
            numbers.
  esprehn: We're not going to ever add more features to counters.
  esprehn: We want people to write JS for counters.
  plinss: CSS doesn't define what's implemented in C++.
  esprehn: HTML had a sorting algorithm, was never implemented, and
           it was removed, and people can implement it in JS.
  tantek: That was also before incubation and stuff.

  <leaverou> esprehn: Do you realize that there are tons of CSS
             authors that do not *know* JS?
  <leaverou> esprehn: you are complicating the web platform for
             authors with this kind of thinking.
  <shane> leaverou: implemented in JS won't mean unusable by CSS
          authors - the whole point of Houdini is to open CSS up to
          extensibility
  <leaverou> shane: Using libraries has a ton of overhead. First you
             need to find the best library, you need to bear the
             bandwidth cost, you need to learn its documentation,
             often switch to another library halfway through etc etc
  <leaverou> shane: Libraries are not the solution for essential
             functionality. Library authors are not spec writers and
             will often create worse APIs
  <TabAtkins> leaverou: Yes, of course we realize that. We also
              realize that the stdlib *cannot* be extended
              infinitely, nor do we want to forever hijack *every
              feature forever* on how quickly browsers can cycle
              their implementation.
  <leaverou> TabAtkins: Then allow people to conditionally import
             modules, but standardize said modules instead of having
             them hunt down libraries which could be terrible.
  <leaverou> it seems to me that Houdini is used as an excuse to not
             implement anything Google devs don't like

  fantasai: This is a draft; you can cut this down when we get to
            the point that browsers need to consider implementing
            it.
  tantek: If that is true then call it something else.
  fantasai: We are replacing those drafts with something new;
            we're going for a FPWD with the old scope of that
            module, and it should have the same shortname, etc.
  tantek: I want to suggest a path forward, if there's a dispute wrt
          implementability, indicate in the draft
  tantek: e.g. put a box explaining the implementation experience
  tantek: and caveat of whether it's stabilizing, or is very shaky
          etc.
  tantek: Give authors and anyone else reviewing the spec the
          context for evaluating this feature.
  tantek: This feature has some print implementations, but no
          browser implementations yet; please send feedback. At
          least that's conveyed to authors.
  tantek: Rather than mis-conveying any intent.
  tantek: E.g. features that are widely-implemented vs not
          implemented at all labeled separately to help with
          reviewers.
  tantek: So my request is to put per-feature, implementation
          experience and suggestions for feedback.
  dauwhe: We have a mechanism for doing that already: test results
          per browser.
  dauwhe: If we extend that a little bit, can see that Prince/AH/etc.
          that are passing, that gives you the information you need.
  dauwhe: If you see that the tests pass in blink/safari/edge,
          useful information too.
  astearns: Script that does the annotations could be more verbose.
  dauwhe: You can filter on that information, all sorts of things.
          This is data we can get.
  dauwhe: Might make us write more tests, all sorts of things.

  dbaron: plinss said something earlier about not wanting the spec
          to say that a spec was intended for print.
  dbaron: My general feeling is that we do not write enough of
          what our intent was into the specs, and that hurts our
          ability to communicate with the people reading the specs
          and trying to understand them.
  dbaron: I think we should have more informative text in specs
          saying why what's there is there and what it was intended
          for and so on.
  dbaron: And one of the other worries I have is, if we don't say
          that things are intended for print, then essentially
          there's an incentive for me to try and make things not be
          in the spec because I'm worried about the risk that some
          junior developer at Mozilla will try to implement a thing
          that they shouldn't spend time on.
  dbaron: But they saw it in the spec and thought we should
          implement it.
  <tantek> exactly the problem
  <tantek> or you get some advocacy groups pushing for something

  jensimmons: I just wanted to toss in that I think we should be
              careful not to assume what a browser is.
  jensimmons: Not say "there's real browsers and there's ebook
              readers, and there's print, and we don't care about B
              and C".
  jensimmons: There are lots of experiments happening around what is
              a browser.
  jensimmons: This is going to come up a lot. What a browser is
              is going to change.
  * ojan just wants to clarify that no one said "real" browsers or
         that we don't care about ebooks

  gregwhitworth: Lea commented in IRC.
  gregwhitworth: She was saying that there are a lot of CSS devs
                 that don't know JS, and we're over-complicating
                 things.
  ChrisL: Lea doesn't disagree with Houdini existing. Disagrees that
          Houdini should be used as a blunt hammer for everything.
  <leaverou> yup
  <leaverou> I've seen it used way too much as an excuse to not
             implement or spec things that authors need.

  TabAtkins: The circular ones should be dropped from expectation
             for browser implementations.
  <leaverou> TabAtkins: depends on what you define as a browser. Is
             a print formatter a browser?
  <TabAtkins> For the purpose of this discussion, "a browser" is
              "the things developed by the browser vendors here in
              the room".

  fantasai: I'm happy to keep discussing technical issues, but I
            would like to close the scoping issue.

  Florian: Other topic, not GC, is interaction between Media Queries
           and @page { size }
  Florian: Anyone want to discuss this now?
  Florian: Probably should talk about something other than pages,
           because everyone is hating on pages right now.
  dauwhe: 3D transforms? :)

Testing
=======

  gsnedders: We basically have consensus on getting rid of a lot
             of flags.
  <gsnedders> http://testthewebforward.org/docs/css-metadata.html
  * fantasai wants a WG resolution on test metadata so as to move
             forward with conversions
  gsnedders: If we want to get browsers actually running the test
             suite, then we need to get browser vendors to actually
             talk about this.
  gsnedders: The metadata thing is actually the smaller bit.

  gsnedders: Big thing I want to discuss is stuff like how we want
             to deal with test review in the future.
  gsnedders: How we want to deal with issues on tests in the future.
  gsnedders: Because at the moment we have this weird split system
             with pull requests on GitHub which get reviewed before
             merging.
  gsnedders: Whereas other things are just pushed directly to
             Mercurial.
  gsnedders: In principle, sometime in the future, but realistically
             never, they get reviewed.
  gsnedders: In my opinion it makes sense to move everything to
             GitHub PRs with review before merging.
  gsnedders: Realistically, there's no motivating factor for people
             to review tests before they're landed.
  gsnedders: We saw this with 2.1 as well.
  gsnedders: Just moved everything to approved, thousands of tests
             never gonna review.
  gsnedders: Would like to avoid ending up in that state again.

  <tantek> first piece of feedback, no content in HTML/SGML comments
           please. e.g. "<!-- YYYY-MM-DD -->" bad. Instead, slap it
           on the end of the title attribute, e.g.
           title="NAME_OF_REVIEWER, YYYY-MM-DD"
  <tantek> (kind of like how people sign documents)
  <tantek> could http://testthewebforward.org/docs/css-metadata.html
           cite the previous documentation of the CSS WG test suite
           documentation?
  <tantek> what happened to atomic vs basic tests as Hixie and I had
           distinguished so you could check whether something
           implemented anything at all?
  * tantek digs up https://www.w3.org/Style/CSS/Test/testsuitedocumentation.html
  <gsnedders> tantek: we badly need to sort out the fact we have
              contradictory documentation all over the place…
  <tantek> gsnedders: could you start by citing the previous
           documentation?
  <tantek> (I mean inline in the css-metadata doc)
  <tantek> ah this is the one I was looking for
           https://www.w3.org/Style/CSS/Test/guidelines.html is a
           good predecessor to cite for css-metadata.html
  <tantek> e.g. that guidelines.html has the rel=author, and
           rel=help aspects

  astearns: Got a bit of history because we had tons of tests
            waiting for review in Shepherd that we knew would
            never get fixed.
  astearns: Tests hanging as Github PRs isn't great either.
  dbaron: I don't think reviewing tests is particularly a good use
          of time. Particularly the way they have been reviewed so
          far.
  dbaron: Certain test reviewers have given feedback that ends up
          more or less meaning "go away".
  dbaron: Also the best way to review tests is to write an
          implementation, and to see if the tests pass.
  dbaron: And the important thing to check for is that the test
          tests what it thinks it's testing.

  fantasai: Three things to check:
            1) Test pass when it's supposed to pass
            2) Test fails when it's supposed to fail
            3) Tests what it thinks it's testing
  Scribe: iank
  fantasai: I've seen some tests which only pass sometimes, for
            example when it depends on the width of the window.
  dbaron: Implementers are probably going to catch #1.
  fantasai: You can have tests which are mis-labeled.
  fantasai: I think that dbaron has a good point that (1) is going
            to be checked by browsers, (2) & (3) you should check
            for, you may as well check for everything.
  dbaron: I think that web platform tests have worked out a model
          that works here.

  dbaron: I'm pretty close to telling people to create a folder in
          the wpt repo at the moment. This is largely due to
          preprocessing issues.
  fantasai: In terms of review process, do we review then land, or
            vice versa?
  fantasai: Agree that this is an issue we should fix, but back to
            gsnedders issue on review.
  dbaron: For wpt, as a Gecko dev I can just check tests into the
          repo, and a Mozilla dev will deal with making it work.
  <gsnedders> in w-p-t there's an extra motive to review: you start
              running them when they land
  <gsnedders> in csswg-test, there isn't that if the test is already
              in the repo
  ojan: We have chrome folks that are working on better wpt
        integration tests.
  zcorpan: WPT, review from the browser vendor means you just merge
           the tests.
  ojan: Doing future things in wpt will be good for us.
  <gsnedders> for w-p-t, the browser vendor review must be public
              for free landing.

  Florian: Simplify the review process, decrease the amount of
           metadata, remove the build system, and once we've done
           that we've done a lot. If we do this, then we can
           re-evaluate if we need to do other things. I would not
           want to start by dropping things that we know have
           value.

  ChrisL: Should keep the metadata, as difficult for people to know
          what the test is testing.
Scribe: fantasai
  gsnedders: WPT has loose rule that the reviewer must be able to
             understand what the test is doing.
  gsnedders: How you communicate that to the reviewer, isn't really
             stated.
  gsnedders: Can put in an HTML comment.
  gsnedders: Could be completely obvious cuz it's a 2-line test
  gsnedders: No point in actually stating forms of how it must be.
  gsnedders: Merely gate it on the review
  gsnedders: but that only works if you do the review before merging.

  fantasai: For our purposes, having links to the sections a test
            covers helps us a lot.
  fantasai: Not just for review, but also to generate test coverage
            reports, implementation reports, and to know which tests
            need to be updated when the spec changes.
  fantasai: So I think having this in a fixed format is useful and
            important to try to keep.

  fantasai: With regards to other metadata,
  fantasai: WPT have a slightly different format because they use
            a lot of JS tests that are combined in one file, so
            having HTML comments etc. as documentation makes
            sense; per-file data is not going to be as useful.
  fantasai: Whereas in reftests, it's more granular, per test.
  fantasai: I don't see a problem with pulling out the comment on
            the test, having it in a standard format rather than
            just in a random HTML comment somewhere.
  fantasai: Comment being the documentation of what it's testing.
  fantasai: Just as a function has a formalized comment at the top
            saying its purpose and its parameters,
  fantasai: So that it can be used and understood and reimplemented
            if necessary, so should a test.
  fantasai: This is independent of clearly-written code with good
            comments.
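
  For context, a minimal reftest header in the metadata format
  under discussion (the spec URL, reference file, and assertion
  text are illustrative):

  ```html
  <!DOCTYPE html>
  <title>CSS Test: one-line statement of what is tested</title>
  <link rel="help" href="https://www.w3.org/TR/css-content-3/#alt">
  <link rel="match" href="reference/alt-text-ref.html">
  <meta name="assert" content="A single sentence stating exactly
    what the test checks, so a reviewer can verify it tests what
    it thinks it is testing.">
  ```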

  Florian: On the assertion itself, I think that's one of the things
           that we may need to come back to later
  Florian: if it's getting in the way.
  Florian: But I don't think we should do this until we've cleared
           out all the other problems
  Florian: and have determined that it's still a blocker. Not a good
           place to start from.
  Florian: FWIW, this is less necessary in JS tests. Because you
           have an assertion written as code.
  Florian: But in CSS tests, that's not the case.
  Florian: We might need to drop that description eventually, but
           let's not start here.

  Florian: As for what we can do, with regards to links to specs, I
           think we can sort of have best of both worlds.
  Florian: You store your test in predefined directory structure,
           points to the right part of the spec.
  Florian: But should you wish to link to extra information to link
           to other sections of the spec, then you can add more such
           links.
  Florian: Not blocking.

  Florian: Another bit of metadata is the title;
  Florian: HTML requires a <title> so we have to have one, but can
           say put anything in it.
  Florian: With regards to author/reviewer, leave that to source
           control.
  Florian: With regards to flags, I think we can get rid of most of
           them.
  Florian: Make them a niche occurrence.
  Florian: Spending something like 10-15 min going through the
           list might be useful,
  Florian: and we should do that.
  Florian: I think we do that for title, keep the assert meta, get
           rid of the author meta etc., simplify flags, simplify
           the reviewer process.
  Florian: If we just do these bits then we'll have done a lot.
  Florian: Then we can discuss why not.
  Florian: But not to do more than that
  Florian: yet.

  <tantek> I don't think it's useful to brainstorm this realtime here.
  <tantek> I'd rather read a proposal, e.g. a delta on what
           gsnedders has already put forth.
  <gsnedders> tantek: as would I, but there's almost no replies on
              the mailing list to much of what I say :(
  <tantek> gsnedders, yes, I understand why you brought it to the
           f2f agenda, that part makes sense
  <tantek> my point is that if someone here wants to take the time
           to make counter proposals beyond "here's a simple fix"
           (e.g. see my suggestions above ;) ) then they can be
           actioned to write up a longer proposal

  zcorpan: In WPT, you can put link to the spec using <link> element
  zcorpan: But can also infer from directory structure
  zcorpan: by spec/id.
  fantasai: I'm ok with this, as long as that information is
            available and people writing tests for cross-section or
            cross-spec interaction are able to add <link>s.
  zcorpan: OK with requiring assert for reftests, but less so for
           testharness.js
  [Florian and fantasai agree]

  [discussion of where to track reviewer information]

  plinss: I would argue against removing any existing metadata,
          but remove the requirement to put it in.
  <tantek> +1
  gsnedders: Question remains, how do we flag that a test has been
             reviewed?
  plinss: If there is a reviewer flag, shepherd will mark it
          reviewed. But otherwise Shepherd will track it.
  plinss: Merging GitHub PR also counts as a reviewer.

  RESOLVED: Drop requirement for author or reviewer metadata
  RESOLVED: Move to primary <link> to spec+section being inferred
            from directory structure. Supplemental <link>s must be
            inline.

  <tantek> not hearing consensus
  plinss: Do we have nested sections?
  fantasai: No. Just spec/fragID.
  fantasai: We move sections around or change their nesting level.
            But we keep the fragID stable.
  plinss: So I propose we have shortname + leaf-section-id and N
          levels in between.
  plinss: We ignore the levels in between, they can be named anything.
  fantasai: I don't think there's a need for intermediary directories...

  RESOLVED: spec-shortname/N-levels-of-ignored-subdirectory-
            names/frag-id-of-section

  RESOLVED: for 2.1 use css2/filename/frag-id-of-section
  <dbaron> ?: or for 2.1 we can require help links
  (remove previous RESOLVED)
  plinss: If spec doesn't match this format (e.g. 2.1 which has
          chapters) then have to use <link>

  RESOLVED: Remove any title requirement, other than having one
            (implied by validity of HTML requirement)

  tantek: Want to say, don't keep content in HTML comments.
  <tantek> can we get a resolution on not putting content in HTML
           comments?
  * fantasai didn't understand your comment, tantek

  RESOLVED: testharness.js tests don't need a meta assert (but
            reftests still do)

  <tantek> PROPOSED: Drop any requirements or even suggestion to put
           test meta info into HTML comments
  <tantek> ^^^ yes this is about reviews

  gsnedders: Basic agreement on the flags to keep/drop on ML.
  ...
  gsnedders: For WPT, if there's a public review, you can use that
             to upstream it to WPT repo.
  gsnedders: I think that only affects Edge now :)

  Florian: I'm in favor of that, one thing that we're losing if we
           move away from Shepherd way of doing things.
  Florian: If you are using Shepherd to review, you have access to a
           view of tests,
  Florian: You don't have that view in GitHub,
  Florian: We're assuming people will be diligent and look at the
           test on their own machine.
  ....
  fantasai: I think the biggest thing is being able to understand
            the test when reviewing it.
  fantasai: For me at least, I understand better when I can see the
            output and try to connect that to the code I'm looking at.
  fantasai: So it's much harder for me to review tests from a GH PR
            than to pull down the repo and look at the test locally.

  gsnedders: For WPT, PRs by some people ...
  zcorpan: I'm on the whitelist, so if I make a PR, my PR gets
           mirrored on w3c-test.org
  zcorpan: In a subdirectory, where I can run it.
  zcorpan: Right now there's no automatic link in the PR itself, but
           we could add that.
  zcorpan: So could add a link directly.
  zcorpan: I agree it's useful to be able to easily run the tests
           without having to checkout.
  Florian: So you can also whitelist PRs from other people.
  zcorpan: The whitelist is there to avoid security problem
  astearns: So it wouldn't be a problem to put anyone who's
            interested in reviewing tests on the whitelist.
  Florian: I think having a system like that is valuable, not sure
           about blocking on it.
  Florian: gtalbot feels pretty strongly against a system which
           doesn't have that (ability to easily run the tests).
  gsnedders: I think that's fine, practically already have it for
             WPT, should make it explicit.
  zcorpan: Should make it easier to get there, from the PR,
  zcorpan: e.g. have a bot add a link.

  Florian: Are we resolving to move to GitHub, review then merge,
           and also to merge the system?
  Florian: Or is the system a dependency on moving to GitHub?
  gsnedders: Should be easy to set up as long as don't need to build.
  fantasai: Shouldn't need to build except in [weird very rare cases].
  astearns: So seem to want to move to GitHub.

  plinss: Suggestion to move all of our tests to WPT repo.
  gsnedders: That's a long-term goal shared by browser vendors
             (except Microsoft hasn't said anything).
  Florian: Short term or long term goal?
  plinss: Long term goal since beginning of WPT.
  plinss: Block has been our dependency on build system.
  plinss: If we eliminate that, then no reason not to move to other
          repo.
  plinss: I'm happy doing that. Takes Shepherd out of the picture. I
          can maintain the historical data, but won't run against
          WPT for a long time if ever.
  plinss: Would need to keep build system and parts that help us
          generate implementation reports.

  [Meeting closed.]
  [discussion deferred to later]

"Math Editor" addon for CKEditor

Source: FMath • Ionel Alexandru (noreply@blogger.com) • May 17, 2016 • Permalink

Hi,

      I created a plugin "FMath Editor" for CKEditor to add and edit Mathematics on the web.
"FMath Editor" is a WYSIWYG formula editor (an equation editor) based ONLY on JavaScript, so it runs in any browser.
You can download from: http://ckeditor.com/addon/FMathEditor

The editor also allows you to edit the same formula later.



You can see how it works: http://www.fmath.info/plugins/CKEditor/demo.jsp

regards
Ionel Alexandru

Last Call for Papers: Workshop on User Interfaces for Theorem Provers (UITP 2016 @ IJCAR), Coimbra, Portugal, Deadline May 17th *NEW* (was May 9th, 2016)

Source: www-math@w3.org Mail Archives • Serge Autexier (serge.autexier@dfki.de) • May 04, 2016 • Permalink

                         Last Call for Papers

                              UITP 2016
  12th International Workshop on User Interfaces for Theorem Provers
                    in connection with IJCAR 2016
                  July 2nd, 2016, Coimbra, Portugal
          http://www.informatik.uni-bremen.de/uitp/current/
          
              * NEW Submission deadline: May 17th, 2016 *

----------------------------------------------------------------------
NEWS:
- Invited Speaker: Sylvain Conchon (LRI, France) giving a talk about
  "AltGr-Ergo, a graphical user interface for the SMT solver Alt-Ergo"
- Submission deadline postponed by one week to May, 17th, 2016
----------------------------------------------------------------------

The  User  Interfaces  for  Theorem  Provers  workshop  series  brings
together   researchers  interested   in   designing,  developing   and
evaluating interfaces  for interactive proof systems,  such as theorem
provers,  formal  method  tools,  and  other  tools  manipulating  and
presenting mathematical formulas.

While  the reasoning  capabilities of  interactive proof  systems have
increased dramatically over the last years, the system interfaces have
often  not   enjoyed  the   same  attention   as  the   proof  engines
themselves.  In many  cases,  interfaces remain  relatively basic  and
under-designed.

The User  Interfaces for  Theorem Provers  workshop series  provides a
forum for  researchers interested in improving  human interaction with
proof  systems. We  welcome participation  and contributions  from the
theorem proving, formal  methods and tools, and  HCI communities, both
to  report on  experience with  existing systems,  and to  discuss new
directions. Topics covered include, but are not limited to:

- Application-specific  interaction mechanisms  or designs  for prover
  interfaces
- Experiments and evaluation of prover interfaces
- Languages and tools for authoring, exchanging and presenting proof
- Implementation  techniques (e.g.  web  services, custom  middleware,
  DSLs)
- Integration of interfaces and tools to explore and construct proof
- Representation and manipulation of mathematical knowledge or objects
- Visualisation of mathematical objects and proof
- System descriptions

UITP 2016 is a one-day workshop to be held on Saturday, July 2nd, 2016
in Coimbra, Portugal, as an IJCAR 2016 workshop.

** Submissions **

Submitted   papers  should   describe   previously  unpublished   work
(completed or  in progress), and  be at least 4  pages and at  most 12
pages. We encourage concise and relevant papers. Submissions should be
in PDF format, and typeset with  the EPTCS LaTeX document class (which
can be downloaded from  http://style.eptcs.org/). Submission should be
done via EasyChair at 

        https://www.easychair.org/conferences/?conf=uitp16

All papers will be peer reviewed by members of the programme committee
and selected by the organizers in accordance with the referee
reports.

At  least one  author/presenter  of accepted  papers  must attend  the
workshop and present their work.

** Proceedings **

Authors will have the opportunity to incorporate feedback and insights
gathered during the  workshop to improve their  accepted papers before
publication  in the  Electronic  Proceedings  in Theoretical  Computer
Science (EPTCS - http://www.eptcs.org/).

** Important dates **

 Submission deadline: May 17th, 2016
 Acceptance notification: June 6th, 2016
 Camera-ready copy: June 20th, 2016
 Workshop: July 2nd, 2016

** Programme Committee **

 Serge Autexier, DFKI Bremen, Germany (Co-Chair)
 Pedro Quaresma, U Coimbra, Portugal (Co-Chair)
 David Aspinall, University of Edinburgh, Scotland
 Chris Benzmüller, FU Berlin, Germany & Stanford, USA
 Yves Bertot, INRIA Sophia-Antipolis, France
 Gudmund Grov, Heriot-Watt University, Scotland
 Zoltán Kovács, RISC, Austria
 Christoph Lüth, University of Bremen and DFKI Bremen, Germany
 Alexander Lyaletski, Kiev National Taras Shevchenko Univ., Ukraine
 Michael Norrish, NICTA, Australia
 Andrei Paskevich, LRI, France
 Christian Sternagel, University of Innsbruck, Austria
 Enrico Tassi, INRIA Sophia-Antipolis, France
 Laurent Théry, INRIA Sophia-Antipolis, France
 Makarius Wenzel, Sketis, Germany
 Wolfgang Windsteiger, RISC Linz, Austria
 Bruno Woltzenlogel Paleo, TU Vienna, Austria

FMath "HTML + MathML" for Chrome Solution

Source: FMath • Ionel Alexandru (noreply@blogger.com) • April 29, 2016 • Permalink

Hi,

I have created an extension for Google Chrome to display MathML inside HTML.

The solution is ONLY javascript.

To install the extension, open the Chrome Web Store (https://support.google.com/chrome_webstore)
and search for the "MathML" keyword.

After you install the extension, you can test it by visiting this page:
http://www.fmath.info/plugins/chrome/test.html

The page is built using HTML and MathML. No other tricks.

enjoy
ionel alexandru

P.S. Let me know on www.fmath.info if you find bugs or you need more features.


Math Accessibility Trees

Source: Murray Sargent: Math in Office • MurrayS3 • April 28, 2016 • Permalink

This post discusses some aspects of making mathematical equations accessible to blind people. Presumably equations that are simple typographically, such as E = mc², are accessible with the use of standard left and right arrow key navigation and with each variable and two-dimensional construct being spoken when the insertion point is moved to them. At any particular insertion point, the user can edit the equation using the regular input methods, perhaps based on the linear format and Nemeth Braille or Unified English Braille keyboards. But it can be hard to follow a more typographically complex equation, let alone edit it. Instead, the user needs to be able to navigate such an equation using a mathematical tree of the equation.

More than one kind of tree is possible and this post compares two possible kinds using the equation

[image: the example equation — a leading fraction multiplying an integral whose integrand involves a + b sin θ]

We label each tree node with its math text in the linear format along with the type of node. The linear format lends itself to being spoken, especially if processed a bit to say things like “a^2” as “a squared” in the current natural language. The first kind of tree corresponds to the traditional math layout used in documents, while the second kind corresponds to the mathematical semantics. Accordingly, we call the first kind a display tree and the second a semantic tree.
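As a sketch of the kind of processing described above, a few substitution rules can turn linear-format fragments into rough spoken English. The rules and function below are illustrative only, not the actual ClearSpeak or Office implementation, and a real engine would be localized per natural language:

```python
import re

# Illustrative speech rules only; a real ClearSpeak-style engine is far
# richer and handles precedence, grouping, and localization.
SPEECH_RULES = [
    (re.compile(r"([A-Za-z])\^2\b"), r"\1 squared"),
    (re.compile(r"([A-Za-z])\^3\b"), r"\1 cubed"),
    (re.compile(r"([A-Za-z])\^([0-9]+)"), r"\1 to the \2"),
    (re.compile(r"([A-Za-z0-9]+)/([A-Za-z0-9]+)"), r"\1 over \2"),
]

def speak(linear: str) -> str:
    """Convert a linear-format fragment into rough spoken text."""
    for pattern, replacement in SPEECH_RULES:
        linear = pattern.sub(replacement, linear)
    return linear

print(speak("a^2"))     # a squared
print(speak("E=mc^2"))  # E=mc squared
print(speak("a/b"))     # a over b
```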

More specifically, the first kind of tree represents the way TeX and Microsoft Office applications display mathematical text. Mathematical layout entities such as fractions, integrals, roots, subscripts and superscripts are represented by nodes in trees. But binary and relational operators that don’t require special typography other than appropriate spacing are included in text nodes. The display tree for the equation above is

[image: display tree of the equation]

Note that the invisible times between the leading fraction and the integral isn’t displayed and the expression a+b sinθ is displayed as a text node a+b followed by a function-apply node sinθ, without explicit nodes for the + and the invisible times.

To navigate through the a+b and into the fractions and integral, one can use the usual text left and right arrows or their braille equivalents. One can navigate through the whole equation with these arrow keys, but it’s helpful also to have tree navigation keys to go between sibling nodes and up to parent nodes. For the sake of discussion, let’s suppose the tree navigation hot keys are those defined in the table

Ctrl+→ Go to next sibling
Ctrl+← Go to previous sibling
Home Go to parent position ahead of current child
End Go to parent position after current child

For example starting at the beginning of the equation, Ctrl+→ moves past the leading fraction to the integral, whereas → moves into the numerator of the leading fraction. Starting at the beginning of the upper limit, Home goes to the insertion point between the leading fraction and the integral, while End goes to the insertion point in front of the equal sign. Ctrl+→ and Ctrl+← allow a user to scan an equation rapidly at any level in the hierarchy. After one of these hot keys is pressed, the linear format for the object at the new position can be spoken in a fashion quite similar to ClearSpeak. When the user finds a position of interest, s/he can use the usual input methods to delete and/or insert new math text.
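The sibling and parent moves in the table above can be sketched with a toy tree model. The Node class and labels here are invented for illustration, not Office's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a display tree (text run, fraction, integral, ...)."""
    label: str                      # linear-format text, e.g. "a+b"
    kind: str = "text"              # node type, spoken alongside the label
    children: list = field(default_factory=list)
    parent: "Node" = None

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return self

def next_sibling(node):   # Ctrl+Right
    sibs = node.parent.children if node.parent else [node]
    i = sibs.index(node)
    return sibs[i + 1] if i + 1 < len(sibs) else None

def prev_sibling(node):   # Ctrl+Left
    sibs = node.parent.children if node.parent else [node]
    i = sibs.index(node)
    return sibs[i - 1] if i > 0 else None

def to_parent(node):      # Home / End (position before/after the child)
    return node.parent

# The equation's top level: a leading fraction followed by the integral.
root = Node("equation", "mathzone")
frac = Node("2/pi", "fraction")
integ = Node("int_0^pi ...", "n-ary")
root.add(frac).add(integ)

assert next_sibling(frac) is integ   # Ctrl+Right skips over the fraction
assert prev_sibling(integ) is frac
assert to_parent(frac) is root
```

This captures why Ctrl+arrow navigation is fast: it moves between siblings at one level of the hierarchy instead of descending into every child, as the plain arrow keys do.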

Now consider the semantic tree, which allocates nodes to all binary and relational operators as well as to fractions, integrals, etc.

[image: semantic tree of the equation]

The semantic tree has two drawbacks: 1) it’s bigger and requires more key strokes to navigate and 2) it requires a Polish-prefix mentality. Some people have such a mentality, perhaps having used HP calculators, and prefer it. But it’s definitely an acquired taste and it doesn’t correspond to the way that mathematics is conventionally displayed and edited. Accordingly the display tree seems significantly better for blind reading and editing, as well as for sighted editing.
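The size difference can be made concrete with nested (label, children) tuples. The tree shapes below are illustrative reconstructions for the fragment a + b sin θ, not the exact figures from the post:

```python
def count_nodes(tree):
    """Count all nodes in a (label, children) tuple tree."""
    label, children = tree
    return 1 + sum(count_nodes(c) for c in children)

# Display tree: "+" stays inside the text run "a+b", and the invisible
# times gets no node of its own.
display = ("run", [("a+b", []),
                   ("func-apply", [("sin", []), ("theta", [])])])

# Semantic tree: "+" and the invisible times become operator nodes,
# read in Polish-prefix order.
semantic = ("+", [("a", []),
                  ("*", [("b", []),
                         ("apply", [("sin", []), ("theta", [])])])])

print(count_nodes(display))   # 5
print(count_nodes(semantic))  # 7
```

Even for this small fragment the semantic tree needs extra nodes and keystrokes, which is the first drawback noted above.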

Both kinds of trees include nodes defined by the OMML entities listed in the following table along with the corresponding MathML entities

Built-up Office Math Object   OMML tag    MathML
Accent                        acc         mover/munder
Bar                           bar         mover/munder
Box                           box         menclose (approx)
BoxedFormula                  borderBox   menclose
Delimiters                    d           mfenced
EquationArray                 eqArr       mtable (with alignment groups)
Fraction                      f           mfrac
FunctionApply                 func        &FunctionApply; (binary operator)
LeftSubSup                    sPre        mmultiscripts (special case of)
LowerLimit                    limLow      munder
Matrix                        m           mtable
Nary                          nary        mrow followed by n-ary mo
Phantom                       phant       mphantom and/or mpadded
Radical                       rad         msqrt/mroot
GroupChar                     groupChr    mover/munder
Subscript                     sSub        msub
SubSup                        sSubSup     msubsup
Superscript                   sSup        msup
UpperLimit                    limUpp      mover
Ordinary text                 r           mrow


MathML has additional nodes, some of which involve infix parsing to recognize, e.g., integrals. The OMML entities were defined for typographic reasons since they require special display handling. Interestingly the OMML entities also include useful semantics, such as identifying integrals and trigonometric functions without special parsing.
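The tag correspondence in the table above can be sketched as a simple lookup. This is only the renaming step; a real OMML-to-MathML converter also has to rebuild the child structure of each node:

```python
# Subset of the OMML-to-MathML table above (illustrative sketch).
OMML_TO_MATHML = {
    "f": "mfrac",
    "rad": "msqrt",       # or mroot when an explicit index is present
    "sSub": "msub",
    "sSup": "msup",
    "sSubSup": "msubsup",
    "limLow": "munder",
    "limUpp": "mover",
    "m": "mtable",
    "d": "mfenced",
}

def mathml_tag(omml_tag: str) -> str:
    """Return the MathML element name for a simple OMML tag."""
    try:
        return OMML_TO_MATHML[omml_tag]
    except KeyError:
        raise ValueError(f"no simple MathML equivalent for <{omml_tag}>")

assert mathml_tag("f") == "mfrac"
assert mathml_tag("sSup") == "msup"
```

Note that the mapping is lossy in the other direction: recovering an OMML Nary node from MathML's mrow-plus-mo form requires the infix parsing the post mentions.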

In summary, math zones can be made accessible using display trees for which the node contents are spoken in the localized linear format and navigation is accomplished using simple arrow keys, Ctrl arrow keys, and the Home and End keys, or their Braille equivalents. Arriving at any particular insertion point, the user can hear or feel the math text and can edit the text in standard ways.

I’m indebted to many colleagues who helped me understand various accessibility issues and I benefitted a lot from attending the Benetech Math Code Sprint.

The Community Group ‘Getting math on Web pages’ launched

Source: W3C Math Home • April 19, 2016 • Permalink

The Community Group ‘Getting math on Web pages’ launched

Feeds

Planet MathML features:

If you own a blog with a focus on MathML, and want to be added or removed from this aggregator, please get in touch with Bert Bos at bert@w3.org.

(feed)This page as an Atom feed

A mechanical calculation machine (with an added W3C logo)

Bert Bos, math activity lead
Copyright © 2008–2015 W3C®

Powered by Planet