Publishing Working Group F2F Day 2 — Minutes

Date: 2019-09-17

See also the Agenda and the IRC Log


Present: Masakazu Kitahara, jemma, Luc Audrain, Ralph Swick, Jun Gamou, Charles LaPierre, Rachel Comerford, Gregorio Pellegrino, Toshiaki Koike, Benjamin Young, zoebijl, Romain Deltour, Avneesh Singh, georgek, Wendy Reid, George Kerscher, Matt Garrish, Yu-Wei Chang (Yanni), matt-king, Brady Duga, Juan Corona, achraf, Reinaldo Ferraz, Jeff Jaffe, Laurent Le Meur, Garth Conboy, davidclarke, addison, duerst, Murata Makoto, bert, Bobby Tung, Richard Ishida

Regrets: Ivan Herman


Chair: Wendy Reid, Garth Conboy

Scribe(s): Ralph Swick, Romain Deltour, Marisa DeMeglio, Benjamin Young, Dave Cramer


1. ARIA WG meeting

Wendy Reid: welcome ARIA friends!

joaniediggs: myself and James Nurthen
… you have a DPUB ARIA 2.0 deliverable in your charter
… we’re wondering about this


joaniediggs: Aaron Leventhal (Google) filed this ^^ issue
… running headers and footers
… a JAWS developer thinks this is a good idea

Ralph Swick: .. the ARIA WG also thinks this is a good idea

joaniediggs: but the ARIA WG doesn’t own this spec

Wendy Reid: Pub WG isn’t currently working on dpub-aria 2.0
… Tzviya knows more about this

Matt Garrish: we had started proposing that we’d do a 2.0
… we’ve been working on the larger issue of semantics in publishing
… early in this period we decided to scale back
… and do a dpub-aria 1.1 to fix some of the issues in 1.0
… focussing on owned elements and parent semantics
… we didn’t want to add a lot more semantics before there is more implementation in AT
… when Aaron proposed this I did understand his use case
… I agree it would be useful
… we’ve been consumed with Web Publications and Manifest
… so we haven’t reconvened the dpub-aria TF
… I don’t know if the group will still prefer to focus on a 1.1 rather than 2.0
… it’s probably time to reconvene that TF

Dave Cramer: how are running headers and footers expressed in markup?
… there are largely unimplemented CSS specs where this doesn’t appear in markup so there’s no place to attach a role

joanie: maybe


joanie: would you comment in the issue with that question, so Aaron can answer?

Avneesh Singh: +1 to what Matt said

Avneesh Singh: we’re looking for AT implementations before doing a lot more work
… one question was whether page numbering was being mixed with running headers/footers
… this has to be resolved in more discussion; there’s not consensus yet
… another question: are there threats to this attribute? we need identities for extended descriptions

joanie: we’re discussing this 1:30 to 2:30 today

Romain Deltour: we also wanted to take a fresh look at ARIA in HTML before taking up dpub-aria again

joanie: on wanting to see more implementations, I’m curious
… the way the mappings were done, many were done to landmarks and they work automatically in AT that does landmark navigation
… so there is already support for roles that map to links, list items, etc.
… the goal is to make ARIA work automatically in platforms

Avneesh Singh: aria-role=doctitle – screen readers should know this is the title and should say “Title” then read the title

joanie: are you filing issues?

Avneesh Singh: yes

joanie: because the word “title” isn’t spoken, are you saying this isn’t implemented?

mattking: we don’t tell AT what exactly to do
… but I am working on a project to try to clearly define some level of expectations for screen readers
… but this still won’t define what word to use
… if a screen reader uses the word “title” instead of “heading” consistently, that would be fine
… its users become accustomed to that
… does the spec have to say the word must be “title”?

Avneesh Singh: we can make some recommendations to screen readers
… the purpose of the roles is to help the AT describe what sort of material it is reading

Matt Garrish: there’s two parts to implementation issues
… some bugs; e.g. in doc cover
… on the publishing side we also want to understand what semantics are the most useful
… what semantics actually improve reading comprehension
… what do we expect from the semantics?
… what publishers might want for internal workflows vs. what is useful for AT

Luc Audrain: as a publisher we rely on structure
… we know the structure of the document and can derive roles from the structure

jemma: thanks for clarifying the meaning of “lack of implementation”, Matt.

Luc Audrain: in the EPUB world we had the issue that epub-type was only structural
… with dpub-aria we can implement more structure
… we know there are still needs for AT

mck: +1 to Matt’s point that it is critical to understand what is useful to end users of assistive technologies.


joanie: we weren’t sure of your plans

Ralph Swick: .. we do recognize there are issues

joanie: but we do see a use case for what Aaron has cited
… we see decorative things appear over and over again
… so we’re toying with a new ARIA feature to mark something as repeated content


joanie: the ARIA issue ^^ mentions some DPUB-related use cases and I’ve also filed them in your repo
… for the next time you update your spec
… if the content is elsewhere; e.g. a pull quote, a screen reader probably should not repeat it
… when proofreading a document you do probably want to hear the pull quote again
… but not in general
… if you do decide to work on running headers, these would map to this hypothetical repeated content issue
… we have a broader need to address this
… and we should come back to this if you update dpub-aria

jamesn: we’ve also been working on an extension to ARIA: aria-annotations
… not yet at FPWD
… still early, to solve a limited Google Docs annotation use case


jamesn: not ready to discuss here in detail

Benjamin Young: this picks up ARIA details to point to annotations in content
… we’ve tried to match Web Annotation terminology; “bodies” are the things annotations point to
… and call out the purpose of the annotation
… so AT can distinguish between commentary and descriptive content


Benjamin Young: see section 3.2.1
… many of these are informed by Google Docs use cases
… in 4.1 there’s a mapping from existing Web Annotation purpose/motivation terminology to these new role vocabulary additions

Charles LaPierre: for conformance certification for EPUBS, we’re finding dpub-aria roles in books from some large publishers
… the data is getting out there
… we need now a richer experience for screen reader users
… so we should talk with the AT developers to expose this

Romain Deltour: the issue about the context roles should be remembered: some rules conflict with the ARIA roles

Romain Deltour: some markup is broken as a result

Romain Deltour: there’s both a dpub and an ARIA issue (#15 and #748)

jamesn: yeah; not easy to solve with what we have today

Romain Deltour: maybe some extension hooks

jamesn: yes; it would be very useful to have a way to handle this

joanie: we’ll figure something out and get back to you
… thanks for the reminder

Benjamin Young: should we work on dpub-aria 2.0 in the remaining term of our charter?

Wendy Reid: we should probably look into this
… I’ll bring it up in the next chairs’ call when Tzviya is back

Benjamin Young: let’s keep communication lines open

joanie: yes

Luc Audrain: it’s very important as we’re setting our tooling for accessible ebooks
… we need the specs to be stabilized
… we are in a new era of publishers producing all new titles in accessible EPUB3
… this is also a goal of the European Accessibility Act

George Kerscher: having the additional semantics is really, really good
… knowing that an H1 is a Chapter or a Glossary is terrific
… other functionality such as aria-details pointing to an element and being able to navigate there would be really terrific also
… the additional concept of being able to activate/move/link to the element that aria-details points to
… my screen reader says “it has details” but you can’t do anything with it

joanie: right now a dpub-aria 1.1 seems quite reasonable
… if you don’t have enough for a 2.0
… right now you’re blocked on us
… you pointed out #748
… we’ll try to prioritize the issues that block you from doing a 1.1

Wendy Reid: thank you for bringing this again to our attention

2. MathML



bkardell: I brought some links
… I wanted to talk to you as you’ve faced difficulty getting things into browsers
… this isn’t necessarily political; browsers have a business too
… [summarized on slides]
… Igalia was able to implement CSS Grid because things were open
… we’re working on MathML

Romain Deltour: for remote attendees, Brian is showing us this slide deck:

bkardell: because we think this is the right thing to do
… the Web was created in order to share research
… MathML layout is hard
… MathML layout is text
… MathML layout deserves to be solved
… only Chrome lacks MathML support
… if we land support in Chromium, this pretty much solves the problem
… Igalia is actively working on this
… we have what we think is a very achievable goal of August 2020
… we’re trying to fund this via a group of people with a common interest
… we have a grant from NISO for this year and from APS physics
… we’re finalizing some stuff with Pearson
… see our FAQ
… you’re welcome to contribute code
… and download our linux distro and file bugs
… we’re also working on normalizing the MathML spec
… we have just about completed the normalization of the DOM in all three browser engines
… we’re working on completing the required funding for 2019

Charles LaPierre: what is the relationship to the new Microsoft browser?

briank: they’d get it for free
… as would all the other chromium-based browsers
… which is a lot of them
… as well as embedded systems

Dave Cramer: you are understated, Brian
… this is a revolution in how we could work
… it does place a lot of demands on us
… historically we’ve been beggars
… this presents the possibility of having a lot more control over our own fate, if we’re willing to make business investments
… there’s a way to get implementations into browsers that doesn’t require convincing the browser developers that it will make money for them

brian: it’s a compelling idea to think about how to do this together
… we have common needs
… this is W3C; we work together as a Commons
… taking some ownership of this Commons and working together in this way could be really positive

Dave Cramer: there’s more than just writing a spec
… writing the spec is just part of the journey

Laurent Le Meur: this is very exciting for us
… EDRLab is an open-source development organization
… we will happily use a native implementation
… do you have any recommendations for the publishing industry on presentation MathML vs content MathML?

brian: we do have a recommendation on that
… MathML is an example of a specification that has a lot of theory that didn’t get a lot of implementation
… it defines 580 elements
… no browser ever implemented anything close to that number
… we’ve done research on what are the commonly-implemented elements
… we’ve found about 20 elements
… all presentation MathML
… we have 1800 WPT tests
… WebKit is passing about 60-70%
… we’re filing bugs and seeing advancement
… we expect to see 400-500 new passes in webkit soon too

Romain Deltour: MathML Core current implementation report:

Charles LaPierre: MathML in chromium is crucial to textbooks
… how is the MathML Refresh CG going to affect your development?
… as they try to streamline MathML

briank: we’re implementing MathML core
… the streamlined MathML
… we’re working with the CG
… being more rigorous in the spec
… there’s no IDL for MathML, if you can believe it!
… we’re modernizing this
… I think it’s compelling to have a modern DOM that can fit into the modern world
… [with the streamlined spec] you can use shadow DOM

Luc Audrain: the Publishing Business Group can talk about this project
… we’ve funded the ePubCheck work
… perhaps we can look into supporting this

Benjamin Young: thanks for coming
… John Wiley and Sons ships a lot of math

brian: there’s been a lot of good work in MathJax
… we’d like to take some of the further work and learn from MathJax
… e.g. interactive math

Dave Cramer: can we sponsor individual elements? :)

Laurent Le Meur: when you add the code to chromium, how can you be sure Chrome will pick it up?

briank: I don’t think the intent of the fork was to kill MathML
… they wanted to change the architecture to improve performance and dropping the initial implementation of MathML was a side-effect

Charles LaPierre: I’ve heard some publishers say they can’t put a lot of math in an EPUB because MathJax takes too long to render it

brian: yes, and this will really help with that problem

Wendy Reid: are these publishers including MathJax or relying on the reading system?

Charles LaPierre: relying on the RS

brian: [shows a demo of rapid DOM updates]
… this demo is slowed down so you can see it!
… [demos reflowing math, responsive adjustments to the size]
… here’s a Custom Element that renders LaTeX

Wendy Reid: RS don’t have lots of memory
… how well will this work in those?
… have you tested those scenarios?

brian: it should work as well as anything else in CSS that does text layout

Romain Deltour: I’m interested in your collaboration with ARIA

brian: yes; we think there’s good work there that we hope to evolve toward

Luc Audrain: any idea what Amazon might do?

brian: I spoke with the Amazon participants in CSS WG in June, don’t have answers yet

Dave Cramer: it’s possible that having Chrome support in the pipeline will help

Brady Duga: caution that it will still take years for support to appear in RS once it’s in chromium

Dave Cramer: this will motivate some apps to add MathML support too

Benjamin Young: we’ll have to continue to provide fallbacks

brian: it does simplify things
… the need for a polyfill will go down

Wendy Reid: thanks for coming and telling us about this!


3. Implementation and testing

Wendy Reid: what do we need to get to CR
… we need to come up with a test plan and implementation guide
… timCole shared the Web Annotations testing and implementation docs
… we can probably do something very similar


Wendy Reid: the tests are pretty concise
… I would like our test suite to be as concise and comprehensive as possible
… I’ve created a Google doc to compile the tests before we turn them into WPT material

Wendy Reid: I’ve already put in high-level tests for Pub Manifest
… basically we should take all the MUST/SHOULD statements and turn them into tests
… the other part is to create exit criteria for the specs
… so that we set the expectations
… is there any advice from the other groups bigbluehat ?

Benjamin Young: one of the things we did was to use the actual sentences out of the spec and pull out the MUST do / SHOULD do
… the pass criteria is nice but hard to connect it to the spec
… the pass criteria can be helpful to understand the test
… but being able to find an exact quote from the spec is very useful
… it also makes the tests very pretty and readable
… a report is generated
… (thanks to Gregg Kellogg)

Wendy Reid: currently I wrote a first couple tests, for the basic things (if a manifest is present, can you detect it? etc.)

Benjamin Young: most of the manifest checks are probably JSON/JSON-LD based

Gregorio Pellegrino: are the tests for the UA or the manifest itself?

Wendy Reid: that would depend on the implementer?
… we’re testing the manifest format

Benjamin Young: we make no demand on UA, so there is nothing to test for them

Gregorio Pellegrino: testing the manifest is fine

Benjamin Young: a lot of the current tests look like what was in Web Pub, not in Web Manifest
… the only thing we can test for Web Manifest is a manifest, the JSON document and the requirements around it
… how you get that, how you feed it to the tests is a different issue
… out of scope
… package tests could exist which can leverage these tests

Charles LaPierre: if we take bigbluehat’s suggestion, things like “Process the manifest” are very vague
… what does it mean? we need to clarify that

Wendy Reid: right, this is just a start
… it’s currently only high level, then we need to break it down

Matt Garrish: what we have is a vocabulary in the manifest, and we’re gonna have to show that we have publishers or others who’re gonna implement it
… and then Audiobooks is gonna be the actual implementation of the manifest
… it’s probably where we’re gonna have more UA testing
… we’re gonna have to show commitments of implementation
… getting all the metadata approved is gonna take some work

Ralph Swick: part of the goal of this testing requires a certain mindset
… usually we’re testing an implementation
… here we’re also testing that the spec is interpreted in the same way by the implementers
… the implementers need to say why the tests fail, so that we can clarify the spec
… we’re not just testing implementations, we’re testing whether the spec is clear enough

George Kerscher: do we produce tests that are expecting to fail?

Wendy Reid: good question, which came up yesterday
… when we were discussing validation errors
… should the spec be clearer about what we consider a failure vs. a recoverable error, etc
… there’s some work we can do to clarify that
… tests can tackle some of these validation problems

Matt Garrish: you can do tests that verify that a warning is properly issued, or an error raised
… you’re testing a requirement of the specification

Charles LaPierre: you’re expecting the algo to fail, in which case the test passes

Ralph Swick: [may be related to -> #62 ]

Benjamin Young: the processing steps are really the only thing that goes beyond pure JSON validation
… the MUSTs in the spec that I’m finding are mostly “must have a context” and “must have a type”
… and then datatype requirements
… the processing section is something that is gonna be interesting all the way around
… as it results in an internal object (or API?) and we don’t really have a way to test those things
… the way WPT works is that they have hooks in the browser so they can test that
… I’m not sure what we can do in that case
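
The MUST-per-test idea discussed here can be sketched as follows (a rough illustration only, not the group's actual test suite — the function names and report shape are hypothetical); each MUST statement, such as the “must have a context” and “must have a type” requirements quoted above, becomes one small, independently reportable check:

```python
import json

# Each MUST statement from the spec becomes one small named check.
# These two requirements are the ones quoted in the discussion;
# names and structure here are illustrative.

def must_have_context(manifest):
    return "@context" in manifest

def must_have_type(manifest):
    return "type" in manifest

CHECKS = [must_have_context, must_have_type]

def run_checks(raw_json):
    """Parse a manifest and report pass/fail per MUST statement."""
    manifest = json.loads(raw_json)
    return {check.__name__: check(manifest) for check in CHECKS}
```

A manifest missing `@context` would then fail exactly one named check while still passing the others, which is the granularity needed for an implementation report.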

Laurent Le Meur: I agree that there are two sections in the tests, one about the structure and one about the processing
… I propose to separate the testing document in two parts

Wendy Reid: the Pub Manifest is really about structure, the structural tests are around the manifest
… we could move the processing tests to the Audiobooks, as they will have to implement the processing part
… Ralph, is it an OK approach?

Ralph Swick: yes, that’s where they’re gonna be implemented

Juan Corona: are we seeking implementations for the Pub Manifest without any profiles?

Wendy Reid: no clear promises about Pub Manifest outside of Audiobooks
… but we can expect something that is not an Audiobook to verify that it can apply to something else

Ralph Swick: ???

Benjamin Young: by testing an Audiobook implementation can we use these tests to validate the Manifest spec?

Ralph Swick: there are pieces of the Manifest spec that may not be covered by the Audiobook profile

Wendy Reid: for instance, one of the requirements of Audiobooks is that the reading order only contains audio files
… however Manifest doesn’t require that
… when we’re testing the processing part for Audiobooks, does it suffice or do we need to create a publication with non-audio files too?

Ralph Swick: I expect the Audiobook implementations to be sufficient

Brady Duga: what is an implementation of the Pub Manifest? it’s not a thing, we’re always supposed to create profiles
… there cannot be implementations of it outside of a profile context

Benjamin Young: right, it’s just an abstract interface
… the processing step currently in Pub Manifest talks about issuing warnings and errors, and says that how UA do it is UA-dependent
… how do we test that?
… we would need some means for systems to record that they do that
… WPT would have solutions for JS/browser implementations, but we can’t test console loggers

Laurent Le Meur: for example, an Audiobook must contain only audio files. if it doesn’t, what should the UA do?
… I don’t think we spec’ed that

Wendy Reid: right, we may need to clarify that
… for Audiobooks 1.0 we decided to accept only audio, but if the market wants a way to render complementary content, we can add that

Laurent Le Meur: I think the UA should not reject such books, we’re Web oriented and we should be as permissive as possible

Ralph Swick: yes, it should be captured as an issue

Benjamin Young: this question about which spec we test, whether or not tests are inherited, needs to be pinned down
… does the processing section need to be moved to the Audiobook?
… we don’t have a generic mediatype that declares support for Pub Manifest
… it’s a big question

Wendy Reid: when we decided on the profile model, the idea wasn’t to declare media types for everything but to add specific requirements
… a valid Audiobook needs a valid manifest, the other way isn’t true

Benjamin Young: then both specs should probably go to rec, and no profile should include any additional processing steps

Charles LaPierre: for the reading order we may want to consider use cases where each chapter has images associated to audio

Ralph Swick: sounds close to sync media

Wendy Reid: the other challenge with that is that we have a cover image, so we’d need to come up with rel values for declaring that the image is related to chapter n…
… the discussion is valuable, but it’s potentially version 2

George Kerscher: this is related to ToC which can present a series of headings, images
… an audio reader which supports the presentation of images is gonna ??? the table of contents

Wendy Reid: in Audiobooks we haven’t strongly defined expectations for UAs
… implementors might use the manifest or not, it’s for them to decide

George Kerscher: don’t we have to specify what triggers the switch between the chapter images when it starts to play?

Matt Garrish: stepping back a little.
… with Pub Manifest the processing model is inherited and extended
… the actual implementation is the Audiobook and it’s the one we need to use
… in that way we’re able to test what was defined in the Pub Manifest specification
… we don’t want every profile spec to define its own processing model

Benjamin Young: yes, completely agree

Avneesh Singh: +1 Matt

Benjamin Young: to be safe, whatever we define in the profiles needs to fit in the upstream processing steps
… we need to define things that do not require changes to the Manifest processing model
… and make sure that the extensibility model accommodates profiles to be added in version 2 of the manifest
… if we add additional terms, it still needs to be a Pub Manifest, and UA requirements come on top of them
… so we can avoid media types ad nauseam
… I think it’s testable, if we test the JSON validation thing with schema, test the processing part of the manifest, and test audiobook specific things as UA tests

Matt Garrish: makes sense
… we may need to say more about the inheritance/extensibility

Wendy Reid: the testing may need to be structured so that the processing tests are about the Manifest processing model
… and let UA decide what the behavior is when producing a warning or error
… and when testing Audiobooks, all the processing tests apply, and then you have expectations on UA behavior

George Kerscher: I don’t understand how the toc relates to the play order
… I understand how UA can take the toc, go to an mp3 file and start playing it
… but how an app can walk from one mp3 file to the next, and how would the toc know that you just moved from ch2 to ch3?
… what data is in the files that is gonna allow a UA to know that?

Wendy Reid: the time stamps depend on the structure of the file
… if each chapter has its own audio file we know that
… but Audiobook also allow media fragments
… that really comes down to the UA to know how to map this information
… that’s a UA thing, we cannot specify how to do it, just that they have to
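
For illustration of the mapping problem just described (a sketch, not anything the WG specifies — the function and URL are invented, and only the bare seconds form of the Media Fragments syntax is handled), a reading system resolving a ToC link like `chapter2.mp3#t=120,300` could parse the temporal fragment to know which span of audio corresponds to a chapter:

```python
from urllib.parse import urlparse

def parse_temporal_fragment(url):
    """Return (start, end) in seconds from a #t=start,end media fragment.

    end is None when omitted (play to end of file); returns None when the
    URL carries no temporal fragment. This sketch ignores the npt:/clock
    forms of the full Media Fragments URI grammar.
    """
    fragment = urlparse(url).fragment
    if not fragment.startswith("t="):
        return None
    start_s, _, end_s = fragment[2:].partition(",")
    start = float(start_s) if start_s else 0.0
    end = float(end_s) if end_s else None
    return (start, end)
```

With per-chapter files the fragment is absent and the whole file plays; with a single-file audiobook, each ToC entry would carry a fragment like this.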

Rachel Comerford: does this come down to a best practice?

Wendy Reid: we have a section in the UCR devoted to what the users know about the current location in the publication
… it’s in the UCR, not a best practice

Benjamin Young: the UCR are hopes or aspirations, I don’t think they can be tested on the machinery
… if we need to demand things on UA, we need tests
… we’ve avoided UA declarations in Pub Manifest, but the profiles can specify that

Matt Garrish: I’m wondering who’s gonna take responsibility for this? there’s lots of issues
… how are we gonna break up this work?
… I have a lot of logistics questions

Wendy Reid: do we need a TF? this is our work for the next 8 months, so I think we’re all responsible

Benjamin Young: the only thing I regret with Web Annotations is not having started the testing effort from day 0
… it is really a thing the group should be doing
… it will potentially reshape what we write in the specs, and how we write it

Wendy Reid: yes, it needs to be done before we lock down the documents
… bigbluehat, how did you do it with the Web Annotation group?

Benjamin Young: in our case, one of the people writing the tests was also an editor, which was very helpful
… a potential way forward is for anyone interested in writing tests to extract the MUST and the SHOULD, writing pseudo code to describe what the test is expected to do
… it’s gonna help a lot in identifying what is testable
… once you write tests, the vision clears

Wendy Reid: we should all be responsible, but we need people to write tests, scour documents, and who’re not Matt :-)
… is anyone interested?

Benjamin Young: I’m happy to help whoever wants to write JS and JSON schema to operationalize the tests
… we did put a lot of effort into getting it running in WPT
… I would start with that; UA tests are probably someone else’s job

Brady Duga: I’m still confused what these tests look like
… how do you check a data format?
… if I had to sit down and write tests, I wouldn’t know how to start

Benjamin Young: yes we have some examples in Web Annotations

Benjamin Young: most of these tests are JSON schemas
… JSON Schema was not flexible enough for some of our requirements, so we created a bunch of schemas which mapped to our MUSTs and ran them in sequence

Wendy Reid:

Benjamin Young: we’re testing the same concepts in multiple ways, with micro schemas
… the test runner code is interesting as well
… in tools and samples, there are “correct” and “incorrect” annotations which we used to test the tests
… then we made an npm package

Wendy Reid: it looks like we could pretty easily create a spreadsheet of the requirements and then turn that into little schemas
… so we can get everyone to participate, coders or not

Benjamin Young: we also used section numbers from the spec, which is easier to reference

Matt Garrish: I like your idea of getting everyone involved
… we may want to chunk the spec in groups to distribute the assignments, like we did for the EPUB spec review

Laurent Le Meur: I still wonder why you’ve split into unit schemas what we have as a global schema
… e.g. the different shapes that contributors can take are already part of the schema
… what’s the point of having micro schemas when we can have a larger super schema that could also be used by implementors

Benjamin Young: at the time we couldn’t find a validator that reported which part of the schema passed or failed. that’s why we broke it into smaller per-feature schemas
… we couldn’t get the tools to tell us the precise info without breaking up the schema

Laurent Le Meur: when I write a schema I get a report

Benjamin Young: it was 3 years ago, tooling may have improved

Wendy Reid: you want the tests to be granular; you can have one super schema as long as the tests are granular

Benjamin Young: for implementation reporting, we need the info feature by feature

Benjamin Young: this is the code from Apache Annotator that does validation

Benjamin Young: the implementation is 94 lines of code
… it pulls in Web Annotations JSON schemas and run it against the annotation
… this is the kind of thing I can setup for the Pub Manifest files
… that doesn’t touch the processing sections, but can be used for the schemas
… we asked Web Annotations implementors whether they were creators or consumers of them
… it helped to test round-tripping
… for Pub Manifest, someone else needs to look at how to test the internal representation
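
In the spirit of the validator just described (this is a sketch, not the Apache Annotator code — the checks and sample manifests are invented), a small runner can apply per-feature micro-checks to “correct” and “incorrect” samples and produce the feature-by-feature table that implementation reporting needs:

```python
import json

# Hypothetical micro-checks, one per feature; each returns True/False.
MICRO_CHECKS = {
    "has-context": lambda m: "@context" in m,
    "has-type": lambda m: "type" in m,
    "reading-order-is-list": lambda m: isinstance(m.get("readingOrder", []), list),
}

def report(samples):
    """Run every micro-check against every sample manifest and return a
    per-sample, per-feature pass/fail table."""
    results = {}
    for name, raw in samples.items():
        manifest = json.loads(raw)
        results[name] = {feat: check(manifest) for feat, check in MICRO_CHECKS.items()}
    return results
```

Breaking the checks up this way is what lets a report say which feature failed in which sample, rather than a single pass/fail for the whole schema.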

Laurent Le Meur: to test content we need some sort of schema
… to test UAs, we need a set of samples, with various shapes or features
… it’s totally separate; we should start creating samples soon
… the actual content isn’t important

Wendy Reid: we have a good collection of samples with the various shapes of Audiobooks, we can use whatever public domain content

Laurent Le Meur: right, the size of the chapter is not relevant, we can use short chapters

Wendy Reid: to tackle all the testing, it sounds like we need a little bit of research to see if we need to chunk the schema like Web Annotations did
… and then we need people to take the MUST/SHOULD out, I like the idea of splitting the work in spec chunks
… then another group has to work on how to test UA behavior

Matt Garrish: where does CR fit into this? do we need the tests before going to CR?
… we may discover issues with the spec, are we concerned about finding significant issues after CR?

Dave Cramer: you write tests for CRs, then change the CR. the quality of the spec is the most important thing; that’s what CR is for

Ralph Swick: it’s considered a good thing if implementations get you to update the spec
… depending on the kind of changes to the spec
… editorial changes are always easy to do
… substantial changes are supposed to require approval
… but CR is definitely for improving the spec

Wendy Reid: after lunch, we’re gonna talk about the Pub CG, then talk about a plan to get started with these tests
… we’ll reconvene at 1:30

4. Publishing CG

Wendy Reid: the pub cg has been formed, please join it
… as mentioned yesterday, we want to spend more time on incubation. this is the point of the pub cg, and they want to hear your ideas
… things start in pub cg and then move on to other WGs (not necessarily pwg)
… let’s brainstorm topics for pub cg

Juan Corona: lower-level features that browsers could support (e.g. bigbluehat talked about iframe use cases; dauwhe talked about the challenges of large doms)

Dave Cramer: iframes are interesting, lots of discussion across w3c; feels like the role of this group might be to describe the problems we have when using iframes for the type of content we’re trying to create as something to bring to whatwg or wicg or whomever is getting closer to the bare metal. we have possibly unique use cases, so conveying them to the people who could advice re feasibility would be a valuable task for this group

Wendy Reid: the ideas submitted to pubcg could also be problems that you’re having
… e.g. how do publications and renderers do a better job with footnotes and endnotes
… rather than leave it entirely up to UAs

Dave Cramer: you can make anything happen on the web today - we have this view that we should be able to use some declarative markup that implies a processing model
… we want to come closer to the web, not further away. no complex rendering pathways.
… we haven’t identified the causes of the problems we see with the current solution
… fragmentation of footnote experience in UAs, for example, feels like a problem with the UAs

Brady Duga: there were issues
… UAs use wacky logic to find out if something is a footnote
… due to loss of epub:type

Garth Conboy: I think we ought to look at something like footnotes
… what goes to pub cg vs epub cg - footnotes might go to epub cg

Romain Deltour: experiment with creating an API for the manifest
… define an API via web idl and then polyfill it
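
The kind of API Romain describes might look roughly like this sketch: a hypothetical manifest-access interface, polyfilled in TypeScript by parsing the publication’s JSON manifest. The interface and function names here are illustrative assumptions, not anything the WG has defined.

```typescript
// Hypothetical shape of a parsed publication manifest. Property names mirror
// pub-manifest, but this interface is an illustrative assumption.
interface PublicationManifest {
  name?: unknown;
  readingOrder?: { url: string }[];
  inLanguage?: string | string[];
}

// A minimal "polyfill" entry point: parse the JSON that a native
// implementation might one day expose through a Web IDL-defined interface.
function parsePublicationManifest(json: string): PublicationManifest {
  const raw = JSON.parse(json);
  // pub-manifest allows reading-order items as bare strings or link objects;
  // normalize them all to objects.
  const readingOrder = (raw.readingOrder ?? []).map((item: unknown) =>
    typeof item === "string" ? { url: item } : (item as { url: string })
  );
  return { name: raw.name, readingOrder, inLanguage: raw.inLanguage };
}

const manifest = parsePublicationManifest(JSON.stringify({
  name: "Moby-Dick",
  readingOrder: ["chapter1.html", { url: "chapter2.html" }],
  inLanguage: "en",
}));
```

Defining the shape in Web IDL first, then polyfilling it like this, would let the group validate the API design before asking implementers for anything native.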

Juan Corona: what ideas are they working on in pub cg already?

Wendy Reid: iframes is one thing they’re looking at
… accessibility

Juan Corona: addressability

Gregorio Pellegrino: multicol layout
… pagination

Dave Cramer: many ideas are related to behaviors - we have expectations - what about custom elements with the desired behavior

Marisa DeMeglio: what is the current state of custom elements and accessibility, in practice?

Wendy Reid: I understand the practice, shadow DOMs and accessibility trees

Wendy Reid: we have an opportunity to take advantage of our position in the web world

5. Internationalization

Marisa DeMeglio: [we are joined by people from internationalization]

Richard Ishida: we’ve been working with you all, and especially ivan, on some issues related to pub-manifest

Wendy Reid: it has features related to language declarations
… your self-review found some issues so you’ve rewritten some pieces but we haven’t had a lot of time to discuss your rewrites
… i had a look and perhaps some things need clarification
… the spec doesn’t have a lot about language - maybe we can go through it
… our worldview includes 3 different aspects of declaring lang: 1. you are declaring the actual language of the text in the element on which the lang declaration is placed. this enables, for example, lang-specific spellchecking
… 2. who are the people who are intended users of this text (could be an entire document) - there could be more than one lang value here, e.g. french and english in quebec
… in the same document
… and each lang section is then declared within that doc
… 3. if you are referring to external resources (speech, images, more text), those resources can be said to be for a particular audience (or in a particular language, but that is not as common as indicating audience)
… audience vs text processing
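
A rough way to see the split between audience metadata and text-processing language, sketched in TypeScript (the manifest shape loosely follows pub-manifest; the fallback helper is an assumption, not spec behavior):

```typescript
// Aspect 1 lives in the content itself (e.g. <html lang="fr">) and drives
// text processing such as spellcheck or hyphenation. Aspects 2 and 3 are
// metadata about the intended audience, which is what a manifest-level
// inLanguage declaration expresses.
interface Resource { url: string; language?: string } // per-resource declaration
interface Manifest { inLanguage: string[]; resources: Resource[] }

// Illustrative fallback: use a resource's own language if declared,
// otherwise the first audience language from the manifest.
function effectiveLanguage(m: Manifest, r: Resource): string | undefined {
  return r.language ?? m.inLanguage[0];
}

const m: Manifest = {
  inLanguage: ["fr", "en"], // two intended audiences, as in the Quebec example
  resources: [
    { url: "intro.html" },                    // inherits the audience default
    { url: "appendix.html", language: "en" }, // declares its own language
  ],
};
```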

Dave Cramer: pub-manifest is a meta container for actual artifacts
… manifest itself is json, can have metadata about itself as well as about the artifacts it refers to

Romain Deltour: our self review (made by ivan):

Richard Ishida: i see global and local lang settings

Dave Cramer: global is the default lang
… for items within the scope of this context

addison: i see an inLanguage property that can talk about lang or audience

Romain Deltour: we want to allow names to be specifiable in multiple scripts - can we use the lang tag for this?

addison: yes
… “und” subtag can carry script info but no one really looks for that tag

Richard Ishida: intended audience and language may not be 1-1. example: a taiwanese-german dictionary; not both lang groups will use it, just one will

addison: how does inLanguage work in this case?

Wendy Reid: primary lang is german and each item in the resource list could have both (assuming it contains both langs)
… inLanguage can be an array

Murata Makoto: is it possible to specify both an ideographic representation and a hiragana representation for japanese author names and titles, in a compact way?

Wendy Reid: it depends on the field

Murata Makoto: want to avoid repetition of id and type
… e.g. array of title values rather than multiple titles
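
The localizable-string form in pub-manifest points at one compact answer; a sketch (treat the exact shape as an assumption based on the draft, not a normative example):

```typescript
// A localizable string: one logical value given in several language/script
// forms, without repeating any id/type boilerplate around each form.
type LocalizableString = string | { value: string; language?: string };

// An author name in both an ideographic form and its hiragana reading,
// distinguished by BCP 47 script subtags ("ja" vs "ja-Hira").
// The name itself is a placeholder, like "John Doe".
const authorName: LocalizableString[] = [
  { value: "山田太郎", language: "ja" },          // ideographic (kanji) form
  { value: "やまだたろう", language: "ja-Hira" }, // hiragana reading
];

// Pick the form matching a requested language tag, else fall back to the first.
function pickForm(name: LocalizableString[], lang: string): string {
  for (const n of name) {
    if (typeof n !== "string" && n.language === lang) return n.value;
  }
  const first = name[0];
  return typeof first === "string" ? first : first.value;
}
```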

Gregorio Pellegrino: has an audience object

Marisa DeMeglio: .. the lang metadata in epub - every lang that appears in the document ends up in the OCF

Dave Cramer: we’re deferring to authoring tools rather than defining behaviors

Gregorio Pellegrino: assistive tech may want to have a list of all the languages

addison: direction - there is a default dir attribute - not sure that auto is going to be that helpful as a value since you’re trying to communicate a base direction
… we don’t have an item-level direction solution yet

addison: is there a provision for pronunciation fields, e.g. for sorting in langs like japanese?

Murata Makoto: for tts of title or author info, we might want to have both SSML and recorded voice for each piece of information. TTS is unreliable in some languages.
… epub3 recently introduced a property for specifying an audio clip for the title and author

Wendy Reid: this came up in audiobooks, we don’t have anything for it though

Wendy Reid: publication lang
… seems that you’re trying to provide a list of all the langs in the resource

Dave Cramer: this was metadata for an intended audience; UAs want to create the environment to display the content before they parse the content themselves; this is a hint that, if this book is for a german speaking audience, having the UA preload the german dictionary might not be a bad idea

Richard Ishida: maybe make it more clear in the text

addison: you’re using this as information about this book so it should be process-able before opening the book

Richard Ishida: you could also use it for searching

addison: a few disjoint use cases - some bound to the content - some bound to the audience metadata

Wendy Reid: scholarly publications may have a requirement to list all the langs
… profiles can clarify if they need a strict usage

Dave Cramer: we should probably further clarify this; not helpful here

Benjamin Young: a wiley use case - we will publish research in english but with multilang titles so it can be found by audiences that speak many languages and want to read english stuff
… i think this use case is taken care of

Richard Ishida: do you infer one from the other - language vs inLanguage

Wendy Reid: see section

Dave Cramer: let’s have a stronger statement here

Richard Ishida: reading progression direction

addison: binding direction
… vertical text is a difficult case

Dave Cramer: ereaders give additional complications - top to bottom, bottom to top - individual html resources may be paginated in diff directions

addison: recommend setting overall page progression direction, tied to default dir of book, tells you things about cover image

Dave Cramer: must avoid a situation where we have unreachable content

Richard Ishida: one book could be bound in both directions, e.g. in flight magazines

davidclarke: ltr, rtl - consider top- and bottom-bound as well

Wendy Reid: we have an open issue for this
… UAs might render as a scrolling experience rather than paginated ltr/rtl
… UAs might be unable to render in the specified modality

addison: do you describe the writing mode of the book? or infer it?

Dave Cramer: historically no

Laurent Le Meur: in this document, we describe how the primary language (inLanguage) is used by user agents

Richard Ishida: seems like you’re ok

Dave Cramer: a couple ambiguities

Wendy Reid: we will clarify a few things - primary lang, default direction, spoken alternatives

addison: we’re asking the wider community for support, e.g. for adding direction metadata to JSON

Marisa DeMeglio: [i18n leaves]

6. TAG Time!

sangwhan: you may have heard of the TAG. We review all the specs here at the W3C
… we reviewed these, and we had some concern about these specs not connecting to the rest of the Web
… it didn’t seem that audiobooks was connected to the rest of the Web
… the media group had a similar situation
… and didn’t seem to connect with the rest of the Web
… it would be great to have a general chapter and playlist format
… and talking to the media group would be a great way to accomplish that
… the media group is meeting on thursday and friday, so maybe you all can sync up
… the other topic is packaging
… google recently released something called bundled exchanges
… which is better for this group than signed exchanges
… because of the absolutely pointless 7 day certificate expiration
… at least for this group’s use case that doesn’t make sense

Dave Cramer: no, we want people to pay us every 7 days


Jeffrey Yasskin: romain: Oh hai. What’s up?

sangwhan: we’d love to help you all work with google on bundled exchanges
… as we’d like to see how these things progress and get your help to file issues, etc.
… if you have questions about what the TAG is or in particular this topic of bundled exchanges
… I’m happy to answer

Benjamin Young: one general ask… you mention the rest of the web
… does the tag have a definition of the web? what the boundaries are?
… we have things that use http, hypermedia etc
… like rest APIs
… but sometimes these things are not part of the web, according to some browser folks
… web of things is not a browser thing, but is a web thing
… and publishing is a similar opportunity

sangwhan: that’s a tricky question I’d say
… we don’t have a hard line about what is the Web and what is not the Web
… for publishing that line is fuzzy

Romain Deltour: another very good read for this group, the report on the ESCAPE workshop held in July:

sangwhan: you can’t load ebooks in browsers
… unless it’s in a weird extension thing
… it would be nicer to bridge these things together
… I believe it opens a new window

Laurent Le Meur: it does. others don’t

sangwhan: I want interoperability
… if it can’t interop with the rest of the Web, then I don’t consider it part of the Web

Marisa DeMeglio: so you mentioned going to the media group
… I’ve been working on synchronized media
… we’re sort of a variant of audiobooks that sync text and audio
… I’ve gone on many wild goose chases in other groups only to find what we’re doing is unique
… so if there are specific things you see overlap, that would help us target our work

sangwhan: chapter and section metadata would be something to collaborate on
… timing is also a key concern
… audiobooks are also interested in that
… there is overlap which is why I’m here

Dave Cramer: we sort of have short term goals and long term goals
… all of us are here because we see the power of the web
… and we see publications being linkable and publishable across the web
… we also have short term business cases for audiobooks
… we’d like to stop emailing files and shipping hard drives
… and having loads of middle-folks converting files and remaking formats
… there is indeed value in keeping long term goals in mind while we address the short term needs
… that’s why I hope we can take something from what the TAG is telling us
… and even if we don’t get there immediately, I don’t want us to road block our future work “on the Web”
… even if these are not accessed via HTTP
… we don’t want to start over when we get there

Laurent Le Meur: before we discuss with the media entertainment group
… I would like more details about why the TAG thinks what we’re doing is not on or for the Web
… the model can work on and off the Web
… on the Web, you’ve got a link to an HTML page which can include a polyfill
… it loads a JSON manifest, it then creates the menu, and plays the contents
… show table of contents, etc.
… how is this not on the Web

sangwhan: I’m not trying to invalidate what the group has done
… the plumbing conversation is one of them
… the media group is about playing
… and I don’t see how this content and those APIs will be bound together

Wendy Reid: one thing I’ll point out
… turns out I’ve been talking to them about MediaSession for a while
… we have been in touch with them, and we will certainly find out where they don’t align
… I’ve already asked if we’re colliding or aligning
… I do plan to check in this week also
… obviously we want that work to align with ours

Jeffrey Yasskin: we’re from the team that is specifying and building the Web Packaging work
… so I wanted to let you know I’m here to be available for questions etc

sangwhan: file bugs while things are in the works

Jeffrey Yasskin: we’re designing it to be proposed to the IETF in November
… and they’ll certainly be changing it when they begin work in the next year

Jeffrey Yasskin: the integration with fetch() is not written down yet

kinuko: we are also trying to make it easier

Romain Deltour: links to the sister specs (signed exchanges) and explainer are in the gh repo readme:

Wendy Reid: can these bundles (or signed exchanges) be loaded directly without using fetch()?

Jeffrey Yasskin: yes. that’s something we’re working on

Dave Cramer: [super smart questions that got missed by the scribe]

Wendy Reid: come to the Web Packaging breakout tomorrow for more!

sangwhan: yes. be there.

Wendy Reid: we need things to last longer than 7 days

Jeffrey Yasskin: yes. you can do that with bundling, and we’re working on making APIs available in a special security context (possibly)

Dave Cramer: we do need storage access

Jeffrey Yasskin: because?

Dave Cramer: we do have books where one fills out quizzes or adds content to customize the book
… and in epub readers now that doesn’t stick or work at all really
… so we’d like to have that available

Wendy Reid: thanks to sangwhan (TAG) and jyasskin and kinuko (Web Packaging) for joining!

7. Testing and implementation

Wendy Reid: I think we nailed down what we need to do, now we need people to do it
… 3 main groups of people
… 1. people to investigate whether we need smaller schemas or if we can use one super schema

Laurent Le Meur: I can

Benjamin Young: I’ll help

Wendy Reid: 2. people who can break down the specs into MUST/SHOULD statements
… /me it’s bullshould

Wendy Reid: we’re probably gonna do it in Google Docs

Juan Corona: I can help

Benjamin Young: it’s nothing harder than copy/pasting those in a doc, initially
… the hardest part is keeping it up to date with the specs

Juan Corona: I’m volunteering to write a bot for it
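
A first cut of such a bot could be as simple as scanning spec prose for RFC 2119 keywords; this is an illustrative sketch, not an actual tool the group has:

```typescript
// Extract candidate conformance statements from spec prose by looking for
// RFC 2119 keywords. A naive sentence split is good enough for a first pass.
function extractConformanceStatements(specText: string): string[] {
  return specText
    .split(/(?<=[.!?])\s+/)
    .filter((s) => /\b(MUST( NOT)?|SHOULD( NOT)?|MAY)\b/.test(s));
}

// Hypothetical sample prose, not quoted from any spec.
const sample =
  "A manifest MUST be valid JSON. Colors are nice. " +
  "User agents SHOULD ignore unknown properties.";
const statements = extractConformanceStatements(sample);
```

Re-running the extractor against each editor’s draft and diffing the output would address the point about keeping the statement list up to date with the specs.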

Romain Deltour: what about Matt’s idea of splitting the spec into chunks to distribute the work

Wendy Reid: sure, we don’t expect one person to work on everything
… 3. people who can deal with the UA specific use case testing (behavior testing?)

Brady Duga: I volunteer to find MUST/SHOULD in a section of the specs

Romain Deltour: me too

Gregorio Pellegrino: we made a Web Pub a month ago, maybe I can turn it into a test file
… I already tested it with VivlioStyle reader

Benjamin Young: we do need publications to test on the JSON side and the UA side

Gregorio Pellegrino: the Web Pub profile or the JSON only?

Benjamin Young: the JSON is all we need for the manifest tests

Romain Deltour: [discussion about URL having to dereference to a resource, and how it’s wrong]

Gregorio Pellegrino: what is the timeline for this work?

Wendy Reid: need to double check with Ivan, but we want to transition to CR at the end of this month
… but we can write tests during CR
… we’d like to have tests in the next 1.5 months, to have implementors looking at these tests

Benjamin Young: I really don’t see how we can have a testable spec without UA guidance in Audiobooks

Wendy Reid: right, we’ll need to address that

Murata Makoto: what’s the status of the current spec?

Wendy Reid: it’s a public working draft

Murata Makoto: did the director approve it?

Dave Cramer: we got it for the FPWD; we don’t need it for subsequent WDs

Wendy Reid: we did a lot of work over these past two days
… I want to thank everyone!
… we have an approach for testing, we’ll get everything we need for CR by the end of the month
… I’ll make sure we can get started with the tests


Wendy Reid: thanks to everyone who volunteered to help

Romain Deltour: @dauwhe_ +1000!!

Wendy Reid: interesting discussion with i18n, awesome and exciting progress and demo on sync media (thx marisa!)
… unless anyone has anything else, meeting adjourned

Romain Deltour: [cue general kumbaya]

Romain Deltour: [round of applause for our awesome chair]