Minutes, 30 September 2009 SVG WG F2F - Day 5

Hello www-svg.

Below are the minutes from day 5 of the SVG WG Mountain View F2F.

  http://www.w3.org/2009/09/30-svg-minutes.html

   [1]W3C

      [1] http://www.w3.org/

                               - DRAFT -

                   SVG Working Group Teleconference

30 Sep 2009

   See also: [2]IRC log

      [2] http://www.w3.org/2009/09/30-svg-irc

Attendees

   Present
   Regrets
   Chair
          SV_MEETING_CHAIR

   Scribe
          Cameron

Contents

     * [3]Topics
         1. [4]modules status
         2. [5]Requirement to copy into shadow trees for SVG font
            glyphs
         3. [6]SVGLength behaviour tightening
         4. [7]testing
         5. [8]Connectors
         6. [9]timeline
         7. [10]telcons
     * [11]Summary of Action Items
     _________________________________________________________



   <trackbot> Date: 30 September 2009

modules status

   <ChrisL> scribenick: chrisl

   (cam draws on whiteboard)

   cm: to be published soon

   1.1se, integration, color, filters

   a bit later - transforms, compositing, vector effects

   cm: need to find out status on MAE and pinned clip
   ... params
   ... fonts on back burner for now, point to 1.1se

   ds: no recent changes to params
   ... pinned clip is minuscule, should be folded into something else
   ... (explains pinned clip for video)

   ag: should go in clipping and masking

   cm: do they participate in compositing?
   ... layout, no recent change but doing some prototyping software,
   kind of like springs and flexbox

   ag: want to take stuff out of the normal painter's flow for
   transforms
   ... by default it's the painter's algorithm

   cl: 3d transformed stuff is taken into a different rendering pass
   ... same as z-order actually

   <shepazu> Pinned Clip spec ->
   [12]http://dev.w3.org/SVG/modules/pinnedclip/publish/SVGpinnedClip.html

     [12] http://dev.w3.org/SVG/modules/pinnedclip/publish/SVGpinnedClip.html

   cl: also the result of the main render pass has to be held as rgba
   since z-order passes could be behind as well as in front of it
   ... same mechanism could do z-order and 3d transform

   ag: think of a preserve-3d attribute that says when to pull it out
   of the main flow

   cm: (draws example on board with interleaved z index)

   ds: andrew emmons had a system which only allows z to be set on
   groups
   ... layered g only allowed reordering within that group
   ... don't want to allow arbitrary setting of z throughout the
   document

   cm: (tries to understand css stacking context. fails)

   cl: (tries to explain it. fails)

   z-index: no one understands me!

   <shepazu> z-index: (tries to be consistent. fails)

   zakim, mute z-index

   ds: there is an accessibility interest here. for well structured
   documents, if we allow people to mix then grouping all labels and
   all objects does not work
   ... so putting all the text labels, one level up
   ... logical order vs. rendering order

   cm: html docs have reading order and rendering order tightly coupled
   ... not the same in svg due to rendering and transforms, grouping is
   different

   ag: so how would transforms affect z-index

   ds: want to talk about the connector element, i have an early
   proposal

   cm: z-index is an ordering, not an absolute distance

   ds: good to talk to emmons on layered-g vs z-index. constrained and
   performant solution

   cm: when you get the z index you store them and then work from
   lowest to highest index
   ... and keep track of which z index are not used or are empty
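
   A minimal sketch of the painting pass just described (illustration
   only; "zLayer" is an invented name, not a proposed attribute):
   children are bucketed by z value and painted from lowest to highest,
   with document order as the tie-breaker.

     // TypeScript sketch of a z-layer-aware paint pass; zLayer is a
     // hypothetical per-child integer (0 = default painter's order).
     interface Paintable {
       zLayer?: number;
       paint(): void;
     }

     function paintGroup(children: Paintable[]): void {
       const ordered = children
         .map((child, documentIndex) => ({ child, documentIndex }))
         .sort((a, b) =>
           (a.child.zLayer ?? 0) - (b.child.zLayer ?? 0) ||
           a.documentIndex - b.documentIndex);  // stable tie-break
       for (const { child } of ordered) {
         child.paint();  // lowest z first, so higher z ends up on top
       }
     }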

   ag: will fold this into the transforms spec, helps with describing
   the rendering model

   ds: suppose it was called z-transform

   <heycam> cm: z-index is different in that it just changes layering
   of rendering, while the z-transform would place it somewhere in z
   space

   <heycam> ... and thus become larger or smaller

   <heycam> ... so i think they're pretty much orthogonal

   <heycam> ... i would specify z-index as part of the rendering model
   in the base svg spec

   <heycam> ... the 3d transforms spec wouldn't need to worry about it
   then

   ds: lets not use css z-index

   cl: and if ours has a different name, we also need to explain why
   z-index was not applicable to our rendering model

   ds: explain exactly how it differs
   ... that way, they search for z-index and find z-layer

   cm: who will write up the proposal

   <scribe> ACTION: anthony to work with chris to write up a z-layer
   and 3d transform rendering proposal [recorded in
   [13]http://www.w3.org/2009/09/30-svg-minutes.html#action01]

   <trackbot> Created ACTION-2678 - Work with chris to write up a
   z-layer and 3d transform rendering proposal [on Anthony Grasso - due
   2009-10-07].

   ag: should we match up our names to the css names? theirs have
   changed

   cm: css wg did not seem willing to change theirs

   ag: happy to keep svg transforms moving forward

   cm: their perspective tx could occur multiple times which was odd

   <heycam> Scribe: Cameron

   <heycam> ScribeNick: heycam

   DS: last i remember anthony was going to write up a description of
   why our 3d matrix transforms have only 12 values and the css one has
   16
   ... and about the compatibility with OpenVG

   AG: to use theirs with openvg you'd need to do a bit more work

   DS: is it a case where we have to choose to be compatible with
   openvg or opengl?
   ... and if so, and there's an easy translation from one to the
   other, is there really any incompatibility at all?
   ... and if there is, which one should we go with?
   ... it's obvious why they went with opengl, because of the hardware
   ... which one is a more compelling target in terms of market
   penetration?
   ... if we can only pick one or the other
   ... or is it easy to translate from one to the other?

   AG: in the editor's draft that was updated a while ago i did put the
   equations that go from the 3x4 matrix to the 3x3, to pass to openvg

   CM: and the extra three values from the 3x4 matrix just encode the
   perspective transform, which is supplied to openvg outside of the
   regular transform pipeline, right?

   AG: right
   ... openvg is a 2d renderer, opengl is a 3d renderer
   ... so in one you need to preserve z information, in the other you
   don't

   DS: so compatibility with opengl is the one we should go for?

   CM: but you can't do as many strange transformations, right?
   ... if you just stay with the 3x4 one?

   AG: i want to know what the use cases are for multiple perspective
   points
   ... they also allow perspective transforms to be strung together in
   the transform string, which doesn't really make sense either
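
   For reference (my own illustration, not from either draft, assuming
   column-major storage): a 12-value 3D affine matrix is the 16-value
   CSS matrix3d() with its bottom row fixed at (0, 0, 0, 1); that fixed
   row is exactly the part a perspective transform would change.

     // Expand a 12-value affine 3D matrix (four columns of three,
     // column-major, an assumed convention) into the 16 values that
     // CSS matrix3d() takes.
     function affine12To16(m: number[]): number[] {
       if (m.length !== 12) throw new Error("expected 12 values");
       const out: number[] = [];
       for (let col = 0; col < 4; col++) {
         out.push(m[3 * col], m[3 * col + 1], m[3 * col + 2],
                  col === 3 ? 1 : 0);  // bottom row 0 0 0 1: no perspective
       }
       return out;
     }

     // Identity example:
     // affine12To16([1,0,0, 0,1,0, 0,0,1, 0,0,0])
     //   -> [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]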

   <scribe> ACTION: Anthony to write up 3d transforms for svg model vs
   css model [recorded in
   [14]http://www.w3.org/2009/09/30-svg-minutes.html#action02]

   <trackbot> Created ACTION-2679 - Write up 3d transforms for svg
   model vs css model [on Anthony Grasso - due 2009-10-07].

   close action-2472

   <trackbot> ACTION-2472 Fill in the currentTranslate/currentScale
   erratum to explicitly make using those attributes on inner <svg>
   elements undefined closed

   issue-2001?

   <trackbot> ISSUE-2001 -- Prose describing <font> content model does
   not match DTD -- CLOSED

   <trackbot> [15]http://www.w3.org/Graphics/SVG/WG/track/issues/2001

     [15] http://www.w3.org/Graphics/SVG/WG/track/issues/2001

   close action-2415

   <trackbot> ACTION-2415 Check the Tiny 1.2 Chapter to see if there is
   any text in there that can be used for ISSUE-2001 closed

   close action-2635

   <trackbot> ACTION-2635 Add wording to both 1.1 2nd and the filters
   module that clarifies the usage of optional numbers for kernel unit
   length closed

   <scribe> Scribe: Cameron

   <scribe> ScribeNick: heycam

   ISSUE: Inheritance of properties into SVG font glyphs not liked by
   all

   <trackbot> Created ISSUE-2298 - Inheritance of properties into SVG
   font glyphs not liked by all ; please complete additional details at
   [16]http://www.w3.org/Graphics/SVG/WG/track/issues/2298/edit .

     [16] http://www.w3.org/Graphics/SVG/WG/track/issues/2298/edit

Requirement to copy into shadow trees for SVG font glyphs

   ED: this is from the 'use' element:

   <ed_> The effect of a 'use' element is as if the contents of the
   referenced element were deeply cloned into a separate non-exposed
   DOM tree which had the 'use' element as its parent and all of the
   'use' element's ancestors as its higher-level ancestors. Because the
   cloned DOM tree is non-exposed, the SVG Document Object Model (DOM)
   only contains the 'use' element and its attributes. The SVG DOM does
   not show the referenced element's contents as children of 'use'
   element.

   <ed_> For user agents that support Styling with CSS, the conceptual
   deep cloning of the referenced element into a non-exposed DOM tree
   also copies any property values resulting from the CSS cascade
   ([CSS2], chapter 6) on the referenced element and its contents. CSS2
   selectors can be applied to the original (i.e., referenced) elements
   because they are part of the formal document structure. CSS2
   selectors cannot be applied to the (conceptually) cloned DOM tree
   because its contents are not part of the formal document structure.

   <ed_> Property inheritance, however, works as if the referenced
   element had been textually included as a deeply cloned child of the
   'use' element. The referenced element inherits properties from the
   'use' element and the 'use' element's ancestors. An instance of a
   referenced element does not inherit properties from the referenced
   element's original parents.

   <ed_>
   [17]http://dev.w3.org/SVG/profiles/1.1F2/publish/struct.html#UseElement

     [17] http://dev.w3.org/SVG/profiles/1.1F2/publish/struct.html#UseElement

   <ed_>
   [18]http://dev.w3.org/SVG/profiles/1.1F2/publish/fonts.html#GlyphElement

     [18] http://dev.w3.org/SVG/profiles/1.1F2/publish/fonts.html#GlyphElement
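
   A small illustration of the quoted behaviour (hypothetical markup,
   runnable in a browser console): the cloned tree is not exposed as
   DOM children of 'use', yet the instance still inherits properties
   through the 'use' element's ancestors.

     // The referenced <circle> has no fill of its own, so the rendered
     // instance inherits fill="green" through the <use> element's parent,
     // while the SVG DOM shows <use> with zero children.
     document.body.innerHTML = `
       <svg xmlns="http://www.w3.org/2000/svg"
            xmlns:xlink="http://www.w3.org/1999/xlink" width="100" height="100">
         <defs><circle id="dot" cx="20" cy="20" r="10"/></defs>
         <g fill="green"><use xlink:href="#dot"/></g>
       </svg>`;

     const use = document.querySelector("use")!;
     console.log(use.childNodes.length);       // 0: the clone is non-exposed
     console.log(getComputedStyle(use).fill);  // the inherited green fill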

   CM: there's a lot to reword
   ... to avoid talking about cloning trees

   CL: that's why we said "conceptual clone"
   ... it's not an actual clone

   JW: i agree with what roc was saying: if it says clone, then you're
   better off actually implementing it as a clone, because you'll find
   over time that doing it some other way will cause problems in the
   future
   ... that's been my observation as well

   ED: but i don't see why that will be, since it's a non-exposed tree

   JW: sure, but you have to synthesize cloning
   ... the spec says to pretend to clone, but not actually clone
   ... but if you've not got an actual clone, then there's always a
   risk that what you have won't behave like a clone would
   ... and various parts of the spec can rely on it being a clone

   ED: we want to avoid clones

   JW: if you're thinking of removing the word "clone" and specifying
   exactly how it should behave, then that would solve the issue

   ED: you still have the original subtree
   ... depending on how you represent the use, it's implementation
   specific
   ... the original nodes are there, but it doesn't just work, you have
   to do other things like inheriting the style in

   CL: if we say it's a clone, you have to say it's a clone and then do
   some other stuff
   ... "conceptually" implies it's not exactly

   JW: if the spec says it has to behave like a clone, then you'll find
   bugs and come to the conclusion that it would be better as an actual
   clone

   CM: perhaps it'd be better to talk about it in terms of the element
   instance tree, nodes of which then behave like the nodes they're
   pointing to
   ... because you don't actually want clone-like behaviour
   ... you want to essentially clone all state about the referenced
   nodes, and then keep them in sync afterwards

   JW: if you don't have a clone you have the overhead of restyling the
   referenced elements every time you paint it
   ... with clones you don't

   ED: but with actual clones it has worse memory requirements

   JW: probably restyling outweighs that

   ED: i don't think that's true
   ... for the glyph case, i could maybe see that it would be better to
   not have the property inheritance into the shadow tree
   ... for use, on the other hand, i think that helps
   ... it would break content if we stopped doing that
   ... wouldn't be hard to see what opera is doing wrt memory usage
   ... e.g. creating some use elements with huge subtrees
   ... and see the memory usage of each
   ... i don't know exactly how they compare, but i'd imagine it would
   be quite a difference

   <scribe> ACTION: cameron send out proposal for limited prefix
   processing in text/html [recorded in
   [19]http://www.w3.org/2009/09/30-svg-minutes.html#action03]

   <trackbot> Created ACTION-2680 - Send out proposal for limited
   prefix processing in text/html [on Cameron McCormack - due
   2009-10-07].

   CL: [explains the glyph inheritance problem to dbaron]

   DB: my reaction is sort of that authors have expectations about how
   something called a "font" performs
   ... and they expect that to get accelerated by things like
   generating the bitmap for a glyph once per size
   ... essentially being what windows or osx does with a native font,
   in terms of performance

   CL: right, if you make an arbitrarily complex thing with lots of
   content then it's going to be slower

   DB: even for things like this, if you have to inherit stroke-width
   then it becomes a lot more complicated
   ... compared to if you could convert the svg font to an opentype
   font

   CL: we have an optimization for that, the d="" attribute on <glyph>
   ... but if you've actually gone to produce a multicoloured glyph
   then it's going to be slower
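
   For context, illustrative markup only (not from the minutes): a
   <glyph> can carry just an outline in d="", which behaves much like
   an ordinary font glyph, or arbitrary child content, which is where
   the inheritance and performance questions arise.

     // Two SVG font glyphs: a cheap d="" outline, and a rich glyph with
     // child content. Properties the children don't specify (stroke,
     // stroke-width, ...) would inherit from the referencing text.
     const svgFontFragment = `
       <font horiz-adv-x="1000">
         <font-face font-family="Example" units-per-em="1000"/>
         <!-- simple outline glyph -->
         <glyph unicode="o" d="M 200 0 L 800 0 L 800 600 L 200 600 Z"/>
         <!-- multicoloured glyph with child content -->
         <glyph unicode="x">
           <rect x="100" y="0" width="800" height="600" fill="orange"/>
           <circle cx="500" cy="300" r="150" fill="navy"/>
         </glyph>
       </font>`;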

   DS: we understand that the landscape has shifted since we started
   with svg fonts
   ... in ASV that was pretty much the only way you could deliver a
   font for SVG content
   ... you would convert an otf into an svg font and use that
   ... but now we're going to have WOFF, etc.
   ... so the kinds of use cases you're going to use svg fonts for are
   different from those for woff etc.
   ... things such as richly styled glyphs
   ... so we're cool with giving people copious warnings about how what
   they're doing is going to affect performance

   DB: i'd almost be more comfortable talking about it as a glyph
   replacement mechanism instead of a font mechanism

   CL: postscript type 3 fonts also could have arbitrary power
   ... you got the benefits and the drawbacks of that

   DS: authors should know why they're using svg fonts, what the
   implications of inheriting the style etc. are
   ... within that context, i don't feel that it makes sense to limit
   how somebody can style/inherit from the text element into font
   glyphs
   ... then it becomes less useful

   JW: roc had a proposal for a magic value for color, that would
   reference back up out to the text element to resolve to the colour
   of the text referencing the font
   ... so there'd be no style resolution each time you painted the
   glyph

   DS: that wouldn't work with existing content/UAs, but...

   DB: there are ways we can optimize that we don't do now

   DS: what if you wanted the stroke to be thicker or thinner? or not
   there at all?
   ... you just wanted fill and not stroke?
   ... what if you wanted to change whether the stroke was rounded on
   the corners or square?
   ... you can do that with any svg shape with 'use'

   ED: i wonder how many real svg fonts there are with arbitrary child
   content, and what they're using
   ... whether they actually use inherited property values

   DS: james deering did a lot of stuff with fonts
   ... around 2003
   ... it'd be good to talk to people who work with svg fonts to find
   out uses

   ED: point them to the wiki page too
   ... on the IG wiki

   DB: the thing that seems difficult to me in svg fonts is how without
   svg fonts you have a nice layered system with css and markup up
   here, and they pass a bunch of information down to a graphics system
   that does fonts and font matching
   ... and what fonts you want to use
   ... svg fonts require a link back up, which reverses the normal
   layering of the system

   CL: the reason we did that is because people realised that others
   don't have the fonts they are using
   ... so they can either fall back to system fonts and break the
   layout
   ... or convert to curves (not text any more)
   ... so turning into a font left it as text
   ... interesting use cases are also manipulating the font in the dom

   DB: one question is whether you do it for every paint or whether you
   store that information
   ... and if you're storing it, how much memory does it take up
   ... and if you're doing it for every paint, then how much time is it
   taking
   ... if we did the restyling efficiently, it probably wouldn't be a
   huge problem
   ... i don't know how long it takes to draw an svg shape compared to
   style computation
   ... so it's hard for me to say

   DS: seems to me that most svg content using svg fonts is not going
   to be dynamically changing much of the svg content
   ... a lot of fonts people are going to be using are going to be
   woff, for the average text
   ... there are many optimizations you could do
   ... not like they're going to be changing the colour of every other
   glyph

   DB: the amount we optimize something depends on the amount it's
   going to be used
   ... the assumption for svg fonts is that they're not going to be
   used much, so we're not going to worry about it
   ... so the main reason would be for acid3
   ... maybe this isn't the case but i wouldn't be surprised if we did
   whatever subset is needed for that and then stopped

   CM: which is just woff-style fonts

   DB: has webkit implemented more than that?
   ... given the other capabilities that are available, with opentype
   or woff, it seems like the rest of the use cases would be solved by
   having a mechanism for saying that a particular part of a graphic
   represents a certain piece of text
   ... rather than a piece of text representing the fonts
   ... it doesn't involve going backwards through the layers of the
   font system business

   DS: for a system for building fonts in a web application, being able
   to use an in-browser drawing program to draw a font
   ... so making the font editable is a use case
   ... animated fonts too, perhaps

   DB: in terms of building a piece of software to do this, do you
   implement that by making the entire font matching system happen
   outside the graphics layer?
   ... so you don't need this layering violation?
   ... basically all of the features become harder when you're
   expecting it to be drawn at this weird time when you don't have the
   chance to store the information

   DS: i think it'd help us to have a diagram explaining how mozilla
   currently works, wrt this layering feedback

   DB: it's changed substantially in the last couple of releases, and
   it's going to change more

   DS: it'd be nice to understand the constraints under which we are
   working

   DB: roc's probably a better person to talk to about this

   JW: the other thing i wanted to talk about, related, is the 'use'
   element
   ... in mozilla we create an anonymous clone
   ... which is not how some other implementations do it

   DB: there is stuff we could do to make the clone restyling faster
   ... the style rule matching for the clone is the same, aiui; they
   just inherit from something different
   ... we could make it skip running selector matching

   JW: it seems like you were saying that it shouldn't

   DB: usemaps in html have similar issues in mozilla
   ... came up with a neat solution, which i think found its way into
   html5
   ... where we do use one property on the area elements
   ... and that property is 'cursor'
   ... so we do the same thing that svg does with inheritance, for the
   cursor property, into imgmap areas
   ... so you can style the cursor on the area and you can style it on
   the img
   ... but that's relatively easy to implement, we can resolve style
   whenever we need it
   ... we can even do that for every mouse movement -- it's just one
   style property resolution

   JW: but you're saying that doing it the way opera's implementation
   works would be a lot harder

   DB: maybe
   ... right now we have this "multiple presentations" thing
   ... we sort of don't have a mapping from content nodes to frames
   (i.e., rendering objects)
   ... except we do, stuck off into a hash table
   ... so you need it per use mapping instead of per presentation
   mapping
   ... and stuffing pointers in the content elements

   CM: could you specialise for just use?

   DB: what you need is per use, not per presentation

   [more talking about mozilla's internal style resolution]

   scribe: our style data computation happens using what i've sometimes
   heard called a "lexicographic tree", but not sure that's the right
   term
   ... optimizes strongly for a bunch of cases that are common in css
   ... i.e. relatively few properties are specified per rule
   ... so a css rule is in the abstract something that provides values
   for some small set of properties
   ... the thing that maintains the style data for an element we call a
   StyleContext
   ... there's this additional object in between them in the rule tree
   ... a node in the rule tree represents the sequence of rules the
   element matches
   ... each style context points to a node in the tree
   ... the path from the root of the tree to the node represents the
   rules that style context matches
   ... in cascading order, where each node points to a single rule
   ... then, beyond that, we group all the css properties into a bunch
   of structs
   ... we do a related group of properties at a time
   ... so e.g. all the ones in the font shorthand are in a single group
   ... then the structs can end up getting cached in the rule tree or
   in the style context tree
   ... if the values in the struct have no dependencies on inherited
   values, they can be cached in the rule tree
   ... e.g. no em units, no inheritance, etc.
   ... in the property groups, every group consists of all inherited or
   all non-inherited properties
   ... in the case of "nothing specified", the structs of non-inherited
   properties will get cached in the rule tree
   ... and shared between nodes
   ... in the edge case of inherited properties, they'll get shared
   between parent and child style contexts
   ... in both cases they use the same struct
   ... in the style context case, it'll copy pointers down the tree,
   since inheritance can be pretty deep
   ... but in the rule tree case we walk up the tree every time we need
   it
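
   A very rough sketch of the data structures just described, as I
   understood them (invented names, not Gecko's actual classes): each
   style context points at a rule node, and the path from that node to
   the root gives the matched rules in cascade order.

     interface CssRule { declarations: Map<string, string>; }

     class RuleNode {
       readonly children = new Map<CssRule, RuleNode>();
       // structs with no dependency on inherited values can be cached here
       cachedResetStructs = new Map<string, object>();
       constructor(readonly rule: CssRule | null,
                   readonly parent: RuleNode | null) {}

       // elements matching the same rule sequence share one path in the tree
       childFor(rule: CssRule): RuleNode {
         let child = this.children.get(rule);
         if (!child) {
           child = new RuleNode(rule, this);
           this.children.set(rule, child);
         }
         return child;
       }
     }

     class StyleContext {
       constructor(readonly parent: StyleContext | null, // inheritance parent
                   readonly ruleNode: RuleNode) {}       // matched-rule path

       // walk from the rule node to the root, then reverse for cascade order
       matchedRules(): CssRule[] {
         const rules: CssRule[] = [];
         for (let n: RuleNode | null = this.ruleNode; n; n = n.parent) {
           if (n.rule) rules.push(n.rule);
         }
         return rules.reverse();
       }
     }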

   DS: [mentions an object oriented css proposal]
   ... [nicole sullivan]

   CL: if you're cloning whole trees, you can take all of their style
   contexts?

   DB: we would make a new style context in the clones, but point to
   the same rule node
   ... since the selector matching won't change
   ... so we get data sharing for many cases

   CL: so the inheritance changes, since it has a new parent, which
   doesn't affect selectors, but there's a new source for inherited
   values

   DB: when changing a value, we make new style contexts
   ... rule nodes and style contexts are immutable
   ... which provides good comparison when things change
   ... so if an element goes into :hover, and there are :hover
   selectors that might or might not match, we'll re-run selector
   matching on that element and its descendants
   ... this was a big problem; IE 5/6/7 only handled hover on links
   ... when we started supporting hover on everything, other elements
   would colour on hover

   DS: could pageX and pageY, offsetX and offsetY be defined usefully
   within svg?
   ... if so they should be generalised out and put in dom 3 events

   <shepazu>
   [20]http://www.w3.org/TR/cssom-view/#extensions-to-the-mouseevent-interface

     [20] http://www.w3.org/TR/cssom-view/#extensions-to-the-mouseevent-interface

   [21]http://www.w3.org/mid/20090830055001.GD27340@wok.mcc.id.au

     [21] http://www.w3.org/mid/20090830055001.GD27340@wok.mcc.id.au

SVGLength behaviour tightening

   ED: i don't think value should preserve the units

   CM: i think you always want to rely on assigning to .value working

   ED: yes

   JW: i agree

   ED: allowing an invalid length in the middle of a length list seems
   strange

   CM: yeah, i'll propose something else
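
   The behaviour in question, as a small example (assumed markup; the
   exact outcome is what the group is deciding):

     // Does assigning to SVGLength.value keep the original unit, or reset
     // the length to user units?
     document.body.innerHTML =
       `<svg xmlns="http://www.w3.org/2000/svg"><rect width="2cm" height="10"/></svg>`;

     const rect = document.querySelector("rect") as SVGRectElement;
     const w = rect.width.baseVal;   // an SVGLength
     console.log(w.valueAsString);   // "2cm"
     w.value = 50;                   // .value is always in user units
     console.log(w.valueAsString);   // "50" if units are not preserved,
                                     // roughly "1.32cm" if they are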

testing

   JW: is opera looking at ref tests?

   ED: yes, looks good
   ... not sure about writing our own ones yet, but using the existing
   ones

   CM: will you provide a public harness to run the tests?

   ED: i suspect so, but don't know

   JW: there's a runner, written by Sylvain Pasche, that pretty much
   works in any implementation
   ... that you can run on a desktop
   ... so you should be able to test it
   ... it'll be slow, slower than mozilla's harness
   ... but it will give people the flexibility to run the tests in the
   release builds, and check results, rather than relying on vendors
   reporting their results
   ... there's a fair bit of work still to do on it
   ... from my pov the testing project that started at the hackathon
   week, that doug, plh, fantasai, mikesmith and myself were at, is now
   mainly waiting on mozilla to actually figure out how best to push
   their tests into w3c's repository
   ... quite tricky problems to be worked out there
   ... such as there are existing tests that we're running on a
   per-commit basis
   ... so we don't want to remove them from our tree, move them to
   w3c's, then start running the w3c tree
   ... the question is how can we keep them in both places and
   synchronize them?
   ... changes in moz local ones should propagate to the w3c one, and
   vice versa
   ... especially complicated when renaming and moving tests around
   ... we've also got to get consensus internally in moz on how to do
   this
   ... figure out how to, first, technical issues on getting the w3c
   test suite automatically run on our build machines
   ... then figure out how to sync up on that
   ... people responsible for doing this
   ... figuring out which failures that then crop up, what the causes
   are, what should be done about them
   ... an ongoing process, fair bit of work
   ... unclear how that should proceed
   ... all this sort of stuff that makes it not straightforward to
   chuck our tests in the w3c repository
   ... apart from the fact we haven't resolved whether we are using svn
   or hg, but probably we will end up using svn
   ... i believe we're also waiting for plh to source a more robust
   server for an actual public facing site
   ... instead of a testing server
   ... the things that came out of the meeting included that basically
   we won't go with a centralized system for running tests at the w3c
   ... simply because the resources that are needed for that are
   immense
   ... once you start to multiply all the OSes, browsers, versions of
   browsers, plugins installed, etc.
   ... it's a large number of different combinations you can have
   ... probably more attractive to have a test swarm type system
   ... external volunteers will run batches of tests
   ... so they might point the test harness to 20-30 tests
   ... which it will run, and send the results back to the w3c's
   results gathering thing
   ... then get another batch, etc.
   ... on a conversation with roc, doug, plh and myself, roc considered
   that it was a very low priority, especially at this stage for the
   w3c to be gathering results
   ... and creating the test swarm system to do this
   ... and given that there would be a w3c runner that could run the
   tests on end browsers, there'd be transparency for the results
   ... we could then rely on vendors being honest about their results
   ... which would save the whole complexity about the decentralized
   crowdsourced solution

   DS: my take on it is twofold
   ... yes, transparency would help
   ... that would decrease the load, people could report their own
   results
   ... but one, we need to collect these results anyway
   ... part of our conformance testing
   ... and second, roc had proposed that they could post them on their
   own sites, which is fine, but then there's no way of harvesting that
   ... people should be able to trust the canonical results on the w3c
   site rather than individuals' sites
   ... who might have another agenda
   ... so we need them anyway for our (w3c's) processes
   ... third, people shouldn't have to search for results all over the
   web when they could come to a single place to find the results

   JW: i'm sympathetic to having the w3c collect results, but at this
   stage it's not high priority for me personally
   ... once all this conference stuff is over i can get back to looking
   at testing
   ... figuring out a system for moz to move their tests to w3c, and
   running them, is the first big issue that needs to be tackled

   DS: we don't need to solve all these things at once

   JW: so there are still some things to be worked out
   ... about the test formats
   ... a lot of moz specific stuff in the tests
   ... and the formats rely on a few mozilla features, e.g.
   invalidation tests
   ... they require you to paint the window after everything is
   loaded/rendered
   ... so we have a specific event for that, not based on a timer,
   since that would make it very slow
   ... our tests have that sort of thing in it
   ... so the w3c versions might want to have that, plus a timer to
   fall back on
   ... or we may want to standardize that event
   ... changes would be required to the test formats which is another
   thing that needs to be worked out, what needs to be compatible with
   other browsers, and what moz needs internally

   DS: there's a whole second part of this
   ... there's another set of problems
   ... MS submitted 7000 css 2.1 tests
   ... it's appreciated, but at the same time all those tests need to
   be reviewed
   ... the tests could be erroneous
   ... not actually testing the thing they mean to test
   ... they could accidentally align with some expected behaviour
   that's seen in IE that isn't per spec
   ... it could be that something about the test points out an
   ambiguity in the spec itself

   JW: also the issue that that many tests, to be practical, need to be
   in an automated format

   DS: the csswg cannot review all these tests
   ... it's time consuming doing review
   ... the part fantasai and i were specifically working on was a test
   review system
   ... we have volunteers to do parts of the test suite, but using
   email, so it's hard for tracking
   ... couldn't say which were reviewed, which needed review, which had
   priority, etc.
   ... without doing a lot of digging
   ... she was interested in creating a system so that volunteers could
   review tests systematically

   CM: from the wider community?

   DS: more within a company
   ... if we're making this system anyway, we should make it crowd
   sourceable
   ... so anyone can come in and review tests
   ... we could give them scores: the more tests they reviewed and were
   accurate about, the higher their score
   ... also, since we're doing that, providing an interface so that
   people could submit tests for areas they think are undertested
   ... the review system would track the tests through different
   revisions
   ... a revision could have comments on it, when a new rev was
   uploaded, people could look at the diff as part of their review
   ... being able to move a test is good, since one test may apply to
   different test blocks

   CL: the current way of doing it in css has multiple occurrences of a
   test in different parts of the report
   ... their report is the type of thing that should be generated

   DS: we have similar problems
   ... the version is located in the test slide, e.g.
   ... which causes no end of problems
   ... we found we didn't know if the image had been updated or not
   ... having the metadata be part of the image is worse than having a
   system that keeps track of this
   ... the review system would also be a test submission system
   ... we wouldn't get the bulk through this, but parts of the spec
   people are interested in would get interest
   ... that tells us that people are trying to use specific features
   ... part of the goal of the crowdsourcing review system is to connect
   more directly with the community, and get them involved in the
   process

   JW: to become an actual accepted w3c official test, the reviewer
   needs to have his competence known
   ... it's still useful for random community people to do some sort of
   review
   ... if their pre-reviews are of a consistently high quality, then
   that's a way to get people in later
   ... if two people say a test is bad, then it's likely it needs
   looking at
   ... if 5 people say it looks good, then it can be percolated up to
   one of the official reviewers

   DS: so is this group willing to go to reftests for the bulk of its
   tests?
   ... if so, jwatt should write up exactly what we want

   JW: for those people not familiar with those, they should become so
   ... that is partly waiting on me
   ... there's some docs in the wiki, but concrete examples are
   necessary
   ... the w3c testing wiki needs a lot of work
   ... overall i think the test hackathon rocked, a major step forward
   ... the conformance testing is important, but the huge step forward
   is the interop testing
   ... in one or two years from now we can get all the major browser
   vendors running each others' tests
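
   For readers unfamiliar with reftests: the idea is to render a test
   file and a reference file and assert that the two rasterizations
   match. A minimal browser-side sketch of that comparison (my own
   illustration, not Mozilla's harness or the runner mentioned above):

     // Rasterize two same-origin test/reference URLs via <img> + <canvas>
     // and check that the pixels are identical. Real harnesses also handle
     // invalidation, timeouts and "must differ" (!=) tests.
     async function rasterize(url: string, w: number, h: number): Promise<Uint8ClampedArray> {
       const img = new Image();
       img.src = url;
       await new Promise<void>((resolve, reject) => {
         img.onload = () => resolve();
         img.onerror = () => reject(new Error(`failed to load ${url}`));
       });
       const canvas = document.createElement("canvas");
       canvas.width = w;
       canvas.height = h;
       const ctx = canvas.getContext("2d")!;
       ctx.drawImage(img, 0, 0, w, h);
       return ctx.getImageData(0, 0, w, h).data;
     }

     async function refTestEquals(testUrl: string, refUrl: string): Promise<boolean> {
       const [a, b] = await Promise.all([rasterize(testUrl, 480, 360),
                                         rasterize(refUrl, 480, 360)]);
       return a.length === b.length && a.every((v, i) => v === b[i]);
     }

     // usage (hypothetical file names):
     // refTestEquals("test-circle.svg", "ref-circle.svg")
     //   .then(ok => console.log(ok ? "PASS" : "FAIL"));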

   DS: it also saves each browser vendor reinventing the wheel with
   tests
   ... it decreases costs, since they don't need as many people just
   dedicated to running tests

   JW: as long as everyone pulls their weight
   ... re too many tests, wide coverage and deep coverage are both
   important
   ... but need to minimize duplication
   ... the length of time they take to run is a big factor in how
   useful they are
   ... it impacts their development cycles quite substantially
   ... the bigger the testing time, the bigger window there is to find
   regressions

   AG: once the reftests have been reviewed, there'll be some process
   to package them up for members to get?

   JW: they'd be in the repository
   ... we could create zips too

   RESOLUTION: we'll move to using reftests as soon as is feasible

Connectors

   DS: we have in the past talked about this
   ... i want to briefly go over some use cases for connectors
   ... and details
   ... we've avoided connectors because you can graphically represent
   something connecting something else
   ... but it doesn't have a logical connection
   ... there are a lot of use cases for things that are logically
   connected
   ... circuit diagrams, flow chart diagrams, any kind of node-edge
   graph
   ... we've avoided them in the same way as avoiding default symbols
   ... so connectors could also be used for representing roads, or any
   kinds of routes through some thing
   ... i think the class of uses for these things is large

   ED: xlink:href on every element?

   DS: there are a number of approaches
   ... i think the basic idea is that there are two aspects to it
   ... one, a logical connection, between two elements which represent
   two different objects
   ... and one is a physical connection
   ... an obvious thing people would like to do is [draws on board]
   ... [two circles connected by a line, moving one circle should make
   the line follow it]
   ... that could be done in the very simplest case, but i don't think
   people are going to want to do something where the line has to wrap
   around another shape, for example
   ... implementors aren't going to want to solve that case
   ... edge routing
   ... but i think there is still a lot of utility in straight line
   connections
   ... in the future, potentially allowing the author to say "go
   through this point" when routing

   CL: this needs to be a new curve type?

   DS: could be just straight lines
   ... solve many use cases, not all

   CL: curves that pass through particular points we've come up against
   before, so perhaps we want to add them

   DS: making a rendering element is one aspect of the problem
   ... making a logical one is another aspect
   ... so are these connections directed, undirected?
   ... that's part of the logical aspect of it
   ... another part is "what is the relationship itself"
   ... e.g. does that mean parent--child relationship, or what
   ... that might be represented as a form of metadata, like rdf
   ... or it might be, for a UI, some textual equivalent of that
   relationship
   ... if i'm following a map, so you walk from the subway stop to the
   building
   ... maybe the metadata is "walk from subway stop to building"
   ... how would you express that? maybe you'd have a <title> element
   ... how do you represent the connector
   ... i'm positing that we need a 'connector' element
   ... what characteristics does it have?
   ... it has a from and a to
   ... so a connector, as i see it, is a special form of path that
   doesn't have a fill
   ... what i'm thinking is that it would take path syntax so that if
   you wanted to do your own routing, you could say this is what it
   looks like
   ... another way i thought of doing this would be to have
   role="connector" on a path
   ... and that would mean logically that it is a connector
   ... if connector weren't implemented, you could represent it as a
   path
   ... what is the role of a connector? either as a separate element or
   a path
   ... it's to present to a user to go from one thing to another
   ... pretend we are in a flow chart, and we're navigating around it

   [draws on board]

   scribe: if you're in one node, and you want to navigate to another
   node that is connected from that one
   ... the title of connectors could be read out, for example
   ... and the user could choose
   ... or maybe it's a popup that lets the user see what the
   relationship is
   ... in terms of navigating documents in a logical order, a blind
   person could query the connector to find out where it's going
   ... so they could step through a complex diagram
   ... one use case i've seen for this is that they wanted to have all
   of their wiring diagrams for cars as svg files
   ... apparently in japan, wiring diagrams for cars are done in svg
   ... so when a mechanic is trying to sort out "is this circuit live
   or not", we set up a little animation to show them, when they clicked
   a button, which connectors are active
   ... so in this scenario you might write a state machine and
   enable/disable certain connectors
   ... it would be an accessible way to present these diagrams
   ... right now, if i were navigating these elements, in 1.2T we could
   use the nav-* properties
   ... but they only solve a physical case
   ... with the connectors they define implicit navigation options
   ... maybe its role would indicate the direction of the connector
   ... so there's also a visual aspect of the problem
   ... we should reuse existing path syntax, so if a person wants to do
   custom routing they can
   ... failing that, it would just be a direct line
   ... but when they use a direct line, dragging this element, the line
   would follow
   ... so either you would take advantage of the auto routing, which
   would be a straight line in the first version, or you would write
   your own path
   ... but you always get the logical connection
   ... in later versions you could provide some way to tell it to do
   routing
   ... there are two problems with diagrams like this
   ... one is the line routing
   ... the second is the node placement
   ... we're also not going to solve that, unless it gets solved in the
   bounds of some layout algorithm
   ... e.g. cameron's layout might take care of that

   CL: you're not just going to point at an object, but then you want
   to say i'm going from this point to this other point

   AG: you could specify the endpoints as bounding box percentages

   DS: i'm not sure that's going to be satisfying
   ... e.g. on a diamond shape, you don't want connecting lines going
   to one of its sides
   ... so not necessarily the shortest path
   ... we've already thought of the idea of svg point elements
   ... it could act as an anchor point
   ... so what if we defined the anchor points as children of the
   element

   AG: how do you associate the connectors with an object?
   ... i was thinking they might be a child

   DS: can't be a child of two elements though
   ... they can be in their own block somewhere
   ... you can name the point you want to connect to
   ... the default might be to choose the shortest path between the
   possible anchor points on the shapes
   ... you could name these anchors and identify them explicitly

   AG: could you have free floating svg:points?

   DS: in future versions

   AG: many-to-many connections?

   DS: no, just one-to-one connections

   CM: i wonder if you ever want navigation that is different from what
   the connections indicate

   <shepazu> as far as semantics of relationships, you should be able
   to extract a triple from this diagram

   <shepazu> as far as navigation goes, they don't have to render,
   might be a better way to define meaningful navigation with
   titles/descs

   <shepazu> rather than using nav-next/-prev attributes
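
   To make the "line follows the shape" case concrete, a script-level
   sketch of the simplest auto-routing behaviour (the role value and
   the data-from/data-to attributes are invented for this illustration;
   this is not the proposal Doug is to write up):

     // Keep a straight line between the centres of two shapes' bounding
     // boxes, re-routing whenever a shape is dragged.
     function center(el: SVGGraphicsElement): { x: number; y: number } {
       const b = el.getBBox();
       return { x: b.x + b.width / 2, y: b.y + b.height / 2 };
     }

     function updateConnector(connector: SVGPathElement): void {
       const svg = connector.ownerSVGElement!;
       const from = svg.querySelector<SVGGraphicsElement>(connector.dataset.from!)!;
       const to = svg.querySelector<SVGGraphicsElement>(connector.dataset.to!)!;
       const a = center(from), b = center(to);
       connector.setAttribute("d", `M ${a.x} ${a.y} L ${b.x} ${b.y}`);
     }

     function updateAllConnectors(svg: SVGSVGElement): void {
       svg.querySelectorAll<SVGPathElement>('path[role="connector"]')
          .forEach(updateConnector);
     }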

   RESOLUTION: We'll consider the connector proposal

   <scribe> ACTION: Doug to write up the connector proposal [recorded
   in [22]http://www.w3.org/2009/09/30-svg-minutes.html#action04]

   <trackbot> Created ACTION-2681 - Write up the connector proposal [on
   Doug Schepers - due 2009-10-08].

timeline

   CL: our charter expires in april
   ... we have 6 months charter period left
   ... we need to see what we will have achieved by that time
   ... usually the aim is for a charter to be circulated before the
   current one ends
   ... it's coming to the point where we show what we did during the
   charter period
   ... also we have the roadmap document that needs a refresh

   DS: we've talked about major revisions to the language
   ... we've helped direct the discussion for svg in html

   CL: we've engaged more browser vendors

   DS: we've already talked about planning on doing the modules until
   CR, parking them, making sure they integrate together, then
   publishing independent modules as they are applicable to 1.1 or 1.2T
   ... and folding them in to svg 2.0
   ... as a single spec
   ... so i think that's something we can talk about to people at svg
   open
   ... whenever we publish something, we should update the timetable on
   the wiki
   ... we need to make the home page better

telcons

   CM: should we go back to two telcons per week?

   DS: we could try one hour on one day another 1.5 hours on another
   day

   CM: maybe not two telcons per week but one followed by working time

   <shepazu> Resolution: we will update the timeline in the wiki
   whenever we publish a new WD

Summary of Action Items

   [NEW] ACTION: anthony to work with chris to write up a z-layer and
   3d transform rendering proposal [recorded in
   [23]http://www.w3.org/2009/09/30-svg-minutes.html#action01]
   [NEW] ACTION: Anthony to write up 3d transforms for svg model vs css
   model [recorded in
   [24]http://www.w3.org/2009/09/30-svg-minutes.html#action02]
   [NEW] ACTION: cameron send out proposal for limited prefix
   processing in text/html [recorded in
   [25]http://www.w3.org/2009/09/30-svg-minutes.html#action03]
   [NEW] ACTION: Doug to write up the connector proposal [recorded in
   [26]http://www.w3.org/2009/09/30-svg-minutes.html#action04]

   [End of minutes]

-- 
Cameron McCormack ≝ http://mcc.id.au/

Received on Thursday, 1 October 2009 06:07:47 UTC