See also: IRC log
<scribe> scribe: nikos
nikos: Sent an email to www-svg
... just trying to work out what the common wisdom was around small pixel differences
... https://lists.w3.org/Archives/Public/www-svg/2016Nov/0020.html
... dbaron gave an interesting reply
... before, in the references, I was using the absolute minimum set of SVG features so I could be sure something would work
... dbaron suggested only differing the test SVG and the reference by what is being tested
... then you should use mostly the same code paths and get a very similar result
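A minimal sketch of the kind of pair dbaron is suggesting (hypothetical file names and values, not from the actual suite): a translate test whose reference is identical markup except that the offset is written into the rect's coordinates instead of a transform, so test and reference share almost all code paths.

  <!-- transform-translate-test.svg (hypothetical) -->
  <svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
    <rect width="50" height="50" fill="green" transform="translate(100, 100)"/>
  </svg>

  <!-- transform-translate-ref.svg (hypothetical): identical except the
       translation is expressed via the x/y attributes -->
  <svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
    <rect x="100" y="100" width="50" height="50" fill="green"/>
  </svg>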
AmeliaBR: think that's a good path when it can be used
... the tests have to be approached as a complete test suite
... can't look at one test and assume the reference is what it is supposed to be
... because that reference relies on other features
shepazu: you have to treat the test suite as a whole. you can't run a test in isolation
AmeliaBR: Yeah if that were not the case you would need PNG references
shepazu: it introduces the idea of prerequisites into the test suite
AmeliaBR: exactly, no point going to the next test if you haven't passed the previous
... not sure that will solve all the problems
... certainly if your basic test is whether markers are supported
... you're going to have to use markers in one version and something else in another
nikos: Yeah well in that case you can test markers without any transforms
... I need to update the tests that were failing to verify all this works, but I suspect it will
... and I'll update the documentation
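A sketch of the marker case just described, again with hypothetical file names: the test uses marker-start/marker-end, and because markers are the feature under test, the reference draws equivalent circles directly at the vertices instead.

  <!-- marker-basic-test.svg (hypothetical) -->
  <svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
    <defs>
      <marker id="dot" markerUnits="userSpaceOnUse" markerWidth="10"
              markerHeight="10" refX="5" refY="5">
        <circle cx="5" cy="5" r="5" fill="blue"/>
      </marker>
    </defs>
    <path d="M 50 50 L 150 50" stroke="black" stroke-width="2"
          marker-start="url(#dot)" marker-end="url(#dot)"/>
  </svg>

  <!-- marker-basic-ref.svg (hypothetical): same line, with the marker
       content drawn as plain circles at the two end points -->
  <svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
    <path d="M 50 50 L 150 50" stroke="black" stroke-width="2"/>
    <circle cx="50" cy="50" r="5" fill="blue"/>
    <circle cx="150" cy="50" r="5" fill="blue"/>
  </svg>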
gsnedders: I looked at your references. Was wondering if the differences were caused by the transforms and scaling
AmeliaBR: existing tests were written with the idea in mind that they would be doing manual comparisons against the image
... if we were to break away from that and start from scratch
... be very precise and minimise any rounding errors, we can probably alleviate that
nikos: think that's the path we'll have to take, where we don't copy 1 for 1, we have to restructure
<gsnedders> public-test-infra
gsnedders: The other thing I wanted to mention: public-test-infra is a good place to email about testing issues
... that's about all testing stuff, not just limited to infrastructure
AmeliaBR: what about going forward? Should we still focus on trying to convert the old tests as we can?
nikos: So the old tests are a good source of places to find tests to write
... the other option: https://github.com/w3c/svgwg/issues?q=is%3Aopen+is%3Aissue+label%3A%22Needs+tests%22
scribe: I added a label to GitHub for issues that require a test
... So there's old tests that need converting, but that won't cover all the corner cases. But it's also good to focus on things being worked on and discussed now so we can get tests in and file browser bugs.
AmeliaBR: lots of issues resolved in SVG 2 that aren't covered by the SVG 1.1 test suite
... so it's not an end game
nikos: Original plan was for chapter owners to write tests for their chapters
gsnedders: Given I haven't looked at the spec much, is there a list of changes?
https://nikosandronikos.github.io/svg2-info/svg2-feature-support
AmeliaBR: we have lists of features, but not sure we were so good about documenting where we fixed cross browser incompatibilities
nikos: one of the reasons I want to get feedback is to prioritise test creation
... so we are making tests for things people actually want to implement
<gsnedders> https://bugzilla.mozilla.org/show_bug.cgi?id=1321066
<AmeliaBR> https://blog.mozilla.org/security/2016/11/30/fixing-an-svg-animation-vulnerability/
<AmeliaBR> ^ Mozilla got a scare with a 0-day security bug hidden in SVG/SMIL API code.
AmeliaBR: it would be nice to get clarity on what the priorities are - from the authoring side, a big priority can be focusing on existing features where there are incompatibilities
... unfortunately we don't have a single list
gsnedders: how hard would it be to run the browsers' own tests against each other?
... if the goal is to find places where they don't interoperate
... probably not too hard?
AmeliaBR: if we can import all their SVG related tests
... then making sure what they're already testing is correct would be an important first step
... suspect a lot of issues are in what's not being tested
nikos: sounds like an excellent idea
gsnedders: suspect you may hit a few that match exactly in the browser they are written for but not in others
... it's been a while since I've tried doing stuff like this
nikos: the difference with WebKit is mostly the naming conventions, so that could likely be mostly converted by script
gsnedders: Blink is the same as WebKit. Mozilla have manifest files
... would be easy to convert
... wouldn't be surprised if some were using JS layout stuff
... looking at the DOM to see where things are rendered
nikos: yeah a lot of the older WebKit tests might be like that
gsnedders: Mozilla may be the best place to start because many are CC0 licensed nowadays
<AmeliaBR> MDN on how to create ref tests: https://developer.mozilla.org/en-US/docs/Mozilla/Creating_reftest-based_unit_tests
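For reference, Mozilla-style reftest manifests list one comparison per line, and a fuzzy annotation can tolerate a bounded number of slightly-different pixels - one existing answer to the small-pixel-differences question raised at the start of the meeting. The file names below are the hypothetical examples sketched above, and the fuzzy thresholds (max per-channel difference, max differing pixels) are arbitrary.

  == transform-translate-test.svg transform-translate-ref.svg
  == marker-basic-test.svg marker-basic-ref.svg
  fuzzy(2,10) == some-antialiased-test.svg some-antialiased-ref.svg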
Scribe: nikos
Present: nikos, Tav, gsnedders, AmeliaBR, shepazu, stakagi
Agenda: https://lists.w3.org/Archives/Public/www-svg/2016Nov/0025.html
Date: 01 Dec 2016
Minutes: http://www.w3.org/2016/12/01-svg-minutes.html