IRC log of coremob on 2012-10-03

Timestamps are in UTC.

08:40:12 [RRSAgent]
RRSAgent has joined #coremob
08:40:12 [RRSAgent]
logging to
08:40:14 [trackbot]
RRSAgent, make logs 25
08:40:14 [Zakim]
Zakim has joined #coremob
08:40:16 [trackbot]
Zakim, this will be
08:40:16 [Zakim]
I don't understand 'this will be', trackbot
08:40:17 [trackbot]
Meeting: Core Mobile Web Platform Community Group Teleconference
08:40:17 [trackbot]
Date: 03 October 2012
08:40:24 [jo]
-> Tobie's Paper on Test Frameworks
08:40:30 [Josh_Soref]
s/Teleconference/Face to Face/
08:40:35 [Josh_Soref]
Topic: Agenda
08:40:49 [Josh_Soref]
jo: to let Alan have some flexibility, we'll start with him
08:40:54 [Josh_Soref]
... and then we'll discuss our agenda
08:41:04 [Josh_Soref]
Topic: XXX
08:41:10 [Josh_Soref]
alan: Alan, W3C
08:41:27 [Josh_Soref]
... I came here today to talk about open platforms shared among mobile carriers
08:41:34 [Josh_Soref]
... my main purpose is to attend Apps World
08:41:43 [JenLeong]
JenLeong has joined #coremob
08:42:02 [Josh_Soref]
... surprisingly, yesterday most people asked me how they could develop portable HTML-based apps without upsetting Apple/Google
08:42:14 [Josh_Soref]
... and I responded that in fact, Apple and Google are both members of W3C
08:42:30 [Josh_Soref]
... I'm responsible for recruiting for W3C
08:42:38 [Josh_Soref]
... I'm a member of this CG
08:42:51 [Josh_Soref]
... To make something into a standard, it will have to move into a WG
08:42:59 [Josh_Soref]
... I spoke with GSMA yesterday afternoon
08:43:16 [Josh_Soref]
... If you have a question about W3C, membership, who they are, who they aren't, please feel free to ask me
08:43:24 [Josh_Soref]
... dom has more history, i'm the new kid on the block
08:43:27 [bryan]
present+ Bryan_Sullivan
08:43:28 [Josh_Soref]
jo: Thank you very much, alan
08:43:59 [Josh_Soref]
08:44:14 [Josh_Soref]
present+ Alan_Bird
08:44:21 [Josh_Soref]
topic: Agenda
08:44:33 [Josh_Soref]
s/Topic: Agenda/Topic: Brief Agenda/
08:44:46 [Josh_Soref]
jo: i'd like to try to come back to CoreMob 2012 mid afternoon
08:44:52 [Josh_Soref]
... i'd like to do issue/action bashing
08:44:56 [Josh_Soref]
... it's always fun,
08:45:07 [Josh_Soref]
... the best part is when we say "oh, yeah, you did that, didn't you?"
08:45:15 [dom]
08:45:17 [Josh_Soref]
... i hope everyone has read tobie's document
08:45:56 [Josh_Soref]
topic: Test Approach
08:46:09 [fantasai]
ScribeNick: fantasai
08:46:16 [Josh_Soref]
i/Meeting:/Scribe: Josh_Soref/
08:46:17 [jet]
jet has joined #coremob
08:46:23 [Josh_Soref]
RRSAgent, draft minutes
08:46:23 [RRSAgent]
I have made the request to generate Josh_Soref
08:46:29 [Josh_Soref]
RRSAgent, make logs public
08:46:51 [dan]
dan has joined #coremob
08:47:14 [fantasai]
08:47:21 [Josh_Soref]
RRSAgent, draft minutes
08:47:21 [RRSAgent]
I have made the request to generate Josh_Soref
08:47:39 [wonsuk]
wonsuk has joined #coremob
08:47:52 [mounir]
present+ Mounir_Lamouri
08:47:53 [mattkelly]
mattkelly has joined #coremob
08:48:02 [dom]
RRSAgent, draft minutes
08:48:02 [RRSAgent]
I have made the request to generate dom
08:48:12 [jfmoy]
jfmoy has joined #coremob
08:48:40 [girlie_mac]
girlie_mac has joined #coremob
08:49:04 [fantasai]
tobie: I was asked to do Robin's action, to discuss some options for building a test runner
08:49:10 [fantasai]
tobie: So I wrote this draft paper over the last few weeks
08:49:25 [fantasai]
tobie: I learned a lot in the process, so at least for me it was a useful exercise. Hope it's useful for you, too.
08:49:31 [fantasai]
tobie: First thing I had to spend time on was terminology
08:49:34 [Josh_Soref]
s/XXX/W3C Thank You/
08:49:40 [Josh_Soref]
Chair: Jo_Rabin
08:49:45 [fantasai]
tobie: Mostly because JS library that W3C is using is referred all over the place
08:49:48 [fantasai]
tobie: testharness.js
08:49:54 [Josh_Soref]
RRSAgent, draft minutes
08:49:54 [RRSAgent]
I have made the request to generate Josh_Soref
08:50:13 [fantasai]
tobie: Testing on the web in general came way after testing for other environments
08:50:19 [fantasai]
tobie: Hard to understand how the pieces fit together
08:50:24 [fantasai]
tobie: So I decided on some arbitrary wording
08:50:46 [fantasai]
tobie: Any questions?
08:50:57 [wonsuk]
Present+ Wonsuk_Lee
08:51:07 [fantasai]
fantasai: I read it, and the terminology doesn't work for reftests or self-describing tests
08:51:17 [Josh_Soref]
present+ Josh_Soref
08:51:17 [Josh_Soref]
present+ Elika_(fantasai)
08:51:17 [Josh_Soref]
present+ dom
08:51:17 [Josh_Soref]
present+ Jo_Rabin
08:51:17 [Josh_Soref]
present+ Bryan_Sullivan
08:51:17 [Josh_Soref]
present+ Tomomi_Imura_(girlie_mac)
08:51:20 [Josh_Soref]
present+ Matt_Kelly
08:51:21 [Josh_Soref]
present+ Shuhei_Hub
08:51:23 [Josh_Soref]
Present+ Wonsuk_Lee
08:51:25 [Josh_Soref]
present+ Gavin_Thomas
08:51:27 [Josh_Soref]
present+ Tobie_Langel
08:51:28 [fantasai]
fantasai: e.g. says a "test" is a single JS function, which isn't true for CSS tests
08:51:29 [Josh_Soref]
present+ Jean-Francois_Moy
08:51:31 [Josh_Soref]
present+ Giridhar_Mandyam
08:51:33 [Josh_Soref]
present+ Jonathan_Watt
08:51:35 [Josh_Soref]
present+ Bryan_Sullivan
08:51:36 [natasha]
natasha has joined #coremob
08:51:37 [Josh_Soref]
present+ Markus_HP
08:51:39 [Josh_Soref]
present+ Jennifer_Leong
08:51:41 [Josh_Soref]
present+ Max_NTT
08:51:43 [Josh_Soref]
present+ Natasha_GSMA
08:51:46 [Josh_Soref]
present+ Robert_Shilston
08:51:53 [Josh_Soref]
present+ Alan_Bird
08:51:58 [fantasai]
fantasai: and the test framework being defined as a JS program probably won't work for CSS tests
08:52:01 [fantasai]
08:52:02 [fantasai]
08:52:07 [Alan]
Alan has joined #coremob
08:52:20 [Josh_Soref]
08:52:50 [fantasai]
tobie: So, there are 3 categories of tests
08:53:29 [fantasai]
fantasai: testharness.js tests (automated JS)
08:53:36 [fantasai]
fantasai: reftests (automated visual)
08:53:42 [fantasai]
fantasai: self-describing tests (manual)
08:53:57 [fantasai]
tobie: This document only discusses automated JS tests
08:54:23 [fantasai]
dom: The terminology section seems to be very specific to what this group is doing, not general to W3C
08:54:31 [max]
max has joined #coremob
08:54:36 [fantasai]
dom: e.g. a test being a single JS function is true of many W3C tests, but not all
08:54:58 [fantasai]
dom: My question is whether you are trying to define these terms in the broader sense, or trying to focus on what's covered in the document
08:55:51 [fantasai]
dom: So maybe you need a section that lists assumptions, scopes the document to what coremob will be testing
08:56:12 [fantasai]
jo: Need to discuss that, e.g. seems premature to me to exclude non-automated testing
08:56:59 [fantasai]
jo: So assumption to state for today, that we're only talking about automated testing
08:57:16 [fantasai]
dom: What is the main goal for this?
08:57:35 [fantasai]
dom: Is it for something like Ringmark, which is an impactful way to evaluate whether a browser conforms to things this group wants
08:57:45 [gmandyam]
08:57:46 [fantasai]
dom: Or is it a generalized mobile test runner
08:58:03 [fantasai]
tobie: It's a spectrum, e.g. Ringmark runs in under a minute
08:58:19 [fantasai]
tobie: On the other hand, Jet was saying that e.g. Mozilla test suite takes 24 hours to run.
08:58:29 [fantasai]
tobie: And maybe good to have a subset that runs in 15 minutes
08:58:47 [fantasai]
jo: Maybe have a framework can do both
08:58:53 [fantasai]
tobie: Not sure you can do that with the same software
08:59:01 [fantasai]
tobie: I don't know what it is that vendors run for 24 hours
08:59:03 [jo]
ack g
08:59:05 [Josh_Soref]
08:59:06 [rob_shilston]
08:59:24 [fantasai]
gmandyam: Talk about uploading results, if we're trying to crowdsource, then doing something that doesn't involve an automated test framework will be very difficult
08:59:33 [fantasai]
gmandyam: Some tests can be automated, some can't
08:59:49 [fantasai]
gmandyam: For this we can say it's just existence tests, not functional tests
08:59:57 [fantasai]
gmandyam: Anyone can download these tests and upload results
09:00:21 [jo]
09:00:28 [fantasai]
tobie: Because testharness.js is asynchronous, and because test runners are async, it is possible to add human intervention in there, or even reftests
09:00:32 [fantasai]
tobie: It needs to be plugged in
09:00:38 [fantasai]
tobie: this paper does not concentrate on explaining that
09:00:53 [fantasai]
gmandyam: If you're talking about professional testers ...
09:01:07 [jo]
ack j
09:01:12 [fantasai]
gmandyam: You have to minimize the intervention elements for common people
09:01:40 [fantasai]
Josh_Soref: You don't have to eliminate human intervention to get results from untrained people
09:01:55 [fantasai]
Josh_Soref: e.g. a lot of CSS tests are asking is this red, is it green, click the right button. Anyone can do that
09:02:00 [mattkelly_]
mattkelly_ has joined #coremob
09:02:12 [fantasai]
Josh_Soref: We can have a system where you can load up the results, and mostly be looking at cached results
09:02:18 [fantasai]
Josh_Soref: And we can have a system where we tag things that are fast
09:02:31 [fantasai]
Josh_Soref: Can say that this set of tests, run these tests quickly
09:02:44 [fantasai]
Josh_Soref: It's certainly possible for tests to be self-instrumenting
09:02:50 [fantasai]
Josh_Soref: And say which ones are fast/slow
09:02:52 [dom]
a "lite" version of the tests would run in a split second, and the more consequential ones would require more time/intervention
09:02:58 [jo]
09:03:09 [Josh_Soref]
fantasai: you probably want to say which tests which you want to do for the fast run
09:03:14 [Josh_Soref]
... you may have fast edge cases
09:03:30 [Josh_Soref]
... but you want to define which core, possibly slow tests are key
09:03:47 [Josh_Soref]
tobie: it's pretty easy to decide which class of tests to run
09:03:55 [Gavin_]
Gavin_ has joined #coremob
09:04:03 [Josh_Soref]
jo: we agree that we need bundles of tests that can be runnable in different circumstances
09:04:08 [jo]
ack r
09:04:10 [Gavin_]
09:04:28 [fantasai]
rob_shilston: ... some basic things like position: fixed, where vendors implement it on android phones
09:04:47 [fantasai]
rob_shilston: We've not been able to determine that programmatically, whereas it's an important test that a human can determine
09:04:53 [fantasai]
rob_shilston: You're talking about fast tests and slow tests
09:05:17 [fantasai]
rob_shilston: Out of the last F2F, the ideal test framework would allow anyone to define a profile of tests to run
09:05:39 [jo]
ack g
09:05:39 [mattkelly_]
09:05:45 [fantasai]
rob_shilston: There might be fast tests, or slow tests, or FB tests, or coremob tests, but the framework should be able to handle all these.
09:05:53 [dom]
q+ tobie
09:06:28 [fantasai]
gavin: We dived into this, wondering whether we have a clear view on the requirements and customers from the test results
09:06:35 [fantasai]
gavin: That would help shape some of these topics
09:07:10 [Josh_Soref]
fantasai: your audience is web developers
09:07:13 [Josh_Soref]
... and browser vendors
09:07:22 [Josh_Soref]
... for the latter to compete
09:07:26 [Josh_Soref]
... for the former to XXA
09:07:27 [jo]
09:07:35 [fantasai]
s/XXA/see what's supported where/
09:07:38 [jo]
ack m
09:07:54 [fantasai]
mattkelly_: One thing to keep in mind is the amount of effort it's going to take to do something like a 100,000-test test suite
09:08:08 [fantasai]
mattkelly_: A ringmark-style thing vs. 100,000-test test suite
09:08:23 [fantasai]
mattkelly_: Want to create a framework that can be added to to create the latter eventually
09:08:40 [fantasai]
mattkelly_: The nice thing about ringmark is that it got traction
09:08:49 [fantasai]
mattkelly_: One ...
09:09:11 [fantasai]
mattkelly_: A lot of people have been using it, fixing bugs in browsers, evaluating products, etc.
09:09:18 [fantasai]
mattkelly_: I think we should start with something simple like ringmark
09:09:29 [fantasai]
mattkelly_: and then go towards having a large test suite, with device certification
09:09:32 [fantasai]
09:09:44 [fantasai]
dom: what you're saying is the focus should be on browser vendors, and people selecting browsers?
09:09:46 [gmandyam]
Response to Rob - QuIC's Vellamo automated framework automates testing such as scrolling which could also be verified using human intervention. There are tests that are not capable of being automated, but we can provide a clear delineation in an automated framework (e.g. differentiating between existence tests and functional tests).
09:09:51 [jo]
09:10:10 [fantasai]
mattkelly_: yes, I'd like the group to focus on taking developers priorities and informing browsers
09:10:19 [fantasai]
mattkelly_: Going back to ringmark, we're already doing that
09:10:41 [fantasai]
mattkelly_: we've added to other groups that build frameworks to determine what people can build on different browsers
09:11:01 [dom]
ack tob
09:11:04 [fantasai]
jo: Informing devs is an essential part of the job, just not what we might do ourselves
09:11:20 [fantasai]
tobie: Hearing lots of good ideas, great feedback, but haven't heard yet of anyone willing to commit resources to work on this
09:11:32 [fantasai]
tobie: Opening up the conversation and requirements to things like crowdsourcing reftests,
09:11:36 [fantasai]
tobie: I'm not going to do it
09:11:44 [fantasai]
tobie: Whoever wants that will have to step up and do it
09:11:48 [hptomcat]
is great for science crowd-sourcing
09:11:53 [hptomcat]
slightly related
09:12:02 [fantasai]
tobie: Given resources committed here to do this, most of the propositions I made are out-of-reach, and they're a 10th of what we're discussing right now
09:12:08 [Alan]
09:12:10 [fantasai]
tobie: So either we refocus discussion, or ppl step up and commit resources
09:12:22 [Gavin_]
09:12:32 [hptomcat]
crowd-sourcing the analysis of science data (e.g astronomy)
09:12:34 [MikeSmith]
RRSAgent, make minutes
09:12:34 [RRSAgent]
I have made the request to generate MikeSmith
09:12:56 [fantasai]
dom: hearing that we should start simple, automatable, leave more complex stuff for later
09:13:47 [fantasai]
fantasai: That works for other technologies, but not for CSS. Anything you can test via JS for CSS will be limited and not really testing the layout
09:13:49 [mattkelly_]
q+ tobie
09:14:00 [fantasai]
dom: Would be limited to existence tests.
09:14:18 [fantasai]
jo: First we need a formal statement of requirements that are in-scope for the charter
09:14:35 [fantasai]
jo: Second is architecture that matches the requirements, but will not be implemented in full yet due to resources
09:14:41 [fantasai]
jo: Third is something that works, soon
09:15:04 [hptomcat]
if you could automate the testing, record the results and then have the "crowd" analyze it and report the result in an automated fashion
09:15:04 [fantasai]
gavin: I can take an action to draft what I think the requirements should be
09:15:05 [jo]
09:15:10 [jo]
ack G
09:15:19 [fantasai]
gavin: I think our role is to apply some structure to this topic
09:15:19 [dom]
ACTION: Gavin to start a draft of requirements of what our testing efforts should be
09:15:19 [trackbot]
Created ACTION-56 - Start a draft of requirements of what our testing efforts should be [on Gavin Thomas - due 2012-10-10].
09:15:43 [fantasai]
gavin: We should be discussing the structure, and figure out [..] then resources will flow
09:15:47 [fantasai]
gavin: Requirements need to be real
09:16:22 [fantasai]
gavin: If this isn't the group that funds 100,000 tests, suppose they exist somewhere and we just need to find them and run them
09:16:25 [fantasai]
09:16:42 [fantasai]
gavin: There's an assumption that this group will create a technical framework that will handle all this
09:16:46 [fantasai]
gavin: Not sure that's needed
09:17:04 [fantasai]
gavin: Would love to see a table of all the features in coremob, does a test for this exist?
09:17:08 [jo]
09:17:13 [jo]
ack t
09:17:16 [fantasai]
tobie: This is my assumption of the requirements
09:17:39 [Josh_Soref]
fantasai: i'd like to raise an issue
09:17:40 [mattkelly_]
09:18:01 [Josh_Soref]
... existing ringmark sometimes tests for existence of features
09:18:10 [Josh_Soref]
... but not correctness
09:18:15 [Josh_Soref]
... we've had issues with ACID
09:18:18 [Josh_Soref]
... and with other tests
09:18:34 [jo]
ack m
09:18:39 [Josh_Soref]
... where people do a half-assed implementation just to satisfy the test
09:18:43 [Josh_Soref]
09:18:58 [Josh_Soref]
mattkelly_: we have limited resources at Facebook
09:19:06 [Josh_Soref]
... it's why we open sourced ringmark
09:19:14 [mattkelly_]
09:19:22 [Josh_Soref]
... and contributed it to w3
09:19:29 [Josh_Soref]
... hoping that people would contribute to the tests
09:19:40 [Josh_Soref]
jo: i'm hearing css can't be tested
09:19:48 [Josh_Soref]
fantasai: it can be, but not with javascript
09:20:03 [Josh_Soref]
jo: i'm hearing the framework needs to go beyond existence tests
09:20:19 [jo]
09:20:22 [Josh_Soref]
fantasai: you can't use the framework here to do it
09:20:30 [Josh_Soref]
tobie: when you talk about being able to automate ref tests
09:20:36 [Josh_Soref]
... it has hardware constraints
09:20:46 [Josh_Soref]
... i can't go to an arbitrary site
09:20:55 [Josh_Soref]
fantasai: you need something outside the browser
09:21:06 [Josh_Soref]
fantasai: it can be automated
09:21:15 [Josh_Soref]
tobie: there are two categories
09:21:23 [jo]
09:21:26 [Josh_Soref]
... automated within the browser, automated outside the browser
09:21:39 [Josh_Soref]
... it seems it might be better not to test css at all than to test it
09:22:02 [fantasai]
... since we're focusing on the former, not the latter here
09:22:20 [fantasai]
tobie: There are requirements that were expressed in the charter, which was to have a test suite for the coremob spec
09:22:24 [dom]
fantasai, some CSS can be automatically tested beyond existence, I would think; like you can check that setting a CSS rule does affect the right DOM properties (which could reasonably assumed to directly reflect how the browser would render the DOM)
09:22:27 [fantasai]
tobie: Implicitly, this test suite uses testharness.js
09:22:35 [dom]
s/reasonably/reasonably be/
09:23:04 [fantasai]
tobie: And then there's a bunch of nice-to-haves that came up at the last F2F, which are 1. being able to run non-W3C tests
09:23:12 [fantasai]
tobie: we could do the shim ourselves
09:23:29 [fantasai]
tobie: otherwise, all the architectures I'm suggesting make it possible for a third party to write a shim
09:23:29 [max]
max has joined #coremob
09:24:26 [hptomcat]
sweet :)
09:24:33 [fantasai]
dom, you can test the cascade and parsing that way, sure, but not layout and rendering which is the most significant part of the implementation
09:25:01 [fantasai]
tobie: Run the test suite in a closed environment, e.g. if a carrier wants to test a non-released device -- internal QA
09:25:43 [fantasai]
dom, as well, CSS has a number of implementations that don't support JS, so we wouldn't want to encourage JS tests over reftests, because that excludes those clients from being able to use those tests
09:26:12 [dom]
fantasai, right re JS, but I think the focus of this group is very much on JS-enabled implementations
09:26:17 [jo]
09:26:39 [fantasai]
dom, which is fine, but doesn't enable sharing tests with the CSSWG so much
09:26:46 [dom]
agreed :/
09:27:06 [fantasai]
tobie: Testing at W3C, there's a massive Mercurial repository, and a lot of WGs have their own sub-directory in there
09:27:14 [fantasai]
tobie: in which they have their own organization of tests
09:27:28 [fantasai]
tobie: Every group has a very different architecture
09:27:38 [fantasai]
tobie: Group with clearest process I've seen is WebApps group
09:27:49 [dom]
s/<fantasai> dom,/<fantasai_scribe> dom,/g
09:27:55 [fantasai]
tobie: A company submits tests, they end up in a repo, they need to get reviewed by someone from a different company
09:28:03 [fantasai]
tobie: then they become the test suite for a particular test
09:28:05 [jo]
-> W3C Test Repository referred to by Tobie
09:28:14 [fantasai]
tobie: The problem is this review process usually doesn't happen
09:28:23 [fantasai]
tobie: So very difficult to figure out which specs are tested, which aren't
09:28:36 [fantasai]
tobie: which tests are good, which are not
09:28:40 [fantasai]
tobie: ...
09:28:50 [fantasai]
tobie: Given a name of a test suite, need to figure out where the tests are
09:28:55 [fantasai]
tobie: Run the test suite
09:29:01 [fantasai]
tobie: Log the results and get them back
09:29:09 [fantasai]
jo: What state is the JSON API today?
09:29:32 [fantasai]
dom: ...
09:29:38 [fantasai]
tobie: I played with it, it's alpha [...]
09:29:45 [fantasai]
tobie: I have a suggestion to bypass this API altogether
09:29:47 [hub]
hub has joined #coremob
09:29:56 [fantasai]
tobie: Most basic thing, the way the current test runner at W3C works
09:29:58 [dom]
s/.../the JSON API works, it is used as a back-end to the main UI of the test framework/
09:30:01 [fantasai]
tobie: First you navigate to the test runner
09:30:23 [fantasai]
tobie: testrunner has an iframe
09:30:49 [fantasai]
tobie: First you go to testrunner, then you get the JSON list of resources, then pull the resources,
09:31:04 [fantasai]
tobie: then testrunner automatically figures out results and sends them back to W3C servers
09:31:09 [fantasai]
tobie: or you can post to .e.g
09:31:17 [fantasai]
tobie: Then someone can look at those results afterwards
09:31:28 [fantasai]
tobie: First basic proposition I'm making is that, to fulfill the most important req we have
09:31:37 [fantasai]
tobie: would be to just create the test suite on the w3c servers themselves
09:31:45 [fantasai]
tobie: and that just requires us to have a repo there
09:31:54 [fantasai]
tobie: pretty easy to do, low-cost, serves that requirement very well
09:32:09 [fantasai]
dom: If it does, why look at alternative solutions?
09:32:18 [fantasai]
tobie: It only suits our must-have reqs, not the nice-to-have ones
09:32:50 [fantasai]
tobie: wouldn't allow something like ringmark to work, e.g.
09:32:55 [fantasai]
tobie: and doesn't allow running on a closed network
09:33:08 [fantasai]
tobie: and makes it difficult to run 3rd-party tests
09:33:17 [fantasai]
tobie: e.g. ECMAScript
09:33:24 [fantasai]
tobie: Option 2
09:33:38 [fantasai]
gmandyam: why can't it run ringmark?
09:33:56 [fantasai]
tobie: this system only works due to same-origin constraints
09:34:00 [fantasai]
tobie: b/c of iframe
09:34:08 [fantasai]
tobie: W3C would have to modify its stuff
09:34:13 [fantasai]
tobie: Option 2
09:34:25 [fantasai]
tobie: But instead of hosting testharness on a W3C server, can host it anywhere
09:34:28 [fantasai]
tobie: e.g.
09:34:51 [fantasai]
tobie: This architecture allows for the requirement to have tests hosted on different domains
09:35:10 [fantasai]
tobie: testrunner queries JSON API, runs the test pages in an iFrame
09:35:13 [jet]
jet has joined #coremob
09:35:14 [fantasai]
tobie: but the iframe is not same-origin
09:35:21 [fantasai]
tobie: What it contains comes from W3C repo
09:35:30 [fantasai]
tobie: Solution there is to use post-message
09:35:36 [fantasai]
tobie: These are widely-supported
09:35:55 [fantasai]
tobie: but would require W3C servers to either whitelist the origin of the testrunner
09:36:08 [fantasai]
tobie: or allow posting messages to whatever server
09:36:12 [fantasai]
tobie: security implications are small
09:36:19 [fantasai]
tobie: Opening some form of DOS attack
09:36:45 [fantasai]
tobie: but basically, it requires a change in w3c server that needs to be vetted by someone
09:36:52 [tobia]
tobia has joined #coremob
09:37:10 [fantasai]
tobie: My understanding is that the security implications are limited to leaking data of whoever is connected to the server at that time
09:37:14 [fantasai]
tobie: but that can be assessed
09:37:17 [jo]
09:37:26 [fantasai]
rob_shilston: I thought it was just a difference in communicating data via iframe
09:37:35 [fantasai]
tobie: It is pretty safe, just needs to be done right
09:37:55 [fantasai]
tobie: For it to be completely safe, need to whitelist targets
09:38:08 [fantasai]
tobie: but not necessary to do that in the particular case of what we're doing here
09:38:17 [fantasai]
tobie: because only message we're sending out is the results of running the tests
09:38:32 [fantasai]
tobie: Just need to make sure in that data, there is no personally-identifiable data
09:38:45 [fantasai]
tobie: W3C needs to do some work to make that happen and make sure it's not opening a security hole
09:38:51 [fantasai]
tobie: But it still has to be done, and explained properly
09:39:04 [fantasai]
dom: Another consequence is that it means to pass the test suite you have to implement postMessage
09:39:08 [fantasai]
dom: Makes it an implicit requirement
09:39:21 [fantasai]
tobie: There are some iframe tricks to get around this, but horrible mess
09:39:27 [fantasai]
tobie: That said if you look at caniuse,
09:39:28 [dom]
09:39:38 [fantasai]
tobie: The data for this, it seems safe
09:40:24 [fantasai]
rob_shilston: This chart isn't focused on mobile
09:40:35 [fantasai]
rob_shilston: You have to really do testing for mobile version of browsers
09:40:39 [rob_shilston]
09:41:00 [fantasai]
09:41:05 [fantasai]
09:41:13 [fantasai]
tobie: And it is a coremob requirement
09:42:14 [fantasai]
tobie: Want ringmark to be able to run this test suite
09:42:26 [fantasai]
tobie: Want the UI for this runner to be runnable from anywhere
09:43:22 [fantasai]
tobie: Other cool thing is it allows you to run tests from other groups via same protocol
09:43:43 [fantasai]
tobie: You can run any test on a different origin, as long as its able to send its results across origin in a format that is shimmable or that is standardized
09:43:57 [fantasai]
tobie: Then you have complete decoupling between the runner and where the tests are
09:44:07 [fantasai]
tobie: use cases: running ECMAScript test
09:44:19 [fantasai]
tobie: the only thing 262 has to do is to post results through postmessage API
09:44:22 [fantasai]
tobie: in a format that we can parse
09:45:09 [fantasai]
tobie: cons are it requires change to w3c code, and that it doesn't allow running on closed networks
09:45:20 [fantasai]
dom: Seems simple enough change to make
09:46:06 [fantasai]
tobie: Next option is...
09:46:23 [fantasai]
tobie: There's a problem with the JSON API in its current state
09:46:40 [fantasai]
tobie: It lets you figure out where the test page is that you want to run, but it doesn't tell you what resources that test page relies on
09:46:46 [fantasai]
tobie: e.g. external CSS or JS or whatever
09:46:53 [fantasai]
tobie: or whether it's static or dynamic
09:47:01 [fantasai]
tobie: This is why it's important to run the tests where they are
09:47:06 [fantasai]
tobie: instead of getting them
09:47:16 [fantasai]
tobie: This third solution is to proxy requests through a remote server
09:47:24 [fantasai]
tobie: You go visit coremob server with your device
09:47:28 [fantasai]
tobie: it returns the runner
09:47:39 [fantasai]
tobie: asks for "give me address of test pages I want to run"
09:47:54 [fantasai]
tobie: Then instead of visiting those pages through the iframe, you provide a proxy that's hosted on
09:48:01 [fantasai]
tobie: with the detail of which page it's supposed to proxy
09:48:10 [fantasai]
tobie: So it proxies your test page, and all subsequent requests made by that page
09:48:14 [fantasai]
tobie: that requires a bit of juggling
09:48:30 [fantasai]
tobie: an my understanding is it would only be possible because all resources on the w3c test framework are relative and are not absolute URLs
09:48:41 [fantasai]
dom: Any testcase that doesn't match that requirement would be buggy
09:49:11 [fantasai]
dom: Any testcase that doesn't use relative URLs won't work
09:49:29 [fantasai]
fantasai: That would be true of some tests of relative URL / absolute URL handling
09:49:49 [fantasai]
Josh_Soref: There are some testcases like this, but you have to run a custom harness for them anyway
09:50:18 [fantasai]
tobie: The benefit of this is that the device itself never has to visit the W3C server, or any other server that's hosting tests
09:50:39 [fantasai]
tobie: Which leads us to architecture where the tests can be run on a closed network
09:51:02 [fantasai]
tobie: The device itself never visits the W3C webserver
09:51:25 [fantasai]
tobie: So that enables testing devices without the device's UA string being all over the web
09:51:45 [fantasai]
tobie: Might notice that e.g. weeks before iPhone 5 was released, browserscope already had results for it
09:51:57 [fantasai]
tobie: this avoids that problem
09:52:31 [fantasai]
dom: Problem with specs that require specific server-side components
09:52:58 [fantasai]
09:53:09 [fantasai]
tobie: Tests that require server-side components would not [pause]
09:53:17 [fantasai]
tobie: Why could those not be proxied too?
09:53:24 [fantasai]
dom: Could, but the proxy needs to be aware of this
09:53:27 [jo]
ACTION: Tobie to investigate tests that require server side components
09:53:27 [trackbot]
Created ACTION-57 - Investigate tests that require server side components [on Tobie Langel - due 2012-10-10].
09:53:38 [fantasai]
tobie: Just getting resources. Except for websockets
09:53:45 [fantasai]
09:53:57 [fantasai]
dom: WebRTC
09:54:47 [fantasai]
dom: Also if you're dealing with websockets, also dealing with absolute URLs
09:54:59 [fantasai]
tobie: Ok, and then last solution is to avoid the JSON api altogether
09:55:04 [fantasai]
tobie: You just download the whole repository
09:55:20 [fantasai]
tobie: and then have a runner that's able to do that on the local server
09:55:36 [fantasai]
tobie: that satisfies all requirements, except you do need a websocket server on your local server
09:55:42 [fantasai]
tobie: but you lose the JSON API capability
09:55:56 [fantasai]
tobie: so you're stuck with understanding the W3C test repo and maintaining that understanding
09:57:38 [fantasai]
tobie: I think browser vendors already do this
09:57:41 [Josh_Soref]
fantasai: mozilla runs tests on every checkin
09:57:50 [fantasai]
fantasai: We pull some of the tests into our repo, but they lose sync over time
09:58:30 [Josh_Soref]
Josh_Soref: mozilla/microsoft don't like running tests against remote servers, they prefer to have isolated machines w/ no network links (it avoids latency / connectivity variance)
09:58:52 [jo]
09:59:08 [fantasai]
fantasai: if we could automate the runs, we could probably run them against the w3c server on every release, to have the test results recorded there
10:00:06 [fantasai]
[discussion of keeping things in sync]
10:00:11 [fantasai]
tobie: the cron job is the easy part
10:00:25 [mattkelly_]
10:00:26 [fantasai]
tobie: hard part is keeping up with changes to structure and systems
10:01:14 [fantasai]
10:01:28 [fantasai]
tobie: Having the JSON API report all the resources required by a test would be a nice step up
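(The "step up" tobie mentions could look something like the entry below: each test page lists the resources it depends on, so a runner or proxy can fetch everything needed. All field names are invented for illustration; this is not the actual W3C JSON API.)

```javascript
// Hypothetical shape of a richer JSON API entry of the kind discussed:
// a test page declaring the resources it depends on. Field names are
// invented, not part of any real API.
const entry = {
  spec: "appcache",
  testPage: "tests/appcache/manifest-parsing.html",
  resources: [
    "tests/appcache/manifest-parsing.manifest",
    "tests/common/utils.js",
  ],
  dynamic: false, // whether the page needs server-side behaviour
};

console.log(entry.resources.length);
```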
10:01:41 [mattkelly_]
10:02:01 [fantasai]
dom: Think your other solutions rely on JSON API to get the right pieces of the tests
10:02:15 [fantasai]
dom: Which assume that .. has the right organization of tests, which I don't believe is true today
10:02:41 [fantasai]
jo: Is that a fundamental flaw?
10:02:46 [Josh_Soref]
10:02:52 [fantasai]
dom: If it doesn't get solved, the other solutions [...]
10:02:58 [fantasai]
dom: Not fundamentally unsolveable
10:03:36 [fantasai]
fantasai, Josh_Soref: what was the problem?
10:03:47 [fantasai]
dom: JSON API exposes what you see, which is a list of 40 test suites
10:03:49 [Josh_Soref]
Josh_Soref: +1
10:04:26 [fantasai]
dom: if you look at the details of that page
10:04:34 [fantasai]
dom: There is no direct match between an entry on that page and a feature
10:04:40 [fantasai]
dom: Coremob is interested in features, or specs,
10:04:51 [fantasai]
dom: but it's not even at granularity of specifications here
10:05:02 [fantasai]
dom: So we would face issue of matching whatever we wanted to what's in the JSON api
10:05:05 [fantasai]
tobie: I agree with that
10:05:13 [fantasai]
tobie: that the JSON API is half-baked
10:05:25 [fantasai]
Josh_Soref: Seems like a bug everyone wants fixed, though
10:05:37 [fantasai]
tobie: Would be sad to build a solution to that problem with a slightly different API
10:05:48 [fantasai]
dom: It is a fundamental flaw that if not fixed, we can't rely on this API
10:06:02 [fantasai]
dom: But fundamental to so many use cases that we can surely bribe Robin to fix it
10:06:28 [fantasai]
dom: First thing would be to discuss with Robin
10:06:36 [fantasai]
dom: But we do depend on it
10:07:19 [dom]
PROPOSED ACTION: SOMEBODY to work with Robin on getting the right granularity exposed through the JSON API of the W3C Test Framework
10:08:09 [dom]
ACTION: Dom to work with Robin on getting the right granularity exposed through the JSON API of the W3C Test Framework
10:08:09 [trackbot]
Created ACTION-58 - Work with Robin on getting the right granularity exposed through the JSON API of the W3C Test Framework [on Dominique Hazaël-Massieux - due 2012-10-10].
10:36:53 [jfmoy]
jfmoy has joined #coremob
10:37:48 [natasha]
natasha has joined #coremob
10:40:11 [fantasai]
tobie: First, we need an answer to whether W3C can make the changes we need
10:40:30 [fantasai]
tobie: Second issue is, I believe the JSON API is the right way to go, but we have to make sure it has the data we need
10:40:33 [fantasai]
dom: agree
10:40:50 [fantasai]
dom: Thinking about the JSON API again, what do we expect to get from that API that we would reuse?
10:41:04 [fantasai]
dom: One thing we talked about this morning is that some version of coremob would target to be runnable in very short amount of time
10:41:13 [fantasai]
dom: which is not something that I expect you would get from JSON API
10:41:22 [fantasai]
Josh_Soref: I think it's ok for JSON API to be designed for that feature
10:41:29 [fantasai]
Josh_Soref: Should have a way to ask for the key tests to be run
10:41:39 [fantasai]
Josh_Soref: and by default, for databases that don't have it, default to none or all
10:41:48 [fantasai]
dom: Big assumption is that whoever decides what are key tests
10:42:03 [fantasai]
dom: have same definition of key tests as we want
10:42:11 [fantasai]
fantasai: Just have tags. Different groups can tag different subsets
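fantasai's tagging idea could be a simple filter over a test manifest in which each entry carries arbitrary tags; a sketch in which the manifest shape and tag names are assumptions, not the actual JSON API:

```javascript
// Sketch: select tests by tag from a hypothetical manifest.
// The entries and the "coremob-key" tag are illustrative only.
const manifest = [
  { id: "indexeddb/open-basic", tags: ["coremob-key", "storage"] },
  { id: "css/animations/timing", tags: ["slow"] },
  { id: "xhr/cors-simple", tags: ["coremob-key", "network"] },
];

function selectByTag(tests, tag) {
  // With no tag requested, fall back to running everything.
  if (!tag) return tests;
  return tests.filter((t) => t.tags.includes(tag));
}

console.log(selectByTag(manifest, "coremob-key").map((t) => t.id));
// → [ 'indexeddb/open-basic', 'xhr/cors-simple' ]
```

Different groups could then maintain their own tag ("coremob-key", "csswg-smoke", ...) over the same shared pool of tests.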
10:42:30 [fantasai]
dom: Does the definition of key test have relationship to what we consider key tests?
10:42:39 [fantasai]
dom: The picking of tests that we'd want to run would go through various parameters
10:42:57 [fantasai]
dom: They are key in some sense of that word, that they are fast, that they don't overlap with other contexts
10:43:04 [fantasai]
dom: Not convinced the selection of tests can be automated
10:43:28 [fantasai]
tobie: My understanding from the JSON APIs is they'll at least define what kind of test it is, testharness.js vs reftest
10:43:39 [fantasai]
tobie: I like to take ECMAScript test suite as an example
10:43:47 [fantasai]
tobie: It has 27,000 tests, and runs in ~10min
10:43:58 [fantasai]
tobie: Until we have such an amount of tests in the W3C repo
10:44:13 [fantasai]
dom: we have ~3000 tests right now
10:44:23 [fantasai]
tobie: Don't think we're going much beyond 15 minutes
10:44:32 [fantasai]
Josh_Soref: Animation tests might run you over pretty quickly
10:44:49 [fantasai]
tobie: If it's just those, then it's easy to not do
10:45:42 [fantasai]
tobie: I'm not sure if we need to solve this subsetting problem right now.
10:45:51 [fantasai]
fantasai: I agree. Let's just run all the automatable tests.
10:46:11 [jo]
10:46:35 [fantasai]
fantasai: Maybe it doesn't take too long. Maybe it's not a problem that it takes long.
10:46:44 [dom]
q+ to note that whether subsetting can be done usefully at the JSON-API level is a fairly important aspect on deciding whether the JSON API is the right approach (and so that we should invest resources in it)
10:46:57 [fantasai]
fantasai: Maybe ppl who want fast results are satisfied with pulling cached results, and ppl who want live results are ok waiting
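The two consumption modes fantasai describes could be sketched as a cached-by-default lookup with an explicit live-run escape hatch (all names here are illustrative, not an agreed design):

```javascript
// Sketch: readers who want an instant answer get the last stored run;
// readers who want live data trigger a fresh run. Names are illustrative.
const store = new Map(); // deviceId -> last recorded results

function recordRun(deviceId, results) {
  store.set(deviceId, { results, recordedAt: Date.now() });
}

function getResults(deviceId, { live = false, runSuite } = {}) {
  const cached = store.get(deviceId);
  if (!live && cached) return cached.results; // fast path: cached data
  const results = runSuite(deviceId);         // slow path: live run
  recordRun(deviceId, results);
  return results;
}
```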
10:47:12 [fantasai]
10:47:25 [fantasai]
mattkelly_: Small use case of showing off devices at Mobile World Congress
10:47:38 [fantasai]
bryan: QA depts don't care if it takes 10 or 30 minutes
10:47:59 [fantasai]
Josh_Soref: For the MWC, could solve that use case by priming the repo with data, and just pulling the cached data
10:48:04 [jo]
ack dom
10:48:04 [Zakim]
dom, you wanted to note that whether subsetting can be done usefully at the JSON-API level is a fairly important aspect on deciding whether the JSON API is the right approach (and
10:48:07 [Zakim]
... so that we should invest resources in it)
10:48:28 [Josh_Soref]
dom: ... so that we should invest resources in it)
10:48:47 [fantasai]
tobie: We don't even know if timing is a problem, so why discuss it now. Let's discuss it later.
10:49:13 [fantasai]
dom: My concern is spending resources fixing JSON API for something we don't need
10:49:22 [fantasai]
tobie: This can be a req down the line
10:49:42 [fantasai]
tobie: If we need this data later, we can figure it out later
10:49:53 [bryan]
We have test departments responsible for app regression testing on devices, and while the difference between 10 min and 1 hour is significant, it's not a show stopper in reality, especially if the expectation is closer to 15-30 min.
10:50:57 [fantasai]
10:51:06 [fantasai]
Josh_Soref: You do need the API to say which tests belong to which specs
10:51:32 [jo]
10:51:52 [fantasai]
dom: So the action item I took is not a showstopper
10:52:48 [gmandyam]
10:53:17 [fantasai]
RESOLVED: we will subset only to the extent that we want to test the specs we're interested in, but not subset testing within the spec for time etc. until/unless it's shown that the tests take too long and it's a problem
10:54:06 [fantasai]
jo: So which option do we take? I suggest getting cracking asap and not closing off options 3/4
10:55:19 [jo]
PROPOSED RESOLUTION: We will attempt to proceed with Tobie's option 2 on the basis that we want to do something soon, we don't close off Options 3 and 4 because we may want to come back to them later
10:56:04 [fantasai]
jo: So how do we proceed with option 2? Saying that some changes are required in current API to do that
10:57:20 [fantasai]
RESOLVED: We will attempt to proceed with Tobie's option 2 on the basis that we want to do something soon, we don't close off Options 3 and 4 because we may want to come back to them later
10:57:37 [tobie]
ACTION: tobie to talk to Robin to get cross-origin messages baked into testharness.js/ test report.
10:57:38 [trackbot]
Created ACTION-59 - Talk to Robin to get cross-origin messages baked into testharness.js/ test report. [on Tobie Langel - due 2012-10-10].
10:58:19 [fantasai]
tobie: There are 2 steps, depending on whether we support only W3C tests or others too
10:58:24 [fantasai]
tobie: Let's focus on W3C tests
10:58:31 [fantasai]
tobie: There has to be a piece of test runner that talks to JSON API
10:58:39 [fantasai]
tobie: to know which test cases to run
10:58:46 [fantasai]
tobie: then the runner has to create iframe, and stick test pages in there
10:59:01 [fantasai]
tobie: then runner has to listen to result of running that test, and collect the results
10:59:06 [fantasai]
tobie: then it needs to ship the results out
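The runner tobie outlines here could be sketched roughly as follows. The endpoint, the one-result-message-per-test assumption, and the message format are all assumptions for illustration; the browser wiring is shown but only the result aggregation is exercised outside a page:

```javascript
// Sketch of the proposed runner: fold per-test result messages
// into one report to ship out. Message shape is an assumption.
function summarize(messages) {
  const report = { pass: 0, fail: 0, tests: [] };
  for (const m of messages) {
    report.tests.push({ name: m.name, status: m.status });
    if (m.status === "PASS") report.pass += 1;
    else report.fail += 1;
  }
  return report;
}

// Browser-only wiring (illustrative; assumes each test page posts
// exactly one message back to the embedding runner when it finishes).
function runSuite(testUrls, onDone) {
  const collected = [];
  window.addEventListener("message", (e) => {
    collected.push(e.data);
    if (collected.length === testUrls.length) onDone(summarize(collected));
  });
  for (const url of testUrls) {
    const frame = document.createElement("iframe");
    frame.src = url; // test page runs, reports via postMessage
    document.body.appendChild(frame);
  }
}
```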
10:59:42 [fantasai]
jo: So someone needs to spec such a test runner, and someone needs to write it
10:59:53 [fantasai]
dom: So first task is understanding in more detail what Tobie describes as option 2
11:00:10 [fantasai]
dom: In practice I assume this is going to be using a JS library that takes data from the web, processes it, and sends it back
11:00:20 [fantasai]
dom: Someone has to analyze what this means.
11:00:26 [fantasai]
dom: Then someone has to write the JS code
11:00:30 [fantasai]
dom: to make that actually happen
11:00:56 [jfmoy]
jfmoy has joined #coremob
11:01:01 [fantasai]
dom: For the second action item, need someone who can write code. First action item doesn't need that, but is a deeper dive into how to interact with the tests
11:03:25 [fantasai]
11:03:33 [fantasai]
jo: The task is to look into what this task entails
11:03:36 [fantasai]
jo: in sufficient detail
11:03:41 [fantasai]
jo: so that someone can then write the code
11:04:01 [JenLeong]
11:05:17 [fantasai]
11:05:40 [fantasai]
dom: Neither me nor Tobie should be the bottleneck in understanding what's going on, but I'm fine taking this initial task ...
11:06:10 [fantasai]
tobie: Just to give a bit of background on this, when Jo asked me to write this document, I thought [...]
11:06:23 [fantasai]
tobie: It's time I spend not working on other things.
11:06:42 [fantasai]
tobie: Because ppl have the knowledge of things, they end up being the bottleneck
11:07:00 [fantasai]
[more discussion of resources, not minuting this]
11:07:31 [fantasai]
bryan: Who are the experts on the W3C infrastructure?
11:08:53 [fantasai]
Josh_Soref: just ask around
11:09:28 [mattkelly]
mattkelly has joined #coremob
11:09:55 [fantasai]
tobie: Not good for same person to do all the work. What if I die tomorrow?
11:09:57 [lbolstad]
lbolstad has joined #coremob
11:10:45 [lbolstad]
lbolstad has joined #coremob
11:12:10 [fantasai]
mattkelly volunteers to help
11:12:30 [hptomcat]
i'll help too with coding (javascript stuff)
11:13:02 [fantasai]
jo: We'll hope to have some forward motion in a month
11:15:01 [jo]
Action: dom to write a wiki page with a breakdown of the tasks required to build the initial test framework
11:15:01 [trackbot]
Created ACTION-60 - Write a wiki page with a breakdown of the tasks required to build the initial test framework [on Dominique Hazaël-Massieux - due 2012-10-10].
11:15:40 [jo]
action: bryan to find resources to implement what Dom writes in ACTION-60 within 1 month
11:15:40 [trackbot]
Created ACTION-61 - Find resources to implement what Dom writes in ACTION-60 within 1 month [on Bryan Sullivan - due 2012-10-10].
11:15:56 [jo]
action: shilston to find resources to implement what Dom writes in ACTION-60 within 1 month
11:15:56 [trackbot]
Created ACTION-62 - Find resources to implement what Dom writes in ACTION-60 within 1 month [on Robert Shilston - due 2012-10-10].
11:16:06 [jo]
action: kelly to find resources to implement what Dom writes in ACTION-60 within 1 month
11:16:06 [trackbot]
Created ACTION-63 - Find resources to implement what Dom writes in ACTION-60 within 1 month [on Matt Kelly - due 2012-10-10].
11:16:28 [dom]
11:16:28 [trackbot]
ACTION-60 Write a wiki page with a breakdown of the tasks required to build the initial test framework notes added
11:16:51 [dom]
bryan, rob_shilston, mattkelly, if you can look at and see if that helps you
11:17:38 [fantasai]
jet asks about ringmark
11:18:03 [fantasai]
jet: Here's what I'd like to see. Currently lives in FB chunk of github
11:18:32 [fantasai]
jet: Would like the tests accepted as coremob, with a policy of pulls accepted by coremob
11:18:42 [fantasai]
tobie: ringmark tests, or ringmark runner?
11:18:52 [fantasai]
jet: What is coremob taking from
11:19:03 [fantasai]
tobie: I think the project of what we've talked about testing here, and what the charter of the group says,
11:19:09 [fantasai]
11:19:12 [jo]
11:19:16 [dom]
RRSAgent, draft minutes
11:19:16 [RRSAgent]
I have made the request to generate dom
11:19:23 [fantasai]
tobie: First step is for this group to go look through the W3C test suites and find tests
11:19:32 [fantasai]
tobie: When tests are missing, go to WGs to write tests in that context
11:19:39 [fantasai]
tobie: Because that's the only way that the W3C accepts us
11:19:46 [fantasai]
tobie: The tests are related to a WG
11:19:54 [fantasai]
tobie: It's the WG that wants to decide whether a test is a good quality test or not
11:20:12 [fantasai]
jet: Right now, is associated with coremob, whether correctly or not
11:20:27 [fantasai]
jet: Would like group to say publicly whether is or is not associated with this group
11:20:32 [fantasai]
tobie: The answer is no.
11:21:02 [fantasai]
jet: Question then is, if it's not related to coremob, would coremob be, if they forked as the Level 0 of this
11:21:08 [fantasai]
tobie: It could be done. it's open source
11:21:37 [fantasai]
tobie: It's perfectly ok for this group to fork the testrunner and use it however it wants
11:21:37 [mattkelly]
11:21:44 [fantasai]
tobie: for the tests themselves, a bit more complicated
11:21:55 [fantasai]
tobie: but the tests are licensed for use in w3c
11:22:09 [fantasai]
tobie: Not saying the runner we're working on should be a fork. Or not.
11:22:28 [fantasai]
jo: that would be up to the people coding the testrunner, maybe some of it is useful maybe not
11:22:37 [mattkelly]
11:22:47 [jo]
ack giri
11:23:04 [mattkelly]
ack gmandyam
11:23:25 [Gavin_]
11:24:47 [fantasai]
discussion of indexedDB test suites
11:25:04 [fantasai]
tobie: Short term, make a decision, longer term bitch at WGs to decide which is the actual suite
11:25:57 [jo]
ack matt
11:26:28 [fantasai]
mattkelly: This group essentially gathers requirements and tests and builds the test suite, and then ringmark would be the results page on top of that
11:26:34 [fantasai]
mattkelly: People could have their own results page
11:26:40 [fantasai]
mattkelly: Might want to brand pages with it, whatever
11:26:46 [fantasai]
mattkelly: Give flexibility in how ppl want to show results
11:26:58 [fantasai]
mattkelly: while maintaining consensus around the tests
11:27:11 [fantasai]
jet: Why not start with what you've got?
11:27:14 [jo]
ack matt
11:27:49 [jo]
ack matt
11:27:53 [jo]
ack g
11:27:53 [fantasai]
gavin: The conversation has been focused around W3C tests, using that as center of gravity
11:28:20 [fantasai]
gavin: Wondering why dismissed the ringmark tests, then fill with more tests
11:28:37 [fantasai]
fantasai: Because there are a lot more W3C tests, don't want to recreate them
11:28:41 [fantasai]
jo: And they are authoritative
11:29:32 [fantasai]
fantasai: You might want to reimplement ringmark using the testrunner we'll use, and convert the ringmark tests to W3C tests, and just have a W3C implementation of ringmark
11:29:38 [fantasai]
11:29:50 [fantasai]
tobie: The decision that this group made was to proceed this way
11:29:51 [dom]
"High-quality, comprehensive and automated test suites are important interoperability drivers. The CG will compile accompanying test suites for each specification it releases. Where appropriate, the test suites will draw on pre-existing tests developed for the feature’s original specification."
11:29:56 [jo]
11:29:56 [dom]
11:30:09 [fantasai]
tobie: If we start to take tests from elsewhere, and these get included into the test suites of the relevant specifications
11:30:17 [fantasai]
tobie: then whoever made those tests will have to give proper licensing to do that
11:30:25 [fantasai]
tobie: and we don't want to handle all of the cross-licensing issues
11:30:32 [fantasai]
gavin: For us to add value to all the different audiences
11:30:39 [fantasai]
gavin: Strikes me there's more than just availability of testcases
11:30:53 [fantasai]
gavin: That's the nuts and bolts of it, but where can they go to see the iOS level of support?
11:31:18 [fantasai]
jo: Should be able to go run the tests yourself, or go to a repo and look up existing results
11:31:30 [fantasai]
tobie: The results are getting collected, and some work at W3C to make those displayable
11:31:39 [fantasai]
tobie: We want to make that all much more discoverable and package it nicely
11:31:45 [fantasai]
tobie: the cool thing about Ringmark is that people can relate to it
11:31:55 [dan]
11:31:57 [fantasai]
tobie: The W3C tests and how they're presented, it's crappy and ugly and no one knows where they are
11:32:06 [fantasai]
gavin: Ringmark and HTML5test
11:32:10 [fantasai]
tobie: the two are very different things
11:32:19 [fantasai]
gavin: Would this group have more value by focusing on the front end
11:32:43 [fantasai]
jo: the conclusion we came to was that it was sufficient for there to be a front end, but this group didn't have to write one itself
11:32:58 [fantasai]
jo: The place to most expeditiously source tests is the w3c repos
11:33:08 [fantasai]
jo: our job is to build the bridge between the tests and the ppl who want to use them
11:33:39 [fantasai]
jo: The FB ringmark front end is their front end
11:33:53 [fantasai]
gavin: And it's not the Coremob front end. But we could do that.
11:34:08 [fantasai]
tobie: If the test runner is done properly, I can add a front end on it easily.
11:34:44 [fantasai]
jo: I think it's open whether or not this group has its own front end. We need a framework you could put a front end on.
11:35:03 [fantasai]
<br type=lunch>
11:52:20 [Alan]
Alan has joined #coremob
12:01:17 [ArtB]
ArtB has joined #coremob
12:01:30 [ArtB]
RRSAgent, make minutes
12:01:30 [RRSAgent]
I have made the request to generate ArtB
12:29:47 [jet]
jet has joined #coremob
12:31:52 [rob_shilston]
rob_shilston has joined #coremob
12:32:11 [rob_shilston]
Travel status for London:
12:34:09 [wonsuk]
wonsuk has joined #coremob
12:34:32 [Josh_Soref]
Topic: Testing
12:34:37 [Josh_Soref]
tobie: testing...
12:34:41 [dom]
12:34:53 [Josh_Soref]
... dom has a document about testing
12:35:01 [Josh_Soref]
... once we've identified the gaps
12:35:08 [Josh_Soref]
... the work is to prioritize them
12:35:25 [Josh_Soref]
... i like the plan2014 document
12:35:26 [max]
max has joined #coremob
12:35:40 [Josh_Soref]
... not bothering about writing tests for stuff that's known to be interoperable in existing implementations
12:35:53 [Josh_Soref]
... but instead focus on the things which aren't known for this
12:36:04 [Josh_Soref]
jo: how do we know where there's interop without tests?
12:36:10 [Josh_Soref]
tobie: that's a hard problem
12:36:15 [Josh_Soref]
... but there are areas where there's agreement
12:36:19 [Josh_Soref]
... such as the html 5 parser
12:36:44 [Josh_Soref]
jo: we should record that we have a question here
12:36:53 [Josh_Soref]
... although it's desirable to not write tests for things known good
12:36:57 [Josh_Soref]
... it's hard to know that without tests
12:37:07 [Josh_Soref]
bryan: do we have a way to record which things we think are good
12:38:29 [Josh_Soref]
tobie: we should rely on html wg for html
12:38:36 [Josh_Soref]
... we should liaise for ietf/ecma
12:38:41 [Josh_Soref]
... for http, there's no tests
12:38:50 [natasha]
natasha has joined #coremob
12:39:14 [Josh_Soref]
jo: can we treat http as known working?
12:39:22 [Josh_Soref]
Josh_Soref: there are browsers which get http things wrong
12:39:33 [Josh_Soref]
tobie: there's an instance in iOS 6 where posts are cached
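The iOS 6 behavior tobie refers to was widely reported as Safari caching POST responses unless the server opted out; the commonly cited server-side workaround was an explicit cache header. An illustrative response fragment (not quoted from any spec):

```http
HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-cache
```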
12:40:05 [jo]
-> TFL service status for folks planning to rely on getting to airports etc. on time
12:41:07 [rob_shilston]
12:41:14 [Josh_Soref]
tobie: there are areas of http which have surprising behaviors
12:41:25 [Josh_Soref]
... where vendors are evolving their behaviors in ways that were unexpected
12:41:31 [Josh_Soref]
... the ECMAScript test suite is massive
12:41:36 [Josh_Soref]
... i don't know if we want to include it
12:41:42 [Josh_Soref]
... and there's pretty good interop
12:41:55 [tobia]
tobia has joined #coremob
12:42:35 [Josh_Soref]
tobie: there's little point in asking them to run those tests, they're already doing well
12:43:27 [Josh_Soref]
tobie: you don't necessarily need to join a WG
12:43:41 [Josh_Soref]
... but for licensing, it's best if a Company and not the CG that writes a test
12:44:02 [Josh_Soref]
jo: it's desirable that individual contributors create a test to say "this is what i mean that X doesn't work"
12:44:26 [Josh_Soref]
tobie: i gave the example of TestTheWebForward with fantasai
12:45:14 [Josh_Soref]
jo: do we see a need for a repository for people to contribute outside the w3c provided test repository?
12:45:21 [Josh_Soref]
tobie: i think that implies we'd handle the licensing
12:45:25 [Josh_Soref]
jo: which would be horrible
12:45:46 [jo]
12:46:43 [Josh_Soref]
dom: there's a requirement that any test contribution be contributed by the contributor under the w3c license
12:46:56 [Josh_Soref]
jo: there's an element to "find the WG that owns the test"
12:47:03 [Josh_Soref]
tobie: and to get them to accept the test
12:47:07 [Josh_Soref]
dom: i agree with finding the group
12:47:12 [Josh_Soref]
... i'm not sure i agree on advocacy
12:47:16 [Josh_Soref]
... if it's wrong, then it won't be accepted, but then we have no reason to push to get it adopted
12:48:42 [fantasai]
jo: So if I write a test [...]
12:48:49 [fantasai]
jo: I'd like to see if it passes or fails
12:48:51 [ArtB]
ArtB has left #coremob
12:49:06 [fantasai]
tobie: Unless you're doing something special, you should be able to run the test on your own
12:49:44 [fantasai]
tobie: Main problem I've seen is that every group has some documentation on how to write and submit a test
12:49:50 [jo]
12:49:55 [fantasai]
tobie: There is no central place where all of this is explained
12:50:05 [fantasai]
tobie: Maybe we should just point to what is the best explanation elsewhere
12:50:38 [fantasai]
dan: wrt the relationship between ringmark and w3c tests
12:50:54 [fantasai]
dan: we talked about having ringmark be the front end for w3c, now I understand we need a test runner to be in between
12:51:01 [fantasai]
dan: ... writing the test runner
12:51:06 [fantasai]
tobie: effort needed to do that?
12:51:17 [jo]
notes that explanation as to how to go about creating and submitting tests to the W3C repo may be something we need to address
12:51:25 [fantasai]
tobie: to write the test runner? Depends on how much process you stick on top of it and how good the engineers are
12:51:38 [fantasai]
tobie: I would suggest it would take a reasonably good engineer a week to do
12:52:05 [fantasai]
tobie: There's a lot of extra complexity in the ringmark runner mainly because it does something we don't want to do anymore
12:52:17 [fantasai]
tobie: which is to compile a test page from sources
12:52:23 [fantasai]
tobie: the front end itself is ~ 2000 lines of JS
12:52:26 [fantasai]
tobie: it's reasonably well-written
12:52:32 [fantasai]
tobie: it could be reused pretty effectively
12:52:40 [fantasai]
tobie: need to wire it to cross-origin
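The cross-origin wiring tobie mentions could live inside each test page: forward the finished results to the embedding runner via postMessage. This sketch assumes testharness.js's add_completion_callback hook; the message shape and the wildcard target origin are illustrative, not an agreed format:

```javascript
// Sketch: inside a test page, forward testharness.js results to the
// parent runner. Message type and shape are illustrative assumptions.
function packageResults(tests) {
  return tests.map((t) => ({ name: t.name, status: t.status }));
}

if (typeof add_completion_callback === "function") {
  add_completion_callback((tests, status) => {
    parent.postMessage(
      { type: "coremob-results", tests: packageResults(tests) },
      "*" // a real runner would pin this to its own origin
    );
  });
}
```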
12:52:44 [jo]
12:52:48 [jo]
ack d
12:53:13 [fantasai]
jo: the existing ringmark tests are not referenced here
12:53:27 [fantasai]
tobie: Impossible to know if they're useful until we have gap analysis against W3C tests
12:53:40 [fantasai]
jo: So let's do the gap analysis and then see what needs to be ported over from ringmark
12:53:49 [fantasai]
jo: what is the status of the ? that contains ringmark today
12:54:00 [fantasai]
jo: Status is pending further investigation of what tests are covered and which are not
12:54:12 [fantasai]
tobie: Best person to answer that question is Robin, he had an action to do the assessment
12:54:30 [fantasai]
tobie: His assessment was that the tests are not good enough to spend analyzing them
12:54:43 [fantasai]
tobie: better to assess the W3C tests and see what the gaps are
12:55:04 [fantasai]
tobie: He was very polite about it, but said most of the tests were doing feature testing and some of them are really not good
12:55:06 [hptomcat]
can we do the gap analysis here and now to avoid food coma? :)
12:55:10 [fantasai]
tobie: This has to be from someone else
12:55:29 [fantasai]
rob_shilston volunteers to look at some of the tests
12:55:39 [fantasai]
rob_shilston: I'll do half of them
12:55:50 [mattkelly]
hptomcat: jetlag+gap analysis = sleep for me
12:55:56 [fantasai]
jo: we need a test meister here, who is going to basically knock this into some form of plan
12:56:08 [fantasai]
jo: Can i convert your offer into a greater scope?
12:56:28 [fantasai]
rob_shilston: Only certain areas are of interest to us. We're not interested in 2D gaming, so the canvas stuff I don't have any knowledge of
12:56:36 [hptomcat]
can the gap analysis be crowd-sourced?
12:56:43 [fantasai]
rob_shilston: Could say these are areas of tests and ...
12:56:54 [hptomcat]
or split into parts and assigned to different people?
12:57:18 [fantasai]
rob_shilston: Reading through existing w3c test suites, selecting ones that are pertinent to coremob 2012
12:57:30 [fantasai]
rob_shilston: I can do that for the areas that I know about, but hard for me to do areas I don't know about
12:58:28 [fantasai]
discussion of finding expertise per spec
12:58:39 [fantasai]
fantasai: Can we do that today, look at the list of specs and split it up?
12:58:47 [hptomcat]
12:58:52 [fantasai]
tobie: Just need to flatten the list and put names on it
12:59:15 [fantasai]
tobie: Talk with Dom, he's done similar work
12:59:25 [fantasai]
rob_shilston: I'll just put a list together then, and we can go over it in 10-15 minutes
12:59:42 [fantasai]
tobie: Then we have the gap analysis of W3C tests
12:59:57 [fantasai]
tobie: we can do the same with ringmark, and if there are tests there that will fill the W3C gaps, get those tests included in the relevant repositories
13:00:23 [fantasai]
dan: [... outside test suites ]
13:00:31 [fantasai]
tobie: Given these test suites would have to be licensed in a way we can use
13:00:44 [fantasai]
dan: Can cover that, maybe look at the license issue later
13:00:51 [fantasai]
dan: Need to identify what's available to be used
13:01:09 [fantasai]
tobie: I don't know of other test suites that cover W3C tech and are not public
13:01:31 [jfmoy]
jfmoy has joined #coremob
13:01:38 [fantasai]
Dom: For audio stuff, the guy behind [some service] has a fairly advanced set of audio testcases that don't use testharness.js, but are more useful and important than what we have today
13:02:15 [fantasai]
Dom: I expect that some projects have developed testcases in different formats
13:02:37 [fantasai]
tobie: if there's stuff and it's available, we can try to shim it
13:02:49 [rob_shilston]
13:03:26 [fantasai]
tobie: Integrating 3rd party tests is a good thing to do
13:03:44 [fantasai]
tobie: either we need to host them ourselves, or we can convince the 3rd parties to use a system that we can use to run the tests directly from their servers
13:04:08 [fantasai]
tobie: to convince them to post their results with postmessage
13:04:17 [fantasai]
tobie: would make it possible to integrate with coremob and other test suites around the web
13:04:25 [fantasai]
tobie: that would be really cool
13:04:43 [fantasai]
tobie: If we talk about small projects like Dom mentioned, there's value for those projects to
13:04:47 [fantasai]
tobie: bring traction to their work
13:04:59 [fantasai]
tobie: Write a test suite to get problems fixed, so this is a win-win situation
13:05:30 [fantasai]
tobie: wrt Mozilla's tests, can't run them directly from Mozilla's servers
13:05:37 [fantasai]
fantasai: no, they're in the mercurial repo
13:06:01 [dom]
(the tests I was thinking of are the one from soundcloud )
13:06:04 [betehess]
betehess has joined #coremob
13:06:16 [fantasai]
mounir: Easy to get ahold of and run, though
13:06:44 [fantasai]
fantasai: Ideally would get them cron-synched to the WG repositories, then write shims or slowly convert them
13:06:51 [jo]
13:08:19 [fantasai]
[looking at spreadsheet]
13:08:26 [fantasai]
tobie: Let's be clear, this is a W3C spreadsheet
13:08:28 [Josh_Soref]
13:08:35 [fantasai]
tobie: Do ringmark analysis later if necessary
13:09:19 [fantasai]
tobie: The authoritative source is W3C. Where that is lacking, then we intervene
13:16:51 [lbolstad]
lbolstad has joined #coremob
13:17:57 [hptomcat]
is there an example we can go through?
13:18:06 [hptomcat]
a simple spec&test
13:19:52 [fantasai]
13:21:10 [fantasai]
13:24:58 [fantasai]
jo: So let's have 4 levels: no tests, poorly tested, widely tested, comprehensive (done)
13:25:15 [fantasai]
fantasai: You'll want to have this analysis per spec section, not just per spec
13:29:29 [Josh_Soref]
jo: the process is ...
13:29:34 [Josh_Soref]
... you open the sheet
13:29:43 [Josh_Soref]
... for each line in the summary, assigned to you
13:29:47 [Josh_Soref]
... you update your contact name to you
13:29:54 [Josh_Soref]
... you copy the template sheet
13:30:00 [Josh_Soref]
... rename the sheet to match the test
13:30:08 [Josh_Soref]
13:30:15 [Josh_Soref]
... and for each section of the spec
13:30:21 [Josh_Soref]
... you assess test coverage
13:30:26 [Josh_Soref]
... as a row in your sheet
13:30:33 [Josh_Soref]
... and then you put a summary status on the summary sheet
13:38:58 [bryan]
13:39:52 [jo]
ACTION: Bryan to supply reasons to reference SVG 1.2 Tiny rather than SVG 1.1
13:39:52 [trackbot]
Created ACTION-64 - Supply reasons to reference SVG 1.2 Tiny rather than SVG 1.1 [on Bryan Sullivan - due 2012-10-10].
13:48:53 [fantasai]
question of HTTP tests, do any exist anywhere
13:49:00 [fantasai]
discussion of data urls
13:49:06 [fantasai]
and http testing
13:50:09 [fantasai]
13:53:08 [Gavin_]
13:53:50 [fantasai]
So for cases where W3C doesn't have tests, we should survey vendors to see if they have tests
14:32:10 [rob_shilston]
14:32:39 [dom]
scribenick: rob_shilston
14:33:38 [rob_shilston]
Jo: Tobie and I chatted during the break and decided not to resume on coremob-2012 as there are other items that ought to be done first. Once we're clearer on use-cases and requirements, we can revisit it.
14:34:02 [rob_shilston]
dom: Several people were uncomfortable yesterday about the lack of progress on coremob-2012. Can we get some feedback?
14:34:37 [rob_shilston]
jo: What we've agreed is that, within a month, the requirements should be established. So, I'm hoping that everyone will be happy if the group is back on track with those by the end of October.
14:35:24 [tobie]
14:35:25 [rob_shilston]
gavin: I've a concern that the requirements docs will be redoing work that should have been done in the working groups, and that that'll then need to be revisited. I see us as an umbrella spec and don't want us to get tied into the individual specs
14:35:51 [rob_shilston]
tobie: I think the spreadsheet initiated by Matt is the starting point for building the requirements document.
14:35:57 [rob_shilston]
14:36:37 [rob_shilston]
jo: I think we'll find that the features mentioned there should broadly map to existing specs. I think on a pragmatic basis, the majority of the things we come up with are broadly matched to the specs we're working on
14:37:08 [rob_shilston]
... I think it'll be hard to work with each working group to find their requirement gathering process. Instead, we're producing the requirements for mobile use cases
14:37:17 [rob_shilston]
gavin: Do we get the working groups to validate the requirements?
14:37:21 [rob_shilston]
jo: I don't think so.
14:37:36 [jo]
ack tobie
14:38:02 [rob_shilston]
tobie: Having done some work in this area, I've found that it's difficult to express what exactly we want in the form of use cases. The high level is "we want stuff that works on other devices to work on mobile" - it's hard to have a specific use case for (say) CSS 2.1
14:38:52 [rob_shilston]
... The second problem is the risk of getting stuck in a rat hole digging into the specs of individual working groups, and that must be avoided. We need to define some high level scenarios / apps, and hopefully that'll avoid the problem.
14:39:10 [rob_shilston]
... And this has to be done in a short amount of time, which also helps avoid Gavin's concerns.
14:39:57 [rob_shilston]
... Overall, until there's a requirements document listing the use cases, then there'll be an ongoing argument as to whether a given spec is included or not.
14:40:05 [rob_shilston]
Gavin: But surely that's just moving the question
14:40:14 [rob_shilston]
Dom: No, because it's then clearer against the coremob charter
14:42:27 [rob_shilston]
jo: I think we've been overly focusing on the standards doc as a primary deliverable, but I think all deliverables are important. We should realise that there's the possibility that there will only be specs for a certain percentage of our requirements, and tests for only some of those specs
14:42:57 [rob_shilston]
... I'm pleased we've got names against actions, clarity over the methodology, and tight timescales against these actions.
14:42:59 [rob_shilston]
14:43:58 [rob_shilston]
... Speaking personally, the group is only six months old and I imagine that as the group continues and learns from its experiences, then the direction may change, which is fine. However, for now, we've got a route and we should follow that for the coming months.
14:44:19 [rob_shilston]
... "You learn to swim by getting into the swimming pool and trying, not by just standing on the side looking at it"
14:44:27 [rob_shilston]
... I'm happy that we're well positioned right now.
14:44:39 [rob_shilston]
... Dom - would you like to comment on where we are?
14:45:11 [rob_shilston]
dom: To me, this face to face has clarified a lot of things, and I now understand the obstacles that exist and the strategy for solving them.
14:45:54 [rob_shilston]
... and there are many WGs in a less strong state.
14:46:16 [rob_shilston]
Jo: I'll take an action to summarise this meeting, aiming to have done that within a week.
14:47:47 [rob_shilston]
... I propose that we basically spend the remaining time tidying up, rather than bashing through pending ACTIONs, which we can instead pick up in a forthcoming teleconference.
14:48:32 [rob_shilston]
Giri: [Presenting about Vellamo - system-level benchmarking from Qualcomm Innovation Center]
14:48:49 [rob_shilston]
... This is a project that's been going on for about four years.
14:48:58 [rob_shilston]
... The first version focussed on web runtime benchmarking
14:49:45 [rob_shilston]
... It's evolved since then, and we're now looking into four broad categories: page load, user experience (eg touch), video (performance, assessment of simultaneous playback) and device APIs
14:50:19 [rob_shilston]
... We tried to consider multiple parts of the hardware platform - from the CPU, GPU, Modem and multimedia components.
14:51:10 [rob_shilston]
... We use this tool extensively within our organisation. It is publicly available, and we're hoping to make it available as an open source project.
14:51:25 [rob_shilston]
... You can go to Google Play and get Vellamo right now.
14:52:38 [rob_shilston]
... First version (strictly HTML5 testing) included rendering, javascript engine benchmarking (eg SunSpider), user experience tests (automated tests with simulated user interactions for flinging text, images and complex webpages)
14:52:49 [rob_shilston]
... and basic networking (such as 2G and wifi)
14:53:52 [rob_shilston]
dom: How does this compare with EMBC (Embedded Microprocessor Benchmark Consortium)?
14:54:28 [rob_shilston]
Giri: We're doing some metal-level testing (eg Dhrystone) which is closer to EMBC, whereas we started with HTML5
14:55:38 [rob_shilston]
... Webviews are not the same as the browser on the device, so you always need to test in multiple different ways.
14:56:27 [rob_shilston]
... Our challenge was that, particularly for rendering, we had to be native to do accurate measurements. This is probably an action item we'll have to take from this group: to explore how web performance benchmarks can be made available from the browser.
14:57:14 [rob_shilston]
... We do crowdsource data. When people run the benchmarks, there's a bit of variability even on identical hardware platforms.
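The run-to-run variability Giri describes is why benchmark results are normally aggregated over repeated runs rather than reported from a single run. A minimal sketch of such aggregation (illustrative only; the helper name and sample numbers are made up, not Vellamo's actual methodology):

```javascript
// Summarise repeated benchmark timings, since single runs vary even on
// identical hardware (GC pauses, thermal throttling, background load, etc.).
// Illustrative sketch only -- not Vellamo's actual code.
function summarise(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return { mean, stddev: Math.sqrt(variance) };
}

const runs = [102, 98, 105, 99, 101]; // e.g. five timings in ms
const { mean, stddev } = summarise(runs);
console.log(mean, stddev.toFixed(2)); // → 101 2.45
```

Reporting mean plus spread also makes crowdsourced results comparable across submissions from the same device model.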
14:57:48 [rob_shilston]
... This can be run on any Android device - contrary to blogosphere comments, it's not just Qualcomm hardware that's supported.
14:59:16 [rob_shilston]
Giri: [Demos the app inside an emulator and shows the metal tests. HTML5 apps take a bit longer to run]
14:59:49 [rob_shilston]
... There's also a series of additional tests that do involve user-interaction, where you start measuring touch screen responses and simultaneous media playback.
15:01:27 [rob_shilston]
... We're willing to make these coremob-compatible tests. But rendering tests might be hard because of javascript issues and variability in Date() implementations.
15:02:49 [rob_shilston]
Josh_Soref: Web performance has created things similar to Date().now. If you're just looking for a clock for timing, then it's not really related to the Date() object, and you can use the XXX API.
15:03:10 [rob_shilston]
Gavin: Is there a browser-based implementation?
15:03:32 [rob_shilston]
Giri: No, and we're hoping to work with coremob to deliver that.
15:03:52 [rob_shilston]
s/XXX API/Navigation Timing/
15:05:02 [rob_shilston]
Dan: What's the relationship between this test if run between the webview and in the browser
15:05:39 [rob_shilston]
Giri: I don't currently have quantitative information about the performance. We know they run in the browser. This'll also change with Chrome on Android in Jelly Bean (4.1)
15:06:11 [rob_shilston]
Dan: What's the minimum supported Android version?
15:06:23 [rob_shilston]
Giri: I don't believe there's anything specific.
15:06:30 [rob_shilston]
Dom: Google Play says Android 2.3
15:06:48 [rob_shilston]
Jo: Thank you Giri.
15:07:05 [girlie_mac]
Showcase demo app URL -
15:07:17 [girlie_mac]
QR Code
15:07:48 [girlie_mac]
Try it on your mobile browser :-)
15:08:06 [rob_shilston]
tobie: I wrote a post on performance issues on the coremob mailing list. John Nealan(sp?) from Nokia said wouldn't it be good to build a real web app to assess the real performance issues. Rebuilding existing apps was discussed, but we decided in the end to build a camera app.
15:08:24 [hptomcat]
John Kneeland
15:08:49 [rob_shilston]
... Building an app like this would be a good showcase to developers of what they could do, and how they can use the different specs. We could then explore the gaps and challenges of cross browser implementation
15:09:27 [girlie_mac]
warning: it does not work on iOS < 6, IE, etc
15:10:28 [girlie_mac]
tap the "instagram" icon to take a pic
15:10:43 [rob_shilston]
... The TodoMVC project was considered to be a model of how a common app can be built on top of multiple frameworks.
15:11:35 [rob_shilston]
... and so it's hoped that the camera app can in future be built on top of multiple frameworks, as each will also highlight performance issues that become apparent.
15:11:56 [rob_shilston]
s/John Nealan\(sp\?\)/John Kneeland/
15:13:55 [rob_shilston]
tobie: This app covers a lot of use cases whilst being fairly simple: media capture, swiping, upload, IndexedDB etc. It covers a lot of things that I think the coremob group cares about.
15:14:25 [rob_shilston]
girlie_mac: It's only working on a few devices, mainly because of media capture API support.
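The support gap girlie_mac describes is typically probed with feature detection. A hedged sketch of the kind of check such a camera web app might use (the helper name and the `doc` parameter are illustrative, not taken from the showcase app; the `capture` attribute is from the HTML Media Capture spec):

```javascript
// Detect whether <input type="file"> with the HTML Media Capture `capture`
// attribute is available. Illustrative sketch; not the showcase app's code.
// `doc` is passed in so the check is testable outside a browser.
function supportsFileInputCapture(doc) {
  const input = doc.createElement('input');
  input.setAttribute('type', 'file');
  input.setAttribute('accept', 'image/*');
  input.setAttribute('capture', 'camera');
  // Browsers without file-input support fall back to type "text";
  // browsers implementing Media Capture expose a `capture` property.
  return input.type === 'file' && 'capture' in input;
}

// In a real page you would call: supportsFileInputCapture(document)
```

Guarding the camera path this way lets the app degrade gracefully on the devices where capture isn't available, rather than failing outright.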
15:15:42 [rob_shilston]
dom: It's a great project and really useful. It seems to work fine in Chrome on Android, but not currently in Firefox. Is this supposed to be developed for cross-browser work, or targeted at a particular browser?
15:16:12 [rob_shilston]
tobie: Tomomi had developed a bit of a prototype, and then we've hacked it into a mobile project over the last four days. So, it's very rough around the edges.
15:17:06 [rob_shilston]
... It's very much a proof-of-concept rather than even an alpha
15:17:19 [rob_shilston]
... the goal is to have this open sourced on the coremob github account.
15:18:51 [rob_shilston]
... We're looking to use Docco. It's a documentation tool and a good way of walking someone through a tutorial in code.
15:18:52 [tobie]
15:21:42 [rob_shilston]
Dan: Going back to the test running framework - doesn't option 2 (which we've chosen to do) prevent the private running of tests?
15:21:56 [rob_shilston]
tobie: Yes; we haven't got promises of further resources to actually do the other options.
15:22:18 [rob_shilston]
jo: It should be noted that we're not ruling out doing options three or four, but two is the starting point.
15:22:55 [rob_shilston]
tobie: There are lots of unknowns and analysis that needs to be done - will the W3C JSON API support what's needed? How will the proxying work for some of the more unusual tests?
15:23:11 [rob_shilston]
... Option 4 is definitely the long term desirable, but it's not something that's achievable right now.
15:23:36 [rob_shilston]
Dan: How easy is it to scale from option 2 to option 4?
15:24:21 [rob_shilston]
tobie: There'll be reusable components.
15:24:43 [rob_shilston]
jo: The merit of option 2 is that it's a constrained engineering problem that can be solved with the available resources and can deliver useful results.
15:25:24 [rob_shilston]
tobie: Until the W3C JSON API is enhanced, it's hard to determine whether to pursue option 3 or 4.
15:26:05 [rob_shilston]
Dan: How can we use option 2 with pre-release devices?
15:26:41 [rob_shilston]
tobie: You can't; your device details will get out. You'll need to work out how to do this, and it'll probably depend on exactly what agreement you've got with OEMs.
15:27:21 [rob_shilston]
tobie: If the concern is the user-agent being seen visiting websites, then option 2 can still work fine by copying the test runner and using it on your own servers.
15:29:13 [rob_shilston]
jo: That's a wrap.
15:29:31 [rob_shilston]
josh_soref: I'll try to have the minutes tidied in one week's time.
15:29:48 [rob_shilston]
jo: Follow up meetings - we've been holding teleconferences approximately every fortnight. Shall we continue?
15:30:07 [rob_shilston]
[implicit yes from the group]
15:30:56 [rob_shilston]
jo: Time is 2pm UK time, as that's equally inconvenient for east Asia and west coast US. Should we vary the times?
15:31:19 [rob_shilston]
[implicit 2pm on Wednesdays]
15:31:21 [hptomcat]
Asia for the next F2F?
15:31:33 [rob_shilston]
jo: Let's plan to meet again in January.
15:33:37 [hptomcat]
since HP is all over the place, I'll try to find a location in Asia
15:34:11 [rob_shilston]
Mansouk: I'll see if Samsung can host in Seoul in late January.
15:34:29 [jo]
ACTION: Jo to look at organising the next F2F late Jan in Asia
15:34:29 [trackbot]
Created ACTION-65 - Look at organising the next F2F late Jan in Asia [on Jo Rabin - due 2012-10-10].
15:36:03 [rob_shilston]
RESOLUTION: Thank Mozilla for generous hosting and excellent catering.
15:36:14 [rob_shilston]
RESOLUTION to Thank Mozilla for generous hosting and excellent catering.
15:36:28 [rob_shilston]
RESOLUTION to thank Tobie for preparing all the documentation in preparation for the meeting.
15:36:34 [rob_shilston]
RESOLUTION to thank the scribes for writing
15:36:51 [rob_shilston]
RESOLUTION to thank the chair for his excellent chairing.
15:37:28 [Josh_Soref]
i/Tobie/[ Applause ]/
15:37:32 [Josh_Soref]
i/scribes/[ Applause ]/
15:37:37 [Josh_Soref]
i/chair/[ Applause ]/
15:38:06 [Josh_Soref]
s/RESOLUTION to Thank Mozilla for generous hosting and excellent catering.//
15:38:11 [Josh_Soref]
RRSAgent, draft minutes
15:38:11 [RRSAgent]
I have made the request to generate Josh_Soref
15:43:31 [Josh_Soref]
s|s/John Nealan\(sp\?\)/John Kneeland/||
15:43:47 [Josh_Soref]
s/John Nealan(sp)/John Kneeland/
15:48:28 [Josh_Soref]
15:49:01 [hptomcat]
hptomcat has left #coremob
15:49:35 [Josh_Soref]
15:49:40 [Josh_Soref]
RRSAgent, draft minutes
15:49:40 [RRSAgent]
I have made the request to generate Josh_Soref
15:51:57 [Josh_Soref]
15:52:51 [Josh_Soref]
RRSAgent, draft minutes
15:52:51 [RRSAgent]
I have made the request to generate Josh_Soref
15:53:32 [Josh_Soref]
s|s/John Nealan(sp)/John Kneeland/||
15:53:57 [Josh_Soref]
15:54:04 [Josh_Soref]
15:55:02 [Josh_Soref]
RRSAgent, draft minutes
15:55:02 [RRSAgent]
I have made the request to generate Josh_Soref
16:12:23 [Alan]
Alan has joined #coremob
16:25:00 [JenLeong]
s/Navigation Timing/Navigation Timing API/
16:25:13 [JenLeong]
RRSAgent, draft minutes
16:25:13 [RRSAgent]
I have made the request to generate JenLeong
17:11:29 [Zakim]
Zakim has left #coremob
17:48:26 [JAB]
JAB has joined #coremob
18:01:42 [Alan]
Alan has left #coremob
19:14:44 [andreatrasatti]
andreatrasatti has joined #coremob
20:08:50 [jet]
jet has joined #coremob