W3C

- DRAFT -

SV_MEETING_TITLE

31 Oct 2012

See also: IRC log

Attendees

Present
Jean-Francois_Moy, sgodard, Arnaud_Braud
Regrets
Chair
SV_MEETING_CHAIR
Scribe
tobie

Contents


Dan: missing from WebPerf agenda: end-user and operator visibility in perf optimization.

Paul: agreed. info is required from users of specs
... look at the audio spec
... every spec is really missing perf metrics for exit criteria.
... which means that implementations might not be usable by devs.
... I'd like to get perf criteria as a requirement to go to Rec.
... I'd like to get feedback about this.

Henry: skeptical about this because of the wide range of devices that implement specs.
... we could define profiles.
... much more important is to be able to measure.

Daniel: requires a bigger focus on network.

Henry: not only that.

phl: difficult to define this because of the different class of devices.

Paul Bakaus: this should be done based on use cases that can be easily expressed as perf reqs

scribe: some are hard to express, but some are easy.

Henry: seamless audio splicing is an important issue with [?]

Dan: perf doesn't necessarily mean numbers.
... defining perf profiles within specs can help devs

Daniel: would like to be able to compare native apps perf to web apps.

pbakaus: difficult to do the spec work in the native side.

Daniel: it's similar at the network level.
... so you can get at least network perf.

phl: network measurement is part of the discussion at webperf, so adding use cases would be good. The group has little experience in networking, so please join if you do.

Daniel: paypal native app calls http apis
... backend is similar to what you have for regular web apps.
... one of the thing webperf should do is look into this.

pbakaus: other suggestions on how to make perf a criterion for successful specs?
... what bothers me is seeing implementors claim support without good enough perf characteristics.

slightlyoff: phrase your questions in form of benchmarks.
... eg robohornet effort
... implementors want clarity too.

pbakaus: adding benchmarks to spec criteria seems reasonable.
... related issue:
... problem is not only with slow implementations but also that some specs can't be used because related specs aren't implemented.
... eg every dev runs into the same kind of problem on mobile using the gyroscope
... which can't be used because orientation lock is missing.
... real problem that can't be expressed easily.
... what can be done?

Daniel: specs list deps
... would you see a need for something beyond that?

pbakaus: in that case, screen orientation doesn't exist yet.

slightlyoff: how that's been done is to pile up everything that's required in the same spec.

Daniel: perf isn't taken seriously.

Henry: can't be right.
... perf is crucial.

slightlyoff: coremob test suite has perf aspects to it.
... game here is to get vendors interested in your use cases
... benchmarks
... benchmarks are hard to write
... can cause premature optimization
... pushing back on the idea that people don't care.
... they do.

bbb: we have people doing games. we found that the best solution was to talk with vendors and ask them why a particular problem is slow.

pbakaus: lot of interest in perf, we just need to know how to make it happen.
... works well when benchmarks are not written by vendors.
... used as a way to market browsers.
... any other feedback?

ccc: making benchmarks part of an accepted test suite makes sense.
... better in a test suite for process reasons.

Daniel: within W3C we could decide on a vendor-agnostic methodology to define how to measure the web.
... this should be pretty straightforward.

slightlyoff: you're vastly underestimating the complexity and difficulty of the problem.

Daniel: maybe but this would still be a useful doc.

ccc: we need to write benchmarks first
... open-source browsers have benchmarks
... why don't we migrate those back into the W3C?

phl: space of webperf is pretty vast.
... it helps when people come with use cases and examples.
... where do we start?

pbakaus: is webperf the right venue?

phl: not convinced either way.
... but that could be possible.

pbakaus: i like the benchmark approach but that works for single specs only
... devs use tech in ways that aren't considered
... eg property change in IE
... some of those technologies, when used together, can get really slow.
... not sure how to bring this up to W3C.
... is this a problem? Where should this be brought?
... devs using a combination of tech and discovering it is slow.

Daniel: examples?

pbakaus: using SVG and canvas is slow.
... technically it works.
... but it's slow.

phl: using benchmarks is the right approach to raise the issue.
... in the webperf WG, we realized Firefox was creating extra work when using CSS.
... so we talked to Mozilla.

scribe: vendors need to be told about those issues and helped to prioritize

pbakaus: what is the venue for this?

Henry: in terms of end-to-end benchmarking you have friends at the IETF.
... everyone agrees that a lot of work will need to be put into benchmarking.
... not the same focus as here, but there's a lot of overlap.

Dan: I can also take this to the IETF next week.
... IETF has working groups that are specialized on benchmarking

Daniel: early days for that group.

Henry: the name HTTPbis is getting overloaded.

pbakaus: bring pain points to the webperf mailing list.
... so far we've been talking about memory perf.
... garbage collection.
... and tobie has been talking about scrolling perf.

scribe: we need to hear more from you.

<dan_romascanu> ietf bmwg - https://datatracker.ietf.org/doc/charter-ietf-bmwg/

phl: we published a survey

<ht> So, the httpbis IETF WG has been rechartered to put HTTP/2.0 into scope, see http://tools.ietf.org/wg/httpbis/charters for an overview

phl: we have to decide between now and the end of Nov what our focus is going to be.
... we are good at implementing stuff.

pbakaus: super grateful for the work of the webperf group.

<sgodard> tobie: Great great job for your first time as scribe ;)


Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.137 (CVS log)
$Date: 2012/10/31 14:25:13 $
