W3C

Web Real-Time Communications Working Group Teleconference

26 Apr 2018

Agenda
Slides

Attendees

Present
Stefan_Hakansson, Bernard_Aboba, Harald_Alvestrand, Vivien_Lacourba, Dominique_Hazael-Massieux, Daniel_Burnett, Alexandre_Gouaillard, Emmanuel_Andre, Jan-Ivar_Bruaroey, Karthik_Budigere_Ramakrishna, Lennart_Grahl, Lennart_Schulte, Martin_Varela, Nils_Ohlmeier, Patrick_Rockhill, Philipp_Hancke, Soares_Chen, Taylor_Brandstetter
Regrets
Chair
Stefan_Hakansson, Bernard_Aboba, Harald_Alvestrand
Scribe
Vivien

Contents

  1. WebRTC 1.0 Implementation Status (Dom)
  2. WPT
  3. WPT Status
  4. WPT ownership
  5. Test Helpers
  6. WPT/WebRTC Issues and PR
  7. WPT Example Test (Fippo)
  8. Cross-Browser Testing (Lennart)
  9. Summary of Action Items
  10. Summary of Resolutions

(recording in progress)

WebRTC 1.0 Implementation Status (Dom)

Dom: worked on an implementation status report based on the Web Confluence project

Bernard: was somewhat surprised seeing this report; some features are only implemented by one browser
... as you scroll down you get more red, meaning not implemented
... question is how do those results change when you use adapter.js
... should we track results with adapter.js so that we can work on those implementation details

Jan-Ivar: from the list you have here, I don't think the difference will be that great

Bernard: maybe 10 entries or less (7 I think)

Alex: adapter.js is a tool to mitigate the differences between the browsers, not something to show implementation results, so I would say we should not use it here

Jan-Ivar: same comment

Dom: Rather than running tests with adapter.js, we should collect annotations on the red methods and properties to understand whether it is trivial to fix (e.g. a name change) or whether nobody implemented it

Jan-Ivar: the charter says if a feature has only one implementation it should be removed, so adapter.js could show other implementations with slight differences

Burn: I think it is fair to list adapter.js as it is here to help devs, and we should list it as an implementation

Nils: I am fine with it if it is an addition to the default browser implementation

Bernard: Dom, we can look at adding a new entry for adapter.js

Harald: It would depend on the feature. If a feature is implemented in 2 browsers it is done. If it works with adapter.js this likely means 2 people understood the spec

Bernard: Dom and I will look at adding a new column

RESOLUTION: Dom and Bernard to look at adding new columns for adapter.js

WPT

Bernard: even more red than the Confluence tracker; some comes from permission timeouts, some from other reasons
... not very representative of what is implemented

Jan-Ivar: if you have 99 tests and 1 is failing you get red

Nils: could we have yellow?

Burn: there is some shading

Nils: I heard of people using those results to decide if they should use WebRTC or native APIs, and this red scares them

(discussing the difference between light red and not-so-light red)

Harald: we are suggesting to use yellow for the lighter red

RESOLUTION: We suggest using yellow instead of the lighter red in the WPT results colors

WPT Status

Bernard: only a few tests since we adopted the "test before commit" policy, and we have no issues with the "Needs test" label
... do we need a process to improve PR submission?

WPT ownership

Soares: most WPT owners are working on a volunteer basis, so there is a delay before a test is reviewed and accepted
... so if you are willing to help and become an owner, please say so

Alex: when Soares worked on all those tests last year I tried to get all browser teams involved to review the tests, and I was unsuccessful
... What fippo is currently doing, giving thumbs up, is very helpful

Burn: from experience in other groups, the QA people are very good at it

Nils: This assumes you have QA people knowledgeable in the field

Harald: back to the original problem: finding enough QA people

Jan-Ivar: is the bottleneck finding reviewers or finding owners?

Harald: I think we need reviewers first. It is the obvious bottleneck now
... Can we get commitment from browser vendors to get more reviewers?

Bernard: I'll try to find someone from the Edge Team

Harald: And I'll look for someone at Google (not me)

Jan-Ivar: can we make sure the submitter has tested it before?

Alex: usually people test on one browser and then rely on Travis to test on the other browsers
... there are automatic tests on PRs

Dom: it runs at least on Chrome and Firefox to check that there are no obvious test failures
... what Travis checks is that your test produces consistent results across runs on Chrome and Firefox

Harald: In Chrome a bug is filed automatically if a new failure occurs in a given directory; I have only received a few of those for WebRTC so I'm not sure this is really working

Nils: When a PR is submitted, could it look at Confluence to see if a feature is supposed to work?

Dom: One challenge is to determine automatically what you are trying to test
... this double check is what we are expecting reviewers to do

Nils: do we require review from all browser vendors?

Dom: To clarify, test reviewers are reviewing the test; they are not there to make sure that a given test passes on all browsers. Knowing something has been implemented is only a hint

RESOLUTION: Implement Harald's suggestion of finding reviewers from the various browser teams

Test Helpers

Soares: more and more tests use shared variables and script inclusion
... we could move those helper functions to a dedicated directory
... also not sure what ES Modules and other features we can use in WPT

Jan-Ivar: I would support using async/await; it is supported by all browsers implementing WebRTC and I think I have seen it used elsewhere in WPT

Harald: we already have 66 occurrences of await in WebRTC tests
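(For illustration, a minimal WPT test written with async/await; the test body and name below are hypothetical, not taken from the slides:)

    // Hypothetical WPT test using async/await (testharness.js).
    promise_test(async t => {
      const pc = new RTCPeerConnection();
      t.add_cleanup(() => pc.close());   // release the connection when the test ends
      pc.createDataChannel('probe');     // give the offer something to negotiate
      await pc.setLocalDescription(await pc.createOffer());
      assert_equals(pc.signalingState, 'have-local-offer');
    }, 'setLocalDescription(offer) with async/await');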

Nils: I'm in favor of the helper directory to aid new contributors

Jan-Ivar: sounds good

RESOLUTION: The group agrees to take the recommendations from Soares
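(As a sketch of Soares' helper-directory recommendation; the file name and function below are hypothetical:)

    // Hypothetical shared helper, e.g. in a dedicated helpers directory,
    // included by individual tests via a script tag.
    // Creates two peer connections that exchange ICE candidates, and
    // registers cleanup so the test always closes them.
    function createPeerConnectionPair(t) {
      const pc1 = new RTCPeerConnection();
      const pc2 = new RTCPeerConnection();
      t.add_cleanup(() => pc1.close());
      t.add_cleanup(() => pc2.close());
      pc1.onicecandidate = e => { if (e.candidate) pc2.addIceCandidate(e.candidate); };
      pc2.onicecandidate = e => { if (e.candidate) pc1.addIceCandidate(e.candidate); };
      return [pc1, pc2];
    }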

WPT/WebRTC Issues and PR

Issue 7424: Need mock MediaStream data for some WebRTC tests

Issue 7424

Jan-Ivar: for WPT, do we really need to test with camera and mic to make sure the peer connection works?

Alex: Apple released an implementation in WebDriver to @@@ but it is proprietary
... WPT does not depend on WebDriver

Dom: There is ongoing work to make some test cases depend on WebDriver, but as Jan-Ivar says, if we don't need this to test the peer connection we should skip calling getUserMedia()
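(A sketch of what Jan-Ivar and Dom describe: the offer/answer exchange can be exercised without camera or mic by negotiating a data channel instead of calling getUserMedia(); hypothetical example:)

    // Hypothetical test exercising the peer connection without getUserMedia().
    promise_test(async t => {
      const pc1 = new RTCPeerConnection();
      const pc2 = new RTCPeerConnection();
      t.add_cleanup(() => pc1.close());
      t.add_cleanup(() => pc2.close());
      pc1.createDataChannel('test');   // no capture devices needed
      const offer = await pc1.createOffer();
      await pc1.setLocalDescription(offer);
      await pc2.setRemoteDescription(offer);
      const answer = await pc2.createAnswer();
      await pc2.setLocalDescription(answer);
      await pc1.setRemoteDescription(answer);
      assert_equals(pc1.signalingState, 'stable');
    }, 'offer/answer completes without getUserMedia()');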

Jan-Ivar: in FF we have tests that analyze the data that comes through, not sure if Chrome is doing so.

Harald: we do

Jan-Ivar: anything that would make it greener would be good
... in FF we have those ini files to set some prefs
... is the issue that Firefox does not accept command-line parameters?

Bernard: Alex, does Firefox time out?

Alex: it is mainly Edge
... for Edge there are hacks to deal with the permissions, but not very stable from one revision to the next

Bernard: conclusion from this discussion ?

Alex: good to have the list, I'll have my team test them

RESOLUTION: Alex's team will test this and make recommendations

Issue 9213: Parts of WebRTC require generating RTP to test

Issue 9213

Alex: we'll talk about this in a minute

Issue 836871: WebRTC Tests Are Leaking Resources

Chrome Issue 836871

Fippo: Tests are missing reviews from other browsers
... would be good to say: if you fix this, we will be done here
... Cleaning up after the tests is really important
... saw a limit at around 20 concurrent peer connections

Harald: strange, in Chrome we have a limit of 500

Nils: one cause of this issue is Travis' limited resources and a limit on the number of open ports

Fippo: same issue with getUserMedia if you don't release the tracks after the test. So code reviewers should look for that
... Depending on addTransceiver was a hack; this was fixed in Chrome and is a Priority 1 for the FF team
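(A sketch of the cleanup pattern Fippo asks reviewers to look for: stop captured tracks and close connections when the test ends; hypothetical example:)

    // Hypothetical test that releases its resources: captured tracks are
    // stopped and the connection is closed even if an assertion fails.
    promise_test(async t => {
      const stream = await navigator.mediaDevices.getUserMedia({audio: true});
      t.add_cleanup(() => stream.getTracks().forEach(track => track.stop()));
      const pc = new RTCPeerConnection();
      t.add_cleanup(() => pc.close());
      stream.getTracks().forEach(track => pc.addTrack(track, stream));
      const offer = await pc.createOffer();
      assert_true(offer.sdp.includes('m=audio'), 'offer contains an audio section');
    }, 'getUserMedia-based test that releases its resources');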

Bernard: you should have a test to make sure that RTCPeerConnection() with no arguments works
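(Roughly, as a hypothetical sketch:)

    // Hypothetical sketch of the test Bernard suggests.
    test(() => {
      const pc = new RTCPeerConnection();   // no configuration argument
      assert_equals(pc.signalingState, 'stable');
      pc.close();
    }, 'new RTCPeerConnection() with no arguments succeeds');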

RESOLUTION: Using helper functions will make this easier

Nils: problem if someone modifies the helper functions and all tests start failing

Fippo: You then need tests for the tests
... I do not recommend automatic merging without review

Jan-Ivar: a larger issue: there is a safe area that most browsers support; it is hard to write dependencies so that just one test fails

Nils: WPT tests have no check at the beginning to see if a given feature is implemented before running the test
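(A sketch of the kind of up-front feature check Nils describes, failing early with a clear message instead of deep inside the test; hypothetical example:)

    // Hypothetical guard: fail with a clear message if the feature under
    // test is missing, before exercising it.
    promise_test(async t => {
      assert_true('addTransceiver' in RTCPeerConnection.prototype,
                  'addTransceiver is not implemented');
      const pc = new RTCPeerConnection();
      t.add_cleanup(() => pc.close());
      const transceiver = pc.addTransceiver('audio');
      assert_equals(transceiver.direction, 'sendrecv');
    }, 'addTransceiver guarded by a feature check');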

Jan-Ivar: Should we add review, or should we not add barriers?

(discussion about a test that was failing in WPT with Firefox)

WPT Example Test (Fippo)

Example test

Fippo: one bad thing: the cleanup needs to go in the setup function

LennartG: this async test is an issue here as tests are run in parallel, so you can have more than 20 active peer connections at the same time

Jan-Ivar: I've been a bit scared by wrappers in the past; it is a pattern I don't like in tests. I prefer tests to test the current API

(slide 35)

RESOLUTION: We don't want to add wrappers and prefer Jan-Ivar's approach

Cross-Browser Testing (Lennart)

LennartG: I wrote new DataChannel tests for WPT
... I noticed that most cross-browser tests overlap single-browser tests

(showing the architecture slide 38)

(showing example on slide 39)

(slide 40)

LennartG: hoping to see what breaks and what works in cross-browser world

Nils: In FF we have looked at this path and gave up, too much overhead and too many problems
... so carefully evaluate whether this is worth the extra cost

Alex: WPT was not meant for cross-browser testing; only a few features need that, including WebRTC. This is why we worked on KITE

LennartG: wouldn't using KITE require to rewrite your tests ?

Nils: WPT is for single-browser tests, whereas KITE can help with interoperability

LennartG: as I said, I saw a lot of overlap

Nils: I agree; we went down this path, it was costly, and we went back

LennartG: We should discuss what failed exactly

Bernard: we are running out of time; we will likely have another meeting on Testing so that Alex can give us an update on KITE

(meeting adjourned)

Summary of Action Items

Summary of Resolutions

  1. Dom and Bernard to look at adding new columns for adapter.js
  2. We suggest using yellow instead of the lighter red in the WPT results colors
  3. Implement Harald's suggestion of finding reviewers from the various browser teams
  4. The group agrees to take the recommendations from Soares
  5. Alex's team will test this and make recommendations
  6. Using helper functions will make this easier
  7. We don't want to add wrappers and prefer Jan-Ivar's approach
[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2018/04/26 17:43:06 $