Meeting: WebML WG Teleconference – 10 February 2022
Chair: Anssi
Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2022-02-10-wg-agenda.md
Scribe: Anssi
scribeNick: anssik
Present: Anssi_Kostiainen, Ningxin_Hu, Rafael_Cintron, Chai_Chaoweeraprasit, Dominique_Hazael-Massieux, Raviraj_Pinnamaraju, Geunhyung_Kim, Jonathan_Bingham, Daniel_LaLiberte

Topic: Security review response

anssi: on the last call we reviewed the security review feedback from the Chrome team
... and agreed that there was a lot of good feedback, with a few open questions from our end
... I started a draft pull request https://github.com/webmachinelearning/webnn/pull/251
... which I would like us to refine before circling back with the security people
-> General Security Questions https://github.com/webmachinelearning/webnn/issues/241
-> PR: Update Security Considerations per review feedback https://github.com/webmachinelearning/webnn/pull/251
-> All security-tracker issues https://github.com/webmachinelearning/webnn/issues?q=label%3Asecurity-tracker+
anssi: I need your review; please take a look at it
anssi: integrating that security feedback will be important on our path to Candidate Recommendation
-> Guidelines/philosophy for new operations, including security principles #242 https://github.com/webmachinelearning/webnn/issues/242
anssik: I welcome feedback on my response to #242
-> Op metadata that helps avoid implementation mistakes #243 https://github.com/webmachinelearning/webnn/issues/243
dom: I think this is about having pro-forma information about ways operations can create security risks
chai: IIRC, #243 was considered an implementation detail
chai: for some of the operations, you don't know the tensor shape before execution, when constructing the graph
... I don't see how you would be able to annotate / validate at construction time
... bounds checking would have to be done at execution time
dom: I think the reviewer is asking us to highlight this information consistently across the spec; I think validation is not expected at graph construction time
... the comment says "create checks that can be enforced as a graph is constructed, and as a graph is executed"
ningxin_hu: some of the questions (e.g. tensor size input/output) already have answers in the prose
dom: my interpretation is that the reviewer is asking for this information to be provided consistently across ops, in some regular convention, e.g. a subsection or a table
... currently this is hidden in the prose, so having it extracted in a quick-to-parse format would facilitate reviews
ningxin_hu: is this metadata supposed to be part of the spec, or a separate doc?
dom: in the spec; in addition to describing this information in prose, have it in a consistent format, e.g. in a table
chai: it may not be possible to extract very useful metadata given the complexity of the operations
anssik: let's clarify that in the spec and then with the reviewer
... chai, maybe you could chime in on the issue?
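To make the "consistent format" idea concrete, here is a purely hypothetical sketch (in JS, for illustration only; the operation, field names and constraints below are not taken from the spec or from issue #243) of the kind of per-op summary such a subsection or table could capture:

    // Hypothetical per-op validation summary, sketched for a gemm-like operation.
    const gemmValidationSummary = {
      op: 'gemm',
      // Checks that can be enforced while the graph is constructed,
      // because the operand shapes are known at build time.
      graphConstructionChecks: [
        'inputs a and b must be 2-D tensors',
        'the inner dimensions of a and b must match (after any transpose options)',
      ],
      // Checks that can only be enforced when the graph is executed
      // against concrete input and output buffers.
      graphExecutionChecks: [
        'each input buffer must be at least as large as its operand descriptor implies',
        'the output buffer must match the computed output shape',
      ],
      outputShape: '[rows of a, columns of b]',
    };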
-> A conformance suite with disallowed intra-op examples would be helpful for hardening #244 https://github.com/webmachinelearning/webnn/issues/244
anssik: any concerns with the proposal to formalize these failure cases in the test suite?
Anssi, I agree with your response in the GitHub issue.
chai: I'm not sure the notion of badly formed graphs applies here
... you can construct any graph you want; not sure what a badly formed graph would be
... I don't think we reject malformed graphs in DirectML
... one case perhaps would be an insanely complex graph that would trigger a DoS attack
ningxin_hu: would this be related to input buffers with out-of-bounds access (e.g. a smaller buffer than what the graph defines)?
... another scenario would be around constant uploading; the shape and the uploaded buffer could mismatch and create another case of out-of-bounds access
anssi: it would be helpful to bring this back to the issue
anssi: how would we bring this to WPT?
dom: how to bring this to web-platform-tests is something we need to discuss; failures would throw exceptions
... OOB would require us to be extra careful and thorough in covering error scenarios
will do that
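As a concrete example of the failure cases discussed above, a minimal sketch (plain JS; not from the spec, web-platform-tests, or any implementation) of the kind of check a hardening test for #244 could exercise: rejecting an input buffer smaller than its operand descriptor implies, instead of reading or writing out of bounds. The descriptor shape and the type names are assumptions made for illustration.

    // Byte sizes assumed for the illustration; the operand types vary by spec version.
    const BYTES_PER_ELEMENT = { float32: 4, float16: 2, int32: 4, uint32: 4, int8: 1, uint8: 1 };

    // Throws instead of allowing an out-of-bounds read/write when the supplied
    // buffer is smaller than the tensor described by { type, dimensions }.
    function validateBufferFitsDescriptor(bufferView, descriptor) {
      const elementCount = descriptor.dimensions.reduce((acc, d) => acc * d, 1);
      const expectedByteLength = elementCount * BYTES_PER_ELEMENT[descriptor.type];
      if (bufferView.byteLength < expectedByteLength) {
        throw new TypeError(
          `buffer is ${bufferView.byteLength} bytes but the descriptor implies ` +
          `${expectedByteLength} bytes`);
      }
    }

    // e.g. a 2x2 float32 tensor needs 16 bytes, so a 3-element Float32Array (12 bytes) must be rejected:
    // validateBufferFitsDescriptor(new Float32Array(3), { type: 'float32', dimensions: [2, 2] });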
Topic: Privacy review refresh

anssi: we completed a first privacy review a year ago; we should discuss and identify changes that may impact privacy properties
... we would then go to PING with a delta change request
dom: yes, that delta review sounds good
... there's a possibility PING's understanding of the privacy space has changed, so they may also revisit some earlier discussions
... in terms of timing, given the discussion on whether to make some parts of the API device-type specific, and the impact that may have on exposing what devices are available, my guess is it is better to wait for that to be clarified first
... and only then go back to PING, to avoid too much back and forth
anssi: we should probably hand them the paper trail of the previous review to help
... also hearing the potential dependency on the device type
dom: if answers to the questionnaire have changed, then revising those makes sense; give them a diff from the previous version to the current one, plus the PRs that got merged with possible privacy impact
-> All privacy-tracker issues https://github.com/webmachinelearning/webnn/issues?q=label%3Aprivacy-tracker+

Topic: Double-precision baseline implementation of WebNN operations for testing

-> PR: WebNN baseline implementation for first-wave ops https://github.com/webmachinelearning/webnn-baseline/pull/1
-> GH repo (webnn-baseline is a CG deliverable) https://github.com/webmachinelearning/webnn-baseline
-> WebML CG Charter: Test Suites and Other Software https://webmachinelearning.github.io/charter/#test-suites
ningxin_hu: this project answers the request to establish baseline results for the WebNN conformance tests
... chai had mentioned this should have all the computation done in double precision, for both input and output data
... that this should be done in an open source implementation, and made easy to review (dom helped improve readability)
... it was also suggested that reviewability would be helped by having very limited dependencies, as opposed to e.g. the polyfill, where performance is more important
... so we developed this implementation that we want to contribute to the CG as a tool to generate results for the WebNN conformance tests
... we've implemented 41 operations linked to the first-wave models, based on JS double-precision numbers and calculations
... the algorithms are plain and straightforward, without dependencies
... and brought it to a repository under the webmachinelearning GitHub org, with the webnn-baseline name
... I think it's now in good shape for a wider review of the implementation of the operations
... to help with reviewability, input validation and other utilities have been moved to separate modules
anssi: thank you for bringing this work to that stage!
Rafael: is this all your code?
ningxin_hu: yes, all of our own
... to be clear, it's not a WebNN polyfill - it's implementing the WebNN ops
... based on their semantics, in JS
Rafael: can this be used along with the polyfill for a JS-only version of the spec?
ningxin: yes, but at the cost of performance
Rafael: I'm not opposed to the principle of having something like this
ningxin: a significant amount of the code is test code to test the implementation
dom: just to highlight what ningxin_hu said, we as a WG should not be scared by the high LOC
... the idea is to be able to focus on the raw code of the computation
... if a few of us could give an in-depth look at this, it'd increase our confidence further
... comparison with other implementations will allow us to compare the results
chai: we can help review conv2d
... the other operators are quite straightforward
... we have our own baseline for our conformance testing, so we can do some comparison and see if you're missing some cases
... at a high level, the current code looks reasonable, but we would want to make sure corner cases are taken care of
appreciate
anssi: so let's keep the PR open to capture the conv2d review from chai
conv2d impl: https://github.com/webmachinelearning/webnn-baseline/pull/1/files#diff-d222e933d836a550dfd80d8260c94df0ee0179aa1eb52d86c88caa38b3519052
ningxin_hu: the complex ones like conv2d definitely require review; for the other big ops (like gru and gruCell), we followed what's defined in the spec, by composing them on top of smaller ops
... we captured bugs in the spec while doing so
RESOLUTION: The WebML WG recommends its incubator WebML CG to adopt webnn-baseline as a new deliverable per the "Test Suites and Other Software" provision to help demonstrate Web Neural Network API implementation experience
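To illustrate the style described above (plain JS numbers, which are IEEE 754 double precision; straightforward loops; no dependencies), here is a minimal sketch of a 2-D matrix multiplication in that spirit. This is not the actual webnn-baseline code; the { shape, data } tensor representation with a flat row-major array is an assumption made for illustration.

    // Multiplies an m x k tensor by a k x n tensor, accumulating in double precision.
    function matmul2d(a, b) {
      const [m, k] = a.shape;
      const [k2, n] = b.shape;
      if (k !== k2) throw new Error('inner dimensions do not match');
      const out = { shape: [m, n], data: new Array(m * n).fill(0) };
      for (let i = 0; i < m; ++i) {
        for (let j = 0; j < n; ++j) {
          let sum = 0;
          for (let p = 0; p < k; ++p) {
            sum += a.data[i * k + p] * b.data[p * n + j];
          }
          out.data[i * n + j] = sum;
        }
      }
      return out;
    }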
Topic: WebNN API open pull requests

-> Open PRs https://github.com/webmachinelearning/webnn/pulls
-> Update createContext() and MLContext interface prose and algorithms #250 https://github.com/webmachinelearning/webnn/pull/250
anssi: this might also help with the discussion around MLContext
https://github.com/webmachinelearning/webnn/issues/230#issuecomment-1034604857
-> Add MLOperand and update MLOperator description #237 https://github.com/webmachinelearning/webnn/pull/237
-> Fix output size calculation of conv2d and convTranspose2d #232 https://github.com/webmachinelearning/webnn/pull/232
ningxin_hu: would like chai's review on the output size calculation due to the dilation parameter (based on implementation feedback)
-> Adding a new use case for 'Framework Use Cases' https://github.com/webmachinelearning/webnn/pull/207
anssik: thanks everyone for joining, and Dom for scribing!
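For reference on the output size calculation discussed under PR #232 above: a minimal sketch of the conventional convolution output-size arithmetic along one spatial dimension, showing how the dilation parameter enters through the effective filter size. This is the textbook formula, not text taken from the PR or the spec.

    // Output size of a conv2d along one dimension, with explicit padding.
    function conv2dOutputSize(inputSize, filterSize, stride, dilation, padBegin, padEnd) {
      // A filter dilated by d covers (filterSize - 1) * d + 1 input elements.
      const effectiveFilterSize = (filterSize - 1) * dilation + 1;
      return Math.floor((inputSize - effectiveFilterSize + padBegin + padEnd) / stride) + 1;
    }

    // e.g. a 3-wide filter with dilation 2 spans 5 elements, so a 7-wide input
    // with stride 1 and no padding yields conv2dOutputSize(7, 3, 1, 2, 0, 0) === 3.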