13:58:05 RRSAgent has joined #webmachinelearning
13:58:05 logging to https://www.w3.org/2020/04/30-webmachinelearning-irc
13:58:11 Zakim has joined #webmachinelearning
13:58:15 RRSAgent, make logs public
13:58:19 Meeting: WebML CG Teleconference – 30 April 2020
13:58:33 Chair: Anssi
13:58:37 Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2020-04-30-agenda.md
13:58:41 Scribe: Anssi
13:58:45 scribeNick: anssik
13:58:57 Present+ Anssi_Kostiainen
13:59:00 Present+ Rafael_Cintron
13:59:33 ningxin_hu has joined #webmachinelearning
14:00:13 Present+ Ganesan_Ramalingam
14:00:24 daniel_smilkov has joined #webmachinelearning
14:00:32 Present+ Ningxin_Hu
14:00:43 Regrets+ Daniel_Smilkov
14:00:58 Present+ Paul_McDaniel
14:00:58 RafaelCintron has joined #webmachinelearning
14:00:59 gabe_intel has joined #webmachinelearning
14:01:35 Jonathan has joined #webmachinelearning
14:01:37 Present+ Gabe_Esteven
14:01:39 Rama has joined #webmachinelearning
14:02:05 Present+ Jonathan_Bingham
14:02:34 Present+ Ping_Wu
14:02:43 RRSAgent, draft minutes v2
14:02:43 I have made the request to generate https://www.w3.org/2020/04/30-webmachinelearning-minutes.html anssik
14:03:22 paul_mcdaniel_msft has joined #webmachinelearning
14:03:28 Nikhil has joined #webmachinelearning
14:03:51 RRSAgent, make logs public
14:04:01 RRSAgent, draft minutes v2
14:04:01 I have made the request to generate https://www.w3.org/2020/04/30-webmachinelearning-minutes.html anssik
14:04:24 TOPIC: WebNN first wave models and ops
14:04:39 anssik: Discuss WebNN first wave models and ops, action review
14:04:46 anssik: Review PR
14:04:51 -> https://github.com/webmachinelearning/webnn/pull/52 Add first_wave_models.md #52
14:05:05 -> https://github.com/webmachinelearning/webnn/blob/df7d9b0b5bfa5f9426e8b0a897e9fdf46a97d537/op_compatibility/first_wave_models.md first_wave_models.md (HTML preview)
14:05:20 anssik: models in this proposal target image classification and object detection use cases
14:05:29 ... models are SqueezeNetV1.1, MobileNetV2, ResNetV2, TinyYOLOV2
14:05:41 ... 13 ops: Add, AveragePool, BatchNormalization, Concat, Conv, Gemm, GlobalAveragePool, LeakyRelu, MaxPool, Mul, Relu, Reshape, Softmax
14:05:53 anssik: related actions:
14:06:06 ... 1. Compare with XLA to see what ops, if any, are too high level (Nikhil & Daniel)
14:06:15 ... 2. Review opsets from a year ago to see what couldn't be run on ONNX opsets (Daniel)
14:06:27 ... 3. Intersection between XLA & ONNX (Paul)
14:06:41 pingyu has joined #webmachinelearning
14:06:52 anssik: Daniel's review comments for action 1 compare which ops are too high level for XLA HLO (see PR #52 for the context); feedback:
14:07:02 Daniel: AveragePool, GlobalAveragePool, and MaxPool can be expressed in XLA HLO via ReduceWindow, which is lower level because it can take an arbitrary function f: T->T of type XLAComputation.
14:07:07 Daniel: Gemm can be covered by MatMul and Add. If the underlying hardware supports Gemm, the fusion of MatMul+Add -> Gemm should be done by an internal optimization pass.
14:07:13 Daniel: LeakyRelu, Relu, and Softmax can all be lowered into other XLA HLO ops.
14:08:55 https://github.com/pytorch/xla/blob/master/torch_xla/csrc/ops/softmax.cpp#L15
14:09:05 https://github.com/pytorch/xla/blob/5bc5a0fd522845595961d42f377bde566b4ec94a/torch_xla/csrc/softmax_builder.cpp#L83
14:09:55 q+
14:10:02 ack Chai
14:11:28 Chai: I think there is no need to map 1:1 to XLA HLO, but to make sure the set is not too large we should consider breaking the ops up
14:11:51 ... the compiler is specific to the browser, so my concern is how to make sure the compiler has enough context to handle the big group as one
14:13:08 ... want to understand how to preserve this context, maybe a fusion container could be contained within the graph
14:13:59 ... with this information the compiler can use it as a clue to check whether the hardware actually supports fusion, otherwise fall back and handle the ops one by one
14:15:05 ... maybe that'd be a good compromise between XLA and ONNX, retaining context to allow optimizations
14:15:29 daniel_smilkov: that's a great point; just for full disclosure, I did not comment on batch normalization
14:15:58 ... I like the label idea that gives a safe fallback
14:16:12 ... no need for complex heuristics
14:16:22 ... I looked at ONNX and other high-level op sets
14:20:02 Chai: it's not obvious at authoring time what kind of hardware will run the model; even with the composite idea there's a chance that the compiler walks the graph and does a second-level fusion, on the heuristics side of the compiler
14:20:52 Paul: I also like the idea of labels
14:21:09 ... having heuristics is so difficult, we've learned that the hard way
14:21:40 ... on the path of labels, they probably need to be structured, have some semantics with them, so the bottom knows what's on top
14:22:01 From a tooling point of view, MLIR has a multi-level representation of an ML graph. If WebML can be defined as a dialect, the composite can be presented as a function of low-level HLO ops
14:22:16 Chai: for the label, we need to define what the label means, it cannot be arbitrary
14:22:54 ... need to define what the subgraph is; no need to define what to do if we fail, the graph is the source of truth, the label is just a hint for fusion
14:24:24 TFLite has used labeling before for fusing but it has been a bit hacky.
14:25:34 ACTION Chai to open an issue for the label hint for defining semantics for op fusion
14:25:45 q+
14:25:57 ACTION: Chai to open an issue for the label hint for defining semantics for op fusion
14:26:02 ack ningxin_hu
14:26:10 ningxin_hu: thanks for the discussion
14:27:05 ningxin_hu: regarding the PR, Daniel commented on particular ops; how about we go forward with elementwise add and multiply?
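[Editor's note: an illustrative NumPy sketch, not part of the minutes, of the decompositions Daniel describes above: Gemm lowered to MatMul + Add, and the pooling ops expressed via a ReduceWindow-style primitive that takes an arbitrary reduction function. Function names and the 1-D simplification are the editor's, not anything specified by WebNN or XLA.]

```python
import functools

import numpy as np


def gemm(a, b, c, alpha=1.0, beta=1.0):
    # ONNX-style Gemm expressed as MatMul + Add; if the hardware has a
    # fused GEMM, an internal optimization pass can re-fuse this pattern.
    return alpha * (a @ b) + beta * c


def reduce_window_1d(x, window, f, init):
    # Simplified 1-D analogue of XLA's ReduceWindow: fold an arbitrary
    # scalar function f over each sliding window, starting from init.
    return np.array([
        functools.reduce(f, x[i:i + window], init)
        for i in range(len(x) - window + 1)
    ])


def max_pool_1d(x, window):
    # MaxPool is ReduceWindow with f = max
    return reduce_window_1d(x, window, max, float("-inf"))


def avg_pool_1d(x, window):
    # AveragePool is ReduceWindow with f = add, divided by the window size
    return reduce_window_1d(x, window, lambda a, b: a + b, 0.0) / window
```

This is only meant to show why ReduceWindow is the lower-level primitive: both pooling ops differ only in the scalar function passed in.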
14:27:20 ... concat and reshape are also well supported
14:27:45 ... these primitives are supported in both XLA and ONNX
14:28:28 q?
14:29:06 PROPOSED RESOLUTION: Add elementwise add, elementwise multiply, concatenation, and reshape ops to the WebNN API
14:29:30 looks good
14:29:37 ningxin_hu: LGTM
14:29:45 LGTM
14:29:57 RESOLUTION: Add elementwise add, elementwise multiply, concatenation, and reshape ops to the WebNN API
14:31:19 paul_mcdaniel_msft: for the intersection between XLA & ONNX, I have something, but maybe it's better for me to share it on our next call?
14:31:36 anssik: SGTM
14:32:43 @paul, at Google we can open xls files :)
14:32:58 :)
14:33:04 lol
14:33:09 interop ftw
14:33:44 anssik: are we ready to merge your PR?
14:34:29 ningxin_hu: OK to keep this PR open and iterate on it a bit, then come back with a revised version
14:34:58 Present+ Jonathan_Bingham
14:36:12 RRSAgent, draft minutes v2
14:36:12 I have made the request to generate https://www.w3.org/2020/04/30-webmachinelearning-minutes.html anssik
14:37:43 TOPIC: Model Loader API
14:37:53 anssik: Review the spec strawman, please open issues for your feedback
14:37:57 -> https://webmachinelearning.github.io/model-loader/ Spec strawman
14:38:02 -> https://github.com/webmachinelearning/model-loader GitHub repo
14:38:06 Elementwise ops can also benefit by introducing a single higher-order op "elementwise"
14:38:16 anssik: thanks Jonathan for volunteering to edit the spec
14:38:19 which takes a scalar function f as a parameter
14:38:31 ... the explainer is in good shape: it documents use cases and an FAQ, and discusses design considerations
14:38:52 @Rama, SGTM
14:39:06 ... this repo is to be used to iterate on the API and evolve examples with it
14:39:20 ... the repo welcomes issues and PRs
14:39:43 ... Jonathan, do you have some open questions in mind you'd like to ask the group for feedback on? (sees zero open issues)
14:40:17 Jonathan: thanks Ningxin for proposing extending the ML namespace
14:40:29 ... the first thing to do is to review the spec's examples section
14:40:40 ... when first drafted, it was pseudocode
14:40:55 ... especially interested in feedback from folks on how to improve it
14:41:56 ... another update: I've talked to Googlers working on MLIR, and we're trying to figure out how to get engineering staffing behind this
14:42:58 welcome ping !!
14:43:05 thanks
14:43:07 ... discussed with the Android and TF teams as well
14:43:19 welcome ping!
14:43:24 anssik: pingyu welcome!
14:43:31 pingyu: working with Nikhil on TF.js
14:43:51 ... looking forward to contributing to this group
14:44:03 Nikhil: excited to have pingyu in this group, a lot of experience with MLIR
14:44:43 q+
14:46:18 Nikhil: Daniel and I are moving from TF.js to another project in around 6 months
14:46:36 daniel_smilkov: nothing changes in TF.js; we've already reduced our direct involvement in TF.js over the last months, so no slowdowns for that project
14:47:49 Jonathan: for next steps, we need to have a discussion in GH on engineering details; as Ping ramps up, my hope is he can coordinate with Ningxin
14:47:50 Jonathan, thanks for the suggestion
14:48:05 ningxin_hu: SGTM, happy to work with Ping on that
14:48:47 Jonathan: also the prototype, but also what the future of this should look like
14:50:18 TOPIC: Virtual workshop
14:50:30 anssik: wanted to discuss virtual workshop prepwork status
14:50:44 ... on our last call we heard interest in a session to discuss XLA and MLIR
14:50:53 ... I created a "Domain-specific compilers for ML" high-level topic to cover these; if the name is not accurate, help me perfect it
14:52:30 Nikhil: the high-level topic name looks good to me
14:52:48 ... I can poke people for lightning talks
14:53:22 -> https://bit.ly/webml-workshop-talks Review workshop agenda details
14:53:32 -> https://bit.ly/webml-workshop-survey Review topics survey
14:54:26 anssik: the survey maps 1:1 to the proposed agenda
14:54:32 ... it asks to rate high-level topics on a scale of 1-5, from not important to very important
14:54:48 ... the idea is to send it through your personal networks to gather feedback on scoping. There are a couple of open questions at the end.
14:55:09 ... The Program Committee would build an agenda based on its assessment of which topics would benefit from being covered (starting from https://bit.ly/webml-workshop-talks).
14:55:15 ... To gather feedback from the community on the workshop scoping and focus areas, we are preparing a survey (starting from https://bit.ly/webml-workshop-survey) to be sent to the prospective participants.
14:56:51 anssik: does this survey look good to you? Would you respond? Too long, too short, asking the wrong questions? LGTM?
14:57:16 anssik: we'll discuss this on the workshop program committee call right after this meeting; your feedback is extremely valuable.
14:58:04 Sure, happy to introduce the Model Loader API
14:59:17 TOPIC: Adjourn
14:59:40 RRSAgent, draft minutes v2
14:59:40 I have made the request to generate https://www.w3.org/2020/04/30-webmachinelearning-minutes.html anssik
16:03:54 Present+ Ping_Yu
16:03:58 RRSAgent, draft minutes v2
16:03:58 I have made the request to generate https://www.w3.org/2020/04/30-webmachinelearning-minutes.html anssik
16:12:52 myles has joined #webmachinelearning
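[Editor's note: an illustrative NumPy sketch, not part of the minutes, of two lowerings mentioned in the discussion: Rama's suggested higher-order "elementwise" op that takes a scalar function f as a parameter, and Daniel's point that Softmax can be lowered into simpler ops (here exp and reduce-sum, with the usual max subtraction for numerical stability, as in the pytorch/xla softmax builder linked in the first topic). All names here are the editor's.]

```python
import numpy as np


def elementwise(f):
    # Hypothetical higher-order elementwise op: instead of separate Relu,
    # LeakyRelu, etc. primitives, one op parameterized by a scalar function.
    return np.vectorize(f)


# Relu and LeakyRelu fall out as instances of the same combinator
relu = elementwise(lambda v: max(v, 0.0))
leaky_relu = elementwise(lambda v: v if v > 0 else 0.01 * v)


def softmax(x, axis=-1):
    # Softmax lowered to exp / reduce-sum; subtracting the per-axis max
    # leaves the result unchanged but avoids overflow in exp.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)
```

The same trade-off the group debated applies here: a backend that only sees exp and reduce-sum loses the knowledge that the subgraph *is* a softmax, which is what the proposed fusion-hint label would preserve.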