14:01:58 RRSAgent has joined #webmachinelearning
14:01:58 logging to https://www.w3.org/2019/10/03-webmachinelearning-irc
14:02:03 Zakim has joined #webmachinelearning
14:02:06 RRSAgent, make logs public
14:02:09 Meeting: WebML CG Teleconference – 3 October 2019
14:02:13 Chair: Anssi
14:02:16 Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2019-10-03-agenda.md
14:02:21 Scribe: Anssi
14:02:25 scribeNick: anssik
14:02:35 daniel_smilkov has joined #webmachinelearning
14:02:35 RRSAgent, draft minutes v2
14:02:35 I have made the request to generate https://www.w3.org/2019/10/03-webmachinelearning-minutes.html anssik
14:02:48 Present+ Anssi_Kostiainen
14:03:00 Present+ Rafael_Cintron
14:03:12 Present+ Ganesan_Ramalingam
14:03:20 Present+ Daniel_Smilkov
14:03:27 Present+ Nikhil_Thorat
14:03:28 Present+ Paul_McDaniel
14:03:29 Present+ Greg_Whitworth
14:03:57 Present+ Ningxin_Hu
14:06:16 Present+ Nikhil_Thorat
14:07:12 jdarpinian has joined #webmachinelearning
14:11:15 TOPIC: F2F recap
14:11:42 -> https://github.com/webmachinelearning/meetings/blob/master/2019-09-17-fukuoka/README.md F2F agenda
14:11:47 -> https://www.w3.org/2019/09/17-webmachinelearning-minutes.html F2F Day 1 minutes
14:11:51 -> https://www.w3.org/2019/09/20-webmachinelearning-minutes.html F2F Day 2 minutes
14:12:18 me too, "The Webex Meeting is locked"
14:14:17 got in, thanks!
14:15:28 Rafael: F2F minutes were clear, discussed with Apple at the WebGPU F2F
14:17:18 ... spoke with Myles Maxfield, he told me Apple favors an API that is not a WebGPU extension
14:17:55 ... it would be easy for developers to misuse the API if it were
14:18:19 q+
14:18:24 ack jdarpinian
14:18:42 Ningxin_Hu has joined #webmachinelearning
14:18:53 jdarpinian: also talked to Myles, and I think he did not know whether their hardware allows sharing buffers between the GPU and ML hardware
14:19:13 Present+ Ningxin_Hu
14:19:27 ... a WebGPU extension does not necessarily mean buffers are allocated on the GPU
14:19:50 ... would be good to be able to specify "I want to use this buffer for ML"
14:20:03 q+ paul mcdaniel
14:20:51 anssik: are there minutes from the WebGPU F2F?
14:21:04 jdarpinian: can look into the minutes
14:21:08 ack paul
14:21:14 ack mcdaniel
14:22:02 Paul: Microsoft also has custom hardware for ML offloading that does not share GPU buffers, so we must support the scenario of non-GPU hardware that cannot share buffers
14:22:03 q+
14:22:40 ack paul
14:23:17 PaulM has joined #webmachinelearning
14:23:46 q+
14:23:52 ack jdarpinian
14:24:34 jdarpinian: about sharing buffers, we'll want an API that does not share buffers and is not WebGPU-based, but that does not necessarily mean we shouldn't investigate WebGPU-based APIs; GPUs are growing ML-oriented features
14:25:07 ... also, it still might be simpler to release a WebGPU-based API even if it would not perform as well on every platform, e.g. on those that cannot share buffers
14:25:30 q?
14:25:34 ack Ningxin_Hu
14:26:12 Ningxin_Hu: questions re the WebGPU F2F: James mentioned a WebGPU extension, did you discuss a WebGL extension too at the WebGPU F2F?
14:26:32 jdarpinian: WebGL extension not discussed directly
14:26:54 ... the Vulkan ML F2F had discussions on MLIR and TVM
14:27:26 ... no meta-command API going into Vulkan; they instead prefer exposing lower-level primitives that let shaders access the tensor cores of today's GPUs, write their own kernels, and do kernel fusion
14:27:44 ... not sure if that direction makes sense for us, just a data point
14:29:13 anssik: anyone from Vulkan ML to participate in this group?
14:29:50 jdarpinian: more hardware vendors, e.g. ARM, Qualcomm, would be nice to get as participants here
14:29:57 q?
14:30:55 https://www.w3.org/2019/Talks/dhm-ml-workshop/standardization.html
14:32:04 https://www.w3.org/2019/Talks/dhm-ml-workshop/
14:35:03 https://github.com/webmachinelearning/webnn/blob/master/explainer.md
14:36:18 https://github.com/immersive-web/webxr/blob/master/explainer.md
14:36:45 WebGPU F2F meeting minutes, ML mentioned briefly: https://docs.google.com/document/d/1CmKo59tjZwmePVrFpHpIG0W5shKR_GOrnNuMStPCEko/edit
14:38:12 TOPIC: WebNN interop investigation next steps
14:38:33 https://docs.google.com/presentation/d/1KGRc1RnnYt_1JK2Pk6r2xRkD60v4F8jc4beHMv0crng/edit#slide=id.g6353211274_0_23
14:39:03 https://github.com/webmachinelearning/webnn/issues/6#issuecomment-536408448
14:39:33 Ningxin_Hu: after the F2F, I added details of the investigations to issue #6 on WebGPU buffer sharing
14:40:25 ... we have an Apple MPS POC with a Metal backend; WebNN can compile a subgraph for a WebGPU device
14:41:20 [Ningxin recaps the WebNN investigation from the F2F]
14:44:10 Ningxin_Hu: need to extend the WebNN API to allow computing subgraphs, to avoid moving data across devices
14:46:38 anssik: do Ningxin's POC results agree with Apple's concerns re buffer sharing?
14:47:02 Rafael: interested in hearing Ningxin's view on the performance delta in this scenario
14:47:42 Ningxin_Hu: POC investigations were on a MacBook Pro, so no dedicated ML hardware
14:48:59 ... tests exercise WebGPU compute shaders and Metal compute shaders
14:50:49 q+
14:50:54 q?
14:51:18 PaulM_ has joined #webmachinelearning
14:51:22 +q
14:52:22 Ningxin_Hu: is this a reasonable requirement: we want WebNN to compile to dedicated ML hardware, test with a WebGPU compute shader exchanging data with it, and profile the performance of buffer sharing
14:52:24 q?
14:52:36 Can we use Intel ML chips as a test case?
14:53:54 Paul: you're looking for hardware to prove this out?
14:54:34 zkis has joined #webmachinelearning
14:54:39 Ningxin_Hu: re future POC requirements: 1) choose dedicated ML hardware to test with, 2) decide which data point we want
14:54:58 Paul: I like the data-driven design as proposed by Anssi
14:56:05 Ningxin_Hu: we have a Movidius VPU in our POC via OpenVINO on Linux
14:56:44 ... we could probably have a similar setup on Windows through DirectML
14:57:17 Paul: that sounds awesome, let's follow up off this call
14:58:00 POC repo: https://github.com/otcshare/chromium-src
14:58:42 jdarpinian: comment on using Movidius: these are often connected over USB, which implies bandwidth constraints
14:58:59 ... PCI Express would be better
14:59:29 Ningxin_Hu: the previous setup was over USB, but the current hardware is on PCI Express
15:01:45 Explore custom op support via a DSL
15:02:04 Present+ Kai_Ninomiya
15:02:17 Kai: sort of interested, but not up to speed with it
15:02:36 Support compiling and executing ops for devices, CPU or GPU
15:02:41 Need to drop off.
15:04:46 TOPIC: Adjourn
15:05:20 RRSAgent, draft minutes v2
15:05:20 I have made the request to generate https://www.w3.org/2019/10/03-webmachinelearning-minutes.html anssik
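To make the buffer-sharing idea discussed under the WebNN interop topic concrete, a minimal sketch follows. It assumes today's navigator.gpu WebGPU surface (types via @webgpu/types); the function name runSharedBufferInference and the WebNN-style graph calls in the comments are hypothetical placeholders for illustration only, not an existing or proposed interface.

// Minimal illustrative sketch, TypeScript, assumptions noted above.
async function runSharedBufferInference(inputData: Float32Array): Promise<GPUBuffer> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  // Allocate a storage buffer that a compute shader (and, in the interop idea,
  // a compiled ML subgraph) could consume without copying tensors back to the CPU.
  const inputBuffer = device.createBuffer({
    size: inputData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(inputBuffer, 0, inputData);

  // Hypothetical WebNN-style step, mirroring the POC idea of compiling a
  // subgraph for the same GPUDevice so buffers can be shared:
  //   const graph = await builder.build({ device });              // hypothetical
  //   const output = await graph.compute({ input: inputBuffer }); // hypothetical

  return inputBuffer; // a real interop path would return the graph's output buffer
}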