15:07:36 RRSAgent has joined #webmachinelearning
15:07:36 logging to https://www.w3.org/2020/01/09-webmachinelearning-irc
15:07:42 Zakim has joined #webmachinelearning
15:07:45 RRSAgent, make logs public
15:07:50 Meeting: WebML CG Teleconference – 9 January 2020
15:07:55 Chair: Anssi
15:08:12 Agenda: https://github.com/webmachinelearning/meetings/blob/master/telcons/2020-01-09-agenda.md
15:08:20 Scribe: Anssi
15:08:25 scribeNick: anssik
15:08:26 Jonathan has joined #webmachinelearning
15:08:31 Present+ Anssi_Kostiainen, Rafael_Cintron, Ningxin_Hu
15:08:49 Present+ Jonathan_Bingham
15:08:54 Present+ Thomas_Steiner
15:09:03 Present+ Gabe_Esteven
15:09:57 Present+ Chai_Microsoft
15:10:31 RRSAgent, draft minutes v2
15:10:31 I have made the request to generate https://www.w3.org/2020/01/09-webmachinelearning-minutes.html anssik
15:11:19 RRSAgent, make logs public
15:11:40 Present+ Baul_Eun
15:11:51 TOPIC: Buffer sharing between GPU and ML accelerator, update & next steps
15:11:53 RRSAgent, draft minutes v2
15:11:53 I have made the request to generate https://www.w3.org/2020/01/09-webmachinelearning-minutes.html anssik
15:12:33 -> https://github.com/webmachinelearning/webnn/issues/33#issuecomment-563913379 #33 investigation update and open questions
15:15:02 jdarpinian has joined #webmachinelearning
15:20:42 ningxin_hu: the next step of the investigation is to run Test3 on a programmable ML accelerator, e.g. a VPU
15:21:12 ... identified 3 issues that block us:
15:21:32 ... Can the VPU run HLSL?
15:21:32 ... Can WebGPU support a compute-only device (e.g. a VPU) and run a compute shader on it?
15:21:39 ... Can WebGPU/D3D12 share a buffer with WebNN/DML on the VPU? (run Test3 on the VPU)
15:22:18 ... want to understand whether this is a driver issue, and what the root cause is
15:22:42 Present+ James_Darpinian
15:23:49 ningxin_hu: would like to hear other participants' input on this issue; the focus is now on the VPU and DirectML
15:24:07 ... e.g. feedback from the Android NN API or other accelerators
15:24:59 ... how about using TF Lite as a platform for this investigation?
15:25:18 Jonathan: for TF Lite, probably yes, need to follow up with Nikhil and Daniel
15:25:25 ... please send an email to them to connect
15:26:10 TOPIC: conv2d native API mapping table
15:26:39 -> https://github.com/webmachinelearning/webnn/issues/28 [op compatibility] conv2d #28
15:26:45 -> https://github.com/webmachinelearning/webnn/blob/master/op_compatibility/conv2d.md conv2d native API mapping table
15:28:02 ningxin_hu: Paul McDaniel wanted to contribute an ONNX column to the table
15:28:22 ... also discussed whether ONNX and TF are at the framework level, and whether to keep this table for native API mappings only
15:28:58 chai: I have discussed this with Rafael, but I haven't looked into it
15:29:21 anssik: is adding the ONNX data reasonable?
15:29:39 Rafael: we can add ONNX data, it seems fine
15:30:05 Chai: DirectML and/or ONNX mapping?
15:30:43 Rafael: both would be useful
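For context on what the mapping table aligns: each native API spells the conv2d parameters differently. Below is a minimal TypeScript sketch of the kind of attribute surface the table compares across columns; the interface and field names are illustrative assumptions, not the WebNN API of record.

    // Illustrative attribute set that a conv2d mapping table aligns across
    // native APIs (DirectML, NNAPI, ONNX, TF, ...); names are hypothetical,
    // not taken from the WebNN spec.
    interface Conv2dAttributes {
      padding: [number, number, number, number]; // [top, bottom, left, right]
      strides: [number, number];                 // [height, width]
      dilations: [number, number];               // [height, width]
      groups: number;                            // 1 = regular conv; = input channels for depthwise
      layout: "nchw" | "nhwc";                   // tensor memory layout
    }

    // Example row: ONNX Conv expresses these as `pads`, `strides`,
    // `dilations`, and `group`; TensorFlow's tf.nn.conv2d takes `strides`,
    // `dilations`, and a `padding` mode ("SAME"/"VALID") instead.
    const example: Conv2dAttributes = {
      padding: [1, 1, 1, 1],
      strides: [1, 1],
      dilations: [1, 1],
      groups: 1,
      layout: "nhwc",
    };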
15:31:55 TOPIC: Float16 type support
15:32:06 -> https://github.com/webmachinelearning/webnn/issues/26 Float16 type support #26
15:32:23 anssik: Benjamin proposed we should add "float16" to the supported tensor types
15:32:29 ... rationale explained in issue #26
15:32:36 ... any objection to doing so?
15:32:44 ... affected spec surface https://webmachinelearning.github.io/webnn/#enumdef-operandtype
15:34:01 Chai: want to point out, not an explicit objection, but Float16 is not supported universally across hardware old and new; many use cases in models depend on Float16; how do we run models that depend on this type when the hardware does not support it natively?
15:35:48 ningxin_hu: very good input! If you look at Benjamin's proposal, there's a fallback
15:36:00 ... without native support, there's a fallback to Float32
15:36:29 Chai: depends on how the model is created; if trained with float16 and you internally substitute a higher precision
15:37:08 ... it works, but in other cases such as convolution there are issues with accumulated loss, you get a different result
15:37:39 ... if we define an API where fallbacks happen at some level, this should be OK
15:38:41 ... internal handling that's opaque may not work for this case; the API should expose "hardware capabilities", or simply fail with a correct answer, or return "supported"
15:41:10 proposed RESOLUTION: Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported
15:43:32 Chai has joined #webmachinelearning
15:44:16 ningxin_hu: sounds good to me
15:44:21 RESOLUTION: Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported
[An illustrative sketch of this resolved behavior appears after the log.]
15:44:49 TOPIC: W3C Workshop on Web & Machine Learning (24-25 March 2020, Berlin): update & next steps
15:45:18 21 February 2020: Registration deadline
15:45:18 28 February 2020: Acceptance notification
15:45:19 6 March 2020: Position statements deadline
15:45:19 13 March 2020: Program announced
15:45:20 24-25 March 2020: Workshop
15:47:00 https://w3c.github.io/machine-learning-workshop/#program
15:54:37 TOPIC: Adjourn
15:54:49 RRSAgent, draft minutes v2
15:54:49 I have made the request to generate https://www.w3.org/2020/01/09-webmachinelearning-minutes.html anssik
17:45:35 Zakim has left #webmachinelearning
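To make the Float16 resolution above concrete, here is a rough TypeScript sketch of the capability-probe behavior Chai described: report support or fail explicitly, rather than silently widening. The isOperandTypeSupported method and the enum shape are assumptions for illustration; neither is part of the WebNN spec as of this meeting.

    // Hypothetical sketch: the API reports whether "float16" is natively
    // supported instead of opaquely substituting a wider type.
    type OperandType = "float32" | "int32" | "uint32" | "float16"; // "float16" per the resolution

    interface NeuralNetworkContext {
      // Assumed capability probe; not a real WebNN method.
      isOperandTypeSupported(type: OperandType): boolean;
    }

    function pickTensorType(nn: NeuralNetworkContext): OperandType {
      if (nn.isOperandTypeSupported("float16")) {
        return "float16"; // native half-precision path
      }
      // Benjamin's proposed fallback: emulate with float32. Fine for many ops,
      // but float16-trained convolutions can accumulate differently (Chai's
      // accumulated-loss concern), so the substitution must not be opaque.
      return "float32";
    }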