W3C

– DRAFT –
WebML CG Teleconference – 9 January 2020

09 January 2020

Attendees

Present
Anssi_Kostiainen, Baul_Eun, Chai_Microsoft, Gabe_Esteven, James_Darpinian, Jonathan_Bingham, Ningxin_Hu, Rafael_Cintron, Thomas_Steiner
Regrets
-
Chair
Anssi
Scribe
Anssi, anssik

Meeting minutes

Buffer sharing between GPU and ML accelerator, update & next steps

#33 investigation update and open questions

ningxin_hu: the next step of the investigation is to run Test3 on a programmable ML accelerator, e.g. a VPU
… identified 3 issues that block us:

Can VPU run HLSL?

Can WebGPU support a compute-only device (e.g. VPU) and run a compute shader on it?

Can WebGPU/D3D12 share buffer with WebNN/DML on VPU? (run Test3 on VPU)

… want to understand whether this is a driver issue and what the root cause is
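
For illustration, a minimal sketch (TypeScript, assuming @webgpu/types) of the kind of compute-shader test discussed above, written against today's WebGPU API shape with WGSL, which postdates this meeting. Whether such a shader can target a compute-only device like a VPU, and whether its buffer could be shared with WebNN/DML, are exactly the open questions; nothing here is specified behavior for a VPU:

  async function runComputeTest(): Promise<void> {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    // Trivial shader: double every element of a storage buffer.
    const module = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> data : array<f32>;
        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id : vec3<u32>) {
          data[id.x] = data[id.x] * 2.0;
        }`,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module, entryPoint: "main" },
    });

    // 256 f32 values; in the Test3 scenario this buffer would ideally be
    // shared with the ML API rather than copied through the CPU.
    const buffer = device.createBuffer({
      size: 256 * 4,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    });

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(256 / 64);
    pass.end();
    device.queue.submit([encoder.finish()]);
  }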

ningxin_hu: would like to hear other participants' input on this issue; the current focus is on VPU and DirectML
… e.g. feedback from Android NN API or other accelerators
… how about using TF Lite as a platform for this investigation?

Jonathan: for TF Lite, probably yes; need to follow up with Nikhil and Daniel
… please send an email to them to connect

conv2d native API mapping table

[op compatibility] conv2d #28

conv2d native API mapping table

ningxin_hu: Paul McDaniel wanted to contribute an ONNX column to the table
… also discussed whether ONNX and TF belong at the framework level, and whether to keep this table for native API mappings only

Chai: I have discussed this with Rafael, but I haven't looked into it

anssik: is adding ONNX data reasonable?

Rafael: we can add ONNX data, it seems fine

Chai: DirectML and/or ONNX mapping?

Rafael: both would be useful
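
For illustration, the framework-level parameters a row of such a table has to cover, shown as a conv2d call in TensorFlow.js; TF.js is assumed here purely as an example column, while the table itself targets native APIs (e.g. DirectML, NNAPI) plus the proposed ONNX column:

  import * as tf from "@tensorflow/tfjs";

  // Framework-level conv2d parameters; a table row aligns each of these
  // with its counterpart in native APIs and, per the discussion above,
  // possibly an ONNX Conv column.
  const input = tf.zeros([1, 28, 28, 3]) as tf.Tensor4D; // NHWC input
  const filter = tf.zeros([5, 5, 3, 8]) as tf.Tensor4D;  // HWIO filter
  const output = tf.conv2d(
    input,
    filter,
    [1, 1],  // strides
    "same",  // padding
    "NHWC",  // data format
    [1, 1],  // dilations
  );
  console.log(output.shape); // [1, 28, 28, 8]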

Float16 type support

Float16 type support #26

anssik: Benjamin proposed we should add "float16" into supported tensor types
… rationale explained in issue #26
… any objections to doing so?
… affected spec surface: https://webmachinelearning.github.io/webnn/#enumdef-operandtype

Chai: want to point out, not an explicit objection, that Float16 is not universally supported across old and new hardware; many use cases and models depend on Float16, so how do we run models that depend on this type when the hardware does not support it natively?

ningxin_hu: very good input! If you look at Benjamin's proposal, there's a fallback
… without native support, there's a fallback to Float32

Chai: depends on how the model is created; if it was trained with float16 and you internally substitute a higher precision
… it can work, but in other cases such as convolution there are issues with accumulated precision loss, and you get different results
… if we define an API where fallbacks happen at a well-defined level, this should be OK
… opaque internal handling may not work for this case; the API should expose hardware capabilities, simply fail cleanly rather than return a different answer, or report whether the type is supported

proposed RESOLUTION: Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported

ningxin_hu: sounds good to me

Resolution: Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported
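
A hypothetical sketch of the resolved behavior, assuming an invented capability-query surface; none of these names appear in the WebNN spec, and the actual mechanism was left to be defined:

  // Hypothetical names only; nothing below exists in the WebNN spec.
  type OperandType = "float32" | "float16"; // "float16" per the resolution

  interface CapabilityQuery {
    // Assumed capability surface; the real mechanism is to be defined.
    supportsOperandType(type: OperandType): boolean;
  }

  // Prefer float16 when natively supported; otherwise fall back to
  // float32, accepting possible result differences (e.g. accumulation
  // behavior in convolutions, as Chai notes above).
  function pickPrecision(caps: CapabilityQuery): OperandType {
    return caps.supportsOperandType("float16") ? "float16" : "float32";
  }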

W3C Workshop on Web & Machine Learning (24-25 March 2020, Berlin): update & next steps

21 February 2020 – Registration deadline
28 February 2020 – Acceptance notification
6 March 2020 – Position statements deadline
13 March 2020 – Program announced
24-25 March 2020 – Workshop

https://w3c.github.io/machine-learning-workshop/#program

Adjourn

Summary of resolutions

  1. Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported
Minutes manually created (not a transcript), formatted by scribe.perl version 104 (Sat Dec 7 01:59:30 2019 UTC).
