W3C

– DRAFT –
WebML CG Teleconference – 23 January 2020

23 January 2020

Attendees

Present
Anssi_Kostiainen, Chai_Chaoweeraprasit, Jonathan_Bingham, Rafael_Cintron, Nikhil_Thorat
Regrets
-
Chair
Anssi
Scribe
Anssi, anssik

Meeting minutes

Announcements

anssik: welcome our new WebNN API co-editor, Chai from Microsoft!

Add a co-editor

anssik: Chai will work with Ningxin to help address issues and turn them into spec text, review PRs, and triage issues

Buffer sharing between GPU and ML accelerator

anssik: let's sync on the status of the VPU and DirectML prototyping

https://github.com/webmachinelearning/webnn/issues/33

chai: I think Ningxin now has enough data to move forward with the POC; I provided him with the required details on ONNXRuntime
… there are performance issues with the VPU; I think Ningxin should work together with the VPU engineering folks, as the VPU is a bit tricky to get good performance out of
… I also wanted to revisit the discussion on using TF Lite as a platform for investigation

https://www.w3.org/2020/01/09-webmachinelearning-minutes.html#x01

nikhil: TF Lite makes sense; it's an abstraction layer that suits this investigation as well

conv2d native API mapping table - DirectML and ONNX

anssik: thanks Chai for contributing the data for DML and ONNX

conv2d op compatibility

anssik: Chai, please feel free to summarize your key findings and their key implications; for details, please see the table

https://github.com/webmachinelearning/webnn/issues/28

Chai: I was curious what the thinking is around grouped convolution? Is it in scope?

nikhil: we can make an argument to keep that as a separate op like TensorFlow does

Chai: we can define a new op or fold it into the existing op
… the issue with combining is that you end up with a big op that is hard to optimize
… if we focus on compatibility, it's better to look at the area of overlap and see what works with all existing APIs; that's a more compelling argument
… the same goes for conv3d

anssik: hearing we want to keep these separate

https://github.com/webmachinelearning/webnn/issues/28#issuecomment-555193911
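[For readers unfamiliar with the grouped-convolution question discussed above: a grouped convolution partitions the input and output channels into independent groups and convolves each group separately, which is why it can either be folded into conv2d as a `groups` parameter or kept as a separate op. The following is an illustrative numpy sketch, not WebNN API code; the function names are hypothetical.]

```python
import numpy as np

def conv2d(x, w):
    # Naive valid-padding 2D convolution (cross-correlation).
    # x: (C_in, H, W), w: (C_out, C_in, kH, kW) -> (C_out, H-kH+1, W-kW+1)
    c_out, c_in, kh, kw = w.shape
    h_out, w_out = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                y[o, i, j] = np.sum(x[:, i:i+kh, j:j+kw] * w[o])
    return y

def grouped_conv2d(x, w, groups):
    # Grouped convolution: split the input channels (and the filters)
    # into `groups` independent slices, convolve each slice with plain
    # conv2d, and concatenate the per-group outputs along the channel axis.
    # w has shape (C_out, C_in // groups, kH, kW).
    gi = x.shape[0] // groups   # input channels per group
    go = w.shape[0] // groups   # output channels per group
    outs = [conv2d(x[g*gi:(g+1)*gi], w[g*go:(g+1)*go]) for g in range(groups)]
    return np.concatenate(outs, axis=0)
```

With `groups=1` this reduces to an ordinary convolution, which is the compatibility overlap the discussion refers to: the grouped form strictly generalizes conv2d, so it can be exposed either way.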

Float16 type support and handling unsupported OperandType

anssik: resolution from our previous call was:
… Add "float16" to OperandType enum in WebNN API and define a way for the API to respond when float16 is not natively supported
… Ningxin submitted a PR to update the IDL

Add float16 and tensor-float16 into OperandType #35

anssik: this PR only includes the IDL changes; the proposal is to define the handling of unsupported OperandType in a separate issue, #36

Handling unsupported OperandType #36

anssik: can we agree to merge PR #35 and address issue #36 separately? Any concerns?

anssik: [ hearing none ]

anssik: Chai feel free to merge PR #35 after the call
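[As background for issue #36: one common way an implementation can respond when float16 is not natively supported is to emulate it, computing in float32 and rounding the result back to float16. This is only an illustrative numpy sketch of that fallback idea, not the WebNN API or any resolution of issue #36; the function name and `native_fp16` flag are hypothetical.]

```python
import numpy as np

def add_with_fallback(a, b, native_fp16=False):
    # Hypothetical sketch: if the backend lacks native float16 support,
    # upcast the operands to float32, compute, and round the result back
    # to float16 so the caller still observes the requested operand type.
    if a.dtype == np.float16 and not native_fp16:
        out32 = a.astype(np.float32) + b.astype(np.float32)
        return out32.astype(np.float16)
    return a + b  # native path: compute directly in the given dtype
```

Whether the API should emulate like this, fail, or report capability up front is exactly the design question deferred to issue #36.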

W3C Workshop on Web & Machine Learning

anssik: please register by 21 Feb 2020

Workshop Call for participation

anssik: the program committee is expected to grow by 1-2 more members to increase diversity; there is good representation across industries already

anssik: I started an agenda Google sheet and I'll share with the program committee soon

anssik: sponsorship opportunities open

Sponsoring the W3C Workshop on Web & Machine Learning

anssik: any questions? I encourage everyone to attend. Berlin is nice.

Adjourn

Minutes manually created (not a transcript), formatted by scribe.perl version 104 (Sat Dec 7 01:59:30 2019 UTC).

Diagnostics

Maybe present: anssik, chai, nikhil