W3C

– DRAFT –
WebML WG Teleconference – 30 November 2023

30 November 2023

Attendees

Present
Anssi_Kostiainen, Chai_Chaoweeraprasit, Dwayne_Robinson, Etienne_Noel, Joshua_Bell, Joshua_Lochner, Rachel_Yager, Rafael_Cintron, Zoltan_Kis
Regrets
Dominique_Hazael-Massieux
Chair
Anssi
Scribe
Anssi, anssik

Meeting minutes

Repository: webmachinelearning/webnn

Announcements

Implementation status

Implementation Status of WebNN Operations

anssik: implementation status has been updated for:
… - WebNN CPU XNNPACK backend
… - WebNN GPU DirectML backend
… - ORT
… for details, please see the webnn_status.json diff at https://github.com/webmachinelearning/webmachinelearning.github.io/pull/58/files

anssik: this was a team effort, thanks @lisa0314 @miaobin @Honry @mingmingtasd @BruceDai @shiyi9801!

<gb> @lisa0314

<gb> @miaobin

<gb> @Honry

<gb> @mingmingtasd

<gb> @BruceDai

<gb> @shiyi9801 @ibelem

Upcoming discussion with the Web LLM author Tianqi Chen

anssik: I had a discussion with Tianqi Chen from CMU, OctoML, creator of Web LLM, a JS library that accelerates select LLMs in browsers with WebGPU

Web LLM repo
… Tianqi shared he's very supportive of our work in this WG and is interested in working with us more closely
… given Tianqi's highly relevant expertise and interest, I've initiated the process to bring him on board the WG as an Invited Expert
… this will allow him to contribute in a full capacity
… I've tentatively scheduled a WG discussion with Tianqi on 11 January 2024
… Tianqi has already shared use cases with this WG for:
… - hybrid execution of models (i.e. WebGPU for custom ops + WebNN)
… - a JSON schema of the WebNN declaration (i.e. compiler projects can generate a schema and invoke executions without explicitly doing so in JS)
… if you have questions for Tianqi, e.g. about his work on Web LLM, this will be a great opportunity to ask them
… we have opened a dedicated issue for the hybrid execution use case; it is currently a high-level description, but can be expanded with more details

Hybrid execution use case issue

<gb> Issue 480 Hybrid execution use case from Web LLM project (by anssiko) [use case]

WebNN v2: Review transformer ops spec contributions (continued)

anssik: issue #375 and PR #478

<gb> Pull Request 478 Add support for operations needed for well-known transformers e.g. Segment Anything, Stable Diffusion, etc. (by wchao1115)

<gb> Issue 375 Support for transformers (by dontcallmedom) [v2] [operation set]

anssik: on our last call I drew the WG's attention to this major PR #478 that was looking for everyone’s review and feedback
… I also shared my expectation that by this meeting on 30 Nov we are in a position to make a merge decision, let's discuss now whether we're there yet or whether we want some additional time for further refinements
… first, I want to thank the entire group for your active review, and Chai for responding to the review comments that reflect the group's consensus
… this major PR has been a great group effort, thank you all!
… I produced a hand-rolled IDL diff between the latest published version from 26 October 2023 and this PR

Hand-rolled IDL diff
… I compiled a list of open items from the PR review; some of these may require no action, so we can go through them quickly

NavigatorML mixin

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1410542879

anssik: I suppose the NavigatorML mixin was removed by accident; without this mixin, the ML object is not exposed via navigator.ml
… the fix is to bring it back
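
As an illustration, a minimal sketch of the entry point the mixin exposes, assuming the createContext() and MLGraphBuilder shapes of the current draft; without NavigatorML, navigator.ml below would be undefined:

  // The NavigatorML mixin is what makes the ML object reachable as navigator.ml.
  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);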

chai: thanks for the review everyone, especially Ningxin and Dwayne for careful comments
… all reviewers, please resolve the discussions in the GH PR that have been addressed
… for changes unrelated to this PR, please open a separate issue and link to this big PR

Hard to tell whether an MLOperand is a constant or not

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1410103395

https://www.w3.org/TR/webnn/#dom-mlgraphbuilder-constant

Ningxin: It's hard to tell whether an MLOperand is a constant or not. The current algorithm of constant only creates an implementation-defined platform operand and sets values to it.

Ningxin_Hu: this is about the gather op validating its indices parameter
… one step in the validation algorithm assumes that, if the operand is a constant, the implementation can access its data; otherwise the check becomes runtime behavior
… the current algorithm step only works with a constant operand; Chai asked me to propose some text to address this, since we don't mark an operand as constant or not, we need some text to ensure this
… there is a discussion in the Chromium implementation on how to address out-of-bounds indices, proposed to be discussed in a separate issue

Chai: I think this should be tracked as a separate issue

Zoltan: should there be platform tests for this, i.e. that it is always a constant?

Ningxin_Hu: it does not need to always be a constant, I think

anssik: proposal to create a separate issue for this

Ningxin_Hu: I'll do that

anssik: thanks!
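
As an illustration of the issue, a minimal sketch (assuming the constant() and input() builder signatures in the current draft; names and shapes are hypothetical): both calls below return an MLOperand, and nothing on the returned object tells the gather() validation steps whether the indices data is known at build time.

  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const desc = { dataType: 'uint32', dimensions: [4] };
  // Indices baked into the graph: the implementation could validate bounds at build time.
  const constIndices = builder.constant(desc, new Uint32Array([0, 1, 2, 3]));
  // Indices supplied at compute time: any out-of-bounds handling becomes runtime behavior.
  const inputIndices = builder.input('indices', desc);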

Make standalone argMax and argMin operators

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1410104309

https://pr-preview.s3.amazonaws.com/webmachinelearning/webnn/pull/478.html#mlgraphbuilder-reduce-op

Ningxin: two issues with output operand creation:
… - The output shape should be calculated instead of just copying the input's.
… - The output data type of reduceArgMax and reduceArgMin should be an unsigned integer type rather than being set to the input's.

Chai: I agree these should be separated out
… I will commit that change soon to this branch
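
As a worked illustration of the two points above, a hypothetical reduceArgMax call, assuming MLReduceOptions with axes and keepDimensions as in this PR:

  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const input = builder.input('x', { dataType: 'float32', dimensions: [2, 3, 4] });
  // Reducing over axis 1 without keepDimensions:
  // - the output shape should be computed as [2, 4], not copied from [2, 3, 4];
  // - the output holds indices into axis 1, so an unsigned integer data type is
  //   expected rather than the input's float32.
  const indices = builder.reduceArgMax(input, { axes: [1], keepDimensions: false });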

Gather op implementation considerations for out-of-bound indices

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1396672041

anssik: Jiewei has a proposal for an informative section:
… 1. Runtime out-of-bounds indices should be explicitly handled by either the browser implementation or the platform implementation, to avoid OOB memory accesses.
… 2. If the platform implementation doesn't handle out-of-bounds indices, the browser implementation should take steps to ensure the platform operator doesn't receive out-of-bounds indices

anssik: I propose these bullets to be added in gather section as an informative note and a link added to https://www.w3.org/TR/webnn/#security
… for the third bullet, a separate GH issue should be opened:
… 3. Mention what the caller should expect as a result of list item 2: 0, NaN, first/last indices (if implemented with clamp)

Dwayne: discussing this in context of the separate gather issue sounds good
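
A minimal sketch of the clamping behavior mentioned in item 3 (hypothetical helper, not spec text): with this approach an out-of-bounds index resolves to the first or last element along the gathered axis.

  function clampIndex(index, axisSize) {
    // Indices below 0 map to 0 (first element); indices >= axisSize map to
    // axisSize - 1 (last element).
    return Math.min(Math.max(index, 0), axisSize - 1);
  }
  // e.g. clampIndex(7, 4) === 3, clampIndex(-2, 4) === 0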

MLReduceOptions.keepDimensions scan direction

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1396829866

https://pr-preview.s3.amazonaws.com/webmachinelearning/webnn/pull/478.html#dictdef-mlreduceoptions

anssik: Dwayne notes that for PT/TF compat, we should support both increasing and decreasing axis scan directions for tied values
… this issue predates this PR; it is proposed to be tracked as a separate issue

Ningxin_Hu: this is related to reduceArgMax and reduceArgMin, they don't have separate signatures

Dwayne: will discuss with Chai to come up with a solution
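
As a small illustration of the tie-breaking question (plain JS, not WebNN API): for the values below, the maximum 7 occurs at indices 1 and 2, so an increasing scan yields 1 while a decreasing scan yields 2.

  const values = [1, 7, 7, 3];
  const maxValue = Math.max(...values);
  const firstMax = values.indexOf(maxValue);     // 1 (increasing scan direction)
  const lastMax = values.lastIndexOf(maxValue);  // 2 (decreasing scan direction)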

Rename MLOperand.type() to dataType()

https://github.com/webmachinelearning/webnn/pull/478#pullrequestreview-1745700960

https://www.w3.org/TR/webnn/#dom-mloperanddescriptor-datatype

anssik: to align with MLOperandDescriptor.dataType

Chai: already fixed

Examples of how gather works in different slicing schemes

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1402915672

https://pr-preview.s3.amazonaws.com/webmachinelearning/webnn/pull/478.html#example-3d538e0c

anssik: Jiewei proposes adding a more extreme example as the last one:

input.shape = (2,3,2)

indices.shape = (3,4,5)

output.shape = (2,3,4,5,2)

Chai: for gather there are quite a few samples; probably all major cases are covered
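
For reference, a sketch of how the proposed shapes compose, assuming axis = 1 (the only axis consistent with the shapes above): the output shape is the input dimensions before the axis, then the indices shape, then the input dimensions after the axis.

  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const input = builder.input('input', { dataType: 'float32', dimensions: [2, 3, 2] });
  const indices = builder.input('indices', { dataType: 'uint32', dimensions: [3, 4, 5] });
  // output shape: [2] (before axis) + [3, 4, 5] (indices) + [2] (after axis) = [2, 3, 4, 5, 2]
  const output = builder.gather(input, indices, { axis: 1 });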

Naming logicalNot or not

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1406997415

Dwayne: OK to make this a separate issue

Naming where arguments

https://github.com/webmachinelearning/webnn/pull/478#discussion_r1408773430

anssik: proposal from CL review, rationale "both input and other are 'inputs'"
… From: MLOperand where(MLOperand condition, MLOperand input, MLOperand other);
… To: MLOperand where(MLOperand condition, MLOperand trueValue, MLOperand falseValue);

Chai: naming arguments is hard

Chai: discoverability is important, in that if someone implements this API in their framework they need to be able to find the thing and map it to their implementation
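
For context, a sketch of the two namings under discussion (hypothetical operand names; the behavior is the same, only the parameter names differ): element i of the output is taken from the second argument where the condition is non-zero, and from the third argument otherwise.

  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const cond = builder.input('cond', { dataType: 'uint8', dimensions: [4] });
  const a = builder.input('a', { dataType: 'float32', dimensions: [4] });
  const b = builder.input('b', { dataType: 'float32', dimensions: [4] });
  // Current PR:  where(condition, input, other)
  // Proposal:    where(condition, trueValue, falseValue)
  const out = builder.where(cond, a, b);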

Merge readiness check

jsbell: regarding resolving comments in the PR review, my comments are addressed, I just couldn't resolve those (due to a GH permissions issue)
… it looks like this change tries to resolve everything in one PR, rather than in multiple smaller PRs

<Ningxin_Hu> +1 to have small PRs after this one

Chai: agree we should do incremental PRs going forward

anssik: once all PR review discussions have been resolved we are ready to merge the PR, agreed?
… either by spinning off into separate open issues or resolving discussion on the spot in the PR

Dwayne: I don't see the resolve conversation button either

anssik: please add a thumbs up to the very last comment to signal you're OK to resolve

<Ningxin_Hu> webmachinelearning/webnn#478 (comment)

Ningxin_Hu: to respond to jsbell re conformance tests, see the link above
… the baseline implementation is pure JS
… we have WIP wpt tests too for these ops
… I hope that will unblock this PR

jsbell: I saw the table; no strong opinion on where this lives, as long as it's someplace where it is maintained properly

Ningxin_Hu: I'll move the table to a separate issue

chai: I have a separate ask: I think that at some point the spec should have a way for people to identify which ops they are talking about
… the browser needs to identify what its implementation is compliant with
… in the early HTML days, there was a notion of CSS Layer 1 and Layer 2; it sounded dubious in the beginning, and I don't know what it might look like here

anssik: Living Standard is the trend

jsbell: there's no perfect answer; one approach some Chrome DevRel folks are working on is called Baseline, that's not at the level of individual methods etc. but saying e.g. in 2024 these APIs work across browsers
… not sure if they've looked at a lower level, e.g. "these 5 methods are supported in 2024 across X, Y and Z"
… for the v1 of the spec, with multiple implementations across browsers, we want browsers to only claim compliance when they pass all wpt tests
… in my specs, one approach I've taken is to call out things that are new and track them as implementations adopt them
… new functionality could be advertised to communicate what is implemented and where

https://webmachinelearning.github.io/webnn-status/

jsbell: some non-Chromium browsers often say they have no support for a feature unless they pass all the wpt tests
… sometimes tests miss something obvious, so this model occasionally fails in that respect
… good wpt coverage is important for our team

Enhancements

Should scale and bias be required inputs for batchNormalization op?

anssik: issue #481

<gb> Issue 481 Should `scale` and `bias` be required inputs for `batchNormalization` op? (by huningxin)

anssik: Ningxin did a very thorough investigation into this issue, summary:
… - currently in batchNormalization scale and bias operands are optional members of MLBatchNormalizationOptions dictionary
… - current algorithm: if scale is not present, the element-wise multiplication can be eliminated, and if bias is not present, the element-wise addition can be eliminated too

anssik: Ningxin notes, however, there's an issue: "the optional scale and bias are not widely supported across frameworks and native ML APIs. This would cause the implementation more complex for those native ML APIs which don't support optional scale and bias"
… for details of the framework and native ML APIs, see the GH issue
… Ningxin proposes a solution: "make the two operands required"
… and notes that for models that don't use scale and bias, frameworks can set scale to 1 and bias to 0

Dwayne: makes sense, don't know why these were originally optional

jsbell: I'm relaying a comment from someone on my team who pointed out there may be some confusion re optional vs. required
… do we want to force developers to pass values, or give scale and bias common default values?
… i.e. make the default scale be 1 and bias be 0

Dwayne: these will be called from frameworks with tensors lying around

Chai: I need some time to read this issue one more time
… the idea of optional in the API is to not clutter the API signature and to help with future revisions
… for this specific issue, making scale and bias required, we need to gauge how strong this feedback is
… when you have something optional, you can always ask to be more explicit about it

Ningxin_Hu: SGTM, I think there's an opportunity to treat this as an optimization
… if some native ML API can make use of this optimization then keeping this optional is reasonable
… open for discussion

<jsbell> I'll discuss in more detail w/ my folks, see if they want to add comments to the issue
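
For context on the workaround mentioned above (set scale to 1 and bias to 0), a minimal sketch using the current dictionary-based batchNormalization options; shapes are hypothetical, and the exact signature is what the issue is deciding.

  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const channels = 3;
  const input = builder.input('x', { dataType: 'float32', dimensions: [1, channels, 4, 4] });
  const mean = builder.input('mean', { dataType: 'float32', dimensions: [channels] });
  const variance = builder.input('variance', { dataType: 'float32', dimensions: [channels] });
  // A framework that doesn't use scale/bias can supply identity values explicitly.
  const scale = builder.constant({ dataType: 'float32', dimensions: [channels] },
                                 new Float32Array(channels).fill(1));
  const bias = builder.constant({ dataType: 'float32', dimensions: [channels] },
                                new Float32Array(channels).fill(0));
  const output = builder.batchNormalization(input, mean, variance, { scale, bias });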

Minutes manually created (not a transcript), formatted by scribe.perl version 221 (Fri Jul 21 14:01:30 2023 UTC).

Diagnostics

Maybe present: anssik, chai, Dwayne, jsbell, Ningxin, Ningxin_Hu, Zoltan

All speakers: anssik, chai, Dwayne, jsbell, Ningxin, Ningxin_Hu, Zoltan

Active on IRC: anssik, chai, jsbell, Ningxin_Hu