IRC log of webmachinelearning on 2024-01-25

Timestamps are in UTC.

14:50:58 [RRSAgent]
RRSAgent has joined #webmachinelearning
14:51:02 [RRSAgent]
logging to https://www.w3.org/2024/01/25-webmachinelearning-irc
14:51:02 [Zakim]
RRSAgent, make logs Public
14:51:03 [Zakim]
please title this meeting ("meeting: ..."), anssik
14:51:06 [anssik]
Meeting: WebML WG Teleconference – 25 January 2024
14:51:07 [anssik]
Chair: Anssi
14:51:12 [anssik]
Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2024-01-25-wg-agenda.md
14:51:15 [anssik]
Scribe: Anssi
14:51:19 [anssik]
scribeNick: anssik
14:51:31 [anssik]
gb, this is webmachinelearning/webnn
14:51:32 [gb]
anssik, OK.
14:51:40 [anssik]
Present+ Anssi_Kostiainen
14:57:31 [jsbell]
jsbell has joined #webmachinelearning
14:57:36 [anssik]
Present+ Joshua_Bell
14:58:25 [anssik]
RRSAgent, draft minutes
14:58:27 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
14:58:51 [anssik]
Present+ Zoltan_Kis
14:59:49 [anssik]
Present+ Rafael_Cintron
15:00:01 [RafaelCintron]
RafaelCintron has joined #webmachinelearning
15:00:30 [Ningxin_Hu]
Ningxin_Hu has joined #webmachinelearning
15:01:16 [anssik]
Present+ Austin_Sullivan
15:01:26 [anssik]
Present+ Ningxin_Hu
15:01:34 [anssik]
Present+ Bryan_Bernhart
15:01:54 [anssik]
Present+ Joshua_Lochner
15:02:03 [anssik]
Present+ Phillis_Tang
15:02:09 [anssik]
Present+ Chai_Chaoweeraprasit
15:02:18 [anssik]
RRSAgent, draft minutes
15:02:19 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
15:02:50 [anssik]
Topic: Delta wide review and a new Candidate Recommendation
15:03:11 [chai]
chai has joined #webmachinelearning
15:03:13 [anssik]
anssik: I want to review the proposed plan for delta wide review and a new CR Snapshot expected in Q1'24.
15:03:20 [anssik]
... let me recap the key concepts:
15:03:40 [zkis]
zkis has joined #webmachinelearning
15:03:48 [dwayner]
dwayner has joined #webmachinelearning
15:04:20 [anssik]
... - wide review: objective is to ensure all web stakeholders are able to review the spec and provide comments; the "delta" prefix means we seek feedback and comments on changes since the last CR Snapshot publication in Q1'23
15:04:37 [asully]
asully has joined #webmachinelearning
15:05:15 [anssik]
... - CR Snapshot: if substantive changes are made to a CR other than to remove features, the WG should publish a new CR Snapshot. This publication has more weight than CR Draft in that it signals it has gone through closer scrutiny
15:05:34 [anssik]
Subtopic: Delta wide review
15:06:04 [anssik]
anssik: I'm happy to handle on behalf of the WG all the wide review coordination and CR publication tasks, but I will seek the WG's review to ensure we're all aligned
15:06:24 [anssik]
... my proposed plan for the WG is to build upon the work we've done for the initial CR published March 2023
15:06:30 [anssik]
... so we'll focus on changes since our initial CR in Q1'23
15:06:49 [anssik]
... I'll discuss a few important things we need to agree on to qualify for a new CR Snapshot publication
15:07:08 [anssik]
Subtopic: Implementation experience
15:07:26 [anssik]
anssik: I want the WG to highlight in the new CR Snapshot the substantive progress made in implementation experience since the initial CR.
15:07:49 [anssik]
... I propose we use the WebNN implementation status page as evidence (shout out to Belem & co for keeping this important resource up to date!)
15:08:02 [anssik]
-> Implementation Status of WebNN Operations https://webmachinelearning.github.io/webnn-status/
15:08:37 [anssik]
anssik: given the WebNN API sits in the middle of the "Web ML stack", we're tracking both web engine and browser implementations as well as the frameworks that are the prime consumers of the WebNN API
15:08:42 [Phillis_Tang]
Phillis_Tang has joined #webmachinelearning
15:08:53 [anssik]
... for Chromium-based browsers Chrome and Edge, we have XNNPACK/CPU backend and DirectML/GPU backend
15:09:06 [anssik]
... for ChromeOS, we have MLService/CPU backend
15:09:29 [anssik]
... as of today, we're 42%, 90%, 17% code complete respectively, with DirectML backend most advanced
15:09:53 [anssik]
... in addition to browser implementations, we're implementing JS ML framework integrations to WebNN API
15:09:58 [anssik]
... currently focusing on TensorFlow Lite for TF.js External Delegate and ONNX Runtime Web Execution Provider
15:10:15 [anssik]
... (consider these as the glue libraries between the framework and the WebNN API)
15:10:36 [anssik]
... currently, the TF integration is being worked on in a fork; the ONNX integration is upstream and is 94% code complete
15:10:50 [anssik]
anssik: the Implementation Status page also links to w-p-t dashboards for details
15:11:33 [RafaelCintron]
q+
15:11:37 [anssik]
ack RafaelCintron
15:11:42 [anssik]
Present+ Dwayne_Robinson
15:12:02 [Joshua_Lochner]
Joshua_Lochner has joined #webmachinelearning
15:12:23 [anssik]
RafaelCintron: for Apple platforms there's a Chromium PR that adds basic support for WebNN, basic infrastructure, translating all ops, WIP
15:12:59 [anssik]
q?
15:13:29 [anssik]
Subtopic: Test coverage
15:14:14 [anssik]
anssik: We are also expected to demonstrate how we ensure implementations are and will remain interoperable with new implementations we don't yet know about
15:14:21 [anssik]
... the cross-browser and cross-platform web-platform-tests suite is our tool for that
15:14:28 [anssik]
-> Web Platform Tests dashboard for WebNN https://wpt.fyi/results/webnn
15:14:46 [anssik]
... currently we have in total 3750 subtests with a pass rate of roughly 40%
15:14:56 [chai]
brb
15:14:59 [dwayner]
CoreML Initial backend standup https://chromium-review.googlesource.com/c/chromium/src/+/5075312
15:14:59 [Ningxin_Hu]
q+
15:15:05 [anssik]
ack Ningxin_Hu
15:15:24 [anssik]
Ningxin_Hu: 40% represents XNNPACK CPU backend tests, it does not test GPU yet
15:15:50 [anssik]
... XNNPACK op coverage is tested, matches 41% op coverage for XNNPACK
15:16:11 [chai]
b
15:16:41 [anssik]
anssik: my expectation is we continue to evolve the w-p-t alongside the spec and our pass rate will increase; we will add more test cases for existing ops too, e.g. when we unearth new edge cases (thanks Bruce & co for your work on w-p-t!)
15:17:04 [anssik]
Subtopic: Significant new & removed features, conventions, use cases
15:17:15 [anssik]
anssik: We should also note significant new & removed features, conventions updates, and new use cases since our previous CR Snapshot in Q1'23
15:17:32 [anssik]
... this information usually goes to the Status section of the spec, I'll prepare a PR for the WG to review
15:17:43 [anssik]
... for new features, I propose to highlight the new ops and data types "int64" and "uint64" added to support well-known transformers landed in #478 and discussed in #375
15:17:43 [gb]
https://github.com/webmachinelearning/webnn/issues/478 -> CLOSED Pull Request 478 Add support for operations needed for well-known transformers e.g. Segment Anything, Stable Diffusion, etc. (by wchao1115)
15:17:43 [gb]
https://github.com/webmachinelearning/webnn/issues/375 -> Issue 375 Support for transformers (by dontcallmedom) [v2] [operation set]
15:18:07 [anssik]
... I also want to note any ops removed based on implementation experience to streamline the API
15:18:28 [anssik]
... I have pushed a PR for updated use cases #507, thanks for your review JoshuaB and Zoltan, I will merge this soon
15:18:28 [gb]
https://github.com/webmachinelearning/webnn/issues/507 -> Pull Request 507 Revise use cases with transformers (by anssiko)
15:19:08 [anssik]
... Zoltan and JoshuaB have improved the spec conventions significantly, I want to highlight this work; it greatly enhances interop by removing ambiguity and makes future implementers' work easier
15:19:45 [anssik]
... to summarize, I'll prepare a PR for the WG to review the changes to the spec to prepare it for CR
15:19:57 [anssik]
q?
15:20:10 [Ningxin_Hu]
q+
15:20:13 [anssik]
ack Ningxin_Hu
15:20:39 [anssik]
Ningxin_Hu: I'd like to give a heads up about the sync API and implementation experience from ONNX RT EP
15:21:19 [Ningxin_Hu]
https://bugs.chromium.org/p/chromium/issues/detail?id=1488162
15:21:22 [anssik]
... we did a performance comparison for sync vs async via asyncify
15:21:37 [anssik]
... the ask is to check the state of the sync API
15:22:22 [Ningxin_Hu]
https://github.com/microsoft/onnxruntime/pull/19145
15:22:22 [anssik]
... we are informed JSPI (JavaScript Promise Integration) is coming; comparing async via Asyncify with sync, model inference perf is close, especially with WebGPU
15:22:48 [anssik]
Ningxin_Hu: ONNX RT is using async API rather than sync API as before, proposal for the WG to consider dropping sync API support
15:23:14 [anssik]
... this is to remove a feature
15:24:22 [anssik]
anssik: thanks for this, we have an option to mark the sync API as "at risk" if we believe it might take longer to deprecate
15:24:44 [anssik]
q?
15:25:38 [anssik]
Ningxin_Hu: proposed for removal are computeSync(), buildSync() and createContextSync() specifically
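To make the proposed removal concrete, here is a minimal sketch of what the migration looks like for callers: the three sync methods map onto their Promise-returning counterparts. The `ml` and `GraphBuilder` parameters below are stand-in stubs for `navigator.ml` and `MLGraphBuilder`; the async method names (createContext, build, compute) come from the WebNN spec drafts of this period, but the exact signatures here are illustrative assumptions, not the shipped API.

```javascript
// Hedged sketch: migrating a caller off the proposed-for-removal
// createContextSync()/buildSync()/computeSync() onto the async API.
// `ml` stands in for navigator.ml and `GraphBuilder` for MLGraphBuilder;
// signatures are illustrative assumptions based on the spec drafts.
async function runGraph(ml, GraphBuilder, describeOutputs, inputs, outputs) {
  // was: const context = ml.createContextSync({ deviceType: 'gpu' });
  const context = await ml.createContext({ deviceType: 'gpu' });
  const builder = new GraphBuilder(context);
  // was: const graph = builder.buildSync(describeOutputs(builder));
  const graph = await builder.build(describeOutputs(builder));
  // was: const results = context.computeSync(graph, inputs, outputs);
  return context.compute(graph, inputs, outputs);
}
```

Because each `*Sync()` call has a direct async counterpart, the removal Ningxin describes is mechanical for callers already running inside an async function.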
15:26:14 [chai]
q+
15:26:24 [jsbell]
jsbell has joined #webmachinelearning
15:26:28 [anssik]
q?
15:26:59 [anssik]
anssik: is this spec feature removal an intrusive change?
15:27:32 [anssik]
Ningxin_Hu: I need to look carefully, but I think we can just drop the sync execution paths and remove these three *Sync() methods
15:27:48 [anssik]
ack chai
15:28:12 [Ningxin_Hu]
I'll open an issue
15:28:16 [anssik]
chai: I think it'd be helpful to capture this in an issue and PR for the record, but sounds reasonable to me
15:28:17 [anssik]
q?
15:28:40 [anssik]
Subtopic: Refreshing the current Status section of the spec
15:28:46 [anssik]
-> WebNN: Status of this document (SOTD) https://www.w3.org/TR/webnn/#sotd
15:28:52 [anssik]
anssik: SOTD is for busy people to get a high-level view where we're at
15:29:00 [anssik]
... I want to seek the WG's feedback on how to update this section, currently it reads:
15:29:04 [anssik]
"Further implementation experience and user feedback is being gathered for the MLCommandEncoder interface that proposes to enable more efficient WebGPU integration. A proposal to simplify MLContext creation is being discussed. This document is maintained and updated at any time. Some parts of this document are work in progress and further improvements are expected to be reflected in revised Candidate Recommendation Drafts and
15:29:04 [anssik]
Snapshots."
15:29:14 [anssik]
anssik: my questions:
15:29:22 [anssik]
... - how do we want to phrase the MLCommandEncoder status?
15:30:06 [anssik]
chai: I think it depends what we think is the milestone for the next CR
15:30:23 [anssik]
... we want to highlight delta, do we think WebGPU interop is a milestone we want to cross?
15:31:04 [anssik]
anssik: I think we should go to CR in Q1, not block on WebGPU interop, and keep the current status text
15:31:52 [anssik]
chai: MLCommandEncoder is there and has been there since last CR
15:32:14 [anssik]
... - what is the current status re "simplify MLContext creation" #322?
15:32:15 [gb]
https://github.com/webmachinelearning/webnn/issues/322 -> Pull Request 322 Simplify MLContext creation (by wchao1115)
15:32:56 [anssik]
chai: I think at this point we should remove this sentence
15:32:57 [anssik]
q?
15:33:31 [anssik]
anssik: we'll drop Simplify MLContext creation from the status, not highlight it
15:33:52 [anssik]
... - MLBuffer, should we note this important work in status given it provides an abstraction for CPU, GPU, NPU to interop more efficiently?
15:34:06 [anssik]
q?
15:34:50 [anssik]
... - any other new features, removals, or important WIP topics to highlight to busy people in this section? Chime in on the upcoming PR
15:34:54 [anssik]
RRSAgent, draft minutes
15:34:55 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
15:35:17 [jsbell]
jsbell has joined #webmachinelearning
15:35:30 [anssik]
Topic: Issue prioritization
15:35:39 [jsbell]
q+
15:35:50 [anssik]
anssik: As we're approaching another spec milestone aka CR Snapshot I want to discuss with you:
15:36:05 [anssik]
... - the most urgent and important issues for the group, or buckets of issues
15:36:27 [anssik]
... - practical steps we can take to make those issues more visible and actionable to the broader group
15:36:49 [anssik]
... I'd characterize the WG's current work mode as implementation-driven. That is a great work mode.
15:37:16 [anssik]
... OTOH that means that while the core group has a shared understanding of where our priorities are, the broader group would benefit from hints and guidance where they should focus their attention and contributions
15:37:49 [anssik]
... I've used these calls to "check the pulse" on issues to build shared understanding, but not everyone can join these calls, and our meeting cadence cannot fit all open issues
15:38:20 [anssik]
... I should acknowledge spec contributions come in many shapes and forms, a typical implementation-driven contribution is a normatively defined feature, this is what I label as "new features" in our agendas
15:38:28 [anssik]
... in addition to "new features" there's a wide range of other contributions:
15:38:39 [anssik]
... - identifying and reporting issues, also help spot stale issues or suggest issues to be closed if addressed
15:38:57 [anssik]
... - patches to keep the spec in a cohesive state e.g. refresh informative parts whenever there's a normative change
15:39:23 [anssik]
... e.g. code examples, use cases, explainer, programming model updates, privacy & security, ethical considerations, notes to implementers (notes are those green boxes)
15:39:41 [anssik]
... - patches that improve normative definitions, fix bugs, align with conventions
15:39:46 [anssik]
... on the agenda these are known as "enhancements"
15:40:00 [anssik]
... contributions outside "new features" are equally if not more important and are great opportunities for new contributors, no contribution is too small
15:40:26 [anssik]
... I'd like to open the floor for discussion and brainstorming on concrete things we could do as a group to help new contributors join with concrete contributions
15:40:39 [anssik]
... maybe we can create a group of volunteers to triage our issues, we can use the GH facilities at our disposal better (labels etc.)
15:40:42 [anssik]
q?
15:40:58 [anssik]
ack jsbell
15:41:31 [anssik]
jsbell: thanks Zoltan, Ningxin, Anssi for reviews of my PRs
15:41:44 [anssik]
... as a new contributor it's been hard to understand where to focus my contributions
15:42:17 [anssik]
... started with clean up first, then going over the 100+ issues we have open; labels would help
15:42:44 [anssik]
... the bulk of the issues are about a specific op, not substantial architectural issues
15:43:00 [anssik]
... knowing the status would help, e.g. "needs a PR" label
15:43:24 [anssik]
... "ops that cross different backends" would be great to know
15:44:11 [anssik]
... I know there are some meta issues, prefer smaller issues over big meta issues
15:44:48 [anssik]
... big directional issues, e.g. sync/async, op sets (StableHLO); gigantic issues may be hidden, these should be surfaced clearly with a label
15:45:05 [anssik]
... also areas where the spec may be incomplete even if no interop issues
15:45:32 [anssik]
... e.g. details of some ops refer to references behind a paywall, this is an issue, we should unearth these and label them
15:45:33 [anssik]
q?
15:46:35 [anssik]
q?
15:47:15 [anssik]
q?
15:47:22 [chai]
+1 on opset query as important issue we should tackle
15:47:31 [anssik]
Topic: New features
15:47:36 [anssik]
Subtopic: MLBuffer
15:47:41 [jsbell]
jsbell has joined #webmachinelearning
15:47:54 [anssik]
anssik: I want to use this call for a synchronous discussion on the proposal for a backend-agnostic storage type for WebNN operations (aka MLBuffer) informed by implementation experience.
15:48:02 [anssik]
anssik: issue #482
15:48:02 [gb]
https://github.com/webmachinelearning/webnn/issues/482 -> Issue 482 Support for device-based tensor storage objects (by bbernhar)
15:48:09 [anssik]
-> Chromium implementation: MLBuffer https://chromium-review.googlesource.com/c/chromium/src/+/5173676
15:48:23 [anssik]
anssik: thank you for very professional and in-depth discussion on this MLBuffer issue
15:48:27 [anssik]
... this is a very complex issue and baking it takes time
15:48:41 [anssik]
... perhaps Bryan can take the lead in sharing the latest on this feature and to have a discussion on open questions with Rafael chiming in
15:48:47 [anssik]
q?
15:49:04 [anssik]
Bryan: summary is decisions to be made :)
15:49:25 [anssik]
... how to proceed with buffer transfers?
15:49:51 [anssik]
... MLBuffer is GPU mapped now to simplify it
15:50:32 [anssik]
... how does interop work, concrete story upfront?
15:50:42 [anssik]
q?
15:51:08 [RafaelCintron]
q+
15:51:17 [anssik]
ack RafaelCintron
15:51:39 [anssik]
RafaelCintron: thank everyone who participated in this issue, great feedback Austin and Reilly, others
15:52:43 [anssik]
... WebGPU today has mapAsync, implementation experience
15:53:28 [anssik]
... WebGPU interop is important, several scenarios for camera input, anything that resembles real-time, WebGPU interop required for those use cases
15:53:45 [anssik]
... we should stage the work so we don't have a giant PR
15:53:52 [chai]
q+
15:54:15 [anssik]
... we should ship something even if we want to change it later, it is OK to have the implementation develop gradually
15:54:16 [anssik]
q?
15:54:19 [anssik]
ack chai
15:54:53 [anssik]
chai: I agree with Rafael that the best way to qualify a complex proposal is to back it with an implementation and simple samples that show WebGPU interop
15:55:26 [anssik]
... I think this is a long discussion and people brought out good perspectives, but running code is a compelling way to demonstrate this can be worked out, I would focus on WebGPU interop
15:55:33 [asully]
q+
15:55:38 [anssik]
... conceptually it could be anything but WebGPU is on top of people's minds
15:55:48 [anssik]
q?
15:55:48 [anssik]
ack asully
15:55:54 [anssik]
asully: I reviewed the PR, thanks for the great work!
15:56:14 [anssik]
... I'm excited about the proposal, I agree WebGPU interop is top of mind, MLBuffer to work with WebGPU
15:56:38 [anssik]
... assumption that MLBuffer is in GPU may be a mistake, needs to be device-agnostic
15:56:56 [anssik]
... the buffer could live elsewhere
15:57:10 [Ningxin_Hu]
q+
15:57:23 [anssik]
ack Ningxin_Hu
15:58:36 [anssik]
Ningxin_Hu: thanks for the great discussion, agree WebGPU MLBuffer interop is important; want to highlight the non-interop part is an in-demand feature, the chained inference scenario for LLMs
15:58:36 [anssik]
... ONNX has I/O Binding with WebGPU backend to keep the output from the previous inference and use it as input for the next inference
15:58:44 [anssik]
... if we only consider non-interop parts that is super helpful in itself
15:58:54 [anssik]
... for the chained inference scenario
16:00:08 [anssik]
... an API change for the WebNN spec can fulfill this use case; agree with Austin we need to consider WebGPU interop but also want to note this WebNN-only use case is important
16:00:08 [asully]
q+
16:00:08 [anssik]
... Rafael's proposal for staging work sounds good
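The chained-inference use case above can be sketched as follows: the previous step's output stays on the device as the next step's input (the I/O-binding style ONNX Runtime uses with its WebGPU backend). All names here — createBuffer, dispatch, readBuffer — are hypothetical stand-ins for the MLBuffer proposal in issue #482, not a shipped interface; `context` is a stub.

```javascript
// Hedged sketch of the WebNN-only chained-inference scenario:
// keep the previous inference's output resident on the device and
// feed it back as the next input, avoiding a CPU readback per step.
// createBuffer/dispatch/readBuffer are illustrative names for the
// proposed MLBuffer-style API (issue #482), not a shipped interface.
async function chainedInference(context, graph, firstInput, steps) {
  // Device-resident buffer holding the running state (e.g. a decoder
  // output in the LLM scenarios discussed).
  let state = await context.createBuffer(firstInput);
  for (let i = 0; i < steps; i++) {
    // Output of step i becomes input of step i+1 without readback.
    state = await context.dispatch(graph, state);
  }
  // Only the final result is read back to script.
  return context.readBuffer(state);
}
```

The point of the sketch is the loop body: no per-step readback, which is what makes this valuable even before WebGPU interop lands.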
16:00:08 [anssik]
q?
16:00:08 [anssik]
ack asully
16:00:21 [anssik]
asully: parallel to the discussion on MLBuffer, WebNN timelines and WebGPU timelines need some additional work
16:00:40 [anssik]
q?
16:01:42 [anssik]
q?
16:01:49 [anssik]
Bryan: I can put down the opens and everyone's positions on them
16:03:18 [anssik]
anssik: we test whether "everyone can live with a proposal" as a test for consensus, to see if we can move forward
16:03:33 [anssik]
RRSAgent, draft minutes
16:03:34 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
16:07:28 [anssik]
RRSAgent, draft minutes
16:07:30 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
16:22:07 [anssik]
RRSAgent, draft minutes
16:22:08 [RRSAgent]
I have made the request to generate https://www.w3.org/2024/01/25-webmachinelearning-minutes.html anssik
18:14:23 [Zakim]
Zakim has left #webmachinelearning