13:56:10 RRSAgent has joined #webmachinelearning
13:56:10 logging to https://www.w3.org/2022/04/21-webmachinelearning-irc
13:56:13 RRSAgent, make logs Public
13:56:15 please title this meeting ("meeting: ..."), anssik
13:56:19 Meeting: WebML WG Teleconference – 21 April 2022
13:56:24 Chair: Anssi
13:56:31 Agenda: https://github.com/webmachinelearning/meetings/blob/main/telcons/2022-04-21-wg-agenda.md
13:56:38 Scribe: Anssi
13:56:50 scribeNick: anssik
13:56:50 scribe+ dom
13:56:57 Present+ Anssi_Kostiainen
13:57:20 RRSAgent, draft minutes
13:57:20 I have made the request to generate https://www.w3.org/2022/04/21-webmachinelearning-minutes.html anssik
13:59:05 Present+ Humera_Noor
14:00:05 Humera has joined #webmachinelearning
14:00:20 Present+ Belem_Zhang
14:00:30 Present+ Chai_Chaoweeraprasit
14:01:15 Present+ James_Fletcher
14:01:19 chai has joined #webmachinelearning
14:01:32 Present+ Ningxin_Hu
14:01:40 Present+ Ganesan_Ramalingam
14:01:52 ningxin_hu has joined #webmachinelearning
14:01:58 Present+ Jan_Wang
14:02:00 rama has joined #webmachinelearning
14:02:02 Present+
14:05:43 Topic: Announcements
14:05:56 Subtopic: TPAC 2022 Hybrid meeting announcement
14:06:14 -> TPAC 2022 announcement https://lists.w3.org/Archives/Public/public-webmachinelearning/2022Apr/0003.html
14:06:32 anssik: TPAC 2022 is a hybrid meeting
14:06:39 ... it takes place 12-16 September 2022
14:06:46 ... main in-person hub:
14:06:51 ... Sheraton Vancouver Wall Centre
14:06:57 ... Vancouver, Canada
14:07:29 ... the plan is to also have a hub in China to provide an in-person gathering place to support a minimum number of sessions.
14:07:36 ... Goals:
14:07:43 ... - ensure the safety of our onsite attendees
14:07:59 ... - provide a valuable experience to both in-person and remote attendees
14:08:06 ... - optimize for hybrid participation
14:08:15 ... - group meetings and breakout sessions
14:08:29 ... TPAC planners will share more details on practicalities, bookings, registration, health rules etc.
14:08:36 ... registration is to open early July.
14:08:46 ... looking forward to seeing you virtually or in person!
14:08:52 ... questions?
14:09:18 dom has joined #webmachinelearning
14:09:21 q?
14:09:32 bbcjames has joined #webmachinelearning
14:11:48 Subtopic: WebML and WebNN logos
14:11:58 -> Logos https://webmachinelearning.github.io/logos
14:12:20 anssik: this WebML effort has been noticed outside this W3C group, so we thought the effort deserves its own brand identity
14:12:36 ... a core part of that identity is a logo that distinguishes this effort in the minds of users, developers, and other stakeholders
14:13:04 ... Belem has created and contributed to this effort a set of beautiful WebML and WebNN logos licensed under CC BY 4.0 (Attribution 4.0 International).
14:13:11 ... I look forward to t-shirts and coffee mugs with these logos
14:13:25 ... I asked Belem to tell the story of how these logos came to be, and Belem joined us today to share it with you
14:13:38 Belem: I'm excited to bring these new logos to the community
14:14:01 ... an abstract depiction of a neural network inspired me; I simplified it further for this purpose
14:14:14 ... it represents this emerging technology we're bringing to the web with your help
14:14:24 ... the colors match the W3C logo's colors
14:14:34 thanks belem
14:15:01 q?
14:15:15 looks great on a dark background too!
14:15:46 Topic: Proposed new use cases
14:15:53 anssik: we drive API design by use cases
14:16:03 ... we have two proposed use cases in review
14:16:14 Subtopic: Performance Adaptation
14:16:17 Jonathan has joined #webmachinelearning
14:16:19 -> Performance Adaptation use case https://github.com/webmachinelearning/webnn/pull/207
14:16:27 Present+ Jonathan_Bingham
14:16:44 ... the Performance Adaptation use case has been discussed earlier, and based on the review comments it seems the proposed use case is not practical
14:17:02 ... any concerns with us deferring this use case from the CR scope and continuing to refine it post-CR?
14:17:13 q?
14:17:17 Regrets+ Dom
14:18:14 [no concerns]
14:18:26 Subtopic: Ethical Content Filtering
14:18:34 -> Ethical Content Filtering use case https://github.com/webmachinelearning/webnn/pull/253
14:18:48 anssik: Humera submitted this Ethical Content Filtering use case
14:19:25 Humera: the background for this use case is that browsers could restrict ML implementations, and if that happens we would not be able to perform things like content filtering that are important to users
14:20:25 ... we thought of a few scenarios, e.g. user privacy
14:20:43 ... content filtering should not be blocked by browsers, we think
14:20:59 Jan_Wang has joined #webmachinelearning
14:21:34 ... ML-based content filtering is a possible use case
14:22:07 anssik: the current implementation is a browser extension?
14:22:25 q?
14:22:50 James: thinking this through, "ethical content filtering" vs plain "content filtering"
14:23:03 ... how do we say filtering is ethical?
14:23:17 ... people could use filtering to e.g. remove political opinions they do not like
14:23:36 ... how do we bake that in, so this only enables ethical and not unethical content filtering
14:23:58 Humera: the type of content is a question a developer needs to consider when using this technology
14:24:57 q?
14:25:42 Topic: Context-based graph execution methods for different threading models
14:25:48 -> PR: Context-based graph execution methods for different threading model https://github.com/webmachinelearning/webnn/pull/257
14:26:15 anssik: this is an extremely important issue; huge thanks to Chai, Ningxin, Rafael, Bryan, Dom, Ping, and everyone for your contributions
14:27:16 q?
14:27:49 Chai: I closed the first PR #255, so we can focus on #257
14:28:09 ... essentially, to recap, we're now at the point where we support both sync and async execution
14:28:20 ... for sync, per Ningxin's feedback, we added GPU support
14:28:31 ... we didn't do that initially due to concerns about GPU work blocking the UI thread
14:28:42 ... but since we agreed sync execution is only available on worker threads, we're OK
14:29:16 ... for async it is clear: the use case is TF.js, Ping drove it, and there is no controversy about supporting both GPU and CPU
14:29:32 ... we have covered the cases where the developer does not want to define the device explicitly
14:29:38 ... "I want to use CPU"
14:29:49 ... simple and straightforward
14:30:12 ... in the case of device selection, before this PR we had a hint, with a default that leaves the selection to the user agent
14:30:52 ... so the caller defers the decision to the UA; the problem with an implicit device type, i.e. a hint, is that when you execute the model you still have to choose what form of inputs or constants to feed the graph
14:31:05 ... it's simpler to not have to think about that, but there's a price to pay
14:31:12 ... WebNN is a backend API for frameworks
14:31:26 ... in almost all cases the framework knows which device it wants to use
14:31:56 ... so the change is to make the device type a choice rather than a hint
14:32:09 ... making it explicit what type of processor the caller wants to target
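A minimal sketch of the explicit device selection described above, assuming the createContext() shape under discussion in PR #257; the deviceType option, the promise-returning calls, and the descriptor and compute details are illustrative rather than the merged spec:

  // Minimal sketch, assuming the explicit-device createContext() shape
  // discussed in PR #257; option, method, and descriptor names are
  // illustrative and may differ from the merged spec.
  const context = await navigator.ml.createContext({ deviceType: 'cpu' });
  const builder = new MLGraphBuilder(context);

  // Tiny graph: y = relu(x) over a 1x4 float32 tensor.
  const x = builder.input('x', { type: 'float32', dimensions: [1, 4] });
  const graph = await builder.build({ y: builder.relu(x) });

  // With the device chosen explicitly up front, the caller knows what
  // form of inputs and outputs to feed the graph; on CPU, plain
  // ArrayBuffer-backed views work, with no implicit device transfer.
  const inputs = { x: new Float32Array([-1, 0, 2, 3]) };
  const outputs = { y: new Float32Array(4) };
  await context.compute(graph, inputs, outputs);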
14:32:34 ... the harder case is when interoperating with WebGPU; there the caller is already involved at the device level, needing to choose an adapter, map it, etc.
14:32:48 ... it's inefficient to defer to WebNN to create another device or a proxy
14:33:05 ... for WebGPU the caller wants to pass the WebGPUDevice and tell WebNN to use what it has
14:33:07 q+ to compare to IETF format re shorter meetings
14:33:21 ... it cannot be just a hint in this case; we want to make it clear there's an overload to create the context
14:33:36 ... there's a dedicated method to create a new context out of a WebGPUDevice
14:34:15 ... for a caller using WebGPU, we have another overload that takes the WebGPU context to create the WebNN context; when we interop with WebGPU, a fundamental issue is that its nature is for the caller to control how the WebGPU workload is populated and executed
14:34:40 q-
14:34:44 ... WebNN in this model cannot execute the model on behalf of the caller, because the caller needs to maintain control over how the workload is sequenced and submitted for execution
14:35:03 ... the additional step is to record the workload into a WebGPUBuffer and submit it later
14:35:27 ... recording the workload belongs to WebNN; it is best implemented synchronously because it happens on the CPU and is a quick, non-blocking operation
14:35:42 ... but the return value is a workload in an "encoded" form
14:36:01 ... we press forward with our MLCommandEncoder proposal
14:36:26 ... it interops with GPUCommandBuffer, so WebGPU can consume it
14:36:50 ... my preference would be to make it fully compatible with WebGPU instead of having another indirection
14:37:05 ... another indirection might fragment the WebNN implementation from the point of view of a caller already using WebGPU
14:37:17 ... this was extensively discussed in PR #257
14:37:28 ... from the API standpoint this seems the better choice of the two
14:38:02 ... the last commit was done last night
14:38:13 ... I hope we can agree on this and move on
14:38:33 q?
14:38:48 q+
14:39:05 Present+ Daniel_LaLiberte
14:39:06 q?
14:39:09 ack ningxin_hu
14:39:45 ningxin_hu: great job putting this together!
14:39:59 ... this satisfies very complex and hard requirements from different sides
14:40:10 ... I'd like to capture the usage perspective
14:40:20 ... as Chai mentioned, it has two major usages
14:40:29 ... 1) a web developer using WebNN standalone
14:41:02 ... we call this the default ML context; it allows selecting CPU or GPU, it's not a hint anymore, and that makes sense for framework integration
14:41:18 ... for Wasm integration we need the CPU device to avoid memory copies
14:41:51 ... there's also the use case to offload to the GPU; having the device type as an explicit choice instead of a hint satisfies this use case, so the data lives on the device that is requested
14:42:18 ... we have the async API to avoid blocking, and the sync API restricted to workers to address concerns from the TAG and feedback from the Chrome team
14:42:38 ... the default ML context for both sync and async accepts ArrayBuffers
14:43:06 ... upload/download is done implicitly on behalf of the caller
14:43:19 ... I think this makes the WebNN default usage self-contained
14:43:40 ... 2) WebGPU interop; for this we have the use case of real-time video processing
14:44:14 ... this use case requires the capability to combine the WebGPU rendering API and the WebNN graph execution capability to implement e.g. background blur for a video stream
14:45:09 ... from my experience implementing this, developers start by constructing the GPU pipeline and fitting the ML capability into it, so WebNN should be an addition to the existing pipeline
14:45:22 ... the CommandEncoder fits this model well
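A rough sketch of the WebGPU interop flow described above, assuming the MLCommandEncoder surface proposed in PR #257; createContext(device), createCommandEncoder(), initializeGraph(), dispatch(), and finish() follow the PR discussion and may change, while graph, inputs, and outputs stand in for objects built against the same context (not shown):

  // Rough sketch, assuming the MLCommandEncoder surface of PR #257;
  // method names are illustrative. `graph`, `inputs`, and `outputs`
  // stand in for an MLGraphBuilder-built graph and its GPU-resident
  // resources created against the same context (not shown).
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Create the WebNN context from the caller's own WebGPU device, so
  // WebNN does not create a second device or a proxy behind its back.
  const mlContext = await navigator.ml.createContext(device);

  // WebNN records the inference into an "encoded" workload; the caller
  // keeps control of how the workload is sequenced and submitted.
  const encoder = mlContext.createCommandEncoder();
  encoder.initializeGraph(graph);            // one-time constant upload/prep
  encoder.dispatch(graph, inputs, outputs);  // record the inference
  const mlCommandBuffer = encoder.finish();  // interops with GPUCommandBuffer

  // The caller decides when the ML work runs relative to its own
  // rendering, e.g. blurring the background before compositing a frame.
  device.queue.submit([mlCommandBuffer]);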
14:46:00 ... a WebGPU developer can submit the buffers to the WebGPU queue; Rafael shared new WebGPU developments to enable multiple queues
14:46:19 ... in summary, the current WebGPU interop is in good shape and I'd like to support it
14:46:50 ... there's a remaining case of an ML context that can be created from a WebGL rendering context; I believe we should leave that for a future PR as it needs further investigation
14:46:59 ... WebGL is a Khronos specification
14:47:57 ... for this PR, we are close to meeting the requirements; there is a remaining discussion with Chai on graph initialization, but we can do that in a follow-up PR
14:48:20 ... this PR should be merged as soon as possible, and we can work on additional enhancements in later PRs
14:48:22 q?
14:50:02 anssik: Chai, Ningxin, any major blockers you'd like to seek the WG's perspective on?
14:50:15 Chai: I think we have a plan to get this PR to closure
14:50:52 q?
14:51:16 Topic: Candidate Recommendation maturity wide review expectations
14:51:35 Subtopic: Accessibility review
14:51:44 -> Accessibility checklist https://w3c.github.io/apa/fast/checklist.html
14:52:18 anssik: this is a draft checklist to support the Framework for Accessibility in the Specification of Technologies (FAST) prepared by the Accessible Platform Architectures Working Group.
14:52:37 ... the goal of FAST is to describe the features that web technologies should provide to ensure it is possible to create content that is accessible to users with disabilities.
14:53:01 ... WebNN defines an API, so the clearly relevant section is "If technology defines an API", with these checkpoints:
14:53:10 ... [ ] If the API can be used for structured content, it provides features to represent all aspects of the content including hidden accessibility features.
14:53:16 ... [ ] If the API relies on user agents to generate a user interface, the specification provides guidance about accessibility requirements needed to enable full interaction with the API.
14:53:40 ... I can take a first stab at this checklist and submit a proposal for WG review
14:54:02 q?
14:54:36 Subtopic: Internationalization review
14:54:45 -> Internationalization checklist https://w3c.github.io/i18n-drafts/techniques/shortchecklist
14:55:18 anssik: similarly to accessibility, given the WebNN API is a low-level API, many internationalization considerations do not apply and this review is expected to be lightweight
14:55:26 ... happy to take care of this as well
14:55:45 q?
14:56:08 Subtopic: WebGPU review
14:56:16 -> WebGPU interop investigation https://github.com/gpuweb/gpuweb/issues/2500
14:56:39 our plan of record from https://www.w3.org/2022/03/24-webmachinelearning-minutes.html#t07 is the following:
14:56:45 ... land PR #257 Context-based graph execution methods
14:56:55 ... prototype that in Chromium
14:57:07 ... update the WebNN-WebGPU interop samples accordingly
14:57:14 ... then seek WebGPU review
14:57:23 ... is everyone still happy with that plan?
14:57:25 q?
14:59:13 +1 to this plan
15:01:44 anssik: thanks everyone for joining and for your active participation in GitHub! Please review the open PRs.
15:01:53 RRSAgent, draft minutes
15:01:53 I have made the request to generate https://www.w3.org/2022/04/21-webmachinelearning-minutes.html anssik
16:49:03 dom has joined #webmachinelearning
17:06:07 Zakim has left #webmachinelearning